Performance Analysis of Deep Learning Libraries: TensorFlow and PyTorch
- Universidade Federal de Sergipe, Brazil
Abstract
With the growth of deep learning research and use in recent years, several libraries dedicated to Deep Neural Networks (DNNs) have been developed. Each of these libraries delivers different performance and applies different techniques to optimize the implementation of algorithms. Consequently, even when the same algorithm is implemented, executions with different libraries can vary considerably in performance. For this reason, developers and scientists who work with deep learning need experimental studies that examine the performance of these libraries. This paper therefore aims to evaluate and compare two of them: TensorFlow and PyTorch. We used three parameters: hardware utilization, hardware temperature and execution time, in the context of heterogeneous platforms with CPU and GPU. We used the MNIST database to train and test the LeNet Convolutional Neural Network (CNN). We performed a scientific experiment following the Goal Question Metric (GQM) methodology, and the data were validated through statistical tests. The analysis shows that the PyTorch library achieved better overall performance, even though the TensorFlow library reached a higher GPU utilization rate.
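The following is a minimal sketch, not the authors' code, of the kind of experiment the abstract describes: training the LeNet CNN on MNIST with PyTorch on a CPU/GPU platform and measuring execution time. The hyperparameters (batch size, learning rate, number of epochs) are illustrative assumptions, since the paper's exact values are not given in this excerpt.

```python
# Hedged sketch of the experimental setup: LeNet on MNIST with PyTorch.
# Hyperparameters are assumptions, not the values used in the paper.
import time

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class LeNet(nn.Module):
    """LeNet-5-style CNN for 28x28 MNIST digits."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 6 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)


def main():
    # Use the GPU when available, reflecting the heterogeneous CPU+GPU setting.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = LeNet().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    start = time.perf_counter()      # execution time is one of the paper's metrics
    for epoch in range(5):           # epoch count is an assumption
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    print(f"training time: {time.perf_counter() - start:.1f} s")


if __name__ == "__main__":
    main()
```

Hardware utilization and temperature, the paper's other two metrics, would be sampled externally during such a run (e.g., with system monitoring tools) rather than from within the training script.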
DOI: https://doi.org/10.3844/jcssp.2019.785.799
Copyright: © 2019 Felipe Florencio, Thiago Valença, Edward David Moreno and Methanias Colaço Junior. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- TensorFlow
- PyTorch
- Comparison
- Performance Evaluation
- Benchmarking
- Deep Learning Library