Is Facebook-backed PyTorch better than Google’s TensorFlow?


PyTorch and TensorFlow are two competing tools for machine learning and artificial intelligence. Find out which one might serve you best in your next project.




The rapid rise of tools and techniques in artificial intelligence and machine learning in recent years has been astounding. Deep learning, or "machine learning on steroids" as some call it, is one area where data scientists and machine learning experts are spoilt for choice in terms of available libraries and frameworks.


Two libraries compete at the forefront: TensorFlow and PyTorch. TensorFlow is widely regarded as the industry standard, while PyTorch is known for its ease of use and lighter footprint, which makes it well suited to building rapid prototypes.


So, which one is better? How do the two deep learning libraries compare to one another?


TensorFlow and PyTorch: the Basics


Google’s TensorFlow and Facebook’s PyTorch are both widely used machine learning and deep learning frameworks. TensorFlow was open sourced in 2015 and, backed by a huge community of machine learning experts, went on to become THE framework of choice for many organizations’ machine learning and deep learning needs.


PyTorch, on the other hand, is a Python package released by Facebook in 2016 for training neural networks, adapted from the Lua-based deep learning library Torch. It quickly gained popularity because developers found it easier to use than TensorFlow. PyTorch is one of the few deep learning frameworks that uses a tape-based autograd system, which allows dynamic neural networks to be built quickly and flexibly.
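To make "tape-based autograd" concrete, here is a minimal sketch: operations are recorded on a tape as they execute, and calling backward() replays that tape in reverse to compute gradients.

```python
import torch

# A scalar tensor that autograd will track
x = torch.tensor(2.0, requires_grad=True)

# The graph is recorded on the fly as this expression is evaluated
y = x ** 2 + 3 * x

# Replay the tape backwards to populate x.grad
y.backward()

print(x.grad)  # dy/dx = 2x + 3, which is 7 at x = 2
```

Because the graph is rebuilt on every forward pass, ordinary Python control flow (loops, conditionals) can change the network's structure from one step to the next.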


PyTorch vs TensorFlow


In the past, these two frameworks had many major differences in syntax, design, feature support, and so on; but as their communities have grown, their ecosystems have evolved too. Both libraries have picked up the best features from each other and are no longer that different. Here we compare the two libraries as they stand today. Let’s get into the details – let the PyTorch vs TensorFlow match-up begin…


What programming languages support PyTorch and TensorFlow?


Although primarily written in C++ and CUDA, TensorFlow provides a Python API sitting over the core engine, making it easy for Pythonistas to use. APIs for C++, R, Julia, Java, Go, and C# are also available, so developers can code in their preferred language. More recently, the TensorFlow team has been working extensively on new features and libraries such as TensorFlow.js and Swift for TensorFlow (early release) to support JavaScript and Swift respectively.


Note: the TensorFlow ecosystem also provides bindings for other languages, such as Ruby, Haskell, Rust, MATLAB, and Scala.

Although PyTorch is a Python package, you can also code against it in C++ using the provided APIs. If you are comfortable with the Lua programming language, you can build neural network models using the original Torch API. A recent release of PyTorch also introduced support for Java. In short, if you are into C++ or Java, PyTorch has a version for you.


How easy are PyTorch and TensorFlow to use?


As a standalone framework, TensorFlow was initially a bit complex to get started with. The recent version, TensorFlow 2.0, was a major update over its predecessor: it brought strong new features and simplified the library considerably. Writing deep learning models has become easier with Keras, a deep learning API written in Python that sits on top of TensorFlow and is now integrated into the main API itself. Moreover, TensorFlow now supports eager execution, a define-by-run interface that runs operations immediately as they are called from Python. The team has also simplified the overall API by cleaning up deprecated endpoints and reducing duplication. All of this has made TensorFlow far more user-friendly for research and development than its previous versions.
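A minimal sketch of what this looks like in TensorFlow 2: eager operations return values immediately, and Keras is available directly under tf.keras.

```python
import tensorflow as tf

# Eager execution: the op runs immediately, no session or graph setup needed
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
print(tf.matmul(a, b).numpy())  # [[11.]]

# Keras is part of the main API: define and compile a tiny model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
print(model.count_params())  # 3: two weights plus one bias
```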


Of the two, TensorFlow was the first to support distributed training, whether across multiple machines, GPUs, or TPUs. However, recent releases of PyTorch include a distributed package that lets researchers and developers spread training across processes and multiple machines.
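As a single-process sketch of PyTorch's distributed package (real jobs launch one process per worker; the local address and port below are assumed placeholder values):

```python
import os

import torch
import torch.distributed as dist

# Rendezvous settings for a single local process (assumed values)
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# gloo is the CPU-friendly backend; nccl is typical for multi-GPU jobs
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(3)
dist.all_reduce(t)  # sums the tensor across all ranks (a no-op with one rank)
print(t)

dist.destroy_process_group()
```

With world_size greater than 1, the same code runs unchanged on every worker and all_reduce aggregates gradients or tensors across them.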


TensorFlow is also production-ready: it can be used to train and deploy enterprise-level deep learning models. For years, TensorFlow has been the favored choice for deploying models in production, thanks to the TensorFlow Serving framework.


Until recently, PyTorch did not have an equivalent feature. However, Facebook (together with AWS) has released TorchServe, a model-serving framework for PyTorch that allows you to deploy models at scale. PyTorch itself was rewritten in Python because of the complexities of Torch, which makes it feel more native to Python developers. The framework is easy to use, offers flexibility and speed, and allows quick changes to the code during training without hampering performance.


If you already have some experience with deep learning and have used Torch before, you will like PyTorch even more, because of its speed, efficiency, and ease of use. PyTorch includes a custom-made GPU memory allocator, which makes deep learning models highly memory-efficient and large models easier to train. Hence, large organizations such as Facebook, Twitter, Salesforce, and many more have embraced PyTorch.


Training Deep Learning models with PyTorch and TensorFlow


Both TensorFlow and PyTorch are used to build and train neural network models. TensorFlow originally worked on a static computational graph (SCG): the graph is defined fully before the model starts executing, and once execution starts, the only way to feed changes into the model is through tf.Session and tf.placeholder tensors. Even though an SCG can achieve good performance, it was a major pain to debug.
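For reference, the old static-graph workflow looked roughly like this (sketched here via the tf.compat.v1 shim that ships with TensorFlow 2):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # fall back to TF1-style graph mode

# The graph is defined statically, before anything runs
x = tf1.placeholder(tf.float32, shape=())
y = x * 2.0

# Values can only be fed in at run time, through a session
with tf1.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))  # 6.0
```

Note how y holds no value until sess.run executes the graph, which is exactly why stepping through such code with a debugger was so awkward.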


With the TensorFlow 2.0 release, the team took a major leap by moving from static graphs to eager execution. With its “eager” mode, TensorFlow now lets you build a dynamic computational graph (DCG).


Google also released TensorFlow Fold, a library designed to create TensorFlow models that work on structured data, where the shape of the computational graph depends on the shape of the input data. Like PyTorch, it implements DCGs, and it claims computational speed-ups of up to 10x on CPU and more than 100x on GPU. With its dynamic batching, you can implement deep learning models that vary in size as well as structure. However, this library is still not an official Google product.


PyTorch, on the other hand, has always supported DCGs, letting you define and change the model on the go. In a DCG, each block can be debugged separately, which makes training neural networks easier. PyTorch is also well suited to training recursive neural networks, which run faster in PyTorch than in TensorFlow. And in the PyTorch 1.0 revamp, the team added support for static graphs as well, via TorchScript.
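A small sketch of PyTorch's static-graph support via TorchScript: the decorator compiles the function, including its control flow, into a graph that can run outside Python.

```python
import torch

@torch.jit.script
def scale(x: torch.Tensor) -> torch.Tensor:
    # Data-dependent control flow is captured in the compiled graph
    if bool(x.sum() > 0):
        return x * 2
    return -x

print(scale(torch.ones(2)))   # tensor([2., 2.])
print(scale.graph is not None)  # the compiled graph is inspectable
```

The same model can thus be developed eagerly and then exported as a static graph for production, mirroring what TensorFlow achieves with tf.function.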


The key takeaway here is that static and dynamic modes are now available in both frameworks, which used to be a major design difference between the two.


Comparing GPU and CPU optimizations


TensorFlow has faster compile times than PyTorch and provides flexibility for building real-world applications. It can run on virtually any kind of processor, from CPUs and GPUs to mobile devices and even a Raspberry Pi (IoT devices). Google has also made its custom hardware accelerator, the Tensor Processing Unit (TPU), available to third-party users; TPUs can run computations considerably faster than GPUs. TensorFlow has a bit of an edge here: since both products come from Google, it’s much easier to run code on TPUs using TensorFlow than with PyTorch.
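You can list the devices TensorFlow can target on a given machine, and pin computation to one explicitly, in a couple of lines:

```python
import tensorflow as tf

# Lists the CPUs, GPUs, and (when attached) TPUs visible to TensorFlow
print(tf.config.list_physical_devices())

# Explicit device placement; "/CPU:0" exists on every machine
with tf.device("/CPU:0"):
    x = tf.random.uniform((2, 2))
print(x.device)
```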


PyTorch, on the other hand, includes tensor computations that can speed up deep neural network models by 50x or more using GPUs. These tensors can live on the CPU or the GPU, and the CPU and GPU backends are written as independent libraries, making PyTorch efficient to use regardless of network size. PyTorch users who want to run code on TPUs need to go through the PyTorch/XLA library.
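Moving tensors (and whole models) between CPU and GPU is a one-liner in PyTorch; this sketch falls back to the CPU when no GPU is present:

```python
import torch

# Pick the GPU if one is available, otherwise stay on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

t = torch.randn(4, 4).to(device)  # copies the tensor to the chosen device
print(t.device)
```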


Community Support


TensorFlow is one of the most popular deep learning frameworks today, and with that comes huge community support. It has great documentation and an extensive set of online tutorials. TensorFlow also offers numerous pre-trained models, hosted and available on GitHub, which give developers and researchers keen to work with TensorFlow ready-made material that saves time and effort.
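For example, Keras bundles several well-known pre-trained architectures. The sketch below builds MobileNetV2 with random weights to stay offline; passing weights="imagenet" instead downloads the pre-trained weights.

```python
import tensorflow as tf

# weights=None skips the download; weights="imagenet" loads pre-trained weights
model = tf.keras.applications.MobileNetV2(weights=None)
print(model.name, model.count_params())
```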


With the many improvements in the TensorFlow 2.0 ecosystem, which address a lot of the shortcomings of earlier TensorFlow versions, we can expect this community to grow even stronger – meaning more tutorials, more discussions, and more extensions to help developers collaborate across various channels.


PyTorch, on the other hand, being the younger framework, has also developed a strong community, and the momentum is picking up. Since the release of PyTorch 1.0, many impressive features have been introduced, making it possible to use the framework for both research and production. The community strives to make the framework better with every release. It is also one of the better-documented frameworks out there, and anyone new to ML/DL will find it easy to get started.


PyTorch and TensorFlow – the Conclusion


Both TensorFlow and PyTorch are among the best libraries on the market, and each has its own advantages for building efficient deep learning applications. Based on their track records, Python enthusiasts and researchers have tended to prefer PyTorch, while TensorFlow has been favored by developers who want to build scalable deep learning models for use in production.


However, as mentioned earlier, the latest releases of both libraries have adopted good features from each other, so it’s safe to say they are converging towards a similar goal. This is a win-win for everyone working with deep learning frameworks: with their increasing similarities, it’s now easier than ever to switch between the two libraries.


So, if you are new to AI, it would be fair to try them both and choose the one that best suits your purpose.


Find out more


Packt has a number of titles covering deep learning with TensorFlow, PyTorch, and other frameworks and libraries. Browse them here or pick from the titles below:


The Deep Learning Workshop

The Applied TensorFlow and Keras Workshop

The Deep Learning with PyTorch Workshop


Published October 9th, 2020 | Long Read
