Someone mentioned TensorFlow 2.0 and Keras at the last hacker’s hangout, so I wondered: is there a good enough reason to move to it from PyTorch? Has anyone looked into the benefits of TensorFlow 2.0 yet? Can it complement PyTorch?
PyTorch used to have the edge in debugging and dynamic compute-graph generation, but TensorFlow 2.0 should have both now. In addition, TensorFlow 2.0 should keep its better compatibility with servers and other hardware, and I believe it has more developers behind it.
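From what I’ve read, TF 2.0 executes eagerly by default, so you can step through code and print tensors much like in PyTorch. A minimal sketch (assuming TensorFlow 2.x is installed; the values are arbitrary):

```python
import tensorflow as tf

# TF 2.x runs eagerly by default: no graph/session boilerplate.
# Operations execute immediately and values can be printed for debugging.
x = tf.constant([1.0, 2.0, 3.0])
y = x * 2.0 + 1.0
print(y.numpy())  # [3. 5. 7.]
```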
I kind of wonder if TensorFlow vs. PyTorch is like HD-DVD vs. Blu-ray, but I’m not sure who the winner would be. I suppose not too much would be lost if one system won, or if each specialized in a different area, since a lot of the underlying functionality is the same. Still, we might miss out on some features from the other system.
There may already be a larger community around PyTorch though, as many of my Google searches lead to PyTorch repositories.
Start here:
The usability revolution
Going forward, Keras will be the high-level API for TensorFlow, and it’s extended so that you can use all the advanced features of TensorFlow directly from tf.keras.
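For a taste of what that looks like in practice, here’s a minimal tf.keras sketch (assuming TensorFlow 2.x; the layer sizes are arbitrary):

```python
import tensorflow as tf

# tf.keras is the Keras API bundled with TensorFlow, so TF features
# (eager tensors, tf.function, distribution strategies) work with it directly.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```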
Then grab your ten-league boots and walk through this:
Finally got around to reading through that, though I skimmed a bit, then read some of the links, then googled some.
So, here’s my breakdown of the pros and cons.
TensorFlow 2.0:
- Supports more programming languages (Java, Go, C, Rust…)
- Supports more devices (Android, iOS)
- JavaScript support
- Cloud support
- Ragged tensors (nested variable-length tensors; better support for hierarchical data)
- Depending on the implementation, this might be the strongest reason to switch. However, it may still lead to huge slowdowns when row lengths differ a lot (see the sketch after this list).
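For reference, here’s a tiny ragged-tensor sketch (assuming TensorFlow 2.x; the values are made up):

```python
import tensorflow as tf

# A ragged tensor stores rows of different lengths without padding.
rt = tf.ragged.constant([[1.0, 2.0, 3.0], [4.0], [5.0, 6.0]])
print(rt.shape)                  # (3, None) -- the second dimension is ragged
print(rt.row_lengths().numpy())  # [3 1 2]
print(tf.reduce_mean(rt, axis=1).numpy())  # per-row means: [2.  4.  5.5]
```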
PyTorch:
- More developed third-party libraries
- This is actually a big deal: some powerful third-party libraries are already taking away TensorFlow 2.0’s edge
- A third-party Node.js library
- Padded/packed sequences instead of ragged tensors
- These would definitely be slow with the large row-length differences I mentioned above, but it’s a bit more obvious why (see the sketch after this list)
- There are ways to port trained models onto Android
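And here’s the padded/packed equivalent on the PyTorch side (a minimal sketch; the sequences are made up):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Three sequences of different lengths.
seqs = [torch.tensor([1.0, 2.0, 3.0]),
        torch.tensor([4.0]),
        torch.tensor([5.0, 6.0])]
lengths = torch.tensor([3, 1, 2])

# pad_sequence pads every row to the longest length, so one very long row
# inflates the whole batch -- that's where the slowdown comes from.
padded = pad_sequence(seqs, batch_first=True)  # shape (3, 3), zero-padded

# pack_padded_sequence records the real lengths so RNNs can skip the padding.
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)
```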
At this point, I’m still not sure if switching is worth it. I’m not even sure which framework someone starting out in machine learning should choose now.
My 5 cents on this: I was an early adopter of TF, and a big fan, despite the lazy execution paradigm. But I eventually moved to Caffe because most of the papers I was reading were implemented in Caffe, and then to PyTorch when most of the implementations moved to PyTorch (mainly due to the eager vs lazy execution discussion, which is over now with the release of TF2).
If you are doing research, it is easier to use whatever the research community is adopting, while keeping enough flexibility to understand code bases in other programming languages and/or frameworks. Right now that is PyTorch. However, if your goal is production deployment, I would certainly choose whatever is less verbose, easier to use, and easier to test - and for me that is Keras/TF2.
Hi… Is there any basic information on how to build TF 2.0 from source? I tried passing ‘--config=v2’ to bazel in various ways without success; either it compiles 1.12, or it throws an error during the build saying that --config=v2 is not a supported option.