A C++/Python neural network toolkit by Google. I am using it for solving general machine-learning problems, and frequently enough that I need notes.
Graph construction is more explicit than in Theano, so I find it easier to understand, although it means you lose Theano's near-Python syntax.
TensorFlow also claims to compile for smartphones and other devices, although that support looks buggy at the moment.
- Keras supports TensorFlow and Theano as backends, for comfort and convenience. See below for some notes.
- TF-Slim (tf.contrib.slim) eases some of the boring bits.
- tflearn wraps the TensorFlow machinery in a scikit-learn-style interface (although the implementation is not very enlightening, nor the syntax especially clear).
Debugging in TensorFlow: Overview
- Explicitly fetch, and print (or do whatever you want)!
- Tensorboard: Histogram and Image Summary
- the tf.Print() operation
- Interpose any python codelet in the computation graph
- A step-by-step debugger
- tfdbg_: The TensorFlow debugger
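A sketch of the explicit-fetch and tf.Print techniques from the list above, assuming the TF 1.x graph API (the compat.v1 import keeps it runnable on TF ≥ 2; on TF 1.x a plain ``import tensorflow as tf`` suffices):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([1.0, 2.0, 3.0])
y = x * 2.0
# tf.Print is an identity op that logs its inputs as a side effect
# whenever it is evaluated (the output goes to stderr, not stdout).
y = tf.Print(y, [y], message="value of y: ")
z = y + 1.0

with tf.Session() as sess:
    # Explicitly fetch the intermediate tensor alongside the final
    # result, then inspect it with ordinary Python.
    y_val, z_val = sess.run([y, z])
    print(y_val)
    print(z_val)
```

The explicit fetch is the zero-infrastructure option; tf.Print is handy when the tensor you care about is buried deep in a graph you run via someone else's training loop.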
Getting data in
This is a depressingly complex topic; it will likely take more lines of code than building your actual learning algorithm.
For example, things break differently if:

- you are feeding data of variable dimensions from Python (which requires a “feed”, which requires keeping references to a placeholder Op around and ALWAYS resubmitting the data every time you run an op, even if the data is not required for the current Op), or
- you are inputting a Variable via C++ (Variables may also be fed, just to mess with you, and claim to support variable dimensions too, though that has never worked for me).

These interact in various ways that seem irritating but are probably there to enable very-large-scale data-reading workflows, so that you might accidentally solve a problem for Google and they can get your solution for cheap.
My experience is that this stuff is so horribly messy that you should just build different graphs for the estimation and deployment phases of your model and implement each according to convenience.
I’m not yet sure how to easily transmit the estimated parameters between the graphs of these two separate phases… I’ll make notes about THAT when I come to it.
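A minimal sketch of the feed path, assuming the TF 1.x graph API (the compat.v1 import keeps it runnable on TF ≥ 2): a placeholder with None dimensions accepts variable-sized input, but you must hold on to the placeholder handle and feed it on every run that transitively needs it.

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# None dimensions mean "any size"; both batch and width vary here.
x = tf.placeholder(tf.float32, shape=[None, None])
total = tf.reduce_sum(x)

with tf.Session() as sess:
    # The same graph happily accepts differently-shaped feeds,
    # as long as x is fed on every run.
    a = sess.run(total, feed_dict={x: np.ones((2, 3))})
    b = sess.run(total, feed_dict={x: np.ones((5, 7))})
```

Omitting the feed_dict entry for a placeholder the fetched op depends on raises an InvalidArgumentError, which is the "ALWAYS resubmitting" complaint above in practice.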
- CNNs for text classification
- CNN axis ordering is easy to mess up
- The Theano guide to convolutions is superior if you want to work out the actual dimensions your tensors should have. It also gives an intelligible account of how you invert convolutions for decoding.
- The Tensorflow convolution guide is more lackadaisical, but it does get us there:
For the SAME padding, the output height and width are computed as::

    out_height = ceil(float(in_height) / float(strides))
    out_width  = ceil(float(in_width) / float(strides))

For the VALID padding, the output height and width are computed as::

    out_height = ceil(float(in_height - filter_height + 1) / float(strides))
    out_width  = ceil(float(in_width - filter_width + 1) / float(strides))
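Those two formulas reduce to plain integer arithmetic; here is a sketch in ordinary Python (treating the stride as a single per-dimension number, whereas TensorFlow passes it as part of a 4-element strides vector):

```python
import math

def conv_output_size(in_size, filter_size, stride, padding):
    """Output size along one spatial dimension of a 2D convolution."""
    if padding == "SAME":
        return math.ceil(in_size / stride)
    elif padding == "VALID":
        return math.ceil((in_size - filter_size + 1) / stride)
    raise ValueError("padding must be SAME or VALID, got %r" % padding)

# e.g. a 28x28 input with a 5x5 filter and stride 2:
same = conv_output_size(28, 5, 2, "SAME")    # ceil(28/2)      = 14
valid = conv_output_size(28, 5, 2, "VALID")  # ceil((28-4)/2)  = 12
```

Running this before building the graph is a cheap way to catch shape mismatches without waiting for a cryptic runtime error.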
TensorFlow supports NHWC (default) and NCHW (cuDNN default). The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.
NCHW is, to be clear, (batch, channels, height, width).
Theano, by contrast, is AFAICT always NCHW.
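Converting between the two layouts is just a transpose of axes; a quick NumPy sanity check (no TensorFlow required, shapes illustrative):

```python
import numpy as np

# NHWC: (batch, height, width, channels) -- TensorFlow's default.
nhwc = np.zeros((8, 32, 32, 3))

# NCHW: (batch, channels, height, width) -- cuDNN's (and Theano's) default.
nchw = np.transpose(nhwc, (0, 3, 1, 2))

# And back again.
back = np.transpose(nchw, (0, 2, 3, 1))
```

Getting this axis permutation wrong usually does not crash: it silently scrambles your images, which is exactly why the ordering is "easy to mess up".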
The documentation for these is abysmal.
To write: How to create standard linear filters in Tensorflow.
The TensorFlow RNN documentation, bad as it is, is not even easy to find, being scattered across several non-obvious locations without consistent crosslinks.
To make it actually make sense without undue time-wasting and guessing, you will then need to read other material:
- seq2seq models with GRUs : Fun with Recurrent Neural Nets.
- Variable sequence length HOWTO.
- Where do the RNN weights come from? Magic.
- Denny Britz’s blog posts:

  - RNNs in Tensorflow, a practical guide and undocumented features
  - a good explanation of vanishing gradients

- Danijar Hafner:

  - Introduction to Recurrent Networks in TensorFlow
  - Variable sequence lengths HOWTO
- Philippe Remy, Stateful LSTM in Keras
- Ben Bolte, Deep Language Modeling for Question Answering using Keras
You probably want to start with Keras unless your needs are extraordinarily esoteric, since it removes a lot of boilerplate and makes even writing new boilerplate easier.
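As a taste of the reduced boilerplate, here is a hedged sketch of a tiny sequence classifier, written against tf.keras (standalone Keras with the TensorFlow or Theano backend is analogous); the sizes and layer choices are illustrative, not taken from any of the posts above:

```python
import numpy as np
import tensorflow as tf

# An LSTM over variable-length sequences of 8 features,
# followed by a 2-class softmax readout.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(None, 8)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# One forward pass on dummy data: 4 sequences of length 10.
out = model.predict(np.random.rand(4, 10, 8).astype("float32"), verbose=0)
```

Compare this with hand-assembling cells, unrolling, and weight scoping in raw TensorFlow, and the "start with Keras" advice explains itself.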
Go faster for free
.. code:: shell

    ./configure
    bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-1.0.0-py2-none-any.whl

Or, enabling CPU vector instructions and CUDA:

.. code:: shell

    bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip3 install /tmp/tensorflow_pkg/tensorflow-1.0.0-cp35-cp35m-macosx_10_6_intel.whl
Getting models out
- For a local app: Hamed MP, Exporting trained TensorFlow models to C++ the RIGHT way!
- For serving it online, TensorFlow Serving is the preferred means. See the Serving documentation.
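Before getting to full C++ or Serving export, the minimal building block is a checkpoint round trip with tf.train.Saver; a sketch (TF 1.x-style API via compat.v1), which incidentally is one answer to the earlier question of moving estimated parameters between separate graphs:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# "Estimation" graph: create a variable, save it.
g1 = tf.Graph()
with g1.as_default():
    w = tf.get_variable("w", initializer=3.0)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt)

# "Deployment" graph: a fresh graph with a variable of the SAME
# name; restore fills it in, no initializer needed.
g2 = tf.Graph()
with g2.as_default():
    w2 = tf.get_variable("w", shape=[])
    saver2 = tf.train.Saver()
    with tf.Session() as sess:
        saver2.restore(sess, ckpt)
        restored = sess.run(w2)
```

Checkpoints only carry variable values; the exporting-to-C++ and Serving routes above additionally freeze or package the graph definition itself.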
Doing it in the cloud because you don’t have NVIDIA sponsorship
See practical cloud computing, which has a couple of sections on that.