The Living Thing / Notebooks :

Tensorflow

the framework to use for deep learning if you groupthink like Google

A C++/Python neural network toolkit by Google. I am using it for solving general machine-learning problems, and frequently enough that I have notes.

Abstractions

There are some other frontends, which seem a bit less useful to my mind:

I’m not convinced these latter options actually solve any problems. They seem to make the easy bits not easier but different, and the hard bits no easier.

Tutorials

See also keras tutorials below.

Debugging

Google’s own Tensorflow without a PhD.

Joonwook Choi recommends:

Basic ways:

Advanced ways:

Tensorboard

Tensorboard is a de facto debugging tool standard. It’s not immediately intuitive; I recommend reading Li Yin’s explanation.

Minimally,

tensorboard --logdir=path/to/log-directory

or, more usually,

tensorboard --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2 --host=localhost

or, lazily, (bash)

tensorboard --logdir=$(ls -dm *.logs |tr -d ' \n\r') --host=localhost

(fish)

tensorboard --logdir=(string join , (for f in *.logs; echo (basename $f .logs):$f; end)) --host=localhost

In fact, that sometimes doesn’t work so well for me. Tensorboard reeeeally wants you to specify your run names explicitly.

#!/usr/bin/env python3
"""Launch tensorboard with one named run per *.logs directory in the cwd."""
from pathlib import Path
from subprocess import run

p = Path('./')

# Build "--logdir=name1:name1.logs,name2:name2.logs,..."
logdirstring = '--logdir=' + ','.join([
    str(d)[:-5] + ":" + str(d)  # strip the ".logs" suffix for the run name
    for d in p.glob('*.logs')
])

proc = run(
    [
        'tensorboard',
        logdirstring,
        '--host=localhost'
    ]
)

Getting data in

This is a depressingly complex topic; it is likely to be more lines of code than building your actual learning algorithm.

For example, things break differently if

These options interact in various irritating ways, but that is probably the price of enabling very-large-scale data-reading workflows, so that you might accidentally solve a problem for Google and they can get your solution for cheap.

Here’s a walkthrough of some of the details, and here are the manual pages for feeding and queueing.

My experience is that this stuff is so horribly messy that you should just build different graphs for the estimation and deployment phases of your model, and implement each according to convenience. This is, of course, asking for trouble with inconsistencies between the two.
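For the estimation phase, a plain Python minibatch generator fed through `feed_dict` sidesteps the queue machinery entirely. A TF-free sketch of the batching part (the function name and interface are my own invention):

```python
import numpy as np

def minibatches(data, batch_size, shuffle=True, seed=0):
    """Yield successive minibatches of rows from an array."""
    rng = np.random.RandomState(seed)
    idx = np.arange(len(data))
    if shuffle:
        rng.shuffle(idx)
    for start in range(0, len(data), batch_size):
        yield data[idx[start:start + batch_size]]
```

Each batch can then be passed to `session.run(train_op, feed_dict={x: batch})`. This is slower than the queue runners for big data, but vastly easier to debug.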

I’m not yet sure how to easily transmit the estimated parameters between the graphs of these two separate phases… I’ll make notes about that when I come to it.

(Non-recurrent) convolutional networks

For SAME padding, the output height and width are computed as:

    out_height = ceil(in_height / stride_height)
    out_width  = ceil(in_width / stride_width)

For VALID padding, the output height and width are computed as:

    out_height = ceil((in_height - filter_height + 1) / stride_height)
    out_width  = ceil((in_width - filter_width + 1) / stride_width)
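These rules are easy to check in plain Python; a sketch (the function name is mine):

```python
import math

def conv_output_size(in_size, filter_size, stride, padding):
    """Spatial output size of a convolution, following TensorFlow's
    SAME/VALID padding conventions, per dimension."""
    if padding == 'SAME':
        # SAME pads so that output covers every input position.
        return math.ceil(in_size / stride)
    elif padding == 'VALID':
        # VALID only places the filter where it fits entirely inside.
        return math.ceil((in_size - filter_size + 1) / stride)
    raise ValueError("padding must be 'SAME' or 'VALID'")
```

For example, a 5×5 filter with stride 1 on a 28×28 image gives 28×28 output under SAME and 24×24 under VALID.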

Tensorflow’s 4d tensor packing for images?

TensorFlow supports NHWC (default) and NCHW (cuDNN default). The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.

NCHW is, to be clear, (batch, channels, height, width).

Theano, by contrast, is AFAICT always NCHW.
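Converting between the two layouts is just an axis permutation; a numpy sketch:

```python
import numpy as np

# A batch of 2 RGB images, 4x6 pixels, in TensorFlow's default NHWC layout.
batch_nhwc = np.zeros((2, 4, 6, 3))

# To NCHW (the cuDNN-preferred layout): move channels to axis 1.
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))

# And back again.
roundtrip = np.transpose(batch_nchw, (0, 2, 3, 1))
```

Inside a graph the equivalent is `tf.transpose` with the same permutation.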

Recurrent/fancy networks

The documentation for these is abysmal.

To write: How to create standard linear filters in Tensorflow.

For now, my recommendation is to simply use keras, which makes this easier inside tensorflow, or pytorch, which makes it easier overall.

TensorFlow Fold is a library which ingests structured data and simulates pytorch-style dynamic graphs whose shape depends on that structure.

Official documentation

The Tensorflow RNN documentation, as bad as it is, is not even easy to find, being scattered across several non-obvious locations without consistent crosslinks.

To make it actually make sense without undue time-wasting and guessing, you will need to read other material as well.

Community guides

You probably want to use the higher-level keras API unless your needs are extraordinarily esoteric or you like reinventing wheels. Keras is a good choice: it removes a lot of boilerplate, and makes even writing new boilerplate easier.

It adds only a few minor restrictions to your abilities, but by creating a consistent API, has become something of a standard for early access to complex new algorithms you would never have time to re-implement yourself.

I would use it if I were you for anything involving standard neural networks, especially any kind of recurrent network. If you want to optimise a generic, non-deep neural model, you might find the naked tensorflow API has less friction.

Getting models out

Training in the cloud because you don’t have NVIDIA sponsorship

See practical cloud computing, which has a couple of sections on that.

Extending

Tensorflow allows binary extensions, but the docs don’t really explain how they integrate with normal python builds. Here is an example from Uber.

Misc HOWTOs

Nightly builds

http://ci.tensorflow.org/view/Nightly/ (or build your own)

Dynamic graphs

Pytorch has JIT graphs and they are super hip, so now tensorflow has a dynamic graph mode, called Eager.

GPU selection

setGPU sets CUDA_VISIBLE_DEVICES to the least-loaded GPU.

Silencing tensorflow

TF_CPP_MIN_LOG_LEVEL=1 primusrun python run_job.py biquad_fast
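The same effect from inside a script; the variable must be set before tensorflow is imported, and higher values filter more:

```python
import os

# Must run before `import tensorflow`:
# 0 = everything, 1 = drop INFO, 2 = drop INFO+WARNING,
# 3 = drop everything but FATAL.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
```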

Hessians and higher order optimisation

A basic Newton-method optimisation example: very simple, and it also shows how to construct a diagonal Hessian.
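Stripped of TensorFlow, the idea is: at each step solve H d = g and update x ← x − d. A numpy sketch on a toy quadratic (all names are mine):

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton update: x <- x - H(x)^-1 g(x)."""
    return x - np.linalg.solve(hess(x), grad(x))

# Toy convex objective f(x) = x'Ax/2 - b'x, minimised where Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess = lambda x: A

x = np.zeros(2)
x = newton_step(grad, hess, x)  # exact minimum in one step for a quadratic
```

In TF you would build `grad` and `hess` with `tf.gradients` (or `tf.hessians`) over the loss instead.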

Slightly outdated: Hessian matrix. There is a discussion of Jacobians in TF, including, e.g., some fancy examples by jjough:

here’s mine – works for high-dimensional Jacobians (numerator and denominator have >1 dimension), undefined batch sizes, and tensors that are not statically known.

Remember to use an interactive session, otherwise tf.get_default_session() will not be able to find the session.

And here’s one for batched tensors:

Manage tensorflow environments

Tensorflow+pip+conda

Optimisation tricks

Using traditional/experimental optimisers rather than SGD-type ones.

Simplify distributed training using Horovod.