The Living Thing / Notebooks :

Automatic differentiation

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Getting your computer to tell you the gradient of a function, without resorting to finite difference approximation, or coding an analytic derivative by hand. We usually mean this in the sense of automatic forward or reverse mode differentiation, which is not, as such, a symbolic technique, but symbolic differentiation gets an incidental look-in, and these ideas do of course relate.

Infinitesimal/Taylor series formulations, the related dual number formulations, and even fancier hyperdual formulations. Reverse-mode, a.k.a. Backpropagation, versus forward-mode etc. Computational complexity of all the above.
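To fix intuitions, here is a minimal sketch of the dual-number idea in Julia (a toy of my own, not any of the libraries discussed below): carry a (value, derivative) pair through every primitive operation, and the chain rule composes them for you.

struct D <: Number
    v::Float64   # value
    d::Float64   # derivative, i.e. the coefficient of the infinitesimal part
end

Base.:+(a::D, b::D) = D(a.v + b.v, a.d + b.d)
Base.:*(a::D, b::D) = D(a.v * b.v, a.v * b.d + a.d * b.v)
Base.sin(a::D) = D(sin(a.v), cos(a.v) * a.d)
Base.convert(::Type{D}, x::Real) = D(x, 0.0)
Base.promote_rule(::Type{D}, ::Type{<:Real}) = D

g(x) = sin(x) * x + x      # any composite of the primitives above
g(D(2.0, 1.0))             # seed d = 1: returns g(2) and g′(2) in one pass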

There is a beautiful explanation of the basics of reverse mode by Sanjeev Arora and Tengyu Ma.

You might want to do this for ODE quadrature, or sensitivity analysis, or for optimisation, either batch or SGD, especially in neural networks, matrix factorisations, variational approximation etc. This is not news these days, but it took a stunningly long time to become common since its inception in the… 1970s? See, e.g. Justin Domke, who claimed automatic differentiation to be the most criminally underused tool in the machine learning toolbox. (That escalated quickly.) See also a timely update by Tim Vieira.

Related: symbolic mathematical calculators.

There are many ways you can do automatic differentiation, and I won’t attempt a comprehensive introduction to the various approaches here. This is a well-ploughed field, and there is plenty of good material out there already, with fancy diagrams and the like. Symbolic, numeric, dual/forward, backwards mode… Notably, you don’t have to choose between them: you can, for example, use forward differentiation to calculate an expedient step in the middle of a backward differentiation pass.
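Here is a sketch of that classic mixed-mode trick: a Hessian-vector product computed by forward-differentiating a reverse-mode gradient. I am assuming ForwardDiff and Zygote (both discussed below) compose cleanly for the function at hand, which is not guaranteed in general.

using ForwardDiff, Zygote

f(x) = sum(abs2, x)^2 / 4

# Forward-over-reverse: differentiate t ↦ ∇f(x + t v) at t = 0.
# The inner gradient is reverse mode; the outer derivative is forward mode.
hvp(f, x, v) = ForwardDiff.derivative(t -> Zygote.gradient(f, x .+ t .* v)[1], 0.0)

hvp(f, [1.0, 2.0], [0.0, 1.0])   # ≈ H(x) * v, without ever forming the Hessian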

To do: investigate unorthodox methods such as Benoît Pasquier’s F-1 Method. (source)

This package implements the F-1 algorithm described […] It allows for efficient quasi-auto-differentiation of an objective function defined implicitly by the solution of a steady-state problem.

See, e.g. Mike Innes’ hands-on introduction, or his terse, opinionated introductory paper, Innes (2018). There is a well-established terminology for sensitivity analysis discussing adjoints, e.g. Steven Johnson’s class notes and his references (Johnson 2012; Errico 1997; Cao et al. 2003).

Software

Julia

Julia has an embarrassment of different methods of autodiff (homoiconicity and introspection make this comparatively easy), and it’s not always clear what the comparative selling points of each are.

The juliadiff project produces ForwardDiff.jl and ReverseDiff.jl which do what I would expect, namely autodiff in forward and reverse mode respectively. ForwardDiff claims to be very advanced. ReverseDiff works but is abandoned.

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions

In my casual tests it seems to be slow for my purposes, because I constantly need to create a new single-argument closure and differentiate it each time. Or maybe I’m doing it wrong, and the compiler will deal with this if I set it up right? Or maybe most people are not solving my kind of problem, e.g. finding many different optima in similar sub-problems. I suspect this difficulty would vanish if you were solving one big expensive optimisation with many steps, as with neural networks. Update: I was doing it wrong. This gets faster if you avoid type ambiguities by, e.g., setting up your problem inside a function. I’m not sure if there is any remaining overhead in this closure-based system, but it’s not so bad.
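For concreteness, here is a sketch of the pattern I mean (the toy objective and names are mine): build the closure inside a function so the captured data has a concrete type, and pass a GradientConfig so the dual-number work buffers are preallocated, which matters if you call the gradient repeatedly.

using ForwardDiff

function fit_gradient(data::Vector{Float64}, θ0::Vector{Float64})
    loss(θ) = sum(abs2, data .- θ[1]) + θ[2]^2    # toy objective; captures data
    cfg = ForwardDiff.GradientConfig(loss, θ0)    # preallocated work buffers; reuse if called in a loop
    ForwardDiff.gradient(loss, θ0, cfg)
end

fit_gradient(randn(100), [0.0, 1.0])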

In forward mode (desirable when, e.g., I have few parameters with respect to which I must differentiate), when do I use DualNumbers.jl? Probably never; it seems to be deprecated in favour of a similar system in ForwardDiff.jl, and ForwardDiff is well supported. It seems to be fast for functions with low-dimensional arguments. It is not clearly documented how one would provide custom derivatives, but apparently you can still use method extensions for Dual types, of which there is an example in the issue tracker. The recommended way is extending DiffRules.jl, which is a little circuitous if you are building custom functions to interpolate. It does not seem to support Wirtinger derivatives yet.
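Here is a sketch of the method-extension route, with a toy function of my own; the exact Dual constructor details may differ between ForwardDiff versions. The idea is to intercept Dual arguments and supply the analytic derivative instead of letting ForwardDiff trace through the primal implementation.

using ForwardDiff
using ForwardDiff: Dual, value, partials

softplus(x::Real) = log1p(exp(x))         # the primal definition

# Hand-written rule: d/dx log(1 + eˣ) = 1 / (1 + e⁻ˣ)
function softplus(d::Dual{T}) where {T}
    x  = value(d)
    dy = 1 / (1 + exp(-x))
    Dual{T}(softplus(x), dy * partials(d))
end

ForwardDiff.derivative(softplus, 0.3)     # uses the hand-written rule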

Related to this forward differential formalism is Luis Benet and David P. Sanders’ TaylorSeries.jl, which is satisfyingly explicit, and seems to generalise in several unusual directions.

TaylorSeries.jl is an implementation of high-order automatic differentiation, as presented in the book by W. Tucker (2011). The general idea is the following.

The Taylor series expansion of an analytical function \(f(t)\) with one independent variable \(t\) around \(t_0\) can be written as

\[ f(t) = f_0 + f_1 (t-t_0) + f_2 (t-t_0)^2 + \cdots + f_k (t-t_0)^k + \cdots, \] where \(f_0=f(t_0)\), and the Taylor coefficients \(f_k = f_k(t_0)\) are the \(k\)th normalized derivatives at \(t_0\):

\[ f_k = \frac{1}{k!} \frac{{\rm d}^k f} {{\rm d} t^k}(t_0). \]

Thus, computing the high-order derivatives of \(f(t)\) is equivalent to computing its Taylor expansion.… Arithmetic operations involving Taylor series can be expressed as operations on the coefficients.

It has a number of functional-approximation analysis tricks. 🚧
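A quick sketch of the coefficient bookkeeping, assuming the Taylor1 constructor and getcoeff accessor as documented:

using TaylorSeries

t = Taylor1(Float64, 6)     # the independent variable, expanded to order 6 around 0
p = exp(sin(t))             # composite operations propagate the coefficients

k  = 3
fk = getcoeff(p, k)         # the normalized coefficient fₖ
factorial(k) * fk           # the k-th derivative of exp∘sin at t₀ = 0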

HyperDualNumbers promises cheap 2nd-order derivatives by generalizing Dual Numbers to HyperDuals. (ForwardDiff claims to support Hessians by Dual-of-Duals, which are supposed to be the same as HyperDuals.) I am curious which of the two, ForwardDiff’s Dual-of-Dual or HyperDualNumbers, is the faster way of generating Hessians. HyperDualNumbers has some very nice tricks. Look at the HyperDualNumbers homepage example, where we evaluate the derivatives of f at x by evaluating it at hyper(x, 1.0, 1.0, 0.0).

> f(x) = ℯ^x / (sqrt(sin(x)^3 + cos(x)^3))
> t0 = Hyper(1.5, 1.0, 1.0, 0.0)
> y = f(t0)
4.497780053946162 + 4.053427893898621ϵ1 +
  4.053427893898621ϵ2 + 9.463073681596601ϵ1ϵ2

The first term is the function value, the coefficients of both ϵ1 and ϵ2 (which correspond to the second and third arguments of hyper) are equal to the first derivative, and the coefficient of ϵ1ϵ2 is the second derivative.

Really nice. However, AFAICT this method does not actually get you a full Hessian, except in a trivial sense, because it only seems to return the right answer for scalar functions of scalar arguments. That is great if you can reduce your problem to scalar parameters, in the sense of having a diagonal Hessian, but it skips lots of interesting cases. One useful case it does not skip, if I have this right, is diagonal preconditioning of tricky optimisations.
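For that diagonal-preconditioning use case, something like the following sketch should work, perturbing one coordinate at a time. I am assuming here that eps1eps2 is the accessor for the ϵ1ϵ2 coefficient; check the HyperDualNumbers docs for the exact name.

using HyperDualNumbers

function hessian_diag(f, x::Vector{Float64})
    d = similar(x)
    for i in eachindex(x)
        # seed both infinitesimal parts on coordinate i only
        xh = [j == i ? Hyper(x[j], 1.0, 1.0, 0.0) : Hyper(x[j], 0.0, 0.0, 0.0)
              for j in eachindex(x)]
        d[i] = eps1eps2(f(xh))   # ∂²f/∂xᵢ² at x (assumed accessor name)
    end
    d
end

hessian_diag(x -> sum(exp, x) + x[1] * x[2], [0.5, -0.3])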

Pro tip: the actual manual is the walk-through which is not linked from the purported manual.

Another curiosity: Benoît Pasquier’s (n.d.) (F-1 Method) Dual Matrix Tools and Hyper Dual Matrix Tools, which extend this to certain implicit derivatives arising in steady-state problems.

How about Zygote.jl then? That’s an alternative AD library from the creators of Flux. It usually operates in reverse mode and does some zany compilation tricks to get extra fast. It also has a forward mode. It has many fancy features, including compiling to Google Cloud TPUs. Hessian support is “somewhat”. Flux itself does not yet default to Zygote, using its own specialised reverse-mode autodiff, Tracker, but promises to switch transparently to Zygote in the future. In the interim, Zygote is still attractive: it has many luxurious options, such as easily defining optimised custom derivatives, as well as weird quirks such as occasionally bizarre error messages and failures to notice source code updates.
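For example, a reverse-mode gradient and a hand-rolled custom derivative via Zygote’s @adjoint macro (a sketch; the toy functions are mine):

using Zygote

loss(w) = sum(abs2, w) / 2
Zygote.gradient(loss, [1.0, 2.0, 3.0])        # ([1.0, 2.0, 3.0],)

# A custom adjoint: give Zygote the reverse rule for our own function
# instead of letting it differentiate through the definition.
mysquare(x) = x^2
Zygote.@adjoint mysquare(x) = mysquare(x), ȳ -> (2x * ȳ,)

Zygote.gradient(x -> mysquare(x) + 1, 3.0)    # (6.0,)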

One could roll one’s own autodiff system using the basic diff definitions in DiffRules (see the sketch below). There is also the very fancy planned Capstan, which aims to use a tape system to inject forward- and reverse-mode differentiation into even very hostile code, and do much more besides. However, it doesn’t work yet, and depends upon Julia features that also don’t work yet, so don’t hold your breath. (Or: help them out!)
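DiffRules stores the primitive rules as quoted expressions, which you can look up and splice into generated code; this is roughly what ForwardDiff and friends do internally. A tiny sketch:

using DiffRules

DiffRules.diffrule(:Base, :sin, :x)     # :(cos(x))
DiffRules.diffrule(:Base, :^, :x, :y)   # a tuple: the partials w.r.t. x and y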

See also XGrad which does symbolic differentiation. It prefers to have access to the source code as text rather than as an AST. So I think that makes it similar to Zygote, but with worse PR?

Refs

Baydin, Atilim Gunes, and Barak A. Pearlmutter. 2014. “Automatic Differentiation of Algorithms for Machine Learning,” April. http://arxiv.org/abs/1404.7456.

Baydin, Atilim Gunes, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. 2015. “Automatic Differentiation in Machine Learning: A Survey,” February. http://arxiv.org/abs/1502.05767.

Baydin, Atılım Güneş, Barak A. Pearlmutter, and Jeffrey Mark Siskind. 2016. “Tricks from Deep Learning,” November. http://arxiv.org/abs/1611.03777.

Cao, Y., S. Li, L. Petzold, and R. Serban. 2003. “Adjoint Sensitivity Analysis for Differential-Algebraic Equations: The Adjoint DAE System and Its Numerical Solution.” SIAM Journal on Scientific Computing 24 (3): 1076–89. https://doi.org/10.1137/S1064827501380630.

Carpenter, Bob, Matthew D. Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt. 2015. “The Stan Math Library: Reverse-Mode Automatic Differentiation in C++.” arXiv Preprint arXiv:1509.07164. http://arxiv.org/abs/1509.07164.

Errico, Ronald M. 1997. “What Is an Adjoint Model?” Bulletin of the American Meteorological Society 78 (11): 2577–92. https://doi.org/10.1175/1520-0477(1997)078<2577:WIAAM>2.0.CO;2.

Fike, Jeffrey, and Juan Alonso. 2011. “The Development of Hyper-Dual Numbers for Exact Second-Derivative Calculations.” In 49th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition. Orlando, Florida: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2011-886.

Fischer, Keno, and Elliot Saba. 2018. “Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs,” October. http://arxiv.org/abs/1810.09868.

Giles, Mike B. 2008. “Collected Matrix Derivative Results for Forward and Reverse Mode Algorithmic Differentiation.” In Advances in Automatic Differentiation, edited by Christian H. Bischof, H. Martin Bücker, Paul Hovland, Uwe Naumann, and Jean Utke, 64:35–44. Berlin, Heidelberg: Springer Berlin Heidelberg. http://eprints.maths.ox.ac.uk/1079/.

Gower, R. M., and A. L. Gower. 2016. “Higher-Order Reverse Automatic Differentiation with Emphasis on the Third-Order.” Mathematical Programming 155 (1-2): 81–103. https://doi.org/10.1007/s10107-014-0827-4.

Griewank, Andreas, and Andrea Walther. 2008. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. 2nd ed. Philadelphia, PA: Society for Industrial and Applied Mathematics.

Haro, A. 2008. “Automatic Differentiation Methods in Computational Dynamical Systems: Invariant Manifolds and Normal Forms of Vector Fields at Fixed Points.” IMA Note. http://www.maia.ub.es/~alex/admcds/admcds.pdf.

Innes, Michael. 2018. “Don’t Unroll Adjoint: Differentiating SSA-Form Programs,” October. http://arxiv.org/abs/1810.07951.

Johnson, Steven G. 2012. “Notes on Adjoint Methods for 18.335.”

Laue, Soeren, Matthias Mitterreiter, and Joachim Giesen. 2018. “Computing Higher Order Derivatives of Matrix and Tensor Expressions.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 2750–9. Curran Associates, Inc. http://papers.nips.cc/paper/7540-computing-higher-order-derivatives-of-matrix-and-tensor-expressions.pdf.

Maclaurin, Dougal, David K. Duvenaud, and Ryan P. Adams. 2015. “Gradient-Based Hyperparameter Optimization Through Reversible Learning.” In ICML, 2113–22. http://www.jmlr.org/proceedings/papers/v37/maclaurin15.pdf.

Neidinger, R. 2010. “Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming.” SIAM Review 52 (3): 545–63. https://doi.org/10.1137/080743627.

Neuenhofen, Martin. 2018. “Review of Theory and Implementation of Hyper-Dual Numbers for First and Second Order Automatic Differentiation,” January. http://arxiv.org/abs/1801.03614.

Pasquier, B, and F Primeau. n.d. “The F-1 Algorithm for Efficient Computation of the Hessian Matrix of an Objective Function Defined Implicitly by the Solution of a Steady-State Problem.” SIAM Journal on Scientific Computing. https://www.bpasquier.com/publication/pasquier_primeau_sisc_2019/.

Rall, Louis B. 1981. Automatic Differentiation: Techniques and Applications. Lecture Notes in Computer Science 120. Berlin ; New York: Springer-Verlag.

Revels, Jarrett, Miles Lubin, and Theodore Papamarkou. 2016. “Forward-Mode Automatic Differentiation in Julia,” July. http://arxiv.org/abs/1607.07892.

Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36. https://doi.org/10.1038/323533a0.

Tucker, Warwick. 2011. Validated Numerics: A Short Introduction to Rigorous Computations. Princeton: Princeton University Press. http://public.eblib.com/choice/publicfullrecord.aspx?p=683309.