
Optimisation, Newton-like

Second order as a side order

Notes here iff I need ’em.

Newton-type optimisation uses second-order gradient information (i.e. the Hessian matrix) to solve optimisation problems.

Can one do higher-order optimisation? Of course, but in practice second order is already challenging, so let’s pause to examine it.
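For concreteness, here is a minimal sketch of a damped Newton iteration, assuming jax for the derivatives; the test objective `f` and the fixed step size are illustrative choices, not taken from any particular reference.

```python
import jax
import jax.numpy as jnp

# Illustrative smooth, strongly convex test objective (not canonical).
def f(x):
    return jnp.sum(jnp.log1p(jnp.exp(x))) + 0.5 * jnp.dot(x, x)

def newton_step(x, step_size=1.0):
    g = jax.grad(f)(x)              # first-order information
    H = jax.hessian(f)(x)           # second-order information: a d x d matrix
    p = jnp.linalg.solve(H, g)      # solve H p = g rather than inverting H
    return x - step_size * p

x = jnp.array([3.0, -2.0, 1.0])
for _ in range(5):
    x = newton_step(x)
print(x, jax.grad(f)(x))            # gradient should be near zero at the optimum
```

Forming and factorising the \(d \times d\) Hessian is the expensive part, which is what the Hessian-free methods below avoid.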

Hessian-free

Second-order optimisation that does not require the Hessian matrix to be given explicitly.

Andrew Gibiansky’s example for coders.
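A minimal sketch of the idea, again assuming jax and a toy objective: the Hessian never appears as a matrix; conjugate gradients only needs Hessian-vector products, which autodiff supplies at roughly the cost of an extra gradient evaluation.

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

def f(x):
    return jnp.sum(jnp.log1p(jnp.exp(x))) + 0.5 * jnp.dot(x, x)

def hvp(x, v):
    # Hessian-vector product by forward-over-reverse autodiff (Pearlmutter-style).
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

def hessian_free_step(x, cg_iters=20):
    g = jax.grad(f)(x)
    # Approximately solve H p = g using only matrix-vector products.
    p, _ = cg(lambda v: hvp(x, v), g, maxiter=cg_iters)
    return x - p

x = jnp.array([3.0, -2.0, 1.0])
for _ in range(5):
    x = hessian_free_step(x)
```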

Stochastic

LiSSA attempts to make second-order gradient descent methods practical (AgBH16):

linear time stochastic second order algorithm that achieves linear convergence for typical problems in machine learning while still maintaining run-times theoretically comparable to state-of-the-art first order algorithms. This relies heavily on the special structure of the optimization problem that allows our unbiased hessian estimator to be implemented efficiently, using only vector-vector products.

David McAllester observes:

Since \(H^{t+1}y^t\) can be computed efficiently whenever we can run backpropagation, the conditions under which the LiSSA algorithm can be run are actually much more general than the paper suggests. Backpropagation can be run on essentially any natural loss function.
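Roughly, the estimator truncates the Neumann series \((\nabla^2 f)^{-1} = \sum_{i \ge 0} (I - \nabla^2 f)^i\) (valid once the Hessian is scaled to have spectral norm below one) and evaluates it with freshly sampled Hessian-vector products. A hedged sketch of that recursion in jax; the names (`per_sample_loss`, `depth`) and the toy data are my own illustration, not the paper’s notation.

```python
import jax
import jax.numpy as jnp

def per_sample_loss(x, datum):
    # Illustrative per-sample objective; swap in your model's loss.
    return jnp.log1p(jnp.exp(-jnp.dot(datum, x))) + 0.05 * jnp.dot(x, x)

def sample_hvp(x, datum, v):
    # Hessian-vector product of a single sample's loss, via forward-over-reverse autodiff.
    grad_x = lambda x_: jax.grad(per_sample_loss)(x_, datum)
    return jax.jvp(grad_x, (x,), (v,))[1]

def lissa_direction(x, g, data, key, depth=50):
    # u_j = g + (I - H_{i_j}) u_{j-1}, with a freshly sampled Hessian each step;
    # the iterates approximate H^{-1} g when the (scaled) Hessian has norm below one.
    u = g
    idx = jax.random.randint(key, (depth,), 0, data.shape[0])
    for i in idx:
        u = g + u - sample_hvp(x, data[i], u)
    return u

# Toy usage: full gradient, then an approximate Newton direction from sampled HVPs.
key = jax.random.PRNGKey(0)
data = jax.random.normal(key, (200, 3))
x = jnp.zeros(3)
g = jax.grad(lambda x_: jnp.mean(jax.vmap(per_sample_loss, (None, 0))(x_, data)))(x)
step = lissa_direction(x, g, data, key, depth=100)
```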

What is Francis Bach’s new baby? Finite-sample guarantees for certain Newton-like treatments of SGD for certain problems (BaMo11, BaMo13):

Beyond stochastic gradient descent for large-scale machine learning

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations (‘large n’) and each of these is large (‘large p’). In this setting, online algorithms such as stochastic gradient descent which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent. (joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt).

Secant conditions and update designs

Let’s say we are designing a second-order update method.

See e.g. Nocedal and Wright.

The BFGS update satisfies the secant condition \[H_k s = y,\] i.e. \[H_k(x^{(k)} - x^{(k-1)}) = \nabla f(x^{(k)}) - \nabla f(x^{(k-1)}).\]
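One standard choice satisfying it is the rank-two BFGS update of the Hessian approximation, \[H_k = H_{k-1} - \frac{H_{k-1} s s^\top H_{k-1}}{s^\top H_{k-1} s} + \frac{y y^\top}{y^\top s}.\] A quick numerical check that the secant condition then holds exactly; the vectors below are arbitrary illustrative data chosen so that \(y^\top s > 0\) (the curvature condition).

```python
import jax.numpy as jnp

def bfgs_update(H, s, y):
    # Rank-two BFGS update of the Hessian approximation H.
    Hs = H @ s
    return H - jnp.outer(Hs, Hs) / (s @ Hs) + jnp.outer(y, y) / (y @ s)

H = jnp.eye(3)                      # previous approximation H_{k-1}
s = jnp.array([0.5, -1.0, 0.25])    # x^{(k)} - x^{(k-1)}
y = jnp.array([1.0, -2.5, 0.75])    # grad f(x^{(k)}) - grad f(x^{(k-1)})

H_new = bfgs_update(H, s, y)
print(H_new @ s - y)                # ~0: the updated approximation satisfies H_k s = y
```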

TBC

Refs