
Optimisation

Crawling through alien landscapes in the fog, looking for mountain peaks.

I'm mostly interested in continuous optimisation, but, you know, combinatorial optimisation is a whole thing.

A vast topic, with many sub-topics.

As Moritz Hardt observes (and this is just in the convex context)

It’s easy to spend a semester of convex optimization on various guises of gradient descent alone. Simply pick one of the following variants and work through the specifics of the analysis: conjugate, accelerated, projected, conditional, mirrored, stochastic, coordinate, online. This is to name a few. You may also choose various pairs of attributes such as “accelerated coordinate” descent. Many triples are also valid such as “online stochastic mirror” descent. An expert unlike me would know exactly which triples are admissible. You get extra credit when you use “subgradient” instead of “gradient”. This is really only the beginning of optimization and it might already seem confusing.

When I was younger and even more foolish I decided the divide was between online optimisation and offline optimisation, which in hindsight is neither clear nor useful. Now there are more tightly topical pages, such as gradient descent, Hessian-free optimisation and surrogate optimisation, and I shall create additional ones as circumstances demand.

Brief taxonomy here.

TODO: Diagram.

See Zeyuan Allen-Zhu and Elad Hazan on their teaching strategy, which also gives a split into 16 different configurations:

The following dilemma is encountered by many of my friends when teaching basic optimization: which variant/proof of gradient descent should one start with? Of course, one needs to decide on which depth of convex analysis one should dive into, and decide on issues such as “should I define strong-convexity?”, “discuss smoothness?”, “Nesterov acceleration?”, etc.

[…] If one wishes to go into more depth, usually in convex optimization courses, one covers the full spectrum of different smoothness/ strong-convexity/ acceleration/ stochasticity regimes, each with a separate analysis (a total of 16 possible configurations!)

This year I've tried something different in COS511 @ Princeton, which turns out also to have research significance. We've covered basic GD for well-conditioned functions, i.e. smooth and strongly-convex functions, and then extended these results by reduction to all other cases! A (simplified) outline of this teaching strategy is given in chapter 2 of Introduction to Online Convex Optimization.

Classical Strong-Convexity and Smoothness Reductions:

Given any optimization algorithm A for the well-conditioned case (i.e., strongly convex and smooth case), we can derive an algorithm for smooth but not strongly convex functions as follows.

Given a non-strongly convex but smooth objective $f$, define a new objective $\tilde{f}$ by $\tilde{f}(x) = f(x) + \tfrac{\epsilon}{2}\|x\|^2$.

It is straightforward to see that $\tilde{f}$ differs from $f$ by at most ϵ times a distance factor, and in addition it is ϵ-strongly convex. Thus, one can apply A to minimize $\tilde{f}$ and get a solution which is not too far from the optimal solution for $f$ itself. This simplistic reduction yields an almost optimal rate, up to logarithmic factors.
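A minimal sketch of that reduction in code (the well-conditioned solver A is played here by an off-the-shelf scipy routine; the wrapper names are mine, not from the post):

```python
import numpy as np
from scipy.optimize import minimize

def strongly_convexify(f, grad_f, eps):
    """Wrap a smooth objective so the wrapped problem is eps-strongly convex."""
    f_eps = lambda x: f(x) + 0.5 * eps * np.dot(x, x)
    grad_f_eps = lambda x: grad_f(x) + eps * x
    return f_eps, grad_f_eps

# A smooth but not strongly convex objective: flat along the direction x0 == x1.
f = lambda x: (x[0] - x[1]) ** 2
grad_f = lambda x: np.array([2.0, -2.0]) * (x[0] - x[1])

f_eps, grad_f_eps = strongly_convexify(f, grad_f, eps=1e-3)
res = minimize(f_eps, x0=np.array([3.0, 1.0]), jac=grad_f_eps, method="L-BFGS-B")
print(res.x)  # a near-minimiser of f, perturbed by O(eps)
```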

Keywords: Complementary slackness theorem, high or very high dimensional methods, approximate methods, Lagrange multipliers, primal and dual problems, fixed point methods, gradient, subgradient, proximal gradient, optimal control problems, convexity, sparsity, ways to avoid wrecking finding the extrema of perfectly simple little 10000-parameter functions before everyone observes that you are a fool in the guise of a mathematician but everyone is not there because you wandered off the optimal path hours ago, and now you are alone and lost in a valley of lower-case Greek letters.

See also geometry of fitness landscapes, expectation maximisation, matrix factorisations, discrete optimisation, nature-inspired “meta-heuristic” optimisation.

General

Brief intro material

Textbooks

Whole free textbooks online. Mostly convex.

Constrained optimisation

Related: constraint solvers.

Lagrange multipliers

Constrained optimisation using Lagrange's one weird trick, and the Karush–Kuhn–Tucker conditions. The search for saddle points and roots.
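For reference, the standard statement (textbook material, not tied to any particular source linked here): to minimise $f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, form the Lagrangian and look for points satisfying the KKT conditions:

$$
\begin{aligned}
\mathcal{L}(x, \mu, \lambda) &= f(x) + \textstyle\sum_i \mu_i g_i(x) + \sum_j \lambda_j h_j(x), \\
\nabla_x \mathcal{L}(x^\star, \mu^\star, \lambda^\star) &= 0, \qquad g_i(x^\star) \le 0, \qquad h_j(x^\star) = 0, \\
\mu_i^\star &\ge 0, \qquad \mu_i^\star \, g_i(x^\star) = 0 \quad \text{(complementary slackness).}
\end{aligned}
$$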

Alternating Direction Method of Multipliers

Dunno. It's everywhere, though. Maybe that's a matter of definition? (Boyd10)

In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ₁ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
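A minimal sketch of the flavour of the method, for the lasso (this is the generic textbook-style splitting, not code from the review; ρ and the iteration count here are arbitrary):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for 0.5*||Ax - b||^2 + lam*||x||_1, via the splitting x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # reused by every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)                # prox step on the l1 term
        u = u + x - z                                       # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(admm_lasso(A, b, lam=1.0)[:5])
```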

Reductions

Oh crap, where did I rip this quote from?

The diverse world of machine learning applications has given rise to a plethora of algorithms and optimization methods, finely tuned to the specific regression or classification task at hand. We reduce the complexity of algorithm design for machine learning by reductions: we develop reductions that take a method developed for one setting and apply it to the entire spectrum of smoothness and strong-convexity in applications.

It will be from a Hazan and Allen-Zhu paper.

Continuous approximations of iterations

Recent papers (WiWJ16, WiWi15) argue that the discrete steps of many iterative optimisers can be viewed as discretisations of a continuous-time ODE whose trajectory approaches the optimum (which in itself is trivial), but moreover that many algorithms fit into the same families of ODEs, and that these ODEs explain Nesterov acceleration and generate new, improved optimisation methods (which is not trivial).
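In symbols, my paraphrase of the basic observation: vanilla gradient descent with step size $\eta$ is the explicit Euler discretisation of the gradient flow ODE, while accelerated methods correspond to second-order ODEs with a time-dependent damping term:

$$
\dot{x}(t) = -\nabla f(x(t))
\;\;\xrightarrow{\;\text{Euler, step } \eta\;}\;\;
x_{k+1} = x_k - \eta \nabla f(x_k),
\qquad
\ddot{X}(t) + a(t)\,\dot{X}(t) + \nabla f(X(t)) = 0.
$$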

Continuous relaxations of parameters

Solving discrete problems with differentiable, continuous, surrogate parameters.

Convex

…Of composite functions

Hip for sparse regression, compressed sensing, etc. “FISTA” is one option. (Bubeck explains.)
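A minimal proximal-gradient (ISTA-style) sketch for the composite objective $\tfrac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1$; FISTA adds a momentum/extrapolation step on top of this. The step size comes from the Lipschitz constant; the iteration count is arbitrary:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox of the l1 part
    return x
```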

Second order (Quasi-Newton)

If you have the second derivative (or a workable approximation to it) you can be fancier about finding the zeros of the gradient.

Trust region

Pretend the objective is locally quadratic, take a step within a region where you trust that pretence, then check how bad the pretence was and resize the region accordingly.
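In practice one rarely rolls this oneself; scipy.optimize exposes Newton-type and trust-region methods if you can supply derivatives (or just Hessian-vector products). A small usage sketch on the built-in Rosenbrock test function:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method="trust-ncg",
               jac=rosen_der, hessp=rosen_hess_prod,
               options={"gtol": 1e-8})
print(res.x)  # should approach the known minimiser, all ones
```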

Least Squares

This particular objective function admits special shortcuts; e.g. you don't necessarily need a gradient method at all.

Sebastien Bubeck has a good write-up: Part 1, Part 2.
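For instance, a dense linear least-squares problem can be solved directly, with no gradient iterations at all:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=100)

# Direct solve via an orthogonal factorisation under the hood.
x_hat, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)  # recovers roughly [1, 2, 3, 4, 5]
```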

Conjugate gradient method

Finding quadratic minima, as in Least Squares.

And also not-quite-linear uses of this.

A first order method.
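In practice: conjugate gradients solve $Ax = b$ for symmetric positive-definite $A$ using only matrix-vector products, which is the same thing as minimising the quadratic $\tfrac{1}{2}x^\top A x - b^\top x$. A minimal scipy sketch:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 100))
A = M @ M.T + 100 * np.eye(100)   # symmetric positive definite
b = rng.normal(size=100)

x, info = cg(A, b, atol=1e-10)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means it converged
```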

…on a manifold

What if your constraints are naturally represented as some kind of smooth manifold? Is that worth thinking about? Apparently sometimes it is. See, e.g. ToKW16, BMAS14, or the free textbook on this theme.

See also Nicolas Boumal's introductory blog post.

Optimization on manifolds is about solving problems of the form

$$\min_{x \in \mathcal{M}} f(x)$$

where $\mathcal{M}$ is a nice, known manifold. By “nice”, I mean a smooth, finite-dimensional Riemannian manifold.

Practical examples include the following (and all possible products of these):

Conceptually, the key point is to think of optimization on manifolds as unconstrained optimization: we do not think of $\mathcal{M}$ as being embedded in a Euclidean space. Rather, we think of $\mathcal{M}$ as being “the only thing that exists,” and we strive for intrinsic methods. Besides making for elegant theory, it also makes it clear how to handle abstract spaces numerically (such as the Grassmann manifold for example); and it gives algorithms the “right” invariances (computations do not depend on an arbitrarily chosen representation of the manifold).

There are at least two reasons why this class of problems is getting much attention lately. First, it is because optimization problems over the aforementioned sets (mostly matrix sets) come up pervasively in applications, and at some point it became clear that the intrinsic viewpoint leads to better algorithms, as compared to general-purpose constrained optimization methods (where $\mathcal{M}$ is considered as being inside a Euclidean space $\mathcal{E}$, and algorithms move in $\mathcal{E}$, while penalizing distance to $\mathcal{M}$). The second is that, as I will argue momentarily, Riemannian manifolds are “the right setting” to talk about unconstrained optimization. And indeed, there is a beautiful book by [Absil, Sepulchre, Mahony], called Optimization algorithms on matrix manifolds (freely available), that shows how the classical methods for unconstrained optimization (gradient descent, Newton, trust-regions, conjugate gradients…) carry over seamlessly to the more general Riemannian framework.
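A tiny illustration of the intrinsic viewpoint without any library (my own toy, not from the quoted post): Riemannian gradient descent on the unit sphere for the Rayleigh quotient $x^\top A x$, whose constrained minimiser is the eigenvector of the smallest eigenvalue. The Riemannian gradient is the Euclidean gradient projected onto the tangent space, and the “retraction” is just renormalisation:

```python
import numpy as np

def rayleigh_min(A, n_iter=1000, step=0.05):
    """Riemannian gradient descent for min_{||x|| = 1} x^T A x."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        egrad = 2 * A @ x                   # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
        x = x - step * rgrad                # step in the tangent direction
        x /= np.linalg.norm(x)              # retract back onto the sphere
    return x

A = np.diag([5.0, 3.0, 1.0, 0.1])
print(rayleigh_min(A))  # should align (up to sign) with the last coordinate axis
```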

To file

Miscellaneous optimisation techniques suggested on LinkedIn

The whole world of exotic specialized optimisers. See, e.g. Nuit Blanche name-dropping Bregman iteration, alternating methods, augmented Lagrangians…

Coordinate descent

Descend along each coordinate individually.
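A minimal cyclic coordinate-descent sketch for the lasso, one of the settings where this shines (fixed sweep count, no convergence check, purely illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)

def cd_lasso(A, b, lam, n_sweeps=100):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # per-column squared norms
    r = b - A @ x                      # running residual
    for _ in range(n_sweeps):
        for j in range(n):
            r = r + A[:, j] * x[j]     # remove coordinate j's contribution
            rho = A[:, j] @ r          # one-dimensional least-squares fit
            x[j] = soft_threshold(rho, lam) / col_sq[j]
            r = r - A[:, j] * x[j]     # put the updated contribution back
    return x
```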

Small clever hack for certain domains: log gradient descent.

Fixed point methods

Contraction maps are nice when you have them. TBD.
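The one-line version, for a map that happens to be a contraction on the region of interest (cosine, say):

```python
import math

x = 0.5
for _ in range(50):
    x = math.cos(x)   # iterate the contraction
print(x)              # ~0.739085, the fixed point of cos
```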

Primal/dual problems

Majorization-minorization

Difference-of-Convex-objectives

When your objective function is not convex but you can represent it in terms of convex functions, somehow or other, use DC (difference-of-convex) optimisation (GaRC09). (I don't think this guarantees you a global optimum, but rather faster convergence to a local one.)
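The basic DCA-style iteration, as I understand the standard scheme (my summary, not lifted from GaRC09): write the objective as a difference of convex functions and linearise the subtracted part at the current iterate, leaving a convex subproblem each step:

$$
f(x) = g(x) - h(x), \qquad
x_{k+1} \in \arg\min_x \; g(x) - \big( h(x_k) + \langle \nabla h(x_k),\, x - x_k \rangle \big).
$$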

Non-convex optimisation

“How doomed are you?”

Gradient-free optimization

Not all the methods described here use gradient information, but it is frequently assumed to be something you can access cheaply. It's worth considering which objectives you can and cannot differentiate easily.

Not all objectives are easily differentiable, even when the parameters are continuous. For example, if your measurement comes not from a mathematical model but from a physical experiment, you can't differentiate it, since reality itself is not analytically differentiable. In that case you are getting close to a question of online experiment design, as in ANOVA, with the further constraint that your function evaluations are possibly stupendously expensive. See Bayesian optimisation for one approach to this in the context of experiment design.

In general situations like this we use gradient-free methods: simulated annealing, numerical gradient approximations, and so on.

“Meta-heuristic” methods

Biologically-inspired or otherwise arbitrary heuristics: evolutionary algorithms, particle swarm optimisation, ant colony optimisation, harmony search. A lot of the tricks from these have been adopted into mainstream stochastic methods. Some have not.

See biomimetic algorithms for the care and husbandry of such methods.
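Some of these are a single import away; e.g. differential evolution ships with scipy. A minimal usage sketch on an arbitrary multimodal test function:

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # Multimodal; no gradient is supplied or needed.
    return np.sum(x ** 2) + 10 * np.sum(np.sin(5 * x))

bounds = [(-5, 5)] * 3
result = differential_evolution(objective, bounds, seed=0)
print(result.x, result.fun)
```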

Annealing and Monte Carlo optimisation methods

Simulated annealing: construct a stochastic process whose stationary distribution concentrates on maximally-likely estimates for the parameters. This has a statistical-mechanics justification that makes it attractive to physicists, but it's generally useful. You don't necessarily need a gradient here, just the ability to evaluate something interpretable as a “likelihood ratio”. Long story. I don't yet cover this at Monte Carlo methods, but I should.
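A bare-bones Metropolis-style annealing loop, to show how little is needed: only objective evaluations (standing in for that “likelihood ratio”). The proposal scale and cooling schedule below are arbitrary choices, not recommendations:

```python
import numpy as np

def anneal(f, x0, n_iter=5000, scale=0.5, t0=1.0):
    rng = np.random.default_rng(0)
    x, fx = np.asarray(x0, dtype=float), f(x0)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        temp = t0 / k                                  # cooling schedule
        prop = x + scale * rng.normal(size=x.shape)    # random-walk proposal
        fprop = f(prop)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fprop < fx or rng.random() < np.exp(-(fprop - fx) / temp):
            x, fx = prop, fprop
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

f = lambda x: float(np.sum(x ** 2) + 10 * np.sum(np.sin(5 * x)))
print(anneal(f, np.zeros(2)))
```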

Expectation maximization

See expectation maximisation.

My problem: constrained, pathwise sparsifying-penalty optimisers for nonlinear problems

I'm trialling a bunch of sparse Lasso-like regression models. I want them to be fast-ish to run and fast to develop. I want them to go in Python. I would like to be able to vary my regularisation parameter and warm-restart, like the glmnet Lasso. I would like to be able to handle constraints, especially component-wise non-negativity and matrix non-negative-definiteness.

Notes on that here.

Ideas:

use scipy.optimize

give up the idea of warm restarts, and enforce constraints in the callback.

use cvxpy

Pretty good, though not that fast, since it does not in general exploit gradient information. For some problems this is fine, though. (A cvxpy sketch follows this list.)

use spams

Wonderful, but only if your problem fits one of their categories. Otherwise you can maybe extract some bits from their code and use them, but that is now a serious project. They have a passable LASSO.

Roll my own algorithm

Potentially yak-shaving. But I can work from the examples of my colleagues, which are special-purpose algorithms, usually reasonably fast ones.
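As promised above, a sketch of the cvxpy option: a non-negative lasso with the regularisation weight as a Parameter, so the whole path is solved by re-solving one problem object. (Whether the backend genuinely warm-starts depends on the solver; the data here is synthetic and the path values arbitrary.)

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 30))
b = A @ np.maximum(rng.normal(size=30), 0) + 0.1 * rng.normal(size=80)

x = cp.Variable(30, nonneg=True)          # component-wise non-negativity constraint
lam = cp.Parameter(nonneg=True)
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm1(x)))

for lam_value in np.logspace(1, -2, 10):  # sweep the regularisation path
    lam.value = lam_value
    problem.solve()
    print(lam_value, np.count_nonzero(np.round(x.value, 6)))
```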

The hip high-dimensionally-tractable version of classic offline optimisation, one step at a time.

This page is deprecated! It was an awful way of organising optimization sub-disciplines, and didn't even meet my study needs. I will gradually be salvaging the good bits and deleting it.

Mini-batch and stochastic methods for minimising loss when you have a lot of data, or a lot of parameters, and using it all at once is silly, or when you want to iteratively improve your solution as data comes in, and you have access to a gradient for your loss, ideally automatically calculated. It's not obvious a priori that this should work better than collating all your data and optimising offline, but much of modern machine learning demonstrates that it does.

Sometimes this apparently stupid trick is even fast in small-dimensional cases, so you may as well try.

Technically, “online” optimisation in bandit/RL problems might imply that you have to “minimise regret online”, which has a slightly different meaning and, e.g. involves seeing each training example only as it arrives along some notional arrow of time, yet wishing to make the “best” decision at the next time, and possibly choosing your next experiment in order to trade off exploration versus exploitation, etc.

In SGD you can see your data as often as you want and in whatever order, but you only look at a bit at a time. Usually the data is given and predictions make no difference to what information is available to you.

Some of the same technology pops up in each of these notions of online optimisation, but I am really thinking about SGD here.

There are many more permutations and variations used in practice.
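The core loop is tiny. A minimal minibatch SGD for linear least squares (constant step size and batch size chosen arbitrarily; the usual schedules and adaptive variants bolt on from here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
step, batch = 0.01, 32
for epoch in range(5):
    perm = rng.permutation(n)
    for start in range(0, n, batch):
        idx = perm[start:start + batch]          # one shuffled minibatch
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)   # minibatch gradient of the mean squared loss
        w -= step * grad

print(np.linalg.norm(w - w_true))  # should be small
```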

To file

Gradient Descent

The workhorse of many large-ish scale machine-learning techniques and especially deep neural networks. See Sebastian Ruder's explanation and his comparison.

Conditional Gradient

a.k.a. the Frank-Wolfe algorithm. I don't know much about this.
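For the record, a minimal sketch of the idea as I understand it (my own toy, not from a particular reference): each step solves a linear minimisation over the constraint set instead of projecting, which for the ℓ₁ ball means picking a single signed coordinate vertex:

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, n_iter=200):
    """Frank-Wolfe over {x : ||x||_1 <= radius}, given the gradient of a smooth objective."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad(x)
        # Linear minimisation oracle: the best vertex of the l1 ball for this gradient.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (k + 2.0)             # standard step-size schedule
        x = (1 - gamma) * x + gamma * s     # convex combination stays inside the ball
    return x

# Example: least squares constrained to the l1 ball.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 10)), rng.normal(size=40)
grad = lambda x: A.T @ (A @ x - b)
print(frank_wolfe_l1(grad, np.zeros(10)))
```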

Stochastic

Variance-reduced

Zeyuan Allen-Zhu: Faster Than SGD 1: Variance Reduction:

SGD is well-known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this post I'd love to conduct a survey regarding (1) […]
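A generic SVRG-style sketch for an average of per-example least-squares losses (the textbook scheme, with arbitrary step size and epoch length, not code from the linked post):

```python
import numpy as np

def svrg_least_squares(X, y, step=0.01, n_outer=20):
    """SVRG for the average of per-example squared losses 0.5*(x_i @ w - y_i)^2."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    grad_i = lambda w, i: X[i] * (X[i] @ w - y[i])   # per-example gradient
    w_snap = np.zeros(d)
    for _ in range(n_outer):
        full_grad = X.T @ (X @ w_snap - y) / n       # exact gradient at the snapshot
        w = w_snap.copy()
        for _ in range(n):                           # inner loop, one pass worth of steps
            i = rng.integers(n)
            # Corrected stochastic gradient: still unbiased, but with variance that
            # shrinks as w approaches the snapshot point.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= step * g
        w_snap = w                                   # take a new snapshot
    return w_snap

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true
print(np.linalg.norm(svrg_least_squares(X, y) - w_true))  # should be small
```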

Second order (Quasi-Newton at scale)

LiSSA attempts to make 2nd order gradient descent methods practical (AgBH16):

[A] linear time stochastic second order algorithm that achieves linear convergence for typical problems in machine learning while still maintaining run-times theoretically comparable to state-of-the-art first order algorithms. This relies heavily on the special structure of the optimization problem that allows our unbiased hessian estimator to be implemented efficiently, using only vector-vector products.

David McAllester observes:

Since [Hessian-vector products] can be computed efficiently whenever we can run backpropagation, the conditions under which the LiSSA algorithm can be run are actually much more general than the paper suggests. Backpropagation can be run on essentially any natural loss function.
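The point being that Hessian-vector products, which are all such methods need, are cheap: with autodiff they cost roughly one extra backward pass, and even without autodiff a finite difference of gradients does the job. A minimal sketch of the latter on a toy quadratic:

```python
import numpy as np

def hvp(grad_f, x, v, r=1e-6):
    """Approximate the Hessian-vector product H(x) @ v from two gradient evaluations."""
    return (grad_f(x + r * v) - grad_f(x - r * v)) / (2 * r)

# Example: f(x) = 0.5 * x^T A x, so the exact Hessian-vector product is A @ v.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_f = lambda x: A @ x
x, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(hvp(grad_f, x, v), A @ v)   # the two should agree closely
```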

What is Francis Bach's new baby? Finite sample guarantees for certain unusual treatments of SGD for certain problems: (BaMo11, BaMo13)

Beyond stochastic gradient descent for large-scale machine learning

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ('large n') and each of these is large ('large p'). In this setting, online algorithms such as stochastic gradient descent which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent. (joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt).

…With sparsity

Hmmm.

Sundry Hacks

Yellowfin, an automatic SGD momentum tuner.

Parallel

Classic, basic SGD walks through the data set example-wise or feature-wise, but this doesn't parallelise well, so you tend to go for mini-batch gradient descent so that you can at least vectorize. Apparently you can make SGD work in “true” parallel across communication-constrained cores, but I don't yet understand how.

Implementations

Specialised optimisation software.

See also statistical software, and online optimisation

Many of these solvers optionally use commercial backends such as Mosek.

Refs
