
Optimisation

Crawling through alien landscapes in the fog, looking for mountain peaks.

I’m mostly interested in continuous optimisation, but, you know, combinatorial optimisation is a whole thing.

A vast topic, with many sub-topics. I have neither the time nor the expertise to construct a detailed map of these. As Moritz Hardt observes (and this is just in the convex context):

It’s easy to spend a semester of convex optimization on various guises of gradient descent alone. Simply pick one of the following variants and work through the specifics of the analysis: conjugate, accelerated, projected, conditional, mirrored, stochastic, coordinate, online. This is to name a few. You may also choose various pairs of attributes such as “accelerated coordinate” descent. Many triples are also valid such as “online stochastic mirror” descent. An expert unlike me would know exactly which triples are admissible. You get extra credit when you use “subgradient” instead of “gradient”. This is really only the beginning of optimization and it might already seem confusing.

When I was younger and even more foolish I decided the divide was between online optimization and offline optimization, which in hindsight is neither clear nor useful. Now there are more tightly topical pages, such as gradient descent, 2nd order methods, surrogate optimisation and constrained optimisation, and I shall create more as circumstances demand.

Brief taxonomy here.

TODO: Diagram.

See Zeyuan Allen-Zhu and Elad Hazan on their teaching strategy, which also splits the field into 16 different configurations:

The following dilemma is encountered by many of my friends when teaching basic optimization: which variant/proof of gradient descent should one start with? Of course, one needs to decide on which depth of convex analysis one should dive into, and decide on issues such as “should I define strong-convexity?”, “discuss smoothness?”, “Nesterov acceleration?”, etc.

[…] If one wishes to go into more depth, usually in convex optimization courses, one covers the full spectrum of different smoothness / strong-convexity / acceleration / stochasticity regimes, each with a separate analysis (a total of 16 possible configurations!)

This year I’ve tried something different in COS511 @ Princeton, which turns out also to have research significance. We’ve covered basic GD for well-conditioned functions, i.e. smooth and strongly-convex functions, and then extended these results by reduction to all other cases! A (simplified) outline of this teaching strategy is given in chapter 2 of Introduction to Online Convex Optimization.

Classical Strong-Convexity and Smoothness Reductions:

Given any optimization algorithm A for the well-conditioned case (i.e., strongly convex and smooth case), we can derive an algorithm for smooth but not strongly convex functions as follows.

Given a non-strongly convex but smooth objective \(f\), define an objective by \(f_1(x)=f(x)+\epsilon\|x\|^2\).

It is straightforward to see that \(f_1\) differs from \(f\) by at most \(\epsilon\) times a distance factor, and in addition it is \(\epsilon\)-strongly convex. Thus, one can apply A to minimize \(f_1\) and get a solution which is not too far from the optimal solution for \(f\) itself. This simplistic reduction yields an almost optimal rate, up to logarithmic factors.
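To make the reduction concrete, here is a minimal numpy sketch under made-up assumptions (toy objective, arbitrary step size and \(\epsilon\)): regularise a smooth but non-strongly-convex objective, then run plain gradient descent, standing in for algorithm A, on the surrogate.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent, standing in for the well-conditioned algorithm A."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy smooth but non-strongly-convex objective: f(x) = (a.x - b)^2,
# which is flat along directions orthogonal to a.
a, b = np.array([1.0, 2.0]), 3.0
f_grad = lambda x: 2 * (a @ x - b) * a

eps = 1e-3  # regularisation strength: bigger = better conditioned, less accurate
f1_grad = lambda x: f_grad(x) + 2 * eps * x  # gradient of f1(x) = f(x) + eps*||x||^2

x_hat = gradient_descent(f1_grad, x0=np.zeros(2))  # near-minimiser of f itself
```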

Keywords: Complementary slackness theorem, high- or very-high-dimensional methods, approximate methods, Lagrange multipliers, primal and dual problems, fixed-point methods, gradient, subgradient, proximal gradient, optimal control problems, convexity, sparsity, ways to avoid wrecking your search for the extrema of perfectly simple little 10000-parameter functions before everyone observes that you are a fool in the guise of a mathematician but everyone is not there because you wandered off the optimal path hours ago, and now you are alone and lost in a valley of lower-case Greek letters.

See also geometry of fitness landscapes, expectation maximisation, matrix factorisations, discrete optimisation, nature-inspired “meta-heuristic” optimisation.

General

Brief intro material

Textbooks

Whole free textbooks online. Mostly convex.

To file

Alternating Direction Method of Multipliers

Dunno. It’s everywhere, though. Maybe that’s a problem of definition? (Boyd10)

In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn’s method of partial inverses, Dykstra’s alternating projections, Bregman iterative algorithms for \(\ell_1\) problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
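For a feel of the mechanics, here is a bare-bones numpy sketch of ADMM applied to the lasso, one of the applications the review lists. This is the textbook splitting, not code from the paper; the penalty parameter ρ and iteration count are arbitrary.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # ridge-like solve
        z = soft_threshold(x + u, lam / rho)               # prox of the l1 term
        u = u + x - z                                      # scaled dual update
    return z

A = np.random.randn(50, 20)
b = A @ np.random.randn(20) + 0.1 * np.random.randn(50)
x_hat = admm_lasso(A, b, lam=0.5)
```

Each sub-step is cheap and closed-form, which is the selling point: the quadratic solve and the soft-threshold can live on different machines, communicating only x, z and u.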

Continuous approximations of iterations

Recent papers (WiWJ16, WiWi15) argue that the discrete time steps can be viewed as a discrete approximation to a continuous-time ODE which approaches the optimum (which in itself is trivial), but moreover that many algorithms fit into the same families of ODEs, and that these ODEs explain Nesterov acceleration and generate new, improved optimisation methods (which is not trivial).
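For orientation, my gloss of that claim: vanilla gradient descent with vanishing step size discretises the gradient flow, while Nesterov’s accelerated method tracks a second-order ODE with vanishing damping (the latter form is due to Su, Boyd and Candès, which this line of work generalises):

\[\dot{X}(t) = -\nabla f(X(t)), \qquad \ddot{X}(t) + \frac{3}{t}\dot{X}(t) + \nabla f(X(t)) = 0.\]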

Continuous relaxations of parameters

Solving discrete problems with differentiable, continuous, surrogate parameters.
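A toy sketch of the trick, with everything (objective, relaxation, step size) invented for illustration: relax a binary selection vector \(s\in\{0,1\}^n\) to \(\sigma(\theta)\in(0,1)^n\), optimise \(\theta\) by gradient ascent, and round at the end.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy subset selection: maximise values.s - cost*sum(s) over s in {0,1}^n,
# relaxed to s = sigmoid(theta) in (0,1)^n so it is differentiable.
values, cost = np.array([3.0, 1.0, 2.5, 0.2]), 1.5
theta = np.zeros(4)
lr = 0.5
for _ in range(200):
    s = sigmoid(theta)
    theta += lr * (values - cost) * s * (1 - s)  # chain rule through the sigmoid
chosen = sigmoid(theta) > 0.5  # round the relaxed solution back to {0,1}
```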

Optimisation on manifolds

See also Nicolas Boumal’s introductory blog post:

Optimization on manifolds is about solving problems of the form

\[\mathrm{minimize}_{x\in\mathcal{M}} f(x),\]

where \(\mathcal{M}\) is a nice, known manifold. By “nice”, I mean a smooth, finite-dimensional Riemannian manifold.

Practical examples include the following (and all possible products of these):

Conceptually, the key point is to think of optimization on manifolds as unconstrained optimization: we do not think of \(\mathcal{M}\) as being embedded in a Euclidean space. Rather, we think of \(\mathcal{M}\) as being “the only thing that exists,” and we strive for intrinsic methods. Besides making for elegant theory, it also makes it clear how to handle abstract spaces numerically (such as the Grassmann manifold for example); and it gives algorithms the “right” invariances (computations do not depend on an arbitrarily chosen representation of the manifold).

There are at least two reasons why this class of problems is getting much attention lately. First, it is because optimization problems over the aforementioned sets (mostly matrix sets) come up pervasively in applications, and at some point it became clear that the intrinsic viewpoint leads to better algorithms, as compared to general-purpose constrained optimization methods (where \(\mathcal{M}\) is considered as being inside a Euclidean space \(\mathcal{E}\), and algorithms move in \(\mathcal{E}\), while penalizing distance to \(\mathcal{M}\)). The second is that, as I will argue momentarily, Riemannian manifolds are “the right setting” to talk about unconstrained optimization. And indeed, there is a beautiful book by [Absil, Sepulchre, Mahony], called Optimization algorithms on matrix manifolds (freely available), that shows how the classical methods for unconstrained optimization (gradient descent, Newton, trust-regions, conjugate gradients…) carry over seamlessly to the more general Riemannian framework.
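To make “intrinsic” concrete, here is a minimal sketch (mine, not Boumal’s; for real work use a library such as Manopt or Pymanopt) of Riemannian gradient descent on the unit sphere, minimising the Rayleigh quotient: project the Euclidean gradient onto the tangent space, step, then retract back onto the manifold.

```python
import numpy as np

def riemannian_gd_sphere(A, x0, lr=0.1, steps=1000):
    """Minimise x'Ax over the unit sphere by Riemannian gradient descent."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        egrad = 2 * A @ x                 # Euclidean gradient of x'Ax
        rgrad = egrad - (x @ egrad) * x   # project onto the tangent space at x
        x = x - lr * rgrad
        x = x / np.linalg.norm(x)         # retraction: pull back onto the sphere
    return x

A = np.random.randn(5, 5); A = (A + A.T) / 2
x = riemannian_gd_sphere(A, np.random.randn(5))
# x approximates an eigenvector for the smallest eigenvalue of A
```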

To file

Miscellaneous optimisation techniques suggested on LinkedIn

The whole world of exotic specialized optimisers. See, e.g., Nuit Blanche name-dropping Bregman iteration, alternating methods, augmented Lagrangians…

Fixed point methods

Contraction maps are nice when you have them. TBD.
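Meanwhile, a minimal sketch of the basic iteration (toy example; the contraction condition is on you to verify):

```python
import numpy as np

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- g(x); converges when g is a contraction (Banach)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x = cos(x); |cos'| < 1 near the fixed point, so the map contracts.
root = fixed_point(np.cos, 1.0)  # ~0.739085
```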

Primal/dual problems

Majorization-minimization

Difference-of-Convex-objectives

When your objective function is not convex but you can represent it as a difference of convex functions, somehow or other, use DC optimisation (GaRC09). (I don’t think this guarantees you a global optimum, but rather faster convergence to a local one.)
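For the record, my one-line summary of the standard DCA iteration (not GaRC09’s notation): write \(f = g - h\) with \(g, h\) convex, then repeatedly linearise the concave part and solve the resulting convex problem,

\[x_{k+1} \in \operatorname{arg\,min}_x \; g(x) - \left(h(x_k) + \langle \nabla h(x_k),\, x - x_k \rangle\right).\]

Each step decreases \(f\), which is where the local-convergence guarantee comes from.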

Gradient-free optimization

Not all the methods described here use gradient information, but gradients are frequently assumed to be something you can access cheaply. It’s worth considering for which objectives that assumption actually holds.

Not all objectives are easily differentiable, even when the parameters are continuous. If you are getting your measurements not from a mathematical model but from a physical experiment, you can’t differentiate them: reality itself is not usually analytically differentiable. In that case you are getting close to a question of online experiment design, as in ANOVA, with the further constraint that your function evaluations are possibly stupendously expensive. See Bayesian optimisation for one approach to this in the context of experiment design.

In situations like this we use gradient-free methods, such as simulated annealing, or numerical (finite-difference) gradient estimates, and so on.
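A sketch of the cheapest such workaround, central finite differences. Note it costs 2n extra function evaluations per gradient, which is exactly what you can’t afford when evaluations are expensive; hence the Bayesian optimisation pointer above.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference estimate of grad f at x; 2*len(x) evaluations of f."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda x: np.sum(x ** 2)
print(numerical_gradient(f, np.array([1.0, -2.0])))  # ~ [2, -4]
```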

“Meta-heuristic” methods

Biologically-inspired or arbitrary. Evolutionary algorithms, particle swarm optimisation, ant colony optimisation, harmony search. A lot of the tricks from these are adopted into mainstream stochastic methods. Some not.

See biomimetic algorithms for the care and husbandry of such.

Annealing and Monte Carlo optimisation methods

Simulated annealing: constructing a process to yield maximally-likely estimates for the parameters. This has a statistical-mechanics justification that makes it attractive to physicists, but it’s generally useful. You don’t necessarily need a gradient here, just the ability to evaluate something interpretable as a “likelihood ratio”. Long story. I don’t yet cover this at Monte Carlo methods, but I should.
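A minimal Metropolis-style sketch (cooling schedule, proposal scale and objective all arbitrary):

```python
import numpy as np

def simulated_annealing(f, x0, steps=10000, t0=1.0, scale=0.1, seed=0):
    """Annealing needs only evaluations of f, no gradient."""
    rng = np.random.default_rng(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        temp = t0 / (1 + k)  # a simple cooling schedule
        y = x + scale * rng.standard_normal(x.shape)
        fy = f(y)
        # Always accept downhill moves; accept uphill with prob exp(-delta/temp).
        if fy < fx or rng.random() < np.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best

f = lambda x: np.sum(x ** 2) + 2 * np.sin(5 * x).sum()  # a wiggly toy objective
x_hat = simulated_annealing(f, np.array([3.0]))
```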

Expectation maximization

See expectation maximisation.

Parallel

Classic, basic SGD takes walks through the data set example-wise or feature-wise, but this doesn’t work in parallel, so you tend to go for mini-batch gradient descent so that you can at least vectorize. Apparently you can make SGD work in “true” parallel across communication-constrained cores, but I don’t yet understand how.
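For reference, a minimal mini-batch SGD sketch for least squares (batch size and learning rate invented for illustration); the per-batch gradient is one vectorised matrix multiply, which is where the speed comes from:

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.01, batch=32, epochs=10, seed=0):
    """Least squares by mini-batch SGD; each step is one vectorised matmul."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(y))
        for start in range(0, len(y), batch):
            B = idx[start:start + batch]
            grad = X[B].T @ (X[B] @ w - y[B]) / len(B)  # mean batch gradient
            w -= lr * grad
    return w

X = np.random.randn(1000, 5)
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.01 * np.random.randn(1000)
w_hat = minibatch_sgd(X, y)
```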

Implementations

Specialised optimisation software.

See also statistical software, and online optimisation

Many of these solvers optionally use commercial backends such as Mosek.

Refs