
Optimisation of model functions for experiment design

Bayesian and other surrogate noisy optimisation methods

Closely related is AutoML

Problem statement

According to Gilles Louppe and Manoj Kumar:

We are interested in solving

$$ x^* = \arg \min_x f(x) $$

under the constraints that

- $f$ is a black box for which no closed form is known (nor its gradients),
- $f$ is expensive to evaluate,
- evaluations $y = f(x)$ may be noisy.
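The generic strategy is sequential: fit a cheap surrogate model to all the evaluations so far, use it to pick the next point to evaluate by trading off exploration of uncertain regions against exploitation of promising ones, evaluate there, refit, repeat. A minimal sketch of that loop, assuming a scikit-learn Gaussian process surrogate and an expected-improvement acquisition function (common choices, but not the only ones):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp, y_best):
    # Expected improvement over the incumbent minimum at each candidate.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def surrogate_minimise(f, bounds, n_init=5, n_iter=25, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))  # initial random design (explore)
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)  # refit the surrogate to all evaluations so far
        # Crude acquisition maximisation by random search over candidates.
        cand = rng.uniform(lo, hi, size=(1000, 1))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])  # evaluate the most promising point (exploit)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Usage: minimise a toy quadratic over [-2, 2].
x_best, y_best = surrogate_minimise(lambda x: (x[0] - 1.0) ** 2, (-2.0, 2.0))
```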

This is similar to the typical framing of reinforcement learning problems, which feature the same explore/exploit tradeoff, although I do not know the precise disciplinary boundaries that may transect these areas. Both might be thought of as stochastic optimal control problems.

The most common method seems to be “Bayesian optimisation”, which is based on Gaussian process regression. However, this is not a requirement; many possible wacky regression models can give you the optimisation surrogate.
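Louppe and Kumar’s own scikit-optimize packages this loop up. A sketch using its `gp_minimize`, with the noisy toy objective from their tutorial:

```python
import numpy as np
from skopt import gp_minimize

def f(x):
    # Noisy black box: we can only evaluate it pointwise, no gradients.
    return (np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))
            + np.random.randn() * 0.1)

res = gp_minimize(
    f,                # the objective
    [(-2.0, 2.0)],    # box bounds on x
    acq_func="EI",    # expected-improvement acquisition
    n_calls=30,       # total budget of (expensive) evaluations
    noise=0.1 ** 2,   # observation noise variance, if known
    random_state=1234,
)
print(res.x, res.fun)  # best location and value found
```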

This is of renewed interest for its use in hyperparameter/model selection, e.g. in regularising complex models, which is compactly referred to these days as AutoML.
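For that use case, a sketch using scikit-optimize’s `BayesSearchCV`, a drop-in replacement for scikit-learn’s `GridSearchCV` that explores the hyperparameter space by surrogate optimisation rather than exhaustively; the dataset and estimator here are arbitrary stand-ins:

```python
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

opt = BayesSearchCV(
    SVC(),
    {   # log-uniform priors suit scale parameters like these
        "C": Real(1e-3, 1e3, prior="log-uniform"),
        "gamma": Real(1e-4, 1e1, prior="log-uniform"),
    },
    n_iter=32,  # number of (expensive) cross-validated fits
    cv=3,
    random_state=0,
)
opt.fit(X, y)
print(opt.best_params_, opt.best_score_)
```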

You could also obviously use it in industrial process control, which is where I originally saw this kind of thing, in the form of sequential ANOVA design, an incredible idea in itself, although it is now years old and so not nearly so hip. Since this is effectively an attempt at optimal, nonlinear, heteroskedastic, sequential ANOVA, I am led to wonder if we can dispense with ANOVA now. Does this stuff actually work well enough? Or is it the same thing, repackaged?

Refs