Closely related is AutoML.

## Problem statement

According to Gilles Louppe and Manoj Kumar:

We are interested in solving

$$x^* = \arg \min_x f(x)$$

under the constraints that

- \(f\) is a black box for which no closed form is known (nor its gradients);
- \(f\) is expensive to evaluate;
- evaluations of \(y = f(x)\) may be noisy.

This is similar to the typical framing of reinforcement learning problems, which face a similar explore/exploit tradeoff, although I do not know the precise disciplinary boundaries that may transect these areas. Both might be thought of as stochastic optimal control problems.

The most common method seems to be “Bayesian optimisation”, which is based on Gaussian process regression. The Gaussian process is not a requirement, though; many possible wacky regression models can give you the optimisation surrogate.
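The loop can be sketched in a few dozen lines: fit a Gaussian process surrogate to the evaluations so far, maximise an acquisition function (expected improvement here) over candidates, evaluate the black box there, and repeat. Everything in this sketch is an illustrative assumption — the toy objective, the RBF kernel and its length-scale, the noise level, and the candidate grid — not a production implementation:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Stand-in for an expensive, noisy black box."""
    return np.sin(3 * x) + 0.1 * x**2 + 0.05 * rng.standard_normal(np.shape(x))

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=0.05**2):
    """GP posterior mean and stddev at test points Xs, via Cholesky."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimisation: E[max(best - Y, 0)] under the GP posterior."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sigma * phi

# Sequential loop: fit surrogate, maximise acquisition, evaluate, repeat.
X = np.array([-1.5, 0.0, 1.5])   # initial design
y = f(X)
grid = np.linspace(-2, 2, 401)   # candidate points for the acquisition search
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print("best x:", X[np.argmin(y)], "best observed f:", y.min())
```

In practice one would maximise the acquisition with a proper optimiser rather than a grid, and use a library such as scikit-optimize rather than hand-rolling the GP.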

This machinery is of renewed interest for hyperparameter/model selection, e.g. in regularising complex models, which is compactly referred to these days as AutoML.
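In the hyperparameter-selection setting, the black box is typically a validation score as a function of the hyperparameters. As a hedged illustration on synthetic data, here the objective is the held-out MSE of ridge regression as a function of \(\log_{10} \lambda\), and plain random search stands in for the surrogate-based optimiser (it is the baseline any surrogate method must beat); the data, split sizes, and search range are all arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sparse regression problem, split into train and validation sets.
n, d = 200, 30
X = rng.standard_normal((n, d))
w_true = np.concatenate([rng.standard_normal(5), np.zeros(d - 5)])
y = X @ w_true + 0.5 * rng.standard_normal(n)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def val_error(log_lam):
    """Black-box objective: validation MSE of ridge regression at one lambda."""
    lam = 10.0 ** log_lam
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
    return np.mean((Xva @ w - yva) ** 2)

# Random search over log10(lambda) as the simplest sequential strategy.
candidates = rng.uniform(-4, 4, size=30)
scores = np.array([val_error(c) for c in candidates])
best = candidates[np.argmin(scores)]
print(f"best log10(lambda): {best:.2f}, val MSE: {scores.min():.3f}")
```

Swapping the random proposals for a surrogate-guided acquisition turns this into sequential model-based optimisation of the hyperparameter.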

You could also obviously use it in industrial process control, which is where I originally saw this kind of thing, in the form of sequential ANOVA design — an incredible idea in itself, although it is now years old and so not nearly so hip. Since this is effectively an attempt at optimal, nonlinear, heteroskedastic, sequential ANOVA, I am led to wonder whether we can dispense with ANOVA now. Does this stuff actually work well enough? Or is it the same thing, repackaged?
