A big family of algorithms inspired by nature, which work by trying lots of shit, seeing what works and discarding what doesn’t. Basic, sturdy genetic algorithms, John Holland style. Genetic programming, Koza-style. Evolution strategies, à la Bienert, Rechenberg and Schwefel. Simple; accessible to amateurs. (Mind you, the same goes for neural networks these days, and they are better in most circumstances.) They can be made robust against noisy fitness functions. They can solve some tricky problems. And, compared to more specific algorithms, they are outrageously slow and profligate.
So I don’t use them practically, although they are interesting to think about as theoretical models for systems in the real world, such as evolution itself.
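To make “trying lots of shit, seeing what works and discarding what doesn’t” concrete, here is a minimal sketch of a Holland-style genetic algorithm: tournament selection, one-point crossover, bit-flip mutation, maximising the toy OneMax fitness (count the 1-bits). Everything here (names, parameter values) is illustrative, not drawn from any particular library.

```python
import random

def one_max(bits):
    """Toy fitness: number of 1-bits; the optimum is the all-ones string."""
    return sum(bits)

def evolve(fitness, n_bits=20, pop_size=40, generations=60,
           mutation_rate=0.02, seed=0):
    """Bare-bones GA: tournament selection, one-point crossover,
    bit-flip mutation. No elitism, no cleverness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        # Pick two at random, keep the fitter: crude selection pressure.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(one_max)
print(one_max(best))
```

Even this crude version reliably gets near the optimum on OneMax, which is the point: the machinery is trivially simple, and the cost is paid in fitness evaluations rather than in design effort.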
Things to understand
- Evolving “robustness”, and “modularity”, whatever those are.
- Generalising from an evolutionary to a market metaphor for fitness, or finding some nice combination of those two metaphors.
- When is an evolutionary algorithm the best you can do? Why?
- E. Vladislavleva’s publications on symbolic regression via genetic programming look tasty
- Appendix A in Cosma Shalizi’s paper “Dynamics of Bayesian Updating with Dependent Data and Misspecified Models”, Electronic Journal of Statistics, doi:10.1214/09-EJS485 (arXiv), maps Bayesian inference to the replicator equation (!)
- Brownlee’s *Clever Algorithms* book includes some examples of GP and other nature-inspired learning techniques.
- The extensive, free, Koza-endorsed Field Guide to Genetic Programming
- Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: MIT Press.
- Bill Tozier
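For reference, the replicator equation that Shalizi’s Appendix A connects to Bayesian updating is, in its standard continuous-time form,

```latex
\dot{x}_i = x_i \bigl( f_i(x) - \bar{f}(x) \bigr),
\qquad
\bar{f}(x) = \sum_j x_j\, f_j(x),
```

where $x_i$ is the population share of type $i$ and $f_i$ its fitness. Roughly, as I understand the mapping: hypotheses play the role of types, (log-)likelihood plays the role of fitness, and a hypothesis’ posterior weight grows in proportion to how much its likelihood exceeds the prior-weighted average.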