
Free energy

In Variational Bayes

A formalism for learning and inference via variational Bayes approximations, borrowing bits from statistical mechanics and leveraging message passing in graphical models.

Free energy comes in, AFAICT, two flavours, Bethe and Helmholtz. See your favourite variational Bayes text for more.
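The Helmholtz flavour is the one that shows up in variational Bayes, as the negative of what the machine learners call the evidence lower bound. Here is a minimal numerical sketch of that, for a toy conjugate-Gaussian model of my own choosing (nothing here is from Friston): the variational free energy upper-bounds the surprise \(-\ln p(s)\), and touches it when the recognition density equals the exact posterior.

```python
import numpy as np
from scipy.stats import norm

# A minimal sketch (my own toy assumptions, not Friston's setup):
#   prior       p(theta)     = N(0, 1)
#   likelihood  p(s | theta) = N(theta, 1)
# so the evidence p(s) and the posterior p(theta | s) have closed forms,
# and we can check that the variational (Helmholtz-style) free energy
#   F(q) = <ln q(theta)>_q - <ln p(s, theta)>_q
# upper-bounds the surprise -ln p(s), with equality when q is the posterior.

s = 1.3                                    # one "sensory" observation
prior = norm(0.0, 1.0)

def free_energy(q_mean, q_std, n_samples=200_000, seed=0):
    """Monte Carlo estimate of F for a Gaussian recognition density q."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(q_mean, q_std, n_samples)
    log_q = norm(q_mean, q_std).logpdf(theta)
    log_joint = prior.logpdf(theta) + norm(theta, 1.0).logpdf(s)
    return np.mean(log_q - log_joint)

surprise = -norm(0.0, np.sqrt(2.0)).logpdf(s)   # -ln p(s) for this toy model
post_mean, post_std = s / 2.0, np.sqrt(0.5)     # exact posterior N(s/2, 1/2)

print(f"surprise -ln p(s)       : {surprise:.4f}")
print(f"F at the exact posterior: {free_energy(post_mean, post_std):.4f}")  # ~= surprise
print(f"F at a sloppy q         : {free_energy(-2.0, 3.0):.4f}")            # > surprise
```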

Is this also some kind of universal learning scheme?

Not “free as in speech” or “free as in beer”, nor “free energy” in the sense of perpetual motion machines, zero point energy or pills that turn your water into petroleum.

My question is - how much more credible than these latter examples is the “free energy principle” as a unifying, uh, thing for learning systems?

The chief pusher of this wheelbarrow appears to be Karl Friston.

He opens his Nature Reviews Neuroscience piece with this statement of the principle:

The free-energy principle says that any self-organizing system that is at equilibrium with its environment must minimize its free energy.

Is that “must” meant in

  1. the sense of a moral obligation, or
  2. the sense of a testable conservation law of some kind?

If the latter, self-organising in what sense? What class of equilibrium? For which definition of the free energy? What is our chief experimental evidence for this hypothesis? Rather than a no-nonsense unpacking of these, the article goes on to meander through an ocean of other fashionable stuff (the Bayesian Brain Hypothesis, for one) which I have not yet trawled for salient details, so I don’t really know about that at this point.

Fortunately we do get a definition of free energy itself, with a diagram, which

…shows the dependencies among the quantities that define free energy. These include the internal states of the brain \(\mu(t)\) and quantities describing its exchange with the environment: sensory signals (and their motion) \(\bar{s}(t) = [s,s',s''\ldots ]^T\) plus action \(a(t)\). The environment is described by equations of motion, which specify the trajectory of its hidden states. The causes \(\vartheta \supset \{\bar{x}, \theta, \gamma\}\) of sensory input comprise hidden states \(\bar{x}(t)\), parameters \(\theta\), and precisions \(\gamma\) controlling the amplitude of the random fluctuations \(\bar{z}(t)\) and \(\bar{w}(t)\). Internal brain states and action minimize free energy \(F(\bar{s}, \mu)\), which is a function of sensory input and a probabilistic representation \(q(\vartheta|\mu)\) of its causes. This representation is called the recognition density and is encoded by internal states \(\mu\).

The free energy depends on two probability densities: the recognition density \(q(\vartheta|\mu)\) and one that generates sensory samples and their causes, \(p(\bar{s},\vartheta|m)\). The latter represents a probabilistic generative model (denoted by \(m\)), the form of which is entailed by the agent or brain…

\begin{equation*} F = -\left\langle \ln p(\bar{s},\vartheta|m) \right\rangle_q + \left\langle \ln q(\vartheta|\mu) \right\rangle_q \end{equation*}

This, on the other hand, seems to be option 1: any right-thinking brain, seeking to avoid the vice of slothful and decadent perception after the manner of heathens, foreigners, and compulsive masturbators, would do well to seek to minimise its free energy before partaking of a stimulating and refreshing physical recreation such as a game of cricket.

Presumably, though, if I drill deeper, this will be related back to the other definitions, the precise types of energy employed, the predictions to be made, and the experiments to be done.
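For what it’s worth, the standard variational-Bayes rearrangement of that expression (assuming the notation means what it usually means) does say something concrete: the free energy is the surprise \(-\ln p(\bar{s}|m)\) plus the KL divergence from the recognition density to the true posterior, so it upper-bounds the surprise and is minimised exactly when \(q\) matches the posterior.

\begin{align*}
F(\bar{s}, \mu) &= -\left\langle \ln p(\bar{s},\vartheta|m) \right\rangle_q + \left\langle \ln q(\vartheta|\mu) \right\rangle_q \\
&= -\ln p(\bar{s}|m) + \operatorname{KL}\left[\, q(\vartheta|\mu) \,\middle\|\, p(\vartheta|\bar{s},m) \,\right] \\
&\geq -\ln p(\bar{s}|m).
\end{align*}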

See also: Exergy, Landauer’s Principle, the Slate Star Codex Friston dogpile, based on a nice exposition by Wolfgang Schwarz.

To Read

  • Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127. DOI. Online.
  • Friston, K. (2013). Life as we know it. Journal of The Royal Society Interface, 10(86). DOI. Online.
  • Friston, K. (2010). Is the free-energy principle neurocentric? Nature Reviews Neuroscience, 11(8), 605. DOI. Online.
  • Yukalov, V. I., & Sornette, D. (2014). Self-organization in complex systems as decision making. Advances in Complex Systems, 17(03n04), 1450016. DOI. Online.