
Probability divergences

metrics, contrasts, divergences and other ways of quantifying how similar two randomnesses are

Quantifying the difference between probability measures; measuring distance between the distributions themselves, e.g. the badness of approximation of a statistical fit. The theory of binary experiments. You probably care about these because you want to test for independence, or do hypothesis testing or model selection, or density estimation, or to prove convergence for some random variable, or probability inequalities, or to model the distinguishability between the distributions of some process and of a generative model of it, as seen in adversarial training. That kind of thing. Frequently the distance here is between a measure and an empirical estimate thereof, but this is not a requirement.

A good choice of probability metric might give you a convenient distribution of a test statistic, an efficient loss function to target, simple convergence behaviour for some class of estimator, or simply a warm fuzzy glow.

“Distance” and “metric” both often imply symmetric functions obeying the triangle inequality, but on this page we have a broader church, and include premetrics: metric-like functions which still “go to zero when two things get similar” without satisfying the other axioms of distances. These are also called divergences. They are still useful for the aforementioned convergence results. I’ll say “true metric” or “true distance” when it matters. “Contrast” is probably a better term here, but it is less common.

tl;dr Don’t read my summary; read the epic Reid and Williamson paper, [ReWi11], which, in the quiet solitude of my own skull, I refer to as One regret to rule them all and in divergence bound them.

Wait, you are still here?

Norms with respect to Lebesgue measure on the state space

Well now, this is a fancy name, but it is probably the most familiar option to many, since it is just a plain old function-approximation metric applied to probability densities on the state space of the random variable.

The usual norms can be applied to densities. Most famously, the \(L_p\) norms (which I will call \(L_k\) norms here, since \(p\) is already taken for the density).

When \(k=2\), you get a convenient Hilbert space for free.

When written like this, the norm is taken between densities, i.e. Radon-Nikodym derivatives, not distributions. (Although see the Kolmogorov metric for an application of the \(L_\infty\) norm to cumulative distribution functions.)

A little more generally, consider some RV \(X\sim P\) taking values on \(\mathbb{R}\), with \(P\) absolutely continuous with respect to the Lebesgue measure \(\lambda\), and Radon-Nikodym derivative (a.k.a. density) \(p=dP/d\lambda\).

$$\begin{aligned} L_k(P,Q)&:= \left\|\frac{dP-dQ}{d\lambda}\right\|_k\\ &=\left[\int \left|\frac{dP-dQ}{d\lambda}\right|^k d\lambda\right]^{1/k}\\ &=\mathbb{E}_\lambda\left[\left|\frac{dP-dQ}{d\lambda}\right|^k \right]^{1/k} \end{aligned}$$
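Numerically, for densities on the real line this is just a quadrature problem. A minimal sketch, in which the two Gaussians, the integration limits and the use of scipy are all my own illustrative choices:

```python
# L_k distance between two densities w.r.t. Lebesgue measure, by quadrature.
# The densities here are arbitrary Gaussians, purely for illustration.
import numpy as np
from scipy import stats, integrate

p = stats.norm(loc=0.0, scale=1.0).pdf
q = stats.norm(loc=1.0, scale=2.0).pdf

def l_k_distance(p, q, k=2, lo=-20.0, hi=20.0):
    """|| p - q ||_k, approximated on the interval [lo, hi]."""
    val, _ = integrate.quad(lambda x: np.abs(p(x) - q(x)) ** k, lo, hi)
    return val ** (1.0 / k)

print(l_k_distance(p, q, k=2))
```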

Figure: \(L_2\) distance on probability densities

The \(L_2\) norm is a classic for kernel density estimates, because it allows you to use all the machinery of function approximation.

\(L_k,\, k\geq 1\) norms do obey the triangle inequality, and the \(L_2\) norm has lots of additional features, such as Wiener filtering formulations and Parseval’s identity.

There are the standard facts about \(L_k,\,k\geq 1\) norms of measurable functions with respect to a probability measure (i.e. expectations), e.g. domination

$$1 \leq k \leq j \Rightarrow \|f\|_k\leq\|f\|_j$$

Hölder’s inequality for probabilities

$$1/k + 1/j \leq 1 \Rightarrow \|fg\|_1\leq \|f\|_k\|g\|_j$$

and the Minkowski (i.e. triangle) inequality

$$\|x+y\|_k \leq \|x\|_k+\|y\|_k$$

However, the \(L_k\) norm on densities is an awkward choice of distance between probability measures.

If you transform the random variable by anything other than a linear transform, your distances transform in an arbitrary way. And we haven’t exploited the non-negativity of probability densities, so it might feel as if we are wasting some information: if our estimated density satisfies \(q(x)<0,\;\forall x\in A\) for some nonempty interval \(A\), then we know it’s plain wrong, since probability is never negative.

Also, such norms are not necessarily convenient. Exercise: given \(N\) i.i.d. samples drawn from \(X\sim P= \text{Norm}(\mu,\sigma)\), find a closed-form expression for estimates \((\hat{\mu}_N, \hat{\sigma}_N)\) such that the distance \(E_P\|p-\hat{p}\|_2\) is minimised.

Doing this directly is hard in closed form; but indirectly it can work: if we directly minimise a different distance, such as the KL divergence, we can squeeze the \(L_2\) distance. TODO: come back to this point.
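There is no tidy closed form, but setting up the direct minimisation numerically is easy enough. A sketch, in which the kernel density estimate, its bandwidth, the grid and the optimiser are all my own arbitrary choices:

```python
# Fit (mu, sigma) by minimising the L_2 distance between a candidate Gaussian
# density and a kernel density estimate of the sample. Purely illustrative.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
sample = rng.normal(loc=2.0, scale=1.5, size=500)
kde = stats.gaussian_kde(sample)          # default bandwidth

grid = np.linspace(sample.min() - 5.0, sample.max() + 5.0, 2001)
dx = grid[1] - grid[0]
kde_on_grid = kde(grid)

def squared_l2_to_kde(params):
    mu, log_sigma = params                # optimise log(sigma) to keep sigma > 0
    fitted = stats.norm(mu, np.exp(log_sigma)).pdf(grid)
    return np.sum((kde_on_grid - fitted) ** 2) * dx

result = optimize.minimize(squared_l2_to_kde, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)  # roughly (2.0, 1.5), biased a little by the KDE smoothing
```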

Finally, these feel like setting up an inappropriate problem to solve statistically, since an error is penalised equally everywhere in the state space. Why are errors penalised just as much where \(p\simeq 0\) as where \(p\gg 0\)? Surely there are cases where we care more, or less, about such regions? That leads to…

\(\phi\)-divergences

Why not call \(P\) close to \(Q\) if the closeness depends on the probability weighting at each point? Specifically, some divergence \(R\) like this, using a scalar function \(\psi\) and a pointwise loss \(\ell\):

$$R(P,Q):=\psi(E_Q(\ell(p(x), q(x))))$$

If we are going to measure divergence here, we also want the properties that \(P=Q\Rightarrow R(P,Q)=0\), and \(R(P,Q)\gt 0 \Rightarrow P\neq Q\). We can get these if we choose some increasing \(\psi\) with \(\psi(0)=0\), and an \(\ell(s,t)\) such that

$$ \begin{aligned} \begin{array}{rl} \ell(s,t) \geq 0 &\text{ for } s\neq t\\ \ell(s,t)=0 &\text{ for } s=t\\ \end{array} \end{aligned} $$

Let \(\psi\) be the identity function for now, and concentrate on the fiddly bit, \(\ell\). We try a form of function that exploits the non-negativity of densities and penalises the derivative of one distribution with respect to the other (i.e. the ratio of their densities):

$$\ell(s,t) := \phi(s/t)$$

If \(p(x)=q(x)\) then \(p(x)/q(x)=1\). So to get the right sort of penalty, we choose \(\phi\) to have a minimum where its argument is 1, with \(\phi(1)=0\) and \(\phi(t)\geq 0,\ \forall t\).

Figure: a typical \(\phi\) function

It turns out that it’s also wise to take \(\phi\) to be convex. (Exercise: why?) And note that, for these not to explode, we will now require \(P\) to be dominated by \(Q\), i.e. \(Q(A)=0\Rightarrow P(A)=0,\, \forall A \in\text{Borel}(\mathbb{R})\).

Putting this all together, we have a family of divergences

$$D_\phi(P,Q) := E_Q\phi\left(\frac{dP}{dQ}\right)$$

And BAM! These are the \(\phi\)-divergences. You get a different one for each choice of \(\phi\).

a.k.a. Csiszár-divergences, \(f\)-divergences or Ali-Silvey distances, after the people who noticed them. ([AlSi66], [Csis72])

These are in general mere premetrics. And note that they are no longer, in general, symmetric: we should not necessarily expect

$$D_\phi(Q,P) = E_P\phi\left(\frac{dQ}{dP}\right)$$

to be equal to

$$D_\phi(P,Q) = E_Q\phi\left(\frac{dP}{dQ}\right)$$

Anyway, back to concreteness, and recall our well-behaved continuous random variables; we can write, in this case,

$$D_\phi(P,Q) = \int_\mathbb{R}\phi\left(\frac{p(x)}{q(x)}\right)q(x)dx$$
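In this absolutely continuous case the whole family is, numerically, one quadrature routine with a pluggable \(\phi\). A minimal sketch; the particular \(\phi\)s anticipate the subsections below, while the Gaussians and integration limits are arbitrary choices of mine:

```python
# D_phi(P, Q) = E_Q[ phi(dP/dQ) ] for densities on the real line, by quadrature.
import numpy as np
from scipy import stats, integrate

def phi_divergence(phi, p, q, lo=-20.0, hi=20.0):
    """Approximate \int phi(p(x)/q(x)) q(x) dx; assumes q > 0 on [lo, hi]."""
    val, _ = integrate.quad(lambda x: phi(p(x) / q(x)) * q(x), lo, hi, limit=200)
    return val

phis = {
    "KL":              lambda t: t * np.log(t),
    "total variation": lambda t: np.abs(t - 1.0),
    "Hellinger^2":     lambda t: (np.sqrt(t) - 1.0) ** 2,
    "chi^2":           lambda t: (t - 1.0) ** 2,
}

p = stats.norm(0.0, 1.0).pdf
q = stats.norm(0.5, 1.5).pdf
for name, phi in phis.items():
    print(name, phi_divergence(phi, p, q))
```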

Let’s explore some \(\phi\)s.

Kullback-Leibler divergence

We take \(\phi(t)=t \ln t\), and write the corresponding divergence, \(D_\text{KL}=\operatorname{KL}\),

$$\begin{aligned} \operatorname{KL}(P,Q) &= E_Q\phi\left(\frac{p(x)}{q(x)}\right) \\ &= \int_\mathbb{R}\phi\left(\frac{p(x)}{q(x)}\right)q(x)dx \\ &= \int_\mathbb{R}\left(\frac{p(x)}{q(x)}\right)\ln \left(\frac{p(x)}{q(x)}\right) q(x)dx \\ &= \int_\mathbb{R} \ln \left(\frac{p(x)}{q(x)}\right) p(x)dx \end{aligned}$$

Indeed, so long as \(P\) is absolutely continuous with respect to \(Q\),

$$\operatorname{KL}(P,Q) = E_P\log \left(\frac{dP}{dQ}\right)$$
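As a sanity check of this formula, two univariate Gaussians have a well-known closed-form KL divergence to compare a quadrature estimate against; the particular parameters and integration limits below are arbitrary choices of mine.

```python
# KL(P, Q) by quadrature vs. the closed form for two univariate Gaussians:
# KL = log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2.
import numpy as np
from scipy import stats, integrate

m1, s1, m2, s2 = 0.0, 1.0, 1.0, 2.0
p = stats.norm(m1, s1).pdf
q = stats.norm(m2, s2).pdf

kl_quad, _ = integrate.quad(lambda x: p(x) * np.log(p(x) / q(x)), -20, 20)
kl_closed = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
print(kl_quad, kl_closed)  # should agree to several decimal places
```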

This is one of many, many possible derivations of the Kullback-Leibler divergence, a.k.a. KL divergence, or relative entropy. It pops up everywhere because of its information-theoretic significance, e.g. in information criteria.

TODO: revisit in maximum likelihood and variational inference settings, where we have good algorithms exploiting its nice properties.

Total variation distance

Take \(\phi(t)=|t-1|\). We write \(\delta(P,Q)\) for the divergence. I will use the set \(A:=\left\{x:\frac{dP}{dQ}\geq 1\right\}=\{x:dP\geq dQ\}.\)

$$\begin{aligned} \delta(P,Q) &= E_Q\left|\frac{dP}{dQ}-1\right| \\ &= \int_A \left(\frac{dP}{dQ}-1 \right)dQ - \int_{A^C} \left(\frac{dP}{dQ}-1 \right)dQ\\ &= \int_A \frac{dP}{dQ} dQ - \int_A 1 dQ - \int_{A^C} \frac{dP}{dQ}dQ + \int_{A^C} 1 dQ\\ &= \int_A dP - \int_A dQ - \int_{A^C} dP + \int_{A^C} dQ\\ &= P(A) - Q(A) - P(A^C) + Q(A^C)\\ &= 2[P(A) - Q(A)] \\ &= 2[Q(A^C) - P(A^C)] \\ \text{ i.e. } &= 2\left[P(\{dP\geq dQ\})-Q(\{dP\geq dQ\})\right] \end{aligned}$$

I have also used the standard fact that for any probability measure \(P\) and \(P\)-measurable set \(A\), it holds that \(P(A)=1-P(A^C)\).

Equivalently

$$\delta(P,Q) = 2\sup_{B} \left\{ |P(B) - Q(B)| \right\}$$

where the supremum is over measurable sets \(B\). (Many authors define the total variation distance as that supremum itself, i.e. as \(\delta/2\) in this normalisation.)

To see that \(A\) attains that supremum, note that for any set \(B\supseteq A\), writing \(B=A\cup Z\) for some \(Z\) disjoint from \(A\), it follows that \(|P(B) - Q(B)|\leq |P(A) - Q(A)|\), since on \(Z\) we have \(dP/dQ\leq 1\) by construction. (A symmetric argument handles sets that omit part of \(A\).)
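On a small finite state space we can brute-force the supremum over all subsets and check it against the \(\phi\)-divergence form; the pmfs below are arbitrary random examples.

```python
# Check delta(P, Q) = sum_i |p_i - q_i| = 2 sup_B |P(B) - Q(B)| by brute force.
from itertools import chain, combinations
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(6); p /= p.sum()   # an arbitrary pmf on 6 points
q = rng.random(6); q /= q.sum()

delta_phi_form = np.sum(np.abs(p - q))          # E_Q |dP/dQ - 1|

all_subsets = chain.from_iterable(combinations(range(6), r) for r in range(7))
delta_sup_form = 2 * max(abs(p[list(B)].sum() - q[list(B)].sum())
                         for B in all_subsets)

print(delta_phi_form, delta_sup_form)           # the two coincide
```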

It should be clear that this is symmetric.

Supposedly, [KhFG07] show that this is the only possible f-divergence which is also a true distance, but I can’t access that paper to see how.

TODO: prove that for myself. Is the representation of divergences as “simple” divergences helpful? See [ReWi09] (credited to Österreicher and Vajda).

TODO: talk about triangle inequalities.

Hellinger divergence

For this one, we write \(H^2(P,Q)\), and take \(\phi(t):=(\sqrt{t}-1)^2\). Step-by-step, that becomes

$$\begin{aligned} H^2(P,Q) &:=E_Q \left(\sqrt{\frac{dP}{dQ}}-1\right)^2 \\ &= \int \left(\sqrt{\frac{dP}{dQ}}-1\right)^2 dQ\\ &= \int \frac{dP}{dQ} dQ -2\int \sqrt{\frac{dP}{dQ}} dQ +\int dQ\\ &= \int dP -2\int \sqrt{\frac{dP}{dQ}} dQ +\int dQ\\ &= \int \sqrt{dP}^2 -2\int \sqrt{dP}\sqrt{dQ} +\int \sqrt{dQ}^2\\ &=\int (\sqrt{dP}-\sqrt{dQ})^2 \end{aligned}$$
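The first and last lines of that derivation can be checked against each other numerically; a minimal sketch with two arbitrary Gaussians of my own choosing:

```python
# H^2 as a phi-divergence vs. H^2 as the squared difference of root-densities.
import numpy as np
from scipy import stats, integrate

p = stats.norm(0.0, 1.0).pdf
q = stats.norm(1.0, 2.0).pdf

as_phi_divergence, _ = integrate.quad(
    lambda x: (np.sqrt(p(x) / q(x)) - 1.0) ** 2 * q(x), -20, 20)
as_root_difference, _ = integrate.quad(
    lambda x: (np.sqrt(p(x)) - np.sqrt(q(x))) ** 2, -20, 20)

print(as_phi_divergence, as_root_difference)  # should agree
```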

It turns out to be another symmetric \(\phi\)-divergence. The square root of the Hellinger divergence, \(H=\sqrt{H^2}\), is the Hellinger distance on the space of probability measures, which is a true distance. (Exercise: prove it.)

It doesn’t look intuitive, but has convenient properties for proving inequalities (simple relationships with other norms, triangle inequality) and magically good estimation properties ([Bera77]), e.g. in robust statistics.

TODO: make some of these “convenient properties” explicit.

\(\alpha\)-divergence

Closely related to (and often conflated with) the Rényi divergences. A sub-family of the \(f\)-divergences with a particular parameterisation, which includes KL, reverse KL and Hellinger as special or limiting cases.

We take \(\phi(t):=\frac{4}{1-\alpha^2} \left(1-t^{(1+\alpha )/2}\right).\)

This gets fiddly to write out in full generality, with various undefined or infinite integrals needing definitions in terms of limits, and it is supposed to be constructed in terms of the “Hellinger integral”…? I will ignore that for now and write out a simple enough version of the closely-related Rényi divergence. See [ErHa14] or [LiVa06] for the gory details.

$$D_\alpha(P,Q):=\frac{1}{\alpha-1}\log\int \left(\frac{p}{q}\right)^{\alpha-1}dP$$

\(\chi^2\) divergence

As made famous by count data significance tests.

For this one, we write \(\chi^2\), and take \(\phi(t):=(t-1)^2\). Then, by the same old process…

$$\begin{aligned} \chi^2(P,Q) &:=E_Q \left(\frac{dP}{dQ}-1\right)^2 \\ &= \int \left(\frac{dP}{dQ}-1\right)^2 dQ\\ &= \int \left(\frac{dP}{dQ}\right)^2 dQ - 2 \int \frac{dP}{dQ} dQ + \int dQ\\ &= \int \frac{dP}{dQ} dP - 1 \end{aligned}$$

Normally you see this for discrete data indexed by \(i\), in which case we may write

$$\begin{aligned} \chi^2(P,Q) &= \left(\sum_i \frac{p_i}{q_i} p_i\right) - 1\\ &= \sum_i\left( \frac{p_i^2}{q_i} - q_i\right)\\ &= \sum_i \frac{p_i^2-q_i^2}{q_i}\\ \end{aligned}$$

If you have constructed these discrete probability mass functions from \(N\) samples, say, \(p_i:=\frac{n^P_i}{N}\) and \(q_i:=\frac{n^Q_i}{N}\), this becomes

$$\chi^2(P,Q) = \sum_i \frac{(n^P_i)^2-(n^Q_i)^2}{Nn^Q_i}$$
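As a quick check, the pmf form and the count form above agree; a toy example with made-up counts, assuming (as above) that both sets of counts sum to the same \(N\):

```python
# chi^2 divergence from pmfs vs. directly from counts (toy numbers).
import numpy as np

n_p = np.array([30, 50, 20])          # counts under "P"
n_q = np.array([25, 45, 30])          # counts under "Q"; same total N assumed
N = n_p.sum()

p, q = n_p / N, n_q / N
chi2_from_pmfs = np.sum(p**2 / q) - 1.0
chi2_from_counts = np.sum((n_p**2 - n_q**2) / (N * n_q))
print(chi2_from_pmfs, chi2_from_counts)   # the two forms coincide
```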

This is probably familiar from some primordial statistics class.

The main use of this one is its ancient pedigree, (used by Pearson in 1900, according to Wikipedia) and its non-controversiality, so you include it in lists wherein you wish to mention you have a hipper alternative.

TBD: Reverse Pinsker inequalities (e.g. [BeHK12]), and covering numbers and other such horrors.

Hellinger inequalities

With respect to the total variation distance (beware that many references state these for the halved conventions \(\delta/2\) and \(H^2/2\); here I keep this page’s normalisations),

$$H^2(P,Q) \leq \delta(P,Q) \leq 2 H(P,Q)\,.$$
$$H^2(P,Q) \leq \operatorname{KL}(P,Q)$$

Additionally,

$$0\leq H^2(P,Q) \leq 2,$$

so the halved version \(H^2/2\) lies in \([0,1]\).

Pinsker inequalities

[BeHK12] attribute this to Csiszár (a 1967 article I could not find) and Kullback ([Kull70]) rather than to [Pins80] (which is in any case in Russian and I have not read it). In this page’s normalisation of \(\delta\),

$$\delta(P,Q) \leq \sqrt{2 D_{K L}(P\|Q)}$$

which, for the halved total variation \(\delta/2\), is the more commonly quoted \(\delta/2 \leq \sqrt{\tfrac{1}{2} D_{K L}(P\|Q)}\).
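These bounds, and the Hellinger ones above, are cheap to spot-check numerically on random discrete distributions, in this page's normalisations; a sketch:

```python
# Spot-check H^2 <= delta <= 2H, H^2 <= KL, delta <= sqrt(2 KL), chi^2 >= delta^2
# on randomly generated pmfs over a small finite space.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    p = rng.random(5); p /= p.sum()
    q = rng.random(5); q /= q.sum()

    delta = np.sum(np.abs(p - q))                 # total variation (unhalved)
    H2 = np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)   # squared Hellinger (unhalved)
    KL = np.sum(p * np.log(p / q))
    chi2 = np.sum(p**2 / q) - 1.0

    eps = 1e-12
    assert H2 <= delta + eps
    assert delta <= 2 * np.sqrt(H2) + eps
    assert H2 <= KL + eps
    assert delta <= np.sqrt(2 * KL) + eps
    assert chi2 + eps >= delta**2
print("all inequalities held on these samples")
```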

[ReWi09] derives the best-possible generalised Pinsker inequalities, in a certain sense of “best” and “generalised”, i.e. they are tight bounds, but not necessarily convenient.

Here are the three most useful of their inequalities (\(P,Q\) arguments omitted):

$$\begin{aligned} H^2 &\geq 2-\sqrt{4-\delta^2} \\ \chi^2 &\geq \mathbb{I}\{\delta\lt 1\}\delta^2+\mathbb{I}\{\delta\geq 1\}\frac{\delta}{2-\delta}\\ \operatorname{KL} &\geq \min_{\beta\in [\delta-2,2-\delta]}\left(\frac{\delta+2-\beta}{4}\right)\log\left(\frac{\beta-2-\delta}{\beta-2+\delta}\right) + \left(\frac{\beta+2-\delta}{4}\right)\log\left(\frac{\beta+2-\delta}{\beta+2+\delta}\right) \end{aligned}$$

Integral probability metrics

TBD. For now, see [SGSS07]. Weaponized in [GFTS08] as an independence test.

Included:

Analysed in RKHS distribution embeddings.

Wasserstein distance(s)

“Earthmover distance(s).”

TBC.

“Neural Net distance”

Wasserstein distance with a baked-in notion of the capacity of the discriminators which must measure the distance ([AGLM17]). Is this actually used? The name is suspiciously awful.

Others

“P-divergence”

Metrizes convergence in probability. Note this is defined upon random variables with an arbitrary joint distribution, not upon two distributions per se.

Lévy metric

This monster metrizes convergence in distribution (here \(P(x)\) and \(Q(x)\) denote cumulative distribution functions):

$$D_L(P,Q) := \inf\{\epsilon >0: P(x-\epsilon)-\epsilon \leq Q(x)\leq P(x+\epsilon)+\epsilon,\;\forall x\}$$

Kolmogorov metric

the \(L_\infty\) metric between the cumulative distributions (i.e. not between densities)

$$D_K(P,Q):= \sup_x \left\{ |P(x) - Q(x)| \right\}$$
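For two given cdfs this supremum can be approximated on a grid; against an empirical sample it is the familiar Kolmogorov-Smirnov statistic (e.g. scipy.stats.kstest). The Gaussians and the grid below are arbitrary illustrative choices of mine.

```python
# Kolmogorov metric between two cdfs, approximated on a grid.
import numpy as np
from scipy import stats

x = np.linspace(-10, 10, 10001)
F_P = stats.norm(0.0, 1.0).cdf
F_Q = stats.norm(1.0, 2.0).cdf
print(np.max(np.abs(F_P(x) - F_Q(x))))
```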

Skorokhod

Hmmm.

What even are the Kuiper and Prokhorov metrics?

To read

Refs