Matrix norms, divergences, metrics

I write the singular value decomposition of a matrix $\mathbf{B} \in \mathbb{C}^{m \times n}$

$$\mathbf{B} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^*$$

where we have unitary matrices $\mathbf{U}$, $\mathbf{V}$ and a matrix $\boldsymbol{\Sigma}$, with non-negative diagonals, of respective dimensions $m \times m$, $n \times n$ and $m \times n$.

The diagonal entries of $\boldsymbol{\Sigma}$, written $\sigma_i(\mathbf{B})$, are the singular values of $\mathbf{B}$.

For Hermitian matrices we may write an eigenvalue decomposition

$$\mathbf{B} = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^*$$

for unitary $\mathbf{Q}$ and diagonal matrix $\boldsymbol{\Lambda}$ with entries $\lambda_i(\mathbf{B})$, the eigenvalues.
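A quick numerical sanity check of both decompositions, as a minimal numpy sketch (the test matrix here is arbitrary, my choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))

# SVD: B = U @ Sigma @ Vh, with U, Vh unitary and Sigma rectangular diagonal.
U, s, Vh = np.linalg.svd(B)
Sigma = np.zeros(B.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(B, U @ Sigma @ Vh)

# Eigendecomposition of a Hermitian matrix: H = Q @ diag(lam) @ Q^*.
H = B.conj().T @ B  # B^* B is Hermitian (and positive semi-definite)
lam, Q = np.linalg.eigh(H)
assert np.allclose(H, Q @ np.diag(lam) @ Q.conj().T)
```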

Spectral norm

The norm induced on matrices by the vector $\ell_2$ norm,

$$\|\mathbf{B}\|_2 = \sup_{\mathbf{x} \neq \mathbf{0}} \frac{\|\mathbf{B}\mathbf{x}\|_2}{\|\mathbf{x}\|_2} = \sigma_{\max}(\mathbf{B}),$$

i.e. the largest singular value.
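A one-line numpy check of that identity (numpy's matrix 2-norm is exactly this induced norm):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 3))

# The induced 2-norm coincides with the top singular value.
assert np.isclose(np.linalg.norm(B, 2),
                  np.linalg.svd(B, compute_uv=False)[0])
```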

Frobenius norm

Coincides with the $\ell_2$ norm when the matrix happens to be a column vector.

We can define this in terms of the entries of $\mathbf{B}$:

$$\|\mathbf{B}\|_F = \sqrt{\sum_i \sum_j |b_{ij}|^2}$$

Equivalently, in terms of the trace (note $\mathbf{B}^* \mathbf{B}$ is square even when $\mathbf{B}$ is not),

$$\|\mathbf{B}\|_F = \sqrt{\operatorname{tr}\left(\mathbf{B}^* \mathbf{B}\right)}$$

If we have the SVD, we might instead use

$$\|\mathbf{B}\|_F = \sqrt{\sum_i \sigma_i^2(\mathbf{B})}$$
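A small numpy sketch checking that the three formulas agree (the variable names are mine, nothing assumed beyond numpy itself):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))

entrywise = np.sqrt((np.abs(B) ** 2).sum())           # sqrt of sum |b_ij|^2
via_trace = np.sqrt(np.trace(B.conj().T @ B).real)    # sqrt of tr(B^* B)
via_svd = np.sqrt((np.linalg.svd(B, compute_uv=False) ** 2).sum())

assert np.allclose([entrywise, via_trace, via_svd],
                   np.linalg.norm(B, 'fro'))
```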

Schatten norms

A family of norms on the singular values, incorporating the nuclear, Frobenius and spectral norms as special cases.

If the singular values of $\mathbf{B}$ are denoted by $\sigma_i$, then the Schatten $p$-norm is defined by

$$\|\mathbf{B}\|_p = \left(\sum_{i=1}^{\min\{m, n\}} \sigma_i^p(\mathbf{B})\right)^{1/p}$$

The most familiar cases are $p = 1, 2, \infty$. The case $p = 2$ yields the Frobenius norm, introduced before. The case $p = \infty$ yields the spectral norm, which is the matrix norm induced by the vector $\ell_2$ norm (see above). Finally, $p = 1$ yields the nuclear norm (also known as the trace norm)

$$\|\mathbf{B}\|_* = \operatorname{tr}\left(\sqrt{\mathbf{B}^* \mathbf{B}}\right) = \sum_i \sigma_i(\mathbf{B}).$$
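A direct implementation via the SVD (a sketch; `schatten_norm` is my name for the helper, not a library function):

```python
import numpy as np

def schatten_norm(B, p):
    """Schatten p-norm: the vector p-norm of the singular values."""
    s = np.linalg.svd(B, compute_uv=False)
    if np.isinf(p):
        return s.max()                  # p = inf: spectral norm
    return (s ** p).sum() ** (1 / p)

rng = np.random.default_rng(3)
B = rng.normal(size=(4, 3))

assert np.isclose(schatten_norm(B, 2), np.linalg.norm(B, 'fro'))    # Frobenius
assert np.isclose(schatten_norm(B, np.inf), np.linalg.norm(B, 2))   # spectral
assert np.isclose(schatten_norm(B, 1), np.linalg.norm(B, 'nuc'))    # nuclear
```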

Bregman divergence

TBD. Relation to exponential family and maximum likelihood.

Mark Reid: Meet the Bregman divergences:

If you have some abstract way of measuring the “distance” between any two points and, for any choice of distribution over points, the mean point minimises the average distance to all the others, then your distance measure must be a Bregman divergence.
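The definition, for a strictly convex differentiable generator $\phi$: $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y), x - y \rangle$. Below is a minimal numerical sketch of that mean-minimisation property, assuming the generator $\phi(x) = \|x\|^2$ (whose Bregman divergence is squared Euclidean distance); the helper names are mine:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# Generator phi(x) = ||x||^2; its Bregman divergence is ||x - y||^2.
phi = lambda x: x @ x
grad_phi = lambda y: 2 * y

rng = np.random.default_rng(4)
points = rng.normal(size=(100, 2))

def avg_divergence(y):
    return np.mean([bregman(phi, grad_phi, x, y) for x in points])

# The sample mean should beat any other candidate centre.
mean = points.mean(axis=0)
candidates = rng.normal(size=(50, 2))
assert all(avg_divergence(mean) <= avg_divergence(y) for y in candidates)
```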