
Mixture models

A method of semi-parametric density estimation.

This is also very close to clustering, and indeed there are lots of papers noting the connection.

We start from “classic”-flavoured Gaussian Mixture Models, pause for a rest stop at expectation maximisation, take a moment to muse on how we can unify this with kernel density estimation and the suggestive smoothing connection of functional regression, and terminate at adaptive mixture sieves, wondering momentarily whether orthogonally decomposable tensors have anything to add. But we are not done, because we have a knotty model selection problem.

To learn:

regularity conditions for ML asymptotics (and which ML results you can use); computational complexity.

Connections

Connections with kernel PCA (SKSB98), metric multidimensional scaling (Will01) and such are explored in kernel approximation.

Mixture zoo

The following categories are not mutually exclusive; in fact, I'm mentioning them all to work out exactly what the intersections are.

BaLi13, ZeMe97 and Geer96 discuss some useful results common to various convex combination estimators.

Dasg08 ch 33 is a high-speed no-filler all-killer summary of various convergence results and mixture types, including a connection to Donoho-Jin “Higher criticism” and nonparametric deconvolution and multiple testing (ch34).

Chee11 goes into dissertation-depth.

“Classic Mixtures”

Finite location-scale mixtures.

Your data are vectors $x_1, \dots, x_n \in \mathbb{R}^d$.

Your density looks like this:

$$f(x) = \sum_{k=1}^{K} w_k \phi(x; \theta_k), \qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1.$$

Traditionally, $\phi$ is given, and given as the normal density, but any “nice” unimodal density will do, and we can appeal to, e.g., ZeMe97 or LiBa00 to argue that we can get “close” to any density in large classes by choosing the Gaussian for big $K$.

Also traditionally, $K$ is given by magic.

Fitting the parameters of this model, then, involves choosing the weights $w_k$ and the component parameters $\theta_k = (\mu_k, \sigma_k)$.

Why would you bother? We know that this class is dense in the space of all densities under the total variation metric (ChLi09), which is a rationale if not a guarantee of its usefulness. Moreover, we know that it's computationally tractable.
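To make that concrete, here is a minimal numpy sketch of evaluating a finite location-scale mixture density of the form above; the parameters are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def mixture_pdf(x, weights, means, scales):
    """Evaluate f(x) = sum_k w_k * N(x; mu_k, sigma_k) at the points x."""
    x = np.asarray(x)[..., None]                        # broadcast over components
    components = norm.pdf(x, loc=means, scale=scales)   # shape (n_points, K)
    return components @ weights

# Made-up 3-component example
weights = np.array([0.5, 0.3, 0.2])
means = np.array([-2.0, 0.0, 3.0])
scales = np.array([0.5, 1.0, 2.0])

grid = np.linspace(-6.0, 9.0, 300)
density = mixture_pdf(grid, weights, means, scales)     # non-negative, integrates to ~1
```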

Radial basis functions

Finite mixtures by another name, from the function approximation literature, where the component density is assumed to have the radially symmetric form

$$\phi(x; \mu_k, \sigma_k) \propto g\!\left(\frac{\|x - \mu_k\|}{\sigma_k}\right).$$

Here $\sigma_k$ is a scale parameter for the density function. This corresponds to a spherical approximating function, rather than estimating a full multidimensional bandwidth.

Kernel density estimators

If you have as many mixture components as data points, you have a kernel density estimate. This is clearly also a finite mixture model, just a limiting case. To keep the number of parameters manageable you usually assume that the mixture components all share the same scale parameter, although this restriction can be relaxed.
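A minimal sketch of that limiting case, assuming a shared Gaussian kernel and a bandwidth chosen by eye:

```python
import numpy as np
from scipy.stats import norm

def kde(x, data, bandwidth):
    """A KDE is a mixture with one component per data point,
    equal weights 1/n, and a single shared scale (the bandwidth)."""
    x = np.asarray(x)[..., None]
    return norm.pdf(x, loc=data, scale=bandwidth).mean(axis=-1)

rng = np.random.default_rng(0)
data = rng.normal(size=100)
grid = np.linspace(-4.0, 4.0, 200)
density = kde(grid, data, bandwidth=0.4)   # bandwidth chosen by eye, not by any theory
```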

Normal Variance mixtures

Mixtures of normals where you vary only the scale parameters. I clearly don't know enough about this to write the entry; this is just a note to myself. TBD. z-distributions and generalized hyperbolic distributions are the keywords. They have various interesting connections with infinitely divisible distributions, and they include many other distributions as special cases.
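As far as I understand it, a normal variance mixture is a scale mixture of normals, $X = \sqrt{V}\,Z$ with $Z \sim N(0,1)$ and $V$ a positive mixing variable. A tiny sampling sketch; the Gamma mixing distribution is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Normal variance mixture: X = sqrt(V) * Z, with Z ~ N(0, 1) and V a positive mixing variable.
V = rng.gamma(shape=2.0, scale=1.0, size=10_000)   # arbitrary mixing distribution
Z = rng.normal(size=10_000)
X = np.sqrt(V) * Z                                 # heavier-tailed than a plain normal
```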

nonparametric mixtures

Noticing that a classic location mixture is a convolution between a continuous density and an atomic density, the question arises whether you can convolve two more general densities. Yes, you can. Estimate a nonparametric mixing density. Now you have a nonparametric estimation problem: something like, estimate the mixing distribution $G$ in $f(x) = \int \phi(x; \theta)\, \mathrm{d}G(\theta)$. See, e.g., Chee11, who didn't invent it but did collect a large literature on this into one place.
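A crude sketch of one common tactic (not specifically Chee11's, and assuming a known component scale): discretize the candidate mixing distribution onto a fixed grid of locations and estimate the weights by EM.

```python
import numpy as np
from scipy.stats import norm

def grid_mixing_weights(data, grid, scale=1.0, iters=200):
    """Discretize the mixing distribution onto `grid` and run EM over the weights only."""
    lik = norm.pdf(data[:, None], loc=grid, scale=scale)   # (n, m) component likelihoods
    w = np.full(len(grid), 1.0 / len(grid))
    for _ in range(iters):
        resp = lik * w
        resp /= resp.sum(axis=1, keepdims=True)            # E-step: responsibilities
        w = resp.mean(axis=0)                              # M-step: reweight grid atoms
    return w

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
grid = np.linspace(-5.0, 5.0, 41)
w_hat = grid_mixing_weights(data, grid)                    # discrete estimate of dG
```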

Bayesian Dirichlet mixtures

TBD; something about using a Dirichlet process for the… weights of the mixture components? Giving you a posterior distribution over countably infinite mixture parameters? Something like that?
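If I have that right, the weights come from a stick-breaking construction. A truncated sampling sketch of the prior over weights only, not a posterior inference routine:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated Dirichlet-process weights by stick breaking:
    beta_k ~ Beta(1, alpha); w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(3)
w = stick_breaking_weights(alpha=2.0, truncation=50, rng=rng)
# The weights decay quickly, so a finite truncation is usually a tolerable approximation.
```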

Non-affine mixtures

Do the mixture components have to have location/scale parameterisations? Not necessarily.

For, e.g., a tail estimate where we want to divine properties of heavy tails, this might not do what we want. Estimating the scale parameters is already a PITA; what can we say about more general shape parameters? This is probably not a theoretical issue, in that the asymptotic behaviour of the M-estimators of a Beta mixture doesn't change. Practically, however, the specific optimisation problem might get very hard. Mind you, it's notoriously not that easy even with location-scale parameters. Can we actually find our global maximum?

Convex neural networks

Maybe? What are these? See BRVD05 and let me know.

Matrix factorization approximations

Surely someone has done this, since it looks like an obvious idea at the intersection of kernel methods, matrix factorisation and matrix concentration inequalities. Maybe it got filed in the clustering literature.

Dasg99 probably counts, and MiVW16 introduces others, but most of these seem to address not the approximation problem but the clustering problem. Clustering doesn't fit perfectly with our purpose here; we don't necessarily care about assigning things correctly to clusters; rather, we want to approximate the overall density well.

The restricted-isometry-like property here seems to be that component centres may not coincide; can we avoid that?

See MiVW16 for some interesting connections at least:

The study of theoretical guarantees for learning mixtures of Gaussians started with the work of Dasgupta [Dasg99]. His work presented an algorithm based on random projections and showed this algorithm approximates the centers of Gaussians […] in separated by[…] the biggest singular value among all [covariance matrices]. After this work, several generalizations and improvements appeared. […] To date, techniques used for learning mixtures of Gaussians include expectation maximization [DaSc07], spectral methods [VeWa04, KuKa10, AwSh12], projections (random and deterministic) [Dasg99, MoVa10, ArKa01], and the method of moments [MoVa10].

Every existing performance guarantee exhibits one of the following forms:

Results of type (1), which include [VeWa04, KuKa10, AwSh12, AcMc07], require the minimum separation between the Gaussians centers to have a multiplicative factor of polylog N, where N is the number of points. This stems from a requirement that every point be closer to their Gaussian’s center (in some sense) than the other centers, so that the problem of cluster recovery is well-posed. Note that in the case of spherical Gaussians, the Gaussian components can be truncated to match the stochastic ball model in this regime, where the semidefinite program we present is already known to be tight with high probability [ABCK15, IMPV15]. Results of type (2) tend to be specifically tailored to exploit unique properties of the Gaussians, and as such are not easily generalizable to other data models. […] For instance, if , then . In high dimensions, since the entries of the Gaussians are independent, concentration of measure implies that most of the points will reside in a thin shell with center μ and radius about . This property allows algorithms to cluster even concentric Gaussians as long as the covariances are sufficiently different. However, algorithms that allow for no separation between the Gaussian centers require a sample complexity which is exponential in k [MoVa10].

Hmm.

Estimation methods

(local) maximum likelihood

A classic method; there are some subtleties here since the global maximum can be badly behaved; you have to mess around with local roots of the likelihood equation and thereby lose some of the lovely asymptotics of MLE methods in exponential families.

However, I am not sure which properties you lose. McRa14, for example, makes the sweeping assertion that the AIC conditions don't hold, but the BIC ones (whatever they are) do. BIC feels unsatisfying, however.
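For reference, a bare-bones sketch of the EM iteration for a 1-D Gaussian mixture, which converges to a local root of the likelihood equations rather than any global maximum; initialisation and the possibility of collapsing components are exactly where the trouble lives.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(data, k, iters=100, seed=0):
    """Plain EM for a 1-D Gaussian mixture; finds a local root of the likelihood equations."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    mu = rng.choice(data, size=k, replace=False)
    sigma = np.full(k, data.std())
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        r = w * norm.pdf(data[:, None], loc=mu, scale=sigma)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and scales from the weighted data
        nk = r.sum(axis=0)
        w = nk / len(data)
        mu = (r * data[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 300)])
w, mu, sigma = em_gmm_1d(data, k=2)
```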

Method of moments

Particularly popular in recent times for mixtures. Have not yet divined why.

Minimum distance

Minimise the distance between the empirical PDF and the estimated PDF in some metric. For reasons I have not yet digested, one is probably best to do this in the Hellinger metric if one wishes convenient convergence (Bera77), although kernel density estimates tend to prefer $L_2$, as with, e.g., regression smoothing problems. How on earth you numerically minimise Hellinger distance from data is something I won't think about for now, although I admit to being curious.
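One crude way to scratch that itch (not claiming this is Bera77's construction): proxy the empirical density with a KDE on a grid and minimise the squared Hellinger distance over the mixture parameters numerically. Of course, the KDE proxy just smuggles the bandwidth problem back in.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde, norm

def hellinger_sq_to_kde(params, grid, kde_vals):
    """Squared Hellinger distance between a 2-component normal mixture
    (logit-weight, two means, two log-scales) and a KDE evaluated on a grid."""
    w = 1.0 / (1.0 + np.exp(-params[0]))
    mu1, mu2 = params[1], params[2]
    s1, s2 = np.exp(params[3]), np.exp(params[4])
    f = w * norm.pdf(grid, mu1, s1) + (1.0 - w) * norm.pdf(grid, mu2, s2)
    dx = grid[1] - grid[0]
    return 0.5 * np.sum((np.sqrt(f) - np.sqrt(kde_vals)) ** 2) * dx

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
grid = np.linspace(-7.0, 8.0, 400)
kde_vals = gaussian_kde(data)(grid)

x0 = np.array([0.0, -1.0, 1.0, 0.0, 0.0])      # arbitrary starting point
res = minimize(hellinger_sq_to_kde, x0, args=(grid, kde_vals), method="Nelder-Mead")
```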

Regression smoothing formulation

Not quite mixture models: fit a log-quadratic regression spline with a smoothness penalty on the quadratic component, and the results are “nearly” Gaussian mixture models. See EiMa96.

Adaptive mixtures

Here's one lately-popular extension to the finite mixture model: choosing the number of mixture components adaptively, using some kind of model selection procedure, as per Prie94 with the “sieve”; MuYA94 uses an information criterion.

Sieve method

Argh! So many variants. What I would like for my mixture sieve…

Prie94, GeWa00, Battey

Akaike Information criterion

Use an Akaike-type information criterion

See BaRY98, BHLL08, AnKI08, MuYA94
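A sketch of the mechanics, using sklearn's Gaussian mixture implementation and its built-in AIC/BIC scores; whether AIC is even valid here is exactly the point in dispute below.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)]).reshape(-1, 1)

scores = {}
for k in range(1, 8):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    scores[k] = {"aic": gm.aic(data), "bic": gm.bic(data)}

k_aic = min(scores, key=lambda k: scores[k]["aic"])   # AIC's choice of K
k_bic = min(scores, key=lambda k: scores[k]["bic"])   # BIC's choice of K
```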

KoKi08 §6.1 is a compressed introductory summary of general regularized basis expansion in a regression setting, i.e. we approximate an arbitrary function. Density approximation is more constrained, since we know that our mixture must integrate to 1. Also we don't have a separate error term; rather, we assume our components completely summarise the randomness. Usually, although not always, we further require the components to be non-negative functions with non-negative weights, to give us specifically a convex combination of functions. Anyway, presumably we can extract a similar result from that?

McRa14 claims this doesn't work here, but the BIC/MDL approach does. I'm curious which regularity conditions are violated.

quantization and coding theory

The information theoretic cousin. Non-uniform quantisation in communication theory is when you optimally distribute the density of your quantisation symbols according to the density of the signal, in order to compress a signal while still reconstructing it as precisely as possible. This connection is most commonly raised in the multidimensional case, when it is “vector quantization”, or VQ to its friends. See, e.g., PaDi51, NaKi88, Gray84, GeGr12. This is then a coding theory problem.

From reading the literature it is not immediately apparent how, precisely, vector quantisation is related to mixture density estimation, although there is a family resemblance. In vector quantisation you do something like reduce the signal to a list of Voronoi cells and the coordinates of their centres, then code a signal to the nearest centre; squinting right makes this look like a mixture problem. LeSe01 and LeSe99 make this connection precise. Investigate.

Now, how do you choose this optimal code from measurements of the signal? THAT is the statistical question.
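One classical answer is Lloyd's algorithm, which is also the hard-assignment cousin of EM for a location mixture. A 1-D sketch (the folk version, not LeSe01's construction):

```python
import numpy as np

def lloyd_vq(signal, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-centre coding and centroid updates."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(signal, size=k, replace=False)
    for _ in range(iters):
        codes = np.abs(signal[:, None] - codebook).argmin(axis=1)   # code to nearest centre
        codebook = np.array([signal[codes == j].mean() if np.any(codes == j) else codebook[j]
                             for j in range(k)])                    # leave empty cells in place
    return codebook, codes

rng = np.random.default_rng(7)
signal = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(1, 1.0, 500)])
codebook, codes = lloyd_vq(signal, k=4)    # a 4-level non-uniform quantiser for this signal
```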

Minimum description length / BIC

Related to both of the previous approaches, in some way I do not yet understand.

Rissanen's Minimum Description Length, as applied to mixture density estimation? Putatively related to the information criteria method in the form of the Bayesian Information Criterion, which is purportedly an MDL measure. (Should look into that, eh?) Andrew Barron and co-workers seem to own the statistical MDL approach to mixture estimation. See Barr91, BaCo91, BaRY98, with literature reviews in BHLL08. BHLL08 constructs discretized mixture models as “two stage codes”, and achieves prediction risk bounds for finite samples using them.
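To fix ideas, a toy two-stage code length for a fitted normal mixture; this is only the generic MDL recipe, not BHLL08's actual discretization, and the bits-per-parameter figure is pulled out of the air.

```python
import numpy as np
from scipy.stats import norm

def two_stage_code_length(data, weights, means, scales, bits_per_param=16):
    """Toy two-stage description length in bits:
    L(data) = L(quantized parameters) + L(data | parameters),
    with the second stage as an idealised log-loss code length."""
    n_params = (len(weights) - 1) + len(means) + len(scales)   # free parameters
    param_bits = n_params * bits_per_param
    f = (weights * norm.pdf(data[:, None], loc=means, scale=scales)).sum(axis=1)
    data_bits = -np.sum(np.log2(f))   # ignores the discretization of continuous data
    return param_bits + data_bits

rng = np.random.default_rng(8)
data = rng.normal(0.0, 1.0, 500)
bits = two_stage_code_length(data, np.array([1.0]), np.array([0.0]), np.array([1.0]))
```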

Unsatisfactory thing #1: Model selection

Grrr. See model selection in mixtures.

Unsatisfactory thing #2: scale parameter selection theory

All the really good results take the scale parameter as given.

What if, as in the original GMM, we are happy to have our mixture components' parameters vary? This is fine, as far as it goes, but scale parameter selection is typically the reason we are bothering with this class of model; otherwise this is simply a weird convex deconvolution problem, which is not so interesting. In particular, how do we handle scale parameter selection within model selection?

ShSh10, BeRV16 make a start.

Unsatisfactory thing #3: approximation loss

There's a lot of theory about how well we can learn things about the centres of clusters, but less theory about how well we can approximate an overall density. In particular, identifying the centres and scales is subject to the usual ML results on identifiability and asymptotic estimator distribution, but if those are all just nuisance parameters, what do you have left?

Miscellany

http://blog.sigfpe.com/2016/10/expectation-maximization-with-less.html

Refs
