The Living Thing / Notebooks : Mixture models

A method of semi-parametric density estimation.

This is also very close to clustering, and indeed there are lots of papers noting the connection.

We start from “classic”-flavoured Gaussian mixture models, pause for a rest stop at expectation maximisation, take a moment to muse on how to unify this with kernel density estimation and the suggestive smoothing connection of functional regression, and terminate at adaptive mixture sieves, wondering momentarily whether orthogonally decomposable tensors have anything to add. But we are not done, because we have a knotty model selection problem.

To learn:

regularity conditions for ML asymptotics, which ML results you can actually use, and computational complexity.

Connections

Connections with kernel PCA (SKSB98), metric multidimensional scaling (Will01), and such are explored under kernel approximation.

Mixture zoo

The following categories are not mutually exclusive; in fact, I’m mentioning them all to work out what exactly the intersections are.

BaLi13, ZeMe97 and Geer96 discuss some useful results common to various convex combination estimators.

Dasg08 ch. 33 is a high-speed, no-filler, all-killer summary of various convergence results and mixture types, including a connection to Donoho-Jin “higher criticism”, nonparametric deconvolution and multiple testing (ch. 34).

Chee11 goes into dissertation-depth.

“Classic” mixtures

Finite location-scale mixtures.

Your data are vectors \(x_j\in \mathbb{R}^d\).

Your density looks like this:

\begin{equation*} f(x_j) = w_0 + \sum_{i=1}^{m}w_i\,\phi\left((x_j-\mu_i)^T\sigma_i^{-1}(x_j-\mu_i)\right)/|\sigma_i| \end{equation*}

Traditionally, \(\phi\) is given, and given as the normal density, but any “nice” unimodal density will do; we can appeal to, e.g., ZeMe97 or LiBa00 to argue that with Gaussian \(\phi\) and large enough \(m\) we can get “close” to any density in large classes.

Also traditionally, \(m\) is given by magic.

Fitting the parameters of this model, then, involves choosing \((\mathbf{w},\boldsymbol{\mu},\boldsymbol{\sigma})\), with \(\mathbf{w}\in\mathbb{R}^{m+1}\), \(\boldsymbol{\mu}\in\mathbb{R}^{m\times d}\), \(\boldsymbol{\sigma}\in\mathbb{R}^{m\times d\times d}\).

Why would you bother? We know that this class is dense in the space of all densities under the total variation metric (ChLi09), which is a rationale if not a guarantee of its usefulness. Moreover, we know that it’s computationally tractable.
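Here is a minimal sketch of fitting such a finite Gaussian mixture, assuming scikit-learn’s GaussianMixture (which runs EM under the hood) and leaving the component count \(m\) “given by magic”; the toy data are my own.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated Gaussian blobs in R^2.
X = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[+2.0, 1.0], scale=1.0, size=(300, 2)),
])

m = 2  # number of components, "given by magic" for now
gmm = GaussianMixture(n_components=m, covariance_type="full").fit(X)

print(gmm.weights_)              # estimated w_i
print(gmm.means_)                # estimated mu_i
print(gmm.covariances_)          # estimated sigma_i
print(gmm.score_samples(X[:5]))  # log-density log f(x_j) at a few points
```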

  • Dustin Mixon gives a beautiful explanation of his paper, and a few others in the field, in The Voronoi Means Conjecture:

    As I mentioned in a previous blog post, I have a recent paper with Soledad Villar and Rachel Ward that analyzes an SDP relaxation of the k-means problem. It turns out that the solution to the relaxation can be interpreted as a denoised version of the original dataset, sending many of the data points very close to what appear to be k-means-optimal centroids. This is similar in spirit to Dasgupta’s random projection, and we use a similar rounding scheme to estimate the optimal k-means-centroids. Using these centroid estimates as Gaussian center estimates, we are able to prove performance bounds of the form \(\mathrm{MSE}\lesssim k^2\sigma^2\) when \(\mathrm{SNR}\gtrsim k^2\), meaning the performance doesn’t depend on the dimension, but rather the model order.

    But does it make sense that the performance should even depend on the model order?

  • Also reading: Geer96, which leverages the convex combination issue to get bounds relating KL and Hellinger convergence of density estimators (including kernel density estimators)

Radial basis functions

Finite mixtures by another name, from the function approximation literature, where the component density is assumed to have the form

\begin{equation*} f(x_j) = w_0 + \sum_{i=1}^{m}w_i\,\phi(h_i\|x_j-\mu_i\|)/h_i \end{equation*}

Here \(h_i\) is a scale parameter for each component density. This corresponds to a spherical approximating function, rather than estimating a full multidimensional bandwidth matrix.
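A minimal sketch of evaluating such a spherical mixture; I normalise each bump as a proper isotropic Gaussian density in \(\mathbb{R}^d\) rather than the bare \(\phi(h_i\|x-\mu_i\|)/h_i\) form above, and all the names below are made up for illustration.

```python
import numpy as np

def rbf_density(x, weights, centres, scales):
    """Evaluate a spherical Gaussian RBF mixture at points x.

    x:       (n, d) query points
    weights: (m,)   mixture weights w_i (assumed to sum to 1)
    centres: (m, d) component centres mu_i
    scales:  (m,)   per-component scale h_i
    """
    d = x.shape[1]
    # Squared distances between every query point and every centre: (n, m)
    sq = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    # Isotropic Gaussian bump for each component, normalised in R^d.
    norm = (2 * np.pi * scales ** 2) ** (d / 2)
    return (weights * np.exp(-0.5 * sq / scales ** 2) / norm).sum(axis=1)

centres = np.array([[0.0, 0.0], [3.0, 1.0]])
w = np.array([0.6, 0.4])
h = np.array([1.0, 0.5])
print(rbf_density(np.array([[0.0, 0.0], [3.0, 1.0]]), w, centres, h))
```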

Kernel density estimators

If you have as many mixture components as data points, you have a kernel density estimate. This is clearly also a finite mixture model, just a limiting case. To keep the number of parameters manageable you usually assume that the mixture components all share the same scale parameter, although variable-bandwidth variants relax this.
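For instance, scipy’s gaussian_kde is literally this limiting case: one Gaussian component per data point, all sharing a single bandwidth. A minimal sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = rng.standard_normal((2, 500))  # scipy expects shape (d, n)

# One Gaussian component per data point, all sharing a single bandwidth
# chosen by Scott's rule by default.
kde = gaussian_kde(data)
print(kde.factor)             # the shared bandwidth (scale) factor
print(kde(np.zeros((2, 1))))  # estimated density at the origin
```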

Normal variance mixtures

Mixtures of normals where you vary only the scale parameters. I clearly don’t know enough about this to write the entry; this is just a note to myself. TBD. z-distributions and generalized hyperbolic distributions are the keywords. They have various interesting properties relating to infinitely divisible distributions, and they include many other distributions as special cases.

Nonparametric mixtures

Noticing that a classic location mixture is a convolution of a continuous density with an atomic density, the question arises whether you can convolve two more general densities. Yes, you can: estimate a nonparametric mixing density. Now you have a nonparametric estimation problem, something like: estimate \(dP(\mu,\sigma)\). See, e.g., Chee11, who didn’t invent the idea but did collect a large literature on it into one place.
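One way to make the mixing-density estimation concrete (not necessarily the scheme Chee11 favours; the grid, kernel and data below are my own choices): pin the scale, lay a fine grid of candidate location parameters, and run EM for the weights over that grid, which is a convex maximum-likelihood problem.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Data from a smooth mixing density convolved with a N(0, 0.5^2) kernel.
x = rng.normal(loc=rng.uniform(-2, 2, size=1000), scale=0.5)

# Candidate support points for the mixing distribution dP(mu); sigma is fixed.
grid = np.linspace(-4, 4, 81)
sigma = 0.5
L = norm.pdf(x[:, None], loc=grid[None, :], scale=sigma)  # (n, k) likelihoods

# EM for the mixing weights over the grid.
w = np.full(grid.size, 1.0 / grid.size)
for _ in range(200):
    resp = L * w
    resp /= resp.sum(axis=1, keepdims=True)  # E-step: responsibilities
    w = resp.mean(axis=0)                    # M-step: reweight support points

print(grid[w > 1e-3])  # support points that carry appreciable mass
print(w.sum())         # should be 1
```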

Bayesian Dirichlet mixtures

TBD. Something about using a Dirichlet process prior for the weights of the mixture components, giving you a posterior distribution over countably infinite mixture parameters? Something like that?

Non-affine mixtures

Do the mixture components have to have location/scale parameterisations? Not necessarily.

For, e.g., a tail estimate where we want to divine properties of heavy tails, this might not do what we want. Estimating the scale parameters is already a PITA; what can we say about more general shape parameters? This is probably not a theoretical issue, in that the asymptotic behaviour of the M-estimators of, say, a Beta mixture doesn’t change. Practically, however, the specific optimisation problem might get very hard. Mind you, it’s notoriously not that easy even with location-scale parameters: can we actually find our global maximum?

Convex neural networks

Maybe? What are these? See BRVD05 and let me know.

Matrix factorization approximations

Surely someone has done this, since it looks like an obvious idea at the intersection of kernel methods, matrix factorisation and matrix concentration inequalities. Maybe it got filed in the clustering literature.

Dasg99 probably counts, and MiVW16 introduces others, but most of these seem to address not the approximation problem but the clustering problem. Clustering doesn’t fit perfectly with our purpose here; we don’t necessarily care about assigning points correctly to clusters; rather, we want to approximate the overall density well.

The restricted-isometry-like property here seems to be that component centres may not coincide; can we avoid that?

See MiVW16 for some interesting connections at least:

The study of theoretical guarantees for learning mixtures of Gaussians started with the work of Dasgupta [Dasg99]. His work presented an algorithm based on random projections and showed this algorithm approximates the centers of Gaussians […] in \(R^m\) separated by[…] the biggest singular value among all [covariance matrices]. After this work, several generalizations and improvements appeared. […] To date, techniques used for learning mixtures of Gaussians include expectation maximization [DaSc07], spectral methods [VeWa04, KuKa10, AwSh12], projections (random and deterministic) [Dasg99, MoVa10, ArKa01], and the method of moments [MoVa10].

Every existing performance guarantee exhibits one of the following forms:

  1. the algorithm correctly clusters all points according to Gaussian mixture component, or
  2. the algorithm well-approximates the center of each Gaussian (a la Dasgupta [Dasg99]).

Results of type (1), which include [VeWa04, KuKa10, AwSh12, AcMc07], require the minimum separation between the Gaussians centers to have a multiplicative factor of polylog N, where N is the number of points. This stems from a requirement that every point be closer to their Gaussian’s center (in some sense) than the other centers, so that the problem of cluster recovery is well-posed. Note that in the case of spherical Gaussians, the Gaussian components can be truncated to match the stochastic ball model in this regime, where the semidefinite program we present is already known to be tight with high probability [ABCK15, IMPV15]. Results of type (2) tend to be specifically tailored to exploit unique properties of the Gaussians, and as such are not easily generalizable to other data models. […] For instance, if \(x \sim N(\mu, \sigma^2 I_m)\), then \(E(\|x - \mu\|^2) = m\sigma^2\). In high dimensions, since the entries of the Gaussians are independent, concentration of measure implies that most of the points will reside in a thin shell with center \(\mu\) and radius about \(\sqrt{m}\sigma\). This property allows algorithms to cluster even concentric Gaussians as long as the covariances are sufficiently different. However, algorithms that allow for no separation between the Gaussian centers require a sample complexity which is exponential in k [MoVa10].

Hmm.

Estimation methods

(local) maximum likelihood

A classic method. There are some subtleties here, since the global maximum can be badly behaved; you have to mess around with local roots of the likelihood equation and thereby lose some of the lovely asymptotics of MLE methods in exponential families.

However, I am not sure exactly which properties you lose. McRa14, for example, makes the sweeping assertion that the AIC conditions don’t hold but that the BIC ones (whatever they are) do. BIC “feels” unsatisfying, however.
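To make the “local root of the likelihood equation” point concrete, here is a bare-bones EM sketch for a 1-d Gaussian mixture (my own toy implementation, not anyone’s reference algorithm). The variance floor is there because the unconstrained likelihood is unbounded when a component collapses onto a single data point, which is one of the pathologies that forces us to settle for local maxima.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, m, n_iter=100, var_floor=1e-6, seed=0):
    """Plain EM for a 1-d Gaussian mixture; converges to a *local* maximum
    of the likelihood. The variance floor guards against the unbounded
    likelihood you get when a component collapses onto one data point."""
    rng = np.random.default_rng(seed)
    w = np.full(m, 1.0 / m)
    mu = rng.choice(x, size=m, replace=False)
    var = np.full(m, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = w * norm.pdf(x[:, None], loc=mu, scale=np.sqrt(var))  # (n, m)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, var_floor)
    return w, mu, var

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 1.0, 600)])
print(em_gmm_1d(x, m=2))
```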

Method of moments

Particularly popular in recent times for mixtures. Have not yet divined why.
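A hedged illustration of the basic idea only, nothing like the modern spectral/tensor machinery of, e.g., AnHK12 or HsKa13: for a 1-d two-component mixture with a known common scale, equate the first three empirical non-central moments to their model counterparts and solve numerically. The toy data and the least-squares root-finding below are my own choices.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
sigma = 1.0  # common, known scale, to keep the sketch small
x = np.concatenate([rng.normal(-2, sigma, 300), rng.normal(1, sigma, 700)])

def normal_moments(mu, s):
    # First three non-central moments of N(mu, s^2).
    return np.array([mu, mu**2 + s**2, mu**3 + 3 * mu * s**2])

def moment_gap(theta):
    w, mu1, mu2 = theta
    model = w * normal_moments(mu1, sigma) + (1 - w) * normal_moments(mu2, sigma)
    empirical = np.array([np.mean(x**k) for k in (1, 2, 3)])
    return model - empirical

fit = least_squares(moment_gap, x0=[0.5, -1.0, 2.0],
                    bounds=([0, -10, -10], [1, 10, 10]))
print(fit.x)  # estimated (w, mu1, mu2)
```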

Minimum distance

Minimise the distance between the empirical (nonparametric) density estimate and the model density in some metric. For reasons I have not yet digested, one is probably best off doing this in the Hellinger metric if one wishes for convenient convergence (Bera77), although kernel density estimates tend to prefer \(L_2\), as with, e.g., regression smoothing problems. How on earth you numerically minimise Hellinger distance from data is something I won’t think about for now, although I admit to being curious.
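One crude numerical answer, in the spirit of Bera77’s use of a kernel-smoothed target: fit a pilot KDE, then minimise a grid-quadrature approximation of the Hellinger distance to the parametric mixture. A rough sketch, assuming a gaussian_kde pilot and generic box-constrained optimisation; the grid and starting values are my own.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 1.0, 600)])

# Pilot nonparametric estimate and a quadrature grid.
kde = gaussian_kde(x)
grid = np.linspace(x.min() - 3, x.max() + 3, 1000)
dz = grid[1] - grid[0]
f_hat = kde(grid)

def mixture_pdf(grid, theta):
    w, mu1, mu2, s1, s2 = theta
    return w * norm.pdf(grid, mu1, s1) + (1 - w) * norm.pdf(grid, mu2, s2)

def hellinger_sq(theta):
    g = mixture_pdf(grid, theta)
    # Squared Hellinger distance, approximated by grid quadrature.
    return 0.5 * np.sum((np.sqrt(f_hat) - np.sqrt(g)) ** 2) * dz

fit = minimize(
    hellinger_sq,
    x0=[0.5, -1.0, 1.0, 1.0, 1.0],
    bounds=[(0.01, 0.99), (-10, 10), (-10, 10), (0.05, 10), (0.05, 10)],
)
print(fit.x)  # (w, mu1, mu2, s1, s2)
```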

Regression smoothing formulation

Not quite mixture models: if you fit a log-quadratic regression spline with a smoothness penalty on the quadratic component, the results are “nearly” Gaussian mixture models. See EiMa96.

Adaptive mixtures

Here’s one lately-popular extension to the finite mixture model: choosing the number of mixture components adaptively, using some kind of model selection procedure, as per Prie94 with the “sieve”; MuYA94 uses an information criterion.

Sieve method

Argh! So many variants. What I would like for my mixture sieve…

Prie94, GeWa00, and Battey’s mixture-sieve work (BaLi14).

Akaike Information criterion

Use an Akaike-type information criterion

See BaRY98, BHLL08, AnKI08, MuYA94

KoKi08 §6.1 is a compressed summary introduction to general regularised basis expansion in a regression setting, i.e. approximating an arbitrary function. Density approximation is more constrained, since we know that our mixture must integrate to 1. Also, we don’t have a separate error term; rather, we assume our components completely summarise the randomness. Usually, although not always, we further require the components to be non-negative functions with non-negative weights, giving us specifically a convex combination of functions. Anyway, presumably we can extract a similar result from that?

McRa14 claims this doesn’t work here, but that the BIC/MDL approach does. I’m curious which regularity conditions are violated.
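Mechanically, whatever the theory says, scoring a range of \(m\) by AIC and BIC is easy; a sketch with scikit-learn on toy data of my own, keeping McRa14’s warning about the AIC regularity conditions in mind:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
X = np.concatenate([
    rng.normal(-2, 0.5, 300), rng.normal(0, 1.0, 400), rng.normal(3, 0.7, 300),
]).reshape(-1, 1)

for m in range(1, 8):
    gmm = GaussianMixture(n_components=m, n_init=5, random_state=0).fit(X)
    print(m, round(gmm.aic(X), 1), round(gmm.bic(X), 1))
# Pick the m that minimises the criterion of your choice, bearing in mind
# McRa14's warning that the AIC regularity conditions fail here.
```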

Quantization and coding theory

The information-theoretic cousin. Non-uniform quantisation in communication theory is when you optimally distribute the density of your quantisation symbols according to the density of the signal, in order to compress a signal while still reconstructing it as precisely as possible. This connection is most commonly raised in the multidimensional case, where it is “vector quantisation”, or VQ to its friends. See, e.g., PaDi51, NaKi88, Gray84, GeGr12. This is then a coding theory problem.

From reading the literature it is not immediately apparent how, precisely, vector quantisation is related to mixture density estimation, although there is a family resemblance. In vector quantisation you do something like reduce the signal to a list of Voronoi cells and the coordinates of their centres, then code each signal vector to the nearest centre; squinting right makes this look like a mixture problem. LeSe01 and LeSe99 make this connection precise. Investigate.

Now, how do you choose this optimal code from measurements of the signal? THAT is the statistical question.
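The classical recipe is Lloyd’s algorithm / LBG (Lloy82, LiBG80), i.e. k-means on a training sample: the centroids form the codebook, the Voronoi cells are the decision regions, and encoding maps each vector to the index of its nearest codeword. A minimal sketch, using scikit-learn’s KMeans as a stand-in:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
signal = rng.standard_normal((5000, 2))  # training "signal" vectors
codebook_size = 16                       # 4 bits per vector

vq = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(signal)
codebook = vq.cluster_centers_           # the reproduction values

# Encode: each vector becomes the index of its nearest codeword (Voronoi cell).
codes = vq.predict(rng.standard_normal((10, 2)))
# Decode: look the codeword back up; the gap is the quantisation distortion.
reconstruction = codebook[codes]
print(codes)
print(reconstruction[:3])
```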

Minimum description length / BIC

Related to both of the previous approaches, in some way I do not yet understand.

Rissanen’s Minimum Description Length, as applied to mixture density estimation? Putatively related to the information-criterion method in the form of the Bayesian Information Criterion, which is purportedly an MDL measure. (Should look into that, eh?) Andrew Barron and co-workers seem to own the statistical MDL approach to mixture estimation. See Barr91, BaCo91, BaRY98, with literature reviews in BHLL08. BHLL08 constructs discretised mixture models as “two-stage codes”, and achieves prediction risk bounds for finite samples using them.

Unsatisfactory thing #1: Model selection

Grrr. See model selection in mixtures.

Unsatisfactory thing #2: scale parameter selection theory

All the really good results take the scale parameter as given.

What if, as in the original GMM, we are happy to have our mixture component parameters vary? This is fine, as far as it goes, but scale parameter selection is typically the reason we are bothering with this class of model; otherwise this is simply a weird convex deconvolution problem, which is not so interesting. In particular, how do we handle scale parameter selection within model selection?

ShSh10 and BeRV16 make a start.

Unsatisfactory thing #3: approximation loss

There’s a lot of theory about how well we can learn the centres of clusters, but less theory about how well we can approximate an overall density. In particular, identifying the centres and scales is subject to the usual ML results on identifiability and asymptotic estimator distributions, but if those are all just nuisance parameters, what do you have left?

Miscellany

http://blog.sigfpe.com/2016/10/expectation-maximization-with-less.html

Refs

AcMc05
Achlioptas, D., & McSherry, F. (2005) On Spectral Learning of Mixtures of Distributions. In P. Auer & R. Meir (Eds.), Learning Theory (pp. 458–469). Springer Berlin Heidelberg
AcMc07
Achlioptas, D., & Mcsherry, F. (2007) Fast Computation of Low-rank Matrix Approximations. J. ACM, 54(2). DOI.
AnHK12
Anandkumar, A., Hsu, D., & Kakade, S. M.(2012) A Method of Moments for Mixture Models and Hidden Markov Models.
AnKI08
Ando, T., Konishi, S., & Imoto, S. (2008) Nonlinear regression modeling via regularized radial basis function networks. Journal of Statistical Planning and Inference, 138(11), 3616–3633. DOI.
ArKa01
Arora, S., & Kannan, R. (2001) Learning Mixtures of Arbitrary Gaussians. In Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing (pp. 247–257). New York, NY, USA: ACM DOI.
ABCK15
Awasthi, P., Bandeira, A. S., Charikar, M., Krishnaswamy, R., Villar, S., & Ward, R. (2015) Relax, No Need to Round: Integrality of Clustering Formulations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science (pp. 191–200). New York, NY, USA: ACM DOI.
AwSh12
Awasthi, P., & Sheffet, O. (2012) Improved Spectral-Norm Bounds for Clustering. arXiv Preprint arXiv:1206.3204.
Bach14
Bach, F. (2014) Breaking the Curse of Dimensionality with Convex Neural Networks. arXiv:1412.8690 [Cs, Math, Stat].
Barr91
Barron, A. R.(1991) Complexity Regularization with Application to Artificial Neural Networks. In G. Roussas (Ed.), Nonparametric Functional Estimation and Related Topics (pp. 561–576). Springer Netherlands
Barr93
Barron, A. R.(1993) Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3), 930–945. DOI.
Barr94
Barron, A. R.(1994) Approximation and Estimation Bounds for Artificial Neural Networks. Mach. Learn., 14(1), 115–133. DOI.
BaCo91
Barron, A. R., & Cover, T. M.(1991) Minimum complexity density estimation. IEEE Transactions on Information Theory, 37(4), 1034–1054. DOI.
BHLL08
Barron, A. R., Huang, C., Li, J. Q., & Luo, X. (2008) MDL, penalized likelihood, and statistical risk. In Information Theory Workshop, 2008. ITW’08. IEEE (pp. 247–257). IEEE DOI.
BaRY98
Barron, A., Rissanen, J., & Yu, B. (1998) The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6), 2743–2760. DOI.
BaLi14
Battey, H., & Linton, O. (2014) Nonparametric estimation of multivariate elliptic densities via finite mixture sieves. Journal of Multivariate Analysis, 123, 43–67. DOI.
BaLi13
Battey, H., & Liu, H. (2013) Smooth projected density estimation. arXiv:1308.3968 [Stat].
BaLi16
Battey, H., & Liu, H. (2016) Nonparametrically filtered parametric density estimation.
BaSa13
Battey, H., & Sancetta, A. (2013) Conditional estimation for dependent functional data. Journal of Multivariate Analysis, 120, 1–17. DOI.
BeGr85
Bei, C.-D., & Gray, R. (1985) An Improvement of the Minimum Distortion Encoding Algorithm for Vector Quantization. IEEE Transactions on Communications, 33(10), 1132–1133. DOI.
BeRV16
Belkin, M., Rademacher, L., & Voss, J. (2016) Basis Learning as an Algorithmic Primitive. (pp. 446–487). Presented at the 29th Annual Conference on Learning Theory
BRVD05
Bengio, Y., Roux, N. L., Vincent, P., Delalleau, O., & Marcotte, P. (2005) Convex neural networks. In Advances in neural information processing systems (pp. 123–130).
Bera77
Beran, R. (1977) Minimum Hellinger Distance Estimates for Parametric Models. The Annals of Statistics, 5(3), 445–463. DOI.
BePR11
Bertin, K., Pennec, E. L., & Rivoirard, V. (2011) Adaptive Dantzig density estimation. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 47(1), 43–74. DOI.
BiRo06
Birgé, L., & Rozenholc, Y. (2006) How many bins should be put in a regular histogram. ESAIM: Probability and Statistics, 10, 24–45. DOI.
Bish91
Bishop, C. (1991) Improving the Generalization Properties of Radial Basis Function Neural Networks. Neural Computation, 3(4), 579–588. DOI.
Chee11
Chee, C.-S. (2011) A mixture-based framework for nonparametric density estimation. . ResearchSpace@ Auckland
Chen95
Chen, J. (1995) Optimal Rate of Convergence for Finite Mixture Models. The Annals of Statistics, 23(1), 221–233. DOI.
ChLi09
Cheney, E. W., & Light, W. A.(2009) A Course in Approximation Theory. . American Mathematical Soc.
Chri15
Bauckhage, C. (2015) Lecture Notes on Data Science: Soft k-Means Clustering. DOI.
DaLS12
Daniely, A., Linial, N., & Saks, M. (2012) Clustering is difficult only when it does not matter. arXiv:1205.4891 [Cs].
Dasg08
DasGupta, A. (2008) Asymptotic Theory of Statistics and Probability. . New York: Springer New York
Dasg99
Dasgupta, S. (1999) Learning mixtures of Gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on (pp. 634–644). IEEE DOI.
DaSc07
Dasgupta, S., & Schulman, L. (2007) A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians. Journal of Machine Learning Research, 8(Feb), 203–226.
EiMa96
Eilers, P. H. C., & Marx, B. D.(1996) Flexible smoothing with B-splines and penalties. Statistical Science, 11(2), 89–121. DOI.
EzRa14
López-Rubio, E., & Luque-Baena, R. M. (2014) Online Learning by Stochastic Approximation for Background Modeling. In Background Modeling and Foreground Detection for Video Surveillance (Vols. 1-0, pp. 8-1–8-23). Chapman and Hall/CRC
Fan91
Fan, J. (1991) On the Optimal Rates of Convergence for Nonparametric Deconvolution Problems. The Annals of Statistics, 19(3), 1257–1272. DOI.
GeHw82
Geman, S., & Hwang, C.-R. (1982) Nonparametric Maximum Likelihood Estimation by the Method of Sieves. The Annals of Statistics, 10(2), 401–414. DOI.
GeWa00
Genovese, C. R., & Wasserman, L. (2000) Rates of convergence for the Gaussian mixture sieve. Annals of Statistics, 1105–1127.
GeGr12
Gersho, A., & Gray, R. M.(2012) Vector Quantization and Signal Compression. . Springer Science & Business Media
GhVa01
Ghosal, S., & van der Vaart, A. W.(2001) Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities. The Annals of Statistics, 29(5), 1233–1263. DOI.
Gray84
Gray, R. (1984) Vector quantization. IEEE ASSP Magazine, 1(2), 4–29. DOI.
Hall87
Hall, P. (1987) On Kullback-Leibler Loss and Density Estimation. The Annals of Statistics, 15(4), 1491–1519. DOI.
HsKa13
Hsu, D., & Kakade, S. M.(2013) Learning Mixtures of Spherical Gaussians: Moment Methods and Spectral Decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science (pp. 11–20). New York, NY, USA: ACM DOI.
HuCB08
Huang, C., Cheang, G. L. H., & Barron, A. R.(2008) Risk of penalized least squares, greedy selection and l1 penalization for flexible function libraries.
Ibra01
Ibragimov, I. (2001) Estimation of analytic functions. In Institute of Mathematical Statistics Lecture Notes - Monograph Series (pp. 359–383). Beachwood, OH: Institute of Mathematical Statistics
IMPV15
Iguchi, T., Mixon, D. G., Peterson, J., & Villar, S. (2015) Probably certifiably correct k-means clustering. arXiv:1509.07983 [Cs, Math, Stat].
KBGP16
Keriven, N., Bourrier, A., Gribonval, R., & Pérez, P. (2016) Sketching for Large-Scale Learning of Mixture Models. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6190–6194). DOI.
KoKi08
Konishi, S., & Kitagawa, G. (2008) Information criteria and statistical modeling. . New York: Springer
Kris11
Krishnamurthy, A. (2011) High-dimensional clustering with sparse gaussian mixture models. Unpublished Paper, 191–192.
KuKa10
Kumar, A., & Kannan, R. (2010) Clustering with Spectral Norm and the k-Means Algorithm. In 2010 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS) (pp. 299–308). DOI.
LeSe99
Lee, D. D., & Seung, H. S.(1999) Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755), 788–791. DOI.
LeSe01
Lee, D. D., & Seung, H. S.(2001) Algorithms for Non-negative Matrix Factorization. In T. K. Leen, T. G. Dietterich, & V. Tresp (Eds.), Advances in Neural Information Processing Systems 13 (pp. 556–562). MIT Press
LeSc12
Lee, G., & Scott, C. (2012) EM algorithms for multivariate Gaussian mixture models with truncated and censored data. Computational Statistics & Data Analysis, 56(9), 2816–2829. DOI.
LeNP06
Lee, Y., Nelder, J. A., & Pawitan, Y. (2006) Generalized linear models with random effects. . Boca Raton, FL: Chapman & Hall/CRC
LeBa06
Leung, G., & Barron, A. R.(2006) Information Theory and Mixing Least-Squares Regressions. IEEE Transactions on Information Theory, 52(8), 3396–3410. DOI.
LiXG12
Li, D., Xu, L., & Goodman, E. (2012) On-line EM Variants for Multivariate Normal Mixture Model in Background Learning and Moving Foreground Detection. Journal of Mathematical Imaging and Vision, 48(1), 114–133. DOI.
LiZJ05
Li, H., Zhang, K., & Jiang, T. (2005) The regularized EM algorithm. In Proceedings of the national conference on artificial intelligence (Vol. 20, p. 807). Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999
LiBa00
Li, J. Q., & Barron, A. R.(2000) Mixture Density Estimation. In S. A. Solla, T. K. Leen, & K. Müller (Eds.), Advances in Neural Information Processing Systems 12 (pp. 279–285). MIT Press
LiBG80
Linde, Y., Buzo, A., & Gray, R. (1980) An Algorithm for Vector Quantizer Design. IEEE Transactions on Communications, 28(1), 84–95. DOI.
Lloy82
Lloyd, S. (1982) Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137. DOI.
McJo88
McLachlan, G. J., & Jones, P. N.(1988) Fitting Mixture Models to Grouped and Truncated Data via the EM Algorithm. Biometrics, 44(2), 571–578. DOI.
McRa14
McLachlan, G. J., & Rathnayake, S. (2014) On the number of components in a Gaussian mixture model. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 4(5), 341–355. DOI.
MiVW16
Mixon, D. G., Villar, S., & Ward, R. (2016) Clustering subgaussian mixtures by semidefinite programming. arXiv:1602.06612 [Cs, Math, Stat].
MoVa10
Moitra, A., & Valiant, G. (2010) Settling the Polynomial Learnability of Mixtures of Gaussians. In 2010 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS) (pp. 93–102). DOI.
MuYA94
Murata, N., Yoshizawa, S., & Amari, S. (1994) Network information criterion-determining the number of hidden units for an artificial neural network model. IEEE Transactions on Neural Networks, 5(6), 865–872. DOI.
NaKi88
Nasrabadi, N. M., & King, R. A.(1988) Image coding using vector quantization: a review. IEEE Transactions on Communications, 36(8), 957–971. DOI.
Nore10
Norets, A. (2010) Approximation of conditional densities by smooth mixtures of regressions. The Annals of Statistics, 38(3), 1733–1766. DOI.
Orr96
Orr, M. J.(1996) Introduction to radial basis function networks. . Technical Report, Center for Cognitive Science, University of Edinburgh
Orr99
Orr, M. J. L.(1999) Recent Advances in Radial Basis Function Networks. . Technical Report www.ed.ac.uk/ mjo/papers/recad.ps, Institute for Adaptive and Neural Computation
PaDi51
Panter, P. F., & Dite, W. (1951) Quantization Distortion in Pulse-Count Modulation with Nonuniform Spacing of Levels. Proceedings of the IRE, 39(1), 44–48. DOI.
PeWe07
Peng, J., & Wei, Y. (2007) Approximating K‐means‐type Clustering via Semidefinite Programming. SIAM Journal on Optimization, 18(1), 186–205. DOI.
Prie94
Priebe, C. E.(1994) Adaptive Mixtures. Journal of the American Statistical Association, 89(427), 796–806. DOI.
PrMa00
Priebe, C. E., & Marchette, D. J.(2000) Alternating kernel and mixture density estimates. Computational Statistics & Data Analysis, 35(1), 43–65. DOI.
QiCh94
Qian, S., & Chen, D. (1994) Signal representation using adaptive normalized Gaussian functions. Signal Processing, 36(1), 1–11. DOI.
QuBe00
Que, Q., & Belkin, M. (n.d.) Back to the future: Radial Basis Function networks revisited.
RaDe14
Rabusseau, G., & Denis, F. (2014) Learning Negative Mixture Models by Tensor Decompositions. arXiv:1403.4224 [Cs].
RaPM05
Rakhlin, A., Panchenko, D., & Mukherjee, S. (2005) Risk bounds for mixture density estimation. ESAIM: Probability and Statistics, 9, 220–229. DOI.
ReWa84
Redner, R., & Walker, H. (1984) Mixture Densities, Maximum Likelihood and the EM Algorithm. SIAM Review, 26(2), 195–239. DOI.
Riss84
Rissanen, J. (1984) Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory, 30(4), 629–636. DOI.
RoWa97
Roeder, K., & Wasserman, L. (1997) Practical Bayesian Density Estimation Using Mixtures of Normals. Journal of the American Statistical Association, 92(439), 894–902. DOI.
RuYZ11
Ruan, L., Yuan, M., & Zou, H. (2011) Regularized Parameter Estimation in High-Dimensional Gaussian Mixture Models. Neural Computation, 23(6), 1605–1622. DOI.
SKSB98
Schölkopf, B., Knirsch, P., Smola, A., & Burges, C. (1998) Fast Approximation of Support Vector Kernel Expansions, and an Interpretation of Clustering as Approximation in Feature Spaces. In P. Levi, M. Schanz, R.-J. Ahlers, & F. May (Eds.), Mustererkennung 1998 (pp. 125–132). Springer Berlin Heidelberg
ScSM97
Schölkopf, B., Smola, A., & Müller, K.-R. (1997) Kernel principal component analysis. In W. Gerstner, A. Germond, M. Hasler, & J.-D. Nicoud (Eds.), Artificial Neural Networks — ICANN’97 (pp. 583–588). Springer Berlin Heidelberg DOI.
ShSh10
Shimazaki, H., & Shinomoto, S. (2010) Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience, 29(1–2), 171–182. DOI.
SiGo08
Singh, A. P., & Gordon, G. J.(2008) A unified view of matrix factorization models. In Machine Learning and Knowledge Discovery in Databases (pp. 358–373). Springer
Tipp00
Tipping, M. E.(2000) The Relevance Vector Machine. In Advances in Neural Information Processing Systems (pp. 652–658). MIT Press
Tipp01
Tipping, M. E.(2001) Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research, 1, 211–244. DOI.
TiNh01
Tipping, M. E., & Nh, C. C.(2001) Sparse Kernel Principal Component Analysis.
ToAi15
Toulis, P., & Airoldi, E. M.(2015) Scalable estimation strategies based on stochastic approximations: classical results and new insights. Statistics and Computing, 25(4), 781–795. DOI.
VaWe96
Vaart, A. van der, & Wellner, J. (1996) Weak Convergence and Empirical Processes: With Applications to Statistics. . Springer Science & Business Media
Geer96
van de Geer, S. (1996) Rates of convergence for the maximum likelihood estimator in mixture models. Journal of Nonparametric Statistics, 6(4), 293–310. DOI.
Vand97
Van De Geer, S. (1997) Asymptotic normality in mixture models. ESAIM: Probability and Statistics, 1, 17–33.
Geer03
van de Geer, S. (2003) Asymptotic theory for maximum likelihood in nonparametric mixture models. Computational Statistics & Data Analysis, 41(3–4), 453–464. DOI.
VeWa04
Vempala, S., & Wang, G. (2004) A spectral algorithm for learning mixture models. Journal of Computer and System Sciences, 68(4), 841–860. DOI.
WeVe12
Weidmann, C., & Vetterli, M. (2012) Rate Distortion Behavior of Sparse Sources. IEEE Transactions on Information Theory, 58(8), 4969–4992. DOI.
Will01
Williams, C. K. I.(2001) On a Connection between Kernel PCA and Metric Multidimensional Scaling. In T. K. Leen, T. G. Dietterich, & V. Tresp (Eds.), Advances in Neural Information Processing Systems 13 (Vol. 46, pp. 675–681). MIT Press DOI.
WoSh95
Wong, W. H., & Shen, X. (1995) Probability Inequalities for Likelihood Ratios and Convergence Rates of Sieve MLES. The Annals of Statistics, 23(2), 339–362. DOI.
WuYZ07
Wu, Q., Ying, Y., & Zhou, D.-X. (2007) Multi-kernel regularized classifiers. Journal of Complexity, 23(1), 108–134. DOI.
XPPP08
Xu, J.-W., Paiva, A. R. C., Park, I., & Principe, J. C.(2008) A Reproducing Kernel Hilbert Space Framework for Information-Theoretic Learning. IEEE Transactions on Signal Processing, 56(12), 5891–5902. DOI.
XuJo96
Xu, L., & Jordan, M. I.(1996) On Convergence Properties of the EM Algorithm for Gaussian Mixtures. Neural Computation, 8(1), 129–151. DOI.
ZeMe97
Zeevi, A. J., & Meir, R. (1997) Density Estimation Through Convex Combinations of Densities: Approximation and Estimation Bounds. Neural Networks: The Official Journal of the International Neural Network Society, 10(1), 99–109. DOI.
ZeMM98
Zeevi, A. J., Meir, R., & Maiorov, V. (1998) Error bounds for functional approximation and estimation using mixtures of experts. IEEE Transactions on Information Theory, 44(3), 1010–1025. DOI.