Maximum likelihood inference

February 16, 2015 — October 13, 2016

likelihood
optimization
probability
statistics

M-estimation based on maximising the likelihood of the observed data under the model, by choosing the parameters appropriately.

See also expectation maximisation, information criteria, robust statistics, decision theory, all of machine learning, optimisation etc.

One intuitively natural way of choosing the “best” parameter values for a model based on the data you have. It is prized for various nice properties, especially in the asymptotic limit, and especially, especially for exponential families. It produces, as side-products, some good asymptotic hypothesis tests and some model comparison statistics, most notably the Akaike Information Criterion.

It has rather fewer nice properties for small sample sizes, but is still regarded as a respectable default choice.

This is an extremum estimator with objective (i.e. negative loss) function

\[ \hat{\ell}(\theta \mid x) = \frac{1}{n}\sum_{i=1}^{n}\ln f(x_{i}\mid\theta), \]

which is motivated as being the sample estimate of the expected log-likelihood

\[ \ell(\theta) = \operatorname{E}_{\theta_0}\left[\,\ln f(x_{i}\mid\theta)\,\right] \]

for true and unknown parameter value \(\theta_0\).

Why we choose this particular loss function is a whole other question, or, rather, a whole other field of research. Others are possible, but this one is a nice start.
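
As a concrete illustration, here is a minimal sketch (not from the original post) of carrying out this maximisation numerically: toy data from a gamma distribution, with the data, parameterisation, starting values and optimiser all chosen purely for illustration.

```python
# Minimal sketch: maximum likelihood by numerically maximising the average
# log-likelihood \hat{\ell}(\theta | x) defined above. Toy gamma example;
# the data, parameterisation and optimiser are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(42)
x = gamma(a=3.0, scale=2.0).rvs(size=500, random_state=rng)  # "observed" data

def neg_avg_loglik(theta):
    # optimise on the log scale so shape and scale stay positive
    shape, scale = np.exp(theta)
    return -np.mean(gamma(a=shape, scale=scale).logpdf(x))

fit = minimize(neg_avg_loglik, x0=np.zeros(2), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print(shape_hat, scale_hat)  # should land near (3, 2)
```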

1 Estimator asymptotic optimality

See large sample theory.

2 Fisher Information

Used in ML theory, and kinda-sorta in robust estimation. A matrix that tells you how much a new datum affects your parameter estimates. See large sample theory.
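
Concretely, for a regular model it is the covariance matrix of the score, or equivalently the expected negative Hessian of the log-likelihood,

\[ \mathcal{I}(\theta) = \operatorname{E}_{\theta}\!\left[ \left( \frac{\partial}{\partial \theta} \ln f(X \mid \theta) \right) \left( \frac{\partial}{\partial \theta} \ln f(X \mid \theta) \right)^{\top} \right] = -\operatorname{E}_{\theta}\!\left[ \frac{\partial^{2}}{\partial \theta \, \partial \theta^{\top}} \ln f(X \mid \theta) \right]. \]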

3 Fun features with exponential families

🏗

3.1 Conditional transformation models

What are they? (Hothorn, Kneib, and Bühlmann 2014; Hothorn, Möst, and Bühlmann 2015).

3.2 The method of sieves

Nonparametrics and maximum likelihood? (Geman and Hwang 1982):

Maximum likelihood estimation often fails when the parameter takes values in an infinite dimensional space. For example, the maximum likelihood method cannot be applied to the completely nonparametric estimation of a density function from an iid sample; the maximum of the likelihood is not attained by any density. In this example, as in many other examples, the parameter space (positive functions with area one) is too big. But the likelihood method can often be salvaged if we first maximize over a constrained subspace of the parameter space and then relax the constraint as the sample size grows. This is Grenander’s “method of sieves.” Application of the method sometimes leads to new estimators for familiar problems, or to a new motivation for an already well-studied technique.
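
Schematically (my notation, not theirs): take an increasing family of tractable subsets \(\Theta_1 \subseteq \Theta_2 \subseteq \cdots\) of the full parameter space, and maximise the likelihood over a sieve that grows with the sample size,

\[ \hat{\theta}_n = \operatorname*{arg\,max}_{\theta \in \Theta_{m(n)}} \frac{1}{n} \sum_{i=1}^{n} \ln f(x_i \mid \theta), \]

with \(m(n) \to \infty\) slowly enough that the maximiser stays well behaved. In the density estimation example, \(\Theta_m\) might be the densities that are piecewise constant on \(m\) bins.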

3.3 Variants

Wherein we resolve lexical confusion using brute-force clarity.

What is the difference between a partial likelihood, profile likelihood and marginal likelihood?

4 Conditional likelihood

You have incidental nuisance parameters? If you can find a sufficient statistic for them and then condition upon it, they vanish.
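
A textbook example (not from the original post): suppose \(X_1 \sim \operatorname{Poisson}(\lambda)\) and \(X_2 \sim \operatorname{Poisson}(\psi\lambda)\) independently, with \(\psi\) of interest and \(\lambda\) a nuisance. For fixed \(\psi\), the total \(X_1 + X_2\) is sufficient for \(\lambda\), and conditioning on it gives

\[ X_2 \mid X_1 + X_2 = n \;\sim\; \operatorname{Binomial}\!\left(n, \frac{\psi}{1+\psi}\right), \]

a conditional likelihood for \(\psi\) in which \(\lambda\) has vanished.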

5 Marginal likelihood

“the marginal probability of the data given the model, with marginalization performed over unobserved variables”

The version that crops up in Bayesian inference. And elsewhere? Need to make this bit precise.
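
In the Bayesian case the unobserved variables are (at least) the parameters themselves, and the marginal likelihood, a.k.a. the evidence, of a model \(M\) is

\[ p(x \mid M) = \int p(x \mid \theta, M)\, p(\theta \mid M)\, \mathrm{d}\theta. \]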

6 Profile likelihood

TBD
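
Pending a proper write-up, the standard definition: split the parameter into an interest part \(\psi\) and a nuisance part \(\lambda\), and maximise the full likelihood over the nuisance,

\[ L_{\mathrm{p}}(\psi) = \sup_{\lambda} L(\psi, \lambda), \]

which is then treated, with some care, as if it were an ordinary likelihood for \(\psi\).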

7 Partial likelihood

What’s that? I will start by mangling an introduction from the internet (Where?)

Let \(Y_i\) denote the observed time (either censoring time or event time) for subject \(i\), and let \(C_i\) be the indicator that the time corresponds to an event (i.e. if \(C_i=1\) the event occurred and if \(C_i=0\) the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

\[ \lambda(t|X) = \lambda_0(t)\exp(\beta_1 X_1 + \cdots + \beta_p X_p) = \lambda_0(t)\exp(X \beta^\prime). \]

This expression gives the hazard at time \(t\) for an individual with covariate vector (explanatory variables) \(X\). Based on this hazard function, a partial likelihood can be constructed from the data as

\[ L(\beta) = \prod_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j}, \]

where \(\theta_j=\exp(X_j\beta^\prime)\) and \(X_1, \dots, X_n\) are the covariate vectors for the \(n\) independently sampled individuals in the dataset (treated here as column vectors).

The corresponding log partial likelihood is

\[ \ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right). \]

This function can be maximized over \(\beta\) to produce maximum partial likelihood estimates of the model parameters.

The partial score is

\[ \ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_j X_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right), \]

and the Hessian of the partial log likelihood is

\[ \ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right). \]

Using this score function and Hessian matrix, the partial likelihood can be maximized in the usual fashion. The inverse of the Hessian matrix, evaluated at the estimate of \(\beta\), can be used as an approximate variance-covariance matrix for the estimate, also in the usual fashion.
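
To make that concrete, here is a minimal numerical sketch (not from the original source) of maximising the Cox partial likelihood above with a generic optimiser; the toy data, variable names and the no-ties assumption are all mine.

```python
# Minimal sketch: maximum partial likelihood for the Cox model, assuming
# no tied event times. Toy data and names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def neg_partial_loglik(beta, times, events, X):
    """Negative Cox partial log-likelihood for beta, as defined above."""
    eta = X @ beta                      # linear predictors X_i beta
    nll = 0.0
    for i in np.flatnonzero(events):    # sum over subjects with C_i = 1
        at_risk = times >= times[i]     # risk set {j : Y_j >= Y_i}
        m = eta[at_risk].max()          # log-sum-exp, computed stably
        log_denom = m + np.log(np.sum(np.exp(eta[at_risk] - m)))
        nll -= eta[i] - log_denom
    return nll

# toy data: 5 subjects, 2 covariates
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
times = np.array([2.0, 3.5, 1.0, 4.2, 2.7])
events = np.array([1, 0, 1, 1, 0])

fit = minimize(neg_partial_loglik, x0=np.zeros(2),
               args=(times, events, X), method="BFGS")
print(fit.x)  # maximum partial likelihood estimate of beta
```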

8 Pseudo-likelihood

Dunno. As seen in spatial point processes and other undirected random fields.

From (Baddeley and Turner 2000):

Originally Besag (1975, 1977) defined the pseudolikelihood of a finite set of random variables \(X_1, \dots, X_n\) as the product of the conditional likelihoods of each \(X_i\) given the other variables \(\{X_j, j \neq i\}\). This was extended (Besag, 1977; Besag et al., 1982) to point processes, for which it can be viewed as an infinite product of infinitesimal conditional probabilities.
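
In symbols, Besag’s definition for finitely many variables is the product of the full conditionals,

\[ \operatorname{PL}(\theta; x) = \prod_{i=1}^{n} f\!\left(x_i \mid \{x_j : j \neq i\}; \theta\right), \]

which is cheap to work with in Markov random fields, since each conditional depends only on a local neighbourhood and the awkward global normalising constant cancels out.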

9 Quasi-likelihood

The casual explanation I got was that this is somewhat like maximum likelihood inference, but based only on assumptions about the means and variances of the observations rather than a full distributional model; and, p.s., if you have over-dispersed data in a Poisson regression, this will help you.

AFAICT this is exclusively relevant to generalised linear models.
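
Concretely, following Wedderburn (1974): specify only a mean function \(\mu_i(\beta)\) and a variance assumption \(\operatorname{Var}(Y_i) = \phi V(\mu_i)\), and estimate \(\beta\) by solving the quasi-score equations

\[ \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta} \, \frac{y_i - \mu_i}{\phi\, V(\mu_i)} = 0, \]

which coincide with the usual GLM score equations when \(V\) comes from a genuine exponential family; the dispersion \(\phi\) is estimated separately (e.g. from the Pearson statistic), which is exactly what handles the over-dispersed Poisson case.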

10 H-likelihood

…is Lee and Nelder’s hierarchical likelihood, which extends likelihood and quasi-likelihood ideas to hierarchical generalised linear models by working with a joint likelihood of the data and the unobserved random effects.

11 References

Arnold, and Strauss. 1991. “Pseudolikelihood Estimation: Some Examples.” Sankhyā: The Indian Journal of Statistics, Series B (1960-2002).
Baddeley, and Turner. 2000. “Practical Maximum Pseudolikelihood for Spatial Point Patterns.” Australian & New Zealand Journal of Statistics.
Berman, and Turner. 1992. “Approximating Point Process Likelihoods with GLIM.” Journal of the Royal Statistical Society. Series C (Applied Statistics).
Bertl, Ewing, Kosiol, et al. 2015. “Approximate Maximum Likelihood Estimation.” arXiv:1507.04553 [Stat].
Besag. 1974. “Spatial Interaction and the Statistical Analysis of Lattice Systems.” Journal of the Royal Statistical Society. Series B (Methodological).
———. 1975. “Statistical Analysis of Non-Lattice Data.” Journal of the Royal Statistical Society. Series D (The Statistician).
———. 1977. “Efficiency of Pseudolikelihood Estimation for Simple Gaussian Fields.” Biometrika.
Cox. 1975. “Partial Likelihood.” Biometrika.
Cox, and Reid. 2004. “A Note on Pseudolikelihood Constructed from Marginal Densities.” Biometrika.
Efron. 1986. “How Biased Is the Apparent Error Rate of a Prediction Rule?” Journal of the American Statistical Association.
Efron, and Hinkley. 1978. “Assessing the Accuracy of the Maximum Likelihood Estimator: Observed Versus Expected Fisher Information.” Biometrika.
Flammia, Gross, Liu, et al. 2012. “Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators.” New Journal of Physics.
Geman, and Hwang. 1982. “Nonparametric Maximum Likelihood Estimation by the Method of Sieves.” The Annals of Statistics.
Geyer. 1991. “Markov Chain Monte Carlo Maximum Likelihood.”
Gong, and Samaniego. 1981. “Pseudo Maximum Likelihood Estimation: Theory and Applications.” The Annals of Statistics.
Goulard, Särkkä, and Grabarnik. 1996. “Parameter Estimation for Marked Gibbs Point Processes Through the Maximum Pseudo-Likelihood Method.” Scandinavian Journal of Statistics.
Heyde. 1997. Quasi-Likelihood and Its Application: A General Approach to Optimal Parameter Estimation.
Hothorn, Kneib, and Bühlmann. 2014. “Conditional Transformation Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Hothorn, Möst, and Bühlmann. 2015. “Most Likely Transformations.” arXiv:1508.06749 [Stat].
Huang, and Ogata. 1999. “Improvements of the Maximum Pseudo-Likelihood Estimators in Various Spatial Statistical Models.” Journal of Computational and Graphical Statistics.
Hu, and Zidek. 2002. “The Weighted Likelihood.” The Canadian Journal of Statistics / La Revue Canadienne de Statistique.
Janková, and van de Geer. 2015. “Honest Confidence Regions and Optimality in High-Dimensional Precision Matrix Estimation.” arXiv:1507.02061 [Math, Stat].
Jensen, and Künsch. 1994. “On Asymptotic Normality of Pseudo Likelihood Estimates for Pairwise Interaction Processes.” Annals of the Institute of Statistical Mathematics.
Jensen, and Møller. 1991. “Pseudolikelihood for Exponential Family Models of Spatial Point Processes.” The Annals of Applied Probability.
Kasy. 2015. “Uniformity and the Delta Method.” arXiv:1507.05731 [Math, Stat].
Liu, Zhan, and Niu. 2021. “Hilbert–Schmidt Independence Criterion Regularization Kernel Framework on Symmetric Positive Definite Manifolds.” Mathematical Problems in Engineering.
Ma, Lewis, and Kleijn. 2020. “The HSIC Bottleneck: Deep Learning Without Back-Propagation.” Proceedings of the AAAI Conference on Artificial Intelligence.
Millar. 2011. Maximum Likelihood Estimation and Inference: With Examples in R, SAS and ADMB. Statistics in Practice.
Ollinger. 1990. “Iterative Reconstruction-Reprojection and the Expectation-Maximization Algorithm.” IEEE Transactions on Medical Imaging.
Raue, Kreutz, Maiwald, et al. 2009. “Structural and Practical Identifiability Analysis of Partially Observed Dynamical Models by Exploiting the Profile Likelihood.” Bioinformatics.
Strauss, and Ikeda. 1990. “Pseudolikelihood Estimation for Social Networks.” Journal of the American Statistical Association.
Sundberg. 1976. “An Iterative Method for Solution of the Likelihood Equations for Incomplete Data from Exponential Families.” Communications in Statistics - Simulation and Computation.
Tibshirani, Rinaldo, Tibshirani, et al. 2015. “Uniform Asymptotic Inference and the Bootstrap After Model Selection.” arXiv:1506.06266 [Math, Stat].
Vanlier, Tiemann, Hilbers, et al. 2012. “An Integrated Strategy for Prediction Uncertainty Analysis.” Bioinformatics.
Varin. 2008. “On Composite Marginal Likelihoods.” Advances in Statistical Analysis.
Varin, Reid, and Firth. 2011. “An Overview of Composite Likelihood Methods.” Statistica Sinica.
Wang, Steven Xiaogang. 2001. “Maximum Weighted Likelihood Estimation.”
Wang, Tinghua, Dai, and Liu. 2021. “Learning with Hilbert–Schmidt Independence Criterion: A Review and New Perspectives.” Knowledge-Based Systems.
Wedderburn. 1974. “Quasi-Likelihood Functions, Generalized Linear Models, and the Gauss—Newton Method.” Biometrika.
Wolter. 2007a. Introduction to Variance Estimation. Statistics for Social and Behavioral Sciences.
———. 2007b. “Taylor Series Methods.” In Introduction to Variance Estimation. Statistics for Social and Behavioral Sciences.