
Maximum likelihood inference

M-estimation based on maximising the likelihood of the observed data with respect to the model, by choosing the parameters appropriately.

See also expectation maximisation, information criteria, robust statistics, decision theory, all of machine learning, optimisation etc.

One intuitively natural way of choosing the “best” parameter values for a model based on the data you have. It is prized for various nice properties, especially in the asymptotic limit, and especially, especially for exponential families. It produces, as side-products, some good asymptotic hypothesis tests and some model comparison statistics, most notably the Akaike Information Criterion.

It has rather fewer nice properties for small sample sizes, but is still regarded as a respectable default choice.

This is an extremum estimator with objective (i.e. negative loss) function

\begin{equation*} \hat{\ell}(\theta|x) = \frac{1}{n}\sum_{i=1}^{n}\ln f(x_i|\theta), \end{equation*}

which is motivated as being the sample estimate of the expected log-likelihood

\begin{equation*} \ell(\theta) = \operatorname{E}_{\theta_0}[\,\ln f(X|\theta)\,] \end{equation*}

for true and unknown parameter value \(\theta_0\).

Why we choose this particular loss function is a whole other question, or, rather, a whole other field of research. Others are possible, but this one is a nice start.
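As a concrete toy sketch (the Gaussian model, synthetic data, and parametrisation here are my own choices for illustration), maximising the sample average log-likelihood numerically with scipy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.5, size=500)   # data with "true" theta_0 = (2.0, 1.5)

def neg_avg_log_lik(theta, x):
    """Negative sample average Gaussian log-likelihood, -l_hat(theta | x)."""
    mu, log_sigma = theta                       # unconstrained: sigma = exp(log_sigma)
    sigma = np.exp(log_sigma)
    log_f = -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2
    return -np.mean(log_f)

res = minimize(neg_avg_log_lik, x0=np.zeros(2), args=(x,), method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)                        # should land near (2.0, 1.5)
```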

Estimator asymptotic optimality

See large sample theory.

Fisher Information

Used in maximum likelihood asymptotic theory, and kinda-sorta in robust estimation. A matrix that tells you how much information a new datum carries about your parameter estimates. See large sample theory.
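For a concrete standard example: a single Bernoulli(\(p\)) observation has log-likelihood \(\ln f(x|p) = x \ln p + (1-x)\ln(1-p)\), whose Fisher information works out to

\begin{equation*} I(p) = \operatorname{E}_p\left[\left(\frac{\partial}{\partial p}\ln f(X|p)\right)^{2}\right] = \frac{1}{p(1-p)}, \end{equation*}

which blows up near \(p \in \{0,1\}\): observations are most informative about nearly-deterministic coins.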

Fun features with exponential families

TBD

Conditional transformation models

Look cool. But what are they? See HoKB14, HoMB15.

The method of sieves

Nonparametrics and maximum likelihood?

GeHw82:

Maximum likelihood estimation often fails when the parameter takes values in an infinite dimensional space. For example, the maximum likelihood method cannot be applied to the completely nonparametric estimation of a density function from an iid sample; the maximum of the likelihood is not attained by any density. In this example, as in many other examples, the parameter space (positive functions with area one) is too big. But the likelihood method can often be salvaged if we first maximize over a constrained subspace of the parameter space and then relax the constraint as the sample size grows. This is Grenander’s “method of sieves.” Application of the method sometimes leads to new estimators for familiar problems, or to a new motivation for an already well-studied technique.
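As a minimal sketch of that flavour (the estimator and growth rate here are my own illustrative choices, not from GeHw82): over the sieve of densities that are piecewise constant on a fixed grid of \(m\) bins, the constrained MLE is just the density-normalised histogram, and letting \(m\) grow slowly with \(n\) relaxes the constraint:

```python
import numpy as np

def sieve_density_mle(x, n_bins=None):
    """Constrained MLE over piecewise-constant densities on [min(x), max(x)]:
    the density histogram. The sieve grows with the sample size."""
    if n_bins is None:
        n_bins = max(1, int(len(x) ** (1 / 3)))   # illustrative growth rate
    heights, edges = np.histogram(x, bins=n_bins, density=True)
    return heights, edges

rng = np.random.default_rng(0)
for n in (100, 10_000):
    heights, _ = sieve_density_mle(rng.normal(size=n))
    print(n, len(heights))   # more data, finer sieve, richer estimate
```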

Variants

Wherein we resolve lexical confusion using brute-force clarity.

What is the difference between a partial likelihood, profile likelihood and marginal likelihood?

Conditional likelihood

You have incidental nuisance parameters? If you can find a sufficient statistic for them and then condition upon it, they vanish.
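A standard textbook instance: if \(X \sim \operatorname{Poisson}(\lambda_1)\) and \(Y \sim \operatorname{Poisson}(\lambda_2)\) independently, and the parameter of interest is the ratio \(\rho = \lambda_1/\lambda_2\), then \(X+Y\) is sufficient for the nuisance overall rate, and

\begin{equation*} X \mid X+Y=n \;\sim\; \operatorname{Binomial}\left(n, \tfrac{\rho}{\rho+1}\right), \end{equation*}

a conditional likelihood in which the nuisance parameter has vanished.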

Marginal likelihood

“the marginal probability of the data given the model, with marginalization performed over unobserved variables”

The version that crops up in Bayesian inference, where it is also called the evidence. And elsewhere? Need to make this bit precise.
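Concretely, in the Bayesian setting, for a model \(M\) with parameter \(\theta\),

\begin{equation*} p(x|M) = \int p(x|\theta, M)\, p(\theta|M)\, d\theta, \end{equation*}

the normalising constant in Bayes' rule and the raw ingredient of Bayes factors.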

Profile likelihood

hmmmm. Not quite the same as marginal likelihood: where the marginal likelihood integrates nuisance parameters out, the profile likelihood maximises them out. See RKMB09 for it in action.
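That is, splitting the parameter into a part of interest \(\psi\) and a nuisance part \(\lambda\),

\begin{equation*} \ell_p(\psi) = \max_{\lambda} \ell(\psi, \lambda), \end{equation*}

which one then treats, with some care, as if it were an ordinary log-likelihood for \(\psi\).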

Partial likelihood

What’s that? Due to Cox (Cox75). I will start by mangling an introduction from the internet (where?).

Let \(Y_i\) denote the observed time (either censoring time or event time) for subject \(i\), and let \(C_i\) be the indicator that the time corresponds to an event (i.e. if \(C_i=1\) the event occurred and if \(C_i=0\) the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

\begin{equation*} \lambda(t|X) = \lambda_0(t)\exp(\beta_1 X_1 + \cdots + \beta_p X_p) = \lambda_0(t)\exp(X \beta^\prime). \end{equation*}

This expression gives the hazard at time \(t\) for an individual with covariate vector (explanatory variables) \(X\). Based on this hazard function, a partial likelihood can be constructed from the data as

\begin{equation*} L(\beta) = \prod_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j}, \end{equation*}

where \(\theta_j=\exp(X_j\beta^\prime)\) and \(X_1, \ldots, X_n\) are the covariate vectors for the \(n\) independently sampled individuals in the dataset (treated here as row vectors, so that \(X_j\beta^\prime\) is a scalar).

The corresponding log partial likelihood is

\begin{equation*} \ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right). \end{equation*}

This function can be maximized over \(\beta\) to produce maximum partial likelihood estimates of the model parameters.

The partial score is

\begin{equation*} \ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_j X_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right), \end{equation*}

and the Hessian of the partial log likelihood is

\begin{equation*} \ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right). \end{equation*}

Using this score function and Hessian matrix, the partial likelihood can be maximized in the usual fashion. The inverse of the negative Hessian (the observed information), evaluated at the estimate of \(\beta\), can be used as an approximate variance-covariance matrix for the estimate, also in the usual fashion.
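To make that concrete, here is a minimal Newton-Raphson sketch in numpy implementing exactly the score and Hessian above (my own illustrative code: it ignores tied event times and the numerical care a production survival package would take):

```python
import numpy as np

def cox_partial_mle(X, time, event, n_iter=25, tol=1e-8):
    """Maximise the Cox log partial likelihood by Newton-Raphson.
    X: (n, p) covariate rows; time: (n,) observed times Y_i;
    event: (n,) with 1 where C_i = 1 (event) and 0 where censored."""
    _, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        theta = np.exp(X @ beta)             # theta_j = exp(X_j beta')
        score, hess = np.zeros(p), np.zeros((p, p))
        for i in np.flatnonzero(event):      # sum over events {i : C_i = 1}
            risk = time >= time[i]           # risk set {j : Y_j >= Y_i}
            w, Xr = theta[risk], X[risk]
            xbar = w @ Xr / w.sum()          # theta-weighted mean covariate
            score += X[i] - xbar
            hess -= (Xr.T * w) @ Xr / w.sum() - np.outer(xbar, xbar)
        step = np.linalg.solve(hess, score)  # hess is the (negative definite) l''
        beta = beta - step
        if np.abs(step).max() < tol:
            break
    return beta, np.linalg.inv(-hess)        # estimate, approximate covariance
```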

Pseudo-likelihood

Dunno. As seen in spatial point processes and other undirected random fields.

From BaTu00:

Originally Besag (1975, 1977) defined the pseudolikelihood of a finite set of random variables \(X_1, \ldots, X_n\) as the product of the conditional likelihoods of each \(X_i\) given the other variables \(\{X_j, j \neq i\}\). This was extended (Besag, 1977; Besag et al., 1982) to point processes, for which it can be viewed as an infinite product of infinitesimal conditional probabilities.
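A minimal sketch of that definition for a toy fully-connected Ising model (my own illustrative setup: spins in \(\{-1,+1\}\), no external field), where each conditional \(p(x_i|x_{-i})\) is a logistic function of the local field, so the pseudolikelihood is tractable even though the full likelihood's normalising constant is not:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_pseudolik(j_upper, s):
    """Negative Besag pseudolikelihood: sum over sites i of -log p(s_i | s_-i),
    for an Ising model with symmetric couplings J and spins s in {-1, +1}."""
    n, d = s.shape
    J = np.zeros((d, d))
    J[np.triu_indices(d, k=1)] = j_upper    # upper-triangle couplings, zero diagonal
    J = J + J.T
    h = s @ J                               # h[k, i]: local field at site i, sample k
    # p(s_i | s_-i) = sigmoid(2 s_i h_i), so -log p = log(1 + exp(-2 s_i h_i))
    return np.mean(np.sum(np.log1p(np.exp(-2 * s * h)), axis=1))

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=(500, 4))  # toy spin data
d = 4
res = minimize(neg_log_pseudolik, x0=np.zeros(d * (d - 1) // 2),
               args=(s,), method="BFGS")
J_hat = res.x                               # estimated couplings (upper triangle)
```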

Quasi-likelihood

The casual explanation I got was that this is somewhat like maximum likelihood inference, but based solely upon the means and variances of the observations in question, rather than a full distributional model. Oh, and p.s.: if you have over-dispersed data in a Poisson regression, this will help you.

AFAICT this is exclusively relevant to generalised linear models.
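Concretely, following Wedd74: one posits only a mean model \(\mu_i(\beta)\) and a variance function \(\operatorname{Var}(Y_i)=\phi V(\mu_i)\), and estimates \(\beta\) by solving the quasi-score equation

\begin{equation*} \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta}\,\frac{y_i - \mu_i}{\phi\, V(\mu_i)} = 0, \end{equation*}

which coincides with the GLM maximum likelihood score equations when \(V\) comes from an exponential family, but requires no full distributional model; the free dispersion \(\phi\) is what soaks up Poisson over-dispersion.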

H-likelihood

…is some kind of extension of quasi-likelihood, for hierarchical generalised linear models.

Refs

ArSt91
Arnold, B. C., & Strauss, D. (1991) Pseudolikelihood Estimation: Some Examples. Sankhyā: The Indian Journal of Statistics, Series B (1960-2002), 53(2), 233–243.
BaTu00
Baddeley, A., & Turner, R. (2000) Practical Maximum Pseudolikelihood for Spatial Point Patterns. Australian & New Zealand Journal of Statistics, 42(3), 283–322. DOI.
BeTu92
Berman, M., & Turner, T. R. (1992) Approximating Point Process Likelihoods with GLIM. Journal of the Royal Statistical Society. Series C (Applied Statistics), 41(1), 31–38. DOI.
BEKF15
Bertl, J., Ewing, G., Kosiol, C., & Futschik, A. (2015) Approximate Maximum Likelihood Estimation. arXiv:1507.04553 [Stat].
Besa74
Besag, J. (1974) Spatial Interaction and the Statistical Analysis of Lattice Systems. Journal of the Royal Statistical Society. Series B (Methodological), 36(2), 192–236.
Besa75
Besag, J. (1975) Statistical Analysis of Non-Lattice Data. Journal of the Royal Statistical Society. Series D (The Statistician), 24(3), 179–195. DOI.
Besa77
Besag, J. (1977) Efficiency of Pseudolikelihood Estimation for Simple Gaussian Fields. Biometrika, 64(3), 616–618. DOI.
Cox75
Cox, D. R. (1975) Partial likelihood. Biometrika, 62(2), 269–276. DOI.
CoRe04
Cox, D. R., & Reid, N. (2004) A note on pseudolikelihood constructed from marginal densities. Biometrika, 91(3), 729–737. DOI.
Efro86
Efron, B. (1986) How biased is the apparent error rate of a prediction rule? Journal of the American Statistical Association, 81(394), 461–470. DOI.
EfHi78
Efron, B., & Hinkley, D. V. (1978) Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, 65(3), 457–483. DOI.
FGLE12
Flammia, S. T., Gross, D., Liu, Y.-K., & Eisert, J. (2012) Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators. New Journal of Physics, 14(9), 95022. DOI.
GeHw82
Geman, S., & Hwang, C.-R. (1982) Nonparametric Maximum Likelihood Estimation by the Method of Sieves. The Annals of Statistics, 10(2), 401–414. DOI.
Geye91
Geyer, C. J. (1991) Markov chain Monte Carlo maximum likelihood.
Gida88
Gidas, B. (1988) Consistency of maximum likelihood and pseudo-likelihood estimators for Gibbs distributions. In Stochastic differential systems, stochastic control theory and applications (pp. 129–145). Springer
GoSa81
Gong, G., & Samaniego, F. J. (1981) Pseudo Maximum Likelihood Estimation: Theory and Applications. The Annals of Statistics, 9(4), 861–869.
GoSG96
Goulard, M., Särkkä, A., & Grabarnik, P. (1996) Parameter estimation for marked Gibbs point processes through the maximum pseudo-likelihood method. Scandinavian Journal of Statistics, 365–379.
Heyd97
Heyde, C. C. (1997) Quasi-likelihood and its application: a general approach to optimal parameter estimation. New York: Springer.
HoKB14
Hothorn, T., Kneib, T., & Bühlmann, P. (2014) Conditional transformation models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 3–27. DOI.
HoMB15
Hothorn, T., Möst, L., & Bühlmann, P. (2015) Most Likely Transformations. arXiv:1508.06749 [Stat].
HuOg99
Huang, F., & Ogata, Y. (1999) Improvements of the Maximum Pseudo-Likelihood Estimators in Various Spatial Statistical Models. Journal of Computational and Graphical Statistics, 8(3), 510–530. DOI.
JaGe15
Janková, J., & van de Geer, S. (2015) Honest confidence regions and optimality in high-dimensional precision matrix estimation. arXiv:1507.02061 [Math, Stat].
JeKü94
Jensen, J. L., & Künsch, H. R. (1994) On asymptotic normality of pseudo likelihood estimates for pairwise interaction processes. Annals of the Institute of Statistical Mathematics, 46(3), 475–486. DOI.
JeMø91
Jensen, J. L., & Møller, J. (1991) Pseudolikelihood for Exponential Family Models of Spatial Point Processes. The Annals of Applied Probability, 1(3), 445–461. DOI.
Kasy15
Kasy, M. (2015) Uniformity and the delta method. arXiv:1507.05731 [Math, Stat].
Mill11
Millar, R. B. (2011) Maximum Likelihood Estimation and Inference: With Examples in R, SAS and ADMB. Chichester, UK: John Wiley & Sons.
Olli90
Ollinger, J. M. (1990) Iterative reconstruction-reprojection and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging, 9(1), 94–98. DOI.
RKMB09
Raue, A., Kreutz, C., Maiwald, T., Bachmann, J., Schilling, M., Klingmüller, U., & Timmer, J. (2009) Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics, 25(15), 1923–1929. DOI.
StIk90
Strauss, D., & Ikeda, M. (1990) Pseudolikelihood estimation for social networks. Journal of the American Statistical Association, 85(409), 204–212.
Sund76
Sundberg, R. (1976) An iterative method for solution of the likelihood equations for incomplete data from exponential families. Communications in Statistics - Simulation and Computation, 5(1), 55–64. DOI.
TRTW15
Tibshirani, R. J., Rinaldo, A., Tibshirani, R., & Wasserman, L. (2015) Uniform Asymptotic Inference and the Bootstrap After Model Selection. arXiv:1506.06266 [Math, Stat].
VTHR12
Vanlier, J., Tiemann, C. A., Hilbers, P. A. J., & van Riel, N. A. W. (2012) An integrated strategy for prediction uncertainty analysis. Bioinformatics, 28(8), 1130–1135. DOI.
Vari08
Varin, C. (2008) On composite marginal likelihoods. AStA Advances in Statistical Analysis, 92(1), 1–28. DOI.
VaRF11
Varin, C., Reid, N., & Firth, D. (2011) An overview of composite likelihood methods. Statistica Sinica, 21(1), 5–42.
Wedd74
Wedderburn, R. W. M.(1974) Quasi-likelihood functions, generalized linear models, and the Gauss—Newton method. Biometrika, 61(3), 439–447. DOI.