Maximum likelihood inference

M-estimation based on maximising the likelihood of the observed data with respect to the model, by choosing the parameters appropriately.

See also expectation maximisation, information criteria, robust statistics, decision theory, all of machine learning, optimisation etc.

One intuitively natural way of choosing the “best” parameter values for a model based on the data you have. It is prized for various nice properties, especially in the asymptotic limit, and especially, especially for exponential families. It produces, as side-products, some good asymptotic hypothesis tests and some model comparison statistics, most notably the Akaike Information Criterion.

It has rather fewer nice properties for small sample sizes, but is still regarded as a respectable default choice.

This is an extremum estimator with objective (i.e. negative loss) function

$$\hat{\ell}_n(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log f(x_i;\theta),$$

which is motivated as being the sample estimate of the expected log-likelihood

$$\ell(\theta) = \mathbb{E}_{\theta_0}\!\left[\log f(X;\theta)\right]$$

for true and unknown parameter value $\theta_0$.
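As a concrete sketch (the Gamma model and all the numbers here are made up for illustration, nothing from this notebook), maximising that objective numerically with scipy looks like:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=500)   # simulated i.i.d. sample; pretend the parameters are unknown

def negative_log_likelihood(params):
    """Average negative log-likelihood of a Gamma(shape, scale) model."""
    shape, scale = params
    if shape <= 0 or scale <= 0:                # keep the optimiser inside the parameter space
        return np.inf
    return -np.mean(stats.gamma.logpdf(x, a=shape, scale=scale))

result = optimize.minimize(negative_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)   # maximum likelihood estimates of (shape, scale)
```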

Why we choose this particular loss function is a whole other question, or, rather, a whole other field of research. Others are possible, but this one is a nice start.

Estimator asymptotic optimality

See large sample theory.

Fisher Information

Used in ML theory and kinda-sorta in robust estimation. A matrix that tells you how much information the data carry about your parameter estimates, or equivalently how sharply curved the expected log-likelihood is around the true parameter. See large sample theory.
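To pin the definition down, in the same notation as the objective above:

$$\mathcal{I}(\theta) = \mathbb{E}_{\theta}\!\left[\nabla_\theta \log f(X;\theta)\,\nabla_\theta \log f(X;\theta)^{\top}\right] = -\mathbb{E}_{\theta}\!\left[\nabla^2_\theta \log f(X;\theta)\right],$$

with the second equality holding under the usual regularity conditions; the observed information, the negative Hessian of the log-likelihood at the MLE, is the finite-sample stand-in.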

Fun features with exponential families

TBD

Conditional transformation models

Look cool. But what are they? See HoKB14, HoMB15.

The method of sieves

Nonparametrics and maximum likelihood?

GeHw82:

Maximum likelihood estimation often fails when the parameter takes values in an infinite dimensional space. For example, the maximum likelihood method cannot be applied to the completely nonparametric estimation of a density function from an iid sample; the maximum of the likelihood is not attained by any density. In this example, as in many other examples, the parameter space (positive functions with area one) is too big. But the likelihood method can often be salvaged if we first maximize over a constrained subspace of the parameter space and then relax the constraint as the sample size grows. This is Grenander's “method of sieves.” Application of the method sometimes leads to new estimators for familiar problems, or to a new motivation for an already well-studied technique.
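In symbols, roughly: choose a sequence of tractable subsets $\Theta_1 \subseteq \Theta_2 \subseteq \cdots$ (the sieve) whose union is dense in the full parameter space $\Theta$, and define

$$\hat\theta_n = \operatorname*{arg\,max}_{\theta \in \Theta_{k(n)}} \sum_{i=1}^{n} \log f(x_i;\theta),$$

where the sieve index $k(n) \to \infty$ slowly enough with the sample size that the maximiser exists and remains consistent. In the density estimation example, $\Theta_k$ might be densities that are piecewise constant on $k$ bins.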

Variants

Wherein we resolve lexical confusion using brute-force clarity.

What is the difference between a partial likelihood, profile likelihood and marginal likelihood?

Conditional likelihood

You have incidental nuisance parameters? If you can find a sufficient statistic for them and then condition upon it, they vanish.
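A textbook illustration (the standard Poisson-pair example, nothing specific to this notebook): suppose $X_1 \sim \operatorname{Poisson}(\lambda\psi)$ and $X_2 \sim \operatorname{Poisson}(\lambda)$ independently, with $\psi$ of interest and $\lambda$ a nuisance. Conditioning on the sufficient statistic $S = X_1 + X_2$ gives

$$X_1 \mid S = s \;\sim\; \operatorname{Binomial}\!\left(s,\; \frac{\psi}{1+\psi}\right),$$

a conditional likelihood for $\psi$ from which $\lambda$ has vanished.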

Marginal likelihood

“the marginal probability of the data given the model, with marginalization performed over unobserved variables”

The version that crops up in Bayesian inference as the model evidence, where the "unobserved variables" are the parameters themselves; it also turns up in latent-variable and random-effects models, where the latent variables get integrated out.
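Concretely, for data $x$ and a model $M$ with unobserved quantities $\theta$ (parameters or latent variables), the marginal likelihood is

$$p(x \mid M) = \int p(x \mid \theta, M)\, p(\theta \mid M)\, \mathrm{d}\theta.$$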

Profile likelihood

Not quite the same as marginal likelihood: rather than integrating the nuisance parameters out, you maximise them out. Writing $\theta = (\psi, \lambda)$ with $\psi$ the parameter of interest and $\lambda$ the nuisance, the profile likelihood is $L_p(\psi) = \sup_{\lambda} L(\psi, \lambda)$, which one then treats (with some care) like an ordinary likelihood for $\psi$.

Partial likelihood

What's that? I will start by mangling an introduction from the internet (Where?)

Let $T_i$ denote the observed time (either censoring time or event time) for subject $i$, and let $C_i$ be the indicator that the time corresponds to an event (i.e. $C_i = 1$ if the event occurred and $C_i = 0$ if the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

$$\lambda(t \mid X_i) = \lambda_0(t)\exp(X_i \cdot \beta).$$

This expression gives the hazard at time $t$ for an individual with covariate vector (explanatory variables) $X_i$. Based on this hazard function, a partial likelihood can be constructed from the dataset as

$$L(\beta) = \prod_{i : C_i = 1} \frac{\theta_i}{\sum_{j : T_j \ge T_i} \theta_j},$$

where $\theta_j = \exp(X_j \cdot \beta)$ and $X_1, \ldots, X_n$ are the covariate vectors for the $n$ independently sampled individuals in the dataset (treated here as column vectors).

The corresponding log partial likelihood is

$$\ell(\beta) = \sum_{i : C_i = 1} \left( X_i \cdot \beta - \log \sum_{j : T_j \ge T_i} \theta_j \right).$$

This function can be maximized over $\beta$ to produce maximum partial likelihood estimates of the model parameters.

The partial score is

$$\ell'(\beta) = \sum_{i : C_i = 1} \left( X_i - \frac{\sum_{j : T_j \ge T_i} \theta_j X_j}{\sum_{j : T_j \ge T_i} \theta_j} \right),$$

and the Hessian of the partial log likelihood is

$$\ell''(\beta) = -\sum_{i : C_i = 1} \left( \frac{\sum_{j : T_j \ge T_i} \theta_j X_j X_j^{\top}}{\sum_{j : T_j \ge T_i} \theta_j} - \frac{\left[\sum_{j : T_j \ge T_i} \theta_j X_j\right]\left[\sum_{j : T_j \ge T_i} \theta_j X_j^{\top}\right]}{\left[\sum_{j : T_j \ge T_i} \theta_j\right]^2} \right).$$

Using this score function and Hessian matrix, the partial likelihood can be maximized in the usual fashion. The inverse of the Hessian matrix, evaluated at the estimate of $\beta$, can be used as an approximate variance-covariance matrix for the estimate, also in the usual fashion.
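A minimal numerical sketch of the above, with a toy dataset that is entirely made up (and no tied event times assumed), maximising the log partial likelihood directly with scipy:

```python
import numpy as np
from scipy import optimize

# Toy data: observed times, event indicators (1 = event, 0 = censored), covariates.
T = np.array([5.0, 8.0, 3.0, 12.0, 7.0, 9.0])
C = np.array([1, 0, 1, 1, 0, 1])
X = np.array([[0.5, 1.0],
              [1.2, 0.0],
              [0.3, 1.0],
              [2.0, 0.0],
              [0.7, 1.0],
              [1.5, 1.0]])

def negative_log_partial_likelihood(beta):
    """Negative Cox log partial likelihood, assuming no tied event times."""
    eta = X @ beta                         # linear predictors X_j . beta
    total = 0.0
    for i in np.flatnonzero(C == 1):       # sum over subjects with an observed event
        risk_set = T >= T[i]               # everyone still at risk at time T_i
        total += eta[i] - np.log(np.sum(np.exp(eta[risk_set])))
    return -total

result = optimize.minimize(negative_log_partial_likelihood, x0=np.zeros(X.shape[1]))
print(result.x)         # maximum partial likelihood estimate of beta
print(result.hess_inv)  # approximate variance-covariance matrix, via the BFGS inverse Hessian
```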

Pseudo-likelihood

Dunno. As seen in spatial point processes and other undirected random fields.

From BaTu00:

Originally Besag (1975, 1977) defined the pseudolikelihood of a finite set of random variables $X_1, \ldots, X_n$ as the product of the conditional likelihoods of each $X_i$ given the other variables $\{X_j, j \neq i\}$. This was extended (Besag, 1977; Besag et al., 1982) to point processes, for which it can be viewed as an infinite product of infinitesimal conditional probabilities.
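In symbols, Besag's finite-case definition is

$$\operatorname{PL}(\theta) = \prod_{i=1}^{n} p\!\left(x_i \mid \{x_j : j \neq i\};\, \theta\right),$$

which replaces the intractable joint normalising constant with a product of (usually much more tractable) full conditionals.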

Quasi-likelihood

The casual explanation I got was that this is somewhat like maximum likelihood inference, but based solely upon an assumed relationship between the mean and the variance of the response, rather than a full distributional model; oh, and p.s. if you have over-dispersed data in a Poisson regression this will help you.

AFAICT this is exclusively relevant to generalised linear models.
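In the GLM notation this presumably refers to (mean $\mu_i(\beta)$, variance function $V$, dispersion $\phi$; my notation, not anything fixed above), one replaces the score equations of a full likelihood with quasi-score estimating equations

$$\sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta}\, \frac{y_i - \mu_i}{\phi\, V(\mu_i)} = 0,$$

which coincide with the ML score equations when an exponential-family model with that mean-variance relationship exists, and still yield consistent estimates when it does not; e.g. quasi-Poisson keeps $V(\mu) = \mu$ but estimates $\phi$ instead of fixing it at 1, which is the over-dispersion fix mentioned above.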

H-likelihood

…is some kind of extension to quasi-likelihood, for hierarchical generalised linear models.

Refs