Generalised linear models

Using the machinery of linear regression to predict in somewhat more general regression settings, via least-squares or quasi-likelihood approaches. This means you are still doing something like Maximum Likelihood regression, but outside the setting of homoskedastic Gaussian noise and linear response.

Classic linear models

Consider the original linear model. We have a (column) vector \(\mathbf{y}=[y_1,y_2,\dots,y_n]^T\) of \(n\) observations, and an \(n\times p\) matrix \(\mathbf{X}\) of \(p\) covariates, where each column corresponds to a different covariate and each row to a different observation.

We assume the observations are related to the covariates by

$$ \mathbf{y}=\mathbf{Xb}+\mathbf{e} $$

where \(\mathbf{b}=[b_1,b_2,\dots,b_p]^T\) gives the parameters of the model, which we don’t yet know. We call \(\mathbf{e}\) the “residual” vector. Legendre and Gauss pioneered the estimation of the parameters of a linear model by minimising the squared residuals, \(\mathbf{e}^T\mathbf{e}\), i.e.

$$ \begin{aligned} \hat{\mathbf{b}} &=\operatorname{arg min}_\mathbf{b} (\mathbf{y}-\mathbf{Xb})^T (\mathbf{y}-\mathbf{Xb})\\ &=\operatorname{arg min}_\mathbf{b} \|\mathbf{y}-\mathbf{Xb}\|_2\\ &=\mathbf{X}^+\mathbf{y} \end{aligned} $$

where the pseudo-inverse \(\mathbf{X}^+\) is found by a numerical solver of some kind, using one of the many carefully optimised methods that exist for least squares.
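A minimal numerical sketch, assuming NumPy; the design matrix and coefficients here are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))                      # n observations of p covariates
b_true = np.array([1.0, -2.0, 0.5])              # hypothetical "true" parameters
y = X @ b_true + 0.1 * rng.normal(size=n)        # observations with noise

# Solve the least-squares problem with a numerically stable (SVD-based) solver
# rather than forming the pseudo-inverse explicitly.
b_hat, residual_ss, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
print(b_hat)                                     # should be close to b_true

In practice one rarely forms \(\mathbf{X}^+\) explicitly; QR- or SVD-based solvers are cheaper and more numerically stable.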

So far there is no statistical argument, merely function approximation.

However, it turns out that if you assume the \(e_i\) are i.i.d. random errors in the observations (or at least independent with constant variance), then there is also a statistical justification for this idea.

TODO: more exposition of these. Linkage to Maximum likelihood.
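A minimal sketch of that linkage: if we assume the errors are i.i.d. \(\mathcal{N}(0,\sigma^2)\), the log-likelihood of the observations is

$$ \ell(\mathbf{b},\sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) -\frac{1}{2\sigma^2}(\mathbf{y}-\mathbf{Xb})^T(\mathbf{y}-\mathbf{Xb}), $$

so for any fixed \(\sigma^2\), maximising the likelihood over \(\mathbf{b}\) is the same problem as minimising the squared residuals; the least-squares estimate is the maximum likelihood estimate under homoskedastic Gaussian noise.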

Generalised linear models

The original extension. TODO: explain.
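Schematically, following the standard formulation: each response \(y_i\) is drawn from an exponential-family distribution with mean \(\mu_i\), and

$$ g(\mu_i)=\eta_i=\mathbf{x}_i^T\mathbf{b}, $$

where \(\eta_i\) is the linear predictor and \(g\) the link function; the ingredients listed below are exactly these choices.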

To learn:

Response distribution

TBD. What constraints do we have here? (Classically, the response is assumed to come from an exponential dispersion family; quasi-likelihood, below, relaxes this to a mean–variance relationship.)

Linear Predictor

The linear predictor is \(\eta=\mathbf{Xb}\). An invertible (typically monotonic) link function \(g\) relates it to the mean of the response distribution.
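A minimal sketch of fitting such a model, assuming the statsmodels package, with a Poisson response and its canonical log link (the data are invented for illustration):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))     # intercept plus two covariates
b_true = np.array([0.5, 1.0, -0.5])              # hypothetical coefficients
y = rng.poisson(np.exp(X @ b_true))              # Poisson response, log link

# Fit a GLM: Poisson response distribution, canonical log link.
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(result.params)                             # should be close to b_true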

Quasi-likelihood

A generalisation of likelihood, of use in some tricky corners of GLMs. Wedd74 used it to provide a unified GLM/ML rationale.

I don’t yet understand it.

Heyde says (Heyd97):

Historically there are two principal themes in statistical parameter estimation theory

It is now possible to unify these approaches under the general description of quasi-likelihood and to develop the theory of parameter estimation in a very general setting. […]

It turns out that the theory needs to be developed in terms of estimating functions (functions of both the data and the parameter) rather than the estimators themselves. Thus, our focus will be on functions that have the value of the parameter as a root rather than the parameter itself.
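A minimal sketch of such an estimating function in the GLM setting, assuming only that a mean function \(\mu_i(\mathbf{b})\) and a variance function \(\operatorname{Var}(y_i)=\phi V(\mu_i)\) have been specified:

$$ U(\mathbf{b})=\sum_{i=1}^n \frac{\partial \mu_i}{\partial \mathbf{b}}\, \frac{y_i-\mu_i}{\phi V(\mu_i)}, $$

and the quasi-likelihood estimate is the root \(U(\hat{\mathbf{b}})=\mathbf{0}\). When \(V\) happens to be the variance function of an exponential-family distribution, this reproduces the usual GLM score equations, which is the sense in which it unifies the two themes.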

Hierarchical generalised linear models

GLM + hierarchical model = HGLM.

Generalised additive models

Generalised generalised linear models.

Semiparametric simultaneous discovery of non-linear transformations of the predictors and the response curve, under the assumption that the effects are additive in the transformed predictors:

$$ g(\operatorname{E}(Y))=\beta_0 + f_1(x_1) + f_2(x_2)+ \cdots + f_m(x_m). $$
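A minimal sketch of fitting such a model, assuming the pygam package, where each \(f_j\) is represented by a penalised spline (the data are invented for illustration):

import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-3, 3, size=(n, 2))
# additive truth: a sinusoid in the first covariate plus a quadratic in the second
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n)

# One smooth term per covariate; LinearGAM uses an identity link and Gaussian response.
gam = LinearGAM(s(0) + s(1)).fit(X, y)
y_hat = gam.predict(X)                           # fitted additive predictions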

These have now also been generalised in the obvious way.

Generalised additive models for location, scale and shape

Folding GARCH and other regression models into GAMs.

GAMLSS website:

GAMLSS is a modern distribution-based approach to (semiparametric) regression models, where all the parameters of the assumed distribution for the response can be modelled as additive functions of the explanatory variables
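Schematically, following the usual GAMLSS formulation: each parameter \(\theta_k\) of the response distribution (location, scale, shape) gets its own link and its own additive predictor,

$$ g_k(\theta_k)=\beta_{0k}+f_{1k}(x_{1k})+\cdots+f_{m_k k}(x_{m_k k}),\qquad k=1,\dots,K, $$

so that, for example, the scale of the response can depend on covariates just as its location does.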

Generalised hierarchical additive models for location, scale and shape

Exercise for the student.

Refs