
Generalised linear models

Using the machinery of linear regression to predict in somewhat more general regression settings, via least-squares or quasi-likelihood approaches. This means you are still doing something like maximum-likelihood regression, but outside the setting of homoskedastic Gaussian noise and linear response.

Classic linear models

Consider the original linear model. We have a (column) vector \(\mathbf{y}=[y_1,y_2,\dots,y_n]^T\) of \(n\) observations, and an \(n\times p\) matrix \(\mathbf{X}\) of \(p\) covariates, where each column corresponds to a different covariate and each row to a different observation.

We assume the observations are related to the covariates by

\begin{equation*} \mathbf{y}=\mathbf{Xb}+\mathbf{e} \end{equation*}

where \(\mathbf{b}=[b_1,b_2,\dots,b_p]^T\) gives the parameters of the model, which we don’t yet know. We call \(\mathbf{e}\) the “residual” vector. Legendre and Gauss pioneered the estimation of the parameters of a linear model by minimising the squared residuals, \(\mathbf{e}^T\mathbf{e}\), i.e.

\begin{align*} \hat{\mathbf{b}} &=\operatorname{arg min}_\mathbf{b} (\mathbf{y}-\mathbf{Xb})^T (\mathbf{y}-\mathbf{Xb})\\ &=\operatorname{arg min}_\mathbf{b} \|\mathbf{y}-\mathbf{Xb}\|_2^2\\ &=\mathbf{X}^+\mathbf{y} \end{align*}

where \(\mathbf{X}^+\) is the Moore-Penrose pseudo-inverse of \(\mathbf{X}\), which we find with a numerical solver of some kind, using one of the many carefully optimised methods that exist for least-squares problems.
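By way of illustration, here is a minimal numerical sketch with NumPy; the simulated data and all variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.1 * rng.normal(size=n)

# np.linalg.lstsq returns the minimum-norm least-squares solution,
# equivalent to applying the pseudo-inverse X^+ to y.
b_hat, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
print(b_hat)  # should be close to b_true
```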

So far there is no statistical argument, merely function approximation.

However, it turns out that if you assume the \(e_i\) are random i.i.d. errors in the observations (or at least independent with constant variance), then there is also a statistical justification for this idea.
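To sketch the maximum-likelihood linkage: if \(\mathbf{e}\sim\mathcal{N}(\mathbf{0},\sigma^2\mathbf{I})\), the log-likelihood of \(\mathbf{b}\) is

\begin{align*} \log L(\mathbf{b}) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{Xb}\|_2^2, \end{align*}

which is maximised in \(\mathbf{b}\) exactly where the squared-residual term is minimised, so the least-squares estimate is also the maximum-likelihood estimate under Gaussian noise.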

TODO: more exposition of these.

Generalised linear models

The original extension. We keep the linear predictor \(\mathbf{Xb}\), but relate it to the mean of the response through a monotonic link function \(g\), and allow the response to follow a distribution from an exponential family rather than insisting on Gaussian noise:

\begin{equation*} g(\operatorname{E}(Y))=\mathbf{Xb}. \end{equation*}

Logistic regression (binomial response, logit link) and Poisson regression (log link) are the classic special cases; ordinary linear regression is recovered by a Gaussian response with identity link.
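As a concrete illustration, a minimal Poisson-regression sketch using the statsmodels GLM API; the simulated data and variable names are my own:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = sm.add_constant(rng.normal(size=(n, 2)))  # intercept column + 2 covariates
b_true = np.array([0.3, 0.5, -0.25])
y = rng.poisson(np.exp(X @ b_true))  # log link: E(Y) = exp(Xb)

# statsmodels fits GLMs by iteratively reweighted least squares (IRLS).
model = sm.GLM(y, X, family=sm.families.Poisson())
result = model.fit()
print(result.params)  # should be close to b_true
```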

To learn:

  1. When can we do this? e.g. must the response be from an exponential family for really real? What happens if not?
  2. Does anything funky happen with regularisation? What?
  3. When you combine all these fancy GLM extensions, how the hell do you work out whether your parameters are identifiable?
  4. Non-monotonic relations between predictors and response: how does one handle these?
  5. Model selection?

Response distribution

TBD. What constraints do we have here?

Linear Predictor

Quasi-likelihood

A generalisation of likelihood, of use in some tricky corners of GLMs. Wedd74 used it to provide a unified GLM/ML rationale.

I don’t yet understand it.

Heyde says (Heyd97):

Historically there are two principal themes in statistical parameter estimation theory

least squares (LS):
introduced by Gauss and Legendre and founded on finite sample considerations (minimum distance interpretation)
maximum likelihood (ML):
introduced by Fisher and with a justification that is primarily asymptotic (minimum size asymptotic confidence intervals, ideas of which date back to Laplace)

It is now possible to unify these approaches under the general description of quasi-likelihood and to develop the theory of parameter estimation in a very general setting. […]

It turns out that the theory needs to be developed in terms of estimating functions (functions of both the data and the parameter) rather than the estimators themselves. Thus, our focus will be on functions that have the value of the parameter as a root rather than the parameter itself.
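In applied work, the most common quasi-likelihood move is a quasi-Poisson (or quasi-binomial) regression: keep the GLM mean/variance relation but let a free dispersion parameter absorb over- or under-dispersion. A sketch with statsmodels, estimating the dispersion from the Pearson chi-square statistic; the simulated overdispersed counts and names are mine:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = sm.add_constant(rng.normal(size=n))
mu = np.exp(0.2 + 0.7 * X[:, 1])
y = rng.negative_binomial(5, 5.0 / (5.0 + mu))  # counts with variance > mean

# Poisson mean structure, but the dispersion is estimated from the
# Pearson chi-square statistic rather than fixed at 1 (quasi-Poisson).
result = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
print(result.scale)  # estimated dispersion; > 1 signals overdispersion
```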

Hierarchical generalised linear models

GLM + hierarchical model = HGLM.

Generalised additive models

Generalised generalised linear models.

Semiparametric, simultaneous discovery of non-linear transformations of the predictors and the response curve, under the assumption that the effects are additive in the transformed predictors:

\begin{equation*} g(\operatorname{E}(Y))=\beta_0 + f_1(x_1) + f_2(x_2)+ \cdots + f_m(x_m). \end{equation*}
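For instance, a minimal sketch using the third-party pygam package (an assumption on my part; the data and smoothing choices are mine):

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(-3, 3, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n)

# One penalised-spline smooth per predictor; additivity is the modelling
# assumption, and the smooths f_j are learned from the data.
gam = LinearGAM(s(0) + s(1)).fit(X, y)
gam.summary()
```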

These have now also been generalised in the obvious way (see GAMLSS, next).

Generalised additive models for location, scale and shape

Folding GARCH and other regression models into GAMs.

GAMLSS website:

GAMLSS is a modern distribution-based approach to (semiparametric) regression models, where all the parameters of the assumed distribution for the response can be modelled as additive functions of the explanatory variables
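Schematically (following StRO07), each parameter \(\theta_k\) of the response distribution, typically location \(\mu\), scale \(\sigma\), and shape parameters \(\nu,\tau\), gets its own link function and its own additive predictor:

\begin{equation*} g_k(\theta_k)=\beta_{k0} + f_{1k}(x_1) + f_{2k}(x_2) + \cdots + f_{mk}(x_m). \end{equation*}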

Generalised hierarchical additive models for location, scale and shape

Exercise for the student.

Refs

BuHT89
Buja, A., Hastie, T., & Tibshirani, R. (1989) Linear Smoothers and Additive Models. The Annals of Statistics, 17(2), 453–510.
CuDE06
Currie, I. D., Durban, M., & Eilers, P. H. C.(2006) Generalized linear array models with applications to multidimensional smoothing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(2), 259–280. DOI.
DRGV00
Stasinopoulos, D. M., Rigby, R. A., Heller, G., Voudouris, V., & De Bastiani, F. (n.d.) Flexible Regression and Smoothing: Using GAMLSS in R.
FrHT10
Friedman, J., Hastie, T., & Tibshirani, R. (2010) Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1), 1–22. DOI.
Hans10
Hansen, N. R.(2010) Penalized maximum likelihood estimation for generalized linear point processes. arXiv:1003.0848 [Math, Stat].
HaTi90
Hastie, T. J., & Tibshirani, R. J.(1990) Generalized additive models. (Vol. 43). CRC Press
Heyd97
Heyde, C. C.(1997) Quasi-likelihood and its application: a general approach to optimal parameter estimation. New York: Springer
Hoss09
Hosseinian, S. (2009) Robust inference for generalized linear models: binary and Poisson regression. École Polytechnique Fédérale de Lausanne
JaMi12
Jacobson, L. D., & Miller, T. J.(2012) Albatross-Bigelow survey data calibration for American lobsters.
LeNP06
Lee, Y., Nelder, J. A., & Pawitan, Y. (2006) Generalized linear models with random effects. Boca Raton, FL: Chapman & Hall/CRC
MFHK12
Mayr, A., Fenske, N., Hofner, B., Kneib, T., & Schmid, M. (2012) Generalized additive models for location, scale and shape for high dimensional data—a flexible approach based on boosting. Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(3), 403–427. DOI.
Mccu84
McCullagh, P. (1984) Generalized linear models. European Journal of Operational Research, 16(3), 285–292. DOI.
NeBa04
Nelder, J. A., & Baker, R. J.(2004) Generalized Linear Models. In Encyclopedia of Statistical Sciences. John Wiley & Sons, Inc.
NeWe72
Nelder, J. A., & Wedderburn, R. W. M.(1972) Generalized Linear Models. Journal of the Royal Statistical Society. Series A (General), 135(3), 370–384. DOI.
PrLu13
Proietti, T., & Luati, A. (2013) Generalised Linear Spectral Models (CEIS Research Paper No. 290). Tor Vergata University, CEIS
SGVV13
Scandroglio, G., Gori, A., Vaccaro, E., & Voudouris, V. (2013) Estimating VaR and ES of the spot price of oil using futures-varying centiles. International Journal of Financial Engineering and Risk Management, 1(1), 6–19. DOI.
StRO07
Stasinopoulos, D. M., Rigby, R. A., & others. (2007) Generalized additive models for location scale and shape (GAMLSS) in R. Journal of Statistical Software, 23(7), 1–46. DOI.
Wedd74
Wedderburn, R. W. M.(1974) Quasi-likelihood functions, generalized linear models, and the Gauss—Newton method. Biometrika, 61(3), 439–447. DOI.
Wedd76
Wedderburn, R. W. M.(1976) On the existence and uniqueness of the maximum likelihood estimates for certain generalized linear models. Biometrika, 63(1), 27–32. DOI.
Wood08
Wood, S. N.(2008) Fast stable direct fitting and smoothness selection for generalized additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(3), 495–518. DOI.
XiWJ14
Xia, T., Wang, X.-R., & Jiang, X.-J. (2014) Asymptotic properties of maximum quasi-likelihood estimator in quasi-likelihood nonlinear models with misspecified variance function. Statistics, 48(4), 778–786. DOI.