Survival analysis and reliability

Hazard rates, proportional hazard regression, life testing, mean time to failure

Estimating survival rates

Here’s the set-up: given a data set of individuals’ lifespans, you would like to infer the distribution of those lifespans, i.e. analyse when people die, things break, and so on. The statistical problem of estimating how long lives last is complicated somewhat by the particular structure of the data – loosely, “every person dies at most one time” – and certain characteristic difficulties arise, such as right-censoring: if you are looking at data from an experiment and not all your subjects have died yet, they presumably die later, but you don’t know when.

Handily, the tools one invents to solve this kind of problem end up being useful to solve other problems, such as point process inference.

So let’s say you have a random variable \(X\) with positive support, according to which the lifetimes of your people (components, machines, whatever) are distributed, and which possesses a pdf \(f_X(t)\) and cdf \(F_X(t)\).

We define several useful functions:

The survival function

\[S(t):=1-F(t)\]

the hazard function

\[\lambda(t):=f(t)/S(t)\]

the cumulative hazard function

\[\Lambda(t):=\int_0^t\lambda(s)\,\textrm{d} s.\]

Why? Because it happens to come out nicely if we do that, and these functions acquire intuitive interpretations once we squint at them a bit. The hazard function turns out to be the probability density for a death at time \(t\) given that one has not yet occurred; the survival function is the probability of an individual surviving past time \(t\); and so on.

Using the chain rule (note that \(f(t)=-S'(t)\), so \(\lambda(t)=-\frac{\textrm{d}}{\textrm{d}t}\log S(t)\)), we can find the following useful relation:

\[S(t)=\exp[-\Lambda (t)]={\frac {f(t)}{\lambda (t)}}\]
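These identities are easy to check numerically. Here is a minimal sketch, assuming scipy is available and picking a Weibull law purely as an example:

```python
# Numerical sanity check of the relations between S, lambda and Lambda,
# using an arbitrary Weibull distribution as the example.
import numpy as np
from scipy import stats
from scipy.integrate import cumulative_trapezoid

dist = stats.weibull_min(c=1.5, scale=2.0)   # example shape and scale

t = np.linspace(0.0, 5.0, 2001)
S = dist.sf(t)                               # S(t) = 1 - F(t)
hazard = dist.pdf(t) / S                     # lambda(t) = f(t) / S(t)
Lambda = cumulative_trapezoid(hazard, t, initial=0.0)  # int_0^t lambda(s) ds

# S(t) = exp(-Lambda(t)) should hold, up to quadrature error
assert np.allclose(S, np.exp(-Lambda), atol=1e-3)
```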

Cox proportional hazards

A classic, in which we don’t care about the baseline rate, just treatment effects. We assume the following model for our data, with measured predictors \(X_j\). The magic trick is that the nuisance baseline hazard rate \(\lambda_0\) cancels out, which is nice in medical applications, since it is definitionally the thing we can’t change.

\[\lambda(t) = \lambda_0(t)\exp(\beta_1 X_1 +\beta_2 X_2 +\dots+\beta_p X_p) = \lambda_0(t) \exp(\beta'X).\]

The resulting partial likelihood is

\[ L(\beta)=\prod_{r\in D}\frac{\exp\beta'x_r}{\sum_{j\in R_r}\exp\beta'x_j} \]

where \(D\) indexes the observed deaths and \(R_r\) is the risk set at the \(r\)th death time, i.e. those individuals still alive and uncensored just before it.
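To make that concrete, here is a minimal numpy sketch of the partial log-likelihood, ignoring tied death times for simplicity; the function and variable names are my own invention, and in practice you would reach for a package such as lifelines.

```python
# A minimal sketch of the Cox partial log-likelihood in plain numpy.
# Ties between death times are ignored (no Breslow/Efron correction).
import numpy as np

def cox_partial_loglik(beta, T, E, X):
    """T: (n,) event/censoring times; E: (n,) 1 = death observed, 0 = censored;
    X: (n, p) covariates; beta: (p,) coefficients. Names are hypothetical."""
    order = np.argsort(-T)                  # sort by time, descending
    T, E, X = T[order], E[order], X[order]
    eta = X @ beta                          # linear predictor beta'x
    # Cumulative sum over descending times = sum over the risk set
    # R_r = {j : T_j >= T_r} for each candidate death time.
    log_risk = np.log(np.cumsum(np.exp(eta)))
    # Sum over observed deaths only.
    return np.sum(E * (eta - log_risk))

# Toy usage on simulated proportional-hazards data.
rng = np.random.default_rng(0)
n, p = 200, 2
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3])
T = rng.exponential(scale=np.exp(-X @ beta_true))
E = (rng.random(n) < 0.8).astype(float)    # toy censoring indicators
print(cox_partial_loglik(beta_true, T, E, X))
```

Maximising this in \(\beta\) (e.g. by handing its negation to a generic optimiser) gives the usual Cox estimates without ever touching \(\lambda_0\).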

It seems one could use a more general effect model than a basic linear link and have everything still work, but I won’t look into that here; my purposes for now involve identifying the baseline rate itself, not necessarily treatment effects.

Nelson-Aalen estimates

a.k.a. the empirical cumulative hazard function estimator.

The original Aalen paper on this is famously beautiful, thanks to a clever construction of a lifetime point process and its associated martingale. Clear and worth reading. Spoiler: despite the elegant derivation, the actual estimator is something a high-school student could probably discover by guessing.
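For the record, the estimator is

\[\hat\Lambda(t)=\sum_{t_i\le t}\frac{d_i}{n_i},\]

where \(d_i\) is the number of deaths at the \(i\)th distinct death time \(t_i\) and \(n_i\) the number of individuals still at risk just before it. A minimal numpy sketch, with hypothetical function and argument names:

```python
# A minimal Nelson-Aalen sketch in numpy; names are hypothetical.
import numpy as np

def nelson_aalen(T, E):
    """T: (n,) observed times; E: (n,) 1 = death, 0 = right-censored.
    Returns (distinct death times, cumulative hazard at those times)."""
    times = np.unique(T[E == 1])            # distinct observed death times
    # n_i: individuals with T_j >= t_i, i.e. still at risk just before t_i
    at_risk = np.array([(T >= t).sum() for t in times])
    # d_i: deaths observed exactly at t_i
    deaths = np.array([((T == t) & (E == 1)).sum() for t in times])
    return times, np.cumsum(deaths / at_risk)
```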

TBC.

Other reliability stuff

Reliawiki has good stuff, e.g. comprehensive docs on the Weibull law. It’s in support of some software package they are trying to sell, I think?

Refs