Poisson processes

Usefulness: 🔧 🔧 🔧
Novelty: 💡
Uncertainty: 🤪
Incompleteness: 🚧 🚧

\(\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}{\mathbb{I}}\)

Poisson processes are possibly the simplest subordinators, i.e. non-decreasing Lévy processes. They pop up everywhere, especially as a representation of point processes, and as the second continuous-time stochastic process anyone learns, after Brownian motion.

A Poisson process \(\{\mathsf{n}\}\sim \operatorname{PoisP}(\lambda)\) is a stochastic process whose inter-occurrence times are independently and identically distributed such that \(\mathsf{t}_i-\mathsf{t}_{i-1}\sim\operatorname{Exp}(\lambda)\) (rate parameterization). By convention, we set all \(\mathsf{t}_i\geq 0\) and \(\mathsf{n}(0)=0\) a.s. The counting process that increases by one at each event is one convenient representation for such a process. In that case we think of the paths of the process as \(\mathsf{n}: \mathbb{R}^+\to\mathbb{Z}^+\) such that \(\mathsf{n}(t)\equiv \sum_{i=1}^\infty\mathbb{I}_{\{\mathsf{t}_i\leq t\}}\).

At some point I’ll get rmarkdown enabled on this blog, and then the following will render as a representation of such a creature, based on Matt Motoki’s answer on Stack Overflow.

set.seed(1234)
lambda <- 17
# length of the time horizon for the simulation
T_length <- 31

last_arrival <- 0
arrival_time <- c()
inter_arrival <- rexp(1, rate = lambda)
# accumulate exponential inter-arrival times until we pass the horizon
while (inter_arrival + last_arrival < T_length) {
  last_arrival <- inter_arrival + last_arrival
  arrival_time <- c(arrival_time, last_arrival)
  inter_arrival <- rexp(1, rate = lambda)
}
n <- length(arrival_time)
counts <- 1:n

# plot the counting process as a right-continuous step function:
# filled points at the jumps, open points at the left limits
plot(arrival_time, counts, pch = 16, ylim = c(0, n))
points(arrival_time, c(0, counts[-n]))
segments(
  x0 = c(0, arrival_time[-n]),
  y0 = c(0, counts[-n]),
  x1 = arrival_time,
  y1 = c(0, counts[-n])
)

It is easy to show that \(\mathsf{n}(t)\sim\textrm{Pois}(\lambda t)\), where the pmf of a Poisson RV is

\[f(k;\lambda)={\frac {\lambda^{k}}{k!}}e^{-\lambda}.\]

It is a standard result that the increments of such a process over disjoint intervals are independent, and each increment has a Poisson distribution. For \(t \geq s\):

\[\mathsf{n}(t)-\mathsf{n}(s) \sim \textrm{Pois}\left((t-s)\lambda\right).\]
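As a quick sanity check of the marginal distribution (all the particular numbers here are arbitrary choices of mine), we can simulate many paths and compare the empirical mean and variance of \(\mathsf{n}(t)\) with \(\lambda t\), which should match since a Poisson variate has equal mean and variance:

# count arrivals up to time t across many replications
set.seed(42)
lambda <- 17
t <- 0.5
n_t <- replicate(10000, {
  # 100 inter-arrival draws is comfortably more than needed for this horizon
  sum(cumsum(rexp(100, rate = lambda)) <= t)
})
c(mean = mean(n_t), var = var(n_t), lambda_t = lambda * t)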

Note also the standard result that

\[\lambda=\lim_{h\to 0} \frac{\mathbb{E}\left[\mathsf{n}(t+h)-\mathsf{n}(t)\right]}{h}.\]

We call \(\lambda\) the rate.
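Continuing the simulation above, a crude empirical estimate of the rate is simply the number of arrivals per unit time:

# empirical rate from the earlier simulation: arrivals per unit time
n / T_length  # should be near lambda = 17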

Poisson bridge

Suppose we are given the value of a Poisson process at time \(0\) and time \(1\) and are concerned with some \(t\in(0,1).\) We wish to know the conditional distribution \(P(\mathsf{n}(t)|\mathsf{n}(1)=S)\), i.e. the Poisson bridge. Using the distributions of the increments and the independence property,

\[\begin{aligned}
P[\mathsf{n}(t)=n|\mathsf{n}(1)=S]
&=\frac{P[\mathsf{n}(t)=n\cap \mathsf{n}(1)=S]}{P[\mathsf{n}(1)=S]}\\
&=\frac{P[\mathsf{n}(t)=n\cap \tilde{\mathsf{n}}(1-t)=S-n]}{P[\mathsf{n}(1)=S]}\\
&=\frac{ \frac{(\lambda t)^{n}}{n!}e^{-\lambda t} \frac{(\lambda (1-t))^{S-n}}{(S-n)!}e^{-\lambda (1-t)} }{ \frac{\lambda^S }{S!}e^{-\lambda } }\\
&=\frac{(\lambda t)^{n}(\lambda (1-t))^{S-n}}{\lambda^{S}} \frac{e^{-\lambda t}e^{-\lambda (1-t)}}{e^{-\lambda}} \frac{S!}{(S-n)!n!}\\
&=t^{n} (1-t)^{S-n} \binom{S}{n}\\
&=\operatorname{Binom}(n;S, t),
\end{aligned}\]
where \(\tilde{\mathsf{n}}(1-t):=\mathsf{n}(1)-\mathsf{n}(t)\) is the independent increment over \((t,1]\). So we can simulate a Poisson bridge at a single point \(t<1\) by sampling a Binomial random variable.
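A minimal sketch of that in R (the endpoint \(S\) and time \(t\) here are arbitrary):

# sample the bridge n(t) | n(1) = S at t = 0.3, given S = 10
S <- 10
t <- 0.3
bridge_samples <- rbinom(10000, size = S, prob = t)
mean(bridge_samples)  # should be near S * t = 3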

Hitting time

Consider the hitting time \(\mathsf{t}_{\ell}:=\inf\{s: \mathsf{n}(s) \geq \ell\}\) of a level \(\ell\in \bb{N}\) for a Poisson process \(\mathsf{n}\); since the jumps have unit size, this is just the \(\ell\)-th arrival time.

\[\begin{aligned}
\bb{P}\left[\mathsf{n}(t)<\ell\right]
&=\bb{P}\left[\inf\{s: \mathsf{n}(s) \geq \ell\} >t\right]\\
&=\bb{P}\left[ \mathsf{t}_{\ell} >t\right]\\
&=\bb{P}\left[ \sum_{k=1}^{\ell}\left(\mathsf{t}_{k}-\mathsf{t}_{k-1}\right) >t\right],
\end{aligned}\]

with the convention \(\mathsf{t}_0=0\).

But the inter-arrival times \(\mathsf{t}_{k}-\mathsf{t}_{k-1} \sim \operatorname{Exp}(\lambda)\) are i.i.d., and the sum of i.i.d. Exponential variates is a Gamma variate, so we have that

\[\mathsf{t}_\ell \sim \operatorname{Gamma}(\ell, \lambda)\]

and hence the density of this random variable is the Gamma density,

\[f(t)={\frac{\lambda^{\ell}}{\Gamma(\ell)}}t^{\ell -1}e^{-\lambda t}.\]
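A quick numerical check of this (parameters arbitrary), comparing hitting times built from exponential inter-arrivals against the corresponding Gamma law:

# hitting time of level ell as a sum of ell i.i.d. Exp(lambda) inter-arrivals
set.seed(1)
ell <- 5
lambda <- 17
hits <- replicate(10000, sum(rexp(ell, rate = lambda)))
# compare the sample with the Gamma(ell, lambda) distribution
ks.test(hits, pgamma, shape = ell, rate = lambda)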

The Gamma process is a kind of dual to the Poisson; we can think of it as a distribution over specific hitting times of an imaginary latent Poisson process.
