# Point processes

Another intermittent obsession, tentatively placemarked. Discrete-state random fields/processes with a continuous index. In general I also assume they are non-lattice and simple, terms I will define if I need them. For now, see DaVe03. If they are additionally Markov (memoryless), they are simply Poisson processes.
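To make the memoryless point concrete, here is a minimal sketch (my own illustration, not from the references) of simulating a homogeneous Poisson process on $[0, T]$ by drawing i.i.d. exponential inter-arrival times; the exponential's lack of memory is exactly the Markov property here.

```python
import random

def simulate_poisson(rate, T, seed=42):
    """Homogeneous Poisson process on [0, T]: cumulative sums of
    i.i.d. Exponential(rate) inter-arrival times, truncated at T."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # memoryless waiting time
        if t > T:
            return times
        times.append(t)

events = simulate_poisson(rate=2.0, T=100.0)
# len(events) should be roughly rate * T = 200
```

The event count is Poisson-distributed with mean `rate * T`, which is a quick sanity check on any simulator.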

The most interesting class for me are the branching processes.

I’ve just spent 6 months thinking about nothing else, so I won’t write much here.

There is a comprehensive introduction in the two-volume Daley and Vere-Jones epic (DaVe03, DaVe08).

A curious thing is that much point-process estimation theory concerns estimating statistics from a single realisation of the process, whereas in practice you may have many independent realisations. This is not news per se, just a difference of emphasis.

## Temporal point processes

Sometimes including spatiotemporal point processes, depending on mood.

In these, one has an arrow of time, which simplifies things: you know that you “only need to consider the past of a process to understand its future”, which potentially simplifies many calculations about the conditional intensity process. We consider only interactions from the past to the future, rather than some kind of mutual interaction.
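A standard example of this causal structure is the self-exciting (Hawkes) conditional intensity, which connects to the branching processes mentioned above. A minimal sketch with an exponential kernel (parameter names `mu`, `alpha`, `beta` are my own illustrative choices):

```python
import math

def hawkes_intensity(t, past_times, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity of an exponential-kernel Hawkes process:
    lambda*(t) = mu + sum over past events of alpha * exp(-beta * (t - tj)).
    Only events strictly before t contribute -- past influences future,
    never the reverse."""
    return mu + sum(
        alpha * math.exp(-beta * (t - tj))
        for tj in past_times
        if tj < t
    )
```

With no past events the intensity collapses to the background rate `mu`; each past event adds a contribution that decays exponentially with elapsed time.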

In particular, for nice processes you can do fairly cheap likelihood calculations to estimate process parameters etc.

Using the regular point process representation of the probability density of the occurrences, we have the following joint log-likelihood for all the occurrences:

$$
\begin{aligned}
L_\theta(t_{1:N}) &:= -\int_0^T \lambda^*_\theta(t)\,dt + \int_0^T \log \lambda^*_\theta(t)\, dN_t \\
&= -\int_0^T \lambda^*_\theta(t)\,dt + \sum_{j} \log \lambda^*_\theta(t_j)
\end{aligned}
$$
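The formula above is cheap to evaluate once you can compute the intensity and its integral (the compensator). A minimal sketch, assuming the intensity and compensator are supplied by the caller; the homogeneous case below, where the MLE is just $N/T$, is a sanity check:

```python
import math

def point_process_loglik(times, intensity, compensator_T):
    """Joint log-likelihood of event times t_1..t_N on [0, T]:
    L = -integral of intensity over [0, T] + sum_j log intensity(t_j).
    `intensity` is a callable lambda*(t); `compensator_T` is the value
    of its integral over [0, T], supplied by the caller."""
    return -compensator_T + sum(math.log(intensity(t)) for t in times)

# Homogeneous case: intensity(t) = mu, compensator = mu * T, MLE mu_hat = N / T.
T, times = 10.0, [0.5, 1.2, 3.3, 7.8]
mu_hat = len(times) / T
ll = point_process_loglik(times, lambda t: mu_hat, mu_hat * T)
```

For parametric families you would hand this function to a numerical optimiser over $\theta$; the compensator integral is the usual computational bottleneck when it has no closed form.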

I do a lot of this, for example, over at the branching processes notebook, and I have no use at the moment for other types of process, so I won’t say much about other cases here.