# Change of time

### stochastic processes derived by varying the rate of time's passage, which is more convenient than you'd think

TBD. Various notes on a.e. continuous monotonic changes of index in order to render a process “simple” in some sense.

In warping and registration problems you try to align two or more processes. Here, by contrast, the target is some “null”, basic process. This special case is often more computationally tractable or statistically better behaved.

## To explore

- Lamperti representation for continuous-state branching processes.
- Ogata’s time rescaling: intensity estimation for point processes uses this as a statistical test. To understand.

## Subordinator

I’m going to follow Applebaum’s presentation (Appl09), which is brusque without being incomprehensible or unmotivated.

A subordinator is just a one-dimensional Lévy process which happens to be non-decreasing. i.e. A subordinator is an a.s. non-decreasing stochastic process $$\Lambda(t), t \geq 0$$ with $$\Lambda(0)=0$$ and state space $$\mathbb{R}$$ such that

- The increments are stationary
$$\Lambda(t)-\Lambda(s) \sim \Lambda(t-s), \,\forall t \geq s$$
- The increments are independent of the past
$$\Lambda(t)-\Lambda(s)\perp \{\Lambda(u): u \leq s\}, \,\forall t \geq s$$
- The process is stochastically continuous
$$\lim_{t\searrow s} \mathbb{P} (|\Lambda(t)-\Lambda(s)|>\epsilon)=0, \,\forall \epsilon \gt 0$$
- The increments are non-negative
$$\mathbb{P}(\Lambda(t)-\Lambda(s)\lt 0)=0, \,\forall t \geq s$$

The first three are standard Lévy process stuff. The last is only for subordinators.

Some definitions additionally require that the increment distribution be a.s. positive, rather than merely non-negative, or that there be no atom at zero in the increment distribution.
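As a concrete illustration of that definition: a Gamma subordinator can be simulated on a grid because its increments are independent Gamma variables, hence a.s. non-negative, so the path is non-decreasing; running a Brownian motion at the subordinator’s clock then gives a subordinated (variance-gamma-type) process. A minimal numpy sketch, with all parameter and function names my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def gamma_subordinator(t_grid, shape_rate=1.0, scale=1.0, rng=rng):
    """Gamma subordinator on a grid: increments over [t, t+dt] are
    independent Gamma(shape_rate * dt, scale) draws, a.s. non-negative,
    so the cumulative path is non-decreasing."""
    dt = np.diff(t_grid)
    increments = rng.gamma(shape=shape_rate * dt, scale=scale)
    return np.concatenate([[0.0], np.cumsum(increments)])

def subordinated_bm(lam_path, rng=rng):
    """Brownian motion run at the subordinator's clock: increments of
    W(Λ(t)) are N(0, ΔΛ)."""
    d_lam = np.diff(lam_path)
    return np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(d_lam)))])

t = np.linspace(0.0, 10.0, 1001)
lam_path = gamma_subordinator(t)
assert np.all(np.diff(lam_path) >= 0)  # the defining property: non-decreasing

x = subordinated_bm(lam_path)  # a variance-gamma-type time-changed process
```

The grid discretisation is exact here because Gamma increments are infinitely divisible; for a general subordinator one would discretise its Lévy measure instead.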

Curiously, upon giving that definition, many proceed to immediately assert that such a process is a model for a random change of time. This sounds not insane per se, but doesn’t have much in the way of narrative flow. TBD: explain why one would bother doing such an arbitrary thing as changing time in such a fashion.

Anyway I hope to use these to get a handle on time-changed residual tests and Lamperti representations. TBC.

The subordinator may be extended to multiple dimensions by requiring that each coordinate be a.s. non-decreasing. TBC.

## Point process transforms

As used in point process residual goodness of fit tests.

A summary in VeSc04 of the point process flavour:

Knight (Knig70) showed that for any orthogonal sequence of continuous local martingales, by rescaling time for each via its associated predictable process, we form a multivariate sequence of independent standard Brownian motions. Then Meyer (Meye71) extended Knight’s theorem to the case of point processes, showing that given a simple multivariate point process $$\{N_i : i = 1, 2, \ldots, n\}$$, the multivariate point process obtained by rescaling each $$N_i$$ according to its compensator is a sequence of independent Poisson processes, each having intensity 1. Since then, alternative proofs and variations of this result have been given by Brém72, Papa72, AaHo78, Kurt80 and BrNa88. Papangelou (Papa72) gave the following interpretation in the univariate case:

Roughly, moving in $$[0, \infty)$$ so as to meet expected future points at a rate of one per time unit (given at each instant complete knowledge of the past), we meet them at the times of a Poisson process.

[…]

Generalizations of Meyer’s result to point processes on $$\mathbb{R}^d$$ have been established by MeNu86, Nair90 and Scho99. In each case, the method used has been to focus on one dimension of the point process, and rescale each point along that dimension according to the conditional intensity.
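Meyer’s rescaling theorem is what makes the residual goodness-of-fit test work in practice: transform each event time by the compensator and check that the result looks like a unit-rate Poisson process, i.e. that the transformed interevent times look i.i.d. Exp(1). A minimal sketch, using a made-up sinusoidal intensity whose compensator happens to be available in closed form (the intensity, the horizon, and every name here are illustrative assumptions, not anything from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(t):
    """Intensity λ(t) of an inhomogeneous Poisson process (illustrative)."""
    return 2.0 + np.sin(t)

def Lam(t):
    """Compensator Λ(t) = ∫₀ᵗ λ(s) ds, known in closed form for this λ."""
    return 2.0 * t + 1.0 - np.cos(t)

# Simulate by thinning: dominate λ ≤ λ_max, then accept each candidate
# point t with probability λ(t)/λ_max.
T, lam_max = 200.0, 3.0
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
cand = cand[cand < T]
points = cand[rng.uniform(size=cand.size) < lam(cand) / lam_max]

# Time rescaling: the transformed points Λ(t_i) should form a unit-rate
# Poisson process, so their gaps should be i.i.d. Exp(1) — mean near 1.
tau = np.diff(Lam(points))
```

In a real residual test the compensator would come from a fitted conditional-intensity model rather than the truth, and one would follow up with e.g. a Kolmogorov–Smirnov test on `tau` against the Exp(1) distribution.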

## Going Multivariate

As seen in BaPS01 and others. How does multivariate time work then?