
Statistics for long memory processes

Stochastic processes where we know that ancient history is still relevant to future predictions, even given the recent history; how do we analyse such things?
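To fix ideas, here is the usual formalisation for stationary time series (a standard textbook definition, nothing specific to this notebook): long memory, a.k.a. long-range dependence, means the autocovariance $\gamma(k)$ decays so slowly that it is not summable, for example

$$\gamma(k) \sim C\, k^{2H-2} \text{ as } k \to \infty, \quad \tfrac{1}{2} < H < 1, \qquad \text{whence } \sum_{k=1}^{\infty} |\gamma(k)| = \infty,$$

where $H$ is the Hurst parameter. Contrast short-memory processes such as causal ARMA, whose autocovariance decays geometrically and is therefore summable.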

I haven’t said anything about the type of generating process here. Maybe there is an explicit long-range dependency in your process which is not mediated through a hidden state; can we statistically distinguish these cases? Sometimes we can, e.g. [#Küns86], but in general you probably want a model; without one, how will you even measure the memory? (One classical model-free diagnostic is sketched below.)
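Here is a minimal sketch of the classical rescaled-range (R/S) estimate of the Hurst exponent, one model-free diagnostic for long memory. My choice of estimator and tuning (window sizes, number of grid points) is illustrative, not anything endorsed by the references; R/S is also known to be biased in small samples.

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Rescaled-range (R/S) estimate of the Hurst exponent H.

    H near 0.5 suggests short memory; H substantially above 0.5
    suggests long-range dependence. Classical and biased, but simple.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if window_sizes is None:
        # Log-spaced window sizes between 10 and n // 4 (my choice).
        window_sizes = np.unique(
            np.logspace(1, np.log10(n // 4), 20).astype(int))
    log_w, log_rs = [], []
    for w in window_sizes:
        rs = []
        for start in range(0, n - w + 1, w):
            block = x[start:start + w]
            dev = np.cumsum(block - block.mean())  # mean-adjusted partial sums
            r = dev.max() - dev.min()              # range of the partial sums
            s = block.std()                        # scale of the block
            if s > 0:
                rs.append(r / s)
        if rs:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs)))
    # Slope of log E[R/S] against log window size estimates H.
    slope, _ = np.polyfit(log_w, log_rs, 1)
    return slope

# Sanity check: i.i.d. noise should give H near 0.5.
print(hurst_rs(np.random.default_rng(0).standard_normal(10_000)))
```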

There are some obvious process models which have long memory in this sense, such as stack automata, or Hawkes processes with non-exponential kernels; see the simulation sketch below.
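For the Hawkes case, here is a minimal simulation sketch using Ogata’s thinning algorithm with a power-law excitation kernel. The function names and parameter values ($\mu$, $\alpha$, $\beta$, $\gamma$, the horizon $T$) are my own illustrative choices; note the kernel integrates to $\alpha$, so $\alpha < 1$ keeps the process subcritical.

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw_kernel(tau, alpha=0.5, beta=1.0, gamma=0.5):
    # Excitation decays like tau**-(1 + gamma): heavy-tailed, so events
    # from the distant past still raise the intensity (long memory),
    # unlike the classic exponential kernel. Integrates to alpha.
    return alpha * beta * gamma * (1.0 + beta * tau) ** -(1.0 + gamma)

def simulate_hawkes(mu=0.1, T=1_000.0, kernel=powerlaw_kernel):
    """Ogata's thinning algorithm for a linear Hawkes process with
    intensity lambda(t) = mu + sum_i kernel(t - t_i) over past events.
    Valid here because the kernel is non-increasing, so the intensity
    only jumps upward at events and decays between them."""
    events = []
    t = 0.0
    while True:
        # Current intensity upper-bounds the intensity until the next event.
        lam_bar = mu + sum(kernel(t - ti) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(kernel(t - ti) for ti in events)
        if rng.uniform() < lam_t / lam_bar:  # thinning (accept/reject) step
            events.append(t)
    return np.array(events)

ts = simulate_hawkes()
print(f"{ts.size} events on [0, 1000]")
```

This naive version recomputes the full sum over past events at every step, so it is $O(n^2)$; fine for illustration, not for serious use.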

Note that “long memory” makes sense not only for time series but for any random field: spatial fields, or random fields indexed by any number of dimensions, over whatever topology, causal or non-causal. I guess you’d need a notion of distance to make this meaningful, so let’s presume we are on a metric space at least.
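Granting that, one natural generalisation (a sketch, not a canonical definition) calls a field $X$ indexed by a metric space $(\mathcal{T}, d)$ long-range dependent when

$$\operatorname{Cov}(X_s, X_t) \sim C\, d(s,t)^{-\alpha} \quad \text{as } d(s,t) \to \infty,$$

with $\alpha$ small enough that the covariance fails to be integrable over $\mathcal{T}$; for stationary time series this recovers the autocovariance condition above.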

For now, see learning theory for dependent data, which has some concrete results on prediction from time series. Also see ergodic theory, for one perspective on the difficulty of sampling dependent series.

Refs

Künsch, Hans. 1986. “Discrimination between Monotonic Trends and Long-Range Dependence.” Journal of Applied Probability 23 (4): 1025–1030.