The Living Thing / Notebooks : Feedback system identification, not necessarily linear

System Identification
“Model parameter estimation” for stochastic processes, in the argot of signal processing people.
Non-linear system identification
Dropping the assumption that the model has a nice linear transfer function.

Dropping the assumption of linearity in your model of the system. After all, if you have a system whose future evolution is so important to predict, why not try to infer an actually plausible model?

A compact overview is inserted incidentally in Cosma’s review of Fan and Yao (FaYa03), wherein he also recommends Bosq98, TaKa00 and BoBl07.

To reconstruct the state, as opposed to the parameters, you do state filtering. There can be interplay between these steps, if you are doing simulation-based inference.
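
For concreteness (this sketch is mine, not from any of the referenced papers): a minimal bootstrap particle filter in the spirit of Kita96/AMGC02, filtering a made-up noisy logistic map observed with Gaussian error. The model, noise scales and particle count are arbitrary illustrative choices.

```python
import numpy as np

def bootstrap_particle_filter(ys, f, log_obs_lik, x0_sampler, n_particles=1000, rng=None):
    """Bootstrap (sequential importance resampling) filter for a 1-D
    nonlinear state-space model
        x_t = f(x_{t-1}, noise),   y_t ~ p(y_t | x_t),
    returning the filtered posterior mean of the state at each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = x0_sampler(n_particles, rng)          # draw from the prior on x_0
    means = []
    for y in ys:
        particles = f(particles, rng)                 # propagate through the dynamics
        logw = log_obs_lik(y, particles)              # weight by the observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        particles = particles[idx]
    return np.array(means)

# Toy model: a noisy logistic map observed with Gaussian error.
f = lambda x, rng: np.clip(3.7 * x * (1 - x) + 0.01 * rng.standard_normal(x.shape), 0.0, 1.0)
log_obs_lik = lambda y, x: -0.5 * ((y - x) / 0.05) ** 2
x0_sampler = lambda n, rng: rng.uniform(0.1, 0.9, size=n)
```

Parameter estimation for the same model would then wrap this filter in an outer loop (likelihood maximisation, or iterated filtering à la IBAK11), which is where the interplay mentioned above comes in.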

Anyway, what kind of generalised systems can you infer? Mutually exciting point processes? Yep, EFBS04 do that.

From an engineering/control perspective, we have BrPK16, who give a sparse regression version. Generally it seems it can be done by indirect inference, or by recursive hierarchical generalised linear models, generalising the procedure for linear time series.
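
A toy sketch of the BrPK16 recipe (sparse identification of nonlinear dynamics: regress finite-difference derivative estimates on a library of candidate terms, then prune small coefficients by sequential thresholded least squares). The polynomial library, threshold and iteration count below are my own illustrative choices, not the paper’s.

```python
import numpy as np

def sindy(X, dt, threshold=0.1, n_iter=10):
    """Sparse regression for governing equations, in the spirit of BrPK16.
    X has shape (n_samples, n_states); returns a coefficient matrix Xi with
    rows indexing library terms and columns indexing state dimensions."""
    dXdt = np.gradient(X, dt, axis=0)                  # crude derivative estimate
    n, d = X.shape
    cols = [np.ones((n, 1)), X]                        # constant and linear terms
    for i in range(d):                                 # quadratic terms
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    Theta = np.hstack(cols)                            # candidate-function library
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):                            # sequential thresholding
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(d):                             # refit each equation on its support
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dXdt[:, k], rcond=None)[0]
    return Xi
```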

There are many highly general formulations; Kita96 gives a multiparametric Bayesian “smooth” one.

See e.g. the HeDG15 paper:

We address […] these problems with a new view of predictive state methods for dynamical system learning. In this view, a dynamical system learning problem is reduced to a sequence of supervised learning problems. So, we can directly apply the rich literature on supervised learning methods to incorporate many types of prior knowledge about problem structure. We give a general convergence rate analysis that allows a high degree of flexibility in designing estimators. And finally, implementing a new estimator becomes as simple as rearranging our data and calling the appropriate supervised learning subroutines.

[…] More specifically, our contribution is to show that we can use much more general supervised learning algorithms in place of linear regression, and still get a meaningful theoretical analysis. In more detail:

  1. we point out that we can equally well use any well-behaved supervised learning algorithm in place of linear regression in the first stage of instrumental-variable regression;
  2. for the second stage of instrumental-variable regression, we generalize ordinary linear regression to its RKHS counterpart;
  3. we analyze the resulting combination, and show that we get convergence to the correct answer, with a rate that depends on how quickly the individual supervised learners converge.
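
Not from the paper, but a minimal sketch of that two-stage recipe using off-the-shelf scikit-learn pieces: a random forest stands in for “any well-behaved supervised learning algorithm” in stage one, kernel ridge regression plays the RKHS counterpart of linear regression in stage two, and the data is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic setup: z plays the instrument (think history features), x is a
# noisy regressor correlated with z, and y depends on the "clean" part of x.
n = 2000
z = rng.uniform(-2, 2, size=(n, 1))
x = np.sin(z) + 0.3 * rng.standard_normal((n, 1))
y = (np.sin(z) ** 2).ravel() + 0.1 * rng.standard_normal(n)

# Stage 1: any supervised learner predicting the regressor from the instrument.
stage1 = RandomForestRegressor(n_estimators=200, random_state=0)
stage1.fit(z, x.ravel())
x_hat = stage1.predict(z)[:, None]

# Stage 2: kernel ridge regression of the response on the stage-1 fitted values.
stage2 = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0)
stage2.fit(x_hat, y)
print("train R^2:", stage2.score(x_hat, y))
```

In the dynamical-system setting of HeDG15 the instrument would be features of the past and the regressor a noisy estimate of the predictive state; their analysis shows the convergence rate of the whole pipeline is inherited from whatever learners you plug into each stage.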

All this gets more complicated with multivariate series, which is what I’m looking at at the moment. Kita96 gives a general “smooth” time series formulation which might handle the multivariate thing?

Also, sparsely or unevenly observed series are tricky. I’m looking at those too.

Refs

AMGC02
Arulampalam, M. S., Maskell, S., Gordon, N., & Clapp, T. (2002) A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2), 174–188. DOI.
Bosq98
Bosq, D. (1998) Nonparametric statistics for stochastic processes: estimation and prediction. (2nd ed.). New York: Springer
BoBl07
Bosq, D., & Blanke, D. (2007) Inference and prediction in large dimensions. Chichester, England; Hoboken, NJ: John Wiley/Dunod
Bran99
Brand, M. (1999) An entropic estimator for structure discovery. In Advances in Neural Information Processing Systems (pp. 723–729). MIT Press
BHIK09
Bretó, C., He, D., Ionides, E. L., & King, A. A.(2009) Time series analysis via mechanistic models. The Annals of Applied Statistics, 3(1), 319–348. DOI.
BrPK16
Brunton, S. L., Proctor, J. L., & Kutz, J. N.(2016) Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15), 3932–3937. DOI.
Carm14
Carmi, A. Y.(2014) Compressive System Identification. In A. Y. Carmi, L. Mihaylova, & S. J. Godsill (Eds.), Compressed Sensing & Sparse Filtering (pp. 281–324). Springer Berlin Heidelberg
CaRS15
Cassidy, B., Rae, C., & Solo, V. (2015) Brain Activity: Connectivity, Sparsity, and Mutual Information. IEEE Transactions on Medical Imaging, 34(4), 846–860. DOI.
ClBj04
Clark, J. S., & Bjørnstad, O. N.(2004) Population time series: process variability, observation errors, missing values, lags, and hidden states. Ecology, 85(11), 3140–3150. DOI.
COMG07
Cook, A. R., Otten, W., Marion, G., Gibson, G. J., & Gilligan, C. A.(2007) Estimation of multiple transmission rates for epidemics in heterogeneous populations. Proceedings of the National Academy of Sciences, 104(51), 20392–20397. DOI.
DoJR13
Doucet, A., Jacob, P. E., & Rubenthaler, S. (2013) Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models. arXiv:1304.5768 [Stat].
DuKo97
Durbin, J., & Koopman, S. J.(1997) Monte Carlo maximum likelihood estimation for non-Gaussian state space models. Biometrika, 84(3), 669–684. DOI.
DuKo12
Durbin, J., & Koopman, S. J.(2012) Time series analysis by state space methods. (2nd ed.). Oxford: Oxford University Press
EFBS04
Eden, U., Frank, L., Barbieri, R., Solo, V., & Brown, E. (2004) Dynamic Analysis of Neural Encoding by Point Process Adaptive Filtering. Neural Computation, 16(5), 971–998. DOI.
FaYa03
Fan, J., & Yao, Q. (2003) Nonlinear time series: nonparametric and parametric methods. New York: Springer
FuBa02
Fukasawa, T., & Basawa, I. V.(2002) Estimation for a class of generalized state-space time series models. Statistics & Probability Letters, 60(4), 459–473. DOI.
HaKo05
Harvey, A., & Koopman, S. J.(2005) Structural Time Series Models. In Encyclopedia of Biostatistics. John Wiley & Sons, Ltd
HeIK10
He, D., Ionides, E. L., & King, A. A.(2010) Plug-and-play inference for disease dynamics: measles in large and small populations as a case study. Journal of The Royal Society Interface, 7(43), 271–283. DOI.
HeDG15
Hefny, A., Downey, C., & Gordon, G. (2015) A New View of Predictive State Methods for Dynamical System Learning. arXiv:1505.05310 [Cs, Stat].
HMCH08
Hong, X., Mitchell, R. J., Chen, S., Harris, C. J., Li, K., & Irwin, G. W.(2008) Model selection approaches for non-linear system identification: a review. International Journal of Systems Science, 39(10), 925–946. DOI.
IBAK11
Ionides, E. L., Bhadra, A., Atchadé, Y., & King, A. (2011) Iterated filtering. The Annals of Statistics, 39(3), 1776–1802. DOI.
IoBK06
Ionides, E. L., Bretó, C., & King, A. A.(2006) Inference for nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 103(49), 18438–18443. DOI.
KaSc04
Kantz, H., & Schreiber, T. (2004) Nonlinear time series analysis. (2nd ed.). Cambridge, UK; New York: Cambridge University Press
KEMW05
Kendall, B. E., Ellner, S. P., McCauley, E., Wood, S. N., Briggs, C. J., Murdoch, W. W., & Turchin, P. (2005) Population cycles in the pine looper moth: Dynamical tests of mechanistic hypotheses. Ecological Monographs, 75(2), 259–276.
Kita87
Kitagawa, G. (1987) Non-Gaussian State-Space Modeling of Nonstationary Time Series. Journal of the American Statistical Association, 82(400), 1032–1041. DOI.
Kita96
Kitagawa, G. (1996) Monte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models. Journal of Computational and Graphical Statistics, 5(1), 1–25. DOI.
KiGe96
Kitagawa, G., & Gersch, W. (1996) Smoothness Priors Analysis of Time Series. New York, NY: Springer
Sark07
Sarkka, S. (2007) On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems. IEEE Transactions on Automatic Control, 52(9), 1631–1641. DOI.
StMu13
Städler, N., & Mukherjee, S. (2013) Penalized estimation in high-dimensional hidden Markov models with state-specific graphical models. The Annals of Applied Statistics, 7(4), 2157–2179. DOI.
TaKa00
Taniguchi, M., & Kakizawa, Y. (2000) Asymptotic theory of statistical inference for time series. New York: Springer
Tani01
Tanizaki, H. (2001) Estimation of unknown parameters in nonlinear and non-Gaussian state-space models. Journal of Statistical Planning and Inference, 96(2), 301–323. DOI.