Feedback system identification, linear

July 27, 2016 — January 21, 2022

calculus
dynamical systems
geometry
Hilbert space
how do science
Lévy processes
machine learning
neural nets
PDEs
physics
regression
sciml
SDEs
signal processing
statistics
statmech
stochastic processes
surrogate
time series
uncertainty

In system identification, we infer the parameters of a stochastic dynamical system of a certain type, i.e. usually ones with feedback, so that we can e.g. simulate it, or deconvolve it to find the inputs and hidden state, maybe using state filters. In statistical terms, this is the parameter inference problem for dynamical systems.

Moreover, it totally works without Gaussian noise; Gaussianity is just a convenience for optimal linear filtering (Kalman filtering isn’t rocket science, after all). Also, mathematically, Gaussianity is a useful crutch if you decide to go to a continuous time index, cf Gaussian processes.

This is the mostly offline version. There is a sub-notebook focussing on online recursive estimation.

1 Intros

Oppenheim and Verghese, Signals, Systems, and Inference is free online. ritvikmath explains partial autocorrelation as a graphical model, which is not complicated, but for some reason I never had it laid out this way in my own time series courses. See also Kenneth Tay, The relationship between MA(q)/AR(p) processes and ACF/PACF.
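
For instance, here is a minimal sketch (assuming numpy, scipy, and statsmodels are available) of the property those links are about: the sample PACF of an AR(p) process should cut off after lag p.

```python
# Sketch: the partial autocorrelation of an AR(2) process is negligible
# beyond lag 2; the process parameters here are arbitrary.
import numpy as np
from scipy import signal
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(1)
# AR(2): Y(k) = 1.5 Y(k-1) - 0.9 Y(k-2) + eps(k)
y = signal.lfilter([1.0], [1.0, -1.5, 0.9], rng.normal(size=5000))
print(np.round(pacf(y, nlags=5), 2))  # lags 0..5; lags 3+ should be near 0
```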

(Martin 1999):

Consider the basic autoregressive model,

\[ Y(k) + \sum_{j=1}^p a_j Y(k-j) = \epsilon(k). \]

Estimating AR(p) coefficients:

The [power] spectrum is easily obtained from [the above] as

\[ P(f) = \frac{\sigma^2}{\left|1+ \sum_{j=1}^p a_j z^{-j}\right|^2},\qquad z=\exp(2\pi i f\,\delta t) \]

with \(\delta t\) the intersample spacing. […] for any given set of data, we need to be able to estimate the AR coefficients \(\{a_j\}_{j=1}^p\) conveniently. Three methods for achieving this are the Yule-Walker, Burg and Covariance methods. The Yule-Walker technique uses the sample autocovariance to obtain the coefficients; the Covariance method defines, for a set of numbers \(\mathbf{a}=\{a_j\}_{j=1}^p,\) a quantity known as the total forward and backward prediction error power:

\[ E(Y,\mathbf{a}) = \frac{1}{2(N-p)}\sum_{n=p+1}^N\left\{ \left|Y(n)+\sum_{j=1}^p a_jY(n-j)\right|^2 + \left|Y(n-p)+\sum_{j=1}^p a^*_jY(n-p+j)\right|^2 \right\} \]

and minimises this w.r.t. \(\mathbf{a}\). As \(E(Y, \mathbf{a})\) is a quadratic function of \(\mathbf{a}\), \(\partial E(Y, \mathbf{a})/\partial \mathbf{a}\) is linear in \(\mathbf{a}\) and so this is a linear optimisation problem. The Burg method is a constrained minimisation of \(E(Y, \mathbf{a})\) using the Levinson recursion, a computational device derived from the Yule-Walker method.
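
For a concrete (if not Martin’s own) illustration, here is a minimal sketch of the Yule-Walker route and the resulting spectrum, assuming numpy, scipy, and statsmodels; the simulated AR(2) and all its parameters are arbitrary.

```python
# Sketch: estimate AR(p) coefficients by Yule-Walker and evaluate the implied
# power spectrum P(f) = sigma^2 / |1 + sum_j a_j z^{-j}|^2.
import numpy as np
from scipy import signal
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(0)

# Simulate Y(k) + a1 Y(k-1) + a2 Y(k-2) = eps(k), in the sign convention above.
a_true = np.array([-1.5, 0.9])
y = signal.lfilter([1.0], np.r_[1.0, a_true], rng.normal(size=10_000))

# statsmodels returns phi_j with Y(k) = sum_j phi_j Y(k-j) + eps(k), so a_j = -phi_j.
phi, sigma = yule_walker(y, order=2, method="mle")
a_hat = -phi

# Power spectrum on a unit sampling interval (delta t = 1).
f, h = signal.freqz([1.0], np.r_[1.0, a_hat], worN=512, fs=1.0)
P = sigma**2 * np.abs(h) ** 2

print("estimated a:", np.round(a_hat, 3), "true a:", a_true)
```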

2 Instrumental variable regression

3 Unevenly sampled

4 Model estimation/system identification

You don’t know a parameterised model for the data (and hence a precise bandwidth) and you wish to estimate it.

This is a system identification problem, although the non-uniform sampling means that it has an unusual form.

(Martin 1999) summarizes:

One could consider the general problem in an approximate way as the missing data problem with a very high proportion of missing data points, but (Jones 1981, 1984) this is not very realistic. This has led to the consideration of the continuous-time model […] . (Lii and Masry 1992) shows that the coefficients in that equation may be obtained from the [irregularly sampled autocorrelation moments, but], the estimation of these requires a large amount of data and the results are asymptotic in the limit of infinite data. The other continuous-time approach is that of Jones (Jones 1981, 1984) who has used Kalman recursive estimation […] to obtain a likelihood function \(\operatorname{lik}(x|b)\) which is then maximised w.r.t. b to obtain an estimate of the true parameters.
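
For concreteness, here is a minimal sketch of that Kalman-recursion likelihood in the simplest case, a CAR(1)/Ornstein-Uhlenbeck state observed with noise at irregular times; this is my own illustrative implementation under those assumptions, with placeholder data, not Jones’s algorithm verbatim.

```python
# Sketch: exact-discretisation Kalman filter likelihood for a CAR(1) process
# dX = -theta X dt + sigma dW, observed as y_i = X(t_i) + N(0, r^2) noise.
import numpy as np
from scipy.optimize import minimize

def car1_neg_log_lik(params, t, y):
    theta, sigma, r = np.exp(params)          # work with log-parameters
    m, P = 0.0, sigma**2 / (2 * theta)        # stationary prior on X
    nll, t_prev = 0.0, t[0]
    for ti, yi in zip(t, y):
        dt = ti - t_prev
        phi = np.exp(-theta * dt)             # exact transition over the gap
        m = phi * m
        P = phi**2 * P + sigma**2 / (2 * theta) * (1 - phi**2)
        S = P + r**2                          # innovation variance
        v = yi - m                            # innovation
        nll += 0.5 * (np.log(2 * np.pi * S) + v**2 / S)
        K = P / S                             # Kalman gain
        m, P = m + K * v, (1 - K) * P
        t_prev = ti
    return nll

# Placeholder irregular data; maximise the likelihood over (theta, sigma, r).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, size=300))
y = np.sin(0.3 * t) + rng.normal(scale=0.5, size=t.size)
res = minimize(car1_neg_log_lik, x0=np.zeros(3), args=(t, y), method="Nelder-Mead")
print(np.exp(res.x))  # estimated (theta, sigma, r)
```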

There is a partial review and comparison of methods in (P. M. Broersen 2006; Stoica and Moses 2005). From the latter:

(Martin 1999) applied autoregressive modeling to irregularly sampled data using a dedicated method. It was particularly good in extracting sinusoids from noise in short data sets. (Söderström and Mossberg 2000) evaluated the performance of methods for identifying continuous-time autoregressive processes, which replace the differentiation operator by different approximations. (Larsson and Söderström 2002) apply this idea to randomly sampled autoregressive data. They report promising results for low-order processes. (Lahalle, Fleury, and Rivoira 2004) estimate continuous-time ARMA models. Unfortunately, their method requires explicit use of a model for irregular sampling instants. The precise shape of that distribution is very important for the result, but it is almost impossible to establish it from practical data.

No generally satisfactory spectral estimator for irregular data has been defined yet. Continuous time series models can be estimated for irregular data, and they are the only possible candidates for obtaining the Cramér-Rao lower boundary, because the true process for irregular data is a continuous-time process. (Jones 1981) has formulated the maximum likelihood estimator for irregular observations. However, (Jones 1984) also found that the likelihood has several local maxima and the optimisation requires extremely good initial estimates. (P. M. T. Broersen and Bos 2006) used the method of Jones to obtain maximum likelihood estimates for irregular data. If simulations started with the true process parameters as initial conditions, that was sometimes, but not always, good enough to converge to the global maximum of the likelihood. However, sometimes even those perfect and nonrealisable starting values were not capable of letting the likelihood converge to an acceptable model. So far, no practical maximum likelihood method for irregular data has solved all numerical problems, and certainly no satisfactory realisable initial conditions can be given. As an example, it has been verified in simulations that taking the estimated AR(p-1) model together with an additional zero for order p as starting values for AR(p) estimation does not always converge to acceptable AR(p) models. The model with the maximum value of the likelihood might not in all cases be accurate and many good models have significantly lower numerical values of the likelihood. (Martin 1999) suggests that the exact likelihood is sensitive to round-off errors. (P. M. T. Broersen and Bos 2006) calculated the likelihood as a function of true model parameters, multiplied by a constant factor. Only the likelihood for a single pole was smooth. Two poles already gave a number of sharp peaks in the likelihood, and three or more poles gave a very rough surface of the likelihood. The scene is full of local minima, and the optimisation cannot find the global minimum, unless it starts very close to it.

4.1 Slotting

Asymptotic methods based on gridding observations.
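
A minimal sketch of one such estimator, a slotted autocovariance (my own illustrative implementation rather than any specific published variant): pairwise products are binned by the nearest multiple of a chosen lag spacing. The bin width and lag grid below are arbitrary.

```python
# Sketch: slotted autocovariance for irregularly sampled data. Each pair of
# observations contributes to the slot whose centre is nearest its time gap.
import numpy as np

def slotted_autocov(t, y, max_lag, dlag):
    lags = np.arange(0.0, max_lag + dlag, dlag)
    acov = np.zeros(len(lags))
    counts = np.zeros(len(lags))
    y = y - y.mean()
    for i in range(len(t)):
        dt = t[i:] - t[i]                      # non-negative pairwise gaps
        k = np.rint(dt / dlag).astype(int)     # nearest slot index
        ok = k < len(lags)
        np.add.at(acov, k[ok], y[i] * y[i:][ok])
        np.add.at(counts, k[ok], 1)
    return lags, acov / np.maximum(counts, 1)

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 200, size=1000))
y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(scale=0.3, size=t.size)
lags, c = slotted_autocov(t, y, max_lag=10.0, dlag=0.5)
print(np.round(c[:6], 3))  # roughly 0.5*cos(2*pi*0.1*lag), plus a noise bump at lag 0
```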

4.2 Method of transformed coefficients

Useful tool: the equivalence of a continuous-time Itô integral and a discrete ARMA process (attributed by Martin (1998) to Bartlett (1946)) implies that you can estimate the model without estimating the missing data, which is satisfying, although the precise form this takes is less so.

A popular overview seems to be Martin (1999).

4.3 State filters

(Note that you can also do the signal reconstruction problem using state filters, but I’m interested in doing system identification using state filters.) Jones (1981) and Martin (1998) gave this a go; while (Martin 1998) mentioned problems, I’m curious when it does work, since this seems natural, simple, and easier to make robust against model violations than the other methods.

(Martin 1998):

It is well known that if a univariate continuous time autoregression is sampled at equally spaced time intervals, the resulting discrete time process is ARMA(p,p-1). If the sampling includes observational error, the resulting process is ARMA(p,p); however, these 2p parameters depend only on the p continuous time autoregression coefficients and the observational error variance. Modeling the process as a continuous time autoregression with observational error may be much more parsimonious than modeling the discrete time process, whether or not the data are equally spaced. The direct modeling of observational error has the effect of smoothing noisy data and may eliminate the need for moving average terms.
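
To make the quoted claim concrete for \(p=1\) (my own sketch, not Martin’s derivation): suppose \(dX = -\theta X\,dt + \sigma\,dW\) and we observe \(Y_k = X(k\delta t) + e_k\) with white observational noise. The exact discretisation of the state is

\[ X((k+1)\delta t) = e^{-\theta\delta t}X(k\delta t) + \eta_k, \qquad \operatorname{Var}\eta_k = \frac{\sigma^2}{2\theta}\left(1-e^{-2\theta\delta t}\right), \]

so

\[ Y_k - e^{-\theta\delta t}Y_{k-1} = \eta_{k-1} + e_k - e^{-\theta\delta t}e_{k-1}. \]

The right-hand side is correlated only at lag one, i.e. an MA(1), so the sampled, noisily observed CAR(1) is ARMA(1,1), and both its ARMA coefficients and its innovation variance are pinned down by just \(\theta\), \(\sigma^2\) and \(\operatorname{Var}e_k\); that is the parsimony being claimed.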

5 Online

See recursive estimation.

6 Incoming

Gradient descent learns linear dynamical systems (Hardt, Ma, and Recht 2018)

6.1 Linear Predictive Coding

LPC introductions traditionally start with a physical model of the human vocal tract as a resonating pipe, then mumble away the details. This confused the hell out of me. AFAICT, an LPC model is just a list of AR regression coefficients and a driving noise source coefficient. This is “coding” because you can round the numbers, pack them down a smidgen and then use it to encode certain time series, such as the human voice, compactly. But it’s still a regression analysis, and can be treated as such.
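
To make the “just a regression” point concrete, here is a minimal sketch of single-frame LPC via the autocorrelation (Levinson-Durbin) method in plain numpy/scipy; the toy frame, the order, and all constants are arbitrary illustrative choices, not any canonical codec settings.

```python
# Sketch: LPC as AR regression on one frame, via Levinson-Durbin on the
# sample autocorrelation. Returns A(z) coefficients (a[0]=1) and a gain.
import numpy as np
from scipy import signal

def lpc(frame, order):
    """Estimate a, g such that x[n] ~ -sum_j a[j] x[n-j] + g * excitation[n]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1 - k**2
    return a, np.sqrt(err)

# Toy "voiced" frame: a damped resonance plus a little noise.
rng = np.random.default_rng(3)
n = np.arange(400)
frame = np.exp(-n / 200) * np.sin(2 * np.pi * 0.05 * n) + 0.01 * rng.normal(size=n.size)
a, g = lpc(frame, order=8)

# The all-pole envelope g^2 / |A(f)|^2 is a smooth fit to the frame's spectrum
# (cf. the "spectrogram smoothing" remark below).
f, h = signal.freqz([g], a, worN=512, fs=1.0)
envelope = np.abs(h) ** 2
print(np.round(a, 3))
```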

The twists are that

  • we usually think about it in a compression context
  • traditionally one performs many regressions to get time-varying models

It’s commonly described as a physical model because we can imagine the regression coefficients corresponding to a simplified physical model of the human vocal tract. But the regression coefficients can correspond to any all-pole linear system, so I don’t think that framing brings special insight, especially as a model of, say, a resonating pipe would intuitively be described by time delays corresponding to the length of the pipe, not by time lags at whatever the sampling interval happens to be, chosen for computational convenience. Sure, we can get a similar spectral response from this model as from a pipe, according to linear systems theory, but if you are going to assume so much advanced linear systems theory anyway, and mix it with crappy physics, why not just start with the linear systems and ditch the physics?

To discuss: these coefficients as spectrogram smoothing.

7 References

Agarwal, Amjad, Shah, et al. 2018. Time Series Analysis via Matrix Estimation.” arXiv:1802.09064 [Cs, Stat].
Akaike. 1973. Maximum Likelihood Identification of Gaussian Autoregressive Moving Average Models.” Biometrika.
Ansley, and Kohn. 1986. A Note on Reparameterizing a Vector Autoregressive Moving Average Model to Enforce Stationarity.” Journal of Statistical Computation and Simulation.
Antoniano-Villalobos, and Walker. 2016. A Nonparametric Model for Stationary Time Series.” Journal of Time Series Analysis.
Atal. 2006. The History of Linear Prediction.” IEEE Signal Processing Magazine.
Bartlett. 1946. On the Theoretical Specification and Sampling Properties of Autocorrelated Time-Series.” Supplement to the Journal of the Royal Statistical Society.
Berkhout, and Zaanen. 1976. A Comparison Between Wiener Filtering, Kalman Filtering, and Deterministic Least Squares Estimation*.” Geophysical Prospecting.
Box, George E. P., Jenkins, Reinsel, et al. 2016. Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics.
Box, G. E. P., and Pierce. 1970. Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models.” Journal of the American Statistical Association.
Broersen, Petrus MT. 2006. Automatic Autocorrelation and Spectral Analysis.
Broersen, P. M. T., and Bos. 2006. Estimating Time-Series Models from Irregularly Spaced Data.” In IEEE Transactions on Instrumentation and Measurement.
Broersen, Piet M. T., de Waele, and Bos. 2004. Autoregressive Spectral Analysis When Observations Are Missing.” Automatica.
Bühlmann, and Künsch. 1999. Block Length Selection in the Bootstrap for Time Series.” Computational Statistics & Data Analysis.
Carmi. 2013. Compressive System Identification: Sequential Methods and Entropy Bounds.” Digital Signal Processing.
———. 2014. Compressive System Identification.” In Compressed Sensing & Sparse Filtering. Signals and Communication Technology.
Chen, and Hong. 2012. Testing for the Markov Property in Time Series.” Econometric Theory.
Christ, Kempa-Liehr, and Feindt. 2016. Distributed and Parallel Time Series Feature Extraction for Industrial Big Data Applications.” arXiv:1610.07717 [Cs].
de Matos, and Fernandes. 2007. Testing the Markov Property with High Frequency Data.” Journal of Econometrics, Semiparametric Methods in Econometrics.
Doucet, Jacob, and Rubenthaler. 2013. Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models.” arXiv:1304.5768 [Stat].
Durbin, and Koopman. 1997. Monte Carlo Maximum Likelihood Estimation for Non-Gaussian State Space Models.” Biometrika.
———. 2012. Time Series Analysis by State Space Methods. Oxford Statistical Science Series 38.
Eguchi, and Uehara. n.d. Schwartz-Type Model Selection for Ergodic Stochastic Differential Equation Models.” Scandinavian Journal of Statistics.
Geweke, and Meese. 1981. Estimating Regression Models of Finite but Unknown Order.” Journal of Econometrics.
Gu, Johnson, Goel, et al. 2021. Combining Recurrent, Convolutional, and Continuous-Time Models with Linear State Space Layers.” In Advances in Neural Information Processing Systems.
Hardt, Ma, and Recht. 2018. Gradient Descent Learns Linear Dynamical Systems.” The Journal of Machine Learning Research.
Harvey, and Koopman. 2005. Structural Time Series Models.” In Encyclopedia of Biostatistics.
Hazan, Singh, and Zhang. 2017. Learning Linear Dynamical Systems via Spectral Filtering.” In NIPS.
Heaps. 2020. Enforcing Stationarity Through the Prior in Vector Autoregressions.” arXiv:2004.09455 [Stat].
Hefny, Downey, and Gordon. 2015. A New View of Predictive State Methods for Dynamical System Learning.” arXiv:1505.05310 [Cs, Stat].
Hencic, and Gouriéroux. 2015. Noncausal Autoregressive Model in Application to Bitcoin/USD Exchange Rates.” In Econometrics of Risk. Studies in Computational Intelligence 583.
Holan, Lund, and Davis. 2010. The ARMA Alphabet Soup: A Tour of ARMA Model Variants.” Statistics Surveys.
Ives, Abbott, and Ziebarth. 2010. Analysis of Ecological Time Series with ARMA(p,q) Models.” Ecology.
Jones. 1981. “Fitting a Continuous Time Autoregression to Discrete Data.” In Applied Time Series Analysis II.
———. 1984. Fitting Multivariate Models to Unequally Spaced Data.” In Time Series Analysis of Irregularly Observed Data.
Kailath, Sayed, and Hassibi. 2000. Linear Estimation. Prentice Hall Information and System Sciences Series.
Kalouptsidis, Mileounis, Babadi, et al. 2011. Adaptive Algorithms for Sparse System Identification.” Signal Processing.
Kavčić, and Moura. 2000. Matrices with Banded Inverses: Inversion Algorithms and Factorization of Gauss-Markov Processes.” IEEE Transactions on Information Theory.
Kay. 1993. Fundamentals of Statistical Signal Processing. Prentice Hall Signal Processing Series.
Kemerait, and Childers. 1972. Signal Detection and Extraction by Cepstrum Techniques.” IEEE Transactions on Information Theory.
Künsch. 1986. “Discrimination Between Monotonic Trends and Long-Range Dependence.” Journal of Applied Probability.
Lahalle, Fleury, and Rivoira. 2004. Continuous ARMA Spectral Estimation from Irregularly Sampled Observations.” In Proceedings of the 21st IEEE Instrumentation and Measurement Technology Conference, 2004. IMTC 04.
Laroche. 2007. On the Stability of Time-Varying Recursive Filters.” Journal of the Audio Engineering Society.
Larsson, and Söderström. 2002. Identification of Continuous-Time AR Processes from Unevenly Sampled Data.” Automatica.
Lii, and Masry. 1992. Model Fitting for Continuous-Time Stationary Processes from Discrete-Time Data.” Journal of Multivariate Analysis.
Ljung. 1999. System Identification: Theory for the User. Prentice Hall Information and System Sciences Series.
Ljung, and Söderström. 1983. Theory and Practice of Recursive Identification. The MIT Press Series in Signal Processing, Optimization, and Control 4.
Makhoul. 1975. Linear Prediction: A Tutorial Review.” Proceedings of the IEEE.
Manton, Krishnamurthy, and Poor. 1998. James-Stein State Filtering Algorithms.” IEEE Transactions on Signal Processing.
Marelli. 2007. A Functional Analysis Approach to Subband System Approximation and Identification.” IEEE Transactions on Signal Processing.
Martin. 1998. Autoregression and Irregular Sampling: Filtering.” Signal Processing.
———. 1999. Autoregression and Irregular Sampling: Spectral Estimation.” Signal Processing.
McDonald, Shalizi, and Schervish. 2011a. Generalization Error Bounds for Stationary Autoregressive Models.” arXiv:1103.0942 [Cs, Stat].
———. 2011b. Risk Bounds for Time Series Without Strong Mixing.” arXiv:1106.0730 [Cs, Stat].
McLeod. 1998. Hyperbolic Decay Time Series.” Journal of Time Series Analysis.
McLeod, and Zhang. 2008. Faster ARMA Maximum Likelihood Estimation.” Computational Statistics & Data Analysis.
Milanese, and Vicino. 1993. Information-Based Complexity and Nonparametric Worst-Case System Identification.” Journal of Complexity.
Pagano. 1974. Estimation of Models of Autoregressive Signal Plus White Noise.” The Annals of Statistics.
Pereyra, Schniter, Chouzenoux, et al. 2016. A Survey of Stochastic Simulation and Optimization Methods in Signal Processing.” IEEE Journal of Selected Topics in Signal Processing.
Pillonetto. 2016. The Interplay Between System Identification and Machine Learning.” arXiv:1612.09158 [Cs, Stat].
Plis, Danks, and Yang. 2015. Mesochronal Structure Learning.” Uncertainty in Artificial Intelligence : Proceedings of the … Conference. Conference on Uncertainty in Artificial Intelligence.
Pugachev, and Sinitsyn. 2001. Stochastic Systems: Theory and Applications.
Ragazzini, and Zadeh. 1952. The Analysis of Sampled-Data Systems.” Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry.
Roy, Mcelroy, and Linton. 2019. Constrained Estimation of Causal Invertible VARMA.” Statistica Sinica.
Scargle. 1981. “Studies in Astronomical Time Series Analysis. I-Modeling Random Processes in the Time Domain.” The Astrophysical Journal Supplement Series.
Shen, and Yu. 2018. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming.” IEEE Transactions on Signal Processing.
Simchowitz, Boczar, and Recht. 2019. Learning Linear Dynamical Systems with Semi-Parametric Least Squares.” arXiv:1902.00768 [Cs, Math, Stat].
Simchowitz, Mania, Tu, et al. 2018. Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification.” arXiv:1802.08334 [Cs, Math, Stat].
Söderström, and Mossberg. 2000. Performance evaluation of methods for identifying continuous-time autoregressive processes.” Automatica.
Stoica, and Moses. 2005. Spectral Analysis of Signals.
Taniguchi, and Kakizawa. 2000. Asymptotic Theory of Statistical Inference for Time Series. Springer Series in Statistics.
Tufts, and Kumaresan. 1982. Estimation of Frequencies of Multiple Sinusoids: Making Linear Prediction Perform Like Maximum Likelihood.” Proceedings of the IEEE.
Unser, and Tafti. 2014. An Introduction to Sparse Stochastic Processes.
van de Geer. 2002. “On Hoeffding's Inequality for Dependent Random Variables.” In Empirical Process Techniques for Dependent Data.
van Delft, and Eichler. 2016. Locally Stationary Functional Time Series.” arXiv:1602.05125 [Math, Stat].
Vandenberghe. 2012. Convex Optimization Techniques in System Identification.” IFAC Proceedings Volumes, 16th IFAC Symposium on System Identification.
Wedig. 1984. A Critical Review of Methods in Stochastic Structural Dynamics.” Nuclear Engineering and Design.
Werbos. 1988. Generalization of Backpropagation with Application to a Recurrent Gas Market Model.” Neural Networks.
Xu, and Raginsky. 2017. Information-Theoretic Analysis of Generalization Capability of Learning Algorithms.” In Advances In Neural Information Processing Systems.