Approximate models

July 4, 2016

statistics

Just saw the author present this in a lecture.

Here’s my summary of his talk:

It produces a frequentist-esque machinery, but under different assumptions: there are no true models and no repeatability as such, yet you can construct approximations for certain goals and with certain kinds of guarantee. It is hard to see how you would extract a law of nature in this framework, but it looks natural for machine-learning problems. Is the idea to assume no good model at all, rather than a contaminated model?
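To make the “approximation with a guarantee” idea concrete, here is a minimal sketch of my own, not Davies’ actual procedure: call a model an adequate approximation of the data if the observed data look like a typical sample from that model under some distance. The names (`adequate`, `sample_model`, `kolmogorov`) and the specific choices (Kolmogorov distance, robust location-scale fit) are assumptions made for the toy example.

```python
import numpy as np
from scipy import stats

def adequate(x, sample_model, distance, alpha=0.05, n_sim=1000, seed=None):
    """Crude adequacy check: the model is an adequate approximation of
    the data x if distance(x) is no larger than the (1 - alpha) quantile
    of the distances of samples that the model itself generates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    sims = np.array([distance(sample_model(n, rng)) for _ in range(n_sim)])
    return distance(x) <= np.quantile(sims, 1 - alpha)

# Toy example: is a Gaussian location-scale model an adequate
# approximation to heavy-tailed data, judged by Kolmogorov distance?
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=200)            # data; not actually Gaussian
mu, sd = np.median(x), stats.iqr(x) / 1.349   # robust location-scale fit

def kolmogorov(z):
    # sup-distance between the empirical CDF of z and the fitted normal CDF
    return stats.kstest(z, stats.norm(mu, sd).cdf).statistic

print(adequate(x, lambda n, g: g.normal(mu, sd, n), kolmogorov))
```

This skips the refinement of re-fitting the model to each simulated sample, and Davies defines adequacy regions more carefully; it is only meant to show the flavour of constructing an approximation for a given purpose with a stated guarantee.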

Obviously I don’t know enough about this to say anything definitive, but it looks interesting. However, it’s also a one-man shop.

Connection with learning theory?

References

Davies, P. L. 2008. “Approximating Data.” Journal of the Korean Statistical Society.
Davies, P. L. 2014. Data Analysis and Approximate Models: Model Choice, Location-Scale, Analysis of Variance, Nonparametric Regression and Image Analysis. Monographs on Statistics and Applied Probability.
Davies, P. L. 2016. “On \(p\)-Values.” arXiv:1611.06168 [stat].
Davies, P. L., and M. Meise. 2008. “Approximating Data with Weighted Smoothing Splines.” Journal of Nonparametric Statistics.