The Living Thing / Notebooks

Function approximation and interpolation

On constructing an approximation of some arbitrary function, and measuring the badness thereof.

THIS IS CHAOS RIGHT NOW. I need to break out the sampling/interpolation problem for regular data, for one thing.

Choosing the best approximation

In what sense? Most compact? Most easy to code?

If we are not interpolating, how much smoothing do we do?

We can use cross-validation, especially so-called “generalized” cross-validation, to choose this smoothing parameter efficiently, in some sense.
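A minimal sketch of the GCV idea, using a penalized polynomial basis as a stand-in for whatever smoother you actually use (the basis, degree, and penalty here are illustrative assumptions, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(50)

# Design matrix for a linear smoother; any basis expansion would do.
B = np.vander(x, 12, increasing=True)

def gcv_score(lam):
    # Ridge-penalized fit; H is the "hat" matrix mapping y to fitted values.
    H = B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)
    resid = y - H @ y
    n = len(y)
    # GCV replaces leave-one-out CV with a trace-based effective-dof correction.
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

lams = np.logspace(-8, 2, 30)
best = lams[np.argmin([gcv_score(lam) for lam in lams])]
```

The point is that the trace term gives a cheap proxy for leave-one-out error without refitting n times.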

Or you might have noisy data, in which case you have both a function approximation problem and an inference problem, with error due to both the approximation and the sampling noise. Compressive sensing has some finite-sample guarantees.

To discuss: loss functions.

An interesting problem here is how you align the curves that are your objects of study; that is a problem of warping.

Polynomial spline smoothing of observations

The classic, and not just for functional data, but filed here because that’s where the action is now.

Special superpowers: Easy to differentiate and integrate.

Special weakness: many free parameters, and hard to use in high dimension.
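The differentiate-and-integrate superpower in action, with SciPy’s smoothing splines (the smoothing level `s` here is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 80)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# s controls the smoothing/interpolation trade-off; s=0 interpolates exactly.
spl = UnivariateSpline(x, y, k=3, s=len(x) * 0.01)

d = spl.derivative()        # another spline: the exact derivative of the fit
F = spl.antiderivative()    # likewise the exact antiderivative
area = F(2 * np.pi) - F(0)  # should be near 0, since ∫₀^{2π} sin = 0
```

Because the fit is piecewise polynomial, derivatives and antiderivatives come out in closed form, as splines themselves.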

Chebyshev polynomials
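For smooth functions on an interval, interpolating at Chebyshev points converges spectacularly fast; NumPy ships the machinery. A sketch (degree and test function are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate exp at the Chebyshev points of degree 10 on [-1, 1];
# this is near-minimax for smooth functions.
coef = C.chebinterpolate(np.exp, 10)

xs = np.linspace(-1, 1, 1001)
err = np.max(np.abs(C.chebval(xs, coef) - np.exp(xs)))
```

For an analytic function like exp, the error decays geometrically in the degree, so even degree 10 is essentially exact.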

Fourier bases
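For periodic data on a regular grid, the truncated Fourier series is the natural approximant and the FFT makes it cheap. A sketch (the square-wave target and the number of retained harmonics are arbitrary choices):

```python
import numpy as np

# Approximate a periodic function by its truncated Fourier series via the FFT.
n = 256
t = np.arange(n) / n
f = np.sign(np.sin(2 * np.pi * t))  # square wave: discontinuous, worst case

coef = np.fft.rfft(f) / n
K = 10            # keep only the lowest K frequencies
coef[K:] = 0.0
approx = np.fft.irfft(coef * n, n)

mse = np.mean((approx - f) ** 2)
```

Note the Gibbs overshoot near the jumps: Fourier truncation converges fast for smooth periodic functions but only slowly, and non-uniformly, across discontinuities.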

Radial basis function approximation

I actually care about this mostly for densities, so see mixture models for what information I do have.
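RBFs shine for scattered data in several dimensions, where splines get awkward. A sketch with SciPy’s `RBFInterpolator` (kernel choice and point counts are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(100, 2))          # scattered 2-D sites
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

# smoothing=0 interpolates the data exactly; >0 trades fit for smoothness.
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=0.0)

query = rng.uniform(-0.5, 0.5, size=(20, 2))
pred = rbf(query)
```

No grid, no tensor-product basis: the approximant is a weighted sum of radially symmetric bumps centred on the data sites, which is why it extends to arbitrary dimension so painlessly.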

Rational approximation

Padé approximation is the method I’ve heard of. Are there others? Really handy for computation, and trivial to implement once the coefficients are calculated.

Easy to differentiate. OK to integrate if you cheat using a computer algebra package.
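SciPy will build the approximant from Taylor coefficients. A sketch for exp (the order of the approximant is an arbitrary choice):

```python
import math
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1/k!
an = [1 / math.factorial(k) for k in range(9)]

# [4/4] Padé approximant: p and q are numpy.poly1d of degree 4 each.
p, q = pade(an, 4)

approx = p(1.0) / q(1.0)  # should be very close to e
```

Once `p` and `q` are in hand, evaluation is just two polynomials and a division, which is what makes this so cheap at run time.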