On choosing the right model and the right regularisation parameter in sparse regression, which turn out to be very nearly the same problem, and closely coupled to doing the regression itself. There are some wrinkles.

## What?

TBD: Explain my laborious reasoning that generalised Akaike information criteria don’t seem to work when the penalty term is not smooth (e.g. \(L_1\)), and the issues that therefore arise in model selection for such cases.

Present alternatives for choosing the optimal regularisation coefficient, especially *outside* cross-validation, and especially computationally tractable ones. Methods based on statistical learning theory or concentration inequalities win gratitude.

## Stability selection

TBD
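
In the meantime, a minimal sketch of the subsampling recipe in the spirit of Meinshausen and Bühlmann: refit the lasso on many random half-subsamples and keep track of how often each variable is selected. The `lasso_ista` solver, the `1e-8` nonzero threshold, and all parameter values are my own illustrative choices, not anyone's canonical settings:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = b - step * X.T @ (X @ b - y) / n
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

def stability_selection(X, y, lam, n_subsamples=100, frac=0.5, rng=None):
    """Selection frequency of each coefficient over random subsamples."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    m = int(frac * n)
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        b = lasso_ista(X[idx], y[idx], lam)
        counts += np.abs(b) > 1e-8
    return counts / n_subsamples
```

Variables whose selection frequency exceeds some threshold (0.6–0.9, say) are kept; the point is that the *threshold on frequency* is far less touchy than the regularisation parameter itself, and Meinshausen and Bühlmann give error-control bounds for it.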

## Relaxed Lasso
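
A minimal sketch of the two-stage idea: run the lasso once to pick a support, then refit on that support alone under a lighter penalty `gamma * lam`, with `gamma = 0` reducing to a plain least-squares refit that undoes the shrinkage bias. The ISTA solver and all numerical settings here are my own illustrative assumptions:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = b - step * X.T @ (X @ b - y) / n
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

def relaxed_lasso(X, y, lam, gamma=0.0):
    """Lasso at lam picks the support; refit on it at the lighter penalty gamma*lam."""
    b1 = lasso_ista(X, y, lam)
    support = np.abs(b1) > 1e-8
    b = np.zeros(X.shape[1])
    if support.any():
        # with gamma = 0 the inner call is unpenalised, i.e. OLS on the support
        b[support] = lasso_ista(X[:, support], y, gamma * lam)
    return b
```

The appeal for tuning is that selection (via `lam`) and shrinkage (via `gamma`) are decoupled, so a heavy penalty can prune aggressively without biasing the surviving coefficients.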

## Dantzig Selector
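
The Dantzig selector solves \(\min \|b\|_1\) subject to \(\|X^\top(y - Xb)\|_\infty \le \lambda\), which is a linear program. A sketch using the standard split \(b = u - v\) with \(u, v \ge 0\) and `scipy.optimize.linprog`; the data and the value of \(\lambda\) in the usage below are illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """min ||b||_1  s.t.  ||X'(y - Xb)||_inf <= lam, as an LP in (u, v), b = u - v."""
    n, p = X.shape
    A = X.T @ X
    r = X.T @ y
    # The box constraint splits into:  A(u - v) <= lam + r  and  -A(u - v) <= lam - r
    A_ub = np.block([[A, -A], [-A, A]])
    b_ub = np.concatenate([lam + r, lam - r])
    c = np.ones(2 * p)  # objective: sum(u) + sum(v) = ||b||_1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v
```

Note that \(\lambda\) here lives on the scale of \(X^\top \varepsilon\), so it grows like \(\sigma\sqrt{n \log p}\) rather than staying \(O(1)\).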

## Garotte

TBD.
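
In the meantime, a minimal sketch of Breiman-style non-negative garrotte: scale an initial estimate (here OLS) by per-coefficient shrinkage factors \(c_j \ge 0\), chosen to fit the data under an \(\ell_1\) penalty on \(c\). The projected proximal-gradient solver and all settings are my own illustrative assumptions:

```python
import numpy as np

def nn_garrotte(X, y, lam, n_iter=1000):
    """Non-negative garrotte: b_j = c_j * b0_j with c >= 0, penalised by lam*sum(c)."""
    n, p = X.shape
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]  # initial OLS estimate
    Z = X * b0                                  # column j rescaled by b0_j
    c = np.ones(p)
    step = n / np.linalg.norm(Z, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Z.T @ (Z @ c - y) / n
        # prox of lam*sum(c) + indicator(c >= 0): shift down by step*lam, clip at zero
        c = np.maximum(c - step * (grad + lam), 0.0)
    return c * b0
```

A design appeal relevant to this page: because the garrotte never flips the sign of the initial estimate and zeroes variables by driving \(c_j\) to the boundary, its shrinkage behaviour is somewhat easier to reason about when tuning `lam` than the raw lasso's.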

## Degrees-of-freedom penalties

See degrees of freedom.
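
For the lasso specifically there is a happy ending here: the number of nonzero coefficients is an unbiased estimate of the degrees of freedom (Zou, Hastie and Tibshirani 2007), which licenses \(C_p\)/AIC-style tuning of the penalty without cross-validation. A sketch assuming a known noise variance; the ISTA solver, the candidate grid, and the nonzero threshold are my own illustrative choices:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = b - step * X.T @ (X @ b - y) / n
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

def select_lam_cp(X, y, lams, sigma2):
    """Pick lam minimising a Mallows-Cp criterion RSS + 2*sigma2*df,
    with df estimated by the number of nonzero coefficients."""
    best_cp, best_lam, best_b = np.inf, None, None
    for lam in lams:
        b = lasso_ista(X, y, lam)
        df = int((np.abs(b) > 1e-8).sum())
        cp = ((y - X @ b) ** 2).sum() + 2 * sigma2 * df
        if cp < best_cp:
            best_cp, best_lam, best_b = cp, lam, b
    return best_lam, best_b
```

In practice \(\sigma^2\) is unknown and must itself be estimated, which is where the circularity with model selection reappears.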
