
Boosting, bagging, voting

Ensemble methods: combining predictions from many simple learners to get sophisticated predictions.

Fast to train, fast to use. Gets you results. May not get you answers. So, like neural networks but from the previous hype cycle.
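
A minimal sketch of the two flavours mentioned above, using scikit-learn and synthetic data (my illustration, not part of the original notes): bagging averages many copies of the same weak learner fit on bootstrap resamples, while voting takes a majority vote over heterogeneous learners.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many decision trees (the default base learner), each fit on a
# bootstrap resample, predictions aggregated.
bagger = BaggingClassifier(n_estimators=100, random_state=0)

# Voting: heterogeneous base learners, hard majority vote on their predictions.
voter = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",
)

for name, model in [("bagging", bagger), ("voting", voter)]:
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```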

Jeremy Kun: Why Boosting Doesn’t Overfit:

Boosting, which we covered in gruesome detail previously, has a natural measure of complexity represented by the number of rounds you run the algorithm for. Each round adds one additional “weak learner” weighted vote. So running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn’t overfit on many datasets. In fact, and this is a shocking fact, researchers observed that Boosting would hit zero training error, they kept running it for more rounds, and the generalization error kept going down! It seemed like the complexity could grow arbitrarily without penalty. […] this phenomenon is a fact about voting schemes, not boosting in particular.
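
A hedged sketch of the observation in that quote, using scikit-learn's AdaBoostClassifier as the boosting implementation (my choice, not Kun's code): track training and test error round by round, and on many datasets the training error hits zero while the test error keeps drifting down.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Each round adds one more decision-stump "weak learner" to the weighted vote.
booster = AdaBoostClassifier(n_estimators=500, random_state=1)
booster.fit(X_train, y_train)

# staged_score yields the accuracy after each boosting round.
train_errors = [1 - s for s in booster.staged_score(X_train, y_train)]
test_errors = [1 - s for s in booster.staged_score(X_test, y_test)]

for rounds in (10, 100, 500):
    print(rounds, train_errors[rounds - 1], test_errors[rounds - 1])
```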

Questions

Random trees, forests, jungles
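
A random forest is essentially bagging of decision trees with extra per-split feature subsampling; a minimal scikit-learn sketch (again my own placeholder illustration, reusing the synthetic data setup from above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Bagged trees, each split considering a random subset of features.
forest = RandomForestClassifier(n_estimators=200, random_state=2)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))
```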

Implementations

Refs