I am looking for papers or texts that compare and discuss (either empirically or theoretically):
- Decision-tree ensemble and boosting algorithms, such as Random Forests, and AdaBoost or GentleBoost applied to decision trees.
- Deep learning methods such as Restricted Boltzmann Machines, Hierarchical Temporal Memory, Convolutional Neural Networks, etc.
More specifically, does anybody know of a text that discusses or compares these two families of ML methods in terms of speed, accuracy, or convergence? I am also looking for texts that explain or summarize the differences (e.g. pros and cons) between the models or methods in the second group.
Any pointers or answers addressing such comparisons directly would be greatly appreciated.
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I’m also not sure how to compare methods like boosting and DL, since boosting is really a meta-algorithm wrapped around a base learner rather than a single method. Which base learners are you using with the boosting?
In general, DL techniques can be described as stacks of encoder/decoder layers. Unsupervised pre-training trains each layer in turn: encode the signal, decode it, and minimize the reconstruction error. Fine-tuning can then be used to get better performance (e.g. with stacked denoising autoencoders you can fine-tune the whole stack with back-propagation).
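To make the layer-wise idea concrete, here is a minimal sketch of a single denoising-autoencoder layer in plain NumPy: corrupt the input, encode/decode it, and descend on the reconstruction error against the clean input. The layer sizes, learning rate, and noise level are illustrative assumptions, not values from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 20 features in [0, 1] (stand-in for a real dataset).
X = rng.random((200, 20))

n_in, n_hidden = X.shape[1], 8
W = rng.normal(0, 0.1, (n_in, n_hidden))  # tied weights: decoder uses W.T
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, noise_level, epochs = 0.5, 0.3, 200
errors = []
for _ in range(epochs):
    # 1. Corrupt the input (the "denoising" part): zero out random entries.
    mask = rng.random(X.shape) > noise_level
    X_noisy = X * mask

    # 2. Encode, then decode.
    H = sigmoid(X_noisy @ W + b_enc)      # hidden code
    X_rec = sigmoid(H @ W.T + b_dec)      # reconstruction

    # 3. Reconstruction error is measured against the *clean* input.
    err = X_rec - X
    errors.append(np.mean(err ** 2))

    # 4. Gradient step (tied weights get contributions from both paths).
    d_rec = err * X_rec * (1 - X_rec)          # dL/d(decoder pre-activation)
    d_hid = (d_rec @ W) * H * (1 - H)          # dL/d(encoder pre-activation)
    grad_W = X_noisy.T @ d_hid + d_rec.T @ H
    W -= lr * grad_W / X.shape[0]
    b_enc -= lr * d_hid.mean(axis=0)
    b_dec -= lr * d_rec.mean(axis=0)
```

Stacking would repeat this on the learned codes `H` of the previous layer; the reconstruction error `errors[-1]` dropping below `errors[0]` is the signal that the layer has learned something about the input distribution.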
One good starting point for DL theory is:
as well as these:
(sorry, had to delete last link due to SPAM filtration system)
I didn’t include any information on RBMs, but they are closely related (though personally I found them a little harder to understand at first).