Why not always use ensemble learning?

It seems to me that ensemble learning will always give better predictive performance than a single learning hypothesis.

So, why don’t we use them all the time?

My guess is that it is because of computational limitations? (Even then, ensembles typically use weak learners, so I don't know.)


In general it is not true that an ensemble will always perform better. There are several ensemble methods, each with its own strengths and weaknesses; which one to use depends on the problem at hand.

For example, if you have models with high variance (they over-fit your data), then you are likely to benefit from bagging. If you have biased models, it is better to combine them with boosting. There are also different strategies for forming ensembles; the topic is too broad to cover in one answer.
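As a rough illustration of the variance case, here is a minimal scikit-learn sketch (the dataset and hyperparameters are my own illustrative choices, not from the answer): a single deep decision tree over-fits, while bagging averages many trees fit on bootstrap samples, which tends to reduce variance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification problem (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single unpruned tree: low bias, high variance (over-fits the training set).
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Bagging: fit many trees on bootstrap resamples and average their votes,
# which smooths out the variance of the individual trees.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        random_state=0).fit(X_tr, y_tr)

print(f"single tree accuracy:  {tree.score(X_te, y_te):.3f}")
print(f"bagged trees accuracy: {bag.score(X_te, y_te):.3f}")
```

On most runs of a setup like this, the bagged ensemble beats the single tree on held-out data; with a low-variance (biased) base model, the same procedure would buy you little.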

But my point is: if you use the wrong ensemble method for your setting, you are not going to do better. For example, using bagging with a biased model is not going to help.

Also, if you need to work in a probabilistic setting, ensemble methods may not serve you well. It is known that boosting (in its most popular forms, such as AdaBoost) delivers poor probability estimates. That is, if you would like a model that lets you reason about your data, not just classify it, you might be better off with a graphical model.
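The probability-estimate point can be sketched as well: AdaBoost's raw `predict_proba` scores are known to be miscalibrated, and a common fix is to wrap the booster in a post-hoc calibrator such as Platt (sigmoid) scaling. The dataset and parameters below are illustrative assumptions, not from the answer.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary problem (illustrative only).
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Raw AdaBoost probabilities: usable for ranking, but poorly calibrated.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
proba_raw = ada.predict_proba(X_te)[:, 1]

# Post-hoc calibration with Platt (sigmoid) scaling via cross-validation.
cal = CalibratedClassifierCV(
    AdaBoostClassifier(n_estimators=100, random_state=0),
    method="sigmoid",
).fit(X_tr, y_tr)
proba_cal = cal.predict_proba(X_te)[:, 1]

print(f"raw probability range:        [{proba_raw.min():.2f}, {proba_raw.max():.2f}]")
print(f"calibrated probability range: [{proba_cal.min():.2f}, {proba_cal.max():.2f}]")
```

Typically the raw scores cluster in a narrow band around 0.5 rather than spreading toward 0 and 1, which is exactly the "poor probability estimates" problem; calibration recovers usable probabilities at the cost of extra fitting.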

Source: Link, Question Author: Community, Answer Author: jpmuc
