What bagging algorithms are worthy successors to Random Forest?

Boosting algorithms, I would say, have evolved quite well. AdaBoost was introduced in 1995, followed some time later by the Gradient Boosting Machine (GBM). More recently, around 2015, XGBoost was introduced; it is accurate, handles overfitting well, and has won multiple Kaggle competitions. In 2017 LightGBM was introduced by Microsoft, offering significantly lower training time compared to XGBoost. CatBoost was also introduced by Yandex, with native handling of categorical features.

Random Forest was introduced in the early 2000s, but have there been any worthy successors to it? I think that if a bagging algorithm better than Random Forest existed (and could easily be applied in practice), it would have gained attention at places like Kaggle. Also, why did boosting become the more popular ensemble technique? Is it because you can build fewer trees for an optimal prediction?

Answer

xgboost, catboost and lightgbm use some features of Random Forest (random sampling of variables/observations), so I think they are successors of boosting and RF together, taking the best from both. 😉
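To illustrate the point about random sampling, here is a minimal sketch (assuming xgboost and scikit-learn are installed) showing the Random-Forest-style randomization knobs XGBoost exposes: `subsample` draws a random fraction of rows per tree, and `colsample_bytree` a random fraction of columns per tree. The dataset and parameter values are arbitrary, chosen only for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Toy dataset, split into train/test
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    learning_rate=0.1,
    max_depth=4,
    subsample=0.8,          # random row sampling per tree, as in bagging
    colsample_bytree=0.8,   # random column sampling per tree, as in Random Forest
    random_state=0,
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

LightGBM and CatBoost expose analogous row/column sampling parameters, which is why the answer describes them as combining boosting with Random Forest's randomization.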

Attribution
Source: Link, Question Author: Marius, Answer Author: PhilippPro
