Why is the variable importance metric suggested by Breiman specific only to random forests?

In the Random Forest paper they describe a nice way of measuring variable importance: take your validation data, measure the error rate, permute the values of the variable, and re-measure the error rate; the increase in error is that variable's importance.

Question: why is that method specific to Random Forests? I understand that other classifiers (SVM, logistic regression, etc.) don't have the concept of OOB samples, but we can certainly use a regular train-validation split instead.

What am I missing here? Why isn’t this method a common practice?
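To make the question concrete, here is a minimal sketch of the validation-split variant being proposed, for an arbitrary classifier. The dataset, model, and scoring choices are illustrative assumptions, not part of the original post:

```python
# Sketch: permutation importance on a held-out validation set,
# for any fitted classifier (here logistic regression, as an example).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data: 5 features, 3 of them informative.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
base_acc = clf.score(X_val, y_val)  # error rate before permuting

rng = np.random.default_rng(0)
importances = []
for j in range(X_val.shape[1]):
    X_perm = X_val.copy()
    # Permute one feature to break its link with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    # Importance = drop in accuracy after permutation.
    importances.append(base_acc - clf.score(X_perm, y_val))

print([round(v, 3) for v in importances])
```

Nothing in this loop depends on the classifier being a forest, which is exactly what motivates the question.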


Any bagged learner can produce an analogue of the Random Forest importance metric: because each base learner is trained on a bootstrap sample, it has its own out-of-bag observations on which the permutation test can be run, so the importance estimate comes essentially for free from training.

You can't get this kind of built-in feature importance in a common cross-validation scheme, where all the features are used all the time.
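The "any bagged learner" point can be sketched as follows, using scikit-learn's `BaggingClassifier` and its `estimators_samples_` attribute to recover each base learner's out-of-bag rows. The dataset and base learner are illustrative assumptions:

```python
# Sketch: OOB permutation importance for a generic bagged ensemble,
# the analogue of Breiman's Random Forest importance metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=5,
                           n_informative=3, random_state=0)
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        bootstrap=True, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
n, p = X.shape
importances = np.zeros(p)
for est, in_bag in zip(bag.estimators_, bag.estimators_samples_):
    # Rows this base learner never saw during training (its OOB sample).
    oob = np.setdiff1d(np.arange(n), in_bag)
    base = est.score(X[oob], y[oob])
    for j in range(p):
        X_perm = X[oob].copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        # Accumulate the per-learner accuracy drop for feature j.
        importances[j] += base - est.score(X_perm, y[oob])
importances /= bag.n_estimators

print(importances.round(3))
```

The base learner here is a decision tree, but nothing in the loop uses tree-specific machinery; any estimator wrapped in bagging would do.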

Source: Link, Question Author: ihadanny, Answer Author: Firebug
