Baseline for Precision-Related Metrics

When working with ROC-AUC as a metric for binary classification, one often takes 0.5 as the baseline, corresponding to a random classifier (i.e. a data-blind classifier that assigns labels uniformly at random to test instances).

I have read that average precision (or more generally, mean average precision) may be a better metric when the positive class is of greater interest than the negative class. That claim deserves its own question, but setting it aside: what is a reasonable random baseline for average precision?

I am inclined to think that such a random baseline should be P/(P+N) (i.e. the fraction of positives among all instances). How should I go about establishing this baseline?
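As a quick empirical sketch of the intuition above (not part of the original question), one can simulate a data-blind classifier by ranking instances in random order and computing average precision directly. The `average_precision` helper below is an assumed minimal implementation of the standard rank-based AP definition; the observed value should land near the positive fraction P/(P+N):

```python
import random

def average_precision(labels_ranked):
    """AP over a ranked list: mean of precision@k at each rank k
    where a positive instance occurs."""
    hits = 0
    precisions = []
    for k, y in enumerate(labels_ranked, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits

random.seed(0)
n, pos_frac = 100_000, 0.1
labels = [1 if random.random() < pos_frac else 0 for _ in range(n)]
# A data-blind classifier assigns scores independently of the labels,
# which is equivalent to presenting the instances in random order.
random.shuffle(labels)
ap = average_precision(labels)
print(ap)  # close to pos_frac = 0.1
```

For large samples the simulated AP concentrates around the positive prevalence, consistent with taking P/(P+N) as the random baseline.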

Answer

Attribution
Source: Link, Question Author: Amelio Vazquez-Reina, Answer Author: Community
