How would I bias my binary classifier to prefer false positive errors over false negatives?

I’ve put together a binary classifier using Keras’ Sequential model. Among its errors, false negatives occur more frequently than false positives.

This tool is for a medical application, where I’d prefer false positives so as to err on the side of caution.

How might I try to tweak the model to prefer one class over the other?


A standard way to go about this is as follows:

  1. As mentioned in Dave’s answer, instead of taking the binary predictions of the Keras classifier, use the scores or logits — i.e. you need a confidence value for the positive class rather than a hard prediction of “1” for the positive class and “0” for the negative class. (Most Keras models have a model.predict() method that returns the confidence for each class.)

  2. Now plot a ROC curve, for which sklearn has ready-made functions (e.g. sklearn.metrics.roc_curve). This curve plots the true positive rate (TPR) against the false positive rate (FPR), obtained by sweeping a threshold over the predicted confidence and computing the TPR and FPR at each threshold value.

  3. Looking at the ROC curve, select the point you would prefer (i.e. with very few false negatives and an acceptable number of false positives). The threshold that yields this (TPR, FPR) point should be the operating point of your classifier — i.e. apply this threshold to the model’s confidence for class “1” instead of the default 0.5.
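The three steps above can be sketched with scikit-learn. The labels and scores below are made-up stand-ins: in practice `y_true` would be your validation labels and `scores` the output of `model.predict()` on that validation set, and the target TPR is an assumption you would choose for your application.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical validation labels and model confidences for class 1;
# in practice these come from your held-out data and model.predict().
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
scores = np.array([0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9, 0.55, 0.2])

# Step 2: compute the ROC curve (TPR vs. FPR over candidate thresholds).
# thresholds are returned in decreasing order; tpr and fpr are non-decreasing.
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Step 3: pick the highest threshold whose TPR meets a chosen target,
# e.g. catch at least 99% of positives (very few false negatives).
target_tpr = 0.99
chosen = thresholds[np.argmax(tpr >= target_tpr)]  # first index meeting target

# Operate the classifier at the chosen threshold instead of the default 0.5.
y_pred = (scores >= chosen).astype(int)
```

Because the threshold is picked from validation data, it is worth checking the resulting false-positive rate on a separate test set before deploying it.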

Source: Link, Question Author: PhlipPhlops, Answer Author: AruniRC