I have a Dynamic Naive Bayes model trained on a couple of temporal variables. The output of the model is the predicted P(Event) at t+1, estimated at each time step. The plot of P(Event) over time is given in the figure below. In this figure, the black line represents P(Event) as predicted by my model; the horizontal red line represents the prior probability of the event happening; and the dotted vertical lines mark the (five) event occurrences in the time series.
Ideally, I wish to see the predicted P(Event) peak prior to observing an event and remain close to zero when there is no prospect of an event.
I want to be able to report how well my model (the black line) performs in predicting the event occurrences. An obvious candidate to compare my model against is the prior probability of the event (the red line), which, if used as a predictor, would predict the same probability value for all time steps.
What is the best formal method to achieve this comparison?
P.S.: I am currently using the (intuitive) scoring coded below, where a lower overall score indicates better prediction performance. I found that it is actually quite difficult to beat the prior with this scoring:
    # Get prediction performance: accumulate the absolute error of each
    # predicted probability against the 0/1 outcome, for both the model
    # and the prior.
    model_score = 0
    prior_score = 0
    for t in range(len(timeSeries)):
        if timeSeries[t] == event:  # event has happened
            cur_model_score = 1 - prob_prediction[t]
            cur_prior_score = 1 - prior
        else:  # no event
            cur_model_score = prob_prediction[t]
            cur_prior_score = prior
        model_score += abs(cur_model_score)
        prior_score += abs(cur_prior_score)
You can create a ROC curve. For a given threshold p between 0 and 1, predict that the event is going to happen whenever the predicted probability is greater than p. Then calculate the true positive rate (TPR) and false positive rate (FPR), which gives you a single point on the ROC curve.
By varying p between zero and one you obtain the entire curve. E.g. for p < 0.005 the prior-based predictor will predict that the event happens at every time step.
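The threshold sweep above can be sketched as follows. This is a minimal illustration, not your actual pipeline: `y_true` (1 where an event occurred) and `probs` (the model's predicted P(Event) per time step) are hypothetical stand-ins for your own arrays.

    # Sketch: build ROC points by sweeping the threshold p.
    def roc_points(y_true, probs, thresholds):
        """Return a list of (FPR, TPR) pairs, one per threshold."""
        n_pos = sum(y_true)
        n_neg = len(y_true) - n_pos
        points = []
        for p in thresholds:
            # Predict "event" wherever the predicted probability exceeds p.
            tp = sum(1 for y, s in zip(y_true, probs) if s > p and y == 1)
            fp = sum(1 for y, s in zip(y_true, probs) if s > p and y == 0)
            points.append((fp / n_neg, tp / n_pos))
        return points

    # Toy example with five time steps and two events.
    y_true = [0, 0, 1, 0, 1]
    probs = [0.1, 0.2, 0.9, 0.3, 0.8]
    print(roc_points(y_true, probs, [0.0, 0.5, 1.0]))

Note that a constant predictor such as the prior collapses onto just two ROC points: (1, 1) for thresholds below its value and (0, 0) for thresholds at or above it, so a model whose curve bows above the diagonal is doing strictly better.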
For more, see: