# Evaluating Time Series Prediction Performance

I have a Dynamic Naive Bayes Model trained on a couple of temporal variables. The model outputs a prediction of `P(Event) @ t+1`, estimated at each `t`.

The plot of `P(Event)` versus `time` is shown in the figure below. In this figure, the black line represents `P(Event)` as predicted by my model; the horizontal red line represents the prior probability of the event happening; and the dotted vertical lines mark the (five) event occurrences in the time series.

Ideally, I wish to see the predicted `P(Event)` peak shortly before each event and remain close to zero when there is no prospect of an event. I want to be able to report how well my model (the black line) performs in predicting the event occurrences. An obvious baseline to compare my model against is the prior probability of the event (the red line), which, if used as a predictor, would predict the same probability value for all `t`.

What is the best formal method to achieve this comparison?

P.S.: I am currently using the (intuitive) scoring coded below, where a lower overall score indicates better prediction performance. I found that it is actually quite difficult to beat the prior with this scoring:

```python
# Get prediction performance: a lower total score means better predictions
model_score = 0.0
prior_score = 0.0

for t in range(len(timeSeries)):
    if timeSeries[t] == event:      # event has happened at t
        cur_model_score = 1 - prob_prediction[t]
        cur_prior_score = 1 - prior
    else:                           # no event at t
        cur_model_score = prob_prediction[t]
        cur_prior_score = prior

    model_score += abs(cur_model_score)
    prior_score += abs(cur_prior_score)
```
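For concreteness, here is a self-contained version of that same scoring on a small synthetic example (the series, predictions, and prior below are made up purely for illustration). It makes explicit that the score is the sum of absolute errors `|prediction - outcome|` over all time steps, computed once for the model and once for the constant prior:

```python
# Self-contained sketch of the scoring above on hypothetical data.
# All values are invented for illustration only.
event = 1
timeSeries = [0, 0, 1, 0, 0, 0, 1, 0]               # hypothetical 0/1 series
prob_prediction = [0.1, 0.2, 0.8, 0.1,
                   0.05, 0.2, 0.7, 0.1]             # hypothetical model output

# Prior = empirical event frequency (here 2/8 = 0.25)
prior = sum(1 for x in timeSeries if x == event) / len(timeSeries)

# Sum of absolute errors between predicted probability and 0/1 outcome
model_score = sum(abs((x == event) - p)
                  for x, p in zip(timeSeries, prob_prediction))
prior_score = sum(abs((x == event) - prior) for x in timeSeries)

print(model_score, prior_score)  # lower is better
```

On this toy data the model does beat the prior, which suggests the difficulty I observe comes from my real predictions rather than from the scoring code itself.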