How should one combine forecasts when the response variables in the forecasting models are different?


In forecast combination, one popular solution is based on an information criterion. Taking, for example, the Akaike information criterion $AIC_j$ estimated for model $j$, one computes the difference of each $AIC_j$ from $AIC^* = \min_j{AIC_j}$; then $RP_j = e^{(AIC^*-AIC_j)/2}$ can be interpreted as the relative likelihood that model $j$ is the true one. The weights are then defined as

$$w_j = \frac{RP_j}{\sum_j RP_j}$$
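The weight computation above can be sketched as follows (the AIC values in the example are hypothetical, chosen only for illustration):

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values.

    Each model's relative likelihood is exp((AIC_min - AIC_j) / 2);
    weights are these likelihoods normalized to sum to 1.
    """
    best = min(aics)
    rel = [math.exp((best - a) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three competing forecasting models
weights = akaike_weights([100.0, 102.0, 110.0])
print(weights)
```

Note that only AIC *differences* enter the formula, so any additive constant shared by all models cancels, which is what motivates the baseline idea tried below.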


The difficulty I am trying to overcome is that the models are estimated on differently transformed response (endogenous) variables. For example, some models are based on annual growth rates and others on quarter-to-quarter growth rates, so the extracted $AIC_j$ values are not directly comparable.

Attempted solution

Since all that matters is the difference of $AIC$s, one could take a base model's $AIC$ (for example, I tried lm(y ~ -1), the model without any parameters), hoping it is invariant to the response-variable transformation, and then compare the difference between the $j$th model's $AIC$ and the base model's $AIC$. The weak point seems to remain, however: the difference is still affected by the transformation of the response variable.
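A minimal sketch of this baseline idea, assuming Gaussian errors so that AIC can be computed (up to an additive constant) from the residual sum of squares; the function names and the toy data are my own, not from the question:

```python
import numpy as np

def aic_gaussian(y, resid, k):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2*(k + 1).

    k is the number of regression coefficients; +1 accounts for the
    estimated error variance.
    """
    n = len(y)
    rss = np.sum(resid ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

def delta_aic(y, fitted, k):
    """AIC of a model minus AIC of the empty baseline lm(y ~ -1).

    The baseline has no regressors, so its fitted values are all zero
    and its residuals are y itself. Both AICs are computed on the SAME
    (transformed) response y, which is exactly why the transformation
    does not cancel out of the difference.
    """
    base = aic_gaussian(y, y, 0)
    model = aic_gaussian(y, y - fitted, k)
    return model - base

# Toy example: a mean-only model against the empty baseline
y = np.arange(1.0, 11.0)
print(delta_aic(y, np.full(10, y.mean()), 1))
```

Because both likelihoods are evaluated on the transformed scale, $\Delta AIC$ still carries the transformation's imprint, which is the weak point noted above.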

Concluding remarks

Note that an option like "estimate all the models on the same response variable" is possible, but very time consuming. I would like to look for a quick cure before resorting to that painful decision, if there is no other way to resolve the problem.


I think one of the most reliable methods for comparing models is to cross-validate out-of-sample error (e.g., MAE). You will need to back-transform each model's forecasts of the response variable onto a common scale so you compare apples to apples.
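A sketch of this idea as rolling-origin (one-step-ahead) cross-validation; the function and the naive forecaster are illustrative assumptions, and any back-transformation from growth rates to levels is assumed to happen inside the supplied `fit_predict` callable:

```python
import numpy as np

def rolling_origin_mae(y, fit_predict, min_train=20):
    """One-step-ahead rolling-origin CV error.

    y is the series on the ORIGINAL scale. fit_predict(history) must
    return a forecast for the next point on that same scale, i.e. it
    back-transforms internally if the model works on growth rates.
    """
    errors = []
    for t in range(min_train, len(y)):
        yhat = fit_predict(y[:t])
        errors.append(abs(y[t] - yhat))
    return float(np.mean(errors))

# Hypothetical example: a naive "last value" forecaster on a linear trend;
# each one-step error equals the constant step size of 1.0
series = np.cumsum(np.ones(30))
print(rolling_origin_mae(series, lambda hist: hist[-1], min_train=5))
```

Because all models are scored on the same original-scale forecasts, this sidesteps the incomparable-AIC problem entirely, at the cost of repeated refitting.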

Source : Link , Question Author : Dmitrij Celov , Answer Author : Zach
