Statistical inference under model misspecification

I have a general methodological question. It might have been answered before, but I have not been able to locate the relevant thread. I would appreciate pointers to possible duplicates.

(Here is an excellent related thread, but with no answer. This one is similar in spirit and even has an answer, but the answer is too specific for my purposes. This one is also close; I discovered it after posting the question.)


The theme is how to do valid statistical inference when the model formulated before seeing the data fails to adequately describe the data generating process. The question is very general, but I will offer a particular example to illustrate the point. However, I expect the answers to focus on the general methodological question rather than nitpick the details of the particular example.


Consider a concrete example: in a time series setting, I assume the data generating process to be
$$y_t = \beta_0 + \beta_1 x_t + u_t \tag{1}$$
with $u_t \sim i.i.N(0,\sigma^2_u)$. I aim to test the subject-matter hypothesis that $\frac{dy}{dx}=1$. I cast this in terms of model (1) to obtain a workable statistical counterpart of my subject-matter hypothesis, and this is
$$H_0\colon \beta_1 = 1.$$
So far, so good. But when I observe the data, I discover that the model does not adequately describe them. Let us say there is a linear trend, so that the true data generating process is
$$y_t = \gamma_0 + \gamma_1 x_t + \gamma_2 t + v_t \tag{2}$$
with $v_t \sim i.i.N(0,\sigma^2_v)$.

How can I do valid statistical inference on my subject-matter hypothesis $\frac{dy}{dx}=1$?

  • If I use the original model (1), its assumptions are violated and the estimator of $\beta_1$ does not have the nice distribution it otherwise would. Therefore, I cannot test the hypothesis using the t-test.

  • If, having seen the data, I switch from model (1) to model (2) and change my statistical hypothesis from $H_0\colon \beta_1=1$ to $H_0\colon \gamma_1=1$, the model assumptions are satisfied; I get a well-behaved estimator of $\gamma_1$ and can test $H_0$ with no difficulty using the t-test (see the sketch after this list).
    However, the switch from (1) to (2) is informed by the very data set on which I wish to test the hypothesis. This makes the estimator's distribution (and thus also the inference) conditional on the change in the underlying model, which is itself due to the observed data. Clearly, introducing such conditioning is not satisfactory.
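To make the two cases concrete, here is a minimal simulation sketch in Python (numpy and statsmodels; the particular coefficient values and the co-movement between $x_t$ and the trend are assumptions chosen for illustration, not part of the question). Data are generated from (2) with $\gamma_1=1$; the t-test of the unit coefficient is then run in the misspecified model (1) and in the correct model (2):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 200
t = np.arange(T, dtype=float)

# True DGP is model (2) with gamma_1 = 1. The regressor x_t is made to
# co-move with the trend so that omitting t distorts the estimate of beta_1.
x = 0.05 * t + rng.normal(size=T)
y = 2.0 + 1.0 * x + 0.10 * t + rng.normal(size=T)

# Model (1): trend omitted -- misspecified; the residuals inherit the trend
# and the t-test of beta_1 = 1 is not valid.
fit1 = sm.OLS(y, sm.add_constant(x)).fit()
print(fit1.t_test("x1 = 1"))

# Model (2): trend included -- assumptions hold and the t-test behaves well.
X2 = sm.add_constant(np.column_stack([x, t]))
fit2 = sm.OLS(y, X2).fit()
print(fit2.t_test("x1 = 1"))
```

The second test is well behaved only mechanically: the decision to run it was made after inspecting this very sample, which is exactly the conditioning the second bullet objects to.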

Is there a good way out? (If not frequentist, then maybe some Bayesian alternative?)

Answer

The way out is literally an out-of-sample test, a true one: not the kind where you split the sample into a training set and a hold-out set as in cross-validation, but genuine prediction. This works very well in the natural sciences; in fact, it is the only way they work. You build a theory on some data, and then you are expected to come up with a prediction of something that has not been observed yet. Obviously, this does not work in most social (so-called) sciences, such as economics.

In industry this works just as it does in the sciences. For instance, if a trading algorithm does not work, you will eventually lose money, and then you abandon it. Cross-validation and training data sets are used extensively in development and in the decision to deploy the algorithm, but once it is in production, it is all about making money or losing it. A very simple out-of-sample test.
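Read in terms of the running example, one way to act on this advice is to freeze the model specification on the data that motivated it and test the hypothesis only on observations that arrive afterwards. Here is a minimal sketch under that assumption (same simulated DGP as above; the split into "old" and "new" samples is illustrative, not prescribed by the answer):

```python
import numpy as np
import statsmodels.api as sm

def simulate(T, t0, rng):
    """Draw T observations from DGP (2), starting at time index t0."""
    t = np.arange(t0, t0 + T, dtype=float)
    x = 0.05 * t + rng.normal(size=T)
    y = 2.0 + 1.0 * x + 0.10 * t + rng.normal(size=T)
    return t, x, y

rng = np.random.default_rng(7)

# Stage 1: these data are inspected, the trend is discovered, and
# specification (2) is frozen. No hypothesis test is reported from them.
t_old, x_old, y_old = simulate(200, 0, rng)

# Stage 2: genuinely new data arrive after the specification is frozen,
# so the t-test of gamma_1 = 1 here is not conditional on the model switch.
t_new, x_new, y_new = simulate(200, 200, rng)
X_new = sm.add_constant(np.column_stack([x_new, t_new]))
print(sm.OLS(y_new, X_new).fit().t_test("x1 = 1"))
```

This mimics the natural-science workflow the answer describes: the theory (here, specification (2) with $\gamma_1 = 1$) is committed to before the data used to test it exist.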

Attribution
Source: Link, Question Author: Richard Hardy, Answer Author: Aksakal
