Why is the assumption of a normally distributed residual relevant to a linear regression model? [duplicate]

Isn’t it just the assumptions of no autocorrelation, unbiasedness (exogeneity), and homoscedasticity that are relevant to proving the efficiency and the unbiasedness of OLS estimators?

How does the normality in the distribution of the residuals play in here?


The usual small-sample inference (confidence intervals, prediction intervals, hypothesis tests) relies on normality. You can of course make different parametric assumptions instead.
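A minimal sketch of that point (my own illustration, not from the original answer): with normal errors and a small sample, the usual t-based confidence interval for the OLS slope achieves its nominal coverage exactly; the simulation below checks this. The sample size, slope, and critical value are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, t_crit = 10, 2.0, 2.306   # t_{0.975} with n - 2 = 8 degrees of freedom
x = np.linspace(0.0, 1.0, n)
xc = x - x.mean()                  # centered regressor
reps, covered = 2000, 0
for _ in range(reps):
    y = beta * x + rng.normal(size=n)          # normal errors
    b = (xc @ y) / (xc @ xc)                   # OLS slope estimate
    resid = y - y.mean() - b * xc              # residuals from the fitted line
    se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
    covered += (b - t_crit * se) <= beta <= (b + t_crit * se)
print(covered / reps)              # close to the nominal 0.95
```

Under non-normal errors the same interval is only asymptotically justified, which is exactly why normality matters for small-sample inference.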

While Gauss-Markov gives you BLUE, the problem is that if you’re far enough from normality, all linear estimators may be bad, so choosing the best among them may be nearly useless.
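To make that concrete, here is a hedged sketch (my own example, using Cauchy errors as an extreme heavy-tailed case; note Cauchy errors actually violate the finite-variance premise of Gauss-Markov, which is the point): the sample mean, a linear estimator of location, is wildly unstable, while the nonlinear sample median behaves well.

```python
import numpy as np

rng = np.random.default_rng(1)
reps, n, mu = 1000, 100, 5.0
means = np.empty(reps)
medians = np.empty(reps)
for i in range(reps):
    sample = mu + rng.standard_cauchy(n)   # heavy-tailed errors around mu
    means[i] = sample.mean()               # linear estimator
    medians[i] = np.median(sample)         # nonlinear estimator

# Interquartile range of each estimator across replications:
iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
print(iqr(means), iqr(medians))            # the mean's spread is far larger
```

The mean never concentrates (the sample mean of Cauchy draws is itself Cauchy), so "best linear" buys nothing here; the median concentrates around mu as n grows.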

Source: Link, Question Author: HeyThereGur, Answer Author: Glen_b