# Type of residuals to check linear regression assumptions

I would like to better understand some recommendations usually given for choosing one type of residuals over another when checking the assumptions of a linear model.

• Let's define the raw residuals as the classical estimated errors $$\hat{\epsilon}_i = y_i - \hat{y}_i$$
• The standardised residuals are defined by $$\frac{\hat{\epsilon}_i}{\hat{\sigma}\sqrt{1 - h_{ii}}}$$ where $h_{ii}$ is the $i$-th diagonal element of the hat matrix (the leverage)
• The studentized residuals are defined by $$\frac{\hat{\epsilon}_i}{\hat{\sigma}_{(i)}\sqrt{1 - h_{ii}}}$$ where $\hat{\sigma}_{(i)}$ is estimated with observation $i$ deleted
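
For concreteness, here is a minimal numpy sketch (on simulated data, with hypothetical variable names) computing all three types from the definitions above; the leave-one-out identity used for $\hat{\sigma}_{(i)}^2$ avoids refitting the model $n$ times:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y = 1 + 2x + noise (illustrative only)
n = 50
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept
y = 1 + 2 * x + rng.normal(scale=1.5, size=n)

p = X.shape[1]                                # number of fitted parameters
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_hat

# Leverages h_ii: diagonal of the hat matrix H = X (X'X)^{-1} X'
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)

raw = y - y_hat                               # raw residuals
sigma2_hat = raw @ raw / (n - p)              # usual estimate of sigma^2

# Standardised: divide by the estimated sd of the raw residual
standardized = raw / np.sqrt(sigma2_hat * (1 - h))

# Studentized: same, but with sigma_(i)^2 estimated without observation i,
# via the identity (n-p) sigma2_hat = (n-p-1) sigma2_(i) + raw_i^2 / (1 - h_ii)
sigma2_i = ((n - p) * sigma2_hat - raw**2 / (1 - h)) / (n - p - 1)
studentized = raw / np.sqrt(sigma2_i * (1 - h))
```

The two scaled types are deterministic monotone transforms of each other for each $i$: $t_i = r_i\sqrt{(n-p-1)/(n-p-r_i^2)}$.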

I perfectly understand why standardised or studentized residuals are preferable to raw residuals when checking for outliers, for example. But for other “post-fit checks”, I do not always see the difference. Here are my questions:

1. When checking for normality of errors, does it make any difference to use one type of residuals or another? Some authors go for a QQ-plot of raw residuals against theoretical normal quantiles, while others recommend a QQ-plot of studentized residuals against theoretical $$t$$ quantiles, but these sound equivalent to me. (I cannot imagine any situation where the two plots would lead to different conclusions.)

2. When checking for constant variance, we often read that it is slightly better to use standardised or studentized residuals rather than raw residuals. I guess this is because $$V(\hat{\epsilon}) = \sigma^2 (I - P_X),$$ i.e. the variance of the raw residuals is non-constant by construction: it is $\sigma^2(1 - h_{ii})$ for observation $i$. Consequently, if a quick visual inspection of raw residuals (plotted against fitted values) reveals slight heteroscedasticity, we cannot tell whether it comes from this inherent non-constant variance or from a true phenomenon in the data. Is this the reason?

3. Except for autocorrelation checks, is there any reason to prefer studentized residuals over standardised residuals (for normality, heteroscedasticity and outliers checks)?

1. In my experience, you should not reach different conclusions when assessing the normality of residuals, whichever type you plot.

2. Some authors note that standardized residuals with |z| > 2.00 should be assessed. However, note that the calculation of standardized residuals (ZRESID) is based on the generally untenable assumption that all residuals have the same variance. To avoid making this assumption, it is suggested that studentized residuals (SRESID) be used instead: essentially, each residual is divided by an estimate of its own standard deviation, computed with that observation excluded.

3. To be frank, I am not sure, but I wanted to add a couple of notes for consideration. In terms of autocorrelation: it normally only makes sense to test for it when your observations have some natural ordering (e.g. in time or space). Also, when checking for outliers and influential cases, you might consider using Cook's D (distance) instead (Cook, 1977). This measure was designed to identify an influential observation or an outlier whose influence is due to its status on the independent variable(s), the dependent variable, or both.
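
Cook's D ties neatly back to the residual types above: it can be written in closed form from the standardized residual and the leverage, $D_i = r_i^2 \, h_{ii} / \big(p\,(1-h_{ii})\big)$. A sketch on simulated data (hypothetical names; one influential point planted deliberately), checked against the leave-one-out definition:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with one planted influential point
n = 30
x = rng.uniform(0, 10, n)
x[0] = 14.0                                   # high leverage: far from the rest
X = np.column_stack([np.ones(n), x])
y = 1 + 2 * x + rng.normal(scale=1.0, size=n)
y[0] += 8.0                                   # and an outlying response

p = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
h = np.diag(X @ XtX_inv @ X.T)                # leverages
beta = XtX_inv @ X.T @ y
raw = y - X @ beta
s2 = raw @ raw / (n - p)

# Cook's D in closed form, from standardized residual and leverage
r = raw / np.sqrt(s2 * (1 - h))
cooks = r**2 * h / (p * (1 - h))

# Sanity check against the definition: refit with each observation deleted
cooks_direct = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    d = beta - beta_i
    cooks_direct[i] = d @ (X.T @ X) @ d / (p * s2)
```

The closed form shows why Cook's D flags points whose influence comes from the predictors (large $h_{ii}$), the response (large $r_i$), or both.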

References:

Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics, 19(1), 15-18.

Pedhazur, E. J. (1997). Multiple regression in behavioral research: Explanation and prediction. New York, NY: Thompson Learning, Inc.