A few months ago I posted a question about homoscedasticity tests in R on SO, and Ian Fellows answered that (I’ll paraphrase his answer very loosely):
Homoscedasticity tests are not a good tool for checking the assumptions of your model. With small samples you don’t have enough power to detect departures from homoscedasticity, while with big samples you have “plenty of power”, so the tests will flag even trivial departures from equality.
His great answer came as a slap in the face. I used to check the normality and homoscedasticity assumptions every time I ran an ANOVA.
What is, in your opinion, best practice when checking ANOVA assumptions?
In applied settings it is typically more important to know whether the degree of any violation of assumptions is problematic for inference, not merely whether a violation can be detected.
Significance tests of assumptions are rarely of interest in large samples, because most inferential tests are robust to mild violations of their assumptions.
One of the nice features of graphical assessments of assumptions is that they focus attention on the degree of violation and not the statistical significance of any violation.
However, it’s also possible to focus on numeric summaries of your data that quantify the degree of violation of assumptions rather than its statistical significance (e.g., skewness values, kurtosis values, the ratio of the largest to the smallest group variance, etc.). You can also get standard errors or confidence intervals on these values, which will get smaller with larger samples. This perspective is consistent with the general idea that statistical significance is not equivalent to practical importance.
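To make that concrete, here is a minimal sketch of such numeric summaries, written in plain Python so it is self-contained (in R you would reach for things like `e1071::skewness()` or `tapply(y, g, var)` instead). The data values and the rule-of-thumb threshold in the comment are illustrative assumptions, not part of the original answer:

```python
import statistics

def skewness(x):
    # Moment-based sample skewness: mean of cubed standardized
    # deviations; roughly 0 for symmetric data.
    n = len(x)
    m = statistics.fmean(x)
    s = statistics.pstdev(x)
    return sum((v - m) ** 3 for v in x) / (n * s ** 3)

def excess_kurtosis(x):
    # Mean of fourth-power standardized deviations, minus 3,
    # so a normal distribution scores approximately 0.
    n = len(x)
    m = statistics.fmean(x)
    s = statistics.pstdev(x)
    return sum((v - m) ** 4 for v in x) / (n * s ** 4) - 3

def variance_ratio(groups):
    # Ratio of the largest to the smallest group variance; a common
    # (and debatable) rule of thumb starts worrying above ~4.
    variances = [statistics.variance(g) for g in groups]
    return max(variances) / min(variances)

# Hypothetical three-group ANOVA data, purely for illustration.
groups = [
    [4.1, 5.2, 4.8, 5.0, 4.6],
    [6.3, 5.9, 6.8, 6.1, 6.5],
    [5.0, 7.2, 4.4, 6.6, 5.8],
]
pooled = [v for g in groups for v in g]
print("skewness:", round(skewness(pooled), 3))
print("excess kurtosis:", round(excess_kurtosis(pooled), 3))
print("variance ratio:", round(variance_ratio(groups), 3))
```

The point is that these numbers describe *how far* the data depart from the assumptions, on a scale you can judge directly, instead of collapsing the question into a p-value that mostly reflects sample size.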