What statistical methods are archaic and should be omitted from textbooks? [closed]

In answering a question about a confidence interval for a binomial proportion, I pointed out that the normal approximation is unreliable and archaic. It should not be taught as a method in its own right, although there may be an argument for including it as part of a lesson about what makes a method adequate.
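To make the unreliability concrete, here is a minimal sketch (my own illustration, not from the linked answer) comparing the normal-approximation (Wald) interval with the Wilson score interval for a small count, using `scipy.stats.norm` for the quantile:

```python
from math import sqrt
from scipy.stats import norm

def wald_interval(x, n, conf=0.95):
    """Normal-approximation (Wald) interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval: stays inside [0, 1] and has better coverage."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# With 1 success in 30 trials, the Wald lower endpoint drops below zero,
# an impossible value for a proportion; the Wilson interval does not.
print(wald_interval(1, 30))    # lower endpoint is negative
print(wilson_interval(1, 30))  # lower endpoint stays positive
```

The failure mode shown here (endpoints outside [0, 1], poor coverage near 0 and 1) is exactly why the Wald interval is usually flagged as inadequate.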

What are other ‘standard’ statistical approaches that have passed their use-by date and should be omitted from future editions of textbooks (thereby making space for useful ideas)?


These four would probably rank somewhere in a list of deprecated exercises:

  1. Looking up quantiles of the normal/F/t distributions in a table.
  2. Tests of normality.
  3. Tests of equality of variances before running a two-sample t-test or ANOVA.
  4. Classical (i.e., non-robust) univariate parametric tests and confidence intervals.
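Items 1 and 3 have direct software replacements. A hedged sketch of what the modern workflow looks like in Python with `scipy` (my own illustration; the sample data below are simulated):

```python
import numpy as np
from scipy import stats

# Item 1: quantiles come from software, not a printed table.
z_975 = stats.norm.ppf(0.975)         # two-sided 95% normal quantile
t_975 = stats.t.ppf(0.975, df=20)     # t quantile with 20 df
f_95 = stats.f.ppf(0.95, dfn=3, dfd=30)

# Item 3: skip the variance pre-test entirely and use Welch's t-test
# (equal_var=False), which does not assume equal variances.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=40)
b = rng.normal(0.5, 2.0, size=25)
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(z_975, t_975, f_95, p_value)
```

Making Welch's version the default removes the pretest step whose conditioning is known to distort the overall error rate.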

Statistics has moved into the age of computers and large multivariate datasets. I don't expect this to be rolled back. By necessity, the approaches taught in more advanced courses have in some sense been influenced by Breiman's and Tukey's critiques. The focus has, IMO, permanently shifted towards approaches that require fewer assumptions in order to work. An introductory course should reflect that.
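One canonical example of such an assumption-light, computer-age approach is the bootstrap. A minimal sketch (my own illustration, not from the post) of a percentile bootstrap confidence interval in `numpy`:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=5000, conf=0.95, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic.

    Resamples the data with replacement and reads the interval off the
    empirical distribution of the statistic -- no normality assumption."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    boots = np.array([
        stat(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    alpha = 1 - conf
    return np.quantile(boots, alpha / 2), np.quantile(boots, 1 - alpha / 2)

# A skewed sample, where a normal-theory interval would be questionable.
sample = np.random.default_rng(1).exponential(scale=2.0, size=50)
lo, hi = bootstrap_ci(sample)
print(lo, hi)
```

The same function works unchanged for medians, trimmed means, or correlations, which is precisely the kind of flexibility the assumption-heavy classical recipes lack.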

I think some of these methods could still be taught at a later stage, to students interested in the history of statistical thought.

Source: Link, Question Author: Community, Answer Author: 3 revs
