The Kolmogorov-Smirnov (KS) test assesses how consistent a sample is with a hypothesized distribution. My understanding is that this test can be used to justify whether or not the underlying distribution of a sample is normal. If one can say with high confidence that the underlying distribution of a data source is normal, then z- or t-intervals can be applied to compute confidence intervals for the mean of the data.
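For concreteness, here is a minimal sketch of such a normality check with SciPy. The data and settings are illustrative, and note a caveat: estimating the normal's parameters from the same sample makes the plain KS test anti-conservative (the Lilliefors correction addresses exactly this case).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0, size=500)  # a clearly non-normal sample

# One-sample KS test against a normal with the sample's own moments.
stat, pvalue = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.2e}")
```

With a sample this size from an exponential source, the test rejects normality decisively (tiny p-value).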

My question has to do with the inverse of this problem. Say we have data with a known (non-normal) distribution and want to justify under what parameters the distribution is “normal enough” to use z- and t-intervals for the mean. One way of comparing a distribution to its normal approximation that I see a lot is to compute the Kolmogorov metric between the two distributions, i.e. for a random variable X and its normal approximation X∗∼N(E[X],var[X]) the Kolmogorov metric is

K(X,X∗) = sup_x |F_X(x) − F_{X∗}(x)|

Note that the above is not referring to the KS statistic but rather the maximum discrepancy between two theoretical cdf’s. One famous result using this approach is the Berry-Esseen theorem, which puts an upper bound on the Kolmogorov metric between the distribution of a standardized sum (or sample mean) of i.i.d. draws from an arbitrary distribution with finite third moment and the standard normal.
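As an illustrative sketch of both quantities, the snippet below computes the Kolmogorov metric between X ∼ Exp(1) and its normal approximation X∗ ∼ N(1, 1) on a grid, and evaluates the Berry-Esseen bound for the standardized mean of n draws (the absolute third central moment E|X−1|³ = 12/e − 2 for Exp(1) is worked out analytically; the constant 0.4748 is a published value of the Berry-Esseen constant due to Shevtsova):

```python
import numpy as np
from scipy import stats

# Kolmogorov metric K(X, X*) between X ~ Exp(1) and X* ~ N(E[X], var[X]) = N(1, 1),
# approximated as the max cdf gap over a fine grid.
grid = np.linspace(-5.0, 15.0, 200001)
F_x = stats.expon.cdf(grid)                        # cdf of X
F_star = stats.norm.cdf(grid, loc=1.0, scale=1.0)  # cdf of X*
K = np.max(np.abs(F_x - F_star))
print(f"K(X, X*) ≈ {K:.4f}")  # the sup is attained at x = 0, where F_X = 0

# Berry-Esseen bound on the Kolmogorov metric for the standardized mean of
# n i.i.d. Exp(1) draws: K <= C * rho / (sigma^3 * sqrt(n)).
n = 100
rho = 12 / np.e - 2          # E|X - 1|^3 for Exp(1); sigma = 1
bound = 0.4748 * rho / np.sqrt(n)
print(f"Berry-Esseen bound for n = {n}: {bound:.4f}")
```

Note how crude the bound is: the metric for a single draw is about 0.159, and even at n = 100 the guaranteed bound for the mean is still above 0.11, far larger than the actual discrepancy.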

My question is this: Is there an accepted value for the Kolmogorov metric that has been used to justify sufficient normality for the purposes of confidence intervals? Perhaps an equivalent question would be: Does the Berry-Esseen theorem have any practical application (in an analogous way to the KS test) for justifying normality?

**Answer**

No. Different confidence interval procedures have different sensitivities to normality, and various requirements of the work can influence how much deviation from the theoretical coverage is acceptable.^{\dagger}

For instance, while confidence intervals for the mean that are based on the t-test are known for their robustness to deviations from normality, confidence intervals for variance that are based on the F-test are known for their lack of robustness to such deviations.
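That contrast shows up in a small Monte Carlo sketch (settings here are illustrative): using Exp(1) data, compare the nominal 95\% t-interval for the mean against the one-sample chi-square interval for the variance, the single-sample relative of the F-based procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 30, 5000, 0.05
true_mean, true_var = 1.0, 1.0  # mean and variance of Exp(1)

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)

mean_cover = var_cover = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)
    m, s2 = x.mean(), x.var(ddof=1)
    # t-interval for the mean: m +/- t * s / sqrt(n)
    half = t_crit * np.sqrt(s2 / n)
    mean_cover += (m - half <= true_mean <= m + half)
    # chi-square interval for the variance: [(n-1)s^2/chi_hi, (n-1)s^2/chi_lo]
    var_cover += ((n - 1) * s2 / chi_hi <= true_var <= (n - 1) * s2 / chi_lo)

print(f"t-interval coverage for the mean:      {mean_cover / reps:.3f}")
print(f"chi-square interval coverage for var:  {var_cover / reps:.3f}")
```

Under this non-normal source, the t-interval's coverage stays close to the nominal 95\%, while the variance interval's coverage falls well short of it.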

^{\dagger}Is 94.5\% coverage good enough for a 95\% confidence interval? Under most circumstances, I would say so, but perhaps you have an application where that is inadequate.

**Attribution**
*Source: Link, Question Author: Aaron Hendrickson, Answer Author: Dave*