Is there any statistical test that is parametric and non-parametric?

This question was asked by an interview panel. Is it a valid question?


It is fundamentally difficult to tell exactly what is meant by a “parametric test” and a “non-parametric test”, though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical distinction in some areas between parametric and non-parametric tests.

Just above the table referred to there is a remark:

“… parametric data has an underlying normal distribution …. Anything else is non-parametric.”

It may be an accepted criterion in some areas that either we assume normality and use ANOVA, and this is parametric, or we don’t assume normality and use non-parametric alternatives.
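This rule-of-thumb pairing can be illustrated with `scipy.stats`. The simulated data and the particular test pair (two-sample t-test vs. Mann–Whitney U) are my illustration, not something taken from the answer; they are just a common parametric/non-parametric pair for comparing two groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=30)   # simulated group 1
b = rng.normal(0.5, 1.0, size=30)   # simulated group 2

# Parametric route: the t-test assumes (approximate) normality within groups.
t_stat, t_p = stats.ttest_ind(a, b)

# Non-parametric alternative: rank-based, no normality assumption.
u_stat, u_p = stats.mannwhitneyu(a, b)
```

Both tests address the same practical question (do the two groups differ?), but only the first leans on a normal model for the observations.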

It’s perhaps not a very good definition, and it’s not really correct in my opinion, but it may be a practical rule of thumb. Mostly because the end goal in the social sciences, say, is to analyze data, and what good is it to be able to formulate a parametric model based on a non-normal distribution and then not be able to analyze the data?

An alternative definition is to define “non-parametric tests” as tests that do not rely on distributional assumptions, and parametric tests as anything else.

Both definitions presented above define one class of tests and then define the other class as its complement (anything else). By construction, this rules out that a test can be parametric as well as non-parametric.

The truth is that the latter definition is also problematic. What if there are certain natural “non-parametric” assumptions, such as symmetry, that can be imposed? Will that turn a test statistic that does not otherwise rely on any distributional assumptions into a parametric test? Most would say no!
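A standard example of this situation is the Wilcoxon signed-rank test, which is routinely called non-parametric even though its null hypothesis assumes the differences are distributed symmetrically about zero. The sketch below (my illustration, using simulated non-normal data) contrasts it with a sign test, which drops even the symmetry assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d = rng.laplace(loc=0.3, scale=1.0, size=40)  # paired differences, non-normal

# Wilcoxon signed-rank: usually labelled non-parametric, yet its null
# assumes the differences are symmetric about zero.
w_stat, w_p = stats.wilcoxon(d)

# Sign test via the binomial distribution: only assumes P(d > 0) = 1/2
# under the null, no symmetry needed.
n_pos = int((d > 0).sum())
sign_p = stats.binomtest(n_pos, n=len(d), p=0.5).pvalue
```

The symmetry assumption buys the Wilcoxon test power over the sign test without anyone calling it parametric.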

Hence there are tests in the class of non-parametric tests that are allowed to make some distributional assumptions, as long as they are not “too parametric”. The borderline between “parametric” and “non-parametric” tests has become blurred, but I believe most will maintain that a test is either parametric or non-parametric; perhaps it can be neither, but saying that it is both makes little sense.

Taking a different point of view, many parametric tests are (equivalent to) likelihood ratio tests. This makes a general theory possible, and under suitable regularity conditions we have a unified understanding of the distributional properties of likelihood ratio tests. Non-parametric tests are, on the contrary, not equivalent to likelihood ratio tests per se; there is no likelihood, and without the unifying methodology based on the likelihood we have to derive distributional results on a case-by-case basis.

The theory of empirical likelihood, developed mainly by Art Owen at Stanford, is, however, a very interesting compromise. It offers a likelihood-based approach to statistics (an important point to me, as I regard the likelihood as a more important object than a p-value, say) without the need for typical parametric distributional assumptions. The fundamental idea is a clever use of the multinomial distribution on the empirical data; the methods are very “parametric”, yet valid without restrictive parametric assumptions.
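To make the multinomial idea concrete, here is a minimal sketch (my own, not from the answer) of the empirical-likelihood ratio statistic for a mean. It puts a probability weight on each observation, maximizes the multinomial likelihood subject to the mean constraint via a Lagrange multiplier, and returns −2 log R(μ), which Owen showed is asymptotically χ²(1) under the null:

```python
import numpy as np
from scipy.optimize import brentq

def el_ratio_stat(x, mu):
    """Empirical-likelihood ratio statistic -2*log R(mu) for the mean.

    The optimal weights are w_i = 1 / (n * (1 + lam*(x_i - mu))), where the
    Lagrange multiplier lam solves sum((x_i - mu) / (1 + lam*(x_i - mu))) = 0.
    """
    x = np.asarray(x, dtype=float)
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu outside the convex hull of the data: R(mu) = 0
    eps = 1e-10
    lo = -1.0 / d.max() + eps  # bracket keeps 1 + lam*d_i > 0 for all i
    hi = -1.0 / d.min() - eps
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)    # g is monotone decreasing on (lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))
```

The statistic is zero exactly at the sample mean and grows as μ moves away from it, so one can invert it into a confidence interval by comparing against a χ²(1) quantile, just as with a parametric likelihood ratio test.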

Tests based on empirical likelihood have, IMHO, the virtues of parametric tests and the generality of non-parametric tests. Hence, among the tests I can think of, they come closest to qualifying as parametric as well as non-parametric, though I would not use this terminology.

Source: Link, Question Author: Biostat, Answer Author: NRH
