Is there a useful way to define the “best” confidence interval?

The standard definition of (say) a 95% confidence interval (CI) simply requires that the probability that it contains the true parameter is 95%. Obviously, this does not pin down a unique interval. The language I’ve seen suggests that among the many valid CIs, it usually makes sense to pick something like the shortest one, or a symmetric one, or one that can be computed even when some distribution parameters are unknown, etc. In other words, there seems to be no obvious hierarchy of which CIs are “better” than others. (The small simulation sketch below shows two different intervals that both achieve 95% coverage.)
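To make the non-uniqueness concrete, here is a toy sketch of my own (not from any reference): for a normal mean with known sigma, an equal-tailed 95% interval and an unequal-tailed one (1% miss probability in one tail, 4% in the other) both have exact 95% coverage, but the equal-tailed one is shorter on average. The specific numbers (mu = 10, sigma = 2, n = 25) are arbitrary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_true, sigma, n = 10.0, 2.0, 25   # arbitrary example values
alpha = 0.05
se = sigma / np.sqrt(n)

def ci_equal_tails(xbar):
    z = stats.norm.ppf(1 - alpha / 2)                # 2.5% in each tail
    return xbar - z * se, xbar + z * se

def ci_unequal_tails(xbar, alpha_left=0.01):
    z_lo = stats.norm.ppf(1 - alpha_left)            # 1% miss probability below
    z_hi = stats.norm.ppf(1 - (alpha - alpha_left))  # 4% miss probability above
    return xbar - z_lo * se, xbar + z_hi * se

reps = 20000
cover = {"equal": 0, "unequal": 0}
length = {"equal": 0.0, "unequal": 0.0}
for _ in range(reps):
    xbar = rng.normal(mu_true, se)                   # draw a sample mean
    for name, (lo, hi) in (("equal", ci_equal_tails(xbar)),
                           ("unequal", ci_unequal_tails(xbar))):
        cover[name] += (lo <= mu_true <= hi)
        length[name] += hi - lo

for name in cover:
    print(name, "coverage:", cover[name] / reps, "avg length:", length[name] / reps)

Both intervals report roughly 95% coverage, yet the equal-tailed one is shorter on average, which is exactly the kind of tie-breaking criterion I have in mind.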

However, I thought one equivalent definition of a CI is that it consists of all parameter values such that the null hypothesis that the true parameter equals that value would not be rejected at the corresponding significance level, given the realized sample. This suggests that as long as we choose a test we like, we can automatically construct the CI by inverting that test. And there is a standard preference among tests, based on the concept of a UMP test (or a UMP test among unbiased tests). A sketch of the inversion idea follows.
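To illustrate the test-inversion idea, here is another sketch of my own: collect every null value p0 that a level-5% exact binomial test fails to reject. The data (12 successes out of 40), the grid resolution, and the use of scipy's binomtest are all arbitrary choices for the example.

import numpy as np
from scipy.stats import binomtest

k, n, alpha = 12, 40, 0.05                     # observed successes, trials, level

grid = np.linspace(1e-4, 1 - 1e-4, 2000)       # candidate null values p0
accepted = [p0 for p0 in grid
            if binomtest(k, n, p=p0, alternative="two-sided").pvalue >= alpha]

print("interval from inverting the test:", (min(accepted), max(accepted)))
print("Clopper-Pearson for comparison:",
      binomtest(k, n).proportion_ci(confidence_level=1 - alpha, method="exact"))

The two intervals differ slightly because Clopper-Pearson corresponds to inverting two one-sided tests rather than the single two-sided test above; choosing yet another test (say, a UMP or UMPU one) would give yet another interval, which is what motivates my question.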

Is there any benefit in defining CI as the one corresponding to the UMP test or something like that?

Answer

A bit long for a comment. Check out the discussion of UMP procedures in the paper “The fallacy of placing confidence in confidence intervals” by Morey et al. In particular, there are examples where:

“Even more strangely, intervals from the UMP procedure initially increase in width with the uncertainty in the data, but when the width of the likelihood is greater than 5 meters, the width of the UMP interval is inversely related to the uncertainty in the data, like the nonparametric interval. The UMP and sampling distribution procedures share the dubious distinction that their CIs cannot be used to work backwards to the observations. In spite of being the “most powerful” procedure, the UMP procedure clearly throws away important information.”

Attribution
Source: Link, Question Author: max, Answer Author: Alex R.
