Why would one use "random" confidence or credible intervals?

I was reading a paper recently that incorporated randomness in its confidence and credible intervals, and I was wondering if this is standard (and, if so, why it is a reasonable thing to do). To set notation, assume that our data is $x \in X$ and we are interested in creating intervals for a parameter $\theta \in \Theta$. I am used to confidence/credible intervals being constructed by building a function:

$$f_x : \Theta \to \{0,1\}$$

and letting our interval be $I = \{\theta \in \Theta : f_x(\theta) = 1\}$.

This is random in the sense that it depends on the data, but conditional on the data it is just an interval. This paper instead defines

$$g_x : \Theta \to [0,1]$$

and also a collection of i.i.d. uniform random variables $\{U_\theta\}_{\theta \in \Theta}$ on $[0,1]$. It defines the associated interval to be $I = \{\theta \in \Theta : g_x(\theta) \geq U_\theta\}$. Note that this depends a great deal on auxiliary randomness, beyond whatever comes from the data.
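
To make the two constructions concrete, here is a minimal sketch in Python. The binomial model, the grid standing in for $\Theta$, the test inversion, and the particular choice of $g_x$ (a softened accept/reject boundary) are all illustrative assumptions of mine, not taken from the paper; the only point is the mechanics of $I = \{\theta : g_x(\theta) \geq U_\theta\}$.

```python
import numpy as np
from scipy.stats import binom

# Illustrative setup (my assumptions, not from the paper):
# data x ~ Binomial(n, theta), and a finite grid standing in for Theta.
n, x, alpha = 20, 7, 0.05
thetas = np.linspace(0.01, 0.99, 99)

def p_value(theta):
    """Two-sided p-value for H0: the success probability equals theta."""
    lo = binom.cdf(x, n, theta)        # P(X <= x)
    hi = binom.sf(x - 1, n, theta)     # P(X >= x)
    return min(1.0, 2 * min(lo, hi))

# Standard construction: f_x maps Theta to {0, 1};
# the resulting interval is deterministic given the data.
f_x = np.array([p_value(t) >= alpha for t in thetas])
I_standard = thetas[f_x]

# Randomized construction: g_x maps Theta to [0, 1].  Here g_x softens
# the accept/reject boundary -- one arbitrary choice, for illustration.
g_x = np.clip(np.array([p_value(t) for t in thetas]) / alpha, 0.0, 1.0)

rng = np.random.default_rng(0)
U = rng.uniform(size=thetas.size)      # auxiliary i.i.d. uniforms U_theta
I_randomized = thetas[g_x >= U]

print(I_standard.min(), I_standard.max())      # fixed given x
print(I_randomized.min(), I_randomized.max())  # varies with the draw of U
```

Rerunning the last four lines with a different seed changes `I_randomized` but not `I_standard`; that extra variability, conditional on the data, is exactly the auxiliary randomness the question is about.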

I am very curious as to why one would do this. I think that "relaxing" the notion of an interval from functions like $f_x$ to functions like $g_x$ makes some sense; it is a sort of weighted confidence interval. I don't know of any references for it (and would appreciate any pointers), but it seems quite natural. However, I can't think of any reason to add auxiliary randomness.

Any pointers to the literature/reasons to do this would be appreciated!

Answer

Randomized procedures are sometimes used in theory because they simplify the theory. In typical statistical problems they do not make sense in practice, while in game-theoretic settings they can make sense.

The only reason I can see to use them in practice is if they somehow simplify calculations.

Theoretically, one can argue that randomization should not be used, on the grounds of the sufficiency principle: statistical conclusions should be based only on sufficient summaries of the data, and randomization introduces dependence on an extraneous random variable $U$ that is not part of any sufficient summary of the data.
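
As a concrete illustration of this objection, here is a small hypothetical sketch: two analysts observe identical data, and hence share the same $g_x$ (an arbitrary stand-in below), yet they draw independent auxiliary uniforms and so report different intervals.

```python
import numpy as np

# Sketch of the sufficiency objection (g_x is an arbitrary stand-in):
# identical data => identical g_x, yet independent auxiliary uniforms
# lead the two analysts to report different conclusions.
thetas = np.linspace(0.0, 1.0, 101)
g_x = np.exp(-20.0 * (thetas - 0.4) ** 2)   # hypothetical membership function

rng_a = np.random.default_rng(1)
rng_b = np.random.default_rng(2)
I_a = thetas[g_x >= rng_a.uniform(size=thetas.size)]
I_b = thetas[g_x >= rng_b.uniform(size=thetas.size)]

# Almost surely False: same data, different reported intervals.
print(np.array_equal(I_a, I_b))
```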

UPDATE  

To answer whuber’s comment below, quoted here: “Why do randomized procedures ‘not make sense in practice’? As others have noted, experimenters are perfectly willing to use randomization in the construction of their experimental data, such as randomized assignment of treatment and control, so what is so different (and impractical or objectionable) about using randomization in the ensuing analysis of the data?”

Well, randomization of the experiment to obtain the data is done for a purpose, mainly to break causality chains. Whether and when that is effective is another discussion. What could be the purpose of using randomization as part of the analysis? The only reason I have ever seen is that it makes the mathematical theory more complete! That's OK as far as it goes.

In game-theoretic contexts, when there is an actual adversary, randomization may help to confuse him. In real decision contexts (sell, or not sell?) a decision must be taken, and if there is no evidence in the data, maybe one could just flip a coin. But in a scientific context, where the question is what we can learn from the data, randomization seems out of place. I cannot see any real advantage from it! If you disagree, do you have an argument that could convince a biologist or a chemist? (And here I am not thinking of simulation as part of the bootstrap or MCMC.)

Attribution
Source: Link, Question Author: QQQ, Answer Author: kjetil b halvorsen
