How is it possible to control the false discovery rate (FDR) without knowing the power and the prevalence of nulls?

If we have a single p-value of 0.05, then to calculate the probability that our discovery is a false positive, we need a fairly complex formula involving the prevalence (the prior probability of the null) and the statistical power.
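For concreteness, the formula alluded to here is presumably the posterior probability that a rejected null is a true null (a sketch in standard notation; $\pi_0$ is the prevalence of true nulls, $\alpha$ the rejection threshold, and $1-\beta$ the power):

$$\Pr(H_0 \text{ true} \mid p \le \alpha) \;=\; \frac{\pi_0\,\alpha}{\pi_0\,\alpha + (1-\pi_0)(1-\beta)}$$

For example, with $\pi_0 = 0.9$, $\alpha = 0.05$ and power $1-\beta = 0.8$, this gives $0.045/0.125 = 0.36$, far above the nominal 5%.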

If we have lots of p-values, we magically no longer need to deal with prevalence or power: we can just apply the Benjamini–Hochberg (or a similar) correction, and it will control the FDR at the 0.05 level (on average, if we repeat the same experiment infinitely many times).
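For reference, here is a minimal sketch of the Benjamini–Hochberg step-up procedure (the function name and defaults are mine; it assumes independent p-values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected hypotheses. Note that neither
    the prevalence of nulls (pi0) nor the power appears anywhere.
    """
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # Largest k with p_(k) <= (k / m) * q; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()
        reject[order[:k + 1]] = True
    return reject
```

Under independence this controls the FDR at level $\pi_0 q \le q$ for any $\pi_0$ (Benjamini & Hochberg, 1995), which is exactly the puzzle: the guarantee holds without the procedure ever seeing $\pi_0$ or the power.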

I cannot merge these two facts in my head. If we have only 2 p-values, will the FDR trick work (obviously not), and where is the border? Do we need to take prevalence into account for multiple-testing correction? My simulations and theory suggest that we mostly don't, as long as we set aside degenerate cases such as "almost all our data comes from the null", where the FDR is either ~0% or ~100%. But how come?
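To probe the "how come" empirically, here is a minimal simulation sketch along the lines the question describes (not the asker's code; `effect`, `n_reps`, and the use of statsmodels' `fdr_bh` are my own choices):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

def simulated_fdr(m=1000, pi0=0.9, effect=3.0, q=0.05, n_reps=500):
    """Average false discovery proportion of BH over n_reps experiments.

    Each experiment: m one-sided z-tests; a fraction pi0 are true nulls
    (z ~ N(0, 1)), the rest have mean `effect` (z ~ N(effect, 1)).
    """
    fdps = []
    for _ in range(n_reps):
        is_null = rng.random(m) < pi0
        z = rng.normal(loc=np.where(is_null, 0.0, effect))
        pvals = stats.norm.sf(z)  # one-sided p-values
        reject, *_ = multipletests(pvals, alpha=q, method='fdr_bh')
        n_rejected = reject.sum()
        fdps.append((reject & is_null).sum() / max(n_rejected, 1))
    return np.mean(fdps)

for pi0 in (0.1, 0.5, 0.9, 0.99):
    print(f"pi0={pi0}: realized FDR ~ {simulated_fdr(pi0=pi0):.3f}")
```

The realized FDR stays near $\pi_0 q$, i.e. at or below the nominal $q = 0.05$, for every prevalence; only as $\pi_0 \to 1$ (the degenerate "almost all nulls" regime) does the per-experiment false discovery proportion become all-or-nothing (~0% or ~100%), even though its average remains controlled.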

Answer

Attribution
Source: Link, Question Author: German Demidov, Answer Author: Community
