I often use a 90% confidence level, accepting that this has a greater degree of uncertainty than 95% or 99%.
But are there any guidelines on how to choose the right confidence level? Or guidelines for the confidence levels used in different fields?
Also, when interpreting and presenting confidence levels, are there any guides for turning the number into language? For example, are there guides like this one for Pearson’s r (edit: these descriptions are for the social sciences):
http://faculty.quinnipiac.edu/libarts/polsci/Statistics.html (page unresponsive on 26.12.2020)
Thanks for the answers below. They were all VERY helpful, insightful and instructive.
In addition, below are some good articles on choosing a significance level (essentially the same question) that I came across while looking into this. They support what is said in the answers below.
What’s the significance of 0.05 significance?, p-value.info, 6 January 2013;
Scientific method: Statistical errors, by Regina Nuzzo, Nature News & Comment, 12 February 2014.
In addition to Tim’s great answer, there are, even within a single field, different reasons for choosing particular confidence levels. In a clinical trial for hairspray, for example, you would want to be very confident your treatment wasn’t likely to kill anyone, say 99.99%, but you’d be perfectly fine with a 75% confidence level for the claim that your hairspray makes hair stay straight.
In general, you should choose a confidence level such that you’re comfortable with the uncertainty, but not so strict that it lowers the power of your study into irrelevance. A 90% confidence interval means that, on repeated sampling, you would expect one interval in ten not to include the true value. Given what you’re researching, is that acceptable? On the other hand, if you prefer a 99% confidence interval, is your sample size sufficient that the interval won’t be uselessly wide? (Hopefully you’re deciding the confidence level before doing the study, right?)
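The “one interval in ten misses” interpretation, and the width cost of moving to 99%, are easy to check with a small simulation. This is a minimal sketch, not anything from the answer itself: it assumes normally distributed data with a known standard deviation, so it uses hardcoded normal critical values (1.645 for 90%, 2.576 for 99%) rather than t-based intervals.

```python
# Hypothetical simulation: verify that a 90% CI misses the true mean about
# one time in ten, and compare widths at the 90% vs 99% confidence levels.
import math
import random

random.seed(42)
TRUE_MEAN, SIGMA, N = 50.0, 10.0, 25
Z = {0.90: 1.645, 0.99: 2.576}  # normal critical values (known-sigma case)

def ci(sample, level):
    """Two-sided CI for the mean, assuming known sigma."""
    m = sum(sample) / len(sample)
    half = Z[level] * SIGMA / math.sqrt(len(sample))
    return m - half, m + half

trials = 10_000
misses = {0.90: 0, 0.99: 0}
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    for level in Z:
        lo, hi = ci(sample, level)
        if not (lo <= TRUE_MEAN <= hi):
            misses[level] += 1

for level in Z:
    print(f"{level:.0%} CI missed the true mean in "
          f"{misses[level] / trials:.1%} of trials")
```

The 90% interval misses close to 10% of the time, while the 99% interval misses close to 1% but is about 1.57 times wider (2.576 / 1.645), which is the power-versus-certainty trade-off described above.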
In my experience (in the social sciences), and from what I’ve seen of my wife’s (in the biological sciences), while there are rough CI/significance standards in various fields and in various specific cases, it’s not uncommon for the majority of the debate over a topic to be whether you set your confidence level or significance level appropriately. I’ve been in meetings where a statistician patiently explained to a client that while they might like a 99% two-sided confidence interval, their data would never show significance unless they increased their sample tenfold; and I’ve been in meetings where clients asked why none of their data showed a significant difference, and we patiently explained that it was because they chose a high confidence level – or the reverse, everything was significant because a lower level was requested. When you publish a paper, it’s not uncommon for three reviewers to have three different opinions about your confidence level if it isn’t on the high end for your discipline.
What I suggest is to read some of the major papers in your field (as close to your specific topic as possible) and see what they use; combine that with your comfort level and sample size; and then be prepared to defend your choice with that information at hand. Unless you’re in a field with very strict rules – clinical trials are, I suspect, the only area that’s really that strict, at least from what I’ve seen – you won’t get anything better. (And if there are strict rules, I’d expect the major papers in your field to follow them!)