# Whether to report confidence intervals of effect sizes such as $r$ and $\eta^2$?

I have calculated the bivariate linear correlation coefficient between $x$ and $y$ and it is $r = 0.45$, $p < .001$.

I know $r$ is a measure of effect size.

Should I also be reporting the confidence interval in this case?

If yes, how can it be justified? That is, why is it important to report it together with the effect size ($r$)? A non-statistical explanation would be very useful for me.

The same goes for my linear regression analysis of $x$ and $y$: I am reporting the $F$ statistic, $p < 0.05$, and $\eta^2$.

Should I also work out the confidence interval for the linear regression analysis?

The answer is almost always: report both. That way, your audience can judge the interestingness and importance of your results for themselves, instead of just having to take your word for it. Confidence intervals are similarly always useful, because they give a neat indication of both effect size and significance in one. Even better if it's on a graph 🙂
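For $r$, the standard way to get a confidence interval is Fisher's $z$ transformation: transform $r$ to the $z$ scale, build a normal-theory interval there, and back-transform the endpoints. A minimal sketch in Python using only the standard library (the sample size $n = 100$ is an invented value, since the question doesn't state one):

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """95% CI for a correlation via Fisher's z transformation."""
    z = math.atanh(r)              # Fisher z transform of r
    se = 1 / math.sqrt(n - 3)      # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# Hypothetical n = 100 for the r = 0.45 from the question:
lo, hi = r_confidence_interval(0.45, 100)
print(f"95% CI for r: ({lo:.2f}, {hi:.2f})")
```

Note the interval is not symmetric around $r$ on the original scale, which is exactly why the transformation is used: $r$ is bounded at $\pm 1$, so its sampling distribution is skewed.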

It may be best to look at all the possible outcomes, remembering that a significance level ($\alpha$) is an arbitrary threshold, and you can choose any one you like (higher ones are, of course, harder to defend).

1. Low p-value, low effect size: You've got a result, and you're pretty sure it's not down to chance, but it doesn't really say anything interesting. An example might be a new drug that significantly improves on an old one. But if the improvement is only 2%, then your result might not mean much when weighed against other factors (like extra costs or new side effects).

2. High p-value, high effect size: Looks like you're onto something, but you can't say for certain that it wasn't just the result of chance. For this drug, things look promising, but you DEFINITELY want to do some more testing, probably with a modified experiment that (hopefully) will reduce some of that ridiculous variability you're seeing.

3. High p-value, low effect size: Nothing interesting here. Go and design a better experiment.

4. Low p-value, large effect size: Win. You’ve got a big effect, and you’re sure it’s not down to chance. If it’s a new drug, then you’ve got a good chance that your results will make a big impact, and it’ll get pushed out to market quickly, even if it costs more, or if there are some side effects.
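Scenario 1 above is easy to simulate: a tiny true improvement, measured with little noise on a large sample, comes out highly significant even though the effect itself is small. A sketch (the drug means, spread, and sample sizes are all invented for illustration; the test is a large-sample z-test on the difference in means):

```python
import math
import random
import statistics

random.seed(1)

def two_sample_z(a, b):
    """Large-sample z-test for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    diff = statistics.mean(b) - statistics.mean(a)
    p = 2 * (1 - statistics.NormalDist().cdf(abs(diff) / se))
    return diff, p

# Scenario 1: a ~2% improvement (100 -> 102), low noise, 5000 patients per arm.
old_drug = [random.gauss(100, 5) for _ in range(5000)]
new_drug = [random.gauss(102, 5) for _ in range(5000)]
diff, p = two_sample_z(old_drug, new_drug)
print(f"effect = {diff:.2f}, p = {p:.2g}")  # small effect, highly significant
```

Scenario 2 is the mirror image: keep the same code but use a large mean difference, a huge standard deviation, and only a handful of patients per arm, and the estimated effect will be big while the p-value stays well above any conventional threshold.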

(Note: I'm not pushing drugs here; I'm as mistrustful as the next paranoiac about big pharma, but drugs make for good statistical examples 🙂)