# Relative size of p values at different sample sizes

How does the relative size of a p value change at different sample sizes?
For example, if you got $p=0.20$ at $n=45$ for a correlation, and then at $n=120$ you again got $p=0.20$, what would be the relative size of the second p value compared to the original one at $n=45$?

Consider tossing a coin which you suspect may come up heads too often.

You perform an experiment, followed by a one-tailed hypothesis test. In ten tosses you get 7 heads. A result at least as far from 50% as that could easily happen with a fair coin. Nothing unusual there.

If instead, you got 700 heads in 1000 tosses, a result at least as far from fair as that would be astonishing for a fair coin.

So 70% heads is not at all strange for a fair coin in the first case and very strange for a fair coin in the second case. The difference is sample size.
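The two cases can be checked exactly with a short computation (not part of the original answer; `binom_p_upper` is a helper name chosen here). It sums the binomial tail probability $P(X \ge k)$ under a fair coin for each scenario:

```python
from fractions import Fraction
from math import comb

def binom_p_upper(n, k, p_null=Fraction(1, 2)):
    """Exact one-tailed p-value: P(X >= k) for X ~ Binomial(n, p_null)."""
    total = sum(comb(n, j) * p_null**j * (1 - p_null)**(n - j)
                for j in range(k, n + 1))
    return float(total)

p_small = binom_p_upper(10, 7)      # 7 heads in 10 tosses
p_large = binom_p_upper(1000, 700)  # 700 heads in 1000 tosses

print(f"P(X >= 7   | n=10,   fair) = {p_small:.4f}")   # about 0.17
print(f"P(X >= 700 | n=1000, fair) = {p_large:.2e}")   # vanishingly small
```

The same 70% heads gives a p value around 0.17 at $n=10$ but an astronomically small one at $n=1000$, which is exactly the point of the example.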

As the sample size increases, our uncertainty about where the population mean (in our example, the proportion of heads) could be decreases. So larger samples are consistent with smaller ranges of possible population values – more values tend to become “ruled out” as samples get larger.
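One way to see values being "ruled out" is to look at a confidence interval for the observed proportion. The sketch below (not from the original answer) uses the simple Wald normal-approximation interval, which is rough at $n=10$ but fine for illustration:

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a proportion (Wald interval)."""
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# Same observed proportion (70% heads) at two sample sizes.
for n in (10, 1000):
    lo, hi = wald_ci(0.7, n)
    verdict = "excludes" if lo > 0.5 else "includes"
    print(f"n={n:4d}: 70% heads -> CI ({lo:.3f}, {hi:.3f}), {verdict} 0.5")
```

At $n=10$ the interval is wide and still contains 0.5 (fairness is not ruled out); at $n=1000$ the interval has shrunk by a factor of $\sqrt{100}=10$ and excludes 0.5 entirely.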

The more data we have, the more precisely we can pin down where the population mean could be… so a hypothesized value that is wrong will look less and less plausible as the sample size becomes large. That is, p-values tend to become smaller as sample size increases, unless $H_0$ is true.
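This tendency can be shown deterministically (my addition, not the original answer's): hold the observed proportion fixed at 55% heads – a value consistent with a slightly biased coin, so $H_0: p = 0.5$ is wrong – and compute the exact one-tailed p-value as $n$ grows:

```python
from fractions import Fraction
from math import comb

def p_value_upper(n, k):
    """Exact P(X >= k) for a fair coin, X ~ Binomial(n, 1/2)."""
    return float(Fraction(sum(comb(n, j) for j in range(k, n + 1)), 2**n))

# Same observed proportion (55% heads), increasing sample size.
for n in (100, 400, 1600):
    k = int(0.55 * n)
    print(f"n={n:4d}, {k} heads (55%): p = {p_value_upper(n, k):.6f}")
```

The p value falls from roughly 0.18 at $n=100$ to well below 0.001 at $n=1600$, even though the observed effect (55% heads) never changes – only the sample size does.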