# significance of difference between two counts

Is there a way to determine whether a count of road accidents at time 1 differs significantly from a count at time 2?

I have found various methods for comparing groups of observations at different times (e.g. comparing Poisson means), but not for comparing just two single counts. Or is it invalid even to try? Any advice or direction would be appreciated; I am happy to follow up leads myself.

If you’re happy to assume each count follows a Poisson distribution† (with its own mean under the alternative hypothesis; with a common mean under the null), there’s no problem; it’s just that you can’t check that assumption without replicates, & overdispersion can be quite common with count data.

An exact test given counts $x_1$ & $x_2$ is straightforward because the overall total $n=x_1+x_2$ is ancillary; conditioning on it gives $X_1 \mid n \sim \mathrm{Bin}\left(n,\frac{1}{2}\right)$ as the distribution of your test statistic under the null. It’s an intuitive result: the overall count, reflecting perhaps how much time you could be bothered to spend observing the two Poisson processes, carries no information about their relative rates, but affects the power of your test; & therefore other overall counts you might’ve got are irrelevant.
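This conditional test is easy to carry out by hand or in code. Here is a minimal sketch using only the standard library; since the null probability is $\frac{1}{2}$, the Bin$(n, \frac{1}{2})$ pmf is symmetric and the two-sided p-value is just twice the smaller tail. The example counts (25 vs 40) are hypothetical.

```python
from math import comb

def exact_poisson_two_count_test(x1, x2):
    """Exact conditional test that two Poisson counts share a common mean.

    Conditional on the total n = x1 + x2, the first count is
    Binomial(n, 1/2) under the null; the pmf is symmetric, so the
    two-sided p-value is twice the smaller tail, capped at 1.
    """
    n = x1 + x2
    lower = sum(comb(n, k) for k in range(0, x1 + 1)) / 2**n  # P(X <= x1)
    upper = sum(comb(n, k) for k in range(x1, n + 1)) / 2**n  # P(X >= x1)
    return min(1.0, 2 * min(lower, upper))

# e.g. 25 accidents in period 1 vs 40 in period 2 (hypothetical numbers)
p = exact_poisson_two_count_test(25, 40)
```

The same p-value can be had from `scipy.stats.binomtest(x1, n, 0.5)` if SciPy is available.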

See Likelihood-based hypothesis testing for the Wald test (an approximation).
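For a quick large-sample alternative, one common Wald-type approximation notes that under the null $\lambda_1=\lambda_2$, the difference $X_1 - X_2$ has mean $0$ and variance $\lambda_1+\lambda_2$, estimated by $x_1+x_2$. A sketch, assuming this normal approximation (the example counts are again hypothetical):

```python
from math import sqrt, erfc

def wald_poisson_two_count_test(x1, x2):
    """Approximate (Wald-type) test of equal Poisson means for two counts.

    Under H0, z = (x1 - x2) / sqrt(x1 + x2) is roughly standard normal,
    so the two-sided p-value is the normal tail probability beyond |z|.
    """
    z = (x1 - x2) / sqrt(x1 + x2)
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return z, p

z, p = wald_poisson_two_count_test(25, 40)
```

For small counts this approximation can disagree noticeably with the exact conditional test, which is why the exact test is preferable when $n$ is modest.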

† Each count $x_i$ has a Poisson distribution with mean $\lambda_i$:

$$p(x_i;\lambda_i)=\frac{\mathrm{e}^{-\lambda_i}\lambda_i^{x_i}}{x_i!},\qquad i=1,2.$$

Reparametrize as

$$\theta=\frac{\lambda_1}{\lambda_1+\lambda_2},\qquad \phi=\lambda_1+\lambda_2,$$

where $\theta$ is what you’re interested in, & $\phi$ is a nuisance parameter. The joint mass function can then be re-written:

$$f(x_1,x_2;\theta,\phi)=\frac{\mathrm{e}^{-\phi}\phi^{n}}{n!}\cdot\binom{n}{x_1}\theta^{x_1}(1-\theta)^{n-x_1},\qquad n=x_1+x_2.$$

The total count $n$ is ancillary for $\theta$, having a Poisson distribution with mean $\phi$,

$$p(n;\phi)=\frac{\mathrm{e}^{-\phi}\phi^{n}}{n!},$$

while the conditional distribution of $X_1$ given $n$ is binomial with Bernoulli probability $\theta$ & no. trials $n$:

$$p(x_1\mid n;\theta)=\binom{n}{x_1}\theta^{x_1}(1-\theta)^{n-x_1}.$$

Under the null, $\lambda_1=\lambda_2$, i.e. $\theta=\frac{1}{2}$, which gives the exact test described earlier.
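The conditional-on-the-total factorization described above is easy to verify numerically: for two independent Poisson counts, $P(X_1=x_1 \mid X_1+X_2=n)$ should match the Bin$(n,\theta)$ pmf with $\theta=\lambda_1/(\lambda_1+\lambda_2)$. A sketch with arbitrary illustrative rates:

```python
from math import exp, comb, factorial

# Arbitrary illustrative rates; any positive values would do.
lam1, lam2 = 3.2, 5.7
theta = lam1 / (lam1 + lam2)
n = 8  # condition on this total count

for x1 in range(n + 1):
    x2 = n - x1
    # Joint Poisson mass at (x1, x2), and Poisson mass of the total n
    joint = (exp(-lam1) * lam1**x1 / factorial(x1)
             * exp(-lam2) * lam2**x2 / factorial(x2))
    total = exp(-(lam1 + lam2)) * (lam1 + lam2)**n / factorial(n)
    conditional = joint / total
    # Binomial(n, theta) pmf at x1 should agree term by term
    binom_pmf = comb(n, x1) * theta**x1 * (1 - theta)**(n - x1)
    assert abs(conditional - binom_pmf) < 1e-12
```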