# Does an exact test always yield a higher P value than an approximated test?

I ran a simulation on this for McNemar's test, and the answer seemed to be yes.

I was wondering whether it can always be said that the exact p-value is higher (or at least not smaller) than the p-value arrived at through an approximation.

Some example code:

set.seed(234)
n <- 100 # number of total subjects
P <- numeric(100)
P_exact <- numeric(100)
for(i in 1:100)
{
    x <- table(sample(1:2, n, TRUE), sample(1:2, n, TRUE))
    P[i]       <- mcnemar.test(x, correct = FALSE)$p.value
    P_exact[i] <- binom.test(x[2,1], x[1,2] + x[2,1])$p.value
}

# for other values of n, the problem is even worse
#plot(jitter(P,0,.01), P_exact )
plot(P, P_exact )
abline(0,1)


No, the p-value from an asymptotically valid distribution is not always smaller than an exact p-value. Consider two examples from traditional “non-parametric” tests:

The Wilcoxon Rank-Sum Test for location shift (e.g., median) for two independent samples of size $n_{1}$ and $n_{2}$ calculates the test statistic as follows:

1. put all observed values into one large sample of size $N = n_{1} + n_{2}$
2. rank these values from $1, \ldots, N$
3. sum the ranks for the first group, call this $L_{N}^{+}$. Often, the test statistic is defined as $W = L_{N}^{+} - \frac{n_{1} (n_{1} + 1)}{2}$ (this test statistic is then identical to Mann-Whitney's U), but this doesn't matter for the distribution shape.
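As a quick illustration of the three steps above (the data are made up for this sketch), the rank sum and the $W$ statistic can be computed by hand and checked against R's built-in `wilcox.test()`:

```r
## Toy illustration of the three steps above (hypothetical data)
x1 <- c(1.2, 3.4, 2.2)                         # group 1, n1 = 3
x2 <- c(0.8, 4.1)                              # group 2, n2 = 2
r  <- rank(c(x1, x2))                          # steps 1+2: pool and rank
LnPl <- sum(r[seq_along(x1)])                  # step 3: rank sum of group 1
W    <- LnPl - length(x1)*(length(x1)+1) / 2   # Mann-Whitney U form
W == unname(wilcox.test(x1, x2)$statistic)     # matches R's built-in
```

With no ties and small samples, `wilcox.test()` uses the exact distribution by default, so the hand-computed $W$ and R's statistic coincide.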

The exact distribution of $L_{N}^{+}$ for fixed $n_{1}$ and $n_{2}$ is found by generating all ${N \choose n_{1}}$ possible combinations of ranks for the first group and calculating the rank sum in each case. The asymptotic approximation uses $z := \frac{L_{N}^{+} - n_{1} (N+1) / 2}{\sqrt{n_{1} n_{2} (N+1) / 12}} \sim N(0, 1)$, i.e., a standard-normal approximation of the standardized test statistic.

Similarly, the Kruskal-Wallis H-test for location shift (e.g., median) for $p$ independent samples uses a test statistic based on the rank sums $R_{+j}$ in each group $j$: $H := \frac{12}{N (N+1)} \sum\limits_{j=1}^{p}{\frac{1}{n_{j}} \left(R_{+j} - n_{j} \frac{N+1}{2}\right)^{2}}$. Again, the exact distribution of $H$ is found by generating all combinations of ranks for the groups; for 3 groups, there are ${N \choose n_{1}} {N - n_{1} \choose n_{2}}$ such combinations. The asymptotic approximation uses a $\chi^{2}_{p-1}$ distribution.
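For concreteness, here is a small check of the $H$ formula against R's built-in `kruskal.test()` (the data are made up; without ties the two must agree exactly):

```r
## Toy check of the H formula above (hypothetical data, no ties)
y  <- c(2.1, 3.5, 1.8,  4.2, 5.0, 3.9,  1.1, 0.7)
g  <- factor(rep(1:3, c(3, 3, 2)))      # three groups of sizes 3, 3, 2
N  <- length(y)
Nj <- tabulate(g)                       # group sizes
Rj <- tapply(rank(y), g, sum)           # rank sums per group
H  <- (12 / (N*(N+1))) * sum((1/Nj) * (Rj - Nj*(N+1)/2)^2)
all.equal(H, unname(kruskal.test(y, g)$statistic))
```

Note that `kruskal.test()` applies a ties correction, so the agreement is exact only for untied data as here.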

Now we can compare the distribution shapes in terms of the cumulative distribution function $F()$ for given group sizes. The (right-sided) p-value for a given value $t$ of the test statistic equals $1-F(t)$ for the continuous distribution. In the discrete case, the p-value for $t_{m}$ (the $m$-th possible value of the test statistic) is $1-F(t_{m-1})$. The diagram shows that the exact distribution produces sometimes larger, sometimes smaller p-values than the approximation. For the H-test: at $H = 5$ (the 32nd of 36 possible H-values), the exact p-value is 0.075 (`sum(dKWH_08[names(dKWH_08) >= 5])` with the code below), while the approximate p-value is 0.082085 (`1-pchisq(5, P-1)`). At $H = 2$ (the 15th possible value), the exact p-value is 0.425 (`sum(dKWH_08[names(dKWH_08) >= 2])`), while the approximation gives 0.3678794 (`1-pchisq(2, P-1)`).

#### Wilcoxon-Rank-Sum-Test: exact distribution
n1      <- 5                           # group size 1
n2      <- 4                           # group size 2
N       <- n1 + n2                     # total sample size
ranks   <- t(combn(1:N, n1))           # all possible ranks for group 1
LnPl    <- apply(ranks, 1, sum)        # all possible rank sums for group 1 (Ln+)
dWRS_9  <- table(LnPl) / choose(N, n1) # exact probability function for Ln+
pWRS_9  <- cumsum(dWRS_9)              # exact cumulative distribution function for Ln+
muLnPl  <- (n1    * (N+1)) /  2        # normal approximation: theoretical mean
varLnPl <- (n1*n2 * (N+1)) / 12        # normal approximation: theoretical variance
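To read a p-value off these objects, one can compare the exact right-sided tail with the normal approximation for some observed rank sum. The value 30 below is just a hypothetical observation, and the snippet recomputes the distribution so it runs on its own:

```r
## Sketch: exact vs. asymptotic right-sided p-value for one observed Ln+
n1     <- 5
n2     <- 4
N      <- n1 + n2
LnPl   <- combn(1:N, n1, sum)                   # all possible rank sums Ln+
dWRS   <- table(LnPl) / choose(N, n1)           # exact probability function
lnObs  <- 30                                    # hypothetical observed Ln+
pExact <- sum(dWRS[as.numeric(names(dWRS)) >= lnObs])
pAsymp <- 1 - pnorm(lnObs, mean=n1*(N+1)/2, sd=sqrt(n1*n2*(N+1)/12))
c(exact=pExact, asymptotic=pAsymp)
```

Here, too, the exact and the approximate p-value differ; which one is larger depends on where the observed value falls in the discrete support.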

#### Kruskal-Wallis-H-Test: exact distribution
P  <- 3                                # number of groups
Nj <- c(3, 3, 2)                       # group sizes
N  <- sum(Nj)                          # total sample size
IV <- rep(1:P, Nj)                     # factor group membership
library(e1071)                         # for permutations()
permMat <- permutations(N)             # all permutations of total sample
getH <- function(rankAll) {            # calculate H for one permutation
    Rj <- tapply(rankAll, IV, sum)     # rank sums per group
    (12 / (N*(N+1))) * sum((1/Nj) * (Rj - (Nj*(N+1) / 2))^2)
}

Hscores <- apply(permMat, 1, getH)     # all possible H values for given group sizes
dKWH_08 <- table(round(Hscores, 4)) / factorial(N)  # exact probability function
pKWH_08 <- cumsum(dKWH_08)             # exact cumulative distribution function
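The same exact distribution can also be obtained from combinations alone, which lets us verify the p-values quoted in the text above. This sketch recomputes everything from scratch, so it does not need e1071:

```r
## Sketch: exact H distribution via combinations instead of permutations
Nj <- c(3, 3, 2)                        # group sizes as above
N  <- sum(Nj)
g1 <- combn(1:N, Nj[1])                 # all rank choices for group 1
Hs <- numeric(0)                        # collect all possible H values
for(i in seq_len(ncol(g1))) {
    rest <- setdiff(1:N, g1[, i])       # ranks left for groups 2 and 3
    g2   <- combn(rest, Nj[2])          # rank choices for group 2
    for(j in seq_len(ncol(g2))) {
        Rj <- c(sum(g1[, i]), sum(g2[, j]), sum(setdiff(rest, g2[, j])))
        Hs <- c(Hs, (12 / (N*(N+1))) * sum((1/Nj) * (Rj - Nj*(N+1)/2)^2))
    }
}
pExact5 <- mean(round(Hs, 4) >= 5)      # exact right-sided p-value at H = 5
pExact2 <- mean(round(Hs, 4) >= 2)      # ... and at H = 2
pAsymp5 <- 1 - pchisq(5, df=2)          # chi-square approximation
pAsymp2 <- 1 - pchisq(2, df=2)
```

Each combination of rank sets corresponds to the same number ($3! \cdot 3! \cdot 2!$) of permutations, so the resulting distribution is identical to the permutation-based one.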


Note that I calculate the exact distribution for H by generating all permutations, not all combinations. This is unnecessary and computationally much more expensive, but it's simpler to write down in the general case. Now we can plot the two distribution functions to compare their shapes.

dev.new(width=12, height=6.5)
par(mfrow=c(1, 2), cex.main=1.2, cex.lab=1.2)
plot(names(pWRS_9), pWRS_9, main="Wilcoxon RST, N=(5, 4): exact vs. asymptotic",
type="n", xlab="ln+", ylab="P(Ln+ <= ln+)", cex.lab=1.4)
curve(pnorm(x, mean=muLnPl, sd=sqrt(varLnPl)), lwd=2, n=200, add=TRUE)
points(names(pWRS_9), pWRS_9, pch=16, col="red")
abline(h=0.95, col="blue")
legend(x="bottomright", legend=c("exact", "asymptotic"),
pch=c(16, NA), col=c("red", "black"), lty=c(NA, 1), lwd=c(NA, 2))

plot(names(pKWH_08), pKWH_08, type="n", main="Kruskal-Wallis-H, N=(3, 3, 2):
exact vs. asymptotic", xlab="h", ylab="P(H <= h)", cex.lab=1.4)