# For the binomial distribution, why does no unbiased estimator exist for $1/p$?

Suppose that $X \sim \operatorname{Binomial}(n,p)$ for $0 < p < 1.$

Why does no unbiased estimator exist for $1/p$?

My approach:

We try to find the structure of $E_p(U(x))$, where $U(x)$ is any estimator of $1/p$.

Now, we will have:

$\sum{U(x)\binom{n}{x}p^x(1-p)^{n-x}}<\sum{U(x)\binom{n}{x}}=M(n)<\infty$

so that the expectation is bounded above by $M(n)$ (at least when the $U(x)$ are nonnegative). The intended conclusion seems to be that if $p < 1/M(n),$ then $1/p > M(n),$ so the expectation cannot attain $1/p$; but I am not sure why this argument makes sense, or what the expectation being bounded really tells us.
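To make the boundedness concrete, here is a quick numerical sketch (the choices $U(x) = x+1$ and $n = 4$ are arbitrary, and the bound as written needs the $U(x)$ to be nonnegative): the expectation stays below $M(n)$ no matter how small $p$ gets, while $1/p$ grows without bound.

```python
from fractions import Fraction
from math import comb

def expectation(U, n, p):
    """E_p[U(X)] for X ~ Binomial(n, p), computed exactly."""
    return sum(U(x) * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

n = 4
U = lambda x: x + 1                               # an arbitrary nonnegative estimator
M = sum(U(x) * comb(n, x) for x in range(n + 1))  # the bound M(n)

for p in [Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)]:
    e = expectation(U, n, p)
    assert e < M       # the expectation never exceeds M(n) ...
    # ... while 1/p = 10, 100, 1000 keeps growing, so E[U(X)] = 1/p must fail
```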

By definition, an estimator of any property of the distribution of $X$ is a function $t$ of the possible values of $X,$ here equal to $0, 1, \ldots, n.$

Given $n,$ assume $X$ has some Binomial$(n,p)$ distribution where $p$ is known only to lie within a given set $\Omega \subset[0,1].$ We will say more about $\Omega$ at the end.

The expectation of $t$ is its probability-weighted average,

$$E[t(X)] = \sum_{x=0}^n \Pr(X=x) t(x) = \sum_{x=0}^n \binom{n}{x}p^x(1-p)^{n-x} t(x).$$

(Note that the expressions “$t(x)$” are just numbers, one for each $x=0,1,\ldots, n.$)

This expectation depends on $p.$ In the specific case where $1/p$ is to be estimated, the estimator is unbiased when it equals $1/p$ for all values of $p \in\Omega;$ that is,

$$\frac{1}{p} = E[t(X)] = \sum_{x=0}^n \binom{n}{x}p^x(1-p)^{n-x} t(x).\tag{*}$$

Since $p\ne 0,$ this is algebraically equivalent to

\eqalign{ 0 &= pE[t(X)] - 1 \\ &= -1 + \sum_{x=0}^n \binom{n}{x}p^{x+1}(1-p)^{n-x} t(x)\\ &= -1 + \sum_{x=0}^n \binom{n}{x}p^{x+1}\sum_{i=0}^{n-x}\binom{n-x}{i}(-p)^i t(x)\\ &= -1 + \sum_{x=0}^n\sum_{i=0}^{n-x}(-1)^i t(x) \binom{n}{x}\binom{n-x}{i}\,p^{x+1+i}\\ &= -1 + \sum_{k=1}^{n+1} \left(\sum_{i=0}^{k-1}(-1)^i t(k-1-i)\binom{n}{k-1-i}\binom{n-k+1+i}{i}\right)\, p^k\\ &= \sum_{k=0}^{n+1} a_k\, p^k }

where $a_0=-1$ and

$$a_k = \sum_{i=0}^{k-1}(-1)^i t(k-1-i)\binom{n}{k-1-i}\binom{n-k+1+i}{i}$$

are constants determined by $t.$
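These coefficients can be checked numerically. Below is a sketch (the coefficient-list representation and the helper names `poly_mul` and `bias_poly` are my own) that expands $pE[t(X)] - 1$ for an arbitrary choice of $t$ and confirms both that $a_0 = -1$ and that the closed form for $a_k$ matches the direct expansion.

```python
from fractions import Fraction
from math import comb

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of p)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def bias_poly(t, n):
    """Coefficients a_0, ..., a_{n+1} of p*E[t(X)] - 1 as a polynomial in p."""
    acc = [Fraction(0)] * (n + 2)
    acc[0] = Fraction(-1)                                     # the constant term -1
    for x in range(n + 1):
        term = [Fraction(0)] * (x + 1) + [comb(n, x) * t[x]]  # C(n,x) t(x) p^(x+1)
        for _ in range(n - x):
            term = poly_mul(term, [Fraction(1), Fraction(-1)])  # times (1 - p)
        for k, c in enumerate(term):
            acc[k] += c
    return acc

n = 3
t = [Fraction(5), Fraction(-2), Fraction(7, 3), Fraction(1)]  # arbitrary t(0), ..., t(n)
a = bias_poly(t, n)

assert a[0] == -1   # a_0 = -1 for every t: the polynomial is never identically zero
for k in range(1, n + 2):
    # the closed form for a_k agrees with the direct expansion
    assert a[k] == sum(Fraction((-1) ** i) * t[k - 1 - i]
                       * comb(n, k - 1 - i) * comb(n - k + 1 + i, i)
                       for i in range(k))
```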

This is explicitly a nonzero polynomial of degree at most $n+1$ in $p$ and therefore can have at most $n+1$ zeros. If, then, $\Omega$ contains more than $n+1$ values, this equation cannot hold for all of them, whence $t$ cannot be unbiased.
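To see this over-determination in action, here is a minimal sketch (the case $n=1$ and the particular three-point $\Omega$ are my own choice of illustration). Unbiasedness for $1/p$ would require $t(0)(1-p) + t(1)\,p = 1/p$ at every point of $\Omega,$ and $n+2 = 3$ points already over-determine the two values $t(0), t(1)$:

```python
from fractions import Fraction as F

# n = 1: unbiasedness for 1/p means (1-p)*t0 + p*t1 == 1/p for every p in Omega.
p1, p2, p3 = F(1, 2), F(1, 3), F(2, 3)      # a 3-point Omega (3 = n + 2)

# Solve the 2x2 linear system given by the first two points (Cramer's rule).
det = (1 - p1) * p2 - p1 * (1 - p2)
t0 = ((1 / p1) * p2 - p1 * (1 / p2)) / det
t1 = ((1 - p1) * (1 / p2) - (1 - p2) * (1 / p1)) / det

assert (1 - p1) * t0 + p1 * t1 == 1 / p1    # holds by construction
assert (1 - p2) * t0 + p2 * t1 == 1 / p2    # holds by construction
assert (1 - p3) * t0 + p3 * t1 != 1 / p3    # the third point cannot also be matched
```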

Generalizations of this result to certain other functions of $p,$ besides $1/p,$ should be obvious.

The reason why this argument does not generalize to, say, estimating $p,$ is that similar calculations give a polynomial whose coefficients actually can be reduced to zero by a suitable choice of $t:$ that’s why it was crucial to observe that the polynomial determined by $a_0, a_1, \ldots, a_{n+1}$ is nonzero (because $a_0=-1$ no matter what).

Take, for instance, the case $n=2.$ The condition $(*)$ of unbiasedness becomes, for all $p\in\Omega,$

\eqalign{ 0 &= -p + E[t(X)] \\&= -p + \left[t(0)(1-p)^2 + 2t(1)p(1-p) + t(2)p^2\right] \\ &= t(0) + (-1-2t(0)+2t(1))p + (t(0)-2t(1) + t(2))p^2. }

Working from left to right we find that the coefficients can all be made zero by setting $t(0)=0,$ then $t(1)=1/2,$ and finally $t(2) = 1$ (that is, $t(x) = x/2$). This is the only set of choices that does so. Thus, when $n=2$ and $\Omega$ contains at least three elements, this estimator $t$ is the unique unbiased estimator of $p.$
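As a quick sanity check (a sketch in exact rational arithmetic; the helper `expectation` is my own), the estimator $t(x) = x/2$ found above is indeed unbiased for $p$ at every value of $p$:

```python
from fractions import Fraction
from math import comb

def expectation(t, n, p):
    """E_p[t(X)] for X ~ Binomial(n, p), computed exactly."""
    return sum(t[x] * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

n = 2
t = [Fraction(0), Fraction(1, 2), Fraction(1)]   # t(0)=0, t(1)=1/2, t(2)=1
for p in [Fraction(1, 7), Fraction(2, 5), Fraction(9, 10)]:
    assert expectation(t, n, p) == p             # unbiased for p at each tested p
```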

Finally, as an example of why the content of $\Omega$ matters, suppose $\Omega=\{1/3, 2/3\}.$ That is, we know $X$ counts the heads in two flips of a coin that favors either tails or heads by odds of $2:1$ (but we don’t know which way). An unbiased estimator of $1/p$ is given by $t(0) = 11/2,$ $t(1) = 1 = t(2).$ The check is straightforward: when $p=1/3$, the expectation of $t$ is

$$(2/3)^2\,t(0) + 2(2/3)(1/3)\,t(1) + (1/3)^2\,t(2) = (4/9)(11/2) + 4/9 + 1/9 = 3$$

and when $p=2/3$ the expectation is

$$(1/3)^2\,t(0) + 2(1/3)(2/3)\,t(1) + (2/3)^2\,t(2) = (1/9)(11/2) + 4/9 + 4/9 = 3/2.$$

In each case the expectation indeed is $1/p.$ (It is amusing that none of the values of $t,$ though, is actually equal to $3$ or $3/2,$ which are the only two possible values of $1/p.$)
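The same two-point check can be scripted (again a sketch with exact rationals; the `expectation` helper is my own):

```python
from fractions import Fraction
from math import comb

def expectation(t, n, p):
    """E_p[t(X)] for X ~ Binomial(n, p), computed exactly."""
    return sum(t[x] * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

# t(0) = 11/2, t(1) = t(2) = 1, unbiased for 1/p on Omega = {1/3, 2/3}
t = [Fraction(11, 2), Fraction(1), Fraction(1)]
assert expectation(t, 2, Fraction(1, 3)) == 3               # = 1/(1/3)
assert expectation(t, 2, Fraction(2, 3)) == Fraction(3, 2)  # = 1/(2/3)
```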