# An impossible estimation problem?

### Question

The variance of a negative binomial (NB) distribution is always greater than its mean. When the mean of a sample is greater than its variance, trying to fit the parameters of an NB by maximum likelihood or by the method of moments fails: there is no solution with finite parameters.
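For reference, under the usual $(r, p)$ parametrization (written out further down), the overdispersion is immediate:

$$\operatorname{E}[X] = \frac{rp}{1-p}, \qquad \operatorname{Var}[X] = \frac{rp}{(1-p)^2} = \frac{\operatorname{E}[X]}{1-p} > \operatorname{E}[X] \quad \text{for } 0 < p < 1.$$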

However, a sample drawn from an NB distribution can still have mean greater than variance. Here is a reproducible example in R.

```r
set.seed(167)
x <- rnbinom(100, size = 3.2, prob = 0.8)
mean(x) # 0.82
var(x)  # 0.8157576
```


There is a non-zero probability that the NB will produce a sample for which parameters cannot be estimated (by maximum likelihood and moment methods).
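A quick Monte Carlo sketch (the number of simulations and the seed are arbitrary choices) suggests that, for the parameters above, this is not even a rare event:

```r
# Monte Carlo estimate of the probability that a sample of 100 draws
# from NB(size = 3.2, prob = 0.8) has sample mean greater than variance.
set.seed(1)
n_sims <- 10000
underdispersed <- replicate(n_sims, {
  x <- rnbinom(100, size = 3.2, prob = 0.8)
  mean(x) > var(x)
})
p_hat <- mean(underdispersed)
p_hat
```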

1. Can decent estimates be given for this sample?
2. What does estimation theory say when estimators are not defined for all samples?

The answers of @MarkRobinson and @Yves made me realize that parametrization is the main issue. The probability mass function of the NB is usually written as

$$P(X = k) = \frac{\Gamma(r+k)}{\Gamma(r)\,k!}(1-p)^r p^k$$
or as
$$P(X = k) = \frac{\Gamma(r+k)}{\Gamma(r)\,k!} \left(\frac{r}{r+m}\right)^r \left(\frac{m}{r+m}\right)^k.$$

Under the first parametrization, the maximum likelihood estimate is $(\infty, 0)$ whenever the variance of the sample is smaller than the mean, so nothing useful can be said about $p$. Under the second, obtained by setting $m = rp/(1-p)$, it is $(\infty, \bar{x})$, so we can give a reasonable estimate of $m$. Finally, @MarkRobinson shows that we can avoid the problem of infinite values by using $\frac{r}{1+r}$ instead of $r$.
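This can be seen directly on the sample above by profiling the log-likelihood in $r$ with $m$ fixed at $\bar{x}$: R's `dnbinom(size, mu)` implements exactly the $(r, m)$ parametrization. A sketch (the grid of $r$ values is an arbitrary choice):

```r
set.seed(167)
x <- rnbinom(100, size = 3.2, prob = 0.8)

# Log-likelihood as a function of r, with m fixed at the sample mean;
# dnbinom(size = r, mu = m) is the (r, m) parametrization above.
loglik <- function(r) sum(dnbinom(x, size = r, mu = mean(x), log = TRUE))

r_grid <- c(1, 10, 100, 1000, 10000)
ll <- sapply(r_grid, loglik)
ll  # keeps growing with r: no finite maximum, the MLE of r runs off to infinity
```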

In conclusion, there is nothing fundamentally wrong with this estimation problem, except that you cannot always give meaningful interpretations of $r$ and $p$ for every sample. To be fair, the ideas are present in both answers. I chose that of @MarkRobinson as the correct one for the complements he gives.