## Question

The variance of a negative binomial (NB) distribution is always greater than its mean. When the mean of a sample is greater than its variance, attempting to fit the parameters of an NB by maximum likelihood or by the method of moments fails: there is no solution with finite parameters.

However, a sample drawn from an NB distribution can have its mean greater than its variance. Here is a reproducible example in R.

```r
set.seed(167)
x <- rnbinom(100, size = 3.2, prob = .8)
mean(x)  # 0.82
var(x)   # 0.8157576
```

There is a non-zero probability that the NB will produce a sample for which parameters cannot be estimated (by maximum likelihood and moment methods).
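To see why the moment estimator breaks down, one can invert the NB moment equations directly. A minimal sketch in Python (the question's example is in R, but the formulas carry over; `nb_moments` is a hypothetical helper name):

```python
# Method-of-moments inversion for the NB with sample mean m and variance v.
# Under the parametrization mean = r(1-p)/p, variance = r(1-p)/p**2,
# the variance always exceeds the mean, and solving the two equations gives
#   p_hat = m / v,   r_hat = m**2 / (v - m).
def nb_moments(m, v):
    if v <= m:
        # The denominator v - m is zero or negative: no finite NB solution.
        raise ValueError("sample variance <= mean: moment estimator undefined")
    return m**2 / (v - m), m / v  # (r_hat, p_hat)

# The theoretical moments of NB(size=3.2, prob=0.8) invert back exactly:
r, p = 3.2, 0.8
m, v = r * (1 - p) / p, r * (1 - p) / p**2
print(nb_moments(m, v))  # approximately (3.2, 0.8)

# The question's sample has mean 0.82 and variance 0.8157576,
# so v <= m and nb_moments would raise: the estimator is undefined there.
```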

- Can decent estimates be given for this sample?
- What does estimation theory say when estimators are not defined for all samples?
## About the answer

The answers of @MarkRobinson and @Yves made me realize that the parametrization is the main issue. The probability mass function of the NB is usually written as

$$P(X=k) = \frac{\Gamma(r+k)}{\Gamma(r)\,k!}\,(1-p)^r p^k$$

or as

$$P(X=k) = \frac{\Gamma(r+k)}{\Gamma(r)\,k!}\left(\frac{r}{r+m}\right)^r\left(\frac{m}{r+m}\right)^k.$$

Under the first parametrization, the maximum likelihood estimate is $(\infty, 0)$ whenever the variance of the sample is smaller than its mean, so nothing useful can be said about $p$. Under the second, it is $(\infty, \bar{x})$, so we can give a reasonable estimate of $m$. Finally, @MarkRobinson shows that we can solve the problem of infinite values by using $\frac{r}{1+r}$ instead of $r$.

In conclusion, there is nothing fundamentally wrong with this estimation problem, except that you cannot always give meaningful interpretations of $r$ and $p$ for every sample. To be fair, the ideas are present in both answers. I chose that of @MarkRobinson as the correct one for the complements he gives.
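The boundary behaviour is easy to check numerically: with the mean fixed at the sample mean, the NB log-likelihood keeps rising as $r$ grows and approaches the Poisson limit. A sketch in Python using `scipy.stats` (illustrative data with variance below the mean, not the seed=167 sample):

```python
import numpy as np
from scipy.stats import nbinom, poisson

# A small sample whose variance (0.5) is below its mean (0.75),
# mimicking the situation in the question.
x = np.array([0, 1, 1, 0, 2, 1, 0, 1])
m = x.mean()

def loglik(r):
    # (r, m) parametrization: scipy's nbinom takes the success probability,
    # which equals r / (r + m) when the mean is m.
    return nbinom.logpmf(x, r, r / (r + m)).sum()

# The log-likelihood increases with r ...
for r in [1, 10, 100, 1000]:
    print(r, loglik(r))

# ... and approaches the Poisson(m) log-likelihood, the r -> infinity limit.
print("Poisson limit:", poisson.logpmf(x, m).sum())
```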

**Answer**

Basically, for your sample, the estimate of the size parameter is on the boundary of the parameter space. One could also consider a reparameterization such as d = size / (size + 1): when size = 0, d = 0; as size tends to infinity, d approaches 1. It turns out that, for the parameter settings you have given, size estimates of infinity (d close to 1) happen about 13% of the time for Cox-Reid adjusted profile likelihood (APL) estimates, which are an alternative to MLE for the NB (example shown here). The estimates of the mean parameter (or ‘prob’) seem fine (see figure: blue lines are the true values, the red dot is the estimate for your seed=167 sample). More details on the APL theory are here.
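The reparameterization d = size / (size + 1) is just a bijection from [0, ∞] onto [0, 1] that pulls the boundary estimate size = ∞ to the finite value d = 1. A minimal sketch (the helper names `to_d` and `to_size` are my own):

```python
import math

def to_d(size):
    # Map size in [0, infinity] to d in [0, 1]: d = size / (size + 1).
    return 1.0 if math.isinf(size) else size / (size + 1.0)

def to_size(d):
    # Inverse map: size = d / (1 - d); d = 1 corresponds to size = infinity.
    return math.inf if d == 1.0 else d / (1.0 - d)

print(to_d(3.2))        # the question's true size, mapped inside (0, 1)
print(to_d(math.inf))   # the boundary estimate maps to d = 1
print(to_size(0.5))     # d = 0.5 corresponds to size = 1
```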

So, regarding question 1: decent parameter estimates can be given. size = infinity (equivalently, dispersion = 0) is a reasonable estimate for this sample. Choose a different parameter space and the estimates will be finite.

**Attribution**
*Source: Link, Question Author: gui11aume, Answer Author: Community*