Suppose we have a Bernoulli process with failure probability q (which will be small, say, q ≤ 0.01) from which we sample until we encounter 10 failures. We thereby estimate the probability of failure as q̂ := 10/N, where N is the total number of samples.
Question: Is q̂ a biased estimate of q? And, if so, is there a way to correct it?
I’m concerned that insisting that the last sample be a failure biases the estimate.
It is true that q̂ is a biased estimate of q in the sense that E(q̂) ≠ q. In fact the bias is upward: since 1/x is convex, Jensen's inequality gives E(10/N) > 10/E(N) = q. (If you do want an unbiased estimator, a classical result of Haldane is that (10 − 1)/(N − 1) is exactly unbiased under this stopping rule.) But you shouldn't necessarily let the bias deter you. This exact scenario is often used as a criticism of the idea that we should always insist on unbiased estimators, because here the bias is an artifact of the particular stopping rule we happen to be using. The data look exactly as they would if we had fixed the number of samples in advance, so why should our inferences change?
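As a quick check of both the bias and its correction, here is a short simulation (a sketch; the true q, the number of trials, and the seed are arbitrary choices). It draws the stopping time N by summing geometric waiting times and compares the naive estimator 10/N with (10 − 1)/(N − 1), which is exactly unbiased under this stopping rule (Haldane's estimator).

```python
import math
import random

random.seed(0)

q = 0.01        # true failure probability (arbitrary choice for illustration)
r = 10          # stop after r failures
trials = 50_000

def sample_N(q, r):
    """Total number of Bernoulli(q) draws needed to see r failures.

    Each waiting time between failures is Geometric(q), drawn by
    inverse transform: G = floor(log(1-U)/log(1-q)) + 1.
    """
    n = 0
    for _ in range(r):
        u = random.random()
        n += math.floor(math.log1p(-u) / math.log1p(-q)) + 1
    return n

naive, corrected = [], []
for _ in range(trials):
    N = sample_N(q, r)
    naive.append(r / N)                  # q_hat = r/N (biased upward)
    corrected.append((r - 1) / (N - 1))  # Haldane's unbiased estimator

mean_naive = sum(naive) / trials
mean_corrected = sum(corrected) / trials
print(f"mean of r/N:         {mean_naive:.5f}   (true q = {q})")
print(f"mean of (r-1)/(N-1): {mean_corrected:.5f}")
```

The naive mean lands noticeably above q, while the mean of the corrected estimator agrees with q up to Monte Carlo error.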
Interestingly, if you were to collect data in this way and then write down the likelihood function under both the binomial (fixed sample size) and negative binomial (sample until 10 failures) models, you would find that the two are proportional as functions of q: the binomial likelihood is (N choose 10) q^10 (1−q)^(N−10) and the negative binomial likelihood is (N−1 choose 9) q^10 (1−q)^(N−10), so their ratio, N/10, does not depend on q. Consequently q̂ = 10/N is just the ordinary maximum likelihood estimate under the negative binomial model, which is of course a perfectly reasonable estimate.
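This proportionality is easy to verify numerically. The sketch below (N = 800 is an arbitrary illustrative value) evaluates both likelihoods on a grid of q values and confirms that their ratio is the constant (N choose 10)/(N−1 choose 9) = N/10, independent of q.

```python
from math import comb

r, N = 10, 800  # r failures observed in N total draws (illustrative values)

def binom_lik(q):
    # Fixed-n binomial model: r failures in N trials.
    return comb(N, r) * q**r * (1 - q)**(N - r)

def nbinom_lik(q):
    # Negative binomial model: the N-th draw is the r-th failure.
    return comb(N - 1, r - 1) * q**r * (1 - q)**(N - r)

qs = [0.005, 0.01, 0.0125, 0.02]
ratios = [binom_lik(q) / nbinom_lik(q) for q in qs]
print(ratios)   # the same value for every q
print(N / r)    # that value is C(N,r)/C(N-1,r-1) = N/r
```

Since the two likelihoods differ only by a factor that does not involve q, they are maximized at the same q̂, and any likelihood-based inference is identical under the two models.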