Why do we need an estimator to be consistent?

I think I have already understood the mathematical definition of a consistent estimator. Correct me if I'm wrong:

W_n is a consistent estimator for \theta if, for all \epsilon > 0,

\lim_{n\to\infty} P(|W_n - \theta| > \epsilon) = 0, \quad \forall \theta \in \Theta,

where \Theta is the parameter space. But I want to understand the need for an estimator to be consistent. Why is an estimator that is not consistent bad? Could you give me some examples?

I accept simulations in R or Python.
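For concreteness, here is a check of the definition that I can already carry out myself: if X_1, \dots, X_n are i.i.d. with mean \theta and finite variance \sigma^2, then the sample mean \bar{X}_n has variance \sigma^2/n, and Chebyshev's inequality gives

P(|\bar{X}_n - \theta| > \epsilon) \le \frac{\sigma^2}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty,

so \bar{X}_n is a consistent estimator of \theta.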

Answer

If the estimator is not consistent, it does not converge to the true value in probability. In other words, no matter how many data points you have, there remains a positive probability that your estimator differs from the true value by more than some \epsilon > 0. This is bad because even if you collect an immense amount of data, your estimate still has a positive probability of being at least \epsilon away from the true value. Practically, you can think of this as using an estimator of a quantity for which even surveying the entire population, instead of a small sample of it, would not help you.
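Here is a minimal simulation sketch in Python (NumPy) of exactly this effect. The specific choices are mine for illustration: normal data with true mean \theta = 5, tolerance \epsilon = 0.1, the sample mean as the consistent estimator, and an estimator that returns only the first observation X_1, which is unbiased but not consistent. It approximates P(|W_n - \theta| > \epsilon) by Monte Carlo for increasing n:

import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible

theta = 5.0    # true mean (arbitrary choice for the demo)
eps = 0.1      # tolerance epsilon
n_rep = 1000   # Monte Carlo replications per sample size

for n in [10, 100, 1000, 10000]:
    # n_rep independent samples of size n from N(theta, 1)
    x = rng.normal(theta, 1.0, size=(n_rep, n))
    w_mean = x.mean(axis=1)   # sample mean: consistent
    w_first = x[:, 0]         # first observation only: unbiased but NOT consistent
    p_mean = np.mean(np.abs(w_mean - theta) > eps)
    p_first = np.mean(np.abs(w_first - theta) > eps)
    print(f"n={n:>6}:  P(|mean - theta| > eps) ~ {p_mean:.3f}   "
          f"P(|X_1 - theta| > eps) ~ {p_first:.3f}")

As n grows, the first probability shrinks toward 0, while the second stays near 2(1 - \Phi(0.1)) \approx 0.92: collecting more data does nothing for the inconsistent estimator.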

Attribution
Source: Link, Question Author: Fam, Answer Author: gunes
