Are there parameters where a biased estimator is considered “better” than the unbiased estimator?

A perfect estimator would be both accurate (unbiased) and precise (low variance, so that it gives good estimates even from small samples).

I never really thought about the question of precision, only about accuracy (as I did in Estimator of σ² = μ(1−μ) when sampling without replacement, for example).

Are there cases where the unbiased estimator is less precise (and therefore arguably “worse”) than a biased estimator? If yes, I would love a simple example showing mathematically that the less accurate estimator is so much more precise that it can be considered better.

Answer

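The standard way to make “better” precise is mean squared error (MSE), which decomposes into variance plus squared bias:

MSE(θ̂) = E[(θ̂ − θ)²] = Var(θ̂) + Bias(θ̂)²

A biased estimator is therefore preferable whenever the variance it saves outweighs the squared bias it introduces.
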
One example is estimates from ordinary least squares regression when there is collinearity. They are unbiased but have huge variance. Ridge regression on the same problem yields estimates that are biased but have much lower variance. E.g.

install.packages("ridge")  # run once if the package is not installed
library(ridge)
set.seed(831)

data(GenCont)  # example dataset shipped with the ridge package
# Ridge regression: coefficients are biased but have much smaller standard errors
ridgemod <- linearRidge(Phenotypes ~ ., data = as.data.frame(GenCont))
summary(ridgemod)
# Ordinary least squares on the same data: unbiased, but inflated variance
linmod <- lm(Phenotypes ~ ., data = as.data.frame(GenCont))
summary(linmod)

The t values are much larger for ridge regression than for linear regression, reflecting its much smaller standard errors, while the bias it introduces is fairly small.

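For a self-contained demonstration of the same tradeoff, here is a minimal simulation sketch (an assumed setup, not from the original answer): two nearly identical predictors with true coefficients both equal to 1, comparing plain OLS to a hand-rolled ridge estimator with an arbitrary illustrative penalty lambda = 10.

# Minimal sketch: bias-variance tradeoff under severe collinearity
set.seed(1)
n <- 50; reps <- 2000; lambda <- 10  # lambda is an arbitrary illustrative choice
b_ols <- b_ridge <- matrix(NA, reps, 2)
for (r in 1:reps) {
  x1 <- rnorm(n)
  x2 <- x1 + rnorm(n, sd = 0.05)  # near-duplicate predictor -> severe collinearity
  y  <- x1 + x2 + rnorm(n)        # true coefficients are (1, 1), no intercept
  X  <- cbind(x1, x2)
  b_ols[r, ]   <- solve(crossprod(X), crossprod(X, y))                     # OLS
  b_ridge[r, ] <- solve(crossprod(X) + lambda * diag(2), crossprod(X, y))  # ridge
}
colMeans(b_ols)           # close to (1, 1): unbiased
colMeans(b_ridge)         # shrunk below 1: biased
apply(b_ols, 2, var)      # large variance under collinearity
apply(b_ridge, 2, var)    # far smaller variance
colMeans((b_ols - 1)^2)   # MSE = variance + bias^2: large for OLS
colMeans((b_ridge - 1)^2) # much smaller for ridge despite the bias

The squared bias ridge incurs here is small compared with the variance OLS pays for the collinearity, so ridge comes out with the lower mean squared error despite being biased.
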
Attribution
Source : Link , Question Author : Remi.b , Answer Author : Peter Flom
