I’ve heard the term “root-n consistent estimator” used many times. From the resources I’ve studied, I understood a “root-n” consistent estimator to mean that:
- the estimator converges in probability to the true value (hence the word “consistent”)
- the estimator converges at a rate of $1/\sqrt{n}$
This puzzles me, since $1/\sqrt{n}$ doesn’t converge to anything but zero — am I missing something crucial here?
What hejseb means is that $\sqrt{n}(\hat{\theta}-\theta)$ is “bounded in probability”, loosely speaking that the probability that $\sqrt{n}(\hat{\theta}-\theta)$ takes on “extreme” values is “small”.
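For reference, the standard formal definitions behind “bounded in probability” ($O_p$) and convergence in probability to zero ($o_p$) — not spelled out above — are:

$$X_n = O_p(1) \;\Longleftrightarrow\; \forall \varepsilon>0 \;\exists M_\varepsilon>0,\, N_\varepsilon : \; P\big(|X_n|>M_\varepsilon\big)<\varepsilon \quad \text{for all } n>N_\varepsilon,$$

$$X_n = o_p(1) \;\Longleftrightarrow\; \forall \varepsilon>0 : \; \lim_{n\to\infty} P\big(|X_n|>\varepsilon\big)=0.$$

So $o_p(1)$ demands that the probability of *any* fixed deviation vanishes, while $O_p(1)$ only demands that large deviations can be made uniformly unlikely by choosing $M_\varepsilon$ big enough.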
Now, $\sqrt{n}$ evidently diverges to infinity. If the product of $\sqrt{n}$ and $(\hat{\theta}-\theta)$ is bounded, that must mean that $(\hat{\theta}-\theta)$ goes to zero in probability, formally $\hat{\theta}-\theta=o_p(1)$, and in particular at rate $1/\sqrt{n}$ if the product is to remain bounded. Formally, $\hat{\theta}-\theta=O_p(n^{-1/2})$.
$\hat{\theta}-\theta=o_p(1)$ is just another way of saying we have consistency — the error “vanishes” as $n\to\infty$. Note that $\hat{\theta}-\theta=O_p(1)$ would not be enough (see the comments) for consistency, as that would only mean that the error $\hat{\theta}-\theta$ is bounded in probability, but not that it goes to zero.
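To see this “bounded product” behaviour concretely, here is a small Monte Carlo sketch (my own illustration, not part of the answer above): I let $\hat{\theta}$ be the sample mean of $n$ draws from an Exponential(1) distribution, so the true $\theta$ is 1 and the standard deviation of $\hat{\theta}-\theta$ is exactly $1/\sqrt{n}$. As $n$ grows, the spread of $\hat{\theta}-\theta$ shrinks toward zero, while the spread of $\sqrt{n}(\hat{\theta}-\theta)$ stays roughly constant near 1.

```python
import numpy as np

# Monte Carlo sketch: theta_hat is the sample mean of n Exponential(1)
# draws, so theta = 1. The error theta_hat - theta shrinks like 1/sqrt(n),
# but sqrt(n) * (theta_hat - theta) remains bounded in probability.
rng = np.random.default_rng(0)
reps = 2000  # independent replications per sample size

sd_error = {}  # sample size n -> Monte Carlo sd of (theta_hat - theta)
for n in [25, 400, 6400]:
    theta_hat = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    sd_error[n] = (theta_hat - 1.0).std()
    print(f"n={n:>5}: sd(theta_hat - theta) = {sd_error[n]:.4f}, "
          f"sd(sqrt(n) * (theta_hat - theta)) = {np.sqrt(n) * sd_error[n]:.4f}")
```

The first column collapses toward zero (consistency), while the second hovers near 1 — exactly the $\sqrt{n}(\hat{\theta}-\theta)=O_p(1)$ statement, here a consequence of the central limit theorem.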