root-n consistent estimator, but root-n doesn’t converge?

I’ve heard the term “root-n consistent estimator” used many times. From the resources I’ve learned from, I thought that a “root-n” consistent estimator meant that:

  • the estimator converges on the true value (hence the word “consistent”)
  • the estimator converges at a rate of √n

This puzzles me, since √n doesn’t converge? Am I missing something crucial here?


What hejseb means is that √n(θ̂ − θ) is “bounded in probability”, loosely speaking that the probability that √n(θ̂ − θ) takes on “extreme” values is “small”.
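For reference, “bounded in probability” (written O_p(1)) has a precise definition; a standard statement of it (my paraphrase, not part of the original answer) is:

```latex
% X_n = O_p(1) means X_n is bounded in probability:
\forall \varepsilon > 0 \;\; \exists M > 0,\; N \in \mathbb{N} :\quad
\Pr\bigl(|X_n| > M\bigr) < \varepsilon \quad \text{for all } n > N.
% Root-n consistency is this condition applied to X_n = \sqrt{n}\,(\hat\theta - \theta).
```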

Now, √n evidently diverges to infinity. If the product of √n and (θ̂ − θ) is bounded in probability, that must mean that (θ̂ − θ) goes to zero in probability, formally θ̂ − θ = o_p(1), and in particular at rate 1/√n if the product is to be bounded. Formally,

θ̂ − θ = o_p(1)

is just another way of saying we have consistency – the error “vanishes” as n → ∞. Note that θ̂ − θ = O_p(1) would not be enough (see the comments) for consistency, as that would only mean that the error θ̂ − θ is bounded in probability, but not that it goes to zero.
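To make this concrete, here is a small Monte Carlo sketch (my own illustration, not part of the original answer) using the sample mean of an Exponential(1) population as the estimator θ̂. The raw error θ̂ − θ shrinks roughly like 1/√n as n grows, while the rescaled error √n(θ̂ − θ) keeps a stable spread – exactly the “bounded in probability” behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0   # true mean of an Exponential(1) population (assumed example)
reps = 1000   # Monte Carlo replications per sample size

raw_sd = {}     # spread of theta_hat - theta: should shrink like 1/sqrt(n)
scaled_sd = {}  # spread of sqrt(n)*(theta_hat - theta): should stay ~constant

for n in (100, 2_500, 10_000):
    draws = rng.exponential(scale=theta, size=(reps, n))
    err = draws.mean(axis=1) - theta          # theta_hat - theta per replication
    raw_sd[n] = err.std()
    scaled_sd[n] = (np.sqrt(n) * err).std()
    print(f"n={n:>6}  sd(err)={raw_sd[n]:.4f}  sd(sqrt(n)*err)={scaled_sd[n]:.4f}")
```

The first column of standard deviations collapses toward zero (consistency), while the second hovers near σ = 1 for every n, which is what √n(θ̂ − θ) = O_p(1) looks like in practice.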

Source: Link, Question Author: makansij, Answer Author: Christoph Hanck
