root-n consistent estimator, but root-n doesn’t converge?

I’ve heard the term “root-n consistent estimator” used many times. From the resources I’ve learned from, I thought that a root-n consistent estimator meant that:

• the estimator converges on the true value (hence the word “consistent”)
• the estimator converges at a rate of $1/\sqrt{n}$

This puzzles me: $\sqrt{n}$ itself diverges to infinity, so in what sense does the estimator “converge at rate $1/\sqrt{n}$”? Am I missing something crucial here?

What hejseb means is that $\sqrt{n}(\hat\theta-\theta)$ is “bounded in probability”, written $\sqrt{n}(\hat\theta-\theta)=O_p(1)$: loosely speaking, the probability that $\sqrt{n}(\hat\theta-\theta)$ takes on “extreme” values is “small”, uniformly in $n$.
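A standard example (my addition here, assuming an i.i.d. sample with finite variance, which the answer does not state explicitly): for the sample mean $\bar X_n$, the central limit theorem pins down the limiting distribution of the scaled error, which is the strongest way of being bounded in probability:

$$\sqrt{n}\,(\bar X_n - \mu) \;\xrightarrow{d}\; \mathcal{N}(0,\sigma^2),$$

so $\sqrt{n}(\bar X_n-\mu)=O_p(1)$ and hence $\bar X_n-\mu=O_p(n^{-1/2})$.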
Now, $\sqrt{n}$ evidently diverges to infinity. If the product of $\sqrt{n}$ and $(\hat\theta-\theta)$ is to remain bounded, then $(\hat\theta-\theta)$ must go to zero in probability, and at rate $1/\sqrt{n}$ or faster. Formally, $\sqrt{n}(\hat\theta-\theta)=O_p(1)$ implies $\hat\theta-\theta=O_p(n^{-1/2})=o_p(1)$.
$\hat\theta-\theta=o_p(1)$ is just another way of saying we have consistency: the error “vanishes” as $n\to\infty$. Note that $\hat\theta-\theta=O_p(1)$ alone would not be enough for consistency (see the comments), as that would only mean that the error $\hat\theta-\theta$ is bounded in probability, not that it goes to zero.
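A small simulation (my own illustration, not part of the original answer) makes this concrete for the sample mean of Uniform(0, 1) data, where $\theta = 0.5$: across replications, the spread of the raw error $\hat\theta-\theta$ shrinks as $n$ grows, while the spread of the scaled error $\sqrt{n}(\hat\theta-\theta)$ stays roughly constant near $\sqrt{1/12}\approx 0.289$, i.e. it is bounded in probability.

```python
import random
import statistics

# Illustration: theta_hat is the sample mean of n Uniform(0, 1) draws,
# the true theta is 0.5, and Var(X) = 1/12.
random.seed(42)
THETA = 0.5
REPS = 2000  # number of independent replications per sample size

def error_spreads(n):
    """Std. dev. across replications of the raw error (theta_hat - theta)
    and of the scaled error sqrt(n) * (theta_hat - theta)."""
    raw, scaled = [], []
    for _ in range(REPS):
        err = statistics.fmean(random.random() for _ in range(n)) - THETA
        raw.append(err)
        scaled.append(n ** 0.5 * err)
    return statistics.stdev(raw), statistics.stdev(scaled)

for n in [100, 1600, 25600]:
    raw_sd, scaled_sd = error_spreads(n)
    # raw_sd keeps shrinking; scaled_sd hovers near sqrt(1/12) ~ 0.289
    print(f"n={n:6d}  sd(error)={raw_sd:.4f}  sd(sqrt(n)*error)={scaled_sd:.4f}")
```

The point of the printout is exactly the answer's claim: multiplying a shrinking $O_p(n^{-1/2})$ error by the diverging factor $\sqrt{n}$ produces something that neither vanishes nor blows up.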