# For what models does the bias of MLE fall faster than the variance?

Let $\hat\theta$ be a maximum likelihood estimate of a true parameter $\theta^*$ of some model. As the number of data points $n$ increases, the error $\lVert\hat\theta-\theta^*\rVert$ typically decreases as $O(1/\sqrt n)$. Using the triangle inequality and properties of the expectation, one can show that this error rate implies that both the “bias” $\lVert \mathbb E\hat\theta - \theta^*\rVert$ and the “deviation” $\lVert \mathbb E\hat\theta - \hat\theta\rVert$ decrease at the same $O(1/\sqrt{n})$ rate. Of course, it is possible for a model’s bias to shrink at a faster rate, and many models (like ordinary least squares regression) have no bias at all.
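For completeness, the argument can be written out (assuming the $O(1/\sqrt n)$ rate holds for the expected error $\mathbb E\lVert\hat\theta-\theta^*\rVert$):

$$\lVert \mathbb E\hat\theta - \theta^*\rVert = \bigl\lVert \mathbb E[\hat\theta - \theta^*] \bigr\rVert \le \mathbb E\lVert \hat\theta - \theta^* \rVert = O(1/\sqrt n)$$

by Jensen’s inequality applied to the norm, and

$$\lVert \mathbb E\hat\theta - \hat\theta\rVert \le \lVert \mathbb E\hat\theta - \theta^*\rVert + \lVert \theta^* - \hat\theta\rVert = O(1/\sqrt n)$$

by the triangle inequality.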

I’m interested in models that have bias that shrinks faster than $O(1/\sqrt n)$, but where the error does not shrink at this faster rate because the deviation still shrinks as $O(1/\sqrt n)$. In particular, I’d like to know sufficient conditions for a model’s bias to shrink at the rate $O(1/n)$.

In general, you need models where the MLE is not asymptotically normal but converges to some other distribution, and does so at a faster rate. This usually happens when the parameter being estimated is at the boundary of the parameter space. Intuitively, the MLE then approaches the parameter “only from one side”, so it “improves on convergence speed” since it is not “distracted” by going “back and forth” around the parameter.

A standard example is the MLE for $\theta$ in an i.i.d. sample of $U(0,\theta)$ uniform r.v.’s. The MLE here is the maximum order statistic,

$$\hat\theta_n = X_{(n)} = \max\{X_1,\ldots,X_n\}.$$

Its finite-sample distribution is

$$P\bigl(\hat\theta_n \le x\bigr) = \left(\frac{x}{\theta}\right)^{n}, \qquad 0 \le x \le \theta,$$

from which

$$\mathbb E[\hat\theta_n] = \frac{n}{n+1}\,\theta, \qquad B(\hat\theta_n) = \mathbb E[\hat\theta_n] - \theta = -\frac{\theta}{n+1}.$$

So $B(\hat \theta_n) = O(1/n)$. But the same increased rate also holds for the variance,

$$\operatorname{Var}(\hat\theta_n) = \frac{n\,\theta^2}{(n+1)^2(n+2)} = O(1/n^2),$$

so the standard deviation shrinks at the $O(1/n)$ rate as well, not at $O(1/\sqrt n)$.
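These rates are easy to check by simulation. A minimal sketch (the value $\theta = 1$ and the replication count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0        # true parameter of U(0, theta)
reps = 50_000      # Monte Carlo replications

for n in (50, 100, 200):
    # MLE for each replication: the sample maximum
    mle = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
    bias = mle.mean() - theta   # should be close to -theta/(n+1)
    sd = mle.std()              # roughly theta/n for large n
    print(f"n={n:4d}  bias={bias:+.5f}  (theory {-theta/(n+1):+.5f})  sd={sd:.5f}")
```

Doubling $n$ roughly halves both the bias and the standard deviation, consistent with both being $O(1/n)$.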

One can also verify that, to obtain a non-degenerate limiting distribution, we need to look at the variable $n(\theta - \hat \theta_n)$ (i.e. we need to scale by $n$ rather than by $\sqrt n$), since

$$P\bigl(n(\theta - \hat\theta_n) \le t\bigr) = 1 - P\!\left(\hat\theta_n \le \theta - \frac{t}{n}\right) = 1 - \left(1 - \frac{t}{n\theta}\right)^{n} \longrightarrow 1 - e^{-t/\theta}, \qquad t \ge 0,$$

which is the CDF of the Exponential distribution with rate $1/\theta$.
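The exponential limit can also be seen numerically; a quick sketch (the values of $\theta$, $n$, and the replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 200, 50_000   # illustrative choices

# n * (theta - MLE) across many replications
scaled = n * (theta - rng.uniform(0, theta, size=(reps, n)).max(axis=1))

# Compare the empirical CDF with the Exponential(rate 1/theta) CDF
for t in (0.5, 1.0, 2.0, 4.0):
    empirical = (scaled <= t).mean()
    limit = 1 - np.exp(-t / theta)
    print(f"t={t:3.1f}  empirical={empirical:.4f}  limit={limit:.4f}")
```

Even at moderate $n$ the empirical CDF of $n(\theta - \hat\theta_n)$ is within Monte Carlo error of $1 - e^{-t/\theta}$.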

I hope this provides some direction.