# Motivation

In the context of post-model-selection inference, Leeb & Pötscher (2005) write:

> Although it has long been known that uniformity (at least locally) w.r.t. the parameters is an important issue in asymptotic analysis, this lesson has often been forgotten in the daily practice of econometric and statistical theory where we are often content to prove pointwise asymptotic results (i.e., results that hold for each fixed true parameter value). This amnesia — and the resulting practice — fortunately has no dramatic consequences as long as only sufficiently “regular” estimators in sufficiently “regular” models are considered. However, because post-model-selection estimators are quite “irregular,” the uniformity issues surface here with a vengeance.

# Background

## Uniform convergence

Suppose an estimator $$\hat\theta_n(\alpha)$$ converges uniformly (w.r.t. $$\alpha$$) in distribution to some random variable $$Z$$. Then for a given precision $$\varepsilon > 0$$ we can always find a sample size $$N_{\varepsilon}$$ such that, for every $$\alpha$$, the distance between the distribution of $$\hat\theta_{n}(\alpha)$$ and the distribution of $$Z$$ (i.e. the limiting distribution) will be at most $$\varepsilon$$ for every $$n > N_{\varepsilon}$$.

This can be useful in practice:

1. When designing an experiment, we can bound the imprecision at a desired, arbitrarily small level $$\varepsilon$$ by finding the corresponding $$N_{\varepsilon}$$.
2. For a given sample of size $$N$$, we can find $$\varepsilon_N$$ to bound the imprecision.
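As a small simulation sketch of point 2 (the function name `ks_to_normal` and the choice of an exponential model are mine, purely for illustration): for the sample mean of i.i.d. Exponential data with mean $$\alpha$$, the standardised statistic $$\sqrt{n}(\hat\theta_n(\alpha)-\alpha)/\alpha$$ is a pivot — its distribution does not depend on $$\alpha$$ at all — so its distance to the $$N(0,1)$$ limit is controlled uniformly in $$\alpha$$, and an $$\varepsilon_N$$ estimated at one parameter value serves for all of them.

```python
import numpy as np
from scipy import stats

def ks_to_normal(alpha, n, reps=20_000, rng=None):
    # Simulate the sampling distribution of the standardised sample mean
    # sqrt(n) * (mean - alpha) / alpha for Exponential(mean=alpha) data,
    # and return its Kolmogorov distance to the N(0,1) limit.
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.exponential(scale=alpha, size=(reps, n))
    z = np.sqrt(n) * (x.mean(axis=1) - alpha) / alpha
    return stats.kstest(z, "norm").statistic

for n in (20, 500):
    # The distances agree across alpha (pivotality => uniformity)
    # and shrink as n grows.
    dists = [ks_to_normal(a, n) for a in (0.5, 1.0, 5.0)]
    print(n, [round(d, 3) for d in dists])
```

The same Monte Carlo distance, read off for the $$n$$ at hand, is exactly the kind of $$\varepsilon_N$$ bound described above.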

## Pointwise (but nonuniform) convergence

On the other hand, suppose an estimator $$\hat\psi_n(\alpha)$$ converges in distribution to some random variable $$Z$$ pointwise (w.r.t. $$\alpha$$), but not uniformly. Due to the nonuniformity, there exists a precision $$\varepsilon > 0$$ such that for any sample size $$N$$ we can always find a value $$\alpha_N$$ such that the distance between the distribution of $$\hat\psi_{n}(\alpha_N)$$ and the distribution of $$Z$$ (i.e. the limiting distribution) will be at least $$\varepsilon$$ for some $$n > N$$.

Some thoughts:

1. This does not tell us how large the $$\varepsilon$$ will be.
2. When designing an experiment, we can no longer bound our imprecision at an arbitrary $$\varepsilon$$ by finding a suitable $$N_{\varepsilon}$$. Perhaps we could bound the imprecision at some low level and then not worry about it, but we might not always be able to bound it where we want it.
3. We may or may not be able to find an $$\varepsilon_N$$ that bounds the imprecision for a given sample of size $$N$$.
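The textbook instance of this behaviour, and one directly tied to post-model selection, is Hodges' estimator: keep the sample mean unless it is small, in which case "select the zero model". It converges pointwise at every fixed $$\alpha$$, but along the drifting sequence $$\alpha_n = n^{-1/4}$$ (sitting exactly at the selection threshold) the normalised risk $$n\,E[(\hat\psi_n-\alpha)^2]$$ grows without bound. A minimal sketch (the function name `hodges_risk` is mine; the sample mean is drawn directly from its exact $$N(\alpha, 1/n)$$ distribution to keep the simulation cheap):

```python
import numpy as np

def hodges_risk(alpha, n, reps=20_000, rng=None):
    # Hodges' estimator: keep the sample mean if it clears the
    # threshold n**(-1/4), otherwise shrink it all the way to 0.
    rng = np.random.default_rng(0) if rng is None else rng
    xbar = rng.normal(loc=alpha, scale=1.0 / np.sqrt(n), size=reps)
    psi = np.where(np.abs(xbar) > n ** -0.25, xbar, 0.0)
    # Normalised risk n * E[(psi - alpha)^2]; for the plain mean this is 1.
    return n * np.mean((psi - alpha) ** 2)

for n in (100, 10_000):
    print(n,
          round(hodges_risk(2.0, n), 2),         # fixed alpha: risk stays near 1
          round(hodges_risk(n ** -0.25, n), 2))  # drifting alpha: risk blows up
```

At any fixed $$\alpha \neq 0$$ the risk settles near 1 (the estimator behaves like the plain mean), which is the pointwise story; the worst case over $$\alpha$$ at each $$n$$ is what refuses to converge.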

# Questions

1. Does lack of uniform convergence make the estimator largely useless?
(I guess the answer is “no”, since so many papers focus on pointwise convergence…)
2. If no, then what are some basic examples where the nonuniformly-convergent estimator is useful?

# Answer

It’s hard to give a definitive answer, because “useful” and “useless” are not mathematical concepts and are in many situations subjective (in some others one could try to formalise usefulness, but such formalisations are then again open to discussion).

Here are some thoughts.

(a) Uniform convergence is clearly much stronger than pointwise convergence; with pointwise convergence there is no guarantee, if you don’t know the true parameter value, that for any given $$n$$ you are anywhere near to where you want to be.

(b) Pointwise convergence is still stronger than not having any convergence at all.

(c) If you have uniform convergence but a given $$n$$ that is not huge, the uniform bound that you can actually show with the $$n$$ you have may not be any good. This doesn’t mean that your estimator is bad; it rather means that the uniform convergence bound doesn’t guarantee that you’re close enough to the true value. You may still be.

(d) In case we don’t have a uniform convergence result, there are various possibilities:

i) Uniform convergence may in fact hold but nobody has managed to prove it yet.

ii) Uniform convergence may be violated, however, it may only be violated in areas of the parameter space that are not realistic, so the actual convergence behaviour may be all right. As in (c), just because you don’t have a theorem that guarantees that you’re close to the true value doesn’t mean you are far from it.

iii) Uniform convergence may be violated and you may encounter irregular behaviour in all kinds of realistic situations. Tough luck.

iv) There may even be small-$$n$$ situations in which, for the $$n$$ actually available in practice, something that is not convergent at all is better than something that is pointwise or uniformly convergent.

(e) Now you may say: uniform convergence is clearly useful because it gives us a guarantee with clear practical value, and without it we have no guarantee at all. But apart from the fact that an estimator may be good even if we can’t guarantee that it is good, in fact we never have a guarantee that really applies in practice, because in practice model assumptions don’t hold. And the situation is more complicated than saying “OK, model P is wrong, but there is a true model Q that is just too complicated and may be tamed by a nonparametric uniform convergence result”; no, all these models are idealisations, and nothing is i.i.d. or follows any regular dependence or non-identity pattern in the first place (not even the random numbers that we use in simulations are in fact random numbers).

So the uniform convergence guarantee, too, applies to an idealised situation, and practice is a different story. We use theory like uniform convergence to make quality statements about estimators in idealised situations, because these are the situations we can handle. We can really only say that in such idealised situations, constructed artificially by us for the sake of making theory possible, uniform convergence is stronger than pointwise convergence and pointwise convergence is stronger than nothing; but other ways of reasoning (such as simulation studies or even weaker theory) play a role, too, and may in some situations dominate what we know from asymptotic theory.

Sorry, no specific examples, but in any setup in which you cannot find a uniformly convergent estimator but only a pointwise convergent one, chances are the pointwise convergent one will help you (sometimes an estimator of which you cannot even show pointwise convergence may help you as well, or even more). Then again it may not; but for whatever practical reason (issues with model assumptions, small $$n$$, measurement, whatever) the uniformly convergent one may be misleading in a specific situation as well.