I have recently stumbled across a mention of the “double/triple bootstrap” or “iterative bootstrap”. As I understand it, each bootstrap sample is itself bootstrapped again.

What is the point? How is it used?

**Answer**

The paper you mention in the comments refers to Davidson and MacKinnon, who give this motivation:

> Although bootstrap P values will often be very reliable, this will not be true in every case. For an asymptotic test, one way to check whether it is reliable is simply to use the bootstrap. If the asymptotic and bootstrap P values associated with a given test statistic are similar, we can be fairly confident that the asymptotic one is reasonably accurate. Of course, having gone to the trouble of computing the bootstrap P value, we may well want to use it instead of the asymptotic one.
>
> In a great many cases, however, asymptotic and bootstrap P values are quite different. When this happens, it is almost certain that the asymptotic P value is inaccurate, but we cannot be sure that the bootstrap one is accurate. In this paper, we discuss techniques for computing modified bootstrap P values which will tend to be similar to the ordinary bootstrap P value when the latter is reliable, but which should often be more accurate when it is unreliable. These techniques are closely related to the double bootstrap originally proposed by Beran (1988), but they are far less expensive to compute. In fact, the amount of computational effort beyond that needed to obtain ordinary bootstrap P values is roughly equal to the amount needed to compute the latter in the first place.

That looks like a pretty clear reason both to (i) perform iterative bootstrapping and (ii) pursue efficient methods for doing it, which is what the paper you point to and this paper seem to be trying to do.

(So far this answer only relates to the ‘what’s the point?’ part of the question.)
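To make the mechanics concrete, here is a minimal sketch of a full (Beran-style) double bootstrap p-value, illustrated with a one-sided test of H0: mean ≤ 0 using a studentized mean. The toy data, function names, and resampling counts are illustrative assumptions, not from the original post; note that this is the expensive O(B1 × B2) version — the Davidson–MacKinnon techniques quoted above aim to approximate this calibration at roughly twice the cost of a single bootstrap rather than B2 times the cost.

```python
import numpy as np

def stat(x):
    # Studentized mean: the test statistic for H0: mean <= 0.
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

def boot_p(x, t_obs, B, rng):
    # Ordinary (single-level) bootstrap p-value.
    # Impose the null by centering the data before resampling.
    x0 = x - x.mean()
    t_star = np.array([stat(rng.choice(x0, size=len(x0), replace=True))
                       for _ in range(B)])
    return np.mean(t_star >= t_obs)

def double_boot_p(x, B1=199, B2=99, seed=0):
    # Full double bootstrap: each first-level bootstrap sample is
    # bootstrapped again to calibrate the first-level p-value.
    rng = np.random.default_rng(seed)
    t_obs = stat(x)
    p_hat = boot_p(x, t_obs, B1, rng)   # ordinary bootstrap p-value
    x0 = x - x.mean()                   # null-imposed population proxy
    p_star = np.empty(B1)
    for b in range(B1):
        xb = rng.choice(x0, size=len(x0), replace=True)  # first level
        p_star[b] = boot_p(xb, stat(xb), B2, rng)        # second level
    # Adjusted p-value: how extreme p_hat is among the second-level
    # p-values, which are (approximately) uniform when the single
    # bootstrap is reliable.
    p_adj = np.mean(p_star <= p_hat)
    return p_hat, p_adj

rng = np.random.default_rng(42)
data = rng.normal(0.5, 1.0, size=30)   # toy sample with a positive mean
p_hat, p_adj = double_boot_p(data)
print(p_hat, p_adj)
```

When `p_hat` and `p_adj` are close, the ordinary bootstrap p-value is likely trustworthy; a large gap signals that calibration matters, which mirrors the asymptotic-vs-bootstrap comparison in the quoted passage.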

**Attribution**
*Source: Link, Question Author: Max, Answer Author: Glen_b*