Adaptively selecting the number of bootstrap replicates

As with most Monte Carlo methods, the rule for bootstrapping is that the larger the number of replicates, the lower the Monte Carlo error. But there are diminishing returns, so it doesn’t make sense to run as many replicates as you possibly can.

Suppose you want to ensure that your estimate $\hat{\theta}$ of a certain quantity $\theta$ is within $\epsilon$ of the estimate $\tilde{\theta}$ that you would get with infinitely many replicates. For example, you might want to be reasonably sure that the first two decimal places of $\hat{\theta}$ are not wrong due to Monte Carlo error, in which case $\epsilon = .005$. Is there an adaptive procedure you can use in which you keep generating bootstrap replicates, checking $\hat{\theta}$, and stopping according to a rule such that, say, $|\hat{\theta} - \tilde{\theta}| < \epsilon$ with 95% confidence?

N.B. While the existing answers are helpful, I’d still like to see a scheme to control the probability that $|\hat{\theta} - \tilde{\theta}| < \epsilon$.

Answer

If the estimates of $\theta$ on the replicates are normally distributed, I guess you can estimate the Monte Carlo error $\hat{\sigma}$ of $\hat{\theta}$ from their standard deviation $\sigma$:

$$
\hat{\sigma} = \frac{\sigma}{\sqrt{n}}
$$

where $n$ is the number of replicates generated so far. Then you can just stop when $1.96\,\hat{\sigma} < \epsilon$.
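For concreteness, here is a minimal Python sketch of this stopping rule (the function name `adaptive_bootstrap` and the batching parameters are my own inventions, and $\hat{\theta}$ is taken to be the mean of the replicate statistics): replicates are drawn in batches, and generation stops once $1.96\,\hat{\sigma} < \epsilon$.

```python
import numpy as np

def adaptive_bootstrap(data, stat, eps=0.005, z=1.96,
                       batch=200, min_reps=100, max_reps=100_000, seed=None):
    """Draw bootstrap replicates in batches and stop once the
    estimated Monte Carlo error satisfies z * sigma / sqrt(n) < eps."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = []
    while len(reps) < max_reps:
        # Generate one batch of bootstrap replicate statistics.
        for _ in range(batch):
            resample = rng.choice(data, size=data.size, replace=True)
            reps.append(stat(resample))
        n = len(reps)
        if n >= min_reps:
            sigma = np.std(reps, ddof=1)       # spread of the replicates
            if z * sigma / np.sqrt(n) < eps:   # stopping rule from above
                break
    return np.mean(reps), len(reps)

# Example: bootstrap estimate of the mean of a small sample.
x = np.random.default_rng(0).normal(loc=1.0, size=50)
theta_hat, n_reps = adaptive_bootstrap(x, np.mean, eps=0.005, seed=1)
print(f"theta_hat = {theta_hat:.4f} after {n_reps} replicates")
```

One caveat, relevant to the N.B. above: because the rule is checked repeatedly as replicates accumulate, the nominal 95% coverage is only approximate (sequential checking inflates the error rate slightly).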

Or have I misunderstood the question? Or do you want an answer that does not assume normality and works in the presence of significant autocorrelations?

Attribution
Source: Link, Question Author: Kodiologist, Answer Author: fabiob
