Why do we need Bootstrapping?

I’m currently reading Larry Wasserman’s “All of Statistics” and I’m puzzled by something he wrote in the chapter on estimating statistical functionals of nonparametric models.

He wrote

“Sometimes we can find the estimated standard error of a statistical
functional by doing some calculations. However, in other cases it’s not
obvious how to estimate the standard error.”

I’d like to point out that in the next chapter he discusses the bootstrap as a way to address this issue, but since I don’t really understand this statement, I don’t fully grasp the motivation behind bootstrapping.

What example is there for when it’s not obvious how to estimate the standard error?

All the examples I’ve seen so far have been “obvious”, such as X1, …, Xn ~ Bernoulli(p), where the estimated standard error is se^(p̂n) = sqrt(p̂(1 − p̂)/n).
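For concreteness, here is a minimal sketch of the “obvious” case: the plug-in standard error of the sample proportion for Bernoulli data. The sample size and success probability below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.binomial(1, 0.3, size=n)  # X1, ..., Xn ~ Bernoulli(0.3), chosen arbitrarily

p_hat = x.mean()                            # plug-in estimate of p
se_hat = np.sqrt(p_hat * (1 - p_hat) / n)   # closed-form estimated standard error
print(p_hat, se_hat)
```

Here the closed-form expression exists because the variance of a Bernoulli mean is known exactly; the question is what to do when no such formula is available.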


Two answers.

  1. What’s the standard error of the ratio of two means? What’s the standard error of the median? What’s the standard error of any complex statistic? Maybe there’s a closed form equation, but it’s possible that no one has worked it out yet.
  2. In order to use the formula for (say) the standard error of the mean, we must make some assumptions. If those assumptions are violated, we can’t necessarily use the method. As @Whuber points out in the comments, bootstrapping allows us to relax some of these assumptions and hence might provide more appropriate standard errors (although it may also make additional assumptions).
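The median mentioned in the first point is a good concrete case: its standard error has no simple closed form for an arbitrary distribution, but the nonparametric bootstrap handles it the same way as any other statistic. The sketch below assumes a hypothetical exponential data set just for illustration; the number of resamples and seed are arbitrary.

```python
import numpy as np

def bootstrap_se(x, statistic, n_boot=2000, seed=0):
    """Estimate the standard error of `statistic` by the nonparametric bootstrap:
    resample the data with replacement, recompute the statistic each time,
    and take the standard deviation of those replicates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.array([statistic(rng.choice(x, size=n, replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

# Hypothetical data: no closed-form SE for the median of this sample is needed
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)
se_median = bootstrap_se(x, np.median)
print(se_median)
```

The same `bootstrap_se` call works unchanged for a ratio of means, a correlation, or any other statistic you can compute from a sample, which is exactly the point of the second answer: the procedure makes fewer distributional assumptions than a derived formula, at the cost of computation.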

Source : Link , Question Author : matanc1 , Answer Author : Jeremy Miles
