Why should bootstrap sample size equal the original sample size? [duplicate]

When searching the internet for the sample size of bootstrap samples, I generally find that the size should equal the original sample size. However, I can’t find any explanation of why this should be the case.

  • In what way is it bad to use a smaller sample size?
  • Is it always better to use a larger sample size?

Some of the sites which report this:
http://www.stata.com/support/faqs/statistics/bootstrapped-samples-guidelines/
http://www2.stat.duke.edu/courses/Fall12/sta101.002/Sec3-34.pdf

Answer

This is from Efron and Tibshirani’s An Introduction to the Bootstrap (first sentence of Chapter 2):

The bootstrap is a computer-based method for assigning measures of
accuracy to statistical estimates.

This suggests that we should in some way respect the correct sample size n: The accuracy of statistical estimates depends on the sample size, and your statistical estimate will come from a sample of size n.


How to estimate the standard error of the mean with the bootstrap, and how you’re fooling yourself if you draw bootstrap samples of any other size than n.

We understand the behavior of the mean very well. As you’ll no doubt remember from intro to stats, the standard error of the mean depends on the sample size, n, in the following manner: SEM = s/√n, where s² is the sample variance.

The bootstrap principle is that a bootstrap sample relates to your sample as your sample relates to the population. In other words, you’re assuming your sample is a pretty good approximation to the population and that you can use it as a proxy. Let x_b denote the b-th bootstrap sample, and let μ̂_b be the mean of this bootstrap sample. The bootstrap estimate of standard error is:

SE_boot = √[ Σ_{b=1}^{B} (μ̂_b − μ̄)² / (B − 1) ]

where B is the number of bootstrap samples you’ve drawn (the more the merrier), and μ̄ = (1/B) Σ_{b=1}^{B} μ̂_b is the average of the bootstrapped means. This is a long way of saying that the bootstrap estimate of standard error is simply the sample standard deviation of the bootstrapped statistics. You’re using the spread in the bootstrapped means to say something about the accuracy of the sample mean.
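As a concrete illustration of the formula above, here is a minimal NumPy sketch (the sample, its size n = 50, and B = 2000 resamples are all hypothetical choices for the example) that computes the bootstrap standard error of the mean and compares it to the classical s/√n:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=2, size=50)  # hypothetical sample of size n = 50
n, B = len(x), 2000

# Draw B bootstrap samples of size n (with replacement) and record each mean
boot_means = np.array(
    [rng.choice(x, size=n, replace=True).mean() for _ in range(B)]
)

# Bootstrap SE = sample standard deviation of the bootstrapped means
se_boot = boot_means.std(ddof=1)

# Classical formula: SEM = s / sqrt(n)
sem = x.std(ddof=1) / np.sqrt(n)

print(se_boot, sem)  # the two should be close
```

Because each resample has the same size n as the original sample, the spread of the bootstrapped means tracks the SEM of the original sample, which is the point of the answer.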

Now, we’re bootstrapping, so we’re treating the original sample as a population: it is a discrete distribution with mass 1/n at each data point x_i. We can draw as many samples from this as we want, and in principle we can make them as large or small as we want. If we draw a bootstrap sample of size n′ and estimate its mean μ̂*, we know that approximately μ̂* ∼ N(μ̂, s/√n′). For n′ = n the standard deviation of your bootstrapped mean is exactly the central limit theorem-dictated SEM for the original sample. This isn’t true for any other n′.

So in this example, if n′ = n, the sample standard deviation of {μ̂_b} is a good representation of the correct standard error of the mean. If you draw larger bootstrap samples, you get really good estimates of the sample mean, but their spread no longer directly relates to the standard error you’re trying to estimate, because you can make their distribution arbitrarily tight.
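The "fooling yourself" effect is easy to demonstrate: the sketch below (same hypothetical setup as before, with an assumed resample size n′ swept over n, 10n, and 100n) shows the spread of the bootstrapped means shrinking like s/√n′ as n′ grows, so only n′ = n reproduces the SEM of the original sample:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10, scale=2, size=50)  # hypothetical sample, n = 50
n, B = len(x), 2000
sem = x.std(ddof=1) / np.sqrt(n)  # the quantity we actually want to estimate

def boot_se(sample_size):
    """Spread of bootstrapped means when each resample has `sample_size` draws."""
    means = [rng.choice(x, size=sample_size, replace=True).mean() for _ in range(B)]
    return np.std(means, ddof=1)

# The spread shrinks like s/sqrt(n'), so larger resamples give an
# arbitrarily tight distribution that no longer estimates the SEM.
for n_prime in (n, 10 * n, 100 * n):
    print(n_prime, boot_se(n_prime), sem)
```

Only the first line of output matches the SEM; the others understate it by roughly √10 and √100.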

Attribution
Source: Link, Question Author: Andrew, Answer Author: einar
