Why is the jackknife less computationally intensive than the bootstrap?

It’s often claimed that the jackknife is less computationally intensive. How is that the case?

My understanding is that the jackknife involves the following steps:

1. Remove 1 data point
2. Estimate the statistic of interest (e.g. the sample mean) on the remaining $n-1$ points
3. Repeat steps 1) and 2) for each data point to get a sampling distribution of the statistic of interest.
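The jackknife steps above can be sketched in a few lines. This is a minimal illustration (the function name `jackknife_means` and the toy data are my own, not from the question), using the sample mean as the statistic:

```python
def jackknife_means(data):
    """Leave-one-out means: drop the i-th point, average the remaining n-1."""
    n = len(data)
    return [sum(x for j, x in enumerate(data) if j != i) / (n - 1)
            for i in range(n)]

# Each of the n jackknife replicates omits exactly one observation.
print(jackknife_means([2.0, 4.0, 6.0, 8.0]))
```

Note that the estimation step runs exactly `n` times, once per left-out point.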

The bootstrap involves the following steps:

1. Generate a bootstrap sample (sample with replacement)
2. Estimate the statistic of interest (e.g. sample mean) on the bootstrap sample
3. Repeat steps 1) and 2) many times to get a sampling distribution of the statistic of interest.
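For comparison, here is a minimal sketch of the bootstrap steps (the function name `bootstrap_means` and the choice of `b` are illustrative assumptions, not from the question):

```python
import random

def bootstrap_means(data, b=1000, seed=0):
    """b bootstrap replicates: resample n points with replacement, take the mean."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    n = len(data)
    return [sum(rng.choices(data, k=n)) / n for _ in range(b)]

reps = bootstrap_means([2.0, 4.0, 6.0, 8.0], b=1000)
```

Unlike the jackknife, the number of repetitions `b` is a free choice, and is conventionally much larger than `n`.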

It seems to me that Step 2 is the much more computationally intensive part, and is exactly the same between the jackknife and the bootstrap. If so, then how is the jackknife less computationally intensive?

1. **Convention.** During a jackknife, the estimation step is always done exactly $n$ times: one data point is omitted from each jackknife estimate. If you had a dataset of $n=50$ points, you'd therefore run the estimation procedure 50 times, leaving out the 1st, 2nd, … $n$th point in turn. Bootstraps, by comparison, are almost always run "a large number of times" (~1000); bootstrapping with only $k=50$ repeats is virtually unheard of, and people rarely compute jackknife estimates from absolutely massive samples ($n=10^9$), in part because it would be pointlessly slow.

2. **Computation.** Suppose you want to estimate the mean. For the bootstrap, you're stuck adding all $n$ values together each time; $bn$ additions are required for $b$ bootstrap iterations. For the jackknife estimate, you can instead add all $n$ numbers *once* to find $S=\sum x$. Next, compute the mean for the sample where the $i$th data point is removed as $\frac{S-x_i}{n-1}$. This requires only $2n$ additions/subtractions for the whole jackknife. Similar tricks exist for other statistics.
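The running-sum trick described above can be sketched as follows (a minimal illustration; the function name `jackknife_means_fast` is my own):

```python
def jackknife_means_fast(data):
    """All n leave-one-out means in ~2n additions/subtractions:
    one pass to compute the total S, then (S - x_i) / (n - 1) per point."""
    n = len(data)
    S = sum(data)                              # n - 1 additions
    return [(S - x) / (n - 1) for x in data]   # n subtractions

print(jackknife_means_fast([2.0, 4.0, 6.0, 8.0]))
```

This gives the same result as recomputing each leave-one-out mean from scratch, but the cost per replicate is constant rather than $O(n)$, which is exactly why the jackknife can be much cheaper than the bootstrap for such statistics.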