When to use a bootstrap vs. a Bayesian technique?

I have a rather complicated decision analysis problem involving reliability testing and the logical approach (to me) seems to involve using MCMC to support a Bayesian analysis. However, it has been suggested that it would be more appropriate to use a bootstrapping approach. Could someone suggest a reference (or three) that might support the use of either technique over the other (even for particular situations)? FWIW, I have data from multiple, disparate sources and few/zero failure observations. I also have data at the subsystem and system levels.

It seems a comparison like this should be available, but I’ve had no luck searching the usual suspects. Thanks in advance for any pointers.


To my thinking, your problem description points to two main issues. First:

I have a rather complicated decision analysis…

Assuming you’ve got a loss function in hand, you need to decide whether you care about frequentist risk or posterior expected loss. The bootstrap lets you approximate functionals of the data distribution, so it will help with the former, while posterior samples from MCMC will let you assess the latter. But…
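To make the contrast concrete, here is a minimal sketch in plain Python. The data, the Beta(1, 1) prior, and the linear loss are all hypothetical stand-ins, not anything from your problem: the bootstrap half approximates the sampling distribution of the observed failure rate, while the Bayesian half computes a posterior quantity in closed form (conjugacy here playing the role MCMC would play in a harder model).

```python
import random

random.seed(0)

# Hypothetical reliability record: 1 = success, 0 = failure
# (few failures, as in the question).
data = [1] * 48 + [0] * 2

# --- Frequentist route: bootstrap the failure rate ---
# Resample the data with replacement to approximate the sampling
# distribution of the observed failure rate.
boot_rates = []
for _ in range(2000):
    sample = [random.choice(data) for _ in data]
    boot_rates.append(sample.count(0) / len(sample))

boot_rates.sort()
# ~95% percentile interval from the sorted bootstrap replicates.
ci_low, ci_high = boot_rates[50], boot_rates[1949]

# --- Bayesian route: conjugate Beta posterior ---
# With a Beta(1, 1) prior on the failure probability p, the posterior is
# Beta(1 + failures, 1 + successes), so its mean is available exactly.
failures, successes = data.count(0), data.count(1)
post_mean = (1 + failures) / (2 + len(data))

# Posterior expected loss of "accept the system" under a hypothetical
# loss that is linear in p: loss(accept, p) = 10 * p.
expected_loss_accept = 10 * post_mean
```

The bootstrap interval speaks to how the estimate would vary over repeated samples (frequentist risk territory); the posterior expected loss is a single number you can minimize over actions, which is the Bayesian decision-theoretic object.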

I also have data at the subsystem and system levels

so these data have hierarchical structure. The Bayesian approach models such data very naturally, whereas the bootstrap was originally designed for data modelled as i.i.d. While it has been extended to hierarchical data (see references in the introduction of this paper), such approaches are relatively underdeveloped (according to the abstract of this article).
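To illustrate what "extended to hierarchical data" means in the simplest case, here is a sketch of a two-stage (cluster) bootstrap: resample subsystems with replacement, then resample observations within each chosen subsystem. The subsystem records below are made up for illustration, and real hierarchical bootstrap schemes involve subtler choices (which stages to resample, how to rescale) than this toy shows.

```python
import random

random.seed(1)

# Hypothetical subsystem-level records: each inner list is one
# subsystem's pass/fail data (1 = success, 0 = failure).
subsystems = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 0, 1, 1],
]

def two_stage_resample(groups):
    """One two-stage bootstrap draw: resample groups with replacement,
    then resample observations within each chosen group."""
    chosen = [random.choice(groups) for _ in groups]
    return [[random.choice(g) for _ in g] for g in chosen]

# Bootstrap distribution of the overall failure rate, respecting
# the grouping at the first resampling stage.
rates = []
for _ in range(1000):
    draw = two_stage_resample(subsystems)
    flat = [x for g in draw for x in g]
    rates.append(1 - sum(flat) / len(flat))
```

The Bayesian counterpart would instead put a shared prior over subsystem-level failure probabilities (a hierarchical model) and let MCMC pool information across subsystems, which is the "very naturally" part above.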

To summarize: if it really is frequentist risk that you care about, then some original research in the application of the bootstrap to decision theory may be necessary. However, if minimizing posterior expected loss is a more natural fit to your decision problem, Bayes is definitely the way to go.
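If the posterior-expected-loss route is the fit, the final decision step is mechanical once you have posterior samples: average the loss over the samples for each action and pick the minimizer. A sketch, with entirely hypothetical posterior samples and loss values standing in for your MCMC output and your real loss function:

```python
import random

random.seed(2)

# Hypothetical posterior samples of the system failure probability p;
# in practice these would come from your MCMC run.
post_samples = [random.betavariate(3, 49) for _ in range(5000)]

# Hypothetical loss table: cost of accepting scales with the failure
# probability, rejecting incurs a fixed cost (more testing, redesign).
def loss(action, p):
    if action == "accept":
        return 100 * p
    return 5

# Bayes decision: the action minimizing posterior expected loss,
# estimated by averaging the loss over the posterior samples.
actions = ["accept", "reject"]
exp_loss = {a: sum(loss(a, p) for p in post_samples) / len(post_samples)
            for a in actions}
best = min(exp_loss, key=exp_loss.get)
```

With these made-up numbers the posterior mean of p is about 0.058, so accepting costs roughly 5.8 in expectation and the fixed rejection cost of 5 wins; change the loss and the decision can flip, which is the whole point of doing the analysis.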

Source: Link, Question Author: Aengus, Answer Author: Cyan
