Doing MCMC: use JAGS/Stan or implement it myself?

I am new to Bayesian statistics research. I have heard from researchers that Bayesian researchers are better off implementing MCMC themselves rather than using tools like JAGS/Stan. May I ask what the benefit is of implementing an MCMC algorithm oneself (in a “not quite fast” language like R), other than for learning purposes?


In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. It is a great deal of work and time, and very likely to introduce bugs into the code. Black-box samplers, such as Stan, already use very sophisticated samplers. Trust me, you will not code a sampler of this caliber just for one analysis!
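For a sense of how little code a Stan analysis requires, here is a complete Stan program for estimating the mean and standard deviation of a normal sample (a minimal illustrative sketch; the variable names `N`, `y`, `mu`, and `sigma` are my own choices, not from the answer):

```stan
// Minimal Stan model: y_i ~ Normal(mu, sigma), with Stan's default priors
data {
  int<lower=0> N;       // number of observations
  vector[N] y;          // observed data
}
parameters {
  real mu;              // mean
  real<lower=0> sigma;  // standard deviation
}
model {
  y ~ normal(mu, sigma);
}
```

Stan's adaptive HMC/NUTS sampler handles proposals and tuning automatically; the user writes only the model.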

There are special cases in which this will not be sufficient. For example, if you needed to do an analysis in real time (i.e., a computer making decisions based on incoming data), these programs would not be a good idea. This is because Stan requires compiling C++ code, which may take considerably more time than just running an already prepared sampler for relatively simple models. In that case, you may want to write your own code. In addition, I believe there are special cases where packages like Stan do very poorly, such as non-Gaussian state-space models (full disclosure: I believe Stan does poorly in this case, but I do not know for certain). In those cases, it may be worth implementing a custom MCMC sampler. But this is the exception, not the rule!
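For cases like these, "writing your own code" can mean something as small as a random-walk Metropolis sampler. Below is a minimal sketch in Python (purely illustrative; the standard-normal target, step size, and iteration counts are my own assumptions, not part of the answer):

```python
import math
import random

def metropolis(log_post, init, n_iter=5000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D target.

    log_post: function returning the unnormalized log posterior.
    """
    rng = random.Random(seed)
    x = init
    lp = log_post(x)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal log density (up to an additive constant)
draws = metropolis(lambda x: -0.5 * x * x, init=0.0)
burned = draws[2000:]          # discard burn-in
mean = sum(burned) / len(burned)
```

Even this toy version needs a hand-chosen step size; a production-quality sampler (adaptation, diagnostics, multivariate targets) is far more work, which is the answer's point.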

To be quite honest, I think most researchers who write samplers for a single analysis (and this does happen; I have seen it) do so because they like writing their own samplers. At the very least, I can say that I fall into that category (i.e., I am disappointed that writing my own sampler is not the best way to do things).

Also, while it does not make sense to write your own sampler for a single analysis, it can make a lot of sense to write your own code for a class of analyses. Because JAGS, Stan, etc. are black-box samplers, you can always make things faster by specializing for a given model, although the amount of improvement is model dependent. But writing an extremely efficient sampler from the ground up is perhaps 10 to 1,000 hours of work, depending on experience, model complexity, etc. If you are doing research in Bayesian methods or writing statistical software, that is fine; it is your job. But if your boss says “Hey, can you analyze this repeated measures data set?” and you spend 250 hours writing an efficient sampler, your boss is likely to be upset. In contrast, you could have written this model in Stan in, say, 2 hours, and had 2 minutes of run time instead of the 1 minute achieved by the efficient sampler.
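As an illustration of the kind of specialization the answer describes, a conjugate normal model admits an exact Gibbs sampler that needs no tuning at all: every full conditional is a known distribution. The model, priors, and toy data below are hypothetical, chosen only to make the sketch self-contained:

```python
import random

def gibbs_normal(y, n_iter=4000, a=2.0, b=2.0, tau0_sq=100.0, seed=1):
    """Gibbs sampler specialized to y_i ~ N(mu, sigma^2) with priors
    mu ~ N(0, tau0_sq) and sigma^2 ~ InvGamma(a, b).
    Conjugacy gives exact full conditionals, so no proposal tuning is needed."""
    rng = random.Random(seed)
    n, s = len(y), sum(y)
    mu, sigma2 = 0.0, 1.0
    out = []
    for _ in range(n_iter):
        # mu | sigma^2, y is normal (conjugate update)
        prec = n / sigma2 + 1.0 / tau0_sq
        mean = (s / sigma2) / prec
        mu = rng.gauss(mean, (1.0 / prec) ** 0.5)
        # sigma^2 | mu, y is inverse-gamma; draw via a Gamma variate
        ssq = sum((yi - mu) ** 2 for yi in y)
        g = rng.gammavariate(a + 0.5 * n, 1.0)
        sigma2 = (b + 0.5 * ssq) / g
        out.append((mu, sigma2))
    return out

# Hypothetical data centered near 3
data = [2.8, 3.1, 2.9, 3.4, 3.0, 2.7, 3.2, 3.1, 2.6, 3.3]
draws = gibbs_normal(data)
post_mu = sum(m for m, _ in draws[1000:]) / len(draws[1000:])
```

Because each update is an exact draw from a closed-form conditional, a sampler like this can outrun a general-purpose black box on its one model; the trade-off is that it works only for that model class.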

Source: Link, Question Author: user112758, Answer Author: Cliff AB
