I started doing Monte Carlo in R as a hobby, but eventually a financial analyst advised me to migrate to Matlab.

I’m an experienced software developer, but a Monte Carlo beginner.

I want to construct static models with sensitivity analysis, and later dynamic models.

I need good libraries/algorithms to guide me. It seems to me that R has excellent libraries, and I suspect Matlab is preferred by inexperienced programmers because of its easy, Pascal-like language. The R language is based on Scheme, which is hard for beginners, but not for me. If Matlab/Octave has no advantages on the numerical/library side, I would stick with R.

**Answer**

I use both. I often prototype functions & algorithms in Matlab because, as stated, it is easier to express an algorithm in something close to a pure mathematical language.

R does have excellent libraries. I’m still learning it, but I’m starting to leave Matlab in the dust because once you know R, it’s also fairly easy to prototype functions there.

However, I find that if you want algorithms to run efficiently in a production environment, it is best to move to a compiled language like C++. I have experience wrapping C++ into both Matlab and R (and Excel, for that matter), but I’ve had a better experience with R. *Disclaimer:* Being a grad student, I haven’t used a recent version of Matlab for my dlls; I’ve been working almost exclusively in Matlab 7.1 (which is like 4 years old). Perhaps the newer versions work better, but I can think of two situations off the top of my head where a C++ dll behind Matlab caused Windows XP to blue screen because I stepped outside an array’s bounds — a very hard problem to debug if your computer reboots every time you make that mistake…
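To make the R-wrapping concrete: a minimal sketch of the kind of C++ function one can expose through R’s `.C()` foreign-function interface, where every argument travels as a pointer and results come back through an output pointer. This is illustrative only (the function name `mc_mean` is hypothetical), not the code discussed above:

```cpp
// Minimal example of a function shaped for R's .C() interface:
// all arguments arrive as pointers, and the result is written
// through an output pointer. (Illustrative sketch only.)
extern "C" void mc_mean(double *x, int *n, double *out) {
    double sum = 0.0;
    for (int i = 0; i < *n; ++i)
        sum += x[i];
    *out = sum / *n;
}
```

Compiled with `R CMD SHLIB`, loaded with `dyn.load()`, this would be called from R as `.C("mc_mean", as.double(x), as.integer(length(x)), out = double(1))$out`.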

Lastly, the R community appears to be growing much faster and with much more momentum than the Matlab community ever had. Further, as it’s free, you also don’t have to deal with the Godforsaken flexlm license manager.

Note: Almost all of my development is in MCMC algorithms right now. I do about 90% in production in C++ with the visualization in R using ggplot2.
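For readers unfamiliar with the kind of routine being described, here is a hedged sketch of a random-walk Metropolis sampler for a standard normal target — a toy illustration, not the answer author’s production code:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Sketch of a random-walk Metropolis sampler targeting N(0, 1).
// Illustrative only; real MCMC code would track acceptance rates,
// discard burn-in, and tune the step size.
std::vector<double> metropolis(int n_samples, double step, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> proposal(0.0, step);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    // Log-density of the target, up to an additive constant.
    auto log_target = [](double v) { return -0.5 * v * v; };

    std::vector<double> chain;
    chain.reserve(n_samples);
    double x = 0.0;  // current state

    for (int i = 0; i < n_samples; ++i) {
        double cand = x + proposal(rng);  // symmetric random-walk proposal
        // Accept with probability min(1, target(cand) / target(x)).
        if (std::log(unif(rng)) < log_target(cand) - log_target(x))
            x = cand;
        chain.push_back(x);
    }
    return chain;
}
```

The chain’s draws should have mean near 0 and variance near 1 once it has mixed; visualizing such output is where ggplot2 comes in.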

**Update for Parallel Comments:**

A fair amount of my development time right now is spent on parallelizing MCMC routines (it’s my PhD thesis). I have used Matlab’s parallel toolbox and Star-P’s solution (which I guess is now owned by Microsoft?? — jeez, another one gobbled up…). I found the parallel toolbox to be a configuration **nightmare** — when I used it, it required root access to every single client node. I think they’ve fixed that little “bug” now, but it’s still a mess. I found Star-P’s solution to be elegant, but often difficult to profile. I have not used Jacket, but I’ve heard good things. I also have not used the more recent versions of the parallel toolbox, which also support GPU computation.

I have virtually no experience with the R parallel packages.

It’s been my experience that parallelizing code must occur at the C++ level, where you have finer-grained control over task decomposition and memory/resource allocation. I find that if you attempt to parallelize programs at a high level, you often receive only a minimal speedup *unless your code is trivially decomposable* (also called embarrassingly parallel). That said, you can get reasonable speedups with even a **single line** at the C++ level using OpenMP:

```
#pragma omp parallel for
```

More complicated schemes have a learning curve, but I really like where GPGPU computing is going. As of JSM this year, the few people I talked to about GPU development in R described it as only “toes in the deep end,” so to speak. But as stated, I have minimal experience — something that should change in the near future.

**Attribution**
*Source: Link, Question Author: Roland Kofler, Answer Author: M. Tibbits*