What are the pros and cons of learning about a distribution’s properties algorithmically (via computer simulations) versus mathematically?

It seems like computer simulations can be an alternative learning method, especially for those new students who do not feel strong in calculus.

Also it seems that coding simulations can offer an earlier and more intuitive grasp of the concept of a distribution.

**Answer**

This is an important question that I have given some thought over the years in my own teaching, not only regarding distributions but also many other probabilistic and mathematical concepts. I don’t know of any research that directly targets this question, so the following is based on experience, reflection, and discussions with colleagues.

First, it is important to realize that what motivates students to understand a fundamentally mathematical concept, such as a distribution and its mathematical properties, depends on many things and varies from student to student. Among math students in general, I find that mathematically precise statements are appreciated and that too much beating around the bush can be confusing and frustrating (hey, get to the point). That is **not** to say that you shouldn’t use, for example, computer simulations. On the contrary, they can be very illustrative of the mathematical concepts, and I know of many examples where computational illustrations of key mathematical concepts would aid understanding but where the teaching is still old-fashioned and math-oriented. For math students, though, it is important that the precise math gets through.

However, your question suggests that you are not so much interested in math students. If the students have some kind of computational emphasis, computer simulations and algorithms are really good for quickly getting an intuition about what a distribution is and what kind of properties it can have. The students need good tools for programming and visualizing, and I use R. This implies that you need to teach some R (or another preferred language), but if this is part of the course anyway, that is not really a big deal. If the students are not expected to work rigorously with the math afterwards, I feel comfortable if they get most of their understanding from algorithms and simulations. I teach bioinformatics students like that.
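To make the idea concrete: the kind of simulation meant here can be just a few lines of code. The answer above uses R; the sketch below shows the same idea in Python with only the standard library, and the choice of an exponential distribution with rate 2 is purely illustrative.

```python
import random
import statistics

# Draw many samples from an exponential distribution with rate 2
# and compare the sample summaries with the known theoretical
# values (mean 1/rate = 0.5, variance 1/rate**2 = 0.25).
random.seed(1)
rate = 2.0
samples = [random.expovariate(rate) for _ in range(100_000)]

print(statistics.mean(samples))      # should be close to 0.5
print(statistics.variance(samples))  # should be close to 0.25
```

A student can rerun this with different rates or sample sizes and watch the sample mean and variance settle toward the theoretical values, which conveys both what the parameters mean and how sampling variability behaves, before any calculus is involved.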

Then, for the students who are neither computationally oriented nor math students, it may be better to have a range of real and relevant data sets that illustrate how different kinds of distributions occur in their field. If you teach survival distributions to medical doctors, say, the best way to get their attention is to have a range of real survival data. To me, it is an open question whether a subsequent mathematical treatment or a simulation-based treatment is best. If the students haven’t done any programming before, the practical problems of doing so can easily overshadow the expected gain in understanding. The students may end up learning how to write if-then-else statements but fail to relate this to the real-life distributions.

As a general remark, I find that one of the really important things to investigate with simulations is how distributions transform, in particular in relation to test statistics. It is quite a challenge to understand that this single number you computed from your entire data set, the t-test statistic, say, has anything to do with a distribution, even if you understand the math quite well. As a curious side effect of having to deal with multiple testing for microarray data, it has actually become much easier to show the students how the distribution of a test statistic pops up in real-life situations.
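The point about a test statistic having a distribution is exactly the kind of thing a simulation can demonstrate. A possible sketch (in Python rather than the R mentioned above; the sample size, number of replications, and population parameters are arbitrary choices): repeatedly draw a data set from a normal population, compute the one-sample t statistic each time, and inspect the collection of statistics.

```python
import math
import random
import statistics

# Simulate many data sets under the null hypothesis and compute the
# one-sample t statistic for each. The single number computed from
# any one data set is then seen to be a draw from a distribution.
random.seed(42)
n = 10        # size of each simulated data set
reps = 20_000 # number of simulated data sets
mu = 5.0      # true population mean (the null value)

t_stats = []
for _ in range(reps):
    x = [random.gauss(mu, 2.0) for _ in range(n)]
    t = (statistics.mean(x) - mu) / (statistics.stdev(x) / math.sqrt(n))
    t_stats.append(t)

# Under the null, t follows a t distribution with n - 1 = 9 degrees
# of freedom: mean 0 and variance df / (df - 2) = 9/7.
print(statistics.mean(t_stats))      # should be close to 0
print(statistics.variance(t_stats))  # should be close to 9/7
```

Plotting a histogram of `t_stats` against the t density with 9 degrees of freedom makes the connection visible; students can also swap in a skewed population to see how the null distribution is (or isn’t) affected.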

**Attribution**
*Source: Link, Question Author: b_dev, Answer Author: NRH*