# Can we really sample from a Continuous distribution (Scipy function) and what does it mean?

I’ve seen this answer ("How is it logically possible to sample a single value from a continuous distribution?"), but it’s still not clear to me.

In Scipy, there’s a function `scipy.stats.norm.rvs()` which samples from a normal distribution.
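For reference, a minimal call looks like this (the `loc`/`scale`/`size` arguments are the standard scipy parameters for mean, standard deviation, and sample count):

```python
from scipy.stats import norm

# Draw five samples from a standard normal distribution.
# loc is the mean, scale the standard deviation; random_state fixes the seed.
samples = norm.rvs(loc=0.0, scale=1.0, size=5, random_state=42)
print(samples)
```

Each entry of `samples` is an ordinary floating-point number, which is already a hint that what comes out is drawn from a (very fine) discrete set.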

I was trying to understand how it works, imagining that “under the hood” we’re really sampling from a discrete Random Variable at some arbitrary level of granularity.

However, our professor explained that we can always sample values from a continuous distribution, and that “The density represents the range of different values with the associated likelihood of them occurring”.

I’m having trouble reconciling that statement with the fact that if \$X\$ is a continuous RV, then \$P(X=x) = 0\$.

If \$P(X=x) = 0\$, how can the density value of the PDF give a likelihood of a value, or range of values, occurring? What am I missing?

In practice the functions that sample from continuous distributions at best sample only to some level of accuracy. For example, if we’re sampling from a uniform on the unit interval, typically what happens is there’s an algorithm that samples uniformly over some (very large) range of integers (say \$0,1,…,m-1\$) and these may be converted to numbers in \$[0,1)\$ by dividing by \$m\$. So you can see \$n/m\$ or \$(n-1)/m\$ or \$(n+1)/m\$ but not values in between.
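The integer-based scheme can be sketched as follows (a toy illustration, not the actual implementation of any particular library; the choice \$m = 2^{32}\$ is an assumption for the example):

```python
import random

m = 2**32  # granularity: every sample is k/m for some integer k

def toy_uniform():
    """Sample an integer uniformly from {0, 1, ..., m-1}, then map to [0, 1)."""
    k = random.randrange(m)
    return k / m

u = toy_uniform()
# The result is always an exact multiple of 1/m; values strictly between
# consecutive multiples of 1/m can never occur.
assert (u * m).is_integer()
```

So the "continuous" sampler really returns one of \$m\$ equally spaced values, exactly as described above.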

If you think of these discrete values as representing values within a range (\$n/m\$ in some sense “stands for” values in \$[n/m, (n+1)/m)\$), then there’s a sense in which the sampled values can be regarded as standing for intervals of truly continuous values. Such considerations become somewhat more complex once you start transforming the values, but in many situations the endpoints of the intervals can be tracked through the transformations and the process maintained as needed.
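One way to picture “tracking endpoints through a transformation” is the following toy sketch. The transform \$u \mapsto -\ln(1-u)\$ (the inverse CDF of an Exponential(1) distribution, chosen here purely as an example of a monotone map) sends the interval \$[n/m, (n+1)/m)\$ to the interval between the images of its endpoints:

```python
import math
import random

m = 2**32

def sample_interval():
    """Return the interval [n/m, (n+1)/m) that a discrete draw 'stands for'."""
    n = random.randrange(m)
    return (n / m, (n + 1) / m)

def exp_inv_cdf(u):
    """Inverse CDF of Exponential(1): monotone increasing on [0, 1)."""
    return math.inf if u >= 1.0 else -math.log1p(-u)

def transform_interval(lo, hi):
    """Because the map is monotone increasing, the image of [lo, hi)
    is simply [f(lo), f(hi))."""
    return (exp_inv_cdf(lo), exp_inv_cdf(hi))

lo, hi = sample_interval()
t_lo, t_hi = transform_interval(lo, hi)
assert t_lo < t_hi  # the transformed endpoints still bracket an interval
```

For a monotone decreasing transform the endpoints would simply swap; non-monotone transforms need the interval split at the turning points, which is where the bookkeeping gets more complex.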

Note that your professor’s comment doesn’t seem to be talking about what is usually done but rather what we could do. In that case whuber’s comment at the linked post is relevant:

> One (inefficient) way is to generate each successive binary digit independently until the number is known sufficiently precisely for the calculations.

One way to look at this is that, as before, we can regard any current representation as a proxy for an interval of values, but now we can generate as many additional digits as required when we need them. On this view a generated value is always only partly generated; we extend it to more precision whenever that precision is needed.
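This lazy digit-by-digit idea can be sketched directly (a toy illustration of the scheme whuber describes, not an efficient or library-grade implementation):

```python
import random

class LazyUniform:
    """A uniform [0, 1) variate generated one binary digit at a time.
    After k digits, the value is only known to lie in [lo, lo + 2**-k)."""

    def __init__(self):
        self.bits = []

    def refine(self):
        """Generate one more binary digit, halving the interval of uncertainty."""
        self.bits.append(random.getrandbits(1))

    def interval(self):
        """The interval the partially generated value currently 'stands for'."""
        lo = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(self.bits))
        return lo, lo + 2.0 ** -len(self.bits)

u = LazyUniform()
for _ in range(10):
    u.refine()
lo, hi = u.interval()
assert hi - lo == 2.0 ** -10  # after 10 bits, pinned down to within 1/1024
```

Before any digit is generated the interval is all of \$[0, 1)\$; each call to `refine` halves it, and we stop refining once the interval is narrow enough for whatever calculation is at hand.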

In reality all our representations of continuous quantities (not just in random generation, but in any measurement) are limited in accuracy; normally this doesn’t do any harm to our notions of continuous variables as a suitable model for what we are doing.