Difference in Probability Measure vs. Probability Distribution

I am trying to better understand the difference between “Probability Measure” and “Probability Distribution”.

I came across the following link: https://math.stackexchange.com/questions/1073744/distinguishing-probability-measure-function-and-distribution

“The difference between the terms “probability measure” and “probability distribution” is in some ways more of a difference in connotation of the terms rather than a difference between the things that the terms refer to. It’s more about the way the terms are used.”

The answer there suggests that these two concepts might be the same thing. Is that correct?

If so, could we consider the “Normal Probability Distribution Function” to be a “Probability Measure”?

Thanks!

Answer

First off, I am not familiar with the term “Probability Distribution Function”. If by “Probability Distribution Function” you mean a “PDF”, then note that PDF is the abbreviation for “Probability Density Function”. Below, I will assume you meant the probability density function.

Second, the word “distribution” is used very differently by different people, but I will use the definition that is standard in the scientific community.

In a nutshell: the distribution of a random variable X is a measure on ℝ, while the PDF of X is a function on ℝ, and the PDF does not even always exist. So they are very different.

And now we get to the mathematical details:
First, let’s define the term random variable, because that is what all those terms usually refer to (unless you go a step further and want to talk about random vectors or random elements, but I will restrict myself to random variables here). That is, one speaks of the distribution of a random variable.

Given a probability space (Ω, F, p) (Ω is just a set, F is a sigma-algebra on Ω, and p is a probability measure on (Ω, F)), a random variable X is a measurable map X : Ω → ℝ.

Then we can define: the distribution p_X of the random variable X is the measure p_X = p ∘ X⁻¹ on ℝ.

That is, you push the measure p from Ω forward to ℝ via the measurable function X.
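As a small illustration (my own toy example, not part of the original answer), take a single fair coin flip and read off the pushforward explicitly:

```latex
\Omega = \{H, T\}, \quad \mathcal{F} = 2^{\Omega}, \quad
p(\{H\}) = p(\{T\}) = \tfrac{1}{2}, \quad X(H) = 1, \; X(T) = 0,
\qquad\Longrightarrow\qquad
p_X(\{1\}) = p\bigl(X^{-1}(\{1\})\bigr) = p(\{H\}) = \tfrac{1}{2}.
```

Here p_X is a measure on ℝ that puts mass 1/2 on each of the points 0 and 1; it no longer says anything about the coin itself, only about the values X takes.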

Next, we define the probability density function (PDF) of a random variable X: the PDF f_X of a random variable X, if it exists, is the Radon–Nikodym derivative of its distribution with respect to the Lebesgue measure λ, i.e. f_X = dp_X/dλ.
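To tie this back to the question: on this reading, the standard normal distribution is a probability measure p_X on ℝ, and the familiar bell curve is its density f_X, i.e. its Radon–Nikodym derivative with respect to λ. A short worked version (again my own, not in the original answer):

```latex
f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},
\qquad
p_X(A) = \int_A f_X \, d\lambda
\quad \text{for every Borel set } A \subseteq \mathbb{R}.
```

So the normal distribution, viewed as the measure p_X, is indeed a probability measure, while the normal density f_X is a function on ℝ, not a measure.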

So the distribution p_X and the PDF f_X of a random variable are very different entities (to a stickler, at least). But very often, if the PDF f_X exists, it contains all of the relevant information about the distribution p_X, since p_X(A) = ∫_A f_X dλ for every Borel set A, and can thus be used as a handy substitute for the unwieldy p_X.
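If it helps to see the two viewpoints side by side numerically, here is a minimal Python sketch (my own, using scipy as a stand-in; none of this code is from the original answer). It computes the measure that the standard normal distribution assigns to an interval once via the distribution function and once by integrating the density, and the two agree:

```python
from scipy import stats
from scipy.integrate import quad

# The standard normal distribution p_X, viewed as a probability measure on R:
# the measure of the interval (a, b] is p_X((a, b]) = F(b) - F(a), with F the CDF.
a, b = -1.0, 2.0
measure_of_interval = stats.norm.cdf(b) - stats.norm.cdf(a)

# The same quantity via the PDF f_X (the Radon-Nikodym derivative w.r.t. the
# Lebesgue measure): p_X((a, b]) = integral of f_X over (a, b].
density_integral, _ = quad(stats.norm.pdf, a, b)

print(f"via the measure/CDF : {measure_of_interval:.6f}")
print(f"via the density/PDF : {density_integral:.6f}")
# Both print roughly 0.818595: whenever the PDF exists, integrating it
# recovers the measure, which is why the PDF can stand in for p_X.
```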

Attribution
Source: Link, Question Author: stats_noob, Answer Author: frank
