For which distributions does uncorrelatedness imply independence?

A time-honored reminder in statistics is “uncorrelatedness does not imply independence”. Usually this reminder is supplemented with the psychologically soothing (and scientifically correct) statement “if, nevertheless, the two variables are jointly normally distributed, then uncorrelatedness does imply independence”.

I can increase the count of happy exceptions from one to two: when two variables are Bernoulli-distributed, then again, uncorrelatedness implies independence. If $X$ and $Y$ are two Bernoulli rv's, $X \sim B(q_x)$, $Y \sim B(q_y)$, for which we have $P(X=1) = E(X) = q_x$, and analogously for $Y$, their covariance is

$$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = \sum_{S_{XY}} p(x,y)\,xy - q_x q_y$$

$$= P(X=1, Y=1) - q_x q_y = P(X=1 \mid Y=1)\,P(Y=1) - q_x q_y$$

$$= \bigl(P(X=1 \mid Y=1) - q_x\bigr)\, q_y$$

For uncorrelatedness we require the covariance to be zero, so

$$\operatorname{Cov}(X,Y) = 0 \implies P(X=1 \mid Y=1) = P(X=1)$$

$$\implies P(X=1, Y=1) = P(X=1)\,P(Y=1)$$

which is exactly the condition required for the variables to be independent (for two Bernoulli variables, once $P(X=1, Y=1)$ factors into the product of the marginals, the other three cells of the $2 \times 2$ joint table factor automatically, since the marginals are fixed).
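Here is a small numeric sketch of that check in Python (my own illustration, not part of the question): it builds the $2 \times 2$ joint table implied by the marginals and the zero-covariance choice $P(X=1, Y=1) = q_x q_y$, and verifies that every cell factors.

```python
# Sketch: for Bernoulli marginals, the joint pmf of (X, Y) is pinned down by
# q_x, q_y and p11 = P(X=1, Y=1). Setting the covariance to zero forces
# p11 = q_x * q_y, and then every cell of the 2x2 table factors.
import itertools

def joint_table(qx, qy, p11):
    """Return the 2x2 joint pmf implied by the marginals and P(X=1, Y=1)."""
    return {
        (1, 1): p11,
        (1, 0): qx - p11,
        (0, 1): qy - p11,
        (0, 0): 1 - qx - qy + p11,
    }

qx, qy = 0.3, 0.7                 # illustrative marginal parameters
p11 = qx * qy                     # the zero-covariance choice
table = joint_table(qx, qy, p11)

cov = p11 - qx * qy
print(f"Cov(X, Y) = {cov:.4f}")   # 0 by construction

# Independence: every joint probability equals the product of the marginals.
px = {1: qx, 0: 1 - qx}           # marginal pmf of X
py = {1: qy, 0: 1 - qy}           # marginal pmf of Y
for x, y in itertools.product((0, 1), repeat=2):
    assert abs(table[(x, y)] - px[x] * py[y]) < 1e-12
print("All four cells factor: X and Y are independent.")
```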

So my question is: Do you know of any other distributions (continuous or discrete) for which uncorrelatedness implies independence?

Meaning: Assume two random variables $X, Y$ whose marginal distributions belong to the same family (perhaps with different values for the distribution parameters involved), but, let's say, with the same support, e.g. two exponentials, two triangulars, etc. Are all solutions of the equation $\operatorname{Cov}(X,Y) = 0$ such that they also imply independence, by virtue of the form/properties of the distribution functions involved? This is the case with normal marginals (given also that they have a bivariate normal distribution), as well as with Bernoulli marginals. Are there any other such cases?

The motivation here is that it is usually easier to check whether covariance is zero than to check whether independence holds. So if, given the theoretical distribution, checking covariance also amounts to checking independence (as in the Bernoulli or normal case), then this would be a useful thing to know.
If we are given two samples from two r.v.'s that have normal marginals, we know that if we can statistically conclude from the samples that their covariance is zero, we can also say that they are independent (but only because they have normal marginals). It would be useful to know whether we could conclude likewise in cases where the two r.v.'s had marginals belonging to some other distribution.

Answer

“Nevertheless if the two variables are normally distributed, then uncorrelatedness does imply independence” is a very common fallacy.

That only applies if they are jointly normally distributed.

The counterexample I have seen most often is normal $X \sim N(0,1)$ and an independent Rademacher $Y$ (so it is $1$ or $-1$ with probability $0.5$ each); then $Z = XY$ is also normal (clear from considering its distribution function), $\operatorname{Cov}(X,Z) = 0$ (the only slightly tricky part is to show $E(XZ) = 0$, e.g. by iterating expectation on $Y$ and noting that $XZ$ is $X^2$ or $-X^2$ with probability $0.5$ each), and it is clear the variables are dependent (e.g. if I know $X > 2$ then either $Z > 2$ or $Z < -2$, so information about $X$ gives me information about $Z$).
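A quick Monte Carlo sketch of this counterexample (my own addition, not part of the original answer): it checks that $Z$ looks standard normal, that the sample covariance of $X$ and $Z$ is essentially zero, and that conditioning on $X > 2$ pins down $|Z| > 2$.

```python
# X ~ N(0,1), Y an independent Rademacher sign, Z = X * Y.
# Z is again standard normal, Cov(X, Z) is ~0, yet |Z| = |X|, so X and Z
# are clearly dependent.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.choice([-1.0, 1.0], size=n)   # Rademacher, independent of x
z = x * y

print("Cov(X, Z)       ~", np.cov(x, z)[0, 1])      # close to 0
print("mean, var of Z  ~", z.mean(), z.var())       # close to 0 and 1

# Dependence: knowing X > 2 tells us |Z| > 2 as well.
mask = x > 2
print("P(|Z| > 2)          ~", np.mean(np.abs(z) > 2))
print("P(|Z| > 2 | X > 2)  ~", np.mean(np.abs(z[mask]) > 2))  # equals 1
```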

It's also worth bearing in mind that marginal distributions do not uniquely determine the joint distribution. Take any two real RVs $X$ and $Y$ with marginal CDFs $F_X(x)$ and $G_Y(y)$. Then for any $|\alpha| \le 1$ the function:

$$H_{X,Y}(x,y) = F_X(x)\, G_Y(y)\,\bigl[1 + \alpha\,(1 - F_X(x))(1 - G_Y(y))\bigr]$$

will be a bivariate CDF. (To obtain the marginal $F_X(x)$ from $H_{X,Y}(x,y)$, take the limit as $y$ goes to infinity, where $G_Y(y) = 1$; vice versa for $Y$.) Clearly, by selecting different values of $\alpha$, you can obtain different joint distributions!
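As a small illustration (my own sketch, with standard exponential marginals chosen only for concreteness), here is that family evaluated in Python for a few values of $\alpha$: the joint CDF changes with $\alpha$ while the marginals recovered in the limit do not.

```python
# Evaluate H_{X,Y}(x, y) = F(x) G(y) [1 + alpha (1 - F(x)) (1 - G(y))]
# for two Exponential(1) marginals and several values of alpha.
import numpy as np

def F(x):                      # marginal CDF of X: Exponential(1)
    return 1.0 - np.exp(-x)

def G(y):                      # marginal CDF of Y: Exponential(1)
    return 1.0 - np.exp(-y)

def H(x, y, alpha):
    """Joint CDF of the family above."""
    return F(x) * G(y) * (1.0 + alpha * (1.0 - F(x)) * (1.0 - G(y)))

x, y = 1.0, 2.0
for alpha in (-0.9, 0.0, 0.9):
    print(f"alpha = {alpha:+.1f}:  H(1, 2) = {H(x, y, alpha):.4f}")

# Sending y -> infinity recovers the marginal of X for every alpha:
print("H(1, large y) =", H(1.0, 1e6, 0.9), " vs  F(1) =", F(1.0))
```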

Attribution
Source : Link , Question Author : Alecos Papadopoulos , Answer Author : Silverfish
