A time-honored reminder in statistics is "uncorrelatedness does not imply independence". Usually this reminder is supplemented with the psychologically soothing (and scientifically correct) statement "when, nevertheless, the two variables are *jointly* normally distributed, then uncorrelatedness does imply independence".

I can increase the count of happy exceptions from one to two: when two variables are Bernoulli-distributed, then again, uncorrelatedness implies independence. If $X$ and $Y$ are two Bernoulli rv's, $X \sim B(q_x)$, $Y \sim B(q_y)$, for which we have $P(X=1) = E(X) = q_x$, and analogously for $Y$, their covariance is

$$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = \sum_{S_{XY}} p(x,y)\,xy - q_x q_y$$

$$= P(X=1, Y=1) - q_x q_y = P(X=1 \mid Y=1)P(Y=1) - q_x q_y$$

$$= \big(P(X=1 \mid Y=1) - q_x\big)\,q_y$$

For uncorrelatedness we require the covariance to be zero, so

$$\operatorname{Cov}(X,Y) = 0 \;\Rightarrow\; P(X=1 \mid Y=1) = P(X=1)$$

$$\Rightarrow\; P(X=1, Y=1) = P(X=1)P(Y=1)$$

which, for Bernoulli variables, is not merely necessary but sufficient for independence: once the $(1,1)$ cell of the joint pmf factorises, the marginals determine the remaining three cells, and they factorise as well.
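As a quick numerical sketch of this (the marginal probabilities `qx`, `qy` below are arbitrary choices), setting the covariance to zero pins down the entire joint pmf, cell by cell, to the independent one:

```python
import numpy as np

# Sketch: for Bernoulli marginals P(X=1)=qx, P(Y=1)=qy (arbitrary values),
# Cov(X,Y)=0 forces P(X=1,Y=1)=qx*qy; every remaining cell of the joint
# pmf is then determined by the marginals and factorises as well.
qx, qy = 0.3, 0.7
p11 = qx * qy                # forced by Cov(X,Y) = 0
p10 = qx - p11               # P(X=1) = p11 + p10
p01 = qy - p11               # P(Y=1) = p11 + p01
p00 = 1.0 - p11 - p10 - p01
joint = {(0, 0): p00, (0, 1): p01, (1, 0): p10, (1, 1): p11}

marg_x = {0: 1 - qx, 1: qx}
marg_y = {0: 1 - qy, 1: qy}
for (x, y), p in joint.items():
    assert np.isclose(p, marg_x[x] * marg_y[y])  # every cell factorises
print("zero covariance implies independence for these Bernoulli marginals")
```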

So my question is:

Do you know of any other distributions (continuous or discrete) for which uncorrelatedness implies independence?

Meaning: assume two random variables $X, Y$ which have *marginal* distributions that belong to the same distribution family (perhaps with different values for the distribution parameters involved), but, let's say, with the same support, e.g. two exponentials, two triangulars, etc. Are all solutions to the equation $\operatorname{Cov}(X,Y) = 0$ such that they also imply independence, by virtue of the form/properties of the distribution functions involved? This is the case with normal marginals (given also that they have a bivariate normal distribution), as well as with Bernoulli marginals; are there any other cases?

The motivation here is that it is usually easier to check whether covariance is zero than to check whether independence holds. So if, given the theoretical distribution, by checking covariance you are also checking independence (as is the case with the Bernoulli or normal case), then this would be a useful thing to know.

If we are given two samples from two rv's that have normal marginals, we know that if we can statistically conclude from the samples that their covariance is zero, we can also say that they are independent (but only because they have normal marginals). It would be useful to know whether we could conclude likewise in cases where the two rv's had marginals that belonged to some other distribution.

**Answer**

“Nevertheless if the two variables are normally distributed, then uncorrelatedness does imply independence” is a very common fallacy.

That only applies if they are *jointly* normally distributed.

The counterexample I have seen most often is normal $X \sim N(0,1)$ and an independent Rademacher $Y$ (so it is $1$ or $-1$ with probability $0.5$ each); then $Z = XY$ is also normal (clear from considering its distribution function), $\operatorname{Cov}(X,Z) = 0$ (the problem here is to show $E(XZ) = 0$, e.g. by iterating expectation on $Y$, and noting that $XZ$ is $X^2$ or $-X^2$ with probability $0.5$ each), and it is clear the variables are dependent (e.g. if I know $X > 2$ then either $Z > 2$ or $Z < -2$, so information about $X$ gives me information about $Z$).
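A quick Monte Carlo sketch of this counterexample (sample size and seed below are arbitrary choices): the sample covariance of $X$ and $Z$ is near zero, yet $Z^2 = X^2$ identically, so the two are plainly dependent.

```python
import numpy as np

# Counterexample sketch: X ~ N(0,1), Y an independent Rademacher sign,
# Z = X*Y. Then Z is again N(0,1) and Cov(X,Z) = 0, but X and Z are
# dependent: |Z| equals |X| exactly.
rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
Y = rng.choice([-1.0, 1.0], size=n)
Z = X * Y

print(np.cov(X, Z)[0, 1])             # near 0: uncorrelated
print(np.corrcoef(X**2, Z**2)[0, 1])  # exactly 1: Z**2 == X**2
```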

It's also worth bearing in mind that marginal distributions do not uniquely determine the joint distribution. Take any two real RVs $X$ and $Y$ with marginal CDFs $F_X(x)$ and $F_Y(y)$. Then for any $|\alpha| \le 1$ the function

$$H_{X,Y}(x,y) = F_X(x)F_Y(y)\big(1 + \alpha(1 - F_X(x))(1 - F_Y(y))\big)$$

will be a bivariate CDF (this is the Farlie–Gumbel–Morgenstern family). (To obtain the marginal $F_X(x)$ from $H_{X,Y}(x,y)$, take the limit as $y$ goes to infinity, where $F_Y(y) = 1$; vice versa for $Y$.) Clearly, by selecting different values of $\alpha$ you can obtain different joint distributions!
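A small numerical sketch of this construction (the two standard exponential marginals and the evaluation points are arbitrary choices for illustration): varying $\alpha$ changes the joint CDF, while the marginal recovered in the limit stays the same.

```python
import numpy as np

# Sketch of the construction above with two standard exponential marginals,
# F_X(t) = F_Y(t) = 1 - exp(-t); any pair of CDFs would do.
def F(t):
    return 1.0 - np.exp(-t)

def H(x, y, alpha):
    u, v = F(x), F(y)
    return u * v * (1.0 + alpha * (1.0 - u) * (1.0 - v))

x, y = 1.0, 2.0
for alpha in (-0.5, 0.0, 0.5):
    # Different alpha give different joint CDF values at (x, y) ...
    print(alpha, H(x, y, alpha))
    # ... but letting y grow large always recovers the marginal F(x).
    print(alpha, H(x, 50.0, alpha), F(x))
```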

**Attribution**
*Source: Link, Question Author: Alecos Papadopoulos, Answer Author: Silverfish*