Can we conclude from $\mathbb{E}[g(X)h(Y)] = \mathbb{E}[g(X)]\,\mathbb{E}[h(Y)]$ that $X, Y$ are independent?

Well, we cannot; see for example https://en.wikipedia.org/wiki/Subindependence for an interesting counterexample. But the real question is: is there some way to strengthen the condition so that independence follows? For example, is there some set of functions $g_1, \dots, g_n$ such that if $\mathbb{E}[g_i(X)g_j(Y)] = \mathbb{E}[g_i(X)]\,\mathbb{E}[g_j(Y)]$ for all $i, j$, then independence follows? And how large must such a set of functions be? Must it be infinite?
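To see concretely that the factorization can hold for a particular pair of functions without independence, a standard textbook example (not the subindependence construction above, but a simpler one for $g = h = \mathrm{id}$) is $X$ uniform on $\{-1, 0, 1\}$ with $Y = X^2$. A quick exact check:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}; Y = X^2. Take g = h = identity.
support = [-1, 0, 1]
p = Fraction(1, 3)

E_X = sum(p * x for x in support)            # E[g(X)] = 0
E_Y = sum(p * x**2 for x in support)         # E[h(Y)] = 2/3
E_XY = sum(p * x * x**2 for x in support)    # E[g(X)h(Y)] = E[X^3] = 0

assert E_XY == E_X * E_Y  # factorization holds: 0 == 0 * (2/3)

# Yet X and Y are clearly dependent:
P_X0 = p       # P(X = 0) = 1/3
P_Y0 = p       # P(Y = 0) = P(X = 0) = 1/3
P_X0_Y0 = p    # P(X = 0, Y = 0) = 1/3
assert P_X0_Y0 != P_X0 * P_Y0  # 1/3 != 1/9
```

So a single pair $(g, h)$ never suffices; the question is how rich the family of test functions has to be.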

In addition, is there a good reference that treats this question?

Answer

Let $(\Omega, \mathcal{F}, P)$ be a probability space. By definition, two random variables $X, Y \colon \Omega \to \mathbb{R}$ are independent if their $\sigma$-algebras $\mathcal{S}_X := \sigma(X)$ and $\mathcal{S}_Y := \sigma(Y)$ are independent, i.e. for all $A \in \mathcal{S}_X$ and $B \in \mathcal{S}_Y$ we have $P(A \cap B) = P(A)P(B)$.

Let $g_a(x) = I(x \le a)$ and take $G = \{g_a : a \in \mathbb{Q}\}$ (thanks to @grand_chat for pointing out that $\mathbb{Q}$ suffices). Then we have
$$\mathbb{E}\big(g_a(X)g_b(Y)\big) = \mathbb{E}\big(I(X \le a)\, I(Y \le b)\big) = \mathbb{E}\big(I(X \le a,\, Y \le b)\big) = P(X \le a,\, Y \le b)$$
and
$$\mathbb{E}\big(g_a(X)\big)\,\mathbb{E}\big(g_b(Y)\big) = P(X \le a)\, P(Y \le b).$$

If we assume that for all $a, b \in \mathbb{Q}$
$$P(X \le a,\, Y \le b) = P(X \le a)\, P(Y \le b),$$
then we can appeal to the $\pi$-$\lambda$ theorem: the sets $\{X \le a\}$, $a \in \mathbb{Q}$, form a $\pi$-system generating $\mathcal{S}_X$ (and likewise for $Y$), so the factorization extends to
$$P(A \cap B) = P(A)P(B) \quad \forall A \in \mathcal{S}_X,\ B \in \mathcal{S}_Y,$$
i.e. $X \perp Y$.
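A small simulation (a sketch, not part of the proof) illustrates why this countable indicator family detects dependence: for an independent pair the empirical $P(X \le a, Y \le b)$ factorizes up to sampling error over a grid of thresholds, while for a dependent pair such as $Y = X$ it visibly does not.

```python
import random

random.seed(0)
n = 100_000

# An independent Gaussian pair vs. a maximally dependent pair (Y = X).
indep = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
dep = [(x, x) for (x, _) in indep]

def factorization_gap(pairs, a, b):
    """|P(X<=a, Y<=b) - P(X<=a) P(Y<=b)| estimated from samples."""
    m = len(pairs)
    p_joint = sum(1 for x, y in pairs if x <= a and y <= b) / m
    p_x = sum(1 for x, _ in pairs if x <= a) / m
    p_y = sum(1 for _, y in pairs if y <= b) / m
    return abs(p_joint - p_x * p_y)

# Check over a small grid of thresholds (rationals suffice, per the argument).
grid = [-1.0, 0.0, 1.0]
gap_indep = max(factorization_gap(indep, a, b) for a in grid for b in grid)
gap_dep = max(factorization_gap(dep, a, b) for a in grid for b in grid)

assert gap_indep < 0.01  # factorizes up to Monte Carlo error
assert gap_dep > 0.1     # e.g. P(X<=0, X<=0) = 1/2 but P(X<=0)^2 = 1/4
```

Of course, the proof needs the exact equality for all rational thresholds; the simulation only shows the test statistics behaving as the argument predicts.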

So unless I’ve made a mistake, we’ve at least got a countable collection of such functions, and this applies to any pair of random variables defined on a common probability space.

Attribution
Source: Link, Question Author: kjetil b halvorsen, Answer Author: jld