# An Unexpected Expectation!

Due to the linearity property of expectation, we can write $E(X+Y)=E(X)+E(Y)$. But in Chapter 6.1 of the book *Counterexamples in Probability*, the author, Jordan Stoyanov, argues that this does not carry over to three random variables, i.e., to $E(X+Y+Z)$.
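For context, here is a quick numerical sanity check (a minimal Python sketch with arbitrarily chosen distributions) that plain linearity of expectation holds even when $X$ and $Y$ are strongly dependent:

```python
import numpy as np

rng = np.random.default_rng(0)

# X and Y are deliberately dependent: Y is a function of X plus noise.
x = rng.normal(loc=1.0, scale=2.0, size=1_000_000)
y = 3.0 * x + rng.exponential(scale=0.5, size=1_000_000)

# Linearity of expectation: E(X+Y) = E(X) + E(Y), regardless of dependence.
lhs = np.mean(x + y)
rhs = np.mean(x) + np.mean(y)
print(abs(lhs - rhs))  # agrees up to floating-point rounding
```

So whatever fails in the three-variable case, it cannot be linearity itself when all expectations are finite; the subtlety must lie elsewhere.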
The paper cited as the reference in that chapter is "An Unexpected Expectation" (1977) by Gordon Simons. Unfortunately, Simons did not provide any formal proof of this phenomenon, stating only: "We have no explanations why the theorem given above should hold for two variables but fail for three."
I haven't quite understood why this is the case, nor have I found any proof of it. Can anyone please help me understand why the linearity property does not hold in the case of three random variables? And please correct me if I am getting this totally wrong.

What I interpret Simons to be claiming is that if $E[(X+Y)^+]$ or $E[(X+Y)^-]$ is finite, then to calculate $E[X+Y]$ you do not need to know or assume anything about how $X$ and $Y$ are jointly distributed; you only need their marginal distributions. But, he says, this does not extend to three or more variables. In other words, the finiteness of $E[(X+Y+Z)^+]$ or $E[(X+Y+Z)^-]$ does not imply that one can calculate $E[X+Y+Z]$ using only knowledge of the marginal distributions of $X$, $Y$, and $Z$. He gives a counterexample to demonstrate this point.
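To see the two-variable statement concretely, here is a small Python simulation sketch (distributions chosen arbitrarily for illustration): two very different joint distributions with identical marginals yield the same $E[X+Y]$. Note that in this everything-finite setting the claim reduces to ordinary linearity; the interesting part of Simons' theorem concerns the case where $E(X)$ and $E(Y)$ themselves fail to exist, which a finite-sample simulation cannot capture.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Coupling 1: X ~ Exp(1) and Y ~ Uniform(0,1), drawn independently.
x1 = rng.exponential(scale=1.0, size=n)
y1 = rng.uniform(size=n)

# Coupling 2: same marginals, but X and Y are comonotone --
# both are functions of the same uniform draw, a very different joint law.
u = rng.uniform(size=n)
x2 = -np.log1p(-u)   # inverse CDF of Exp(1)
y2 = u               # inverse CDF of Uniform(0,1)

mean1 = np.mean(x1 + y1)
mean2 = np.mean(x2 + y2)
# Both estimates approximate E[X] + E[Y] = 1 + 0.5 = 1.5.
print(mean1, mean2)
```

The point of the three-variable counterexample is precisely that this marginals-only determination of the expectation of the sum can break down once one-sided finiteness is all you have.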