In explaining why uncorrelated does not imply independent, there are several examples that involve a bunch of random variables, but they all seem so abstract: 1 2 3 4.

This answer seems to make sense. My interpretation: a random variable and its square may be uncorrelated (since correlation only captures *linear* association), but they are clearly dependent.
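A minimal numerical sketch of that interpretation (my own toy example, not from the linked answer): take X uniform on {−1, 0, 1}. Then Cov(X, X²) = E[X³] − E[X]·E[X²] = 0, so X and X² are uncorrelated, yet X² is a deterministic function of X, so they are clearly dependent.

```python
# Toy distribution (my choice): X uniform on {-1, 0, 1}
support = [-1, 0, 1]
p = 1 / 3  # probability of each point

E_X = sum(p * x for x in support)        # E[X]   = 0
E_X2 = sum(p * x ** 2 for x in support)  # E[X^2] = 2/3
E_X3 = sum(p * x ** 3 for x in support)  # E[X^3] = 0

cov = E_X3 - E_X * E_X2                  # Cov(X, X^2)
print(cov)                               # 0.0 -> uncorrelated

# ...but not independent: compare the joint probability with the
# product of marginals at the point (X = 0, X^2 = 0).
p_joint = p        # P(X = 0 and X^2 = 0) = P(X = 0) = 1/3
p_prod = p * p     # P(X = 0) * P(X^2 = 0) = 1/3 * 1/3 = 1/9
print(p_joint, p_prod)  # unequal -> dependent
```

The same cancellation happens for any distribution symmetric about zero, which is why "standardised" matters in the height example below.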

I guess an example would be that (standardised?) height and height² might be uncorrelated but dependent, but I don’t see why anyone would want to compare height with its square.

For the purpose of giving intuition to a beginner in elementary probability theory or similar purposes, what are some real-life examples of uncorrelated but dependent random variables?

**Answer**

In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here. Stock returns r_t := (P_t − P_{t−1})/P_{t−1}, with P_t the price at time t, are themselves uncorrelated with their own past r_{t−1} if stock markets are efficient (otherwise, you could easily and profitably predict where prices are going). Their squares r_t² and r_{t−1}², however, are correlated: there is time dependence in the variances, which cluster in time, with periods of high variance in volatile times.
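This effect is easy to check numerically. Here is a hedged Python sketch (an ARCH(1) model rather than full GARCH, with illustrative parameters of my own choosing): the simulated returns show essentially zero lag-1 autocorrelation, while their squares are clearly autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
alpha0, alpha1 = 0.01, 0.55  # ARCH(1) parameters (illustrative choice)

r = np.zeros(n)
sigma2 = alpha0 / (1 - alpha1)  # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = alpha0 + alpha1 * r[t] ** 2  # variance driven by last return

def lag1_corr(x):
    """Sample correlation between x_t and x_{t-1}."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_corr(r))       # near 0: returns look uncorrelated
print(lag1_corr(r ** 2))  # clearly positive: squared returns are not
```

For ARCH(1) the theoretical lag-1 autocorrelation of the squared series equals alpha1, so the second printed value should be well away from zero even though the first is not.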

Here is an artificial example (yet again, I know, but “real” stock return series may well look similar):

Note the cluster of high volatility, in particular around t ≈ 400.

Generated using the following R code:

```
library(TSA)  # provides garch.sim()

# simulate 500 observations from a GARCH(1,1) model
garch01.sim <- garch.sim(alpha = c(.01, .55), beta = 0.4, n = 500)
plot(garch01.sim, type = 'l', ylab = expression(r[t]), xlab = 't')
```

**Attribution:** *Source: Link, Question Author: BCLC, Answer Author: luchonacho*