In particular, I am wondering why we have this concept Multiple R (which I can understand as the correlation between observed and predicted scores in multiple regression), and then a separate concept R-squared, which is just the square of R.

I’ve been informed that R-squared is the percentage of variation explained and R is not, but I don’t understand the distinction being made between correlation and explained variation.

**Answer**

A main issue here is that the measure of “variation” in regression analysis is related to the *squared* differences of observed variables from their predicted mean values. This is a useful choice of a measure of variation, both for theoretical analysis and in practical work, because squared differences from the mean are related to the variance of a random variable, and the variance of the sum of two independent random variables is simply the sum of their individual variances.
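To make the variance-additivity point concrete, here is a small sketch (not from the original answer; the sample sizes and distributions are illustrative) showing that for independent random variables the variance of the sum is approximately the sum of the individual variances:

```python
import random

random.seed(0)
n = 100_000
x = [random.gauss(0, 2) for _ in range(n)]   # Var(X) = 4
y = [random.gauss(0, 3) for _ in range(n)]   # Var(Y) = 9

def var(v):
    """Population variance: mean of squared differences from the mean."""
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

s = [a + b for a, b in zip(x, y)]
# For independent X and Y: Var(X + Y) = Var(X) + Var(Y) (here, about 13)
print(var(x), var(y), var(s))
```

No such simple additivity holds for unsquared measures of spread, which is one reason squared differences are the standard choice.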

R2 in multiple regression represents the fraction of “variation” in the observed variable that is accounted for by the regression model when *squared* differences from predicted means are used as the measure of variation. The Multiple R is simply the square root of R2.
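The two definitions can be checked numerically. The sketch below (hypothetical data, ordinary least squares fit by hand) computes R² as 1 minus the ratio of residual to total sum of squares, and confirms that the correlation between observed and fitted values squares to the same quantity:

```python
import numpy as np

# Hypothetical data: two predictors plus noise.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n)

# OLS fit via least squares, with an intercept column added by hand.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
r_squared = 1 - ss_res / ss_tot          # fraction of variation explained
multiple_r = np.corrcoef(y, y_hat)[0, 1] # corr(observed, fitted)

# multiple_r ** 2 equals r_squared (up to floating-point error)
print(r_squared, multiple_r ** 2)
```

This is exactly the identity in question: Multiple R is the correlation between observed and predicted values, and its square is the fraction of squared variation explained.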

I’m afraid that I’ve never understood the usefulness of reporting the Multiple R rather than R2. Unlike the correlation coefficient r in a univariate regression, which shows both the direction and strength of the relation between two variables, reporting the Multiple R doesn’t seem to add much beyond a chance for additional confusion.

**Attribution**
*Source: Link, Question Author: user1205901 – Слава Україні, Answer Author: EdM*