# How can I represent R squared in matrix form?

This question is a follow-up to a prior question.

Basically, I wanted to study under what conditions regressing the residuals on $x_1$ yields an $\small R^2$ of 20%.

As a first step to attack this problem, my question is, how do I express $\small R^2$ in matrix form?

Then I will try to express “the $\small R^2$ from regressing the residuals on $x_1$” in matrix form.

Also, how can I add regression weights into the expression?

We have
$$R^2 = 1 - \frac{e^\prime e}{\tilde{y}^\prime \tilde{y}},$$
where $e$ is the vector of residuals and $\tilde{y}$ is the vector $y$ demeaned.

Recall that $\hat{\beta} = (X^\prime X)^{-1} X^\prime y$, implying that $e = y - X\hat{\beta} = y - X(X^\prime X)^{-1}X^\prime y$. Regressing $y$ on a vector of 1’s, written $l$, gives the mean of $y$ as the predicted value, and the residuals from that regression are the demeaned $y$ values: $\tilde{y} = y - \bar{y}\,l = y - l(l^\prime l)^{-1}l^\prime y$.
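These two projections can be checked numerically. A small numpy sketch (with simulated data; the variable names are mine): projecting $y$ onto the constant vector $l$ replicates $\bar{y}$, and the residuals from that projection are the demeaned values.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=6)
n = len(y)
l = np.ones((n, 1))  # the vector of 1's

# l (l'l)^{-1} l' y: fitted values from regressing y on a constant.
P_l = l @ np.linalg.inv(l.T @ l) @ l.T
fitted = P_l @ y
assert np.allclose(fitted, y.mean())  # every fitted value equals the mean

# Residuals from that regression are the demeaned y values.
y_tilde = y - P_l @ y
assert np.allclose(y_tilde, y - y.mean())
```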

Let $H = X(X^\prime X)^{-1}X^\prime$ and let $M = l(l^\prime l)^{-1}l^\prime$, where $l$ is a vector of 1’s. Also, let $I$ be an identity matrix of the requisite size. Then we have

$$R^2 = 1 - \frac{y^\prime (I-H)^\prime (I-H) y}{y^\prime (I-M)^\prime (I-M) y} = 1 - \frac{y^\prime (I-H) y}{y^\prime (I-M) y},$$
where the second equality follows because $I-H$ and $I-M$ are symmetric and idempotent (since $H$ and $M$ are).

In the weighted case, let $\Omega$ be the weighting matrix used in the weighted least squares objective function, $e^\prime \Omega e$. Additionally, let $H_w = \Omega^{1/2} X (X^\prime \Omega X)^{-1} X^\prime \Omega^{1/2}$ and $M_w = \Omega^{1/2} l (l^\prime \Omega l)^{-1} l^\prime \Omega^{1/2}$, the analogues of $H$ and $M$ for the transformed data $\Omega^{1/2}y$ and $\Omega^{1/2}X$ (written this way so that they remain symmetric and idempotent). Then,
$$R^2_w = 1 - \frac{y^\prime \Omega^{1/2} (I-H_w) \Omega^{1/2} y}{y^\prime \Omega^{1/2} (I-M_w) \Omega^{1/2} y}.$$
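A numerical check for the weighted case (a numpy sketch under my own assumptions: a diagonal $\Omega$ of positive weights, and $H_w$, $M_w$ written in the symmetric form $\Omega^{1/2} X (X^\prime \Omega X)^{-1} X^\prime \Omega^{1/2}$ so that they are idempotent projectors on the transformed data $\Omega^{1/2}y$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)  # assumed positive weights
Omega = np.diag(w)
Om_half = np.diag(np.sqrt(w))      # Omega^{1/2} for diagonal Omega

I = np.eye(n)
l = np.ones((n, 1))
H_w = Om_half @ X @ np.linalg.inv(X.T @ Omega @ X) @ X.T @ Om_half
M_w = Om_half @ l @ np.linalg.inv(l.T @ Omega @ l) @ l.T @ Om_half

z = Om_half @ y  # transformed response
r2_w = 1 - (z @ (I - H_w) @ z) / (z @ (I - M_w) @ z)

# Check against the WLS objective: beta_hat = (X'ΩX)^{-1} X'Ωy,
# SSE_w = e'Ωe, SST_w = weighted sum of squares about the weighted mean.
beta_hat = np.linalg.solve(X.T @ Omega @ X, X.T @ Omega @ y)
e = y - X @ beta_hat
ybar_w = (w @ y) / w.sum()
sse_w = e @ Omega @ e
sst_w = (y - ybar_w) @ Omega @ (y - ybar_w)

assert np.allclose(r2_w, 1 - sse_w / sst_w)
```

With a non-diagonal (positive definite) $\Omega$, the same check goes through if `Om_half` is computed as a symmetric matrix square root instead of an elementwise one.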