# Why is Ridge Regression NOT scale-invariant?

In *The Elements of Statistical Learning*, Chapter 3, we learn that linear regression is scale-invariant, since the scaling matrix for the coefficients eventually cancels, but Ridge regression does not share this property. Since the Ridge coefficient has the closed form
$$\beta = (X^{T}X + \lambda I)^{-1}X^{T}Y,$$
I don't see why scale-invariance fails to hold here.
Can anyone suggest a proof?

The intuition here is that there’s a sleight of hand happening when you use the same symbol $$X$$ for both the original data and the rescaled data. It’s misleading because the rescaled matrix $$\tilde{X} = XD$$ is not the same as the original $$X$$, so we should make that explicit and write down how we’re rescaling.

We can demonstrate this by considering two cases: first with the original units in $$X$$, and second the case where we use a rescaled matrix $$\tilde{X} = XD$$, where $$D$$ is a diagonal matrix with all positive entries on the diagonal. If $$X$$ has shape $$n \times p$$ then $$D$$ has shape $$p \times p$$. (You can actually use any $$D_{ii} \neq 0$$, but “rescaling” is almost always understood to mean multiplication by a positive scalar.)

In the first case, we have
$$\beta(X) = (X^TX + \lambda I)^{-1}X^T y,$$
which is just as written in the question.

In the second case, we apply the rescaling to $$X$$ and we have
$$\begin{aligned} \beta(\tilde{X}) &= (\tilde{X}^T\tilde{X} + \lambda I)^{-1}\tilde{X}^T y\\ &= (DX^TXD + \lambda I)^{-1}D X^Ty \\ &= (D(X^T X + \lambda D^{-2})D)^{-1}DX^Ty \\ &= D^{-1}(X^T X + \lambda D^{-2})^{-1}X^Ty \end{aligned}$$

(remembering that $$D$$ is diagonal, so $$D^T = D$$).

From this we can conclude that the coefficients $$\beta(X)$$ and $$\beta(\tilde{X})$$ are only the same if $$D = I$$.
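This conclusion is easy to check numerically. The sketch below (my own example with NumPy and an arbitrary positive diagonal $$D$$, not part of the original post) verifies the last line of the derivation and confirms that the rescaled coefficients are not simply $$D^{-1}$$ times the originals:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 3, 1.0

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
D = np.diag([2.0, 0.5, 10.0])  # arbitrary positive rescaling of each column

def ridge(X, y, lam):
    """Closed-form ridge coefficients (X^T X + lam*I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta = ridge(X, y, lam)            # beta(X)
beta_tilde = ridge(X @ D, y, lam)  # beta(X tilde)

# Last line of the derivation: D^{-1} (X^T X + lam * D^{-2})^{-1} X^T y
Dinv = np.linalg.inv(D)
beta_derived = Dinv @ np.linalg.solve(X.T @ X + lam * Dinv @ Dinv, X.T @ y)

print(np.allclose(beta_tilde, beta_derived))  # True: the algebra checks out
print(np.allclose(D @ beta_tilde, beta))      # False: ridge is not scale-equivariant
```

If ridge were scale-equivariant we would have $$\beta(\tilde{X}) = D^{-1}\beta(X)$$, so the second check would print `True`; it fails precisely because $$\lambda D^{-2} \neq \lambda I$$ here.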

The final line shows that the rescaling has two effects on the coefficients.

1. It has a multiplicative effect on the coefficients, just as we would intuitively expect based on what happens when we rescale in the OLS case.
2. The last line makes explicit that the change in scale is “absorbed” into $$\lambda$$: each coefficient $$\beta(\tilde{X})_i$$ is penalized inversely to the square of the rescaling $$D_{ii}$$.
(Thanks to Firebug for this helpful suggestion.)
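For contrast with point 1, here is a small sketch (again my own NumPy example, not from the original answer) showing that OLS, unlike ridge, is exactly scale-equivariant: rescaling the columns of $$X$$ by $$D$$ rescales the fitted coefficients by $$D^{-1}$$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
D = np.diag([3.0, 0.1, 7.0])  # arbitrary positive column rescaling

def ols(X, y):
    """Least-squares coefficients via lstsq (no penalty term)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = ols(X, y)
beta_tilde = ols(X @ D, y)

# Multiplying back by D recovers the original coefficients exactly (up to
# floating point): the purely multiplicative effect described in point 1.
print(np.allclose(D @ beta_tilde, beta))  # True
```

Algebraically this is because $$((XD)^T XD)^{-1}(XD)^T y = D^{-1}(X^TX)^{-1}X^Ty$$: the extra factors of $$D$$ cancel when there is no penalty term, which is exactly the cancellation the question alludes to.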