# t-statistic for linear regression

I know how to calculate t-statistics and p-values for linear regression, but I'm trying to understand a step in the derivation. I understand where Student's t-distribution comes from: I can show that if $Z$ is a random variable drawn from a standard normal distribution, $Z \sim \mathcal{N}(0,1)$, and if $\chi$ is independently drawn from a $\chi^2_k$ distribution, then the new random variable

$$T = \frac{Z}{\sqrt{\chi/k}}$$

will be drawn from a t-distribution with $k$ degrees of freedom.
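As a quick numerical check of this construction (a sketch with variable names of my own choosing, not from the original), sampling $Z$ and $\chi$ directly and forming $T$ reproduces the heavier-tailed variance $k/(k-2)$ of the t-distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5                                   # degrees of freedom
N = 200_000                             # number of Monte Carlo draws

z = rng.normal(size=N)                  # Z ~ N(0, 1)
chi = rng.chisquare(k, size=N)          # chi ~ chi^2_k
t = z / np.sqrt(chi / k)                # T = Z / sqrt(chi / k)

# A t_k variable has mean 0 (for k > 1) and variance k / (k - 2) (for k > 2);
# here k = 5, so the empirical variance should be close to 5/3.
print(t.mean(), t.var())
```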

The part that confuses me is the application of this general formula to the estimates of the coefficients of a simple linear regression. If I parametrize the linear model as $y = \alpha + \beta x + \varepsilon$, with $\varepsilon$ a zero-mean random variable characterizing the errors, then the least-squares estimate of $\beta$ is

$$\hat{\beta} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2},$$
where the bars indicate sample means. The standard error of $\hat{\beta}$ is

$$\operatorname{SE}(\hat{\beta}) = \frac{\hat{\sigma}}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}}, \qquad \hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n}\hat{\varepsilon}_i^2,$$

where $\hat{\sigma}$ estimates $\sigma = \sqrt{\operatorname{Var}(\varepsilon)}$ from the residuals $\hat{\varepsilon}_i = y_i - \hat{\alpha} - \hat{\beta} x_i$. The part I am confused about is why

$$t = \frac{\hat{\beta}}{\operatorname{SE}(\hat{\beta})}$$

is taken to be drawn from a t-distribution with $n - 2$ degrees of freedom under the null hypothesis $\beta = 0$. If I could write $t$ in the form of the above variable $T$, cleanly identifying the $Z$ and the $\chi$ variables, then everything would be clear.
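Not an answer, but an empirical sanity check I find helpful (the simulation setup and variable names are my own): fitting repeated simulated data sets under the null $\beta = 0$ and forming $t = \hat{\beta}/\operatorname{SE}(\hat{\beta})$ gives a statistic whose spread matches a t-distribution with $n-2$ degrees of freedom rather than a standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                  # sample size; the slope then has n - 2 = 8 dof
x = np.linspace(0.0, 1.0, n)
sxx = np.sum((x - x.mean()) ** 2)       # sum of squared deviations of x

t_stats = np.empty(20_000)
for j in range(t_stats.size):
    # Simulate under the null hypothesis: y = alpha + 0 * x + noise
    y = 1.0 + rng.normal(size=n)
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    alpha_hat = y.mean() - beta_hat * x.mean()
    resid = y - alpha_hat - beta_hat * x
    sigma_hat = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual std estimate
    t_stats[j] = beta_hat / (sigma_hat / np.sqrt(sxx))

# A t_8 variable has variance 8/6 ~ 1.33, noticeably larger than the N(0, 1)
# variance of 1; the empirical variance should land near that value.
print(t_stats.mean(), t_stats.var())
```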

Notice that when $t$ is rewritten in the form $Z/\sqrt{\chi^2_k/k}$, the unknown $\sigma$ appears in both the numerator and the denominator and cancels out.
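For completeness, here is a sketch of the identification I believe is intended, assuming i.i.d. normal errors $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and writing $\hat{\sigma}^2 = \frac{1}{n-2}\sum_i \hat{\varepsilon}_i^2$ for the residual variance estimate:

```latex
% Under the null hypothesis \beta = 0, \hat\beta is a linear combination of
% normal errors, so standardizing by its true standard deviation gives
Z = \frac{\hat{\beta}}{\sigma \big/ \sqrt{\sum_i (x_i - \bar{x})^2}}
    \sim \mathcal{N}(0, 1).
% The rescaled residual sum of squares is chi-squared with n - 2 degrees of
% freedom, and is independent of \hat\beta:
\chi = \frac{(n - 2)\,\hat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-2}.
% Forming Z / \sqrt{\chi / (n - 2)} makes the unknown \sigma cancel:
\frac{Z}{\sqrt{\chi / (n - 2)}}
  = \frac{\hat{\beta}}{\sigma \big/ \sqrt{\sum_i (x_i - \bar{x})^2}}
    \cdot \frac{\sigma}{\hat{\sigma}}
  = \frac{\hat{\beta}}{\hat{\sigma} \big/ \sqrt{\sum_i (x_i - \bar{x})^2}}
  = t.
```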