# Estimating linear regression with OLS vs. ML

Assume that I’m going to estimate a linear regression where I assume $u\sim N(0,\sigma^2)$. What is the benefit of OLS over ML estimation? I know that we need to know the distribution of $u$ when we use ML methods, but since I assume $u\sim N(0,\sigma^2)$ in either case, this point seems to be irrelevant. Thus the only advantage of OLS should lie in the asymptotic properties of the $\beta$ estimators. Or does OLS have other advantages?

Using the usual notation, the log-likelihood of the ML method is

$l(\beta_0, \beta_1 ; y_1, \ldots, y_n) = \sum_{i=1}^n \left\{ -\frac{1}{2} \log (2\pi\sigma^2) - \frac{(y_{i} - (\beta_0 + \beta_1 x_{i}))^{2}}{2 \sigma^2} \right\}$.

It has to be maximised with respect to $\beta_0$ and $\beta_1$.

But since the first term inside the sum does not depend on $\beta_0$ or $\beta_1$, and the factor $\frac{1}{2\sigma^2}$ is a positive constant, maximising the log-likelihood is equivalent to minimising

$\sum_{i=1}^{n} (y_{i} - (\beta_0 + \beta_1 x_{i}))^{2}$.

Hence, both ML and OLS lead to the same estimates of $\beta_0$ and $\beta_1$. (The estimates of $\sigma^2$ differ: ML divides the residual sum of squares by $n$, while the unbiased OLS estimator divides by $n-2$.)
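You can verify this equivalence numerically. The sketch below (on simulated data, with illustrative true values $\beta_0=2$, $\beta_1=3$) fits the line once via the OLS normal equations and once by numerically maximising the log-likelihood above; the two sets of estimates coincide:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: y = 2 + 3x + u, u ~ N(0, 1.5^2)
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.5, n)

# OLS via least squares on the design matrix [1, x]
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Negative log-likelihood for fixed sigma; its minimiser in beta
# does not depend on sigma, which only rescales the objective
def neg_loglik(beta, sigma=1.0):
    resid = y - (beta[0] + beta[1] * x)
    return np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                  + resid**2 / (2 * sigma**2))

beta_ml = minimize(neg_loglik, x0=[0.0, 0.0]).x

print(beta_ols)
print(beta_ml)
print(np.allclose(beta_ols, beta_ml, atol=1e-4))
```

The ML optimiser lands on the same $(\hat\beta_0, \hat\beta_1)$ as the closed-form OLS solution, up to numerical tolerance.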

More details are provided in these nice lecture notes.