Estimating linear regression with OLS vs. ML

Assume that I’m going to estimate a linear regression where I assume $u \sim N(0,\sigma^2)$. What is the benefit of OLS over ML estimation? I know that we need to know the distribution of $u$ to use ML methods, but since I assume $u \sim N(0,\sigma^2)$, this point seems irrelevant whether I use ML or OLS. Thus the only advantage of OLS should lie in the asymptotic properties of the $\beta$ estimators. Or does the OLS method have other advantages?

Answer

Using the usual notation, the log-likelihood maximised by the ML method is

$$\ell(\beta_0,\beta_1; y_1,\dots,y_n) = \sum_{i=1}^{n}\left\{-\frac{1}{2}\log(2\pi\sigma^2) - \frac{\bigl(y_i-(\beta_0+\beta_1 x_i)\bigr)^2}{2\sigma^2}\right\}.$$

It has to be maximised with respect to $\beta_0$ and $\beta_1$.

But, it is easy to see that this is equivalent to minimising

$$\sum_{i=1}^{n}\bigl(y_i-(\beta_0+\beta_1 x_i)\bigr)^2.$$
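To make the equivalence explicit: the first term in the summand does not depend on $\beta_0$ or $\beta_1$, and $1/(2\sigma^2)$ is a positive constant, so

$$\arg\max_{\beta_0,\beta_1}\,\ell(\beta_0,\beta_1)
= \arg\max_{\beta_0,\beta_1}\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i-(\beta_0+\beta_1 x_i)\bigr)^2\right\}
= \arg\min_{\beta_0,\beta_1}\sum_{i=1}^{n}\bigl(y_i-(\beta_0+\beta_1 x_i)\bigr)^2.$$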

Hence, both ML and OLS lead to the same solution.
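As a quick numerical check (not part of the original answer, and just a sketch under my own assumptions), the code below simulates data from the assumed model, computes the OLS estimates from the normal equations, and maximises the Gaussian log-likelihood numerically with `scipy.optimize.minimize`. The simulated dataset, the parameter values, and the use of SciPy/NumPy are illustrative choices; the point is only that the two $\beta$ estimates coincide up to optimiser tolerance.

```python
# Sketch: OLS vs. numerical ML for y = beta0 + beta1*x + u, u ~ N(0, sigma^2).
# Data and parameter values below are made up purely for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, beta0, beta1, sigma = 200, 1.5, -0.7, 2.0
x = rng.uniform(-3, 3, size=n)
y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)

# OLS: closed-form solution via the normal equations.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# ML: minimise the negative Gaussian log-likelihood over (beta0, beta1, log sigma).
# Parameterising by log(sigma) keeps the variance positive without constraints.
def neg_log_lik(params):
    b0, b1, log_s = params
    s2 = np.exp(2 * log_s)
    resid = y - (b0 + b1 * x)
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + resid**2 / s2)

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0]), method="BFGS")
beta_ml = res.x[:2]

print("OLS:", beta_ols)
print("ML :", beta_ml)  # the beta estimates agree up to optimiser tolerance
```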

More details are provided in these nice lecture notes.

Attribution
Source: Link, Question Author: MarkDollar, Answer Author: ocram