Why do we estimate the mean using MLE when we already know that the mean is the average of the data?

I have come across a textbook problem on estimating the mean.
The problem is as follows:

Assume that $N$ data points, $x_1, x_2, \dots, x_N$, have been
generated by a one-dimensional Gaussian pdf of unknown mean but
known variance. Derive the ML estimate of the mean.

My question is: why do we need to estimate the mean using MLE when we already know that the mean is the average of the data? The solution also says that the MLE estimate is the average of the data. Do I really need to go through all the tedious likelihood-maximization steps just to find out that the mean is nothing but the average of the data, i.e. $(x_1+x_2+\cdots+x_N)/N$?

Answer

Why do we need to estimate the mean using MLE when we already know that the mean is the average of the data?

The textbook problem states that $x_1,x_2,\dots,x_N$ are drawn from $$x\sim\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
You are told that $\sigma$ is known, but $\mu$ has to be estimated.

Is it really that obvious that a good estimate is $\hat\mu=\bar x$?!

Here, $\bar x=\frac{1}{N}\sum_{i=1}^Nx_i$.

It wasn’t obvious to me, and I was quite surprised to see that it is in fact the MLE estimate.
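
For what it’s worth, the derivation the textbook asks for is short: write down the log-likelihood, differentiate with respect to $\mu$, and set the derivative to zero,
$$\ell(\mu)=\sum_{i=1}^N\log\left(\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}\right)=-\frac{N}{2}\log(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{i=1}^N(x_i-\mu)^2,$$
$$\frac{d\ell}{d\mu}=\frac{1}{\sigma^2}\sum_{i=1}^N(x_i-\mu)=0\quad\Rightarrow\quad\hat\mu=\frac{1}{N}\sum_{i=1}^Nx_i=\bar x.$$
So the “tiring” maximization collapses to a couple of lines, and those lines are precisely what justifies using the average in the first place.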

Also, consider this: what if $\mu$ and $\sigma$ were both unknown? In that case the MLE of the variance is $$\hat\sigma^2=\frac{1}{N}\sum_{i=1}^N(x_i-\bar x)^2$$

Notice how this estimator is not the same as the usual sample variance estimator! Don’t “we already know” that the sample variance is given by the following equation?
$$s^2=\frac{1}{N-1}\sum_{i=1}^N(x_i-\bar x)^2$$
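
If you want to convince yourself numerically, here is a minimal sketch (assuming NumPy and SciPy are available; the true $\mu=5$, $\sigma=2$, the sample size, and the seed are made-up illustration values): maximizing the likelihood over $\mu$ lands exactly on the sample average, while the $1/N$ MLE of the variance is not the same number as the $1/(N-1)$ sample variance.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)                 # made-up seed for reproducibility
x = rng.normal(loc=5.0, scale=2.0, size=100)   # hypothetical sample: mu=5, sigma=2

sigma = 2.0                                    # sigma treated as "known"
def neg_log_lik(mu):
    # Negative Gaussian log-likelihood in mu, with additive constants dropped
    return 0.5 * np.sum((x - mu) ** 2) / sigma ** 2

mu_hat = minimize_scalar(neg_log_lik).x
print(mu_hat, x.mean())              # numerical MLE of mu vs. the plain average: identical

print(x.var(ddof=0), x.var(ddof=1))  # MLE variance (1/N) vs. sample variance (1/(N-1)): different
```

The two variance numbers differ by the factor $N/(N-1)$, which is exactly the point: what “we already know” and what maximum likelihood gives us are not always the same thing.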

Attribution
Source: Link, Question Author: Niranjan Kotha, Answer Author: Aksakal
