Comparing maximum likelihood estimation (MLE) and Bayes’ Theorem

In Bayes' theorem, p(y|x) = p(x|y)p(y)/p(x), and from the book I'm reading, p(x|y) is called the likelihood, but I assume it's just the conditional probability of x given y, right?

The maximum likelihood estimation tries to maximize p(x|y), right? If so, I'm badly confused, because x, y are both random variables, right? To maximize p(x|y) is just to find out ŷ? One more problem: if these two random variables are independent, then p(x|y) is just p(x), right? Then maximizing p(x|y) is to maximize p(x).

Or maybe p(x|y) is a function of some parameters θ, that is p(x|y;θ), and MLE tries to find the θ which can maximize p(x|y)? Or even that y is actually a parameter of the model, not a random variable, and maximizing the likelihood is to find ŷ?

UPDATE

I’m a novice in machine learning, and this problem is a confusion from the stuff I read from a machine learning tutorial. Here it is, given an observed dataset {x1,x2,...,xn}, the target values are {y1,y2,...,yn}, and I try to fit a model over this dataset, so I assume that, given x, y has a form of distribution named W parameterized by θ, that is p(y|x;θ), and I assume this is the posterior probability, right?

Now to estimate the value of θ, I use MLE. OK, here comes my problem, I think the likelihood is p(x|y;θ), right? Maximizing the likelihood means I should pick the right θ and y?

If my understanding of likelihood is wrong, please show me the right way.

Answer

I think the core misunderstanding stems from the questions you ask in the first half of your question. I approach this answer as contrasting the MLE and Bayesian inferential paradigms. A very approachable discussion of MLE can be found in chapter 1 of Gary King, Unifying Political Methodology. Gelman's Bayesian Data Analysis can provide details on the Bayesian side.

In Bayes' theorem, p(y|x) = p(x|y)p(y)/p(x),
and from the book I’m reading, p(x|y) is called the likelihood, but I assume it’s just the conditional probability of x given y, right?

The likelihood is a conditional probability. To a Bayesian, this formula describes the distribution of the parameter y given data x and prior p(y). But since this notation doesn’t reflect your intention, henceforth I will use (θ,y) for parameters and x for your data.

But your update indicates that x are observed from some distribution p(x|θ,y). If we place our data and parameters in the appropriate places in Bayes’ rule, we find that these additional parameters pose no problems for Bayesians:
p(θ|x,y) = p(x,y|θ)p(θ)/p(x,y)

I believe this expression is what you are after in your update.
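To make the pieces of this expression concrete, here is a minimal numeric sketch of Bayes' rule with a discrete parameter: the likelihood, the prior, and the normalizing constant each appear on their own line. The coin-flip setup and all numbers are invented purely for illustration.

```python
# Suppose theta is one of two candidate coin biases and we observe
# x = 7 heads in 10 flips. (Toy numbers, chosen only for illustration.)
from math import comb

thetas = [0.5, 0.8]          # candidate parameter values
prior = [0.5, 0.5]           # p(theta): a uniform prior
heads, flips = 7, 10

# likelihood p(x | theta) for each candidate theta
lik = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads) for t in thetas]

# unnormalized posterior p(x | theta) p(theta), then divide by p(x)
unnorm = [l * p for l, p in zip(lik, prior)]
p_x = sum(unnorm)                       # the normalizing constant p(x)
posterior = [u / p_x for u in unnorm]   # p(theta | x), sums to 1

print(posterior)
```

With 7 heads in 10 flips, the posterior shifts weight toward the 0.8-bias coin, exactly as the formula dictates.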

The maximum likelihood estimation tries to maximize p(x,y|θ), right?

Yes. MLE posits that p(x,y|θ) ∝ p(θ|x,y).
That is, it treats the term p(θ)/p(x,y) as an unknown (and unknowable) constant. By contrast, Bayesian inference treats p(x,y) as a normalizing constant (so that probabilities sum/integrate to unity) and p(θ) as a key piece of information: the prior. We can think of p(θ) as a way of incurring a penalty on the optimization procedure for "wandering too far away" from the region we think is most plausible.
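The "prior as penalty" idea can be sketched numerically. Below, a hypothetical example (all numbers invented): estimating a Gaussian mean, where the MLE maximizes the log-likelihood alone while the Bayesian MAP estimate adds the log-prior as a penalty that pulls the estimate toward the prior mean.

```python
# Toy setup: Gaussian data with known sd sigma, Gaussian prior N(mu0, tau^2).
# All values here are assumptions for illustration, not from the original post.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=20)  # observed data
sigma, mu0, tau = 1.0, 0.0, 1.0              # likelihood sd, prior mean, prior sd

grid = np.linspace(-2, 6, 4001)              # candidate values of the mean
log_lik = np.array([-0.5 * np.sum((x - m) ** 2) / sigma**2 for m in grid])
log_prior = -0.5 * (grid - mu0) ** 2 / tau**2  # the "penalty" term

mle = grid[np.argmax(log_lik)]                  # maximizes the likelihood alone
map_est = grid[np.argmax(log_lik + log_prior)]  # likelihood penalized by prior

print(mle, map_est)
```

The MAP estimate lands between the prior mean and the MLE: the prior has penalized estimates far from mu0, shrinking the answer toward the region it deems plausible.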

If so, I'm badly confused, because x, y, θ are random variables, right? To maximize p(x,y|θ) is just to find out θ̂?

In MLE, θ̂ is assumed to be a fixed quantity that is unknown but able to be inferred, not a random variable. Bayesian inference treats θ as a random variable. Bayesian inference puts probability density functions in and gets probability density functions out, rather than point summaries of the model, as in MLE. That is, Bayesian inference looks at the full range of parameter values and the probability of each. MLE posits that θ̂ is an adequate summary of the data given the model.
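The "densities in, densities out" contrast can be illustrated with a rough sketch, again under invented assumptions (Gaussian data, a standard normal prior on the mean): instead of a single θ̂, the Bayesian output is a whole normalized posterior density, from which any summary, such as a credible interval, can be read off.

```python
# Toy illustration: the full posterior over a grid of candidate means,
# versus a single point summary. All numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=20)  # observed data, sd assumed 1

grid = np.linspace(-2, 6, 4001)
dx = grid[1] - grid[0]
log_post = (-0.5 * np.array([np.sum((x - m) ** 2) for m in grid])
            - 0.5 * grid**2)                 # log-likelihood + log N(0,1) prior
post = np.exp(log_post - log_post.max())
post /= post.sum() * dx                      # normalize: density integrates to 1

theta_hat = grid[np.argmax(post)]            # one point summary of the density
cdf = np.cumsum(post) * dx
lo = grid[np.searchsorted(cdf, 0.025)]       # 95% credible interval endpoints
hi = grid[np.searchsorted(cdf, 0.975)]
print(theta_hat, (lo, hi))
```

MLE stops at the single number θ̂; the Bayesian output above retains the entire density, so uncertainty statements like the (lo, hi) interval come for free.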

Attribution
Source: Link, Question Author: avocado, Answer Author: Sycorax