It is often said that priors in Bayesian statistics can be regarded as regularization factors, since they penalize solutions on which the prior places low probability density.

Consider this simple model, whose MLE parameter is

$$
\hat{\mu}_{\text{MLE}} = \arg\max_{\mu} \; \mathcal{N}(y; \mu, \sigma).
$$

If I add a prior,

$$
\hat{\mu}_{\text{MAP}} = \arg\max_{\mu} \; \mathcal{N}(y; \mu, \sigma)\, \mathcal{N}(\mu; 0, \sigma_0),
$$

the resulting parameter is no longer the MLE but the MAP estimate.
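(For this Gaussian-prior example the MAP estimate has a closed form that makes the regularization effect explicit. Taking logs, dropping constants, and setting the derivative to zero gives

$$
\hat{\mu}_{\text{MAP}} = \arg\max_{\mu} \left[ -\frac{(y - \mu)^2}{2\sigma^2} - \frac{\mu^2}{2\sigma_0^2} \right] = \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2}\, y,
$$

so the prior shrinks the MLE $\hat{\mu}_{\text{MLE}} = y$ toward zero, exactly like an L2 (ridge) penalty with weight $\sigma^2/\sigma_0^2$.)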

Question: Does this mean that if I introduce some regularization into my model, I am doing a Bayesian analysis (even if I only use point estimates)? Or does it make no sense to draw this "ontological" distinction at this point, since the method for finding either the MLE or the MAP estimate is the same (isn't it?)?

**Answer**

It means that the analysis has a Bayesian interpretation, but it may also have a frequentist interpretation. The MAP estimate can be viewed as a partially Bayesian approach; a more fully Bayesian approach would consider the whole posterior distribution over the parameters. It is still a Bayesian approach, though, because probability is defined as a "degree of plausibility" rather than as a long-run frequency.
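To make the regularization reading concrete, here is a minimal numerical sketch (the values of `y`, `sigma`, and `sigma0` are arbitrary illustrations, not from the original post): minimizing the negative log posterior, which is just squared error plus an L2 penalty, recovers the closed-form shrinkage estimate $\sigma_0^2/(\sigma_0^2+\sigma^2)\,y$.

```python
import numpy as np

# Hypothetical observation and standard deviations for illustration.
y, sigma, sigma0 = 2.0, 1.0, 1.0

def neg_log_posterior(mu):
    # -log N(y; mu, sigma) - log N(mu; 0, sigma0), constants dropped.
    # The second term is an L2 penalty: the negative log of the prior.
    return (y - mu) ** 2 / (2 * sigma**2) + mu**2 / (2 * sigma0**2)

# MAP estimate by brute-force grid search over mu.
grid = np.linspace(-5.0, 5.0, 100_001)
mu_map = grid[np.argmin(neg_log_posterior(grid))]

# Closed-form shrinkage estimate for this Gaussian model.
mu_closed = sigma0**2 / (sigma0**2 + sigma**2) * y

print(mu_map, mu_closed)  # both approximately 1.0 for these values
```

The grid search and the closed form agree, so whether one calls the second term a "log prior" (Bayesian reading) or a "ridge penalty" (frequentist reading) is a matter of interpretation, not of algorithm.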

**Attribution**

*Source : Link , Question Author : alberto , Answer Author : Dikran Marsupial*