Are Bayesian methods robust to violations of normality?

Consider the simple case

$$x|\sigma^2 \sim N(0,\sigma^2)$$
$$\sigma^2\sim IG(\alpha,\beta)$$

Then, marginally, $f(x) \propto (\beta + x^2/2)^{-(\alpha+1/2)}$, i.e., $x$ follows a scaled Student-$t$ distribution with $2\alpha$ degrees of freedom.
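As a quick numerical check of this marginalization (a sketch using NumPy/SciPy; the values of $\alpha$ and $\beta$ are arbitrary), we can sample from the hierarchy and compare against the implied Student-$t$ with $2\alpha$ degrees of freedom and scale $\sqrt{\beta/\alpha}$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta = 3.0, 2.0  # illustrative hyperparameters

# draw sigma^2 ~ IG(alpha, beta), then x | sigma^2 ~ N(0, sigma^2)
sigma2 = stats.invgamma(alpha, scale=beta).rvs(size=200_000, random_state=rng)
x = rng.normal(0.0, np.sqrt(sigma2))

# theory: marginally x ~ t with 2*alpha df and scale sqrt(beta/alpha)
t_ref = stats.t(df=2 * alpha, scale=np.sqrt(beta / alpha))

# compare empirical and theoretical distributions via Kolmogorov-Smirnov
ks = stats.kstest(x, t_ref.cdf)
print(ks.statistic)  # tiny: samples are indistinguishable from the t
```

With 200,000 draws the KS statistic is on the order of $1/\sqrt{n} \approx 0.002$, consistent with the two distributions being identical.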

Does this mean that this model (and similar Bayesian models generally) is robust to violations of normality?


The Bayesian approach is robust only in the sense that few Bayesians would feel comfortable using the model you have posed, and that Bayes makes it relatively easy to use more general models that make fewer assumptions. The most common and simplest extension of the Gaussian model you posed is to use a $t$ distribution with unknown degrees of freedom $\nu$ for the raw data. A prior distribution for $\nu$ could favor normality ($\nu > 20$) if you had such knowledge, and as the sample size $n$ grows, $\nu$ would “follow the data” to allow the tails to be as heavy as needed.

To get a sense of how massive a departure this is from the classical approach, consider that textbooks urge students to assess normality and to try different data transformations to achieve it. The result is falsely narrow confidence intervals and much subjectivity. The failure to admit that subjectivity, i.e., to account for model uncertainty, is what makes the intervals too narrow. A genuinely Bayesian approach does not think dichotomously about normality; when the analysis is finished, instead of asking “what if I am wrong?” you obtain a posterior probability of normality, e.g., $\Pr(\nu > 20)$.
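The idea of reporting $\Pr(\nu > 20)$ can be sketched with a simple grid approximation (illustrative only: a flat prior over a grid of $\nu$ values, with location and scale fixed at 0 and 1 for simplicity rather than given priors of their own):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.standard_normal(50)  # data that really are Gaussian

# discrete grid over degrees of freedom with a flat prior (an illustrative choice)
nus = np.arange(1, 201)

# log-likelihood of the data under a standardized t_nu, for each nu on the grid
loglik = np.array([stats.t(df=nu).logpdf(y).sum() for nu in nus])

# normalize to a posterior over the grid
post = np.exp(loglik - loglik.max())
post /= post.sum()

# posterior probability of "effectively normal" tails
p_normalish = post[nus > 20].sum()
print(p_normalish)
```

For truly Gaussian data the likelihood is nearly flat in $\nu$ once $\nu$ is large, so most posterior mass sits at $\nu > 20$; for heavy-tailed data the mass would shift toward small $\nu$ instead.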

If you do not believe the data distribution to be symmetric, you can use a similar process based on a skew $t$ distribution. Bayes encourages us to be honest about what we do not know by including a parameter for each thing we don’t know. If all you knew was that the distribution is continuous, you might need four parameters (location, scale, skewness, kurtosis).
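As one concrete instance of a four-parameter continuous family (Johnson $S_U$ here, chosen purely for illustration; the answer does not name a specific family), fitting its location, scale, and two shape parameters to skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# skewed data for the demonstration (lognormal, purely illustrative)
y = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

# Johnson S_U: two shape parameters (a, b) plus loc and scale -- a
# 4-parameter family flexible in both skewness and tail weight
a, b, loc, scale = stats.johnsonsu.fit(y)
fitted = stats.johnsonsu(a, b, loc=loc, scale=scale)

# sanity check: the fitted family reproduces the sample median
print(np.median(y), fitted.median())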

Source: Link, Question Author: papgeo, Answer Author: Frank Harrell
