Estimating the covariance posterior distribution of a multivariate Gaussian

I need to “learn” the distribution of a bivariate Gaussian from few samples, but I have a good hypothesis about the prior distribution, so I would like to use the Bayesian approach.

I defined my prior:

$$\mu_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Sigma_0 = \begin{bmatrix} 16 & 0 \\ 0 & 27 \end{bmatrix}$$

and my data distribution given the hypothesis:

$$\mu = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Sigma = \begin{bmatrix} 18 & 0 \\ 0 & 18 \end{bmatrix}$$

Now I know, thanks to here, that to estimate the mean given the data $x_1, \dots, x_n$ (with $\Sigma$ assumed known) I can compute:

$$\Sigma_n = \left( \Sigma_0^{-1} + n\,\Sigma^{-1} \right)^{-1}$$

$$\mu_n = \Sigma_n \left( \Sigma_0^{-1} \mu_0 + n\,\Sigma^{-1} \bar{x} \right)$$
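As a minimal numpy sketch of this mean update, using the prior and likelihood covariances from above (the sample `X`, the seed, the true mean it is drawn from, and the sample size are all made-up assumptions, just to exercise the formulas):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior on the mean, and the (assumed known) data covariance
mu0 = np.zeros(2)
Sigma0 = np.diag([16.0, 27.0])
Sigma = np.diag([18.0, 18.0])

# A small made-up sample just to exercise the update
X = rng.multivariate_normal(mean=[1.0, -1.0], cov=Sigma, size=10)
n = len(X)
xbar = X.mean(axis=0)

# Posterior over the mean:
#   Sigma_n = (Sigma0^-1 + n Sigma^-1)^-1
#   mu_n    = Sigma_n (Sigma0^-1 mu0 + n Sigma^-1 xbar)
Sigma0_inv = np.linalg.inv(Sigma0)
Sigma_inv = np.linalg.inv(Sigma)
Sigma_n = np.linalg.inv(Sigma0_inv + n * Sigma_inv)
mu_n = Sigma_n @ (Sigma0_inv @ mu0 + n * Sigma_inv @ xbar)

print(mu_n)     # pulled from xbar toward mu0
print(Sigma_n)  # tighter than the prior Sigma0
```

Note that `Sigma_n` shrinks as `n` grows: it quantifies the remaining uncertainty about the *mean*, which is exactly the point raised in the question below.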
Now comes the question. Maybe I’m wrong, but it seems to me that $\Sigma_n$ is just the covariance matrix of the estimated parameter $\mu_n$, not the estimated covariance of my data. What I would like is to also compute

$$p(\Sigma \mid X)$$

in order to have a fully specified distribution learned from my data.

Is this possible? Or is it already solved by computing $\Sigma_n$, and the formula above is just expressed in a misleading way (or I am simply misinterpreting it)? References would be appreciated. Thanks a lot.


From the comments, it appeared that my approach was “wrong” in the sense that I was assuming a constant covariance, fixed at $\Sigma$. What I need is to put a prior on it as well, $p(\Sigma)$, but I don’t know what distribution I should use, nor what the procedure is to update it.


You can do Bayesian updating for the covariance structure in much the same spirit as you updated the mean. The conjugate prior for the covariance matrix of a multivariate normal is the Inverse-Wishart distribution, so it makes sense to start there:

$$\Sigma \sim \mathcal{W}^{-1}\left( \Psi_0, \nu_0 \right)$$

Then when you get your sample $X$ of length $n$ you can calculate the scatter matrix about the (known) mean:

$$S = \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^{T}$$
This can then be used to update your estimate of the covariance matrix:

$$\Sigma \mid X \sim \mathcal{W}^{-1}\left( \Psi_0 + S,\; \nu_0 + n \right)$$

You may choose to use the mean of this as your point estimate for the covariance (posterior mean estimator):

$$E[\Sigma \mid X] = \frac{\Psi_0 + S}{\nu_0 + n - p - 1}$$

or you might choose to use the mode (maximum a posteriori estimator):

$$\operatorname{mode}[\Sigma \mid X] = \frac{\Psi_0 + S}{\nu_0 + n + p + 1}$$

where $p$ is the dimension of the data (here $p = 2$).
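A numpy sketch of this conjugate update. The hyperparameters `nu0` and `Psi0` are assumptions for illustration (chosen so the prior mean of $\Sigma$ equals the hypothesised $\operatorname{diag}(18, 18)$), and the data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 2                     # dimension of the data
nu0 = 5.0                 # assumed prior degrees of freedom (must be > p + 1 for the mean to exist)
# Scale chosen so the prior mean Psi0 / (nu0 - p - 1) equals diag(18, 18)
Psi0 = np.diag([18.0, 18.0]) * (nu0 - p - 1)
mu = np.zeros(2)          # mean treated as known here

# Made-up data drawn from the hypothesised model
X = rng.multivariate_normal(mean=mu, cov=np.diag([18.0, 18.0]), size=50)
n = len(X)

# Scatter matrix about the known mean: S = sum_i (x_i - mu)(x_i - mu)^T
D = X - mu
S = D.T @ D

# Conjugate update: Sigma | X ~ InvWishart(Psi0 + S, nu0 + n)
Psi_n = Psi0 + S
nu_n = nu0 + n

post_mean = Psi_n / (nu_n - p - 1)   # posterior mean estimator
post_mode = Psi_n / (nu_n + p + 1)   # MAP estimator

print(post_mean)
print(post_mode)
```

If you want full posterior draws of $\Sigma$ rather than a point estimate, `scipy.stats.invwishart(df=nu_n, scale=Psi_n).rvs()` samples from the updated distribution.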
Source: Link, Question Author: unziberla, Answer Author: Corvus
