What are the benefits of specifying a covariance structure in a GLM (rather than treating all off-diagonal entries in the covariance matrix as zero)? Aside from reflecting what one knows of the data, does it
- improve goodness of fit?
- improve predictive accuracy on held-out data?
- allow us to estimate the extent of covariance?
What are the costs of imposing a covariance structure? Does it
- add computational complications for estimation algorithms?
- increase the number of estimated parameters, thereby also increasing AIC, BIC, and DIC?
Is it possible to determine the right covariance structure empirically, or is this something that depends on your knowledge of the data-generating process?
Any costs / benefits I didn’t mention?
Basically, you must specify a covariance structure in a GLM. If by “assuming no covariance” you mean “all off-diagonal entries in the covariance matrix are zero”, then all you have done is assume one very specific covariance structure. (You could be even more specific, e.g., by assuming that all variances are equal.)
This is really a variation on “I don’t subscribe to any philosophy; I’m a pragmatist.” – “You just described the philosophy you subscribe to.”
As such, I would say that the advantage of thinking about the covariance structure is the chance of using a model that is more appropriate to your data. Just as you should include known functional relationships for the expected value (or the mean) of your observations, you should account for any structure you know in the covariance.
And of course, the “disadvantage” is that you need to actually think about all this. It is much easier to just use your software’s default setting. But that is kind of like always driving in first gear because the car happened to be in first gear when you bought it, and understanding the gear shift takes effort. Not recommended.