In practice how is the random effects covariance matrix calculated in a mixed effects model?

Basically what I’m wondering is how the different covariance structures are enforced, and how the values inside these matrices are calculated. Functions like lme() allow us to choose which structure we’d like, but I’d love to know how they are estimated.

Consider the linear mixed effects model $Y=X\beta+Zu+\epsilon$.

Where $u \stackrel{d}{\sim} N(0,D)$ and $\epsilon \stackrel{d}{\sim} N(0,R)$. Furthermore:

$Var(Y|X,Z,\beta,u)=R$

$Var(Y|X,\beta)=ZDZ^T+R=V$

For simplicity we’ll assume $R=\sigma^2I_n$.

Basically my question is: How exactly is $D$ estimated from the data for the various parameterizations?
Say we assume $D$ is diagonal (random effects are independent), or $D$ is fully parameterized (the case I’m most interested in at the moment), or any of the various other parameterizations. Are there simple estimators/equations for these? (No doubt they would be estimated iteratively.)

EDIT:
From the book Variance Components (Searle, Casella, McCulloch 2006) I’ve managed to glean the following:

If $D=\sigma^2_uI_q$ then the variance components are updated and calculated as follows:

$\sigma_u^{2(k+1)} = \frac{\hat{\textbf{u}}^{(k)T}\hat{\textbf{u}}^{(k)}} {\sigma_u^{2(k)}\,\text{trace}(\textbf{Z}^T\textbf{V}^{-1}\textbf{Z})}$

$\sigma_e^{2(k+1)} = \textbf{Y}^T(\textbf{Y}-\textbf{X}\hat{\beta}^{(k)}-\textbf{Z}\hat{\textbf{u}}^{(k)})/n$

Where $\hat{\beta}^{(k)}$ and $\hat{{u}}^{(k)}$ are the $k$th updates respectively.
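To make the iteration concrete, here is a minimal numerical sketch of this kind of fixed-point scheme for a simulated random-intercept model with $D=\sigma_u^2 I_q$ and $R=\sigma_e^2 I_n$. This is not the exact Searle–Casella–McCulloch implementation; the group sizes, starting values, and iteration count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random-intercept model: q groups, m observations per group.
q, m = 30, 10
n = q * m
sigma_u_true, sigma_e_true = 1.5, 1.0

X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed-effects design
Z = np.kron(np.eye(q), np.ones((m, 1)))                # random-intercept design
beta_true = np.array([2.0, -1.0])
u = rng.normal(scale=sigma_u_true, size=q)
y = X @ beta_true + Z @ u + rng.normal(scale=sigma_e_true, size=n)

# Iterate the ML-style updates for sigma_u^2 and sigma_e^2.
s2u, s2e = 1.0, 1.0                                    # starting values
for _ in range(200):
    V = s2u * Z @ Z.T + s2e * np.eye(n)                # marginal covariance
    Vinv = np.linalg.inv(V)
    # GLS update for beta, then the BLUP-style predictor for u
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    r = y - X @ beta
    u_hat = s2u * Z.T @ Vinv @ r
    # Variance-component updates in the form quoted above
    s2u = (u_hat @ u_hat) / (s2u * np.trace(Z.T @ Vinv @ Z))
    s2e = y @ (y - X @ beta - Z @ u_hat) / n

print(s2u, s2e)  # should land near the true values 2.25 and 1.0
```

Note that $\hat{u}=\sigma_u^2 Z^T V^{-1}(y-X\hat\beta)$, so at a fixed point the $\sigma_u^2$ update reduces to the ML score equation $\text{trace}(Z^T V^{-1} Z) = (y-X\hat\beta)^T V^{-1} Z Z^T V^{-1}(y-X\hat\beta)$.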

Are there general formulas for when $D$ is block diagonal or fully parameterized? I’m guessing that in the fully parameterized case a Cholesky decomposition is used to ensure positive definiteness and symmetry.
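On the Cholesky guess: a common device (used in spirit by lme4’s relative covariance factor, though the details there differ) is to let the optimizer work over an unconstrained vector that fills a lower-triangular factor $L$, with the diagonal log-parameterized so that $D = LL^T$ is positive definite for any input. The helper below is hypothetical, just to show the shape of the trick:

```python
import numpy as np

def theta_to_D(theta, q):
    """Map an unconstrained vector theta of length q*(q+1)/2 to a
    symmetric positive-definite D via a Cholesky factor L.
    (Hypothetical helper, not an actual lme4/nlme function.)"""
    assert len(theta) == q * (q + 1) // 2
    L = np.zeros((q, q))
    L[np.tril_indices(q)] = theta
    # Exponentiate the diagonal: L is then nonsingular for any theta,
    # so D = L @ L.T is symmetric positive definite by construction.
    L[np.diag_indices(q)] = np.exp(np.diag(L))
    return L @ L.T

# Example: q = 2 random effects need q*(q+1)//2 = 3 free parameters.
D = theta_to_D(np.array([0.0, 0.3, -0.2]), 2)
```

A generic optimizer (Newton–Raphson, EM, or a quasi-Newton method) can then maximize the (restricted) likelihood over theta without any positivity or symmetry constraints, which is one reason this parameterization is popular.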

Answer

The Goldstein .pdf @probabilityislogic linked is a great document. Here’s a list of some references that discuss your particular question:

Harville, 1976: Extension of the Gauss-Markov Theorem to include the estimation of random effects.

Harville, 1977: Maximum likelihood approaches to variance component estimation and to related problems.

Laird and Ware, 1982: Random-effects models for longitudinal data.

McCulloch, 1997: Maximum likelihood algorithms for generalized linear mixed models.

The SAS User Guide excerpt for the MIXED procedure has some great information about covariance estimation and many more sources (starting on page 3968).

There are numerous quality textbooks on longitudinal/repeated-measures data analysis, but here’s one that goes into some detail about the implementation in R (from the authors of nlme and lme4):

Pinheiro and Bates, 2000: Mixed-Effects Models in S and S-PLUS.

EDIT: One more relevant paper: Lindstrom and Bates, 1988: Newton-Raphson and EM Algorithms for linear mixed-effects models for repeated-measures data.

EDIT 2: And another: Jennrich and Schluchter, 1986: Unbalanced Repeated-Measures Models with Structured Covariance Matrices.

Attribution
Source: Link, Question Author: dcl, Answer Author: Community