Given a predicted variable (P), a random effect (R) and a fixed effect (F), one could fit two mixed-effects models (lme4 syntax):

```
m1 = lmer(P ~ (1|R) + F)
m2 = lmer(P ~ (1 + F|R) + F)
```

As I understand it, the second model is the one that permits the fixed effect to vary across levels of the random effect.

In my research I typically employ mixed-effects models to analyze data from experiments conducted across multiple human participants. I model participant as a random effect and experimental manipulations as fixed effects. I think it makes sense a priori to let the degree to which the fixed effects affect performance in the experiment vary across participants. However, I have trouble imagining circumstances under which I should *not* permit the fixed effects to vary across levels of a random effect, so my question is:

> When should one *not* permit a fixed effect to vary across levels of a random effect?

**Answer**

I am not an expert in mixed-effects modelling, but the question is much easier to answer if it is rephrased in a hierarchical regression modelling context. Our observations carry two indices, with i denoting the class (the level of the random effect R) and j the members of the class, so the predicted variable is Y_{ij} and the fixed-effect regressor is F_{ij}. A hierarchical model lets us fit a linear regression whose coefficients vary across classes:

Y_{ij}=\beta_{0i}+\beta_{1i}F_{ij}

This is our first-level regression. The second-level regression is done on the coefficients of the first-level regression:

\begin{align*}
\beta_{0i}&=\gamma_{00}+u_{0i}\\\\
\beta_{1i}&=\gamma_{01}+u_{1i}
\end{align*}

When we substitute this into the first-level regression we get

\begin{align*}
Y_{ij}&=(\gamma_{00}+u_{0i})+(\gamma_{01}+u_{1i})F_{ij}\\\\
&=\gamma_{00}+u_{0i}+u_{1i}F_{ij}+\gamma_{01}F_{ij}
\end{align*}

Here the \gamma are the fixed effects and the u are the random effects. Mixed models estimate the \gamma and the variances of the u.

The model I’ve written down corresponds to the `lmer` syntax

```
P ~ (1+F|R) + F
```
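To make the two-level construction concrete, here is a minimal simulation sketch in Python with NumPy (not lme4; all parameter values below are made up, and a first-level residual e_{ij} is added for realism, which the equations above omit). Generating data from the second-level regression and then fitting a separate OLS slope per class shows the class-specific slopes \beta_{1i} scattering around \gamma_{01}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population (second-level) parameters: the fixed effects gamma
gamma00, gamma01 = 2.0, 0.5          # fixed intercept and slope
sd_u0, sd_u1, sd_e = 1.0, 0.3, 0.2   # SDs of random effects and residual

n_classes, n_per_class = 30, 50

# Second-level regression: class-specific coefficients
u0 = rng.normal(0.0, sd_u0, n_classes)   # u_{0i}
u1 = rng.normal(0.0, sd_u1, n_classes)   # u_{1i}
beta0 = gamma00 + u0                     # beta_{0i} = gamma00 + u_{0i}
beta1 = gamma01 + u1                     # beta_{1i} = gamma01 + u_{1i}

# First-level regression: observations within each class
F = rng.normal(size=(n_classes, n_per_class))                # F_{ij}
e = rng.normal(0.0, sd_e, size=(n_classes, n_per_class))     # residual e_{ij}
Y = beta0[:, None] + beta1[:, None] * F + e                  # Y_{ij}

# Per-class OLS slopes recover the beta_{1i}, which vary around gamma01
slopes = np.array([np.polyfit(F[i], Y[i], 1)[0] for i in range(n_classes)])
print(slopes.mean(), slopes.std())
```

Because sd_u1 is nonzero here, the per-class slopes genuinely differ, which is exactly the situation the random-slope model `(1+F|R)` is meant for.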

Now if we set \beta_{1i}=\gamma_{01}, i.e. drop the random term u_{1i}, we get

\begin{align*}
Y_{ij}&=\gamma_{00}+u_{0i}+\gamma_{01}F_{ij}
\end{align*}

which corresponds to the `lmer` syntax

```
P ~ (1|R) + F
```

So the question now becomes: when can we exclude the error term from the second-level regression? The canonical answer is: when we are sure that the regressors in the second-level regression (here we have none, but we could include some; they would naturally be constant within classes) fully explain the variance of the coefficients across classes.

So in this particular case, if the coefficient of F_{ij} does not vary across classes, or alternatively if the variance of u_{1i} is very small, we should entertain the idea that we are probably better off with the first model.
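One rough way to see this criterion in action (a NumPy diagnostic sketch, not a substitute for a formal comparison such as a likelihood-ratio test between the two `lmer` fits; all parameter values are made up): when data are generated with u_{1i}=0, the observed variance of per-class OLS slopes should be about the sampling variance of those slope estimates, with no excess left for a random slope to explain:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_per_class = 30, 50
gamma00, gamma01, sd_u0, sd_e = 2.0, 0.5, 1.0, 0.2

# Generate data with NO slope variation: beta_{1i} = gamma01 for every class
beta0 = gamma00 + rng.normal(0.0, sd_u0, n_classes)
F = rng.normal(size=(n_classes, n_per_class))
Y = (beta0[:, None] + gamma01 * F
     + rng.normal(0.0, sd_e, size=(n_classes, n_per_class)))

slopes = np.empty(n_classes)
slope_se2 = np.empty(n_classes)   # sampling variance of each slope estimate
for i in range(n_classes):
    b1, b0 = np.polyfit(F[i], Y[i], 1)
    resid = Y[i] - (b0 + b1 * F[i])
    s2 = resid.var(ddof=2)                            # residual variance
    slope_se2[i] = s2 / ((F[i] - F[i].mean()) ** 2).sum()
    slopes[i] = b1

# Excess spread of the slopes beyond what sampling noise alone predicts;
# should be near zero here, since the data were generated with u_{1i} = 0
excess = slopes.var(ddof=1) - slope_se2.mean()
print(excess)
```

A clearly positive excess would instead suggest real slope variation across classes, i.e. that the random-slope model is worth fitting.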

**Note**. I have only given an algebraic explanation, but I think that with it in mind it is much easier to think of a particular applied example.

**Attribution**
*Source: Link, Question Author: Mike Lawrence, Answer Author: Chris Watson*