# Is it worth reporting a small fixed-effect $R^2$ (marginal $R^2$) alongside a large model $R^2$ (conditional $R^2$)?

In a mixed-model analysis (lme4 + lmerTest in R), I want to analyse the effect of three predictors, say A, B and C. Since it is a mixed model, there are also two random effects, Ran1 and Ran2.

I first built a random-intercept model with Ran1 and Ran2, but without fixed terms:

mod.0 <- lmer(outcome ~ 1 + (1|Ran1) + (1|Ran2), data = mydata)


The result (fixed part) is the following:

Fixed effects:
            Estimate Std. Error       df t value Pr(>|t|)
(Intercept)  2.92381    0.07787 35.28000   37.55   <2e-16 ***


I then built a random-intercept mixed-effects model that adds the three predictors as fixed terms:

mod.1 <- lmer(outcome ~ A + B + C + (1|Ran1) + (1|Ran2), data = mydata)


The result (fixed part) is the following:

Fixed effects:
            Estimate Std. Error         df t value Pr(>|t|)
(Intercept)            3.255e+00  8.476e-02  5.000e+01  38.400  < 2e-16 ***
A                     -1.482e-01  2.639e-02  5.671e+04  -5.617 1.95e-08 ***
B                      3.495e-01  2.462e-02  5.971e+04  14.195  < 2e-16 ***
C                     -2.083e-01  1.873e-02  3.942e+04 -11.124  < 2e-16 ***


I computed an $R^2$ for each model with the following function, which regresses the observed response on the fitted values:

r2.mer <- function(m) {
  # Regress the observed response on the model's fitted values and
  # take the R^2 of that ordinary least-squares fit
  lmfit <- lm(model.response(model.frame(m)) ~ fitted(m))
  summary(lmfit)$r.squared
}


mod.0 has $R^2 =$ 0.6187513 and mod.1 has $R^2 =$ 0.6251295, so adding the fixed terms barely changes the model $R^2$.
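As a cross-check, the marginal and conditional $R^2$ can also be computed directly from the variance components, following Nakagawa and Schielzeth's decomposition. A minimal sketch for a random-intercept model, using lme4's built-in `sleepstudy` data as a stand-in for `mydata` (an assumption, since the original data are not available):

```r
library(lme4)

# Stand-in model: random intercepts on the built-in sleepstudy data
m <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

# Variance of the fixed-effect predictions
var_fixed <- var(as.vector(model.matrix(m) %*% fixef(m)))
# Sum of the random-intercept variances (one per grouping factor)
var_random <- sum(sapply(VarCorr(m), function(v) v[1, 1]))
# Residual variance
var_resid <- sigma(m)^2

total <- var_fixed + var_random + var_resid
r2_marginal    <- var_fixed / total                 # fixed effects only
r2_conditional <- (var_fixed + var_random) / total  # fixed + random effects
```

Note this decomposition only applies as written to Gaussian models with random intercepts; random slopes and non-Gaussian families need the extended formulas from the paper.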

I also used a more detailed $R^2$ computation to compare the two models and to obtain the marginal and conditional $R^2$ (https://github.com/jslefche/rsquared.glmer).

By running the following command:

rsquared.glmm(list(mod.0, mod.1))


The result is the following:

           Class   Family     Link   Marginal Conditional      AIC
1 merModLmerTest gaussian identity 0.00000000   0.5814522 300654.6
2 merModLmerTest gaussian identity 0.00555211   0.5691487 129177.1


This result is in line with the previous one: for mod.1, the fixed terms account for only 0.00555 of the total variance (the marginal $R^2$).
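For what it's worth, the same two quantities are also implemented in the MuMIn package (assuming it is installed); a minimal sketch, again on the built-in `sleepstudy` data rather than the original `mydata`:

```r
library(lme4)
library(MuMIn)

# Stand-in random-intercept model
m <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

# Nakagawa & Schielzeth estimators: columns R2m (marginal) and R2c (conditional)
r.squaredGLMM(m)
```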

As I said at the beginning, I am interested in analysing the effects of A, B and C. As you can see, the effects are significant, although the effect sizes (beta values) are small; the significance is largely due to the large number of observations.

In this case, does it make sense to report that A and C have negative effects (beta = -0.148 and -0.208) and B has a positive effect (beta = 0.349), even though the $R^2$ contribution of these fixed terms is really small? Do you have a better interpretation of the results?

See Nakagawa and Schielzeth (2013) (or this blog entry) for more information and a discussion of $R^2$ for mixed models. First of all, as Nakagawa and Schielzeth note, carrying the OLS $R^2$ over to mixed models is misleading and should be avoided. There are several proposals for computing an $R^2$ for mixed models, but there is still no consensus. To my mind, Nakagawa and Schielzeth's approach is the most interesting, but always remember that $R^2$ for mixed models is not the same "variance explained" as for linear models; it is only an approximation. Among other approaches, you could also check Snijders and Bosker (1994) for comparison.
As for your question, I'd recommend Gelman and Hill's (2007) book. First of all, they suggest comparing "effect sizes" in mixed models. In your case the marginal $R^2$ is very small, but you should also look at the betas: if they are small compared with the variance of your data, i.e. including these effects in the model changes essentially nothing for estimation, you could probably drop them. On the other hand, the regression and mixed-models literature offers many examples of keeping "non-significant" effects in a model, and that decision is almost never based on a simple rule of thumb. For example, in your case the AIC dropped quite dramatically, which suggests that the second model fits better. So I do not see a simple answer here.
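Since the AIC comparison rests on maximum-likelihood fits, a likelihood-ratio test is the usual companion to it; a minimal sketch, again using `sleepstudy` as a stand-in for the OP's data and model names:

```r
library(lme4)

# Fit both models by ML (anova() would refit REML models with ML anyway)
m0 <- lmer(Reaction ~ 1 + (1 | Subject), data = sleepstudy, REML = FALSE)
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy, REML = FALSE)

# Reports AIC, BIC and the likelihood-ratio chi-square test
anova(m0, m1)
```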