This is an update to a previous question that I have posted. I am looking for clarification on comparing glm models using deviance and log-likelihood ratio tests (I have updated my question to make it clearer). I am working with R.

I will first use an example of a logistic regression to illustrate my confusion.

As I understand it, logistic regression models can be compared via their deviances. The deviance is defined as −2 × the log-likelihood (−2LL). In most cases the log-likelihood is negative, so multiplying by −2 gives a positive deviance. The deviance of a model can be obtained in two ways. First, you can use the value listed under “Residual deviance” in the model summary. Second, you can obtain the log-likelihood with `logLik(model)` and multiply it by −2. Either way gives the same value. For example (taken from Discovering Statistics Using R by Field et al. 2012):

```
> summary(eel1)

Call:
glm(formula = Cured ~ Intervention, family = binomial(), data = eel)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5940  -1.0579   0.8118   0.8118   1.3018  

Coefficients:
                         Estimate Std. Error z value Pr(>|z|)   
(Intercept)               -0.2877     0.2700  -1.065  0.28671   
InterventionIntervention   1.2287     0.3998   3.074  0.00212 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 154.08  on 112  degrees of freedom
Residual deviance: 144.16  on 111  degrees of freedom
AIC: 148.16

Number of Fisher Scoring iterations: 4

> logLik(eel1)
'log Lik.' -72.0789 (df=2)
> -2*logLik(eel1)
'log Lik.' 144.1578 (df=2)
```
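The equality can be checked on any ungrouped binary-response fit. Here is a minimal sketch using the built-in `mtcars` data (my own example, not the book's `eel` data):

```r
# Binary (ungrouped) logistic regression on the built-in mtcars data
fit <- glm(am ~ wt, family = binomial(), data = mtcars)

# For ungrouped binary data the saturated log-likelihood is 0,
# so the residual deviance equals -2 * log-likelihood exactly
deviance(fit)                 # residual deviance reported by summary()
as.numeric(-2 * logLik(fit))  # same value
```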

When comparing two nested models, the model with the lower deviance is the better model. To determine whether the difference in deviances is significant, you take the difference of the deviances to give a χ2 value (i.e. χ2 = (−2LL1) − (−2LL2)), with an associated p value.
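As a concrete sketch of this deviance-difference test (again using `mtcars` rather than the book's data), the manual χ2 calculation matches what `anova()` reports:

```r
# Two nested logistic regression models on the built-in mtcars data
m0 <- glm(am ~ wt,      family = binomial(), data = mtcars)
m1 <- glm(am ~ wt + hp, family = binomial(), data = mtcars)

# Chi-square statistic: difference of the two deviances (-2LL here)
chisq   <- deviance(m0) - deviance(m1)
df_diff <- df.residual(m0) - df.residual(m1)
p_value <- pchisq(chisq, df = df_diff, lower.tail = FALSE)
p_value

# anova() performs the same test
anova(m0, m1, test = "Chisq")
```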

Now onto the confusing part (for me at least). I have now begun comparing negative binomial (NB) models using likelihood ratio tests, and from what I can gather, these tests are very similar to comparing the deviances of the logistic regression models. To compare two NB models, I again compare the values of −2 × log-likelihood (−2LL). To obtain −2LL, I have been using `logLik(model)` to get the log-likelihood of each model and then multiplying by −2. Again, in most cases the log-likelihood is negative, so multiplying by −2 makes −2LL positive (example taken from here):

```
> summary(m1)

Call:
glm.nb(formula = daysabs ~ math + prog, data = dat, init.theta = 1.032713156, 
    link = log)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.1547  -1.0192  -0.3694   0.2285   2.5273  

Coefficients:
                Estimate Std. Error z value Pr(>|z|)    
(Intercept)     2.615265   0.197460  13.245  < 2e-16 ***
math           -0.005993   0.002505  -2.392   0.0167 *  
progAcademic   -0.440760   0.182610  -2.414   0.0158 *  
progVocational -1.278651   0.200720  -6.370 1.89e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for Negative Binomial(1.0327) family taken to be 1)

    Null deviance: 427.54  on 313  degrees of freedom
Residual deviance: 358.52  on 310  degrees of freedom
AIC: 1741.3

Number of Fisher Scoring iterations: 1

              Theta:  1.033 
          Std. Err.:  0.106 

 2 x log-likelihood:  -1731.258 

> logLik(m1)
'log Lik.' -865.6289 (df=5)
> -2*logLik(m1)
'log Lik.' 1731.258 (df=5)
```

This time, however, the value of −2LL is not equal to the “Residual deviance” in the model summary. So my first question is: why are −2LL and the residual deviance equal in the logistic regression model above, but not in the NB model?

To then compare two negative binomial models, I work out the difference between the −2LL values of the two models, which follows a χ2 distribution. If the p value associated with the difference in −2LL is less than 0.05, then one model fits the data significantly better than the other. For example:

```
> m1 = glm.nb(daysabs ~ math + prog, data = dat)
> m2 = glm.nb(daysabs ~ math, data = dat)
> logLik(m1)
'log Lik.' -865.6289 (df=5)
> logLik(m2)
'log Lik.' -888.1529 (df=3)
> -2*logLik(m1)
'log Lik.' 1731.258 (df=5)
> -2*logLik(m2)
'log Lik.' 1776.306 (df=3)
> chisq = (-2*logLik(m2)) - (-2*logLik(m1))
> chisq
'log Lik.' 45.04798 (df=3)
> pchisq(chisq, df = 2, lower.tail = FALSE)
'log Lik.' 1.65179e-10 (df=3)
```
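Since the `dat` file is not available here, the same test can be reproduced self-contained on the `quine` data that ships with `MASS` (my choice of data and formulas, not from the original tutorial):

```r
library(MASS)  # ships with R; provides glm.nb() and the quine data

# Two nested negative binomial models of days absent from school
nb1 <- glm.nb(Days ~ Eth + Age, data = quine)
nb2 <- glm.nb(Days ~ Eth, data = quine)

# Likelihood-ratio statistic: difference of the -2LL values
lr      <- as.numeric(2 * (logLik(nb1) - logLik(nb2)))
df_diff <- attr(logLik(nb1), "df") - attr(logLik(nb2), "df")
p_value <- pchisq(lr, df = df_diff, lower.tail = FALSE)
p_value
```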

My next question is: based on the −2LL values of the two NB models, which is the better-fitting model? I have assumed that −2LL is analogous to the deviance in the logistic regression, in that the model with the smaller −2LL is the better-fitting model (that is, m1). Is my assumption correct? Or is it the case that the model with more parameters will always fit the data better, and it is just a matter of whether it fits significantly better?

Furthermore, if you compare two NB models using the anova(m1, m2) function, the output reports 2 × log-likelihood rather than −2 × log-likelihood, which gives negative values. In this case, which is the better-fitting model? (Again, I have been assuming that the model with the lower value of −2 × log-likelihood is the better-fitting model.)

```
Likelihood ratio tests of Negative Binomial Models

Response: daysabs
        Model     theta Resid. df    2 x log-lik.   Test    df LR stat.      Pr(Chi)
1        math 0.8558565       312       -1776.306                                   
2 math + prog 1.0327132       310       -1731.258 1 vs 2     2 45.04798 1.65179e-10
```

Any clarification is much appreciated. I realise there are similar posts on CV; however, those posts did not resolve my questions.

**Answer**

The residual deviance is twice the difference between the log-likelihood of the saturated model and that of your proposed model:

$$\text{Residual Deviance} = 2 \times \big(\ell(\text{saturated model}) - \ell(\text{proposed model})\big)$$

In general it cannot be calculated simply as `-2*logLik(model)` in `R`, because the log-likelihood of the saturated model is not always 0. Read this post for the mathematical details. `-2*logLik(model)` works for the logistic regression because, for ungrouped binary data, the log-likelihood of the saturated model is 0. To calculate the residual deviance of the negative binomial regression model manually in `R`, you can try this:

```
sum(residuals(m1, type = "deviance")^2)  # equals the residual deviance
```
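To see the distinction concretely, here is a self-contained check on the `quine` data from `MASS` (my substitute for the unavailable `dat`): the sum of squared deviance residuals reproduces `deviance(m)`, while `-2*logLik(m)` is a different number because the saturated NB log-likelihood is not 0.

```r
library(MASS)
m <- glm.nb(Days ~ Eth + Age, data = quine)

# These two agree: the residual deviance is the sum of
# squared deviance residuals
sum(residuals(m, type = "deviance")^2)
deviance(m)

# This does not agree with the residual deviance, because the
# saturated model's log-likelihood is nonzero for the NB family
as.numeric(-2 * logLik(m))
```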

You are right that adding parameters to a nested GLM will always increase the likelihood; it is just a matter of statistical significance. It is recommended to choose a model based on the `AIC` and the `BIC` rather than the deviance alone, because the `AIC` and the `BIC` penalize you for adding more parameters.
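For example, the penalized criteria can be compared directly (again on the `quine` data, as an illustrative stand-in for the question's models):

```r
library(MASS)
nb1 <- glm.nb(Days ~ Eth + Age, data = quine)
nb2 <- glm.nb(Days ~ Eth, data = quine)

AIC(nb1, nb2)  # lower AIC = better trade-off of fit vs. complexity
BIC(nb1, nb2)  # BIC penalizes extra parameters more heavily
```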

I hope this helps.

*Attribution: Source: Link, Question Author: JeanDrayton, Answer Author: 一个锅*