“Brute force” expected deviance for logistic regression?

A commonly used goodness of fit statistic for logistic regression is the deviance. This is also known as the likelihood ratio chi-square statistic. It is defined as:
$$D=2\sum_{i=1}^{N}\left[y_i\log\left(\frac{y_i}{n_ip_i}\right)+(n_i-y_i)\log\left(\frac{n_i-y_i}{n_i-n_ip_i}\right)\right]$$

where $N$ is the number of binomial cells, $y_i$ is the response, $n_i$ is the number of units in the $i$th cell, and $p_i$ is the fitted probability, given by the logistic link $\log\left(\frac{p_i}{1-p_i}\right)=x_i^T\beta$. I have been taught that we have, approximately:

$$E(D)\approx N-\dim(\beta)$$

This is based on using the chi-square approximation to the likelihood ratio. This relationship is then used to test for “over-dispersion” in the model. If the ratio $\frac{D}{N-\dim(\beta)}$ is significantly different from $1$ (using the chi-square distribution) we conclude the model exhibits over-dispersion.
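For concreteness, that ratio test is a few lines of code once $D$ and the residual degrees of freedom are in hand. The sketch below (the function name is mine, not from any package) uses the Wilson–Hilferty cube-root normal approximation to the chi-square tail so that only the standard library is needed:

```python
import math

def overdispersion_test(D, N, dim_beta):
    """Compare the deviance D to its approximate chi-square reference.

    Returns the dispersion ratio D/(N - dim(beta)) and an approximate
    upper-tail p-value via the Wilson-Hilferty approximation:
    if X ~ chi^2_k, then (X/k)^(1/3) ~ N(1 - 2/(9k), 2/(9k)).
    """
    k = N - dim_beta                      # residual degrees of freedom
    ratio = D / k                         # should be near 1 if the model fits
    mu = 1 - 2 / (9 * k)
    sigma = math.sqrt(2 / (9 * k))
    z = ((D / k) ** (1 / 3) - mu) / sigma
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # P(chi^2_k > D), approx.
    return ratio, p_value
```

A ratio near 1 with a large p-value is consistent with no over-dispersion; a ratio well above 1 with a small p-value suggests the binomial variance assumption is too tight.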

Is there any reason why we don’t simply use a brute-force approach and average the squared deviance residual over $y_i$ directly? That is, for each cell, calculate the conditional mean exactly via:

$$E(d_i^2|p_i)=\sum_{y_i=0}^{n_i}{n_i\choose y_i}p_i^{y_i}(1-p_i)^{n_i-y_i}\left[2y_i\log\left(\frac{y_i}{n_ip_i}\right)+2(n_i-y_i)\log\left(\frac{n_i-y_i}{n_i-n_ip_i}\right)\right]$$

This is not an intensive calculation to do (or to code), and it can just as easily be used to calculate the conditional variance of $d_i^2$, given that most binomial cells are small. Further, we could mimic the model-level dispersion check by calculating $\frac{d_i^2}{E(d_i^2|p_i)}$, or compute $t$-like quantities $\frac{d_i^2-E(d_i^2|p_i)}{\sqrt{V(d_i^2|p_i)}}$ and search for outliers.
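To illustrate how small the computation is, here is a minimal Python sketch of the exact conditional moments (the function names are mine; the $y=0$ and $y=n$ terms use the usual $0\log 0 = 0$ convention):

```python
import math

def d2(y, n, p):
    """Squared deviance residual for a binomial cell with count y,
    size n, and fitted probability p. Terms with y = 0 or y = n
    contribute zero, by the 0*log(0) = 0 convention."""
    t1 = y * math.log(y / (n * p)) if y > 0 else 0.0
    t2 = (n - y) * math.log((n - y) / (n * (1 - p))) if y < n else 0.0
    return 2.0 * (t1 + t2)

def conditional_moments(n, p):
    """Exact E(d_i^2 | p_i) and V(d_i^2 | p_i) by brute-force
    summation over all n + 1 possible values of y_i."""
    mean = 0.0
    second = 0.0
    for y in range(n + 1):
        prob = math.comb(n, y) * p**y * (1 - p)**(n - y)
        v = d2(y, n, p)
        mean += prob * v
        second += prob * v * v
    return mean, second - mean**2
```

For a Bernoulli cell ($n_i = 1$, $p_i = 0.5$) both outcomes give $d_i^2 = 2\log 2$, so the conditional mean is $2\log 2 \approx 1.39$ and the variance is zero, which already shows how far $E(d_i^2|p_i)$ can sit from the nominal chi-square value of 1 when cells are small.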

I feel like I must be missing something. Any pointers? Has anyone seen this done elsewhere?


Source: Link, Question Author: probabilityislogic, Answer Author: Community
