Unable to fit negative binomial regression in R (attempting to replicate published results)

Attempting to replicate the results from the recently published article,

Aghion, Philippe, John Van Reenen, and Luigi Zingales. 2013. “Innovation and Institutional Ownership.” American Economic Review, 103(1): 277-304.

(Data and Stata code are available at http://www.aeaweb.org/aer/data/feb2013/20100973_data.zip).

I have no problem recreating the first five regressions in R (using OLS and Poisson methods), but I am simply unable to recreate their negative binomial regression results in R, while in Stata the regression runs fine.

Specifically, here’s the R code I’ve written, which fails to run a negative binomial regression on the data:

library(foreign)  # for read.dta()
library(MASS)     # for glm.nb()

data.AVRZ <- read.dta("results_data2011.dta")
sicDummies  <- grep("Isic4", names(data.AVRZ), value = TRUE)
yearDummies <- grep("Iyear", names(data.AVRZ), value = TRUE)
data.column.6 <- subset(data.AVRZ,
                        select = c("cites", sicDummies, yearDummies))
data.column.6 <- na.omit(data.column.6)

glm.nb(cites ~ .,
       data = data.column.6,
       link = log,
       control = glm.control(trace = 3))

Running the above in R, I get the following output:

Initial fit:
Deviance = 1137144 Iterations - 1 
Deviance = 775272.3 Iterations - 2 
Deviance = 725150.7 Iterations - 3 
Deviance = 722911.3 Iterations - 4 
Deviance = 722883.9 Iterations - 5 
Deviance = 722883.3 Iterations - 6 
Deviance = 722883.3 Iterations - 7 
theta.ml: iter 0 'theta = 0.000040'
theta.ml: iter 1 'theta = 7.99248e-05'
Initial value for 'theta': 0.000080
Deviance = 24931694 Iterations - 1 
Deviance = NaN Iterations - 2 
Step halved: new deviance = 491946.5 
Error in glm.fitter(x = X, y = Y, w = w, etastart = eta, offset = offset,  : 
NA/NaN/Inf in 'x'
In addition: Warning message:
step size truncated due to divergence

I have tried a number of different initial values for theta, as well as varying the maximum number of iterations, with no luck. The authors' supplied Stata code works fine, but I still can't seem to coerce R into fitting the model. Are there alternative fitting methods for glm.nb() that might be more robust to the problem I'm encountering?
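One workaround worth noting (a sketch on simulated data, not the paper's dataset) is to sidestep glm.nb()'s alternating algorithm entirely: fix theta, fit the model with plain glm() using MASS::negative.binomial(), and profile theta over a grid rather than relying on theta.ml():

```r
## Sketch on simulated data: profile theta on a grid instead of letting
## glm.nb() alternate between the GLM fit and theta.ml().
library(MASS)

set.seed(1)
n  <- 500
x  <- rnorm(n)
mu <- exp(1 + 0.5 * x)
y  <- rnbinom(n, size = 2, mu = mu)   # overdispersed counts, true theta = 2
d  <- data.frame(y = y, x = x)

## Log-likelihood of the negative binomial model at a fixed theta
loglik.at <- function(theta) {
  fit <- glm(y ~ x, data = d, family = negative.binomial(theta))
  as.numeric(logLik(fit))
}

## Coarse grid search for theta, then refit at the best value
grid      <- exp(seq(log(0.1), log(10), length.out = 41))
theta.hat <- grid[which.max(vapply(grid, loglik.at, numeric(1)))]
fit       <- glm(y ~ x, data = d, family = negative.binomial(theta.hat))
coef(fit)
```

Each inner glm() fit is an ordinary IRLS problem and tends to be better behaved than the joint (theta, beta) iteration, at the cost of a cruder theta estimate.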


There is a lot of literature about the stable parameterization of nonlinear models; for some reason this seems to be largely ignored in R. In this case the "design matrix" for the linear predictor benefits from some work. Let M be the design matrix and p be the parameter vector of the model. The linear predictor for the means μ is given by

    μ = exp(Mp)

The reparameterization is accomplished by modified Gram-Schmidt, which produces a square matrix Σ such that

    M = OΣ

where the columns of O are orthonormal. (In fact some of the columns in this case are 0, so the method must be modified slightly to deal with this.) Then

    Mp = OΣp

Let the new parameters q satisfy

    q = Σp

so that the equation for the means becomes

    μ = exp(Oq)

This parameterization is much more stable: one can fit the model for q and then solve for p. I used this technique to fit the model with AD Model Builder, but it should work with R as well. In any event, having fit the model one should look at the "residuals", i.e. the squared difference between each observation and its mean divided by the estimate of its variance. As seems to be common for this type of model, there are some huge residuals. I think these should be examined before the results of the paper are taken seriously.

Source: Link, Question Author: jayb, Answer Author: dave fournier
