Residual standard error difference between optim and glm

I am trying to reproduce with optim the results of a simple linear regression fitted with the glm or even the nls R functions.
The parameter estimates are the same, but the residual variance estimate and the standard errors of the other parameters are not, particularly when the sample size is low. I suppose this is due to differences in the way the residual standard error is calculated by the maximum likelihood and least squares approaches (dividing by n or by n-k+1; see below in the example).
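Concretely, writing $e_i$ for the residuals and $k = 3$ for the number of estimated parameters (intercept, slope and $\sigma$), the two estimates I have in mind are

$$\hat\sigma_{\text{ML}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2} \qquad \text{vs.} \qquad \hat\sigma_{\text{LS}} = \sqrt{\frac{1}{n-k+1}\sum_{i=1}^{n} e_i^2}.$$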
I understand from my readings on the web that optimization is not a simple task, but I was wondering whether it would be possible to reproduce in a simple way the standard error estimates from glm while using optim.

Simulate a small dataset

set.seed(1)
n = 4 # very small sample size !
b0 <- 5
b1 <- 2
sigma <- 5
x <- runif(n, 1, 100)
y =  b0 + b1*x + rnorm(n, 0, sigma) 

Estimate with optim

negLL <- function(beta, y, x) {
    b0 <- beta[1]
    b1 <- beta[2]
    sigma <- beta[3]
    yhat <- b0 + b1*x
    likelihood <- dnorm(y, yhat, sigma)
    return(-sum(log(likelihood)))
}

starting.values <- c(0, 0, 1) # initial values for b0, b1 and sigma
res <- optim(starting.values, negLL, y = y, x = x, hessian=TRUE)
estimates <- res$par     # Parameters estimates
se <- sqrt(diag(solve(res$hessian))) # Standard errors of the estimates
cbind(estimates,se)


    > cbind(estimates,se)
      estimates         se
b0     9.016513 5.70999880
b1     1.931119 0.09731153
sigma  4.717216 1.66753138

Comparison with glm and nls

> m <- glm(y ~ x)
> summary(m)$coefficients
            Estimate Std. Error   t value    Pr(>|t|)
(Intercept) 9.016113  8.0759837  1.116411 0.380380963
x           1.931130  0.1376334 14.030973 0.005041162
> sqrt(summary(m)$dispersion) # residual standard error
[1] 6.671833
> 
> summary(nls( y ~ b0 + b1*x, start=list(b0 = 5, b1= 2)))

Formula: y ~ b0 + b1 * x

Parameters:
   Estimate Std. Error t value Pr(>|t|)   
b0   9.0161     8.0760   1.116  0.38038   
b1   1.9311     0.1376  14.031  0.00504 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.672 on 2 degrees of freedom

I can reproduce the different residual standard error estimates like this:

> # optim / Maximum Likelihood estimate
> sqrt(sum(resid(m)^2)/n)
[1] 4.717698
> 
> # Least squares estimate (glm and nls estimates)
> k <- 3 # number of parameters
> sqrt(sum(resid(m)^2)/(n-k+1))
[1] 6.671833

Answer

The issue is that the standard errors come from

$$\hat{\sigma}^2 (X^\top X)^{-1}$$

where $\hat{\sigma}^2$ is the unbiased estimator and not the MLE. See summary.lm:

summary.lm
#R function (object, correlation = FALSE, symbolic.cor = FALSE, 
#R     ...) 
#R {
#R    z <- object
#R    p <- z$rank
#R    rdf <- z$df.residual
#R    ...
#R    Qr <- qr.lm(object) 
#R    ... 
#R    r <- z$residuals
#R    f <- z$fitted.values
#R    w <- z$weights
#R    if (is.null(w)) {
#R         mss <- if (attr(z$terms, "intercept")) 
#R             sum((f - mean(f))^2)
#R         else sum(f^2)
#R         rss <- sum(r^2)
#R    }
#R    ...
#R    resvar <- rss/rdf
#R    ...
#R    R <- chol2inv(Qr$qr[p1, p1, drop = FALSE])
#R    se <- sqrt(diag(R) * resvar)
#R    ...
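In other words, with the notation in the excerpt, summary.lm sets resvar = rss/rdf and scales the diagonal of $(X^\top X)^{-1}$ (recovered from the QR decomposition with chol2inv) by it, so the reported standard errors are

$$\widehat{\operatorname{se}}(\hat\beta_j) = \sqrt{\frac{\sum_i e_i^2}{n - p}\,\Big[(X^\top X)^{-1}\Big]_{jj}},$$

where $p$ is the rank of the model; here $p = 2$, which gives the 2 residual degrees of freedom reported by nls above.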

This is the inverse observed Fisher information for $(\beta_0, \beta_1)$ conditional on $\hat{\sigma}^2$. In contrast, the inverse observed Fisher information you compute is for the triplet $(\beta_0, \beta_1, \sigma)$; i.e., you use the MLE of $\sigma$ and not the unbiased estimator. Thus, I gather the standard errors should differ by a factor of $\sqrt{n/(n-3+1)}$ or something similar. This is indeed the case:

set.seed(1)
n = 4 # very small sample size !
b0 <- 5
b1 <- 2
sigma <- 5
x <- runif(n, 1, 100)
y =  b0 + b1*x + rnorm(n, 0, sigma) 

negLL <- function(beta, y, x) {
  b0 <- beta[1]
  b1 <- beta[2]
  sigma <- beta[3]
  yhat <- b0 + b1*x
  return(-sum(dnorm(y, yhat, sigma, log = TRUE)))
}

res <- optim(c(0, 0, 1), negLL, y = y, x = x, hessian=TRUE)
estimates <- res$par     # Parameters estimates
(se <- sqrt(diag(solve(res$hessian))))
#R [1] 5.690 0.097 1.653
k <- 3
se * sqrt(n / (n-k+1))
#R [1] 8.047 0.137 2.338
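For reference, here is a minimal check against lm, which gives the same fit as the Gaussian glm in the question; the scaled values above agree with its standard errors up to the numerical error in the Hessian approximation from optim:

# least-squares standard errors: 8.0759837 (intercept) and 0.1376334 (slope),
# matching the glm and nls summaries in the question
summary(lm(y ~ x))$coefficients[, "Std. Error"]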

To elaborate further, as usεr11852 requested, the log-likelihood is

$$l(\beta, \sigma) = -\frac{n}{2}\log(2\pi) - n\log\sigma - \frac{1}{2\sigma^2}(y - X\beta)^\top(y - X\beta)$$

where $X$ is the design matrix and $n$ is the number of observations. Consequently, the observed information matrix for $\beta$ is

$$-\frac{\partial^2}{\partial\beta\,\partial\beta^\top}\, l(\beta, \sigma) = \frac{1}{\sigma^2} X^\top X$$

Inverting this gives the covariance matrix of $\hat\beta$ for a given $\sigma$, so the standard errors are $\sqrt{\sigma^2\,[(X^\top X)^{-1}]_{jj}}$; now we can either plug in the MLE or the unbiased estimator of $\sigma$, as the following shows:

m <- lm(y ~ x)
X <- cbind(1, x)
sqrt(sum(resid(m)^2)/n       * diag(solve(crossprod(X))))
#R                     x 
#R 5.71058285 0.09732149
k <- 3
sqrt(sum(resid(m)^2)/(n-k+1) * diag(solve(crossprod(X))))
#R                   x 
#R 8.0759837 0.1376334 

We can do the same with a QR decomposition, as lm does:

obj <- qr(X)
sqrt(sum(resid(m)^2)/(n-k+1) * diag(chol2inv(obj$qr)))
#R [1] 8.0759837 0.1376334

So, to answer

I understand from my readings on the web that optimization is not a simple task, but I was wondering whether it would be possible to reproduce in a simple way the standard error estimates from glm while using optim.

you need to scale up the standard errors in the Gaussian example you use.

Attribution
Source: Link, Question Author: Gilles, Answer Author: Scortchi – Reinstate Monica