# Confidence intervals for regression parameters: Bayesian vs. classical

Given two arrays x and y, both of length n, I fit the model y = a + b*x and want to calculate a 95% confidence interval for the slope. This is (b - delta, b + delta), where b is the least-squares estimate and

```
delta = qt(0.975, df = n - 2) * se.slope
```

and se.slope is the standard error of the slope. One way to get the standard error of the slope from R is `summary(lm(y ~ x))$coef[2, 2]`.
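For reference, the same closed-form computation can be sketched outside R. Below is a minimal Python version (numpy and scipy assumed), using the data from the JAGS example further down; it reproduces the classical delta:

```python
import numpy as np
from scipy import stats

# Data from the JAGS example below
x = np.arange(1, 11, dtype=float)
y = np.array([30.5, 40.6, 20.5, 59.1, 52.5,
              96.0, 121.4, 78.9, 112.1, 128.4])
n = len(x)

# Least-squares slope and its standard error
sxx = np.sum((x - x.mean()) ** 2)
b = np.sum((x - x.mean()) * (y - y.mean())) / sxx
a = y.mean() - b * x.mean()
resid = y - (a + b * x)
s2 = np.sum(resid ** 2) / (n - 2)     # residual variance
se_slope = np.sqrt(s2 / sxx)

# delta = qt(0.975, df = n - 2) * se.slope
delta = stats.t.ppf(0.975, df=n - 2) * se_slope
print(round(delta, 4))  # 4.6939
```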

Now suppose I write the likelihood of the slope given x and y, multiply it by a "flat" prior, and use an MCMC technique to draw a sample m from the posterior distribution. Define

```
lims = quantile(m, c(0.025, 0.975))
```

My question: is `(lims[2] - lims[1])/2` approximately equal to delta as defined above?

**Addendum:** Below is a simple JAGS model where these two seem to be different.

```
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- a + b * x[i]
  }
  a ~ dnorm(0, .00001)
  b ~ dnorm(0, .00001)
  tau <- pow(sigma, -2)
  sigma ~ dunif(0, 100)
}
```

I run the following in R:

```
N <- 10
x <- 1:10
y <- c(30.5, 40.6, 20.5, 59.1, 52.5,
       96.0, 121.4, 78.9, 112.1, 128.4)
lin <- lm(y ~ x)

# Calculate delta for a 95% confidence interval on the slope
delta.lm <- qt(0.975, df = N - 2) * summary(lin)$coef[2, 2]

library('rjags')
jags <- jags.model('example.bug',
                   data = list('x' = x, 'y' = y, 'N' = N))
update(jags, 1000)
params <- jags.samples(jags, c('a', 'b', 'sigma'), 7500)
lims <- quantile(params$b, c(0.025, 0.975))
delta.bayes <- (lims[2] - lims[1]) / 2

cat("Classical confidence region: +/-", round(delta.lm, digits = 4), "\n")
cat("Bayesian confidence region:  +/-", round(delta.bayes, digits = 4), "\n")
```

And get:

```
Classical confidence region: +/- 4.6939
Bayesian confidence region:  +/- 5.1605
```

Rerunning this multiple times, the Bayesian confidence region is consistently wider than the classical one. Is this due to the priors I've chosen?

The ‘problem’ is in the prior on sigma. Try a less informative setting

```
tau ~ dgamma(1.0E-3, 1.0E-3)
sigma <- pow(tau, -1/2)
```

in your JAGS model file. Then run a longer burn-in,

```
update(jags, 10000)
```

then draw the samples and summarise your quantity of interest. It should line up reasonably well with the classical version.

Clarification: The extra updating is just to make sure you reach the target distribution whatever prior you choose, although chains for models like this one, with diffuse priors and random starting values, do take longer to converge. In a real problem you would check convergence before summarising anything, but I don't think convergence is the main issue in your example.