Back transforming regression results when modeling log(y)

I’m fitting a regression on log(y). Is it valid to back-transform point estimates (and confidence/prediction intervals) by exponentiating? I don’t believe so, since $E[f(X)] \neq f(E[X])$ in general, but I wanted others’ opinions.

My example below shows the discrepancy that back-transforming produces (0.239 vs 0.219).

set.seed(123)

a <- -5
b <- 2

x <- runif(100, 0, 1)
y <- exp(a * x + b + rnorm(100, 0, 0.2))  # lognormal noise around exp(a*x + b)
# plot(x, y)

### NLS Fit
f <- function(x, a, b) exp(a * x + b)
fit <- nls(y ~ exp(a * x + b), start = c(a = -10, b = 15))
co <- coef(fit)
# curve(f(x = x, a = co[1], b = co[2]), add = TRUE, col = 2, lwd = 1.2)
predict(fit, newdata = data.frame(x = 0.7))
[1] 0.2393773

### LM Fit
# plot(x, log(y))
# abline(lm(log(y) ~ x), col = 2)
fit <- lm(log(y) ~ x)
temp <- predict(fit, newdata = data.frame(x = 0.7), interval = "prediction")
exp(temp)
        fit       lwr       upr
1 0.2199471 0.1492762 0.3240752

Answer

It depends on what you want to obtain at the other end.

A confidence interval for a transformed parameter transforms just fine. If it has the nominal coverage on the log scale, it will have the same coverage back on the original scale, because the transformation is monotone.
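
For example (a minimal sketch of my own, reusing the question’s simulated setup rather than anything from the original answer): exponentiating the endpoints of the slope’s confidence interval gives an interval for the multiplicative effect exp(slope) with the same coverage, precisely because exp() is monotone.

set.seed(1)
x <- runif(200)
y <- exp(-5 * x + 2 + rnorm(200, sd = 0.2))   # same lognormal setup as the question
fit <- lm(log(y) ~ x)

confint(fit)["x", ]        # 95% CI for the slope on the log scale
exp(confint(fit)["x", ])   # 95% CI for the multiplicative effect exp(slope)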

A prediction interval for a future observation also transforms just fine.
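
A quick check of that claim (again a sketch of my own, assuming the question’s data-generating process): exponentiated prediction intervals from the log-scale fit should cover a new observation of y at roughly the nominal 95% rate.

set.seed(2)
covered <- replicate(2000, {
  x <- runif(100)
  y <- exp(-5 * x + 2 + rnorm(100, sd = 0.2))
  fit <- lm(log(y) ~ x)
  y_new <- exp(-5 * 0.7 + 2 + rnorm(1, sd = 0.2))   # a new observation at x = 0.7
  pred_int <- exp(predict(fit, newdata = data.frame(x = 0.7),
                          interval = "prediction"))
  pred_int[, "lwr"] <= y_new && y_new <= pred_int[, "upr"]
})
mean(covered)   # should be close to 0.95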

An interval for a mean on the log scale will not generally be a suitable interval for the mean on the original scale.

However, sometimes you can either exactly or approximately produce a reasonable estimate for the mean on the original scale from the model on the log scale.

Care is required, though, or you may end up producing estimates with somewhat surprising properties (for example, it is possible to produce estimators that don’t themselves have a population mean; this isn’t everyone’s idea of a good thing).

So, for example, in the lognormal case, when you exponentiate back you have a nice estimate of $\exp(\mu_i)$, and you might note that the population mean is $\exp(\mu_i + \frac{1}{2}\sigma^2)$, so you may think to improve $\exp(\hat\mu_i)$ by scaling it by some estimate of $\exp(\frac{1}{2}\sigma^2)$.
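
A rough numerical sketch of that adjustment (my own, using the question’s setup; R’s sigma() gives the residual standard deviation on the log scale):

a <- -5; b <- 2; sig <- 0.2; x0 <- 0.7

set.seed(3)
x <- runif(100)
y <- exp(a * x + b + rnorm(100, sd = sig))
fit <- lm(log(y) ~ x)

mu_hat <- predict(fit, newdata = data.frame(x = x0))   # estimate of a*x0 + b
s2_hat <- sigma(fit)^2                                 # estimate of sigma^2

exp(mu_hat)                    # naive back-transform (estimates the conditional median)
exp(mu_hat + s2_hat / 2)       # adjusted estimate of the conditional mean
exp(a * x0 + b + sig^2 / 2)    # true conditional mean, for comparison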

One should at least be able to get consistent estimation and indeed some distributional asymptotics via Slutsky’s theorem (specifically the product form), as long as one can consistently estimate the adjustment. The continuous mapping theorem says that you can if you can estimate $\sigma^2$ consistently… which is the case.

So as long as $\hat\sigma^2$ is a consistent estimator of $\sigma^2$, then
$\exp(\hat\mu_i)\exp(\frac{1}{2}\hat\sigma^2)$ converges in distribution to the distribution of $\exp(\hat\mu_i)\exp(\frac{1}{2}\sigma^2)$ (which by inspection will then be asymptotically lognormally distributed). Since $\hat\mu_i$ will be consistent for $\mu_i$, by the continuous mapping theorem $\exp(\hat\mu_i)$ will be consistent for $\exp(\mu_i)$; combined with the consistent estimate of $\exp(\frac{1}{2}\sigma^2)$, the product is consistent for $\exp(\mu_i + \frac{1}{2}\sigma^2)$, and so we have a consistent estimator of the mean on the original scale.
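
To see that consistency claim in action (a small simulation of my own, not part of the original answer), the adjusted estimator settles down near the true conditional mean as the sample size grows:

set.seed(4)
a <- -5; b <- 2; sig <- 0.2; x0 <- 0.7
true_mean <- exp(a * x0 + b + sig^2 / 2)

for (n in c(100, 1000, 10000)) {
  x <- runif(n)
  y <- exp(a * x + b + rnorm(n, sd = sig))
  fit <- lm(log(y) ~ x)
  est <- exp(predict(fit, newdata = data.frame(x = x0)) + sigma(fit)^2 / 2)
  cat(sprintf("n = %5d   estimate = %.4f   (true = %.4f)\n", n, est, true_mean))
}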

See here.

Some related posts:

Back transformation of an MLR model

Back Transformation

Back-transformed confidence intervals

Attribution
Source: Link, Question Author: Glen, Answer Author: Community
