# Additive Error or Multiplicative Error?

I’m relatively new to statistics and would appreciate help understanding this better.

In my field there is a commonly used model of the form:

$$P_t = P_o(V_t)^\alpha$$

When people fit the model to data, they usually linearise it and fit the following:

$$\log(P_t) = \log(P_o) + \alpha \log(V_t) + \epsilon$$

Is this ok? I read somewhere that because of noise in the signal the actual model should be

$$P_t = P_o(V_t)^\alpha + \epsilon$$

and this cannot be linearised as above. Is this true? If so, does anyone know of a reference I can read to learn more about it and possibly cite in a report?

Which model is appropriate depends on how variation around the mean comes into the observations. It may well come in multiplicatively or additively … or in some other way.

There can even be several sources of this variation, some which may enter multiplicatively and some which enter additively and some in ways that can’t really be characterized as either.

Sometimes there’s clear theory to establish which is suitable. Sometimes pondering the main sources of variation about the mean will reveal an appropriate choice. Frequently people have no clear idea which to use, or whether several sources of variation of different kinds may be needed to describe the process adequately.

With the log-linear model, where linear regression is used:

$$\log(P_t) = \log(P_o) + \alpha \log(V_t) + \epsilon$$

the OLS regression model assumes constant log-scale variance, and if that’s the case, then the original data will show an increasing spread about the mean as the mean increases.
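As a rough illustration of that point (a Python/NumPy sketch with made-up parameter values, not anything from a real data set), we can simulate multiplicative log-normal noise and recover $P_o$ and $\alpha$ by ordinary least squares on the logs:

```python
import numpy as np

rng = np.random.default_rng(0)
P0, alpha, sigma = 2.0, 1.5, 0.1   # hypothetical "true" values, for illustration

V = rng.uniform(1.0, 10.0, size=500)
# Multiplicative log-normal noise: P_t = P0 * V^alpha * exp(eps)
P = P0 * V**alpha * np.exp(rng.normal(0.0, sigma, size=V.size))

# OLS on the log scale: log(P) = log(P0) + alpha * log(V) + eps
X = np.column_stack([np.ones_like(V), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
P0_hat, alpha_hat = np.exp(coef[0]), coef[1]
```

Because the noise really is multiplicative here, the log-scale fit is the right one, and the estimates land close to the values used in the simulation.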

On the other hand, this sort of model:

$$P_t = P_o(V_t)^\alpha + \epsilon$$

is generally fitted by nonlinear least squares, and again, if constant variance (the default assumption for NLS) holds, then the spread about the mean should be constant.
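For the additive-error version, a sketch using SciPy's `curve_fit` (again with invented parameter values, purely to illustrate the mechanics) looks like this:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
P0, alpha, sigma = 2.0, 1.5, 1.0   # hypothetical values, for illustration

V = rng.uniform(1.0, 10.0, size=500)
# Additive noise with constant variance: P_t = P0 * V^alpha + eps
P = P0 * V**alpha + rng.normal(0.0, sigma, size=V.size)

def power_model(v, p0, a):
    return p0 * v**a

# Nonlinear least squares directly on the original scale
popt, pcov = curve_fit(power_model, V, P, p0=[1.0, 1.0])
P0_hat, alpha_hat = popt
```

Here no transformation is applied: the curve is fitted on the original scale, which is what justifies the constant-variance assumption when the error truly enters additively.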

[In a plot of the second model you may get the visual impression that the spread decreases as the mean increases; that’s actually an illusion caused by the increasing slope – we tend to judge the spread orthogonal to the curve rather than vertically, so we get a distorted impression.]

If you have nearly constant spread on either the original or the log scale, that might suggest which of the two models to fit, not because it proves it’s additive or multiplicative, but because it leads to an appropriate description of the spread as well as the mean.
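A crude way to check this on data is to compare residual spread on the two scales. The sketch below (simulated data with multiplicative noise, and the true mean used in place of a fitted one purely for simplicity) shows the pattern: spread grows with the mean on the original scale but stays flat on the log scale:

```python
import numpy as np

rng = np.random.default_rng(2)
P0, alpha, sigma = 2.0, 1.5, 0.2   # hypothetical values, for illustration

V = rng.uniform(1.0, 10.0, size=1000)
P = P0 * V**alpha * np.exp(rng.normal(0.0, sigma, size=V.size))

mu = P0 * V**alpha                  # true mean, standing in for a fitted curve
raw_resid = P - mu                  # residuals on the original scale
log_resid = np.log(P) - np.log(mu)  # residuals on the log scale

# Compare residual spread at large vs small means
lo, hi = V < 4.0, V >= 4.0
ratio_raw = raw_resid[hi].std() / raw_resid[lo].std()
ratio_log = log_resid[hi].std() / log_resid[lo].std()
# ratio_raw is well above 1 (spread grows with the mean);
# ratio_log stays near 1 (roughly constant spread on the log scale)
```

In practice you would use residuals from an actual fit rather than the true mean, but the diagnostic idea is the same.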

Of course, one might also have additive error with non-constant variance.

However, there are still other models for fitting such functional relationships that imply different relationships between mean and variance, such as a Poisson or quasi-Poisson GLM, where the spread is proportional to the square root of the mean.
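To make the GLM option concrete, here is a minimal NumPy sketch (simulated count data, hypothetical parameter values) fitting the same power-law mean with a Poisson GLM and log link via iteratively reweighted least squares, so the variance equals the mean rather than being constant on either scale:

```python
import numpy as np

rng = np.random.default_rng(3)
P0, alpha = 2.0, 1.5               # hypothetical values, for illustration

V = rng.uniform(1.0, 10.0, size=2000)
y = rng.poisson(P0 * V**alpha)     # Poisson counts: variance equals the mean

# Log link makes the mean log-linear: log(mu) = log(P0) + alpha * log(V)
X = np.column_stack([np.ones_like(V), np.log(V)])

# Start from OLS on log(y + 1), then run IRLS
beta, *_ = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu        # working response
    W = mu                         # Poisson working weights under the log link
    WX = X * W[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)

P0_hat, alpha_hat = np.exp(beta[0]), beta[1]
```

In practice one would reach for a statistics package's GLM routine rather than hand-rolling IRLS, but the point stands: the same mean function $P_o (V_t)^\alpha$ can be paired with different variance assumptions, and the GLM family choice encodes that assumption explicitly.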