How is the confidence interval calculated for the ACF function?

For example, in R, calling the acf() function plots a correlogram by default and draws a 95% confidence interval. Looking at the plotting code, if you call plot(acf_object, ci.type = "white"), you see:

qnorm((1 + ci)/2)/sqrt(x$n.used)

as the upper limit for the white-noise type. Can someone explain the theory behind this method? Why do we take the qnorm of (1 + 0.95)/2 and then divide by the square root of the number of observations?

Answer

In Chatfield’s Analysis of Time Series (1980), he gives a number of methods for estimating the autocovariance function, including the jack-knife method. He also notes that the autocorrelation coefficient at lag k, $r_k$, is asymptotically normally distributed, and that Var($r_k$) ≈ 1/N (where N is the number of observations). These two observations are pretty much the core of the issue. He doesn’t give a derivation of the first result, but references Kendall & Stuart, The Advanced Theory of Statistics (1966).
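As a quick sanity check of Var($r_k$) ≈ 1/N, here is a small simulation sketch (not from the answer; written in Python with numpy rather than R, and the sample size, lag, and replicate count are arbitrary choices). It draws many white-noise series, computes the lag-k sample autocorrelation of each, and compares the empirical variance of $r_k$ to 1/N:

```python
import numpy as np

rng = np.random.default_rng(0)
N, lag, reps = 500, 3, 2000

# For each replicate: simulate white noise, then compute the lag-k
# sample autocorrelation r_k = sum(x_t * x_{t+k}) / sum(x_t^2).
r_k = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal(N)
    x = x - x.mean()
    r_k[i] = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Under the white-noise null, Var(r_k) should be close to 1/N.
print(np.var(r_k), 1 / N)
```

The empirical variance comes out very close to 1/N = 0.002, consistent with the asymptotic result quoted from Chatfield.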

Under the white-noise null hypothesis, $r_k$ is therefore approximately $N(0, 1/N)$. For a two-sided test at significance level $\alpha$, we want $\alpha/2$ in each tail, so we need the $1 - \alpha/2$ quantile of the standard normal distribution.

Writing the confidence level as $ci = 1 - \alpha$, note that

$$\frac{1 + ci}{2} = \frac{1 + (1 - \alpha)}{2} = 1 - \frac{\alpha}{2},$$

which is exactly the argument passed to qnorm. Multiplying the resulting quantile by the standard deviation of $r_k$, i.e. $\sqrt{1/N} = 1/\sqrt{N}$ (dividing by sqrt(x$n.used) in the code), gives the plotted limit.
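To make the arithmetic concrete, here is a sketch of the same computation in Python (scipy's norm.ppf plays the role of R's qnorm; the value n_used = 100 is an illustrative stand-in for x$n.used):

```python
from math import sqrt
from scipy.stats import norm

ci = 0.95       # confidence level, ci = 1 - alpha, so alpha = 0.05
n_used = 100    # illustrative stand-in for x$n.used

# R's qnorm((1 + ci)/2) is the 1 - alpha/2 quantile of the
# standard normal: here the 0.975 quantile, approx. 1.96.
z = norm.ppf((1 + ci) / 2)

# Scale by the standard deviation of r_k, sqrt(1/N) = 1/sqrt(N).
limit = z / sqrt(n_used)

print(z, limit)
```

For ci = 0.95 this recovers the familiar ±1.96/√N band that acf() draws on the correlogram.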

Attribution
Source : Link , Question Author : Nick Nikolaev , Answer Author : Robert de Graaf
