# What is the autocorrelation function of a time series arising from computing a moving standard deviation?

Say I have a time series of observations and I measure its variability as the standard deviation (SD) in a rolling window of width $w$, moving the window over the series in single time steps. Assume further that $w = \left \lceil{n/2}\right \rceil$, where $n$ is the number of observations, and that the window is right aligned; I have to observe $w$ values of the series before the first moving-window estimate of the SD is produced.
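For concreteness, here is a sketch of the computation I have in mind (NumPy; the simulated series and all parameter choices are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                   # hypothetical series length
x = rng.normal(size=n)    # stand-in for the observed series

w = int(np.ceil(n / 2))   # window width w = ceil(n/2)

# right-aligned window: the estimate at time t uses x[t-w+1], ..., x[t],
# so the first estimate appears at t = w - 1
sd = np.array([x[t - w + 1 : t + 1].std(ddof=1) for t in range(w - 1, n)])
# the derived series has n - w + 1 values
```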

Is there an expected form for the ACF of the new time series of SD values? I presume the dependence on previous values will relate to the window width $w$, but is the ACF of such a series related to the ACF of an $\mathrm{MA}(w)$ process?

### Background

I’m trying to think through the implications of deriving a time series of the variance of the original time series via moving windows. Having computed the derived series of SD values, the next step commonly applied is to look for a trend in that derived series. Because each value in the derived series depends to some extent on previous values of the original series, the values of the derived series are not independent. A question that crops up frequently is therefore how to account for that lack of independence.

Such computations (the moving windows) are often done to time series to look for evidence of indicators (increasing variance, increasing AR(1) coefficient) of impending threshold response (so-called critical transitions).

The ACF of the rolling standard deviation cannot generally be obtained from the ACF of the time series, because the rolling standard deviation is fundamentally a nonlinear filter.

To avoid boundary effects, take $(X_t)_{t \in\mathbb{Z}}$ to be a doubly infinite stationary process with mean 0. As I understand the rolling window computation, we introduce the rolling variance estimator
$$s_t^2 = \frac{1}{w+1}\sum_{i=0}^w X_{t-i}^2,$$
which is a backward moving average of the squared process. The standard deviation $s_t = \sqrt{s_t^2}$ is all the more a nonlinear filter. However, $(s_t^2)_{t \in \mathbb{Z}}$ is a causal linear filter of the squared process, and its ACF can therefore be derived from the ACF of $(X_t^2)_{t \in \mathbb{Z}}$. If the time series is a sequence of i.i.d. variables, so is the squared process, in which case $(s_t^2)_{t \in \mathbb{Z}}$ is an MA$(w)$ process with all weights equal to $1/(w+1)$. Using an ARCH(1) model we can, on the other hand, find an example where the process itself is a white noise process but the squared process is not. In fact, for the ARCH(1) model the ACF of the squared process coincides with the ACF of an AR(1) process, in which case the ACF of the rolling variance is the same as that of a moving average of an AR(1) process.
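The i.i.d. case is easy to check numerically. An MA$(w)$ process with equal weights $1/(w+1)$ has the triangular ACF $\rho(h) = 1 - h/(w+1)$ for $h \le w$ and $\rho(h) = 0$ beyond, and the sample ACF of the rolling variance of an i.i.d. series should match it. A simulation sketch (the helper `sample_acf` and the parameter choices are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
w = 10            # window parameter: w + 1 squared terms per estimate
n = 200_000       # long series so the sample ACF is stable
x = rng.normal(size=n)

# rolling variance of the mean-zero i.i.d. series: an equal-weight
# moving average of the squares
s2 = np.convolve(x**2, np.ones(w + 1) / (w + 1), mode="valid")

def sample_acf(y, max_lag):
    """Sample autocorrelations at lags 0, ..., max_lag."""
    y = y - y.mean()
    c0 = np.dot(y, y) / len(y)
    acf = [1.0]
    for h in range(1, max_lag + 1):
        acf.append(np.dot(y[:-h], y[h:]) / len(y) / c0)
    return np.array(acf)

acf = sample_acf(s2, w + 2)
# triangular MA(w) ACF: 1 - h/(w+1) for h <= w, zero beyond lag w
theory = np.array([max(0.0, 1 - h / (w + 1)) for h in range(w + 3)])
```

For a long enough series `acf` is close to `theory` at every lag, and in particular it drops to roughly zero beyond lag $w$.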

Clearly, the computations above are idealized, since in practice we would probably also use a rolling mean to center the time series. As I see it, this would only complicate explicit computations further.

With explicit assumptions about the time series (an ARCH structure, or a Gaussian distribution) there is a certain chance you can compute the ACF of the squared process, and from this the ACF of the rolling variance.

On a more qualitative level the rolling variance and rolling standard deviation will inherit ergodicity and various mixing properties from the time series itself. This is useful if you want to apply general tools from (nonlinear) time series analysis and stochastic processes to assess if the rolling standard deviation is stationary (which I understand is of interest).