What is the problem with overdifferencing a long memory time series?

Suppose I have a long memory time series and instead of using fractional differentiation I take a first difference.

What kind of problems am I going to run into? Is there any advantage to doing the fractional differentiation?


First differences remove all of the long-term memory, whilst fractional differences preserve some of it. If, therefore, the long-term memory is important for your intended application, fractional differencing is the way to go. Chapter 5 of the book Advances in Financial Machine Learning discusses this in some detail.
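To make the point concrete, here is a minimal sketch of fixed-window fractional differencing in the spirit of that chapter. The weights come from expanding the difference operator (1 − B)^d via the recursion w_0 = 1, w_k = −w_{k−1}·(d − k + 1)/k; setting d = 1 reproduces the ordinary first difference (weights 1, −1, 0, 0, …), while a fractional d such as 0.4 leaves slowly decaying weights on past observations, which is how memory is retained. The function names and the window length are illustrative, not the book's exact implementation.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the binomial expansion of (1 - B)^d.

    w_0 = 1, and w_k = -w_{k-1} * (d - k + 1) / k.
    """
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(x, d, n_weights=20):
    """Fractionally difference series x with a fixed window of weights.

    The first n_weights - 1 outputs are NaN because the window is incomplete.
    """
    w = frac_diff_weights(d, n_weights)
    out = np.full(len(x), np.nan)
    for t in range(n_weights - 1, len(x)):
        # Most recent observation gets weight w_0; older ones w_1, w_2, ...
        out[t] = np.dot(w, x[t - n_weights + 1 : t + 1][::-1])
    return out

# d = 1: weights collapse to [1, -1, 0, ...] -- the ordinary first difference,
# so all information about observations further back is discarded.
print(frac_diff_weights(1.0, 5))
# d = 0.4: weights on lags 2, 3, 4, ... are still nonzero and shrink slowly,
# so the transformed series keeps a weighted memory of the distant past.
print(frac_diff_weights(0.4, 5))
```

Running the second call shows weights like 1, −0.4, −0.12, −0.064, …: they never cut off abruptly, which is the "partial memory" the answer refers to.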

For example, suppose you want to predict a long-term (financial?) trend using a machine learning algorithm and you feed it first differences as training data. It will be extremely difficult for the algorithm to learn the trend, as the trend is not present in the data it is given; the algorithm is much more likely to learn to predict just one step ahead, and thereby to "fail."

The above book suggests that this "over-differencing" is one reason why ML on (financial) time series is so often problematic. The book also, tentatively, suggests that this may be why the Efficient Market Hypothesis holds such sway in academic finance: removing the "historical memory" from the data logically leads to the conclusion that future prices cannot be predicted from past prices, and any evidence contradicting this belief is labelled an anomaly, e.g. momentum. However, fractionally differenced financial data show that trends and momentum do persist, whilst the data themselves remain stationary.

Source: Link, Question Author: badmax, Answer Author: babelproofreader