How to decrease the information loss from lag variables?

I am using a distributed lag model to analyze time series data. The study period is 18 years, observed yearly. When I include a 1-year lag effect, the first value of the lag variable is missing. A 2-year lag makes the first two values of the lag variable missing, and so on.
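As a minimal sketch of what happens when the lag variables are built (assuming a pandas DataFrame with an annual index and an outcome column named `y`; the data and column names are hypothetical, not from the question):

```python
import numpy as np
import pandas as pd

# Hypothetical annual series: 18 yearly observations.
df = pd.DataFrame(
    {"y": np.random.default_rng(0).normal(size=18)},
    index=pd.period_range("2000", periods=18, freq="Y"),
)

# A k-year lag shifts the series down by k rows, so its first k values are missing.
df["y_lag1"] = df["y"].shift(1)  # first value is NaN
df["y_lag2"] = df["y"].shift(2)  # first two values are NaN

print(df.head())        # leading NaNs in the lag columns
print(df.isna().sum())  # y_lag1: 1 missing, y_lag2: 2 missing
```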

I plan to analyze five lag effects in my study, but the five lag variables cause up to five missing values at the start of the series. I assumed multiple imputation might help me recover the information lost in these lag variables, but the imputation results are not reasonable.

Is there a better way to impute the missing values in the lag variables?

Answer

You can’t help but lose information when you use lags. I can’t think of any way around this, except to use shorter lags.
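To make the trade-off behind "use shorter lags" concrete (the 18-year study length comes from the question; the rest is just arithmetic):

```python
n = 18  # yearly observations in the study period

# With a maximum lag of L, only the last n - L rows have all lag values observed,
# so the complete-case sample shrinks by one year per additional lag.
for max_lag in range(1, 6):
    print(f"max lag {max_lag}: {n - max_lag} complete observations")
# A maximum lag of 5 leaves 13 of 18 years, i.e. roughly a 28% loss of usable rows.
```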

Attribution
Source: Link, Question Author: cchien, Answer Author: Zach