I would like to clarify how Granger causality can/should be used in practice, and how to interpret the statistical significance given by the test.

Also, I would like to fill in this table with entries like “we don’t know”, or, if we do know something, what it is (it will surely not be causality, but maybe something else?).

|                              | X Granger-causes Y: sig. | X Granger-causes Y: not |
|------------------------------|--------------------------|-------------------------|
| **Y Granger-causes X: sig.** | ...                      | ...                     |
| **Y Granger-causes X: not**  | ...                      | ...                     |

**Answer**

To begin with, the source you added has almost all you need to get acquainted with the concept of *Granger (non)causality* (though I like the Scholarpedia article more). The most crucial point is that in practice G-causality asks: would variable $x$ be useful in predicting variable $y$, meaning that the information contained in its values up to lag $p$ is **statistically** significant. Thus G-causality is purely a statistical property of the data, which may nevertheless be supported by a theoretically sound hypothesis.
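The mechanics behind the test can be sketched by hand in $R$: fit a model of $y$ on its own lags (restricted) and one that also includes lags of $x$ (unrestricted), and compare them with an $F$-test. A minimal sketch on simulated data; the coefficients and lag order below are illustrative assumptions, not from the question:

```r
# Simulate data where y depends on the first lag of x.
set.seed(42)
n <- 200
x <- arima.sim(list(ar = 0.5), n = n)
y <- 0.4 * c(0, head(x, -1)) + arima.sim(list(ar = 0.3), n = n)

p <- 1  # lag order: a hyper-parameter you must choose
yt   <- y[(p + 1):n]
ylag <- y[1:(n - p)]
xlag <- x[1:(n - p)]

restricted   <- lm(yt ~ ylag)         # y explained by its own past only
unrestricted <- lm(yt ~ ylag + xlag)  # ... plus the past of x

# F-test of the restriction; a small p-value suggests x G-causes y.
anova(restricted, unrestricted)
```

This is exactly the comparison `grangertest` automates (it also handles higher lag orders for you).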

Some practical considerations:

- If you have more than two stationary signals, it may happen that they have to be described jointly by a vector autoregressive (VAR) model. Pairwise G-causality could then be misleading, since you ignore the impacts coming from the other variables.

  **Suggestion in** $R$: try `library(vars)` and `?causality` for instantaneous and G-causality when you have more than two variables and a VAR seems meaningful (when a VAR really is meaningful is a separate question; some of those ideas are also related to the G-causality concept).
- The previous suggestion is preferable in the multivariate case. For the pairwise case, try `library(lmtest)` and `?grangertest`. On the other hand, the pairwise test is the option when you do have to work with only two variables. Even in the multivariate case you may still run `grangertest` just to flag possibly useful covariates or to spot potential endogeneity issues. I usually do so when short of time, since identifying subsets of variables and selecting hyper-parameters (the lag order) for VAR models is a not-so-quick task. So for quickly detecting variables that carry useful predictive information it is fine to go pairwise (but do not stop with these results; they are only auxiliary).
- Note that under the null hypothesis you test non-G-causality, so *small* $p$ values mark G-causal relationships.
- The conclusion from a G-causality test would be: “if $x$ G-causes $y$ statistically significantly, then $x$ contains useful information that helps to predict future values of $y$”. However, if we conclude the same about $y$ (a feedback effect), it means that $x$ and $y$ are both endogenous and a VAR-type model is needed. Conversely, if neither variable G-causes the other, that is one sign that a VAR specification is not necessary, and you may go for separate ARMA models (note that your variables have to be stationary for G-causality tests to be performed correctly).
- Any other suggestions from the community are welcome. @zik, you may try gretl as an alternative to $R$ for implementing Granger-causality tests.
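The points above can be put together in a short $R$ sketch: a pairwise `grangertest` first, then a VAR-based `causality` test. The simulated series and the 0.5 coefficient are illustrative assumptions; by construction $x$ should help predict $y$ but not the other way round:

```r
library(lmtest)
library(vars)

# Simulate: y depends on the first lag of x; x evolves on its own.
set.seed(1)
n <- 300
x <- arima.sim(list(ar = 0.5), n = n)
y <- 0.5 * c(0, head(x, -1)) + rnorm(n)

# Pairwise tests: H0 is non-G-causality in each direction.
grangertest(y ~ x, order = 1)  # expect a small p-value: x G-causes y
grangertest(x ~ y, order = 1)  # expect a large p-value: no feedback by construction

# Multivariate route: fit a VAR(1), then test G- and instantaneous causality.
fit <- VAR(data.frame(x = as.numeric(x), y = as.numeric(y)), p = 1)
causality(fit, cause = "x")
```

In real applications the lag order should be selected (e.g. via `VARselect`) rather than fixed at 1, and the series should be checked for stationarity first.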

**Attribution**
*Source : Link , Question Author : RockScience , Answer Author : Dmitrij Celov*