It can be shown that, in general, the cointegration test statistic for (A, B) differs from that for (B, A). I believe this holds for all cointegration tests, so the particular test used is perhaps irrelevant.
However, I have found that the two test statistics are generally “close”, in the sense that they fall within the same significance band.
Note that in my work the common method to test for cointegration is to test for a unit root in the linear combination of the two series (AKA the residual series). I generally do so using the ADF test, comparing the resulting test statistic to the critical values required to reject the null hypothesis.
My questions:
 1. Are there any formal things that can be said about the comparison of coint(A,B) to coint(B,A)?
 2. Is there a compelling technical reason to prefer one variable orientation over the other?
 3. Are the answers to 1 or 2 particular to the cointegration test used? If so, is there anything particularly relevant to the cointegration test methodology I outlined above?
Thanks.
EDIT:
Here’s an example, as requested. I use Python for most of my statistical work.
The ADF test statistic for the first linear combination (AKA residual series) is 35.9199966497, and 35.7190914946 for the second linear combination. Obviously this is a rather extreme example, but there are many others.
Order of plots in the graph:
 Residual series 1
 Scatter plot with line of best fit, (x,y) orientation.
 Residual series 2
 Scatter plot with line of best fit, (y,x) orientation.
 Graph of the two raw curves.
Hopefully that clears things up.
Answer
For two time series Xt and Yt to be cointegrated, two conditions must be met:

Xt and Yt must be I(1) processes, i.e. ΔXt and ΔYt must be stationary processes (in a weak sense, i.e. covariance stationary).

There exists a set of coefficients α,β∈R such that the time series Zt=αXt+βYt is a stationary process. The vector (α,β) is called the cointegrating vector.
Since stationarity is invariant to shift and scale, it immediately follows that the coefficients α and β are not uniquely defined; they are unique only up to a multiplicative constant.
Cointegration tests come in two varieties:

Tests on residuals of regression of Yt on Xt.

Tests on matrix rank in a vector error correction representation of (Yt,Xt).
Both varieties rely on certain theoretical results, namely:

OLS of Yt on Xt gives a consistent estimate of the cointegrating vector.

Granger representation theorem.
The OP's question is about the first variety of tests. In these tests we have a choice: estimate the regression Yt=a1+b1Xt+ut or the regression Xt=a2+b2Yt+vt. Naturally these two regressions will give two different cointegrating vectors: (−ˆb1,1) and (1,−ˆb2). But due to the above-mentioned theoretical result, the probability limits of −ˆb1 and −1/ˆb2 must be the same, since the cointegrating vector is unique up to a constant.
Due to algebraic properties of OLS, the residual series ˆut and ˆvt are not identical, although from a theoretical perspective they should equal (1/β)Zt and (1/α)Zt respectively, i.e. they should be identical up to a multiplicative constant. If the series Xt and Yt are cointegrated, then Zt is a stationary series, so since ˆut and ˆvt approximate Zt we can test whether they are stationary.
That is how the first variety of cointegration tests is performed. Naturally, since ˆut and ˆvt are different, any tests on them will differ too. But from a theoretical point of view any difference is simply a finite-sample effect, which should disappear asymptotically.
If the difference between the stationarity tests on the series ˆut and ˆvt is statistically significant, this is an indication that either the series are not cointegrated or the assumptions of the stationarity tests are not met.
If we take the ADF test as the stationarity test for the residuals, I think it would be possible to derive the asymptotic distribution of the difference between the ADF statistics on ˆut and ˆvt. Whether it would have any practical value I do not know.
So to summarize, the answers to the three questions are the following:

 1. See above.

 2. No.

 3. The asymptotic distribution of the difference of the tests would depend on the test. Your methodology is fine. If the time series are cointegrated, both statistics should indicate so. In the case of no cointegration, either both statistics will fail to reject the unit-root null, or only one of them will reject; in both cases you should conclude that the series are not cointegrated. As in testing for a unit root, you should safeguard against time trends, change points and all the other things that make unit root testing a quite challenging procedure.
Attribution
Source : Link , Question Author : d0rmLife , Answer Author : mpiktas