I’m having a hard time understanding how to use confidence intervals for hypothesis testing. I’ve seen examples of using a two-sided interval to reject a null hypothesis, but I don’t understand how to use a one-sided confidence interval to show significance.
If you have two samples, x and y, how would you use a one-sided confidence interval to show $H_1:\mu_x > \mu_y$ is significant at the 95% confidence level?
One-sided confidence intervals are dual to one-tailed hypothesis tests, just as ordinary two-sided CIs are dual to two-tailed tests.
If $\theta$ is a parameter, and we say that $(a,\infty)$ is a one-sided 95% CI for $\theta$, then this means that $a$ was found by a procedure that yields a value below the true value of $\theta$ $95\%$ of the time.
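You can see this coverage property in a quick simulation. The sketch below (the distribution parameters and sample size are arbitrary choices for illustration) repeatedly draws a normal sample, computes the lower bound $a = \bar{x} - t_{0.95,\,n-1}\, s/\sqrt{n}$ of a one-sided 95% CI for the mean, and checks how often $a$ falls below the true mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu = 5.0          # the true parameter value (known here because we simulate)
n, reps = 30, 10_000
covered = 0
for _ in range(reps):
    x = rng.normal(true_mu, 2.0, size=n)
    # lower bound a of the one-sided 95% CI (a, inf) for the mean
    a = x.mean() - stats.t.ppf(0.95, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    covered += (a < true_mu)
print(covered / reps)  # should be close to 0.95
```

The printed fraction hovers around 0.95, which is exactly what “95% confidence” means for this procedure.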
In your case, the parameter of interest is the difference of means, $\mu_x-\mu_y$. If you construct a one-sided 95% confidence interval for this parameter, of the form $(a,\infty)$, then you can say with 95% confidence that $a<\mu_x-\mu_y$. Thus, if $a\geq 0$, the interval excludes $0$, and you may reject the null hypothesis $H_0:\mu_x\leq\mu_y$ at the 5% level.
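As a concrete sketch (with made-up data, and assuming unequal variances so that Welch's approximation applies), the code below computes the lower bound $a$ of a one-sided 95% CI for $\mu_x-\mu_y$ and confirms the duality: $a>0$ exactly when the one-tailed Welch test's p-value is below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=40)   # hypothetical sample x
y = rng.normal(9.0, 2.0, size=35)    # hypothetical sample y

diff = x.mean() - y.mean()
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
se = np.sqrt(vx + vy)
# Welch-Satterthwaite degrees of freedom
df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
# lower bound a of the one-sided 95% CI (a, inf) for mu_x - mu_y
a = diff - stats.t.ppf(0.95, df) * se

# duality: rejecting via the CI agrees with the one-tailed Welch t-test
p = stats.ttest_ind(x, y, equal_var=False, alternative='greater').pvalue
print(a, p)
```

Both checks use the same t statistic and degrees of freedom, so `a > 0` and `p < 0.05` always agree; this is the duality described above.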