I have several large samples from discrete distributions, which I am treating as empirical distributions. I want to test whether the distributions actually differ and, for those that do, estimate the difference in means.
Since these are discrete distributions, my understanding is that the Kolmogorov-Smirnov test is invalid because it assumes an underlying continuous distribution. Would the chi-squared test be the correct test for whether the distributions actually differ?
What test would I use for the difference in means? Would a better approach be to sample from the distributions, take the differences, and then analyze the distribution of those differences?
The Kolmogorov-Smirnov test can still be used, but with the tabulated critical values it will be conservative (which is a problem only insofar as it pushes down your power curve). It's better to compute the permutation distribution of the statistic, so that your significance levels are what you chose them to be. This only makes a big difference when there are a lot of ties, and the change is really easy to implement. (But the K-S statistic isn't the only possible such comparison; if one is computing permutation distributions anyway, there are other possibilities.)
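As a minimal sketch of that permutation approach, using `scipy.stats.ks_2samp` only to compute the statistic (not its tabulated p-value); the sample sizes and Poisson distributions below are just illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def perm_ks_pvalue(x, y, n_perm=500, rng=None):
    """Permutation p-value for the two-sample KS statistic.

    Unlike the tabulated critical values, this is valid (not conservative)
    for discrete data with ties, since the reference distribution is built
    from the data themselves.
    """
    rng = rng or np.random.default_rng()
    observed = ks_2samp(x, y).statistic
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled sample
        if ks_2samp(pooled[:n], pooled[n:]).statistic >= observed:
            count += 1
    # add-one correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
x = rng.poisson(3.0, 200)  # heavily tied discrete samples
y = rng.poisson(4.0, 200)  # with a genuine location difference
p = perm_ks_pvalue(x, y, n_perm=500, rng=rng)
```

With a real shift this size, the permutation p-value comes out very small; under the null it would be (approximately) uniform, which is exactly the property the tabulated values lose in the presence of ties.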
Vanilla chi-square goodness-of-fit tests for discrete data are generally, to my mind, a really bad idea. If the potential loss of power above stopped you using the K-S test, the problem with the chi-square is often much worse: it throws out the most critical information, which is the ordering among the categories (the observation values). This deflates its power by spreading it across alternatives that ignore the ordering, so it's worse at detecting smooth alternatives (like a shift of location and scale, for example). Even with the bad effects of heavy ties above, the K-S test in many cases still has better power (while keeping the Type I error rate at or below the chosen level).
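For concreteness, the "vanilla" chi-square comparison here is the ordinary contingency-table test over the pooled support; note that permuting its columns leaves the statistic unchanged, which is exactly the discarded ordering information. The data below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Two discrete samples differing by a clear location shift
x = rng.poisson(2.0, 300)
y = rng.poisson(5.0, 300)

# Contingency table: one row per sample, one column per observed value.
# The test treats the columns as unordered categories, so a smooth
# location shift is just one alternative among many it spreads power over.
values = np.union1d(x, y)
table = np.array([[np.sum(x == v) for v in values],
                  [np.sum(y == v) for v in values]])

res = chi2_contingency(table)
p_chi2 = res[1]  # the chi-square p-value
```

With a shift this large the chi-square still rejects easily; the power cost relative to an ordering-aware test shows up for subtler smooth alternatives.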
The chi-square can also be modified to take account of the ordering: partition the chi-square into linear, quadratic, cubic, etc. components via orthogonal polynomials and use only the first few low-order terms (4 to 6 are common choices). Papers by Rayner and Best (and others) discuss this approach, which arises out of the Neyman-Barton smooth tests. It's a good approach, but if you don't have access to software for it, it may take a little setting up.
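A rough sketch of the component idea, for a goodness-of-fit test against a fully specified discrete null. The Gram-Schmidt construction and the truncated-Poisson null below are my own illustrative choices, not a specific Rayner-Best recipe:

```python
import numpy as np
from math import factorial
from scipy.stats import chi2

def smooth_test(data, support, probs, k=4):
    """Partition a goodness-of-fit statistic into k ordered components
    via polynomials orthonormal with respect to the null pmf.

    Each component is asymptotically N(0,1) under H0 (linear picks up
    location, quadratic scale, ...); the sum of their squares is ~ chi2(k).
    """
    n = len(data)
    # Raw powers of the support values, degrees 0..k
    X = np.vander(support.astype(float), k + 1, increasing=True)
    # Gram-Schmidt under the inner product <u, v> = sum(probs * u * v)
    H = []
    for j in range(k + 1):
        v = X[:, j].copy()
        for h in H:
            v = v - np.sum(probs * v * h) * h
        H.append(v / np.sqrt(np.sum(probs * v * v)))
    H = np.array(H)                       # row 0 is the constant polynomial
    idx = np.searchsorted(support, data)  # data values must lie in `support`
    V = H[1:, idx].sum(axis=1) / np.sqrt(n)  # the k ordered components
    stat = float(np.sum(V ** 2))
    return stat, chi2.sf(stat, k)

rng = np.random.default_rng(2)
support = np.arange(16)
probs = np.array([np.exp(-3.0) * 3.0 ** i / factorial(i) for i in support])
probs /= probs.sum()                      # truncated Poisson(3) as the null
data = rng.poisson(4.0, 300).clip(0, 15)  # a shifted alternative
stat, pval = smooth_test(data, support, probs)
```

Against a pure location shift like this, nearly all of the signal lands in the linear component, which is exactly why keeping only the low-order terms concentrates power on smooth alternatives instead of spreading it over every possible rearrangement of the categories.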
Either modified approach should be fine, but if you're not going to modify either test, it's not necessarily the case that the chi-square will beat the K-S test: in some situations it may be better, in others substantially worse.
If the ties are light (i.e. the data take many different values), I'd consider the K-S test as-is. If they're moderate, I'd compute the permutation distribution. If they're very heavy (i.e. the data take only a few different values), the plain chi-square may be competitive.