I’m the author of the ez package for R, and I’m working on an update that adds automatic computation of likelihood ratios (LRs) to the output of ANOVAs. The idea is to provide an LR for each effect that is analogous to the test of that effect the ANOVA performs. For example, the LR for a main effect represents the comparison of a null model against a model that includes that main effect, the LR for a two-way interaction represents the comparison of a model that includes both component main effects against a model that includes both main effects plus their interaction, and so on.
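To make the intended comparisons concrete, here is a small sketch (not the actual ezANOVA() internals; the data and variable names are made up) of the nested model comparisons each effect's LR is meant to mirror, using lm() on a toy 2×2 design:

```r
# Toy balanced 2x2 design with fake data
set.seed(1)
d <- data.frame(
  a = factor(rep(c("a1", "a2"), each = 20)),
  b = factor(rep(c("b1", "b2"), times = 20))
)
d$y <- rnorm(40)

null_fit <- lm(y ~ 1,     data = d)  # null model
a_fit    <- lm(y ~ a,     data = d)  # adds the main effect of a
ab_fit   <- lm(y ~ a + b, data = d)  # both main effects
int_fit  <- lm(y ~ a * b, data = d)  # adds the a:b interaction

# LR for the main effect of a: null model vs. main-effect model
lr_a  <- exp(as.numeric(logLik(a_fit)) - as.numeric(logLik(null_fit)))
# LR for the interaction: both main effects vs. main effects + interaction
lr_ab <- exp(as.numeric(logLik(int_fit)) - as.numeric(logLik(ab_fit)))
```

Because each alternative model nests the corresponding null model, both uncorrected ratios are at least 1 by construction; that is what the complexity corrections are meant to counteract.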
Now, my understanding of LR computation comes from Glover & Dixon (PDF), which covers the basic computations as well as corrections for model complexity, and from the appendix to Bortolussi & Dixon (appendix PDF), which covers computations involving repeated-measures variables. To test my understanding, I developed this spreadsheet, which takes the dfs and SSs from an example ANOVA (generated from a 2×2×3×4 design using fake data) and steps through the computation of the LR for each effect.
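For reference, here is my reading of the SS-based computation the spreadsheet steps through (a sketch, not the spreadsheet itself): for a Gaussian model, the maximized likelihood ratio for adding an effect can be computed from residual sums of squares alone as (SSE_reduced / SSE_full)^(n/2).

```r
# Built-in 2x3 factorial data set (wool x tension), n = 54
d <- warpbreaks
n <- nrow(d)

full    <- lm(breaks ~ wool + tension, data = d)
reduced <- lm(breaks ~ tension,        data = d)

sse_full    <- sum(resid(full)^2)
sse_reduced <- sum(resid(reduced)^2)

# LR for the wool main effect, computed from sums of squares
lr_from_ss <- (sse_reduced / sse_full)^(n / 2)

# Matches the direct likelihood-based computation
stopifnot(all.equal(lr_from_ss,
                    exp(as.numeric(logLik(full)) - as.numeric(logLik(reduced)))))
```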
I would really appreciate it if someone more confident with such computations could take a look and make sure I did everything correctly. For those who prefer abstract code, here is the R code implementing the update to ezANOVA() (see esp. lines 15-95).
Although the reasoning about calculating the LR from the SS values is fair enough, a least-squares fit is equivalent to, but not the same as, a maximum-likelihood estimate. (The difference shows up, e.g., in the estimate of the residual variance, which is divided by n − 1 in the least-squares approach and by n under maximum likelihood. The maximum-likelihood estimate is thus consistent, but slightly biased.)
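The estimator difference is easy to demonstrate on a small vector (the data here are arbitrary):

```r
# Unbiased (least-squares) variance estimate divides by n - 1;
# the maximum-likelihood estimate divides by n.
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)

s2_ls <- sum((x - mean(x))^2) / (n - 1)  # what var() returns
s2_ml <- sum((x - mean(x))^2) / n        # ML estimate: consistent, but biased

stopifnot(all.equal(s2_ls, var(x)))
s2_ml / s2_ls  # equals (n - 1) / n
```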
This has some implications: you can still calculate the LR, because the proportionality constants in the likelihoods cancel in the ratio, but that doesn’t give you the likelihood of your ANOVA model itself. It just tells you something about the ratio. And since the AIC is classically defined in terms of the likelihood itself, I’m not sure you can use the AIC as you intend.
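On the AIC point: in R, AIC() is computed from the maximized log-likelihood, not from sums of squares directly, which is what makes mixing the two frameworks delicate. For instance:

```r
# AIC = -2 * logLik + 2 * k, where k counts all estimated parameters
# (for an lm fit this includes the residual variance).
fit <- lm(mpg ~ wt, data = mtcars)
k   <- attr(logLik(fit), "df")  # 3 here: intercept, slope, sigma^2

aic_by_hand <- -2 * as.numeric(logLik(fit)) + 2 * k
stopifnot(all.equal(aic_by_hand, AIC(fit)))
```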
I’ve looked at the spreadsheet, but the values for the “uncorrected LR within” (I’m also not completely following what exactly you’re trying to calculate there) seem highly unlikely to me.
On a side note, the power of LR testing is that you can contrast just the models you care about; you don’t have to do that for all of them (which lowers the multiple-testing error). If you do this for every term, your LR is completely equivalent to an F test, and in the least-squares case, as far as I know, even numerically about the same.
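The equivalence is exact for nested Gaussian linear models: −2 log LR = n · log(RSS0 / RSS1), a monotone function of the F statistic, so term-by-term LR testing reproduces the ordering of the F tests. A quick numerical check (models chosen only for illustration):

```r
fit0 <- lm(mpg ~ wt,      data = mtcars)  # reduced model
fit1 <- lm(mpg ~ wt + hp, data = mtcars)  # full model
n    <- nrow(mtcars)

rss0 <- sum(resid(fit0)^2)
rss1 <- sum(resid(fit1)^2)

lr_from_loglik <- -2 * (as.numeric(logLik(fit0)) - as.numeric(logLik(fit1)))
lr_from_rss    <- n * log(rss0 / rss1)

stopifnot(all.equal(lr_from_loglik, lr_from_rss))
```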
Your mileage may vary, but I’ve never been comfortable mixing concepts from two different frameworks (i.e., least squares versus maximum likelihood). Personally, I’d report the F statistics and implement the LR in a function that lets users compare models (e.g., the anova function for lme models, which does exactly that).
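That is, something along these lines (a sketch using nlme, which ships with R; fitting with method = "ML" so that anova() reports a likelihood ratio test between the two models):

```r
library(nlme)

# Two nested mixed models on the built-in Orthodont data
m0 <- lme(distance ~ age,       random = ~ 1 | Subject,
          data = Orthodont, method = "ML")
m1 <- lme(distance ~ age + Sex, random = ~ 1 | Subject,
          data = Orthodont, method = "ML")

anova(m0, m1)  # LR test of the Sex effect, with AICs for both models
```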
My 2 cents.
PS: I did look at your code, but couldn’t really figure out all the variables. If you would annotate your code with comments, that would make life a bit easier. The Excel sheet is also not the easiest to figure out. I’ll check again later to see if I can make something of it.