Relative variable importance with AIC

I am confused and just need some confirmation about calculating relative variable importance values for the covariates I used in an AIC model selection procedure. I know there is one existing discussion, but it doesn't confirm explicitly enough what I should do.

Burnham and Anderson (2002) describe a simple way to quantify variable importance.

Page 168: "Estimates of the relative importance of predictor variables xj can best be made by summing the AIC weights across all the models in the set where variable j occurs."

However, to use this method one must have an equal number of models for each variable; otherwise some variables will be over-represented or under-represented, resulting in biased relative importance values.

Page 169: "When assessing the relative importance of variables using sums of the AIC weights, it is important to achieve a balance in the number of models that contain each variable j."

Does this mean that, if I have a set of models with their model weights from the AIC procedure (these are not ranked by weight, just the order I created them):

1   INTERCEPT            
2   REPRO   TIME         
3   REPRO   TIME    R*T  
4   REPRO   TIME    WR  
5   REPRO   TIME    WR  WR*R 
6   REPRO   TIME    WR  WR*T 
7   WR

To calculate the relative variable weight, I would sum the weights of every model in which TIME appears, and do the same for each of the other variables. However, this is not completely correct, right? Because there is not a balance in the number of models that contain each variable. So, to correct for this, I would then divide the sum of these weights by the number of models containing that variable (Kittle et al. 2008, "The scale-dependent impact of wolf predation risk…", does this). For instance, if the sum of the weights for TIME was 0.75, I would divide it by 5 because TIME appears in 5 of the models; likewise, the sum for WR would be divided by 4.

It seems like a silly question, but it really changes the results and interpretation of my analysis. For instance, WR*T appears in only 1 model, and that model comes out as one of the top models, so WR*T gets a high model weight. TIME and REPRO are also in this top model, but they appear in 4 other candidate models as well. So dividing the summed weight of TIME and REPRO (0.999) by 5 reduces their RVI to 0.2, while the RVI of WR*T stays at 0.7. Is that right?

In addition, my next question would be: do you do this over JUST the "best" models (within 2 AIC, or whatever criterion), or over all 7 regardless of what surfaced to the top? I used the MuMIn package and its importance function, but when you extract the best models it asks whether you want to recalculate the importance, which it then does over just the top models. Which is more appropriate to use? This doesn't make sense when only 1 model is the best, so I would then assume it should be calculated over all models.


This is some further advice/discussion I was given:

AIC RIW can only be calculated from a balanced candidate model set. If you have 3 variables (e.g. repro, time & WR) then the balanced set (without interactions) is

intercept only
repro
time
WR
repro + time
repro + WR
time + WR
repro + time + WR

The number of models in the set is 2 to the power of the number of explanatory variables (in this case 2^3 = 8).
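That 2^k count can be checked by enumerating every subset of the explanatory variables. A quick Python sketch (variable names taken from the example above; this is just illustrative bookkeeping, not part of the original advice):

```python
from itertools import combinations

# The three explanatory variables from the example above
variables = ["repro", "time", "WR"]

# Every subset of the variables, from the intercept-only model
# (the empty subset) up to the full main-effects model
models = [
    combo
    for size in range(len(variables) + 1)
    for combo in combinations(variables, size)
]

for m in models:
    print(" + ".join(m) if m else "intercept only")

print(len(models))  # 2^3 = 8 models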
With 2-way interactions, your candidate model set ALSO includes the following (i.e. in addition to those above):

repro + time + repro*time
repro + WR + repro*WR
time + WR + time*WR
repro + time + WR + repro*time
repro + time + WR + repro*WR
repro + time + WR + time*WR

If you want the 3-way interaction, then you would ALSO add this to all of the models described above.

Each variable's relative importance weight is then the SUM of ALL the AIC weights from models that contain that variable. Because AIC weights are standardized to sum to one within a candidate model set, the RIW for each variable can range from 0 to 1.
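As an illustration of that bookkeeping, here is a small Python sketch. The AIC values are made up for demonstration; it converts them to Akaike weights (w_i proportional to exp(-ΔAIC_i / 2), normalized to sum to 1) and then sums the weights over the models containing each variable, with no division by model count:

```python
import math

# Hypothetical AIC values for the balanced main-effects set of
# repro, time, WR (8 models); the numbers are invented for illustration.
models = {
    frozenset(): 110.0,                        # intercept only
    frozenset({"repro"}): 106.0,
    frozenset({"time"}): 104.0,
    frozenset({"WR"}): 109.0,
    frozenset({"repro", "time"}): 100.0,
    frozenset({"repro", "WR"}): 105.0,
    frozenset({"time", "WR"}): 103.0,
    frozenset({"repro", "time", "WR"}): 101.0,
}

# Akaike weights: exp(-delta/2) relative to the best model, normalized
aic_min = min(models.values())
raw = {m: math.exp(-(aic - aic_min) / 2) for m, aic in models.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}  # these sum to 1

# RIW: sum of weights over models containing each variable -- no division
riw = {
    v: sum(w for m, w in weights.items() if v in m)
    for v in ["repro", "time", "WR"]
}
for v, w in sorted(riw.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {w:.3f}")
```

With a balanced set, each main effect appears in the same number of models (here 4 of 8), which is exactly why the raw sums are comparable without any correction.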

Do not divide the result by the number of models the variable is contained in – the RIW is the total sum. I would only use these for balanced candidate model sets; I wouldn't use RIW for a smaller number of models.

NOTE that if you include interactions, then you can only compare the RIWs of main effects with each other, and you can only compare the RIWs of interactions with each other. You cannot compare main effect RIWs with interaction RIWs (because main effects are present in more models than interactions).

FYI: a strong explanatory variable will have a RIW of around 0.9, moderate effects of around 0.6-0.9, very weak effects of around 0.5-0.6 and below that, forget about it. For interactions, a strong effect could be >0.7, moderate >0.5.
If you’re not using RIWs then simply look at your model table and see if you get consistent improvements in AIC when you add specific variables, and by how much. Strong effects will often give you improvements in AIC of >5, moderate 2-5 and weak 0-2. If you don’t get an improvement at all, then it isn’t explaining anything.

If you don't have a balanced candidate set, but DO have the AIC weights (which it appears you do), then you can simply use the ratios of these to determine the strength of support for one model over another. E.g. if model 1 has an AIC weight of 0.7 and model 2 has an AIC weight of 0.15, then model 1 has about 4.7 times more support from the data than model 2 (0.7/0.15). You can use this to assess the relative strength of variables as they move in and out of models. But you don't NEED to do these calculations – you can simply refer the reader to the table, especially if you have a dominant model, or a series of models at the top that all contain a particular variable. Then it is simply obvious to everyone that it is important.
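That evidence ratio is just a division of the two Akaike weights; a minimal sketch using the numbers from the paragraph above:

```python
# Evidence ratio: relative support for model 1 over model 2,
# computed from their Akaike weights (values from the example above)
w1, w2 = 0.70, 0.15
evidence_ratio = w1 / w2
print(f"Model 1 has {evidence_ratio:.1f} times more support than model 2")
# prints "Model 1 has 4.7 times more support than model 2"
```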

Source: Link, Question Author: Kerry, Answer Author: Kerry
