Interpreting the evidence from Information Theory is a value judgement
There are no hard rules about when to reject models or hypotheses if there’s not a clear best model
💭 As an illustration, imagine the situation where you’re trying to predict which of two candidates will be awarded a job
If one candidate is clearly better qualified than the other, you might be confident that the first candidate would get the post
However, if both candidates were more evenly matched, with Masters degrees and relevant field experience, it’s much harder to judge who is the strongest candidate, and to predict who will be awarded the job
The same can happen with your models: when several models have similar support from your data, it is much harder to identify a single best one
Let’s bring together all the theory and practical knowledge we’ve gained to formulate a multi-stage process to choose between hypotheses using Information Theory
This approach works for any situation in which you use AIC to evaluate and compare statistical models, not just distance sampling
Assuming that you have constructed your candidate model set carefully, we suggest the following strategy:
Assess the Goodness of Fit of the global model
Use R-squared and/or chi-squared values to assess Goodness of Fit
If none of your models fit well, Information Theory will only choose the most parsimonious from your set of poor models, which doesn’t add to our understanding of the ecological system
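Returning to the chi-squared check mentioned above: as a rough illustration (in Python, with invented counts rather than output from any particular distance sampling package), you compare the detections observed in each distance bin with the counts expected under the fitted global model; a very small p-value warns that even your most complex model fits the data poorly.

```python
from scipy.stats import chisquare

# Invented example: observed detections per distance bin, and the counts
# expected under the fitted global model (both sum to the same total)
observed = [52, 41, 30, 18, 9]
expected = [50.2, 43.1, 28.7, 17.5, 10.5]

# A small p-value suggests the global model does not fit the data well;
# in practice the ddof argument should also reflect the number of estimated parameters
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {stat:.2f}, p = {p_value:.3f}")
```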
Examine the dAICc values in your model comparison table
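If you ever need to build that comparison table by hand, the arithmetic is straightforward; the sketch below (Python, with invented model names, log-likelihoods and parameter counts) computes AICc for each candidate model and the dAICc relative to the best one.

```python
def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AICc = -2*logL + 2K + 2K(K+1)/(n - K - 1)."""
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

n = 120  # number of observations (invented)
# (model name, maximised log-likelihood, number of parameters K) - all invented
candidates = [("hn", -356.2, 2), ("hn+size", -354.9, 3), ("hn+size+sea", -354.8, 4)]

scores = {name: aicc(ll, k, n) for name, ll, k in candidates}
best = min(scores.values())

# dAICc = each model's AICc minus the smallest AICc in the candidate set
for name, value in sorted(scores.items(), key=lambda item: item[1]):
    print(f"{name:12s} AICc = {value:7.2f}  dAICc = {value - best:5.2f}")
```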
Examine the Akaike weights (probabilities) for each model
Do you have a single model that carries most of the Akaike weight, or several models with similar weights?
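The weights follow directly from the dAICc values: each model's relative likelihood is exp(-dAICc/2), rescaled so that the weights sum to one. A minimal sketch with invented dAICc values:

```python
import math

# Invented dAICc values from a model comparison table
daicc = {"hn": 0.0, "hn+size": 1.3, "hn+size+sea": 3.1}

# Akaike weight: w_i = exp(-dAICc_i / 2) / sum_j exp(-dAICc_j / 2)
rel_lik = {name: math.exp(-d / 2.0) for name, d in daicc.items()}
total = sum(rel_lik.values())
for name, r in rel_lik.items():
    print(f"{name:12s} weight = {r / total:.3f}")
```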
Check the summed weights for each covariate
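A covariate's summed weight is simply the total Akaike weight of every model that contains it; here is a small sketch using invented weights consistent with the example above.

```python
# Invented Akaike weights and the covariates each model contains
weights = {"hn": 0.58, "hn+size": 0.30, "hn+size+sea": 0.12}
covariates = {"hn": set(), "hn+size": {"size"}, "hn+size+sea": {"size", "sea_state"}}

# Summed weight for a covariate = total weight of all models that include it
for cov in ["size", "sea_state"]:
    summed = sum(w for name, w in weights.items() if cov in covariates[name])
    print(f"{cov:10s} summed weight = {summed:.2f}")
```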
Examine the coefficient estimates for each model relative to their Standard Errors and Confidence Intervals
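One quick way to do this is an approximate 95% Wald interval, estimate ± 1.96 × SE, together with a check of whether that interval spans zero; the estimates below are invented for illustration.

```python
# Invented coefficient estimates and standard errors from a fitted model
coefs = {"size": (0.42, 0.15), "sea_state": (-0.05, 0.11)}

for name, (estimate, se) in coefs.items():
    lower, upper = estimate - 1.96 * se, estimate + 1.96 * se
    print(f"{name:10s} estimate = {estimate:6.2f}  "
          f"95% CI = ({lower:.2f}, {upper:.2f})  spans zero: {lower < 0.0 < upper}")
```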
Compare the LogLikelihood or deviance values of models
Adding a parameter should increase the LogLikelihood by more than 1 unit (equivalently, reduce the deviance by more than the AIC penalty of 2 per parameter) for it to be genuinely helpful in explaining patterns in your field data, rather than a ‘pretending’ variable (Anderson 2008) which gives only a small improvement in model fit
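A short sketch of that arithmetic, with invented log-likelihoods and parameter counts:

```python
# Invented maximised log-likelihoods for a model without and with one extra covariate
loglik_without, k_without = -356.2, 2
loglik_with, k_with = -354.9, 3

gain = loglik_with - loglik_without                    # improvement in log-likelihood
delta_aic = -2.0 * gain + 2.0 * (k_with - k_without)   # change in AIC from adding it

# A negative change in AIC means the extra parameter earns its keep;
# a gain of barely 0-1 log-likelihood units points to a 'pretending' variable
print(f"log-likelihood gain = {gain:.2f}, change in AIC = {delta_aic:.2f}")
```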
Importance of covariates
If you’re still uncertain about the value of a covariate, use evidence ratios to compare pairs of models with and without it
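An evidence ratio is simply the ratio of the two models’ Akaike weights, which can be computed directly from their dAICc values; a minimal sketch with invented numbers:

```python
import math

# Invented dAICc values for a pair of models with and without a covariate
daicc_with, daicc_without = 0.0, 1.3

# Evidence ratio = w_with / w_without = exp((dAICc_without - dAICc_with) / 2)
evidence_ratio = math.exp((daicc_without - daicc_with) / 2.0)
print(f"The model with the covariate has {evidence_ratio:.1f} times more support")
```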
An inability to select between models suggests either that several of your hypotheses are similarly plausible, or that your data do not contain enough information to distinguish between them
If this happens, rely on your understanding of the ecology and inter-relationships between your hypotheses and covariates to make an informed decision, or acknowledge that your data are inadequate to draw clear conclusions
Ecological systems are complex, and research needs to be well designed to add to our understanding of a system
If you have several models with similar evidence supporting each, you may prefer the simpler model if it explains the data nearly as well and is easier to interpret ecologically
Always be aware that there is uncertainty in your selection of the best model, because the AIC values rely on the particular dataset you collected during that survey
Is it possible that the weightings and ranks could change if you repeated the survey?