Standards in fitting, evaluating, and interpreting regression models
As part of the Workshop on Common Issues and Standards in Ordinary and Multilevel Regression Modeling, March 25, 2009, UC Davis
The workshop page is now online: [http://hlplab.wordpress.com/2009-pre-cuny-workshop-on-ordinary-and-multilevel-models-womm/ http://hlplab.wordpress.com/2009-pre-cuny-workshop-on-ordinary-and-multilevel-models-womm/] --- check it out for the final presentations (a 4 hour tutorial, including talks by Roger Levy, Harald Baayen, Victor Kuperman, Florian Jaeger, Dale Barr, and Austin Frank).
Quick example of very naive model interpretation. We will use R throughout to give examples.
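A minimal sketch of the kind of naive read-off we have in mind, using a built-in R dataset (the choice of cars here is purely illustrative, not part of the workshop materials):

```r
# Fit a simple ordinary linear regression on a built-in dataset.
m <- lm(dist ~ speed, data = cars)

# The naive interpretation: read coefficients, standard errors, and
# p-values straight off the summary table.
summary(m)$coefficients
# The slope for speed is positive, so stopping distance increases with
# speed -- but nothing in this table yet tells us whether we can trust
# the estimate. That is what the next sections are about.
```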
But can we trust these coefficients? Checks for collinearity among predictors: vif(); cor() and cor(..., method="spearman"); pairs(). These diagnostics are not completely adequate for all types of models --> refer to the conceptual background section and to Harald's section.
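The checks above can be sketched in base R. The mtcars predictors are just a stand-in example; the hand-computed VIF is shown because vif() lives in an add-on package (Design/car), and the formula 1/(1-R²) is what it computes per predictor:

```r
# Correlations among candidate predictors (mtcars is illustrative).
X <- mtcars[, c("wt", "disp", "hp")]
cor(X)                        # Pearson correlations
cor(X, method = "spearman")   # rank-based, robust to nonlinearity
pairs(X)                      # scatterplot matrix

# Variance inflation factor for one predictor, computed by hand:
# regress the predictor on the others and take 1 / (1 - R^2).
r2 <- summary(lm(wt ~ disp + hp, data = mtcars))$r.squared
1 / (1 - r2)  # VIF for wt; values far above ~10 are usually worrying
```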
Collinearity aside, can we trust the model overall? Models are fit under assumptions. Are those met? If not, do we have to worry about the violations? The world is never perfect, but when should I really be cautious?
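A minimal sketch of the standard assumption checks in base R (again on a built-in dataset, purely for illustration):

```r
m <- lm(dist ~ speed, data = cars)

# Standard diagnostic plots: residuals vs. fitted (linearity,
# homoscedasticity), Q-Q plot (normality of residuals), scale-location,
# and leverage (influential points).
par(mfrow = c(2, 2))
plot(m)

# A formal (if blunt) test of residual normality:
shapiro.test(residuals(m))
```

Whether a violation flagged here actually matters for one's conclusions is exactly the judgment call this section is meant to address.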
Now that we know whether we can trust a model, how can we assess and compare effect sizes? Discuss different ways of talking about "effect size":
 * What do the different measures assess? What are their trade-offs? When do different measures lead to different conclusions (if one is not careful enough)?
 * Mention differences between types of models (e.g. ordinary vs. multilevel; linear vs. logit) in terms of available measures of fit, tests of significance, etc.
 * Different tests (t, z) --> refer to the section on problems with the t-test for mixed models; necessity of MCMC sampling.
 * Not all models are fit with ML (mention mixed linear and mixed logit models) --> refer to Harald's section.
 * Different fits are needed for variance and point estimates (because ML estimates of variance are biased) --> refer to Harald's section?
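Two of the simpler notions of effect size can be sketched with ordinary models in base R (the mtcars models are illustrative only; for ML-fitted mixed models the comparison would use likelihood-ratio tests instead of the F-test shown here):

```r
# Two nested ordinary models on a built-in dataset.
m0 <- lm(mpg ~ wt, data = mtcars)
m1 <- lm(mpg ~ wt + hp, data = mtcars)

# One notion of effect size: change in variance explained.
summary(m0)$r.squared
summary(m1)$r.squared

# Another: standardized coefficients, comparable across predictors
# measured on different scales.
m.std <- lm(scale(mpg) ~ scale(wt) + scale(hp), data = mtcars)
coef(m.std)

# Model comparison by F-test for nested ordinary models.
anova(m0, m1)
```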
Discussion of some issues (see below) and some examples. [http://idiom.ucsd.edu/~rlevy/papers/doyle-levy-2008-bls.pdf p.8 for fixed effect summary, p.9 for random effect summary]
Plotting effects:
 * For ordinary regression models: plot.Design()
 * For mixed models: plotLMER.fnc(), my.plot.glmer(); see [http://hlplab.wordpress.com/2009/01/19/plotting-effects-for-glmer-familybimomial-models/ example]
Assessing model quality:
 * For ordinary models: plot.calibration.Design()
 * For mixed models: my.plot.glmerfit(); see [http://hlplab.wordpress.com/2009/01/19/visualizing-the-quality-of-an-glmerfamilybinomial-model/ example]
Visualization of predictors' contributions to the model (model comparison): plot.anova.Design()
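For readers without the Design or languageR packages, the basic idea behind an effect plot can be hand-rolled in base R; this sketch is a stand-in for plot.Design() / plotLMER.fnc(), not a reimplementation of them:

```r
# Plot the fitted effect of a predictor with a pointwise confidence band.
m <- lm(dist ~ speed, data = cars)
nd <- data.frame(speed = seq(min(cars$speed), max(cars$speed),
                             length.out = 50))
pr <- predict(m, newdata = nd, interval = "confidence")

plot(cars$speed, cars$dist, xlab = "speed", ylab = "dist")
lines(nd$speed, pr[, "fit"])          # fitted effect
lines(nd$speed, pr[, "lwr"], lty = 2) # lower confidence bound
lines(nd$speed, pr[, "upr"], lty = 2) # upper confidence bound
```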
Summary of what readers (and reviewers) need to know
see also: [http://idiom.ucsd.edu/~rlevy/teaching/fall2008/lign251/one_page_of_main_concepts.pdf Roger's summary]
To do
Interpreting a simple model (5 minutes)
Evaluating a model I - coefficients (XX minutes)
Evaluating a model II - overall quality (XX minutes)
Comparing effect sizes (12 minutes)
Visualizing effects (3 minutes)
Publishing model (2 minutes)
Create downloadable cheat sheet?
Preparatory readings?