Differences between revisions 2 and 47 (spanning 45 versions)
Revision 2 as of 2008-05-19 17:49:09
Size: 853
Editor: colossus
Comment:
Revision 47 as of 2008-05-20 20:17:41
Size: 4086
Editor: colossus
Comment:
Deletions are marked like this. Additions are marked like this.
Line 4: Line 4:
Deleted: #pragma section-numbers 3
Added: #pragma section-numbers 4
Line 7: Line 7:
Deleted: May 27 2008 - June 9 2008
Added: May 27 2008 - June 17 2008
Line 9: Line 9:
== Week 1: Linear regression ==
=== Session 1: ===
|| Reading || ||
|| Assignments || ||
=== Session 2: ===
|| Reading || ||
|| Assignments || ||
Line 17: Line 10:
== Week 2: Logistic regression ==
=== Session 3: ===
|| Reading || ||
|| Assignments || ||
=== Session 4: ===
|| Reading || ||
|| Assignments || ||
|| [wiki:/Session0 Session 0] || May 27 || R primer (attendance optional, reading required) ||
|| [wiki:/Session1 Session 1] || May 29 || Linear regression ||
|| [wiki:/Session2 Session 2] || June 3 || Issues in linear regression ||
|| [wiki:/Session3 Session 3] || June 5 || Multilevel linear regression ||
|| [wiki:/Session4 Session 4] || June 10 || Logistic regression, GLM ||
|| [wiki:/Session5 Session 5] || June 12 || Multilevel logistic regression, GLMM ||
|| [wiki:/Session6 Session 6] || June 17 || Computational methods for model fitting ||
|| [wiki:/Session7 Session 7] || ??? || R wrap-up ||
Line 25: Line 19:
== Week 3: Multilevel regression ==
=== Session 5: ===
|| Reading || ||
|| Assignments || ||
=== Session 6: ===
|| Reading || ||
|| Assignments || ||
== Texts ==
 * [http://www.amazon.com/Analysis-Regression-Multilevel-Hierarchical-Models/dp/0521867061/ref=sr_1_1?ie=UTF8&s=books&qid=1211219851&sr=8-1 Data Analysis Using Regression and Multilevel/Hierarchical Models] by Gelman & Hill (2007). [http://www.stat.columbia.edu/~gelman/arm/ Online resources]. G&H07.
 * [http://www.amazon.com/Analyzing-Linguistic-Data-Introduction-Statistics/dp/0521882591/ref=sr_1_1?ie=UTF8&s=books&qid=1211219948&sr=8-1 Analyzing Linguistic Data: A Practical Introduction to Statistics using R] by Harald Baayen (2008). [attachment:baayen_analyzing_08.pdf Complete electronic draft]. Baa08.
 * [http://www.amazon.com/Introductory-Statistics-R-Peter-Dalgaard/dp/0387954759/ref=sr_1_1?ie=UTF8&s=books&qid=1211228905&sr=8-1 Introductory Statistics with R] by Peter Dalgaard (2004). [http://staff.pubhealth.ku.dk/~pd/ISwR.html Online resources]. [http://site.ebrary.com/lib/rochester/Doc?id=10047812 Electronic copy through U of R libraries]. Dal04.
 * [http://www.amazon.com/Categorical-Analysis-Wiley-Probability-Statistics/dp/0471360937/ref=pd_bbs_1?ie=UTF8&s=books&qid=1211231537&sr=8-1 Categorical Data Analysis] by Alan Agresti (2002). [http://www.stat.ufl.edu/~aa/cda/cda.html Online resources]. Agr02.
Line 33: Line 25:
== Week 4: Implementations (optional) ==
=== Session 7: lme4 implementation details ===
|| Reading || attachment:implementation.pdf attachment:computation.pdf ||
|| Assignments || ||
== R packages ==
 * [http://cran.r-project.org/web/packages/Design/index.html Design]. Linear and generalized linear regression.
 * [http://cran.r-project.org/web/packages/lme4/index.html lme4]. Multilevel modeling.
 * [http://cran.r-project.org/web/packages/arm/index.html ARM]. Companion package for Gelman & Hill (2007).
 * [http://cran.r-project.org/web/packages/languageR/index.html languageR]. Companion package for Baayen (2008).
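To give a concrete feel for how these packages relate to the course topics, here is a minimal sketch (not part of the assigned materials) of the basic model-fitting calls in base R and lme4; it uses the sleepstudy data set that ships with lme4, and the binary outcome is invented purely so the logistic examples run.
{{{
## Minimal sketch: ordinary vs. multilevel regression, linear vs. logistic.
## Assumes only base R plus lme4; sleepstudy is an example data set shipped with lme4.
library(lme4)

## Linear regression (base R).
fit.lm <- lm(Reaction ~ Days, data = sleepstudy)

## Multilevel linear regression: random intercept and slope for Days by Subject.
fit.lmer <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## A binary outcome, made up here just for illustration.
sleepstudy$slow <- as.numeric(sleepstudy$Reaction > median(sleepstudy$Reaction))

## Logistic regression / GLM (base R).
fit.glm <- glm(slow ~ Days, family = binomial, data = sleepstudy)

## Multilevel logistic regression / GLMM (lme4).
fit.glmer <- glmer(slow ~ Days + (1 | Subject), family = binomial, data = sleepstudy)

summary(fit.lmer)
}}}
The other packages listed above work with the same kinds of models: Design provides its own regression-fitting functions, while arm and languageR supply companion functions and data sets for the Gelman & Hill and Baayen texts.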

== How to read ==
One goal of this course is to make sure we're all comfortable with the same terminology and methods. Another is to make sure that as new people join the community, we can bring them up to speed quickly. To help with both of these goals, we ask that you take a few additional steps while doing the reading for this class.
 1. Keep an eye out for redundancy. If several pieces of assigned reading cover the same topic, and you find one of the treatments both superior and sufficient on its own, please make a note describing the redundant content, which source you preferred, and why. This will help us develop a set of "canonical" readings on these topics.
 2. Record and investigate unexplained or unclear terminology. Because we're cherry-picking chapters from multiple sources, an author will likely use a term that was introduced in an earlier section of the text that we haven't read, or simply assume knowledge we don't have. When you come across a term in the reading that you believe is not explained well enough, please make a note of the term and where you found it. Then go one step further: do your best to find a simple definition of the term, and record it for others to use ([http://en.wikipedia.org/wiki/Statistics Wikipedia] and [http://mathworld.wolfram.com/topics/ProbabilityandStatistics.html MathWorld] are likely to be good resources for this, but also feel free to consult your favorite stats textbooks).
