Session 3: Multilevel (a.k.a. Hierarchical, a.k.a. Mixed) Linear Models

June 10, 2008

Reading

G&H07

  • Sections 1.1-1.3 (pp. 1-3): Intro, examples, motivation
  • Chapter 11 (pp. 237-248): Multilevel structures
  • Chapter 12 (pp. 251-277): Multilevel linear models: the basics

Baa08

  • Chapter 7 (pp. 263-282): Grouped data, functions, lmer

Notes on the reading

In G&H07, the first two examples (Sections 11.2-11.3) that lead to the motivation of multilevel models use a logit model, which we haven't yet talked about. Just ignore that detail and focus on the conceptual argument made in those sections. Think of the logit model as predicting the likely outcome (here: treatment success vs. failure) given the predictors we put into the model, just as in linear regression. While reading Chapter 11, ask yourself: in a classical ANOVA Latin-square design, e.g. a priming study where each subject sees, say, 24 items, each in one of 4 conditions, what are the individuals and what are the groups?
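One way to make the question concrete is to lay out the data such a design produces. A hypothetical sketch in base R (the 4-subject size, variable names, and condition rotation are my own, not from the course materials):

```r
# Hypothetical sketch: each of 4 subjects sees 24 items, each item in one
# of 4 conditions, rotated Latin-square style so conditions are balanced
# within subjects.
d <- expand.grid(subject = factor(paste0("S", 1:4)),
                 item    = factor(paste0("I", 1:24)))
# Rotate conditions: the condition an item appears in depends on the subject.
d$condition <- (as.integer(d$subject) + as.integer(d$item)) %% 4 + 1

nrow(d)                        # 96 observations in total
table(d$subject)               # 24 observations per subject
table(d$subject, d$condition)  # each condition 6 times per subject
```

Each row is one observation; subject and item are two crossed grouping factors, which is the kind of grouping structure multilevel models are built to handle.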

Materials

  • [attachment:ranef_sim.R simulated data]

Additional terminology

Feel free to add terms you want clarified in class:

  • Restricted/residual maximum likelihood (REML): Mixed linear models in R are fitted using REML rather than ML (which, we learned, is standardly used to fit ordinary linear regression). Fitting a mixed model involves fitting the variance-covariance matrix of the random effects (and of the residuals) from the available sample, and REML, unlike ML, yields unbiased estimates of these variances and covariances. A biased estimate is pretty much what one would think it is (see this [http://en.wikipedia.org/wiki/Bias_of_an_estimator wiki article on the notion of statistical bias in the estimation of a parameter]). Recall that in fitting linear models, the goal is to derive the (best) estimates of the parameters in our model. In ordinary linear models, these are the coefficients; in a mixed linear model, the parameters also include the random-effect variances. We want these variance estimates (which are, of course, based on our sample) to be unbiased estimates of the true underlying population variances. When you read the wiki article, note that the example it gives for variance estimation is an example of maximum likelihood estimation: the estimate computed from the sample is the ML estimate of the variance, and it is a [downward] biased estimate of the population variance.
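The downward bias of the ML variance estimate is easy to see in a small simulation (an illustration of the textbook fact, not part of the course materials): the ML estimator divides the sum of squared deviations by n and on average falls short of the true variance by a factor of (n - 1)/n, while R's var(), which divides by n - 1, is unbiased.

```r
# ML variance estimate (divide by n) vs. unbiased estimate (divide by n - 1).
set.seed(1)
n        <- 5       # small samples make the bias visible
true_var <- 4       # population variance (we draw from N(0, sd = 2))
reps     <- 10000

ml_est  <- replicate(reps, { x <- rnorm(n, sd = 2); mean((x - mean(x))^2) })
unb_est <- replicate(reps, var(rnorm(n, sd = 2)))   # var() divides by n - 1

mean(ml_est)    # about (n - 1)/n * 4 = 3.2, i.e. biased downward
mean(unb_est)   # close to the true value 4
```

Averaged over many samples, the ML estimate systematically underestimates the population variance; this is the same issue REML corrects for when estimating random-effect variances in mixed models.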

Questions

  • Q:

Assignments

Please upload your solutions by ???

HLPMiniCourseSession3 (last edited 2008-11-09 02:03:54 by cpe-67-240-134-21)
