Lab Meeting, Summer 2015, Week 16

Awe- and Aw-inspiring readings (or just stuff you think deserves a mention)

What we did over last week

Florian

Chigusa

Kodi

  1. Began writing the introduction for the perspective/review paper on sociophonetic inferences in speech perception and the rich probabilistic structure of linguistic knowledge (w/ Maryam, Chigusa, and Florian)

  2. Did a lot of background reading on rational analysis for the perspective/review paper.

  3. Made progress writing one of my thesis manuscripts, which is a response to Maye et al.'s (2008) conclusion that adaptation to vowel chain shifts involves targeted vowel-specific, direction-specific perceptual shifts.

Andrew

Olga

  1. Created the new schedule for autumn and sent out the whenisgood
  2. Played catch-up with e-mails
  3. Mainly worked on recruitment for the Kinderlab this week

Esteban

  1. Resubmitted a paper with Florian on the link between speech planning and articulation.

  2. Revised the first two lectures for BCS 152.

  3. Continued revising the JML paper w/ Florian & Mike.

  4. Outlined a guest lecture for BCS 501, mostly restructuring prior materials.

Dave

  1. Listened to stimuli that Dan made.

  2. Ran additional analyses of animal category learning data (linear classifier accuracy vs. representational similarity; anatomical vs. functional ROIs).

  3. Made a ton of figures for the animal category learning paper (and gave up on doing any more analysis of results in Matlab; just exported everything to R...)

  4. Poked at LDAP authentication for worker ID anonymization app with Andrew.

  5. Revised paper on selective adaptation (still a few edits to make).

  6. Helped Anne Pier figure out why, in a logistic GLMM, you might get an intercept that's substantially different from the grand mean. My hunch: random effects are assumed to be normally distributed on the linear predictor scale, which is log odds. If you have a lot of, say, items with basically 100% accuracy and a few that are much lower, then the best match to that data under the model's assumptions is a very high intercept in log odds, with enough random-effect variance that the lower tail dips into the range that produces the below-ceiling items. Because the log odds-to-probability mapping is non-linear, the deviations above the intercept (inferred by the model) cancel out the deviations below it (actually observable) in log-odds space, but not in probability space: the high-side deviations don't produce any higher accuracy, since those items are already at ceiling, while the low-side deviations still pull accuracy down. So the grand mean in probability space gets dragged below the back-transformed "true" grand mean log odds inferred by the model (see the sketch below).
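
To make that concrete, here's a minimal simulation sketch of the intuition (the intercept and SD values are made up for illustration, not from Anne Pier's data): with item effects normal in log-odds space, the grand mean accuracy in probability space falls below the back-transformed intercept.

```python
import numpy as np
from scipy.special import expit  # inverse logit: log odds -> probability

rng = np.random.default_rng(2015)

# Hypothetical values: a high intercept in log odds, plus normally
# distributed item random effects on the log-odds scale.
intercept = 4.0  # expit(4.0) ~ 0.982, i.e., near-ceiling accuracy
item_sd = 2.0    # item random-effect SD in log odds
item_log_odds = intercept + rng.normal(0.0, item_sd, size=100_000)

print(f"back-transformed intercept: {expit(intercept):.3f}")             # ~0.98
print(f"grand mean item accuracy:   {expit(item_log_odds).mean():.3f}")  # noticeably lower
```

By construction the item deviations cancel around the intercept in log-odds space, but after the non-linear transform the high-side deviations are squashed against ceiling while the low-side ones are not, so the probability-space grand mean lands below expit(intercept).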

Sarah

  1. BCS 111 stuff.
  2. Wrote more sections of SET manuscript.

  3. Talked with the PO for my NRSA study review group.
  4. Talked with Sarah Creel about what the PO said about our grant application.

Dan

  1. Got Amanda to record stimuli for attention adaptation
  2. Cleaned, split up, and denoised those stimuli, and created (and recreated) continua for each critical item
  3. Worked on experiment scripts

Amanda

  1. Recorded stimuli for Dan
  2. Went to the zoo to recruit kids, ordered some sticker / activity books to give to kids in the lab
  3. Worked on the mammals paper / edited / fixed many things for Frontiers - in the final push for submission (fingers crossed for Wednesday)
  4. Sent most recent reading list for quals to Mike to add final papers before we cut it back
  5. Thought about my kid data / what I want to do next (more on that in the days to follow)

Zach

  1. Finished draft of accent modeling project.
  2. Read more on word recognition.
  3. Made slides for tomorrow's presentation.
  4. Lots o' plots.
  5. Sent lots of emails to people I don't know asking for information about online lectures and data.

Linda

Maryam

Wednesday

  1. Finished AMLaP poster & revised it based on feedback from Roger

  2. Thought about a few possible designs for lexical priming experiment w/ Roger
  3. Tweaked experiment scripts (for new version of undergrad thesis experiment)
  4. Created new items (for new version of undergrad thesis experiment)
