Lab Meeting, Fall 2016, Week 10

Awe- and Aw-inspiring readings (or just stuff you think deserves a mention)

What we did over the last week

Florian

Xin

Andrew

Jenn

Dan

Amanda

  1. Talked to the KurTan lab about my uncertainty about how to approach uncertainty as a topic of study.
  2. Finished reading Judith's paper, thought about it a bunch, failed to send her notes. Will do so this week.
  3. Met with Si On Yoon about a collaboration / plan to look at how uncertainty affects the kinds of expectations listeners might have about how a speaker is likely to label a thing.
  4. Emailed with Geertje about an eigenlijk follow-up project / maybe setting up shop at the MPI for a bit next summer, if I'm going to be in Europe anyway.
  5. Met with Chigusa and Wes about Wes' study. Thought of an interesting follow-up.
  6. Asked Greg for a reference letter for a scholarship I'm applying for via the APA.
  7. Wrote a first draft abstract for the CSLI workshop at Stanford.
  8. Worked on my APA scholarship application and tried to work through some of the study ideas.
  9. Thought about Huang and Snedeker (2013) for the APA and as a first setup for the new eyetracker in the Kinder Lab.

Zach

Linda

  1. Wrote and submitted an abstract to GURT on the old Bradlow and Bent data. Now back to Pen in Mouth writing for Bubble Fusion tomorrow.
  2. Met with Xin (and separately, Flo) to talk about the details of the new task for assessing talker similarity.
  3. Created lists for said talker similarity task (balancing them was the biggest headache).
  4. Updated the survey and put together the talker similarity task overall. Debugged it, and am getting ready to run a pilot!
  5. Gave a Command Line 101 tutorial in the RA meeting. I think Henry's mind was blown by the power of the command line. :) Also met briefly with the new KurTan RA about the gesture task.
  6. Went to Home Depot (I drove!), and did some hardcore home winterization over the weekend. It's too early for this stuff, why Rochester?!

Wednesday

Shaorong

  1. Revising the ERP proposal
  2. Did some straightforward math for contextual diversity and realized that our hypothesis only holds under very specific assumptions. Ran into Frank and talked to him about the problem; he suggested using a long-tailed prior like a Zipf distribution. Will try that (see the sketch after this list).
  3. Analyzed data for the NRT course. Null effect, and the intercept is in the opposite direction from what the model predicts.
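A minimal sketch of the Zipf-prior idea, in case it's useful when trying it out. The vocabulary size, corpus sizes, and the definition of contextual diversity used here (number of distinct documents a word appears in) are all assumptions for illustration, not the actual model from the project; the point is just to show how predictions shift when a uniform word-frequency prior is swapped for a long-tailed Zipf prior.

```python
# Hypothetical sketch: compare contextual diversity under a uniform vs. a
# Zipfian (long-tailed) prior over word frequencies. All sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

V = 1000           # vocabulary size (assumed)
n_docs = 200       # number of "contexts" (documents)
tokens_per_doc = 100

def simulate_contextual_diversity(word_probs):
    """Sample documents and count, per word, how many documents it appears in."""
    diversity = np.zeros(V, dtype=int)
    for _ in range(n_docs):
        doc = rng.choice(V, size=tokens_per_doc, p=word_probs)
        diversity[np.unique(doc)] += 1
    return diversity

# Uniform prior over word frequencies
uniform_probs = np.full(V, 1.0 / V)

# Long-tailed (Zipfian) prior: p(rank r) is proportional to 1 / r**a
a = 1.1
ranks = np.arange(1, V + 1)
zipf_probs = ranks ** (-a)
zipf_probs /= zipf_probs.sum()

div_uniform = simulate_contextual_diversity(uniform_probs)
div_zipf = simulate_contextual_diversity(zipf_probs)

print("mean contextual diversity, uniform prior:", div_uniform.mean())
print("mean contextual diversity, Zipf prior:   ", div_zipf.mean())
print("top-10 ranked words' diversity under Zipf prior:", div_zipf[:10])
```

Under the Zipf prior, a handful of high-rank words appear in nearly every document while most of the vocabulary appears in very few, which is exactly the kind of regime where conclusions that hold under a uniform-frequency assumption can break down.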