Lab Meeting, Summer 2015, Week 5

Awe- and Aw-inspiring readings (or just stuff you think deserves a mention)

What we did over the last week

Florian

  1. Finished editing through another round of Butler et al. The next version is already in my inbox.
  2. Rewrote introduction and parts of the experiments, discussion, and conclusion of Yildirim et al. We now have a version that can be shared with others, i.e., modulo references it's all there.
  3. Provided feedback on Dave's NIPS submission, though I'm not sure he ended up submitting it.
  4. Received further updates on Scott Fraundorf's need+PART experiments. We have now finished running all subjects in the most recent version. The results are neat, and we started talking about a write-up as well as one more follow-up experiment that assesses what exactly readers who are exposed to the need+PART structure learn (i.e., what do they generalize to?).
  5. Received a revised write-up of Alex's paper on syntactic adaptation with and without verb overlap. To be edited.
  6. Received a draft from Kodi for his Encyclopedia article on generalization in speech perception.
  7. Received a revised write-up of Esteban's paper on phonological neighborhood density effects on lexical planning and articulation.
  8. Reviewed book proposal.
  9. Started editing through roughly the first half of Thomas Hoerberg's new thesis chapter.
  10. Replied to Masha about follow-ups to her attempts to replicate her own work using the web-based artificial language learning paradigm (a partial success).
  11. Still owe responses to: Nikki, Bozena.
  12. Got invited to organize CMCL next year. Thoughts welcome; remind me during lab meeting.

Andrew

  1. Got Alien Language Learning app to compile with the current Flex SDK and started converting it from a (Flex3 style) Halo/MX app to a (Flex4 style) Spark app.
  2. Actually started understanding how Spark apps are supposed to be constructed (and how far the ALL app is from it).
  3. Ran the rest of round 2 of Scott's BeDrop experiment, converted the results, and sent them to Scott.
  4. Rewrote the survey extraction script for Scott's NeedsWashed/BeDrop experiments.

Olga

  1. Organized all the RAs and updated the information on them and their pay periods.
  2. Trained Lauren on RedCap and gave her the assignment.
  3. Submitted and passed the amendment, so that Kodi is now on both protocols.
  4. Met with Esteban, Andrew, and Florian about the tutorial. Thought about reworking the structure. Will actually work on it this week to submit something by Friday.
  5. Read Tily et al. and Jackie's thesis. Created a direct readings page on the wiki.
  6. Started gathering all the documents for the audit.

Esteban

  1. Worked out a plan for testing the auto-aligner and auto-VOT extractor software from McGill.
  2. Finished up editing an Nth-draft resubmission for LCN.

Dave

  1. Wrote, revised, formatted, and submitted a paper to NIPS. (On inferring listeners' prior beliefs based on how they adapt to different input distributions.)
  2. Went over ROI and searchlight analyses of the animal-learning experiment w/ Raj and Lauren, concluded that things looked good, and planned out the next analyses.

Sarah

Dan

Amanda

  1. Presented at MXPrag and got some interesting feedback on the studies; considering following up on a few ideas.
  2. Met some other interesting researchers; hoping to keep in touch with Hannah Rohde and Cat Davies.
  3. Chatted with one of Michael Franke's students about my master's thesis, which is relevant to her work on the semantics of quantifiers such as "a few", "many", and "a lot" (these seem to have no context-independent meaning); hoping to keep in touch with her about that.
  4. Ran two versions of a production task for the Mammals study, one with instructions that highlighted the need to communicate information to an interlocutor.

Zach

  1. Finished coding the first Kamide replication
  2. Ran a very baby test version of the Kamide study
  3. Met with Lauren
  4. Made more progress on the accent-modeling project; got toy code working.
  5. Started working on a way to convert the stupid British pronunciations in the CELEX database into good ole American pronunciations.
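A converter like the one in item 5 might start from a substitution table over transcription symbols. Here is a minimal Python sketch of that idea; the symbol mappings below are illustrative IPA-like placeholders (not the actual CELEX DISC character codes), and `americanize` is a hypothetical helper, not code from the project:

```python
# Hypothetical sketch: remap a few British English vowel symbols to rough
# American equivalents. A real converter for CELEX would need a table over
# the DISC character inventory plus rules for things like rhoticity.
UK_TO_US = {
    "ɒ": "ɑ",    # LOT vowel: rounded British vowel -> unrounded American
    "əʊ": "oʊ",  # GOAT diphthong
}

def americanize(pron: str) -> str:
    """Apply longest-match substitutions over a transcription string."""
    keys = sorted(UK_TO_US, key=len, reverse=True)  # try longer symbols first
    out = []
    i = 0
    while i < len(pron):
        for k in keys:
            if pron.startswith(k, i):
                out.append(UK_TO_US[k])
                i += len(k)
                break
        else:
            out.append(pron[i])  # no mapping: copy the symbol through
            i += 1
    return "".join(out)
```

The longest-match loop matters because multi-character symbols (like the diphthong) would otherwise be broken apart by single-character rules.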

Linda

  1. Normalized the intensity of recordings for the syntax replication using Sarah's script.
  2. Worked with Lauren to clarify a few sentence-stimulus points.
  3. Got a clone of Andrew's boto scripts and set up a much better way of excluding participants from future tasks. Thanks Andrew!
  4. Worked on coding up a two-choice block for Maryam's use (participant picks between two different choices for each item, rather than always picking between the same two choices). Should be done soon.
  5. Started practicing how to drive
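One common way to do the intensity normalization mentioned in item 1 is to rescale each recording to a target RMS level. A minimal Python sketch of that general idea (an assumption about the approach; this is not Sarah's actual script, which presumably also handles file I/O):

```python
import math

def normalize_rms(samples, target_rms=0.1):
    """Scale float samples in [-1, 1] so their RMS equals target_rms.

    Minimal sketch: a real script would read/write audio files and
    handle clipping more gracefully than simple truncation.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return list(samples)  # silence: nothing to scale
    gain = target_rms / rms
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# Usage: a square wave at RMS 0.5 is scaled down to RMS 0.1.
quiet = normalize_rms([0.5, -0.5, 0.5, -0.5])
```

Normalizing to a common RMS (rather than peak amplitude) equalizes perceived loudness more closely, which is usually what matters for experimental stimuli.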

Maryam

LabmeetingSU15w5 (last edited 2015-06-08 17:38:31 by ZachBurchill)
