Lab Meeting, Spring 2015, Week 15

We'll discuss Morton, Sommers, and Lulich (2015). Dave will lead discussion. Please read the paper if at all possible (it's short). Pay special attention to the summary of the talker normalization literature in the intro. It's interesting and relevant to what a lot of us are working on, and provides some insight into how these issues are viewed by most of the hardcore speech perception community.

Awe- and Aw-inspiring readings (or just stuff you think deserves a mention)

* OpenSesame, an open-source, Python-based experiment builder under active development, sponsored by SR Research. --Andrew

What we did over the last week

Florian

  1. Presented at the Causality in the Language Sciences workshop at the Max Planck Institute for Evolutionary Anthropology in Leipzig (co-organized with the MPI for Mathematics in the Sciences). I talked about the work on information density and dependency length in English, German, Arabic, Czech, and Mandarin (together with Dan Gildea) and about Masha's work on a) dependency-length-based ordering preferences in head-final languages and b) the trade-off between configurationality (constituent order flexibility) and case-marking. This was the first time I talked about two of these topics, and I think the talk was well received. The conference was awesome, with great talks by, for example, Dan Dediu, Morten Christiansen, Balthasar Bickel, Michael Dunn, and Tanmoy Bhattacharya. Unfortunately, I missed the talks by Gerhard Jaeger and Fermin Moscoso del Prado Martin, but they sounded really interesting, too. Russell Gray (the new head of the new MPI in Jena) was there as well, and Martin Haspelmath asked lots of great questions, bringing in the perspective of linguistics and non-quantitative typology. There were a few talks I didn't quite get (e.g., Eduardo Altmann's; I wasn't sure what he was after), but they were all interesting. Among the student talks I probably liked Jasmeen Kanwal's (UCSD) best. She talked about Zipf's law of abbreviation, using the Google ngram corpus and artificial language learning to probe the causal link between redundancy and shortening more directly than any previous work I know of.

  2. Presented a slightly different version of this talk, combined with a report on Esteban's work on the role of interlocutor feedback, at the inaugural workshop of SFB 1102 on Information Density and Linguistic Encoding. This workshop, too, was a lot of fun. Ted Gibson gave the other plenary, and there were lots of interesting reports by the various subprojects of the SFB (an SFB is a 'special research area' -- a giant 4-year grant of about 8 million Euro, which can be extended for up to a total of 12 years; they currently have over 20 graduate students and 8 post-docs working with about 15 PIs ... all in information density!). The projects were really interesting.

    • Ekaterina (Katja) Kravtchenko presented a neat study on the role of abundant redundancy in inferences during language processing. Some of the logic of the experiment reminded me of what Amanda is now focusing on. You guys should get in touch!
    • Several other talks centered on the role of scripts (event schemata, like the sequence of events involved in going to a restaurant) in language processing (e.g., by showing that comprehenders condition their expectations on script knowledge).
    • Vera Demberg reported on her work on discourse connectives (together with Fatemeh).

Chigusa

Andrew

  1. Helped Masha fix JavaScript for running her MTurk experiments at Penn.

  2. Started looking at using JavaScript and the HTML5 Canvas to build a new web-based self-paced reading app, instead of continuing to fix and extend the Flash-based one. In the end it should be much lighter and faster, and it removes an external dependency (rough sketch after this list).

  3. Continued work on new webpage.

  4. Started porting Masha's artificial language experiment (with the Flash applet) into the list-balancing experiment runner.
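
To make the Canvas idea above concrete, here is a rough, hypothetical sketch of a moving-window self-paced reading trial (the canvas id, sentence, and key binding are made up for illustration; this is not the actual app): each spacebar press reveals the next word and logs the time since the previous press.

    // Hypothetical sketch of a moving-window self-paced reading trial on an HTML5 canvas.
    // Assumes a <canvas id="spr"> element on the page; all names are illustrative.
    const canvas = document.getElementById('spr');
    const ctx = canvas.getContext('2d');
    ctx.font = '24px monospace';

    const words = 'The editor fixed the script before the deadline .'.split(' ');
    let current = -1;          // index of the word currently unmasked (-1 = all masked)
    let lastPress = null;      // timestamp of the previous spacebar press
    const readingTimes = [];   // one {word, rt} entry per word

    function draw() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      let x = 20;
      words.forEach((w, i) => {
        const text = (i === current) ? w : '-'.repeat(w.length);  // mask all but the current word
        ctx.fillText(text, x, 50);
        x += ctx.measureText(w + ' ').width;
      });
    }

    document.addEventListener('keydown', (e) => {
      if (e.code !== 'Space' || current >= words.length) return;
      const now = performance.now();
      if (lastPress !== null && current >= 0) {
        readingTimes.push({ word: words[current], rt: now - lastPress });
      }
      lastPress = now;
      current += 1;
      if (current >= words.length) {
        console.log(readingTimes);  // in a real experiment these would be sent to the server
      } else {
        draw();
      }
    });

    draw();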

Olga

  1. Went over the continuing review with Andrew and submitted it to Florian to sign off on.
  2. Worked on the tutorial; figured out how to link pages.
  3. Need to figure out the RA situation and extend a formal invitation to have them work here this summer...

Esteban

  1. reviewed an abstract for Scott Seyfarth
  2. learned how to hack an ExBuilder script (that I got from Anne Pier)

  3. created a small pilot experiment to learn how to use the eye-tracker and figure out what needs to be fixed for the full pilot
  4. read the forthcoming Christiansen & Chater (2015) BBS paper

  5. ran a pilot and then a full study for the SZ project with Scott Seyfarth

Dave

  1. Wrote and gave a guest lecture in Ralf's computational neuroscience class about speech perception and the challenges of translating computational-level Bayesian accounts into neural models.

  2. Coded up a basic proof-of-concept Stan model for conjugate belief updating (and an R script to run it).

  3. Coded and ran leave-one-out analysis of fMRI data on learning animal categories.
  4. Created out-of-scanner version of VOT fMRI study for post-test (repo).

  5. Read a bunch of papers on how talker identity learning and talker normalization interact. Consensus appears to be that they don't (very much).
  6. Re-read the reviews of the CogSci paper and drafted a plan for how to address them.

Sarah

  1. worked on slides for ETAP talk
  2. further debugged the new eye-tracking Gotta script
  3. prepped the Excel sheets for SET pairs 3 and 5 for new speech act annotations by Ellen
  4. did some recordings for Maryam

Dan

Amanda

  1. Met with Valerie and Teigan to talk about what remains to be done for their Wugland study.
  2. Discussed with Olga the plan to try to collect data from the remaining kids in time for the BUCLD deadline and for my MXPrag talk.
  3. Ran myself in the Naming Task with the eyetracker, but unfortunately have not had time to go over the data to be certain that it is exporting correctly. I still need to double-check all of my work to ensure that I haven't made any time-related calculation errors. In a previous week I shared a link to scripts that read the duration times from Praat TextGrids; ideally I would like to write some code that outputs those durations in the format I need for XBuilder, because I don't trust my Excel calculations (human error, etc.).
  4. Chatted about my pragmatic speaker adaptation stuff in the adaptation class; tried to talk about it with respect to how the strength of your priors affects the amount / kind of evidence you need to decide to generalize.
  5. Booked my flights to MXPrag: I'll be in Berlin May 28 to June 4, and will be in the Netherlands / Belgium until the 11th. I'm hoping to meet with Emiel Krahmer (Tilburg) and Hans Westerbeek (Tilburg; works on the role of object knowledge in reference production).
  6. Looked over the MXPrag schedule; it looks like it will be an interesting workshop. I'd like to brush up on some of the details of my MA thesis and on work by Stephanie Solt before hearing Michael Franke's talk.

  7. Looked over my CogSci reviews, divided the tasks into simple vs. difficult fixes, and came up with some initial strategies for dealing with the more difficult corrections.

  8. Rewrote my grant proposal again (well, at the point of writing this I'm still working on it, but by the time we meet a new draft will be entirely written... hopefully). The proposal aims to use an artificial language learning task to look at how people's knowledge about the distributional features of objects influences their expectations about the kinds of referring expressions a speaker might produce, and how it influences production itself. It also asks what role interlocutor knowledge plays in the kinds of expectations people have for production and comprehension, and it opens up possibilities for studying how we develop the strategies that allow us to make informative predictions / productions based on shared world knowledge.
  9. Read a paper about Russian children's ability to interpret color adjectives as contrastive (Sekerina & Trueswell, 2012). Summary of findings: kids are slow at anticipating referents and do best when the preceding discourse makes the contrast salient; processing is speeded up only on trials where a pitch accent was used on the adjective in a non-split constituent. (Apparently in Russian you can use a split order, "red put butterfly", or a non-split order, "red butterfly put"; for adults the split order gives them more time to resolve that the color word was used contrastively, and they are seemingly facilitated by a contrastive pitch accent on the adjective.)

Zach

Linda

  1. Used the results from last week's norming study to create a pretest that will (hopefully) filter out participants whose audio equipment prevents them from distinguishing the S/SH contrast. I'm really hoping that this will improve the quality of the data in experiments involving these sounds.
  2. Gave vision lecture (that was originally supposed to be for this Tuesday) last Thursday in Steve's class.
  3. Read up on garbage collection in JavaScript. Learned how to (kind of) make sense of Chrome's memory tools, and stomped out a few leaks involving event listeners, callback functions, and global variables (see the sketch after this list for the general pattern).
  4. Started coding up a 2AFC task that swaps through multiple labels for Maryam.
  5. Met with RAs and showed them how to work the eyetracker on the test list we have.
  6. Double-checked the RAs' work for some of the sentence annotation task.
  7. Put together a presentation for KurTan tomorrow (on the eye-tracking experiment).
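
For reference, here is the generic shape of the event-listener leaks mentioned above -- not Linda's actual code, and all names are illustrative: a handler that closes over trial data keeps that data alive for as long as the listener is attached, so each trial should hand back a cleanup function.

    // Hypothetical sketch: a click handler closing over large trial data keeps that data
    // alive for as long as the listener stays attached to the button.
    function attachTrialHandler(button, trialData) {
      function onClick() {
        console.log('response recorded for trial', trialData.id);
      }
      button.addEventListener('click', onClick);
      // The fix: detach the listener (and drop other references) when the trial ends,
      // so trialData becomes eligible for garbage collection.
      return function cleanup() {
        button.removeEventListener('click', onClick);
      };
    }

    // Usage sketch:
    // const cleanup = attachTrialHandler(document.getElementById('respond'), { id: 1, audio: bigBuffer });
    // ... run the trial ...
    // cleanup();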

Maryam

  1. Wrote (most of) my paper for the Aud Perception class. Read a buttload of old papers on speech perception with cochlear implants.
  2. Met with RAs to show them how to prepare audio stimuli for second round of norming
  3. Was "expert" on my own work in class on Wed.
  4. Forgot to register for ETAP
