#acl HlpLabGroup:read,write,delete,revert,admin All:read
#format wiki
#language en
#pragma section-numbers 3

= Mechanical Turk =

== Two ways to use Mechanical Turk ==

 1. For simple task designs, use the web-based requester interface.
 1. For more complex designs, use the command-line tools for External Questions.

=== Web-based requester interface ===

You should use the [[https://requester.mturk.com/mturk/|web-based requester interface]] (for running actual subjects) or the web-based [[https://requestersandbox.mturk.com/mturk/|requester sandbox]] (for testing HITs before running actual subjects) if your experiment falls into one of the following categories:

 * You only have one list (i.e. not a balanced latin-square type design) that all workers (i.e. subjects) will do, and the experiment is not multi-part.
 * You will be linking the workers to a task at an outside site and having them copy and paste some sort of result or code back in to prove that they did the task (e.g. a self-paced reading study at [[http://spellout.net/ibexfarm|IbexFarm]]).
 * You are using a Flash applet (or Silverlight, etc.) that handles tracking all of the user state, list balancing, etc. With Flash this could be done with [[http://en.wikipedia.org/wiki/Local_Shared_Object|Local Shared Objects]] (aka LSOs or "Flash cookies"). Other web applet technologies may have similar mechanisms.

Lists are uploaded as CSV files and results are downloaded as CSV files. A template for a HIT uses the [[http://en.wikipedia.org/wiki/Apache_Velocity|Velocity]] templating language (or possibly just a subset of it) to fill in any variables coming from your CSV files. Each row represents one HIT in your HIT group. While your HITs are running, the requester site displays a progress bar that shows how many of the HITs in your group have been completed.

=== External Questions ===

With External Questions, you host the experiment on an external server, but the results are [[http://en.wikipedia.org/wiki/POST_%28HTTP%29|POSTed]] to the Mechanical Turk (or Mechanical Turk Sandbox) site. You are responsible for the creation of any HTML templates your HIT needs, any backend (e.g. database) to fill the templates, the CGI script(s) to present the templates, etc. All Amazon does is pass certain variables (discussed later on this page) to your webserver, display the page you specify in an iframe, and accept the POSTed results.

To upload an External Question, you need three files:

 * input - A tab-delimited file with additional variables you want passed to your script by Amazon. Each line represents one HIT in your HIT group.
 * properties - Describes the title, description, pay rate, required qualifications, etc. for your HITs. It has a specific format, so it's best to copy one from the examples that come with the command line tools and edit it for your purposes.
 * question - An XML file with properties of the external URL that Amazon will be passing variables to. Again, it's easiest to copy an example from the command line tools and edit it. Uses Velocity syntax (see the link in the web-based section) for variables.

You also need Amazon's [[http://aws.amazon.com/developertools/694?_encoding=UTF8&jiveRedirect=1|command line tools]] (written in Java, so you also need a JVM on your computer). For further documentation see http://mturk.s3.amazonaws.com/CLT_Tutorial/UserGuide.html

When using External Questions, the results that your script POSTs to Mechanical Turk are also retrieved (as CSV files) with the command line tools.
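The script that your question file points at is an ordinary CGI script on your own server. Below is a minimal sketch of the request/response cycle: Amazon appends its parameters (listed in the next section) to your URL, and your page must POST its results back to Mechanical Turk's externalSubmit endpoint with the assignmentId included. The script name, form fields, and page contents here are placeholders (not anything Amazon provides); the externalSubmit URL and the assignmentId requirement come from Amazon's External Question documentation, but check the current docs before relying on them.

{{{#!highlight python numbers=disable
#!/usr/bin/env python
# Minimal sketch of a CGI script behind an External Question.
# The form layout and the "response" field are placeholders; the
# externalSubmit URL and the assignmentId requirement are standard
# Mechanical Turk behavior, but verify against the current documentation.
import os
from urllib.parse import parse_qs

# Amazon appends these parameters to the URL named in your question file.
params = parse_qs(os.environ.get("QUERY_STRING", ""))
assignment_id = params.get("assignmentId", [""])[0]
worker_id = params.get("workerId", [""])[0]
hit_id = params.get("hitId", [""])[0]

# While a worker is only previewing the HIT, Amazon sends this sentinel
# value instead of a real assignment id; show instructions but don't
# let them submit.
previewing = assignment_id == "ASSIGNMENT_ID_NOT_AVAILABLE"
disabled = "disabled" if previewing else ""

# Results go back to Mechanical Turk, not to your own server. Use
# https://workersandbox.mturk.com/mturk/externalSubmit when testing
# in the sandbox.
submit_url = "https://www.mturk.com/mturk/externalSubmit"

print("Content-Type: text/html\n")
print(f"""<html><body>
<form method="POST" action="{submit_url}">
  <!-- assignmentId must be POSTed back along with the results -->
  <input type="hidden" name="assignmentId" value="{assignment_id}">
  <p>Trial for worker {worker_id} on HIT {hit_id} goes here.</p>
  <input type="text" name="response">
  <input type="submit" value="Submit" {disabled}>
</form>
</body></html>""")
}}}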
There is no progress bar on the requester site for External Question HITs.

Amazon gives your script:

 * Hit``Id - identifier for a given HIT (aka "trial" to us)
 * Worker``Id - unique identifier of the worker doing the HIT. It is unique on the system, not just for the HIT, so you could see the same worker on many HITs over time.
 * Assignment``Id - unique identifier for the assignment. Appears to be the Hit``Id plus some sort of hash.
 * any user-created annotation - I often use Trial``Id

==== Creating Balanced Lists ====

Amazon creates a HIT for each trial, and creates as many assignments of each HIT as you tell it to. We want to:

 * show each worker items from only one list
 * use each list the same number of times
 * use each item from each list the same number of times

Problems:

 * Workers can start with any HIT (trial) in a given assignment.
 * Workers can return HITs at any time, making them available to a new worker, but given the information Amazon gives you, there's no way for you to know when this happens, so you cannot automatically start the new worker where the old one left off.

Solution:

{{{
If worker seen before:
    fetch items for trial based on list from past trials and display items
Else:
    ??? Somehow assign them to one of the lists, attempting to get an equal
    number of workers on each list for each item.
}}}

 * Assigning new workers by count of workers modulo the number of lists doesn't work, as workers can return HITs at any point and throw off the list count.
 * Assigning new workers by count of assignments of the current HIT (mod the number of lists) doesn't work because workers can start with any HIT, so you could be assigning them to a list that's already taken by someone currently doing another HIT.
 * Either way, in the pathological case you end up with some lists being overassigned and others underassigned. Based on experience, many people only do one or two trials and most do fewer than five, so it's very easy for the pathological situation to happen. (One possible compromise is sketched at the end of this section.)

One way to improve the number of HITs each worker completes in a group is to offer a bonus. Pay a low amount for each HIT, but state in the instructions that you will pay a bonus to workers who complete certain numbers of HITs. For example, on an experiment with 16 HITs (possibly with multiple trials within a given HIT), pay $0.10 per HIT, but at 5 HITs pay a $0.45 bonus, at 10 HITs pay a $1.50 bonus, and at 16 HITs pay a $3.90 bonus, where each worker gets paid for exactly one bonus level (the highest level they completed). An example graphic that went with just such an experiment is [[http://www.hlp.rochester.edu/mturk/compensation_crmrecall1.png|at this link]]. A visual aid like this helps make it clear to the workers that it's worth their time to do as many HITs in the group as they can.

A way to increase the amount of work the average worker does is to combine multiple trials into a single HIT. You can use JavaScript to hide all trials except the current one. In theory, you could put all of your trials into a single HIT, but if your experiment is very long, you run the risk of fatigued or bored workers doing substandard work just to get it over with and get paid. It's better to give them an out at reasonable intervals.
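One possible compromise for the "???" step above is to assign each new worker to whichever list currently has the fewest workers recorded for it. This is only a sketch under assumed bookkeeping (a persistent worker-to-list table on your own server, which is not something Amazon provides), and it does not fully solve the returned-HIT problem described above; it just degrades more gracefully than counting modulo the number of lists.

{{{#!highlight python numbers=disable
# Sketch: assign each worker to a list, assuming you persist a table
# mapping workerId -> list number (e.g. in your database) between requests.
# All names here are placeholders, not part of any Amazon API.

def assign_list(worker_id, worker_lists, n_lists):
    """Return the list number for worker_id, choosing one if they are new.

    worker_lists: dict mapping workerId -> list number for workers already
    assigned; n_lists: number of lists in the experiment.
    """
    if worker_id in worker_lists:
        # Seen before: keep them on the same list for every HIT they do.
        return worker_lists[worker_id]
    # New worker: pick the list with the fewest workers so far. Workers who
    # return HITs after a trial or two will still skew the counts, but less
    # badly than assigning by worker count modulo the number of lists.
    counts = {lst: 0 for lst in range(n_lists)}
    for lst in worker_lists.values():
        counts[lst] += 1
    chosen = min(counts, key=counts.get)
    worker_lists[worker_id] = chosen
    return chosen
}}}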
== Helpful Code ==

=== Geographic Info ===

Via Neal Snider, from Robert Munro (with minor changes by Andrew Watts for formatting and to make it valid HTML). If you place it in the design view of your template, it will use the IP address and browser settings of each Turker to populate fields with some useful demographics like 'City', 'Region', 'Country', and 'User Display Language'.

{{{#!highlight html numbers=disable
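<!-- Illustrative sketch only: hidden fields for the demographics described
     above, filled in by JavaScript. The display-language field can be read
     directly from the browser; the City/Region/Country fields need an
     IP-based geolocation lookup, and no particular service is assumed
     here - wire setGeoFields() up to whichever one you use. -->
<input type="hidden" id="city" name="City" value="" />
<input type="hidden" id="region" name="Region" value="" />
<input type="hidden" id="country" name="Country" value="" />
<input type="hidden" id="display_language" name="UserDisplayLanguage" value="" />
<script type="text/javascript">
  // The browser reports its configured display language directly.
  document.getElementById("display_language").value =
      navigator.language || navigator.userLanguage || "";
  // Call this from the callback of your geo-IP service with an object
  // like {city: ..., region: ..., country: ...}.
  function setGeoFields(geo) {
    document.getElementById("city").value = geo.city || "";
    document.getElementById("region").value = geo.region || "";
    document.getElementById("country").value = geo.country || "";
  }
</script>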

}}}

== Tutorials ==

 * [[http://mechanicalturk.typepad.com/|Official Mechanical Turk Blog]] from Amazon
 * [[http://www.itworld.com/internet/76659/experimenting-mechanical-turk-5-how-tos|5 Mechanical Turk Howtos]]
  * n.b. Amazon removed the "send an email message" feature that the article says to use to invite good workers to do followups. Workers can email you, but you can't email them anymore.
  * Also, don't "ban" workers who have already done a study. It hurts their ability to do future HITs for anyone.
  * The use of credentials (i.e. qualifications) they mention is difficult to do: except for a few built-in ones (e.g. % approval rating, location), they can only be manipulated with the command line tools and can only be used with experiments created with the command line tools.

== Papers ==

[[attachment:KapelnerChandler-PreventingSatisficingInOnlineSurveys.pdf|Preventing Satisficing In Online Surveys]]