
Experiments using Crowdsourcing Platforms

Crowdsourcing platforms provide an online mechanism for recruiting people to perform small tasks for small payments - typically a few cents per task. These platforms can be very convenient for experiments that require a large number of participants to carry out relatively small tasks, such as simple perceptual or rating decisions.

It is, however, difficult to control for education, demographics, cultural background, or other factors that would normally be managed in experimental design. It is also necessary to assess the quality of responses, because contributors to these sites aim to earn as much money as possible in as short a time as possible. Tasks involving complex or subtle judgements therefore need to be carefully controlled.
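One common way to assess response quality is to embed attention checks and to flag implausibly fast completions, then screen the exported results before analysis. The sketch below is a minimal, illustrative example only; the file name, column names ("attention_check_passed", "completion_seconds") and the time threshold are assumptions, not part of any platform's export format.

```python
"""Illustrative response-quality screen for crowdsourced data.

The column names and file name are hypothetical; adapt them to the
fields your crowdsourcing platform actually exports.
"""
import csv

MIN_COMPLETION_SECONDS = 30  # assumed lower bound for a genuine attempt


def load_responses(path):
    """Read the platform's results export (CSV) into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def passes_screen(row):
    """Keep a response only if it passed the embedded attention check
    and was not completed implausibly quickly."""
    passed_check = row["attention_check_passed"].strip().lower() == "true"
    too_fast = float(row["completion_seconds"]) < MIN_COMPLETION_SECONDS
    return passed_check and not too_fast


if __name__ == "__main__":
    responses = load_responses("results.csv")
    kept = [r for r in responses if passes_screen(r)]
    print(f"Kept {len(kept)} of {len(responses)} responses after screening.")
```

Thresholds of this kind are a design choice for the experimenter: they should be decided, and ideally declared to participants, before data collection rather than tuned afterwards.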

Because all tasks must be administered online, experimental materials must be designed so that they can be presented in a web browser.

Advice on the design of crowdsourcing experiments:

http://sites.cognitivescience.co/knowledgebase/resources/crowdsourcing-participants-and-work-using-amazon-mechanical-turk


Constraints

Recruiting participants via a crowdsourcing site is closer in nature to an employment contract than to a typical experiment, and most commercial use of these sites is oriented toward the efficient completion of small, mundane tasks. This status is somewhat controversial - for example, the income earned by workers is often below the legal minimum wage in the country where they reside.

However, everyone contributing to a crowdsourcing site has consented to the contractual terms of that site. As an experimenter administering experiments via the site, you will be expected to do likewise as part of the terms of service. For two major providers, these are as follows:


CrowdFlower


Amazon Mechanical Turk (AMT)

AMT policies:

Best practices for task requesters:

Information provided for AMT workers: