DOI: 10.1145/1357054.1357127
CHI Conference Proceedings · Research article

Crowdsourcing user studies with Mechanical Turk

Published online: 06 April 2008

ABSTRACT

User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs of engaging users often require practitioners to trade off among sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users at low time and monetary cost. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
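To make the paradigm concrete, the kind of micro-task posting the abstract describes can be sketched with the modern `boto3` MTurk API. This is purely illustrative and not code from the paper; the task description, reward, durations, and worker counts below are all assumed values.

```python
# Hypothetical sketch: assembling a user-study micro-task (HIT) for
# Mechanical Turk. All parameter values here are illustrative
# assumptions, not taken from the paper.
def build_hit_params(title, reward_usd, max_workers, question_xml):
    """Assemble keyword arguments for the MTurk create_hit call."""
    return {
        "Title": title,
        "Description": "Rate the quality of an article (user study).",
        "Reward": f"{reward_usd:.2f}",        # MTurk expects a string, e.g. "0.05"
        "MaxAssignments": max_workers,        # number of distinct workers per task
        "AssignmentDurationInSeconds": 600,   # time each worker has to respond
        "LifetimeInSeconds": 3 * 24 * 3600,   # HIT stays listed for 3 days
        "Question": question_xml,             # QuestionForm XML for the survey
    }

params = build_hit_params("Rate this article", 0.05, 50,
                          "<QuestionForm>...</QuestionForm>")

# To actually post the task (requires AWS credentials; the sandbox
# endpoint shown here is for testing without paying workers):
# import boto3
# client = boto3.client(
#     "mturk",
#     endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com")
# client.create_hit(**params)
```

Posting many small, cheap assignments in parallel is what gives the approach its speed and low cost; the paper's caution applies to how the question itself is formulated, not to the posting mechanics.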

