
SurveyMan: programming and automatically debugging surveys

Published: 15 October 2014

Abstract

Surveys can be viewed as programs, complete with logic, control flow, and bugs. Word choice or the order in which questions are asked can unintentionally bias responses. Vague, confusing, or intrusive questions can cause respondents to abandon a survey. Surveys can also have runtime errors: inattentive respondents can taint results. This effect is especially problematic when deploying surveys in uncontrolled settings, such as on the web or via crowdsourcing platforms. Because the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct.
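The "runtime errors" mentioned above — inattentive respondents tainting results — can in principle be caught statistically. As an illustration only (this is a generic outlier-detection sketch, not SurveyMan's actual algorithm; all function names are hypothetical), one can score each respondent's answers by their log-likelihood under the empirical answer distribution and flag respondents who are improbably far from the crowd:

```python
# Hypothetical sketch: flag inattentive ("random") respondents by scoring
# each response against the empirical per-question answer distribution.
# Names and thresholds are illustrative, not SurveyMan's API.
from collections import Counter
import math

def answer_frequencies(responses):
    """Per-question empirical distribution of chosen options."""
    freqs = {}
    n = len(responses)
    for q in responses[0]:
        counts = Counter(r[q] for r in responses)
        freqs[q] = {opt: c / n for opt, c in counts.items()}
    return freqs

def log_likelihood(response, freqs):
    """Log-probability of one response under the empirical distribution."""
    return sum(math.log(freqs[q].get(a, 1e-9)) for q, a in response.items())

def flag_outliers(responses, z=2.0):
    """Return indices of respondents far below the mean likelihood."""
    freqs = answer_frequencies(responses)
    scores = [log_likelihood(r, freqs) for r in responses]
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5 or 1.0
    return [i for i, s in enumerate(scores) if (mean - s) / sd > z]
```

A respondent who picks options almost nobody else picks scores far below the mean and is flagged, while respondents who track the consensus are not.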

We present SurveyMan, a system for designing, deploying, and automatically debugging surveys. Survey authors write their surveys in a lightweight domain-specific language aimed at end users. SurveyMan statically analyzes the survey to provide feedback to survey authors before deployment. It then compiles the survey into JavaScript and deploys it either to the web or a crowdsourcing platform. SurveyMan's dynamic analyses automatically find survey bugs, and control for the quality of responses. We evaluate SurveyMan's algorithms analytically and empirically, demonstrating its effectiveness with case studies of social science surveys conducted via Amazon's Mechanical Turk.
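The abstract describes a lightweight domain-specific language aimed at end users. Purely as an illustration of what such a tabular survey DSL might look like — the column names, grammar, and parser below are assumptions for this sketch, not SurveyMan's actual specification — consider a CSV format where blank question cells continue the previous question:

```python
# Illustrative sketch of a lightweight tabular survey DSL and its parser.
# The CSV schema (QUESTION, OPTIONS columns) is hypothetical.
import csv
import io

def parse_survey(text):
    """Group option rows under their question; a blank QUESTION cell
    attaches the row's option to the most recent question."""
    survey, current = [], None
    for row in csv.DictReader(io.StringIO(text)):
        if row["QUESTION"].strip():
            current = {"question": row["QUESTION"], "options": []}
            survey.append(current)
        current["options"].append(row["OPTIONS"])
    return survey

SPEC = """\
QUESTION,OPTIONS
What is your favorite color?,Red
,Green
,Blue
How often do you take surveys?,Never
,Sometimes
,Often
"""
```

A spreadsheet-style format like this is plausible for end users precisely because it needs no programming constructs: each row is either a new question or another option, and analyses (question-order randomization, bug detection) can operate on the parsed structure.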




Published in

ACM SIGPLAN Notices, Volume 49, Issue 10 (OOPSLA '14), October 2014, 907 pages. ISSN: 0362-1340. EISSN: 1558-1160. DOI: 10.1145/2714064. Editor: Andy Gill.

Also appears in: OOPSLA '14: Proceedings of the 2014 ACM International Conference on Object Oriented Programming Systems Languages & Applications, October 2014, 946 pages. ISBN: 9781450325851. DOI: 10.1145/2660193.

Copyright © 2014 ACM. Publisher: Association for Computing Machinery, New York, NY, United States. Research article.
