Eliciting Structured Knowledge from Situated Crowd Markets

Published: 27 March 2017

Abstract

We present a crowdsourcing methodology for eliciting highly structured knowledge in response to arbitrary questions. The method elicits potential answers (“options”), criteria against which those options should be evaluated, and a ranking of the top options. Our study shows that situated crowdsourcing markets can reliably elicit and moderate knowledge, producing rankings of options across different criteria that correlate with those of established online platforms. Our evaluation also shows that local crowds can contribute knowledge that is missing from online platforms and that reflects how a local crowd perceives a particular issue. Finally, we discuss the benefits and challenges of eliciting structured knowledge from local crowds.

