
Passerby Crowdsourcing: Workers' Behavior and Data Quality Management

Abstract

Worker recruitment is an important problem in crowdsourcing, and many proposals place equipment in physical spaces to recruit workers. An essential challenge of this approach is keeping people engaged: those who perform tasks at first gradually lose interest and stop accessing the equipment. This study takes a different approach to the worker recruitment problem. We move into people's personal spaces by projecting task images on the floor, allowing passersby to access tasks effortlessly while walking. The problem then shifts from keeping people engaged to managing data quality, because many passersby unconsciously or intentionally walk through the task screen on the floor without doing the task, which produces unintended results. We explore a machine-learning approach that selects only the intended results and thereby manages data quality. The system assesses workers' intention from their behavior. We identify features for the classifiers based on our observations of passersby, and we conduct extensive evaluations with real data. The results show that the features are effective in practice and that the classifiers improve data quality.
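The abstract describes filtering passerby results with a classifier trained on behavioral features. The paper's actual features and model are not given here, so the following is only a minimal sketch under assumed, hypothetical features (dwell time, walking speed, whether the passerby stopped) and a toy rule-based stand-in for the learned classifier:

```python
# Hypothetical sketch of the behavior-based quality filter described in the
# abstract. Each answer collected from the floor-projected task is paired
# with behavioral features of the passerby, and only results judged
# intentional are kept. Feature names and thresholds are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass


@dataclass
class Interaction:
    dwell_time_s: float       # time spent over the projected task area
    walking_speed_mps: float  # average speed while crossing the screen
    stopped: bool             # whether the passerby paused on an answer region


def is_intentional(x: Interaction) -> bool:
    """Toy stand-in for the learned classifier: a passerby who slows down,
    lingers, and pauses on the task is likely answering on purpose."""
    score = 0.0
    score += 1.0 if x.stopped else 0.0
    score += 1.0 if x.dwell_time_s > 1.5 else 0.0
    score += 1.0 if x.walking_speed_mps < 0.8 else 0.0
    return score >= 2.0


def filter_results(pairs):
    """Keep only answers whose accompanying behavior suggests intent."""
    return [answer for answer, feats in pairs if is_intentional(feats)]
```

In the study itself this rule would be replaced by a trained classifier; the point of the sketch is only the pipeline shape: behavioral features attached to each result, followed by a keep/discard decision.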


Index Terms

  1. Passerby Crowdsourcing
