DOI: 10.1145/3411764.3445399
CHI Conference Proceedings · Research Article · Public Access

Comparing Generic and Community-Situated Crowdsourcing for Data Validation in the Context of Recovery from Substance Use Disorders

Published: 07 May 2021

ABSTRACT

Targeting the right group of workers for crowdsourcing often achieves better quality results. One unique example of targeted crowdsourcing is seeking community-situated workers, whose familiarity with the background and norms of a particular group can help produce better outcomes or accuracy. These community-situated crowd workers can be recruited in different ways, from generic online crowdsourcing platforms or from online recovery communities. We evaluate three different approaches to recruiting generic and community-situated crowd workers in terms of the time and cost of recruitment and the accuracy of task completion. We consider the context of Alcoholics Anonymous (AA), the largest peer support group for recovering alcoholics, and the task of identifying and validating AA meeting information. We discuss the benefits and trade-offs of recruiting paid vs. unpaid community-situated workers, provide implications for future research in the recovery context and relevant domains of HCI, and offer guidance for the design of crowdsourcing ICT systems.
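The comparison the abstract describes rests on three measurable quantities: recruitment time, recruitment cost, and task accuracy. As a minimal illustrative sketch (not the authors' code; the approach name, data, and all values below are hypothetical placeholders), accuracy for a data-validation task of this kind can be computed by majority-voting each item's worker labels against a gold reference, with cost normalized per validated item:

# Illustrative sketch only: comparing a recruitment approach on time, cost,
# and validation accuracy. All inputs here are hypothetical placeholders;
# the paper's actual measures and data may differ.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ApproachResult:
    name: str
    recruitment_hours: float   # time needed to assemble the crowd
    total_cost_usd: float      # worker payments plus any ad spend
    answers: dict              # item_id -> list of worker labels

def majority_label(labels):
    """Return the most common label workers gave for one item."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(result, gold):
    """Fraction of items whose majority-voted label matches the gold label."""
    correct = sum(
        majority_label(result.answers[item]) == gold[item]
        for item in gold
    )
    return correct / len(gold)

# Hypothetical example: validating whether listed AA meetings are current.
gold = {"meeting_1": "valid", "meeting_2": "outdated"}
generic_crowd = ApproachResult(
    name="generic (paid platform)",
    recruitment_hours=2.0,
    total_cost_usd=30.0,
    answers={"meeting_1": ["valid", "valid", "outdated"],
             "meeting_2": ["outdated", "outdated", "valid"]},
)
print(generic_crowd.name,
      accuracy(generic_crowd, gold),
      generic_crowd.total_cost_usd / len(gold))

Under this framing, a faster or cheaper crowd is only preferable if its majority-voted accuracy holds up against the gold reference, which is the trade-off the study examines.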




Published in

CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 10862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764

      Copyright © 2021 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



Qualifiers

• Research article
• Refereed limited

Acceptance Rates

Overall acceptance rate: 5,789 of 24,782 submissions, 23%
