Abstract
While crowd workers typically complete a variety of tasks on crowdsourcing platforms, there is no widely accepted method for successfully matching workers to different types of tasks. Researchers have considered using worker demographics, behavioural traces, and prior task completion records to optimise task assignment. However, optimal task assignment remains a challenging research problem due to the limitations of these approaches, which in turn can have a significant impact on the future of crowdsourcing. We present 'CrowdCog', an online dynamic system that performs both task assignment and task recommendation, relying on fast-paced online cognitive tests to estimate worker performance across a variety of tasks. Our work extends prior work that highlights the effect of workers' cognitive ability on crowdsourcing task performance. Our study, deployed on Amazon Mechanical Turk, involved 574 workers and 983 HITs spanning four typical crowd tasks (Classification, Counting, Transcription, and Sentiment Analysis). Our results show that both our assignment and recommendation methods yield a significant performance increase (5% to 20%) compared to generic or random task assignment. Our findings pave the way for the use of quick cognitive tests to provide robust recommendations and assignments to crowd workers.
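The abstract does not spell out CrowdCog's models, but the core idea can be sketched as follows: predict a worker's performance on each task type from their cognitive test scores, then assign the top-ranked type (or recommend the top few). The sketch below is a minimal illustration only, not the paper's implementation; the linear form, the test names, and the weights are all hypothetical stand-ins for whatever mapping CrowdCog actually learns from worker data.

```python
# Illustrative sketch only: names, measures, and weights are hypothetical.

# A worker's normalised scores (0..1) on quick cognitive tests.
worker_scores = {"stroop": 0.82, "flanker": 0.67, "n_back": 0.74, "task_switching": 0.59}

# Hypothetical per-task-type weights, e.g. fitted by regressing historical
# task accuracy on cognitive test scores.
task_weights = {
    "classification":     {"stroop": 0.5, "flanker": 0.2, "n_back": 0.2, "task_switching": 0.1},
    "counting":           {"stroop": 0.1, "flanker": 0.5, "n_back": 0.3, "task_switching": 0.1},
    "transcription":      {"stroop": 0.2, "flanker": 0.1, "n_back": 0.5, "task_switching": 0.2},
    "sentiment_analysis": {"stroop": 0.3, "flanker": 0.2, "n_back": 0.2, "task_switching": 0.3},
}

def predicted_performance(scores, weights):
    """Linear prediction of task performance from cognitive test scores."""
    return sum(weights[test] * score for test, score in scores.items())

# Assignment routes the worker to the task type with the highest prediction;
# recommendation surfaces the top-k task types instead.
ranked = sorted(task_weights,
                key=lambda t: predicted_performance(worker_scores, task_weights[t]),
                reverse=True)
print("assign:", ranked[0])
print("recommend:", ranked[:2])
```

In practice the weights would be fitted on historical worker performance and the test scores normalised per test before combining, but the assign-versus-recommend distinction reduces to taking the single best prediction versus the top-k list, as above.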
Supplemental Material
Code for CrowdCog: A Cognitive Skill based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing. This repository contains the source code for the CrowdCog framework used in the study. CrowdCog is built in Python on the Django framework. Crowdsourcing tasks and experiments deployed through the framework integrate with the Amazon Mechanical Turk platform via psiTurk, an open-source framework for running behavioural experiments online. Cognitive tests are implemented with jsPsych, a JavaScript library for running behavioural experiments in a web browser. Please see the README file for further details on deployment.
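As a rough illustration of the data flow, the sketch below scores one jsPsych test's JSON output server-side. It assumes jsPsych's standard per-trial `rt` field (in milliseconds) plus a hypothetical `correct` flag that the test tags onto each response trial; consult the repository for CrowdCog's actual scoring pipeline.

```python
# Rough sketch of server-side scoring of a jsPsych test's JSON output.
# Assumes each trial record carries jsPsych's standard "rt" field (ms) plus
# a "correct" flag added by the test itself; the real scoring may differ.
import json
from statistics import median

def score_cognitive_test(raw_json: str) -> dict:
    trials = json.loads(raw_json)
    # Skip trials without a recorded response time (e.g. timed-out trials,
    # where jsPsych records a null rt).
    responses = [t for t in trials if t.get("rt") is not None]
    accuracy = sum(1 for t in responses if t.get("correct")) / len(responses)
    return {
        "accuracy": accuracy,                                # fraction correct
        "median_rt_ms": median(t["rt"] for t in responses),  # speed measure
    }

raw = '[{"rt": 512, "correct": true}, {"rt": 348, "correct": true}, {"rt": 610, "correct": false}]'
print(score_cognitive_test(raw))  # {'accuracy': 0.666..., 'median_rt_ms': 512}
```

A composite of accuracy and speed like this is one plausible way to turn raw jsPsych trial logs into the per-test scores that a system such as CrowdCog feeds into its assignment models.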