Research Article · Public Access

CrowdFolio: Understanding How Holistic and Decomposed Workflows Influence Feedback on Online Portfolios

Published: 22 April 2021

Abstract

Freelancers increasingly earn their livelihood through online marketplaces. To attract new clients, freelancers continuously curate their online portfolios to convey their unique skills and style. However, many lack access to rapid, regular, and inexpensive feedback needed to improve their portfolios. Existing crowd feedback systems, which collect feedback on individual creative projects (i.e., decomposed approach), could fill this need, but it is unclear how they might support feedback on multiple projects (i.e., holistic approach). In a between-subjects study with 30 freelancers, we compared decomposed and holistic feedback collection approaches using CrowdFolio, a crowd feedback system for portfolios. The holistic approach helped freelancers discover new ways to describe their work, while the decomposed approach provided detailed insight about the visual attractiveness of projects. This study contributes evidence that portfolio feedback systems, regardless of collection approach, can positively support professional development by impacting how freelancers portray themselves online and reflect on their identity.


