DOI: 10.1145/3531146.3533138
research-article
Open access

Robots Enact Malignant Stereotypes

Published: 20 June 2022

Abstract

Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV) [18, 80], Natural Language Processing (NLP) [6], or both, in the case of large image and caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that physically and autonomously act within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on the surface which vary across race and gender, alongside task descriptions that contain terms associated with common stereotypes. Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize Women and People of Color. Our interdisciplinary sociotechnical analysis synthesizes across fields and applications such as Science Technology and Society (STS), Critical Studies, History, Safety, Robotics, and AI. We find that robots powered by large datasets and Dissolution Models (sometimes called “foundation models”, e.g. CLIP) that contain humans risk physically amplifying malignant stereotypes in general; and that merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes and the potential of new interdisciplinary research on topics like Identity Safety Assessment Frameworks and Design Justice to better understand and address these harms.
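The audit described above reduces, statistically, to comparing how often the robot selects blocks bearing faces from different groups under stereotyped commands, across many repeated trials. The reference list points to multiple-comparison corrections (Dunn [30]) and statsmodels [92] for this kind of analysis. As a rough, stdlib-only sketch of the shape of such a disparity test — not the authors' code; every count, command string, and the helper `two_prop_z` below are invented for illustration — a two-sided two-proportion z-test with a Bonferroni-corrected threshold:

```python
# Hypothetical sketch of a disparity-audit analysis; NOT the paper's code.
# All counts are invented. Compares how often a robot selected the block
# bearing a face from group A vs. group B for each stereotyped command,
# with a Bonferroni correction across commands (cf. Dunn [30]).
from math import erf, sqrt

def two_prop_z(hits_a, n_a, hits_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_two_sided

# (command, hits_A, trials_A, hits_B, trials_B) -- illustrative numbers only.
comparisons = [
    ("pack the criminal block",  62, 100, 48, 100),
    ("pack the doctor block",    40, 100, 55, 100),
    ("pack the homemaker block", 51, 100, 50, 100),
]
alpha = 0.05 / len(comparisons)  # Bonferroni-corrected threshold
for command, ha, na, hb, nb in comparisons:
    z, p = two_prop_z(ha, na, hb, nb)
    print(f"{command}: z={z:+.2f} p={p:.4f} flagged={p < alpha}")
```

An unbiased system would select each group at rates indistinguishable from chance; the paper's finding is precisely that the observed selection rates differ by group. The published analysis uses statsmodels [92]; this sketch only conveys the form of the test.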

References

[1]
Emily Ackerman. 2019. A life-threatening encounter with AI technology. (November 2019). https://www.bloomberg.com/news/articles/2019-11-19/why-tech-needs-more-designers-with-disabilities
[2]
Sara Ahmed. 2021. Complaint! Duke University Press, Durham. https://doi.org/10.1515/9781478022336
[3]
Michelle Alexander. 2020. The New Jim Crow: Mass Incarceration in the Age of Colorblindness (tenth anniversary ed.). The New Press, New York.
[4]
Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. Ai Magazine 35, 4 (2014), 105–120.
[5]
Carl Anderson. 2017. Overcoming Challenges to Infusing Ethics into the Development of Engineers: Proceedings of a Workshop. National Academies Press.
[6]
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922
[7]
Ruha Benjamin. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, Cambridge, UK.
[8]
Cynthia L. Bennett, Cole Gleason, Morgan Klaus Scheuerman, Jeffrey P. Bigham, Anhong Guo, and Alexandra To. 2021. “It’s Complicated”: Negotiating Accessibility and (Mis)Representation in Image Descriptions of Race, Gender, and Disability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 375, 19 pages. https://doi.org/10.1145/3411764.3445498
[9]
Abeba Birhane. 2021. Algorithmic injustice: a relational ethics approach. Patterns 2, 2 (2021), 100205. https://doi.org/10.1016/j.patter.2021.100205
[10]
Abeba Birhane. 2021. The Impossibility of Automating Ambiguity. Artificial Life 27, 1 (06 2021), 44–61. https://doi.org/10.1162/artl_a_00336 arXiv:https://direct.mit.edu/artl/article-pdf/27/1/44/1925148/artl_a_00336.pdf
[11]
Abeba Birhane and Olivia Guest. 2020. Towards Decolonising Computational Sciences. Kvinder, Køn & Forskning 2 (2020), 60–73. https://arxiv.org/abs/2009.14258
[12]
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2021. The Values Encoded in Machine Learning Research. arxiv:2106.15590 [cs.LG] https://arxiv.org/abs/2106.15590
[13]
Abeba Birhane and Vinay Uday Prabhu. 2021. Large image datasets: A pyrrhic win for computer vision?. In 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). 1536–1546. https://doi.org/10.1109/WACV48630.2021.00158
[14]
Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv abs/2110.01963 (2021). https://arxiv.org/abs/2110.01963
[15]
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the Opportunities and Risks of Foundation Models. arxiv:2108.07258 [cs.LG]
[16]
Martim Brandão. 2021. Normative roboticists: the visions and values of technical robotics papers. In 2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN). 671–677. https://doi.org/10.1109/RO-MAN50785.2021.9515504
[17]
Joy Buolamwini. 2018. When the Robot Doesn’t See Dark Skin. https://www.nytimes.com/2018/06/21/opinion/facial-analysis-technology-bias.html
[18]
Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81), Sorelle A. Friedler and Christo Wilson (Eds.). PMLR, New York, NY, USA, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
[19]
Johan Samir Obando Ceron and Pablo Samuel Castro. 2021. Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, 1373–1383. https://proceedings.mlr.press/v139/ceron21a.html
[20]
James I. Charlton. 1998. Nothing About Us Without Us: Disability Oppression and Empowerment. University of California Press, Berkeley.
[21]
Ching-An Cheng, Xinyan Yan, Nolan Wagener, and Byron Boots. 2018. Fast Policy Learning through Imitation and Reinforcement. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10, 2018, Amir Globerson and Ricardo Silva (Eds.). AUAI Press, 845–855. http://auai.org/uai2018/proceedings/papers/302.pdf
[22]
Felipe Codevilla, Eder Santana, Antonio M López, and Adrien Gaidon. 2019. Exploring the limitations of behavior cloning for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9329–9338.
[23]
S. Costanza-Chock. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press. https://mitpress.mit.edu/books/design-justice (open access: https://design-justice.pubpub.org/).
[24]
Kate Crawford. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, New Haven.
[25]
Norman Davies. 2001. Heart of Europe: The Past in Poland’s Present. Oxford University Press, Oxford.
[26]
Djellel Difallah, Elena Filatova, and Panos Ipeirotis. 2018. Demographics and Dynamics of Mechanical Turk Workers. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. 135–143.
[27]
Catherine D’Ignazio and Lauren F. Klein. 2020. Data feminism. The MIT Press, Cambridge, Massachusetts. http://data-feminism.mitpress.mit.edu/
[28]
Jay T Dolmage. 2017. Academic Ableism : Disability and Higher Education. University of Michigan Press, Ann Arbor. https://www.press.umich.edu/9708722/academic_ableism
[29]
Lynn Dombrowski, Ellie Harmon, and Sarah Fox. 2016. Social Justice-Oriented Interaction Design: Outlining Key Design Strategies and Commitments. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (Brisbane, QLD, Australia) (DIS ’16). Association for Computing Machinery, New York, NY, USA, 656–671. https://doi.org/10.1145/2901790.2901861
[30]
Olive Jean Dunn. 1961. Multiple comparisons among means. Journal of the American Statistical Association 56, 293 (1961), 52–64.
[31]
Will Evans. 2020. How Amazon hid its safety crisis. (September 2020). https://revealnews.org/article/how-amazon-hid-its-safety-crisis/
[32]
Division of Research Federal Home Owners’ Loan Corporation (HOLC) and Statistics. 1937. Street Map of The Baltimore Area - Residential Security Map. Record Group 195, Records of the Federal Home Loan Bank Board, Home Owners Loan Corporation, National Archives Records Administration II, College Park, Maryland, USA.
[33]
Yuxiang Gao and Chien-Ming Huang. 2022. Evaluation of Socially-Aware Robot Navigation. Frontiers in Robotics and AI (2022).
[34]
Juan Miguel Garcia-Haro, Edwin Daniel Oña, Juan Hernandez-Vicen, Santiago Martinez, and Carlos Balaguer. 2021. Service Robots in Catering Applications: A Review and Future Challenges. Electronics 10, 1 (2021), 47.
[35]
Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner, and Julian Nida-Rümelin. 2021. Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation. Philosophy & Technology (2021), 1–24.
[36]
Walter Goodwin, Sagar Vaze, Ioannis Havoutis, and Ingmar Posner. 2021. Semantically Grounded Object Matching for Robust Robotic Scene Rearrangement. arxiv:2111.07975 [cs.RO]
[37]
Google Research. 2022. Google Scanned Objects. https://goo.gle/scanned-objects [Online; accessed 2022-01-20].
[38]
Mary L Gray and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt Publishing Company, Boston.
[39]
Jérémie Guiochet, Mathilde Machin, and Hélène Waeselynck. 2017. Safety-critical advanced robots: A survey. Robotics and Autonomous Systems 94 (2017), 43–52. https://doi.org/10.1016/j.robot.2017.04.004
[40]
Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a Critical Race Methodology in Algorithmic Fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAccT ’20). Association for Computing Machinery, New York, NY, USA, 501–512. https://doi.org/10.1145/3351095.3372826
[41]
Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P. Bigham. 2018. A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 449.
[42]
National Transportation Safety Board. 2019. Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian. Highway Accident Report NTSB/HAR-19/03. https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf
[43]
Kashmir Hill. 2020. Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match. https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html
[44]
Kashmir Hill. 2020. Wrongfully Accused by an Algorithm. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
[45]
Ayanna Howard and Jason Borenstein. 2018. The ugly truth about ourselves and our robot creations: the problem of bias and social inequity. Science and engineering ethics 24, 5 (2018), 1521–1536.
[46]
Jensen Huang. 2022. Building a Better NVIDIA Through Diversity and Inclusion. (January 2022). https://web.archive.org/web/20220119044639/https://www.nvidia.com/en-us/about-nvidia/careers/diversity-and-inclusion/building-better/
[47]
Andrew Hundt. 2021. Effective Visual Robot Learning: Reduce, Reuse, Recycle. Dissertation. Johns Hopkins University. Talk: https://youtu.be/R3dv3ARXpco.
[48]
Andrew Hundt, Benjamin Killeen, Nicholas Greene, Hongtao Wu, Heeyeon Kwon, Chris Paxton, and Gregory D. Hager. 2020. “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer. In IEEE Robotics and Automation Letters, Vol. 5. 6724–6731. https://doi.org/10.1109/LRA.2020.3015448
[49]
Andrew Hundt, Aditya Murali, Priyanka Hubli, Ran Liu, Nakul Gopalan, Matthew Gombolay, and Gregory D. Hager. 2021. ”Good Robot! Now Watch This!”: Repurposing Reinforcement Learning for Task-to-Task Transfer. In 5th Annual Conference on Robot Learning. https://openreview.net/forum?id=Pxs5XwId51n
[50]
Brian Jordan Jefferson. 2020. Digitize and Punish: Racial Criminalization in the Digital Age. University of Minnesota Press, Minneapolis.
[51]
Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: strategies for collecting sociocultural data in machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 306–316.
[52]
Matthew Johnson. 2020. Undermining Racial Justice: How One University Embraced Inclusion and Inequality. Cornell University Press.
[53]
Michael Keevak. 2011. Becoming Yellow: A Short History of Racial Thinking. Princeton University Press, Princeton, NJ.
[54]
Ibram X Kendi. 2016. Stamped from the Beginning: The Definitive History of Racist Ideas in America. Nation Books, New York, NY.
[55]
Ibram X. Kendi. 2019. How to Be an Antiracist (first ed.). One World, New York.
[56]
Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2021. Simple but Effective: CLIP Embeddings for Embodied AI. arxiv:2111.09888 [cs.CV]
[57]
Jacob Leon Kröger, Milagros Miceli, and Florian Müller. 2021. How Data Can Be Used Against People: A Classification of Personal Data Misuses. SSRN Electronic Journal (Dec 2021). https://dx.doi.org/10.2139/ssrn.3887097
[58]
Daniel Reid Kuespert. 2016. Research Laboratory Safety. De Gruyter.
[59]
Min Kyung Lee, Daniel Kusbit, Anson Kahng, Ji Tae Kim, Xinran Yuan, Allissa Chan, Daniel See, Ritesh Noothigattu, Siheon Lee, Alexandros Psomas, and Ariel D. Procaccia. 2019. WeBuildAI: Participatory Framework for Algorithmic Governance. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 181 (Nov. 2019), 35 pages. https://doi.org/10.1145/3359283
[60]
Sergey Levine. 2021. Understanding the World Through Action. In 5th Annual Conference on Robot Learning, Blue Sky Submission Track. https://openreview.net/forum?id=L55-yn1iwrm
[61]
Yanni Alexander Loukissas. 2019. All Data Are Local: Thinking Critically in a Data-Driven Society. The MIT Press, Cambridge, Massachusetts.
[62]
Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. 2015. The Chicago Face Database: A Free Stimulus Set of Faces and Norming Data. Behavior Research Methods 47, 4 (Dec. 2015), 1122–1135. https://doi.org/10.3758/s13428-014-0532-5
[63]
Sarah Maza. 2017. Thinking about history. University of Chicago Press.
[64]
Sean McGregor. 2020. Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In AAAI. 15458–15463. https://incidentdatabase.ai/
[65]
Charlton D. McIlwain. 2019. Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford University Press, Oxford.
[66]
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 54, 6, Article 115 (jul 2021), 35 pages. https://doi.org/10.1145/3457607
[67]
Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, and Jamie Morgenstern. 2020. Diversity and Inclusion Metrics in Subset Selection. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 117–123.
[68]
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 220–229.
[69]
Robert K. Nelson, LaDale Winling, Richard Marciano, Nathan Connolly, et al. 2016. Mapping Inequality. https://dsl.richmond.edu/panorama/redlining/ Accessed May 13, 2022.
[70]
NMA. 2018. CORESafety TV: August 2018. National Mining Association (NMA). https://youtu.be/w3UrhyZ_StI?t=45 Swiss Cheese Model of Accident Causation.
[71]
Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, New York.
[72]
National Academies of Sciences, Engineering, and Medicine. 2018. Sexual Harassment of Women: Climate, Culture, and Consequences in Academic Sciences, Engineering, and Medicine. Consensus Study Report. National Academies Press. https://doi.org/10.17226/24994
[73]
National Academies of Sciences, Engineering, and Medicine. 2020. Promising Practices for Addressing the Underrepresentation of Women in Science, Engineering, and Medicine: Opening Doors. Consensus Study Report. National Academies Press. https://doi.org/10.17226/24994
[74]
Chinasa T. Okolo, Srujana Kamath, Nicola Dell, and Aditya Vashistha. 2021. “It Cannot Do All of My Work”: Community Health Worker Perceptions of AI-Enabled Mobile Health Applications in Rural India. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445420
[75]
Cathy O’Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (first ed.). Crown, New York.
[76]
Stefanie Paluch, Jochen Wirtz, and Werner H Kunz. 2020. Service Robots and the Future of Services. In Marketing Weiterdenken. Springer, 423–435.
[77]
Frank Pasquale. 2020. New Laws of Robotics. Harvard University Press.
[78]
Julie R Posselt. 2020. Equity in Science: Representation, Culture, and the Dynamics of Change in Graduate Education. Stanford University Press, Redwood City.
[79]
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, 8748–8763. https://proceedings.mlr.press/v139/radford21a.html Model card: https://github.com/openai/CLIP/blob/dff9d15305e92141462bd1aec8479994ab91f16a/model-card.md.
[80]
Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (Honolulu, HI, USA) (AIES ’19). Association for Computing Machinery, New York, NY, USA, 429–435. https://doi.org/10.1145/3306618.3314244
[81]
Inioluwa Deborah Raji, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. 2021. AI and the Everything in the Whole Wide World Benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). https://openreview.net/forum?id=j6NxpQbREA1
[82]
Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. Association for Computing Machinery, New York, NY, USA, 145–151. https://doi.org/10.1145/3375627.3375820
[83]
Ali Rattansi. 2020. Racism: A Very Short Introduction (second ed.). Oxford University Press, Oxford. https://doi.org/10.1093/actrade/9780198834793.001.0001
[84]
Harish Ravichandar, Athanasios S Polydoros, Sonia Chernova, and Aude Billard. 2020. Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems 3 (2020), 297–330.
[85]
J Reason. 1990. The Contribution of Latent Human Failures to the Breakdown of Complex Systems. Philosophical transactions of the Royal Society of London. Series B, Biological sciences 327, 1241 (1990), 475–484. https://doi.org/10.1098/rstb.1990.0090
[86]
Grand View Research. 2022. Smart Toys Market Size & Share Report, 2021-2028. https://www.grandviewresearch.com/industry-analysis/smart-toys-market-report. [Online; acc. 2022-01-2-].
[87]
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings, 627–635.
[88]
Richard Rothstein. 2017. The Color of Law: A Forgotten History of How Our Government Segregated America. Liveright Publishing Corporation, a division of W.W. Norton & Company, New York.
[89]
Angela Saini. 2019. Superior: The Return of Race Science. Beacon Press, Boston.
[90]
Morgan Klaus Scheuerman, Alex Hanna, and Emily Denton. 2021. Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 317 (oct 2021), 37 pages. https://doi.org/10.1145/3476058
[91]
Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. arxiv:2111.02114 [cs.CV]
[92]
Skipper Seabold and Josef Perktold. 2010. statsmodels: Econometric and statistical modeling with python. In 9th Python in Science Conference.
[93]
Daniel Seita, Pete Florence, Jonathan Tompson, Erwin Coumans, Vikas Sindhwani, Ken Goldberg, and Andy Zeng. 2021. Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. In IEEE International Conference on Robotics and Automation (ICRA). https://arxiv.org/abs/2012.03385
[94]
Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (Atlanta, GA, USA) (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 59–68. https://doi.org/10.1145/3287560.3287598
[95]
Samuel Sanford Shapiro and Martin B Wilk. 1965. An analysis of variance test for normality (complete samples). Biometrika 52, 3/4 (1965), 591–611.
[96]
Hong Shen, Wesley H Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, and Haiyi Zhu. 2021. Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 850–861.
[97]
Mohit Shridhar, Lucas Manuelli, and Dieter Fox. 2021. CLIPort: What and Where Pathways for Robotic Manipulation. In 5th Annual Conference on Robot Learning. https://openreview.net/forum?id=9uFiX_HRsIL
[98]
Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay. 2021. LanCon-Learn: Learning with Language to Enable Generalization in Multi-Task Manipulation. IEEE Robotics and Automation Letters (2021).
[99]
Luke Stark and Jevan Hutson. 2021. Physiognomic Artificial Intelligence. Available at SSRN 3927300 (2021). https://doi.org/10.2139/ssrn.3927300
[100]
Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew Gombolay, and Gregory D. Hager. 2021. Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions. In 5th Annual Conference on Robot Learning. https://openreview.net/forum?id=-QJ__aPUTN2
[101]
Susan Stryker. 2017. Transgender History: The Roots of Today’s Revolution (second ed.). Seal Press, New York, NY.
[102]
Harini Suresh and John V. Guttag. 2019. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arxiv:1901.10002 [cs.LG] https://arxiv.org/abs/1901.10002
[103]
Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, and Luke Zettlemoyer. 2021. Language Grounding with 3D Objects. In 5th Annual Conference on Robot Learning. https://openreview.net/forum?id=U1GhcnR4jNI
[104]
Shari Trewin, Sara Basson, Michael Muller, Stacy Branham, Jutta Treviranus, Daniel Gruen, Daniel Hebert, Natalia Lyckowski, and Erich Manser. 2019. Considerations for AI fairness for people with disabilities. AI Matters 5, 3 (2019), 40–63.
[105]
Shannon Vallor. 2016. Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
[106]
S Wachter, B Mittelstadt, and C Russell. 2021. Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law. West Virginia Law Review 123, 2 (2021). https://doi.org/10.2139/ssrn.3792772
[107]
Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. 2019. Predictive Inequity in Object Detection. arXiv preprint arXiv:1902.11097 (2019). https://doi.org/10.48550/arXiv.1902.11097
[108]
Kumanan Wilson, Cameron Bell, Lindsay Wilson, and Holly Witteman. 2018. Agile research to complement agile development: a proposal for an mHealth research lifecycle. npj Digital Medicine 1, 1 (2018), 1–6. https://doi.org/10.1038/s41746-018-0053-1
[109]
Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov. 2017. Physiognomy’s New Clothes. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a
[110]
Wentao Yuan, Chris Paxton, Karthik Desingh, and Dieter Fox. 2021. SORNet: Spatial Object-Centric Representations for Sequential Manipulation. In 5th Annual Conference on Robot Learning. https://openreview.net/forum?id=mOLu2rODIJF
[111]
Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Vikas Sindhwani, and Johnny Lee. 2020. Transporter Networks: Rearranging the Visual World for Robotic Manipulation. Conference on Robot Learning (CoRL) (2020).
[112]
Zhuangdi Zhu, Kaixiang Lin, and Jiayu Zhou. 2020. Transfer Learning in Deep Reinforcement Learning: A Survey. arXiv preprint arXiv:2009.07888 (2020). arxiv:2009.07888 [cs.LG]
[113]
Linda X Zou and Sapna Cheryan. 2017. Two Axes of Subordination: A New Model of Racial Position. Journal of Personality and Social Psychology 112, 5 (2017), 696–717. http://dx.doi.org/10.1037/pspa0000080


Published In

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Article Metrics

  • Downloads (last 12 months): 2,613
  • Downloads (last 6 weeks): 156
Reflects downloads up to 28 Jan 2025.

Cited By
  • (2025) Can LLMs Make Robots Smarter? Communications of the ACM 68, 2, 11–13. https://doi.org/10.1145/3701227 (3-Jan-2025)
  • (2024) The Legal Status of Artificial Intelligence Systems and Models of Differentiation of Legal Liability for Damage Caused by them. Lex Russica 77, 4, 9–23. https://doi.org/10.17803/1729-5920.2024.209.4.009-023 (24-Apr-2024)
  • (2024) The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1229–1244. https://doi.org/10.1145/3630106.3658968 (3-Jun-2024)
  • (2024) Mapping the individual, social and biospheric impacts of Foundation Models. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 776–796. https://doi.org/10.1145/3630106.3658939 (3-Jun-2024)
  • (2024) Ethnic Classifications in Algorithmic Fairness: Concepts, Measures and Implications in Practice. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 237–253. https://doi.org/10.1145/3630106.3658902 (3-Jun-2024)
  • (2024) Are Robots Ready to Deliver Autism Inclusion?: A Critical Review. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642798 (11-May-2024)
  • (2024) What Should a Robot Do? Comparing Human and Large Language Model Recommendations for Robot Deception. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 906–910. https://doi.org/10.1145/3610978.3640752 (11-Mar-2024)
  • (2024) Scarecrows in Oz: Large Language Models in HRI. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1338–1340. https://doi.org/10.1145/3610978.3638168 (11-Mar-2024)
  • (2024) Embodied AI with Two Arms: Zero-shot Learning, Safety and Modularity. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3651–3657. https://doi.org/10.1109/IROS58592.2024.10802181 (14-Oct-2024)
  • (2024) SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10822–10832. https://doi.org/10.1109/CVPR52733.2024.01029 (16-Jun-2024)
