
Collecting the Public Perception of AI and Robot Rights

Published: 15 October 2020

Abstract

Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted "electronic personalities." Scholars on both sides of the feasibility question have joined the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal changes people's stance on the issue. The results indicate that although online users largely disfavor AI and robot rights, they support protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment). Furthermore, participants' perceptions became more positive when they were given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how participants perceived the proposal, much as metaphors shape the creation of laws. For robustness, we repeated the experiment with a more representative sample of U.S. residents (N=164) and found that the perceptions of online users and those of the general population are similar.




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 4, Issue CSCW2
CSCW
October 2020
2310 pages
EISSN:2573-0142
DOI:10.1145/3430143
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 15 October 2020
Published in PACMHCI Volume 4, Issue CSCW2

Author Tags

  1. artificial intelligence
  2. legal personhood
  3. public perception
  4. rights
  5. robots

Qualifiers

  • Research-article

Funding Sources

  • Institute for Basic Science
  • National Research Foundation of Korea

Article Metrics

  • Downloads (Last 12 months)289
  • Downloads (Last 6 weeks)22
Reflects downloads up to 05 Jan 2025

Cited By

  • (2025) Taiwanese high school students’ perspectives on artificial intelligence and its applications. Computers in Human Behavior Reports, Vol. 17, 100550. DOI: 10.1016/j.chbr.2024.100550. Online publication date: Mar-2025.
  • (2024) I, Robot have rights! Haven’t I? Conceptual and Normative Constraints on Holding Legal Positions. SSRN Electronic Journal. DOI: 10.2139/ssrn.4827538. Online publication date: 2024.
  • (2024) Metaverse Perspectives from Japan: A Participatory Speculative Design Case Study. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, CSCW2, 1--51. DOI: 10.1145/3686939. Online publication date: 8-Nov-2024.
  • (2024) Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1--11. DOI: 10.1145/3613904.3642403. Online publication date: 11-May-2024.
  • (2024) Challenges of the Legal Protection of Human Lives in the Time of Anthropomorphic Robots. The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction, 100--114. DOI: 10.1017/9781009386708.009. Online publication date: 7-Dec-2024.
  • (2024) An Introduction to the Law, Policy, and Regulation for Human–Robot Interaction. The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction, 1--170. DOI: 10.1017/9781009386708.003. Online publication date: 7-Dec-2024.
  • (2024) Cracking the consumers’ code: A framework for understanding the artificial intelligence–consumer interface. Current Opinion in Psychology, Vol. 58, 101832. DOI: 10.1016/j.copsyc.2024.101832. Online publication date: Aug-2024.
  • (2024) When does “no” mean no? Insights from sex robots. Cognition, Vol. 244, 105687. DOI: 10.1016/j.cognition.2023.105687. Online publication date: Mar-2024.
  • (2023) The Moral Psychology of Artificial Intelligence. Current Directions in Psychological Science, Vol. 33, 1, 27--34. DOI: 10.1177/09637214231205866. Online publication date: 30-Nov-2023.
  • (2023) Understanding the acceptance of emotional artificial intelligence in Japanese healthcare system: A cross-sectional survey of clinic visitors’ attitude. Technology in Society, Vol. 72, 102166. DOI: 10.1016/j.techsoc.2022.102166. Online publication date: Feb-2023.
