DOI: 10.1145/3278721.3278780 — AIES Conference Proceedings

Research article · Open Access

An AI Race for Strategic Advantage: Rhetoric and Risks

Published online: 27 December 2018

ABSTRACT

The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of artificial intelligence (AI), sometimes in a military context, but also more broadly. This rhetoric also reflects real shifts in strategy, as industry research groups compete for a limited pool of talented researchers, and nation states such as China announce ambitious goals for global leadership in AI. This paper assesses the potential risks of the AI race narrative and of an actual competitive race to develop AI, such as incentivising corner-cutting on safety and governance, or increasing the risk of conflict. It explores the role of the research community in responding to these risks. And it briefly explores alternative ways in which the rush to develop powerful AI could be framed so as instead to foster collaboration and responsible progress.

