DOI: 10.1145/3205651.3208288
Research Article (Public Access)

Real-time strategy game micro for tactical training simulations

Published: 6 July 2018

ABSTRACT

Complex, realistic scenarios in training simulations can benefit from good control of large numbers of simulation entities. However, training simulations typically prioritize simulation physics and graphics over the intelligence required to control large numbers of entities. Real-time strategy games, on the other hand, have evolved to make tradeoffs between the AI needed and the human interaction required to control hundreds of entities in complex tactical skirmishes. Borrowing from work in real-time strategy games, this paper attacks the problem of controlling groups of heterogeneous entities in training simulations by using a genetic algorithm to evolve control-algorithm parameters that maximize damage done and minimize damage received during skirmishes in a real-time strategy game-like simulation. Results show the emergence of complex, coordinated behavior among groups of simulated entities. The quality of evolved behavior seems to be relatively independent of the underlying physics model but depends on the initial dispositions of entities in the simulation. We can overcome this dependence and evolve more robust, high-performance behaviors by evaluating fitness across several scenarios with different initial dispositions. We believe these preliminary results indicate the viability of our approach for generating robust, high-performance behaviors for controlling swarms of entities in training simulations, enabling more complex, realistic training scenarios.
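The approach the abstract describes can be sketched as a simple evolutionary loop: candidate parameter vectors control entity behavior, each candidate is scored by damage done minus damage received, and fitness is aggregated across several initial dispositions to avoid overfitting to one scenario. The sketch below is a minimal, hedged illustration of that loop, not the paper's actual implementation; `simulate_skirmish` is a hypothetical toy stand-in for the RTS-style simulation, and the parameter count, operators, and GA settings are assumptions for illustration only.

```python
import random

NUM_PARAMS = 6          # hypothetical: e.g. attraction/repulsion, targeting, kiting weights
SCENARIOS = [0, 1, 2]   # different initial entity dispositions (seeds)

def simulate_skirmish(params, scenario):
    # Toy stand-in for a skirmish: deterministic pseudo-simulation keyed on
    # the scenario seed; returns (damage_done, damage_received).
    rng = random.Random(scenario)
    base = [rng.uniform(0.0, 1.0) for _ in range(NUM_PARAMS)]
    done = sum(p * b for p, b in zip(params, base))
    received = sum((p - b) ** 2 for p, b in zip(params, base))
    return done, received

def fitness(params):
    # Aggregate over several scenarios so evolved behavior is robust to
    # initial dispositions, as the abstract notes.
    total = 0.0
    for s in SCENARIOS:
        done, received = simulate_skirmish(params, s)
        total += done - received
    return total

def evolve(pop_size=20, generations=30, seed=42):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(NUM_PARAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]      # keep the top half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # blend crossover plus small Gaussian mutation
            child = [(x + y) / 2 + rng.gauss(0.0, 0.05)
                     for x, y in zip(a, b)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the elite half survives each generation untouched, the best fitness in the population never decreases; evaluating every candidate against all scenarios is what pushes the search toward parameters that work across dispositions rather than exploiting a single one.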


Published in

GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference Companion, July 2018, 1968 pages.
ISBN: 9781450357647
DOI: 10.1145/3205651
Copyright © 2018 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 1,464 of 3,964 submissions (37%).
