ABSTRACT
Complex, realistic scenarios in training simulations benefit from good control of large numbers of simulated entities. However, training simulations typically emphasize simulation physics and graphics over the intelligence required to control many entities. Real-Time Strategy (RTS) games, on the other hand, have evolved to balance the AI needed against the human interaction required to control hundreds of entities in complex tactical skirmishes. Borrowing from work in RTS games, this paper attacks the problem of controlling groups of heterogeneous entities in training simulations by using a genetic algorithm to evolve control-algorithm parameters that maximize damage done and minimize damage received during skirmishes in an RTS-like simulation. Results show the emergence of complex, coordinated behavior among groups of simulated entities. The quality of evolved behavior appears relatively independent of the underlying physics model but depends on the initial dispositions of entities in the simulation. We can overcome this dependence and evolve more robust, high-performance behaviors by evaluating fitness in several different scenarios with different initial dispositions. We believe these preliminary results indicate the viability of our approach for generating robust, high-performance behaviors for controlling swarms of entities in training simulations, enabling more complex, realistic training scenarios.
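The abstract does not give the exact fitness function or GA configuration, but the scenario-averaged evaluation it describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `run_skirmish` is a hypothetical stand-in for the RTS-like skirmish simulator, the toy damage model and all GA settings (population size, elitism, blend crossover with Gaussian mutation) are assumptions for the sketch.

```python
import random

def run_skirmish(params, scenario_seed):
    # Hypothetical stand-in for the skirmish simulator: returns
    # (damage_done, damage_received) for one parameter vector in one
    # scenario. In the paper this is a full RTS-like 3D simulation.
    rng = random.Random(scenario_seed)
    target = [rng.uniform(0, 1) for _ in params]
    # Toy model: parameters closer to a scenario-specific target do
    # more damage and take less.
    err = sum((p - t) ** 2 for p, t in zip(params, target))
    return 100.0 - err, err

def fitness(params, scenarios):
    # Robust fitness: average (damage done - damage received) over
    # several scenarios with different initial dispositions, so the
    # evolved behavior does not overfit one starting layout.
    total = 0.0
    for seed in scenarios:
        done, received = run_skirmish(params, seed)
        total += done - received
    return total / len(scenarios)

def evolve(n_params=4, pop_size=20, generations=30, scenarios=(1, 2, 3)):
    # Simple elitist GA over real-valued parameter vectors.
    rng = random.Random(42)
    pop = [[rng.uniform(0, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, scenarios), reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # Blend crossover plus small Gaussian mutation.
            children.append([(x + y) / 2 + rng.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=lambda ind: fitness(ind, scenarios))

best = evolve()
```

Evaluating each individual over multiple scenarios is what trades raw per-scenario performance for robustness to initial dispositions, the key point of the abstract's final claim.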