ABSTRACT
Typical implementations of artificially intelligent agents assume that actions should be chosen to maximise some reward function, a heuristic that naturally accords with the philosophy behind rational choice theory. Yet this heuristic may not always yield long-term success for the agents concerned. In this paper, we stress the need to account for the self-organised and frequency-dependent nature of the environment when designing agents that act in complex adaptive systems. We resort to the tools of evolutionary game theory, combined with a paradigmatic scenario in which a population of self-regarding agents plays the Ultimatum Game, to describe the dynamical impact of individual mistakes on collective behaviour. Using agent-based simulations, we show that seemingly disadvantageous and irrational errors become a source of individual and collective long-term success.
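The setup sketched in the abstract can be illustrated with a minimal agent-based simulation: a population of Ultimatum Game players, each holding an offer fraction p and an acceptance threshold q, updating strategies by a pairwise-comparison (Fermi) imitation rule of the kind introduced by Traulsen, Nowak, and Pacheco (2006), with a small error rate occasionally replacing a strategy at random. All parameter values and the specific error model below are illustrative assumptions, not the paper's actual settings.

```python
import math
import random

POP_SIZE = 50   # hypothetical population size
STEPS = 200     # number of social-learning steps (illustrative)
BETA = 1.0      # selection intensity in the Fermi rule
ERR = 0.05      # assumed probability of an "irrational" strategy error

def payoff(strategies, i):
    """Average Ultimatum Game payoff of agent i against all others.

    A strategy is a pair (p, q): offer fraction p when proposing,
    and accept any offer >= q when responding.
    """
    p_i, q_i = strategies[i]
    total = 0.0
    for j, (p_j, q_j) in enumerate(strategies):
        if j == i:
            continue
        if p_i >= q_j:          # i proposes, j accepts
            total += 1.0 - p_i  # i keeps the remainder
        if p_j >= q_i:          # j proposes, i accepts
            total += p_j        # i receives the offer
    return total / (len(strategies) - 1)

def step(strategies):
    """One update: agent a imitates agent b with probability given by
    the Fermi rule; with probability ERR, a adopts a random strategy."""
    a, b = random.sample(range(len(strategies)), 2)
    f_a, f_b = payoff(strategies, a), payoff(strategies, b)
    if random.random() < 1.0 / (1.0 + math.exp(-BETA * (f_b - f_a))):
        strategies[a] = strategies[b]
    if random.random() < ERR:  # the "mistake" that perturbs imitation
        strategies[a] = (random.random(), random.random())

random.seed(1)
strategies = [(random.random(), random.random()) for _ in range(POP_SIZE)]
for _ in range(STEPS):
    step(strategies)
mean_offer = sum(p for p, _ in strategies) / POP_SIZE
print(f"mean offer after {STEPS} steps: {mean_offer:.2f}")
```

Tracking how the mean offer evolves with and without the error term (ERR = 0) is one way to probe, in this toy setting, whether individual mistakes shift the population away from the low-offer rational-choice prediction.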
The Evolutionary Perks of Being Irrational