The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration

Robert Axelrod


I am a fan of Axelrod’s book “The Evolution of Cooperation.” (Looking back, I see that I only gave it three stars. You can look at my review to see my reasons, which I think are valid, but it probably deserves four.)

To recap briefly, TEC describes a computer tournament of the iterated Prisoner’s Dilemma that Axelrod ran, and the results of that tournament. The book explores the simple but subtle question of how cooperation can evolve and stabilize among self-interested, autonomous individuals. TEC is an extremely accessible book, and the ideas Axelrod presents there connect to other interesting work, like Elinor Ostrom’s studies of the commons.
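The tournament mechanism is easy to sketch. What follows is a minimal illustrative version, not Axelrod’s actual code or entrant pool: three toy strategies play a round-robin of repeated games under the standard payoff values he used (T=5, R=3, P=1, S=0).

```python
# Standard Prisoner's Dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play_match(s1, s2, rounds=200):
    # Play a fixed number of rounds, accumulating each side's score.
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def tournament(strategies, rounds=200):
    # Round-robin over all distinct pairs (no self-play, for brevity).
    totals = {s.__name__: 0 for s in strategies}
    for i, s1 in enumerate(strategies):
        for s2 in strategies[i + 1:]:
            sc1, sc2 = play_match(s1, s2, rounds)
            totals[s1.__name__] += sc1
            totals[s2.__name__] += sc2
    return totals

scores = tournament([tit_for_tat, always_defect, always_cooperate])
```

Note that in this tiny three-strategy field the unconditional defector comes out ahead; Tit for Tat’s famous victory in Axelrod’s tournaments depended on a much richer population of entrants, many of which could be drawn into mutual cooperation.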

TCC is a “sequel” of sorts. It is interesting, but not as compelling as its predecessor. It is really a collection of academic papers, each with a brief introduction by Axelrod. The papers are all extensions, of one sort or another, of the ideas from TEC. For example, how does the situation change when agents can “misunderstand” each other? For the most part, I think they are very readable, with credit to Axelrod’s style. They were mostly published in relatively small/obscure journals, and I wonder whether there is a connection there.

As someone with an economics background, I found this work interesting. Broadly speaking, the work in this book focuses on agents that are “myopic” in one way or another. They do not try to work out a globally optimal solution, but rather, follow incrementalist strategies that seem like they will make some improvement on the status quo. Such strategies are likely to lead agents to local maxima. This is probably a fairly good description of how most real-world agents behave. It is quite appealing from a modeling perspective, in certain ways. For example, the agents need not have in mind fully-specified probability distributions for various stochastic events, which is typically one part of optimizing-agent models that can seem unrealistic.
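The local-maximum point can be made concrete with a toy example of my own, not one from the book: a hypothetical one-dimensional payoff landscape with a low nearby peak and a higher distant one. A myopic agent that only takes single steps that improve on the status quo stalls at the nearby peak.

```python
# Hypothetical payoff landscape: a local maximum at x=2 (height 4)
# and the global maximum at x=8 (height 9), separated by a valley.
def payoff(x):
    return max(0, 4 - (x - 2) ** 2) + max(0, 9 - (x - 8) ** 2)

def myopic_climb(x, step=0.5):
    # Incrementalist rule: move one step at a time, and only when the
    # move improves on the status quo. No global search, no lookahead.
    while True:
        best = max([x - step, x + step], key=payoff)
        if payoff(best) <= payoff(x):
            return x  # no one-step improvement available: stuck here
        x = best

local_peak = myopic_climb(1.0)  # stalls at the nearby local maximum
# A (hypothetical) globally optimizing agent scans the whole landscape.
global_peak = max((i * 0.5 for i in range(21)), key=payoff)
```

Starting near the low peak, the myopic agent settles at x = 2 with a payoff of 4 and never discovers the higher peak at x = 8, which the exhaustive search finds immediately. The point is only that “improve on the status quo” and “find the best outcome” come apart as soon as the landscape has more than one hill.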

On the other hand, I can completely understand why it has been hard for such models to gain a foothold in academic research. Moving away from the optimizing agent introduces a ton of degrees of freedom all at once. To paraphrase Tolstoy, all optimizing agents are alike, but every myopic agent is myopic in its own way. There are just too many ways of not-optimizing. If you think it is easy to build a model with optimizing agents that “proves” whatever you want, it is even easier to do so when you allow your agents not to optimize. (Though it may be difficult to tell what the outcome will be ex ante.) To be clear, I do not at all think that Axelrod is building models “just to get the results he wants.” But without doing a lot of work to calibrate the non-rationality to the behavior of actually-observed agents, which Axelrod does not do, it is hard to say what we learn from these simulations.

Such simulations can provide existence proofs, which I think is what we see in TEC: yes, cooperation can arise and stabilize among self-interested agents. A simulation is stronger, or at least complementary, evidence relative to any case study, because we can “open up” the agents to see exactly how they operate. Is this how cooperation “really” arose in any given real-world situation? Difficult or impossible to say. But we now know that we do not have to resort to notions of altruism: self-interest is a legitimate competing hypothesis. In TCC, however, I didn’t see any compelling existence proofs of this sort. Rather, the book contains a lot of interesting explorations of various model specifications. I was entertained by it, but I’m not sure what I learned from it.

My Goodreads rating: 3 stars