- Discussion
- The code used in the spatial synchronous prisoner's dilemma
- The code used in the spatial asynchronous prisoner's dilemma

Synchronous dynamical models are those in which the actions of the agents are coordinated so that they perform each step in unison. Taken literally, this would mean that a synchronous dynamical model would have to be implemented on a massively parallel computer with the program carefully written to distribute each action of each agent to a single processor. This is clearly impractical for any model of a reasonable size. Usually what one means by a synchronous dynamical model is one which is written like an ordinary non-parallel program, with the synchronous nature of the model faked.

For example, in the synchronous spatial prisoner's dilemma, the model consists of two stages:

- Agents interact with their neighbours.
- Agents update their strategy.

A synchronous model pretends that, in each of these two stages, everything happens at once. The way this is done is by managing when information about a given agent gets used.

The lattice on which all the action occurs is a two-dimensional array of Agents of a certain size. Denote it by world[width][height]. Note that in the interaction phase, we really don't care whether agents interact all at once or one after the other, since all that happens is that each agent's score is calculated, and that score doesn't depend on the order in which we do this. So the first stage can be coded as follows:

for (i=0; i<width; i++) {
    for (j=0; j<height; j++) {
        world[i][j].interact();
    }
}

In the second stage, an agent updates - and hence may change - his strategy. The wrong way to do this is as follows:

for (i=0; i<width; i++) {
    for (j=0; j<height; j++) {
        world[i][j].updateStrategy();
    }
}

The problem is that if an agent's new strategy depends on the strategies
held by the agents around him, *this* approach leads to some agents
basing their new strategy on the *new* strategies of their neighbours.
What we want is for each agent to base his new strategy on the strategies his
neighbours used during the interaction phase. One way to do this
is to separate the determination of a new strategy from the adoption of it, as
follows:

for (i=0; i<width; i++) {
    for (j=0; j<height; j++) {
        world[i][j].determineNextStrategy();
    }
}
for (i=0; i<width; i++) {
    for (j=0; j<height; j++) {
        world[i][j].updateStrategy();
    }
}

This approach provides us with a model of synchronous updating, since the new strategy each agent adopts is based on the strategies that their neighbours used during the interaction phase.
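The determine/adopt split can be made concrete with a minimal sketch of an Agent class (the field names and the toy imitation rule here are hypothetical, standing in for the real score-based rule): each agent buffers its choice in a second field, so that neighbours reading `strategy` during the determination pass always see the value used in the interaction phase.

```java
// Minimal sketch (hypothetical names) of the two-field buffering that makes
// determineNextStrategy()/updateStrategy() safe to run as separate passes.
class Agent {
    int strategy;      // strategy used during the interaction phase, visible to neighbours
    int nextStrategy;  // strategy chosen for the next round, not yet visible

    Agent(int strategy) {
        this.strategy = strategy;
        this.nextStrategy = strategy;
    }

    // Pass one: read only the *current* strategies of the neighbours.
    // Toy rule: imitate the first neighbour (stands in for "imitate the
    // best-scoring neighbour" in the real model).
    void determineNextStrategy(Agent[] neighbours) {
        nextStrategy = neighbours.length > 0 ? neighbours[0].strategy : strategy;
    }

    // Pass two: only now does the new strategy become visible to others.
    void updateStrategy() {
        strategy = nextStrategy;
    }
}
```

With two mutually imitating agents, running both determination calls before either update makes the agents swap strategies, whereas the naive in-place update would let the second agent copy the first agent's already-changed strategy.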

The full stepModel() method simply assembles the loops from the discussion above:

public void stepModel() {
    int i, j;
    // Stage 1: every agent interacts with its neighbours and accumulates a score.
    for (i=0; i<parameters.width; i++) {
        for (j=0; j<parameters.height; j++) {
            world[i][j].interact();
        }
    }
    // Stage 2a: every agent chooses its next strategy, based only on the
    // strategies that were in force during the interaction phase.
    for (i=0; i<parameters.width; i++) {
        for (j=0; j<parameters.height; j++) {
            world[i][j].determineNextStrategy();
        }
    }
    // Stage 2b: every agent adopts its new strategy.
    for (i=0; i<parameters.width; i++) {
        for (j=0; j<parameters.height; j++) {
            world[i][j].updateStrategy();
        }
    }
}

The asynchronous code is my gloss of the following explanation given by Huberman and Glance:

"The evolution of the same system is very different when the players' strategies are updated asynchronously. In our experiments, we choose the time interval between updates to be small enough so that at most one player updates within that time interval. As a consequence of this asynchronous updating, each player awakens to see a slightly different world than the players acting before or afterwards. The updating is performed as follows. Each player receives a score per unit time that is updated continuously as the matrix of players varies over time. In analogy to the discrete scenario considered in the synchronous case, each player's score per unit time is the sum of its payoffs per unit time. These payoffs are received from interactions with each of its neighbors and itself in a Prisoner's Dilemma confrontation.

Next, within a microstep, at most one player is replaced by the highest-scoring player within its neighborhood. The size of the microstep is chosen so that the average updating time (equivalent to one generation) for the whole array is the same for both the synchronous and asynchronous cases."

I take the talk of "microsteps" to be dispensable. What we want is the following:

- The agents' scores are continuously updated. This means that whenever
an agent switches strategies, the scores of all of his neighbors are
recalculated to reflect the change.

- The agents' strategies are changed one at a time. This means that we have a list indicating the order in which the agents are to change their strategies, and we walk through that list in order. When we reach the end of the list, we start again from the beginning.

However, if we used the *same* list on every pass through the second step,
spurious correlations could be introduced between which strategies survive and
which die out, simply because some agents would always update before or after
others. To avoid introducing such spurious correlations, we randomly re-order
the list after we are done with it, before we start the next pass.
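The re-ordering itself needs no hand-rolled code: the standard library's Collections.shuffle performs a uniform (Fisher–Yates) shuffle. A minimal sketch, assuming the agents are kept in a java.util.List (the wrapper class and Integer stand-ins here are illustrative; scrambleAgentList is the name used in the code below):

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative wrapper holding the update-order list; Integer stands in
// for the model's Agent type.
class AgentOrder {
    private final List<Integer> agentList;
    private final Random rng = new Random();

    AgentOrder(List<Integer> agents) {
        this.agentList = agents;
    }

    // Uniform random permutation of the list: each pass over the agents
    // uses a fresh order, so no agent systematically updates before or
    // after any other.
    void scrambleAgentList() {
        Collections.shuffle(agentList, rng);
    }
}
```

Shuffling permutes the list in place, so the same agents are visited on every pass, just in a different order each time.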

public void stepModel() {
    int i, j;
    // Initial interaction phase: calculate every agent's score.
    for (i=0; i<parameters.width; i++) {
        for (j=0; j<parameters.height; j++) {
            world[i][j].interact();
        }
    }
    // Put the agents into a fresh random order for this pass.
    scrambleAgentList();
    Iterator it = agentList.iterator();
    while (it.hasNext()) {
        // One agent at a time determines and immediately adopts a new strategy.
        Agent a = (Agent) it.next();
        a.determineNextStrategy();
        a.updateStrategy();
        // Continuous updating: every agent whose score could depend on this
        // agent's strategy recalculates its score straight away.
        int i1, i2, x_pos, y_pos;
        x_pos = a.getXPosition();
        y_pos = a.getYPosition();
        for (i1=-2; i1<=2; i1++) {
            for (i2=-2; i2<=2; i2++) {
                if ((x_pos + i1 >= 0) && (y_pos + i2 >= 0) &&
                    (x_pos + i1 < parameters.width) && (y_pos + i2 < parameters.height)) {
                    world[x_pos + i1][y_pos + i2].interact();
                }
            }
        }
    }
}