## Dynamic Games of Complete but Imperfect Information

### 2.4.A Extensive-Form Representation of Games

In Chapter 1 we analyzed static games by representing such games in normal form. We now analyze dynamic games by representing such games in extensive form.¹⁸ This expositional approach may make it seem that static games must be represented in normal form and dynamic games in extensive form, but this is not the case. Any game can be represented in either normal or extensive form, although for some games one of the two forms is more convenient to analyze. We will discuss how static games can be represented using the extensive form and how dynamic games can be represented using the normal form.

Recall from Section 1.1.A that the normal-form representation of a game specifies: (1) the players in the game, (2) the strategies available to each player, and (3) the payoff received by each player for each combination of strategies that could be chosen by the players.

Definition The extensive-form representation of a game specifies: (1) the players in the game, (2a) when each player has the move, (2b) what

¹⁸We give an informal description of the extensive form. For a precise treatment, see Kreps and Wilson (1982).

each player can do at each of his or her opportunities to move, (2c) what each player knows at each of his or her opportunities to move, and (3) the payoff received by each player for each combination of moves that could be chosen by the players.

Although we did not say so at the time, we analyzed several games represented in extensive form in Sections 2.1 through 2.3. The contribution of this section is to describe such games using game trees rather than words, because the former are often simpler both to express and to analyze.

As an example of a game in extensive form, consider the following member of the class of two-stage games of complete and perfect information introduced in Section 2.1.A:

1. Player 1 chooses an action a₁ from the feasible set A₁ = {L, R}.

2. Player 2 observes a₁ and then chooses an action a₂ from the feasible set A₂ = {L', R'}.

3. Payoffs are u₁(a₁, a₂) and u₂(a₁, a₂), as shown in the game tree in Figure 2.4.1.

[Figure 2.4.1: the game tree. Player 1 chooses L or R; player 2 then chooses L' or R'. The terminal payoffs (to player 1, to player 2) are (3, 1) after (L, L'), (1, 2) after (L, R'), (2, 1) after (R, L'), and (0, 0) after (R, R').]

This game tree begins with a decision node for player 1, where 1 chooses between L and R. If player 1 chooses L, then a decision node for player 2 is reached, where 2 chooses between L' and R'. Likewise, if player 1 chooses R, then another decision node for player 2 is reached, where 2 chooses between L' and R'. Following each of player 2's choices, a terminal node is reached (i.e., the game ends) and the indicated payoffs are received.
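The tree just described can be written down directly. A minimal sketch in Python, with the terminal payoffs assumed from Figure 2.4.1, represents each decision node as a dictionary mapping actions to subtrees:

```python
# Game tree of Figure 2.4.1 as nested dicts: each decision node maps an
# action to a subtree; each leaf is (payoff to player 1, payoff to player 2).
tree = {
    "L": {"L'": (3, 1), "R'": (1, 2)},   # player 2 moves after observing L
    "R": {"L'": (2, 1), "R'": (0, 0)},   # player 2 moves after observing R
}

def payoffs(a1, a2):
    """Payoffs at the terminal node reached when 1 plays a1 and 2 plays a2."""
    return tree[a1][a2]

print(payoffs("L", "R'"))  # -> (1, 2)
```

Each path from the root to a leaf corresponds to one play of the game, which is exactly what the prose description of the tree says in words.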

It is straightforward to extend the game tree in Figure 2.4.1 to represent any dynamic game of complete and perfect information —that is, any game in which the players move in sequence, all previous moves are common knowledge before the next move is chosen, and the players' payoffs from each feasible combination of moves are common knowledge. (Continuous action spaces, as in the Stackelberg model, or an infinite horizon, as in the Rubinstein model, present graphical but not conceptual difficulties.) We next derive the normal-form representation of the dynamic game in Figure 2.4.1. We then conclude this section by showing that static games can be given extensive-form representations, and by describing how to construct extensive-form representations of dynamic games with complete but imperfect information.

As the numbering conventions in the definitions of the normal and extensive forms suggest, there is a close connection between a player's feasible strategies (item 2) given in the normal form and the description of when a player moves, what he or she can do, and what he or she knows (items 2a, 2b, and 2c) in the extensive form. To represent a dynamic game in normal form, we need to translate the information in the extensive form into the description of each player's strategy space in the normal form. To do this, recall the definition of a strategy given (informally) in Section 2.3.B:

Definition A strategy for a player is a complete plan of action—it specifies a feasible action for the player in every contingency in which the player might be called on to act.

It may seem unnecessary to require a player's strategy to specify a feasible action for every contingency in which the player might be called upon to move. It will become clear, however, that we could not apply the notion of Nash equilibrium to dynamic games of complete information if we allowed a player's strategy to leave the actions in some contingencies unspecified. For player j to compute a best response to player i's strategy, j may need to consider how i would act in every contingency, not just in the contingencies i or j thinks likely to arise.
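To make this concrete in the game of Figure 2.4.1 (payoffs assumed from that figure), one can compute player 2's best responses to each of player 1's strategies. Note that each best response is a complete plan, specifying an action even in the contingency that will not arise on the path of play. A minimal sketch:

```python
from itertools import product

# Terminal payoffs of the game in Figure 2.4.1, indexed by
# (player 1's move, player 2's realized move); values assumed from that figure.
outcome = {
    ("L", "L'"): (3, 1), ("L", "R'"): (1, 2),
    ("R", "L'"): (2, 1), ("R", "R'"): (0, 0),
}

def payoff2(s1, s2):
    """Player 2's payoff when 1 plays s1 and 2 plays the complete plan
    s2 = (action after L, action after R)."""
    a2 = s2[0] if s1 == "L" else s2[1]
    return outcome[(s1, a2)][1]

p2_strategies = list(product(["L'", "R'"], repeat=2))  # the four complete plans
for s1 in ("L", "R"):
    top = max(payoff2(s1, s2) for s2 in p2_strategies)
    best = [s2 for s2 in p2_strategies if payoff2(s1, s2) == top]
    print(f"best responses to {s1}: {best}")
# Both best responses to L play R' after L; their action after R is payoff-
# irrelevant against L, yet a strategy must still specify it.
```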

In the game in Figure 2.4.1, player 2 has two actions but four strategies, because there are two different contingencies (namely, after observing L by player 1 and after observing R by player 1) in which player 2 could be called upon to act.

Strategy 1: If player 1 plays L then play L'; if player 1 plays R then play L'; denoted by (L', L').

Strategy 2: If player 1 plays L then play L'; if player 1 plays R then play R'; denoted by (L', R').

Strategy 3: If player 1 plays L then play R'; if player 1 plays R then play L'; denoted by (R', L').

Strategy 4: If player 1 plays L then play R'; if player 1 plays R then play R'; denoted by (R', R').

Player 1, however, has two actions but only two strategies: play L and play R. The reason player 1 has only two strategies is that there is only one contingency in which player 1 might be called upon to act (namely, the first move of the game, when player 1 will certainly be called upon to act), so player 1's strategy space is equivalent to the action space A₁ = {L, R}.
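The strategy counts above can be checked mechanically: a strategy assigns one feasible action to each contingency, so a player with two actions and two contingencies has 2² = 4 strategies, while a player with two actions and one contingency has 2¹ = 2. A sketch (the helper name and contingency labels are just illustrations):

```python
from itertools import product

def strategies(actions, contingencies):
    """All complete plans: one action for each contingency, as a tuple
    ordered to match the list of contingencies."""
    return list(product(actions, repeat=len(contingencies)))

# Player 2 moves in two contingencies: after observing L and after observing R.
p2 = strategies(["L'", "R'"], ["after L", "after R"])
# Player 1 moves in a single contingency: the first move of the game.
p1 = strategies(["L", "R"], ["first move"])

print(p2)  # [("L'", "L'"), ("L'", "R'"), ("R'", "L'"), ("R'", "R'")]
print(p1)  # [("L",), ("R",)]
```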

Given these strategy spaces for the two players, it is straightforward to derive the normal-form representation of the game from its extensive-form representation. Label the rows of the normal form with player 1's feasible strategies, label the columns with player 2's feasible strategies, and compute the payoffs to the players for each possible combination of strategies, as shown in Figure 2.4.2.
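The computation just described can be sketched mechanically, assuming the payoffs of Figure 2.4.1: for each pair of strategies, play the game out and record the terminal payoffs.

```python
from itertools import product

# Terminal payoffs of Figure 2.4.1: (player 1's move, player 2's move) -> payoffs.
outcome = {
    ("L", "L'"): (3, 1), ("L", "R'"): (1, 2),
    ("R", "L'"): (2, 1), ("R", "R'"): (0, 0),
}

p1_strategies = ["L", "R"]
p2_strategies = list(product(["L'", "R'"], repeat=2))  # (after L, after R)

# For each strategy pair, player 2's realized action is the component of her
# plan for the contingency player 1 actually creates.
normal_form = {
    (s1, s2): outcome[(s1, s2[0] if s1 == "L" else s2[1])]
    for s1 in p1_strategies
    for s2 in p2_strategies
}

for s1 in p1_strategies:  # print the matrix row by row
    print(s1, [normal_form[(s1, s2)] for s2 in p2_strategies])
```

The printed rows reproduce the payoff matrix of Figure 2.4.2: row L reads (3,1), (3,1), (1,2), (1,2) and row R reads (2,1), (0,0), (2,1), (0,0).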

Figure 2.4.2: The normal-form representation of the game in Figure 2.4.1. Rows are player 1's strategies; columns are player 2's strategies.

| Player 1 \ Player 2 | (L', L') | (L', R') | (R', L') | (R', R') |
|---|---|---|---|---|
| L | 3, 1 | 3, 1 | 1, 2 | 1, 2 |
| R | 2, 1 | 0, 0 | 2, 1 | 0, 0 |

Having now demonstrated that a dynamic game can be represented in normal form, we turn next to showing how a static (i.e., simultaneous-move) game can be represented in extensive form. To do so, we rely on the observation made in Section 1.1.A (in connection with the Prisoners' Dilemma) that the players need not act simultaneously: it suffices that each choose a strategy without knowledge of the other's choice, as would be the case in the Prisoners' Dilemma if the prisoners reached decisions at arbitrary times while in separate cells. Thus, we can represent a (so-called) simultaneous-move game between players 1 and 2 as follows:

1. Player 1 chooses an action a₁ from the feasible set A₁.

2. Player 2 does not observe player 1's move but chooses an action a₂ from the feasible set A₂.

Alternatively, player 2 could move first and player 1 could then move without observing 2's action. Recall that in Section 2.1.B we showed that a quantity-choice game with this timing and information structure differs importantly from the Stackelberg game with the same timing but an information structure in which firm 2 observes firm 1's move, and we argued that this sequential-move, unobserved-action game has the same Nash equilibrium as the simultaneous-move Cournot game.

To represent this kind of ignorance of previous moves in an extensive-form game, we introduce the notion of a player's information set.

Definition An information set for a player is a collection of decision nodes satisfying: (i) the player has the move at every node in the information set, and (ii) when the play of the game reaches a node in the information set, the player with the move does not know which node in the information set has (or has not) been reached.

Part (ii) of this definition implies that the player must have the same set of feasible actions at each decision node in an information