## Case ii

In case (i) Up strictly dominates Down for player 1, and in case (ii) Down strictly dominates Up. Recall from the previous section that a strategy s_i is strictly dominated if and only if there is no belief that player i could hold (about the strategies the other players will choose) such that it would be optimal to play s_i. Thus, if (q, 1 - q) is a mixed strategy for player 2, where q is the probability that 2 will play Left, then in case (i) there is no value of q such that Down is optimal for player 1, and in case (ii) there is no value of q such that Up is optimal. Letting (r, 1 - r) denote a mixed strategy for player 1, where r is the probability that 1 will play Up, we can represent the best-response correspondences for cases (i) and (ii) as in Figure 1.3.9. (In these two cases the best-response correspondences are in fact best-response functions, since there is no value of q such that player 1 has multiple best responses.)

In cases (iii) and (iv), neither Up nor Down is strictly dominated. Thus, Up must be optimal for some values of q and Down optimal for others. Let q' = (w - y)/(x - z + w - y). Then in case (iii) Up is optimal for q > q' and Down for q < q', whereas in case (iv) the reverse is true. In both cases, any value of r is optimal when q = q'. These best-response correspondences are given in Figure 1.3.10.
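This indifference calculation is easy to check numerically. The sketch below (the function names are my own) uses the text's payoff labels, where x, y, z, w are player 1's payoffs to (Up, Left), (Up, Right), (Down, Left), and (Down, Right) respectively:

```python
# A minimal sketch of player 1's best-response calculation, assuming the
# text's payoff labels: x = u1(Up, Left), y = u1(Up, Right),
# z = u1(Down, Left), w = u1(Down, Right).

def q_threshold(x, y, z, w):
    """The belief q' = (w - y)/(x - z + w - y) at which player 1 is
    indifferent between Up and Down."""
    return (w - y) / (x - z + w - y)

def best_response_r(q, x, y, z, w):
    """Player 1's best response(s) r*(q): the optimal values of r, with
    [0.0, 1.0] standing in for 'every r in [0, 1]' at indifference."""
    up = q * x + (1 - q) * y      # expected payoff of Up against (q, 1 - q)
    down = q * z + (1 - q) * w    # expected payoff of Down
    if up > down:
        return [1.0]              # Up is the unique best response
    if up < down:
        return [0.0]              # Down is the unique best response
    return [0.0, 1.0]             # any r in [0, 1] is optimal
```

For instance, with x = 2, y = 0, z = 0, w = 1 (a case (iii) configuration, since x > z and w > y), q' = 1/3 and Up is optimal for any q above that threshold.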

Figure 1.3.10.

Since q' = 1 if x = z and q' = 0 if y = w, the best-response correspondences for cases involving either x = z or y = w are L-shaped (i.e., two adjacent sides of the unit square), as would occur in Figure 1.3.10 if q' were 0 or 1 in cases (iii) or (iv).

Adding arbitrary payoffs for player 2 to Figure 1.3.8 and performing the analogous computations yields the same four best-response correspondences, except that the horizontal axis measures r and the vertical q, as in Figure 1.3.4. Flipping and rotating these four figures, as was done to produce Figure 1.3.5, yields Figures 1.3.11 and 1.3.12. (In the latter figures, r' is defined analogously to q' in Figure 1.3.10.)

The crucial point is that given any of the four best-response correspondences for player 1, r*(q) from Figures 1.3.9 or 1.3.10, and any of the four for player 2, q*(r) from Figures 1.3.11 or 1.3.12, the pair of best-response correspondences has at least one intersection, so the game has at least one Nash equilibrium. Checking all sixteen possible pairs of best-response correspondences is left as an exercise. Instead, we describe the qualitative features that can result. There can be: (1) a single pure-strategy Nash equilibrium; (2) a single mixed-strategy equilibrium; or (3) two pure-strategy equilibria and a single mixed-strategy equilibrium. Recall from Figure 1.3.6 that Matching Pennies is an example of case (2), and from Figure 1.3.7 that the Battle of the Sexes is an example of

Figure 1.3.11.

Figure 1.3.12.

Figure 1.3.13.

case (3). The Prisoners' Dilemma is an example of case (1); it results from combining case (i) or (ii) of r*(q) with case (i) or (ii) of q*(r).15
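These three qualitative possibilities can be verified mechanically for particular games. The helper below is my own sketch, not the text's analysis: it enumerates pure-strategy equilibria of a 2x2 bimatrix game cell by cell, then tests the interior indifference point. It assumes generic payoffs, so the payoff-tie cases of footnote 15 are not handled:

```python
from fractions import Fraction as F

def two_by_two_equilibria(A, B):
    """Nash equilibria of a 2x2 bimatrix game as (r, q) pairs, where
    r = Pr(Up) and q = Pr(Left). A is the row player's payoff matrix,
    B the column player's. A sketch for generic payoffs only."""
    eqs = []
    # Pure-strategy equilibria: in each cell, check that neither player
    # gains by deviating to their other pure strategy.
    for i in range(2):
        for j in range(2):
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]:
                eqs.append((F(1 - i), F(1 - j)))
    # Interior mixed equilibrium: each player mixes so as to make the
    # other indifferent between their two pure strategies.
    dq = A[0][0] - A[1][0] + A[1][1] - A[0][1]   # x - z + w - y
    dr = B[0][0] - B[0][1] + B[1][1] - B[1][0]
    if dq != 0 and dr != 0:
        q = F(A[1][1] - A[0][1], dq)             # q' = (w - y)/(x - z + w - y)
        r = F(B[1][1] - B[1][0], dr)
        if 0 < q < 1 and 0 < r < 1:
            eqs.append((r, q))
    return eqs
```

Matching Pennies then yields the single mixed equilibrium (r, q) = (1/2, 1/2), while the Battle of the Sexes yields its two pure equilibria plus the mixed equilibrium (2/3, 1/3), matching cases (2) and (3) above.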

We conclude this section with a discussion of the existence of a Nash equilibrium in more general games. If the above arguments for two-by-two games are stated mathematically rather than graphically, then they can be generalized to apply to n-player games with arbitrary finite strategy spaces.

Theorem (Nash 1950): In the n-player normal-form game G = {S_1, ..., S_n; u_1, ..., u_n}, if n is finite and S_i is finite for every i then there exists at least one Nash equilibrium, possibly involving mixed strategies.

The proof of Nash's Theorem involves a fixed-point theorem. As a simple example of a fixed-point theorem, suppose f(x) is a continuous function with domain [0,1] and range [0,1]. Then Brouwer's Fixed-Point Theorem guarantees that there exists at least one fixed point — that is, there exists at least one value x* in [0,1] such that f(x*) = x*. Figure 1.3.13 provides an example.
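In one dimension a fixed point can even be located constructively, since g(x) = f(x) - x satisfies g(0) >= 0 and g(1) <= 0, so g changes (weak) sign on [0,1]. The sketch below (function name my own) bisects on the sign of g, using f(x) = cos x — one standard continuous map of [0,1] into itself — as the example:

```python
import math

def find_fixed_point(f, tol=1e-12):
    """Bisection for a fixed point of a continuous f: [0, 1] -> [0, 1].
    Because f(0) >= 0 and f(1) <= 1, g(x) = f(x) - x weakly changes sign
    on [0, 1], so some x* with f(x*) = x* must exist -- the
    one-dimensional content of Brouwer's theorem."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= mid:   # g(mid) >= 0: keep searching to the right
            lo = mid
        else:               # g(mid) < 0: a sign change lies to the left
            hi = mid
    return (lo + hi) / 2

x_star = find_fixed_point(math.cos)   # cos maps [0, 1] into [cos 1, 1]
```

Here x_star converges to the unique solution of cos x = x (approximately 0.739).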

15The cases involving x = z or y = w do not violate the claim that the pair of best-response correspondences has at least one intersection. On the contrary, in addition to the qualitative features described in the text, there can now be two pure-strategy Nash equilibria without a mixed-strategy Nash equilibrium, and a continuum of mixed-strategy Nash equilibria.

Applying a fixed-point theorem to prove Nash's Theorem involves two steps: (1) showing that any fixed point of a certain correspondence is a Nash equilibrium; (2) using an appropriate fixed-point theorem to show that this correspondence must have a fixed point. The relevant correspondence is the n-player best-response correspondence. The relevant fixed-point theorem is due to Kakutani (1941), who generalized Brouwer's theorem to allow for (well-behaved) correspondences as well as functions.

The n-player best-response correspondence is computed from the n individual players' best-response correspondences as follows. Consider an arbitrary combination of mixed strategies (p_1, ..., p_n). For each player i, derive i's best response(s) to the other players' mixed strategies (p_1, ..., p_{i-1}, p_{i+1}, ..., p_n). Then construct the set of all possible combinations of one such best response for each player. (Formally, derive each player's best-response correspondence and then construct the cross-product of these n individual correspondences.) A combination of mixed strategies (p_1*, ..., p_n*) is a fixed point of this correspondence if (p_1*, ..., p_n*) belongs to the set of all possible combinations of the players' best responses to (p_1*, ..., p_n*). That is, for each i, p_i* must be (one of) player i's best response(s) to (p_1*, ..., p_{i-1}*, p_{i+1}*, ..., p_n*), but this is precisely the statement that (p_1*, ..., p_n*) is a Nash equilibrium. This completes step (1).
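For the two-player, two-strategy case, the fixed-point condition can be checked directly: a profile is a Nash equilibrium exactly when each player's mixed strategy puts weight only on pure strategies that are best responses to the other's mixture. The check below is my own sketch of this condition (A and B are the row and column players' payoff matrices; r = Pr(Up), q = Pr(Left)), not Nash's proof:

```python
def is_nash_2x2(r, q, A, B, tol=1e-9):
    """Check whether (r, q) is a fixed point of the best-response
    correspondence of a 2x2 bimatrix game, i.e. a Nash equilibrium.
    A sketch of step (1) for the two-player case."""
    # Player 1's expected payoffs against player 2's mixture (q, 1 - q).
    up = q * A[0][0] + (1 - q) * A[0][1]
    down = q * A[1][0] + (1 - q) * A[1][1]
    # r is a best response iff it puts no weight on a strictly worse strategy.
    ok1 = abs(up - down) < tol or (up > down and r == 1) or (down > up and r == 0)
    # Player 2's expected payoffs against player 1's mixture (r, 1 - r).
    left = r * B[0][0] + (1 - r) * B[1][0]
    right = r * B[0][1] + (1 - r) * B[1][1]
    ok2 = abs(left - right) < tol or (left > right and q == 1) or (right > left and q == 0)
    return ok1 and ok2
```

In Matching Pennies, for example, the profile (1/2, 1/2) passes this check (each player is indifferent), while any pure profile fails it.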

Step (2) involves the fact that each player's best-response correspondence is continuous, in an appropriate sense. The role of continuity in Brouwer's fixed-point theorem can be seen by modifying f(x) in Figure 1.3.13: if f(x) is discontinuous then it need not have a fixed point. In Figure 1.3.14, for example, f(x) > x for all x < x', but f(x) < x for all x ≥ x'.16

To illustrate the difference between f(x) in Figure 1.3.14 and a player's best-response correspondence, consider case (iii) in Figure 1.3.10: at q = q', r*(q') includes zero, one, and the entire interval in between. (A bit more formally, r*(q') includes the limit of r*(q) as q approaches q' from the left, the limit of r*(q) as q approaches q' from the right, and all the values of r in between these two limits.) If f(x') in Figure 1.3.14 behaved analogously to

16The value of f(x') is indicated by the solid circle. The open circle indicates that f(x') does not include this value. The dotted line is included only to indicate that both circles occur at x = x'; it does not indicate further values of f(x').

Figure 1.3.14.


player 1's best-response correspondence r*(q'), then f(x') would include not only the solid circle (as in the figure) but also the open circle and the entire interval in between, in which case f(x) would have a fixed point at x'.

Each player's best-response correspondence always behaves the way r*(q') does in Figure 1.3.10: it always includes (the appropriate generalizations of) the limit from the left, the limit from the right, and all the values in between. The reason for this is that, as shown earlier for the two-player case, if player i has several pure strategies that are best responses to the other players' mixed strategies, then any mixed strategy p_i that puts all its probability on some or all of player i's pure-strategy best responses (and zero probability on all of player i's other pure strategies) is also a best response for player i. Because each player's best-response correspondence always behaves in this way, the n-player best-response correspondence does too; these properties satisfy the hypotheses of Kakutani's Theorem, so the latter correspondence has a fixed point.
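The convexity property this argument relies on is simple to verify numerically: any mixture over tied pure-strategy best responses earns exactly the best-response payoff, and so is itself a best response. A tiny sketch, with hypothetical payoff numbers of my own:

```python
def expected_payoff(p, payoffs):
    """Expected payoff of mixed strategy p (probabilities over pure strategies)."""
    return sum(prob * u for prob, u in zip(p, payoffs))

# Hypothetical payoffs to player i's three pure strategies against some fixed
# profile of the other players: the first two are tied best responses.
payoffs = [3.0, 3.0, 1.0]
mix = [0.5, 0.5, 0.0]   # puts weight only on the tied best responses,
                        # so it earns the best-response payoff of 3.0
```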

Nash's Theorem guarantees that an equilibrium exists in a broad class of games, but none of the applications analyzed in Section 1.2 are members of this class (because each application has infinite strategy spaces). This shows that the hypotheses of Nash's Theorem are sufficient but not necessary conditions for an equilibrium to exist—there are many games that do not satisfy the hypotheses of the Theorem but nonetheless have one or more Nash equilibria.