Feb 25
1 Nash Equilibrium
It is high time we moved to the central concept of game theory, that of a Nash
Equilibrium. So far our analysis has explored the limits of rationality and common
knowledge of rationality. But in most games of interest, this is too little: in a coordi-
nation game like the Battle of the Sexes, every strategy is rational, so the analysis so
far is DOA. Nash’s idea is to move one step back and instead of having a full-blown
theory of how to play a game, simply postulate a criterion that any prediction ought
to satisfy: that of being self-supporting. That is, a profile of strategies is a Nash
equilibrium if, fixing what the others are to do, no player has a profitable deviation.
Formally, for a game in normal form (Si , ui )ni=1 , a NE in pure strategies is a profile
s∗ such that
s∗i ∈ BRi (s∗−i ) for all i = 1, ..., n.
Likewise, a profile of mixed strategies σ∗ is a NE if
σi∗ ∈ BRi(σ−i∗) for all i = 1, ..., n.
Observe that rationality alone is not enough: players are to be rational (so that they
best-respond to a conjecture) and to have perfect foresight (so that their conjecture
is exactly what the other players will play.)
First off, for the Prisoners’ Dilemma and the second price sealed bid auction,
the profile of (weakly) dominant strategies is certainly a NE: not only does a player have
no profitable deviation given what the others are to do; the player has no profitable
deviation regardless of what the others are to do. Interestingly, for the second price
sealed bid auction (with complete information), there are many other NE: let us
rank the bidders so that v1 > v2 > · · · > vn . There is a NE where bidder i wins, for
any i. Take for instance the last bidder, bidder n, and consider the profile s∗ given
by s∗1 = s∗2 = · · · = s∗n−1 = vn and s∗n ≥ v1. Then, for any given bidder, given what
the others are to do, there's no alternative bid that yields a higher payoff: bidder
n wins and pays the second price vn, so gets zero utility; bidding more won't change
anything, and bidding less than vn would result in a loss, again yielding zero; for
any other bidder, the only way to change the outcome is to bid at least s∗n, but then
the second price is s∗n, and this would be strictly worse for all bidders other than 1,
and would yield at most zero for bidder 1 (when s∗n = v1). Analogous
reasonings establish that there are NE where other bidders win. It is crucial to note,
however, that in all such NE bidders necessarily play weakly dominated strategies
(as they are not bidding their values.) So an issue with NE is that it may well
involve players playing inadmissible strategies.
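These equilibria can be checked numerically. The sketch below (with illustrative values v = (10, 6, 3) and a bid grid of my choosing, not numbers from the text) verifies that both the truthful profile and the profile where the lowest-value bidder wins survive all unilateral deviations:

```python
# Second-price sealed-bid auction: highest bid wins, winner pays the highest
# competing bid; ties are split uniformly among the tied top bidders.
def payoffs(bids, values):
    top = max(bids)
    winners = [i for i, b in enumerate(bids) if b == top]
    pay = [0.0] * len(bids)
    for i in winners:
        price = max(b for j, b in enumerate(bids) if j != i)
        pay[i] = (values[i] - price) / len(winners)
    return pay

def is_nash(profile, values, grid):
    """No bidder has a profitable unilateral deviation on the grid of bids."""
    base = payoffs(profile, values)
    for i in range(len(profile)):
        for b in grid:
            dev = list(profile)
            dev[i] = b
            if payoffs(dev, values)[i] > base[i] + 1e-9:
                return False
    return True

values = [10.0, 6.0, 3.0]
grid = [x * 0.5 for x in range(0, 25)]   # bids 0, 0.5, ..., 12
truthful = [10.0, 6.0, 3.0]              # the weakly dominant profile
low_wins = [3.0, 3.0, 10.0]              # bidders 1 and 2 bid v3; bidder 3 wins
```

Both `is_nash(truthful, values, grid)` and `is_nash(low_wins, values, grid)` come out true, and in the second profile every bidder's equilibrium payoff is zero, exactly as in the text.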
Next, consider a Bertrand duopoly: two firms simultaneously choose prices p1, p2 ∈ [0, 1],
production is costless, and a unit mass of consumers with unit demand buys from the
cheaper firm (splitting evenly in case of a tie). The unique NE is (0, 0). But pi = 0 is
weakly dominated by any price in (0, 1]: charging zero yields zero no matter what;
charging a positive price yields zero if the other charges less, but it yields positive
profits if the other charges more.
Move to a coordination game, like BoS. There are two NE in pure strategies, (F,
F) and (O,O): if the other will show up at F (resp. O), the given player will do well
to show up at F (resp. O). There is also one in mixed strategies: if wife plays F with
probability p and O with probability 1 − p, then husband gets 2p from F and 1 − p
from O; hence, if 2p = 1 − p, or p = 1/3, husband will be indifferent between F and
O (and at that point he might as well randomize); likewise, if husband plays F with
probability 2/3, wife gets 2/3 from F and also 2/3 from O, so she might as well randomize;
it then follows that these two mixed strategies are mutual best responses, so form a
NE in mixed strategies.
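The indifference computation can be replicated numerically; in the sketch below the matrices encode the BoS payoffs used in the text (2 and 1 for husband, 1 and 2 for wife), and both players come out indifferent at the proposed mix:

```python
import numpy as np

# BoS payoffs; rows = husband's action (F, O), columns = wife's action (F, O)
U_h = np.array([[2, 0],
                [0, 1]])
U_w = np.array([[1, 0],
                [0, 2]])

wife = np.array([1/3, 2/3])       # wife plays F with probability 1/3
husband = np.array([2/3, 1/3])    # husband plays F with probability 2/3

payoffs_h = U_h @ wife            # husband's payoff to each pure action: both 2/3
payoffs_w = husband @ U_w         # wife's payoff to each pure action: both 2/3
```

Since each player is indifferent between the pure actions, any mix is a best response, so the pair of mixes is a NE.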
Consider the following game:
A B C
A 0, 4 4, 0 4, 0
B 3, 0 0, 3 3, 0
C 2, 0 2, 0 0, 2
It is not hard to check that there is no NE in pure strategies: Bob wants to
match, Alice wants to not match. Consider a mixed strategy profile σ1 = (3/7, 4/7, 0),
σ2 = (4/7, 3/7, 0) where Alice and Bob randomize between A and B; then A and B are
indifferent for each player (A and B yield 12/7 for each player.) But this does not mean
that they form a NE: in fact, given σ2, Alice can do strictly better by playing C, as it
yields 2 > 12/7. The unique NE of the game is found by computing the mixed strategy of
one player that makes all three pure strategies of the other player indifferent. Using
σ1 = (pA, pB, pC) and σ2 = (qA, qB, qC), we then have 4pA = 3pB = 2pC, which
(using pA + pB + pC = 1) solves to pA = 3/13, pB = 4/13, and pC = 6/13; likewise, we
must have 4(qB + qC) = 3(qA + qC) = 2(qA + qB), which solves to qA = 7/13, qB = 5/13,
and qC = 1/13.
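A quick numerical check of these computations (the matrices transcribe the table above, with Alice as the row player):

```python
import numpy as np

# Alice (rows) vs Bob (columns) in the 3x3 game above
U_alice = np.array([[0, 4, 4],
                    [3, 0, 3],
                    [2, 2, 0]])
U_bob = np.array([[4, 0, 0],
                  [0, 3, 0],
                  [0, 0, 2]])

sigma_alice = np.array([3, 4, 6]) / 13
sigma_bob = np.array([7, 5, 1]) / 13

alice_payoffs = U_alice @ sigma_bob   # each of Alice's pure strategies yields 24/13
bob_payoffs = sigma_alice @ U_bob     # each of Bob's pure strategies yields 12/13

# the failed candidate from the text: against (4/7, 3/7, 0), Alice's C does better
candidate = np.array([4, 3, 0]) / 7
vs_candidate = U_alice @ candidate    # [12/7, 12/7, 2]: C strictly beats A and B
```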
Another fact that helps in computing NE is that we can perform IEDS before
starting to look for a NE. This is so because no NE is lost during IEDS. The importance
of this (and any other such helping hand) is that the problem of computing NE is
notoriously difficult. In the simple examples above we can find NE by brute force;
but what if we had a general game with thousands of pure strategies? The effective
algorithm that we have at our disposal is pretty much that of exhaustive search
(maybe after performing IEDS). And this may take a whole lot of time to converge,
if at all. In practice, our best bet is to have an idea of what we are looking for.
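For pure strategies, the exhaustive search just mentioned is easy to write down. Below is a minimal brute-force finder (a sketch; the helper name and the use of Python are mine), run on BoS and on the 3×3 game above:

```python
import itertools

def pure_nash(U1, U2):
    """Exhaustive search for pure-strategy NE in a bimatrix game (U1: row player)."""
    n, m = len(U1), len(U1[0])
    eqs = []
    for i, j in itertools.product(range(n), range(m)):
        # (i, j) is a NE iff i is a best reply to column j and j to row i
        row_best = U1[i][j] >= max(U1[k][j] for k in range(n))
        col_best = U2[i][j] >= max(U2[i][l] for l in range(m))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

# Battle of the Sexes (row: husband, actions F then O): two pure NE, (F,F) and (O,O)
bos1 = [[2, 0], [0, 1]]
bos2 = [[1, 0], [0, 2]]

# the 3x3 Alice-Bob game above: no pure NE
g1 = [[0, 4, 4], [3, 0, 3], [2, 2, 0]]
g2 = [[4, 0, 0], [0, 3, 0], [0, 0, 2]]
```

Here `pure_nash(bos1, bos2)` returns `[(0, 0), (1, 1)]` while `pure_nash(g1, g2)` returns `[]`; the point of the remark above is that this enumeration scales as the product of the strategy-set sizes, which becomes prohibitive quickly.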
There’s a class of games where we can do way better than that. That’s the class
of two-player zero-sum games, whereby the utility of player 2 is the negative of the
utility of player 1 at every profile s. In fact, the entire analysis of such games takes
a different perspective, built around the idea of a security strategy, namely a solution
to max_{σi} min_{σ−i} ui(σi, σ−i), which in effect presumes that the others will act in a
way to minimize player i's payoff; under that presumption, the player's best response
is to play a security strategy. In a general non-zero sum
game, a security strategy will typically yield less than what a player would get at a
NE. But in a two player zero-sum game, the presumption that the other will act to
minimize my own payoff is warranted: arg maxσ1 u1 (σ1 , σ2 ) = arg minσ1 u2 (σ1 , σ2 ),
as u2 (σ) = −u1 (σ), and similarly for player 2. In fact, a player can get no more than
his or her security level at any NE, so we have a strong notion of a solution here:
play your security strategy; whatever the other will do, you will have guaranteed
your maxmin value; and if the other also plays his or her security strategy, we will
be at a NE. The maxmin and minmax values (viewing player 1 as the maximizer)
are given by

v̲(u1) = sup_{σ1} inf_{σ2} u1(σ1, σ2) and v̄(u1) = inf_{σ2} sup_{σ1} u1(σ1, σ2).

When v̲(u1) = v̄(u1), we say that this common value v(u1) is the value of the game described
by u1. When a value exists, both players have approximately optimal (maxmin)
strategies, in that each can guarantee their security level up to ε, for any ε > 0.
If the value is attained, then players have optimal strategies, and the pair of optimal
strategies necessarily forms a NE. In fact, if σ∗ is a NE then
u1(σ1∗, σ2∗) = max_{σ1} u1(σ1, σ2∗) ≥ max_{σ1} min_{σ2} u1(σ1, σ2) ≥ min_{σ2} u1(σ1∗, σ2), and

u1(σ1∗, σ2∗) = min_{σ2} u1(σ1∗, σ2) ≤ min_{σ2} max_{σ1} u1(σ1, σ2) ≤ max_{σ1} u1(σ1, σ2∗),
so v(u1) exists and min_{σ2} u1(σ1∗, σ2) = max_{σ1} min_{σ2} u1(σ1, σ2), and thus σ1∗ is a
security strategy, and similarly for σ2∗. Conversely, let σ∗ be a pair of security
strategies. By Nash's theorem a NE exists, so the argument above shows that v(u1)
exists. We then have u1(σ1∗, σ2∗) ≥ min_{σ2} u1(σ1∗, σ2) = max_{σ1} u1(σ1, σ2∗) ≥ u1(σ1∗, σ2∗),
so ui(σi∗, σj∗) = max_{σi} ui(σi, σj∗) for i, j = 1, 2, establishing that σ∗ is a NE.
In fact, let v denote the value of a finite normal-form two-player zero-sum game,
with u1(s) = u(s) = −u2(s) for every s ∈ S. Consider the following linear programming
problem in the variables (x(s1))_{s1∈S1} and z:

z∗ ≡ max z
s.t.  Σ_{s1} x(s1) u(s1, s2) ≥ z,  ∀s2 ∈ S2
      Σ_{s1} x(s1) = 1
      x(s1) ≥ 0,  ∀s1 ∈ S1.
Then it must be that z ∗ = v. In fact, letting σ1∗ denote player 1’s security strategy, we
have u(σ1∗ , σ2 ) ≥ v for any σ2 , and in particular for any s2 . So using x(s1 ) = σ1∗ (s1 )
for each s1 , we see that the vector (σ1∗ , v) satisfies the constraints, so we must
have z∗ ≥ v. Conversely, take a vector (x, z) satisfying the constraints. Using
ū = max_{s1,s2} |u(s1, s2)|, we have

z ≤ Σ_{s1} x(s1) u(s1, s2) ≤ ū Σ_{s1} x(s1) = ū,

so z∗ < ∞, as ū must be a finite number when S1 and S2 are finite sets. Thus there
must exist a vector x such that the vector (x, z∗) satisfies the constraints. But then
x is readily seen to be a mixed strategy for player 1 and u(x, s2) ≥ z∗ for every
s2 , and hence also for every σ2 . It follows that player 1 has a mixed strategy that
guarantees him at least z∗, so it must be that v ≥ z∗. Combining with z∗ ≥ v, we
conclude that z∗ = v, and also that the problem of finding a security strategy can
be solved efficiently as a linear program (in polynomial time by interior-point
methods; in practice, the simplex method works very well).
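As an illustration of the LP formulation, the sketch below feeds it to an off-the-shelf solver (scipy.optimize.linprog; matching pennies is my illustrative game, not one from the text) and recovers the value 0 with security strategy (1/2, 1/2):

```python
import numpy as np
from scipy.optimize import linprog

U = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # matching pennies: player 1's payoff u(s1, s2)
n, m = U.shape

# variables are (x(s1), ..., x(sn), z); maximizing z <=> minimizing -z
c = np.zeros(n + 1)
c[-1] = -1.0
A_ub = np.hstack([-U.T, np.ones((m, 1))])             # z - sum_s1 x(s1)u(s1,s2) <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum_s1 x(s1) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]              # x >= 0, z free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, value = res.x[:n], res.x[-1]
```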
Back to the Bertrand game above, let us remove the upper bound on prices: now
firms are allowed to choose any price in R+ . The “indifference” approach can now
be used to compute much less grim NE of this duopoly problem. Let’s focus on a
mixed strategy with a continuous CDF F on R+, and look for symmetric NE. For
any p̂ ≥ 0, the symmetric mixed strategy described by F with F(p) = 1 − p̂/p for
p ≥ p̂ and F(p) = 0 for p < p̂ is a NE: for p with F(p) > 0 the profit is

(1 − F(p)) p = (p̂/p) p = p̂,

so all prices in the support of F yield the same profit and prices p < p̂ yield p < p̂, and
hence there's no profitable deviation. That is, there's no “Bertrand paradox”: each firm
earns p̂ in this equilibrium and, since p̂ is arbitrary, the firms can make arbitrarily
large profits in equilibrium. Of course, the assumption of unit demand in the face of an
unbounded price is problematic. One can improve on
this direction by assuming that the firms face a downward sloping demand D(p)
with D(p)p > 0 for every p > 0, D(p)p non-decreasing with limp→∞ D(p)p = ∞.
For instance, D(p) = p^{−α}, for 0 < α < 1. Now F would be F(p) = 1 − p̂/(D(p)p) for
p ≥ p̂ and 0 otherwise, where D(p̂) = 1. Profits in this NE would not be unbounded,
but still positive. But again, the assumptions on D pretty much say that consumers
will continue consuming even when prices are arbitrarily large.
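Going back to the unbounded-price equilibrium F(p) = 1 − p̂/p, a Monte Carlo check (illustrative p̂ = 2, my choice) confirms that every price in the support earns p̂, while a price below p̂ earns only itself:

```python
import numpy as np

rng = np.random.default_rng(0)
p_hat = 2.0                        # lower bound of the support (illustrative choice)
u = rng.random(1_000_000)
rival = p_hat / (1.0 - u)          # inverse-CDF draw from F(p) = 1 - p_hat/p, p >= p_hat

def expected_profit(p):
    # unit demand: sell one unit iff the rival's price is higher (ties have prob. zero)
    return float(np.mean(np.where(p < rival, p, 0.0)))

profits = [expected_profit(p) for p in (2.0, 3.0, 5.0, 10.0)]   # each close to 2.0
```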
Building on the analysis above, let us go back to the bounded prices (in [0, 1])
and add a capacity constraint: a firm can only sell up to 2/3 units. Then (0, 0) is
not a NE anymore, as p1 > 0 yields p1/3 when p2 = 0, whereas p1 = 0 yields zero. In
fact, there is no pure strategy NE: for any pj > 1/2, firm i would undercut; pi = 1 is
the best response for any pj ≤ 1/2; and against pi = 1, firm j would rather raise its
price toward 1 than keep pj ≤ 1/2. Nevertheless, we can again construct a
symmetric equilibrium in mixed strategies where both firms use a continuous CDF
on [0, 1]. The profit of a firm choosing p ∈ [0, 1] when the other employs F is
F(p) (p/3) + (1 − F(p)) (2p/3).
If p and p′ are in the support of F , the profit from p and p′ must be equal. So for
every p in the support of F , we must have
F(p) (p/3) + (1 − F(p)) (2p/3) = k  ⇒  F(p) = 2 − 3k/p
for some k ≥ 0. As F is a CDF, we must have F (1) = 1, or k = 1/3. So
F(p) = 2 − 1/p
for every p in the support of F . Finally, because F (p) ≥ 0, we must have p ≥ 1/2
for p to be in the support of F . Observe that we have indeed computed a NE, as
any p < 1/2 yields at most 1/3, and any p ≥ 1/2 yields 1/3 (given that the other
plays F ).
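The constancy of profits under F(p) = 2 − 1/p can be verified directly:

```python
import numpy as np

def F(p):
    """Candidate equilibrium CDF: 2 - 1/p on [1/2, 1], clipped to [0, 1]."""
    return np.clip(2.0 - 1.0 / p, 0.0, 1.0)

def profit(p):
    # with prob F(p) the rival is cheaper and sells its capacity 2/3,
    # leaving residual demand 1/3; otherwise this firm sells its capacity 2/3
    return F(p) * p / 3.0 + (1.0 - F(p)) * 2.0 * p / 3.0

grid = np.linspace(0.5, 1.0, 101)
vals = profit(grid)        # constant and equal to 1/3 on the support
```

Below the support, F(p) = 0 and the profit is 2p/3 < 1/3, so no deviation pays.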
While we are dealing with mixed strategies over a continuum of pure strategies,
it is worth going over an example of a War of Attrition game. Two players play
a concession game over time, where time is continuous. The first to concede loses
the game, and there’s a cost to staying in the game, equal to the time spent in
it. Finally, the value of winning is vi . So letting ti denote the time that player i
concedes, the payoff is given by
ui(ti, tj) =  vi − tj        if ti > tj,
              (1/2)vi − tj   if ti = tj,
              −ti            if ti < tj.
There are two NE (outcomes) in pure strategies where one player concedes immedi-
ately: ti = 0, tj ≥ vi, for i = 1, 2 and j ≠ i. A perhaps more appealing NE is one
in mixed strategies. Let us use Fi and Fj as differentiable, strictly increasing CDFs
over R+ to describe the players’ mixed strategies. Then player i will be indifferent
among all times ti if
−ti (1 − Fj(ti)) + Fj(ti) E_{Fj}[vi − tj | tj ≤ ti] = −ti (1 − Fj(ti)) + ∫_0^{ti} (vi − tj) dFj(tj) = k

for all ti, for some k. As this is to be true for every ti ∈ R+, the derivative of the
function above with respect to ti must be equal to zero, so

−(1 − Fj(ti)) + vi Fj′(ti) = 0, which solves to Fj(tj) = 1 − e^{−tj/vi}.
So, if that’s the strategy used by player j, player i will be indifferent over all
conceding times and will be willing to randomize, using the corresponding CDF
Fi (ti ) = 1 − e−ti /vj , provided that the expected profit of any ti given Fj is at least
zero (because player i can guarantee zero by choosing ti = 0.) Substituting the
expression for Fj we get that the expected profit of any ti is

−ti e^{−ti/vi} + ∫_0^{ti} (vi − tj) (1/vi) e^{−tj/vi} dtj = −ti e^{−ti/vi} + ti e^{−ti/vi} = 0,

so the constant is k = 0: every conceding time ti, including ti = 0, yields zero
expected profit, and the proposed mixed strategies indeed form a NE.
As we saw above, NE may well involve players playing inadmissible strategies. One
way to avoid such issue is to strengthen the requirements: on top of being self-
supporting, we can add ideas related to cautiousness, as follows.
Fix a finite normal form game (Si, ui)ni=1. We say that σ∗ is a perfect equilibrium
if there is a sequence of profiles σk → σ∗ with σik ∈ ∆0(Si) for every i and k, such that

σi∗ ∈ BRi(σ−ik) for all k and all i = 1, ..., n,

where ∆0(Si) is the set of probability distributions over Si with full support.
That is, even if others were to “tremble” and choose σ−ik close to σ−i∗, the choice
σi∗ would remain optimal. The trembles have to put positive probability on every
pure strategy, forcing a player to be “cautious”, as they are to consider all strategies
of the opponents as possible. In other words, every pure strategy in the support of
σi∗ must be admissible. Let us call σi∗ admissible when this last property is satisfied,
and let us say that a NE σ ∗ is admissible if each σi∗ is admissible.
That is, a perfect equilibrium is admissible. With more than two players, there
are admissible equilibria that are not perfect. With two players, the two concepts
coincide. In fact, pick an admissible NE σ∗, so σi∗ ∈ BRi(σj′) where σj′ ∈ ∆0(Sj),
for i, j = 1, 2. Define σjk = (1 − 1/k)σj∗ + (1/k)σj′, so that σjk ∈ ∆0(Sj). By linearity
of expected utility in σj, σi∗ ∈ BRi(σj∗) together with σi∗ ∈ BRi(σj′) imply that
σi∗ ∈ BRi(σjk) for every k, so σ∗ is perfect. The issue with more than two players
is that σi∗ is only guaranteed to be a best response to some conjecture with full
support, and not necessarily to one such conjecture formed by the product of the
others’ mixed strategies.
To verify that the two concepts are equivalent, let σ∗ be the limit of ε-perfect
equilibria σε as ε → 0. Then, if σi∗(si) > 0, it must be that for all ε > 0 small enough,
ui(si, σ−iε) ≥ ui(si′, σ−iε) for every si′, for otherwise we would have σiε(si) < ε for
every such ε > 0, which would contradict σi∗(si) > 0. But this then means that
si ∈ BRi(σ−iε) for all such si and, as a consequence, σi∗ ∈ BRi(σ−iε), for all ε > 0
small enough. This means that σ∗ is a perfect equilibrium. Conversely, if σ∗ is a
perfect equilibrium, then σi∗ ∈ BRi(σ−ik) for the appropriate sequence σk. It follows
that if ui(si, σ−ik) < ui(si′, σ−ik) for two pure strategies si and si′ along the sequence,
then σi∗(si) must be zero. But then, for each ε > 0, we will have k(ε) large enough so
that σik(ε)(si) < ε (that's because σik → σi∗). Now set σε = σk(ε) to get the required
ε-perfect equilibrium for each ε.
Perfect equilibria deal with the issue of non admissible equilibria, and also go a
bit further, but fail other desirable criteria. Consider the following game:
A B C
A 1, 1 0, 0 −1, −2
B 0, 0 0, 0 0, −2
C −2, −1 −2, 0 −2, −2
Deleting the strictly dominated strategy C for both players, we have that (A,A) is
the unique admissible and hence perfect equilibrium. But if we keep C for both
players we get that (B,B) is a perfect equilibrium, because it is the limit as ε → 0
of the ε-perfect equilibrium ((ε, 1 − 3ε, 2ε), (ε, 1 − 3ε, 2ε)). So a perfect equilibrium
does not satisfy the following criterion of elimination of dominated strategies: an
equilibrium of a game ought to remain an equilibrium of the game resulting from the
elimination of some dominated strategies. The idea is that rational players would
never play a dominated strategy, so their behavior should not be affected by the
inclusion or deletion of such “irrelevant” choices.
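The tremble logic can be checked directly: against the tremble (ε, 1 − 3ε, 2ε), B is the strict best response, while after deleting C any full-support conjecture over {A, B} makes A strictly better (a sketch with ε = 0.01; the conjecture (1/2, 1/2) is my arbitrary choice):

```python
import numpy as np

# Row player's payoffs in the symmetric game above (strategies A, B, C)
U = np.array([[ 1,  0, -1],
              [ 0,  0,  0],
              [-2, -2, -2]])

eps = 0.01
tremble = np.array([eps, 1 - 3*eps, 2*eps])   # the epsilon-tremble around B
payoffs = U @ tremble                         # payoff of A, B, C vs the tremble
# payoffs = [-eps, 0, -2]: B is the strict best response, so (B,B) survives

# after deleting C, any full-support conjecture over {A, B} favors A strictly
conjecture = np.array([0.5, 0.5])
reduced = U[:2, :2] @ conjecture              # A earns 1/2, B earns 0
```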
Now notice that the “tremble” C is much worse than the “tremble” A, if a player
is supposed to play B. It stands to reason that rational players will try harder to
avoid a worse mistake. The following strengthening of ε-perfection captures this
idea. Given ε > 0, a profile of mixed strategies σε with σiε ∈ ∆0(Si) is said to be an
ε-proper equilibrium if

ui(si, σ−iε) < ui(si′, σ−iε) ⇒ σiε(si) ≤ ε σiε(si′).

The idea is that a worse pure strategy si (when others play σ−iε) ought to be
assigned a (positive) weight that is infinitely smaller than the weight assigned to si′,
in the sense that σiε(si)/σiε(si′) ≤ ε.
Let us go over the computations involved in finding all Nash, perfect, and
proper equilibria of Selten's Horse game in Figure 1.
[Figure 1: Selten's Horse. Player 1 chooses C or D; after C, player 2 chooses c, which
ends the game with payoffs (1, 1, 1), or d; player 3, who does not observe whether he was
reached via D or via d, chooses L or R. Payoffs are (3, 3, 2) after (D, L), (0, 0, 0) after
(D, R), (4, 4, 0) after (d, L), and (0, 0, 1) after (d, R).]
There are two NE in pure strategies, (C, c, R) and (D, c, L), and infinitely many NE
in mixed strategies. In fact, observe that at (C, c, R) player 3 is indifferent between
L and R given (C, c) for the other two; so let σ3 = (r, 1 − r); in order for C to be a
best response to c and σ3 , we need 1 ≥ 3r; and in order for c to be a best response
to C and σ3, we need 1 ≥ 4r; so we have the following set of mixed strategy NE:

{(C, c, σ3) : σ3 = (r, 1 − r), r ∈ [0, 1/4]}.

Observe that all such NE yield the same outcome (1, 1, 1) of the pure NE (C, c, R).
Likewise, at (D, c, L), player 2 is indifferent between c and d given (D, L) for the
other two; so using σ2 = (q, 1 − q), in order for D to be a best response to σ2 and L
we need 3 ≥ q + 4(1 − q), or q ≥ 1/3; and of course L is better than R given σ2 and
D; so we also have this other set of mixed strategy NE:

{(D, σ2, L) : σ2 = (q, 1 − q), q ∈ [1/3, 1]}.
Again, all such NE yield the outcome (3, 3, 2) of the pure NE (D, c, L).
Every NE in this latter set fails to be perfect (and hence proper): letting p =
Pr(C), q = Pr(c) and r = Pr(L), we have u2(c, σ−2) − u2(d, σ−2) = pr + 3(1 − p)r +
p(1 − r) − 4pr − 3(1 − p)r = p(1 − 4r) < 0 whenever p > 0 and r > 1/4, in particular
when r is close to 1. So c is not a best response for player 2 when the other two players
tremble their choices around D and L. This means that q must be zero for perfection to
be obtained. But this violates the requirement that q be at least 1/3.
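These claims can be verified by brute-force expected-payoff computation (the helper below and the numerical spot checks are mine):

```python
def u(p, q, r):
    """Expected payoffs (u1, u2, u3) in Selten's Horse.
    p = Pr(1 plays C), q = Pr(2 plays c), r = Pr(3 plays L)."""
    outcomes = [
        (p * q,                 (1, 1, 1)),   # path C, c
        (p * (1 - q) * r,       (4, 4, 0)),   # path C, d, L
        (p * (1 - q) * (1 - r), (0, 0, 1)),   # path C, d, R
        ((1 - p) * r,           (3, 3, 2)),   # path D, L
        ((1 - p) * (1 - r),     (0, 0, 0)),   # path D, R
    ]
    return tuple(sum(prob * pay[i] for prob, pay in outcomes) for i in range(3))

# player 2's loss from c vs d under a tremble around (D, L): p(1 - 4r)
diff = u(0.1, 1, 0.9)[1] - u(0.1, 0, 0.9)[1]
```

Spot checks confirm the two NE families: at (C, c, σ3) with r = 0.2, no player gains by deviating, and at (D, σ2, L) with q = 0.5, D and L remain best responses; meanwhile `diff` is negative, matching the non-perfection argument.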
Proposition 2.1. Every finite game (Si , ui )ni=1 has a proper equilibrium.
Proof. Let m = maxi |Si|. For each given 0 < ε < 1, put δ = ε^m/m, and define
∆δ(Si) = {σi ∈ ∆(Si) : σi(si) ≥ δ, for all si ∈ Si}. It is clear that ∆δ(Si) is
convex and compact (as a closed subset of the compact space ∆(Si )). Now define a
correspondence Fi : ×_{j≠i} ∆δ(Sj) ⇒ ∆δ(Si) as

Fi(σ−i) = {σ̂i ∈ ∆δ(Si) : ui(si, σ−i) < ui(si′, σ−i) ⇒ σ̂i(si) ≤ ε σ̂i(si′)}.
That is, for each given σ−i , whenever player i has two strategies si and s′i such that
si is strictly worse than s′i when others play σ−i , we assign the set of all mixed
strategies σ̂i ∈ ∆δ (Si ) satisfying σ̂i (si ) ≤ εσ̂i (s′i ). Note that Fi (σ−i ) is convex and
compact, as a bounded set determined by linear inequalities.
The choice of δ ensures that Fi(σ−i) is non empty. In fact, let n(si) = |{si′ ∈ Si :
ui(si, σ−i) < ui(si′, σ−i)}| denote the number of strategies that are strictly better
than si when others use σ−i, and consider σ̂i defined by

σ̂i(si) = ε^{n(si)} / Σ_{si′∈Si} ε^{n(si′)}

for each si. Note that σ̂i(si) ≥ δ by construction, and, since n(si) ≥ n(si′) + 1
whenever ui(si, σ−i) < ui(si′, σ−i), we have σ̂i(si) ≤ ε σ̂i(si′), so σ̂i ∈ Fi(σ−i).
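The construction of σ̂i can be made concrete (a sketch; the payoff vector below is an illustrative stand-in for ui(·, σ−i)):

```python
def proper_tremble(payoffs, eps):
    """Weights sigma_hat(si) = eps^{n(si)} / sum_{s'} eps^{n(s')} from the proof,
    where n(si) counts the pure strategies strictly better than si."""
    n = [sum(1 for q in payoffs if q > p) for p in payoffs]
    w = [eps ** k for k in n]
    tot = sum(w)
    return [x / tot for x in w], n

payoffs = [3.0, 1.0, 1.0, 0.0]   # illustrative payoffs against a fixed sigma_{-i}
eps = 0.1
sigma, n = proper_tremble(payoffs, eps)
```

One can check that the resulting weights satisfy the ε-proper inequalities and stay above the bound δ = ε^m/m (here m = 4 strategies).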
Finally, note that Fi has closed graph because ui is continuous: if σ−ik → σ−i,
σ̂ik ∈ Fi(σ−ik) and σ̂ik → σ̂i, then ui(si, σ−i) < ui(si′, σ−i) implies that ui(si, σ−ik) <
ui(si′, σ−ik) for large k, implying that σ̂ik(si) ≤ ε σ̂ik(si′), and a fortiori that σ̂i(si) ≤
ε σ̂i(si′).
Now put F : ×_{i=1}^n ∆δ(Si) ⇒ ×_{i=1}^n ∆δ(Si) as F(σ) = ×_{i=1}^n Fi(σ−i). By Kakutani's
fixed point theorem, for each ε there exists a fixed point σε ∈ F(σε). By definition
σε is ε-proper. Now, passing to a subsequence if necessary, there exists a limit σ∗
of the sequence {σk}, with σk = σε for ε = 1/k, which is a proper equilibrium by
construction.
Nash's theorem covers finite games, and existence extends to infinite games when
strategy spaces are compact and payoffs are continuous. In fact, there are some relevant
games where the payoff discontinuities are so severe that they do not have any NE.
Consider the following version of the classical
Downsian model of electoral competition. The players are the two candidates. Each
picks a policy in the space of feasible policies, P . Assume that P is the shaded
triangle in the figure below. There are three voters with preferences defined over P ,
illustrated by the drawn indifference curves: voter 1 prefers policies in the southeast
direction; voter 2 prefers policies in the southwest direction, and voter 3’s preferences
are the exact opposite of voter 2’s. Given a pair of policies x, y ∈ P , a voter is
assumed to vote for their preferred policy; the candidates care about getting at least
2 votes (majority rule). We then have a zero-sum game, with S1 = S2 = P and
player 1’s payoff given by u(x, y) = 1 if at least two voters prefer x to y, u(x, y) = −1
if at least two voters prefer y to x, and u(x, y) = 0 when x = y, where x (resp. y) is
player 1’s (resp. player 2’s) choice. When majority is not reached, a tie-breaking rule
will determine the winner. In the figure below, the pairs of policies where a majority
is not reached are the pairs (x, y) with both x and y lying on some indifference curve
of some voter. Consider the following way of breaking ties. Whenever two policies
lie on an indifference curve of voter 1, the outcome is decided by a coin flip (so voter
1 sides with either voter 2 or 3 with equal probabilities). Whenever two policies
lie on a line parallel to the [b, a] face of P (including the face itself), let us have
u(x, y) = 1 whenever x is closer to a than y, except when x = a and y = b, in which
case u(x, y) = −1; the idea is that at least one of voters 2 and 3 sides with voter 1
along all such lines, including on the [b, a] face, unless x and y lie on the face and
the distance between x and y is so large that both voters decide to vote for y rather
than x. Observe that the resulting game is symmetric because the tie-breaking rule
is symmetric. As a consequence, u(σ, σ) = 0 because u(x, y) = −u(y, x) for any pair
of policies (x, y).
[Figure: the Downsian game. The shaded triangle is the policy space P, with a and b
two of its vertices on one face; the three voters' indifference curves are drawn through it.]
This game has no NE. To verify, assume that (σ, σ′) is a NE. It must be that
u(x, σ′) ≤ 0 for all x ∈ P, for otherwise we'd have u(σ, σ′) > 0 and player 2 could do
better by switching to σ from σ′ (and thereby obtaining 0 instead of −u(σ, σ′) < 0.)
Now take a sequence xk converging to a along the [b, a] face. We have

lim_k u(xk, σ′) = σ′(P\{a, b}) + σ′({b}) − σ′({a}) ≤ 0,

so we must have σ′(P\{a, b}) = σ′({a}) = 1/2. We then have u(a, σ′) = 1/2 > 0
(because σ′({b}) = 0), but this is a contradiction.
In fact, this game does not even have a value, which implies that it does not
have ε-NE for ε > 0 small enough. To verify, observe that because it is a sym-
metric game, we must have v̄(u) ≥ 0: for each σ2, player 1 can pick σ1 = σ2
and ensure that u1(σ1, σ2) = 0, so sup_{σ1} u(σ1, σ2) ≥ 0 for any σ2, and hence
v̄(u) = inf_{σ2} sup_{σ1} u(σ1, σ2) ≥ 0. Now fix σ1. If σ1({a}) ≤ 11/24, then for a sequence
yk converging to a along the [b, a] face,

lim_k u(σ1, yk) = σ1({a}) − σ1(P\{a}) ≤ −1/12,

so there's yk for k large enough for which u(σ1, yk) < −1/24. If, instead, σ1({a}) > 11/24,
consider two cases: (i) σ1(P\{a, b}) ≤ 7/24 and (ii) σ1(P\{a, b}) > 7/24. For case
(i), u(σ1, b) < −11/24 + 7/24 = −1/6, and for case (ii), u(σ1, a) < −7/24 + 6/24 = −1/24.
Hence, for each σ1 we can find a y for which u(σ1, y) < −1/24, which implies that
v̲(u) = sup_{σ1} inf_{σ2} u(σ1, σ2) ≤ −1/24 < 0 ≤ v̄(u), establishing that the game does not
have a value.
Of course, the specification of preferences and the tie-breaking rule used above
were rather specific. In general, on the other hand, it is known that a NE in pure
strategies is very difficult to obtain: with three voters, Euclidean preferences over
a two-dimensional policy space P, and tie-breaking by coin flips, we will have a
Condorcet winner (a policy that beats every other policy under majority rule) if and
only if it is equal to the ideal point x∗i of one of the voters
and the ideal points of the other two voters lie on a straight line through x∗i and
on opposite sides of it (in such a way that their indifference curves through x∗i are
tangent). This is “Plott’s symmetry condition”, a necessary and sufficient condition
for existence of a Condorcet winner (when no two ideal points coincide). This is
obviously a very demanding condition, so for a general specification of preferences
one expects that a NE in pure strategies does not exist – and the analysis above
shows that under some specifications of preferences and tie-breaking rules, not even
the value exists.
The purpose of presenting this one result is to illustrate the general idea of
using approximations to establish a property of the “limit” (which is the original
object of interest.) We will make use of the Mapping Theorem (or Continuous Map-
ping Theorem), which says that if a sequence of probability measures µk converges
(weak*) to µ and µ assigns zero probability to the set Df of discontinuities of a
real-valued function f, then ∫ f dµk → ∫ f dµ. Consider a game in strategic form
(Si, ui)ni=1 with Si compact metric and ui bounded for each i. For each player i,
assume that ui is continuous everywhere in S except in a “small” measurable subset
Di ⊂ S, as follows: for each pair (si, σ−i), there exists some si′ close to si such
that σ−i(Di(si′)) = 0, where Di(si′) = {s−i : (si′, s−i) ∈ Di}. In addition, assume
that the strategy si′ defined above satisfies ui(si′, σ−i) ≥ ui(si, σ−i). The idea is that
a player can avoid discontinuities without taking a payoff hit. Finally, assume that
Σni=1 ui is upper semicontinuous.
Proposition 3.1. Under the assumptions above, the game (Si , ui )ni=1 has a NE.
Proof. Pick finitely many points in Si for each i and focus on the corresponding
finite game (with utilities restricted to the corresponding finitely many profiles). By
Nash's theorem, this game has a NE. For each i, consider a sequence of finite subsets
Sik ⊂ Si that converges to Si as k → ∞, and the corresponding NE profile σk of the
game restricted to Sk. As ∆(Si) is compact (and metrizable, so every sequence must
have a convergent subsequence) under the weak∗ topology, and by realizing ∆(Sik) as
a subset of ∆(Si) by identifying si ∈ Sik with the probability measure δsi that assigns
probability 1 to the set {si}, the sequence σk must have a convergent subsequence.
Let σ∗ be its limit. Because ui is bounded, the corresponding subsequence ui(σk) has
limit points. Abusing notation, use k as the index of the subsequence σk with limit
σ∗ such that limk→∞ ui(σk) exists for each i. We claim now that limk→∞ ui(σk) =
ui(σ∗) for all i. In fact, if this is not the case, then we have inequality for some
player j; by upper semicontinuity of Σi ui, there must exist a player i (which may
well be player j) with ui(σ∗) > limk→∞ ui(σk); so we must have a pure strategy
si such that ui(si, σ−i∗) > limk→∞ ui(σk). By small Di, there is si′ close to si such
that σ−i∗(Di(si′)) = 0 and ui(si′, σ−i∗) ≥ ui(si, σ−i∗). Because Sik → Si, we can select
sik → si′ with sik ∈ Sik. Hence we have a sequence (sik, σ−ik), which can be viewed
as a sequence µk in ∆(S), with µk = δsik ⊗ σ−ik. We then have µk → µ, where
µ = δsi′ ⊗ σ−i∗. As Di = ∪si {si} × Di(si), µ(Di) = σ−i∗(Di(si′)) = 0. Hence, by
the Mapping Theorem, limk→∞ ui(sik, σ−ik) = ui(si′, σ−i∗). But ui(σk) ≥ ui(sik, σ−ik)
for all k because σk is a NE of the finite game, so limk→∞ ui(σk) ≥ ui(si′, σ−i∗) >
limk→∞ ui(σk), which is not possible, and this establishes our claim. Now if σ∗ was
not a NE, there would exist some player i and a strategy si such that ui(si, σ−i∗) >
ui(σ∗). But then the argument above would generate the same contradiction, as
ui(σ∗) = limk→∞ ui(σk).
Observe that the requirement that s′i be close to si in the definition of “small Di ”
is not necessary; its purpose is simply to have a “useful” set of sufficient conditions
for existence of NE (in a bidding game, for instance, the idea is that one can avoid a
tie by increasing or decreasing their bid slightly). Also remark that the assumptions
of a small set of discontinuities and of Σi ui upper semicontinuous are likely to be met
in economic games, as discontinuities often arise from “ties” or other “knife-edge”
considerations, and to be confined to the relation among the payoffs (so not present
in the sum of payoffs.)
References:
Of course, the classical source for NE is Nash’s work: his dissertation J. Nash
“Equilibrium Points in n-Person Games,” Proceedings of the National Academy of
Sciences, (1950), 36, and J. Nash “Non-Cooperative Games,” Annals of Mathematics
(1951), 54. The idea of using trembles to refine NE dates back to R. Selten “Reex-
amination of the Perfectness Concept for Equilibrium Points in Extensive Games,”
International Journal of Game Theory (1975), 4. Proper Equilibrium is from R.
Myerson “Refinements of the Nash Equilibrium Concept,” International Journal of
Game Theory (1978), 7. Plott’s symmetry condition was introduced by C. Plott
“A Notion of Equilibrium and its Possibility Under Majority Rule,” American Eco-
nomic Review (1967) 57(4). Sion’s Minimax is from M. Sion “On General Minimax
Theorems,” Pacific Journal of Mathematics (1958), 8. The ideas underlying Propo-
sition 3.1 come from P. Dasgupta and E. Maskin “The Existence of Equilibrium in
Discontinuous Economic Games,” Review of Economic Studies (1986), 53, and P.
Barelli, S. Govindan, and R. Wilson “Competition for a Majority,” Econometrica
(2014), 82. A general reference for mathematical concepts (Maximum Theorem,
weak* topology, etc.) is C. Aliprantis and K. Border Infinite Dimensional Analysis:
A Hitchhiker’s Guide, 3rd Edition.