Information Economics

Introduction

Basic microeconomic theory abstracts from uncertainty and information problems:

• Individuals’ preferences are known

• Individuals’ actions are observable

• Costs are common knowledge

• Quality and nature of goods are observable

• ...

In reality, these assumptions often fail, and information problems can make the standard market mechanism collapse (e.g., the 2008 subprime mortgage crisis).

Economic history illustrates that information issues have been known for a
long time in economics (see Laffont and Martimort, 2002, chap. 1):

• Contracts between landowners and farmers in agriculture

• Collective action and the commons: free-rider problem

• Price discrimination: firms offer menus with different price-quality pairs to extract more consumer surplus

• Incentives in planned economies (Soviet Union in the 1950s-1980s)

• Insurance: behavior may affect the probability of an accident (moral hazard problem)

• Redistribution: taxes on labor may reduce work incentives

• ...

In this course:

1. Basic tools for decision under risk and uncertainty, and the value of information (Chap. 1)

2. Mechanisms to deal with uncertainty under symmetric information (Chap. 2)

3. Asymmetric information:

(a) General issues (Chap. 3)
(b) Adverse selection (Chap. 4)
(c) Moral hazard (Chap. 5)
Chapter 1

Decision making under risk and uncertainty

1.1 Pre-requisites
In this section, we recap some of the basic tools of economic modeling.

1.1.1 Preferences and choices


General case
Let $X$ be a set of possible outcomes or options, with $x, y, \dots$ possible options. For instance, $X$ can be a commodity set $X \subset \mathbb{R}^\ell_+$, with $\ell$ the number of commodities (then $x = (x_1, \dots, x_\ell)$, where $x_j$ is the quantity of good $j$ consumed). The set $X$ may have some properties: for instance,
• $X$ is closed if, whenever a sequence of elements of $X$ converges to some point $x$, then $x \in X$ (i.e. the "boundaries" are included in the set);

• $X$ is bounded if there are elements $z, y$ such that $z \leq x \leq y$ for all $x \in X$ (for some definition of $\leq$);

• $X$ is convex if for all $x, y \in X$ and all $\lambda \in [0,1]$, $\lambda x + (1-\lambda) y \in X$ (for some definition of the mixture operation $\lambda x + (1-\lambda) y$).
Example: a budget set. Assume that there are $\ell$ goods and the price vector is $p = (p_1, \dots, p_\ell)$. The initial endowments of an individual are given by the vector $\omega = (\omega_1, \dots, \omega_\ell)$. The budget set defines all the possible consumption plans for the individual, assuming that she can only consume positive amounts: $X = \{x \in \mathbb{R}^\ell_+ \mid \sum_{k=1}^{\ell} p_k x_k \leq \sum_{k=1}^{\ell} p_k \omega_k\}$. The set $X$ is closed: the budget line is the boundary, included in the set. Denote $\Omega = \sum_{k=1}^{\ell} p_k \omega_k$: the set $X$ is bounded because for all possible consumption plans $x$, we have $0 \leq x \leq (\Omega/p_1, \dots, \Omega/p_\ell)$. Denote $\lambda x + (1-\lambda) y$ the consumption plan $z$ such that $z_k = \lambda x_k + (1-\lambda) y_k$ for each $k = 1, \dots, \ell$: $\lambda x + (1-\lambda) y \in X$, so that $X$ is convex. (See Figure 1.1 for an illustration: $X$ is the red zone.)
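As a quick numerical illustration (a sketch, not part of the original notes; the prices, endowments and bundles are made up), one can check budget-set membership and the convexity property directly:

```python
import numpy as np

p = np.array([2.0, 3.0])          # prices (assumed for the example)
omega = np.array([4.0, 2.0])      # initial endowments
Omega = p @ omega                 # total wealth: sum_k p_k * omega_k

def in_budget_set(x):
    """A bundle is affordable if x >= 0 and p.x <= p.omega."""
    return bool(np.all(x >= 0) and p @ x <= Omega)

x = np.array([3.0, 2.0])          # costs 12 <= 14: affordable
y = np.array([1.0, 4.0])          # costs 14: exactly on the budget line
lam = 0.3
z = lam * x + (1 - lam) * y       # a mixture of two affordable bundles
print(in_budget_set(x), in_budget_set(y), in_budget_set(z))   # True True True
```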


Figure 1.1: Choice from a budget set

[Figure: the budget set $X$ (red zone) in the $(x_1, x_2)$ plane, with intercepts $\Omega/p_1$ and $\Omega/p_2$, two bundles $x$ and $y$ and their mixture $\lambda x + (1-\lambda)y$, indifference curves $I$ and $I'$, and the optimal bundle $x^\star$.]

Note: The red zone is the budget set $X$. The two blue curves are indifference curves. The consumption bundle $x^\star$ (in green) maximizes utility over the budget set.

Preferences over the set $X$ are modeled as a binary relation denoted $\succsim$. Notation $x \succsim y$ means that $x$ is (weakly) preferred to $y$; notation $x \succ y$ means that $x$ is strictly preferred to $y$ (that is: $x \succsim y$ is true, but $y \succsim x$ is not); notation $x \sim y$ means that $x$ and $y$ are indifferent (that is: $x \succsim y$ and $y \succsim x$). Preference relations may satisfy several properties:

• $\succsim$ is transitive if, whenever $x \succsim y$ and $y \succsim z$, then $x \succsim z$;

• $\succsim$ is reflexive if $x \succsim x$ for all $x \in X$;

• $\succsim$ is complete if, for all $x, y \in X$, either $x \succsim y$, or $y \succsim x$, or both;

• $\succsim$ is continuous if, for any $x \in X$, the upper-contour set $U(x) = \{y \in X \mid y \succsim x\}$ and the lower-contour set $L(x) = \{y \in X \mid x \succsim y\}$ are closed (cf. Figure 1.1: the upper-contour set of $x^\star$ is the area above the blue indifference curve $I$ through $x^\star$; the lower-contour set of $x^\star$ is the area below $I$);

• $\succsim$ is convex if, for any $x \in X$, the upper-contour set $U(x) = \{y \in X \mid y \succsim x\}$ is convex (cf. Figure 1.1).

A preference relation is said to be a quasi-ordering (or a pre-order) if it is transitive and reflexive. A preference relation is said to be a weak order (or a total pre-order) if it is transitive and complete. Usually, economic theory models preferences as weak orders.

A utility function is a function $u : X \to \mathbb{R}$. It is said to represent a preference relation $\succsim$ (which is a weak order) if, for all $x, y \in X$, $x \succsim y$ if and only if $u(x) \geq u(y)$. We have the following representation theorem:

Theorem 1. A preference relation that is transitive, complete and continuous can be represented by a utility function.

Given this theorem, it is common to model preferences by a utility function directly, rather than using a preference relation.

The main behavioral assumption made in economic modeling is that individuals (or firms) make decisions that maximize their utility, or preference satisfaction, over a set of feasible alternatives. This implies that (observable) choices reflect preferences (this is the idea behind revealed preference theory).
For example, Figure 1.1 represents how a consumer may choose a consumption plan (i.e. the level of consumption of each good) given her budget constraint. There are two goods, $x_1$ and $x_2$. The set $X$ is the budget set, the set of all consumption bundles the consumer can afford (i.e. such that $p_1 x_1 + p_2 x_2 \leq p_1 \omega_1 + p_2 \omega_2$, where $\omega_1$ and $\omega_2$ are the initial endowments that the consumer can trade). Preferences can be represented by indifference curves that connect all bundles that are equivalent for the consumer: two examples of such curves are the blue curves $I$ and $I'$ in the figure. Assuming that preferences are monotonic (more is better than less), the bundles on curve $I$ are preferred to the bundles on curve $I'$. To maximize her utility, the consumer will choose the bundle on the highest indifference curve attainable within the set $X$. In the picture, there is a unique such optimal bundle $x^\star$, which is such that the indifference curve is tangent to the budget line at $x^\star$.

Risk: Von Neumann and Morgenstern (VNM) theory of expected utility


VNM (1947) proposed a theoretical foundation for the expected utility model:
if you accept some principles of rationality, you should aim at maximizing an
expected utility.
A set $X$ of possible outcomes (example: $X = \{\text{bad bread, fair bread, good bread}\}$). Denote $\Delta(X)$ the set of finite lotteries over $X$. Formally:
\[
\Delta(X) = \Big\{ p : X \to [0,1] \;\Big|\; \sum_{x \in X} p(x) = 1;\ p(x) > 0 \text{ for finitely many } x \in X \Big\}.
\]
Function $p$ is a probability measure that assigns to each outcome a probability of realization.

Figure 1.2: Machina’s triangle

[Figure: the probability simplex drawn in the $(p_1, p_2)$ plane, with lotteries $p$ and $q$ and the mixture $\lambda p + (1-\lambda)q$ on the segment joining them.]

Note: $p$ and $q$ are two lotteries; $\lambda p + (1-\lambda)q$ is a mixture of $p$ and $q$, where $\lambda \in (0,1)$.

A simple representation in the case where $|X| = 3$: Machina's triangle (see Fig. 1.2). In that case $\Delta(X)$ is the simplex $\Delta(X) = \{(p_1, p_2, p_3) \in [0,1]^3 \mid p_1 + p_2 + p_3 = 1\}$.
$\Delta(X)$ is a mixture space. Define the mixture operation on $\Delta(X)$ in the following way: for any two lotteries $p$ and $q$, and for any $\lambda \in [0,1]$, $r = \lambda p + (1-\lambda)q$ is the lottery such that $r(x) = \lambda p(x) + (1-\lambda)q(x)$ for any $x \in X$. The lottery $r$ is a ($\lambda$-)mixture of $p$ and $q$: it is like playing lottery $p$ with probability $\lambda$ and playing lottery $q$ with probability $1-\lambda$. One also says that $r$ is a compound lottery.
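A minimal Python sketch of the mixture operation, with lotteries represented as dictionaries mapping outcomes to probabilities (the outcome labels are made up):

```python
def mix(lam, p, q):
    """Return the compound lottery lam*p + (1 - lam)*q."""
    support = set(p) | set(q)
    return {x: lam * p.get(x, 0.0) + (1 - lam) * q.get(x, 0.0) for x in support}

p = {"bad": 0.5, "good": 0.5}
q = {"fair": 1.0}
r = mix(0.25, p, q)     # play p with probability 0.25, q with probability 0.75
print(r)                # e.g. {'bad': 0.125, 'good': 0.125, 'fair': 0.75}
assert abs(sum(r.values()) - 1.0) < 1e-12   # r is again a lottery
```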
To model how we choose a lottery (among a set of possible risky actions), we assume that we have a preference ordering $\succsim$ on $\Delta(X)$. For any two lotteries $p$ and $q$, $p \succsim q$ means that $p$ is at least as good as $q$ (thus that we want to choose $p$ when the choice is restricted to those two lotteries); $p \succ q$ means that $p$ is strictly better than $q$, and $p \sim q$ means that $p$ and $q$ are indifferent (equivalent).
We may want $\succsim$ to satisfy the following three properties:

Ordering. The relation $\succsim$ is:
1. transitive: for all $p, q, r \in \Delta(X)$, if $p \succsim q$ and $q \succsim r$ then $p \succsim r$;
2. complete: for all $p, q \in \Delta(X)$, either $p \succsim q$ or $q \succsim p$ (or both).

Continuity. For all $p, q, r \in \Delta(X)$, if $p \succsim q \succsim r$, then there exists $\alpha \in [0,1]$ such that $q \sim \alpha p + (1-\alpha) r$.

VNM-Independence. For all $p, q, r \in \Delta(X)$ and $\lambda \in (0,1]$:

1. $p \succ q$ if and only if $\lambda p + (1-\lambda) r \succ \lambda q + (1-\lambda) r$.
2. $p \sim q$ if and only if $\lambda p + (1-\lambda) r \sim \lambda q + (1-\lambda) r$.

VNM-Independence is the key (and most controversial) property. It can be expressed in terms of compound lotteries: if both $p$ and $q$ are compounded with $r$, then our choice is independent of the complement $r$, which is the same in the two situations (see Fig. 1.3).

Figure 1.3: VNM-Independence

[Figure: two compound lotteries $\lambda p + (1-\lambda)r$ and $\lambda q + (1-\lambda)r$, drawn as trees with the common branch $r$ of probability $1-\lambda$; the comparison reduces to $p \succsim q$.]

Note: We should make the same choice on the left and on the right, because on the left we only
add the possibility of an alternative which is the same in the two lotteries, and thus is not relevant.

The fundamental result of VNM is the following theorem:

Theorem 2. The relation $\succsim$ satisfies Ordering, Continuity and VNM-Independence if and only if there exists a function $u : X \to \mathbb{R}$ such that:
\[
p \succsim q \iff \sum_{x \in X} p(x) u(x) \geq \sum_{x \in X} q(x) u(x).
\]

Proof. (Sketch for three elements.) For the case where $X = \{x_1, x_2, x_3\}$, any element of $\Delta(X)$ has the form $(p, q, 1-p-q)$.
Assume (without loss of generality) that $(0, 0, 1) \succsim (0, 1, 0) \succsim (1, 0, 0)$, that is, $x_1$ is the worst option, $x_2$ the second best, $x_3$ the best. By Ordering and Continuity, there exists a function $U$ that represents $\succsim$. We normalize it so that $U(1-\theta, 0, \theta) = \theta$, i.e. $u(x_1) = U(1, 0, 0) = 0$ and $u(x_3) = U(0, 0, 1) = 1$.
By Continuity, there exists $\alpha \in (0, 1)$ such that $(0, 1, 0) \sim (1-\alpha)(1, 0, 0) + \alpha(0, 0, 1) = (1-\alpha, 0, \alpha)$. Hence, $u(x_2) = U(0, 1, 0) = \alpha$.
Now consider any $p = (p_1, p_2, p_3) \in \Delta(X)$. Assume that $(0, 1, 0) \succsim p \succsim (1, 0, 0)$.¹ By Continuity, there exists $\lambda \in (0, 1)$ such that $(p_1, p_2, p_3) \sim \lambda(1, 0, 0) + (1-\lambda)(0, 1, 0) = (\lambda, 1-\lambda, 0)$. But given that $(0, 1, 0) \sim (1-\alpha, 0, \alpha)$, we also have $(p_1, p_2, p_3) \sim \lambda(1, 0, 0) + (1-\lambda)(1-\alpha, 0, \alpha) = \big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big)$, by VNM-Independence.
Both $(\lambda, 1-\lambda, 0)$ and $\big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big)$ are on the edges of Machina's triangle and indifferent to $p$. By VNM-Independence, indifference curves are lines in Machina's triangle, hence there exists $\mu \in [0, 1]$ such that (see Fig. 1.4):
\[
(p_1, p_2, p_3) = (1-\mu)(\lambda, 1-\lambda, 0) + \mu\big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big).
\]

¹ The case $(0, 0, 1) \succsim p \succsim (0, 1, 0)$ can be treated similarly.

Figure 1.4: Machina’s triangle for the proof of the VNM Theorem

[Figure: Machina's triangle with the lottery $p$ on the segment joining the points $a$ and $b$ on the edges of the triangle.]

Note: We denote $p$ the lottery $(p_1, p_2, p_3)$. The point $a$ is $a = \big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big)$; the point $b$ is $b = (\lambda, 1-\lambda, 0)$.

Identifying the terms, we have
\[
p_3 = 1 - p_1 - p_2 = \mu(1-\lambda)\alpha, \qquad p_2 = (1-\mu)(1-\lambda),
\]
so that $p_2 \alpha + p_3 = (1-\lambda)\alpha$. Hence,
\begin{align*}
U(p_1, p_2, p_3) &= U\big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big) && \text{because } (p_1, p_2, p_3) \sim \big(1 - (1-\lambda)\alpha, 0, (1-\lambda)\alpha\big)\\
&= (1-\lambda)\alpha && \text{by our normalization}\\
&= p_2 \alpha + p_3 && \text{by identification of the terms}\\
&= p_1 u(x_1) + p_2 u(x_2) + p_3 u(x_3).
\end{align*}

1.1.2 Equilibrium
In a simple pure exchange economy, the market equilibrium results from individuals' maximizing decisions and from market clearing conditions.
There is a finite number of commodities, labeled by the subscript $k = 1, \dots, \ell$, and a finite number of consumers, labeled by the superscript $i = 1, \dots, n$. Let $x^i_k$ be the consumption of good $k$ by individual $i$ and $x^i = (x^i_1, \dots, x^i_\ell)$ a consumption bundle for individual $i$. The initial endowments are given by vectors $\omega^i = (\omega^i_1, \dots, \omega^i_\ell)$ for each individual $i$ (where $\omega^i_k$ is the endowment of $i$ in good $k$), and we denote $\omega_k = \sum_{i=1}^n \omega^i_k$ the total endowment in good $k$ in the economy.
An allocation is feasible if, for each good $k$, $\sum_{i=1}^n x^i_k \leq \omega_k$. A market equilibrium will give equilibrium prices $(p_1, \dots, p_\ell)$ and a feasible equilibrium allocation $(\hat{x}^1, \dots, \hat{x}^n)$ that should satisfy two conditions:

• the allocated bundle for each individual should be optimal given her budget. That means that for each $i$, $\hat{x}^i$ is a solution to the problem:
\[
\max_{x^i} u^i(x^i) \quad \text{s.t.} \quad \sum_{k=1}^{\ell} p_k x^i_k \leq \sum_{k=1}^{\ell} p_k \omega^i_k;
\]

• markets clear: for each $k = 1, \dots, \ell$, $\sum_{i=1}^n \hat{x}^i_k = \omega_k$.

The equilibrium can be depicted in a simple way in the case with two goods and two individuals using the Edgeworth box (Figure 1.5). The box describes all allocations satisfying the market clearing conditions. Looking from the bottom left, there are two axes: the horizontal axis gives the consumption of good 1 by individual 1, and the vertical axis gives the consumption of good 2 by individual 1. The maximal amounts that individual 1 could consume are $\omega_1$ for good 1 and $\omega_2$ for good 2 (that is, the total endowments). But the market clearing conditions imply that we should have $\hat{x}^2_1 = \omega_1 - \hat{x}^1_1$ and $\hat{x}^2_2 = \omega_2 - \hat{x}^1_2$, so that, looking from the top right, we can draw "reversed" axes that give the consumption of goods 1 and 2 by individual 2.
Preferences can be represented in the Edgeworth box by drawing indifference curves. Figure 1.5 shows examples of such curves for individual 1 (blue curves) and individual 2 (red curves), as well as the direction in which utility is increasing (blue and red arrows).
The market equilibrium will be given by equilibrium prices and an equilib-
rium allocation x̂. At this equilibrium, the two individuals maximize their utility

Figure 1.5: Edgeworth box: market equilibrium

[Figure: Edgeworth box with origins $O_1$ (bottom left) and $O_2$ (top right), the equilibrium allocation $\hat{x} = (\hat{x}^1_1, \hat{x}^1_2; \hat{x}^2_1, \hat{x}^2_2)$, and the budget line of slope $-p_1/p_2$.]

Note: The green line is the budget line for the two individuals: individual 1 can choose an allocation below the line (relative to origin $O_1$); individual 2 can choose an allocation below the line (relative to origin $O_2$). The blue curves are the indifference curves for individual 1 (allocations are better when moving to the north-east). The red curves are the indifference curves for individual 2 (allocations are better when moving to the south-west). At the equilibrium $\hat{x}$, the indifference curves of the two individuals are tangent and their slope is (minus) the ratio of equilibrium prices ($-p_1/p_2$).

given their budget constraint, defined by the equilibrium prices and the initial endowments $\omega$ (the budget line, the green line in Figure 1.5, is the line going through the initial allocation whose slope is the opposite of the ratio of equilibrium prices). This implies that, at an equilibrium, the ratio of marginal utilities of the goods is equal to the ratio of their prices.
Note that the market equilibrium allocation is Pareto efficient: if we move from the equilibrium allocation in any direction, at least one individual will be worse off. This corresponds to the First Welfare Theorem: any market equilibrium allocation is Pareto efficient. There is also a Second Welfare Theorem: any Pareto efficient allocation can be obtained as a market equilibrium allocation if we redistribute the initial endowments $\omega$ in an appropriate way.
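To make this concrete, here is a minimal Python sketch (not part of the original notes; the Cobb-Douglas preferences $u^i(x_1, x_2) = a_i \ln x_1 + (1-a_i) \ln x_2$ and the endowments are assumed for the example) that computes the equilibrium price ratio and allocation of a two-good, two-consumer exchange economy:

```python
# Two-consumer, two-good exchange economy with Cobb-Douglas utilities
# u_i(x1, x2) = a_i*ln(x1) + (1 - a_i)*ln(x2).  All parameters are assumed.
a = [0.6, 0.3]                       # taste for good 1 of individuals 1, 2
omega = [(1.0, 2.0), (3.0, 1.0)]     # endowments (good 1, good 2)

# Normalize p2 = 1.  Cobb-Douglas demand for good 1 is a_i*wealth_i/p1, so
# clearing the market for good 1 pins down the relative price p1:
p1 = sum(ai * w[1] for ai, w in zip(a, omega)) / \
     sum((1 - ai) * w[0] for ai, w in zip(a, omega))

for i, (ai, w) in enumerate(zip(a, omega), start=1):
    wealth = p1 * w[0] + w[1]                        # value of endowment
    x1, x2 = ai * wealth / p1, (1 - ai) * wealth     # Cobb-Douglas demands
    print(f"individual {i}: x1 = {x1:.3f}, x2 = {x2:.3f}")
print(f"p1/p2 = {p1:.3f}")   # here 0.600; demands sum to total endowments
```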

1.2 Two models of expected utility


Decisions under risk are very common. Examples:

• Deciding to go to a new bakery. Three possible quality levels: bad, fair and good. How would you decide?

• Saint Petersburg paradox: see tutorials.

• Bernoulli proposed the expected utility model (1738).

It is common practice in economics (since Knight, 1921) to make a distinction between risk (when we know the probabilities) and uncertainty (when we have to assign probabilities to the different states of the world).

1.2.1 Objective probability: risk


The model here is the expected utility model of VNM (see the pre-requisites).
Denote $\Delta(X)$ the set of finite lotteries over $X$. Formally:
\[
\Delta(X) = \Big\{ p : X \to [0,1] \;\Big|\; \sum_{x \in X} p(x) = 1;\ p(x) > 0 \text{ for finitely many } x \in X \Big\}.
\]
Function $p$ is a probability measure that assigns to each outcome a probability of realization.
A preference ordering $\succsim$ on $\Delta(X)$ satisfies the three properties Ordering, Continuity and VNM-Independence if and only if there exists a function $u : X \to \mathbb{R}$ such that:
\[
p \succsim q \iff \sum_{x \in X} p(x) u(x) \geq \sum_{x \in X} q(x) u(x).
\]
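As an illustrative sketch of the representation (the outcomes, utilities and lotteries below are made up), comparing lotteries reduces to comparing probability-weighted sums of utilities:

```python
u = {"bad": 0.0, "fair": 0.6, "good": 1.0}     # a VNM utility over outcomes (assumed)

def expected_utility(p):
    """E_p[u] = sum_x p(x) u(x) for a finite lottery p."""
    return sum(prob * u[x] for x, prob in p.items())

p = {"bad": 0.2, "good": 0.8}
q = {"fair": 1.0}
# p is (weakly) preferred to q iff E_p[u] >= E_q[u]
print(expected_utility(p), expected_utility(q))   # 0.8 vs 0.6: p is chosen
```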

1.2.2 Subjective expected utility: uncertainty


Savage (1954) provided a characterization of expected utility in the case of uncer-
tainty (known as subjective expected utility).
Savage starts from a state space $S$ (a set of "states of the world", i.e. complete descriptions of what is the case). He then studies (simple) acts, that is, mappings $f : S \to X$, where $X$ is again the set of outcomes. An act describes the consequences of a decision in each state of the world: if I take action $f$, then in state $s$, I will obtain $f(s) \in X$.
Example: Bakery with three states $s_1$ = 'bread is good', $s_2$ = 'bread is OK', $s_3$ = 'bread is bad'. Then the act $f$ = 'buy bread in the bakery' gives $f(s_1)$ = 'I am very happy', $f(s_2)$ = 'I am a little disappointed' and $f(s_3)$ = 'I am very angry (and possibly sick)'.

Savage (1954) imposed properties implying that (i) there exist state-independent contingent preferences (i.e. the value of 'I am very happy' is the same in all states of the world); (ii) there exist well-defined beliefs. But the main property is the sure-thing principle: if I compare two acts that have the same consequences in some states of the world, the choice between them does not depend on what happens in those states (i.e. if I am sure something will happen in some states whatever my action, I do not consider what happens in those states).
Using these properties (and other technical conditions), Savage (1954) showed that we should choose between acts by relying on the subjective expected utility model: there exists a probability measure $p$ and a utility function $u$ such that:
\[
f \succsim g \iff \int_S u\big(f(s)\big)\, dp(s) \geq \int_S u\big(g(s)\big)\, dp(s).
\]

1.3 Beyond expected utility


There is experimental evidence that the (subjective) expected utility model is often violated:
1. The Allais paradox: VNM independence is violated (see tutorial).
2. The Ellsberg paradox. There is an urn with 30 red balls and 60 blue or black balls (the proportion is unknown). You are asked to make two bets:
• Bet 1: choose one of the two gambles:
– A: Get $100 if you draw a red ball.
– B: Get $100 if you draw a blue ball.
• Bet 2: choose one of the two gambles:
– C: Get $100 if you draw a red or black ball.
– D: Get $100 if you draw a blue or black ball.
Usually people choose A over B and D over C, but this is inconsistent: it means that $p(\text{blue}) < p(\text{red})$ but $p(\text{red or black}) < p(\text{blue or black})$, which no single probability measure can satisfy (subtracting $p(\text{black})$ from both sides of the second inequality gives $p(\text{red}) < p(\text{blue})$).
This is what people call "uncertainty aversion".

Recent research in economics has developed alternative models of choice under uncertainty (not discussed in this course, but you should be aware that many results here depend on expected utility):

1. Quiggin's (1982) anticipated utility (for lotteries):
\[
p \succsim q \iff \sum_{x \in X} \pi(p, x) u(x) \geq \sum_{x \in X} \pi(q, x) u(x),
\]
where (rank-dependent model) the decision weights are
\[
\pi(p, x) = \varphi\Big(\sum_{y : u(y) \leq u(x)} p(y)\Big) - \varphi\Big(\sum_{y : u(y) < u(x)} p(y)\Big),
\]
for an increasing probability-weighting function $\varphi : [0,1] \to [0,1]$.

2. The subjective maximin model (Gilboa and Schmeidler, 1989):
\[
f \succsim g \iff \min_{p \in P} \int_S u\big(f(s)\big)\, dp(s) \geq \min_{p \in P} \int_S u\big(g(s)\big)\, dp(s),
\]
where $P$ is a set of probability measures (priors).

3. The Hurwicz criterion (1951):
\[
f \succsim g \iff \alpha \min_{p \in P} \int_S u\big(f(s)\big)\, dp(s) + (1-\alpha) \max_{p \in P} \int_S u\big(f(s)\big)\, dp(s) \;\geq\; \alpha \min_{p \in P} \int_S u\big(g(s)\big)\, dp(s) + (1-\alpha) \max_{p \in P} \int_S u\big(g(s)\big)\, dp(s).
\]

1.4 Risk aversion and risk measures


Let $\tilde{x}$ be a real-valued random variable. The expected utility of $\tilde{x}$ is $E[u(\tilde{x})] = \int_S u\big(\tilde{x}(s)\big)\, dp(s)$. Usually we take $u$ concave to express risk aversion: we prefer the sure value $E[\tilde{x}]$ to facing the risk on income $\tilde{x}$.
This is because strict concavity implies $u\big(\lambda x_1 + (1-\lambda) x_2\big) > \lambda u(x_1) + (1-\lambda) u(x_2)$. Hence, by Jensen's inequality, there is a gap between $u\big(E[\tilde{x}]\big)$ and $E[u(\tilde{x})]$: $u\big(E[\tilde{x}]\big) \geq E[u(\tilde{x})]$. The extent of this difference measures how averse to risk people are.

Definition 1. The risk premium $\rho_u(\tilde{x})$ is the real number such that:
\[
u\big(E[\tilde{x}] - \rho_u(\tilde{x})\big) = E[u(\tilde{x})].
\]

For a utility function $u$, we often use two indices to measure risk aversion:

Definition 2. For a utility function $u$:

1. The index of absolute risk aversion at $x \in \mathbb{R}$ is
\[
R^a_u(x) = -\frac{u''(x)}{u'(x)}.
\]

2. The index of relative risk aversion at $x \in \mathbb{R}$ is
\[
R^r_u(x) = -\frac{x\, u''(x)}{u'(x)}.
\]

Figure 1.6: Concave utility function and the risk premium

[Figure: a concave utility function $u$; the chord between $(x_1, u(x_1))$ and $(x_2, u(x_2))$ lies below the curve, so $E[u(\tilde{x})] < u\big(E[\tilde{x}]\big)$, and the horizontal gap at height $E[u(\tilde{x})]$ is the risk premium $\rho_u(\tilde{x})$.]

Note: The decision maker faces a lottery giving $x_1$ with probability $p$ and $x_2$ with probability $1-p$. $E[\tilde{x}] = p x_1 + (1-p) x_2$ and $E[u(\tilde{x})] = p u(x_1) + (1-p) u(x_2)$. The quantity $\rho_u(\tilde{x})$ is the risk premium: how much of the expected income the individual is ready to give up to obtain a sure income level.

There is a relation between the risk premium and the index of absolute risk aversion, given by the Arrow-Pratt approximation. Consider a small white noise (random variable with mean 0) $\tilde{\varepsilon}$ and consider the random variable $\tilde{x} = x + \tilde{\varepsilon}$. Assume also that $u$ is twice differentiable. Then (Taylor expansion):
\[
u(x + \tilde{\varepsilon}) \approx u(x) + \tilde{\varepsilon}\, u'(x) + \frac{\tilde{\varepsilon}^2}{2} u''(x),
\]
so that
\[
E[u(\tilde{x})] \approx u(x) + \frac{\sigma_{\tilde{\varepsilon}}^2}{2} u''(x),
\]
where $\sigma_{\tilde{\varepsilon}}^2 = E[\tilde{\varepsilon}^2]$ is the variance of $\tilde{\varepsilon}$. Similarly,
\[
u\big(E[\tilde{x}] - \rho_u(\tilde{x})\big) \approx u(x) - \rho_u(\tilde{x})\, u'(x).
\]
By definition of the risk premium:
\[
\rho_u(\tilde{x}) = -\frac{u''(x)}{u'(x)} \times \frac{\sigma_{\tilde{\varepsilon}}^2}{2} = R^a_u(x) \times \frac{\sigma_{\tilde{\varepsilon}}^2}{2}.
\]

A similar relation exists for relative risk aversion when considering a small multiplicative risk.
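A quick numerical sanity check of the Arrow-Pratt approximation (a sketch; the log utility and the size of the noise are assumed): for $u(x) = \ln x$ we have $R^a_u(x) = 1/x$, so the exact premium should be close to $R^a_u(x)\, \sigma^2/2$ when the noise is small.

```python
import math

# u(x) = ln(x), so R_a(x) = 1/x; small symmetric noise eps = +/- h with prob 1/2.
x, h = 10.0, 0.5
Eu = 0.5 * math.log(x + h) + 0.5 * math.log(x - h)   # E[u(x + eps)]
rho_exact = x - math.exp(Eu)          # solves u(E[x~] - rho) = E[u(x~)], E[x~] = x
var = h ** 2                          # Var(eps) for the +/- h noise
rho_approx = (1.0 / x) * var / 2.0    # R_a(x) * sigma^2 / 2
print(rho_exact, rho_approx)          # both close to 0.0125
```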
We also have the following theorem.

Theorem 3 (Pratt, 1964). Let $u_1$ and $u_2$ be two utility functions (twice differentiable, increasing and concave). Then the following three statements are equivalent:
1. $R^a_{u_1}(x) \geq R^a_{u_2}(x)$ for all $x$.
2. $\rho_{u_1}(\tilde{x}) \geq \rho_{u_2}(\tilde{x})$ for all $\tilde{x}$.
3. There exists a twice differentiable, increasing and concave function $\phi$ such that $u_1 = \phi \circ u_2$.
Proof. See the tutorial.

Classical utility functions:

1. CARA utility functions: $u(x) = -e^{-\alpha x}$, so that $R^a_u(x) = \alpha$ for all $x \in \mathbb{R}$.

2. CRRA utility functions: $u(x) = \frac{x^{1-\gamma}}{1-\gamma}$, so that $R^r_u(x) = \gamma$ for all $x$.

3. HARA utility functions: $u(x) = \frac{1}{1-\gamma}\Big(\eta + \frac{x}{\gamma}\Big)^{1-\gamma}$, so that $R^a_u(x) = \Big(\eta + \frac{x}{\gamma}\Big)^{-1}$ for all $x$ such that $\eta + x/\gamma > 0$.
This utility function is DARA when $\gamma > 0$, reduces to CRRA when $\eta \to 0$, and tends to CARA (with $R^a_u = 1/\eta$) when $\gamma \to +\infty$.
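As a small numerical check (parameter values assumed; the finite-difference helper is mine), one can verify that CARA utility has a constant index of absolute risk aversion and CRRA utility a constant index of relative risk aversion:

```python
import math

def R_abs(u, x, h=1e-4):
    """Finite-difference estimate of -u''(x)/u'(x)."""
    up = (u(x + h) - u(x - h)) / (2 * h)
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    return -upp / up

alpha, gamma = 2.0, 3.0
cara = lambda x: -math.exp(-alpha * x)           # R_a(x) = alpha
crra = lambda x: x ** (1 - gamma) / (1 - gamma)  # R_r(x) = x * R_a(x) = gamma

for x in (0.5, 1.0, 2.0):
    print(R_abs(cara, x), x * R_abs(crra, x))    # ~2.0 and ~3.0 at every x
```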

1.5 Information and information structure


To describe the role and value of information in economics, let us first describe what is meant by information and by more information.
Let $S$ be a set of states of the world (as in Savage's framework). To simplify, let us assume that $S$ is finite (everything extends to the more general case). We assume that we have an a priori probability $p$ on $S$, and denote $p(s)$ the probability of state $s$ (and more generally $p(E)$ the probability of an event $E$).
An information structure describes what we will learn about the true state of the world. Let $M = \{m_1, \dots, m_K\}$ be a set of messages (signals) that we can receive. The information structure describes the probability of getting a message conditional on some state being the case: if the true state is $s$, you will receive message $m$ with probability $\Pi^m_s$. The information structure is then completely described by an $S \times M$ matrix $\Pi = (\Pi^m_s)_{s \in S, m \in M}$ (where the $s$ index the rows and the $m$ the columns of the matrix). See Fig. 1.7.
A simpler case is that of a noiseless information structure, where you can receive only one message conditional on being in a given state. The information structure then describes a partition $(E_1, \dots, E_K)$ of $S$ (see Fig. 1.8).
Information (through signals) can help make better decisions. Let $A$ be a set of actions. If you do not have information, you will choose action $a^*$ such that
\[
a^* = \arg\max_{a \in A} \sum_{s \in S} p(s)\, u(a(s)),
\]
with $u(a(s))$ the utility when action $a$ is taken in state $s$.

Figure 1.7: A noisy information structure

[Figure: states $s_1, s_2, s_3$ on the left, messages $m_1, m_2, m_3$ on the right, with arrows weighted by the probabilities $\Pi^m_s$.]

Note: $\Pi^m_s$ is the probability of receiving message $m$ when the true state is $s$.



For an information structure $(\Pi, M)$, you can update your probabilities to make better decisions. Given that you know that you can have access to messages, you will wait to get message $m$ before making your decision. The probability of receiving this message is
\[
q(m) = \sum_{s' \in S} p(s') \Pi^m_{s'}.
\]
Upon receiving $m \in M$, you can update your beliefs according to Bayes' rule:
\[
p(s \mid m) = \frac{p(s) \Pi^m_s}{q(m)} = \frac{p(s) \Pi^m_s}{\sum_{s' \in S} p(s') \Pi^m_{s'}}.
\]

When you know $m$, you can choose $a^*_m$ such that:
\[
a^*_m = \arg\max_{a \in A} \sum_{s \in S} p(s \mid m)\, u(a(s)).
\]

Ex ante (before receiving the message), the expected utility is:
\[
U(\Pi, M, p, u) = \sum_{m \in M} q(m) \underbrace{\sum_{s \in S} p(s \mid m)\, u\big(a^*_m(s)\big)}_{V(m)},
\]
where $V(m)$ is the indirect utility after receiving $m$.
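Putting these formulas together, here is a minimal Python sketch (the prior, utilities and matrix $\Pi$ are assumed; they anticipate Case 2 of the bakery example below with $\pi = \theta = 0.3$) that computes $q(m)$, the posteriors and the ex-ante expected utility:

```python
import numpy as np

p = np.array([0.3, 0.3, 0.4])            # prior p(s) over s1, s2, s3 (assumed)
Pi = np.array([[1.0, 0.0],               # Pi[s, m]: proba of message m in state s
               [1.0, 0.0],
               [0.0, 1.0]])
U = np.array([[-2.0, 0.0],               # U[s, a] = u(a, s); a1 = 'buy', a2 = 'do not buy'
              [ 1.0, 0.0],
              [ 2.0, 0.0]])

q = p @ Pi                               # q(m) = sum_s p(s) Pi[s, m]
ex_ante = 0.0
for m in range(Pi.shape[1]):
    post = p * Pi[:, m] / q[m]           # Bayes' rule: p(s | m)
    ex_ante += q[m] * max(post @ U[:, a] for a in range(U.shape[1]))   # q(m) V(m)
no_info = max(p @ U[:, a] for a in range(U.shape[1]))
print(ex_ante, no_info)                  # 0.8 >= 0.5: information helps
```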



Figure 1.8: Noiseless information structures

[Figure: two noiseless information structures. Panel a): a perfectly informative structure, where each state sends its own message. Panel b): an imperfectly informative structure, where states $s_2$ and $s_3$ send the same message $m_1$.]

Note: Information structure a) is perfectly informative: when receiving a message $m_k$, we know exactly the state $s_j$:
\[
\Pi = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.
\]
Information structure b) is less informative: when receiving message $m_1$, we could be either in state $s_2$ or in state $s_3$:
\[
\Pi = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 1 & 0 \end{pmatrix}.
\]

Obviously, $V(m) \geq \sum_{s \in S} p(s \mid m)\, u(a(s))$ for any $a \in A$, so that:
\[
U(\Pi, M, p, u) \geq \sum_{m \in M} q(m) \Big( \sum_{s \in S} p(s \mid m)\, u(a(s)) \Big) = \sum_{s \in S} \underbrace{\Big( \sum_{m \in M} q(m)\, p(s \mid m) \Big)}_{p(s)} u(a(s)) = \sum_{s \in S} p(s)\, u(a(s))
\]
for any $a \in A$. This implies that $U(\Pi, M, p, u) \geq \max_{a \in A} \sum_{s \in S} p(s)\, u(a(s))$: we make better decisions with information.²

Theorem 4. Optimal actions contingent on receiving information (a message) are always (weakly) better than actions chosen without information.

² Remark that this is specific to the expected utility model. Alternative models of decision making may not always find information valuable.

Example: $S = \{s_1, s_2, s_3\}$, $p(s_1) = \pi$, $p(s_2) = \theta$, $p(s_3) = 1 - \pi - \theta$ ($s_1$ = 'bread is bad', $s_2$ = 'bread is OK', $s_3$ = 'bread is good'). Two actions: $a_1$ = 'buy', $a_2$ = 'do not buy'. We assume that $u(a_1, s_1) = -2$, $u(a_1, s_2) = 1$, $u(a_1, s_3) = 2$ and $u(a_2, s_k) = 0$ for all $k = 1, 2, 3$.
Consider three information structures:

1. Case 1: $M = \{m_1\}$, $\Pi = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ (no information).
The expected utility of $a_1$ is $U(a_1) = -2\pi + \theta + 2(1 - \pi - \theta)$.
The expected utility of $a_2$ is $U(a_2) = 0$.
You choose $a_1$ if $2(1 - \pi - \theta) > 2\pi - \theta$.
2. Case 2: $M = \{m_1, m_2\}$, $\Pi = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$.
If you receive message $m_2$, you know that $s_3$ is the case for sure and you take action $a_1$.
If you receive message $m_1$, you update your beliefs so that $p(s_1 \mid m_1) = \frac{\pi}{\pi + \theta}$, $p(s_2 \mid m_1) = \frac{\theta}{\pi + \theta}$ and $p(s_3 \mid m_1) = 0$.
The expected utility of $a_1$ is $U_{m_1}(a_1) = -\frac{2\pi}{\pi + \theta} + \frac{\theta}{\pi + \theta}$.
The expected utility of $a_2$ is $U_{m_1}(a_2) = 0$.
You choose $a_1$ when receiving $m_1$ if $0 > 2\pi - \theta$ (in which case you would also choose $a_1$ with no information).
3. Case 3: $M = \{m_1, m_2\}$, $\Pi = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}$.
If you receive message $m_1$, you know that $s_1$ is the case for sure and you take action $a_2$.
If you receive message $m_2$, you update your beliefs so that $p(s_1 \mid m_2) = 0$, $p(s_2 \mid m_2) = \frac{\theta}{1 - \pi}$ and $p(s_3 \mid m_2) = \frac{1 - \pi - \theta}{1 - \pi}$.
The expected utility of $a_1$ is $U_{m_2}(a_1) = \frac{2(1 - \pi - \theta)}{1 - \pi} + \frac{\theta}{1 - \pi} > 0$.
The expected utility of $a_2$ is $U_{m_2}(a_2) = 0$.
You choose $a_1$.
In that case, you actually take the right decision in all states of the world.

4. Summary (each entry gives the action taken in the corresponding state, for each information structure and parameter region):

                  $2\pi-\theta < 0$   $0 < 2\pi-\theta < 2(1-\pi-\theta)$   $2(1-\pi-\theta) < 2\pi-\theta$
  Case 1  $s_1$        $a_1$                    $a_1$                              $a_2$
          $s_2$        $a_1$                    $a_1$                              $a_2$
          $s_3$        $a_1$                    $a_1$                              $a_2$
  Case 2  $s_1$        $a_1$                    $a_2$                              $a_2$
          $s_2$        $a_1$                    $a_2$                              $a_2$
          $s_3$        $a_1$                    $a_1$                              $a_1$
  Case 3  $s_1$        $a_2$                    $a_2$                              $a_2$
          $s_2$        $a_1$                    $a_1$                              $a_1$
          $s_3$        $a_1$                    $a_1$                              $a_1$
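The table can be reproduced mechanically. Below is a small Python sketch (the helper name and the illustrative $(\pi, \theta)$ pairs, one per column of the table, are mine) that computes, for each noiseless structure, the action taken in each state:

```python
import numpy as np

def action_by_state(Pi, p):
    """For a noiseless structure Pi[s, m], return the action taken in each state."""
    U = np.array([[-2.0, 0.0], [1.0, 0.0], [2.0, 0.0]])   # u(a, s) from the example
    q = p @ Pi
    best = [int(np.argmax([(p * Pi[:, m] / q[m]) @ U[:, a] for a in (0, 1)]))
            for m in range(Pi.shape[1])]                   # optimal action per message
    return [f"a{best[int(np.argmax(Pi[s]))] + 1}" for s in range(3)]

cases = {
    "Case 1": np.array([[1.0], [1.0], [1.0]]),
    "Case 2": np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
    "Case 3": np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]),
}
# One (pi, theta) pair in each of the three parameter regions of the table.
for pi, th in [(0.1, 0.3), (0.25, 0.3), (0.5, 0.3)]:
    p = np.array([pi, th, 1 - pi - th])
    print(pi, th, {name: action_by_state(Pi, p) for name, Pi in cases.items()})
```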

We know that having information is better than having no information. But we may want to define what it means to have access to a "better" or "more informative" information structure. Hence the following definition and result.

Definition 3. An information structure $(\Pi_1, M_1)$ has more value than an information structure $(\Pi_2, M_2)$ if and only if, for any utility function $u$ and any prior probability $p$:
\[
U(\Pi_1, M_1, p, u) \geq U(\Pi_2, M_2, p, u).
\]

Theorem 5 (Blackwell, 1951). The information structure $(\Pi_1, M_1)$, with $|M_1| = K_1$, has more value than the information structure $(\Pi_2, M_2)$, with $|M_2| = K_2$, if and only if there exist positive numbers $(B^k_j)_{j \in \{1,\dots,K_1\},\, k \in \{1,\dots,K_2\}}$ such that:

1. $(\Pi_2)^k_s = \sum_{j \in \{1,\dots,K_1\}} B^k_j (\Pi_1)^j_s$ for all $m_k \in M_2$ and $s \in S$.

2. $\sum_{k \in \{1,\dots,K_2\}} B^k_j = 1$ for all $j \in \{1, \dots, K_1\}$.

The interpretation of the theorem is the following. In matrix terms, denoting $B = (B^k_j)_{j \in \{1,\dots,K_1\},\, k \in \{1,\dots,K_2\}}$ (where the $j$ index the rows and the $k$ the columns of the matrix), condition 1 reads $\Pi_2 = \Pi_1 B$. Hence matrix $B$ transforms the signals of $(\Pi_1, M_1)$ into noisier ones, $(\Pi_2, M_2)$. This is called "garbling": during the transmission of a message, it can be garbled, i.e. mixed up or distorted.
Example: Assume that $S = \{s_1, s_2\}$ and that we have two messages $M = \{m_1, m_2\}$. A perfect expert is represented by the matrix
\[
\Pi_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\]
i.e. when receiving message $m_1$ you are sure that the state is $s_1$, and when receiving message $m_2$ you are sure that the state is $s_2$.
Now consider
\[
B = \begin{pmatrix} \theta & 1-\theta \\ 1-\theta & \theta \end{pmatrix}.
\]
We obtain:
\[
\Pi_2 = \Pi_1 B = \begin{pmatrix} \theta & 1-\theta \\ 1-\theta & \theta \end{pmatrix}.
\]
With the information structure $(\Pi_2, M)$, the signals are noisier: when receiving message $m_1$, I know that the true state is $s_1$ only with probability $\theta p(s_1) / \big(\theta p(s_1) + (1-\theta) p(s_2)\big)$.
The Blackwell theorem tells us that an information structure has less value when it is obtained by garbling the messages of a more informative structure.
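A quick numerical illustration of garbling (a sketch; the prior, payoffs and $\theta$ are assumed), reusing the ex-ante expected utility computation of Section 1.5 to check that the garbled structure is worth less:

```python
import numpy as np

def value(Pi, p, U):
    """Ex-ante expected utility U(Pi, M, p, u) with the optimal action per message."""
    q = p @ Pi
    return sum(q[m] * max((p * Pi[:, m] / q[m]) @ U[:, a]
                          for a in range(U.shape[1]))
               for m in range(Pi.shape[1]) if q[m] > 0)

p = np.array([0.5, 0.5])                 # prior over s1, s2 (assumed)
U = np.array([[1.0, 0.0],                # action a1 pays 1 in s1, a2 pays 1 in s2
              [0.0, 1.0]])
Pi1 = np.eye(2)                          # the perfect expert
theta = 0.8
B = np.array([[theta, 1 - theta],
              [1 - theta, theta]])
Pi2 = Pi1 @ B                            # the garbled structure
print(value(Pi1, p, U), value(Pi2, p, U))   # 1.0 > 0.8: garbling destroys value
```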

1.6 The value of information


In the previous section, we have seen that more information (in the sense of better information structures) has value, in the sense that it helps make better decisions and thus yields a larger utility. In general, we may assess the value of information in utility terms by comparing the utility with and without information:
\begin{align*}
V(\Pi, M, p, u) &= U(\Pi, M, p, u) - \max_{a \in A} \sum_{s \in S} p(s)\, u(a(s))\\
&= \sum_{m \in M} q(m) \Big( \sum_{s \in S} p(s \mid m)\, u\big(a^*_m(s)\big) \Big) - \sum_{s \in S} p(s)\, u\big(a^*(s)\big)\\
&\geq 0.
\end{align*}

The problem is that this value is in terms of utility and not money, which is difficult to interpret, unless $u(a(s))$ is a monetary value. What we would like to have is a willingness to pay (WTP) for access to an information structure $(\Pi, M)$.
To measure this willingness to pay, let us assume that $R(a(s))$ is the monetary consequence of action $a$ in state $s$ and that utility can be written:
\[
u(a(s)) = v\big(R(a(s)) + \bar{w}\big),
\]
where $\bar{w}$ is the initial wealth level (more generally, we could have $u(a(s)) = v\big(R(a(s)) + \bar{w},\, a(s)\big)$).
We can define the (monetary) value of information as a willingness-to-pay in two different ways:

1. $V^\star(\Pi, M, p, u)$, the compensating variation, such that:
\[
\sum_{m \in M} q(m) \Big( \sum_{s \in S} p(s \mid m)\, v\big(R(a^*_m(s)) + \bar{w} - V^\star(\Pi, M, p, u)\big) \Big) = \sum_{s \in S} p(s)\, v\big(R(a^*(s)) + \bar{w}\big)
\]

2. $V^{\star\star}(\Pi, M, p, u)$, the equivalent variation, such that:
\[
\sum_{m \in M} q(m) \Big( \sum_{s \in S} p(s \mid m)\, v\big(R(a^*_m(s)) + \bar{w}\big) \Big) = \sum_{s \in S} p(s)\, v\big(R(a^*(s)) + \bar{w} + V^{\star\star}(\Pi, M, p, u)\big)
\]

Example: $S = \{s_1, s_2, s_3\}$, $p(s_1) = \pi$, $p(s_2) = \theta$, $p(s_3) = 1 - \pi - \theta$ ($s_1$ = 'bread is bad', $s_2$ = 'bread is OK', $s_3$ = 'bread is good'). Two actions: $a_1$ = 'buy', $a_2$ = 'do not buy'. We assume that $u(a_1(s_k)) = v(s_k) - \tau + \bar{w}$, with $v(s_k)$ the utility of consuming bread and $\tau$ the price of bread, and $u(a_2(s_k)) = \bar{w}$.
When you have no information, the expected utility of $a_1$ is $U(a_1) = \sum_s p(s) v(s) - \tau + \bar{w}$ and the expected utility of $a_2$ is $U(a_2) = \bar{w}$. You buy the bread if $\sum_s p(s) v(s) > \tau$. In that case your utility is
\[
\sum_{s \in S} p(s)\, u(a^*(s)) = \sum_s p(s) v(s) - \tau + \bar{w}.
\]
Now assume that you can access the noiseless information structure $M = \{m_1, m_2\}$, $\Pi = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}$. We know that in that case, when you receive message $m_1$, you know that $s_1$ is the case for sure and you take action $a_2$ (we assume that $v(s_1) < \tau$). Assuming that $v(s_3) > v(s_2) > \tau$, if you receive message $m_2$, you want to buy the bread.
In that case your ex-ante utility is
\[
U(\Pi, M, p, u) = \theta v(s_2) + (1 - \pi - \theta) v(s_3) - (1 - \pi)\tau + \bar{w}.
\]
Then the value of information is measured either by $V^\star(\Pi, M, p, u)$ such that
\[
\theta v(s_2) + (1 - \pi - \theta) v(s_3) - (1 - \pi)\tau + \bar{w} - V^\star(\Pi, M, p, u) = \sum_s p(s) v(s) - \tau + \bar{w},
\]
or by $V^{\star\star}(\Pi, M, p, u)$ such that
\[
\theta v(s_2) + (1 - \pi - \theta) v(s_3) - (1 - \pi)\tau + \bar{w} = \sum_s p(s) v(s) - \tau + \bar{w} + V^{\star\star}(\Pi, M, p, u).
\]
In this specific case, utility is linear in money, so the two measures coincide:
\[
V^\star(\Pi, M, p, u) = V^{\star\star}(\Pi, M, p, u) = \pi\big(\tau - v(s_1)\big).
\]
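A numerical check of this formula, as a minimal Python sketch (the values of $\pi$, $\theta$, $v(s_k)$ and $\tau$ are assumed):

```python
pi, theta = 0.2, 0.3           # p(s1), p(s2); p(s3) = 1 - pi - theta
v = {1: 0.5, 2: 2.0, 3: 3.0}   # v(s_k), with v(s1) < tau < v(s2) < v(s3)
tau = 1.5                      # price of bread

# Without information: buy iff E[v] > tau (utilities are net of initial wealth).
Ev = pi * v[1] + theta * v[2] + (1 - pi - theta) * v[3]
u_no_info = max(Ev - tau, 0.0)

# With the noiseless structure: skip buying on m1 (state s1), buy on m2.
u_info = theta * v[2] + (1 - pi - theta) * v[3] - (1 - pi) * tau

value = u_info - u_no_info
print(value, pi * (tau - v[1]))   # both equal 0.2, as the formula predicts
```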
