
The Dual Approach to Recursive Optimization:

Theory and Examples∗


Matthias Messner† Nicola Pavoni‡ Christopher Sleet§

September 15, 2013

Abstract

We bring together the theories of duality and dynamic programming. We show that
the dual of a separable dynamic optimization problem can be recursively decomposed.
We provide a dual version of the principle of optimality and give conditions under
which the dual Bellman operator is a contraction with the optimal dual value function
its unique fixed point. We relate primal and dual problems, address computational
issues and give examples.

JEL codes: C61, C73, D82, D86, E61.


Keywords: Dynamic Contracts, Duality, Dynamic Programming.

∗ We thank Musab Kurnaz for expert assistance with the calculations. We are grateful for the comments of seminar participants at Concordia University, Mannheim University, the Bank of Portugal, and the 2013 SAET meetings. This paper supersedes Messner, Pavoni, Sleet (2011), which developed the dual recursive method in a simpler two-period setting, and Messner, Pavoni, Sleet (2012a), which constitutes our first attempt to derive contractive properties of the dual Bellman operator we introduce here. Pavoni gratefully acknowledges financial support from the European Research Council, Starting Grant #210908.
† Bocconi University and IGIER, 20136 Milano, Italy; matthias.messner@unibocconi.it.
‡ Department of Economics, Bocconi University and IGIER, Via Roentgen 1, I-20136, Milan, Italy; IFS and CEPR, London; pavoni.nicola@gmail.com.
§ Tepper School of Business, Carnegie Mellon University, Pittsburgh PA 15217; csleet@andrew.cmu.edu.

1 Introduction

Many dynamic economic optimization problems have a recursive structure that makes them amenable to solution via dynamic programming. This structure allows the original problem to be decomposed into a family of simpler sub-problems linked by state variables. The set of state variables consistent with a non-empty constraint correspondence is called the "effective" state space and is a key component of a problem's recursive formulation. In
many settings the effective state space is not given explicitly: it must be recovered as part
of the solution to the problem. This complicates the application of dynamic programming
methods and, following Marcet and Marimon (2011), has prompted economists to adopt
recursive formulations that replace or supplement standard "primal" state variables with
"dual" ones. Examples include, inter alia, Kehoe and Perri (2002), Marimon and Quadrini
(2006), Acemoğlu, Golosov, and Tsyvinski (2010), Chien, Cole, and Lustig (2011) and Aiya-
gari, Marcet, Sargent, and Seppälä (2002). Despite their widespread use, thorough analysis
of these methods is limited and their application has often been ad hoc. This paper devel-
ops a new recursive dual approach to dynamic optimization that blends elements of the
theories of duality and dynamic programming. It shows that (i) a large class of dynamic
optimization problems in economics have recursive duals, (ii) such recursive duals relo-
cate the analysis to a more convenient dual state space that is often easy to characterize
and (iii) the associated dual Bellman operator is contractive on an appropriate function
space. Sufficient conditions for the dual and, hence, the recursive dual to characterize the
solution of the original (primal) problem are given. For situations in which these sufficient
conditions are not satisfied, a numerical check of optimality is proposed. Numerical im-
plementation of the recursive dual method is discussed and various economic examples
and applications provided.
The paper begins with a family of recursive (primal) optimizations that encompasses
many economic applications. These optimizations feature objective and constraint func-
tions that can be expressed in terms of recursively-evolving "summaries" of past and future
actions. In the context of particular applications, such summaries have interpretations as
capital, utility promises or inflation; they may be backward-looking (i.e. functions of past
actions and shocks and an initial condition) or forward-looking (i.e. functions of future
actions). In recursive formulations of primal problems, they serve as state variables. Pri-
mal optimization problems may be re-stated using a Lagrangian. In the re-stated problem
a sup-inf operation over choices and Lagrange multipliers replaces a sup operation over
choices alone. By interchanging the sup and inf operations a dual inf-sup problem is ob-
tained. We show that if the correct Lagrangian is chosen, the recursive structure of the
primal is inherited by the dual with, in the latter case, co-states (i.e. multipliers on laws of
motion for primal states) serving as dual state variables.
We use this structure to recover a dual Bellman operator. The dual Bellman updates
candidate value functions via "inf-sup" operations over Lagrange multipliers and actions.
Specifically, at each dual state and current multiplier combination, an "inner" supremum
operation is performed over current actions. Then, at each current dual state, an outer infimum operation over multipliers gives the updated value function. We show that without
further assumptions the dual Bellman gives necessary conditions for optimal dual values
and policies and, under mild additional restrictions, sufficient conditions for such values
and policies. In short, we recover a dual principle of optimality. The key step in the deriva-
tion is an interchange of an infimum operation over future multipliers with a supremum
operation over current actions. To ensure this interchange does not modify values or solu-
tions (in the absence of further assumptions), it is essential to associate a Lagrangian with
the problem that is rich enough to allow all non-linearities in constraints and in the objec-
tive to be contained in the Lagrangian’s "current" terms. In general this requires explicitly
incorporating laws of motion for primal state variables into the Lagrangian.
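The inf-sup update just described can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: the current-term function h, the grids and the discount factor are hypothetical stand-ins for the Lagrangian's "current" terms, and dual states y stand in for multipliers on a single law of motion.

```python
import numpy as np

# Minimal sketch of the inf-sup update described above for a hypothetical
# one-dimensional problem.  Dual states y are multipliers; the update takes
# an inner sup over current actions a at each (y, y') pair, then an outer
# inf over continuation multipliers y'.  All primitives are illustrative.

DELTA = 0.9
Y = np.linspace(-2.0, 2.0, 41)   # dual state grid (current multipliers)
A = np.linspace(0.0, 1.0, 21)    # action grid
YP = Y                           # continuation multiplier grid

def h(y, yp, a):
    """Illustrative current Lagrangian term (payoff plus multiplier terms)."""
    return np.sqrt(a) + y * (1.0 - a) - yp * a

# Inner sup over actions, computed once: HSUP[i, j] = sup_a h(Y[i], YP[j], a).
HSUP = np.array([[max(h(y, yp, a) for a in A) for yp in YP] for y in Y])

def dual_bellman(D):
    """One update: (T D)(y) = inf_{y'} [ sup_a h(y, y', a) + delta * D(y') ]."""
    return (HSUP + DELTA * D[None, :]).min(axis=1)

# Since D enters linearly with coefficient delta < 1, T is a sup-norm
# contraction on this grid and the iterates converge.
D = np.zeros_like(Y)
for _ in range(300):
    D_new = dual_bellman(D)
    gap = float(np.max(np.abs(D_new - D)))
    D = D_new
```

Because the candidate value enters the update linearly and discounted, successive iterates contract at rate delta in the sup norm on the grid, which mirrors the contraction result developed in Section 6.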
An attractive aspect of the recursive dual is that in some important cases, in particular, when the primal state space is bounded, the "effective" dual state space is readily identified as all of R^N, where N is the number of dual state variables. Thus, for the dual problem, the difficulty of determining the state space is resolved. In addition, dual value functions are positively homogeneous of degree one. Consequently, in calculations, the dual state space may be identified with the unit circle (in R^N).
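Positive homogeneity of degree one can be exploited directly in computations: it suffices to store a dual value function on the unit circle and rescale. A small sketch in R^2 with a toy homogeneous function (the function D and the grids are illustrative, not the paper's):

```python
import numpy as np

# If D is positively homogeneous of degree one, D(y) = ||y|| * D(y / ||y||),
# so it suffices to tabulate D on the unit circle.  D below is a toy
# degree-1 homogeneous function used only for illustration.

def D(y):
    """A toy positively homogeneous degree-1 function on R^2."""
    return 2.0 * abs(y[0]) - y[1]

# Tabulate D on the unit circle only.
angles = np.linspace(0.0, 2.0 * np.pi, 721)
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
D_on_circle = np.array([D(y) for y in circle])

def D_from_circle(y):
    """Recover D(y) anywhere in R^2 from its values on the unit circle."""
    r = np.linalg.norm(y)
    if r == 0.0:
        return 0.0
    theta = np.arctan2(y[1], y[0]) % (2.0 * np.pi)
    return r * np.interp(theta, angles, D_on_circle)
```

The one-dimensional table plus the rescaling r * D(y/r) reproduces the function everywhere, which is the computational saving the homogeneity property delivers.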
The recursive dual features an unbounded value function and an unbounded con-
straint correspondence. This combination creates a challenge for the standard approach
to establishing contractivity of the Bellman operator. For problems with unbounded value
functions, a common procedure following Wessels (1977), is to show that there is a set
of functions closed and bounded with respect to a more permissive weighted sup norm1
that contains the optimal value function and on which the Bellman is a contractive self-
map. However, this approach requires that the continuation state variables and, hence,
the continuation value function cannot vary "too much" on the graph of the constraint
correspondence. Since the dual Bellman operator permits the choice of multipliers from
an unbounded set, this condition is only guaranteed in the dual setting if additional non-
binding constraints on multipliers are found. Instead, we show that the Bellman is contrac-
tive with respect to an alternative metric on a space of functions sandwiched between two
(unbounded) functions.2 We show through examples that such bounding functions are of-
ten available. A further difficulty is that the unboundedness and, hence, non-compactness
of the set of feasible multipliers disrupts the application of the Theorem of the Maximum.
However, it is easy to show that the optimal value function is convex. When it is everywhere real-valued as well, appeals can be made to the continuity properties of convex functions to establish continuity of the optimal value function.

1 A weighted sup norm on a set of functions F with common domain X is a function ‖·‖_w : F → R of the form ‖f‖_w = sup_{x∈X} |f(x)|/w(x) for some w : X → R_{++}.
2 The argument combines the concavity of the dual Bellman, properties of the metric and of the sandwich. It adapts ideas of Rincón-Zapatero and Rodríguez-Palmero (2003). The novelty lies in the application of this argument to the dual setting, to which it seems well suited.
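The weighted sup norm of footnote 1 is easy to compute on a grid. The sketch below, with a hypothetical weight and function, shows how a function unbounded in the ordinary sup norm can still have a finite weighted norm:

```python
import numpy as np

# Illustrative computation of a weighted sup norm ||f||_w = sup_x |f(x)| / w(x).
# The grid, the weight w and the function f are hypothetical examples.

X = np.linspace(1.0, 1000.0, 100001)   # large grid standing in for an unbounded domain

def weighted_sup_norm(f_vals, w_vals):
    """||f||_w = sup |f(x)| / w(x) over the grid."""
    return float(np.max(np.abs(f_vals) / w_vals))

f = X                      # f(x) = x: sup-norm unbounded as the domain grows
w = 1.0 + X**2             # weight w : X -> R_++

norm_w = weighted_sup_norm(f, w)   # x / (1 + x^2) is maximized at x = 1
```

Here the ordinary sup of |f| grows with the domain, while the weighted norm stays at 1/2, which is what makes weighted norms useful for unbounded value functions.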
The recursive dual formulation permits solution of the dual rather than the original
primal problem. It remains to relate them. Weak duality results imply that the dual and,
hence, the recursive dual supplies an upper bound for payoffs from the primal problem.
Consequently, with no further assumptions the recursive dual gives welfare bounds for
optimal policies or policy improvements. For concave problems, possibly after relaxation
of the equality constraints describing laws of motion for state variables, we may appeal
directly to known duality results to relate the dual (and, hence, again the recursive dual)
more tightly to the original primal. These results give sufficient conditions on primitives
for dual and primal values and, sometimes, solutions to coincide. When theoretical suf-
ficient conditions for equality of dual and primal values and solutions are not available,
because, for example of non-concavities, we propose a numerical procedure for checking
whether a dual solution solves the original primal problem.
The paper proceeds as follows. After a brief literature review, Section 2 introduces a
general class of stochastic, infinite-horizon problems. Economic examples are given in Sec-
tion 3. Section 4 presents a primal recursive formulation for a sub-class of these problems
and points out difficulties in applying it. In Section 5, the primal problem is paired with
a dual problem and a recursive formulation of the latter obtained. A Bellman-type princi-
ple of optimality for the dual problem is established; Section 6 gives a contraction result
for recursive dual problems. The important class of problems with laws of motion and
constraints that are quasi-linear in (primal) states is considered in Section 7. Section 8 re-
lates primal and dual problems. Numerical implementation is discussed and a numerical
example given in Section 9.

Literature Our method is related to, but distinct from, that of Marcet and Marimon (1999)
(revised: Marcet and Marimon (2011)). These authors propose solving dynamic opti-
mizations by recursively decomposing a saddle point operation. They restrict attention to
concave problems with constraints (including laws of motion) that are linear in forward-
looking state variables. They substitute forward-looking states out of the problem using
their laws of motion and absorb a subset of constraints into a Lagrangian. Laws of motion
for backward-looking primal states (e.g. capital) are left as explicit restrictions. They then
recursively decompose a saddle point of this Lagrangian (on the constraint set defined by
the backward-looking laws of motion).
In contrast, our approach cleanly separates dualization of the primal from recursive
decomposition of the dual and shows that the latter is available under rather weak separability conditions, much weaker than those imposed by Marcet and Marimon (2011).
Our theoretical sufficient conditions for equality of optimal dual and primal values and
solutions are stronger than those guaranteeing recursive decomposition. However, even
here we can dispense with several of Marcet-Marimon’s restrictions. The requirements
that constraints are linear in forward-looking state variables and that every continuation
problem has a saddle can be dropped. Moreover, when these theoretical conditions are
not satisfied, we propose a numerical procedure for checking primal optimality of a dual
solution.
For some problems, Marcet and Marimon (2011)’s recursive saddle Bellman operator is
available and resembles our dual Bellman.3 In others, it is not available or is available, but
is quite different from ours. In particular, all of the examples considered in this paper either
cannot be handled by Marcet and Marimon (2011)’s formulation or would be handled
differently. The difference in the handling of backward-looking state variables between our approach and that of Marcet and Marimon (2011) is not a detail. Our treatment of
these variables is essential for the contractivity of the dual Bellman. This result relies on
the concavity of the Bellman operator, which is always true for our formulation, but not
theirs.4
Messner, Pavoni, and Sleet (2012b) consider the relationship between primal and dual
Bellman operators. They restrict attention to concave problems without backward-looking
state variables and with laws of motion that are linear in forward-looking ones. Thus,
their setting is much less general than the present one; it excludes many economically rel-
evant problems such as default with capital accumulation, risk sharing with non-expected
utility and optimal monetary policy, all of which are considered here. In addition, they
do not provide contraction results or a numerical implementation. In a similar setting to
Messner, Pavoni, and Sleet (2012b), Cole and Kubler (2012) show how recursive methods
using dual variables may be extended to give sufficient conditions for an optimal primal
solution under weak concavity conditions. In addition, they derive a contraction result
using a weighted sup-norm. They do so by obtaining additional non-binding constraints
on multipliers and, hence, continuation states. However, the restrictions on primitives for
these additional constraints to be non-binding appear strong.
3 However, even in these cases, our Bellman operator implements a fairly straightforward inf-sup operation, whereas theirs involves a more difficult saddle point operation.
4 Underpinning this is the fact that our dual formulation relies entirely on dual state variables; Marcet and Marimon (1999) dualize a subset of constraints and rely on a mixture of dual and primal state variables.
2 Decision Maker’s Problem
This section describes an abstract recursive choice problem that can be specialized to give
many problems considered in the literature. In particular, it encompasses many dynamic
contracting and optimal policy problems. Concrete examples are given in Section 3.

Shocks and Action Plans Let S = {1, . . . , n_s}, with element s, denote a finite set of shocks.5 Shock histories of length t = 1, . . . , ∞ are denoted s^t ∈ S^t. Let A ⊂ R^{n_a}, with element a, denote a set of actions available to a decision-maker. The decision-maker's action choices at each history are collected into an action plan: α = {a_t}_{t=0}^∞, with a_0 ∈ A and, ∀t ∈ N, a_t : S^t → A. The s^t-continuation of an action plan α is denoted α|s^t = {a_{t+τ}(s^t, ·)}_{τ=0}^∞. Plans are restricted to a set 𝒜 such that if α ∈ 𝒜, then for all t and s^t, α|s^t ∈ 𝒜. Let R(S) denote the set of probability distributions on S and Q : S × A → R(S) a transition that maps current shock-action pairs to probability distributions over the subsequent period's shocks. Together Q, a seed shock s_0 and an action plan α induce a probability distribution over shocks and actions in all periods.
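As an illustration of how Q, a seed shock and a plan induce a distribution over histories, the following sketch simulates shock paths under a hypothetical two-state transition and a stationary action rule; all primitives here are invented for illustration.

```python
import random

# Illustrative simulation of the shock process.  Q maps a current (shock,
# action) pair to a distribution over next-period shocks; together with a
# seed shock s0 and a plan it induces a distribution over histories.  The
# transition and the action rule below are hypothetical.

S = [0, 1]                               # two shock states

def Q(s, a):
    """Hypothetical transition: higher action tilts odds toward state 1."""
    p1 = min(0.9, 0.2 + 0.5 * a + 0.2 * s)
    return [1.0 - p1, p1]

def action_rule(s):
    """A simple stationary plan a_t = a(s_t), for illustration only."""
    return 0.5 if s == 0 else 1.0

def simulate(s0, T, seed=0):
    """Draw one shock history of length T + 1 starting from the seed shock."""
    rng = random.Random(seed)
    history = [s0]
    for _ in range(T):
        s = history[-1]
        probs = Q(s, action_rule(s))
        history.append(rng.choices(S, weights=probs)[0])
    return history
```

Averaging functions of such simulated histories approximates expectations under the induced distribution, which is how the objective and constraints below are to be read.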

Constraints The set 𝒜 is supplemented with additional constraints involving explicit functions of actions. These functions depend on recursively evolving "summaries" of past and future actions. Such summaries serve as states in primal recursive formulations, where they often have concrete economic interpretations as, inter alia, capital stocks, utility promises or inflation rates. In the dual setting multipliers on the laws of motion for these summaries will serve as states.
We distinguish between summaries of past and future actions. Let K ⊂ R^{n_k} be a bounded set. Given a plan α, summaries of past actions and shocks K_{t+1}(α, s^t) are constructed recursively from a function W^K : K × S × A → K according to:

K_{t+1}(α, s^t) = W^K[K_t(α, s^{t−1}), s_t, a_t(s^t)],     (1)

with K_0(α, s^{−1}) = k̄ an initial seed state. In the sequel, we call summaries of past actions and shocks backward-looking state variables. In many economic models physical or human capital is naturally formalized as a backward-looking state variable.
Summaries of future actions V(s_t, α|s^t) ∈ R^{n_v+1} are constructed recursively from a pair of functions W^V : S × A × R^{n_v+1} → R^{n_v+1} and M^V : S × A × R^{n_s(n_v+1)} → R^{n_v+1}. The first is a time aggregator that gives the current summary as a function of current actions and a certainty equivalent of future summaries; the second is a stochastic aggregator that generates the certainty equivalent. Future summaries are given by a function V : S × 𝒜 → R^{n_v+1} satisfying the fixed point condition:

V(s_t, α|s^t) = W^V[s_t, a_t(s^t), M^V[s_t, a_t(s^t), V′(α|s^t)]],     (2)

where V′(α|s^t) = {V(s, α|(s^t, s))}_{s=1}^{n_s} ∈ R^{(n_v+1)n_s} is a vector of continuation summaries. In many examples, V gives the continuation payoffs of a group of agents facing incentive constraints. If these agents have time-additive expected utility preferences, then W^V[s, a, m] = f(s, a) + δm and M^V[s, a, v′] = ∑_{s′∈S} v′(s′)Q(s|s′). However, our formulation allows us to accommodate problems in which agents have non-time-additive or non-expected utility preferences or, indeed, problems in which the forward-looking variables are not payoffs at all (see Section 3).
To ensure the future summaries V(s_t, α|s^t) are well defined and that (2) admits a fixed point in a suitable space of functions, the following restrictions are imposed on W^V and M^V.

5 The restriction to a finite set of shocks streamlines our presentation by avoiding measure-theoretic complications, but is not essential for our main results.

Assumption 1. W^V is increasing and continuous in its third argument. W^V[·, ·, 0] is bounded and there is a δ̄ ∈ [0, 1) such that for all m and m′ ∈ R^{n_v+1}:

sup_{S×A} ‖W^V[s, a, m] − W^V[s, a, m′]‖ ≤ δ̄ ‖m − m′‖,

with ‖·‖ the Euclidean metric (on R^{n_v+1}).

If v′ ∈ R^{n_s(n_v+1)} and κ ∈ R^{n_v+1}, then we will write v′ + κ for v′ + (κ, κ, · · · , κ) ∈ R^{n_s(n_v+1)}.

Assumption 2. For each (s, a) ∈ S × A and κ ∈ R^{n_v+1}, (i) M^V[s, a, ·] is increasing, (ii) M^V[s, a, κ] = κ and (iii) M^V[s, a, ·] is constant sub-additive: for all v′ ∈ R^{n_s(n_v+1)}, M^V[s, a, v′ + κ] ≤ M^V[s, a, v′] + κ.

Existence and uniqueness of a bounded function V satisfying (2) follows from Assumptions 1 and 2 and is shown in Appendix A. In the remainder of the paper, summaries of future actions V(s_t, α|s^t) are called forward-looking state variables. Let V := V(S × 𝒜), i.e. V is the (bounded) codomain of V.
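The existence argument rests on Assumption 1's δ̄-contraction: iterating v ↦ W^V[s, a, M^V[s, a, v]] from any bounded guess converges. A sketch for the time-additive expected-utility special case (W^V = u + δm, M^V an expectation), with illustrative primitives and a stationary plan:

```python
import numpy as np

# Successive-approximation sketch behind existence of V.  Under Assumption 1
# the map v -> W^V[s, a, M^V[s, a, v]] is a delta-bar contraction, so the
# iterates converge.  Utility, transition and the stationary plan below are
# illustrative stand-ins.

DELTA = 0.95
Q = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # Q[s, s'] transition probabilities
a_of_s = np.array([1.0, 2.0])         # hypothetical stationary plan a(s)

def u(a):
    return np.log(1.0 + a)

def update(v):
    """v(s) = W^V[s, a(s), M^V[s, a(s), v]] with W = u + delta*m, M = E_Q."""
    m = Q @ v                         # stochastic aggregator: expectation
    return u(a_of_s) + DELTA * m      # time aggregator

v = np.zeros(2)
for _ in range(1000):
    v_new = update(v)
    gap = float(np.max(np.abs(v_new - v)))
    v = v_new
```

In this linear special case the fixed point also has the closed form v = (I − δQ)^{-1} u(a), which the iteration reproduces; in the general non-linear case of Assumptions 1 and 2 only the iterative argument is available.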
Constraints are constructed from state variables according to: for all t ∈ {0} ∪ N, s_t ∈ S and s^t ∈ S^t,

H[K_t(α, s^{t−1}), s_t, a_t(s^t), V′(α|s^t)] ≥ 0,     (3)

where H : K × S × A × V^{n_s} → R^{n_h} is bounded. In applications these inequalities capture incentive and resource constraints. We assume throughout that the decision-maker's constraint set is non-empty for some combination of initial state variables.

Objective and Problem The decision-maker's objective, U : S × 𝒜 → R, is given by an aggregator over the forward-looking state variables:

U(s_0, α) = F[s_0, V(s_0, α)],

where F[s_0, ·] is non-decreasing. For example, V(s_0, α) = {V^i(s_0, α)}_{i=0}^{n_v} ∈ R^{n_v+1} may give the payoffs of agents i = 0, . . . , n_v and F may attach (possibly state contingent) Pareto weights to these agents. U is then interpreted as a planner's payoff.
The decision-maker's primal problem is:

P∗ = sup_{α∈𝒜} F[s_0, V(s_0, α)]     (P)

subject to ∀t, s^t, (3). We follow the usual convention sup ∅ = −∞.

3 Examples and Variations


Our framework accommodates many examples from the literature. Below we give three
that highlight the scope of our method. The first is a limited commitment problem similar
to that studied by Kocherlakota (1996) except that we assume agents have non-expected
utility Epstein-Zin preferences. Consequently, the law of motion for forward-looking
states, in this case the agents’ utilities, is non-linear in V ′ . The next example is a lim-
ited commitment problem with physical capital (and standard preferences); it features a
backward-looking state variable. The third is an optimal monetary policy problem similar
to those considered in Woodford (2003). This problem also features a non-linear law of
motion for the forward-looking state variables. All of these examples are outside of the
formulation of Messner, Pavoni, and Sleet (2012b) (which features no backward-looking
states and linear laws of motion for forward-looking ones) and Marcet and Marimon
(2011) (which allows for backward-looking states, but also assumes linear laws of mo-
tion for forward-looking ones and restricts attention to concave problems). In addition,
Marcet and Marimon (2011) treat backward-looking states quite differently to us.

Example 1 (Risk sharing with limited commitment and Epstein-Zin preferences). Two agents share risk. They face shocks to their endowments and to their utility options from separation. Let γ : S → R_+ give the joint endowment of the agent pair in each shock state and w : S → R^2, w(s) = {w^i(s)}_{i=1,2}, their outside utility options. Let A = R^2_+ denote a set of possible consumptions for the agents. There are no backward-looking state variables. The continuation payoffs of the two agents, V^i(s, α), i = 0, 1, constitute forward-looking state variables. They evolve according to Equation (2) with aggregators:

W^V[s, a, m] = { ((1−δ)/(1−µ)) (a^i)^{1−µ} + δm^i }_{i=0,1},     M^V[s, v′] = { ( ∑_{s′∈S} v^{i}′(s′)^σ Q(s|s′) )^{1/σ} }_{i=0,1}.

Boundedness and concavity of these aggregators are assured if µ, σ ∈ (0, 1). The resource and incentive constraints are collected into a single function:

H[s, a, v′] = ( W^V[s, a, M^V[s, v′]] − w(s), γ(s) − ∑_{i=1}^{2} a^i ) ≥ 0.

Finally, the decision-maker is a planner who attaches Pareto weight λ^i to the i-th agent. Her objective is F[s_0, V(s_0, α)] = ∑_{i=0}^{1} λ^i V^i(s_0, α). 
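For a single agent and a fixed stationary consumption plan, Example 1's aggregators can be iterated to the fixed point condition (2) directly. The sketch below uses illustrative parameter values and a hypothetical two-state transition; it is a numerical toy, not the paper's computation.

```python
import numpy as np

# Iterating Example 1's Epstein-Zin aggregators to a fixed point for one
# agent under a stationary consumption plan.  Parameters, transition and
# the plan are illustrative.

DELTA, MU, SIGMA = 0.9, 0.5, 0.5
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])                  # Q[s, s'] transition weights
a_of_s = np.array([1.0, 2.0])               # stationary consumption a(s) > 0

def W_V(a, m):
    """Time aggregator: ((1-delta)/(1-mu)) * a^(1-mu) + delta * m."""
    return (1.0 - DELTA) / (1.0 - MU) * a**(1.0 - MU) + DELTA * m

def M_V(s, v_next):
    """Stochastic aggregator: ( sum_s' v'(s')^sigma Q(s|s') )^(1/sigma)."""
    return (Q[s] @ v_next**SIGMA)**(1.0 / SIGMA)

v = np.ones(2)                              # positive initial guess
for _ in range(2000):
    v = np.array([W_V(a_of_s[s], M_V(s, v)) for s in (0, 1)])
```

With µ, σ ∈ (0, 1) the time aggregator is a δ-contraction in m and the stochastic aggregator is nonexpansive on positive vectors, so the iteration settles on the unique bounded fixed point of (2).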

Example 2 (Default with capital accumulation). A lender (agent 0) extends credit to a borrower (agent 1) who can accumulate capital and can default. Let a = (a^0, a^1) ∈ A ⊂ R × R_+ denote a pair of consumptions for the lender and the borrower. Their (bounded below) utility functions are denoted f^i(a^i), i = 0, 1. The borrower operates a risky technology γ : R_+ × S → R_+ that maps the capital stock and current shock to output. γ is assumed bounded. The borrower is free to default and take an outside utility w : R_+ × S → R that depends upon the amount of capital she has sunk into the technology and the current shock. The lender and borrower's utilities and capital constitute forward- and backward-looking state variables with aggregators:

W^V[s, a, m] = { f^i(a^i) + δm^i }_{i=0,1},     M^V[s, v′] = { ∑_{s′∈S} v^{i}′(s′) Q(s|s′) }_{i=0,1};

and

W^K[k, s, a] = γ(k, s) − ∑_{i=0,1} a^i.

The incentive ("no default") constraint is given by:

f^1(a^1) + δ ∑_{s′∈S} v^{1}′(s′) Q(s|s′) − w(k, s) ≥ 0,

and the resource constraint by:

W^K[k, s, a] ≥ 0.

As in the previous example, these may be collected into a single function H. The objective is given by the Pareto sum: F[s_0, V(s_0, α)] = ∑_{i=0,1} λ^i V^i(s_0, α). 
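The pieces of Example 2 translate directly into code. The sketch below checks the resource and no-default constraints at a single date-state; the technology γ, utility f^1 and outside option w are hypothetical stand-ins, not the paper's primitives.

```python
import math

# Illustrative check of Example 2's constraints at one date-state.
# gamma, f1 and w_out below are invented stand-ins for the primitives.

DELTA = 0.9

def gamma(k, s):
    """Hypothetical bounded-on-relevant-range risky technology."""
    return (1.0 + 0.5 * s) * math.sqrt(k)

def f1(a):
    """Borrower's utility, bounded below (here: log(1 + a))."""
    return math.log(1.0 + a)

def w_out(k, s):
    """Hypothetical outside option after default, increasing in sunk capital."""
    return 0.5 * f1(gamma(k, s))

def W_K(k, s, a0, a1):
    """Law of motion for capital: k' = gamma(k, s) - a0 - a1."""
    return gamma(k, s) - a0 - a1

def no_default(a1, v1_next, Q_s, k, s):
    """Check f1(a1) + delta * sum_s' v1'(s') Q(s|s') - w(k, s) >= 0."""
    cont = sum(v * q for v, q in zip(v1_next, Q_s))
    return f1(a1) + DELTA * cont - w_out(k, s) >= 0.0
```

Feasibility at a state requires both W_K(k, s, a0, a1) ≥ 0 (the resource constraint, which also delivers next period's capital) and the no-default inequality; a recursive formulation must track both k and the promised utilities jointly.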

Example 3 (Optimal monetary policy). The government's social objective over sequences of output α = {a_t}_{t=0}^∞ and inflation {∆p_t}_{t=0}^∞ is given by ∑_{t=0}^∞ δ^t L(a_t, ∆p_t) with L : R^2 → R continuous.6 Output sequences are restricted to 𝒜 := A^∞, with A = [a̲, ā] a bounded interval. Inflation evolves according to a simple New Keynesian Phillips Curve,

∆p_t = κa_t + δ∆p_{t+1},

with the terminal condition lim_{t→∞} δ^t ∆p_t = 0. Consequently, given an output plan α, inflation at t is ∆p_t = V^1(α|t) := κ ∑_{τ=0}^∞ δ^τ a_{t+τ} and the government's continuation payoff: V^0(α|t) := ∑_{τ=0}^∞ δ^τ L(a_{t+τ}, V^1(α|t+τ)). The government's payoff and inflation serve as forward-looking state variables; there are no backward-looking state variables in this problem. Adopting our previous notation and letting v = (v^0, v^1) and v′ = (v^{0}′, v^{1}′) denote, respectively, current and future pairs of payoff and inflation, the (non-linear) aggregator W^V is given by:

v = W^V[a, v′] = ( L(a, κa + δv^{1}′) + δv^{0}′,  κa + δv^{1}′ ).

There is no H function in this case and the social objective is simply F[V(α)] = V^0(α). 
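The forward solution for inflation can be verified numerically: truncating the sums at a common horizon, ∆p_t = κ ∑_τ δ^τ a_{t+τ} satisfies the Phillips curve recursion ∆p_t = κa_t + δ∆p_{t+1} exactly on the truncated objects. A sketch with an arbitrary output path (κ, δ and the path are illustrative):

```python
import math

# Numerical check of Example 3's forward solution for inflation.  kappa,
# delta and the output path are illustrative; all sums are truncated at a
# common horizon T, so the recursion holds exactly up to float rounding.

KAPPA, DELTA = 0.3, 0.9
T = 2000
a = [0.5 + 0.4 * math.sin(t) ** 2 for t in range(T)]   # arbitrary path in [0.5, 0.9]

def dp(t):
    """Delta p_t = kappa * sum_{tau >= 0} delta^tau * a_{t+tau}, truncated at T."""
    return KAPPA * sum(DELTA ** tau * a[t + tau] for tau in range(T - t))
```

Because the output path lives in a bounded interval, the tail of the discounted sum vanishes, which is exactly what the terminal condition lim δ^t ∆p_t = 0 delivers in the infinite-horizon problem.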

3.1 Variations
Our framework also accommodates dynamic (hidden action) moral hazard problems with
general recursive preferences and the timing assumed in Hopenhayn and Nicolini (1997).7
Small modifications of our basic framework admit other economic problems considered in
the literature. We briefly describe two of these.
6 We consider here the deterministic version of the problem as in Woodford (2003). In most of Woodford (2003)'s examples L is a (concave) quadratic approximation to an underlying objective over primitives. For now we place no such restrictions on L.
7 Under this timing the public signal (a job or unemployment) of a hidden action (job search) is realized in the period after the action is taken. Alternative timing assumptions are possible after modifications of our framework. The modifications are similar to those described in the discussion of hidden information problems below.
Participation Constraints In some problems constraints are supplemented with addi-
tional restrictions on the initial values of forward-looking variables. For example, con-
tracting problems often place initial participation constraints on agents. Such constraints
are easily incorporated into our basic framework by appending the additional restriction:

W^V[s_0, a_0, M^V[s_0, a_0, V′(α)]] − v̄ ≥ 0,     (4)

where v̄ gives the player’s initial outside payoff option. Our basic formulation omits (4),
but we point out the small modifications needed to incorporate it.

Hidden Information Problems In hidden information problems some or all agents privately observe a shock process. Without loss of generality attention may be restricted to plans that induce agents to truthfully reveal their current shock. This requires incentive constraints that "run across" contemporaneous shock states and, hence, the replacement of H : K × S × A × V^{n_s} → R^{n_h} with H̃ : K × A^{n_s} × V^{n_s×n_s} → R^{n_h}. For example, consider the simplest case in which a firm induces a worker to reveal whether she is well (s = 1) or sick (s = 2). The incentive constraints require that well workers reveal their health and are of the form:

H̃[{a_t(s^{t−1}, s)}, {V′(α|s^{t−1}, s)}] = u(1, a_t(s^{t−1}, 1)) + δ ∑_{s′∈S} V(s′, α|s^{t−1}, 1, s′)Q(1, s′)
− u(1, a_t(s^{t−1}, 2)) − δ ∑_{s′∈S} V(s′, α|s^{t−1}, 2, s′)Q(1, s′) ≥ 0,

where a_t(s^t) ∈ A is now the bundle of consumption and effort prescribed after health history s^t. In this case, W^V[s, a, m] = u(s, a) + δm and M^V[s, v′] = ∑_{s′∈S} v′(s′)Q(s, s′). If the health shocks are i.i.d., then M^V[v′] = ∑_{s′∈S} v′(s′)Q(s′) and it is more convenient to redefine the forward-looking state as the certainty equivalent of V, i.e. as Ṽ(α|s^{t−1}) = M^V[V′(α|s^{t−1})]. Forward-looking states then evolve according to

Ṽ(α|s^{t−1}) = M^V[W^V[s_t, a_t(s^t), Ṽ(α|s^t)]]

and the constraints become:

H̃[{a_t(s^{t−1}, s)}, {Ṽ(α|s^{t−1}, s)}] = u(1, a_t(s^{t−1}, 1)) + δṼ(α|s^{t−1}, 1) − u(1, a_t(s^{t−1}, 2)) − δṼ(α|s^{t−1}, 2) ≥ 0.

Slightly modified versions of all the results given below hold with H̃ replacing H.

4 Augmented Primal and a Recursive Primal Problem
We define an augmented primal problem in which state variables are introduced as explicit
choices rather than as functions of past actions. Our motive for introducing this problem
is that it, rather than the original one, is amenable to direct recursive decomposition. We
give a recursive formulation that decomposes the augmented problem into sub-problems
linked by state variables. The difficulties with this formulation motivate our subsequent
dual approach.

4.1 Augmented primal problem


Define a primal process π to be a plan α augmented with a process for backward- and forward-looking states {k_t, v_t}_{t=0}^∞. The set of primal processes is given by:

P = { π = (α, {k_t, v_t}_{t=0}^∞) : α ∈ 𝒜, k_0 ∈ K, v_0 ∈ V, ∀t ∈ N, k_t : S^{t−1} → K, v_t : S^t → V }.

The augmented primal problem is:

sup F[s_0, v_0]     (AP)

subject to π ∈ P, k_0 = k̄ and ∀t, s^t,

k_{t+1}(s^t) = W^K[k_t(s^{t−1}), s_t, a_t(s^t)],     (5)

v_t(s^t) = W^V[s_t, a_t(s^t), M^V[s_t, a_t(s^t), v_{t+1}(s^t)]],     (6)

and

H[k_t(s^{t−1}), s_t, a_t(s^t), v_{t+1}(s^t)] ≥ 0.     (7)

Thus, the augmented primal problem (AP) re-expresses constraints in terms of state processes.8 We record the following (obvious) fact.

Proposition 1. If P∗ is the optimal value for (P), then it is also the optimal value for (AP). α∗ solves (P) if and only if there is a state process {k∗_t, v∗_t}_{t=0}^∞ such that (α∗, {k∗_t, v∗_t}_{t=0}^∞) solves (AP).

8 Participation constraints are incorporated by adding v_0 ≥ v̄ to the constraint set.
4.2 Recursive Primal Problem
In this section we give a recursive primal formulation of a principal-agent problem.9 In
such a problem, a committed principal possessing no private information designs a con-
tract to motivate a group of agents. A forward-looking variable V 0 defining the principal’s
payoff function is the objective and does not enter the constraints. It is convenient to ex-
ploit this structure by separating the principal’s payoff V 0 from the other forward-looking
variables (typically utility promises to agents) and redefining V := {V i }in=v 1 to exclude V 0 .
The problem becomes:
sup V 0 (s0 , α)

subject to ∀t, st , (3) with V (and H, W V and MV ) redefined. The augmented version of this
problem is:
sup V 0 (s0 , α) (PA)

subject to π ∈ P, k0 = k̄ ∈ K and (5) to (7) with vt redefined to exclude v0t , i.e. vt = {vit }in=v 1 .
The aggregators W K , W V and MV may be used to decompose (PA) into a family of
sub-problems linked by elements in S , Rn k and Rn . It is useful to identify "state spaces"
v

on which these sub-problems are well-posed (i.e. have non-empty constraint sets). To that end define the "endogenous state space" X to be the largest subset of K × S × V satisfying the recursion:

    X = { (k, s, v) : ∃(a, k′, v′) ∈ A × K × V^{n_s} with k′ = W^K[k, s, a],
          v = W^V[s, a, M^V[s, a, v′]], H[k, s, a, v′] ≥ 0,
          and ∀s′ ∈ S, (k′, s′, v′(s′)) ∈ X }.    (8)

Crucially, while K and V are given exogenously or are easy to find, X is often neither. In addition, let:

    Γ(k, s, v) = { (a, k′, v′) ∈ A × K × V^{n_s} : k′ = W^K[k, s, a], v = W^V[s, a, M^V[s, a, v′]],
                   H[k, s, a, v′] ≥ 0 and ∀s′ ∈ S, (k′, s′, v′(s′)) ∈ X }.

Define:

    V(k, s) = {v : (k, s, v) ∈ X }

9 The principal-agent problem is a special case of (P). The more general problem (P) also has primal recursive formulations, see Kocherlakota (1996), Rustichini (1998) and, especially, Messner, Pavoni, and Sleet (2012b), Section 7. However, since our goal here is to briefly review the recursive primal approach and point out its limitations, we restrict ourselves to a recursive primal treatment of the simpler principal-agent problem.
and let W^{V,0} and M^{V,0} denote the time and stochastic aggregators for V^0.

Proposition 2. Let P^∗_0 ∈ R ∪ {−∞} be the optimal value for problem (PA). Then:

    P^∗_0 = sup_{V(k̄,s_0)} P^∗(k̄, s_0, v_0),    (9)

where P^∗ satisfies the recursion, for each (k, s, v) ∈ X,

    P^∗(k, s, v) = sup_{Γ(k,s,v)} W^{V,0}[s, a, M^{V,0}[s, a, P^∗(k′, v′)]],    (10)

with P^∗(k′, v′) = {P^∗(k′, s′, v′(s′))}_{s′=1}^{n_s}. In addition, (α^∗, {k^∗_t, v^∗_t}_{t=0}^∞) solves (PA) if and only if (i) k^∗_0 = k̄ and v^∗_0 ∈ G^∗_0 and (ii) for all t ∈ N, s^t ∈ S^t, (a^∗_t(s^t), k^∗_{t+1}(s^t), v^∗_{t+1}(s^t)) ∈ G^∗(k^∗_t(s^{t−1}), s_t, v^∗_t(s^{t−1})), where:

    G^∗_0 := argmax_{V(k̄,s_0)} P^∗(k̄, s_0, v)   and
    G^∗(k, s, v) := argmax_{Γ(k,s,v)} W^{V,0}[s, a, M^{V,0}[s, a, P^∗(k′, v′)]].

Proof. See Appendix B.

Note that the role of the 'first stage problem' (9) is to provide an optimal initial condition v^∗_0 for the forward-looking state variables; (10) then gives the primal Bellman equation. As Proposition 2 indicates, X is generally part of the solution to the problem along with P^∗_0 and P^∗. Stokey, Lucas, and Prescott (1989) document problems in which X is determined exogenously as a primitive of the problem. However, for many other problems, in particular those with forward-looking state variables, X is given implicitly and recovering it (i.e. solving the fixed point problem defined by (8)) is a major complication. This motivates the dual approach.
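Where no closed form for X is available, the fixed point problem defined by (8) can in principle be attacked by set iteration: start from all of K × V (for each shock) and repeatedly discard points that lose feasibility. The following Python sketch illustrates the idea on a purely hypothetical one-shock instance; the aggregators, constraint, grids and tolerances below are all invented for illustration and are not from the paper.

```python
import numpy as np

# Toy instantiation (all primitives hypothetical): one shock, scalar states,
# W_K[k, a] = 0.9k + a, W_V[a, v'] = a + delta*v', H[k, a] = k - a.
delta = 0.5
K = np.linspace(0.0, 1.0, 11)    # grid for backward-looking states k
V = np.linspace(0.0, 2.0, 21)    # grid for forward-looking states v
A = np.linspace(0.0, 0.5, 6)     # grid for actions a

def snap(x, grid):
    """Index of the nearest grid point, or None if x falls outside the grid."""
    if x < grid[0] - 1e-9 or x > grid[-1] + 1e-9:
        return None
    return int(np.argmin(np.abs(grid - x)))

# X is stored as a boolean array over the (k, v) grid; start from everything
# and discard points until the operator defined by (8) reaches a fixed point.
X = np.ones((K.size, V.size), dtype=bool)
for it in range(200):
    X_new = np.zeros_like(X)
    for i, k in enumerate(K):
        for j, v in enumerate(V):
            for a in A:
                if k - a < -1e-9:                  # H[k, a] >= 0 fails
                    continue
                ik = snap(0.9 * k + a, K)          # k' = W_K[k, a]
                jv = snap((v - a) / delta, V)      # v' solving v = a + delta*v'
                if ik is None or jv is None:
                    continue
                if abs(a + delta * V[jv] - v) > 0.06:   # v' not on the grid
                    continue
                if X[ik, jv]:                      # (k', v') must lie in X
                    X_new[i, j] = True
                    break
    if np.array_equal(X, X_new):
        break
    X = X_new

print(f"converged after {it + 1} iterations; X contains {X.sum()} grid points")
```

On a finite grid the iteration is monotone decreasing and therefore terminates; even in this tiny example the extra loop over candidate states and choices conveys the "additional layer of calculation" that the dual approach avoids.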

5 Recursive Dual
We begin this section by defining a Lagrangian for (AP). The Lagrangian involves the
product of constraint values with multipliers. We collect the former into an object called a
constraint process and the latter into an object called a dual process. Definitions of these
follow.

5.1 Lagrangians and Dual Problems
As a preliminary, we make a small adjustment to the definition of a primal process. Recall that backwards-looking state variables k_t were previously defined to be s^{t−1}-measurable. To fully exploit the recursive structure in the Lagrangian it is convenient to allow these variables to be s^t-measurable and to enforce s^{t−1}-measurability via their law of motion. Thus, from now on, unless further restricted, each t-dated variable (including k_t) in a primal process π = {k_t, a_t, v_t}_{t=0}^∞ is s^t-measurable.

A constraint process evaluates constraint functions inclusive of laws of motion at a given primal process. For each primal process π, let z^K_0(π) = k̄ − k_0 and, for all t ∈ N and s^t ∈ S^t, let:

    z^K_t(π)(s^t) = W^K[k_{t−1}(s^{t−1}), s_{t−1}, a_{t−1}(s^{t−1})] − k_t(s^t).

Then {z^K_t(π)}_{t=0}^∞ gives the values of the law of motion for backward-looking constraints (inclusive of the initial condition) at π. Similarly, define for all t ∈ {0} ∪ N, s^t ∈ S^t,

    z^V_t(π)(s^t) = W^V[s_t, a_t(s^t), M^V[s_t, a_t(s^t), v_{t+1}(s^t)]] − v_t(s^t)

and z^H_t(π)(s^t) = H[k_t(s^{t−1}), s_t, a_t(s^t), v_{t+1}(s^t)]. Then {z^V_t(π)}_{t=0}^∞ and {z^H_t(π)}_{t=0}^∞ give the values of the forward-looking law of motion and H constraints at π. These terms are collected into the constraint process ζ(π) = {z^j_t(π)}_{t=0, j∈J}^∞, J := {K, V, H}. The boundedness assumptions placed on primitives and the countable number of constraints ensure that for all π ∈ P, ζ(π) ∈ ℓ∞.10
A dual process contains summable ("countably additive") multipliers for the various constraints facing the decision-maker. Let θ^K = {q^K_t}_{t=0}^∞, with q^K_t : S^t → R^{n_k}, denote multipliers (co-states) for the backward-looking law of motion and θ^V = {q^V_t}_{t=0}^∞, with q^V_t : S^t → R^{n_v+1}, multipliers (co-states) for the forward-looking law of motion. Let θ^H = {q^H_t}_{t=0}^∞, with q^H_t : S^t → R^{n_h}, denote multipliers for the H-constraints. Collect these various multipliers into a dual process θ = {θ^j}_{j∈J} and define the set of (bounded) dual processes:

    Q = { θ : ∑_{j∈J} ∑_{t=0}^∞ ∑_{s^t∈S^t} δ̄^t ‖q^j_t(s^t)‖ < ∞ },

10 We use ℓ∞ to denote the set of sup-norm bounded, vector valued sequences: {{x_n}_{n=1}^∞ | x_n ∈ R^m, sup_{n∈N} ‖x_n‖ < ∞}, where ‖·‖ is the Euclidean norm on R^m. In our setting, m = ∑_{j∈J} n_j + 1 and x_n = {z^j_{t(n)}(s^t(n))}_{j∈J} for some enumeration of histories s^t(n).
with δ̄ ∈ (0, 1) the discount from the aggregator W^V. Define the Lagrangian:

    L(π, θ) = F[s_0, v_0] + ⟨θ, ζ(π)⟩,

where ⟨θ, ζ(π)⟩ = ∑_{j∈J} ∑_{t=0}^∞ ∑_{s^t∈S^t} δ̄^t {q^j_t(s^t) · z^j_t(π)(s^t)} and · is the usual vector dot product. The decision-maker's augmented primal problem (AP) may be re-expressed as a sup-inf problem:

    P^∗_0 := sup_P inf_Q L(π, θ).    (SI)

Its dual interchanges the infimum and supremum operations:

    D^∗_0 := inf_Q sup_P L(π, θ).    (IS)

Discussion of the relation between these problems is deferred until Section 8. Instead, in the remainder of this section we pursue a recursive formulation of (IS).11

5.2 Recursive Dual

The recursive dual formulation decomposes (IS) into sub-problems linked by co-state variables. We introduce some preliminary notation and concepts. We call p = (a, k, v′) a current primal choice where a ∈ A is a current action, k ∈ K is a current backwards-looking state and v′ ∈ V^{n_s} is a tuple of continuation forward-looking states, one for each future shock s′. Current primal choices belong to P = A × K × V^{n_s}.12 We call q = (q^H, y′) a current dual choice where q^H ∈ R^{n_h}_+ is a current H-constraint multiplier and y′ = (q^{K′}, q^{V′}) ∈ R^{n_s(n_k+n_v+1)} is a tuple of co-states for the next period's backward and forward-looking laws of motion. Current dual choices belong to Q = R^{n_h}_+ × R^{n_s(n_k+n_v+1)}. Let y = (q^K, q^V) ∈ Y := R^{n_k+n_v+1} denote a pair of co-state variables on current laws of motion.
denote a pair of co-state variables on current laws of motion.


The Lagrangian in (IS) may be expanded as:

D0∗ = inf sup F[s0 , v0 ] − qV V V K


0 · {v0 − W [ s0 , a0 , M [ s0 , a0 , v1 ]} + q0 · (k̄ − k0 )
Q P
+ δ̄ ∑ hθ, ζ (π )|s1 i, (11)
s1 ∈S

j j
with hθ, ζ (π )|s1 i = ∑J ∑∞ t t t
t=0 ∑ S t δ̄ qt+1 (s1 , s ) · zt+1 (π )(s1 , s ) the continuation of hθ, ζ (π )i
11 An initial participation constraint may be incorporated by appending zV V
−1 = v0 − v̄ and multiplier q−1 to
the constraint and dual process respectively.
12 Our notation convention is to use calligraphic letters P for sets of current actions and script letters P

for sets of stochastic processes.

16
after the realization of the first period shock s1 . Removing F[s0 , v0 ] − qV K
0 · v0 + q0 · k̄ from
(11) and fixing the initial co-states y0 = (q0K , qV
0 ) gives the following continuation dual prob-
lem:

D ∗ (s0 , y0 ) = inf sup −q0K · k0 + qV V V


0 · W [ s0 , a0 , M [ s0 , a0 , v1 ]] (12)
Q ( y0 ) P ( v0 )

+ q0H · H [k0 , s0 , a0 , v1 ] + δ̄ ∑ hθ, ζ |s1 i,


s1 ∈S

where Q (y0 ) omits y0 = (q0K , qV


0 ) from Q, P (v0 ) omits v0 from P. Collecting terms in (12)
involving the initial current primal choice p0 = (a0 , k0 , v1 ) gives the current "dual" payoff
J:

    J(s_0, y_0; q_0, p_0) = −q^K_0 · k_0 + q^V_0 · W^V[s_0, a_0, M^V[s_0, a_0, v_1]] + q^H_0 · H[k_0, s_0, a_0, v_1]
                            − δ̄ ∑_{s_1∈S} q^V_1(s_1) · v_1(s_1) + δ̄ ∑_{s_1∈S} q^K_1(s_1) · W^K[k_0, s_0, a_0].    (13)

Note that the terms in the second line of (13) are extracted from δ̄ ∑_{s_1∈S} ⟨θ, ζ|s_1⟩ in (12). Below we give explicit economic interpretations of the terms in J in the context of examples. Proposition 3 relates D^∗_0, D^∗ and J and gives the key dynamic programming result for dual value functions.

Proposition 3 (Value functions). The value D^∗_0 satisfies:

    D^∗_0 = inf_Y sup_V F[s_0, v] − q^V · v + q^K · k̄ + D^∗(s_0, q^K, q^V),    (14)

with, for all (s, y) ∈ S × Y,

    D^∗(s, y) = inf_Q sup_P J(s, y; q, p) + δ̄ ∑_{s′∈S} D^∗(s′, y′(s′)),    (15)

where y′(s′) = (q^{K′}, q^{V′})(s′).

Proof. See Appendix C.

The first stage problem (14) generates the initial co-states; (15) then gives the dual Bellman equation. Moving from the dual problem (IS) to the recursive dual problems (14) and (15) involves interchanging an infimum operation over future dual variables with a supremum operation over current primal ones. In general, interchanging such operations alters optimal values. But here the additive separability of the Lagrangian in these two sets of variables ensures that it does not. See the proof of Proposition 3 for details. Note that if the laws of motion for backward or forward-looking states are non-linear in these states, then it is necessary to work with the Lagrangian of the augmented primal to ensure this separability.

Remark 1. The function J may be interpreted as an augmented Hamiltonian. Suppose that W^V[s, a, m] = u(s, a) + δ̄m, M^V[s, a, v′] = ∑_S v′(s′)Q(s|s′) and H = 0; then J reduces to:

    J(s, y; q, p) = { δ̄ ∑_S q^{K′}(s′) − q^K } · k + δ̄ ∑_S { q^V Q(s|s′) − q^{V′}(s′) } · v′(s′)
                    + q^V · u(s, a) + δ̄ ∑_S q^{K′}(s′) · {W^K[k, s, a] − k}.

The terms in the second line isolate the current action and correspond to a classical Hamiltonian. J augments this with additional terms involving adjustments to the shadow value of backward and forward-looking states. Assuming differentiability of W^K and differentiating with respect to k gives a discrete time analogue of the co-state equation from optimal control. In our more general setting, current resource and incentive conditions are explicitly incorporated into J via the H function and linearity of J in the forward-looking states is not assumed.
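To spell out the co-state equation referred to in the remark, differentiate the reduced form of J in Remark 1 with respect to k (assuming W^K is differentiable and the optimum interior) and set the derivative to zero; the following derivation is added here for the reader's convenience and follows directly from the displayed expression:

```latex
0 = \frac{\partial J}{\partial k}
  = \Big\{\bar{\delta}\sum_{\mathcal{S}} q^{K\prime}(s') - q^{K}\Big\}
    + \bar{\delta}\sum_{\mathcal{S}} q^{K\prime}(s')\,
      \big(W^{K}_{k}[k,s,a] - 1\big)
\quad\Longrightarrow\quad
q^{K} = \bar{\delta}\sum_{s'\in\mathcal{S}} q^{K\prime}(s')\,W^{K}_{k}[k,s,a].
```

That is, today's co-state equals the discounted sum of tomorrow's co-states weighted by the marginal effect W^K_k of the backward-looking state, the discrete time counterpart of the optimal control co-state equation.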

Remark 2. Our recursive dual formulation relies entirely on dual co-state variables y_t to summarize the past. This contrasts with Marcet and Marimon (2011), who dualize a subset of constraints and make use of a mixture of primal and dual variables to summarize histories.

Remark 3. The primal "state" variables k and v continue to appear in the recursive dual problem. This allows us to accommodate non-(quasi-)linear laws of motion for such variables in our framework. However, they are no longer passed between sub-problems in the recursive dual setting and in this sense no longer function as state variables.13

Definition 1. Let F denote the set of proper functions D : S × Y → R ∪ {∞} that are not everywhere infinite valued. Define the dual Bellman operator B : F → F by, ∀(s, y) ∈ S × Y,

    B(D)(s, y) = inf_Q sup_P J(s, y; q, p) + δ̄ ∑_{s′∈S} D(s′, y′(s′)).

The following theorem recasts D^∗ as a fixed point of B. It is an immediate corollary of Proposition 3.

13 Notice also that they are restricted to the exogenous K × V^{n_s} and not the endogenous X. Choices of primal states inconsistent with X are (finitely) penalized via the Lagrangian.

Theorem 1. D^∗ = B(D^∗).
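Theorem 1 suggests computing D^∗ by iterating B. As a rough illustration, the Python sketch below applies a grid version of B to a hypothetical one-shock, one-co-state instance; the utility u, the grids, and the truncation of Y are all invented for illustration (truncating Y means homogeneity holds only approximately), and the inner sup is solved by exploiting its separability.

```python
import numpy as np

# Hypothetical instance: one shock, one co-state y = q^V, W_V = u(a) + delta*m,
# u(a) = -(a - 0.5)**2, actions a in [0, 1], promises v' in [0, 1], no H term:
# B(D)(y) = min_{y'} max_{a, v'} [ y*(u(a) + delta*v') - delta*y'*v' + delta*D(y') ].
delta = 0.8
Y = np.linspace(-2.0, 2.0, 81)    # truncated co-state grid
A = np.linspace(0.0, 1.0, 51)
u = -(A - 0.5) ** 2

def B(D):
    """One grid application of the dual Bellman operator."""
    Dnew = np.empty_like(D)
    for i, y in enumerate(Y):
        cy = np.max(y * u)                 # inner sup over the current action a
        # sup over v' in [0,1] of delta*(y - y')*v' equals max(0, delta*(y - y')),
        # so the inner sup separates from the outer min over y'.
        Dnew[i] = np.min(cy + np.maximum(0.0, delta * (Y[i] - Y)) + delta * D)
    return Dnew

# Value iteration from D_0 = 0 until the fixed point is (numerically) reached.
D = np.zeros_like(Y)
for n in range(500):
    Dn = B(D)
    if np.max(np.abs(Dn - D)) < 1e-10:
        break
    D = Dn
```

Because the continuation term enters with weight delta < 1 here, the iteration converges geometrically in this toy case; Section 6 shows what replaces this simple discounting argument in the general dual setting.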

To make the preceding discussion concrete we revisit the examples.

Example 1 (Risk sharing with limited commitment and Epstein-Zin preferences). This example lacks backward-looking state variables. The initial period problem is:

    D^∗_0 = inf_{R^2} sup_V λ · v_0 − q^V · v_0 + D^∗(s_0, q^V),

where recall λ is a pair of exogenous Pareto weights, q^V is a pair of initial co-states and v_0 is a pair of utility promises drawn from the exogenous feasibility set V. The recursive dual problem is as in (15), but without backward-looking states or co-states and with the current dual function:

    J(s, q^V; q, p) = ∑_{i=0,1} (q^{V,i} + q^{H,i}) [ (1−δ)/(1−µ) (a^i)^{1−µ} + δ ( ∑_{s′∈S} v^{i′}(s′)^σ Q(s|s′) )^{1/σ} ]
                      − ∑_{i=0,1} q^{H,i} w^i(s) − q^{H,2} ( ∑_{i=0,1} a^i − γ(s) ) − δ ∑_{s′∈S} q^{V′}(s′) · v′(s′).    (16)

The function J incorporates the "shadow value" of delivering utility to the agents (inclusive of relaxation of the incentive constraints) less the shadow costs of resources and continuation utility promises. □

Example 2 (Default with capital accumulation). In this case,

    D^∗_0 = inf_{R^3} sup_V λ · v_0 − q^V · v_0 + q^K · k̄ + D^∗(s_0, q^K, q^V).

The recursive dual problem is as in (15), but now with current dual function:

    J(s, q^K, q^V; q, p) = −q^K · k + q^{V,0} { f^0(a^0) + δ ∑_{s′∈S} v^{0′}(s′)Q(s|s′) }
                           + (q^{V,1} + q^{H,1}) { f^1(a^1) + δ ∑_{s′∈S} v^{1′}(s′)Q(s|s′) } − q^{H,1} w(k, s)
                           + ( δ ∑_{s′} q^{K′}(s′) + q^{H,2} ) ( γ(k, s) − ∑_{i=0,1} a^i ) − δ ∑_{s′∈S} q^{V′}(s′) · v′(s′).    (17)

Many of the terms in (17) have similar interpretations to those in (16). In addition, −q^K · k is the shadow cost of using k of the backward-looking state in the present and δ ∑_{s′∈S} q^{K′}(s′)(γ(k, s) − ∑_{i=0,1} a^i) is the shadow benefit of delivering γ(k, s) − ∑_{i=0,1} a^i of this state variable into the future. □

Example 3 (Optimal monetary policy). In this case, the period 0 dual value is given by:

    D^∗_0 = inf_{R^2} sup_V v^0_0 − q^V · v_0 + D^∗(q^V),

where v_0 = (v^0_0, v^1_0) is the period 0 government payoff and inflation, while the current dual function specializes to:

    J(q^V; q, p) = q^{V,0} { L(a, κa + δv^{1′}) + δv^{0′} } + q^{V,1} { κa + δv^{1′} } − δ q^{V′} · v′.    (18)

Here J incorporates the shadow value of delivering payoff to the government and inflation less the shadow cost of future payoff and inflation promises. □

State Spaces Specialized to the principal-agent case, Proposition 3 supplies a dual analogue of the value function component of Proposition 2 (the policy function component follows below). It relocates the dynamic programming to a state space of dual co-state variables. As previously emphasized, determining the endogenous set of feasible states in the recursive primal setting is problematic and adds another layer of calculation. The next result shows that in the dual setting (with bounded primal variables), the dual value function D^∗ is finite-valued on all of S × Y (= S × R^{n_k+n_v+1}). Thus, the effective dual state space, on which choice sets are non-empty and value functions finite, is immediately determined.

Proposition 4. D ∗ : S × Y → R.
Proof. See Appendix C.

The immediate determination of the state space is an important advantage of the dual approach. In addition, it is easily verified that each D^∗(s, ·) is positively homogeneous of degree one (see Lemma 1 below). This has the advantage that once the dual value functions D^∗(s, ·) are determined on the unit circle C = {y ∈ Y | ‖y‖ = 1}, they are determined everywhere via positive scaling. From a practical point of view, the state space may be identified with S × C. To make this concrete, consider Example 2. In this example, there are two co-states (associated with capital and borrower payoffs) and the effective dual state space is simply S copies of the unit circle in R^2. In contrast, in the primal formulation of the problem the state space is an implicit subset of R_+ × R describing the set of incentive-feasible capitals and borrower payoffs. This would have to be calculated separately, adding an extra layer of calculation. We take up the issue of how to approximate value functions on C in Section 9. Less positively, the homogeneity of candidate value functions combined with the unboundedness of the current dual set Q (i.e. the set of current dual choices in (15)) disrupts the conventional approach to proving that B is a contraction. We address this issue in Section 6.
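A minimal sketch of how positive homogeneity is exploited in practice: store candidate values only on (a discretization of) the unit circle C and extend by scaling. The function stored on C below is a made-up sub-linear function, purely for illustration.

```python
import numpy as np

# Two co-states as in Example 2, so C is the unit circle in R^2. The values
# tabulated on C are a placeholder sub-linear function, not from the paper.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
Dc = np.abs(np.cos(angles)) + 2.0 * np.abs(np.sin(angles))   # D restricted to C

def D(y):
    """Extend to all of Y via degree-1 homogeneity: D(y) = ||y|| * D(y/||y||)."""
    r = float(np.hypot(y[0], y[1]))
    if r == 0.0:
        return 0.0                 # homogeneity of degree 1 forces D(0) = 0
    th = np.arctan2(y[1], y[0]) % (2.0 * np.pi)
    # nearest stored angle, taking the wrap-around at 2*pi into account
    i = int(np.argmin(np.abs((angles - th + np.pi) % (2.0 * np.pi) - np.pi)))
    return r * Dc[i]
```

Scaling the argument then scales the value exactly, so only the angular dimension has to be discretized, which is the practical payoff of identifying the state space with S × C.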

Policies We now turn to policies. For arbitrary sets C and E and function g : C × E → R, define the argminmax operation:

    argminmax_{C|E} g = { (c^∗, e^∗) : c^∗ ∈ argmin_C sup_E g(c, e) and e^∗ ∈ argmax_E g(c^∗, e) }.

The solution to the sequential dual (IS) is given by:

    Λ^{IS} := argminmax_{Q|P} L(π, θ).

On the other hand, the solution to the recursive dual is described by a set:

    G^{IS}_0 = argminmax_{Y|V} F[s_0, v] − q^V · v + q^K · k̄ + D^∗(s_0, q^K, q^V)

and a correspondence

    G^{IS}(s, y) = argminmax_{Q|P} J(s, y; q^H, y′, p) + δ̄ ∑_{s′∈S} D^∗(s′, y′(s′)).

Any element (θ^∗, π^∗) in Λ^{IS} ⊂ Q × P implies an initial (v^∗_0, y^∗_0) = (v^∗_0, q^{K∗}_0, q^{V∗}_0) and a sequence of multipliers and choices {q^∗_t, p^∗_t}_{t=0}^∞, with q^∗_t = (q^{H∗}_t, y^∗_{t+1}) = (q^{H∗}_t, q^{K∗}_{t+1}, q^{V∗}_{t+1}). On the other hand, such a sequence can be recovered from G^{IS}_0 and G^{IS}: (y_0, v_0) ∈ G^{IS}_0 and (q_t(s^t), p_t(s^t)) ∈ G^{IS}(s_t, y_t(s^t)) for each t, s^t. The next proposition relates policies from the dual and the recursive dual.

Proposition 5 (Policy functions). (θ^∗, π^∗) ∈ Λ^{IS} only if (y^∗_0, v^∗_0) ∈ G^{IS}_0 and for each t ∈ N, s^t ∈ S^t, (q^{H∗}_t(s^t), y^∗_{t+1}(s^t), p^∗_t(s^t)) ∈ G^{IS}(s_t, y^∗_t(s^t)). Conversely, (θ^∗, π^∗) ∈ Λ^{IS} if (y^∗_0, v^∗_0) ∈ G^{IS}_0, for each t ∈ N, s^t ∈ S^t, (q^{H∗}_t(s^t), y^∗_{t+1}(s^t), p^∗_t(s^t)) ∈ G^{IS}(s_t, y^∗_t(s^t)) and:

    lim_{T→∞} δ̄^{T+1} ∑_{S^{T+1}} D^∗(s^{T+1}, y^∗_{T+1}(s^{T+1})) ≥ 0.    (T)

Proof. Appendix C.
Example 1 (Risk sharing with limited commitment and Epstein-Zin preferences; Policies). From (15) and (16) it follows that the consumptions a = (a^0, a^1) are chosen to solve the "Pareto problems":

    max_{R_+} (q^{V,i} + q^{H,i}) (1−δ)/(1−µ) (a^i)^{1−µ} − q^{H,2} a^i,   i = 0, 1.

It is easily shown that:

    a^i = r^i/(1 + r^i) γ(s),

where r^i = ( (q^{V,i} + q^{H,i}) / (q^{V,j} + q^{H,j}) )^{1/µ}, j = 0, 1, j ≠ i. The continuation forward-looking states are chosen to solve:

    max_V ∑_{i=0,1} (q^{V,i} + q^{H,i}) ( ∑_{s′∈S} v^{i′}(s′)^σ Q(s|s′) )^{1/σ} − ∑_{i=0,1} ∑_{s′∈S} q^{V,i′}(s′) v^{i′}(s′).    (19)

If the boundaries implied by V are non-binding, then (19) implies that the co-states (endogenous Pareto weights) evolve as:

    [ ∑_{s′∈S} ( q^{V,i′}(s′) / Q(s′|s) )^{σ/(σ−1)} Q(s′|s) ]^{(σ−1)/σ} = q^{V,i} + q^{H,i},

i.e. the stochastic aggregator of an agent's (normalized) continuation Pareto weights is adjusted upwards if the multiplier on her incentive constraint q^{H,i} is positive.14 If σ ∈ (0, 1), then the stochastic aggregator is concave and increments to low valued continuation utilities are more valuable than increments to high valued ones. Consequently, in contrast to the standard expected utility case, following a binding incentive constraint (a positive q^{H,i} value), the agent's continuation Pareto weight is increased more in low continuation utility states than in high, i.e. the incremental reward to keep the agent inside the risk sharing arrangement is skewed towards these states. To see this, note that the sub-problem (19) implies (absent binding boundaries):

    q^{V,i′}(s′) / Q(s|s′) = (q^{V,i} + q^{H,i}) ( v^{i′}(s′) / ( ∑_{s″∈S} v^{i′}(s″)^σ Q(s|s″) )^{1/σ} )^{σ−1}. □
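For concreteness, the consumption formula above is easy to evaluate numerically. All multiplier and parameter values below are hypothetical; the point is only to illustrate the comparative static discussed in the text (a binding incentive constraint for agent 1 raises her consumption share).

```python
import numpy as np

# Hypothetical values for mu, gamma(s) and the multipliers.
mu = 2.0
gamma_s = 1.0                      # current aggregate resources gamma(s)
qV = np.array([0.6, 0.4])          # co-states (current Pareto weights)
qH = np.array([0.0, 0.2])          # agent 1's incentive constraint binds

w = qV + qH                        # effective Pareto weights q^{V,i} + q^{H,i}
r = (w / w[::-1]) ** (1.0 / mu)    # r^i, with j the other agent
a = r / (1.0 + r) * gamma_s        # a^i = r^i/(1 + r^i) * gamma(s)
```

With these numbers the effective weights are equalized (0.6 each), so consumption is split evenly, whereas with qH = 0 agent 1 would receive strictly less; the shares always exhaust gamma(s) since r^1 = 1/r^0.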

An explicit solution to Example 1 is computed in Section 9. Solutions to the other examples are discussed in Section 7, where we exploit or impose additional structure.

14 Note that the utility certainty equivalent uses the power σ, but the Pareto weight aggregator uses the dual or conjugate exponent σ/(σ−1).

6 Contraction

This section establishes sufficient conditions for B to be contractive on an appropriate space of functions. The combination of an unbounded dual value function and an unbounded dual constraint correspondence15 is an obstacle to conventional approaches to proving contractivity.16 Following Thompson (1963), Marinacci and Montrucchio (2010) and especially Rincón-Zapatero and Rodríguez-Palmero (2003), we pursue a different approach. The basic idea is to restrict attention to spaces of functions having a certain scaling property. Specifically, for any distinct pair g_1, g_2, scaleability requires a positive number b ∈ R_+ satisfying bg_1 ≥ g_2. Distances between function pairs (g_1, g_2) are then identified with the log of the smallest scaling factor b such that both bg_1 ≥ g_2 and bg_2 ≥ g_1. Scaleability of a set of candidate value functions is ensured via a renormalization involving bounding value functions that are themselves scaleable (after renormalization). Since the optimal dual value function is convex and positively homogenous (see below) in co-states, we restrict attention to candidate value functions with these properties. Consequently, it is sufficient for us to have scaleability on the unit circle in the co-state space (i.e. on a compact set) and to define our distance measures accordingly. The interval of convex, positively homogenous functions between the bounding value functions is a complete metric space. If B is a self-map on this interval, then contractivity follows from monotonicity and concavity of B, the properties of the bounding value functions and the homogeneity of candidate value functions.17

The following definition is useful.

Definition 2. A function D : Y → R is sub-linear if (i) D(·) is convex and (ii) D(·) is positively homogeneous of degree 1. A function D : S × Y → R is sub-linear if each D(s, ·) is sub-linear.

Lemma 1 indicates the importance of the previous definition for our setting.

Lemma 1. (i) D^∗ is sub-linear. (ii) If D : S × Y → R is sub-linear, then B(D) is sub-linear.

Proof. See Appendix D.

15 The current dual choice set is Q = R^{n_h}_+ × R^{n_s(n_k+n_v+1)}.
16 When the optimal value function is unbounded and the constraint correspondence compact-valued it is often possible to prove contractivity on a space of weighted-norm bounded functions. In the dual setting, this approach is disrupted by the unboundedness of the constraint correspondence (for multipliers and co-states).
17 Thus, Blackwell's Theorem is avoided.
Once again, let C = {y ∈ Y | ‖y‖ = 1} denote the unit circle in R^{n_k+n_v+1}. The key assumption ensuring contractivity is the following.

Assumption 3 (Bounds). There is a triple of functions D̲ : S × Y → R, D : S × Y → R and D̄ : S × Y → R and a pair of numbers ε_0, ε_1 > 0 such that for each s, D(s, ·) is continuous and positively homogeneous of degree 1, both D̲(s, ·) and D̄(s, ·) are continuous and for all (s, y) ∈ S × C, (i) D̲(s, y) + ε_0 ≤ D(s, y) ≤ D̄(s, y), (ii) D(s, y) ≤ B(D)(s, y) and B(D̄)(s, y) ≤ D̄(s, y) and (iii) D̲(s, y) + ε_1 < B(D̲)(s, y).

We discuss the selection of bounding functions in the context of specific examples below. Note, however, if D̲ satisfies Assumption 3 (iii) and D̲ ≤ D^∗ ≤ D̄, then, from the monotonicity of B and Theorem 1, for all (s, y) ∈ S × C,

    D̲(s, y) + ε < B(D̲)(s, y) ≤ B(D^∗)(s, y) = D^∗(s, y) ≤ D̄(s, y).

Thus, if each B(D̲)(s, ·) is continuous, then D may be set equal to B(D̲). Given a triple of functions D̲, D and D̄ satisfying Assumption 3, let:

    G = { D′ : S × Y → R | D′ is sub-linear and D ≤ D′ ≤ D̄ }.

Define the "Thompson-like" metric d : G × G → R_+ according to:

    d(D_1, D_2) = sup_{S×C} | ln( (D_1(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) − ln( (D_2(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) |
                ≤ sup_{S×C} ln( (D̄(s, y) − D̲(s, y)) / (D(s, y) − D̲(s, y)) ) < ∞,

where the finiteness stems from Assumption 3.18 That (G, d) is a complete metric space is shown next.

Lemma 2. (G, d) is a complete metric space.

Proof. See Appendix D.

Proposition 6 verifies that B is a contraction on G. It relies on the concavity (and monotonicity) of B rather than any discounting-type conditions. This makes it well suited to the present setting, where concavity of B is easy to show but discounting (with respect to a suitable bounding norm) is not.

18 In particular, it follows from D̄(s, y) ≥ D(s, y) ≥ D̲(s, y) + ε_0, the compactness of C and the continuity of the functions D̲, D and D̄.
Proposition 6. Let Assumption 3 hold. There is a ρ ∈ [0, 1) such that for all D1 , D2 ∈ G ,
d(B( D1 ), B( D2 )) ≤ ρd( D1 , D2 ), i.e. B is a contraction on (G , d) with modulus of contraction ρ.

Proof. See Appendix D.
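As a rough numerical illustration, the Thompson-like distance is straightforward to evaluate once candidate functions and the bounding functions have been tabulated on (a grid over) S × C. All four functions below are invented placeholders standing in for D̲, D̄ and two candidates; with one shock state, S × C reduces to C.

```python
import numpy as np

# Grid over C (one shock state). D_lo and D_hi play the roles of the lower and
# upper bounding functions; all values here are hypothetical.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
D_lo = np.full_like(theta, -1.5)
D_hi = np.full_like(theta, 3.0)

def thompson(D1, D2):
    """sup over the grid of | ln((D1-D_lo)/(D_hi-D_lo)) - ln((D2-D_lo)/(D_hi-D_lo)) |."""
    g1 = np.log((D1 - D_lo) / (D_hi - D_lo))
    g2 = np.log((D2 - D_lo) / (D_hi - D_lo))
    return float(np.max(np.abs(g1 - g2)))

D1 = np.cos(theta)          # two candidates lying strictly between the bounds
D2 = 0.5 * np.cos(theta)
d12 = thompson(D1, D2)
```

The renormalization by the bounds is what keeps the distance finite even though the candidates themselves are unbounded off the unit circle; in an actual implementation one would monitor d(Bⁿ(D_0), Bⁿ⁺¹(D_0)) to exploit the error bounds delivered by Theorem 2.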

Application of the contraction mapping theorem yields that B admits a unique fixed point in G.

Theorem 2. Let Assumption 3 hold and assume that D ≤ D^∗ ≤ D̄. Then D^∗ is the unique fixed point of B in G. Also, there is a ρ ∈ [0, 1) such that for any D_0 ∈ G, B^n(D_0) → D^∗ in d, with d(B^n(D_0), D^∗) ≤ ρ^n d(D_0, D^∗) ≤ ρ^n d(D, D̄).

Proof. D^∗ is sub-linear by Lemma 1 and bounded below by D and above by D̄ by assumption. Thus, D^∗ ∈ G. Also by Lemma 1, if D ∈ G, then B(D) is sub-linear and by the monotonicity of B and Assumption 3 it is bounded below by D and above by D̄. Thus, B : G → G. By Proposition 6, it is contractive on G. The results in the theorem then stem from the contraction mapping theorem.

Application of Theorem 2 requires bounding functions satisfying Assumption 3. Such


functions are often easy to derive in the context of particular applications using actions
and "large" values from the bounding set of state variables that strictly satisfy current
constraints. The following examples illustrate.19

Example 1 (Risk sharing with limited commitment and Epstein-Zin preferences). Let V = [v̲, v̄]^2 and assume there is a resource-feasible consumption profile that gives each agent strictly more than autarky if combined with autarkic continuation payoffs and strictly less than the best possible payoff if combined with best possible continuation payoffs, i.e. an ã ∈ A^{n_s} and a ξ > 0 such that for each s ∈ S, γ(s) > ∑_{i=0}^1 ã^i(s), and for each s ∈ S and i ∈ {0, 1},

    v̄ − ξ ≥ (1−δ)/(1−µ) (ã^i(s))^{1−µ} + δv̄ > (1−δ)/(1−µ) (ã^i(s))^{1−µ} + δ ( ∑_{s′∈S} w^i(s′)^σ Q(s|s′) )^{1/σ} > w^i(s) + ξ.    (20)

19 For application of Theorem 2, it is sufficient to know (i) that bounding functions satisfying Assumption 3 exist and (ii) that a given function D_0 lies between them and can thus serve as an initial condition in a value iteration. Explicit calculation of the bounding functions is unnecessary. This contrasts with results relying on monotone (not contractive) operators, which require an upper or lower bound to the true value function as an initial condition. In addition, as always, the contraction result allows us to calculate error bounds and rates of convergence and is, thus, an improvement on results relying only on monotone iterations and pointwise convergence of iterates.

Set:

    D̄(s, q^V) = ∑_{i=0,1} q^{V,i} φ^i(q^{V,i}, s),   φ^i(q^{V,i}, s) := { v̄ if q^{V,i} ≥ 0; v̲ if q^{V,i} < 0 }

and

    D̲(s, q^V) = ∑_{i=0,1} { q^{V,i} ψ^i(q^{V,i}, s) + |q^{V,i}| ξ },   ψ^i(q^{V,i}, s) := { w^i(s) if q^{V,i} ≥ 0; v̄ if q^{V,i} < 0 }.

It is easy to see that D̄ is sub-linear, D̲ is continuous and positively homogenous and for all (s, q^V) ∈ S × C, D̲(s, q^V) < D̄(s, q^V). In addition, for v̄ large enough, these definitions and (20) also ensure D̲ < D^∗ ≤ D̄. In Appendix D, we show that given (20), there exists an ε > 0 such that for all (s, q^V) ∈ S × C, D̲(s, q^V) + ε < B(D̲)(s, q^V). D may be set equal to B(D̲) and the conditions of Assumption 3 are satisfied. □

Example 2 (Default with capital accumulation). We give mild conditions that ensure the existence of bounding functions satisfying Assumption 3 for the default problem. To economize on space we do so only for the problem without shocks: γ(k, s) ≡ γ(k) and w(k, s) ≡ w(k). Assume a k̄ such that γ(k̄) = k̄ > 0. Let V = [v̲^0, v̄^0] × [v̲^1, v̄^1], [0, k̄] ⊂ K and A = A^0 × A^1, with A^i the action set of agent i. Suppose there is a small ξ^V > 0 and an ã^1 ∈ A^1 such that the following inequalities are satisfied:

    v̄^1 − ξ^V ≥ f^1(ã^1) + δv̄^1 > f^1(ã^1) + δw(k̄) > f^1(ã^1) + δw(0) ≥ w(k̄) > w(0) + ξ^V.    (21)

In addition, for some small ξ^K > 0, suppose that a̲^0 = −ã^1 − ξ^K and ā^0 = γ(k̄) − ã^1 − ξ^K are in A^0 and, hence, feasible for agent 0. Note that negative values for a^0 are natural if agent 0 is a risk neutral lender. Assume also that for ã^0 ∈ {a̲^0, ā^0}, v̄^0 − ξ^V ≥ f^0(ã^0) + δv̄^0 > f^0(ã^0) + δv̲^0 > v̲^0 + ξ^V. Let:

    ψ^K(q^K) := { −k̄ if q^K ≥ 0; 0 if q^K < 0 },
    ψ^0(q^{V,0}) := { v̲^0 if q^{V,0} ≥ 0; v̄^0 if q^{V,0} < 0 },
    ψ^1(q^{V,1}) := { w(0) if q^{V,1} ≥ 0; v̄^1 if q^{V,1} < 0 }.

In Appendix D we show that the following are valid bounding functions:

    D̄(q^K, q^V) = ∑_{i=0,1} q^{V,i} φ^{V,i}(q^{V,i}) + q^K ψ^K(−q^K),   φ^{V,i}(q^{V,i}) := { v̄^i if q^{V,i} ≥ 0; v̲^i if q^{V,i} < 0 },

    D̲(q^K, q^V) = ∑_{i=0,1} { q^{V,i} ψ^i(q^{V,i}) + |q^{V,i}| ξ^V } + q^K ψ^K(q^K) − |q^K| ξ^K. □

Example 3 (Optimal monetary policy). Let V = ∏_{i=0,1} [v̲^i, v̄^i] denote a set of possible government payoffs and inflation rates. Assume an ã ∈ A = [a̲, ā] and ξ > 0 such that:

    ( v̄^0 − ξ ; v̄^1 − ξ ) ≥ ( L(ã, κã + δv̄^1) ; κã ) + δ( v̄^0 ; v̄^1 )
                           ≥ ( L(ã, κã + δv̲^1) ; κã ) + δ( v̲^0 ; v̲^1 ) > ( v̲^0 + ξ ; v̲^1 + ξ ).    (22)

It may be verified that:

    D̄(q^V) = ∑_{i=0}^1 q^{V,i} φ^i(q^{V,i}),   φ^i(q^{V,i}) = { v̄^i if q^{V,i} ≥ 0; v̲^i if q^{V,i} < 0 },

    D̲(q^V) = ∑_{i=0}^1 { q^{V,i} ψ^i(q^{V,i}) + |q^{V,i}| ξ },   ψ^i(q^{V,i}) = { v̲^i if q^{V,i} ≥ 0; v̄^i if q^{V,i} < 0 },

and D = B(D̲) satisfy all desired conditions.20 □

7 Quasi-linearity in backward and forward state variables

Many problems have aggregators and constraint functions that are quasi-linear in k or v or both.21 Exploiting this structure can lead to considerable simplification. Specifically, it is possible to work with the dual of the original rather than the augmented problem, i.e. (P) rather than (AP). Backward and forward primal states k_t and v_t are then removed from the analysis along with the equality constraints describing their evolution. In addition, the co-state variables q^V_t are no longer explicit choices (they are determined as functions of q^H multipliers). All of this simplifies optimizations. Below we describe the modified recursive dual problems that emerge, first for problems in which all laws of motion are quasi-linear in primal states and then, via an example, for those in which some are.

20 The verification is similar to that given for the limited commitment case in Appendix D.
21 For example, Messner, Pavoni, and Sleet (2012b) only considers simplified problems of this sort.
7.1 Fully quasi-linear problems

Assume that W^K is quasi-linear in k:

    W^K[k, s, a] = A(s)k + B(s, a),

for some functions A : S → R^{n_k}_+ and B : S × A → R^{n_k}. The functions K_{t+1} are defined to be consistent with this aggregator:

    K_{t+1}(k̄, α|s^t) = ∑_{τ=0}^t [ ∏_{j=τ+1}^t A(s_j) ] B(s_τ, a_τ(s^τ)) + [ ∏_{j=0}^t A(s_j) ] k̄,    (23)

where in what follows it is useful to make the dependence of backward-looking states on the initial value k̄ ∈ K ⊂ R^{n_k} explicit. The requirement that the K_t functions and K are bounded is no longer imposed.22
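The closed form (23) is just the unrolled linear recursion k_{t+1} = A(s_t)k_t + B(s_t, a_t). A quick numerical check of the two representations along a single, hypothetical scalar shock history:

```python
import numpy as np

rng = np.random.default_rng(0)

# One made-up scalar history of length T: A(s_j) and B(s_tau, a_tau) values.
T = 6
A_path = rng.uniform(0.9, 1.1, size=T)
B_path = rng.uniform(-0.1, 0.2, size=T)
k_bar = 1.0

# Forward recursion k' = A(s)k + B(s, a).
k = k_bar
for t in range(T):
    k = A_path[t] * k + B_path[t]

# Closed form (23): sum_tau [prod_{j > tau} A] * B_tau + [prod_j A] * k_bar.
# Note np.prod of an empty slice is 1, matching the empty product convention.
k_closed = sum(np.prod(A_path[tau + 1:]) * B_path[tau] for tau in range(T))
k_closed += np.prod(A_path) * k_bar
```

The two computations agree to floating point precision, which is exactly the sense in which the K_t functions are "consistent with" the aggregator.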
Assume that the shock transition is independent of any action, Q : S → P(S), and let Q^t(s_0, s^t) denote the induced probability over history s^t given seed shock s_0. Turning to forward-looking states, assume that W^V is quasi-linear in m:

    W^V[s, a, m] = f(s, a) + δm,    (24)

with f : S × A → R^{n_v+1} a bounded function and δ a non-negative (n_v+1)-dimensional diagonal matrix with elements bounded by δ̄. Assume that M^V is linear in v′:

    M^V[s, v′] = ∑_{s′∈S} v′(s′)Q(s, s′).    (25)

V is now defined to be consistent with these aggregators:

    V(s_0, α) = ∑_{t=0}^∞ δ^t ∑_{s^t∈S^t} f(s_t, a_t(s^t)) Q^t(s_0, s^t).

The composition of W^V and M^V is quasi-linear in v′. The constraint function H is obtained from a family of n_h × n_k matrices N^K(s), a function h : K × S × A → R^{n_h} and a family of n_h × (n_v+1) matrices N^V(s, s′):

    H[k, s, a, v′] = N^K(s)k + h(s, a) + ∑_{s′∈S} N^V(s, s′)v′(s′)Q(s, s′).

22 In many applications K_t is capital and K = R^{n_k}_+.
Finally, F is assumed to be linear in forward states v: F[s, v] = q^V_0 · v. Combining these assumptions, (P) becomes:

    sup q^V_0 · V(s_0, α)    (QL-P)

subject to α ∈ A and, for all t, s^t,

    N^K(s_t)K_t(k̄, α|s^{t−1}) + h(s_t, a_t(s^t)) + ∑_{s′∈S} N^V(s_t, s′)V(s′, α|s^t, s′)Q(s_t, s′) ≥ 0.

Various problems satisfy these types of assumptions.

Example 2 (Default with capital accumulation, AK version). We specialize Example 2 to place it within the quasilinear framework. Since W^K[k, s, a] = γ(k, s) − ∑_{i=0,1} a^i, quasi-linearity of W^K in k follows if the production function is specialized to γ(k, s) = γ(s)k. Quasi-linearity of the incentive ("no default") constraint in k and v follows if the default value is given by w(k, s) = w(s)k.23 Then our general notation specializes to: A(s) = γ(s), B(a) = −∑_{i=0,1} a^i,

    N^K(s) = ( −w(s) ; γ(s) ),   h(a) = ( f^1(a^1) ; −∑_{i=0,1} a^i ),   and   N^V = ( δ ; 0 ).

Note that a small modification of the assumptions in Example 2 that removes capital and incorporates an incentive constraint for agent 0 gives the limited commitment model of Kocherlakota (1996). On the other hand, the removal of the incentive constraints gives a standard AK growth model.
Since (QL-P) incorporates only the $H$-function constraints, the constraint process is given simply by $\zeta^H(k, \alpha) = \{z^H_t(k, \alpha)\}_{t=0}^{\infty}$, with
$$z^H_t(k, \alpha)(s^t) = N^K(s_t)K_t(k, \alpha|s^{t-1}) + h(s_t, a_t(s^t)) + \sum_{s' \in S} N^V(s_t, s')V(s', \alpha|s^t, s')Q(s_t, s').$$
Because neither the $K_t$ functions nor the $h$ function need be bounded, constraint processes need not belong to the normed space $\ell_\infty$. However, we assume that $A$, $B$ and $h$ are such that for all $k \in K$ and $\alpha \in \mathcal{A}$, $\zeta^H(k, \alpha)$ satisfies $\sup_{t, s^t} \|z^H_t(k, \alpha)(s^t)\|/M_t(s^t) < \infty$ for some positive-valued process $\{M_t\}$.24 Consequently, the constraint process normalized by $\{M_t\}$ is in $\ell_\infty$.
23 This assumption on default values is made in Cooley, Marimon, and Quadrini (2004).
24 In bounded problems with $\bar{A} := \max_S A(s) \le 1$ and $h$ bounded, the natural candidate for $\{M_t\}$ is the constant process: $\forall t, s^t$, $M_t(s^t) = 1$. In growth models with a positive-valued $A$ process and $\bar{A} := \max_S A(s) > 1$, the natural candidate is $\forall t, s^t$, $M_t(s^t) = A^t(s^t) := \prod_{j=0}^{t} A(s_j)$.
The Lagrangian
$$L(\alpha, \theta^H) = q^V_0 \cdot V(s_0, \alpha) + \langle \theta^H, \zeta^H(\bar{k}, \alpha) \rangle$$
allows the following dual problem to be associated with (QL-P):
$$D^*_0 = \inf_{\mathcal{Q}} \sup_{\mathcal{A}} L(\alpha, \theta^H), \tag{26}$$
with $\mathcal{Q} = \{\theta^H \,|\, q^H_t(s^t) \ge 0,\; \sum_{t=0}^{\infty} \sum_{S^t} q^H_t(s^t)M_t(s^t) < \infty\}$. Crucially, the quasi-linearity of the aggregators ensures that this Lagrangian has the necessary separability for a recursive dual approach.
As a first step, we recover the continuation dual problem from (26). The definition of $\zeta^H$, (23) and straightforward algebra imply that the dual problem (26) can be rewritten as:
$$D^*_0 = \inf_{\mathcal{Q}} \sup_{\mathcal{A}} \sum_{t=0}^{\infty} \sum_{S^t} q^H_t(s^t) \cdot N^K(s_t)A^t(s^t) \cdot \bar{k} + q^V_0 \cdot V(s_0, \alpha) + \langle \theta^H, \zeta^H(0, \alpha) \rangle$$
$$= \inf_{\mathbb{R}^{n_k}} \Big\{ q^K_0 \cdot \bar{k} + \inf_{\mathcal{Q}|q^K_0} \sup_{\mathcal{A}} q^V_0 \cdot V(s_0, \alpha) + \langle \theta^H, \zeta^H(0, \alpha) \rangle \Big\}, \tag{27}$$
where $\mathcal{Q}|q^K_0 = \{\theta^H \in \mathcal{Q} \,|\, q^K_0 = T(\theta^H)\}$ and $T$ is the linear map $T(\theta^H) = \sum_{t=0}^{\infty} \sum_{S^t} \{q^H_t(s^t) \cdot N^K(s_t)A^t(s^t)\}$. In the first line of (27) terms involving $\bar{k}$ are factored out of the Lagrangian, while in the second line the infimum over dual processes is broken into two steps: an infimum over a co-state for the backward-looking state followed by a (more) constrained infimum over dual processes. Equation (27) motivates the following choice of continuation dual problem, for each $y \in Y = \{(q^K, q^V) \in \mathbb{R}^{n_k} \times \mathbb{R}^{n_v+1} : \mathcal{Q}|q^K \ne \emptyset\}$,
$$D^*(s, y) = \inf_{\mathcal{Q}|q^K} \sup_{\mathcal{A}} q^V \cdot V(s, \alpha) + \langle \theta^H, \zeta^H(0, \alpha) \rangle. \tag{28}$$

Now the dual co-state space $Y = \{(q^K, q^V) \in \mathbb{R}^{n_k} \times \mathbb{R}^{n_v+1} : \mathcal{Q}|q^K \ne \emptyset\} = \{q^K \in \mathbb{R}^{n_k} : \mathcal{Q}|q^K \ne \emptyset\} \times \mathbb{R}^{n_v+1}$ may be a proper subset of $\mathbb{R}^{n_k} \times \mathbb{R}^{n_v+1}$. In particular, to guarantee a continuation dual problem with a non-empty constraint set, $q^K$ must be in the range of the linear map $T$ on $\mathcal{Q}$, i.e. there must be a non-negative valued process $\theta^H \in \mathcal{Q}$ such that $q^K = T(\theta^H)$. Since $\mathcal{Q}$ is a cone and $T$ is linear, the range of $T$ is also a cone and in several relevant applications is easy to find.25

We now turn to the recursive form of (28). This is modified in several ways from previous sections. First, the terms $\{k_t, v_t\}$ are substituted out of the problem; second, the co-state variable $q^V$ is no longer chosen directly; rather, it evolves as a function of initial values and accumulated multipliers $q^H$. On the other hand, in general, $q^{K\prime}$ must still be picked: it is a forward-looking variable and is not (generally) determined by past $q^H$ multipliers. Define the current dual correspondence $\mathcal{Q} : \mathbb{R}^{n_k} \to 2^{\mathbb{R}^{n_h + n_s n_k}}$,
$$\mathcal{Q}(q^K) = \Big\{ (q^H, q^{K\prime}) \in \mathbb{R}^{n_h}_+ \times \mathbb{R}^{n_k \times n_s} \,\Big|\, q^K = q^H \cdot N^K(s) + A(s)\sum_{s' \in S} q^{K\prime}(s') \Big\},$$
and the current dual objective $J : S \times \mathbb{R}^{n_v+1} \times \mathbb{R}^{n_h}_+ \times \mathbb{R}^{n_k \times n_s} \times A \to \mathbb{R}$,
$$J(s, q^V; q^H, q^{K\prime}, a) = q^V \cdot f(s, a) + \sum_{s' \in S} q^{K\prime}(s') \cdot B(s, a) + q^H \cdot h(s, a).$$
Finally, define the law of motion for co-states $q^V$:
$$\varphi(s, q^V; q^H)(s') = \frac{1}{\bar{\delta}}\Big\{ \delta \cdot q^V + q^H N^V(s, s') \Big\}.$$

The recursive dual problem for this case is described in the following proposition.

Proposition 7 (Value functions). The value function $D^*_0$ satisfies:
$$D^*_0 = \inf_{\mathbb{R}^{n_k}} D^*(s_0, q^K, q^V_0) + q^K \cdot \bar{k}, \tag{29}$$
with, for all $(s, q^K, q^V) \in S \times \mathbb{R}^{n_k}_+ \times \mathbb{R}^{n_v+1}$,
$$D^*(s, q^K, q^V) = \inf_{\mathcal{Q}(q^K)} \sup_{\mathcal{A}} J(s, q^V; q^H, q^{K\prime}, a) + \bar{\delta}\sum_{s' \in S} D^*(s', q^{K\prime}(s'), \varphi(s, q^V; q^H)(s'))Q(s, s'). \tag{30}$$

The proof is essentially the same as that of Proposition 3 and is omitted. Notice that in (29) the initial condition for the co-state $q^K$ is picked, whilst that for $q^V$ is pinned down by the parameter $q^V_0$;26 (30) gives the dual Bellman. Comparison of Propositions 3 and 7 and

25 For example, in AK growth models, for all $s$, $N^K(s) = 1$, and $T(\mathcal{Q}) = \mathbb{R}^{n_k}_+$. In limited commitment models without capital, for all $s$, $N^K(s) = 0$ and $T(\mathcal{Q}) = \{0\}$ (and backward-looking state variables and their co-states may be omitted). In Example 2, $N^K(s) = (-w(s)\;\; \gamma(s))$, $T(\mathcal{Q}) = \mathbb{R}^{n_k}$ and $Y = \mathbb{R}^{n_k + n_v + 1}$ once more. The function $D^*$ remains sub-linear. Hence, for practical purposes the effective state space can be identified with $C \cap T(\mathcal{Q})$.
26 Thus, the co-state $q^K$ for the backward-looking state $k$ is forward-looking and the co-state $q^V$ for the forward-looking state $v$ is backward-looking.
the terms defining the Bellman in each (for example, comparison of the corresponding J
functions) reveals how exploitation of quasilinearity simplifies matters.

Example 2 (Default with capital accumulation, AK version). To make the preceding discussion concrete, consider again the default model with linear production. Applying (30), the dual Bellman is:
$$D^*(s, q^K, q^V) = \inf_{\mathcal{Q}(q^K)} \sup_{\mathcal{A}} \; q^{V,0}f(a^0) + (q^{V,1} + q^{H,1})f(a^1) - \Big( \sum_{s' \in S} q^{K\prime}(s') + q^{H,2} \Big)\sum_{i=0,1} a^i$$
$$+\; \delta \sum_{s' \in S} D^*(s', q^{K\prime}(s'), \varphi(s, q^V; q^H)(s'))Q(s, s'),$$
with $\mathcal{Q}(q^K) = \{(q^H, q^{K\prime}) \,|\, q^K = -q^{H,1}w(s) + (q^{H,2} + \sum_{s' \in S} q^{K\prime}(s'))\gamma(s)\}$ and
$$\varphi(s, q^V; q^H)(s') = \begin{pmatrix} q^{V,0} \\ q^{V,1} + q^{H,1} \end{pmatrix}.$$
Thus, the weights on the borrower's current utility $f(a^1)$ and, via the updating function $\varphi$, future utility are augmented by the multiplier on her incentive constraint $q^{H,1}$. The constraint set $\mathcal{Q}(q^K)$ reveals the evolution of the co-state $q^K$, the shadow value of capital. This value is depressed to the extent that capital tightens the incentive constraint ($-q^{H,1}w(s)$), but enhanced to the extent that capital relaxes the current resource constraint or augments the future capital stock ($(q^{H,2} + \sum_{s' \in S} q^{K\prime}(s'))\gamma(s)$). 
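As a small illustration, the updating rule $\varphi$ and the restriction defining $\mathcal{Q}(q^K)$ in this example can be transcribed directly (scalar case; the numerical values used below are purely hypothetical):

```python
def phi_update(qV0, qV1, qH1):
    # phi(s, qV; qH)(s'): the lender's weight is passed through unchanged;
    # the borrower's weight is augmented by the incentive multiplier qH1.
    return (qV0, qV1 + qH1)

def implied_qK(qH1, qH2, qK_next_sum, w_s, gamma_s):
    # Q(qK) restriction: qK = -qH1 * w(s) + (qH2 + sum_{s'} qK'(s')) * gamma(s).
    # Capital's shadow value is depressed by the incentive-constraint term and
    # raised by the resource-constraint and accumulation terms.
    return -qH1 * w_s + (qH2 + qK_next_sum) * gamma_s

# A multiplier on the incentive constraint raises the borrower's future weight:
assert phi_update(1.0, 0.5, 0.2) == (1.0, 0.5 + 0.2)
# With w(s) = 1.2, gamma(s) = 1.1: qK = -1.2 + (0.5 + 2.0) * 1.1 = 1.55.
assert abs(implied_qK(1.0, 0.5, 2.0, 1.2, 1.1) - 1.55) < 1e-12
```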

Remark 4. In Example 2, the weight on the lender's payoffs $q^{V,0}$ remains constant at its initial value. This is typical of (principal-agent) problems in which one forward-looking state variable $v^0$ does not enter the $H$-constraint. For these problems, the corresponding co-state $q^{V,0}$ may be removed as an explicit state variable and the state space reduced in dimension from $\mathbb{R}^{n_k + n_v + 1}$ to $\mathbb{R}^{n_k + n_v}$. The cost is that positive homogeneity of the value function is lost.

7.2 Partially quasi-linear problems


The preceding analysis extends to problems in which aggregators are quasi-linear in a
subset of state variables. Rather than developing this in full, we describe the application
to Example 3 (the optimal monetary policy problem). Recall that in this example, there are
only forward-looking states and the aggregator $W^V$ is given by:
$$\begin{pmatrix} v^0 \\ v^1 \end{pmatrix} = W^V\Big[a, \begin{pmatrix} v^{0\prime} \\ v^{1\prime} \end{pmatrix}\Big] = \begin{pmatrix} L(a, \kappa a + \delta v^{1\prime}) + \delta v^{0\prime} \\ \kappa a + \delta v^{1\prime} \end{pmatrix},$$
with $v^0$ the government's payoff, $v^1$ inflation and $a$ output. The forward-looking state describing the government's future payoff $v^{0\prime}$ enters $W^V$ in a quasi-linear way and can be substituted out. In contrast, the forward-looking state describing inflation $v^{1\prime}$ enters non-linearly and cannot be so removed. After substitution of $v^0$, the problem becomes:
$$\sup \sum_{t=0}^{\infty} \delta^t L(a_t, \kappa a_t + \delta v^1_{t+1})$$

subject to, for all $t$, $v^1_t = \kappa a_t + \delta v^1_{t+1}$. This leads to the dual problem:
$$D^*_0 = \inf_{\mathcal{Q}} \sup_{\mathcal{P}} \sum_{t=0}^{\infty} \delta^t L(a_t, \kappa a_t + \delta v^1_{t+1}) + \sum_{t=0}^{\infty} \delta^t q^{V,1}_t \{\kappa a_t + \delta v^1_{t+1} - v^1_t\}, \tag{31}$$
where $\mathcal{Q}$ is the set of inflation co-state sequences $\{q^{V,1}_t\}$ and $\mathcal{P}$ the set of inflation-output sequences $\{v^1_t, a_t\}_{t=0}^{\infty}$. Notice that in (31) the co-state on the government's payoff $q^{V,0}$ is initialized to and remains at 1. This is a principal-agent type problem. Using arguments similar to before, the initial problem specializes to:
$$D^*_0 = \inf_{\mathbb{R}} \sup_{V^1} -q^{V,1}_0 v^1_0 + D^*(1, q^{V,1}_0),$$
where $V^1 = \frac{\kappa}{1-\delta}[\underline{a}, \bar{a}]$ is the set of possible inflations, while the dual Bellman equation becomes:
$$D^*(1, q^{V,1}) = \inf_{\mathbb{R}} \sup_{A \times V^1} L(a, \kappa a + \delta v^{1\prime}) + q^{V,1}(\kappa a + \delta v^{1\prime}) - q^{V,1\prime}v^{1\prime} + \delta D^*(1, q^{V,1\prime}).$$
In the latter the inner supremum is over current output-future inflation pairs $(a, v^{1\prime})$, while the infimum operation is over the future inflation co-state $q^{V,1\prime}$.

Quadratic Case If (the negative of) the loss function $L$ is specialized to be quadratic, an explicit closed-form solution of the dual problem is available. Let $\underline{a} = 0$, and
$$L(x, z) = -\frac{1}{2}\big\{ x^2 + \lambda z^2 \big\},$$
with $\lambda > 0$. Then a standard 'guess and verify' exercise confirms that the value function for this problem satisfies:
$$D^*(1, q^{V,1}) = \frac{1}{2\chi}(\max\{0, q^{V,1}\})^2,$$
with $\chi > 0$ the positive root of a quadratic equation.27 Optimal dual policy functions are then easily obtained. For $q^{V,1} \ge 0$, they are linear in the dual co-state and are given by:
$$\begin{pmatrix} q^{V,1\prime}(q^{V,1}) \\ a(q^{V,1}) \\ \pi'(q^{V,1}) \end{pmatrix} = \begin{pmatrix} 1 \\ \frac{\kappa}{\lambda} \\ \frac{1}{\chi} \end{pmatrix} \xi q^{V,1},$$
with $\xi = \frac{1}{1 + \frac{\kappa^2}{\lambda} + \chi\delta} \in (0, 1)$. For $q^{V,1} < 0$, $q^{V,1\prime}(q^{V,1}) = q^{V,1}$, and $a(q^{V,1}) = v^1(q^{V,1}) = 0$. The problem is strictly concave and policies are single valued. Hence, from Proposition 9 below and the subsequent discussion, the solution to the dual problem delivers necessary and sufficient conditions for the solution to the original problem.
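These closed-form objects are straightforward to compute. A small numerical sketch (the parameter values are illustrative only, not calibrated values from the paper): it computes $\chi$ as the positive root of $\frac{\kappa^2}{\lambda}\chi^2 - (1-\delta+\frac{\kappa^2}{\lambda})\chi - \delta = 0$ (footnote 27) and $\xi = 1/(1+\kappa^2/\lambda+\chi\delta)$, and then traces the geometric decay of the co-state under the policy $q^{V,1\prime} = \xi q^{V,1}$:

```python
import math

# Illustrative parameters (assumed for this sketch).
delta, kappa, lam = 0.99, 0.3, 0.25

c = kappa**2 / lam          # kappa^2 / lambda
b = 1 - delta + c
chi = (b + math.sqrt(b**2 + 4 * c * delta)) / (2 * c)   # positive root
assert abs(c * chi**2 - b * chi - delta) < 1e-10        # solves c*x^2 - b*x - delta = 0

xi = 1 / (1 + c + chi * delta)
assert 0 < xi < 1           # the co-state contracts each period

# For qV1 >= 0 the optimal dual policy is linear: q' = xi * q, so the
# co-state (and with it output and inflation) decays geometrically toward zero.
q, path = 1.0, []
for _ in range(50):
    path.append(q)
    q *= xi
assert path[0] > path[10] > path[49] > 0
```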

8 Relating Primal and Dual


The preceding sections established that optimal dual values and solutions may be recov-
ered from the recursive dual. They also showed that the dual Bellman operator was con-
tractive. Consequently, if the dual supplies an optimal value and optimal policies for
the original primal problem, then the recursive dual does as well and the primal may
be solved via dual value iteration. In this section, we briefly discuss conditions for the
sequential dual and primal problems to have common values and policies.

8.1 Saddles and recursive dual policies


Without further restriction, classical weak duality implies that the optimal dual value
bounds the optimal primal value: D0∗ ≥ P0∗ . Thus, with no further assumptions the re-
cursive dual gives welfare bounds for optimal policies or policy improvements.
27 It solves: $\chi = \frac{\big(1-\delta+\frac{\kappa^2}{\lambda}\big) + \sqrt{\big(1-\delta+\frac{\kappa^2}{\lambda}\big)^2 + 4\frac{\kappa^2}{\lambda}\delta}}{2\frac{\kappa^2}{\lambda}}$.

A well known sufficient condition for equality of optimal values, albeit not on primitives, is that the Lagrangian admits a saddle point.28 Saddle existence also ensures that
the dual policy set includes all primal solutions. However, the converse is not true: addi-
tional restrictions are required to ensure that the "finite penalization" implicit in the dual
problem is "sharp enough" to pin down only primal solutions. The following propositions
summarize the situation for the general problem considered in Section 5.29

Proposition 8 (Policy functions; Necessity). Assume that the Lagrangian $L$ admits a saddle point. Then: (i) (Equality of values) $D^*_0 = P^*_0$. (ii) (Necessity conditions for policies) If $\pi^*$ solves (AP), then there is a corresponding optimal dual sequence $\theta^*$ such that $(q^{K*}_0, q^{V*}_0, v^*_0) \in G^{IS}_0$ and, for all $t$ and $s^t$, $(q^{H*}_t(s^t), y^*_{t+1}(s^t), p^*_t(s^t)) \in G^{IS}_t(s^t, y^*_t(s^t))$.

Proof. See Appendix E.

Proposition 8 only requires that L admits a saddle point. It does not require that the
Lagrangian associated with every (st , yt (st ))-continuation problem has a saddle, as is the
case in Marcet and Marimon (2011). Proving, or numerically checking, the existence of a
saddle for L , while non-trivial, is less demanding than doing so for all possible histories.
Sufficiency of (recursive) dual policies for primal attainment requires additional assumptions. We say that a set of primal processes $\mathcal{P}'$ shares a plan $\alpha$ if for each $\pi \in \mathcal{P}'$ there is a process $\{v_t, k_t\}_{t=0}^{\infty}$ such that $\pi = (\alpha, \{v_t, k_t\}_{t=0}^{\infty})$.

Proposition 9 (Policy functions; Sufficiency). Assume that $\mathcal{Q}^* \times \mathcal{P}^*$, the set of saddle points of $L$, is non-empty and that for each $\theta^* \in \mathcal{Q}^*$, the set of primal processes $\mathrm{argmax}_{\mathcal{P}}\, L(\cdot, \theta^*)$ shares a plan $\alpha^*$. Then: (i) $\alpha^*$ is the unique solution of (P) and (ii) if a pair $(\pi, \theta)$ with $\pi = (\alpha, \{k_t, v_t\}_{t=0}^{\infty})$ satisfies $(q^K_0, q^V_0, v_0) \in G^{IS}_0$, for all $t$ and $s^t$, $(q^H_t(s^t), y_{t+1}(s^t), p_t(s^t)) \in G^{IS}_t(s^t, y_t(s^t))$ and (T), then $\alpha$ solves (P) (and equals $\alpha^*$).

Proof. See Appendix E.

We apply Proposition 9 in Section 9 to show that recursive dual policies are sufficient
for primal optimality in a parameterized version of Example 1.
28 For a real-valued function defined on a product set, $g : C \times E \to \mathbb{R}$, the set of saddle points is:
$$\mathrm{saddle}_{C|E}\, g = \Big\{ (c^*, e^*) \,\Big|\, c^* \in \mathrm{argmin}_C\, g(c, e^*) \text{ and } e^* \in \mathrm{argmax}_E\, g(c^*, e) \Big\}.$$
29 Similar results hold for the quasi-linear case considered in the preceding section (see Messner, Pavoni, and Sleet (2011), Section 3 for details).
8.2 Concave Problems
The literature gives various sufficient conditions on primitives ensuring equality of optimal values and saddle existence.30 Consider for a moment the original (non-augmented) optimization (P). This problem omits laws of motion for primal states and is written entirely in terms of plans rather than primal processes. The constraints may be collected together as:
$$\zeta^H(\alpha) = \{H(K_t(\alpha, s^t), a_t(s^t), V(\alpha, s^t))\}_{t, s^t} \ge 0.$$
Since there are a countable number of constraints and $H$ is bounded, $\zeta^H : \mathcal{A} \to \ell_\infty$. We


may associate a Lagrangian $\tilde{L}$ with this problem:
$$\tilde{L}(\alpha, \theta^H) = F[s_0, V(s_0, \alpha)] + \langle q^H, \zeta^H(\alpha) \rangle, \tag{32}$$
where $q^H$ belongs to $\ell^\star_\infty$, the dual space of $\ell_\infty$, and $\langle q^H, \zeta^H(\alpha) \rangle$ is the evaluation of $q^H$ at $\zeta^H(\alpha)$.31 The Lagrangian $\tilde{L}$ can be used to define primal and dual problems for (P) directly.
A well known sufficient condition for these problems to have equality of optimal values
and a minimizing dual multiplier (so called "strong duality") is that (i) the objective and
constraints are concave and (ii) the evaluation of the constraints at some primal choice lies
in the interior of the constraint space’s closed non-negative cone (a Slater condition).32 If,
in addition, a solution to (P) exists then it and the minimizing multiplier constitute a saddle
point. Since the objective and constraints are constructed from compositions of functions,
a standard assumption guaranteeing concavity is that F, H, W K , W V and MV are jointly
concave in their arguments and either quasi-linear or non-decreasing in the primal states k
and v′ . Stronger strict concavity restrictions ensure uniqueness of the primal solution and
sufficiency of the dual solution for primal optimality.
A difficulty is that the preceding result guarantees the existence of a minimizing multiplier $q^H$ in $\ell^\star_\infty$. It is much more convenient to establish such existence in $\ell_1 \subset \ell^\star_\infty$, the space of summable sequences $\{\{q^H_t\} : \sum_{t=0}^{\infty} \sum_{S^t} \|q^H_t(s^t)\| < \infty\}$, and, hence, to obtain existence of a saddle point of the Lagrangian:
$$L(\alpha, \theta^H) = F[s_0, V(s_0, \alpha)] + \sum_{t=0}^{\infty} \sum_{S^t} q^H_t(s^t) \cdot H(K_t(\alpha, s^t), a_t(s^t), V(\alpha, s^t)). \tag{33}$$

30 Luenberger (1969) and Rockafellar (1974), especially Section 7, are good references for the theory in infinite dimensional settings.
31 We discuss $\ell^\star_\infty$ briefly below. It is the space of bounded continuous functionals on $\ell_\infty$ and, as is well known, equals the space of all signed charges of bounded variation on the power set $2^{\mathbb{N}}$.
32 And the constraint space, i.e. the codomain of $\zeta^H$, has a non-negative cone with non-empty interior because it is the set $\ell_\infty$.
In fact, following an argument of Ponstein (1981), the structure of the constraints enables
us to do this and, hence, obtain saddle existence for L rather than L̃ under the conditions
given above.33
Such saddle existence results are directly applicable to the quasi-linear case discussed
in Section 7.34 However, as noted, for more general problems with laws of motion that
are non-linear in states, $L$ is not suitable for dual recursive decomposition. Instead the
Lagrangian L from the augmented problem is needed. An apparent difficulty is that L
incorporates (possibly non-linear) equality constraints for the laws of motion for states.
Thus, standard conditions for saddle existence are not applicable.35 If, however, H is non-
decreasing in k and v′ , then the equality constraints can be relaxed to inequalities. The
relaxation does not modify optimal values or solutions. Strong duality for the relaxed
problem is then established under the standard concavity and monotonicity assumptions
on F, H, W K , W V and MV described previously. For more details on relaxation including
weaker conditions for its validity, see Appendix F.36

8.3 Ex Post Check


The following elementary proposition gives a sufficient condition for primal optimality in
terms of the optimal dual value. Importantly, the condition does not rely on any concavity assumption on the problem. We call a process $(\hat{\pi}, \hat{\theta}) = (\hat{\alpha}, \{\hat{k}_t, \hat{v}_t\}_{t=0}^{\infty}, \hat{\theta})$ a candidate plan if it is obtained from the policy correspondence: $(q^K_0, q^V_0, v_0) \in G^{IS}_0$, and $\forall t$ and $s^t$, $(q^H_t(s^t), y_{t+1}(s^t), p_t(s^t)) \in G^{IS}_t(s^t, y_t(s^t))$.

Proposition 10. Suppose a candidate plan $(\hat{\pi}, \hat{\theta})$ satisfies: (i) $F[s_0, V(s_0, \hat{\alpha})] \ge D^*_0$ and (ii) $\hat{\pi}$ is feasible for (AP). Then $\hat{\pi}$ is optimal for (AP) and $D^*_0 = P^*_0$. If, in addition to (i)-(ii), $(\hat{\pi}, \hat{\theta})$ satisfies condition (T), then $(\hat{\pi}, \hat{\theta})$ is a saddle for the Lagrangian associated with problem (AP).

Proof. See Appendix E.

Despite its simplicity, Proposition 10 is the basis of a useful ex post check of primal
optimality. Suppose the recursive dual problem has been solved and a fixed point D̂ of
the operator B obtained. If the conditions of Assumption 3 hold and D̂ lies between the
33 We defer this technical argument to Messner, Pavoni, and Sleet (2013).
34 With the slight modification that the constraint space is set to $\{\{x_t\} : \sup \|x_t(s^t)\|/M_t(s^t) < \infty\}$ and the multiplier space to $\{\{q^H_t\} : \sum_{t=0}^{\infty} \sum_{S^t} \|q^H_t(s^t)M_t(s^t)\| < \infty\}$ to accommodate unbounded $H$ functions.
35 Even if the constraints stemming from the laws of motion are re-expressed as pairs of inequalities, the

Slater condition and, unless these laws of motion are linear, concavity is lost.
36 The dual and recursive dual of the relaxed problem are slightly modified to restrict co-states to be

non-negative.

bounding functions $\underline{D}$ and $\overline{D}$, then $\hat{D} = D^*$. Consequently, the value $D^*_0$ and a candidate
plan π̂ may be recovered. Proposition 10 then provides sufficient conditions for π̂ to be a
solution to (AP) and for the existence of a saddle point of the associated Lagrangian.
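Schematically, the check described in this paragraph reduces to two numerical comparisons. A minimal sketch (the function name, argument structure and tolerance are ours, not the paper's):

```python
def ex_post_check(primal_value, dual_value, constraint_residuals, tol=1e-6):
    """Numerical version of Proposition 10's conditions: (i) the candidate plan's
    payoff F[s0, V(s0, alpha_hat)] weakly exceeds the dual value D0*, and
    (ii) the plan is feasible (all constraint residuals non-negative), up to tol."""
    value_ok = primal_value >= dual_value - tol
    feasible = all(r >= -tol for r in constraint_residuals)
    return value_ok and feasible

# A feasible candidate attaining the dual value passes; a shortfall fails.
assert ex_post_check(1.0, 1.0, [0.0, 0.5])
assert not ex_post_check(0.9, 1.0, [0.0])
```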
In practice, $D^*_0$, $G^{IS}_0$ and $G^{IS}$ must be approximated via, say, a numerical implementation of the value iteration described in Theorem 2, and the conditions (i) and (ii) in Proposition 10 checked numerically to within some acceptable level of tolerance. We describe a numerical implementation of the value iteration next.

9 Numerical Implementation
We use Example 1 (limited commitment risk sharing) to illustrate the numerical imple-
mentation of the recursive dual problem.

Applicability of the Recursive Dual If the agents' initial Pareto weights are non-negative, $\lambda = (\lambda^1, \lambda^2) \in \mathbb{R}^2_+$, then without loss of generality the law of motion for utility promises may be relaxed to an inequality in either the augmented primal problem or its dual:
$$\frac{1-\delta}{1-\mu}(a^i_t(s^t))^{1-\mu} + \delta\Big[\sum_{s' \in S}(v^i_{t+1}(s^t, s'))^{\sigma}Q(s_t, s')\Big]^{\frac{1}{\sigma}} - v^i_t(s^t) \ge 0.$$

Given $\mu, \sigma \in (0, 1)$, the functions describing these constraints (and the limited commitment constraints) are strictly concave. If a primal solution exists and there is a primal process strictly satisfying all constraints, then standard results and an argument of Ponstein (1981) establish the existence of a saddle point for the Lagrangian $L$ with co-states restricted to be non-negative. Consequently, by our previous results, the recursive dual gives the optimal primal value and necessary and sufficient conditions for optimal primal policies.

Value iteration and function approximation We limit attention to value functions defined on a domain of shocks and non-negative co-states ("Pareto weights"), $S \times \mathbb{R}^2_+$, and modify definitions accordingly.37 The bounding value functions $\underline{D}$, $D$ and $\overline{D}$ are as in Section 6, but restricted to this domain. The definitions of $\mathcal{G}$ and $B$ become:
$$\mathcal{G} = \{D : S \times \mathbb{R}^2_+ \to \mathbb{R} \,|\, D \text{ is sublinear, each } D(s, \cdot) \text{ is continuous and } \underline{D} \le D \le \overline{D}\}$$


37 This restricts us to concave and economically interesting continuation problems in which no agent gets a negative weight.

and
$$B(D)(s, y) = \inf_{\mathcal{Q}_+} \sup_{\mathcal{P}} J(s, y; q, p) + \delta \sum_{s' \in S} D(s', y'(s')),$$
with $\mathcal{Q}_+ = \mathbb{R}^4_+$ replacing $\mathbb{R}^2_+ \times \mathbb{R}^2$ and $J$ as in (16). Theorem 2, very slightly modified
to incorporate the domain restriction, ensures that D∗ may be calculated via an iteration
of B from any D0 ∈ G . Implementation of this iteration requires approximation of the
value functions. Our approximation procedure exploits the sub-linearity of dual value
functions.38
If $g : \mathbb{R}^2_+ \to \mathbb{R}$ is sub-linear, then for all $y \in \mathbb{R}^2_+$,
$$g(y) = \max\{m \cdot y \,|\, \forall y' \in C_+,\; m \cdot y' \le g(y')\}.$$

Let $\hat{C}^I_+ := \{y_i\}_{i=1}^{I} \subset C_+$ denote a set of $I > 1$ distinct points in $C_+$. Then $g$ is bounded above by $\hat{g}^I$, where $\hat{g}^I$ is defined by the less restricted problem:
$$g(y) \le \hat{g}^I(y) := \max\{m \cdot y \,|\, \forall y_i \in \hat{C}^I_+,\; m \cdot y_i \le g(y_i)\}. \tag{34}$$
The function $\hat{g}^I$ is continuous and sub-linear and $\hat{g}^I(y_i) = g(y_i)$ at each $y_i \in \hat{C}^I_+$. In addition, $\hat{g}^I(y)$ is easily found by solving the simple linear programming problem in (34).
A sequence of sets $\hat{C}^I_+$, $I = 2, 3, \ldots$, may be constructed with $\hat{C}^I_+ \subset \hat{C}^{I+1}_+$ and $\hat{C}^{\infty}_+ = \cup_I \hat{C}^I_+$ dense in $C_+$.39 If $g$ is also continuous, then it is readily verified that the corresponding sequence of approximating functions $\hat{g}^I$ converges pointwise to $g$ from above.40 Moreover, by Dini's theorem it converges uniformly on $C_+$ and, hence, in the Thompson-like metric $d$, to $g$.
This procedure may be used to approximate sub-linear functions D ∈ G from above. It
is easy to implement, may be integrated into the value iteration and involves approxima-
tion on a simple state space.
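To make (34) concrete, the sketch below builds $\hat{g}^I$ for a known sub-linear test function (the upper envelope of two linear maps, chosen by us for illustration). Since the LP in (34) has only two unknowns, it is solved here by enumerating vertices of the constraint polyhedron rather than by calling an LP solver; the assertions confirm that $\hat{g}^I$ interpolates $g$ on the grid and bounds it from above off the grid.

```python
import math

def g(y):
    # A known sub-linear test function: the upper envelope of two linear maps.
    return max(1.0 * y[0] + 2.0 * y[1], 2.0 * y[0] + 1.0 * y[1])

def outer_approx(g, pts):
    """ghat(y) = max_m { m.y : m.y_i <= g(y_i) for all grid points y_i }, i.e. the
    LP in (34), solved by enumerating vertices of the 2-D constraint polyhedron."""
    vals = [g(p) for p in pts]

    def ghat(y):
        best = -math.inf
        n = len(pts)
        for i in range(n):
            for j in range(i + 1, n):
                (a1, a2), (b1, b2) = pts[i], pts[j]
                det = a1 * b2 - a2 * b1
                if abs(det) < 1e-12:
                    continue  # parallel constraints: no vertex
                # Vertex m at which constraints i and j hold with equality.
                m1 = (vals[i] * b2 - vals[j] * a2) / det
                m2 = (a1 * vals[j] - b1 * vals[i]) / det
                if all(m1 * p[0] + m2 * p[1] <= vals[k] + 1e-9
                       for k, p in enumerate(pts)):
                    best = max(best, m1 * y[0] + m2 * y[1])
        return best

    return ghat

# Grid: I points on the quarter unit circle C_+.
I = 9
pts = [(math.cos(0.5 * math.pi * k / (I - 1)), math.sin(0.5 * math.pi * k / (I - 1)))
       for k in range(I)]
ghat = outer_approx(g, pts)

# ghat interpolates g on the grid and bounds it from above off the grid.
for p in pts:
    assert abs(ghat(p) - g(p)) < 1e-7
y = (0.3, 0.9)
assert ghat(y) >= g(y) - 1e-9
```

In a value iteration, `g` would be replaced by the values of $B(D)(s, \cdot)$ computed at the grid points, and `ghat` would serve as the approximation of the updated value function on its whole domain.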

Remark 5. As we have previously remarked, Example 1 is outside the scope of Marcet


38 For facts about sub-linear functions used below consult Florenzano and Van (2001).
39 For example, the set of points in $C_+$ with rational coordinates is dense in $C_+$; see Schmutz (2008) for an explicit construction.
40 It clearly converges at all points in $\hat{C}^{\infty}_+$ and, if $(1,0)$ and $(0,1)$ are in $\hat{C}^{\infty}_+$, at these two points. Choose a point $y \in C_+ \cap \mathrm{int}\,\mathbb{R}^2_+$. Let $y_{1n}$ and $y_{2n}$ be two sequences in $\cup_I \hat{C}^I_+$ converging to $y$ and such that $y = \lambda_n a_n y_{1n} + (1-\lambda_n)b_n y_{2n}$, with $\lambda_n \in (0,1)$, $a_n, b_n \in \mathbb{R}_+$ and $a_n, b_n \downarrow 1$, i.e. $a_n y_{1n}$ and $b_n y_{2n}$ lie either side of $y$ on the tangent to $C_+$ passing through $y$. There is a sequence $\{I_n\}$ such that $\hat{g}^{I_n}(y_{1n}) = g(y_{1n})$ and $\hat{g}^{I_n}(y_{2n}) = g(y_{2n})$. By the sub-linearity of $g$ and each $\hat{g}^{I_n}$, we have $g(y) \le \hat{g}^{I_n}(y) \le \lambda_n \hat{g}^{I_n}(a_n y_{1n}) + (1-\lambda_n)\hat{g}^{I_n}(b_n y_{2n}) = \lambda_n a_n \hat{g}^{I_n}(y_{1n}) + (1-\lambda_n)b_n \hat{g}^{I_n}(y_{2n}) = \lambda_n a_n g(y_{1n}) + (1-\lambda_n)b_n g(y_{2n})$. Since $g$ is continuous, $y_{in} \to y$ and $a_n, b_n \downarrow 1$, it follows that the last term in the string of inequalities converges to $g(y)$. Thus, the sequence of functions converges pointwise on $C_+$ and, by the positive homogeneity of the functions, on $\mathbb{R}^2_+$.

and Marimon (2011). Recursive primal formulations of limited commitment problems are
available. Such problems (without Epstein-Zin preferences) have often been handled nu-
merically by maximizing the payoff to one player subject to incentive and utility promise-
keeping constraints. This leads to a Bellman-type operator in which the continuation value
function of the maximized player enters the constraint set. Although this Bellman oper-
ator has monotonicity properties it is not a contraction. Such a problem also involves an
endogenous state space of utility promises. With only two players, this state space is an
interval and, therefore, easy to approximate. But as more players are introduced, representation and approximation of this set become more difficult.
Our function approximation procedure is similar to the outer approximation method of Judd, Yeltekin, and Conklin (2003), who use piecewise linear approximations to support functions of payoff sets in their analysis of repeated games.

Numerical Example Figure 1 illustrates value and policy functions from a numerical
example. In the example, the discount factor δ is set to 0.8. The preference parameters
µ and σ are set to 0.5 and 0.8 respectively. Two shock states are assumed with Markov
transition Q(1, 1) = Q(2, 2) = 0.8. Two agents are assumed. In the first state, agent 1’s
outside option is 0 and agent 2’s is set to 1.25. These are reversed in state 2. Output
is constant at 1 across the states. These parameters determine $D$ and $\overline{D}$. The bounding function $\underline{D}$ is set equal to $D - \varepsilon$. The dual Bellman is a contraction and $\varepsilon$ is chosen to ensure that it has a modulus of contraction of $\rho = 0.9$ with respect to the implied Thompson-like metric.
Figure 1a gives the value function on $\mathbb{R}^2_+$. In each iteration this function is evaluated at a finite number of points and then approximated on its entire domain as described above. The remainder of the figure shows optimal policies for agent 1 as a function of the shock $s$ and co-state/Pareto variable $q^{V,1}$, with $q^{V,2}$ set so that $\sqrt{(q^{V,1})^2 + (q^{V,2})^2} = 1$ and $(q^{V,1}, q^{V,2})$ is in $C_+$. As Figure 1c shows, the agent's incentive multiplier is positive in state 2 for low co-state/Pareto weight values $q^{V,1}$; otherwise, it is zero. Only for this combination of a high outside option (state 2) and a low Pareto weight does the agent's incentive constraint bind. As Figure 1b shows, the agent's consumption is 0.4 (i.e. 40% of the endowment) for this combination. In contrast, in state $s = 1$, the agent receives a share of the endowment that becomes arbitrarily small as the agent's Pareto weight becomes small. On the other hand, in this state the agent's share of the endowment reaches a maximum value of 0.6 for larger values of her Pareto weight (and correspondingly smaller values of the other agent's Pareto weight). For these values, agent 2's incentive constraint binds. Implications for agent 1's next period Pareto weight are illustrated in Figure 1d.

[Figure 1: Value and Policy Functions for the Limited Commitment Problem. Panels: (a) Value Function $D(1, \cdot)$ over the co-states $(q^{V,1}, q^{V,2})$; (b) Consumption Policy Functions, $a^1$ against $q^{V,1}$ for $s = 1, 2$; (c) Multiplier Policy Functions, $q^{H,1}$ against $q^{V,1}$ for $s = 1, 2$; (d) Co-state Policy Functions, $q^{V,1}(s')$ against $q^{V,1}$ for each $(s, s')$ pair.]

10 Conclusion
In many settings the (primal) state space of a dynamic economic problem is defined im-
plicitly and must be recovered as part of the solution to the problem. This complicates the
application of recursive methods. Associated dual problems have recursive formulations
in which co-states are used to keep track of histories of past or feasible future actions. If the
primal state space is bounded, then the dual (co-)state space is immediately determined
as $\mathbb{R}^N$ (or, perhaps, $\mathbb{R}^N_+$). Despite the unboundedness of the dual value functions and the
lack of a bounded constraint correspondence, contractivity of the dual Bellman operator
(with respect to the modified Thompson metric) may be established if suitable bounding
functions are available. In many problems they are.

References
Acemoğlu, D., M. Golosov, and A. Tsyvinski (2010). Dynamic Mirrlees taxation under political economy constraints. Review of Economic Studies 77, 841–881.
Aiyagari, R., A. Marcet, T. Sargent, and J. Seppälä (2002). Optimal taxation without
state-contingent debt. Journal of Political Economy 110, 1220–1254.
Chien, Y., H. Cole, and H. Lustig (2011). A multiplier approach to understanding the
macro implications of household finance. Review of Economic Studies 78, 199–234.
Cole, H. and F. Kubler (2012). Recursive contracts, lotteries and weakly concave pareto
sets. Review of Economic Dynamics 15(4), 479–500.
Cooley, T., R. Marimon, and V. Quadrini (2004). Aggregate consequences of limited
contract enforceability. Journal of Political Economy 112(4), 817–847.
Florenzano, M. and C. L. Van (2001). Finite Dimensional Convexity and Optimization.
Springer Studies in Economic Theory 13.
Hopenhayn, H. and J. Nicolini (1997). Optimal unemployment insurance. Journal of Po-
litical Economy 105, 412–438.
Judd, K., S. Yeltekin, and J. Conklin (2003). Computing supergame equilibria. Economet-
rica 71, 1239–1254.
Kehoe, P. and F. Perri (2002). International business cycles with endogenous incomplete
markets. Econometrica 70, 907 – 928.
Kocherlakota, N. (1996). Implications of efficient risk sharing without commitment. Re-
view of Economic Studies 63, 595–609.
Luenberger, D. (1969). Optimization by Vector Space Methods. New York, John Wiley &
Sons.
Marcet, A. and R. Marimon (1999). Recursive contracts. Working paper.
Marcet, A. and R. Marimon (2011). Recursive contracts. Working Paper.
Marimon, R. and V. Quadrini (2006). Competition, innovation and growth with limited
commitment. NBER Working Paper 12474.
Marinacci, M. and L. Montrucchio (2010). Unique solutions for stochastic recursive util-
ities. Journal of Economic Theory 145, 1776–1804.
Messner, M., N. Pavoni, and C. Sleet (2011). On the dual approach to recursive opti-
mization. IGIER Working Paper 423.

Messner, M., N. Pavoni, and C. Sleet (2012a). Contractive dual methods for incentive
problems. IGIER Working Paper 466.
Messner, M., N. Pavoni, and C. Sleet (2012b). Recursive methods for incentive problems.
Review of Economic Dynamics 15(4), 501–525.
Messner, M., N. Pavoni, and C. Sleet (2013). Countably additive multipliers in incentive
problems. Working Paper.
Ponstein, J. (1981). On the use of purely finitely additive multipliers in mathematical
programming. Journal of Optimization Theory and Applications 33, 37–55.
Rinćon-Zapatero, J. and C. Rodríguez-Palmero (2003). Existence and uniqueness of solutions to the Bellman equation in the unbounded case. Econometrica 71, 1519–1555.
Rockafellar, R. (1974). Conjugate Duality and Optimization. SIAM Society for Applied and
Industrial Mathematics.
Rockafellar, T. and R. Wets (1998). Variational Analysis. Springer-Verlag.
Rustichini, A. (1998). Dynamic programming solution of incentive-constrained prob-
lems. Journal of Economic Theory 78, 329–354.
Schmutz, E. (2008). Rational points on the unit sphere. Central European Journal of Math-
ematics 6(3), 482–487.
Stokey, N., R. Lucas, and E. Prescott (1989). Recursive Methods in Economic Dynamics.
Harvard University Press.
Thompson, A. (1963). On certain contraction mappings in a partially ordered vector
space. Proceedings of the American Mathematical Society 14, 438–443.
Wessels, J. (1977). Markov programming by successive approximation with respect to
weighted supremum norms. Journal of Mathematical Analysis and Applications 58, 326–
355.
Woodford, M. (2003). Interest and Prices. Princeton University Press.

Appendix

A Construction of Payoffs
A function g : S × A → R^{n_v+1} is bounded if ‖g‖_∞ := sup_{S×A} ‖g(s, α)‖ < ∞. Let G denote the set of bounded functions g : S × A → R^{n_v+1}. For g ∈ G, define T^V(g)(s, α) according to:

T^V(g)(s, α) = W^V[s, a_0, M^V[s, a_0, g′(α)]],

where g′(α) = {g(s′, α|s′)}_{s′=1}^{n_s}.

Lemma 3. T V : G → G and is contractive.

Proof. Let g ∈ G. The monotonicity of M^V and the fact that M^V[s, a, ·] maps constant-valued random variables to their constant values implies:

sup_{S×A} |M^V[s, a, g′(α)]| ≤ ‖g‖_∞ I,

where I is the (n_v + 1)-unit vector. The boundedness and discounting properties of W^V imply:

sup_{S×A} ‖W^V[s, a, M^V[s, a, g′(α)]]‖ ≤ sup_{S×A} ‖W^V[s, a, 0]‖ + δ̄‖g‖_∞ < ∞.

We deduce that T^V(g) ∈ G. Monotonicity of T^V follows from monotonicity of each W^V[s, a, ·] and M^V[s, a, ·]. Let g, g̃ ∈ G; then from the monotonicity and sub-additivity of M^V:

M^V[s, a, g′(α)] − M^V[s, a, g̃′(α)] ≤ M^V[s, a, g̃′(α) + ‖g − g̃‖_∞] − M^V[s, a, g̃′(α)] ≤ ‖g − g̃‖_∞.

By the monotonicity and discounting properties of W^V, for each (s, a, α),

W^V[s, a, M^V[s, a, g′(α)]] − W^V[s, a, M^V[s, a, g̃′(α)]]
  ≤ W^V[s, a, M^V[s, a, g̃′(α)] + ‖g̃ − g‖_∞] − W^V[s, a, M^V[s, a, g̃′(α)]] ≤ δ̄‖g̃ − g‖_∞.

Hence, T^V satisfies a discounting property and, by Blackwell's theorem, is a contraction on G.
It follows from Lemma 3, the completeness of G and the contraction mapping theorem
that T V has a unique fixed point on G . V is identified with this function. By placing
additional continuity restrictions on W V and MV , the previous result may be strengthened
to show that V, the unique fixed point on G , is continuous.

B Proofs for Section 4


Proof of Proposition 2. For (k, s, v) ∈ X, let:

Ω(k, s, v) = { π ∈ P : k_0 = k, s_0 = s, v_0 = v; k_{t+1}(s^t) = W^K[k_t(s^{t−1}), s_t, a_t(s^t)]; v_t(s^t) = W^V[s_t, a_t(s^t), M^V[s_t, a_t(s^t), v_{t+1}(s^t)]]; H[k_t(s^{t−1}), a_t(s^t), v_{t+1}(s^t)] ≥ 0 }.

It follows from definitions that Ω(k, s, v) = {π | k_0 = k, v_0 = v, (a_0, k_1, v_1) ∈ Γ(k, s, v), π|s′ ∈ Ω(k_1, s′, v_1(s′))}, where π|s′ is the continuation of π after s_1 = s′. For (k, s, v) ∈ X, let P^∗(k, s, v) = sup_{Ω(k,s,v)} V^0(s, α). Define B on the domain F = {P : X → R} as, for P ∈ F and (k, s, v) ∈ X, B(P)(k, s, v) = sup_{Γ(k,s,v)} W^{V,0}[s, a, M^{V,0}[s, a, P′(k′, v′)]], with P′(k′, v′) =

{ P(k′ , s′ , v′ (s′ ))}s′ ∈S . We verify that for (k, s, v) ∈ X , P∗ (k, s, v) = B( P∗ )(k, s, v). Suppose
P∗ (k, s, v) > supΓ(k,s,v) W V,0 [s, a, MV,0 [s, a, P∗′ (k′ , v′ )]]. Since (k, s, v) ∈ X , Ω(k, s, v) 6= ∅ and
there is a π ∈ Ω(k, s, v) with V 0 (s, α) > supΓ(k,s,v) W V,0 [s, a, MV,0 [s, a, P∗′ (k′ , v′ )]]. But π ∈
Ω(k, s, v), thus (a0 , k1 , v1 ) ∈ Γ(k, s, v) and π |s′ ∈ Ω(k1 , s′ , v1 (s′ )). From the monotonicity of
W V,0 in its third argument and the definition of P∗ , V 0 (s, α) = W V,0 [s, a0 , MV,0 [s, a0 , V 0′ (α)]]
≤ W V,0 [s, a0 , MV,0 [s, a0 , P∗′ (k1 , v1 )]] ≤ supΓ(k,s,v) W V,0 [s, a, MV [s, a, P∗′ (k′ , v′ )]]. This is a
contradiction and so P∗ (k, s, v) ≤ B( P∗ )(k, s, v). Next suppose that P∗ (k, s, v) < supΓ(k,s,v)
W V,0 [s, a, MV,0 [s, a, P∗′ (k′ , v′ )]]. Then, since (k, s, v) ∈ X , Γ(k, s, v) is non-empty and there
is a triple (a, k′ , v′ ) ∈ Γ(k, s, v) with P∗ (k, s, v) < W V,0 [s, a, MV,0 [s, a, P∗′ (k′ , v′ )]]. Since W V,0
and MV,0 are continuous in their third arguments, there is a family π |s′ with for each π |s′ ∈
Ω(k′ , s′ , v′ ) and with associated plans α|s′ satisfying P∗ (k, s, v) < W V,0 [s, a, MV,0 [s, a, V 0′ (α)]]
= V 0 (s, α). But the definition of Ω(k, s, v) implies that π = (k, v, a, {π |s′ }) ∈ Ω(k, s, v).
Hence, V 0 (s, α) ≤ supΩ(k,s,v) V (s, α′ ) = P∗ (k, s, v), another contradiction. Thus, P∗ (k, s, v) ≥
B( P∗ )(k, s, v). Combining inequalities and noting that (k, s, v) was arbitrary in X , it follows
that P^∗ = B(P^∗) as required. By a very similar argument, P_0^∗ = sup_{V(k̄,s_0)} P^∗(k̄, s_0, v_0).
Similar reasoning to that above establishes that any solution to (PA) satisfies (i) and (ii)
from the proposition. Conversely, let π ∗ = (α∗ , {k∗t , v∗t }) be a primal process satisfying (i)
and (ii) in the proposition. Feasibility of π ∗ for (PA) is immediate. Also,

|P^∗(k_0^∗, s_0, v_0^∗) − V^0(s_0, α^∗)| = |W^{V,0}[s_0, a_0^∗, M^{V,0}[s_0, a_0^∗, {P^∗(k_1^∗(s_0), s_1, v_1^∗(s_0))}_{s_1∈S}]]
    − W^{V,0}[s_0, a_0^∗, M^{V,0}[s_0, a_0^∗, {V^0(s_1, α^∗|s_1)}_{s_1∈S}]]|
  ≤ δ max_{s_1∈S} |P^∗(k_1^∗(s_0), s_1, v_1^∗(s_0)) − V^0(s_1, α^∗|s_1)|
  ≤ . . . ≤ δ^t max_{s^t∈S^t} |P^∗(k_t^∗(s^{t−1}), s_t, v_t^∗(s^{t−1})) − V^0(s_t, α^∗|s^t)|,   (35)

where the first equality uses property (ii) in the proposition and (10), the first inequality
uses the sub-additivity of MV,0 and the discounting property of W V,0 . The final inequality
follows from an iteration of these arguments. The boundedness of P∗ and V 0 and δ ∈
(0, 1) then implies that the final term in (35) converges to 0 as t converges to ∞. Hence,
P∗ (k∗0 , s0 , v0∗ ) = V 0 (s0 , α∗ ). Then using property (i) in the proposition and (9), we have that
P0∗ = P∗ (k∗0 , s0 , v0∗ ) = V 0 (s0 , α∗ ) and π ∗ is a solution to (PA).

C Proofs for Section 5
Proof of Proposition 3. We have:

D_0^∗ = inf_Q sup_P L(π, θ)
  = inf_Q sup_P F[s_0, v_0] + q_0^K · (k̄ − k_0) + q_0^V · (W^V[s_0, a_0, M^V[s_0, a_0, v_1]] − v_0)
    + q_0^H · H[k, s_0, a_0, v_1] + δ̄ ∑_{s_1∈S} ⟨θ, ζ(π)|s_1⟩
  = inf_Y sup_V q_0^K · k̄ + F[s_0, v_0] − q_0^V · v_0
    + inf_{Q(q_0^K, q_0^V)} sup_{P(v_0)} −q_0^K · k_0 + q_0^V · W^V[s_0, a_0, M^V[s_0, a_0, v_1]]
    + q_0^H · H[k, s_0, a_0, v_1] + δ̄ ∑_{s_1∈S} ⟨θ, ζ(π)|s_1⟩,

which combined with the definition of D^∗ gives the first equality in the proposition. For each (s, y) = (s, q^K, q^V) ∈ S × Y,

D^∗(s, y) = inf_{Q(y)} sup_{P(v_0)} −q^K · k_0 + q^V · W^V[s, a_0, M^V[s, a_0, v_1]]
    + q_0^H · H[k_0, s, a_0, v_1] + δ̄ ∑_{s′∈S} ⟨θ, ζ(π)|s′⟩
  = inf_{Q(y)} sup_{P(v_0)} −q^K · k_0 + q^V · W^V[s, a_0, M^V[s, a_0, v_1]] + q_0^H · H[k_0, s, a_0, v_1]
    − δ̄ ∑_{s′∈S} q_1^V(s′) · v_1(s′) + δ̄ ∑_{s′∈S} q_1^K(s′) · {W^K[k_0, s, a_0] − k_1(s′)}
    + δ̄ ∑_{s′∈S} { q_1^V(s′) · W^V[s′, a_1(s′), M^V[s′, a_1(s′), v_2(s′)]]
    + q_1^H(s′) · H[k_1, s′, a_1(s′), v_2(s′)] + δ̄ ∑_{s′′∈S} ⟨θ, ζ(π)|s′, s′′⟩ }.

Note that once q_0 = (q_0^H, q_1^K, q_1^V) is chosen, p_0 = (k_0, a_0, v_1) is independent of the remaining dual variables. Consequently, conditional on q_0 = (q_0^H, q_1^K, q_1^V), the infimum over these dual

variables and the supremum over p0 may be interchanged to give:

D^∗(s, y) = inf_Q sup_P −q_0^K · k_0 + q_0^V · W^V[s, a_0, M^V[s, a_0, v_1]]
    + q_0^H · H[k_0, s, a_0, v_1] − δ̄ ∑_{s′∈S} q_1^V(s′) · v_1(s′) + δ̄ ∑_{s′∈S} q_1^K(s′) · W^K[k_0, s, a_0]
    + δ̄ ∑_{s′∈S} inf_{Q(y_1(s′))} sup_{P(v_1(s′))} { −q_1^K(s′) · k_1(s′) + q_1^V(s′) · W^V[s′, a_1(s′), M^V[s′, a_1(s′), v_2(s′)]]
    + q_1^H(s′) · H[k_1, s′, a_1(s′), v_2(s′)] + δ̄ ∑_{s′′∈S} ⟨θ, ζ(π)|s′, s′′⟩ }
  = inf_Q sup_P −q_0^K · k_0 + q_0^V · W^V[s, a_0, M^V[s, a_0, v_1]] + q_0^H · H[k_0, s, a_0, v_1]
    − δ̄ ∑_{s′∈S} q_1^V(s′) · v_1(s′) + δ̄ ∑_{s′∈S} q_1^K(s′) · W^K[k_0, s, a_0] + δ̄ ∑_{s′∈S} D^∗(s′, y_1(s′)).

Combining the last equality with the definition of J gives the second equality in the proposition.
Proof of Proposition 4. Choose an arbitrary (s, y) = (s, q^K, q^V) ∈ S × Y = S × R^{n_k+n_v+1}. Since 0 ∈ Q is a feasible multiplier choice for the infimum in the continuation problem (12):

D^∗(s, y) = inf_{Q(y)} sup_{P(v_0)} −q^K · k_0 + q^V · W^V[s, a_0, M^V[s, a_0, v_1]]
    + q^H · H[k_0, s, a_0, v_1] + δ̄ ∑_{s′∈S} ⟨θ, ζ(π)|s′⟩
  ≤ sup_P −q^K · k_0 + q^V · W^V[s, a_0, M^V[s, a_0, v_1]] < ∞,

where the last inequality uses the boundedness of K × V. On the other hand, for a fixed feasible primal process π′ ∈ P and an arbitrary dual process in Q,

sup_P −q^K · k_0 + q^V · W^V[s, a_0, M^V[s, a_0, v_1]] + q^H · H[k_0, s, a_0, v_1] + δ̄ ∑_{s′∈S} ⟨θ, ζ(π)|s′⟩
  ≥ −q^K · k_0′ + q^V · W^V[s, a_0′, M^V[s, a_0′, v_1′]] > −∞.

And so, D^∗(s, y) ≥ −q^K · k_0′ + q^V · W^V[s, a_0′, M^V[s, a_0′, v_1′]] > −∞. Hence, D^∗(s, y) is real and, since (s, y) was arbitrary, D^∗ is real-valued on S × Y.
Proof of Proposition 5. (Only if) Let J_0(y_0, v_0) = F[s_0, v_0] − q_0^V · v_0 + q_0^K · k̄. Using this definition and that of J implies:

L(π, θ) = J_0(y_0, v_0) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J(s_t, y_t(s^t); q_t(s^t), p_t(s^t)).

Thus, if (θ ∗ , π ∗ ) ∈ Λ IS , then:

π^∗ ∈ argmax_P J_0(y_0^∗, v_0) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t(s^t)).

The above maximization can be decomposed into a collection of static maximizations, with v_0^∗ ∈ argmax_V J_0(y_0^∗, v_0) and p_t^∗(s^t) ∈ argmax_P J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t(s^t)). Let J_0^∗(y) = sup_V J_0(y, v_0) and J^∗(s, y; q) = sup_P J(s, y; q, p). Then:

D_0^∗ = J_0^∗(y_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t))
  ≤ J_0^∗(y_0) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t(s^t); q_t(s^t)),   θ ∈ Q.

In particular, the inequality holds for all θ with initial element y_0^∗ and so, since D^∗(s, y) = inf_{Q(y)} ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t(s^t); q_t(s^t)),

D_0^∗ = J_0^∗(y_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)) ≤ J_0^∗(y_0^∗) + D^∗(s_0, y_0^∗).

Conversely, since the continuation of θ^∗ lies in Q(y), the reverse inequality holds: D_0^∗ = J_0^∗(y_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)) ≥ J_0^∗(y_0^∗) + D^∗(s_0, y_0^∗). Hence D_0^∗ = J_0^∗(y_0^∗) + D^∗(s_0, y_0^∗) and y_0^∗ attains the minimum in (14). Consequently, (y_0^∗, v_0^∗) ∈ G_0^{IS}. Pursuing the same argument at successive histories gives that (q_t^∗(s^t), y_{t+1}^∗(s^t)) attains the minimum in (15) at (s_t, y_t^∗(s^t)) and so (q_t^∗(s^t), p_t^∗(s^t)) ∈ G^{IS}(s_t, y_t^∗(s^t)).
(If) Suppose (π^∗, θ^∗) is such that (q_0^{K∗}, q_0^{V∗}, v_0^∗) ∈ G_0^{IS} and, for each t ∈ N and s^t ∈ S^t, (q_t^{H∗}(s^t), y_{t+1}^∗(s^t), p_t^∗(s^t)) ∈ G^{IS}(s_t, y_t^∗(s^t)). The definitions of G_0^{IS} and G^{IS} imply that J_0(y_0^∗, v_0^∗) = sup_V J_0(y_0^∗, v_0) and J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t^∗(s^t)) = sup_P J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t(s^t)). Hence, for arbitrary π ∈ P,

L(π^∗, θ^∗) = J_0(y_0^∗, v_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t^∗(s^t))
  ≥ J_0(y_0^∗, v_0) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J(s_t, y_t^∗(s^t); q_t^∗(s^t), p_t(s^t))
  = L(π, θ^∗).

And so π^∗ ∈ argmax_P L(π, θ^∗). Let J_0^∗(y) = sup_V J_0(y, v_0) and J^∗(s, y; q) = sup_P J(s, y; q, p). Then:

D_0^∗ = inf_Q sup_P L(π, θ) = inf_Q J_0^∗(y_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)).

The definitions of G_0^{IS} and G^{IS} imply:

D_0^∗ = J_0^∗(y_0^∗) + D^∗(s_0, y_0^∗)

and

D^∗(s_t, y_t^∗(s^t)) = J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)) + δ̄ ∑_{s′∈S} D^∗(s′, y_{t+1}^∗(s^t, s′)).

Consequently, we have:

D_0^∗ = J_0^∗(y_0^∗) + ∑_{t=0}^T δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)) + δ̄^{T+1} ∑_{S^{T+1}} D^∗(s_{T+1}, y_{T+1}^∗(s^{T+1})).

Taking the limit as T goes to infinity and using the condition in the proposition implies that

D_0^∗ ≥ J_0^∗(y_0^∗) + ∑_{t=0}^∞ δ̄^t ∑_{S^t} J^∗(s_t, y_t^∗(s^t); q_t^∗(s^t)).

But since θ ∗ ∈ Q and so is feasible for the minimization defining Λ IS , the reverse inequality
holds and θ ∗ attains the minimum as required.

D Proofs for Section 6


Proof of Lemma 1. We begin with a simple general result. Let Ψ, Φ and Ω denote vector spaces and L : Ψ × Φ × Ω → R a real-valued function. Assume that for each ω ∈ Ω, L(·, ·, ω) is sub-linear. For ψ ∈ Ψ, let Λ(ψ) = inf_Φ sup_Ω L(ψ, φ, ω). We prove that Λ is sub-linear. We first show that Λ is convex. Let ψ_1 and ψ_2 be elements of Ψ and λ ∈ [0, 1]. Let ψ_λ = λψ_1 + (1 − λ)ψ_2. Assume that the infimum defining Λ is attained at ψ_i by some φ_i^∗, i = 1, 2. This assumption simplifies the exposition and can easily be dropped. Let φ_λ^∗ = λφ_1^∗ + (1 − λ)φ_2^∗. Then:

λΛ(ψ_1) + (1 − λ)Λ(ψ_2) = λ inf_Φ sup_Ω L(ψ_1; φ, ω) + (1 − λ) inf_Φ sup_Ω L(ψ_2; φ, ω)
  = λ sup_Ω L(ψ_1; φ_1^∗, ω) + (1 − λ) sup_Ω L(ψ_2; φ_2^∗, ω)
  ≥ sup_Ω {λL(ψ_1; φ_1^∗, ω) + (1 − λ)L(ψ_2; φ_2^∗, ω)}
  ≥ sup_Ω L(ψ_λ; φ_λ^∗, ω) ≥ inf_Φ sup_Ω L(ψ_λ; φ, ω) = Λ(ψ_λ),

where the second inequality uses the convexity of L(·, ·, ω ). Thus, Λ is convex. Next we
show homogeneity. Suppose that ψ ∈ Ψ and λ > 0. Then:

Λ(λψ) = inf_Φ sup_Ω L(λψ; φ, ω) = λ inf_Φ sup_Ω L(ψ; φ/λ, ω) = λΛ(ψ),

where the second equality uses the positive homogeneity of L(·, ·, ω). Thus, Λ is positively homogeneous of degree 1 and, combining results, sub-linear.
(i) For fixed s_0 ∈ S, define the "continuation Lagrangian":

M(y_0; q_0^H, {θ|s_1}, p, {ζ(π)|s_1}) = −q_0^K · k_0 + q_0^V · W^V[s_0, a_0, M^V[s_0, a_0, v_1]]
    + q_0^H · H[k_0, s_0, a_0, v_1] + δ̄ ∑_{s_1∈S} ⟨θ, ζ(π)|s_1⟩.

Setting ψ = y_0, Ψ = Y, φ = (q_0^H, {θ|s_1}_{s_1∈S}), Φ = Q(y_0), ω = (p, {π|s_1}_{s_1∈S}), Ω = P(v_0) and L(ψ; φ, ω) = M(y_0; q_0^H, {θ|s_1}, p, {ζ(π)|s_1}), it follows that for each ω, L(·; ·, ω) is linear and, hence, sub-linear. Applying the general result from the first part of the proof, D^∗(s_0, ·) is sub-linear. Since s_0 was arbitrary in S, D^∗ is sub-linear.
(ii) It is easy to verify that for each (s, p), J (s, ·; ·, p) is linear and, hence, sub-linear.
Assume that D is sub-linear. Then for each (s, p), J (s, ·; ·, p) + δ̄ ∑s′ ∈S D (s′ , ·) is sub-linear.
Consequently, the logic from the first part of the proof establishes that B(D)(s, ·),

B(D)(s, y) = inf_Q sup_P J(s, y; q, p) + δ̄ ∑_{s′∈S} D(s′, y′(s′)),

is sub-linear. Since s was arbitrary in S, B(D) is sub-linear.
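The general inf-sup result used in this proof can be checked numerically in a small instance. The sketch below takes an L that is linear in (ψ, φ) for each ω drawn from a finite set, so the inner supremum is a maximum of lines and the outer infimum over φ ∈ R can be computed exactly; all coefficients are invented for illustration.

```python
import numpy as np
from itertools import combinations

# Lambda(psi) = inf_phi max_j { psi*a_j + phi*b_j }: for each "omega" j,
# L is linear in (psi, phi), the setting of the general result opening
# the proof of Lemma 1.  The coefficients a, b are assumptions.
a = np.array([1.0, -0.5, 2.0, 0.3])
b = np.array([1.0, -1.0, 0.5, -2.0])   # both signs, so the inf is attained

def Lam(psi):
    # max_j (psi*a_j + phi*b_j) is piecewise linear and convex in phi;
    # its minimum is attained where two of the lines cross, so checking
    # all pairwise intersections computes the infimum exactly.
    cand = [0.0]
    for i, j in combinations(range(len(a)), 2):
        if b[i] != b[j]:
            cand.append(psi * (a[j] - a[i]) / (b[i] - b[j]))
    return min(np.max(psi * a + phi * b) for phi in cand)

psi1, psi2, lam = 1.3, -0.8, 0.25
# Convexity: partial minimization over phi preserves joint convexity.
assert Lam(lam*psi1 + (1-lam)*psi2) <= lam*Lam(psi1) + (1-lam)*Lam(psi2) + 1e-9
# Positive homogeneity of degree 1 (substitute phi -> phi/mu):
for mu in (0.5, 2.0, 7.0):
    assert abs(Lam(mu * psi1) - mu * Lam(psi1)) < 1e-9
```

Both assertions mirror the two steps of the proof: convexity via the shared minimizer, and homogeneity via the change of variable φ/λ.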


Proof of Lemma 2. Evidently, (G, d) is a metric space. Let {D_n} be a Cauchy sequence in G. Thus, as n, m → ∞,

d(D_n, D_m) = sup_{S×C} | ln( (D_n(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) − ln( (D_m(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) | → 0.

For each n ∈ N define g_n : S × C → R according to: g_n(s, y) = ln( (D_n(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ), (s, y) ∈ S × C. Let ḡ = 0 and let g̲ denote a uniform lower bound for the family {g_n}. It follows that {g_n} is Cauchy with respect to the sup-norm and that, for each n, g̲ ≤ g_n ≤ ḡ. By the completeness of the continuous, bounded functions from C to R, {g_n} converges in the sup-norm to a function g_∞, with each g_∞(s, ·) continuous and bounded and g̲ ≤ g_∞ ≤ ḡ. Use g_∞ to define the homogeneous function D_∞ as:

D_∞(s, y) = ‖y‖ [ D̲(s, y/‖y‖) + exp{ g_∞(s, y/‖y‖) } ( D̄(s, y/‖y‖) − D̲(s, y/‖y‖) ) ].

By construction, D̲ ≤ D_∞ ≤ D̄ and D_n → D_∞ in the metric d. Since D_∞ is the pointwise limit of a sequence of sub-linear and, hence, convex functions, it too is convex. Hence, it is in G.
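On a finite grid, the log-transform underlying this metric and its inversion formula can be sketched as follows; the bounding functions are written D_lo and D_hi below, and the grid and numerical values are invented for illustration.

```python
import numpy as np

# Sketch of the distance of Lemma 2 on a finite set of (s, y) points:
# d(D1, D2) = sup | ln((D1 - D_lo)/(D_hi - D_lo))
#                 - ln((D2 - D_lo)/(D_hi - D_lo)) |,
# with D_lo, D_hi standing in for the bounding functions.
rng = np.random.default_rng(0)
n = 200                                    # grid points standing in for S x C
D_lo = rng.uniform(0.0, 1.0, n)
D_hi = D_lo + rng.uniform(1.0, 2.0, n)     # D_hi - D_lo > 0, cf. Assumption 3(i)

def g(D):
    return np.log((D - D_lo) / (D_hi - D_lo))

def d(D1, D2):
    return np.max(np.abs(g(D1) - g(D2)))

# three functions strictly between the bounds
D1, D2, D3 = (D_lo + t * (D_hi - D_lo) for t in (0.2, 0.5, 0.9))

assert d(D1, D1) == 0.0
assert np.isclose(d(D1, D2), d(D2, D1))            # symmetry
assert d(D1, D3) <= d(D1, D2) + d(D2, D3) + 1e-9   # triangle inequality
# the transform is invertible: D = D_lo + exp(g(D)) * (D_hi - D_lo),
# which mirrors the construction of the limit D_infinity in the proof
assert np.allclose(D_lo + np.exp(g(D2)) * (D_hi - D_lo), D2)
```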
Proof of Proposition 6. Let G_0 denote the interval of real-valued functions between D̲ and D̄. Let D_1 and D_2 be any pair of functions in G_0 and let λ ∈ [0, 1]. Define, for each (s, y, q) ∈ S × Y × Q, J^∗(s, y, q) = sup_P J(s, y; q, p). Then, for each (s, y) ∈ S × Y,

B(λD_1 + (1 − λ)D_2)(s, y) = inf_Q J^∗(s, y, η, y′) + δ̄ ∑_{s′∈S} { λD_1(s′, y′(s′)) + (1 − λ)D_2(s′, y′(s′)) }
  = inf_Q { λ [ J^∗(s, y, η, y′) + δ̄ ∑_{s′∈S} D_1(s′, y′(s′)) ] + (1 − λ) [ J^∗(s, y, η, y′) + δ̄ ∑_{s′∈S} D_2(s′, y′(s′)) ] }
  ≥ λ inf_Q { J^∗(s, y, η, y′) + δ̄ ∑_{s′∈S} D_1(s′, y′(s′)) } + (1 − λ) inf_Q { J^∗(s, y, η, y′) + δ̄ ∑_{s′∈S} D_2(s′, y′(s′)) }
  = λB(D_1)(s, y) + (1 − λ)B(D_2)(s, y).

Thus, B is concave on G_0. Let D_1, D_2 ∈ G ⊂ G_0. By definition of d, for each (s, y) ∈ S × C,

ln( (D_2(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) ≤ ln( (D_1(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ) + d(D_1, D_2).

Taking the exponential of each side and rearranging gives:

exp{−d(D_1, D_2)} (D_2(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)) ≤ (D_1(s, y) − D̲(s, y)) / (D̄(s, y) − D̲(s, y)).

But, by Assumption 3 (i), D̄ − D̲ > 0 and so, after rearrangement,

D_1(s, y) ≥ exp{−d(D_1, D_2)} D_2(s, y) + (1 − exp{−d(D_1, D_2)}) D̲(s, y).   (36)

Since D_1, D_2 and D̲ are positively homogeneous of degree 1, this inequality holds at all (s, y) ∈ S × Y. Then, by monotonicity and concavity of B (on G_0),

B(D_1) ≥ B( exp{−d(D_1, D_2)} D_2 + (1 − exp{−d(D_1, D_2)}) D̲ )
  ≥ exp{−d(D_1, D_2)} B(D_2) + (1 − exp{−d(D_1, D_2)}) B(D̲).   (37)

By assumption there is an ε_1 > 0 such that for each (s, y) ∈ S × C, B(D̲)(s, y) > D̲(s, y) + ε_1. For (s, y) ∈ S × C, define:

λ(s, y) := ε_1 / ( D̄(s, y) − D̲(s, y) ).

Since D̄(s, y) ≥ B(D̄)(s, y) ≥ B(D̲)(s, y) > D̲(s, y) + ε_1, λ(s, y) ∈ (0, 1). Now, for each s ∈ S, D̄(s, ·) and D̲(s, ·) are continuous. Thus, λ(s, ·) is continuous and, since C is compact, there is a λ^∗ = min_{S×C} λ(s, y) ∈ (0, 1). Then, for all (s, y) ∈ S × C,

B(D̲)(s, y) > D̲(s, y) + ε_1 = λ(s, y) D̄(s, y) + (1 − λ(s, y)) D̲(s, y)
  ≥ λ^∗ D̄(s, y) + (1 − λ^∗) D̲(s, y)
  ≥ λ^∗ B(D_2)(s, y) + (1 − λ^∗) D̲(s, y),   (38)

where the first inequality is by assumption, the first equality uses the definition of λ(s, y), the second inequality uses the definition of λ^∗ and D̄ ≥ D̲, and the final inequality uses D̄ ≥ B(D̄) ≥ B(D_2). Combining (37) with (38) gives, for all (s, y) ∈ S × C,

B(D_1)(s, y) ≥ exp{−d(D_1, D_2)} B(D_2)(s, y) + (1 − exp{−d(D_1, D_2)}) [ λ^∗ B(D_2)(s, y) + (1 − λ^∗) D̲(s, y) ].

Letting r := exp{−d(D_1, D_2)} + (1 − exp{−d(D_1, D_2)}) λ^∗ then gives, for (s, y) ∈ S × C:

( B(D_1)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ≥ r ( B(D_2)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ).

Hence, taking logs, for (s, y) ∈ S × C,

ln( ( B(D_1)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) ≥ ln r + ln( ( B(D_2)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ).

But from the definition of r and Jensen's inequality:

ln r ≥ (1 − λ^∗) ln exp{−d(D_1, D_2)} + λ^∗ ln 1 = −(1 − λ^∗) d(D_1, D_2).

Thus, for (s, y) ∈ S × C,

(1 − λ^∗) d(D_1, D_2) ≥ −ln r ≥ ln( ( B(D_2)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) − ln( ( B(D_1)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ).   (39)

Repeating the argument with D_1 and D_2 interchanged and combining with (39) implies that for all (s, y) ∈ S × C,

(1 − λ^∗) d(D_1, D_2) ≥ | ln( ( B(D_2)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) − ln( ( B(D_1)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) |.

Consequently, letting ρ := (1 − λ^∗) ∈ (0, 1),

ρ d(D_1, D_2) ≥ sup_{S×C} | ln( ( B(D_2)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) − ln( ( B(D_1)(s, y) − D̲(s, y) ) / ( D̄(s, y) − D̲(s, y) ) ) | = d(B(D_1), B(D_2))

as desired.
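The closing step, that r = exp{−d} + (1 − exp{−d})λ^∗ is the convex combination (1 − λ^∗)exp{−d} + λ^∗ · 1 and hence satisfies ln r ≥ −(1 − λ^∗)d by concavity of the logarithm, is a pure scalar inequality and can be verified directly over a range of values:

```python
import numpy as np

# The modulus computation at the end of the proof of Proposition 6:
# r = exp(-d) + (1 - exp(-d)) * lam_star, and concavity of ln gives
# ln r >= -(1 - lam_star) * d, i.e. a contraction factor rho = 1 - lam_star.
for d in np.linspace(0.01, 10.0, 50):
    for lam_star in np.linspace(0.05, 0.95, 19):
        r = np.exp(-d) + (1.0 - np.exp(-d)) * lam_star
        # r is a convex combination of exp(-d) and 1 with weights
        # (1 - lam_star) and lam_star
        assert np.isclose(r, (1.0 - lam_star) * np.exp(-d) + lam_star)
        assert np.log(r) >= -(1.0 - lam_star) * d         # Jensen step
        # hence d(B(D1), B(D2)) <= -ln r <= (1 - lam_star) * d(D1, D2)
        assert -np.log(r) <= (1.0 - lam_star) * d + 1e-12
```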

Bounding value functions for Example 1. Assume, as in the main text, an ã ∈ A^{n_s} and a ξ > 0 such that for each s ∈ S, γ(s) > ∑_{i=0}^{1} ã^i(s), and for each s ∈ S and i ∈ {0, 1},

v̄ − ξ ≥ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δv̄ > (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ { ∑_{s′∈S} w^i(s′)^σ Q(s|s′) }^{1/σ} > w^i(s) + ξ.

Set:

D̄(s, q^V) = ∑_{i=0}^{1} q^{V,i} φ^i(q^{V,i}, s),   where φ^i(q^{V,i}, s) = v̄ if q^{V,i} ≥ 0 and v̲ if q^{V,i} < 0,

and

D̲(s, q^V) = ∑_{i=0}^{1} { q^{V,i} ψ^i(q^{V,i}, s) + |q^{V,i}| ξ },   where ψ^i(q^{V,i}, s) = w^i(s) if q^{V,i} ≥ 0 and v̄ if q^{V,i} < 0.

Given q^{V′} = {q^{V′,i}(s′)}, define ψ(q^{V′}) = {ψ^i(q^{V′,i}(s′), s′)} and note that the above definitions imply, for each s and q^{V′}, H[s, ã(s), ψ(q^{V′})] ≥ 0.
B(D) is given by, for all (s, q^V) ∈ S × Y,

B(D)(s, q^V) = inf_Q sup_P ∑_{i=0,1} (q^{V,i} + q^{H,i}) [ (1−δ)/(1−μ) (a^i)^{1−μ} + δ ( ∑_{s′∈S} v′^i(s′)^σ Q(s|s′) )^{1/σ} ]
    − ∑_{i=0,1} q^{H,i} w^i(s) − q^{H,2} ( ∑_{i=0,1} a^i − γ(s) ) − δ ∑_{s′∈S} q^{V′}(s′) · v′(s′) + δ ∑_{s′∈S} D(s′, q^{V′}(s′)).

Setting D = D̄, using the definition of v̄ and v̲ and noting that the dual variables (q^H, q^{V′}) can always be chosen equal to 0 in the infimum, we have B(D̄)(s, q^V) ≤ D̄(s, q^V). On the other hand, setting D = D̲ and noting that for any s and choice of (q^H, q^{V′}), (ã(s), ψ(q^{V′})) is a feasible choice for the supremum with H[s, ã(s), ψ(q^{V′})] ≥ 0, we have:

B(D̲)(s, q^V) ≥ inf_Q ∑_{i=0,1} (q^{V,i} + q^{H,i}) [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ ( ∑_{s′∈S} ψ^i(q^{V′,i}(s′), s′)^σ Q(s|s′) )^{1/σ} ]
    − ∑_{i=0,1} q^{H,i} w^i(s) − q^{H,2} ( ∑_{i=0,1} ã^i(s) − γ(s) )
  ≥ inf_Q ∑_{i=0,1} q^{V,i} [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ ( ∑_{s′∈S} ψ^i(q^{V′,i}(s′), s′)^σ Q(s|s′) )^{1/σ} ].

If q^{V,i} ≥ 0, then

inf_{q^{V′,i}} q^{V,i} [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ ( ∑_{s′∈S} ψ^i(q^{V′,i}(s′), s′)^σ Q(s|s′) )^{1/σ} ]
  ≥ q^{V,i} [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ ( ∑_{s′∈S} w^i(s′)^σ Q(s|s′) )^{1/σ} ] ≥ q^{V,i} ( w^i(s) + ξ ),

with the inequality strict if q^{V,i} > 0. Similarly, if q^{V,i} < 0, then

inf_{q^{V′,i}} q^{V,i} [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δ ( ∑_{s′∈S} ψ^i(q^{V′,i}(s′), s′)^σ Q(s|s′) )^{1/σ} ]
  ≥ q^{V,i} [ (1−δ)/(1−μ) (ã^i(s))^{1−μ} + δv̄ ] > q^{V,i} ( v̄ − ξ ).

Thus, for all (s, q^V) ∈ S × C, B(D̲)(s, q^V) > D̲(s, q^V). The continuity of each B(D̲)(s, ·) follows from the assumptions on D̲ and an argument of Rockafellar and Wets (1998) (Theorem 1.17, p. 16–17). The continuity of each D̲(s, ·) and B(D̲)(s, ·) and the compactness of C then imply that there is an ε > 0 such that for all (s, q^V) ∈ S × C, B(D̲)(s, q^V) > D̲(s, q^V) + ε, as required. 

Bounding value functions for Example 2. The verification of Assumption 3 is very similar to Example 1. The main differences are in showing that B(D̲) > D̲ + ε, ε > 0, on C. We detail the steps below. In the deterministic version of the default model, B(D) takes the form:

B(D)(q^K, q^V) = inf_{q^{K′}, q^{V′}, q^H} sup_{k, a, v′} −q^K · k + q^V · ( f(a) + δv′ ) + q^{H,1} ( f^1(a^1) + δv′^1 − w(k) )
    − δ q^{V′} · v′ + ( δq^{K′} + q^{H,2} ) ( γ(k) − ∑_{i=0,1} a^i ) + δ D( q^{V′}, q^{K′} ).

Let D = D̲ and define ã(q^{K′}) = (ã^0(q^{K′}), ã^1), with ã^0(q^{K′}) = ā^0 if q^{K′} > 0 and ã^0(q^{K′}) = a̲^0 otherwise. Note that given (q^K, q^V) and (q^H, q^{K′}, q^{V′}), (k̄, ã(q^{K′}), ψ(q^{V′})) is a feasible choice for the supremum that satisfies the no-default constraint. Also, for all possible q^{K′}, the component δq^{K′} ( γ(k̄) − ∑_{i=0,1} ã^i(q^{K′}) ) in the objective function exactly offsets δ ( q^{K′} ψ^K(q^{K′}) + |q^{K′}| ξ^K ), the K component of D̲. Consequently,

B(D̲)(q^K, q^V) ≥ inf_{q^{K′}, q^{V′}} −q^K k̄ + q^V · ( f(ã(q^{K′})) + δψ(q^{V′}) ).

Fix (q^K, q^V) ∈ C. The conditions placed on a̲^0, ā^0 and ã^1 in the main text and the same line of argument used in the preceding example establish that inf_{q^{K′}, q^{V′}} q^V · ( f(ã(q^{K′})) + δψ(q^{V′}) ) ≥ ∑_{i=0,1} { q^{V,i} ψ^{V,i}(q^{V,i}) + |q^{V,i}| ξ^V }, with the inequality strict whenever q^V ≠ 0. Also:

−q^K k̄ ≥ q^K ψ^K(q^K) − |q^K| ξ^K,

where the last inequality is strict whenever q^K ≠ 0: if q^K < 0, then (−q^K) k̄ > 0 > q^K ψ^K(q^K) − |q^K| ξ^K = q^K ξ^K; if q^K > 0, then −q^K k̄ > q^K ψ^K(q^K) − |q^K| ξ^K = −q^K k̄ − q^K ξ^K. Hence, for all y ∈ C, B(D̲)(y) > D̲(y). As before, the continuity of each B(D̲)(·) follows from the assumptions on D̲ and an argument of Rockafellar and Wets (1998). The continuity of each D̲(·) and B(D̲)(·) and the compactness of C then imply that there is an ε > 0 such that for all y ∈ C, B(D̲)(y) > D̲(y) + ε, as required. 

E Proofs for Section 8


Proof of Proposition 8. Equality of values follows from the proof of Luenberger (1969), The-
orem 2, p. 221, following a small extension to accommodate equality constraints. If π ∗
solves (AP) and L admits a saddle point, then, again by a small extension to the proof of
Luenberger (1969), Theorem 2, p. 221, there is a θ0∗ that attains the infimum in (IS) and is
such that π ∗ maximizes L . The result then follows from Proposition 5.
Proof of Proposition 9. Part (i). Since for each θ ∗ ∈ Q ∗ , every element of P ∗ is maximal
for L (·, θ ∗ ) and since all elements of argmaxP L (·, θ ∗ ) share a plan α∗ , it follows that all
elements of P ∗ share a plan α∗ . That α∗ is a solution for (P) then follows from Luenberger
(1969), Theorem 2, p. 221 and Proposition 1. That α∗ is the unique solution of (P) follows
from Proposition 1 and the fact that all solutions to (AP) belong to P ∗ and, hence, all
share the plan α^∗. Part (ii). Suppose (π, θ), with π = (α, {k_t, v_t}_{t=0}^∞), satisfies the condition in part (ii) of the proposition. Then, by Proposition 5, (π, θ) solves (IS). In addition, θ ∈ Q^∗ and, since π ∈ argmax_P L, it follows that α = α^∗ and is optimal for (P).
Proof of Proposition 10. Condition (i) and the weak duality inequality imply F[s0 , V (s0 , α̂)] ≥
D0∗ ≥ P0∗ . On the other hand, Condition (ii) implies P0∗ ≥ F[s0 , V (s0 , α̂)] and, hence π̂
solves (AP) and D0∗ = P0∗ . In addition, from Proposition 5, if (π̂, θ̂ ) satisfies Condition (T),
then it is a solution to the dual problem (IS). Also, L(π̂, θ̂) = sup_π L(π, θ̂) = D_0^∗ = P_0^∗ = inf_θ L(π̂, θ), where the first and second equalities use the fact that (π̂, θ̂) solves the dual, the third uses D_0^∗ = P_0^∗ and the fourth the fact that π̂ solves (AP) and, hence, maximizes inf_θ L(π, θ) and attains P_0^∗. Thus, π̂ solves max_π L(π, θ̂), θ̂ solves min_θ L(π̂, θ), and (π̂, θ̂) is a saddle point of L.

F Relaxation
We first consider the augmented problem without backward-looking state variables. The
relaxed version of this problem is:
sup F[s0 , v0 ] (R-AP)

subject to π ∈ P and, for all t, s^t,

W^V[s_t, a_t(s^t), M^V[s_t, a_t(s^t), v_{t+1}(s^t)]] ≥ v_t(s^t),   (40)
H[s_t, a_t(s^t), v_{t+1}(s^t)] ≥ 0.   (41)

If F[s_0, ·] is concave in v_0, for each s, W^V[s, ·, ·] is jointly concave in (a, m) (recall that by assumption it is increasing in m), M^V[s, ·, ·] and H[s, ·, ·] are jointly concave in (a, v′) and a Slater condition holds, then equality of primal and dual values and existence of a minimizing multiplier in the dual is established by standard arguments. Assumption 4 gives sufficient conditions for relaxation to leave optimal values and primal solutions unaffected. Below, for x, x′ ∈ R^n, we write x > x′ if x ≥ x′ and x ≠ x′. Also, for v ∈ V^{n_s} ⊂ R^{n_s n_v} and d ∈ R^{n_v}, let v +_{s′} d denote the addition of d to the ((s′ − 1)n_v + 1) to s′n_v elements of v.

Assumption 4. For all (s, a, v) ∈ S × A × V^{n_s} and (s′, a′, v′) ∈ S × A × V^{n_s} such that (i) H[s, a, v], H[s′, a′, v′] ≥ 0 and (ii) W^V[s′, a′, M^V[s′, a′, v′]] > v(s′), there is a pair of directions (d_1, d_2) ∈ R_+^{n_v} × R^{n_a}, d_1 > 0, such that (i) (v(s′) + d_1, a′ + d_2) ∈ V × A, (ii) H[s, a, v +_{s′} d_1] ≥ H[s, a, v] and H[s′, a′ + d_2, v′] ≥ H[s′, a′, v′], (iii) W^V[s, a, M^V[s, a, v +_{s′} d_1]] > W^V[s, a, M^V[s, a, v]], (iv) W^V[s′, a′ + d_2, M^V[s′, a′ + d_2, v′]] ≥ v(s′) + d_1.

Proposition 11. If (AP) features no backward-looking state variables and Assumption 4 holds,
then the optimal value from the relaxed problem (R-AP) equals that from (AP) and any solution to
(AP) also solves the relaxed problem. If F[s0 , ·] is increasing then, in addition, any solution to the
relaxed problem also solves (AP).

Proof. Let π be feasible for the relaxed problem and suppose at some ŝt = (ŝt−1 , ŝt ),
W V [ŝt , at (ŝt ), MV [ŝt , at (ŝt ), vt+1 (ŝt )]] > vt (ŝt ). By Assumption 4, there is a feasible per-
turbation (d1t , d2t ), d1t > 0, such that:

W V [ŝt−1 , at−1 (ŝt−1 ),MV [ŝt−1 , at−1 (ŝt−1 ), vt (ŝt−1 ) +ŝt d1t ]]
> W V [ŝt−1 , at−1 (ŝt−1 ), MV [ŝt−1 , at−1 (ŝt−1 ), vt (ŝt−1 )]] ≥ vt−1 (ŝt−1 ).

Reset vt (ŝt ) to vt (ŝt ) + d1t and at (ŝt ) to at (ŝt ) + d2t . After this adjustment

W V [ŝt−1 , at−1 (ŝt−1 ), MV [ŝt−1 , at−1 (ŝt−1 ), vt (ŝt−1 )]] > vt−1 (ŝt−1 ).

Repeating the argument at successively shorter histories, there is a (d11 , d21 ), d11 > 0, such
that:

W V [s0 , a0 ,MV [s0 , a0 , v1 +ŝ1 d11 ]] > W V [s0 , a0 , MV [s0 , a0 , v1 ]] ≥ v0 .

Reset v0 to equal W V [s0 , a0 , MV [s0 , a0 , v1 +ŝ1 d11 ]]. Applying this argument at all histories
such that W V [st , at (st ), MV [st , at (st ), vt+1 (st )]] > vt (st ) holds, a primal process feasible for
(AP) is constructed with initial forward-looking state variable v0 greater than that of the
original process. Since F[s0 , ·] is assumed non-decreasing the new primal process has
a payoff no less than the original process. Consequently, the optimal payoff from (AP)
equals that from the relaxed problem and any solution to (AP) also solves the relaxed

problem. On the other hand, if F[s0 , ·] is increasing, then the constructed process has a
payoff strictly above the original process and so any solution to the relaxed problem must
be feasible and, hence, optimal for the original problem.
The simplest situation in which Assumption 4 is satisfied occurs when H is increasing in its third argument. Then d_2 may be set equal to 0 and d_1 = W^V[s′, a′, M^V[s′, a′, v′]] − v(s′) > 0. Since W^V[s′, a′, M^V[s′, a′, v′]] ∈ V(s′), (i) is satisfied. Since H is increasing in its third argument, (ii) is satisfied. The monotonicity properties of W^V and M^V imply that (iii) holds, and (iv) holds by construction. These conditions are satisfied in standard limited commitment problems such as the Epstein-Zin example in Section 3.
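The content of Proposition 11 can be illustrated with a one-period toy problem: when F is increasing, relaxing the promise-keeping equality to an inequality does not change the optimal value, because any slack can be removed by raising the promise. The functions F and W below are invented for illustration.

```python
import numpy as np

# Toy illustration of the relaxation argument (no backward-looking states):
# original problem  sup F(v0) s.t. v0 = W(a),
# relaxed problem   sup F(v0) s.t. v0 <= W(a),
# with F increasing, so both have the same value.
a_grid = np.linspace(0.0, 1.0, 201)
v_grid = np.linspace(0.0, 2.0, 401)

def W(a):
    return 1.0 + a - a**2          # stand-in aggregator: promise implied by a

def F(v):
    return np.sqrt(v)              # increasing objective in the promise v0

# original problem: substitute the equality constraint out
val_eq = max(F(W(a)) for a in a_grid)

# relaxed problem: the best (a, v0) pairs v0 with the action maximizing W
W_max = max(W(a) for a in a_grid)
val_rel = max(F(v) for v in v_grid if v <= W_max + 1e-12)

assert np.isclose(val_eq, val_rel)   # relaxation leaves the value unchanged
```

With a decreasing F the two values would in general differ, which is why monotonicity of F[s_0, ·] appears in the proposition.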
The analysis may be extended to problems with backward-looking states. In relaxed
problems with backward-looking states, the law of motion for such states is replaced
with the inequalities W K [kt (st−1 ), st , at (st )] ≥ kt+1 (st ) (and the law of motion for forward-
looking states is relaxed as in (40)). Modify Assumption 4 as in Assumption 5 below.

Assumption 5. For all (k, s, a, v) ∈ K × S × A × V^{n_s} and (k′, s′, a′, v′) ∈ K × S × A × V^{n_s} such that (i) H[k, s, a, v], H[k′, s′, a′, v′] ≥ 0 and (ii) either W^V[s′, a′, M^V[s′, a′, v′]] > v(s′) or W^K[k, s, a] > k′, there is a triple (d_0, d_1, d_2) ∈ R^{n_k} × R^{n_v} × R^{n_a} such that (i) H[k, s, a, v +_{s′} d_1] ≥ H[k, s, a, v] and H[k′ + d_0, s′, a′ + d_2, v′] ≥ H[k′, s′, a′, v′], (ii) W^V[s, a, M^V[s, a, v +_{s′} d_1]] > W^V[s, a, M^V[s, a, v]], (iii) W^V[s′, a′ + d_2, M^V[s′, a′ + d_2, v′]] > v(s′) + d_1 and (iv) W^K[k, s, a] > k′ + d_0 and W^K[k′ + d_0, s′, a′ + d_2] > W^K[k′, s′, a′].

The proof of the following proposition is similar to Proposition 11.

Proposition 12. If (AP) satisfies Assumption 5, then the optimal value from the relaxed problem
(with both backward and forward-looking state variables) equals that from (AP) and any solution
to (AP) also solves the relaxed problem. If F[s0 , ·] is increasing then, in addition, any solution to
the relaxed problem also solves (AP).

Proposition 12 is directly applicable to the contracting problem in Cooley, Marimon,


and Quadrini (2004). This is a limited commitment problem with default in which the
production function is strictly concave, but the outside option affine in capital (e.g. the
entrepreneur can sell off some capital after default). The law of motion for capital can
be relaxed without affecting the optimal solution (since it will never be optimal to throw
resources away, they can always be used to raise consumption of the entrepreneur or the
lender). Similarly, the law of motion for utility promises can be relaxed since the constraint
H is increasing in such promises (or, since the law of motion is quasi-linear in agent utility
promises, it may be substituted out).
