Chapter IV: Stochastic Processes in Discrete Time
§1. Filtrations.
The information available at time n is modelled by a σ-field Fn; information accumulates with time, so

Fn ⊂ Fn+1 (n = 0, 1, 2, · · · ),

and

F∞ := limn→∞ Fn (the σ-field generated by all the Fn)

represents all we ever will know (the 'Doomsday σ-field'). Often, F∞ will be
F (the σ-field from Ch. II, representing 'knowing everything'). But this will
not always be so; see e.g. [W], §15.8 for an interesting example.
Such a family {Fn : n = 0, 1, 2, · · · } is called a filtration; a probabil-
ity space endowed with such a filtration, {Ω, {Fn }, F, P } is called a filtered
probability space. (These definitions are due to P.- A. MEYER of Strasbourg;
Meyer and the Strasbourg (and more generally, French) school of probabilists
have been responsible for the ‘general theory of [stochastic] processes’, and
for much of the progress in stochastic integration, since the 1960s.) Since
the filtration is so basic to the definition of a stochastic process, the more
modern term for a filtered probability space is a stochastic basis.
The natural filtration of a stochastic process X = (Xn) is

Fn = σ(X0 , X1 , · · · , Xn ),

the information generated by the process itself up to time n. If instead the information comes from observing some driving process W = (Wn), say

Fn = σ(W0 , W1 , · · · , Wn ),

then a process X is adapted to (Fn) (each Xn is Fn-measurable) precisely when each Xn is a (measurable) function of the observations so far:

Xn = fn (W0 , W1 , · · · , Wn ).
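As a toy illustration (the particular processes and names below are my own, purely for concreteness), here is a short Python sketch in which the driving noise W is a sequence of coin tosses and an adapted process X uses only W0 , · · · , Wn at time n, in the spirit of Xn = fn (W0 , W1 , · · · , Wn ).

```python
import random

# Toy illustration (hypothetical example, not from the notes): the driving
# noise W is a sequence of fair coin tosses; an adapted process may use only
# W_0, ..., W_n at time n.
random.seed(0)
N = 10
W = [random.choice([-1, 1]) for _ in range(N + 1)]   # W_0, ..., W_N

partial_sums, X = [], []
s = 0
for n in range(N + 1):
    s += W[n]
    partial_sums.append(s)
    # X_n = f_n(W_0, ..., W_n): running maximum of the partial sums so far,
    # computable from the information available at time n, hence adapted.
    X.append(max(partial_sums))

# By contrast, Y_n = maximum over the WHOLE path 'peeks into the future',
# so Y is NOT adapted to the natural filtration of W.
Y = [max(partial_sums)] * (N + 1)

print("W          :", W)
print("adapted X  :", X)
print("non-adapted:", Y)
```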
§3. Discrete-Parameter Martingales.
From the OED: martingale (etymology unknown)
1. 1589. An article of harness, to control a horse’s head.
2. Naut. A rope for guying down the jib-boom to the dolphin-striker.
3. A system of gambling which consists in doubling the stake when losing in
order to recoup oneself (1815).
Thackeray: ‘You have not played as yet? Do not do so; above all avoid a
martingale if you do.’
Problem. Analyse this strategy.
Gambling games have been studied since time immemorial - indeed, the
Pascal-Fermat correspondence of 1654 which started the subject was on a
problem (de Méré’s problem) related to gambling.
The doubling strategy above has been known at least since 1815.
The term ‘mg’ in our sense is due to J. VILLE (1939). Martingales were
studied by Paul LÉVY (1886-1971) from 1934 on [see obituary, Annals of
Probability 1 (1973), 5-6] and by J. L. DOOB (1910-2004) from 1940 on.
The first systematic exposition was Doob’s book [D], Ch. VII.
Definition. An adapted sequence M = (Mn ) with each Mn ∈ L1 is a martingale (mg) if E[Mn |Fn−1 ] = Mn−1 (n ≥ 1); a supermartingale if E[Mn |Fn−1 ] ≤ Mn−1 ; a submartingale if E[Mn |Fn−1 ] ≥ Mn−1 .

Example: Accumulating data about a random variable ([W], 96, 166-167).
If ξ ∈ L1 (Ω, F, P ), Mn := E(ξ|Fn ) (so Mn represents our best estimate of ξ
based on knowledge at time n), then
E[Mn |Fn−1 ] = E[E(ξ|Fn )|Fn−1 ]
= E[ξ|Fn−1 ] (iterated conditional expectations)
= Mn−1 ,
so (Mn ) is a mg. One has the convergence (see IV.4 below)
Mn → M∞ := E[ξ|F∞ ] a.s. and in L1 .
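A minimal numerical sketch of this example (the particular ξ below is my own choice, not from the notes): take ξ = X1 + · · · + XN with independent tosses Xk = ±1, P(Xk = +1) = p; then Mn = E(ξ|Fn ) = Sn + (N − n)(2p − 1), and one can watch Mn converge to ξ = M∞ as the data accumulate (here F∞ = FN ).

```python
import random

# Sketch (assumed set-up): xi = X_1 + ... + X_N, independent tosses X_k = +/-1
# with P(X_k = +1) = p.  Then M_n = E[xi | F_n] = S_n + (N - n)(2p - 1):
# the tosses seen so far, plus the expected value of those still to come.
random.seed(1)
N, p = 20, 0.6
X = [1 if random.random() < p else -1 for _ in range(N)]

S = 0
M = [N * (2 * p - 1)]                       # M_0 = E[xi]: no data yet
for n in range(1, N + 1):
    S += X[n - 1]
    M.append(S + (N - n) * (2 * p - 1))     # M_n = E[xi | F_n]

xi = S                                      # realised value of xi = S_N
print("M_0, ..., M_N:", [round(m, 2) for m in M])
print("M_N == xi    :", M[-1] == xi)        # M_N = E[xi | F_N] = xi
```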
§4. Martingale Convergence.
Theorem (Doob). An L1-bounded supermartingale X = (Xn ) (i.e. one with supn E|Xn | < ∞) is a.s. convergent: there exists X∞ , a.s. finite, such that

Xn → X∞ (n → ∞) a.s.
In particular, this applies to any non-negative supermartingale: then E|Xn | = E[Xn ] ≤ E[X0 ], so X is automatically L1-bounded, and Xn converges a.s.

We say that

Xn → X∞ in L1

if

E[ |Xn − X∞ | ] → 0 (n → ∞).
For a class of martingales, one gets convergence in L1 as well as almost
surely [= with probability one]. Such martingales are called uniformly in-
tegrable (UI) [W], or regular [N], or closed (see below). They are 'the nice
ones'. Fortunately, they are the ones we need.
The following result is in [N], IV.2, [W], Ch. 14; cf. SP L18-19, SA L6.

Theorem. For a martingale X = (Xn ), the following are equivalent:
(i) X is UI;
(ii) Xn converges a.s. and in L1 (to X∞ , say);
(iii) X is closed: there is an integrable X∞ with

Xn = E[X∞ |Fn ],

equivalently, there is some X ∈ L1 with

Xn = E[X|Fn ].

Here one may take X∞ = limn Xn ; X∞ is called the closing value. As time passes, more and more of the closing value
is revealed, by 'progressive revelation' – as in (choose your metaphor) a
striptease, or the 'Day of Judgement' (when all will be revealed).
As we shall see (Risk-Neutral Valuation Formula): closed mgs are vital
in mathematical finance, and the closing value corresponds to the payoff of
an option.
We write ∆Xn := Xn − Xn−1 for the increment of a process X at time n, and C • X for the martingale transform of X by C:

(C • X)0 := 0, (C • X)n := C1 ∆X1 + · · · + Cn ∆Xn (n ≥ 1),

where C = (Cn ) is previsible (each Cn is Fn−1 -measurable) and bounded.

Theorem ('you can't beat the system'). If C is bounded, non-negative and previsible and X is a supermartingale, then C • X is a supermartingale null at zero; if X is a martingale (here C need not be non-negative), C • X is a martingale null at zero.

Proof.
With Y = C • X as above, Yn − Yn−1 = Cn ∆Xn , so, as Cn is bounded and Fn−1 -measurable,

E[Yn − Yn−1 |Fn−1 ] = Cn E[Xn − Xn−1 |Fn−1 ]
≤ 0 (X a supermartingale, Cn ≥ 0)
= 0 (X a martingale).

So Y is a supermartingale (martingale, respectively), null at zero, as required.
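A small simulation in the spirit of the theorem (the particular staking rule below is an illustrative choice): against a fair ±1 game, any bounded previsible stake Cn , chosen from the history up to time n − 1, leaves the expected gain (C • X)n equal to zero.

```python
import random

# 'You can't beat the system' in simulation (staking rule chosen for the
# sketch): X_n - X_{n-1} are fair +/-1 tosses; the stake C_n is previsible,
# i.e. decided from the history only.  Here: stake 1 after a loss, 2 after a
# win -- any bounded previsible rule gives expected gain zero.
random.seed(2)

def gain_of_strategy(n_steps):
    gain, last_increment = 0.0, 1
    for _ in range(n_steps):
        stake = 1 if last_increment < 0 else 2    # C_n: uses the past only
        last_increment = random.choice([-1, 1])   # the new, unseen toss
        gain += stake * last_increment            # increment of (C . X)_n
    return gain

samples = [gain_of_strategy(50) for _ in range(50_000)]
print("average gain over", len(samples), "runs:", sum(samples) / len(samples))
```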
Proposition. An adapted sequence M = (Mn ) with each Mn ∈ L1 is a martingale if and only if

E[H1 ∆M1 + · · · + Hn ∆Mn ] = 0 (n ≥ 1)

for every bounded previsible sequence (Hn ).

Proof.
If (Mn ) is a martingale, X defined by X0 = 0, Xn = H1 ∆M1 + · · · + Hn ∆Mn (n ≥ 1) is the martingale transform H • M , so is a martingale null at zero; hence E[Xn ] = E[X0 ] = 0.
Conversely, if the condition of the Proposition holds, choose j, and for
any Fj -measurable set A write Hn = 0 for n ≠ j + 1, Hj+1 = IA . Then
(Hn ) is previsible, so the condition of the Proposition, E[H1 ∆M1 + · · · + Hn ∆Mn ] = 0,
becomes
E[IA (Mj+1 − Mj )] = 0.
As this holds for every A ∈ Fj , the definition of conditional expectation gives
E[Mj+1 |Fj ] = Mj .
Definition. A random variable T taking values in {0, 1, 2, · · · ; +∞} is called a stopping time if

{T ≤ n} = {ω : T (ω) ≤ n} ∈ Fn ∀n ≤ ∞.

Equivalently,

{T = n} ∈ Fn ∀n ≤ ∞.
Think of T as a time at which you decide to quit a gambling game: whether
or not you quit at time n depends only on the history up to and including
time n – NOT the future. [Elsewhere, T denotes the expiry time of an option.
If we mean T to be a stopping time, we will say so.]
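A toy check of the distinction (the particular T and L below are my own examples): for a random walk, 'the first time the walk hits +1' is a stopping time, since deciding whether T ≤ n needs only S0 , · · · , Sn ; 'the last visit to 0 before time N' is not, since deciding it requires looking into the future.

```python
import random

# Toy illustration (example mine): T = first time the walk reaches +1 is a
# stopping time -- {T <= n} is decided by S_0, ..., S_n alone.  L = last visit
# to 0 before time N is NOT a stopping time: to know a visit is the last one,
# you must look into the future.
random.seed(3)
N = 30
S = [0]
for _ in range(N):
    S.append(S[-1] + random.choice([-1, 1]))

T = next((n for n, s in enumerate(S) if s == 1), None)   # stopping time
L = max(n for n, s in enumerate(S) if s == 0)             # needs the whole path

print("path S_0..S_N      :", S)
print("first hit of +1 (T):", T)   # None if +1 not reached by time N
print("last visit to 0 (not a stopping time):", L)
```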
The following important classical theorem is discussed in [W], 10.10.

Theorem (Doob's Optional Stopping Theorem, OST). Let T be a stopping time and X = (Xn ) a supermartingale. Then XT is integrable and

E[XT ] ≤ E[X0 ]

in each of the following cases:
(i) T is bounded;
(ii) X is bounded and T is a.s. finite;
(iii) E[T ] < ∞ and, for some constant K, |Xn − Xn−1 | ≤ K for all n.
If, in addition, X is a martingale, then

E[XT ] = E[X0 ].
The OST is important in many areas, such as sequential analysis in
statistics. We turn in the next section to related ideas specific to the gam-
bling/financial context.
Write X^T := (Xn∧T ) for the sequence (Xn ) stopped at time T .

Proposition. If X is a supermartingale and T is a stopping time, the stopped process X^T is a supermartingale; if X is a martingale, so is X^T .

Proof. If φj := I{j ≤ T },

XT∧n = X0 + φ1 (X1 − X0 ) + · · · + φn (Xn − Xn−1 ).

Now {j ≤ T } is the complement of {T ≤ j − 1} ∈ Fj−1 , so (φj ) is previsible; it is also bounded and non-negative. So X^T − X0 = φ • X is a supermartingale (martingale) whenever X is, by the Theorem above.
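A quick Monte Carlo illustration of the Proposition (the choice of X and T is mine): for the fair ±1 walk Sn and T the first time Sn = 1, the stopped process Sn∧T is still a martingale, so E[Sn∧T ] = 0 for every fixed n, even though ST = 1 whenever T is finite (cf. Example 1 of §9).

```python
import random

# Monte Carlo sketch: X = fair +/-1 walk, T = first hit of +1.  The stopped
# process X_{n ^ T} is still a martingale, so E[X_{n ^ T}] = 0 for each fixed
# n, even though X_T = 1 whenever T < infinity (cf. Example 1 of Section 9).
random.seed(6)

def stopped_value(n_fixed):
    s = 0
    for _ in range(n_fixed):
        if s == 1:                       # already stopped at T
            return s
        s += random.choice([-1, 1])
    return s                             # value of X_{n ^ T}

n_fixed, runs = 50, 50_000
estimate = sum(stopped_value(n_fixed) for _ in range(runs)) / runs
print("estimated E[X_{n ^ T}] for n = 50:", estimate)   # close to 0
```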
Definition. If Z = (Zn ) (n = 0, 1, · · · , N ) is a sequence adapted to a filtration (Fn ), the sequence U = (Un ) (n = 0, 1, · · · , N ) defined by the backward recursion

UN := ZN , Un := max(Zn , E[Un+1 |Fn ]) (n = N − 1, N − 2, · · · , 0)
is called the Snell envelope of Z (J. L. Snell in 1952; [N] Ch. 6). U is adapted,
i.e. Un ∈ Fn for all n. For, Z is adapted, so Zn ∈ Fn . Also E[Un+1 |Fn ] ∈ Fn
(definition of conditional expectation). Combining, Un ∈ Fn , as required.
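A concrete computational sketch (the binomial model, its parameters and the put payoff are illustrative assumptions, not part of the notes): on a finite binomial tree, the backward recursion UN = ZN , Un = max(Zn , E[Un+1 |Fn ]) is computed node by node; anticipating the application to American options mentioned just below, Z is taken to be an American put payoff.

```python
# Sketch: Snell envelope on a recombining binomial tree.  Illustrative model
# (assumptions, not from the notes): S_{n+1} = S_n * u or S_n * d with
# probability p / 1-p; payoff Z_n = max(K - S_n, 0), an American put with
# zero interest rate.
def snell_envelope(S0=100.0, u=1.1, d=0.9, p=0.5, K=100.0, N=4):
    # stock[n][j] = price after j up-moves out of n steps
    stock = [[S0 * u**j * d**(n - j) for j in range(n + 1)] for n in range(N + 1)]
    Z = [[max(K - s, 0.0) for s in level] for level in stock]   # payoff process
    U = [row[:] for row in Z]                                   # U_N = Z_N
    for n in range(N - 1, -1, -1):                              # backward recursion
        for j in range(n + 1):
            cont = p * U[n + 1][j + 1] + (1 - p) * U[n + 1][j]  # E[U_{n+1} | F_n]
            U[n][j] = max(Z[n][j], cont)                        # U_n = max(Z_n, E[...])
    return stock, Z, U

stock, Z, U = snell_envelope()
print("U_0 (value of the optimal stopping problem):", round(U[0][0], 4))
print("Z_0 (payoff from stopping immediately)     :", Z[0][0])
```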
The Snell envelope (see IV.8 L20) is exactly the tool needed in pricing
American options. It is the least supermg majorant (also called the réduite
or reduced function – crucial in the mathematics of gambling):

Proposition. U is a supermartingale dominating Z (Un ≥ Zn for all n), and it is the smallest such: any supermartingale dominating Z also dominates U.

Proof.
First, Un ≥ E[Un+1 |Fn ], so U is a supermartingale, and Un ≥ Zn , so U
dominates Z.
Next, let T = (Tn ) be any other supermartingale dominating Z; we must
show T dominates U also. First, since UN = ZN and T dominates Z, TN ≥
UN . Assume inductively that Tn ≥ Un . Then, since T is a supermartingale and dominates Z,

Tn−1 ≥ max(Zn−1 , E[Tn |Fn−1 ]) ≥ max(Zn−1 , E[Un |Fn−1 ]) = Un−1 .

By backward induction, Tn ≥ Un for all n: T dominates U , as required.
Proposition. T0 := min{n ≥ 0 : Un = Zn } is a stopping time, and the stopped process U^{T0} = (Un∧T0 ) is a martingale.

We omit the proof (not hard, but fiddly – for details, see e.g. L13, 2014).
Because U is a supermartingale, we knew that stopping it would give a su-
permartingale, by the Proposition of §6. The point is that, using the special
properties of the Snell envelope, we actually get a martingale.
Write Tn,N for the set of stopping times taking values in {n, n+1, · · · , N }
(a finite set, as Ω is finite). We next see that the Snell envelope solves the
optimal stopping problem: it maximises the expectation of our final value of
Z – the value when we choose to quit – conditional on our present (publicly
available) information. This is the best we can hope to do in practice (without cheating – insider trading, etc.).

Theorem. With T0 := min{n ≥ 0 : Un = Zn } as above, T0 ∈ T0,N and

U0 = E[ZT0 |F0 ] = max{E[ZT |F0 ] : T ∈ T0,N }.

Proof.
U0 = U0∧T0 (since 0 = 0 ∧ T0 )
= E[UN∧T0 |F0 ] (by the martingale property of the stopped process U^{T0} )
= E[UT0 |F0 ] (since T0 = T0 ∧ N )
= E[ZT0 |F0 ] (since UT0 = ZT0 ),
proving the first statement. Now for any stopping time T ∈ T0,N , since U is
a supermartingale (above), so is the stopped process U^T = (Un∧T ) (§6). So

U0 = U0∧T (0 = 0 ∧ T , as above)
≥ E[UN∧T |F0 ] (U^T a supermartingale)
= E[UT |F0 ] (T = T ∧ N )
≥ E[ZT |F0 ] ((Un ) dominates (Zn )),

proving the second statement.
The same argument, starting at time n rather than time 0, gives an ap-
parently more general version:
Theorem. If Tn := min{j ≥ n : Uj = Zj }, then Tn ∈ Tn,N and

Un = E[ZTn |Fn ] = max{E[ZT |Fn ] : T ∈ Tn,N }.
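To read the optimal rule off the Snell envelope, mark at each node whether U meets Z and stop at the first such time. The short self-contained sketch below repeats the illustrative binomial-put construction used earlier (same hypothetical parameters) and prints, per node, whether immediate stopping is optimal.

```python
# Self-contained continuation of the binomial-put sketch above (same
# hypothetical parameters, N = 3): build Z and U, then apply the rule
# 'stop at the first node where U = Z'.
S0, u, d, p, K, N = 100.0, 1.1, 0.9, 0.5, 100.0, 3
stock = [[S0 * u**j * d**(n - j) for j in range(n + 1)] for n in range(N + 1)]
Z = [[max(K - s, 0.0) for s in level] for level in stock]
U = [row[:] for row in Z]                                       # U_N = Z_N
for n in range(N - 1, -1, -1):
    for j in range(n + 1):
        U[n][j] = max(Z[n][j], p * U[n + 1][j + 1] + (1 - p) * U[n + 1][j])

for n in range(N + 1):
    # 'stop' marks nodes where U_n = Z_n, i.e. where T_n = n is optimal
    print("time", n, ":",
          ["stop" if U[n][j] == Z[n][j] else "continue" for j in range(n + 1)])
```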
§8. Doob Decomposition.
Theorem. Let X = (Xn ) be an adapted process with each Xn ∈ L1 . Then
X has an (essentially unique) Doob decomposition
X = X0 + M + A : Xn = X0 + Mn + An ∀n (D)
with M a mg null at zero, A a previsible process null at zero. If also X is a
submg (‘increasing on average’), A is increasing: An ≤ An+1 for all n, a.s.
The proof in discrete time is quite easy (see L13, 2014). It is hard in
continuous time – but more important there (see Ch. V: quadratic variation
(QV) and the Itô integral). This illustrates the contrasts between the theo-
ries of stochastic processes in discrete and continuous time.
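For a computational check in discrete time (the recursion An − An−1 = E[Xn − Xn−1 |Fn−1 ], Mn = Xn − X0 − An is the standard construction, stated here since the notes defer the proof to L13, 2014): taking Xn = Sn² for the fair ±1 walk of §9 gives An = n, so Mn = Sn² − n should be a martingale. The sketch below verifies this exactly on a small binary tree by averaging over the two children of every node.

```python
from itertools import product

# Sketch: verify on a small binary tree that M_n = S_n^2 - n is a martingale,
# i.e. that the Doob decomposition of the submartingale X_n = S_n^2 has
# compensator A_n = n (using the standard recursion
# A_n - A_{n-1} = E[X_n - X_{n-1} | F_{n-1}], assumed here).

def M(prefix):                      # M_n = S_n^2 - n along a prefix of tosses
    s = sum(prefix)
    return s * s - len(prefix)

N, ok = 5, True
for n in range(N):
    for prefix in product([-1, 1], repeat=n):              # every atom of F_n
        avg_children = 0.5 * (M(prefix + (1,)) + M(prefix + (-1,)))
        ok = ok and abs(avg_children - M(prefix)) < 1e-12   # E[M_{n+1}|F_n] = M_n
print("M_n = S_n^2 - n is a martingale on the tree:", ok)
```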
§9. Examples.

1. Simple random walk. Recall the simple random walk: Sn := X1 + · · · + Xn ,
where the Xn are independent tosses of a fair coin, taking values ±1 with
equal probability 1/2. We decide to bet until our net gain is first +1, then
quit – at time T , a stopping time. This has been analysed in detail; see e.g.
[GS] GRIMMETT, G. R. & STIRZAKER, D.: Probability and random pro-
cesses, OUP, 3rd ed., 2001 [2nd ed. 1992, 1st ed. 1982], §5.2:
(i) T < ∞ a.s.: the gambler will certainly achieve a net gain of +1 eventually;
(ii) E[T ] = +∞: the mean waiting-time till this happens is infinity. So:
(iii) No bound can be imposed on the gambler’s maximum net loss before his
net gain first becomes +1.
At first sight, this looks like a foolproof way to make money out of noth-
ing: just bet till you get ahead (which happens eventually, by (i)), then quit.
But as a gambling strategy, this is hopelessly impractical: because of (ii),
you need unlimited time, and because of (iii), you need unlimited capital!
Notice that the Optional Stopping Theorem fails here: we start at zero,
so S0 = 0, E[S0 ] = 0; but ST = 1, so E[ST ] = 1. This shows two things:
(a) The Optional Stopping Theorem does indeed need conditions, as the con-
clusion may fail otherwise [none of the conditions (i) - (iii) in the OST are
satisfied in the example above],
(b) Any practical gambling (or trading) strategy needs to have some inte-
grability or boundedness restrictions to eliminate such theoretically possible
but practically ridiculous cases.
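A simulation sketch of (i)-(iii) (purely illustrative; the cap on the number of steps is a simulation artefact, not part of the mathematics): in the runs below, the hitting time T of +1 is almost always reached, but the largest simulated T and the worst interim losses grow without any apparent bound as more runs are included.

```python
import random

# Simulation sketch for Example 1: T = first n with S_n = +1 for the fair walk.
# (i) T < infinity a.s.; (ii) E[T] = infinity; (iii) no bound on interim losses.
# The step cap is a simulation artefact, not part of the mathematics.
random.seed(4)

def first_passage(max_steps=10**5):
    s, worst = 0, 0
    for n in range(1, max_steps + 1):
        s += random.choice([-1, 1])
        worst = min(worst, s)
        if s == 1:
            return n, worst
    return None, worst                 # +1 not reached within the cap

results = [first_passage() for _ in range(1000)]
times = [t for t, _ in results if t is not None]
print("runs reaching +1   :", len(times), "of", len(results))
print("largest simulated T:", max(times))
print("worst interim loss :", min(w for _, w in results))
```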
2. The doubling strategy. Similar remarks apply to the doubling strategy of §3: the first win, and with it a net gain of +1, comes eventually, a.s., but no bound can be placed on the losses incurred (and so on the capital required) before it comes.
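A simulation sketch of the doubling strategy (the cap of K rounds, i.e. a finite credit limit, is an illustrative assumption): with any finite limit the expected gain is exactly zero, the gain is +1 with probability 1 − 2^(−K), and the rare alternative is a catastrophic loss of 2^K − 1; removing the limit, the gambler ends +1 up a.s., but only by risking unbounded losses, exactly as in Example 1.

```python
import random

# Sketch of the doubling strategy of Section 3 (the cap of K rounds, i.e. a
# finite credit limit, is an illustrative assumption): double the stake after
# every loss; stop after the first win or after K consecutive losses.
random.seed(5)

def doubling(K=10):
    stake, total = 1, 0
    for _ in range(K):
        if random.random() < 0.5:          # fair game: win with probability 1/2
            return total + stake           # net gain is always +1 here
        total -= stake
        stake *= 2                         # double after a loss
    return total                           # ruined: lost 1 + 2 + ... + 2^(K-1)

samples = [doubling() for _ in range(200_000)]
print("P(gain = +1) ~", sum(g == 1 for g in samples) / len(samples))
print("average gain ~", sum(samples) / len(samples))     # 0 in theory
print("worst outcome:", min(samples))                     # -(2^K - 1) if it occurs
```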