SCH MFF Wi21 ML
Question 1
(a) (3)
(b) (2)
(c) (2)
(d) (1)
(e) (1)
(f) (3)
(g) (2)
(h) (2)
Question 2
(a) By definition, $\mathcal{P}_e(S^1)$ is given by all probability measures $Q$ on $(\Omega, \mathcal{F})$ that are equivalent to $P$ and satisfy $E_Q[S_1^1] = S_0^1$. Since $\Omega$ is a finite set, all such probability measures can be identified with strictly positive probability vectors on $\Omega$ satisfying this equality.
(b) Since every martingale is a local martingale (with respect to the same probability measure
and filtration), we clearly have that Pe (S 1 ) ⊆ Pe,loc (S 1 ). To show the opposite inclusion,
we note that $S^1$ is bounded $P$-a.s. by a fixed constant $C$ because $\Omega$ is finite, and so is $(S^1)^\tau$ for any $F$-stopping time $\tau$. Fix a $Q \in \mathcal{P}_{e,loc}(S^1)$ and let $(\tau_n)_{n\in\mathbb{N}}$ be a localising sequence for $S^1$. Then each $(S^1)^{\tau_n}$ is a $(Q,F)$-martingale and $Q$-a.s. bounded by $C$ because $Q \approx P$. So the dominated convergence theorem then gives
$$E_Q[S_1^1] = E_Q\Big[\lim_{n\to\infty} S^1_{1\wedge\tau_n}\Big] = \lim_{n\to\infty} E_Q\big[S^1_{1\wedge\tau_n}\big] = \lim_{n\to\infty} S^1_{0\wedge\tau_n} = S_0^1.$$
The value $\frac{12}{11}$ is clearly attained for $\alpha = \frac{2}{5}$, which means that it is attained under the probability measure $Q^*$ characterized by the probability vector $(\frac{2}{5}, 0, \frac{3}{5})$. $Q^*$ is clearly not equivalent to $P$, but since $P[\{\omega\}] = 0$ implies $Q^*[\{\omega\}] = 0$, $Q^*$ is absolutely continuous with respect to $P$. (In fact, $P[\{\omega\}] = 0$ is never true, so that by the logical fact that a false premise implies every conclusion, any probability measure on $(\Omega, \mathcal{F})$ is absolutely continuous with respect to $P$.)
S 1 is also a (Q∗ , F)-martingale since
$$E_{Q^*}[S_1^1] = \frac{1}{1.1}\Big(\frac{2}{5}\times 8 + \frac{3}{5}\times 13\Big) = \frac{1}{1.1}\times\frac{16+39}{5} = \frac{1}{1.1}\times 11 = 10 = S_0^1.$$
The Q∗ -integrability of S 1 is trivial since S 1 is bounded, and adaptedness does not depend
on the probability measure.
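The expectation under $Q^*$ can also be checked numerically; a minimal Python sketch, assuming the three-state setup above with $S_0^1 = 10$, discounted payoffs $8$ and $13$ in the states charged by $Q^*$, and discount factor $1.1$ (the payoff in the middle state is a placeholder, since $Q^*$ gives it mass $0$):

```python
from fractions import Fraction as F

# Q* puts mass (2/5, 0, 3/5) on the three states of Omega.
q_star = [F(2, 5), F(0), F(3, 5)]
# S_1^1 in the three states; the middle value is a placeholder (mass 0 under Q*).
payoffs = [F(8), F(0), F(13)]
discount = F(11, 10)  # the factor 1.1

expectation = sum(q * x for q, x in zip(q_star, payoffs)) / discount
print(expectation)  # -> 10, i.e. S_0^1
```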
Question 3
(a) Let (Xn )n∈N be a sequence of simple random variables of the form
$$X_n = \sum_{i=1}^{n} x_{i,n}\,\mathbf{1}_{A_{i,n}}$$
for some constants x1,n , . . . , xn,n ≥ 0 and some sets A1,n , . . . , An,n ∈ F with Xn ↑ X
pointwise as n → ∞. We have seen that such a sequence exists, for instance in the solution
to Exercise 2.3 of the exercise sheets. We compute
$$E_Q[X_n] = \sum_{i=1}^{n} x_{i,n}\, E_Q\big[\mathbf{1}_{A_{i,n}}\big] = \sum_{i=1}^{n} x_{i,n}\, Q[A_{i,n}] = \sum_{i=1}^{n} x_{i,n}\, E_P\big[D\,\mathbf{1}_{A_{i,n}}\big] = E_P\Big[D \sum_{i=1}^{n} x_{i,n}\,\mathbf{1}_{A_{i,n}}\Big] = E_P[D X_n]. \quad (2)$$
Letting $n \to \infty$ in (2) and using monotone convergence on both sides (note that $X_n \uparrow X$ and $D X_n \uparrow D X$ since $D \ge 0$) gives $E_Q[X] = E_P[DX]$.
(b) We compute
$$E_Q[Y] = E_P[DY] = E_P\big[E_P[DY \mid F_k]\big] = E_P\big[Y\, E_P[D \mid F_k]\big] = E_P[Z_k Y].$$
The first equality uses (a), the second one uses the tower property of conditional expectation, the third one the $F_k$-measurability and nonnegativity of $Y$, and the last one the definition of $Z_k$.
(c) We compute
$$E_P[Y] = E_P\Big[\frac{1}{Z_k}\, Z_k\, Y\Big] = E_Q\Big[\frac{1}{Z_k}\, Y\Big].$$
The first equality is obvious because Zk > 0 P -a.s., and the second one follows from (b)
since Y /Zk is nonnegative by nonnegativity of Zk and Y and also Fk -measurable as a ratio
of two Fk -measurable random variables.
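On a finite probability space, the identities from (a) and (c) can be verified by direct summation. A minimal Python sketch; the four-state space, the measures and the $\sigma$-algebra $F_k$ below are illustrative assumptions, not data from the exercise:

```python
from fractions import Fraction as F

# Four states, P uniform, Q equivalent to P; D = dQ/dP.
P = [F(1, 4)] * 4
Q = [F(1, 8), F(1, 8), F(3, 8), F(3, 8)]
D = [q / p for q, p in zip(Q, P)]
atoms_k = [[0, 1], [2, 3]]  # atoms generating F_k

# Z_k = E_P[D | F_k] is constant on each atom of F_k.
Zk = [None] * 4
for B in atoms_k:
    z = sum(P[w] * D[w] for w in B) / sum(P[w] for w in B)
    for w in B:
        Zk[w] = z

# (a): E_Q[X] = E_P[D X] for a nonnegative random variable X.
X = [F(5), F(2), F(7), F(1)]
assert sum(Q[w] * X[w] for w in range(4)) == sum(P[w] * D[w] * X[w] for w in range(4))

# (c): E_P[Y] = E_Q[Y / Z_k] for nonnegative F_k-measurable Y.
Y = [F(3), F(3), F(4), F(4)]  # constant on the atoms of F_k
assert sum(P[w] * Y[w] for w in range(4)) == sum(Q[w] * Y[w] / Zk[w] for w in range(4))
print("identities (a) and (c) verified")
```

Working with exact fractions makes both sides match exactly rather than up to floating-point error.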
(d) By the definition of conditional expectation, we need to show for every $A \in F_j$ that
$$E_Q\big[E_Q[U_k \mid F_j]\,\mathbf{1}_A\big] = E_Q\Big[\frac{1}{Z_j}\, E_P[Z_k U_k \mid F_j]\,\mathbf{1}_A\Big].$$
Indeed,
$$E_Q\big[E_Q[U_k \mid F_j]\,\mathbf{1}_A\big] = E_Q[U_k\,\mathbf{1}_A] = E_P[Z_k U_k\,\mathbf{1}_A] = E_P\big[E_P[Z_k U_k \mid F_j]\,\mathbf{1}_A\big] = E_Q\Big[\frac{1}{Z_j}\, E_P[Z_k U_k \mid F_j]\,\mathbf{1}_A\Big].$$
The first and the third equality follow from the definition of conditional expectation, the
second one from (b), and the last one uses (c) with the fact that EP [Zk Uk | Fj ] 1A is non-
negative by the nonnegativity of Uk and Zk and also Fj -measurable. Indeed, a conditional
expectation with respect to Fj is Fj -measurable and 1A is also Fj -measurable since A ∈ Fj
by assumption.
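The Bayes formula from (d) can likewise be checked on a small example; the space and measures below are illustrative assumptions (here $F_k$ is the full power set, so $Z_k = D$):

```python
from fractions import Fraction as F

P = [F(1, 4)] * 4
Q = [F(1, 8), F(1, 8), F(3, 8), F(3, 8)]
D = [q / p for q, p in zip(Q, P)]  # Z_k = E_P[D | F_k] = D for the full F_k
U = [F(2), F(5), F(1), F(4)]       # nonnegative F_k-measurable random variable
atoms_j = [[0, 1], [2, 3]]         # atoms generating the coarser F_j

for B in atoms_j:
    PB = sum(P[w] for w in B)
    QB = sum(Q[w] for w in B)
    Zj = sum(P[w] * D[w] for w in B) / PB               # E_P[D | F_j] on the atom B
    lhs = sum(Q[w] * U[w] for w in B) / QB              # E_Q[U | F_j] on B
    rhs = sum(P[w] * D[w] * U[w] for w in B) / PB / Zj  # (1/Z_j) E_P[Z_k U | F_j] on B
    assert lhs == rhs
print("Bayes formula verified on both atoms")
```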
(e) If $N$ is $F$-adapted, then $ZN$ is $F$-adapted since the product of measurable functions is a measurable function. Conversely, if $ZN$ is $F$-adapted, then $N$ is $F$-adapted for the same reason since $N = \frac{1}{Z}\, ZN$ and $Z > 0$. The same argument shows that $N$ is nonnegative if and only if $ZN$ is nonnegative.
Now, $N$ is $Q$-integrable if and only if $ZN$ is $P$-integrable because for any $k \in \{0, 1, \dots, T\}$, we have by (b) that
$$E_Q\big[|N_k|\big] = E_P\big[Z_k |N_k|\big].$$
Question 4
(a) $Z^\sigma$ is clearly positive by definition for all $\sigma > -1$. Furthermore, using the fact that
Nt − Ns ∼ Poi(λ(t − s)) and the knowledge of the moment generating function of the
Poisson distribution, we compute
$$E_P\Big[\frac{Z_t^\sigma}{Z_s^\sigma}\,\Big|\, F_s\Big] = E_P\Big[e^{(N_t - N_s)\log(1+\sigma) - \lambda\sigma(t-s)}\,\Big|\, F_s\Big] = e^{-\lambda\sigma(t-s)}\, E_P\Big[e^{(N_t - N_s)\log(1+\sigma)}\Big] = e^{-\lambda\sigma(t-s)}\, e^{\lambda(t-s)(1+\sigma-1)} = 1,$$
where the conditioning can be dropped because $N_t - N_s$ is independent of $F_s$ under $P$.
Here we do not have to worry about the integrability of Z σ when verifying the martingale
condition since Z σ > 0 for σ > −1. Using the martingale property of Z σ , it also follows
for any σ > −1 and t ∈ [0, T ] that
EP [Ztσ ] = EP [Z0σ ] = 1
since N0 = 0 P -a.s.
(b) First, since $Q^\sigma \approx P$ for any $\sigma > -1$, we immediately obtain that $\{N_0 \neq 0\}$ is a $Q^\sigma$-nullset because it is a $P$-nullset and that
$$\{\omega \in \Omega : [0,T] \ni t \mapsto N_t(\omega) \text{ is not RCLL with jumps of size } 1\}$$
is a $Q^\sigma$-nullset because it is a $P$-nullset. So $N_0 = 0$ $Q^\sigma$-a.s. and $Q^\sigma$-almost all trajectories of $N$ are RCLL with jumps of size 1.
Now we compute the conditional moment generating function of the increment Nt − Ns ,
0 ≤ s ≤ t ≤ T , under Qσ . It is given by
$$E_{Q^\sigma}\big[e^{u(N_t - N_s)} \,\big|\, F_s\big] = \frac{1}{Z_s^\sigma}\, E_P\big[Z_t^\sigma\, e^{u(N_t - N_s)} \,\big|\, F_s\big]$$
$$= e^{-N_s\log(1+\sigma) + \lambda\sigma s}\, E_P\big[e^{N_t\log(1+\sigma) - \lambda\sigma t}\, e^{u(N_t - N_s)} \,\big|\, F_s\big]$$
$$= e^{-\lambda\sigma(t-s)}\, E_P\big[e^{(N_t - N_s)\log(1+\sigma)}\, e^{u(N_t - N_s)} \,\big|\, F_s\big]$$
$$= e^{-\lambda\sigma(t-s)}\, E_P\big[e^{(N_t - N_s)(\log(1+\sigma) + u)}\big] = e^{-\lambda\sigma(t-s)}\, e^{\lambda(t-s)(e^{\log(1+\sigma)+u} - 1)}$$
$$= e^{-\lambda\sigma(t-s)}\, e^{\lambda(t-s)((1+\sigma)e^u - 1)} = e^{\lambda(1+\sigma)(t-s)(e^u - 1)}.$$
The first equality follows from the Bayes formula, the third from the $F_s$-measurability of $e^{N_s\log(1+\sigma)}$, the fourth from the independence of $N_t - N_s$ of $F_s$ under $P$, and the fifth from the fact that $E_P\big[e^{(\log(1+\sigma)+u)(N_t - N_s)}\big]$ is the moment generating function of $\mathrm{Poi}(\lambda(t-s))$ evaluated at $\log(1+\sigma) + u$.
The last expression above is in fact the moment generating function of $\mathrm{Poi}(\lambda(1+\sigma)(t-s))$ and thus shows that $N_t - N_s \sim \mathrm{Poi}(\lambda(1+\sigma)(t-s))$ under $Q^\sigma$. Furthermore, since the expression does not depend on $\omega \in \Omega$, we can also conclude that $e^{u(N_t - N_s)}$ is independent of $F_s$ under $Q^\sigma$, and hence the same holds for $N_t - N_s$ since it can be written as a continuous (therefore measurable) transformation of $e^{u(N_t - N_s)}$. We can thus conclude that $N$ is a $(Q^\sigma, F)$-Poisson process with parameter $\lambda(1+\sigma) > 0$.
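The key computation above reduces to the identity $e^{-\lambda\sigma(t-s)}\, E_P\big[(1+\sigma)^X e^{uX}\big] = e^{\lambda(1+\sigma)(t-s)(e^u - 1)}$ for $X \sim \mathrm{Poi}(\lambda(t-s))$, which can be checked numerically; the parameter values are illustrative assumptions:

```python
import math

lam, delta, sigma, u = 1.3, 0.8, 0.5, 0.4  # illustrative values; delta = t - s
mu = lam * delta  # mean of the increment under P

lhs = math.exp(-lam * sigma * delta) * sum(
    math.exp(-mu) * mu ** k / math.factorial(k) * (1 + sigma) ** k * math.exp(u * k)
    for k in range(150))
rhs = math.exp(lam * (1 + sigma) * delta * (math.exp(u) - 1))
print(lhs, rhs)  # the two sides agree
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```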
(c) Since $X$ and $Y$ are predictable and satisfy
$$E_P\Big[\int_0^T X_t^2\, d[\widetilde N]_t\Big] < \infty \quad \text{and} \quad E_P\Big[\int_0^T Y_t^2\, d[\widetilde N]_t\Big] < \infty,$$
the stochastic integrals $\int_0^T X_t\, d\widetilde N_t$ and $\int_0^T Y_t\, d\widetilde N_t$ are well defined. By squaring out, we obtain
$$\int_0^T X_t\, d\widetilde N_t \int_0^T Y_t\, d\widetilde N_t = \frac{1}{2}\bigg(\Big(\int_0^T X_t\, d\widetilde N_t + \int_0^T Y_t\, d\widetilde N_t\Big)^2 - \Big(\int_0^T X_t\, d\widetilde N_t\Big)^2 - \Big(\int_0^T Y_t\, d\widetilde N_t\Big)^2\bigg)$$
$$= \frac{1}{2}\bigg(\Big(\int_0^T (X_t + Y_t)\, d\widetilde N_t\Big)^2 - \Big(\int_0^T X_t\, d\widetilde N_t\Big)^2 - \Big(\int_0^T Y_t\, d\widetilde N_t\Big)^2\bigg).$$
Taking $P$-expectations on both sides, using linearity and applying the isometry property of the stochastic integral to all three terms on the right-hand side ($\widetilde N$ is a $(P,F)$-martingale as shown in Exercise 9.2 (a)), we obtain
$$E_P\Big[\int_0^T X_t\, d\widetilde N_t \int_0^T Y_t\, d\widetilde N_t\Big] = \frac{1}{2}\bigg(E_P\Big[\int_0^T (X_t + Y_t)^2\, d[\widetilde N]_t\Big] - E_P\Big[\int_0^T X_t^2\, d[\widetilde N]_t\Big] - E_P\Big[\int_0^T Y_t^2\, d[\widetilde N]_t\Big]\bigg)$$
$$= E_P\Big[\int_0^T X_t Y_t\, d[\widetilde N]_t\Big] = E_P\Big[\int_0^T X_t Y_t\, dN_t\Big],$$
where the last equality uses that $[\widetilde N] = N$.
Question 5
(a) Since $M$ is a $(P,F)$-supermartingale, we have for all $0 \le s \le t \le T$ that $M_s - E_P[M_t \mid F_s] \ge 0$ $P$-a.s. Taking the expectation of the left-hand side gives
$$E_P\big[M_s - E_P[M_t \mid F_s]\big] = E_P[M_s] - E_P[M_t] = C - C = 0.$$
But every nonnegative random variable with zero P -expectation is P -a.s. equal to zero (as
shown in Exercise 1.2 in the exercise sheets). So we have EP [Mt | Fs ] = Ms P -a.s., which
is the martingale property of M . Integrability and adaptedness follow from the fact that
M is a (P, F)-supermartingale.
(b) Let us define the process Y = (Yt )t∈[0,T ] by
$$Y_t = \int_0^t \lambda(s)\, dW_s.$$
Using the hint and the assumption that λ ∈ L2loc (W ), we see that Y is in fact a continuous
local (P, F)-martingale. The process Z is thus explicitly given by
$$Z_t = \exp\Big(Y_t - \frac{1}{2}\langle Y\rangle_t\Big) = \exp\Big(\int_0^t \lambda(s)\, dW_s - \frac{1}{2}\int_0^t \lambda^2(s)\, ds\Big) \quad (3)$$
and it is in particular also a local (P, F)-martingale. Taking expectations on both sides
of (3) and using the fact that
$$\int_0^t \lambda(s)\, dW_s \sim \mathcal{N}\Big(0, \int_0^t \lambda^2(s)\, ds\Big)$$
because λ is a deterministic function (see Exercise 12.3 (b) in the exercise sheets), we obtain
that $E_P[Z_t] = 1$ for all $t \in [0,T]$. In particular, $Z$ is integrable. But since $Z$ is also positive,
we can apply Fatou’s lemma to show that it is in fact a (P, F)-supermartingale. Indeed,
let (τn )n∈N be a localising sequence for Z. Then we have for all 0 ≤ s ≤ t ≤ T that
$$E_P[Z_t \mid F_s] = E_P\Big[\liminf_{n\to\infty} Z_t^{\tau_n} \,\Big|\, F_s\Big] \le \liminf_{n\to\infty} E_P\big[Z_t^{\tau_n} \mid F_s\big] = \liminf_{n\to\infty} Z_s^{\tau_n} = Z_s.$$
But according to (a), every (P, F)-supermartingale with a constant expectation is a true
(P, F)-martingale and we are done.
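For deterministic $\lambda$, the fact that $E_P[Z_t] = 1$ boils down to $E[e^{G - v/2}] = 1$ for $G \sim \mathcal{N}(0, v)$ with $v = \int_0^t \lambda^2(s)\, ds$. A quick Monte Carlo sketch (the value of $v$ is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
v = 0.25  # plays the role of int_0^t lambda^2(s) ds

g = rng.normal(0.0, np.sqrt(v), size=1_000_000)  # samples of Y_t ~ N(0, v)
z = np.exp(g - v / 2)                            # samples of Z_t
print(z.mean())  # close to 1.0
assert abs(z.mean() - 1.0) < 0.01
```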
Alternatively, and more simply, one could recall Novikov's condition, which is briefly mentioned in the lecture notes and which says that if $L = (L_t)_{t\in[0,T]}$ is a continuous local $(P,F)$-martingale with $L_0 = 0$ and $E_P\big[e^{\frac{1}{2}\langle L\rangle_T}\big] < \infty$, then $\mathcal{E}(L)$ is a true $(P,F)$-martingale on $[0,T]$. In our case, we have that
$$E_P\big[e^{\frac{1}{2}\langle Y\rangle_T}\big] = E_P\big[e^{\frac{1}{2}\int_0^T \lambda^2(s)\, ds}\big] = e^{\frac{1}{2}\int_0^T \lambda^2(s)\, ds} < \infty,$$
so that Novikov's condition applies.
$$\frac{dS_t^1}{S_t^1} = \big(\mu_1 - r(t)\big)\, dt + \sigma_1\, dW_t. \quad (4)$$
Define $\lambda : [0,T] \to \mathbb{R}$ by $\lambda(s) := (\mu_1 - r(s))/\sigma_1$ and the process $Z = (Z_t)_{t\in[0,T]}$ by
$$Z_t := \mathcal{E}\Big(-\int \lambda(s)\, dW_s\Big)_t.$$
Since λ is left-continuous and bounded, (b) gives that Z is a positive (P, F)-martingale
with EP [Zt ] = 1 for all t ∈ [0, T ], and thus the density process of some measure Q ≈ P .
Girsanov's theorem gives that
$$W_t^Q := W_t - \Big\langle W, -\int \lambda(s)\, dW_s \Big\rangle_t = W_t + \int_0^t \lambda(s)\, ds = W_t + \int_0^t \frac{\mu_1 - r(s)}{\sigma_1}\, ds$$
is a $(Q,F)$-Brownian motion.
Computing the conditional expectation of the discounted claim under $Q$ then yields a pricing function $v$, evaluated at $(t, S_t^1)$, where we have used that $W_T^Q - W_t^Q$ is independent of $F_t$ under $Q$ and normally distributed
with mean 0 and variance T − t. Consequently, the delta of H is given by
$$\vartheta_t = \frac{\partial v}{\partial x}\bigg|_{(t,x)=(t,S_t^1)} = p\, e^{(p-1)\int_0^T r(s)\, ds - p\frac{\sigma_1^2}{2}(T-t)}\, (S_t^1)^{p-1}\, e^{p^2\frac{\sigma_1^2}{2}(T-t)}. \quad (5)$$
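Formula (5) can be sanity-checked by numerically differentiating a pricing function of the form $v(t,x) = x^p\, e^{(p-1)\int_0^T r(s)\, ds + (p^2 - p)\frac{\sigma_1^2}{2}(T-t)}$, which is the functional form consistent with (5); this form and all parameter values below are assumptions for illustration, not quoted from the exam:

```python
import math

p, sigma1, T, t, x = 1.7, 0.3, 2.0, 0.6, 1.4  # illustrative values
int_r = 0.05 * T  # int_0^T r(s) ds for a constant rate r = 0.05 (assumption)

def v(x_):
    # assumed pricing function consistent with formula (5)
    return x_ ** p * math.exp((p - 1) * int_r + (p * p - p) * sigma1 ** 2 / 2 * (T - t))

# Delta from formula (5)
delta = (p * math.exp((p - 1) * int_r - p * sigma1 ** 2 / 2 * (T - t))
         * x ** (p - 1) * math.exp(p * p * sigma1 ** 2 / 2 * (T - t)))

fd = (v(x + 1e-6) - v(x - 1e-6)) / 2e-6  # central finite difference in x
print(delta, fd)  # the two values agree
assert math.isclose(delta, fd, rel_tol=1e-5)
```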