SCH MFF Wi21 ML

Question 1

The correct answers are:

(a) (3)
(b) (2)
(c) (2)
(d) (1)
(e) (1)
(f) (3)
(g) (2)
(h) (2)
Question 2

(a) By definition, Pe(S^1) is given by all probability measures Q on (Ω, F) that are equivalent
to P and satisfy E_Q[S^1_1] = S^1_0. Since Ω is a finite set, all such probability measures can
be characterized by the probability vectors (q_1, q_2, q_3) ∈ (0, 1)^3 with q_1 + q_2 + q_3 = 1 and

E_Q[S^1_1] = S^1_0 ⟺ E_Q[S̃^1_1 / (1 + r)] = S̃^1_0 ⟺ E_Q[Y_1] = 1 + r
⟺ q_1 (1 + d) + q_2 (1 + m) + q_3 (1 + u) = 1 + r
⟺ q_1 d + q_2 m + q_3 u = r
⟺ −0.2 q_1 + 0.1 q_2 + 0.3 q_3 = 0.1.   (1)

Setting q_1 = α, we obtain q_3 = 1 − α − q_2 from the condition q_1 + q_2 + q_3 = 1 and
thus from (1) that

0.1 q_2 = 0.1 + 0.2 α − 0.3 (1 − α − q_2) ⟺ −0.2 q_2 = −0.2 + 0.5 α ⟺ q_2 = 1 − (5/2) α.

It thus follows that

q_3 = 1 − α − (1 − (5/2) α) = (3/2) α.

Since we must have (q_1, q_2, q_3) ∈ (0, 1)^3, we can only take α ∈ (0, 2/5). So

Pe(S^1) = { Q_α ≙ (α, 1 − (5/2) α, (3/2) α) : α ∈ (0, 2/5) }.

Parametrising instead q_2 := α, analogous computations lead to

Pe(S^1) = { Q_α ≙ (2/5 − (2/5) α, α, 3/5 − (3/5) α) : α ∈ (0, 1) },

and parametrising instead q_3 := α to

Pe(S^1) = { Q_α ≙ ((2/3) α, 1 − (5/3) α, α) : α ∈ (0, 3/5) }.
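As a quick numerical sanity check (not part of the original solution), the three parametrisations can be verified in exact rational arithmetic, using the model parameters d = −0.2, m = 0.1, u = 0.3 and r = 0.1 read off from (1):

```python
from fractions import Fraction as F

r, d, m, u = F(1, 10), F(-2, 10), F(1, 10), F(3, 10)

def is_emm(q1, q2, q3):
    # q must be a probability vector with strictly positive entries,
    # and the discounted stock must have Q-expectation equal to S_0^1:
    # q1*d + q2*m + q3*u = r, which is condition (1)
    return (q1 + q2 + q3 == 1 and min(q1, q2, q3) > 0
            and q1*d + q2*m + q3*u == r)

# first parametrisation: alpha in (0, 2/5)
for k in range(1, 10):
    a = F(k, 25)
    assert is_emm(a, 1 - F(5, 2)*a, F(3, 2)*a)
# second parametrisation: alpha in (0, 1)
for k in range(1, 10):
    a = F(k, 10)
    assert is_emm(F(2, 5)*(1 - a), a, F(3, 5)*(1 - a))
# third parametrisation: alpha in (0, 3/5)
for k in range(1, 10):
    a = F(k, 20)
    assert is_emm(F(2, 3)*a, 1 - F(5, 3)*a, a)
```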

(b) Since every martingale is a local martingale (with respect to the same probability measure
and filtration), we clearly have that Pe(S^1) ⊆ Pe,loc(S^1). To show the opposite inclusion,
we note that S^1 is bounded P-a.s. by a fixed constant C because Ω is finite, and so is
then (S^1)^τ for any F-stopping time τ. Fix a Q ∈ Pe,loc(S^1) and let (τ_n)_{n∈N} be a localising
sequence for S^1. Then each (S^1)^{τ_n} is a (Q, F)-martingale and Q-a.s. bounded by C because
Q ≈ P. So the dominated convergence theorem then gives

E_Q[S^1_1] = E_Q[ lim_{n→∞} S^1_{1∧τ_n} ] = lim_{n→∞} E_Q[S^1_{1∧τ_n}] = lim_{n→∞} S^1_{0∧τ_n} = S^1_0.

So S^1 is in fact a (Q, F)-martingale, which means that Q ∈ Pe(S^1). This shows that
Pe,loc(S^1) ⊆ Pe(S^1) and concludes the proof.
(c) The set of all arbitrage-free prices for C̃(S̃^1_1) is given by

M = { E_Q[ C̃(S̃^1_1) / (1 + r) ] : Q ∈ Pe(S^1) }.

Using the parametrisation of Pe(S^1) from (a), we compute

E_{Q_α}[ C̃(S̃^1_1) / (1 + r) ] = (1/1.1) ( α × 0 + (1 − (5/2) α) × 0 + (3/2) α × 2 ) = (30/11) α.

So we conclude that M = (0, 12/11) because α ∈ (0, 2/5).
(d) We have from (c) that

sup_{Q ∈ Pe(S^1)} E_Q[ C̃(S̃^1_1) / (1 + r) ] = sup_{α ∈ (0, 2/5)} E_{Q_α}[ C̃(S̃^1_1) / (1 + r) ] = sup_{α ∈ (0, 2/5)} (30/11) α = 12/11.

The value 12/11 is clearly attained for α = 2/5, which means that it is attained under the
probability measure Q* characterized by the probability vector (2/5, 0, 3/5). Q* is clearly not
equivalent to P, but since P[{ω}] = 0 implies Q*[{ω}] = 0, Q* is absolutely continuous
with respect to P. (In fact, P[{ω}] = 0 is never true, so that by the logical fact that an
empty premise implies every conclusion, any probability measure on (Ω, F) is absolutely
continuous with respect to P.)

S^1 is also a (Q*, F)-martingale since

E_{Q*}[S^1_1] = (1/1.1) ( (2/5) × 8 + (3/5) × 13 ) = (1/1.1) × (16 + 39)/5 = (1/1.1) × 11 = 10 = S^1_0.

The Q*-integrability of S^1 is trivial since S^1 is bounded, and adaptedness does not depend
on the probability measure.
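The computations in (c) and (d) can likewise be checked with exact arithmetic. The stock values (8, 11, 13) and the call payoff (0, 0, 2) in the three states are inferred from the solution's arithmetic, so treat them as assumptions:

```python
from fractions import Fraction as F

r = F(1, 10)
payoff = (F(0), F(0), F(2))      # assumed payoff of the call in the three states
stock  = (F(8), F(11), F(13))    # assumed time-1 stock values in the three states

def price(q, claim=payoff):
    # discounted Q-expectation of the claim
    return sum(qi * ci for qi, ci in zip(q, claim)) / (1 + r)

# price under Q_alpha is (30/11)*alpha for alpha in (0, 2/5)
for k in range(1, 10):
    a = F(k, 25)
    q = (a, 1 - F(5, 2)*a, F(3, 2)*a)
    assert price(q) == F(30, 11) * a

# the boundary measure Q* = (2/5, 0, 3/5) attains the supremum 12/11 ...
q_star = (F(2, 5), F(0), F(3, 5))
assert price(q_star) == F(12, 11)
# ... and S^1 is a Q*-martingale: discounted expectation equals S_0^1 = 10
assert price(q_star, claim=stock) == 10
```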
Question 3

(a) Let (X_n)_{n∈N} be a sequence of simple random variables of the form

X_n = Σ_{i=1}^n x_{i,n} 1_{A_{i,n}}

for some constants x_{1,n}, ..., x_{n,n} ≥ 0 and some sets A_{1,n}, ..., A_{n,n} ∈ F with X_n ↑ X
pointwise as n → ∞. We have seen that such a sequence exists, for instance in the solution
to Exercise 2.3 of the exercise sheets. We compute

E_Q[X_n] = Σ_{i=1}^n x_{i,n} E_Q[1_{A_{i,n}}] = Σ_{i=1}^n x_{i,n} Q[A_{i,n}] = Σ_{i=1}^n x_{i,n} E_P[D 1_{A_{i,n}}]
= E_P[ D Σ_{i=1}^n x_{i,n} 1_{A_{i,n}} ] = E_P[D X_n].   (2)

By the monotone convergence theorem, we immediately obtain that E_Q[X_n] ↑ E_Q[X] as
n → ∞. But since D > 0, we also clearly have that D X_n ↑ D X, and another application
of the monotone convergence theorem thus gives that E_P[D X_n] ↑ E_P[D X]. Therefore,
taking the limit on both sides of (2) gives E_Q[X] = E_P[D X], as desired.
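The approximation argument in (a) can be illustrated on a small finite space; the weights, the density D, and the random variable X below are hypothetical:

```python
# finite-Omega illustration of E_Q[X] = E_P[D X] via simple-function approximation
P = [1/6] * 6
D = [0.5, 0.5, 1.0, 1.0, 1.5, 1.5]   # hypothetical density dQ/dP with E_P[D] = 1
Q = [p * d for p, d in zip(P, D)]
assert abs(sum(Q) - 1) < 1e-12

def approx(x, n):
    # standard dyadic approximation of a nonnegative value by a simple function
    return min(int(x * 2**n) / 2**n, n)

X = [0.3, 1.7, 2.2, 0.9, 3.1, 0.5]   # hypothetical nonnegative random variable

EQ_X = sum(q * x for q, x in zip(Q, X))
for n in range(1, 30):
    Xn = [approx(x, n) for x in X]
    # identity (2): E_Q[X_n] = E_P[D X_n] for simple X_n
    assert abs(sum(q * x for q, x in zip(Q, Xn))
               - sum(p * d * x for p, d, x in zip(P, D, Xn))) < 1e-12
# monotone convergence: E_Q[X_n] -> E_Q[X]
assert abs(sum(q * approx(x, 29) for q, x in zip(Q, X)) - EQ_X) < 1e-6
```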
(b) We compute

E_Q[Y] = E_P[D Y] = E_P[ E_P[D Y | F_k] ] = E_P[ Y E_P[D | F_k] ] = E_P[Z_k Y].

The first equality uses (a), the second one uses the tower property of conditional expec-
tation, the third one the F_k-measurability and nonnegativity of Y, and the last one the
definition of Z_k.
(c) We compute

E_P[Y] = E_P[ Z_k (Y / Z_k) ] = E_Q[ Y / Z_k ].

The first equality is obvious because Z_k > 0 P-a.s., and the second one follows from (b)
since Y/Z_k is nonnegative by nonnegativity of Z_k and Y and also F_k-measurable as a ratio
of two F_k-measurable random variables.
(d) By the definition of conditional expectation, we need to show that

E_Q[ E_Q[U_k | F_j] 1_A ] = E_Q[ (1/Z_j) E_P[Z_k U_k | F_j] 1_A ]

for all A ∈ F_j. We fix A ∈ F_j and compute

E_Q[ E_Q[U_k | F_j] 1_A ] = E_Q[U_k 1_A] = E_P[Z_k U_k 1_A] = E_P[ E_P[Z_k U_k | F_j] 1_A ]
= E_Q[ (1/Z_j) E_P[Z_k U_k | F_j] 1_A ].

The first and the third equality follow from the definition of conditional expectation, the
second one from (b), and the last one uses (c) with the fact that EP [Zk Uk | Fj ] 1A is non-
negative by the nonnegativity of Uk and Zk and also Fj -measurable. Indeed, a conditional
expectation with respect to Fj is Fj -measurable and 1A is also Fj -measurable since A ∈ Fj
by assumption.
(e) If N is F-adapted, then ZN is F-adapted since the product of measurable functions is a
measurable function. Conversely, if ZN is F-adapted, then N is F-adapted for the same
reason since N = (1/Z) ZN and Z > 0. The same argument shows that N is nonnegative if
and only if ZN is nonnegative.
Now, N is Q-integrable if and only if ZN is P -integrable because for any k ∈ {0, 1, . . . , T },
we have by (b) that

EP [|Zk Nk |] = EP [Zk |Nk |] = EQ [|Nk |] .

Finally, N ≥ 0 satisfies the martingale property under Q if and only if ZN ≥ 0 satisfies
the martingale property under P. Indeed, note that by (d), we have for any k ∈ {1, ..., T}
that

E_Q[N_k | F_{k−1}] = (1/Z_{k−1}) E_P[Z_k N_k | F_{k−1}],

which gives that

E_Q[N_k | F_{k−1}] = N_{k−1} ⟺ (1/Z_{k−1}) E_P[Z_k N_k | F_{k−1}] = N_{k−1}
⟺ E_P[Z_k N_k | F_{k−1}] = Z_{k−1} N_{k−1}

since Z_{k−1} > 0.
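The abstract Bayes formula from (d), E_Q[U_k | F_j] = (1/Z_j) E_P[Z_k U_k | F_j], can be checked exactly on a two-period binary tree; all weights below are hypothetical:

```python
from itertools import product

# two-period binary tree: omega = (y1, y2) with fair coins under P (hypothetical)
P = {w: 0.25 for w in product([0, 1], repeat=2)}
# hypothetical density dQ/dP with E_P[D] = 1; here Z_2 = E_P[D | F_2] = D
D = {w: (0.8 if w[0] else 1.2) * (1.4 if w[1] else 0.6) for w in P}
Q = {w: P[w] * D[w] for w in P}
assert abs(sum(Q.values()) - 1) < 1e-12

def cond_exp(meas, X, j):
    # conditional expectation given F_j = sigma(first j coordinates),
    # returned as a function of omega
    out = {}
    for w in meas:
        atom = [v for v in meas if v[:j] == w[:j]]
        mass = sum(meas[v] for v in atom)
        out[w] = sum(meas[v] * X[v] for v in atom) / mass
    return out

U = {w: float(w[0] + 2 * w[1]) for w in P}          # hypothetical payoff
Z1 = cond_exp(P, D, 1)                              # Z_1 = E_P[D | F_1]
lhs = cond_exp(Q, U, 1)                             # E_Q[U | F_1]
rhs = cond_exp(P, {w: D[w] * U[w] for w in P}, 1)   # E_P[Z_2 U | F_1]
for w in P:
    assert abs(lhs[w] - rhs[w] / Z1[w]) < 1e-12
```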


Question 4

(a) Z^σ is clearly positive by definition for all σ > −1. Furthermore, using the fact that
N_t − N_s ∼ Poi(λ(t − s)) and the knowledge of the moment generating function of the
Poisson distribution, we compute

E[ Z^σ_t / Z^σ_s | F_s ] = E[ e^{(N_t − N_s) log(1+σ) − λσ(t−s)} | F_s ] = e^{−λσ(t−s)} E[ e^{(N_t − N_s) log(1+σ)} ]
= e^{−λσ(t−s)} e^{λ(t−s)(1+σ−1)} = 1.

Here we do not have to worry about the integrability of Z^σ when verifying the martingale
condition since Z^σ > 0 for σ > −1. Using the martingale property of Z^σ, it also follows
for any σ > −1 and t ∈ [0, T] that

E_P[Z^σ_t] = E_P[Z^σ_0] = 1

since N_0 = 0 P-a.s.
(b) First, since Q^σ ≈ P for any σ > −1, we immediately obtain that {N_0 ≠ 0} is a Q^σ-nullset
because it is a P-nullset, and that

{ω ∈ Ω : [0, T] ∋ t ↦ N_t(ω) is not RCLL with jumps of size 1}

is a Q^σ-nullset because it is a P-nullset. So N_0 = 0 Q^σ-a.s. and Q^σ-almost all trajectories of
N are RCLL with jumps of size 1.

Now we compute the conditional moment generating function of the increment N_t − N_s,
0 ≤ s ≤ t ≤ T, under Q^σ. It is given by

E_{Q^σ}[ e^{u(N_t − N_s)} | F_s ] = (1/Z^σ_s) E_P[ Z^σ_t e^{u(N_t − N_s)} | F_s ]
= e^{−N_s log(1+σ) + λσs} E_P[ e^{N_t log(1+σ) − λσt} e^{u(N_t − N_s)} | F_s ]
= e^{−λσ(t−s)} E_P[ e^{(N_t − N_s) log(1+σ)} e^{u(N_t − N_s)} | F_s ]
= e^{−λσ(t−s)} E_P[ e^{(N_t − N_s)(log(1+σ) + u)} ] = e^{−λσ(t−s)} e^{λ(t−s)(e^{log(1+σ)+u} − 1)}
= e^{−λσ(t−s)} e^{λ(t−s)((1+σ)e^u − 1)} = e^{λ(1+σ)(t−s)(e^u − 1)}.

The first equality follows from the Bayes formula, the third from the F_s-measurability of
e^{N_s log(1+σ)}, the fourth from the independence of N_t − N_s of F_s under P, and the fifth from
the fact that E_P[e^{(log(1+σ)+u)(N_t − N_s)}] is the moment generating function of Poi(λ(t − s))
evaluated at log(1 + σ) + u.

The last expression above is in fact the moment generating function of Poi(λ(1 + σ)(t − s))
and thus shows that N_t − N_s ∼ Poi(λ(1 + σ)(t − s)) under Q^σ. Furthermore, since the expression
does not depend on ω ∈ Ω, we can also conclude that e^{u(N_t − N_s)} is independent of F_s
under Q^σ, and hence the same holds for N_t − N_s since it can be written as a
continuous (therefore measurable) transformation of e^{u(N_t − N_s)}. We can thus conclude that
N is a (Q^σ, F)-Poisson process with parameter λ(1 + σ) > 0.
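The final MGF manipulation, e^{−λσ(t−s)} e^{λ(t−s)((1+σ)e^u − 1)} = e^{λ(1+σ)(t−s)(e^u − 1)}, can be checked numerically for a few test values:

```python
from math import exp, log, isclose

lam, dt = 2.0, 0.7   # arbitrary intensity and time increment t - s

def mgf_poisson(mean, u):
    # moment generating function of Poi(mean) evaluated at u
    return exp(mean * (exp(u) - 1))

for sigma in (-0.5, 0.5, 2.0):
    for u in (-1.0, 0.0, 0.3, 1.0):
        # density correction times the P-MGF at the shifted argument ...
        lhs = exp(-lam * sigma * dt) * mgf_poisson(lam * dt, log(1 + sigma) + u)
        # ... equals the MGF of a Poisson process with intensity lam*(1+sigma)
        rhs = mgf_poisson(lam * (1 + sigma) * dt, u)
        assert isclose(lhs, rhs, rel_tol=1e-9)
```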
(c) Since X and Y are predictable and satisfy

E_P[ ∫_0^T X_t^2 d[Ñ]_t ] < ∞ and E_P[ ∫_0^T Y_t^2 d[Ñ]_t ] < ∞,

the stochastic integrals ∫_0^T X_t dÑ_t and ∫_0^T Y_t dÑ_t are well defined. By squaring out, we
obtain

∫_0^T X_t dÑ_t ∫_0^T Y_t dÑ_t = (1/2) ( ( ∫_0^T X_t dÑ_t + ∫_0^T Y_t dÑ_t )^2 − ( ∫_0^T X_t dÑ_t )^2 − ( ∫_0^T Y_t dÑ_t )^2 )
= (1/2) ( ( ∫_0^T (X_t + Y_t) dÑ_t )^2 − ( ∫_0^T X_t dÑ_t )^2 − ( ∫_0^T Y_t dÑ_t )^2 ).

Taking P-expectations on both sides, using linearity and applying the isometry property
of the stochastic integral to all three terms on the right-hand side (Ñ is a (P, F)-martingale
as shown in Exercise 9.2 (a)), we obtain

E_P[ ∫_0^T X_t dÑ_t ∫_0^T Y_t dÑ_t ]
= (1/2) ( E_P[ ∫_0^T (X_t + Y_t)^2 d[Ñ]_t ] − E_P[ ∫_0^T X_t^2 d[Ñ]_t ] − E_P[ ∫_0^T Y_t^2 d[Ñ]_t ] )
= E_P[ ∫_0^T X_t Y_t d[Ñ]_t ] = E_P[ ∫_0^T X_t Y_t dN_t ],

as desired. The last equality follows from the fact that [Ñ] = N, as shown in Exercise 9.2 (b)
in the exercise sheets.
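A discrete-time analogue of the polarisation argument can be checked exactly: for a martingale with i.i.d. ±1 increments (a stand-in for Ñ, not the process from the question) and deterministic integrands, E[(Σ_k X_k ΔM_k)(Σ_k Y_k ΔM_k)] = Σ_k X_k Y_k, which matches E[∫ X Y d[M]] since (ΔM_k)² = 1. Enumerating all paths:

```python
from itertools import product

# deterministic (hence predictable) integrands; the values are hypothetical
X = [1.0, -2.0, 0.5, 3.0]
Y = [0.5, 1.0, -1.0, 2.0]
n = len(X)

# left-hand side: E[(sum X_k dM_k)(sum Y_k dM_k)] over all 2^n coin paths
lhs = 0.0
for signs in product([-1, 1], repeat=n):
    ix = sum(x * s for x, s in zip(X, signs))   # discrete integral of X against M
    iy = sum(y * s for y, s in zip(Y, signs))   # discrete integral of Y against M
    lhs += ix * iy / 2**n

# right-hand side: E[sum X_k Y_k (dM_k)^2] = sum X_k Y_k since (dM_k)^2 = 1
rhs = sum(x * y for x, y in zip(X, Y))

assert abs(lhs - rhs) < 1e-9
```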
Question 5

(a) Since M is a (P, F)-supermartingale, we have for all 0 ≤ s ≤ t ≤ T that M_s − E_P[M_t | F_s] ≥
0 P-a.s. Taking the expectation of the left-hand side gives

E_P[ M_s − E_P[M_t | F_s] ] = E_P[M_s] − E_P[M_t] = C − C = 0.

But every nonnegative random variable with zero P-expectation is P-a.s. equal to zero (as
shown in Exercise 1.2 in the exercise sheets). So we have E_P[M_t | F_s] = M_s P-a.s., which
is the martingale property of M. Integrability and adaptedness follow from the fact that
M is a (P, F)-supermartingale.
(b) Let us define the process Y = (Y_t)_{t∈[0,T]} by

Y_t = ∫_0^t λ(s) dW_s.

Using the hint and the assumption that λ ∈ L²_loc(W), we see that Y is in fact a continuous
local (P, F)-martingale. The process Z is thus explicitly given by

Z_t = exp( Y_t − (1/2)⟨Y⟩_t ) = exp( ∫_0^t λ(s) dW_s − (1/2) ∫_0^t λ²(s) ds )   (3)

and it is in particular also a local (P, F)-martingale. Taking expectations on both sides
of (3) and using the fact that

∫_0^t λ(s) dW_s ∼ N( 0, ∫_0^t λ²(s) ds )

because λ is a deterministic function (see Exercise 12.3 (b) in the exercise sheets), we obtain
that E_P[Z_t] = 1 for all t ∈ [0, T]. In particular, Z is integrable. But since Z is also positive,
we can apply Fatou's lemma to show that it is in fact a (P, F)-supermartingale. Indeed,
let (τ_n)_{n∈N} be a localising sequence for Z. Then we have for all 0 ≤ s ≤ t ≤ T that

E_P[Z_t | F_s] = E_P[ liminf_{n→∞} Z_t^{τ_n} | F_s ] ≤ liminf_{n→∞} E_P[Z_t^{τ_n} | F_s] = liminf_{n→∞} Z_s^{τ_n} = Z_s.

But according to (a), every (P, F)-supermartingale with a constant expectation is a true
(P, F)-martingale, and we are done.

Alternatively, and more simply, one could recall Novikov's condition, which is briefly
mentioned in the lecture notes and says that if L = (L_t)_{t∈[0,T]} is a continuous local
(P, F)-martingale with L_0 = 0 and E_P[e^{(1/2)⟨L⟩_T}] < ∞, then E(L) is a true (P, F)-martingale
on [0, T]. In our case, we have

E_P[ e^{(1/2)⟨Y⟩_T} ] = E_P[ e^{(1/2) ∫_0^T λ²(s) ds} ] = e^{(1/2) ∫_0^T λ²(s) ds} < ∞,

so the result follows.
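The step E_P[Z_t] = 1 rests on the fact that E[e^{G − v/2}] = 1 for G ∼ N(0, v); this can be checked by deterministic quadrature (the grid parameters below are arbitrary):

```python
from math import exp, pi, sqrt

def lognormal_mean(v, n=4000, L=12.0):
    # E[exp(g - v/2)] for g ~ N(0, v), by trapezoidal quadrature on [-L*sd, L*sd];
    # the trapezoidal rule is spectrally accurate for Gaussian integrands
    sd = sqrt(v)
    h = 2 * L * sd / n
    total = 0.0
    for i in range(n + 1):
        g = -L * sd + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * exp(g - v / 2) * exp(-g * g / (2 * v)) / sqrt(2 * pi * v)
    return total * h

for v in (0.1, 1.0, 4.0):
    assert abs(lognormal_mean(v) - 1.0) < 1e-7
```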


(c) We first compute the P-dynamics of the discounted price process S^1 = S̃^1/S̃^0. Direct
application of Itô's formula (or the product rule) to the semimartingale (S̃^0, S̃^1) and the
C² function R²_{++} ∋ (x, y) ↦ y/x yields

dS^1_t / S^1_t = (μ_1 − r(t)) dt + σ_1 dW_t.   (4)

Define λ : [0, T] → R by λ(s) := (μ_1 − r(s))/σ_1 and the process Z = (Z_t)_{t∈[0,T]} by

Z_t := E( −∫ λ(s) dW_s )_t.

Since λ is left-continuous and bounded, (b) gives that Z is a positive (P, F)-martingale
with E_P[Z_t] = 1 for all t ∈ [0, T], and thus the density process of some measure Q ≈ P.
Girsanov's theorem gives that

W^Q_t := W_t − ⟨ W, −∫ λ(s) dW_s ⟩_t = W_t + ∫_0^t λ(s) ds = W_t + ∫_0^t (μ_1 − r(s))/σ_1 ds

is a (Q, F)-Brownian motion. By (4), the Q-dynamics of S^1 are given by

dS^1_t / S^1_t = (μ_1 − r(t)) dt − (μ_1 − r(t)) dt + σ_1 d( W_t + ∫_0^t (μ_1 − r(s))/σ_1 ds ) = σ_1 dW^Q_t.

In other words, S^1 = E(σ_1 W^Q), which is a (Q, F)-martingale.


(d) The arbitrage-free price at time t of the discounted payoff H is given by

V_t = E_Q[ (S̃^1_T)^p / S̃^0_T | F_t ] = (S̃^0_T)^{p−1} E_Q[ (S^1_T)^p | F_t ] = (S̃^0_T)^{p−1} (S^1_t)^p E_Q[ (S^1_T / S^1_t)^p | F_t ]
= e^{(p−1) ∫_0^T r(s) ds} (S^1_t)^p E_Q[ e^{pσ_1 (W^Q_T − W^Q_t) − p σ_1² (T−t)/2} | F_t ]
= e^{(p−1) ∫_0^T r(s) ds − p σ_1² (T−t)/2} (S^1_t)^p E_Q[ e^{pσ_1 (W^Q_T − W^Q_t)} ]
= e^{(p−1) ∫_0^T r(s) ds − p σ_1² (T−t)/2} (S^1_t)^p e^{p² σ_1² (T−t)/2}
=: v(t, S^1_t),

where we have used that W^Q_T − W^Q_t is independent of F_t under Q and normally distributed
with mean 0 and variance T − t. Consequently, the delta of H is given by

ϑ_t = ∂v/∂x |_{(t,x)=(t,S^1_t)} = p e^{(p−1) ∫_0^T r(s) ds − p σ_1² (T−t)/2} (S^1_t)^{p−1} e^{p² σ_1² (T−t)/2},   (5)

and the price at time 0 is

V_0 = v(0, S^1_0) = e^{(p−1) ∫_0^T r(s) ds − p σ_1² T/2} (S^1_0)^p e^{p² σ_1² T/2}.
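The key expectation in (d), E_Q[(S^1_T/S^1_t)^p] = e^{(p² − p) σ_1² (T − t)/2}, can be verified by quadrature against the Gaussian density; the parameter values below are hypothetical:

```python
from math import exp, pi, sqrt

def E_ST_p(S0, sigma, T, p, n=4000, L=12.0):
    # E[(S_T)^p] with S_T = S0*exp(sigma*W_T - sigma^2*T/2), W_T ~ N(0, T),
    # by trapezoidal quadrature over the Gaussian density of W_T
    sd = sqrt(T)
    h = 2 * L * sd / n
    total = 0.0
    for i in range(n + 1):
        w = -L * sd + i * h
        weight = 0.5 if i in (0, n) else 1.0
        ST = S0 * exp(sigma * w - sigma**2 * T / 2)
        total += weight * ST**p * exp(-w * w / (2 * T)) / sqrt(2 * pi * T)
    return total * h

S0, sigma, T, p = 10.0, 0.3, 2.0, 3.0        # hypothetical parameters
closed_form = S0**p * exp((p**2 - p) * sigma**2 * T / 2)
assert abs(E_ST_p(S0, sigma, T, p) / closed_form - 1.0) < 1e-7
```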
