MA451 S23 Assignment 2 Solutions Marking PDF
1. (a) We use the formula in Equation (1.3) of the new version (Equation (10.3) of the old version). We have
$$X := \begin{bmatrix} W(t)-W(s) \\ W(t)+W(s) \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} W(s) \\ W(t) \end{bmatrix} \equiv B\mathbf{W},$$
where
$$BCB^{T} = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} s & s \\ s & t \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} t-s & t-s \\ t-s & t+3s \end{bmatrix}.$$
Alternative solution:
We have the 2 × 1 vector
$$X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} := \begin{bmatrix} W(t)-W(s) \\ W(t)+W(s) \end{bmatrix}.$$
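Remark (numerical check, not part of the solution): the matrix $BCB^{T}$ can be verified with a short Monte Carlo sketch in Python/NumPy; the sample values of s and t below are arbitrary choices.

```python
import numpy as np

# Monte Carlo check that Cov([W(t)-W(s), W(t)+W(s)]) = [[t-s, t-s], [t-s, t+3s]].
rng = np.random.default_rng(0)
s, t, n = 0.4, 1.0, 10**6                           # arbitrary times 0 < s < t

Ws = np.sqrt(s) * rng.standard_normal(n)            # W(s) ~ Norm(0, s)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n)   # independent increment

X = np.vstack([Wt - Ws, Wt + Ws])
print(np.cov(X))                                    # sample covariance of X
print([[t - s, t - s], [t - s, t + 3*s]])           # theoretical BCB^T
```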
(b) We use the formula in Equation (1.3) of the new version (Equation (10.3) of the old version). We have the 3 × 1 vector having the trivariate normal distribution:
$$\mathbf{W} := \begin{bmatrix} W(s) \\ W(t) \\ W(T) \end{bmatrix} \sim \mathrm{Norm}_3(\mathbf{0}, C), \qquad C = \begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix},$$
where
$$X := \begin{bmatrix} W(T)-W(s) \\ W(T)-W(t) \\ W(t)-W(s) \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ -1 & 1 & 0 \end{bmatrix} \begin{bmatrix} W(s) \\ W(t) \\ W(T) \end{bmatrix} \equiv B\mathbf{W}.$$
Hence,
$$X \sim \mathrm{Norm}_3\!\left(\mathbf{0},\, BCB^{T}\right),$$
where
$$BCB^{T} = \begin{bmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ -1 & 1 & 0 \end{bmatrix} \begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix} \begin{bmatrix} -1 & 0 & -1 \\ 0 & -1 & 1 \\ 1 & 1 & 0 \end{bmatrix} = \begin{bmatrix} T-s & T-t & t-s \\ T-t & T-t & 0 \\ t-s & 0 & t-s \end{bmatrix}.$$
Alternative solution:
We have the 3 × 1 vector
$$X = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} := \begin{bmatrix} W(T)-W(s) \\ W(T)-W(t) \\ W(t)-W(s) \end{bmatrix}$$
having a trivariate normal distribution with mean vector E[X] = 0. The
(variance) covariance matrix is given by
Var(X1 ) = Var(W (T ) − W (s)) = T − s,
Var(X2 ) = Var(W (T ) − W (t)) = T − t,
Var(X3 ) = Var(W (t) − W (s)) = t − s,
Cov(X1 , X2 ) = Cov(W (T ) − W (s), W (T ) − W (t)) = Var(W (T ) − W (t)) = T − t,
Cov(X1 , X3 ) = Cov(W (T ) − W (s), W (t) − W (s)) = Var(W (t) − W (s)) = t − s,
Cov(X2 , X3 ) = Cov(W (T ) − W (t), W (t) − W (s)) = 0.
Hence,
$$X \sim \mathrm{Norm}_3\!\left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},\, \begin{bmatrix} T-s & T-t & t-s \\ T-t & T-t & 0 \\ t-s & 0 & t-s \end{bmatrix}\right).$$
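Remark (numerical check, not part of the solution): the same Monte Carlo idea, in Python/NumPy with arbitrary sample times, confirms the 3 × 3 covariance matrix of X.

```python
import numpy as np

# Monte Carlo check of Cov(X) for X = (W(T)-W(s), W(T)-W(t), W(t)-W(s)).
rng = np.random.default_rng(1)
s, t, T, n = 0.3, 0.7, 1.5, 10**6                   # arbitrary 0 < s < t < T

Ws = np.sqrt(s) * rng.standard_normal(n)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n)
WT = Wt + np.sqrt(T - t) * rng.standard_normal(n)

X = np.vstack([WT - Ws, WT - Wt, Wt - Ws])
print(np.cov(X))                                    # sample covariance
print([[T - s, T - t, t - s],
       [T - t, T - t, 0.0],
       [t - s, 0.0, t - s]])                        # theoretical BCB^T
```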
(c) One method is to make use of the conditioning formula in Equation (1.4) (or (10.4) in the old version) of the book. We identify the 1-by-1 (scalar) and 2-by-1 vectors:
$$X_1 = [W(t)] \equiv W(t), \qquad X_2 = \begin{bmatrix} W(s) \\ W(T) \end{bmatrix}.$$
The respective mean (vectors) are $E[X_1] = 0$, $E[X_2] = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. Given the 3-by-3 covariance matrix $C$ of $[W(t), W(s), W(T)]^{T}$,
$$C = \begin{bmatrix} t & s & t \\ s & s & s \\ t & s & T \end{bmatrix},$$
we have the sub-covariance matrices:
$$C_{11} = t, \qquad C_{12} = [s \ \ t], \qquad C_{21} = \begin{bmatrix} s \\ t \end{bmatrix}, \qquad C_{22} = \begin{bmatrix} s & s \\ s & T \end{bmatrix},$$
where
$$C_{22}^{-1} = \frac{1}{s(T-s)} \begin{bmatrix} T & -s \\ -s & s \end{bmatrix}.$$
Using the conditioning formula for $X_1|\{X_2 = \mathbf{x}_2\}$ gives the univariate (scalar) normal distribution:
$$W(t)\,\bigg|\,\left\{\begin{bmatrix} W(s) \\ W(T) \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}\right\} \equiv W(t)\,|\,\{W(s) = x, W(T) = y\}$$
$$\sim \mathrm{Norm}\!\left(\frac{1}{s(T-s)}\,[s \ \ t] \begin{bmatrix} T & -s \\ -s & s \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix},\ \ t - \frac{1}{s(T-s)}\,[s \ \ t] \begin{bmatrix} T & -s \\ -s & s \end{bmatrix} \begin{bmatrix} s \\ t \end{bmatrix}\right)$$
$$\sim \mathrm{Norm}\!\left(\frac{(T-t)x + (t-s)y}{T-s},\ \ t - \frac{(T-t)s + (t-s)t}{T-s}\right)$$
$$\sim \mathrm{Norm}\!\left(\frac{T-t}{T-s}\,x + \frac{t-s}{T-s}\,y,\ \ \frac{(t-s)(T-t)}{T-s}\right)$$
$$\sim \mathrm{Norm}\!\left(x + (y-x)\frac{t-s}{T-s},\ \ \frac{(t-s)(T-t)}{T-s}\right).$$
Note: the last three expressions are simply equivalent ways of writing the same mean and variance.
Alternative solution: We recognize this as a Brownian bridge that is fixed at W(s) = x and at W(T) = y. This follows from Equation (1.55) of the new book, where we have (upon making the replacement of parameters: $t_0 \to s$, $a \to x$, $b \to y$):
$$B^{(x,y)}_{[s,T]}(t) = x + (y-x)\frac{t-s}{T-s} + B^{(0,0)}_{[s,T]}(t).$$
You can work out the mean and variance of this normal r.v. The distribution is actually given directly by Equation (1.56) in the new book:
$$B^{(x,y)}_{[s,T]}(t) \sim \mathrm{Norm}\!\left(x + (y-x)\frac{t-s}{T-s},\ \ \frac{(t-s)(T-t)}{T-s}\right).$$
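Remark (numerical check, not part of the solution): the conditioning formula can also be verified with NumPy linear algebra; the coefficients of x and y in the conditional mean and the conditional variance should match the closed forms above (s, t, T below are arbitrary).

```python
import numpy as np

# Evaluate C12 C22^{-1} and C11 - C12 C22^{-1} C21 for the partition above.
s, t, T = 0.3, 0.7, 1.5                 # arbitrary times s < t < T

C11 = np.array([[t]])                   # Var(W(t))
C12 = np.array([[s, t]])                # Cov(W(t), (W(s), W(T)))
C22 = np.array([[s, s], [s, T]])        # covariance of (W(s), W(T))

coef = C12 @ np.linalg.inv(C22)         # coefficients of (x, y) in the mean
var = C11 - coef @ C12.T                # conditional variance

print(coef)                                         # numerical coefficients
print([(T - t)/(T - s), (t - s)/(T - s)])           # closed form
print(var[0, 0], (t - s)*(T - t)/(T - s))           # conditional variance
```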
Using the conditioning formula again, the vector $[W(t), W(T)]^{T}$ given $\{W(s) = x\}$ has the bivariate normal distribution:
$$\begin{bmatrix} W(t) \\ W(T) \end{bmatrix}\bigg|\,\{W(s) = x\} \sim \mathrm{Norm}_2\!\left(\begin{bmatrix} s \\ s \end{bmatrix}\frac{1}{s}\,x,\ \ \begin{bmatrix} t & t \\ t & T \end{bmatrix} - \begin{bmatrix} s \\ s \end{bmatrix}\frac{1}{s}\,[s \ \ s]\right) \sim \mathrm{Norm}_2\!\left(\begin{bmatrix} x \\ x \end{bmatrix},\ \ \begin{bmatrix} t-s & t-s \\ t-s & T-s \end{bmatrix}\right).$$
Alternative solution: The above is equivalent to the method of computing the conditional means, conditional variances and conditional covariance separately, as follows:
$$\mathrm{Var}(W(t)\,|\,W(s) = x) = E[W^2(t)\,|\,W(s) = x] - (E[W(t)\,|\,W(s) = x])^2 = x^2 + t - s - x^2 = t - s,$$
$$\mathrm{Var}(W(T)\,|\,W(s) = x) = E[W^2(T)\,|\,W(s) = x] - (E[W(T)\,|\,W(s) = x])^2 = x^2 + T - s - x^2 = T - s,$$
$$\mathrm{Cov}(W(t), W(T)\,|\,W(s) = x) = t - s,$$
in agreement with the covariance matrix above.
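Remark (numerical check, not part of the solution): conditional on W(s) = x, the path can be simulated forward from the fixed value x (the post-s increments are independent of W(s)); a Python/NumPy sketch with arbitrary values confirms the conditional mean vector and covariance matrix.

```python
import numpy as np

# Simulate (W(t), W(T)) given W(s) = x by building the path forward from x.
rng = np.random.default_rng(2)
x, s, t, T, n = 1.3, 0.5, 1.0, 2.0, 10**6           # arbitrary sample values

Wt = x + np.sqrt(t - s) * rng.standard_normal(n)    # W(t) | W(s) = x
WT = Wt + np.sqrt(T - t) * rng.standard_normal(n)   # W(T) | W(t)

print(Wt.mean(), WT.mean())             # both -> x
print(np.cov(np.vstack([Wt, WT])))      # -> [[t-s, t-s], [t-s, T-s]]
```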
We now compute the elements of the covariance matrix C. Assume k ≤ ℓ (note: zero mean components $\mu_i = 0$):
(b) Making use of the identity and the expectation computed in part (a) for α = 1:
The last expression follows since $I_{\{S \ge L\}} I_{\{S \ge K\}} = I_{\{S \ge K \vee L\}} = I_{\{S \ge L\}}$ (where K ≤ L).
The expectation is now calculated by using similar steps as above:
4. Assuming a given filtration for BM, in each case we show that the process (i) starts at zero, (ii) has continuous paths, (iii) has independent non-overlapping increments, and (iv) has normally distributed increments, $X(t) - X(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for s < t.
If u < v ≤ s < t, then X(v) − X(u) = −(W(v) − W(u)) and X(t) − X(s) = −(W(t) − W(s)) are independent Brownian increments. Lastly, $X(t) - X(s) = -(W(t) - W(s)) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for s < t. Hence, the process is a standard BM.
$$= c^2 (t/c^2) + c^2 (s/c^2) - 2c^2\,\frac{1}{c^2}\, s \wedge t = t + s - 2s = t - s\,.$$
Hence, $X(t) - X(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for s < t, and we conclude that the process is a standard BM.
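Remark (numerical check, not part of the solution): for the scaled process $X(t) = cW(t/c^2)$, the increment variance t − s can be confirmed by simulation (c, s, t below are arbitrary).

```python
import numpy as np

# Check that X(t) - X(s) ~ Norm(0, t-s) for X(t) = c W(t/c^2).
rng = np.random.default_rng(3)
c, s, t, n = 2.5, 0.4, 1.1, 10**6                   # arbitrary c and s < t

W_sc2 = np.sqrt(s / c**2) * rng.standard_normal(n)                # W(s/c^2)
W_tc2 = W_sc2 + np.sqrt((t - s) / c**2) * rng.standard_normal(n)  # W(t/c^2)

incr = c * (W_tc2 - W_sc2)              # X(t) - X(s)
print(incr.mean(), incr.var())          # -> 0 and t - s
```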
Here we used the fact that {τ1 ≤ t} ∈ Ft and {τ2 ≤ t} ∈ Ft since both τ1, τ2 are stopping times w.r.t. F. Moreover, the intersection of the two sets is in Ft by closure under intersections of sets in the σ-algebra Ft.
(b) Since τ is a stopping time, we have τ ≥ 0 (a.s.). As given, T0 ≥ 0. Hence, τ + T0 ≥ 0 (a.s.).
Now, fix a value for t ≥ 0. If 0 ≤ t < T0, i.e., t − T0 < 0, then the event {τ + T0 ≤ t} = {τ ≤ t − T0} = ∅ ∈ Ft. [Remark: {τ < u} = ∅ for any u ≤ 0.]
If t ≥ T0, i.e., t − T0 ≥ 0, then {τ + T0 ≤ t} = {τ ≤ t − T0} ∈ Ft−T0. Since t − T0 ≤ t, i.e., Ft−T0 ⊂ Ft, we have {τ + T0 ≤ t} ∈ Ft. We have therefore shown that {τ + T0 ≤ t} ∈ Ft for any t ≥ 0.
(c) Based on part (a), we need only show that τ := τ1 ∧ τ2 is a stopping time w.r.t. F, as it then follows that (τ1 ∧ τ2) ∨ τ3 (i.e., the maximum of two stopping times) is a stopping time w.r.t. F.
Clearly, τ1 ∧ τ2 ≥ 0 since τ1 ≥ 0 and τ2 ≥ 0 (a.s.).
Fix any t ≥ 0. Then, {τ1 ∧ τ2 ≤ t} = {τ1 ≤ t} ∪ {τ2 ≤ t} ∈ Ft by closure under the union of two sets {τ1 ≤ t} ∈ Ft and {τ2 ≤ t} ∈ Ft. This proves that τ, as defined, is a stopping time w.r.t. F.
6.
(a) For 0 ≤ s ≤ t: write W (t) = W (s) + Y , where Y := W (t) − W (s) is
independent of W (s). Hence, by the Independence Proposition,
where
(b) We can simply write W (t) = W (s) + Y , where Y := W (t) − W (s) is
independent of W (s). Hence,
Here we used the independence of Y and σ(W (s)) and the fact that
(W (s) − t) and (W (s) − t)2 are σ(W (s))–measurable.
Alternative solution: We make use of the martingale (and Markov)
property of W 2 (t) − t and W (t), i.e., E[W 2 (t) − t |W (s)] = W 2 (s) − s
and E[W (t)|W (s)] = W (s). Hence, expanding
Expanding,
$$E[(W(t) + W(u))^2\,|\,\mathcal{F}_s] = E[W^2(t)\,|\,\mathcal{F}_s] + E[W^2(u)\,|\,\mathcal{F}_s] + 2E[W(u)W(t)\,|\,\mathcal{F}_s]\,.$$
Hence, $g(x) = 4x^2 + 3(t-s) + u - s$.
integrals can be used as in part (a) above. We compute:
Hence, equation (1) reads:
$$E[(W(t) + W(u))^2\,|\,W(s) = x] = \int_{\mathbb{R}} p_0(t-s;\, y-x)\, y^2\, dy + \int_{\mathbb{R}} p_0(u-s;\, y-x)\, y^2\, dy + 2\int_{\mathbb{R}} p_0(t-s;\, y-x)\, y^2\, dy$$
$$= 3\int_{\mathbb{R}} p_0(t-s;\, y-x)\, y^2\, dy + \int_{\mathbb{R}} p_0(u-s;\, y-x)\, y^2\, dy\,.$$
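Remark (numerical check, not part of the solution): the value $g(x) = 4x^2 + 3(t-s) + u - s$ can be confirmed by a Monte Carlo sketch (with s ≤ t ≤ u; all numbers below are arbitrary).

```python
import numpy as np

# Monte Carlo check of E[(W(t)+W(u))^2 | W(s)=x] = 4x^2 + 3(t-s) + (u-s).
rng = np.random.default_rng(4)
s, t, u, x, n = 0.4, 0.9, 1.5, 0.8, 10**6           # arbitrary, s <= t <= u

Wt = x + np.sqrt(t - s) * rng.standard_normal(n)    # W(t) | W(s) = x
Wu = Wt + np.sqrt(u - t) * rng.standard_normal(n)   # W(u) | W(t)

print(((Wt + Wu)**2).mean())            # Monte Carlo estimate
print(4*x**2 + 3*(t - s) + (u - s))     # g(x)
```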
(d) Since s > t, we have (see just above Eqn. (10.29) of the old book version or above Eqn. (1.51) of the new version):
$$W(t)\,|\,\{W(s) = x\} \overset{d}{=} \mathrm{Norm}\!\left(\frac{t}{s}\,x,\ \frac{t(s-t)}{s}\right). \qquad (*)$$
That is, the conditional mean of W(t), given W(s) = x ∈ ℝ, is
$$E[W(t)\,|\,W(s) = x] = \frac{t}{s}\,x \ \implies\ E[W(t)\,|\,W(s)] = \frac{t}{s}\,W(s).$$
Again from (*), we obtain the conditional second moment as the sum of the conditional variance and the squared conditional mean, i.e.,
$$E[W^2(t)\,|\,W(s) = x] = \mathrm{Var}(W(t)\,|\,W(s) = x) + (E[W(t)\,|\,W(s) = x])^2 = \frac{t(s-t)}{s} + \left(\frac{t}{s}\,x\right)^2 = \frac{t(s-t)}{s} + \frac{t^2}{s^2}\,x^2,$$
i.e., $E[W^2(t)\,|\,W(s)] = \frac{t(s-t)}{s} + \frac{t^2}{s^2}\,W^2(s)$.
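Remark (numerical check, not part of the solution): the backward conditional law (∗) can be checked by simulating the pair (W(t), W(s)) with t < s and conditioning approximately on W(s) lying in a narrow window around x.

```python
import numpy as np

# Approximate check of W(t)|{W(s)=x} ~ Norm(t x/s, t(s-t)/s) for t < s.
rng = np.random.default_rng(5)
t, s, x, n = 0.6, 1.4, 0.9, 10**6                   # arbitrary, t < s

Wt = np.sqrt(t) * rng.standard_normal(n)
Ws = Wt + np.sqrt(s - t) * rng.standard_normal(n)

sel = np.abs(Ws - x) < 0.05             # approximate conditioning on W(s)=x
print(Wt[sel].mean(), t * x / s)        # conditional mean vs (t/s) x
print(Wt[sel].var(), t * (s - t) / s)   # conditional variance vs t(s-t)/s
```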
(e) One approach is to write W (t) = W (s) + Y , where Y := W (t) − W (s)
is independent of W (s). Hence,
7. Let Y(t) := S^n(t), t ≥ 0. The state space of the Y-process is (0, ∞). Using the GBM representation for the S-process, we reduce the transition CDF of the Y-process to that of standard Brownian motion:
$$= N\!\left(\frac{\left(\ln\frac{y}{S_0^n} - n\mu t\right) - \left(\ln\frac{x}{S_0^n} - n\mu s\right)}{n\sigma\sqrt{t-s}}\right) = N\!\left(\frac{\ln\frac{y}{x} - n\mu(t-s)}{n\sigma\sqrt{t-s}}\right).$$
Remark: The above expression for the transition CDF is also readily derived from the known distribution of a GBM process. That is, Y(t) := S^n(t), t ≥ 0, is a GBM with drift and volatility parameters nµ and nσ, respectively, as seen from the exponential representation:
$$Y(t) := S^n(t) = \left(S_0\, e^{\mu t + \sigma W(t)}\right)^n = Y_0\, e^{n\mu t + n\sigma W(t)}, \qquad Y_0 = S_0^n.$$
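Remark (numerical check, not part of the solution): the transition CDF can be checked by simulating Y(t) given Y(s) = x from the exponential representation; all parameter values below are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of P(Y(t) <= y | Y(s) = x) for Y(t) = S^n(t).
rng = np.random.default_rng(6)
n_pow, mu, sigma, S0 = 3, 0.1, 0.2, 1.0             # arbitrary parameters
s, t, m = 0.5, 1.2, 10**6
x, y = S0**n_pow, 1.5                               # arbitrary states

# Y(t) = x * exp(n mu (t-s) + n sigma (W(t)-W(s))), given Y(s) = x.
Yt = x * np.exp(n_pow*mu*(t - s)
                + n_pow*sigma*np.sqrt(t - s)*rng.standard_normal(m))

print((Yt <= y).mean())                             # Monte Carlo estimate
print(norm.cdf((np.log(y/x) - n_pow*mu*(t - s)) / (n_pow*sigma*np.sqrt(t - s))))
```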
8. (a) We can express the joint probability using the joint CDF of a bivariate standard normal pair of variables $\tilde{Z}_1 \equiv Z_1$, $\tilde{Z}_2 \equiv -Z_2$, where $Z_1 = W(s)/\sqrt{s}$, $Z_2 = W(t)/\sqrt{t}$, as:
$$P(W(s) < x, W(t) > y) = P(W(s) < x, -W(t) < -y) = P\!\left(\tilde{Z}_1 \le \frac{x}{\sqrt{s}},\ \tilde{Z}_2 \le -\frac{y}{\sqrt{t}}\right) = N_2\!\left(\frac{x}{\sqrt{s}},\, -\frac{y}{\sqrt{t}};\, -\sqrt{\frac{s}{t}}\right).$$
The correlation coefficient in $N_2$ follows from:
$$\mathrm{Cov}(\tilde{Z}_1, \tilde{Z}_2) = \mathrm{Cov}(Z_1, -Z_2) = -\mathrm{Cov}(Z_1, Z_2) = -\frac{s}{\sqrt{st}} = -\sqrt{\frac{s}{t}}\,.$$
Alternative solution: By the law of total probability:
$$P(W(s) < x, W(t) > y) + P(W(s) < x, W(t) < y) = P(W(s) < x),$$
and the joint CDF from part (a):
$$P(W(s) < x, W(t) > y) = P(W(s) < x) - P(W(s) < x, W(t) < y) = N\!\left(\frac{x}{\sqrt{s}}\right) - N_2\!\left(\frac{x}{\sqrt{s}},\, \frac{y}{\sqrt{t}};\, \sqrt{\frac{s}{t}}\right).$$
Hence,
$$P(W(s) < W(t) < x) = P\!\left(Z_1 \le 0,\ Z_2 \le \frac{x}{\sqrt{t}}\right) = N_2\!\left(0,\, \frac{x}{\sqrt{t}};\, -\sqrt{1 - \frac{s}{t}}\right).$$
and
$$C_{11} - C_{12} C_{22}^{-1} C_{21} = \begin{bmatrix} s & s \\ s & t \end{bmatrix} - \begin{bmatrix} s \\ t \end{bmatrix}\frac{1}{T}\,[s \ \ t] = \begin{bmatrix} s(1-s/T) & s(1-t/T) \\ s(1-t/T) & t(1-t/T) \end{bmatrix}.$$
Hence, $X_1|\{X_2 = x_2\}$ is given by
$$\begin{bmatrix} W(s) \\ W(t) \end{bmatrix}\bigg|\,\{W(T) = b\} \sim \mathrm{Norm}_2\!\left(\begin{bmatrix} bs/T \\ bt/T \end{bmatrix},\ \begin{bmatrix} s(1-s/T) & s(1-t/T) \\ s(1-t/T) & t(1-t/T) \end{bmatrix}\right).$$
" #
Xe1 W (s)
Let us denote e ≡ {W (T ) = b}. Then, according to the above,
X2 W (t)
we can express the random variables in terms of standard normals:
p p
Xe1 = bs/T + s(1 − s/T )Z1 , X e2 = bt/T + t(1 − t/T )Z2 ,
where Cov(X e2 ) = s(1 − t/T ). The standard normals are correlated as:
e1 , X
s
Cov(X1 , X2 )
e e s(1 − t/T ) s(1 − t/T )
ρ = Cov(Z1 , Z2 ) = =p p = .
Var(X1 ) Var(X2 )
e e s(1 − s/T ) t(1 − t/T ) t(1 − s/T )
Finally,
$$P(W(s) \le x, W(t) \le y\,|\,W(T) = b) = P(\tilde{X}_1 \le x,\, \tilde{X}_2 \le y) = P\!\left(Z_1 \le \frac{x - bs/T}{\sqrt{s(1-s/T)}},\ Z_2 \le \frac{y - bt/T}{\sqrt{t(1-t/T)}}\right)$$
$$= N_2\!\left(\frac{x - bs/T}{\sqrt{s(1-s/T)}},\, \frac{y - bt/T}{\sqrt{t(1-t/T)}};\, \rho\right) \equiv N_2\!\left(\frac{x - bs/T}{\sqrt{s(1-s/T)}},\, \frac{y - bt/T}{\sqrt{t(1-t/T)}};\, \sqrt{\frac{s(1-t/T)}{t(1-s/T)}}\right).$$
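Remark (numerical check, not part of the solution): the Brownian bridge probability can be verified by sampling $(\tilde{X}_1, \tilde{X}_2)$ directly from the bivariate normal law derived above; all numeric values are arbitrary with s < t < T.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Check P(W(s) <= x, W(t) <= y | W(T) = b) against the N2 expression.
rng = np.random.default_rng(8)
s, t, T, b, x, y, n = 0.4, 0.9, 2.0, 0.5, 0.6, 0.8, 10**6

v1, v2 = s*(1 - s/T), t*(1 - t/T)                   # conditional variances
c12 = s*(1 - t/T)                                   # conditional covariance
X = rng.multivariate_normal([b*s/T, b*t/T], [[v1, c12], [c12, v2]], size=n)
print(((X[:, 0] <= x) & (X[:, 1] <= y)).mean())     # Monte Carlo estimate

rho = np.sqrt(s*(1 - t/T) / (t*(1 - s/T)))
print(multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
      .cdf([(x - b*s/T)/np.sqrt(v1), (y - b*t/T)/np.sqrt(v2)]))
```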
(d) We have $W(t) - W(s) \overset{d}{=} \sqrt{t-s}\,Z_1$ and $W(T) - W(t) \overset{d}{=} \sqrt{T-t}\,Z_2$, where $Z_1$ and $Z_2$ are i.i.d. standard normal r.v.'s since the BM increments are non-overlapping in time. Hence,
$$P(W(T) - W(t) > W(t) - W(s) + a) = P\!\left(\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 > a\right).$$
Now,
$$\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 \overset{d}{=} \mathrm{Norm}\!\left(0,\, \mathrm{Var}\!\left(\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1\right)\right) \overset{d}{=} \mathrm{Norm}(0,\, (T-t)\,\mathrm{Var}(Z_2) + (t-s)\,\mathrm{Var}(Z_1)) \overset{d}{=} \mathrm{Norm}(0,\, (T-t) + (t-s)) \overset{d}{=} \mathrm{Norm}(0,\, T-s).$$
Therefore, $\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 \overset{d}{=} \sqrt{T-s}\,Z$, where $Z$ is a standard normal r.v. Finally, we have
$$P(W(T) - W(t) > W(t) - W(s) + a) = P\!\left(\sqrt{T-s}\,Z > a\right) = N\!\left(-\frac{a}{\sqrt{T-s}}\right).$$
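Remark (numerical check, not part of the solution): the probability $N(-a/\sqrt{T-s})$ is easily confirmed by simulating the two independent increments (sample values below are arbitrary).

```python
import numpy as np
from scipy.stats import norm

# Check P(W(T)-W(t) > W(t)-W(s) + a) = N(-a/sqrt(T-s)).
rng = np.random.default_rng(9)
s, t, T, a, n = 0.3, 1.0, 1.8, 0.4, 10**6           # arbitrary values

inc1 = np.sqrt(t - s) * rng.standard_normal(n)      # W(t) - W(s)
inc2 = np.sqrt(T - t) * rng.standard_normal(n)      # W(T) - W(t), independent
print((inc2 > inc1 + a).mean())                     # Monte Carlo estimate
print(norm.cdf(-a / np.sqrt(T - s)))                # closed form
```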
(e) Recall: given a joint CDF F (x, y) ≡ FX,Y (x, y) of random variables
X, Y , we have
mapping F(x) := sinh(x) is strictly increasing. The transition PDF is then:
$$p(s,t;x,y) = \frac{\partial}{\partial y} P(s,t;x,y) = \frac{1}{\sigma\sqrt{t-s}}\,\frac{\partial}{\partial y}\!\left[\ln\!\left(y + \sqrt{y^2+1}\right)\right]\cdot n\!\left(\frac{1}{\sigma\sqrt{t-s}}\ln\frac{y + \sqrt{y^2+1}}{x + \sqrt{x^2+1}} - \frac{\mu}{\sigma}\sqrt{t-s}\right)$$
$$= \frac{y + \sqrt{y^2+1}}{1 + y\left(y + \sqrt{y^2+1}\right)}\,\frac{1}{\sigma\sqrt{t-s}}\; n\!\left(\frac{1}{\sigma\sqrt{t-s}}\ln\frac{y + \sqrt{y^2+1}}{x + \sqrt{x^2+1}} - \frac{\mu}{\sigma}\sqrt{t-s}\right),$$
where $N'(z) \equiv n(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}$. Note: the logarithmic derivative term can also be expressed as $\frac{1 + y/\sqrt{y^2+1}}{y + \sqrt{y^2+1}}$, which simplifies to $\frac{1}{\sqrt{y^2+1}}$.
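Remark (numerical check, not part of the solution): using the simplified logarithmic derivative $1/\sqrt{y^2+1}$, the transition PDF should integrate to one in y and should match a finite difference of the transition CDF; parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Check the transition PDF of X(t) = sinh(mu*t + sigma*W(t)).
mu, sigma, s, t, x = 0.1, 0.3, 0.5, 1.2, 0.7        # arbitrary values

def z(y):   # standardized argument; arcsinh(y) = ln(y + sqrt(y^2+1))
    return ((np.arcsinh(y) - np.arcsinh(x)) / (sigma*np.sqrt(t - s))
            - (mu/sigma)*np.sqrt(t - s))

P = lambda y: norm.cdf(z(y))                                  # transition CDF
p = lambda y: norm.pdf(z(y)) / (np.sqrt(y**2 + 1) * sigma*np.sqrt(t - s))

y = np.linspace(-20.0, 20.0, 200001)
print((p(y) * (y[1] - y[0])).sum())                           # -> 1
print(p(0.9), (P(0.9 + 1e-6) - P(0.9 - 1e-6)) / 2e-6)         # PDF vs dCDF/dy
```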
(c) Using $\sinh x = \frac{1}{2}(e^x - e^{-x})$, and the MGF of a standard normal $Z$, with $W(t) \overset{d}{=} \sqrt{t}\,Z$, we have:
$$m_X(t) = E[\sinh(\mu t + \sigma W(t))] = \frac{1}{2}\!\left(e^{\mu t}\, E[e^{\sigma\sqrt{t}\,Z}] - e^{-\mu t}\, E[e^{-\sigma\sqrt{t}\,Z}]\right) = \frac{e^{\mu t} - e^{-\mu t}}{2}\, e^{\sigma^2 t/2} = \sinh(\mu t)\, e^{\sigma^2 t/2}.$$
[Remark: the MGF, $E[e^{uZ}] = e^{u^2/2}$, is an even function of $u$.]
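Remark (numerical check, not part of the solution): the mean function $m_X(t) = \sinh(\mu t)\,e^{\sigma^2 t/2}$ can be confirmed by direct Monte Carlo (arbitrary µ, σ, t below).

```python
import numpy as np

# Monte Carlo check of E[sinh(mu*t + sigma*W(t))] = sinh(mu*t)*exp(sigma^2 t/2).
rng = np.random.default_rng(10)
mu, sigma, t, n = 0.2, 0.4, 1.3, 10**6              # arbitrary values

Wt = np.sqrt(t) * rng.standard_normal(n)
print(np.sinh(mu*t + sigma*Wt).mean())              # Monte Carlo estimate
print(np.sinh(mu*t) * np.exp(sigma**2 * t / 2))     # closed form
```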
For the covariance, we have $c_X(s,t) = \mathrm{Cov}(X(s), X(t)) = E[X(s)X(t)] - m_X(s)\,m_X(t)$, where the mean function is given in part (a). For s, t ≥ 0:
$$\mathrm{Var}(W(s) + W(t)) = \mathrm{Var}(W(s)) + \mathrm{Var}(W(t)) + 2\,\mathrm{Cov}(W(s), W(t)) = s + t + 2(s \wedge t),$$
i.e., $W(s) + W(t) \overset{d}{=} \mathrm{Norm}(0,\, s + t + 2(s \wedge t)) \implies W(s) + W(t) \overset{d}{=} \sqrt{s + t + 2(s \wedge t)}\,Z$. Moreover, $W(t) - W(s) \overset{d}{=} \sqrt{|t-s|}\,Z$.
Hence,
$$E\!\left[e^{\pm\sigma(W(s)+W(t))}\right] = E\!\left[e^{\pm\sigma\sqrt{s+t+2(s\wedge t)}\,Z}\right] = e^{\frac{\sigma^2}{2}(s+t+2(s\wedge t))}$$
and
$$E\!\left[e^{\pm\sigma(W(t)-W(s))}\right] = e^{\frac{\sigma^2}{2}|t-s|}.$$
Substituting these expectations within the above expression gives: