
MA451: Solutions to Assignment 2

1. (a) We use the formula in Equation (1.3) of the new version (Equation (10.3) of the old version). We have
$$
\mathbf{X} := \begin{bmatrix} W(t)-W(s) \\ W(t)+W(s) \end{bmatrix}
= \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} W(s) \\ W(t) \end{bmatrix} \equiv \mathbf{B}\mathbf{W},
$$
where $\mathbf{W} = [W(s), W(t)]^{\mathrm{T}}$ has the 2-by-1 zero mean vector. Hence,
$$
\mathbf{X} \sim \mathrm{Norm}_2\!\left(\mathbf{0}, \mathbf{B}\mathbf{C}\mathbf{B}^{\mathrm{T}}\right), \qquad
\mathbf{C} = \begin{bmatrix} s & s \\ s & t \end{bmatrix},
$$
where
$$
\mathbf{B}\mathbf{C}\mathbf{B}^{\mathrm{T}}
= \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} s & s \\ s & t \end{bmatrix}
\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}
= \begin{bmatrix} t-s & t-s \\ t-s & t+3s \end{bmatrix}.
$$
Alternative solution:
We have the 2 × 1 vector
$$
\mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}
:= \begin{bmatrix} W(t)-W(s) \\ W(t)+W(s) \end{bmatrix}
$$
having a bivariate normal distribution with mean vector $\mathrm{E}[\mathbf{X}] = \mathbf{0}$. The (variance) covariance matrix is given by
$$
\begin{aligned}
\mathrm{Var}(X_1) &= \mathrm{Var}(W(t) - W(s)) = t - s, \\
\mathrm{Var}(X_2) &= \mathrm{Var}(W(t) + W(s)) = \mathrm{Var}(W(t)) + \mathrm{Var}(W(s)) + 2\,\mathrm{Cov}(W(s), W(t)) = t + s + 2s = t + 3s, \\
\mathrm{Cov}(X_1, X_2) &= \mathrm{Cov}(W(t) - W(s), W(t) + W(s))
= \mathrm{Cov}(W(t) - W(s), W(t) - W(s)) + 2\,\underbrace{\mathrm{Cov}(W(t) - W(s), W(s))}_{=0} \\
&= \mathrm{Var}(W(t) - W(s)) = t - s.
\end{aligned}
$$
Hence, the 2 × 1 vector has the bivariate normal distribution
$$
\mathbf{X} \sim \mathrm{Norm}_2\!\left(
\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} t-s & t-s \\ t-s & t+3s \end{bmatrix}
\right).
$$
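As a quick numerical sanity check (not part of the assignment solution), the matrix product $\mathbf{B}\mathbf{C}\mathbf{B}^{\mathrm{T}}$ can be compared against the sample covariance of simulated Brownian values; the time points $s$, $t$, the seed and the sample size below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative check of Problem 1(a): Cov([W(t)-W(s), W(t)+W(s)]) = [[t-s, t-s], [t-s, t+3s]].
rng = np.random.default_rng(0)
s, t, n = 0.4, 1.0, 200_000

Ws = np.sqrt(s) * rng.standard_normal(n)            # W(s) ~ Norm(0, s)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n)   # W(t) = W(s) + independent Norm(0, t-s)

X = np.vstack([Wt - Ws, Wt + Ws])                   # 2 x n sample of the vector X

B = np.array([[-1.0, 1.0], [1.0, 1.0]])
C = np.array([[s, s], [s, t]])
print("B C B^T    :\n", B @ C @ B.T)                # exact: [[t-s, t-s], [t-s, t+3s]]
print("sample cov :\n", np.cov(X))                  # should be close to the exact matrix
```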
(b) We use the formula in Equation (1.3) of the new version (Equation (10.3) of the old version). We have the 3 × 1 vector having the trivariate normal distribution:
$$
\mathbf{W} := \begin{bmatrix} W(s) \\ W(t) \\ W(T) \end{bmatrix} \sim \mathrm{Norm}_3(\mathbf{0}, \mathbf{C}), \qquad
\mathbf{C} = \begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix},
$$
where
$$
\mathbf{X} := \begin{bmatrix} W(T)-W(s) \\ W(T)-W(t) \\ W(t)-W(s) \end{bmatrix}
= \begin{bmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ -1 & 1 & 0 \end{bmatrix}
\begin{bmatrix} W(s) \\ W(t) \\ W(T) \end{bmatrix} \equiv \mathbf{B}\mathbf{W}.
$$
Hence,
$$
\mathbf{X} \sim \mathrm{Norm}_3\!\left(\mathbf{0}, \mathbf{B}\mathbf{C}\mathbf{B}^{\mathrm{T}}\right),
$$
where
$$
\mathbf{B}\mathbf{C}\mathbf{B}^{\mathrm{T}}
= \begin{bmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ -1 & 1 & 0 \end{bmatrix}
\begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix}
\begin{bmatrix} -1 & 0 & -1 \\ 0 & -1 & 1 \\ 1 & 1 & 0 \end{bmatrix}
= \begin{bmatrix} T-s & T-t & t-s \\ T-t & T-t & 0 \\ t-s & 0 & t-s \end{bmatrix}.
$$
Alternative solution:
We have the 3 × 1 vector
$$
\mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}
:= \begin{bmatrix} W(T)-W(s) \\ W(T)-W(t) \\ W(t)-W(s) \end{bmatrix}
$$
having a trivariate normal distribution with mean vector $\mathrm{E}[\mathbf{X}] = \mathbf{0}$. The (variance) covariance matrix is given by
$$
\begin{aligned}
\mathrm{Var}(X_1) &= \mathrm{Var}(W(T) - W(s)) = T - s, \\
\mathrm{Var}(X_2) &= \mathrm{Var}(W(T) - W(t)) = T - t, \\
\mathrm{Var}(X_3) &= \mathrm{Var}(W(t) - W(s)) = t - s, \\
\mathrm{Cov}(X_1, X_2) &= \mathrm{Cov}(W(T) - W(s), W(T) - W(t)) = \mathrm{Var}(W(T) - W(t)) = T - t, \\
\mathrm{Cov}(X_1, X_3) &= \mathrm{Cov}(W(T) - W(s), W(t) - W(s)) = \mathrm{Var}(W(t) - W(s)) = t - s, \\
\mathrm{Cov}(X_2, X_3) &= \mathrm{Cov}(W(T) - W(t), W(t) - W(s)) = 0.
\end{aligned}
$$
Hence,
$$
\mathbf{X} \sim \mathrm{Norm}_3\!\left(
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\begin{bmatrix} T-s & T-t & t-s \\ T-t & T-t & 0 \\ t-s & 0 & t-s \end{bmatrix}
\right).
$$
(c) One method is to make use of the conditioning formula in Equation (1.4) (or (10.4) in the old version) of the book. We identify the 1-by-1 (scalar) and 2-by-1 vectors:
$$
\mathbf{X}_1 = [W(t)] \equiv W(t), \qquad
\mathbf{X}_2 = \begin{bmatrix} W(s) \\ W(T) \end{bmatrix}.
$$
The respective mean (vectors) are $\mathrm{E}[\mathbf{X}_1] = 0$, $\mathrm{E}[\mathbf{X}_2] = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. Given the 3-by-3 covariance matrix $\mathbf{C}$ of $[W(t), W(s), W(T)]^{\mathrm{T}}$,
$$
\mathbf{C} = \begin{bmatrix} t & s & t \\ s & s & s \\ t & s & T \end{bmatrix},
$$
we have the sub-covariance matrices:
$$
\mathbf{C}_{11} = t, \quad
\mathbf{C}_{12} = [\,s \;\; t\,], \quad
\mathbf{C}_{21} = \begin{bmatrix} s \\ t \end{bmatrix}, \quad
\mathbf{C}_{22} = \begin{bmatrix} s & s \\ s & T \end{bmatrix},
$$
where
$$
\mathbf{C}_{22}^{-1} = \frac{1}{s(T-s)} \begin{bmatrix} T & -s \\ -s & s \end{bmatrix}.
$$
Using the conditioning formula for $\mathbf{X}_1 | \{\mathbf{X}_2 = \mathbf{x}_2\}$ gives the univariate (scalar) normal distribution:
$$
\begin{aligned}
W(t)\,\Big|\, \left\{ \begin{bmatrix} W(s) \\ W(T) \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} \right\}
&\equiv W(t)\,|\,\{W(s) = x,\, W(T) = y\} \\
&\sim \mathrm{Norm}\!\left( \frac{1}{s(T-s)} [\,s \;\; t\,] \begin{bmatrix} T & -s \\ -s & s \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix},\;
t - \frac{1}{s(T-s)} [\,s \;\; t\,] \begin{bmatrix} T & -s \\ -s & s \end{bmatrix} \begin{bmatrix} s \\ t \end{bmatrix} \right) \\
&\sim \mathrm{Norm}\!\left( \frac{(T-t)x + (t-s)y}{T-s},\; t - \frac{(T-t)s + (t-s)t}{T-s} \right) \\
&\sim \mathrm{Norm}\!\left( \frac{T-t}{T-s}\,x + \frac{t-s}{T-s}\,y,\; \frac{(t-s)(T-t)}{T-s} \right) \\
&\sim \mathrm{Norm}\!\left( x + (y-x)\frac{t-s}{T-s},\; \frac{(t-s)(T-t)}{T-s} \right).
\end{aligned}
$$

Note: the last three expressions are simply equivalent ways of writing the same distribution.
Alternative solution: We recognize this as a Brownian bridge that is fixed at $W(s) = x$ and at $W(T) = y$. This follows from Equation (1.55) of the new book, where we have (upon making the replacement of parameters: $t_0 \to s$, $a \to x$, $b \to y$):
$$
B^{(x,y)}_{[s,T]}(t) = x + (y-x)\frac{t-s}{T-s} + B^{(0,0)}_{[s,T]}(t).
$$
You can work out the mean and variance of this normal r.v. The distribution is actually given directly by Equation (1.56) in the new book:
$$
B^{(x,y)}_{[s,T]}(t) \sim \mathrm{Norm}\!\left( x + (y-x)\frac{t-s}{T-s},\; \frac{(t-s)(T-t)}{T-s} \right).
$$
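The conditioning formula can also be checked numerically: the sketch below (with arbitrary illustrative values of $s$, $t$, $T$, $x$, $y$; not part of the assignment solution) evaluates $\mathbf{C}_{12}\mathbf{C}_{22}^{-1}\mathbf{x}_2$ and $\mathbf{C}_{11} - \mathbf{C}_{12}\mathbf{C}_{22}^{-1}\mathbf{C}_{21}$ directly and compares them with the closed-form Brownian bridge mean and variance above.

```python
import numpy as np

# Numerical check of Problem 1(c): W(t) | {W(s)=x, W(T)=y}.
s, t, T = 0.5, 1.2, 2.0
x, y = 0.3, -0.7

C11 = t
C12 = np.array([[s, t]])                 # Cov(W(t), [W(s), W(T)])
C21 = C12.T
C22 = np.array([[s, s], [s, T]])         # Cov([W(s), W(T)])

cond_mean = (C12 @ np.linalg.inv(C22) @ np.array([x, y]))[0]
cond_var = (C11 - C12 @ np.linalg.inv(C22) @ C21)[0, 0]

# Closed-form expressions derived above.
mean_formula = x + (y - x) * (t - s) / (T - s)
var_formula = (t - s) * (T - t) / (T - s)

print(cond_mean, mean_formula)   # should agree
print(cond_var, var_formula)     # should agree
```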

(d) One method is to make use of the conditioning formula in Equation (1.5) (or (10.5) in the old version) of the book. We identify the 1-by-1 (scalar) and 2-by-1 vectors:
$$
\mathbf{X}_1 = [W(s)] \equiv W(s), \qquad
\mathbf{X}_2 = \begin{bmatrix} W(t) \\ W(T) \end{bmatrix}.
$$
The respective mean (vectors) are $\mathrm{E}[\mathbf{X}_1] = 0$, $\mathrm{E}[\mathbf{X}_2] = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. Given the 3-by-3 covariance matrix $\mathbf{C}$ of $[W(s), W(t), W(T)]^{\mathrm{T}}$,
$$
\mathbf{C} = \begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix},
$$
we have the sub-covariance matrices:
$$
\mathbf{C}_{11} = s, \quad
\mathbf{C}_{12} = [\,s \;\; s\,], \quad
\mathbf{C}_{21} = \begin{bmatrix} s \\ s \end{bmatrix}, \quad
\mathbf{C}_{22} = \begin{bmatrix} t & t \\ t & T \end{bmatrix},
$$
where $\mathbf{C}_{11}^{-1} = 1/s$. Using the conditioning formula for $\mathbf{X}_2 | \{\mathbf{X}_1 = x\}$ gives the bivariate normal distribution:
$$
\begin{aligned}
\begin{bmatrix} W(t) \\ W(T) \end{bmatrix} \Big|\, \{W(s) = x\}
&\sim \mathrm{Norm}_2\!\left( \begin{bmatrix} s \\ s \end{bmatrix} \cdot \frac{1}{s} \cdot x,\;
\begin{bmatrix} t & t \\ t & T \end{bmatrix} - \begin{bmatrix} s \\ s \end{bmatrix} \frac{1}{s} [\,s \;\; s\,] \right) \\
&\sim \mathrm{Norm}_2\!\left( \begin{bmatrix} x \\ x \end{bmatrix},\;
\begin{bmatrix} t-s & t-s \\ t-s & T-s \end{bmatrix} \right).
\end{aligned}
$$
Alternative solution: The above is equivalent to the method of computing the conditional mean, conditional variances and conditional covariance separately, as follows (here $\mathrm{E}[W(t)|W(s)=x] = \mathrm{E}[W(T)|W(s)=x] = x$, and $\mathrm{E}[W(t)W(T)|W(s)=x] = \mathrm{E}[W^2(t)|W(s)=x] = x^2 + t - s$ by the tower and martingale properties):
$$
\begin{aligned}
\mathrm{Var}(W(t)|W(s)=x) &= \mathrm{E}[W^2(t)|W(s)=x] - (\mathrm{E}[W(t)|W(s)=x])^2 = x^2 + t - s - x^2 = t - s, \\
\mathrm{Var}(W(T)|W(s)=x) &= \mathrm{E}[W^2(T)|W(s)=x] - (\mathrm{E}[W(T)|W(s)=x])^2 = x^2 + T - s - x^2 = T - s, \\
\mathrm{Cov}(W(t), W(T)|W(s)=x) &= \mathrm{E}[W(t)W(T)|W(s)=x] - \mathrm{E}[W(t)|W(s)=x] \cdot \mathrm{E}[W(T)|W(s)=x] \\
&= x^2 + t - s - x \cdot x = t - s.
\end{aligned}
$$
Hence, $\begin{bmatrix} W(t) \\ W(T) \end{bmatrix} \Big|\, \{W(s) = x\}$ has the above bivariate normal distribution.
[Note: the conditional density is a product of two transition densities, as further discussed below. By expressing the product as a single bivariate normal (Gaussian) density, we can obtain the above means, variances and correlation (covariance). This is a more laborious approach!]

2. Each $X_k = \sum_{j=1}^{k} Z_j$ is a normal random variable; moreover, $\mathbf{X} = [X_1, X_2, \ldots, X_n]^{\mathrm{T}}$ is a linear transformation of the i.i.d. standard normal vector $[Z_1, \ldots, Z_n]^{\mathrm{T}}$, so $\mathbf{X}$ is a (jointly) normal $n$-dimensional random vector. Since each $Z_j$ is a standard normal with mean zero, we have
$$
\mathrm{E}[X_k] = \sum_{j=1}^{k} \mathrm{E}[Z_j] = 0 \;\Longrightarrow\; \boldsymbol{\mu} \equiv \mathrm{E}[\mathbf{X}] = \mathbf{0}.
$$
We now compute the elements of the covariance matrix $\mathbf{C}$. Assume $k \le \ell$ (note: zero mean components $\mu_i = 0$):
$$
C_{k\ell} = \mathrm{E}[(X_k - \mu_k)(X_\ell - \mu_\ell)] = \mathrm{E}[X_k X_\ell]
= \sum_{i=1}^{k} \sum_{j=1}^{\ell} \mathrm{E}[Z_i Z_j]
= \sum_{i=1}^{k} \sum_{j=1}^{k} \underbrace{\mathrm{E}[Z_i Z_j]}_{=\delta_{ij}}
+ \sum_{i=1}^{k} \sum_{j=k+1}^{\ell} \underbrace{\mathrm{E}[Z_i Z_j]}_{=0}
= \sum_{i=1}^{k} (1) = k.
$$
Here we assumed $k \le \ell$. Hence, for any $k, \ell = 1, \ldots, n$ we have $C_{k\ell} = k \wedge \ell$. Note that the variances are given by $\mathrm{Var}(X_k) = C_{kk} = k$, $k = 1, \ldots, n$.
In summary, we have
$$
\mathbf{X} \sim \mathrm{Norm}_n(\mathbf{0}, \mathbf{C})
$$
with covariance matrix elements $C_{k\ell} = k \wedge \ell$, $k, \ell = 1, \ldots, n$.
Alternative solution: We can directly apply the identity (see Eq. (10.3) in the textbook):
$$
\mathbf{X} = \mathbf{B}\mathbf{Z} \sim \mathrm{Norm}_n(\mathbf{0}, \mathbf{B}\mathbf{B}^{\mathrm{T}})
$$
with $\mathbf{Z} = [Z_1, Z_2, \ldots, Z_n]^{\mathrm{T}} \sim \mathrm{Norm}_n(\mathbf{0}, \mathbf{I})$ and lower triangular $n \times n$ matrix $\mathbf{B}$ of ones: $B_{ij} = \mathbb{1}_{\{i \ge j\}}$. Computing the covariance matrix elements gives (the same result as above):
$$
C_{k\ell} = (\mathbf{B}\mathbf{B}^{\mathrm{T}})_{k\ell}
= \sum_{j=1}^{n} B_{kj} B_{\ell j}
= \sum_{j=1}^{n} \mathbb{1}_{\{j \le k\}} \mathbb{1}_{\{j \le \ell\}}
= \sum_{j=1}^{n} \mathbb{1}_{\{j \le k \wedge \ell\}}
= \sum_{j=1}^{k \wedge \ell} (1) = k \wedge \ell.
$$
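A small simulation (not part of the assignment solution; dimension, seed and sample size are arbitrary choices) shows the $C_{k\ell} = k \wedge \ell$ structure empirically:

```python
import numpy as np

# Empirical check of Problem 2: Cov(X_k, X_l) = min(k, l) for X_k = Z_1 + ... + Z_k.
rng = np.random.default_rng(1)
n, n_samples = 5, 500_000

Z = rng.standard_normal((n_samples, n))   # i.i.d. standard normals
X = np.cumsum(Z, axis=1)                  # X_k = Z_1 + ... + Z_k, columns k = 1..n

C_exact = np.minimum.outer(np.arange(1, n + 1), np.arange(1, n + 1))
C_sample = np.cov(X, rowvar=False)

print(C_exact)
print(np.round(C_sample, 2))              # should be close to C_exact
```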

3. (a) The expectation is easily computed by directly using the expectation identity (A.1) or (A.2) in the Appendix to the textbook. Here $S = e^{a + bZ}$ with $Z$ a standard normal and $b > 0$. Applying identity (A.1), i.e., $\mathrm{E}[e^{BZ} \mathbb{I}_{\{Z > A\}}] = e^{B^2/2} N(B - A)$, with constants $A \equiv (\ln K - a)/b$, $B \equiv \alpha b$:
$$
\begin{aligned}
\mathrm{E}[S^\alpha \, \mathbb{I}_{\{S > K\}}]
&= e^{\alpha a} \, \mathrm{E}[e^{\alpha b Z} \, \mathbb{I}_{\{\ln(S/K) > 0\}}]
= e^{\alpha a} \, \mathrm{E}[e^{\alpha b Z} \, \mathbb{I}_{\{Z > (\ln K - a)/b\}}] \\
&= e^{\alpha a + \alpha^2 b^2 / 2} \, N\!\left( \alpha b + \frac{a - \ln K}{b} \right).
\end{aligned}
$$
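As a numerical sanity check of the identity (not part of the assignment solution; the parameter values and seed below are arbitrary illustrative choices), one can compare a Monte Carlo estimate of $\mathrm{E}[S^\alpha \mathbb{I}_{\{S>K\}}]$ with the closed form:

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of Problem 3(a): E[S^alpha * 1{S > K}] with S = exp(a + b*Z).
rng = np.random.default_rng(2)
a, b, alpha, K = 0.1, 0.4, 2.0, 1.2
n = 1_000_000

Z = rng.standard_normal(n)
S = np.exp(a + b * Z)

mc_estimate = np.mean(S**alpha * (S > K))
closed_form = np.exp(alpha * a + 0.5 * alpha**2 * b**2) * norm.cdf(alpha * b + (a - np.log(K)) / b)

print(mc_estimate, closed_form)   # the two values should be close
```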

(b) Making use of the identity and the expectation computed in part (a) for $\alpha = 1$:
$$
\begin{aligned}
\mathrm{E}[\min(S, K)]
&= \mathrm{E}[S \, \mathbb{I}_{\{S < K\}}] + K \, \mathrm{P}(S > K) \\
&= \mathrm{E}[S] - \mathrm{E}[S \, \mathbb{I}_{\{S > K\}}] + K \, \mathrm{P}(\ln(S/K) > 0) \\
&= e^{a} \, \mathrm{E}[e^{bZ}] - \mathrm{E}[S \, \mathbb{I}_{\{S > K\}}] + K \, \mathrm{P}(Z > (\ln K - a)/b) \\
&= e^{a + b^2/2} - e^{a + b^2/2} N\!\left( b + \frac{a - \ln K}{b} \right) + K \, N\!\left( \frac{a - \ln K}{b} \right) \\
&\equiv e^{a + b^2/2} N\!\left( -b + \frac{\ln K - a}{b} \right) + K \, N\!\left( \frac{a - \ln K}{b} \right),
\end{aligned}
$$
where we used $1 - N(x) = N(-x)$ in the last expression.


(c) Using the fact that $K \le L$ (i.e., $(K - L)^+ = 0$) we can simplify the random variable as follows:
$$
\begin{aligned}
(S \vee K - L)^+
&= (S - L)^+ \, \mathbb{I}_{\{S \ge K\}} + (K - L)^+ \, \mathbb{I}_{\{S < K\}}
= (S - L)^+ \, \mathbb{I}_{\{S \ge K\}} \\
&= (S - L) \, \mathbb{I}_{\{S \ge L\}} \, \mathbb{I}_{\{S \ge K\}} = (S - L) \, \mathbb{I}_{\{S \ge L\}} = (S - L)^+.
\end{aligned}
$$
The last expression follows since $\mathbb{I}_{\{S \ge L\}} \mathbb{I}_{\{S \ge K\}} = \mathbb{I}_{\{S \ge K \vee L\}} = \mathbb{I}_{\{S \ge L\}}$ (where $K \le L$).
The expectation is now calculated by using similar steps as above:
$$
\begin{aligned}
\mathrm{E}[(S \vee K - L)^+]
&= \mathrm{E}[(S - L)^+] = \mathrm{E}[S \, \mathbb{I}_{\{S > L\}}] - L \, \mathrm{P}(S > L) \\
&= e^{a + b^2/2} N\!\left( b + \frac{a - \ln L}{b} \right) - L \, N\!\left( \frac{a - \ln L}{b} \right).
\end{aligned}
$$

4. Assuming a given filtration for BM, in each case we show that the process (i) starts at zero, (ii) has continuous paths, (iii) has independent non-overlapping increments, and (iv) has normally distributed increments, $X(t) - X(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for $s < t$.

(a) $X(t) := -W(t)$, so the process starts at the origin, $X(0) = -W(0) = 0$ (a.s.). Since $f(x) := -x$ is a continuous function and $W(t)$ is continuous for $t > 0$, we have that $X(t) = f(W(t))$ is continuous for $t > 0$, i.e., the process has continuous paths. Letting $u < v \le s < t$, then $X(v) - X(u) = -(W(v) - W(u))$ and $X(t) - X(s) = -(W(t) - W(s))$ are independent Brownian increments. Lastly, $X(t) - X(s) = -(W(t) - W(s)) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for $s < t$. Hence, the process is a standard BM.

(b) We have $X(0) := W(T + 0) - W(T) = 0$ for any fixed $T \in (0, \infty)$. Since $W(T + t)$ has continuous paths (and $W(T)$ is constant), the process defined by $X(t) := W(T + t) - W(T)$ has continuous paths. Given $u < v \le s < t$, $X(v) - X(u) = W(T + v) - W(T + u)$ and $X(t) - X(s) = W(T + t) - W(T + s)$ are independent Brownian increments with non-overlapping time intervals $(T+u, T+v]$ and $(T+s, T+t]$. Lastly, $X(t) - X(s) = W(T + t) - W(T + s) \overset{d}{=} \mathrm{Norm}(0, t-s)$, where $T + t - (T + s) = t - s$, for $s < t$. Hence, the process is a standard BM.

(c) $X(0) = cW(0/c^2) = cW(0) = 0$, so the process starts at zero. Since $X(t) = cW(f(t))$, where $f(t) := t/c^2$ is a continuous function of $t$, the process has continuous paths. Given $u < v \le s < t$, $X(v) - X(u) = c\left( W(v/c^2) - W(u/c^2) \right)$ and $X(t) - X(s) = c\left( W(t/c^2) - W(s/c^2) \right)$ are independent Brownian increments with non-overlapping (re-scaled) time intervals $(s/c^2, t/c^2]$ and $(u/c^2, v/c^2]$. Clearly, any increment $X(t) - X(s)$ is a normal random variable with zero mean (since $\mathrm{E}[W(t/c^2)] = 0$ for any $t \ge 0$) and with variance (for $s < t$)
$$
\begin{aligned}
\mathrm{Var}(X(t) - X(s))
&= \mathrm{Var}\!\left( c\left( W(t/c^2) - W(s/c^2) \right) \right) \\
&= c^2 \, \mathrm{Var}(W(t/c^2)) + c^2 \, \mathrm{Var}(W(s/c^2)) - 2c^2 \, \mathrm{Cov}(W(s/c^2), W(t/c^2)) \\
&= c^2 (t/c^2) + c^2 (s/c^2) - 2c^2 \frac{1}{c^2} (s \wedge t) \\
&= t + s - 2s = t - s.
\end{aligned}
$$
Hence, $X(t) - X(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$, for $s < t$, and we conclude that the process is a standard BM.
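A quick simulation sketch of the scaling property in part (c) (arbitrary $c$, $s$, $t$, seed and sample size; not part of the assignment solution): the increments of $X(t) = cW(t/c^2)$ have variance $t - s$.

```python
import numpy as np

# Illustrative check of Problem 4(c): X(t) = c * W(t/c^2) has increments with variance t - s.
rng = np.random.default_rng(3)
c, s, t, n = 2.5, 0.6, 1.4, 500_000

# Sample W(s/c^2) and W(t/c^2) jointly using an independent Brownian increment.
W_s = np.sqrt(s / c**2) * rng.standard_normal(n)
W_t = W_s + np.sqrt((t - s) / c**2) * rng.standard_normal(n)

increments = c * (W_t - W_s)                # X(t) - X(s)
print(np.var(increments), t - s)            # sample variance should be close to t - s
print(np.mean(increments))                  # sample mean should be close to 0
```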

5. (a) Since τ1 , τ2 are stopping times, we have τ1 ≥ 0 and τ2 ≥ 0 (a.s.).


Hence, T = τ1 ∨ τ2 ≥ 0 (a.s.), i.e., T is nonnegative. Fix any parameter
t ≥ 0. The event

{T ≤ t} = {τ1 ∨ τ2 ≤ t} = {τ1 ≤ t} ∩ {τ2 ≤ t} ∈ Ft .

Here we used the fact that {τ1 ≤ t} ∈ Ft and {τ2 ≤ t} ∈ Ft since both τ1, τ2 are stopping times w.r.t. F. Moreover, the intersection of the two sets is in Ft by closure of intersections of sets in the σ-algebra Ft.
(b) Since τ is a stopping time, we have τ ≥ 0 (a.s.). As given, T0 ≥ 0. Hence, τ + T0 ≥ 0 (a.s.).
Now, fix a value for t ≥ 0. If 0 ≤ t < T0, i.e., t − T0 < 0, then the event {τ + T0 ≤ t} = {τ ≤ t − T0} = ∅ ∈ Ft. [Remark: {τ ≤ u} = ∅ for any u < 0.]
If t ≥ T0, i.e., t − T0 ≥ 0, then {τ + T0 ≤ t} = {τ ≤ t − T0} ∈ Ft−T0. Since t − T0 ≤ t, i.e., Ft−T0 ⊂ Ft, we have {τ + T0 ≤ t} ∈ Ft. We have therefore shown that {τ + T0 ≤ t} ∈ Ft for any t ≥ 0.
(c) Based on part (a), we need only show that τ := τ1 ∧ τ2 is a stopping time w.r.t. F, as it then follows that (τ1 ∧ τ2) ∨ τ3 (i.e., the maximum of two stopping times) is a stopping time w.r.t. F.
Clearly, τ1 ∧ τ2 ≥ 0 since τ1 ≥ 0 and τ2 ≥ 0 (a.s.).
Fix any t ≥ 0. Then, {τ1 ∧ τ2 ≤ t} = {τ1 ≤ t} ∪ {τ2 ≤ t} ∈ Ft by closure under the union of the two sets {τ1 ≤ t} ∈ Ft and {τ2 ≤ t} ∈ Ft. This proves that τ1 ∧ τ2 is a stopping time w.r.t. F and hence, by part (a), that T := (τ1 ∧ τ2) ∨ τ3, as defined, is a stopping time w.r.t. F.

6.
(a) For 0 ≤ s ≤ t: write $W(t) = W(s) + Y$, where $Y := W(t) - W(s)$ is independent of $\mathcal{F}^W_s$. Hence, by the Independence Proposition,
$$
\mathrm{E}[W^3(t) \,|\, \mathcal{F}^W_s] = \mathrm{E}[(Y + W(s))^3 \,|\, \mathcal{F}^W_s] = g(W(s)),
$$
where
$$
g(x) = \mathrm{E}[(Y + x)^3] = \mathrm{E}[Y^3] + 3x\,\mathrm{E}[Y^2] + 3x^2\,\mathrm{E}[Y] + x^3
= 0 + 3x(t-s) + 3x^2 \cdot 0 + x^3 = x^3 + 3x(t-s).
$$
Hence, $\mathrm{E}[W^3(t) \,|\, \mathcal{F}^W_s] = W^3(s) + 3(t-s)W(s)$.
Note: the calculation can also be done with the use of the transition PDF of BM. Alternatively, one can expand $(Y + W(s))^3$ in the above conditional expectation and use independence.
For 0 ≤ t ≤ s: $W^3(t)$ is $\mathcal{F}^W_s$–measurable since $s \ge t$, where $\sigma(W^3(t)) \subset \mathcal{F}^W_s$. Hence, we "pull out what is known" (i.e., $\mathrm{E}[X|\mathcal{G}] = X$ when $X$ is $\mathcal{G}$–measurable), giving
$$
\mathrm{E}[W^3(t) \,|\, \mathcal{F}^W_s] = W^3(t).
$$
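The function $g(x) = x^3 + 3x(t-s)$ can be sanity-checked by simulation (arbitrary $x$, $s$, $t$, seed and sample size; not part of the assignment solution):

```python
import numpy as np

# Check of Problem 6(a): E[(x + Y)^3] = x^3 + 3*x*(t - s) for Y ~ Norm(0, t - s).
rng = np.random.default_rng(4)
x, s, t, n = 0.8, 0.3, 1.1, 1_000_000

Y = np.sqrt(t - s) * rng.standard_normal(n)
print(np.mean((x + Y)**3), x**3 + 3 * x * (t - s))   # the two values should be close
```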

(b) We can simply write $W(t) = W(s) + Y$, where $Y := W(t) - W(s)$ is independent of $W(s)$. Hence,
$$
\begin{aligned}
\mathrm{E}[(W(t) - t)^2 \,|\, W(s)]
&= \mathrm{E}[(Y + W(s) - t)^2 \,|\, W(s)] \\
&= \mathrm{E}[Y^2 \,|\, W(s)] + 2\,\mathrm{E}[Y(W(s) - t) \,|\, W(s)] + \mathrm{E}[(W(s) - t)^2 \,|\, W(s)] \\
&= \mathrm{E}[Y^2] + 2(W(s) - t)\,\mathrm{E}[Y] + (W(s) - t)^2 \\
&= t - s + (W(s) - t)^2.
\end{aligned}
$$
Here we used the independence of $Y$ and $\sigma(W(s))$ and the fact that $(W(s) - t)$ and $(W(s) - t)^2$ are $\sigma(W(s))$–measurable.
Alternative solution: We make use of the martingale (and Markov) property of $W^2(t) - t$ and $W(t)$, i.e., $\mathrm{E}[W^2(t) - t \,|\, W(s)] = W^2(s) - s$ and $\mathrm{E}[W(t) \,|\, W(s)] = W(s)$. Hence, expanding,
$$
\begin{aligned}
\mathrm{E}[(W(t) - t)^2 \,|\, W(s)]
&= \mathrm{E}[W^2(t) - 2tW(t) + t^2 \,|\, W(s)] \\
&= \mathrm{E}[W^2(t) - t \,|\, W(s)] + t - 2t\,\mathrm{E}[W(t) \,|\, W(s)] + t^2 \\
&= W^2(s) - s + t - 2tW(s) + t^2 \equiv t - s + (W(s) - t)^2.
\end{aligned}
$$

[Remark: Yet another alternative solution is to use the transition PDF method and compute appropriate integrals.]

(c) Write $W(t) = Y_1 + W(s)$ and $W(u) = Y_2 + W(s)$, where $Y_1 := W(t) - W(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$ and $Y_2 := W(u) - W(s) \overset{d}{=} \mathrm{Norm}(0, u-s)$. Note that $Y_1, Y_2$ are independent of $\mathcal{F}_s$. Hence, $Y := Y_1 + Y_2$ is independent of $\mathcal{F}_s$. By applying Independence Proposition 6.7:
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, \mathcal{F}_s] = \mathrm{E}[(Y + 2W(s))^2 \,|\, \mathcal{F}_s] = g(W(s)),
$$
where $g(x) = \mathrm{E}[(Y + 2x)^2] = \mathrm{E}[Y^2] + 4x\,\mathrm{E}[Y] + 4x^2 = \mathrm{E}[Y^2] + 4x^2$.
Computing the second moment of $Y$ (note: $s \le t < u$):
$$
\begin{aligned}
\mathrm{E}[Y^2] &= \mathrm{E}[Y_1^2] + \mathrm{E}[Y_2^2] + 2\,\mathrm{Cov}(Y_1, Y_2) \\
&= t - s + u - s + 2\,\mathrm{Cov}(W(t) - W(s), W(u) - W(s)) \\
&= t - s + u - s + 2\,\mathrm{Var}(W(t) - W(s)) \\
&= t - s + u - s + 2(t - s) = 3(t - s) + u - s.
\end{aligned}
$$
Hence, $g(x) = 4x^2 + 3(t-s) + u - s$, giving
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, \mathcal{F}_s] = \mathrm{E}[(Y + 2W(s))^2 \,|\, \mathcal{F}_s] = 4W^2(s) + 3(t-s) + u - s = 4W^2(s) + 3t + u - 4s.
$$

Alternative solution: By expanding the square we have:
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, \mathcal{F}_s] = \mathrm{E}[W^2(t) \,|\, \mathcal{F}_s] + \mathrm{E}[W^2(u) \,|\, \mathcal{F}_s] + 2\,\mathrm{E}[W(u)W(t) \,|\, \mathcal{F}_s].
$$
The first expectation is evaluated as in the textbook: let $W(t) = W(s) + Y$, where $Y = W(t) - W(s) \overset{d}{=} \mathrm{Norm}(0, t-s)$ is independent of $\mathcal{F}_s$ and $W(s)$ is $\mathcal{F}_s$–measurable. Hence,
$$
\mathrm{E}[W^2(t) \,|\, \mathcal{F}_s] = W^2(s) + 2W(s)\,\mathrm{E}[Y] + \mathrm{E}[Y^2] = W^2(s) + t - s.
$$
[Note: this also directly follows from the martingale property of $W^2(t) - t$.] The second expectation follows identically with time $t$ replaced by $u$: $\mathrm{E}[W^2(u) \,|\, \mathcal{F}_s] = W^2(s) + u - s$. The third expectation can be evaluated by combining the tower property, pulling out the $\mathcal{F}_t$–measurable $W(t)$ and then using the martingale property (note: $s \le t < u$):
$$
\mathrm{E}[W(u)W(t) \,|\, \mathcal{F}_s] = \mathrm{E}\big[\mathrm{E}[W(u)W(t) \,|\, \mathcal{F}_t] \,\big|\, \mathcal{F}_s\big]
= \mathrm{E}\big[W(t)\, \underbrace{\mathrm{E}[W(u) \,|\, \mathcal{F}_t]}_{W(t)} \,\big|\, \mathcal{F}_s\big]
= \mathrm{E}[W^2(t) \,|\, \mathcal{F}_s] = W^2(s) + t - s.
$$
Combining the three expectations gives
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, \mathcal{F}_s] = W^2(s) + t - s + W^2(s) + u - s + 2(W^2(s) + t - s) = 4W^2(s) + 3(t-s) + u - s.
$$

Another solution method: The transition PDF and appropriate integrals can be used as in part (a) above. We compute:
$$
\begin{aligned}
&\mathrm{E}[(W(t) + W(u))^2 \,|\, W(s) = x] \\
&\quad = \mathrm{E}[W^2(t) \,|\, W(s) = x] + \mathrm{E}[W^2(u) \,|\, W(s) = x] + 2\,\mathrm{E}[W(u)W(t) \,|\, W(s) = x] \\
&\quad = \int_{\mathbb{R}} f_{W(t)|W(s)}(y|x)\, y^2\, \mathrm{d}y + \int_{\mathbb{R}} f_{W(u)|W(s)}(y|x)\, y^2\, \mathrm{d}y
+ 2 \int_{\mathbb{R}} \int_{\mathbb{R}} f_{W(t),W(u)|W(s)}(y, z|x)\, yz\, \mathrm{d}y\, \mathrm{d}z \\
&\quad = \int_{\mathbb{R}} p_0(t-s; y-x)\, y^2\, \mathrm{d}y + \int_{\mathbb{R}} p_0(u-s; y-x)\, y^2\, \mathrm{d}y
+ 2 \int_{\mathbb{R}} \left( \int_{\mathbb{R}} p_0(u-t; z-y)\, z\, \mathrm{d}z \right) p_0(t-s; y-x)\, y\, \mathrm{d}y. \qquad (1)
\end{aligned}
$$
Note: The last integral in (1) is expressed in terms of a product of transition PDFs:
$$
\begin{aligned}
f_{W(t),W(u)|W(s)}(y, z|x)
&= \frac{f_{W(s),W(t),W(u)}(x, y, z)}{f_{W(s)}(x)}
= \frac{f_{W(s),W(t)}(x, y)}{f_{W(s)}(x)} \cdot \frac{f_{W(s),W(t),W(u)}(x, y, z)}{f_{W(s),W(t)}(x, y)} \\
&= f_{W(t)|W(s)}(y|x)\, f_{W(u)|W(s),W(t)}(z|x, y)
= f_{W(t)|W(s)}(y|x)\, f_{W(u)|W(t)}(z|y) \\
&= p(s, t; x, y)\, p(t, u; y, z)
= p_0(t-s; y-x)\, p_0(u-t; z-y).
\end{aligned}
$$
By changing variables, $v = z - y$, $z = v + y$, the inner integral in the double integral in (1) is simply given by $y$ (this is also recognized as the martingale property of std. BM):
$$
\int_{\mathbb{R}} p_0(u-t; z-y)\, z\, \mathrm{d}z
= \int_{\mathbb{R}} p_0(u-t; v)\, (v + y)\, \mathrm{d}v
= \underbrace{\int_{\mathbb{R}} p_0(u-t; v)\, v\, \mathrm{d}v}_{=0} + y \underbrace{\int_{\mathbb{R}} p_0(u-t; v)\, \mathrm{d}v}_{=1} = y.
$$
Hence, equation (1) reads:
$$
\begin{aligned}
\mathrm{E}[(W(t) + W(u))^2 \,|\, W(s) = x]
&= \int_{\mathbb{R}} p_0(t-s; y-x)\, y^2\, \mathrm{d}y + \int_{\mathbb{R}} p_0(u-s; y-x)\, y^2\, \mathrm{d}y + 2 \int_{\mathbb{R}} p_0(t-s; y-x)\, y^2\, \mathrm{d}y \\
&= 3 \int_{\mathbb{R}} p_0(t-s; y-x)\, y^2\, \mathrm{d}y + \int_{\mathbb{R}} p_0(u-s; y-x)\, y^2\, \mathrm{d}y.
\end{aligned}
$$
Finally, note that for any $t > 0$ we have:
$$
\int_{\mathbb{R}} p_0(t; y-x)\, y^2\, \mathrm{d}y
= \int_{\mathbb{R}} p_0(t; z)\, (z + x)^2\, \mathrm{d}z
= \underbrace{\int_{\mathbb{R}} p_0(t; z)\, z^2\, \mathrm{d}z}_{=t} + 2x \underbrace{\int_{\mathbb{R}} p_0(t; z)\, z\, \mathrm{d}z}_{=0} + x^2 \underbrace{\int_{\mathbb{R}} p_0(t; z)\, \mathrm{d}z}_{=1}
= t + x^2.
$$
Using this integral identity twice in the above expression gives
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, W(s) = x] = 3(t - s + x^2) + u - s + x^2 = 3(t-s) + u - s + 4x^2.
$$
Therefore, combining this with the Markov property,
$$
\mathrm{E}[(W(t) + W(u))^2 \,|\, \mathcal{F}_s] = \mathrm{E}[(W(t) + W(u))^2 \,|\, W(s)] = 3(t-s) + u - s + 4W^2(s).
$$
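A Monte Carlo check of the final formula, conditioning on a fixed value $W(s) = x$ (arbitrary illustrative values and seed; not part of the assignment solution):

```python
import numpy as np

# Check of Problem 6(c): E[(W(t) + W(u))^2 | W(s) = x] = 4x^2 + 3(t - s) + (u - s).
rng = np.random.default_rng(5)
x, s, t, u, n = 0.5, 0.4, 1.0, 1.7, 1_000_000

Wt = x + np.sqrt(t - s) * rng.standard_normal(n)        # W(t) given W(s) = x
Wu = Wt + np.sqrt(u - t) * rng.standard_normal(n)       # W(u) given W(t), independent increment

mc_estimate = np.mean((Wt + Wu)**2)
closed_form = 4 * x**2 + 3 * (t - s) + (u - s)
print(mc_estimate, closed_form)                          # the two values should be close
```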

(d) Since $s > t$, we have (see just above Eqn. (10.29) of the old book version, or above Eqn. (1.51) of the new version):
$$
W(t) \,|\, \{W(s) = x\} \overset{d}{=} \mathrm{Norm}\!\left( \frac{t}{s}x,\; \frac{t(s-t)}{s} \right). \qquad (*)
$$
That is, the conditional mean of $W(t)$, given $W(s) = x \in \mathbb{R}$, is
$$
\mathrm{E}[W(t) \,|\, W(s) = x] = \frac{t}{s}x \;\Longrightarrow\; \mathrm{E}[W(t) \,|\, W(s)] = \frac{t}{s}W(s).
$$
Again from (*), we obtain the conditional second moment as the sum of the conditional variance and the square of the conditional mean, i.e.,
$$
\mathrm{E}[W^2(t) \,|\, W(s) = x] = \mathrm{Var}(W(t) \,|\, W(s) = x) + (\mathrm{E}[W(t) \,|\, W(s) = x])^2
= \frac{t(s-t)}{s} + \left( \frac{t}{s}x \right)^2 = \frac{t(s-t)}{s} + \frac{t^2}{s^2}x^2,
$$
i.e., $\mathrm{E}[W^2(t) \,|\, W(s)] = \frac{t(s-t)}{s} + \frac{t^2}{s^2}W^2(s)$.

(e) One approach is to write $W(t) = W(s) + Y$, where $Y := W(t) - W(s)$ is independent of $W(s)$ (for $0 \le s \le t$). Hence,
$$
\begin{aligned}
\mathrm{Cov}(W^2(s), W^2(t)) &= \mathrm{Cov}(W^2(s), (W(s) + Y)^2) \\
&= \mathrm{Cov}(W^2(s), W^2(s)) + \underbrace{\mathrm{Cov}(W^2(s), Y^2)}_{=0} + 2\,\mathrm{Cov}(W^2(s), W(s)Y) \\
&= \mathrm{Var}(W^2(s)) + 2\underbrace{\left( \mathrm{E}[W^3(s)]\,\mathrm{E}[Y] - \mathrm{E}[W^2(s)]\,\mathrm{E}[W(s)]\,\mathrm{E}[Y] \right)}_{=0} \\
&= \mathrm{Var}(W^2(s)) = \mathrm{Var}(sZ^2) = s^2\,\mathrm{Var}(Z^2) = s^2 (3 - 1^2) \\
&= 2s^2.
\end{aligned}
$$
Another solution method: Recall that for any $\{X(t)\}_{t \ge 0}$ obeying the conditional martingale property, we showed that $\mathrm{Cov}(X(s), X(t)) = \mathrm{Var}(X(s))$, for $0 \le s \le t$. Hence, using the martingale defined by $X(t) := W^2(t) - t$ gives
$$
\begin{aligned}
\mathrm{Cov}(W^2(s), W^2(t)) &= \mathrm{Cov}(X(s) + s, X(t) + t) = \mathrm{Cov}(X(s), X(t)) \\
&= \mathrm{Var}(X(s)) = \mathrm{Var}(W^2(s) - s) = \mathrm{Var}(W^2(s)) = 2s^2.
\end{aligned}
$$
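A simulation check of the covariance formula (arbitrary $s$, $t$, seed and sample size; not part of the assignment solution):

```python
import numpy as np

# Check of Problem 6(e): Cov(W(s)^2, W(t)^2) = 2*s^2 for s <= t.
rng = np.random.default_rng(6)
s, t, n = 0.7, 1.5, 2_000_000

Ws = np.sqrt(s) * rng.standard_normal(n)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n)

sample_cov = np.cov(Ws**2, Wt**2)[0, 1]
print(sample_cov, 2 * s**2)     # the two values should be close
```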

7. Let $Y(t) := S^n(t)$, $t \ge 0$. The state space of the $Y$-process is $(0, \infty)$. Using the GBM representation for the $S$-process, $S(t) = S_0 e^{\mu t + \sigma W(t)}$, we reduce the transition CDF of the $Y$-process to that of standard Brownian motion:
$$
\begin{aligned}
P(s, t; x, y) &:= \mathrm{P}(Y(t) \le y \,|\, Y(s) = x) \\
&= \mathrm{P}\!\left( \tfrac{1}{n}\ln S(t) \le \tfrac{1}{n}\ln y \,\Big|\, \tfrac{1}{n}\ln S(s) = \tfrac{1}{n}\ln x \right) \\
&= \mathrm{P}\!\left( \ln S_0 + \mu t + \sigma W(t) \le \tfrac{1}{n}\ln y \,\Big|\, \ln S_0 + \mu s + \sigma W(s) = \tfrac{1}{n}\ln x \right) \\
&= \mathrm{P}\!\left( W(t) \le \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln y - \ln S_0 - \mu t \right) \,\Big|\, W(s) = \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln x - \ln S_0 - \mu s \right) \right) \\
&= \mathrm{P}\!\left( W(t) \le \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln \tfrac{y}{S_0^n} - \mu t \right) \,\Big|\, W(s) = \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln \tfrac{x}{S_0^n} - \mu s \right) \right) \\
&= N\!\left( \frac{ \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln \tfrac{y}{S_0^n} - \mu t \right) - \tfrac{1}{\sigma}\!\left( \tfrac{1}{n}\ln \tfrac{x}{S_0^n} - \mu s \right) }{ \sqrt{t-s} } \right) \\
&= N\!\left( \frac{ \ln \tfrac{y}{S_0^n} - n\mu t - \left( \ln \tfrac{x}{S_0^n} - n\mu s \right) }{ n\sigma\sqrt{t-s} } \right)
= N\!\left( \frac{ \ln \tfrac{y}{x} - n\mu(t-s) }{ n\sigma\sqrt{t-s} } \right).
\end{aligned}
$$

Differentiating w.r.t. $y$ gives the transition PDF:
$$
\begin{aligned}
p(s, t; x, y) = \frac{\partial}{\partial y} P(s, t; x, y)
&= \frac{\partial}{\partial y}\!\left( \frac{\ln \tfrac{y}{x} - n\mu(t-s)}{n\sigma\sqrt{t-s}} \right)
\mathrm{n}\!\left( \frac{\ln \tfrac{y}{x} - n\mu(t-s)}{n\sigma\sqrt{t-s}} \right) \\
&= \frac{1}{y n\sigma\sqrt{t-s}}\,
\mathrm{n}\!\left( \frac{\ln \tfrac{y}{x} - n\mu(t-s)}{n\sigma\sqrt{t-s}} \right) \\
&= \frac{1}{y n\sigma\sqrt{2\pi(t-s)}}
\exp\!\left( -\frac{\left( \ln \tfrac{y}{x} - n\mu(t-s) \right)^2}{2(n\sigma)^2(t-s)} \right).
\end{aligned}
$$

Remark: The above expression for the transition PDF is also readily derived from the known PDF for a GBM process. That is, $Y(t) := S^n(t)$, $t \ge 0$, is a GBM with drift and volatility parameters $n\mu$ and $n\sigma$, respectively, as seen from the exponential representation:
$$
Y(t) := S^n(t) = \left( S_0 e^{\mu t + \sigma W(t)} \right)^n = Y_0 e^{n\mu t + n\sigma W(t)},
$$
where $Y_0 = S_0^n$ is the initial value of the $Y$-process.
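A simulation sketch of this result (all parameter values below are arbitrary illustrative choices; not part of the assignment solution): simulate $S(t) = S_0 e^{\mu t + \sigma W(t)}$ and check that $\ln(Y(t)/Y(s))$ has the $\mathrm{Norm}(n\mu(t-s), (n\sigma)^2(t-s))$ law implied by the transition PDF above.

```python
import numpy as np

# Check of Problem 7: for Y(t) = S(t)^n with S(t) = S0*exp(mu*t + sigma*W(t)),
# ln(Y(t)/Y(s)) ~ Norm(n*mu*(t-s), (n*sigma)^2*(t-s)).
rng = np.random.default_rng(7)
S0, mu, sigma, n_pow = 1.5, 0.05, 0.3, 3
s, t, n_samples = 0.5, 1.25, 1_000_000

Ws = np.sqrt(s) * rng.standard_normal(n_samples)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n_samples)

Ys = (S0 * np.exp(mu * s + sigma * Ws)) ** n_pow
Yt = (S0 * np.exp(mu * t + sigma * Wt)) ** n_pow

log_ratio = np.log(Yt / Ys)
print(np.mean(log_ratio), n_pow * mu * (t - s))            # means should be close
print(np.var(log_ratio), (n_pow * sigma) ** 2 * (t - s))   # variances should be close
```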

8. (a) We can express the joint probability using the joint CDF of a bivariate standard normal pair of variables $\tilde{Z}_1 \equiv Z_1$, $\tilde{Z}_2 \equiv -Z_2$, where $Z_1 = W(s)/\sqrt{s}$, $Z_2 = W(t)/\sqrt{t}$, as:
$$
\mathrm{P}(W(s) < x, W(t) > y) = \mathrm{P}(W(s) < x, -W(t) < -y)
= \mathrm{P}\!\left( \tilde{Z}_1 \le \frac{x}{\sqrt{s}}, \tilde{Z}_2 \le -\frac{y}{\sqrt{t}} \right)
= N_2\!\left( \frac{x}{\sqrt{s}}, -\frac{y}{\sqrt{t}}; -\sqrt{\frac{s}{t}} \right).
$$
The correlation coefficient in $N_2$ follows from:
$$
\mathrm{Cov}(\tilde{Z}_1, \tilde{Z}_2) = \mathrm{Cov}(Z_1, -Z_2) = -\mathrm{Cov}(Z_1, Z_2) = -\frac{\mathrm{Cov}(W(s), W(t))}{\sqrt{st}} = -\frac{s}{\sqrt{st}} = -\sqrt{\frac{s}{t}}.
$$
Alternative solution: By the total law of probability:
$$
\mathrm{P}(W(s) < x, W(t) > y) + \mathrm{P}(W(s) < x, W(t) < y) = \mathrm{P}(W(s) < x),
$$
and using the joint CDF of the pair $(W(s), W(t))$:
$$
\mathrm{P}(W(s) < x, W(t) > y) = \mathrm{P}(W(s) < x) - \mathrm{P}(W(s) < x, W(t) < y)
= N\!\left( \frac{x}{\sqrt{s}} \right) - N_2\!\left( \frac{x}{\sqrt{s}}, \frac{y}{\sqrt{t}}; \sqrt{\frac{s}{t}} \right).
$$
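The $N_2$ expression can be checked numerically against a Monte Carlo estimate (arbitrary illustrative values of $s$, $t$, $x$, $y$ and seed; not part of the assignment solution). Here `scipy.stats.multivariate_normal` plays the role of the bivariate standard normal CDF $N_2$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Check of Problem 8(a): P(W(s) < x, W(t) > y) = N2(x/sqrt(s), -y/sqrt(t); -sqrt(s/t)).
rng = np.random.default_rng(8)
s, t, x, y, n = 0.5, 2.0, 0.4, -0.3, 1_000_000

Ws = np.sqrt(s) * rng.standard_normal(n)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n)
mc_estimate = np.mean((Ws < x) & (Wt > y))

rho = -np.sqrt(s / t)
N2 = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
closed_form = N2.cdf([x / np.sqrt(s), -y / np.sqrt(t)])

print(mc_estimate, closed_form)   # the two values should be close
```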

(b) Re-expressing the joint probability gives
$$
\mathrm{P}(W(s) < W(t) < x) = \mathrm{P}(-(W(t) - W(s)) < 0, W(t) < x).
$$
Now we define the standard normal r.v.'s:
$$
Z_1 := -\frac{W(t) - W(s)}{\sqrt{t-s}}, \qquad Z_2 := \frac{W(t)}{\sqrt{t}}.
$$
We have
$$
\begin{aligned}
\mathrm{Corr}(Z_1, Z_2) \equiv \mathrm{Cov}(Z_1, Z_2)
&= -\frac{1}{\sqrt{t}\sqrt{t-s}}\,\mathrm{Cov}(W(t) - W(s), W(t))
= -\frac{1}{\sqrt{t}\sqrt{t-s}}\,\mathrm{Var}(W(t) - W(s)) \\
&= -\frac{t-s}{\sqrt{t}\sqrt{t-s}}
= -\sqrt{\frac{t-s}{t}} = -\sqrt{1 - \frac{s}{t}}.
\end{aligned}
$$
Hence,
$$
\mathrm{P}(W(s) < W(t) < x) = \mathrm{P}\!\left( Z_1 \le 0, Z_2 \le \frac{x}{\sqrt{t}} \right)
= N_2\!\left( 0, \frac{x}{\sqrt{t}}; -\sqrt{1 - \frac{s}{t}} \right).
$$

(c) We have the 3 × 1 vector having the trivariate normal distribution:
$$
\mathbf{X} := \begin{bmatrix} W(s) \\ W(t) \\ W(T) \end{bmatrix}
\sim \mathrm{Norm}_3\!\left( \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\begin{bmatrix} s & s & s \\ s & t & t \\ s & t & T \end{bmatrix} \right).
$$
We will employ the conditioning identity in Eq. (10.4) of Chapter 10 in the book. We identify the roles of the partitioned vectors (note: $\mathbf{X}_2$ is simply a scalar)
$$
\mathbf{X}_1 := \begin{bmatrix} W(s) \\ W(t) \end{bmatrix}, \qquad \mathbf{X}_2 := W(T).
$$
The respective mean vectors are:
$$
\boldsymbol{\mu}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \mu_2 = 0.
$$
The sub-covariance matrices (of the above 3 × 3) are:
$$
\mathbf{C}_{11} = \begin{bmatrix} s & s \\ s & t \end{bmatrix}, \quad
\mathbf{C}_{12} = \begin{bmatrix} s \\ t \end{bmatrix}, \quad
\mathbf{C}_{21} = [\,s \;\; t\,], \quad
\mathbf{C}_{22} = T.
$$
We are conditioning on $\mathbf{X}_2 \equiv W(T) = b$ (i.e., the scalar $x_2 = b$). Within Eq. (10.4) we have:
$$
\boldsymbol{\mu}_1 + \mathbf{C}_{12}\mathbf{C}_{22}^{-1}(x_2 - \mu_2)
= \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix} \frac{1}{T}(b - 0)
= \begin{bmatrix} bs/T \\ bt/T \end{bmatrix}
$$
and
$$
\mathbf{C}_{11} - \mathbf{C}_{12}\mathbf{C}_{22}^{-1}\mathbf{C}_{21}
= \begin{bmatrix} s & s \\ s & t \end{bmatrix} - \begin{bmatrix} s \\ t \end{bmatrix} \frac{1}{T} [\,s \;\; t\,]
= \begin{bmatrix} s(1 - s/T) & s(1 - t/T) \\ s(1 - t/T) & t(1 - t/T) \end{bmatrix}.
$$
Hence, $\mathbf{X}_1 | \{\mathbf{X}_2 = x_2\}$ is given by
$$
\begin{bmatrix} W(s) \\ W(t) \end{bmatrix} \Big|\, \{W(T) = b\}
\sim \mathrm{Norm}_2\!\left( \begin{bmatrix} bs/T \\ bt/T \end{bmatrix},
\begin{bmatrix} s(1 - s/T) & s(1 - t/T) \\ s(1 - t/T) & t(1 - t/T) \end{bmatrix} \right).
$$
Let us denote $\begin{bmatrix} \tilde{X}_1 \\ \tilde{X}_2 \end{bmatrix} \equiv \begin{bmatrix} W(s) \\ W(t) \end{bmatrix} \Big|\, \{W(T) = b\}$. Then, according to the above, we can express the random variables in terms of standard normals:
$$
\tilde{X}_1 = bs/T + \sqrt{s(1 - s/T)}\, Z_1, \qquad
\tilde{X}_2 = bt/T + \sqrt{t(1 - t/T)}\, Z_2,
$$
where $\mathrm{Cov}(\tilde{X}_1, \tilde{X}_2) = s(1 - t/T)$. The standard normals are correlated as:
$$
\rho = \mathrm{Cov}(Z_1, Z_2) = \frac{\mathrm{Cov}(\tilde{X}_1, \tilde{X}_2)}{\sqrt{\mathrm{Var}(\tilde{X}_1)}\sqrt{\mathrm{Var}(\tilde{X}_2)}}
= \frac{s(1 - t/T)}{\sqrt{s(1 - s/T)}\sqrt{t(1 - t/T)}}
= \sqrt{\frac{s(1 - t/T)}{t(1 - s/T)}}.
$$
Finally,
$$
\begin{aligned}
\mathrm{P}(W(s) \le x, W(t) \le y \,|\, W(T) = b) &= \mathrm{P}(\tilde{X}_1 \le x, \tilde{X}_2 \le y)
= \mathrm{P}\!\left( Z_1 \le \frac{x - bs/T}{\sqrt{s(1 - s/T)}}, Z_2 \le \frac{y - bt/T}{\sqrt{t(1 - t/T)}} \right) \\
&= N_2\!\left( \frac{x - bs/T}{\sqrt{s(1 - s/T)}}, \frac{y - bt/T}{\sqrt{t(1 - t/T)}}; \rho \right)
\equiv N_2\!\left( \frac{x - bs/T}{\sqrt{s(1 - s/T)}}, \frac{y - bt/T}{\sqrt{t(1 - t/T)}}; \sqrt{\frac{s(1 - t/T)}{t(1 - s/T)}} \right).
\end{aligned}
$$

(d) We have $W(t) - W(s) \overset{d}{=} \sqrt{t-s}\,Z_1$ and $W(T) - W(t) \overset{d}{=} \sqrt{T-t}\,Z_2$, where $Z_1$ and $Z_2$ are i.i.d. standard normal r.v.'s since the BM increments are nonoverlapping in time. Hence,
$$
\mathrm{P}(W(T) - W(t) > W(t) - W(s) + a) = \mathrm{P}\!\left( \sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 > a \right).
$$
Now,
$$
\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1
\overset{d}{=} \mathrm{Norm}\!\left( 0, \mathrm{Var}\!\left( \sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 \right) \right)
\overset{d}{=} \mathrm{Norm}(0, (T-t)\,\mathrm{Var}(Z_2) + (t-s)\,\mathrm{Var}(Z_1))
\overset{d}{=} \mathrm{Norm}(0, T-s).
$$
Therefore, $\sqrt{T-t}\,Z_2 - \sqrt{t-s}\,Z_1 \overset{d}{=} \sqrt{T-s}\,Z$, where $Z$ is a standard normal r.v. Finally, we have
$$
\mathrm{P}(W(T) - W(t) > W(t) - W(s) + a) = \mathrm{P}\!\left( \sqrt{T-s}\,Z > a \right) = N\!\left( -\frac{a}{\sqrt{T-s}} \right).
$$
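A short Monte Carlo check of part (d) (arbitrary illustrative $s$, $t$, $T$, $a$ and seed; not part of the assignment solution):

```python
import numpy as np
from scipy.stats import norm

# Check of Problem 8(d): P(W(T) - W(t) > W(t) - W(s) + a) = N(-a / sqrt(T - s)).
rng = np.random.default_rng(9)
s, t, T, a, n = 0.5, 1.0, 2.5, 0.3, 1_000_000

inc1 = np.sqrt(t - s) * rng.standard_normal(n)   # W(t) - W(s)
inc2 = np.sqrt(T - t) * rng.standard_normal(n)   # W(T) - W(t), independent of inc1

mc_estimate = np.mean(inc2 > inc1 + a)
closed_form = norm.cdf(-a / np.sqrt(T - s))
print(mc_estimate, closed_form)                  # the two values should be close
```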

(e) Recall: given a joint CDF $F(x, y) \equiv F_{X,Y}(x, y)$ of random variables $X, Y$, we have
$$
\mathrm{P}(a < X < b, c < Y < d) = F(b, d) - F(a, d) - F(b, c) + F(a, c).
$$
Hence, defining $Z_1 := \frac{W(s)}{\sqrt{s}}$ and $Z_2 := \frac{W(t)}{\sqrt{t}}$, where $\rho = \mathrm{Cov}(Z_1, Z_2) = \frac{1}{\sqrt{st}}\mathrm{Cov}(W(s), W(t)) = \frac{s}{\sqrt{st}} = \sqrt{\frac{s}{t}}$, gives:
$$
\begin{aligned}
\mathrm{P}(a < W(s) < b, c < W(t) < d)
&= \mathrm{P}\!\left( \frac{a}{\sqrt{s}} < Z_1 < \frac{b}{\sqrt{s}}, \frac{c}{\sqrt{t}} < Z_2 < \frac{d}{\sqrt{t}} \right) \\
&= \mathrm{P}\!\left( Z_1 < \frac{b}{\sqrt{s}}, Z_2 < \frac{d}{\sqrt{t}} \right) - \mathrm{P}\!\left( Z_1 < \frac{a}{\sqrt{s}}, Z_2 < \frac{d}{\sqrt{t}} \right)
- \mathrm{P}\!\left( Z_1 < \frac{b}{\sqrt{s}}, Z_2 < \frac{c}{\sqrt{t}} \right) + \mathrm{P}\!\left( Z_1 < \frac{a}{\sqrt{s}}, Z_2 < \frac{c}{\sqrt{t}} \right) \\
&= N_2\!\left( \frac{b}{\sqrt{s}}, \frac{d}{\sqrt{t}}; \sqrt{\frac{s}{t}} \right) - N_2\!\left( \frac{a}{\sqrt{s}}, \frac{d}{\sqrt{t}}; \sqrt{\frac{s}{t}} \right)
- N_2\!\left( \frac{b}{\sqrt{s}}, \frac{c}{\sqrt{t}}; \sqrt{\frac{s}{t}} \right) + N_2\!\left( \frac{a}{\sqrt{s}}, \frac{c}{\sqrt{t}}; \sqrt{\frac{s}{t}} \right).
\end{aligned}
$$

9. (a) Note the inverse relation: $\sinh^{-1} y = \ln\!\left( y + \sqrt{y^2 + 1} \right)$, $y \in \mathbb{R}$. The mapping $F(x) := \sinh(x)$ is strictly increasing. The transition CDF is then:
$$
\begin{aligned}
P(s, t; x, y) &:= \mathrm{P}(X(t) \le y \,|\, X(s) = x) \\
&= \mathrm{P}\!\left( \sinh(\mu t + \sigma W(t)) \le y \,\big|\, \sinh(\mu s + \sigma W(s)) = x \right) \\
&= \mathrm{P}\!\left( W(t) \le \frac{\sinh^{-1} y - \mu t}{\sigma} \,\Big|\, W(s) = \frac{\sinh^{-1} x - \mu s}{\sigma} \right) \\
&= N\!\left( \frac{\sinh^{-1} y - \sinh^{-1} x - \mu(t-s)}{\sigma\sqrt{t-s}} \right) \\
&= N\!\left( \frac{1}{\sigma\sqrt{t-s}} \ln\frac{y + \sqrt{y^2 + 1}}{x + \sqrt{x^2 + 1}} - \frac{\mu}{\sigma}\sqrt{t-s} \right).
\end{aligned}
$$

(b) Differentiating w.r.t. $y$ gives the transition PDF:
$$
\begin{aligned}
p(s, t; x, y) = \frac{\partial}{\partial y} P(s, t; x, y)
&= \frac{1}{\sigma\sqrt{t-s}} \frac{\partial}{\partial y}\!\left[ \ln\!\left( y + \sqrt{y^2 + 1} \right) \right]
\cdot \mathrm{n}\!\left( \frac{1}{\sigma\sqrt{t-s}} \ln\frac{y + \sqrt{y^2 + 1}}{x + \sqrt{x^2 + 1}} - \frac{\mu}{\sigma}\sqrt{t-s} \right) \\
&= \frac{y + \sqrt{y^2 + 1}}{1 + y\left( y + \sqrt{y^2 + 1} \right)} \cdot \frac{1}{\sigma\sqrt{t-s}}\,
\mathrm{n}\!\left( \frac{1}{\sigma\sqrt{t-s}} \ln\frac{y + \sqrt{y^2 + 1}}{x + \sqrt{x^2 + 1}} - \frac{\mu}{\sigma}\sqrt{t-s} \right),
\end{aligned}
$$
where $N'(z) \equiv \mathrm{n}(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}$. Note: the logarithmic derivative term can also be expressed as $\frac{1 + y/\sqrt{y^2+1}}{y + \sqrt{y^2+1}}$, which simplifies to $\frac{1}{\sqrt{y^2+1}}$.

(c) Using $\sinh x = \frac{1}{2}(e^x - e^{-x})$, and the MGF of a standard normal $Z$, with $W(t) \overset{d}{=} \sqrt{t}\,Z$, we have:
$$
m_X(t) = \mathrm{E}[\sinh(\mu t + \sigma W(t))]
= \frac{1}{2}\left( e^{\mu t}\,\mathrm{E}[e^{\sigma\sqrt{t}Z}] - e^{-\mu t}\,\mathrm{E}[e^{-\sigma\sqrt{t}Z}] \right)
= \frac{e^{\mu t} - e^{-\mu t}}{2}\, e^{\sigma^2 t/2}
= \sinh(\mu t)\, e^{\sigma^2 t/2}.
$$
[Remark: the MGF, $\mathrm{E}[e^{uZ}] = e^{u^2/2}$, is an even function of $u$.]
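A quick Monte Carlo check of the mean function (arbitrary illustrative $\mu$, $\sigma$, $t$ and seed; not part of the assignment solution):

```python
import numpy as np

# Check of Problem 9(c): E[sinh(mu*t + sigma*W(t))] = sinh(mu*t) * exp(sigma^2 * t / 2).
rng = np.random.default_rng(10)
mu, sigma, t, n = 0.2, 0.5, 1.5, 2_000_000

Wt = np.sqrt(t) * rng.standard_normal(n)
mc_estimate = np.mean(np.sinh(mu * t + sigma * Wt))
closed_form = np.sinh(mu * t) * np.exp(0.5 * sigma**2 * t)
print(mc_estimate, closed_form)   # the two values should be close
```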

For the covariance, we have $c_X(s, t) = \mathrm{Cov}(X(s), X(t)) = \mathrm{E}[X(s)X(t)] - m_X(s)m_X(t)$, where the mean function $m_X$ is given above. For $s, t \ge 0$:
$$
\begin{aligned}
\mathrm{E}[X(s)X(t)] &= \mathrm{E}[\sinh(\mu s + \sigma W(s)) \sinh(\mu t + \sigma W(t))] \\
&= \frac{1}{4}\Big( e^{\mu(t+s)}\,\mathrm{E}[e^{\sigma(W(s)+W(t))}] + e^{-\mu(t+s)}\,\mathrm{E}[e^{-\sigma(W(s)+W(t))}] \\
&\qquad\quad - e^{\mu(t-s)}\,\mathrm{E}[e^{\sigma(W(t)-W(s))}] - e^{-\mu(t-s)}\,\mathrm{E}[e^{-\sigma(W(t)-W(s))}] \Big).
\end{aligned}
$$
These expectations are computed by the MGF of a standard normal $Z$. That is,
$$
\mathrm{Var}(W(s) + W(t)) = \mathrm{Var}(W(s)) + \mathrm{Var}(W(t)) + 2\,\mathrm{Cov}(W(s), W(t)) = s + t + 2(s \wedge t),
$$
i.e., $W(s) + W(t) \overset{d}{=} \mathrm{Norm}(0, s + t + 2(s \wedge t)) \Longrightarrow W(s) + W(t) \overset{d}{=} \sqrt{s + t + 2(s \wedge t)}\,Z$. Moreover, $W(t) - W(s) \overset{d}{=} \sqrt{|t - s|}\,Z$.
Hence,
$$
\mathrm{E}[e^{\pm\sigma(W(s)+W(t))}] = \mathrm{E}[e^{\pm\sigma\sqrt{s + t + 2(s \wedge t)}\,Z}] = e^{\frac{\sigma^2}{2}(s + t + 2(s \wedge t))}
$$
and
$$
\mathrm{E}[e^{\pm\sigma(W(t)-W(s))}] = e^{\frac{\sigma^2}{2}|t - s|}.
$$
Substituting these expectations within the above expression gives:
$$
\mathrm{E}[X(s)X(t)] = \frac{e^{\mu(t+s)} + e^{-\mu(t+s)}}{4}\, e^{\frac{\sigma^2}{2}(s + t + 2(s \wedge t))} - \frac{e^{\mu(t-s)} + e^{-\mu(t-s)}}{4}\, e^{\frac{\sigma^2}{2}|t - s|}
= \frac{1}{2}\cosh(\mu(t+s))\, e^{\frac{\sigma^2}{2}(s + t + 2(s \wedge t))} - \frac{1}{2}\cosh(\mu(t-s))\, e^{\frac{\sigma^2}{2}|t - s|}.
$$
Finally,
$$
c_X(s, t) = \mathrm{E}[X(s)X(t)] - m_X(s)m_X(t)
= \frac{1}{2}\cosh(\mu(t+s))\, e^{\frac{\sigma^2}{2}(s + t + 2(s \wedge t))} - \frac{1}{2}\cosh(\mu(t-s))\, e^{\frac{\sigma^2}{2}|t - s|} - \sinh(\mu s)\sinh(\mu t)\, e^{\frac{\sigma^2}{2}(s + t)}.
$$