EJP v13-502

Electronic Journal of Probability

Journal URL: http://www.math.washington.edu/~ejpecp/
Abstract
The main goal of this paper is to generalize the results of Fournié et al. [8] to markets
generated by Lévy processes. To this end we extend the theory of Malliavin calculus to
provide the tools that are necessary for the calculation of the sensitivities, such as
differentiability results for the solution of a stochastic differential equation.
Key words: Lévy processes, Malliavin Calculus, Monte Carlo methods, Greeks.
∗ Strasse des 17. Juni 136, D-10623 Berlin, Germany, petrou@math.tu-berlin.de; research supported by the DFG IRTG 1339 "Stochastic Processes and Complex Systems".
1 Introduction
In recent years there has been an increasing interest in Malliavin Calculus and its applications to
finance. Such applications were first presented in the seminal paper of Fournié et al. [8]. In this
paper the authors are able to calculate the Greeks using well known results of Malliavin Calculus
on Wiener spaces, such as the chain rule and the integration by parts formula. Their method
produces better convergence results than other established methods, especially for discontinuous
payoff functions.
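To make the point concrete for readers new to the technique, the following sketch (our own illustration in the Black–Scholes model, not code from the paper) estimates the delta of a digital option with the Malliavin weight W_T/(S_0 σ T), which avoids differentiating the discontinuous payoff; all parameter values are arbitrary:

```python
import numpy as np
from math import log, sqrt, exp, pi

# Digital-option delta in Black-Scholes via the Malliavin weight
# W_T / (S0 * sigma * T); parameters are illustrative.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 400_000

W = rng.standard_normal(n) * sqrt(T)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W)
disc = exp(-r * T)

# Malliavin estimator: discounted payoff times weight,
# no pathwise differentiation of the indicator.
delta_malliavin = np.mean(disc * (ST > K) * W / (S0 * sigma * T))

# Closed-form digital delta for comparison.
d2 = (log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_exact = disc * exp(-d2**2 / 2) / sqrt(2 * pi) / (S0 * sigma * sqrt(T))
```

A bumped finite-difference estimator of the same delta would need a vanishing bump size and suffers from the discontinuity of the payoff; the weight-based estimator does not.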
There have been a number of papers trying to produce similar results for markets generated by
pure jump and jump-diffusion processes. For instance El-Khatib and Privault [6] have consid-
ered a market generated by Poisson processes. In Forster et al. [7] the authors work in a space
generated by independent Wiener and Poisson processes; by conditioning on the jump part,
they are able to calculate the Greeks using classical Malliavin calculus. Davis and Johansson [4]
produce the Greeks for simple jump-diffusion processes which satisfy a separability condition.
Each of the previous approaches has its advantages in specific cases. However, each can only
treat a subclass of Lévy processes.
This paper produces a global treatment for markets generated by Lévy processes and achieves a
similar formulation of the sensitivities as in Fournié et al. [8]. We rely on Malliavin calculus for
discontinuous processes and expand the theory to fulfill our needs. Malliavin calculus for dis-
continuous processes has been widely studied as an individual subject, see for instance Bichteler
et al. [3] for an overview of early works, Di Nunno et al. [5], Løkka [12] and Nualart and Vives
[14] for pure jump Lévy processes, Solé et al. [16] for general Lévy processes and Yablonski [17]
for processes with independent increments. It has also been studied in the sphere of finance, see
for instance Benth et al. [2] and Léon et al. [11]. In our case we focus on square integrable Lévy
processes.
The starting point of our approach is the fact that Lévy processes can be decomposed into a
Wiener process and a Poisson random measure part. Hence we are able to use the results of Itô
[9] on the chaos expansion property. In this way every square integrable random variable in our
space can be represented as an infinite sum of integrals with respect to the Wiener process and
the Poisson random measure. Having the chaos expansion we are able to introduce operators for
the Wiener processes and Poisson random measure. With an application to finance in mind, the
Wiener operator should preserve the chain rule property. Such a Wiener operator was introduced
in Yablonski [17] for the more general class of processes with independent increments, using the
classical Malliavin definition. In our case we adopt the definition of directional derivative first
introduced in Nualart and Vives [14] for pure jump processes and then used in Léon et al. [11]
and Solé et al.[16]. The chain rule formulation that is achieved for simple Lévy processes in
Léon et al. [11], and for more general processes in Solé et al. [16], is only applicable to separable
random variables. As Davis and Johansson [4] have shown, this form of chain rule restricts the
scope of applications, for instance it excludes stochastic volatility models that allow jumps in the
volatility. We are able to bypass the separability condition, by generalizing the chain rule in this
setting. Following this, we define the directional Skorohod integrals, conduct a study of their
properties and give a proof of the integration by parts formula. We conclude our theoretical
part with the main result of the paper, the study of differentiability for the solution of a Lévy
stochastic differential equation.
With the help of these tools we produce formulas for the sensitivities that have the same sim-
plicity and easy implementation as the ones in Fournié et al [8].
The paper is organized as follows. In Section 2 we summarize results of Malliavin calculus, define the two directional derivatives, in the Wiener and the Poisson random measure directions, prove their equivalence to the classical Malliavin derivative and to the difference operator in Løkka [12] respectively, and prove the general chain rule. In Section 3 we define the adjoints of the directional derivatives, the Skorohod integrals, and prove an integration by parts formula. In Section
4 we prove the differentiability of a solution of a Lévy stochastic differential equation and get an
explicit form for the Wiener directional derivative. Section 5 deals with the calculation of the
sensitivities using these results. The paper concludes in Section 6, with the implementation of
the results and some numerical experiments.
The process Z = {Zt}t∈[0,T] is a square integrable Lévy process with Lévy–Itô decomposition

$$Z_t = at + \sigma W_t + \int_0^t\!\int_{\mathbb{R}_0} z\,\tilde\mu(dz,ds),$$

where {Wt}t∈[0,T] is the standard Wiener process and µ(·, ·) is a Poisson random measure, independent of the Wiener process, defined by

$$\mu(A,t) = \sum_{s\le t} 1_A(\Delta Z_s), \qquad A \in \mathcal{B}(\mathbb{R}_0).$$

The compensator of the jump measure µ is denoted by π(dz, dt) = ν(dz)λ(dt), where ν(·) is the Lévy measure of the process; for more details see [1]. Since Z is square integrable, the Lévy measure satisfies $\int_{\mathbb{R}_0} z^2\,\nu(dz) < \infty$. Finally, σ is a positive constant, λ the Lebesgue measure and R₀ = R \ {0}. In the following, µ̃(ds, dz) = µ(ds, dz) − π(ds, dz) will
represent the compensated random measure.
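For intuition, here is a small numerical illustration (ours, using an arbitrary compound Poisson example rather than anything from the paper) of the compensation: µ([0,T] × A) is Poisson distributed with mean π([0,T] × A), so the compensated measure µ̃ has mean zero:

```python
import numpy as np
from math import erf, sqrt

# mu([0,T] x A) counts the jumps with size in A; for a compound Poisson
# process with intensity lam and N(0,1) marks its mean is
# pi([0,T] x A) = lam * T * P(xi in A), so mu - pi is centred.
rng = np.random.default_rng(1)
lam, T = 3.0, 2.0
A = (0.5, 1.5)                       # a Borel set bounded away from 0

def mu_A():
    sizes = rng.standard_normal(rng.poisson(lam * T))
    return np.count_nonzero((sizes > A[0]) & (sizes < A[1]))

samples = np.array([mu_A() for _ in range(20_000)])
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
compensator = lam * T * (Phi(A[1]) - Phi(A[0]))
mu_tilde_mean = samples.mean() - compensator
```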
In order to simplify the presentation, we introduce the following unifying notation for the Wiener
process and the Poisson random measure:

$$U_0 = [0,T] \quad\text{and}\quad U_1 = [0,T]\times\mathbb{R}_0,$$
$$dQ_0(\cdot) = dW_\cdot \quad\text{and}\quad Q_1 = \tilde\mu(\cdot,\cdot),$$
$$d\langle Q_0\rangle = d\lambda \quad\text{and}\quad d\langle Q_1\rangle = d\lambda\times d\nu,$$
$$u_k^l = \begin{cases} t_k, & l = 0,\\ (t_k, x), & l = 1,\end{cases}$$

together with the expanded simplex

$$G_{j_1,\dots,j_n} = \big\{(u_1^{j_1},\dots,u_n^{j_n}) \in U_{j_1}\times\dots\times U_{j_n} : 0 < t_1 < \dots < t_n < T\big\}$$

for j₁, …, jₙ = 0, 1.

Finally, $J_n^{(j_1,\dots,j_n)}(g_{j_1,\dots,j_n})$ will denote the (n-fold) iterated integral of the form

$$J_n^{(j_1,\dots,j_n)}(g_{j_1,\dots,j_n}) = \int_{G_{j_1,\dots,j_n}} g_{j_1,\dots,j_n}(u_1^{j_1},\dots,u_n^{j_n})\, Q_{j_1}(du_1^{j_1})\cdots Q_{j_n}(du_n^{j_n}), \tag{1}$$

where $g_{j_1,\dots,j_n}$ is a deterministic function in $L^2(G_{j_1,\dots,j_n}) = L^2\big(G_{j_1,\dots,j_n}, \bigotimes_{i=1}^n d\langle Q_{j_i}\rangle\big)$.
The theorem that follows is the chaos expansion for processes in the Lévy space L2 (Ω). It states
that every random variable F in this space can be uniquely represented as an infinite sum of
integrals of the form (1). This can be considered as a reformulation of the results in [9], or an
expansion of the results in [12].
Theorem 1. For every random variable F ∈ L²(F_T, P) there exists a unique sequence $\{g_{j_1,\dots,j_n}\}_{n=0}^\infty$, j₁, …, jₙ = 0, 1, with $g_{j_1,\dots,j_n} \in L^2(G_{j_1,\dots,j_n})$, such that

$$F = \sum_{n=0}^{\infty}\ \sum_{j_1,\dots,j_n = 0,1} J_n^{(j_1,\dots,j_n)}(g_{j_1,\dots,j_n}). \tag{2}$$
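As a concrete (and standard) illustration of (2), not taken from the paper: for the purely Wiener functional F = W_T², the expansion terminates after two terms,

```latex
W_T^2 \;=\; T \;+\; 2\int_0^T\!\!\int_0^{t_2} dW_{t_1}\, dW_{t_2}
      \;=\; E[W_T^2] \;+\; J_2^{(0,0)}(g_{0,0}), \qquad g_{0,0}\equiv 2,
```

so all kernels involving a direction with jᵢ = 1 vanish; functionals of the jump part contribute kernels in the directions with jᵢ = 1 instead.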
Given the chaos expansion we are able to define directional derivatives in the Wiener process
and the Poisson random measure. For this we need to introduce the following modification to
the expanded simplex
$$G^k_{j_1,\dots,j_n}(t) = \big\{(u_1^{j_1},\dots,\hat u_k^{j_k},\dots,u_n^{j_n}) \in G_{j_1,\dots,j_{k-1},j_{k+1},\dots,j_n} : 0 < t_1 < \dots < t_{k-1} < t < t_{k+1} < \dots < t_n < T\big\},$$

where $\hat u$ means that we omit the element u. Note that $G^k_{j_1,\dots,j_n}(t)\cap G^l_{j_1,\dots,j_n}(t) = \emptyset$ if k ≠ l. The
definition of the directional derivatives follows the one in [11].
is called the derivative of $J_n^{(j_1,\dots,j_n)}(g_{j_1,\dots,j_n})$ in the l-th direction.
We can show that this definition reduces to the Malliavin derivative if we take
jᵢ = 0 for all i = 1, …, n, and to the definition of [12] if jᵢ = 1 for all i = 1, …, n.
From the above we can reach the following definition for the space of random variables differentiable in the l-th direction, which we denote by $\mathbb{D}^{(l)}$, and its respective derivative $D^{(l)}$:
Definition 2. 1. Let $\mathbb{D}^{(l)}$ be the space of random variables in L²(Ω) that are differentiable in the l-th direction:

$$\mathbb{D}^{(l)} = \Big\{F \in L^2(\Omega),\ F = E[F] + \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1} J_n^{(j_1,\dots,j_n)}(g_{j_1,\dots,j_n}) :\ \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=l\}} \int_{U_{j_i}} \|g_{j_1,\dots,j_n}(\cdot,u^l,\cdot)\|^2_{L^2(G^i_{j_1,\dots,j_n})}\, d\langle Q_l\rangle < \infty\Big\}.$$
From the definition of the domain of the l-directional derivative, all elements of L²(Ω) with
finite chaos expansion are included in $\mathbb{D}^{(l)}$. Hence, we can conclude that $\mathbb{D}^{(l)}$ is dense in L²(Ω).
In order to study the relation between the classical Malliavin derivative, see [13], the difference
operator in [12] and the directional derivatives, we need to work on the canonical space.
The canonical Brownian motion is defined on the probability space (Ω_W, F_W, P_W), where Ω_W = C₀([0,1]) is the space of continuous functions on [0,1] vanishing at time zero, F_W is the Borel σ-algebra and P_W is the probability measure on F_W such that B_t(ω) := ω(t) is a Brownian motion.
Respectively, the triplet (Ω_N, F_N, P_N) denotes the space on which the canonical Poisson random measure is defined. We denote by Ω_N the space of integer valued measures ω′ on [0,1] × R₀ such that ω′(t, u) ≤ 1 for any point (t, u) ∈ [0,1] × R₀, and ω′(A × B) < ∞ when π(A × B) = λ(A)ν(B) < ∞, where ν is the σ-finite measure on R₀. The canonical random measure on Ω_N is defined as
µ(ω 0 , A × B) := ω 0 (A × B).
With PN we denote the probability measure on FN under which µ is a Poisson random measure
with intensity π. Hence, µ(A × B) is a Poisson variable with mean π(A × B), and the variables
µ(Ai × Bj ) are independent when Ai × Bj are disjoint.
In our case we have a combination of the two spaces above. We denote by (Ω, F, {F_t}_{t∈[0,1]}, P) the joint probability space, where Ω := Ω_W ⊗ Ω_N is equipped with the probability measure P := P_W ⊗ P_N and F_t := F_t^W ⊗ F_t^N.
Then there exists an isometry between L²(Ω) and L²(Ω_W; L²(Ω_N)), where

$$L^2(\Omega_W; L^2(\Omega_N)) = \Big\{F : \Omega_W \to L^2(\Omega_N) : \int_{\Omega_W} \|F(\omega)\|^2_{L^2(\Omega_N)}\, dP_W(\omega) < \infty\Big\}.$$
Therefore we can consider every F ∈ L2 (ΩW ; L2 (ΩN )) as a functional F : ω → F (ω, ω 0 ).
This implies that L2 (ΩW ; L2 (ΩN )) is a Wiener space on which we can define the classical
Malliavin derivative D. The derivative D is a closed operator from L2 (ΩW ; L2 (ΩN )) into
L²(Ω_W × [0,1]; L²(Ω_N)). We denote by D^{1,2} the domain of the classical Malliavin derivative. If F ∈ D^{1,2}, then
In the same way the difference operator D̃ defined in [12] with domain D̃1,2 is closed from
L2 (ΩN ; L2 (ΩW )) into L2 (ΩN × [0, 1]; L2 (ΩW )). If F ∈ D̃1,2 , then
Proposition 1. On the space $\mathbb{D}^{(0)}$ the directional derivative $D^{(0)}$ is equivalent to the classical Malliavin derivative D, i.e. D = $D^{(0)}$. Respectively, on $\mathbb{D}^{(1)}$ the directional derivative $D^{(1)}$ is equivalent to the difference operator D̃, i.e. D̃ = $D^{(1)}$.
Proposition 2. 1. Let F = f(Z, Z′) ∈ L²(Ω), where Z depends only on the Wiener part with Z ∈ $\mathbb{D}^{(0)}$, Z′ depends only on the Poisson random measure, and f(x, y) is a continuously differentiable function with bounded partial derivatives in x. Then

$$D^{(0)} F = \frac{\partial}{\partial x} f(Z, Z')\, D^{(0)} Z.$$

Here the transformations removing or adding a jump at (t, z) are defined, for ω ∈ Ω_N, by

$$\epsilon^-_{(t,z)}\omega(A\times B) = \omega\big(A\times B\cap\{(t,z)\}^c\big), \qquad \epsilon^+_{(t,z)}\omega(A\times B) = \epsilon^-_{(t,z)}\omega(A\times B) + 1_A(t)\,1_B(z).$$
The last proposition is an extension of the results in [11], where the authors consider only simple
Lévy processes, and is similar to Corollary 3.6 in [16]. However, this chain rule is applicable only to
random variables that can be separated into a continuous and a discontinuous part; such random
variables are called separable, for more details see [4]. In what follows we provide a proof of the
chain rule with no separability requirements.
The first step is to find a dense linear span of Doléans-Dade exponentials for our space. To
achieve this, as in [12], we use the continuous function
$$\gamma(z) = \begin{cases} e^z - 1, & z < 0,\\ 1 - e^{-z}, & z \ge 0,\end{cases}$$

which is bounded and invertible. Moreover, γ ∈ L²(ν), $e^{\lambda\gamma} - 1 \in L^2(\nu)$ for all λ ∈ R, and for h ∈ C([0,T]) we have $e^{h\gamma} - 1 \in L^2(\pi)$, $h\gamma \in L^2(\pi)$, $e^{h\gamma} \in L^1(\pi)$.
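A quick numerical check of these properties (our illustration, not part of the paper's argument): γ is strictly increasing, bounded by 1 in absolute value, and dominated by |z|, which is what puts it in L²(ν) for any Lévy measure:

```python
import numpy as np

# gamma(z) = e^z - 1 for z < 0 and 1 - e^{-z} for z >= 0:
# strictly increasing (hence invertible), |gamma| < 1, |gamma(z)| <= |z|.
def gamma(z):
    z = np.asarray(z, dtype=float)
    return np.where(z < 0, np.expm1(z), -np.expm1(-z))

z = np.linspace(-20.0, 20.0, 100_001)
g = gamma(z)

monotone = bool(np.all(np.diff(g) > 0))
bounded = bool(np.all(np.abs(g) < 1))
dominated = bool(np.all(np.abs(g) <= np.abs(z) + 1e-12))
```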
Lemma 1. The linear span S of random variables Y = {Y_t, t ∈ [0,T]} of the form

$$Y_t = \exp\Big(\int_0^t \sigma h(s)\,dW_s + \int_0^t\!\int_{\mathbb{R}_0} h(s)\gamma(z)\,\tilde\mu(dz,ds) - \int_0^t \frac{\sigma^2 h(s)^2}{2}\,ds - \int_0^t\!\int_{\mathbb{R}_0}\big(e^{h(s)\gamma(z)} - 1 - h(s)\gamma(z)\big)\,\pi(dz,ds)\Big) \tag{3}$$

is dense in L²(F_T, P).
The proof of the chain rule requires the next technical lemma: if a sequence in S converges to F ∈ $\mathbb{D}^{(0)}$ in L²(Ω), then along a subsequence the derivatives converge to $D^{(0)}F$ in L²([0,T] × Ω).
$$\cdots = \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \int_{U_{j_n}} \|g^k_{j_1,\dots,j_n}(\cdot) - g_{j_1,\dots,j_n}(\cdot)\|^2_{L^2(G_{j_1,\dots,j_{n-1}})}\, d\langle Q_{j_n}\rangle = \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \|g^k_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} < \infty.$$
From (4) we can choose a subsequence such that $\|g^{k_{m+1}}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} \le \|g^{k_m}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})}$ for all n. Hence

$$\sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \|g^{k_m}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} \le \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \|g^{k_1}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} < \infty.$$
However, $\lim_{m\to\infty}\|g^{k_m}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} = 0$. From the dominated convergence theorem we have

$$\lim_{m\to\infty} \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \|g^{k_m}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} = \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{i=1}^n 1_{\{j_i=0\}} \lim_{m\to\infty}\|g^{k_m}_{j_1,\dots,j_n} - g_{j_1,\dots,j_n}\|^2_{L^2(G_{j_1,\dots,j_n})} = 0.$$
Using the fact that D(0) is a densely defined and closed operator, and that the elements of the
linear span S are separable processes, we prove in the following theorem the chain rule for all
processes in D(0) .
Theorem 2 (Chain Rule). Let F ∈ $\mathbb{D}^{(0)}$ and let f be a continuously differentiable function with bounded derivative. Then f(F) ∈ $\mathbb{D}^{(0)}$ and the following chain rule holds:

$$D_t^{(0)} f(F) = f'(F)\, D_t^{(0)} F.$$
Proof. Let F ∈ $\mathbb{D}^{(0)}$. F can be approximated in L²(Ω) by a sequence $\{F_n\}_{n=0}^\infty$, where F_n ∈ S for all n ∈ N. Every term of F_n, as a linear combination of Lévy exponentials, is in $\mathbb{D}^{(0)}$. Then from Lemma 2 there exists a subsequence $\{F_{n_k}\}_{k=0}^\infty$ such that $\lim_{k\to\infty} D_t^{(0)} F_{n_k} = D_t^{(0)} F$ in L²([0,T] × Ω).
However, the elements of the sequence $\{F_{n_k}\}_{k=0}^\infty$ are separable processes. We can then apply the chain rule of Proposition 2 to the process $f(F_{n_k})$ and we have

$$D_t^{(0)} f(F_{n_k}) = f'(F_{n_k})\, D_t^{(0)} F_{n_k}.$$
Remark. The theory developed in this section also holds when our space is generated by a d-dimensional Wiener process and k Poisson random measures. However, we have to introduce new notation for the directional derivatives in order to simplify things. In the multidimensional case, $D_t^{(0)} F$ will denote a row vector whose i-th entry is the directional derivative with respect to the Wiener process Wⁱ, for i = 1, …, d. Similarly we define the row vector $D_{(t,z)}^{(1)} F$. Furthermore, $D^i F$ will denote a scalar: the directional derivative with respect to the i-th Wiener process Wⁱ for i = 1, …, d, and the derivative in the direction of the i-th Poisson random measure µ̃ⁱ for i = d + 1, …, d + k.
3 Skorohod Integral
The next step after the definition of the directional derivatives is to define their adjoints, which
are the Skorohod integrals in the Wiener and Poisson random measure directions.
The first two results of the section are the calculation of the Skorohod integral and the study
of its relation to the Itô and Stieltjes–Lebesgue integrals. These are extensions of the results
in [4] and [10] from simple Poisson processes to square integrable Lévy processes. The proofs
are performed in parallel to those in [4] (or in more detail in [10]) and are therefore omitted.
The main result, however, is the integration by parts formula. Although the separability result
is yet again an extension of [4], having attained a chain rule for $D^{(0)}$ that does not require a
separability condition, we are able to provide a simpler and more elegant proof. Finally, the
section closes with a technical result.
Then the l-th directional Skorohod integral is

$$\delta^{(l)}(Fh) = E[F]\int_{U_l} h(u^1)\,Q_l(du^1) + \sum_{n=1}^\infty\ \sum_{j_1,\dots,j_n=0,1}\ \sum_{k=1}^n \int_{U_{j_n}}\!\!\cdots\int_{U_{j_{k+1}}}\!\int_{U_l}\int_{U_{j_k}}\!\!\cdots\int_{U_{j_1}} g_n(u_1^{j_1},\dots,u_n^{j_n})\,h(u)\,1_{G_{j_1,\dots,j_n}}\,1_{\{t_k < t < t_{k+1}\}}\; Q_{j_1}(du_1^{j_1})\cdots Q_{j_k}(du_k^{j_k})\,Q_l(du)\,Q_{j_{k+1}}(du_{k+1}^{j_{k+1}})\cdots Q_{j_n}(du_n^{j_n}),$$

if the infinite sum converges in L²(Ω).
Having the exact form of the Skorohod integral we can study its properties. For instance, the
Skorohod integral reduces to an Itô or a Stieltjes–Lebesgue integral in the case of predictable
processes.
Proposition 4. Let $h_t$ be a predictable process such that $E[\int_{U_l} h_t^2\, d\langle Q_l\rangle] < \infty$. Then h ∈ Dom δ^{(l)}, and $\delta^{(l)}(h)$ coincides with the Itô integral $\int_0^T h_t\,dW_t$ for l = 0 and with $\int_0^T\!\int_{\mathbb{R}_0} h_t\,\tilde\mu(dz,dt)$ for l = 1.
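Numerically (our illustration, not from the paper), this can be checked for a deterministic integrand, for which the Skorohod integral in the Wiener direction is the Itô integral and obeys the Itô isometry:

```python
import numpy as np

# For deterministic (hence predictable) h, delta^(0)(h) is the Ito
# integral int_0^T h dW, so E[delta(h)] = 0 and E[delta(h)^2] equals the
# (discrete) integral of h^2.  h(t) = cos(t) is an arbitrary choice.
rng = np.random.default_rng(2)
T, m, n_paths = 1.0, 200, 20_000
dt = T / m
t = np.arange(m) * dt
h = np.cos(t)                              # left-endpoint evaluation
dW = rng.standard_normal((n_paths, m)) * np.sqrt(dt)
ito = dW @ h                               # sum_k h(t_k) (W_{t_{k+1}} - W_{t_k})

mean_err = abs(ito.mean())
isometry_err = abs(np.mean(ito**2) - np.sum(h**2) * dt)
```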
We are now able to prove one of the main results, the integration by parts formula.
Proposition 5 (Integration by parts formula). Let Fh ∈ L²(Ω × [0,T]), where F ∈ $\mathbb{D}^{(0)}$ and $h_t$ is a predictable square integrable process. Then Fh ∈ Dom δ^{(0)} and

$$\delta^{(0)}(Fh) = F\int_0^T h_t\,dW_t - \int_0^T \big(D_t^{(0)} F\big)\,h_t\,dt.$$
Note that when F is an m-dimensional vector process and h an m × m matrix process, the
integration by parts formula can be written as follows:

$$\delta^{(0)}(Fh) = \int_0^T F^* h_t\,dW_t - \int_0^T \mathrm{Tr}\big(D_t^{(0)} F\, h_t\big)\,dt.$$
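The duality behind the integration by parts formula can be exercised numerically (an illustration of ours, with an arbitrary test function, not a computation from the paper):

```python
import numpy as np

# For F = f(W_T) with f(x) = x^3 and the deterministic integrand
# h(t) = t on [0, 1], duality gives
#   E[ f(W_T) * int_0^T h dW ] = E[ f'(W_T) ] * int_0^T h(t) dt .
rng = np.random.default_rng(3)
T, m, n_paths = 1.0, 100, 50_000
dt = T / m
t = np.arange(m) * dt
dW = rng.standard_normal((n_paths, m)) * np.sqrt(dt)
WT = dW.sum(axis=1)
ito_h = dW @ t                                # int_0^T t dW_t

lhs = np.mean(WT**3 * ito_h)
rhs = np.mean(3 * WT**2) * np.sum(t) * dt     # E[f'(W_T)] * int h dt
```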
The last proposition of this section provides a relationship between the Itô and the Stieltjes–Lebesgue integrals and the directional derivatives.
Proposition 6. Let $h_t$ be a predictable square integrable process. Then

• if h ∈ $\mathbb{D}^{(0)}$, then

$$D_t^{(0)}\int_0^T h_s\,dW_s = h_t + \int_t^T D_t^{(0)} h_s\,dW_s, \qquad D_t^{(0)}\int_0^T\!\int_{\mathbb{R}_0} h_s\,\tilde\mu(dz,ds) = \int_t^T\!\int_{\mathbb{R}_0} D_t^{(0)} h_s\,\tilde\mu(dz,ds);$$

• if h ∈ $\mathbb{D}^{(1)}$, then

$$D_{(t,z)}^{(1)}\int_0^T h_s\,dW_s = \int_t^T D_{(t,z)}^{(1)} h_s\,dW_s, \qquad D_{(t,z)}^{(1)}\int_0^T\!\int_{\mathbb{R}_0} h_s\,\tilde\mu(dz,ds) = h_t + \int_t^T\!\int_{\mathbb{R}_0} D_{(t,z)}^{(1)} h_s\,\tilde\mu(dz,ds).$$
Proof. This result can be easily deduced from the definition of the directional derivative.
4 Stochastic Differential Equations

Let {X_t}_{t∈[0,T]} be an m-dimensional process in our probability space, satisfying the following
stochastic differential equation:

$$dX_t = b(t, X_{t-})\,dt + \sigma(t, X_{t-})\,dW_t + \int_{\mathbb{R}_0} \gamma(t, z, X_{t-})\,\tilde\mu(dz,dt), \qquad X_0 = x, \tag{10}$$

for each t ∈ [0,T], x ∈ Rᵐ, where C is a positive constant. Furthermore, there exists ρ : R → R
with $\int_{\mathbb{R}_0} \rho(z)^2\,\nu(dz) < \infty$ and a positive constant D such that

for all x, y ∈ Rᵐ and z ∈ R₀.
Under these conditions there exists a solution of (10) which is also unique¹. In what follows
we denote by σᵢ the i-th column vector of σ and adopt the Einstein convention of leaving
summations implicit.
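An SDE of the form (10) can be simulated by an Euler scheme; the sketch below uses illustrative coefficients of our own choosing (linear drift, diffusion and jump terms), not coefficients prescribed by the paper:

```python
import numpy as np

# Euler scheme for a Levy SDE of the form (10) with b(t,x) = 0.05 x,
# sigma(t,x) = 0.2 x, gamma(t,z,x) = x z, and jumps from a compound
# Poisson process of intensity lam with centred N(0, eta^2) marks.
# Since E[z] = 0 the compensator drift vanishes, so
# E[X_T] = x0 * exp(0.05 T).
rng = np.random.default_rng(4)
T, m, n_paths = 1.0, 500, 40_000
dt = T / m
lam, eta = 2.0, 0.1

X = np.full(n_paths, 1.0)
for _ in range(m):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    N = rng.poisson(lam * dt, size=n_paths)              # jumps this step
    Z = rng.standard_normal(n_paths) * eta * np.sqrt(N)  # sum of N marks
    X = X + 0.05 * X * dt + 0.2 * X * dW + X * Z

mean_XT = X.mean()
```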
In the next theorem we prove that the solution {X_t}_{t∈[0,T]} is differentiable in both directions of
the Malliavin derivative. Moreover, we derive the stochastic differential equations satisfied by
the derivatives.
1. $X_t \in \mathbb{D}^{(0)}$ for all t ∈ [0,T], and the derivative $D_s^i X_t$ satisfies the following linear equation:

$$D_s^i X_t = \int_s^t \frac{\partial}{\partial x_k} b(r, X_{r-})\, D_s^i X_{r-}^k\,dr + \sigma_i(s, X_{s-}) + \int_s^t \frac{\partial}{\partial x_k}\sigma_\alpha(r, X_{r-})\, D_s^i X_{r-}^k\,dW_r^\alpha + \int_s^t\!\int_{\mathbb{R}_0} \frac{\partial}{\partial x_k}\gamma(r, z, X_{r-})\, D_s^i X_{r-}^k\,\tilde\mu(dz,dr). \tag{12}$$
$$X_t^0 = x_0, \qquad X_t^{n+1} = x_0 + \int_0^t b(s, X_{s-}^n)\,ds + \int_0^t \sigma_j(s, X_{s-}^n)\,dW_s^j + \int_0^t\!\int_{\mathbb{R}_0} \gamma(s, z, X_{s-}^n)\,\tilde\mu(dz,ds), \tag{14}$$

for n ≥ 0.

¹For existence and uniqueness see Theorem 6.2.3, Assumption 6.5.1 and the discussion on page 312 in [1].
We prove by induction that the following hypothesis holds true for all n ≥ 0:

(H) $X_t^n \in \mathbb{D}^{(0)}$ for all t ∈ [0,T],

$$D_r^i \int_0^t \sigma_\alpha^j(s, X_{s-}^n)\,dW_s^\alpha = \sigma_i^j(r, X_{r-}^n) + \int_r^t D_r^i\, \sigma_\alpha^j(s, X_{s-}^n)\,dW_s^\alpha,$$
$$D_r^i \int_0^t\!\int_{\mathbb{R}_0} \gamma^j(s, z, X_{s-}^n)\,\tilde\mu(dz,ds) = \int_r^t\!\int_{\mathbb{R}_0} D_r^i\, \gamma^j(s, z, X_{s-}^n)\,\tilde\mu(dz,ds).$$

Also $\int_0^t b(s, X_{s-}^n)\,ds \in \mathbb{D}^{(0)}$, hence

$$D_r^i \int_0^t b^j(s, X_{s-}^n)\,ds = \int_r^t D_r^i\, b^j(s, X_{s-}^n)\,ds. \tag{18}$$
From the above we can conclude that $X_t^{n+1} \in \mathbb{D}^{(0)}$ for all t ∈ [0,T]. Furthermore,

$$E\Big[\sup_{r\le u\le t}|D_r^i X_u^{n+1}|^2\Big] \le 4\Big\{ E\Big[\sup_{r\le u\le t}|\sigma_i(r, X_{r-}^n)|^2\Big] + E\Big[\sup_{r\le u\le t}\Big|\int_r^u D_r^i b(s, X_{s-}^n)\,ds\Big|^2\Big] + E\Big[\sup_{r\le u\le t}\Big|\int_r^u D_r^i \sigma_\alpha(s, X_{s-}^n)\,dW_s^\alpha\Big|^2\Big] + E\Big[\sup_{r\le u\le t}\Big|\int_r^u\!\int_{\mathbb{R}_0} D_r^i \gamma(s, z, X_{s-}^n)\,\tilde\mu(dz,ds)\Big|^2\Big]\Big\} \tag{19}$$

$$\le c\Big\{ E\Big[\sup_{r\le u\le t}|\sigma_i(r, X_{r-}^n)|^2\Big] + T\,E\Big[\int_r^t |D_r^i b(s, X_{s-}^n)|^2\,ds\Big] + E\Big[\int_r^t |D_r^i \sigma_\alpha(s, X_{s-}^n)|^2\,ds\Big] + E\Big[\int_r^t\!\int_{\mathbb{R}_0} |D_r^i \gamma(s, z, X_{s-}^n)|^2\,\nu(dz)\,ds\Big]\Big\}.$$
Thus, hypothesis (H) holds for n + 1. From Applebaum [1], Theorem 6.2.3, we have that

$$E\Big[\sup_{s\le T} |X_s^n - X_s|^2\Big] \to 0$$

as n goes to infinity.

²See [15], Theorem 48, p. 193.
By applying induction to inequality (20) (see Appendix A for more details), we can conclude that
the derivatives of $X_s^n$ are bounded in L²(Ω × [0,T]) uniformly in n. Hence $X_t \in \mathbb{D}^{(0)}$.
Applying the chain rule to (12) we conclude the proof.

2. Following the same steps we can prove the second claim of the theorem.
With the previous theorem we have proven that the solution of (10) is in $\mathbb{D}^{(0)}$, and we have derived the stochastic differential equation that $D_s^{(0)} X_t$ satisfies. However, the Wiener directional derivative can take a more explicit form. As in the classical Malliavin calculus, we are able to associate the solution of (12) with the process $Y_t = \nabla X_t$, the first variation of $X_t$. Y satisfies the following stochastic differential equation³:

$$dY_t = b'(t, X_{t-})Y_{t-}\,dt + \sigma_i'(t, X_{t-})Y_{t-}\,dW_t^i + \int_{\mathbb{R}_0} \gamma'(t, z, X_{t-})Y_{t-}\,\tilde\mu(dz,dt), \qquad Y_0 = I, \tag{21}$$

where the prime denotes the derivative with respect to x and I the identity matrix. Hence, we reach the following proposition, which provides a simpler expression for $D_s^{(0)} X_t$.
Proposition 7. Let {X_t}_{t∈[0,T]} be the solution of (10). Then the derivative in the Wiener direction satisfies the following equation:

$$D_r^{(0)} X_t = Y_t\, Y_{r-}^{-1}\, \sigma(r, X_{r-}), \qquad \forall r \le t. \tag{22}$$
Let {Z_t}_{t∈[0,T]} be a d × d matrix valued process that satisfies the following equation:

$$Z_t^{ij} = \delta_j^i + \int_0^t \Big( -\frac{\partial}{\partial x_j} b^k(s, X_{s-}) + \frac{\partial}{\partial x_l}\sigma_\alpha^k(s, X_{s-})\,\frac{\partial}{\partial x_j}\sigma_l^\alpha(s, X_{s-}) \Big) Z_{s-}^{ik}\,ds + \int_0^t\!\int_{\mathbb{R}_0} \frac{\big(\frac{\partial}{\partial x_k}\gamma^i(s, z, X_{s-})\big)^2}{1 + \frac{\partial}{\partial x_k}\gamma^i(s, z, X_{s-})}\, Z_{s-}^{ik}\,\nu(dz)\,ds - \int_0^t \frac{\partial}{\partial x_l}\sigma_j^k(s, X_{s-})\, Z_{s-}^{ik}\,dW_s^l - \int_0^t\!\int_{\mathbb{R}_0} \frac{\frac{\partial}{\partial x_k}\gamma^i(s, z, X_{s-})}{1 + \frac{\partial}{\partial x_k}\gamma^i(s, z, X_{s-})}\, Z_{s-}^{ik}\,\tilde\mu(dz,ds).$$

Hence, $Z_t = Y_t^{-1}$. Furthermore, applying Itô's formula once more, it is easy to show that
$Y_t^{il} Z_{r-}^{lk}\, \sigma_j^k(r, X_{r-})$ verifies (12) for all r < t. Hence the proof is concluded.
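In the scalar linear case the first variation is explicit, which makes (22) easy to verify numerically. The check below (our illustration, not part of the paper) uses geometric Brownian motion, where $Y_t = X_t/x_0$ and hence $D_r^{(0)}X_T = (Y_T/Y_r)\,\sigma X_r = \sigma X_T$:

```python
import numpy as np

# For dX = r X dt + sigma X dW the first variation Y solves the same
# linear SDE with Y_0 = 1, so Y_t = X_t / x0.  We bump x0 pathwise and
# compare dX_T/dx0 with Y_T = X_T / x0; for GBM they agree up to
# floating-point error because X_T is linear in x0.
rng = np.random.default_rng(5)
r, sigma, T, x0, eps = 0.05, 0.2, 1.0, 1.0, 1e-6
W = rng.standard_normal(100_000) * np.sqrt(T)
growth = np.exp((r - 0.5 * sigma**2) * T + sigma * W)
Y_T = growth                                   # X_T / x0
bump = ((x0 + eps) * growth - x0 * growth) / eps
max_err = np.max(np.abs(bump - Y_T))
```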
5 Sensitivities
Using the Malliavin calculus developed in the previous sections we are able to calculate the
sensitivities, i.e. the Greek letters. The Greeks are calculated for an m-dimensional process
{Xt }t∈[0,T ] that satisfies equation (10).
We denote the price of the contingent claim by

$$u(x) = E[\phi(X_{t_1}, \dots, X_{t_n})],$$

where φ(X_{t₁}, …, X_{tₙ}) is the payoff function, which is square integrable, evaluated at times t₁, …, tₙ and discounted from maturity T.
In what follows we assume the following ellipticity condition for the diffusion matrix σ.

Assumption 1. The diffusion matrix σ is elliptic; that is, there exists k > 0 such that

$$y^* \sigma^*(t,x)\,\sigma(t,x)\, y \ge k|y|^2, \qquad \forall y, x \in \mathbb{R}^d.$$

We perturb the drift in the direction ξ, where ε is a scalar and ξ is a bounded function. Then we reach the following proposition.
Proposition 8. Let σ be a uniformly elliptic matrix and denote by $u^\epsilon(x)$ the perturbed price $u^\epsilon(x) = E[\phi(X_T^\epsilon)]$. Then

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon}\Big|_{\epsilon=0} = E\Big[\phi(X_T)\int_0^T \big(\sigma^{-1}(t, X_{t-})\,\xi(t, X_{t-})\big)^*\,dW_t\Big].$$
we have

$$E[\phi(X_T^\epsilon)] = E[M_T^\epsilon\, \phi(X_T)].$$

Hence

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon}\Big|_{\epsilon=0} = \lim_{\epsilon\to 0} E\Big[\frac{\phi(X_T^\epsilon) - \phi(X_T)}{\epsilon}\Big] = \lim_{\epsilon\to 0} E\Big[\phi(X_T)\,\frac{M_T^\epsilon - 1}{\epsilon}\Big] = E\Big[\phi(X_T)\int_0^T \big(\sigma^{-1}(t, X_{t-})\,\xi(t, X_{t-})\big)^*\,dW_t\Big].$$
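As a sanity check of ours (Black–Scholes specialization, not a computation from the paper): with ξ(t, x) = x the weight reduces to W_T/σ, and for the linear payoff φ(x) = x both sides of the proposition can be computed exactly:

```python
import numpy as np

# In Black-Scholes with xi(t, x) = x the drift-perturbation weight is
# W_T / sigma, and   d/dr E[phi(S_T)] = E[ phi(S_T) * W_T / sigma ].
# For phi(x) = x both sides equal S0 * T * exp(r T).
rng = np.random.default_rng(6)
S0, r, sigma, T, n = 1.0, 0.03, 0.25, 1.0, 400_000
W = rng.standard_normal(n) * np.sqrt(T)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W)
weighted = np.mean(ST * W / sigma)
exact = S0 * T * np.exp(r * T)
```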
In order to calculate the variation in the initial condition we define the set Γ as follows:

$$\Gamma = \Big\{\zeta \in L^2([0,T]) : \int_0^{t_i} \zeta(t)\,dt = 1,\ \forall i = 1, \dots, n\Big\}.$$
Proof. Let φ be a continuously differentiable function with bounded gradient. Then we can differentiate inside the expectation⁴ and we have

$$\nabla u(x) = E\Big[\sum_{i=1}^n \nabla_i \phi(X_{t_1},\dots,X_{t_n})\,\frac{\partial}{\partial x} X_{t_i}\Big] = E\Big[\sum_{i=1}^n \nabla_i \phi(X_{t_1},\dots,X_{t_n})\, Y_{t_i}\Big], \tag{24}$$

where $\nabla_i \phi(X_{t_1},\dots,X_{t_n})$ is the gradient of φ with respect to $X_{t_i}$, and $\frac{\partial}{\partial x}X_{t_i}$ is the d × d matrix of the first variation of the d-dimensional process $X_{t_i}$.

⁴For details see [4] and [8].
From (22) we have

$$Y_{t_i} = D_t^{(0)} X_{t_i}\, \sigma^{-1}(t, X_{t-})\, Y_{t-}.$$

However, $\zeta(t)\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\big)^*$ is a predictable process, thus the Skorohod integral coincides with the Wiener integral. Since the family of continuously differentiable functions is dense in L², the result holds for any φ ∈ L²; see [8] and [4] for more details.
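As a sanity check (a standard Black–Scholes specialization, not an additional result of the paper): for n = 1, d = 1 and σ(t, x) = σx with constant σ, we have Y_t = X_t/x, hence ζ(t)(σ⁻¹(t, X_{t−})Y_{t−})* = ζ(t)/(σx); choosing ζ ≡ 1/T ∈ Γ the weight becomes an ordinary Wiener integral:

```latex
\nabla u(x) \;=\; E\Big[\phi(X_T)\,\delta^{(0)}\Big(\frac{1}{\sigma x T}\Big)\Big]
            \;=\; E\Big[\phi(X_T)\,\frac{W_T}{x\,\sigma\,T}\Big],
```

recovering the classical delta weight of [8].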
In this case, we introduce the set Γₙ, where

$$\Gamma_n = \Big\{\zeta \in L^2([0,T]) : \int_{t_{i-1}}^{t_i} \zeta(t)\,dt = 1,\ \forall i = 1,\dots,n\Big\}.$$
Proposition 10. Assume that the diffusion matrix σ is uniformly elliptic and that, for $\beta_{t_i} = Y_{t_i}^{-1} Z_{t_i}$, i = 1, …, n, we have $\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t \in \mathrm{Dom}\,\delta^{(0)}$ for all t ∈ [0,T]. Denote by $u^\epsilon(x)$ the perturbed price

$$u^\epsilon(x) = E[\phi(X_{t_1}^\epsilon, \dots, X_{t_n}^\epsilon)].$$

Then for all ζ(t) ∈ Γₙ,

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon}\Big|_{\epsilon=0} = E\big[\phi(X_{t_1},\dots,X_{t_n})\,\delta^{(0)}\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t\big)\big],$$

where

$$\tilde\beta_t = \sum_{i=1}^n \zeta(t)\,(\beta_{t_i} - \beta_{t_{i-1}})\,1_{\{t_{i-1} \le t < t_i\}}.$$
Z T
(0)
Dt Xti σ −1 (t, Xt− )Yt− β̃t dt
0
Z ti
= Yti β̃t dt
0
n Z
X ti
= Yti ζ(t)(βti − βti−1 )1{ti ≤t<ti }
i=1 ti−1
= Yti βti = Zti .
870
Inserting the above into (25) we have

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon} = E\Big[\int_0^T \sum_{i=1}^n \nabla_i\phi(X_{t_1},\dots,X_{t_n})\, D_t^{(0)} X_{t_i}\,\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t\,dt\Big],$$

and since $\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t \in \mathrm{Dom}\,\delta^{(0)}$ for all t ∈ [0,T],

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon}\Big|_{\epsilon=0} = E\big[\phi(X_{t_1},\dots,X_{t_n})\,\delta^{(0)}\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t\big)\big],$$

and the result follows. If β ∈ $\mathbb{D}^{(0)}$, we can calculate the Skorohod integral using Proposition 5.
Proposition 11. Assume that the diffusion matrix σ is uniformly elliptic and that, for $\beta_{t_i} = Y_{t_i}^{-1} Z_{t_i}$, i = 1, …, n, we have $\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t \in \mathrm{Dom}\,\delta^{(0)}$ for all t ∈ [0,T]. Denote by $u^\epsilon(x)$ the perturbed price

$$u^\epsilon(x) = E[\phi(X_{t_1}^\epsilon,\dots,X_{t_n}^\epsilon)].$$

Then for all ζ(t) ∈ Γₙ,

$$\frac{\partial u^\epsilon(x)}{\partial \epsilon}\Big|_{\epsilon=0} = E\big[\phi(X_{t_1},\dots,X_{t_n})\,\delta^{(0)}\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t\big)\big],$$

where

$$\tilde\beta_t = \sum_{i=1}^n \zeta(t)\,(\beta_{t_i} - \beta_{t_{i-1}})\,1_{\{t_{i-1}\le t < t_i\}}$$
for t₀ = 0. Moreover, if β ∈ $\mathbb{D}^{(0)}$ then

$$\delta^{(0)}\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\tilde\beta_t\big) = \sum_{i=1}^n \Big\{ \beta_{t_i}^* \int_{t_{i-1}}^{t_i} \zeta(t)\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\big)^*\,dW_t - \int_{t_{i-1}}^{t_i} \zeta(t)\,\mathrm{Tr}\big((D_t^{(0)}\beta_{t_i})\,\sigma^{-1}(t, X_{t-})\, Y_{t-}\big)\,dt - \int_{t_{i-1}}^{t_i} \zeta(t)\big(\sigma^{-1}(t, X_{t-})\, Y_{t-}\beta_{t_{i-1}}\big)^*\,dW_t \Big\}.$$
6 Examples
In this section, we explicitly calculate the Greeks for a general stochastic volatility model with
jumps both in the underlying and the volatility (SVJJ). This is followed by some numerical
results on two specific cases of SVJJ, the Bates and the SVJ model.
Example 1. Let $X_t = (X_t^1, X_t^2)$ be a two dimensional stochastic process satisfying the following
stochastic differential equations:

$$dX_t^1 = r X_{t-}^1\,dt + \sqrt{X_{t-}^2}\, X_{t-}^1\,dW_t^1 + X_{t-}^1 \int_{\mathbb{R}_0} \gamma_1(t,z)\,\tilde\mu(dz,dt), \qquad X_0^1 = x_1,$$
$$dX_t^2 = k(m - X_{t-}^2)\,dt + \sigma\sqrt{X_{t-}^2}\,dW_t^2 + X_{t-}^2 \int_{\mathbb{R}_0} \gamma_2(t,z)\,\tilde\mu(dz,dt), \qquad X_0^2 = x_2.$$
The Greeks for this model have the form

$$\mathrm{Rho} = E\Big[e^{-rT}\phi(X_T^1)\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big)\Big] - E\big[T\, e^{-rT}\phi(X_T^1)\big],$$

$$\mathrm{Delta} = \frac{1}{x_1}\,E\Big[e^{-rT}\phi(X_T^1)\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big)\Big],$$
$$\mathrm{Gamma} = \frac{1}{x_1^2 T^2}\,E\Big[e^{-rT}\phi(X_T^1)\Big\{\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big)^2 - \frac{1}{1-\rho^2}\int_0^T \frac{dt}{X_{t-}^2} - T\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big)\Big\}\Big],$$
$$\mathrm{Vega} = E\Big[e^{-rT}\phi(X_T^1)\,\frac{1}{T}\Big\{\Big(W_T^1 - \int_0^T \sqrt{X_{t-}^2}\,dt\Big)\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big) - \int_0^T \frac{dt}{\sqrt{X_{t-}^2}}\Big\}\Big],$$
$$\mathrm{Alpha} = E\Big[e^{-rT}\phi(X_T^1)\,\frac{1}{T}\Big(\int_0^T\!\int_{\mathbb{R}_0} z\,\tilde\mu(dz,dt) - \int_0^T\!\int_{\mathbb{R}_0} z\,\gamma_1(z,t)\,\pi(dz,dt)\Big)\Big(\int_0^T \frac{dW_t^1}{\sqrt{X_{t-}^2}} - \frac{\rho}{\sqrt{1-\rho^2}}\int_0^T \frac{dW_t^2}{\sqrt{X_{t-}^2}}\Big)\Big].$$
The Lévy measure of µ is $\nu(z) = \lambda\,\frac{1}{\sqrt{2\pi}\,b}\,e^{-\frac{(z-a)^2}{2b^2}}$, thus the intensity of the Poisson process µ is λ
and the jump sizes follow the normal distribution with parameters a, b.
Stochastic volatility model with jumps (SVJ)

In Figure 2 we plot the delta of a digital option for an underlying that has a stochastic volatility
with jumps (SVJ). The SVJ model is an extension of the Heston model in which the stochastic
volatility includes jumps. The SDE are given by
$$dX_t^1 = r X_{t-}^1\,dt + \sqrt{X_{t-}^2}\,X_{t-}^1\,dW_t^1, \qquad X_0^1 = x_1,$$
$$dX_t^2 = k(m - X_{t-}^2)\,dt + \sigma\sqrt{X_{t-}^2}\,dW_t^2 + X_{t-}^2\int_{\mathbb{R}_0} z\,\tilde\mu(dz,dt), \qquad X_0^2 = x_2.$$
The Lévy measure of µ is $\nu(z) = \lambda a e^{-az}$, thus the intensity of the Poisson process µ is λ and the
jump sizes follow the exponential distribution with parameter a.
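A minimal simulation sketch of these SVJ dynamics, assuming a full-truncation Euler discretization; the scheme and all parameter values below are our illustrative choices, not prescriptions from the paper:

```python
import numpy as np

# Full-truncation Euler for the SVJ dynamics above.  Jump marks are
# exponential with mean jump_mu (at most one jump per small step), and
# the variance jump integral is compensated, i.e. driven by mu_tilde.
rng = np.random.default_rng(7)
T, m, n_paths = 1.0, 500, 20_000
dt = T / m
r, k, mlvl, sig, rho = 0.05, 1.0, 0.05, 0.3, -0.8
lam, jump_mu = 1.0, 0.05
S = np.full(n_paths, 100.0)
V = np.full(n_paths, 0.04)

for _ in range(m):
    Vp = np.maximum(V, 0.0)                       # full truncation
    dZ1 = rng.standard_normal(n_paths) * np.sqrt(dt)
    dZ2 = rng.standard_normal(n_paths) * np.sqrt(dt)
    dW1 = rho * dZ2 + np.sqrt(1 - rho**2) * dZ1   # corr(dW1, dW2) = rho
    dW2 = dZ2
    jump = (rng.random(n_paths) < lam * dt) * rng.exponential(jump_mu, n_paths)
    S = S * (1.0 + r * dt + np.sqrt(Vp) * dW1)
    V = V + k * (mlvl - Vp) * dt + sig * np.sqrt(Vp) * dW2 \
          + Vp * (jump - lam * jump_mu * dt)      # compensated jumps

mean_S = S.mean()
```

Since the jump part of the variance is a martingale and E[dW¹] = 0, the discounted mean of S is unaffected by the jumps, so E[S_T] ≈ 100·e^{rT}, which gives a cheap sanity check of the scheme.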
Figure 1: Delta for a digital option in the Bates model, with parameters T = 1, X01 = 100,K = 100,r =
0.05, σ = 0.04, X02 = 0.04, k = 1, m = 0.05, ρ = −0.8, λ = 1, a = −0.1 and b = 0.001
Figure 2: Delta for a digital option in the SVJ model, with parameters T = 1, X01 = 100,K = 100,r =
0.05, σ = 0.04, X02 = 0.04, κ = 1, η = 0.05, ρ = −0.8, λ = 1 and a = 0.001
It is obvious from the figures that the Malliavin weight performs better in the case of discontinuous
payoffs, as we would expect.
Appendix A

For n = 0 it is obvious that (26) holds. Let (26) hold for n. Then

$$\xi_{n+1} \le c_1 + c_2\int_r^T \xi_n(s)\,ds \le c_1 + c_1 c_2 \int_r^T \sum_{j=0}^n \frac{c_2^j (s-r)^j}{j!}\,ds \le c_1 + c_1 \sum_{j=1}^{n+1} \frac{c_2^j (T-r)^j}{j!} \le c_1 \sum_{j=0}^{n+1} \frac{c_2^j (T-r)^j}{j!}.$$

Since $\lim_{n\to\infty}\sum_{j=0}^n \frac{c_2^j (T-r)^j}{j!} = e^{c_2(T-r)} < \infty$, we can conclude that $\xi_n(T) < \infty$ uniformly in n.
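The induction above is a discrete Gronwall argument and can be illustrated numerically (our sketch, with arbitrary constants c₁, c₂): iterating the integral recursion drives the iterates to the exponential bound:

```python
import numpy as np

# Iterate  xi_{n+1}(s) = c1 + c2 * int_r^s xi_n(u) du  from xi_0 = c1
# on a grid; the iterates converge to (approximately) the Gronwall
# bound c1 * exp(c2 (s - r)) and never exceed it materially.
c1, c2, r, T, m = 2.0, 1.5, 0.0, 1.0, 10_000
s = np.linspace(r, T, m + 1)
ds = np.diff(s)
xi = np.full_like(s, c1)
for _ in range(30):
    trapz = np.concatenate(([0.0], np.cumsum(0.5 * (xi[1:] + xi[:-1]) * ds)))
    xi = c1 + c2 * trapz

final, bound = xi[-1], c1 * np.exp(c2 * (T - r))
```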
and

$$\gamma(t, z, X_{t-}) = \begin{pmatrix} X_{t-}^1\,\gamma_1(t,z)\\ X_{t-}^2\,\gamma_2(t,z)\end{pmatrix}.$$

The inverse of σ is

$$\sigma(t, X_{t-})^{-1} = \frac{1}{\sigma X_{t-}^1 X_{t-}^2\sqrt{1-\rho^2}} \begin{pmatrix} \sigma\sqrt{1-\rho^2}\,\sqrt{X_{t-}^2} & 0\\ -\sigma\rho\sqrt{X_{t-}^2} & X_{t-}^1\sqrt{X_{t-}^2}\end{pmatrix}.$$
• Rho. For the sensitivity with respect to r we perturb the drift with ξ(t, x) = (x₁, 0)*. Hence,

$$\big(\sigma^{-1}(t, X_{t-})\,\xi(t, X_{t-})\big)^* = \Big(\frac{1}{\sqrt{X_{t-}^2}},\ -\frac{\rho}{\sqrt{1-\rho^2}\sqrt{X_{t-}^2}}\Big).$$
where

$$b'(t, X_{t-}) = \begin{pmatrix} r & 0\\ 0 & -k\end{pmatrix}, \qquad \sigma_1'(t, X_{t-}) = \begin{pmatrix} \sqrt{X_{t-}^2} & \dfrac{X_{t-}^1}{2\sqrt{X_{t-}^2}}\\[4pt] 0 & \dfrac{\sigma\rho}{2\sqrt{X_{t-}^2}}\end{pmatrix}, \qquad \sigma_2'(t, X_{t-}) = \begin{pmatrix} 0 & 0\\ 0 & \dfrac{\sigma\sqrt{1-\rho^2}}{2\sqrt{X_{t-}^2}}\end{pmatrix},$$

and

$$\gamma'(t, z, X_{t-}) = \begin{pmatrix} \gamma_1(t,z) & 0\\ 0 & \gamma_2(t,z)\end{pmatrix}.$$
Since $Y_t^{1,1} = \frac{X_t^1}{x_1}$ and $Y_t^{2,1} = 0$, the matrix $(\sigma^{-1}(t, X_{t-})\, Y_t)^*$ has the following form:

$$\big(\sigma^{-1}(t, X_{t-})\, Y_t\big)^* = \begin{pmatrix} \dfrac{1}{X_{t-}^1\sqrt{X_{t-}^2}}\, Y_t^{1,1} & -\dfrac{\rho}{\sqrt{1-\rho^2}\,X_{t-}^1\sqrt{X_{t-}^2}}\, Y_t^{1,1}\\[8pt] \dfrac{1}{X_{t-}^1\sqrt{X_{t-}^2}}\, Y_t^{1,2} & -\dfrac{\rho}{\sqrt{1-\rho^2}\,X_{t-}^1\sqrt{X_{t-}^2}}\, Y_t^{1,2} + \dfrac{1}{\sigma\sqrt{1-\rho^2}\sqrt{X_{t-}^2}}\, Y_t^{2,2}\end{pmatrix}.$$
• Gamma. For the second derivative with respect to the initial condition x, applying Lemma 9 again to the Delta we reach our result.
• Vega. In order to calculate the Vega we perturb $X_t$ with $\xi(t,x) = \begin{pmatrix} x_1 & 0\\ 0 & 0\end{pmatrix}$. From Itô's formula it is easy to verify that $Z_t^1 = X_t^1\big(W_t^1 - \int_0^t \sqrt{X_s^2}\,ds\big)$. Furthermore, since $X_t^2$ does not depend on ε we can deduce that $Z_t^2 = 0$. Then

$$\beta_t = Y_t^{-1} Z_t = \begin{pmatrix} x_1\big(W_t^1 - \int_0^t \sqrt{X_s^2}\,ds\big)\\ 0\end{pmatrix}.$$
The Wiener directional derivative of $\sqrt{X_{s-}^2}$ has the following form:

$$D_t^{(0)}\sqrt{X_{s-}^2} = \frac{1}{2\sqrt{X_{s-}^2}}\, D_t^{(0)} X_{s-}^2 = \frac{1}{2\sqrt{X_{s-}^2}}\Big(\sigma\rho\sqrt{X_{t-}^2}\,\frac{Y_{s-}^{2,2}}{Y_{t-}^{2,2}}\,1_{\{t<s\}},\ \ \sigma\sqrt{1-\rho^2}\,\sqrt{X_{t-}^2}\,\frac{Y_{s-}^{2,2}}{Y_{t-}^{2,2}}\,1_{\{t<s\}}\Big).$$

Hence

$$D_t\beta_T = x_1\begin{pmatrix} 1 - \rho\sigma\displaystyle\int_t^T \frac{\sqrt{X_{t-}^2}\,Y_{s-}^{2,2}}{2\sqrt{X_s^2}\,Y_{t-}^{2,2}}\,ds & \ -\sqrt{1-\rho^2}\,\sigma\displaystyle\int_t^T \frac{\sqrt{X_{t-}^2}\,Y_{s-}^{2,2}}{2\sqrt{X_s^2}\,Y_{t-}^{2,2}}\,ds\\ 0 & 0\end{pmatrix},$$

and then

$$\mathrm{Tr}\big((D_t^{(0)}\beta_T)\,\sigma^{-1}(t, X_{t-})\,Y_{t-}\big) = \frac{1}{\sqrt{X_{t-}^2}}.$$
• Alpha. In order to calculate the sensitivity with respect to the jump amplitude we perturb $X_t$ with $\xi(t,x) = \begin{pmatrix} x_1\\ 0\end{pmatrix}$. From Itô's formula it is easy to verify that

$$Z_t^1 = X_t^1\Big(\int_0^t\!\int_{\mathbb{R}_0} z\,\tilde\mu(dz,ds) - \int_0^t\!\int_{\mathbb{R}_0} z\,\gamma_1(z,s)\,\pi(dz,ds)\Big).$$

Following the same steps as for the Vega we reach the wanted result.
References
[1] D Applebaum. Lévy Processes and Stochastic Calculus. Cambridge University Press, 2004.
MR2072890
[2] F Benth, G Di Nunno, A Løkka, B Øksendal, and F Proske. Explicit representation of the
minimal variance portfolio in markets driven by Lévy processes. Math. Finance, 13(1):55–
72, 2003. Conference on Applications of Malliavin Calculus in Finance (Rocquencourt,
2001). MR1968096
[3] K Bichteler, J Gravereaux, and J Jacod. Malliavin calculus for processes with jumps, vol-
ume 2 of Stochastics Monographs. Gordon and Breach Science Publishers, New York, 1987.
MR1008471
[4] M Davis and M Johansson. Malliavin Monte Carlo Greeks for jump diffusions. Stochastic
Process. Appl., 116(1):101–129, 2006. MR2186841
[5] G Di Nunno, T Meyer-Brandis, B Øksendal, and F Proske. Malliavin calculus and antici-
pative Itô formulae for Lévy processes. Infin. Dimens. Anal. Quantum Probab. Relat. Top.,
8(2):235–258, 2005. MR2146315
[6] Y El-Khatib and N Privault. Computations of Greeks in a market with jumps via the
Malliavin calculus. Finance Stoch., 8(2):161–179, 2004. MR2048826
[8] E Fournié, J Lasry, J Lebuchoux, P Lions, and N Touzi. Applications of Malliavin calculus
to Monte Carlo methods in finance. Finance Stoch., 3(4):391–412, 1999. MR1842285
[9] K Itô. Spectral type of the shift transformation of differential processes with stationary
increments. Trans. Amer. Math. Soc., 81:253–263, 1956. MR0077017
[10] M Johansson. Malliavin Calculus for Lévy Processes with Applications to Finance. PhD
thesis, Imperial College, 2004.
[11] J León, J Solé, F Utzet, and J Vives. On Lévy processes, Malliavin calculus and market
models with jumps. Finance Stoch., 6(2):197–225, 2002. MR1897959
[12] A Løkka. Martingale representation of functionals of Lévy processes. Stochastic Anal. Appl.,
22(4):867–892, 2004. MR2062949
[13] D Nualart. The Malliavin calculus and related topics. Probability and its Applications (New
York). Springer-Verlag, Berlin, second edition, 2006. MR2200233
[14] D Nualart and J Vives. Anticipative calculus for the Poisson process based on the Fock
space. In Séminaire de Probabilités, XXIV, 1988/89, volume 1426 of Lecture Notes in
Math., pages 154–165. Springer, Berlin, 1990. MR1071538
[15] P Protter. Stochastic integration and differential equations, volume 21 of Stochastic Mod-
elling and Applied Probability. Springer-Verlag, Berlin, 2005. Second edition. Version 2.1,
Corrected third printing. MR2273672
[16] J Solé, F Utzet, and J Vives. Canonical Lévy process and Malliavin calculus. Stochastic
Process. Appl., 117(2):165–187, 2007. MR2290191
[17] A Yablonski. The Malliavin calculus for processes with conditionally independent increments. In Stochastic Analysis and Applications, volume 2 of Abel Symposia. Springer, 2007.