
SDEs - an Overview

Hausenblas Erika

Montanuniversity Leoben, Austria

October 12, 2023

1 / 65
Outline

Introduction and Motivation
The Brownian motion and stochastic integration
The Itô formula
Numerics
Applications
Lévy processes
Fractional Brownian motion
Stochastic partial differential equations

Outline 2 / 65
The Itô Integral
Let

  M²([0, ∞); R) := { f : Ω × [0, ∞) → L(R^d, R^d) :
      f is progressively measurable and E ∫_{R₊} |f(s)|² ds < ∞ }.

Theorem
There exists a linear bounded operator

  I : M²([0, ∞); R) → L²(Ω, P; R),

which is the unique extension of the operator (∗).

For all f ∈ M²([0, ∞); R) and t > 0 let

  ∫_0^t f(s) dM(s) := I(1_{(0,t]} f).

Outline 3 / 65
The Itô-Isometry

Proposition
For any f ∈ M²([0, ∞); R) the stochastic integral I(f) is a square-integrable
random variable, i.e. I(f) ∈ L²(Ω), such that

  E|I(f)|² = E ∫_0^∞ |f(t)|² dt.

Outline 4 / 65
The Itô stochastic integral

Definition
For any t > 0 we denote by M²([0, t]; R) the space of all stochastic processes
f : [0, t] × Ω → R such that

  1_[0,t] f ∈ M²([0, ∞); R).

The Itô stochastic integral (from 0 to t) is defined by

  ∫_0^t f(s) dB(s) := I_t(f) = I(1_[0,t] f).

Outline 5 / 65
Properties

Theorem
The following properties hold for all f, g ∈ M²([0, ∞); R) and any α, β ∈ R:

1. Linearity:

  ∫_0^t (αf(s) + βg(s)) dB(s) = α ∫_0^t f(s) dB(s) + β ∫_0^t g(s) dB(s)

2. Isometry:

  E | ∫_0^t f(s) dB(s) |² = E ∫_0^t |f(s)|² ds

3. Martingale property (and adaptedness): for 0 ≤ r ≤ t,

  E [ ∫_0^t f(s) dB(s) | F_r ] = ∫_0^r f(s) dB(s)

Properties of the stochastic integral 6 / 65
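As a quick numerical illustration of property 2 (the isometry), the following sketch approximates the Itô integral of the adapted integrand f(s) = B(s) by left-endpoint Riemann sums and compares both sides of the identity by Monte Carlo; the step and sample counts are illustrative choices, not taken from the slides.

```python
import numpy as np

# Monte Carlo check of the Ito isometry for the adapted integrand f(s) = B(s):
# both sides should be close to t^2/2 (here 0.5).
rng = np.random.default_rng(0)
t, n, m = 1.0, 1_000, 20_000
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), size=(m, n))   # Brownian increments
B = np.cumsum(dB, axis=1) - dB                   # left endpoints B(t_k)

ito_integral = np.sum(B * dB, axis=1)            # sum_k f(t_k) (B(t_{k+1}) - B(t_k))
lhs = np.mean(ito_integral**2)                   # E |I(f)|^2
rhs = np.mean(np.sum(B**2 * dt, axis=1))         # E int_0^t |f(s)|^2 ds

print(lhs, rhs, t**2 / 2)                        # all approximately 0.5
```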


Properties

Burkholder-Davis-Gundy inequality

Let f ∈ M²([0, ∞); R^d) and p ≥ 1. Then there is a constant C_p > 0, depending only on p, such that

  E sup_{0≤s≤t} | ∫_0^s f(u) dB(u) |^p ≤ C_p E ( ∫_0^t |f(s)|² ds )^{p/2}.

Properties of the stochastic integral 7 / 65


Stochastic Differential Equations

Stochastic Differential Equations 8 / 65


The geometric Brownian motion

A simple deterministic differential equation:

  d/dt X(t) = a X(t),   X(0) = x0.

The solution: X(t) = x0 e^{at}, t ≥ 0.

A simple stochastic differential equation:

  dX(t) = ν X(t) dt + σ X(t) dB(t),   X(0) = x0.

The solution:

  X(t) = x0 exp( (ν − σ²/2) t + σ B(t) ),   t ≥ 0.

Stochastic Differential Equations An example from finance 9 / 65
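A minimal sketch of how the closed-form solution above can be used to sample a path of the geometric Brownian motion; the parameter values (x0, ν, σ, T) and the grid size are illustrative assumptions.

```python
import numpy as np

# Sample one path of geometric Brownian motion via the closed-form solution
# X(t) = x0 * exp((nu - sigma^2/2) t + sigma B(t)).
rng = np.random.default_rng(1)
x0, nu, sigma, T, n = 1.0, 0.1, 0.2, 1.0, 500
t = np.linspace(0.0, T, n + 1)

dB = rng.normal(0.0, np.sqrt(T / n), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))       # Brownian path B(t_k)

X = x0 * np.exp((nu - 0.5 * sigma**2) * t + sigma * B)
print(X[-1])
```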


Some simulations

[Figure: sample paths of geometric Brownian motion with σ = 0.2 and σ = 0.05 on the time interval [0, 1].]

Stochastic Differential Equations An example from finance 10 / 65


Some simulations

[Figure: realizations of geometric Brownian motion with different variances (0.25, 1, 2.5), plotted as x against t.]

Stochastic Differential Equations An example from finance 11 / 65


The Itô Formula
Given¹

  dX(t) = b(t) dt + σ(t) dB(t),   X(0) = x0.

¹ Karatzas and Shreve, Brownian motion and stochastic calculus (1991); Revuz and Yor, Continuous martingales and Brownian motion (1991).

Note (the heuristic behind the formula): the quadratic variation of Brownian motion satisfies (dB(t))² = dt; over partitions of [0, t] with mesh tending to zero, Σ_k (B(t_{k+1}) − B(t_k))² → t in probability.

⇒ Itô formula: for F ∈ C²(R),

  F(X(t)) = F(x0) + ∫_0^t F′(X(s)) b(s) ds + ∫_0^t F′(X(s)) σ(s) dB(s)
            + (1/2) ∫_0^t F″(X(s)) σ²(s) ds,

where the last term is the Itô correction term.
Stochastic Differential Equations An example from finance 12 / 65
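The following sketch checks the Itô formula numerically in the simplest case F(x) = x², b = 0, σ = 1, x0 = 0 (i.e. X = B), where it reduces to B(t)² = 2 ∫_0^t B(s) dB(s) + t; all discretisation parameters are illustrative.

```python
import numpy as np

# Monte Carlo check of the Ito formula for F(x) = x^2 and X = B:
#   B(t)^2 = 2 * int_0^t B(s) dB(s) + t,  the "+ t" being the Ito correction.
rng = np.random.default_rng(2)
t, n, m = 1.0, 2_000, 10_000
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), size=(m, n))
B_left = np.cumsum(dB, axis=1) - dB                 # B(t_k) at the left endpoints
B_t = dB.sum(axis=1)                                # B(t)

lhs = B_t**2
rhs = 2.0 * np.sum(B_left * dB, axis=1) + t
print(np.mean(np.abs(lhs - rhs)))                   # small, shrinks as n grows
```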
Application of the geometric Brownian motion

Stock price prediction


Let {X(t) : t ≥ 0} be the price of a share. One can model it by looking at the
ratio between X(t + h) and X(t). If one assumes that the log-return

  log( X(t + h) / X(t) ) is normally distributed with variance σ² h,

then one ends up with the geometric Brownian motion (without drift term).

Stochastic Differential Equations An example from finance 13 / 65


Existence and Uniqueness of solution

Definition
Let X be a Banach space. A map T : X → X is called a contraction mapping on
X if there exists k ∈ [0, 1) such that |T (x) − T (y )|X ≤ k|x − y |X for all x, y ∈ X .

Banach fixed-point theorem

Let X be a Banach space and T : X → X a strict contraction. Then there exists a
unique fixed point x* ∈ X such that Tx* = x*.

Given
A filtered probability space, a Brownian motion B, and the equation

  dX(t) = F(X(t)) dt + σ(X(t)) dB(t),   X(0) = x0,

where F and σ are Lipschitz continuous. Then there exists a unique solution
X = {X(t) : t ≥ 0} such that P-a.s.

  X(t) = x0 + ∫_0^t F(X(s)) ds + ∫_0^t σ(X(s)) dB(s).

Stochastic Differential Equations Existence and Uniqueness of solution 14 / 65


Existence and Uniqueness of solution

Important ingredient:
Burkholder-Davis-Gundy inequality: let ξ ∈ M²([0, T]; R) and p ≥ 1. Then

  E sup_{0≤t≤T} | ∫_0^t ξ(s) dB(s) |^p ≤ C_p E ( ∫_0^T |ξ(s)|² ds )^{p/2}.

Stochastic Differential Equations Existence and Uniqueness of solution 15 / 65


The Numerical Modelling

Deterministic equation

  Ẋ(t) = F(X(t)),   X(0) = x0.

Explicit Euler scheme:

Given an equidistant grid {t_0 = 0 < t_1 < · · · < t_N = T}, t_k = (k/N) T,

  X̂(t_{k+1}) = X̂(t_k) + (t_{k+1} − t_k) F(X̂(t_k)),   X̂(0) = x0.

Stochastic Differential Equations The Numerical Modelling 16 / 65


The Numerical Modelling

Stochastic equation

  dX(t) = F(X(t)) dt + σ(X(t)) dB(t),   X(0) = x0.

Explicit Euler-Maruyama scheme:

Given an equidistant grid {t_0 = 0 < t_1 < · · · < t_N = T}, t_k = (k/N) T, let {ξ_k : k ∈ N} be
a sequence of independent normally distributed random variables with mean zero
and variance τ_k = t_{k+1} − t_k. Then X̂(t_{k+1}) is given by

  X̂(t_{k+1}) = X̂(t_k) + (t_{k+1} − t_k) F(X̂(t_k)) + σ(X̂(t_k)) ξ_k,   X̂(0) = x0.

Stochastic Differential Equations The Numerical Modelling 17 / 65
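A minimal sketch of the explicit Euler-Maruyama scheme defined above; the drift F(x) = −x, the constant diffusion σ = 0.5 and all other parameters are illustrative choices, not taken from the slides.

```python
import numpy as np

def euler_maruyama(F, sigma, x0, T, N, rng):
    """One path of the Euler-Maruyama approximation of dX = F(X) dt + sigma(X) dB."""
    dt = T / N
    X = np.empty(N + 1)
    X[0] = x0
    for k in range(N):
        xi = rng.normal(0.0, np.sqrt(dt))            # increment with variance dt
        X[k + 1] = X[k] + dt * F(X[k]) + sigma(X[k]) * xi
    return X

rng = np.random.default_rng(12)
path = euler_maruyama(F=lambda x: -x, sigma=lambda x: 0.5, x0=1.0,
                      T=1.0, N=1_000, rng=rng)
print(path[-1])
```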


Error Estimates:

Mean square error:

  E |X(t_k) − X̂(t_k)|² ≤ C T/N,   k = 1, . . . , N.

Almost sure convergence (hence convergence in probability):

  P ( lim_{N→∞} |X(t_k) − X̂(t_k)| = 0 ) = 1.

Weak error: let ϕ : R → R be sufficiently smooth (e.g. C⁴ with polynomial growth). Then

  |Eϕ(X(t_k)) − Eϕ(X̂(t_k))| ≤ C T/N.

Stochastic Differential Equations The Numerical Modelling 18 / 65
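The mean square error bound above corresponds to strong order 1/2. The sketch below estimates the observed strong rate of the Euler-Maruyama scheme on geometric Brownian motion, for which the exact solution is available; all parameter values are illustrative.

```python
import numpy as np

# Observed strong convergence rate of Euler-Maruyama on geometric Brownian
# motion dX = nu X dt + sigma X dB, compared against the exact solution
# X(T) = x0 exp((nu - sigma^2/2) T + sigma B(T)) on the same Brownian path.
rng = np.random.default_rng(3)
x0, nu, sigma, T, m = 1.0, 0.1, 0.2, 1.0, 5_000

errors, steps = [], [2**k for k in range(4, 9)]      # N = 16, ..., 256
for N in steps:
    dt = T / N
    dB = rng.normal(0.0, np.sqrt(dt), size=(m, N))
    X = np.full(m, x0)
    for k in range(N):                               # Euler-Maruyama recursion
        X = X + dt * nu * X + sigma * X * dB[:, k]
    X_exact = x0 * np.exp((nu - 0.5 * sigma**2) * T + sigma * dB.sum(axis=1))
    errors.append(np.sqrt(np.mean((X - X_exact) ** 2)))

# Slope of log(error) vs log(dt) should be close to the strong order 1/2.
rate = np.polyfit(np.log([T / N for N in steps]), np.log(errors), 1)[0]
print(rate)
```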


The Cox-Ingersoll-Ross Model

In mathematical finance, the Cox-Ingersoll-Ross (CIR) model, introduced in 1985
by John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross, models the dynamics
of interest rates. It is commonly employed in various applications within the
field of quantitative finance and risk management.

The equation
The CIR model describes the interest rate by the following stochastic
differential equation (with 2κθ ≥ σ²):

  dX(t) = κ (θ − X(t)) dt + σ √X(t) dW(t),   X(0) = X0,        (1)

where X = {X(t) : t ≥ 0} represents the short-term interest rate over time
t ≥ 0, κ is the speed of mean reversion, determining how quickly the interest rate
moves back to the long-term average θ, and σ is the volatility parameter.

Stochastic Differential Equations The Cox-Ingersoll-Ross Model 19 / 65
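A minimal sketch of an Euler-Maruyama discretisation of equation (1). Since a discrete iterate can dip below zero even when the Feller condition 2κθ ≥ σ² holds, the square root is applied to max(X, 0) (a common "full truncation" fix, not taken from the slides); all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama for the CIR model with a full-truncation fix inside the
# square root; the parameters satisfy 2*kappa*theta >= sigma^2.
rng = np.random.default_rng(13)
kappa, theta, sigma, X0, T, N = 1.5, 0.04, 0.3, 0.03, 5.0, 5_000
dt = T / N

X = np.empty(N + 1)
X[0] = X0
for k in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))
    Xp = max(X[k], 0.0)                              # full-truncation step
    X[k + 1] = X[k] + kappa * (theta - Xp) * dt + sigma * np.sqrt(Xp) * dW

print(X[-1])
```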


Some simulations

[Figure: simulated paths of the driving Brownian motion, the CIR process, and the inverse of the CIR process on the time interval [0, 25].]

Stochastic Differential Equations The Cox-Ingersoll-Ross Model 20 / 65


Some simulations

[Figure: further simulated paths of the driving Brownian motion, the CIR process, and the inverse of the CIR process on the time interval [0, 25].]

Stochastic Differential Equations The Cox-Ingersoll-Ross Model 21 / 65


Key Applications of the CIR Model

Interest Rate Modeling: The CIR model is primarily used to model and
forecast short-term interest rates. It can capture mean reversion, which is a
characteristic of interest rates, and allows for the simulation of interest rate
paths over time.
Pricing Fixed Income Derivatives: The CIR model can be used to price
fixed income derivatives, such as bond options, interest rate swaps, and
swaptions, by simulating the future evolution of interest rates. It is a type of
one factor model (short-rate model) as it describes interest rate movements
as driven by only one source of market risk.
Risk Management: Financial institutions use the CIR model to manage
interest rate risk in their portfolios. By simulating different interest rate
scenarios, they can assess the impact of interest rate movements on their
positions and make informed risk management decisions.
Term Structure Modeling: The model helps in estimating the term
structure of interest rates. By calibrating the model to market data, analysts
can determine the parameters that best fit the observed yield curve.

Stochastic Differential Equations The Cox-Ingersoll-Ross Model 22 / 65


Key Applications of the CIR Model
Valuation of Mortgages: Mortgage-backed securities and related products
involve interest rate risk. The CIR model can be used to value and manage
the risk associated with these instruments.
Credit Risk Modeling: In credit risk modeling, interest rates play a
significant role. The CIR model can be incorporated into credit risk models
to assess the impact of interest rate fluctuations on the probability of default
and credit spreads.
Asset-Liability Management: Financial institutions, especially banks and
insurance companies, use the CIR model in asset-liability management to
match their assets and liabilities with varying maturities and interest rate
sensitivities.
Hedging Strategies: Traders and investors use the CIR model to develop
hedging strategies for interest rate risk. For example, it can be used to
hedge a portfolio of bonds against changes in interest rates.
Forecasting: The CIR model can be employed for short-term interest rate
forecasting, which is crucial for making investment decisions and managing
fixed income portfolios.
Stochastic Differential Equations The Cox-Ingersoll-Ross Model 23 / 65
Differences to the deterministic ODE
Uniqueness
The differential equation

  Ẋ(t) = √X(t),   X(0) = x0,

does not have a unique solution (if x0 = 0): for any t0 ≥ 0 a solution is given by

  X(t) = 0 for t ≤ t0,   X(t) = (1/4)(t − t0)² for t > t0.

Yamada-Watanabe theory
If one can show pathwise uniqueness, then a unique strong solution exists!¹
1. First, a probabilistic weak solution is constructed (using compactness of probability measures);
2. then pathwise uniqueness is shown;
3. both together give a probabilistic strong solution.

¹ Yamada and Watanabe, On the uniqueness of solutions of stochastic differential equations I and II (1971).

Stochastic Differential Equations The Cox-Ingersoll-Ross Model 24 / 65
Jump Processes

Definition¹
A stochastic process L = {L(t), 0 ≤ t < ∞} is an R-valued Lévy process
over (Ω, F, P) if the following conditions are satisfied:

  L(0) = 0;
  L has independent and stationary (identically distributed) increments;
  L is stochastically continuous, i.e. for every bounded continuous ϕ the function
  t ↦ Eϕ(L(t)) is continuous on R₊;
  L has a.s. càdlàg paths.

¹ Cont and Tankov, Financial modelling with jump processes, 2004; Sato, Lévy
processes and infinitely divisible distributions, 2013.

Jump Processes (or Lévy Processes) The definition 25 / 65


α-stable processes

Stable random variables

A random variable X : Ω → R is stable iff for any n ∈ N there exist
independent random variables X_1, X_2, . . . , X_n with

  Law(X_k) = Law(X) for all k ∈ {1, . . . , n},

and constants c_n > 0, d_n ∈ R such that

  Law(X_1 + · · · + X_n) = Law(c_n X + d_n).

Examples
Normally distributed random variables; the Cauchy distribution.

Jump Processes (or Lévy Processes) Some Examples 26 / 65


α-stable processes

Definition
X = {X(t) : t ≥ 0} is α-stable iff

  X(t) =ᵈ t^{1/α} X(1),   ∀ t ≥ 0.

Examples
  the Brownian motion (α = 2);
  the Cauchy process, which is α-stable with α = 1.

Generalised central limit theorem

Suitably normalised sums of i.i.d. random variables converge to an α-stable random variable if the variance is not finite!

Jump Processes (or Lévy Processes) Some Examples 27 / 65
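A short sketch simulating a path of an α-stable Lévy process by summing i.i.d. stable increments, using the self-similarity X(t) =ᵈ t^{1/α} X(1) to scale an increment over a step of length dt by dt^{1/α}; α = 1.5, β = 0 (the symmetric case) and the grid are illustrative choices.

```python
import numpy as np
from scipy.stats import levy_stable

# Simulate one path of a symmetric alpha-stable Levy process on [0, 1]
# by cumulating i.i.d. stable increments scaled by dt^(1/alpha).
rng = np.random.default_rng(4)
alpha, n, T = 1.5, 1_000, 1.0
dt = T / n

increments = levy_stable.rvs(alpha, 0.0, scale=dt ** (1.0 / alpha), size=n,
                             random_state=rng)
L = np.concatenate(([0.0], np.cumsum(increments)))   # L(t_k), t_k = k * dt
print(L[-1])
```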


The Lévy Process L

The Fourier transform of L is given by the Lévy-Khinchine formula¹:

  E e^{i a L(1)} = exp( i a b − σ² a²/2 + ∫_R ( e^{i a y} − 1 − i a y 1_{|y|≤1} ) ν(dy) ),

where a ∈ R, b ∈ R is a drift, σ² ≥ 0 is the variance of the Gaussian part, and
ν : B(R) → R₊ is a Lévy measure².

¹ Cont and Tankov, Financial modelling with jump processes, 2004; Sato, Lévy
processes and infinitely divisible distributions, 2013.
² A measure ν is called a Lévy measure iff it is σ-finite and ∫_{|z|≤1} |z|² ν(dz) < ∞.
Jump Processes (or Lévy Processes) Some Examples 28 / 65
The Lévy Process

[Figure: different trajectories of a 1.5-stable and a 1-stable Lévy process on [0, 1].]

Jump Processes (or Lévy Processes) Some Examples 29 / 65


The Lévy Process

[Figure: different trajectories of a 1/2-stable Lévy process on [0, 1]; note the very large jumps (second panel scaled by 10⁴).]

Jump Processes (or Lévy Processes) Some Examples 30 / 65


The Cramér-Lundberg model

The Cramér-Lundberg model appears in actuarial science and applied
probability, ruin theory, and also in risk theory. The aim is to describe the
vulnerability to insolvency/ruin of an insurance company. In such models the key
quantities of interest are the probability of ruin, the distribution of the surplus
immediately prior to ruin, and the deficit at the time of ruin.
It was introduced in 1903 by Filip Lundberg and further developed around 1930 by
Harald Cramér.

Jump Processes (or Lévy Processes) Application in finance and insurance 31 / 65


An insurance

the Story
Let us consider an insurance company, e.g., a car insurance. If an accident
happens, the client has a claim, and the insurance company has to pay the
costs according to the contract to the policyholder. However, the payout
amount is, in most cases, random and depends on the damage.

Jump Processes (or Lévy Processes) Cramer-Lundberg model 32 / 65


Modelling - the building blocks
Number of damages
One models the number of all damages through a claim-count process, i.e. a
stochastic process

  N : [0, ∞) × Ω → N₀ = {0, 1, 2, . . .},
  (t, ω) ↦ N(t, ω).

This random process models the incoming claims from the policyholders.

Loss distributions for individual hazards

The amount of loss per incurred claim for an individual risk Y_k, k = 1, 2, . . .,
is described by a non-negative random variable Y. It is usually modelled
independently of time. The dimension of the random variable is usually a
monetary unit. The individual risk being modelled can be, for example, a
production machine, an insured motor vehicle or, in a more general sense, a loan
(where, in the case of a loan, the loss is caused by the borrower failing to
properly meet the payment obligations).
Jump Processes (or Lévy Processes) Cramer-Lundberg model 33 / 65
Modelling - the building blocks
Risk process R
Similar to the overall loss distribution of several individual risks, one can model
the distribution of the sum of various assets under risk, e.g. the entire sum of a
portfolio of securities. Even more generally, one could model the future total
value of a company. This can be done by including all relevant loss processes,
pricing processes, premia, etc. into one overall view.

Calculation of the premium

Let X be the losses after one year. Then one can use different principles to
calculate the premium:
  X ↦ (1 + α) EX,  α > 0   (expectation principle)
  X ↦ EX + β Std(X),  β > 0   (standard deviation principle)
  X ↦ EX + γ Var(X),  γ > 0   (variance principle)
  X ↦ (1/δ) ln(E e^{δX}),  δ > 0   (exponential principle)

Jump Processes (or Lévy Processes) Cramer-Lundberg model 34 / 65
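A small sketch evaluating the four premium principles by Monte Carlo for an exponentially distributed yearly loss; the loss model and the loadings α, β, γ, δ are illustrative assumptions (δ must stay below the exponential rate so that E e^{δX} is finite).

```python
import numpy as np

# The four premium principles for an exponentially distributed loss (mean 1).
rng = np.random.default_rng(14)
X = rng.exponential(scale=1.0, size=200_000)
alpha, beta, gamma, delta = 0.2, 0.5, 0.1, 0.5

print((1 + alpha) * X.mean())                        # expectation principle
print(X.mean() + beta * X.std())                     # standard deviation principle
print(X.mean() + gamma * X.var())                    # variance principle
print(np.log(np.mean(np.exp(delta * X))) / delta)    # exponential principle
```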


Mathematical Modelling
Building blocks
Counting process: Let {τ_j : j ∈ N} be a family of mutually independent,
exponentially distributed random variables and T_j = Σ_{k=1}^{j} τ_k. Then

  N(t) := Σ_{l=1}^{∞} 1_{[T_l, ∞)}(t)

(the number of claim times T_l up to time t) is the counting process.

The risk process: Let {Y_j : j ∈ N} be a family of mutually independent
random variables, each modelling one single claim. Let a denote the cumulated
premia and interest rate of the savings. Then

  R(t) := a t − Σ_{l=1}^{N(t)} Y_l

models the entire risk process.


Jump Processes (or Lévy Processes) Cramer-Lundberg model 35 / 65
The Cramér-Lundberg model
Setting
  X(t) is the asset of the insurance company at time t;
  capital at the beginning: x0;
  a Lévy process L = {L(t) : t ≥ 0} with only positive jumps describes the
  claims of the customers; observe: L(t) = Σ_{l=1}^{N(t)} Y_l;
  a: premium rate paid by the customers.
Then the process X : [0, ∞) → R can be modelled by

  X(t) = x0 + a t − L(t).

What is important
How large do the premium a and the initial capital x0 have to be, such that the
insurance company does not go bankrupt? Let τ := inf{t ≥ 0 : X(t) < 0}. Then one is
interested in

  P_{x0}(τ < ∞) = ?
Jump Processes (or Lévy Processes) Cramer-Lundberg model 36 / 65
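A sketch of a Monte Carlo estimate of the ruin probability over a finite horizon for X(t) = x0 + at − Σ_{l≤N(t)} Y_l, with Poisson claim arrivals and exponential claim sizes; since ruin can only occur at claim times, it suffices to check X there. All parameter values (x0, a, claim rate, claim mean, horizon) are illustrative.

```python
import numpy as np

# Monte Carlo estimate of the finite-horizon ruin probability for the
# Cramer-Lundberg surplus process with Poisson arrivals and exponential claims.
rng = np.random.default_rng(5)
x0, a, lam, claim_mean, horizon, m = 10.0, 1.2, 1.0, 1.0, 100.0, 20_000

ruined = 0
for _ in range(m):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # next claim time
        if t > horizon:
            break
        claims += rng.exponential(claim_mean)    # cumulated claim amounts
        if x0 + a * t - claims < 0:              # ruin can only occur at claim times
            ruined += 1
            break

print(ruined / m)                                # estimated ruin probability
```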
Interpretation

Suppose we are interested in the probability that the insurance company will be
bankrupt at the end of the year. In particular, we are interested in whether the
event R(T) ≥ 0 occurs at time T = 1 or not.
Let X = R(1); we are thus interested in the quantity

  P(X > 0) = P(R(T) > 0).

If ϕ(x) = 1_{(0,∞)}(x), then

  E 1_{(0,∞)}(X) = P(X > 0) = P(R(T) > 0).

We know which components X(T) is made up of. However, it is often
analytically complicated to derive the distribution of the random variable X(T).
That is, it is often analytically impossible to compute the probability that X(T)
is greater than zero, i.e. the value c = P(X(T) > 0).

Jump Processes (or Lévy Processes) Cramer-Lundberg model 37 / 65


Possible distributions of the damage

Definition
A distribution Q on (R₊, B(R₊)) is heavy-tailed if

  ∫_{R₊} exp(sx) Q(dx) = ∞ for all s > 0.

A distribution Q is called subexponential if for all x > 0 the relation
Q((x, ∞)) > 0 holds and if

  lim_{x→∞} Q^{∗2}((x, ∞)) / Q((x, ∞)) = 2.

Examples
1. Lognormal distribution (heavy-tailed, subexponential)
2. Inverse Gaussian distribution (not heavy-tailed)
3. Weibull distribution with shape parameter β < 1 (subexponential)
4. Pareto distribution with parameters a and 1 (subexponential)
5. Gumbel distribution (subexponential)

Jump Processes (or Lévy Processes) Our example 38 / 65


Modelling of dependency

Vine copulas (see Claudia Czado)

Copula
Possibilities to connect τ_{j−1} with τ_j and
Y_{j−1} with Y_j.

Jump Processes (or Lévy Processes) Our example 39 / 65


Matlab commands

makedist(distname): possible distributions:
  Exponential: for the waiting times - independent or connected with a
  vine copula;
  distribution of the claims: Lognormal, Weibull, Gamma,
  GeneralizedExtremeValue;
  Uniform.
copularnd('type', parameter): Gaussian, t, Clayton, Gumbel, Frank
icdf: generate random variables with a specific distribution from uniformly
distributed random variables ⇒ inverse distribution function

Jump Processes (or Lévy Processes) Our example 40 / 65


Random numbers for the vine copula

Acceptance-rejection method
Let F be a distribution function with density f and G a distribution function with
density g which can be simulated, and for which there is a number d > 0 such that

  f(x) ≤ d · g(x),   ∀x ∈ R.

Then one can generate a random variable X ∼ F with the following algorithm:

Algorithm to generate a realization of X with density f according to the
rejection method:
1. Generate Y with distribution function G;
2. generate U, where U is uniformly distributed on [0, 1];
3. if U < f(Y)/(d g(Y)), set X := Y. Otherwise, go to step 1.

Jump Processes (or Lévy Processes) Our example 41 / 65
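A minimal sketch of the acceptance-rejection algorithm above, sampling a Beta(2, 2) target density with a uniform proposal on [0, 1] and the bound d = 1.5 (the maximum of the target density); the target/proposal pair is an illustrative choice, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    return 6.0 * x * (1.0 - x)          # Beta(2, 2) density on [0, 1]

d = 1.5                                  # f(x) <= d * g(x) with g = 1 on [0, 1]

def sample_one():
    while True:
        y = rng.uniform()               # step 1: Y ~ g (uniform proposal)
        u = rng.uniform()               # step 2: U ~ Uniform[0, 1]
        if u < f(y) / (d * 1.0):        # step 3: accept with prob f(Y)/(d g(Y))
            return y

samples = np.array([sample_one() for _ in range(10_000)])
print(samples.mean())                   # should be close to 1/2
```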


Random numbers for the Vine copula

Given U1 uniformly distributed, create U2 such that (U1, U2) ∼ C. A short
calculation shows that the conditional density of U2 given U1 = u1 is c(u1, ·),
where c is the density of C. If c(u1, v) ≤ d for all v, the rejection method applies:

Algorithm for generating a realisation of (U1, U2) with copula C according
to the rejection method:
1. Generate Y uniformly distributed on [0, 1];
2. generate U, where U is uniformly distributed on [0, 1];
3. if U < c(U1, Y)/d, set U2 := Y. Otherwise, go to step 1.

Jump Processes (or Lévy Processes) Our example 42 / 65


How large the sample should be

Let {X_i : i ∈ N} be independent, identically distributed random variables with
distribution F and let

  S_n := Σ_{i=1}^{n} X_i

be the n-th partial sum of the sequence {X_i : i ∈ N}. The strong law of large
numbers states that

  μ̂_n := (1/n) S_n → μ := EX_1,   P-a.s.,

if E|X_1| < ∞. Statistically, this means that the empirical mean of a sample
converges to the theoretical mean. If one would like to know the confidence
intervals of the estimator μ̂_n, one uses the central limit theorem. It states that

  √n (μ̂_n − μ) = (1/√n)(S_n − nμ) → N(0, σ²) in distribution,

where σ is the standard deviation of X_1.

Jump Processes (or Lévy Processes) Our example 43 / 65
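A small sketch computing a CLT-based confidence interval for the mean and the sample size needed for a prescribed half-width; the exponential sample, the confidence level and the target precision are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# CLT-based confidence interval for the mean, and the sample size required
# for a prescribed half-width.
rng = np.random.default_rng(7)
delta, half_width = 0.05, 0.01            # 95% confidence, target precision

X = rng.exponential(scale=1.0, size=10_000)
mu_hat, sigma_hat = X.mean(), X.std(ddof=1)
z = norm.ppf(1.0 - delta / 2.0)           # normal quantile, approx. 1.96

ci = (mu_hat - z * sigma_hat / np.sqrt(len(X)),
      mu_hat + z * sigma_hat / np.sqrt(len(X)))
n_needed = int(np.ceil((z * sigma_hat / half_width) ** 2))
print(ci, n_needed)
```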


How large the sample should be
Let {X_1, . . . , X_n} be random variables with a slowly varying tail function with
exponent κ. Let S_n = Σ_{i=1}^{n} X_i and suppose we are interested in c = EX_1. If the
confidence interval is to hold with certainty 1 − δ, set

  d = δ^{−1/κ}, or more precisely d = F_κ^{←}(1 − δ).

From the above it holds that

  n^{−1/κ} (S_n − nc) ∼ F_κ,

where F_κ is a κ-stable distribution. Thus

  P( n^{−1/κ} (S_n − nc) ≤ d ) ≈ 1 − δ.

After normalisation one obtains that

  P( (1/n) S_n − c ≤ d n^{1/κ − 1} ) ≈ 1 − δ.

However, this is a very rough estimate. In the case of a normal distribution, the
following would apply:

  P( (1/n) S_n − c ≤ x n^{−1/2} ) ≈ 1 − δ,

where x = Φ^{←}(1 − δ).
Jump Processes (or Lévy Processes) Our example 44 / 65
Attention - if the second moments are not finite
Definition
A distribution F is said to be stable if for each n there is a distribution F_n and an
α ∈ (0, 2] so that

  F =ᵈ n^{−1/α} F_n^{∗n},

i.e. F is the n-fold convolution of F_n, rescaled by n^{−1/α}.

Definition
A random variable X is said to have a stable distribution if for each n ∈ N there is
a family of n identically and independently distributed random variables
{X_n^i : i = 1, . . . , n}, a real number b_n and an α ∈ (0, 2] such that

  X =ᵈ n^{−1/α} ( Σ_{i=1}^{n} X_n^i − b_n ).

Such α-stable distributions occur relatively often in actuarial mathematics.
For example, the Cauchy distribution is 1-stable.

Positive stable distributions are leptokurtic and heavy-tailed.
Jump Processes (or Lévy Processes) Our example 45 / 65


Value at Risk (VaR)
VaR has become very popular in the financial sector. In particular, the Morgan
Guaranty Trust Company, J.P. Morgan for short, contributed to the spread of
Value at Risk in 1994 with its RiskMetrics™ system. Value at Risk achieved its
breakthrough in the second half of the 1990s, when Basel II prescribed it as a
binding risk measure in bank supervisory law.
If the loss of a credit portfolio is described by the random variable L, the VaR is
formally defined as follows:

  VaR_α(L) = inf_{l ∈ R} { P(L > l) ≤ 1 − α }.

It is the smallest number l which the loss L exceeds with a probability of at most
1 − α. The VaR can also be interpreted as the α-quantile of the distribution function,

  VaR_α(L) = F^{−1}(α).

The decisive probability for the calculation of the VaR is determined by means of
the confidence level α. In practice, common values are α = 0.95 or 0.99
(i.e. 1 − α = 0.05 or 0.01).

Jump Processes (or Lévy Processes) Value at Risk, expected shortfall 46 / 65


Expected Shortfall
The expected shortfall, also called Conditional VaR or Average VaR, was introduced in
1997 by Artzner et al. as a coherent risk measure. It is defined as the expected loss
in the event that the VaR is actually exceeded. Thus, it is the probability-weighted
average of all losses that are higher than the VaR. Let X be a random variable
describing the loss of a portfolio and VaR_α(X) be the VaR at a confidence level
of 100(1 − α) per cent. Then the expected shortfall is defined by

  ES_α(X) = E[ X | X ≥ VaR_α(X) ].

For a continuous distribution F_X the conditional expectation is calculated by

  ES_α(X) = (1/α) ∫_{VaR_α(X)}^{∞} x dF_X(x).

Also, the following (equivalent) definition can be found in the literature:

  ES_α(X) = (1/(1 − α)) ∫_{α}^{1} VaR_z(X) dz.

Jump Processes (or Lévy Processes) Value at Risk, expected shortfall 47 / 65
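A minimal sketch estimating VaR and expected shortfall empirically from simulated losses, here with tail probability 5% (confidence level 95%); the lognormal loss model and all parameters are illustrative.

```python
import numpy as np

# Empirical VaR and expected shortfall from simulated portfolio losses.
rng = np.random.default_rng(8)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
tail_p = 0.05

var = np.quantile(losses, 1.0 - tail_p)            # loss exceeded with prob <= 5%
es = losses[losses >= var].mean()                  # average loss beyond the VaR
print(var, es)
```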


Fractional Brownian motion

Let (Ω, F, (F_t)_{t≥0}, P) be a filtered probability space. Then β = {β(t), t ∈ R₊} is a one-
dimensional Brownian motion on (Ω, F, P) if
  β(0) = 0;
  the increments of β are independent;
  β is P-a.s. continuous;
  for 0 ≤ s ≤ t < ∞, the difference β(t) − β(s) is normally distributed with mean 0
  and variance t − s.

Generalisations:
  Lévy processes: keep independent increments, drop continuity;
  fractional Brownian motion: keep continuity, drop independent increments.

Jump Processes (or Lévy Processes) Gaussian Processes 48 / 65


Fractional Brownian motion

Definition
The fractional Brownian motion B_H is defined by

  B_H(t) = (1/Γ(H + 1/2)) ∫_0^t (t − s)^{H−1/2} dB(s),

where H is a real number in (0, 1), called the Hurst index or Hurst parameter
associated with the fractional Brownian motion. For H = 1/2 one gets the
Brownian motion.

Properties
  The process is self-similar; in particular, in terms of probability distributions
  we have B_H(at) ∼ |a|^H B_H(t).
  Stationary increments: B_H(t) − B_H(s) ∼ B_H(t − s).

General Gaussian processes Brownian motion with memory 49 / 65


Fractional Brownian motion
Properties
  The covariance is given by

    E[B_H(t) B_H(s)] = (1/2) (|t|^{2H} + |s|^{2H} − |t − s|^{2H}).

  Stationary increments: B_H(t) − B_H(s) ∼ B_H(t − s).
  Long-range dependence: for H > 1/2 the process exhibits long-range dependence, i.e.

    Σ_{n=1}^{∞} E[B_H(1)(B_H(n + 1) − B_H(n))] = ∞.

  Regularity: almost all trajectories are locally Hölder continuous of any order
  strictly less than H: for each such trajectory, for every T > 0 and for every
  ε > 0 there exists a (random) constant C(ω) > 0 such that

    |B_H(t) − B_H(s)| ≤ C(ω) |t − s|^{H−ε} for 0 ≤ s, t ≤ T.

General Gaussian processes Brownian motion with memory 50 / 65
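A sketch of exact simulation of fractional Brownian motion on a finite grid via the Cholesky factor of the covariance E[B_H(t)B_H(s)] given above; the Hurst index and grid size are illustrative (the O(n³) cost of the factorisation limits the grid).

```python
import numpy as np

# Exact simulation of fBm on a grid from the Cholesky factor of its covariance
#   E[B_H(t) B_H(s)] = (|t|^{2H} + |s|^{2H} - |t-s|^{2H}) / 2.
rng = np.random.default_rng(9)
H, n, T = 0.75, 500, 1.0
t = np.linspace(T / n, T, n)                       # grid without t = 0

cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))    # small jitter for stability
B_H = L @ rng.standard_normal(n)                   # one fBm sample path
print(B_H[-1])
```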


The spectral density
Definition
Let ξ be a stationary stochastic process. The function R_ξ : R → R defined by

  R_ξ(τ) := E(ξ(t) ξ(t + τ)),   τ > 0,        (2)

is called the autocorrelation function.

Definition
The spectral density of a stationary Gaussian process ξ = {ξ(t) : −∞ < t < ∞},

  S_ξ : R → R,

is defined by

  S_ξ(ω) := (1/2π) ∫_{−∞}^{∞} R_ξ(τ) e^{−iωτ} dτ,   ω ∈ R.

Here R_ξ : R → R is the autocorrelation function of ξ.

General Gaussian processes Brownian motion with memory 51 / 65




Processes with non–trivial spectral density

[Figure: realizations X(t, ω_1), X(t, ω_2) of a stationary Gaussian process with non-trivial spectral density, t ∈ [−100, 100].]

General Gaussian processes Gaussian processes defined by the spectral density 53 / 65
Processes with non–trivial spectral density

[Figure: further realizations X(t, ω_1), . . . , X(t, ω_4) of stationary Gaussian processes with different spectral densities, t ∈ [−100, 100].]

General Gaussian processes Gaussian processes defined by the spectral density 54 / 65
Applications

Some examples
  rough waves;
  a street;
  wind (Tacoma Bridge collapse, or the wobbling Millennium Bridge in London,
  2000, excited by pedestrians);
  surfaces . . .

General Gaussian processes Gaussian processes defined by the spectral density 55 / 65
A simple example

Let O be a connected domain in R^d with smooth boundary.

The heat equation:

  ∂/∂t u(t, ξ) = Σ_{i=1}^{d} ∂²/∂ξ_i² u(t, ξ) + f(u(t, ξ)),   ξ ∈ O, t > 0;
  u(0, ξ) = u0(ξ),   ξ ∈ O;                                                  (⋆)
  u(t, ξ) = 0,   ξ ∈ ∂O, t ≥ 0.

What we are looking for: a function

  u : [0, ∞) × O → R

such that (⋆) is satisfied.

Imagine: assume O = [0, 1]; u(t), t ∈ R₊, is the temperature of a conductor.

Stochastic Partial Differential Equations Motivation 56 / 65


A simple example

Let O be a connected domain in R^d with smooth boundary and
W = {W(t), 0 ≤ t < ∞} be a Wiener process over a probability space (Ω, F, P).
The heat equation with Gaussian noise³:

  ∂/∂t u(t, ξ) = Σ_{i=1}^{d} ∂²/∂ξ_i² u(t, ξ) + f(u(t, ξ)) + σ(u(t, ξ)) Ẇ(t),   ξ ∈ O, t > 0;
  u(0, ξ) = u0(ξ),   ξ ∈ O;                                                                (⋆)
  u(t, ξ) = 0,   ξ ∈ ∂O, t ≥ 0.

What we are looking for: a random function (or stochastic process)

  u : Ω × [0, ∞) × O → R

such that (⋆) is satisfied.
Imagine: assume O = [0, 1]; u(t), t ∈ R₊, is the temperature of a conductor
which is exposed to a random medium.

³ For details, see e.g. Da Prato and Zabczyk (1992), Chow (2007).
Stochastic Partial Differential Equations Motivation 57 / 65
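A minimal sketch of an explicit finite-difference / Euler-Maruyama scheme for the stochastic heat equation on O = [0, 1] with Dirichlet boundary conditions and space-time white noise, discretised as u ← u + Δt·Laplacian(u) + σ(u)·√(Δt/Δx)·N(0, 1); the choices f = 0, σ = 0.1 and all grid parameters are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Explicit scheme for du = u_xx dt + sigma(u) dW with space-time white noise,
# Dirichlet boundary conditions on [0, 1] and initial condition sin(pi x).
rng = np.random.default_rng(10)
nx, T = 100, 0.1
dx = 1.0 / nx
dt = 0.25 * dx**2                       # explicit scheme needs dt <= dx^2 / 2
steps = int(T / dt)

x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)                   # initial condition u0
sigma = lambda v: 0.1                   # constant noise intensity (illustrative)

for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    noise = rng.standard_normal(u.shape) * np.sqrt(dt / dx)
    u = u + dt * lap + sigma(u) * noise
    u[0] = u[-1] = 0.0                  # Dirichlet boundary conditions

print(u.max())
```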
Motivation

Björk and Landén. On the term structure of futures and forward prices.
Mathematical finance—Bachelier Congress, 2000 (Paris), pages 111–149,
Springer Finance, Springer, Berlin, 2002.

Carmona and Tehranchi. Interest rate models: an infinite dimensional stochastic


analysis perspective. Springer Finance, (2006).

E.g. the forward rate of a zero-coupon bond satisfies

  (d/dt) u(t, s) = (d/ds) u(t, s) + σ(u(t, s)) · Ẇ(t) + b_σ(u(t, s)),   t ≥ 0.

Stochastic Partial Differential Equations Motivation 58 / 65




Motivation

Kohn, Reznikoff and Vanden-Eijnden; Magnetic Elements at Finite Temperature


and Large Deviation Theory; Journal of Nonlinear Science, 15:223-253, 2005;
Submicron-sized ferromagnetic elements are the main building blocks in
magneto-electronics, where they are widely used as information devices.
As these elements get smaller and smaller, the effects of the thermal noise
increase, particularly the ability of the noise to change the magnetization
and thereby limit the data-retention time of the memory element.

After all, the thermal noise eventually allows the magnetization to sur-
mount any energy barrier, and thereby visit all possible configurations,
no matter what the applied field is.

Stochastic Partial Differential Equations Motivation 60 / 65


Motivation

Grün, Mecke and Rauscher; Thin-Film Flow induced by Thermal Noise; Journal of
Statistical Physics, 122:1261–1291, 2006;
While the spatial stochastic features of the pattern formation process
which appears in the experiment are the same as those predicted by the
thin-film equation, the time evolution of the patterns does not match.
. . . We take this as a hint that thermal noise might play a role in the
dynamics of the dewetting of these thin films.

Comparing molecular dynamics simulations and numerical solutions of


deterministic and stochastic hydrodynamical equations it has recently
become evident that noise plays a significant role in the breakup of fluid
nanojets.

Stochastic Partial Differential Equations Motivation 61 / 65


Motivation

Falkovich, Kolokolov, Lebedev, Mezentsev, and Turitsyn; Non-Gaussian error


probability in optical soliton transmission. Physica D, 195:1–28, 2004.
Solitons play an important role in the dynamics and statistics of non-
linear systems in fields as diverse as hydrodynamics, plasmas, nonlinear
optics, molecular biology, solid state physics, field theory, and astro-
physics. Presumably the most impressive practical implementation of
the fundamental soliton concept has been achieved in fiber optics.

The limitations on the error-free transmission distance are set mainly by


the spontaneous emission noise added by in-line optical amplifiers. Even
though the noise is weak, one cannot generally use a perturbation approach
to obtain the error probability, because errors occur when the signal changes
substantially.

Stochastic Partial Differential Equations Motivation 62 / 65


The Brownian motion

Let (Ω, F, P) be a probability space. Then β = {β(t), t ∈ R₊} is a one-
dimensional Brownian motion (BM) on (Ω, F, P) if
  β(0) = 0;
  the increments of β are independent;
  β is P-a.s. continuous;
  for 0 ≤ s ≤ t < ∞, the difference β(t) − β(s) is normally distributed with mean 0
  and variance t − s.
Let H be a Hilbert space, {e_n, n ∈ N} an ONB of H, {λ_n, n ∈ N} ∈ ℓ²(R), and {β_n, n ∈ N}
a family of independent, identically distributed Brownian motions. Then

  W(t) := Σ_{n∈N} λ_n e_n β_n(t),   t ≥ 0,

is an H-valued Brownian motion.

Stochastic Partial Differential Equations Motivation 63 / 65
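A sketch of a truncated series approximation of the H-valued Brownian motion W(t) = Σ λ_n e_n β_n(t) on H = L²(0, 1), using the sine basis e_n(x) = √2 sin(nπx) and the square-summable weights λ_n = 1/n; the truncation level, the weights and the grids are illustrative choices, not taken from the slides.

```python
import numpy as np

# Truncated series approximation of an H-valued Brownian motion on L^2(0, 1).
rng = np.random.default_rng(11)
n_modes, nx, nt, T = 50, 200, 100, 1.0
dt = T / nt

x = np.linspace(0.0, 1.0, nx)
n = np.arange(1, n_modes + 1)
e = np.sqrt(2.0) * np.sin(np.outer(n, np.pi * x))      # basis functions e_n(x)
lam = 1.0 / n                                          # weights lambda_n in l^2

dbeta = rng.normal(0.0, np.sqrt(dt), size=(nt, n_modes))
beta = np.cumsum(dbeta, axis=0)                        # independent BMs beta_n(t)

W = (beta * lam) @ e                                   # W(t_k, x_j), shape (nt, nx)
print(W.shape)
```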


Space Time White Noise in dimension 1

[Figure: space-time white noise in dimension 1 - Gaussian noise (left) and a Brownian motion path (right) on [0, 1].]

Stochastic Partial Differential Equations Motivation 64 / 65


Space Time White Noise in dimension 2

[Figure: space-time white noise in dimension 2 - Gaussian noise (left) and a Brownian sheet (right), smoothing constant 0.]

Stochastic Partial Differential Equations Motivation 65 / 65
