
Uppsala University

U.U.D.M. Project Report 2023:43
Degree project 30 credits
Master's Programme in Mathematics
2023

Option pricing with Quadratic Rough Heston Model

Marina Dushkina

Department of Mathematics, Uppsala University
Supervisor: Lauri Viitasaari
Examiner: Erik Ekström
Abstract
In this thesis, we study the quadratic rough Heston model and the corresponding simulation
methods. We implement and compare three commonly used simulation schemes (Hybrid,
Multifactor, and Multifactor hybrid) and calibrate the model to real-world SPX market data.
To speed up calibration, we apply quasi-Monte Carlo methods. We also study the effect of the
various calibration parameters on the volatility smile.

Public Scientific Summary


In the complex and volatile world of financial markets, professionals are continuously seek-
ing robust methods to predict and evaluate the pricing of financial options. This thesis explores
a recently proposed approach to tackle this challenge.
To understand this research, one needs a brief background on option pricing. An ’option’
in finance is a contract that provides the holder the right (but not the obligation) to buy or sell
a certain asset at a specified price before or on a particular date. The tricky part is determining
how much to pay for this option today, which is where option pricing models come in.
A familiar name in this area is the Black-Scholes model, a mathematical model used to cal-
culate the theoretical price of options. However, this model uses quite simplistic assumptions.
It assumes that the degree to which the asset's price fluctuates (which is called volatility) is
constant, and that the asset's returns always follow a certain predictable pattern, like a bell
curve. But life is rarely that simple, and the same goes for financial markets: they can be far
more complicated and unpredictable. Therefore, the Black-Scholes model sometimes does not
do a great job of reflecting what is really happening in the market.
This is where the proposed Quadratic Rough Heston (QRH) model comes in. The Heston
model, which this new model extends, is already a significant improvement over the Black-
Scholes model, as it assumes volatility can change over time. The ’rough’ in Quadratic Rough
Heston, meanwhile, introduces the idea that these changes are not smooth but rather jumpy,
much like the erratic fluctuations observed in real financial markets.
The QRH model gets its ’quadratic’ name from the mathematical equation used to describe
how volatility changes. In the standard Heston model, the volatility is driven by a linear func-
tion, while in the QRH model, it’s driven by a quadratic function, which allows for more com-
plex, realistic modeling of volatility changes.
The results of applying the QRH model, as presented in the thesis, have been impressive. By
using real-world data, it is shown that this model more accurately represents the actual observed
prices of options in real financial markets. This superior accuracy could potentially enable
financial professionals to make better decisions and even spot profitable trading opportunities.
In essence, the "Option Pricing with Quadratic Rough Heston Model" thesis is a major step
forward in financial modeling. By embracing the true, rough nature of financial markets, this
innovative model could be a game-changer in how we approach option pricing.
However, as with any model, the QRH model is not a crystal ball. It’s important to remember
that financial markets are influenced by a myriad of factors, many of which can be unpredictable.
The QRH model provides a tool - a more accurate tool, as per the research findings - but the
real-world application will always require a blend of sound judgment, prudent risk management,
and sometimes, a bit of good fortune.
Acknowledgments

I would like to express my deepest gratitude to my research supervisor,
Lauri Viitasaari. His constructive feedback and insights, rooted in his profound
knowledge across various fields of mathematics, have been invaluable.
I also want to thank him for his kindness and encouragement during my many
moments of self-doubt.
I also owe immense gratitude to my mom for her constant love and support.
To my patient husband, whose understanding has been a cornerstone of this
journey. And to my son Lev, my little bundle of joy, whose boundless love
and energy have been the driving forces that helped me move forward amidst
challenges and obstacles.
Contents

1 Introduction
2 Preliminaries
   2.1 Basics of Itô calculus
   2.2 Black-Scholes theory
3 Stochastic volatility models
   3.1 Classical Heston model
   3.2 Rough volatility models
       3.2.1 Fractional Brownian motion
       3.2.2 Riemann-Liouville fractional Brownian motion
       3.2.3 Rough volatility
       3.2.4 Zumbach effect and super-Heston rough models
   3.3 Quadratic Rough Heston model
4 Simulation of the Quadratic Rough Heston model
   4.1 Notation and numerical preliminaries
   4.2 Hybrid scheme
       4.2.1 Simulating Riemann-Liouville process
       4.2.2 Simulating quadratic rough Heston
   4.3 Multifactor scheme
       4.3.1 Kernel approximations
   4.4 Multifactor Hybrid Scheme
5 Calibration
   5.1 Provided data
       5.1.1 SPX vs SPXW
       5.1.2 Recovering interest rates
   5.2 Calibration process
   5.3 Objective function
       5.3.1 Variance reduction
   5.4 Calibrating b
   5.5 Summary of the algorithm
   5.6 Results
       5.6.1 Dependence of the smile on the calibration parameters
6 Conclusion
References
1. Introduction

The field of quantitative finance is rapidly evolving, with the continuous
development of new models and methods to price complex financial instruments
and manage risk ever more accurately. One such area of innovation is
the modeling of asset price dynamics and their accompanying volatility. The
Heston model, introduced by Steven Heston in 1993 [14], has been a seminal
work in this field, as it offers a more realistic representation of stochastic
volatility than earlier models. However, the classical Heston model has its
limitations.
Recent advancements have brought the understanding that the volatility process
is rough, that is, its trajectories are less Hölder regular than the trajectories
of Brownian motion. In that sense, the volatility process is close to the
fractional Brownian motion. This gave rise to a new class of models, the
so-called rough volatility models.
One such model is the quadratic rough Heston model, which is on the one
hand relatively simple and, on the other hand, flexible enough to allow a si-
multaneous calibration of SPX and VIX options. This improved flexibility
makes the quadratic rough Heston model a promising tool for practitioners in
the field of quantitative finance.
The objective of this thesis is to explore the implementation of the quadratic
rough Heston model, with an emphasis on understanding its key components,
and the numerical methods used to fit the model.
The rest of the thesis is organized as follows. In Chapter 2 we start with
the basics of Itô calculus and then introduce the Black-Scholes theory. In
Chapter 3 we go through the evolution of stochastic volatility models, starting
from the classical Heston model and working toward the main model of our
interest, the quadratic rough Heston model. In Chapter 4, we describe the
difficulties that arise for this class of models and discuss different simulation
schemes. In Chapter 5 we describe the available data, calibrate the model, and
analyze the results. We end the thesis with a short conclusion in Chapter 6.

2. Preliminaries

This chapter outlines the mathematical and financial preliminaries needed


for understanding more advanced models of the subsequent chapters, such
as stochastic volatility models, and in particular, the Quadratic Rough Heston
model. It starts with fundamental concepts, including Itô calculus and geomet-
ric Brownian motion, moves on to the Black-Scholes model, and then briefly
covers risk-neutral valuation and Monte Carlo methods in finance.

2.1 Basics of Itô calculus


Brownian motion
French mathematician Louis Bachelier was the first to apply stochastic pro-
cesses to financial markets. In 1900, he wrote a Ph.D. thesis titled “The Theory
of Speculation”, in which he introduced a mathematical model of Brownian
motion for financial markets. Bachelier’s thesis defense, which took place on
29 March 1900, is often regarded as the birth of quantitative finance.
The concept of Brownian motion dates back to 1827 when Scottish botanist
Robert Brown first observed the erratic movement of pollen particles sus-
pended in fluid. Later, American mathematician Norbert Wiener investigated
the mathematical properties of one-dimensional Brownian motion, which is
why Brownian motion is also referred to as the Wiener process.

Definition 2.1.1 (Brownian motion). A standard Brownian motion (or Wiener
process) $W$ is a continuous-time stochastic process defined on a probability
space $(\Omega, \mathcal{F}, P)$ with the following properties:
1. $W(0) = 0$.
2. For any sequence of times $0 \le t_1 < t_2 < \cdots < t_n$, the random variables
$W(t_2) - W(t_1), W(t_3) - W(t_2), \ldots, W(t_n) - W(t_{n-1})$ are independent.
3. The increments of $W$ are normally distributed, i.e., for any $0 \le s < t$, the
random variable $W(t) - W(s)$ is normally distributed with mean zero
and variance $t - s$, denoted as $W(t) - W(s) \sim N(0, t-s)$.
4. $W(t)$ has continuous sample paths.

One of the key features of Brownian motion is its lack of memory or de-
pendence on past values. In other words, the future movements of the process
are independent of its past movements. This property is known as the Markov
property.

Itô integral and SDEs
For option pricing, we need to model the underlying asset price as a stochas-
tic process, and this is where Itô calculus comes into play. The Japanese math-
ematician Kiyoshi Itô introduced the concept of stochastic integration in the
context of Brownian motion and established the Itô integral, which is a central
tool in stochastic calculus.
The Itô stochastic integral can be viewed as a stochastic generalization of
the Riemann-Stieltjes integral in analysis. The integrands and the integrators
are now stochastic processes:
$$\int_0^T S_t \, dW_t = \lim_{\Delta t \to 0} \sum_{i=1}^{n} S_{t_{i-1}} \left( W_{t_i} - W_{t_{i-1}} \right), \tag{2.1}$$
where $S_t$ is a locally square-integrable process adapted to the filtration generated
by the Brownian motion $W_t$, $\Delta t = \max_{i=1,\ldots,n}(t_i - t_{i-1})$, and the limit is
taken in probability.
Suppose that there exists a real number $s_0$ such that for all $t \ge 0$ the following
equation holds:
$$S_t = s_0 + \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s.$$
Then this stochastic integral expression can also be written in the stochastic
differential form:
$$dS_t = \mu_t \, dt + \sigma_t \, dW_t, \qquad S_0 = s_0.$$
A fundamental result in stochastic calculus is Itô’s Lemma, which provides
a generalization of the chain rule to stochastic calculus and provides a frame-
work for differentiating functions of stochastic processes.

Itô’s lemma
Theorem 2.1.1 (Itô’s lemma). Assume that the process X satisfies the stochas-
tic differential equation
dXt = µt dt + st dWt , (2.2)
where µ and s are adapted processes. Let f (t, x) be a C1,2 -function. Define the
process Z by setting Z(t) = f (t, Xt ). Then Z satisfies the stochastic differential
equation


∂f ∂f
d f (t, Xt ) = (t, Xt ) + µt (t, Xt )
∂t ∂x
1 ∂2 f ∂f
+ st2 2 (t, Xt ) dt + s (t, Xt ) dWt . (2.3)
2 ∂x ∂x

Proof. We will not provide a complete rigorous proof here, but will instead
give the heuristic argument presented in "Arbitrage Theory in Continuous Time"
by Thomas Björk [5].
If we do a Taylor expansion up to the second-order terms we obtain the following:
$$df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}(dX_t)^2 + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}(dt)^2 + \frac{\partial^2 f}{\partial t \partial x}\,dt\,dX_t. \tag{2.4}$$
By definition we have
$$dX_t = \mu_t\,dt + \sigma_t\,dW_t,$$
so, at least formally, we obtain
$$(dX_t)^2 = \mu_t^2 (dt)^2 + 2\mu_t\sigma_t\,dt\cdot dW_t + \sigma_t^2 (dW_t)^2.$$
The term containing $(dt)^2$ above is negligible compared to the $dt$-term in (2.2),
and it can also be shown that the $dt \cdot dW_t$ term is negligible compared to the
$dt$-term. Furthermore, using the fact $(dW_t)^2 = dt$ and plugging all this into the
Taylor expansion (2.4) gives us the result.

Using the assumptions of Itô’s lemma and following the formal multiplica-
tion rules 8 2
< (dt) = 0,
>
dt · dWt = 0,
>
:
(dWt )2 = dt,
equation (2.4) takes the form:

∂f ∂f 1 ∂2 f
df = dt + dXt + (dXt )2 .
∂t ∂x 2 ∂ x2
With the Itô calculus, we can derive the stochastic differential equation
(SDE) that describes the evolution of asset prices over time. Solving these
SDEs gives us derivative pricing models, which are used to determine the fair
prices of financial derivatives.

Geometric Brownian motion


Accurately modeling stock prices is crucially important in mathematical
finance. Brownian motion has some inherent limitations when applied to stock
prices. Specifically, it fails to capture the presence of trends in stock prices and
allows for negative values, which are not applicable in this context.
A more suitable stochastic process for modeling stock prices is the geo-
metric Brownian motion. This process effectively addresses the limitations of

Brownian motion by ensuring the non-negativity of stock prices and incorpo-
rating a drift term to account for trends.
It can be formally defined as the solution to the SDE
$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t, \qquad S_0 = s_0. \tag{2.5}$$
Here, $\mu$ is often referred to as the "drift" and $\sigma$ as the "volatility".
By applying the Itô formula to $\ln S_t$ and then applying the exponential function
to the result, we can show that the solution to the equation above is given by
$$S_t = S_0 \exp\left( \left( \mu - \frac{\sigma^2}{2} \right) t + \sigma W_t \right).$$
The variable $S_t$ follows a lognormal distribution (i.e. its logarithm is normally
distributed).
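As a small illustration (not part of the original thesis), the exact solution above can be simulated directly. The following Python sketch uses purely hypothetical parameter values; the function name and grid are chosen for illustration only.

```python
import numpy as np

def simulate_gbm_paths(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate geometric Brownian motion using its exact solution."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian increments, shape (n_paths, n_steps)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    t = np.linspace(dt, T, n_steps)
    # Exact solution S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t)
    paths = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
    return np.hstack([np.full((n_paths, 1), s0), paths])

# Example (hypothetical parameters): 1000 one-year paths on a daily grid
paths = simulate_gbm_paths(s0=100.0, mu=0.05, sigma=0.2, T=1.0,
                           n_steps=252, n_paths=1000)
print(paths.shape)  # (1000, 253)
```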

2.2 Black-Scholes theory


Options
The origins of options trading can be traced back to 332 BC, when Thales
of Miletus, a Greek philosopher, purchased the rights to acquire olives be-
fore the harvest season, resulting in substantial profits. Options trading gained
more formalization during the tulip mania in 1636, as individuals speculated
on the rapidly increasing value of tulips by trading options contracts on the
Amsterdam Stock Exchange. The modern era of options trading emerged
with the establishment of the Chicago Board Options Exchange (CBOE) in
1973, which introduced standardized exchange-traded call options. In 1977,
the CBOE also introduced put options, leading to the widespread adoption of
the standardized exchange-traded format for options trading that is prevalent
today. Nowadays, the most commonly followed equity index is the Standard
and Poor’s 500 (S&P 500), which is a stock market index tracking the stock
performance of the 500 largest companies listed on stock exchanges in the
United States.
Numerous varieties of options exist today, ranging from traditional to highly
exotic types. They differ in their payment structures, expiration dates, and
strike prices. In our work, we will consider the European type of traditional
options.
A European call (put) option is a contract that gives the owner the right but
not the obligation to buy (sell) the underlying asset at the predetermined price
K (strike price) and only at the predetermined time t = T (expiration time).
The payoff $X_T$ of a European call option at expiration time $T$ is given by
$$X_T = \Phi(S_T) = \max(S_T - K, 0).$$

If the underlying asset’s price at expiration T is greater than the strike price
K, the option will be exercised, and the payoff will be the difference between
ST and K. If the underlying asset’s price is less than or equal to the strike
price, the option will not be exercised, and the payoff will be zero. The payoff
functions for European call and put options are represented in Figure 2.1.

Figure 2.1. Payoff functions for European call and put options.

Black-Scholes model
The big question of how to price options has been a challenge for traders
and researchers for a long time. During the early days, people relied on their
intuition, experience, and on trade-offs between supply and demand. However,
as financial markets grew more complex, it became clear that this approach
was insufficient.
In 1973 the economists Fischer Black and Myron Scholes wrote a paper
“The Pricing of Options and Corporate Liabilities” [6] where they introduced
a mathematical model for the dynamics of a financial market. Later in 1997,
Scholes was awarded the Nobel Prize for this work, and though Black couldn’t
receive the award due to his passing in 1995, the Swedish Academy acknowl-
edged his significant contribution.
The Black-Scholes model consists of a risky asset $S_t$ (stock) and a risk-free
asset $B_t$ (bank account or bond) with dynamics given by:
$$dS_t = S_t\,\mu(t, S_t)\,dt + S_t\,\sigma(t, S_t)\,dW_t,$$
$$dB_t = r B_t\,dt.$$
The Black-Scholes differential equation is derived based on the following
set of assumptions:

1. The stock price follows a geometric Brownian motion.
2. There are no transaction costs or taxes.
3. There are no restrictions on short selling.
4. The risk-free interest rate r and the standard deviation s are known and
constant.
5. There are no arbitrage opportunities.
6. There are no dividends paid out during the life of the option.
7. Security trading is continuous.
8. The underlying asset is traded in a frictionless market.
9. European options are considered.

Risk-neutral valuation
Let $\Phi$ denote the payoff function. Then it can be shown that the price of the
corresponding option can also be expressed as
$$\Pi = e^{-rT}\, E[\Phi(X_T)],$$
where $X_t$ denotes the solution to
$$dX_t = r X_t\,dt + X_t\,\sigma(t, X_t)\,dW_t, \qquad X_0 = s_0.$$
This equation is similar to the equation for the stock price
$$dS_t = S_t\,\mu(t, S_t)\,dt + S_t\,\sigma(t, S_t)\,dW_t.$$
In fact, we can replace the objective measure $P$ by a new, so-called risk-neutral
measure $Q$, so that $S_t$ satisfies
$$dS_t = r S_t\,dt + S_t\,\sigma(t, S_t)\,dW_t^Q, \qquad S_0 = s_0, \tag{2.6}$$
where $W^Q$ is a Brownian motion with respect to the risk-neutral measure $Q$.
So we can also write that
$$\Pi = e^{-rT}\, E^Q[\Phi(S_T)], \tag{2.7}$$
where $E^Q$ denotes the expectation with respect to $Q$. This formula is central
to pricing options using Monte Carlo methods (see the next section for more
details).

Monte Carlo techniques


Using Monte Carlo techniques for option pricing typically involves simu-
lating paths of stochastic processes that describe the evolution of underlying
asset prices, and then taking averages instead of expectations in (2.7).
With the Black-Scholes model, the process of simulating an asset price boils
down to solving the SDEs of the form (2.6). Solving an SDE starts with sim-
ulating paths of a Brownian motion (or perhaps several independent or depen-
dent/correlated Brownian motions). Once that is done, multiple techniques are

available for solving the differential equation, but most of them are inspired
by the finite-difference techniques for solving ordinary differential equations.
In many cases, multiple (tens of thousands) simulations may be needed to
make sure that the average provides a good estimate for the expected value.
Since simulations are often expensive, various variance reduction techniques
are used, to reduce the variance without increasing the number of simulations.
An example of a variance reduction technique is the so-called method of an-
tithetic variates, where each random variable in the simulation is paired with
its negation, effectively canceling out some of the variance. This technique
leverages the negative correlation between the original and antithetic variates
to produce a more stable average. Thus, it can potentially reduce the number
of simulations required for accurate results. Other common variance reduc-
tion techniques include control variates, importance sampling, and stratified
sampling, each providing its own unique approach to mitigating variance in
the context of Monte Carlo simulations. By employing these techniques, researchers
and practitioners can obtain more reliable results with fewer simulations,
which reduces computational costs and increases the efficiency of the overall process.
Quasi-Monte Carlo methods are another alternative to traditional Monte
Carlo simulations that can provide more accurate results with fewer simula-
tions. These methods replace the random number generation process with
deterministic sequences of low-discrepancy numbers, such as Sobol or Halton
sequences, which are specifically designed to cover the sample space more
uniformly than purely random samples. Quasi-Monte Carlo methods exploit
the structure and regularity of these low-discrepancy sequences to achieve
faster convergence rates and, in turn, more accurate estimates of the expected
value with fewer samples.
For additional details, see [13], which is a standard reference for using
Monte Carlo methods for pricing derivatives.
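To make the pricing formula (2.7) and the antithetic-variates idea concrete, here is a minimal Python sketch (not from the thesis) that prices a European call under the Black-Scholes model by Monte Carlo; all parameter values are hypothetical.

```python
import numpy as np

def bs_call_mc(s0, K, T, r, sigma, n_sims=100_000, antithetic=True, seed=0):
    """Monte Carlo price of a European call under Black-Scholes,
    optionally using antithetic variates for variance reduction."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_sims)
    if antithetic:
        z = np.concatenate([z, -z])  # pair each draw with its negation
    # Terminal stock price under the risk-neutral measure Q
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(sT - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Example with hypothetical parameters; the exact Black-Scholes value is about 10.45
print(bs_call_mc(s0=100, K=100, T=1.0, r=0.05, sigma=0.2))
```

A quasi-Monte Carlo variant of this sketch could replace the standard normal draws with a low-discrepancy sequence, for example Sobol points from scipy.stats.qmc.Sobol mapped through the inverse normal CDF.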

Volatility
Of all the variables used in the Black-Scholes model, the only one that is not
known with certainty is volatility. Volatility is a measure of the asset’s price
fluctuations and it is often used as an indicator of the level of risk associated
with that instrument. High volatility implies uncertainty about the option price
and hence high risk, while low volatility means relatively stable price and low
risk.
There are two approaches to getting an estimate of volatility - to use historic
volatility or to use implied volatility.
Historic or realized volatility measures the standard deviation of historic
returns over a specific period in the past. The main drawback of using histor-
ical volatility is that it does not consider market sentiment about future price
movements.

In many cases, the second approach, the so-called implied volatility, is more
favorable. The implied volatility is the volatility level that is implied by the
current market price of an option, based on the Black-Scholes option pricing
model. It can be calculated by getting market price data for another "benchmark"
option written on the same underlying stock and with the same expiration
date as the option that we want to value. Denoting the price of the benchmark option
by p, the strike price by K, today’s observed value of the underlying stock
by S, and writing the Black-Scholes pricing formula for European calls by
c(S, t, T, r, a, K), we then solve the following equation for a:
p = c(S, t, T, r, a, K). (2.8)
In other words, we try to find the value of a which the market has implicitly
used for valuing the benchmark option. This value of a is called the implied
volatility, and we then use the implied volatility for the benchmark in order to
price our original option. Thus, we price the original option in terms of the
benchmark.
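As an illustration of solving equation (2.8) numerically (not part of the thesis), the following Python sketch recovers the implied volatility with a standard root finder; the quoted price and contract data are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, T, r, sigma, K):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, T, r, K):
    """Solve equation (2.8) for the volatility implied by an observed price."""
    return brentq(lambda sigma: bs_call(S, T, r, sigma, K) - price, 1e-6, 5.0)

# Hypothetical market data: a one-year at-the-money call quoted at 10.45
print(implied_vol(price=10.45, S=100.0, T=1.0, r=0.05, K=100.0))  # about 0.20
```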
A plot of observed implied volatility as a function of the exercise price for
a fixed expiration date forms a curve which is called the volatility curve. If the
Black-Scholes model represented the situation on the stock market completely
accurately, this curve would be a flat horizontal line (as the volatility is assumed
constant in the model).
In practice, however, we can observe different types of curvature. When
the market is relatively stable and risk is higher for out-of-the-money (for
call/put options - the price of an underlying asset is lower/higher than the strike
price) and in-the-money (for call/put options - the price of an underlying asset
is higher/lower than the strike price) options we observe a U-shaped pattern
called the volatility smile. Another possible shape is when the smile is skewed
to one side. This is called a volatility smirk and it can be observed when the
market is turbulent or for options on volatile stocks. For example, the implied
volatility for out-of-the-money put options (downside) is typically higher than
for out-of-the-money call options (upside) due to the market’s perception of a
higher risk for large price drops. See Figure 2.2 for an example.
The collection of all implied volatilities across a set of maturities forms the
so-called volatility surface (Figure 2.3). The shape and dynamics of volatility
surfaces are a subject of great interest to both academics and practitioners.
The main use of a volatility surface is pricing and valuing options. The
implied volatilities derived from the surface are crucial inputs in option pricing
models. By understanding how the implied volatilities change across different
strike prices and expiration dates, traders and investors can identify mispriced
options and can better estimate the fair value of options.

Volatility index
One notable application of implied volatility in the financial markets is the
Volatility Index or VIX.

Figure 2.2. Volatility smile and volatility smirk.

The Volatility Index, or VIX, was introduced in 1993 by the Chicago Board
Options Exchange. It measures the level of fear or stress in the stock market
and is often called the "Fear Index". Investors have been able to trade VIX futures
since 2004 and VIX options since 2006. By definition, it is a derivative of the
SPX index, and can be represented as
$$VIX_t = \sqrt{-\frac{2}{\Delta}\, E^Q\!\left[\log\left(S_{t+\Delta}/S_t\right) \mid \mathcal{F}_t\right]} \times 100, \tag{2.9}$$
where $S_t$ is the value of the SPX, $\Delta = 30$ days, $(\mathcal{F}_t)_{t \ge 0}$ is the natural filtration
of the market, and $E^Q$ is the risk-neutral expectation. According to this
formula, the VIX represents the annualized square root of the price of a contract
with payoff $-\log(S_{t+\Delta}/S_t)$ and measures the market's expectation
of 30-day S&P 500 volatility implicit in the prices of near-term S&P options.
When the VIX is up, it means that there are significant and rapid price fluc-
tuations in the S&P500. The VIX typically has a negative correlation with
the S&P500, so in periods of market stress, the VIX increases. VIX options
are a very useful diversification tool since they allow us to consider only the
direction of the volatility movement without taking into account market risk.

Figure 2.3. Volatility surface.

3. Stochastic volatility models

Since 1973, the Black-Scholes model’s assumptions have been relaxed and
generalized in many directions, leading to a myriad of models. We are partic-
ularly interested in the assumption of constant volatility which is one of the
main limitations of the Black-Scholes model.
In 1990 Hull and White [15] generalized the model by allowing volatility to
be stochastic. Since then, many stochastic volatility models have been intro-
duced and their popularity continues to grow alongside complexity. Among
others, the Heston model, SABR, and Bergomi model are notable examples.

3.1 Classical Heston model


The Heston model is a widely used stochastic volatility model proposed by
Steven Heston in 1993 [14]. It provides a closed-form solution for the price
of a European call option when the spot asset is correlated with volatility, and
it adapts the model to incorporate stochastic interest rates. In this model, the
asset price St satisfies the stochastic differential equation
$$dS_t = \mu_t S_t\,dt + \sqrt{V_t}\, S_t\,dZ_1(t),$$
where $\mu_t$ is the drift of stock price returns and the variance $V_t$ is stochastic,
and in its turn satisfies the SDE
$$dV_t = \lambda(\theta - V_t)\,dt + \eta\sqrt{V_t}\,dZ_2(t).$$
The meaning of the parameters is as follows:
• $\theta > 0$ is the long-time variance, or long-run average variance of the price
(as $t \to \infty$, the expected value of $V_t$ tends to $\theta$),
• $\lambda > 0$ is the rate of reversion to $\theta$,
• $V_0 > 0$ is the initial variance,
• $\eta > 0$ is the "volatility of the volatility",
and $Z_1, Z_2$ are two correlated Brownian motions such that
$$dZ_1 \cdot dZ_2 = \rho\,dt.$$


The term $\sqrt{V_t}$ describes the evolution of the volatility over time. The non-linear
nature of this term better captures the dynamics of volatility, such as its mean
reversion (the tendency of volatility to return to an average level in the long run)
and the presence of volatility clustering (periods of high volatility followed by
periods of low volatility, and vice versa, instead of being randomly distributed
over time).
This model has become popular for many reasons, including that it does not
require stock prices to follow a log-normal probability distribution and that it
has a closed-form characteristic function.
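Although the thesis does not simulate the classical Heston model this way, a short Euler (full-truncation) sketch in Python may help fix the notation above; the scheme and all parameter values here are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def simulate_heston(s0, v0, mu, lam, theta, eta, rho, T, n_steps, n_paths, seed=0):
    """Euler scheme with full truncation for the classical Heston model."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, s0, dtype=float)
    V = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(V, 0.0)           # full truncation keeps V usable
        S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        V += lam * (theta - v_pos) * dt + eta * np.sqrt(v_pos * dt) * z2
    return S, V

# Hypothetical parameters
S_T, V_T = simulate_heston(100.0, 0.04, 0.02, 2.0, 0.04, 0.3, -0.7, 1.0, 252, 10_000)
print(S_T.mean())
```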

3.2 Rough volatility models


Rough volatility models have gained incredible popularity in recent years
due to their ability to capture the empirical features of financial time series
more accurately than traditional models. For example, they are able to capture
the phenomenon of volatility clustering, where periods of high volatility tend
to be followed by other periods of high volatility, and vice versa (periods of
low volatility tend to be followed by other periods of low volatility). The
concept of rough volatility was first introduced by Gatheral et al. in the paper
“Volatility is Rough” [11] and is based on the observation that the realized
volatility is less regular than Brownian paths, and is similar in regularity to the
fractional Brownian motion, which we define below.

3.2.1 Fractional Brownian motion


Introduced in 1968 [18], the fractional Brownian motion (often abbreviated
as “fBM”) is a parametric family of stochastic processes, which generalizes
the notion of Brownian motion.
It is parameterized by the so-called Hurst parameter, $H \in (0,1)$. Fractional
Brownian motion with Hurst parameter $H$, denoted by $W_H$, is a continuous
Gaussian process such that $E W_H(t) \equiv 0$, $t > 0$, and its covariance function is
given by
$$E[W_H(t) W_H(s)] = \frac{1}{2}\left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right).$$
For $H = 1/2$, this reduces to
$$E[W_{1/2}(t) W_{1/2}(s)] = \min(t, s),$$
and $W_{1/2}(t)$ is in fact the standard Brownian motion.


For other values of the Hurst parameter, however, the increments of WH (t)
are correlated (unlike those of the standard Brownian motion):
• for 0 < H < 1/2, the increments are negatively correlated,
• for 1/2 < H < 1, the increments are positively correlated.
This fact can be interpreted as the process having memory properties. In par-
ticular, short memory for the negatively correlated increments and long mem-
ory when the increments are positively correlated.

The correlation between the increments is closely related to the regularity
of the sample paths of the fractional Brownian motion.
A well-known property of the standard Brownian motion states that its paths
are almost surely Hölder continuous with parameter $\gamma$ for any $\gamma < 1/2$. In other
words, for any $\varepsilon > 0$,
$$|W(t) - W(s)| \le |t-s|^{1/2 - \varepsilon}, \qquad t, s > 0.$$
For the fractional Brownian motion, the Hölder continuity properties are
expressed as
$$|W_H(t) - W_H(s)| \le |t-s|^{H - \varepsilon}, \qquad t, s > 0.$$
This means that for $H > 1/2$ the fBm paths are more Hölder regular (more
"smooth") than the paths of the standard Brownian motion, and less regular
for $H < 1/2$.
In the financial literature, processes whose paths are less Hölder regular
than the paths of Brownian motion are called rough.
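As an aside (not part of the thesis), the covariance function above allows exact simulation of fBm on a finite grid via a Cholesky factorization; the following Python sketch, with hypothetical grid and Hurst values, illustrates the idea (it is $O(n^3)$ and only practical for small grids).

```python
import numpy as np

def simulate_fbm(H, T, n_steps, n_paths, seed=0):
    """Exact simulation of fractional Brownian motion on a uniform grid,
    using a Cholesky factorization of the covariance function above."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt**(2 * H) + ss**(2 * H) - np.abs(tt - ss)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # small jitter for stability
    z = rng.standard_normal((n_steps, n_paths))
    paths = (L @ z).T                       # shape (n_paths, n_steps)
    return np.hstack([np.zeros((n_paths, 1)), paths])

# Hypothetical example: rough paths with H = 0.1
paths = simulate_fbm(H=0.1, T=1.0, n_steps=200, n_paths=5)
```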

3.2.2 Riemann-Liouville fractional Brownian motion


The process
$$\frac{1}{\Gamma(H + 1/2)} \int_0^t (t-s)^{H - 1/2}\, dW_s, \qquad H \in (0,1),$$
is closely related to the fractional Brownian motion $W_H(t)$, and is variously
known as the Riemann-Liouville process, fractional Brownian motion of the
Riemann-Liouville type [16], or Type 2 fractional Brownian motion [19]. Its
paths have the same Hölder regularity properties as those of the fractional Brownian
motion $W_H(t)$ and exhibit similar long memory properties. Due to this
similarity, we will also occasionally denote it by $W_H(t)$.
As we will see in the following sections, integrals similar to the Riemann-Liouville
process play a key role in rough volatility models.
In the literature, other constants can be used instead of the Hurst parameter $H$.
For example, Gatheral usually sets $\alpha = H + 1/2$, so that
$$W_t^H = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha - 1}\, dW_s.$$
We will mostly use $\beta = 1/2 - H$, to simplify the integral expression to
$$W_t^H = \frac{1}{\Gamma(1 - \beta)} \int_0^t (t-s)^{-\beta}\, dW_s.$$
In this notation, the standard Brownian motion corresponds to
$$H = 1/2, \qquad \alpha = 1, \qquad \beta = 0,$$
and rough trajectories are exhibited by
$$0 < H < 1/2, \qquad 1/2 < \alpha < 1, \qquad 0 < \beta < 1/2.$$

3.2.3 Rough volatility
The concept of rough volatility is based on the observation that the realized
volatility can be characterized by a fractional Brownian motion with Hurst
parameter 0 < H < 1/2. As we have seen in the previous section, when H <
1/2, the time series has a long memory and exhibits so-called anti-persistence,
meaning that large changes in price are likely to be followed by large changes
in the opposite direction.
This observation was first demonstrated in 2018 by Gatheral, Jaisson, and
Rosenbaum in [11], where the authors also showed that H can vary over time,
but typically lies between 0.06 and 0.2, with higher values occurring during
financial crises.
Since then, multiple rough volatility models have been introduced. One of
them is the so-called rough Heston model, introduced in [10]:
$$dS_t = S_t \sqrt{V_t}\, dW_t,$$
$$V_t = V_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \lambda (\theta - V_s)\, ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \lambda \nu \sqrt{V_s}\, dB_s. \tag{3.1}$$
The parameters $\lambda, \theta, V_0$ and $\nu$ are positive real numbers, playing here the
same role as in the classical Heston model. $B_t$ and $W_t$ are two Brownian motions
with the quadratic covariation process given by
$$dW_t \cdot dB_t = \rho\, dt.$$
Since we want the volatility process to be rough, the parameter $\alpha = H + 1/2$
should lie in the interval $(1/2, 1)$. Setting $\alpha = 1$, we can reduce the rough
Heston model to the classical Heston model.

3.2.4 Zumbach effect and super-Heston rough models


In [9], El Euch, Fukasawa, and Rosenbaum list four stylized (i.e. empiri-
cally established) facts about the microstructure of financial markets:
1. Endogeneity: most orders are sent by algorithms in reaction to other
orders, without any extrinsic economical motivation;
2. No arbitrage: at the high-frequency scales, a strategy that is profitable
on average is hardly possible;
3. Buying/selling asymmetry: when the market maker receives a buy order,
they are likely to raise the price by less than they would lower the price
following a sell order of the same size;
4. Long memory: A significant proportion of transactions is due to large
orders (metaorders), which are not executed at once but split in time by
trading algorithms.

They show that the classical Heston model is sufficient if one only needs to
take into account facts 1-3. However, the classical Heston model fails to catch
fact 4, and this is where the rough Heston comes to help.
However, there is also a fifth stylized fact, not reflected in the rough Heston
model, namely, the so-called Zumbach effect.
In 2003, Zumbach [17] described the effect, which was later named after
him, by measuring the impact of past price trends on volatility. He showed
that price trends tend to increase volatility. There are two types of this effect:
• Weak Zumbach effect: the predictive power of past squared returns on
future volatility is stronger than that of past volatility on future squared
returns. To check this on data, one typically shows that the covariance
between past squared price returns and future realized volatility (over a
given duration) is larger than that between past realized volatility and
future squared price returns.
• Strong Zumbach effect: the conditional law of future volatility depends
not only on the past volatility trajectory but also on past returns.
To be able to successfully model the strong Zumbach effect, Dandapani et
al. [8] introduced a class of super-Heston rough volatility models, based on
quadratic versions of so-called Hawkes processes. The rough Heston model is
a special case of the super-Heston models.
Without giving a formal definition, we will say that in super-Heston models,
the square root in the integral
$$\frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \lambda \nu \sqrt{V_s}\, dB_s$$
from (3.1) can be replaced by a more general function.

3.3 Quadratic Rough Heston model


In 2020, Gatheral, Jusselin and Rosenbaum [12] introduced the quadratic
rough Heston model, which is a special case of the super-Heston model.
In the quadratic rough Heston model, the dynamics of the asset $S_t$ and its
spot variance $V_t$ satisfy
$$dS_t = S_t \sqrt{V_t}\, dW_t, \qquad V_t = a (Z_t - b)^2 + c, \qquad a, b, c > 0, \tag{3.2}$$
$$Z_t = \int_0^t (t-s)^{\alpha-1} \frac{\lambda}{\Gamma(\alpha)} \big(\theta_0(s) - Z_s\big)\, ds + \int_0^t (t-s)^{\alpha-1} \frac{\lambda}{\Gamma(\alpha)} \eta \sqrt{V_s}\, dW_s, \tag{3.3}$$
where $\alpha \in (1/2, 1)$, $\lambda > 0$, $\eta > 0$, and $\theta_0$ is a deterministic function.

Contrary to, for example, the classical Heston model, in this model the asset
price S and its volatility V are driven by the same Brownian motion W .
The interpretation of parameters a, b, and c is as follows:
• a > 0 is the sensitivity of volatility to past price returns. The lower a is,
the flatter the volatility smile is.
• b > 0 is responsible for the asymmetry of the feedback effect. Volatility
is higher when Z is negative and lower when Z is positive considering
that they have the same absolute value (shifts of the volatility smile to
the left or to the right);
• c > 0 is the minimal instantaneous variance (shifts the volatility smile
upwards or downwards).
Apart from the simplicity of this model (e.g., the fact that the model is
driven by a single Brownian motion), the quadratic rough Heston model is in-
teresting because it was the first model that made it possible to simultaneously
solve the problem of fitting both SPX and VIX volatility smiles.
In the remaining part of the thesis, we focus on the questions of implement-
ing the quadratic rough Heston model.

4. Simulation of the Quadratic Rough Heston
model

In the literature, three main methods of simulating the quadratic rough He-
ston model are used:
1. Hybrid scheme by Bennedsen et al., [3];
2. Multifactor scheme by Abi Jaber et al., [2];
3. Hybrid Multifactor scheme by Rømer [22], combining the other two
schemes.

4.1 Notation and numerical preliminaries


Simplified notation for the model
Before we proceed to the simulation methods, let us first describe the nota-
tion that we are going to use.
Recall from the previous section that the quadratic rough Heston model is given
by
$$dS_t = S_t \sqrt{V_t}\, dW_t, \qquad V_t = a (Z_t - b)^2 + c, \qquad a, b, c > 0,$$
$$Z_t = \int_0^t (t-s)^{\alpha-1} \frac{\lambda}{\Gamma(\alpha)} \big(\theta_0(s) - Z_s\big)\, ds + \int_0^t (t-s)^{\alpha-1} \frac{\lambda}{\Gamma(\alpha)} \eta \sqrt{V_s}\, dW_s,$$
where $\alpha \in (1/2, 1)$, $\lambda > 0$, $\eta > 0$ and $\theta_0$ is a deterministic function.


We can then simplify the expression for $Z_t$. The constants $\lambda, \eta$ are found
by calibration against real-world data, so we can hide $\Gamma(\alpha)$ within these
constants. Additionally, we can substitute $\sigma_t$ for $\sqrt{V_t}$, to make the connection
between this process and the volatility in the classical Black-Scholes model
clearer. This results in
$$Z_t = \tilde{\lambda} \int_0^t (t-s)^{\alpha-1} \big(\theta_0(s) - Z_s\big)\, ds + \tilde{\eta} \int_0^t (t-s)^{\alpha-1} \sigma_s\, dW_s,$$
where
$$\tilde{\lambda} = \frac{\lambda}{\Gamma(\alpha)}, \qquad \tilde{\eta} = \frac{\lambda}{\Gamma(\alpha)}\,\eta.$$
We can drop the tildes, and redefining $\lambda = \tilde{\lambda}$, $\eta = \tilde{\eta}$, we can write
$$Z_t = \lambda \int_0^t (t-s)^{\alpha-1} \big(\theta_0(s) - Z_s\big)\, ds + \eta \int_0^t (t-s)^{\alpha-1} \sigma_s\, dW_s.$$
As is typically done in the literature (e.g., [12, 21]), we can narrow the
space of admissible functions $\theta_0(s)$ in such a way that the expression may be
rewritten as
$$Z_t = z_0 - \lambda \int_0^t (t-s)^{\alpha-1} Z_s\, ds + \eta \int_0^t (t-s)^{\alpha-1} \sigma_s\, dW_s,$$
where $z_0 \in \mathbb{R}$ is a new calibration constant that replaces $\theta_0(s)$.
Moreover, it is more convenient to substitute $-\beta$ for $\alpha - 1$, so that we can
write
$$Z_t = z_0 - \lambda \int_0^t (t-s)^{-\beta} Z_s\, ds + \eta \int_0^t (t-s)^{-\beta} \sigma_s\, dW_s.$$
Thus, we have reformulated the quadratic rough Heston model in the form
$$dS_t = S_t \sigma_t\, dW_t, \qquad \sigma_t = \sqrt{a (Z_t - b)^2 + c}, \tag{4.1}$$
$$Z_t = z_0 - \lambda \int_0^t (t-s)^{-\beta} Z_s\, ds + \eta \int_0^t (t-s)^{-\beta} \sigma_s\, dW_s, \tag{4.2}$$
where $a, b, c, \lambda, \eta > 0$, $\beta \in (0, 1/2)$, $z_0 \in \mathbb{R}$.

Forward prices and stock process

Note that in (4.1), the interest rate (and dividend yield) is zero, unlike in the
typical setting of the Black-Scholes model. This is because volatility models
are usually formulated in terms of so-called forward prices, i.e. the process
$F_t = e^{-(r-q)t} S_t$. Suppose the price process $S_t$ satisfies
$$dS_t = (r - q) S_t\, dt + S_t \sigma_t\, dW_t.$$
Then the forward price process is defined as $F_t = e^{-(r-q)t} S_t$. By applying the
Itô formula to $F_t$, we see that it satisfies the same equation as in (4.1).
Using forward price processes allows us to simplify the notation without
losing any information; we can always return to the stock price process using
multiplication by $e^{(r-q)t}$.

From the volatility process to the stock price process

Suppose the volatility process $\sigma_t$ is known, and we want to find the stock
price process
$$dS_t = S_t \sigma_t\, dW_t.$$
Similarly to how it is done when deriving geometric Brownian motion, we can
apply the Itô formula to $\ln S_t$, and get
$$d \log S_t = \frac{1}{S_t}\, dS_t - \frac{1}{2} \frac{1}{S_t^2}\, dS_t \cdot dS_t = \sigma_t\, dW_t - \frac{1}{2} \sigma_t^2\, dt.$$
Then, integrating, we see that
$$S_t = S_0 \exp\left( \int_0^t \sigma_s\, dW_s - \frac{1}{2} \int_0^t \sigma_s^2\, ds \right). \tag{4.3}$$
Similarly, if the drift term is present,
$$dS_t = (r - q) S_t\, dt + S_t \sigma_t\, dW_t,$$
then
$$S_t = S_0 \exp\left( \int_0^t \sigma_s\, dW_s - \frac{1}{2} \int_0^t \sigma_s^2\, ds + (r - q) t \right).$$

In other words, we can primarily focus on simulating the volatility process,


and once that is done, we can easily compute the stock price process using
elementary numeric stochastic integration.
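As a small sketch of that last step (not from the thesis), equation (4.3) can be discretized with left-point (Itô) sums; the helper below is illustrative and its name, inputs, and parameter values are assumptions.

```python
import numpy as np

def stock_from_vol(sigma, dW, dt, s0=1.0):
    """Build the (forward) price path from a simulated volatility path,
    discretizing equation (4.3) with left-point (Ito) sums."""
    sigma = np.asarray(sigma)
    dW = np.asarray(dW)
    log_increments = sigma * dW - 0.5 * sigma**2 * dt
    log_path = np.concatenate([[0.0], np.cumsum(log_increments)])
    return s0 * np.exp(log_path)

# Hypothetical example: constant volatility reduces to geometric Brownian motion
rng = np.random.default_rng(0)
N, T = 252, 1.0
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)
path = stock_from_vol(np.full(N, 0.2), dW, dt, s0=100.0)
print(path[-1])
```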

4.2 Hybrid scheme


In 2017, Bennedsen, Lunde and Pakkanen [3] introduced the so-called Hy-
brid scheme to address the need to simulate various rough volatility models,
that were becoming popular at that time.
We will use the hybrid scheme to simulate
$$Y_t = \int_0^t (t-s)^{-\beta} \sigma_s\, dW_s, \qquad t \ge 0,$$
where $\sigma_s$ is a predictable process with locally bounded trajectories, such
that $E[\sigma_t^2] < \infty$ for all $t \in \mathbb{R}$, and $\beta \in (-1/2, 1/2) \setminus \{0\}$.
Note that the original hybrid scheme is more general than this. In this thesis,
however, we do not need the additional complexity of the general case.

Discretization
Suppose we want to simulate $Y$ on an interval $[0, T]$, for some $T > 0$. We
divide the interval $[0, T]$ into $N$ subintervals, creating the grid
$$\{0 = t_0, t_1, \ldots, t_N = T\}, \qquad t_i = i \cdot \Delta t,$$
where $\Delta t := T/N$.
Our goal is to simulate
$$Y_{t_i}, \qquad i = 0, 1, \ldots, N.$$
First, we can write
$$Y_{t_{i+1}} = \int_0^{t_{i+1}} (t_{i+1} - s)^{-\beta} \sigma_s\, dW_s = \sum_{k=0}^{i} \int_{t_k}^{t_{k+1}} (t_{i+1} - s)^{-\beta} \sigma_s\, dW_s,$$
and, assuming that $\sigma$ does not vary too much, approximate
$$Y_{t_{i+1}} \approx \sum_{k=0}^{i} \sigma_{t_k} \int_{t_k}^{t_{k+1}} (t_{i+1} - s)^{-\beta}\, dW_s.$$
For all terms but the last one, we approximate $(t_{i+1} - s)^{-\beta}$ by a constant
function within each integration interval, i.e., write
$$\sum_{k=0}^{i-1} (t_{i+1} - b_k)^{-\beta} \sigma_{t_k} \Delta W_{t_k}.$$
Here, $\Delta W_{t_k} = W_{t_{k+1}} - W_{t_k}$, and $(b_k)_{k=0}^{N-1}$ is a sequence of freely chosen evaluation
points within the discretization intervals, $b_k \in [t_k, t_{k+1}]$.
In the last term, the kernel contains a singularity. We keep the singular term
intact within the integral:
$$\sigma_{t_i} \int_{t_i}^{t_{i+1}} (t_{i+1} - s)^{-\beta}\, dW_s.$$
In the original paper, there was a suggestion to preserve the singularity
intact in several of the last terms, but the paper's authors noted that the performance
is already good when using only one term. We prefer to keep the algorithm as
simple as possible in this case and will not consider additional terms.
In other words, our approximation in the hybrid scheme is given by
$$Y_{t_{i+1}}^{(N)} := \sum_{k=0}^{i-1} (t_{i+1} - b_k)^{-\beta} \sigma_{t_k} \Delta W_{t_k} + \sigma_{t_i} \int_{t_i}^{t_{i+1}} (t_{i+1} - s)^{-\beta}\, dW_s.$$
If we denote
$$\Delta W_{t_i}^{\beta} := \int_{t_i}^{t_{i+1}} (t_{i+1} - s)^{-\beta}\, dW_s, \tag{4.4}$$
this simplifies to
$$Y_{t_{i+1}}^{(N)} := \sum_{k=0}^{i-1} (t_{i+1} - b_k)^{-\beta} \sigma_{t_k} \Delta W_{t_k} + \sigma_{t_i} \Delta W_{t_i}^{\beta}. \tag{4.5}$$

Simulating Gaussian increments

The remaining question is how to simulate $\Delta W_{t_k}$ and $\Delta W_{t_k}^{\beta}$. They are both
centered Gaussian variables. However, they are not independent of each other.
To determine their covariance matrix, we use the Itô isometry:
$$\operatorname{cov}\big(\Delta W_{t_k}, \Delta W_{t_k}^{\beta}\big) = \int_{t_k}^{t_{k+1}} (t_{k+1} - s)^{-\beta}\, ds = \frac{1}{1-\beta} (\Delta t)^{1-\beta}, \tag{4.6}$$
$$\operatorname{cov}\big(\Delta W_{t_k}^{\beta}, \Delta W_{t_k}^{\beta}\big) = \int_{t_k}^{t_{k+1}} (t_{k+1} - s)^{-2\beta}\, ds = \frac{1}{1-2\beta} (\Delta t)^{1-2\beta}, \tag{4.7}$$
and
$$\operatorname{cov}\big(\Delta W_{t_k}, \Delta W_{t_k}\big) = \Delta t. \tag{4.8}$$
In other words, they are jointly Gaussian, with mean zero and the covariance
matrix given by
$$\begin{bmatrix} \Delta t & \frac{1}{1-\beta} (\Delta t)^{1-\beta} \\[4pt] \frac{1}{1-\beta} (\Delta t)^{1-\beta} & \frac{1}{1-2\beta} (\Delta t)^{1-2\beta} \end{bmatrix}. \tag{4.9}$$
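In code, drawing such pairs is a single call to a multivariate normal sampler. The small Python sketch below (not from the thesis; parameter values are illustrative) builds the covariance matrix (4.9) and samples from it.

```python
import numpy as np

def sample_increments(beta, dt, n, rng):
    """Draw n pairs (dW, dW_beta) from the joint Gaussian law with covariance (4.9)."""
    cov = np.array([
        [dt,                          dt**(1 - beta) / (1 - beta)],
        [dt**(1 - beta) / (1 - beta), dt**(1 - 2 * beta) / (1 - 2 * beta)],
    ])
    return rng.multivariate_normal(np.zeros(2), cov, size=n)  # shape (n, 2)

# Hypothetical parameters: beta = 0.4 (H = 0.1), daily grid over one year
rng = np.random.default_rng(0)
incr = sample_increments(beta=0.4, dt=1 / 252, n=252, rng=rng)
dW, dW_beta = incr[:, 0], incr[:, 1]
```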

4.2.1 Simulating Riemann-Liouville process


Before we can simulate the volatility process, we can try out the simulation
method with the simpler Riemann-Liouville process.
To simulate the Riemann-Liouville process with Hurst parameter $H$, we put
$\beta = 1/2 - H$, and then the hybrid scheme takes the form
$$W_{t_{i+1}}^H = \frac{1}{\Gamma(1-\beta)} \left( \sum_{k=0}^{i-1} (t_{i+1} - b_k)^{-\beta} \Delta W_{t_k} + \Delta W_{t_i}^{\beta} \right), \qquad i = 0, \ldots, N-1. \tag{4.10}$$
To be concrete, we can put $b_k = t_k$, and then we get $t_{i+1} - b_k = t_{i+1} - t_k = t_{i-k+1}$,
resulting in
$$W_{t_{i+1}}^H = \frac{1}{\Gamma(1-\beta)} \left( \sum_{k=0}^{i-1} t_{i-k+1}^{-\beta} \Delta W_{t_k} + \Delta W_{t_i}^{\beta} \right), \qquad i = 0, \ldots, N-1. \tag{4.11}$$
Simulation results for various values of the Hurst parameter are shown in
Figures 4.1-4.4.
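A direct Python sketch of recursion (4.11) is given below. It is not the thesis's actual implementation, and the parameter values are illustrative; the increments are drawn from the covariance matrix (4.9) as above.

```python
import numpy as np
from scipy.special import gamma

def simulate_rl_fbm(H, T, N, seed=0):
    """Hybrid-scheme simulation of the Riemann-Liouville process, eq. (4.11),
    with evaluation points b_k = t_k."""
    beta = 0.5 - H
    dt = T / N
    rng = np.random.default_rng(seed)
    # joint Gaussian increments (dW, dW_beta) with covariance matrix (4.9)
    cov = np.array([
        [dt, dt**(1 - beta) / (1 - beta)],
        [dt**(1 - beta) / (1 - beta), dt**(1 - 2 * beta) / (1 - 2 * beta)],
    ])
    incr = rng.multivariate_normal(np.zeros(2), cov, size=N)
    dW, dW_beta = incr[:, 0], incr[:, 1]
    t = dt * np.arange(1, N + 1)            # t_1, ..., t_N
    kernel = t**(-beta)                     # t_j^{-beta}, j = 1..N
    W_H = np.zeros(N + 1)
    for i in range(N):
        # sum_{k=0}^{i-1} t_{i-k+1}^{-beta} dW_k  +  dW_beta_i
        conv = np.dot(kernel[1:i + 1][::-1], dW[:i]) if i > 0 else 0.0
        W_H[i + 1] = (conv + dW_beta[i]) / gamma(1 - beta)
    return W_H

# Hypothetical example: one rough path with H = 0.1 on [0, 1]
path = simulate_rl_fbm(H=0.1, T=1.0, N=500)
```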

Remark. Fast simulation of FBM using FFT


In the formulas above, we need to compute $\sum_{k=0}^{i-1} t_{i-k+1}^{-\beta} \Delta W_{t_k}$ for every $i = 0, \ldots, N-1$,
which means that the total complexity of the scheme is $O(N^2)$.
We can speed this up by noting that the sequence $\sum_{k=0}^{i-1} t_{i-k+1}^{-\beta} \Delta W_{t_k}$ is a discrete
convolution of the sequences $\{t_k^{-\beta}\}$ and $\{\Delta W_{t_k}\}$. The Fast Fourier Transform
algorithm allows us to compute discrete convolutions in $O(N \log N)$ time,
which is an improvement compared to $O(N^2)$.
However, we cannot apply the same approach to more complicated processes
(including the quadratic rough Heston model), where $\sigma$ depends on the path
simulated so far.
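For the Riemann-Liouville case, the FFT speed-up amounts to one convolution call; the sketch below is illustrative (function name and indexing convention are assumptions) and reproduces the partial sums used in the loop above.

```python
import numpy as np
from scipy.signal import fftconvolve

def all_partial_sums(beta, dt, dW):
    """All sums S_i = sum_{k=0}^{i-1} t_{i-k+1}^{-beta} * dW_k, i = 0..N-1,
    computed at once as a single discrete convolution in O(N log N)."""
    N = len(dW)
    t = dt * np.arange(2, N + 1)              # t_2, ..., t_N
    full = fftconvolve(t**(-beta), dW)
    return np.concatenate([[0.0], full[:N - 1]])
```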

Figure 4.1. Riemann-Liouville fBm with Hurst parameter H = 0.05.

Figure 4.2. Riemann-Liouville fBm with Hurst parameter H = 0.2.

Figure 4.3. Riemann-Liouville fBm with Hurst parameter H = 0.5.

Figure 4.4. Riemann-Liouville fBm with Hurst parameter H = 0.9.

4.2.2 Simulating quadratic rough Heston


Now let us get back to the simulation of
Z t Z t
b b
30 Zt = z0 l (t s) Zs ds + h (t s) ss dWs ,
0 0
where b 2 (0, 1/2), l , h, a, b, c > 0, and
q
ss = a (Zs b)2 + c.
The discretization of the second (stochastic) integral is done along the lines of
Section 4.2, i.e.,
Z ti+1
b
(ti+1 s) ss dWs
0
i 1 Z tk+1 Z ti+1
= Â (ti+1 s) b
ss dWs + (ti+1 s) b
ss dWs
k=0 tk ti
i 1
b b
⇡ Â ti k+1 stk DWtk + sti DWti . (4.12)
k=0
For the first, non-stochastic, integral, we note that

Z ti+1 i Z tk+1

0
(ti+1 s) b
Zs ds = Â (ti+1 s) b
Zs ds
k=0 tk
i Z tk+1
⇡ Â Ztk tk
(ti+1 s) b
ds.
k=0
The integrals inside of the sum are easy to calculate:
Z tk+1 h i
b 1 b b
(ti+1 s) ds = (ti+1 tk )1 (ti+1 tk+1 )1
tk 1 b
1 h i
1 b 1 b
= ti+1 k ti k ,
1 b
The last equality follows because ti = i · Dt and we can, e.g., write
ti+1 tk = (i + 1) · Dt k · Dt = (i + 1 k) · Dt = ti+1 k ,
therefore,
Z ti+1 h
i i
1 1 b 1 b
0
(ti+1 s) b
Zs ds ⇡
1 b  k ti+1
Zt k ti k . (4.13)
k=0
Finally, combining (4.12) and (4.13), the discretized equation can be written
as
l i h i i 1
1 b 1 b b b
Zti+1 = z0
1 b  Ztk ti+1 k ti k +h  ti k+1 stk DWtk + h sti DWti ,
k=0 k=0
q
sti+1 = a (Zti+1 b)2 + c.
The above expressions thus represent the hybrid scheme for the quadratic
rough Heston model.
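The discretized equations above translate almost line by line into code. The following Python sketch is one possible implementation (not the thesis's own code), with purely illustrative parameter values.

```python
import numpy as np

def simulate_qrh_hybrid(a, b, c, lam, eta, beta, z0, T, N, seed=0):
    """Hybrid-scheme simulation of (Z_t, sigma_t) for the quadratic rough
    Heston model, following the discretized equations above."""
    rng = np.random.default_rng(seed)
    dt = T / N
    # joint increments (dW, dW_beta) with covariance matrix (4.9)
    cov = np.array([
        [dt, dt**(1 - beta) / (1 - beta)],
        [dt**(1 - beta) / (1 - beta), dt**(1 - 2 * beta) / (1 - 2 * beta)],
    ])
    incr = rng.multivariate_normal(np.zeros(2), cov, size=N)
    dW, dW_beta = incr[:, 0], incr[:, 1]

    t = dt * np.arange(N + 1)                                          # t_0, ..., t_N
    w_drift = (t[1:]**(1 - beta) - t[:-1]**(1 - beta)) / (1 - beta)    # (t_j^{1-b}-t_{j-1}^{1-b})/(1-b)
    kernel = t[1:]**(-beta)                                            # t_j^{-beta}, j = 1..N

    Z = np.empty(N + 1)
    sigma = np.empty(N + 1)
    Z[0] = z0
    sigma[0] = np.sqrt(a * (Z[0] - b)**2 + c)
    for i in range(N):
        # drift sum: sum_{k=0}^{i} Z_k (t_{i+1-k}^{1-b} - t_{i-k}^{1-b}) / (1-b)
        drift = np.dot(Z[:i + 1], w_drift[:i + 1][::-1])
        # stochastic sum: sum_{k=0}^{i-1} t_{i-k+1}^{-b} sigma_k dW_k
        stoch = np.dot(kernel[1:i + 1][::-1], sigma[:i] * dW[:i]) if i > 0 else 0.0
        Z[i + 1] = z0 - lam * drift + eta * stoch + eta * sigma[i] * dW_beta[i]
        sigma[i + 1] = np.sqrt(a * (Z[i + 1] - b)**2 + c)
    return Z, sigma, dW

# Hypothetical parameter values, for illustration only
Z, sigma, dW = simulate_qrh_hybrid(a=0.4, b=0.1, c=0.02, lam=1.2, eta=1.0,
                                   beta=0.4, z0=0.1, T=1.0, N=500)
```

The corresponding forward price path can then be obtained from the same Brownian increments via equation (4.3), for instance with the left-point discretization sketched in Section 4.1.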

4.3 Multifactor scheme
4.3.1 Kernel approximations
The Multifactor scheme, introduced by Abi Jaber and El Euch in [2], relies
on approximating the kernel function by a sum of exponentials:
$$t^{-\beta} \approx \sum_{i=0}^{m-1} c_i e^{-\gamma_i t}. \tag{4.14}$$
This representation is in fact a Laplace transform of a weighted sum of Dirac
measures $\mu = \sum_i c_i \delta_{\gamma_i}$:
$$t^{-\beta} \approx \int_0^\infty e^{-tx}\, \mu(dx).$$
The role of such representations of the kernel $t^{-\beta}$ in the approximation of the
fractional Brownian motion has been known at least since 1998 (see [7]).
The choice of the weights {ci } and exponents {gi } is far from unique. In
the literature, multiple different algorithms have been proposed, resulting in
different approximations. One of the most efficient approximations (Beylkin-
Monzón algorithm) allows us to replace the kernel with just a handful of ex-
ponential terms, without losing much accuracy.

Original approximation
When the multifactor scheme was first introduced for simulating rough
volatility models in [2], its authors used the following algorithm for computing
the coefficients ci , gi :
Let
$$p_m = \frac{m^{1/5}}{T}\left(\frac{\sqrt{10}\,(1-2H)}{5-2H}\right)^{2/5} = \frac{m^{1/5}}{T}\left(\frac{\sqrt{10}\,\beta}{2+\beta}\right)^{2/5},$$
where the second form follows from $H = 1/2 - \beta$.
Then the weights and exponents are given by

$$c_i = \frac{p_m^{\beta}}{\beta\,\Gamma(\beta)}\left[(i+1)^{\beta} - i^{\beta}\right], \qquad \gamma_i = \frac{\beta\, p_m}{\beta+1}\cdot\frac{(i+1)^{\beta+1} - i^{\beta+1}}{(i+1)^{\beta} - i^{\beta}}, \qquad i = 0, \ldots, m-1.$$
In Figure 4.5 we can see that the accuracy of the approximation decreases as
$t \to 0$, and that we probably need several hundreds of terms to get satisfactory
results.
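A small Python sketch of this construction is given below; it relies on the formulas for $p_m$, $c_i$, $\gamma_i$ as reconstructed above, so it should be read as an illustration under that assumption rather than a definitive implementation, and the check at the end uses an arbitrary grid.

```python
import numpy as np
from scipy.special import gamma

def original_multifactor_coeffs(beta, m, T):
    """Weights c_i and mean reversions gamma_i of the 'original' kernel
    approximation, using the formulas above (with H = 1/2 - beta)."""
    H = 0.5 - beta
    p_m = m**0.2 / T * (np.sqrt(10) * (1 - 2 * H) / (5 - 2 * H))**0.4
    i = np.arange(m)
    c = p_m**beta / (beta * gamma(beta)) * ((i + 1)**beta - i**beta)
    g = beta * p_m / (beta + 1) * ((i + 1)**(beta + 1) - i**(beta + 1)) \
        / ((i + 1)**beta - i**beta)
    return c, g

# Hypothetical check: relative error of the approximation on a time grid
beta, m, T = 0.4, 200, 1.0
c, g = original_multifactor_coeffs(beta, m, T)
t = np.linspace(0.01, T, 100)
approx = (c[None, :] * np.exp(-np.outer(t, g))).sum(axis=1)
print(np.max(np.abs(approx - t**(-beta)) / t**(-beta)))
```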

Geometric approximation
In the follow-up paper [1], the authors proposed a more efficient choice of
approximation.
For a given $\beta$, $m$ and $T$, fix a parameter $x_m$.

Figure 4.5. Original approximation of fractional kernel.

Then the optimal values of ci and gi are given by

$$c_i = \frac{x_m^{(i+1-m/2)\beta}\left(1 - x_m^{-\beta}\right)}{\beta\,\Gamma(\beta)}, \qquad \gamma_i = \frac{\beta\, x_m^{\,i - m/2}\left(x_m^{\beta+1} - 1\right)}{(\beta+1)\left(x_m^{\beta} - 1\right)}, \qquad i = 0, \ldots, m-1. \tag{4.15}$$
The remaining question is how to find the optimal xm . There is no exact
formula in the literature. However, it can be found using standard optimiza-
tion techniques, by minimizing the L2 distance between the approximated and
exact kernels.
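The optimization of $x_m$ is a one-dimensional problem, so a standard bounded scalar minimizer suffices. The sketch below (illustrative only; the search interval, grid, and discrete $L^2$ criterion are assumptions) uses the coefficients (4.15) as reconstructed above.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def geometric_coeffs(beta, m, x):
    """Weights and mean reversions of the geometric approximation (4.15)."""
    i = np.arange(m)
    c = x**((i + 1 - m / 2) * beta) * (1 - x**(-beta)) / (beta * gamma(beta))
    g = beta * x**(i - m / 2) * (x**(beta + 1) - 1) / ((beta + 1) * (x**beta - 1))
    return c, g

def l2_error(x, beta, m, t_grid):
    c, g = geometric_coeffs(beta, m, x)
    approx = (c[None, :] * np.exp(-np.outer(t_grid, g))).sum(axis=1)
    return np.mean((approx - t_grid**(-beta))**2)   # discrete L2 criterion

# Hypothetical setup: fit x_m numerically on a grid avoiding the singularity
beta, m = 0.4, 20
t_grid = np.linspace(0.001, 1.0, 2000)
res = minimize_scalar(l2_error, bounds=(1.01, 10.0), args=(beta, m, t_grid),
                      method="bounded")
print(res.x)   # the fitted x_m
```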

Beylkin-Monzón approximation


It turns out that a far more efficient exponential approximation algorithm
was discovered already in 2005 in [4] (motivated by quantum chemistry appli-
cations). Rømer [22] was the first to use this algorithm in the context of rough
volatility models. This method produces much better approximations than the
two methods described above, but the algorithm is far more involved, too.
The function in question can be more general than $t^{-\beta}$, but again, we do
not need to consider the general case in this thesis. The algorithm relies on
somewhat advanced linear algebra to generate a large number of the function’s
samples, and then to fit it into a sum of a smaller number of exponential terms
using so-called Prony’s method.
Without giving any additional derivations or proofs, we present the algo-
rithm below.

Figure 4.6. Geometric approximation of fractional kernel.

Beylkin-Monzón Algorithm
Assume that we want to approximate $t^{-\beta}$ on the interval $[t_{\min}, t_{\max}]$, with
accuracy $\varepsilon > 0$.
1. By applying a linear transformation to its argument, convert $t^{-\beta}$ into a
function $f : [0,1] \to \mathbb{R}$:
$$f(t) := \big(t \cdot (t_{\max} - t_{\min}) + t_{\min}\big)^{-\beta}.$$
2. Pick a sufficiently large integer $N$ (on the order of 100-200) and calculate
the samples
$$h_k = f\!\left(\frac{k}{2N}\right), \qquad k = 0, \ldots, 2N.$$
3. Construct the so-called Hankel matrix $H = \{h_{i+j-2}\}_{i,j=1}^{N+1}$ and sort its
eigenvalues (which are real and non-negative since the matrix is symmetric):
$\sigma_0 \ge \sigma_1 \ge \ldots \ge \sigma_N \ge 0$.
4. Find the smallest eigenvalue $\sigma_m$ such that $\sigma_m \le \|h\|_2 \cdot \varepsilon$, and denote the
corresponding eigenvector by $u \in \mathbb{R}^{N+1}$.
5. Using the coordinates of $u$, construct the polynomial $P_u$ of order $N$.
6. Select $m$ distinct real roots $\{r_0, \ldots, r_{m-1}\}$ of $P_u$ that lie in the interval
$(0, 1]$.
7. By solving a least-squares problem, find $\{\tilde{c}_0, \ldots, \tilde{c}_{m-1}\}$ that minimize
$$\sum_{k=0}^{2N} \left( h_k - \sum_{i=0}^{m-1} \tilde{c}_i r_i^k \right)^2.$$
8. Denote
$$\tilde{\gamma}_i = -2N \log(r_i), \qquad i = 0, \ldots, m-1.$$
9. Return back to the original coordinates by setting
$$\gamma_i = \frac{\tilde{\gamma}_i}{t_{\max} - t_{\min}}, \qquad c_i = \tilde{c}_i \cdot e^{\gamma_i t_{\min}}, \qquad i = 0, \ldots, m-1.$$
The final approximation is then given by
$$t^{-\beta} \approx \sum_{i=0}^{m-1} c_i e^{-\gamma_i t}.$$
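A compact Python sketch of these steps is given below. It is an illustrative implementation under the assumptions stated in the comments (in particular, eigenvalues are sorted by absolute value for numerical robustness, and the tolerance is assumed to be attainable); it is not the thesis's actual code.

```python
import numpy as np
from scipy.linalg import hankel

def beylkin_monzon(beta, t_min, t_max, eps=1e-2, N=150):
    """Sketch of the Beylkin-Monzon exponential approximation of t^{-beta}
    on [t_min, t_max]; the steps mirror the algorithm above."""
    # Steps 1-2: rescale to [0, 1] and sample
    f = lambda t: (t * (t_max - t_min) + t_min) ** (-beta)
    k = np.arange(2 * N + 1)
    h = f(k / (2 * N))
    # Step 3: Hankel matrix H[i, j] = h[i + j] and its eigen-decomposition
    H = hankel(h[:N + 1], h[N:])
    eigvals, eigvecs = np.linalg.eigh(H)
    order = np.argsort(np.abs(eigvals))[::-1]          # sort by magnitude
    eigvals, eigvecs = np.abs(eigvals[order]), eigvecs[:, order]
    # Step 4: first eigenvalue (in decreasing order) below the tolerance
    idx = np.nonzero(eigvals <= np.linalg.norm(h) * eps)[0]
    m = idx[0]                                         # assumes the tolerance is attainable
    u = eigvecs[:, m]
    # Steps 5-6: real roots of P_u(z) = sum_k u_k z^k inside (0, 1]
    roots = np.roots(u[::-1])                          # np.roots wants highest power first
    roots = roots[np.abs(roots.imag) < 1e-10].real
    r = np.unique(roots[(roots > 0) & (roots <= 1)])[:m]
    # Step 7: least-squares fit of h_k by sum_i c_i r_i^k
    V = r[None, :] ** k[:, None]
    c_tilde, *_ = np.linalg.lstsq(V, h, rcond=None)
    # Steps 8-9: back to the original time scale
    g = -2 * N * np.log(r) / (t_max - t_min)
    c = c_tilde * np.exp(g * t_min)
    return c, g

# Hypothetical example, mirroring Figure 4.7: approximate t^{-0.4} on [1, 100]
c, g = beylkin_monzon(beta=0.4, t_min=1.0, t_max=100.0)
t = np.linspace(1.0, 100.0, 500)
approx = (c[None, :] * np.exp(-np.outer(t, g))).sum(axis=1)
print(len(c), np.max(np.abs(approx - t**(-0.4))))
```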

Illustration of Beylkin-Monzón approximation


Once we manage to implement the above algorithm, its difficulty really
pays off: on an interval [1, 100] we can achieve an accuracy of 0.01 with just
4 exponential terms (Figure 4.7).

Figure 4.7. Beylkin-Monzón approximation of the fractional kernel on the interval [1, 100].

It also works fairly well for approximating the values near the singularity.
On the interval [0.001, 10], to achieve the accuracy of 0.01 it is sufficient to
take 5 terms (Figure 4.8).

Multifactor scheme for quadratic rough Heston


Recall that we need to solve the stochastic differential equation
$$Z_t = z_0 - \lambda \int_0^t (t-s)^{-\beta} Z_s\, ds + \eta \int_0^t (t-s)^{-\beta} \sigma_s\, dW_s,$$
$$\sigma_t = \sqrt{a (Z_t - b)^2 + c},$$
where $\beta \in (0, 1/2)$, $\lambda, \eta, a, b, c > 0$, and $z_0 \in \mathbb{R}$.
The fractional kernel depending on $t$ makes the process non-Markovian and
a non-semimartingale. By representing the kernel as a sum of exponentials
$$t^{-\beta} \approx \sum_{i=0}^{m-1} c_i e^{-\gamma_i t},$$
we regularize the problem and make it more tractable.

Multifactor approximation
Substituting the multifactor approximation of the kernel into the equation, we get
$$ Z_t = z_0 + \sum_i c_i \left[ -\lambda \int_0^t e^{-\gamma_i (t-s)} Z_s \, ds + \eta \int_0^t e^{-\gamma_i (t-s)} \sigma_s \, dW_s \right]. $$
We denote
$$ Z_t^i := z_0^i - \lambda \int_0^t e^{-\gamma_i (t-s)} Z_s \, ds + \eta \int_0^t e^{-\gamma_i (t-s)} \sigma_s \, dW_s, $$
so that
$$ Z_t = \sum_i c_i Z_t^i, \qquad z_0 = \sum_i c_i z_0^i. $$

We can derive an SDE for Z_t^i as follows. Note that
$$ Z_t^i = z_0^i + e^{-\gamma_i t} \left[ -\lambda \int_0^t e^{\gamma_i s} Z_s \, ds + \eta \int_0^t e^{\gamma_i s} \sigma_s \, dW_s \right]. $$
Differentiating both sides, we get
$$ dZ_t^i = -\gamma_i Z_t^i \, dt + \left[ -\lambda Z_t \, dt + \eta \sigma_t \, dW_t \right] = -\left[ \gamma_i Z_t^i + \lambda Z_t \right] dt + \eta \sigma_t \, dW_t, $$
so the Z_t^i satisfy
$$ dZ_t^i = -\left[ \gamma_i Z_t^i + \lambda Z_t \right] dt + \eta \sigma_t \, dW_t, \qquad Z_t = \sum_i c_i Z_t^i. $$
Note that in this scheme the calibration parameter Z_0 (which was playing the role of the function θ(s)) is replaced by multiple calibration parameters Z_0^i. This gives us additional degrees of freedom when doing calibration.
With this, we finally arrive at the following multifactor approximation of the quadratic rough Heston model:
$$ dZ_t^i = -\left[ \gamma_i Z_t^i + \lambda Z_t \right] dt + \eta \sigma_t \, dW_t, \qquad i = 0, \ldots, m-1, $$
$$ \sigma_t = \sqrt{a (Z_t - b)^2 + c}, $$
where
$$ Z_0^i = z_0^i, \quad i = 0, \ldots, m-1, \qquad Z_t = \sum_i c_i Z_t^i. \qquad (4.16) $$

Euler scheme
Following [21], we discretize the scheme using the implicit-explicit Euler scheme. The time interval [0, T] is split into N parts: Δt = T/N, t_k = k·Δt, 0 = t_0 < ... < t_N = T. We simulate the Brownian increments as ΔW_k ∼ N(0, Δt). Then, if we denote Z_k^i = Z^i_{t_k}, Z_k = Z_{t_k}, etc., the implicit-explicit Euler scheme is given by
$$ Z_{k+1}^i = \frac{1}{1 + \gamma_i \Delta t} \left( Z_k^i - \lambda Z_k \Delta t + \eta \sigma_k \Delta W_k \right), \qquad \sigma_k = \sqrt{a (Z_k - b)^2 + c}, \qquad Z_k = \sum_{i=0}^{m-1} c_i Z_k^i. $$

Based on the Euler scheme, we plot example simulations of the volatility σ_t and the stock price process S_t on the same graph in Figure 4.9.
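As a sketch of how the scheme can be coded, the Python snippet below simulates one path of (Z_t, σ_t) with the implicit-explicit Euler recursion above. The kernel weights (c_i, γ_i) are assumed to come from one of the kernel approximations of this chapter, and the parameter values at the bottom are purely illustrative.

import numpy as np

def simulate_multifactor_qrh(c, g, z0_i, lam, eta, a, b, c_param, T, N, rng=None):
    # One path of the multifactor quadratic rough Heston model (implicit-explicit Euler).
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), size=N)     # Brownian increments
    Zi = np.array(z0_i, dtype=float)              # factor values Z_k^i
    Z = np.empty(N + 1)
    sigma = np.empty(N + 1)
    Z[0] = np.dot(c, Zi)
    sigma[0] = np.sqrt(a * (Z[0] - b) ** 2 + c_param)
    for k in range(N):
        # implicit in the mean-reversion term gamma_i, explicit in Z_k and sigma_k
        Zi = (Zi - lam * Z[k] * dt + eta * sigma[k] * dW[k]) / (1.0 + g * dt)
        Z[k + 1] = np.dot(c, Zi)
        sigma[k + 1] = np.sqrt(a * (Z[k + 1] - b) ** 2 + c_param)
    return Z, sigma

# Illustrative usage with placeholder kernel weights (c_i, gamma_i) and parameters.
c_w = np.array([0.8, 0.5, 0.3])
g_w = np.array([0.5, 5.0, 50.0])
Z, sigma = simulate_multifactor_qrh(c_w, g_w, z0_i=[0.1, 0.0, 0.0], lam=1.2, eta=1.0,
                                    a=0.4, b=0.1, c_param=0.01, T=1.0, N=500)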

4.4 Multifactor Hybrid Scheme


This scheme was introduced in [22], and combines the accuracy of the hybrid scheme with the speed of the multifactor scheme. Again, we want to solve
$$ Z_t = -\lambda \int_0^t (t-s)^{-\beta} Z_s \, ds + \eta \int_0^t (t-s)^{-\beta} \sigma_s \, dW_s. $$

Assume we want to solve the equation on the interval [0, T]. We split the interval into N parts and denote Δt = T/N, t_k = k·Δt, 0 = t_0 < ... < t_N = T. As we did in the multifactor scheme, we denote
$$ Z_t^i := z_0^i - \lambda \int_0^t e^{-\gamma_i (t-s)} Z_s \, ds + \eta \int_0^t e^{-\gamma_i (t-s)} \sigma_s \, dW_s. $$

Next, for t_k > 0, we can write
$$ Z_{t_k} = -\lambda \int_0^{t_{k-1}} (t_k - s)^{-\beta} Z_s \, ds + \eta \int_0^{t_{k-1}} (t_k - s)^{-\beta} \sigma_s \, dW_s - \lambda \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} Z_s \, ds + \eta \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} \sigma_s \, dW_s. $$

The singular terms can be approximated as follows:
$$ \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} Z_s \, ds \approx Z_{t_{k-1}} \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} \, ds = Z_{t_{k-1}} \frac{\Delta t^{1-\beta}}{1-\beta}, $$
and
$$ \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} \sigma_s \, dW_s \approx \sigma_{t_{k-1}} \int_{t_{k-1}}^{t_k} (t_k - s)^{-\beta} \, dW_s = \sigma_{t_{k-1}} \widehat{\Delta W}_{t_{k-1}}, $$
where we use the same notation as in the hybrid scheme. Next, we look at the non-singular terms and use the multifactor approximation for them:

$$ -\lambda \int_0^{t_{k-1}} (t_k - s)^{-\beta} Z_s \, ds + \eta \int_0^{t_{k-1}} (t_k - s)^{-\beta} \sigma_s \, dW_s $$
$$ \approx \sum_i c_i \left[ -\lambda \int_0^{t_{k-1}} e^{-\gamma_i (t_k - s)} Z_s \, ds + \eta \int_0^{t_{k-1}} e^{-\gamma_i (t_k - s)} \sigma_s \, dW_s \right] $$
$$ = \sum_i c_i e^{-\gamma_i \Delta t} \left[ -\lambda \int_0^{t_{k-1}} e^{-\gamma_i (t_{k-1} - s)} Z_s \, ds + \eta \int_0^{t_{k-1}} e^{-\gamma_i (t_{k-1} - s)} \sigma_s \, dW_s \right] $$
$$ = \sum_i c_i e^{-\gamma_i \Delta t} Z_{t_{k-1}}^i. $$

Combining the approximations, we can write
$$ Z_{t_k} \approx \sum_i c_i e^{-\gamma_i \Delta t} Z_{t_{k-1}}^i - \lambda \frac{\Delta t^{1-\beta}}{1-\beta} Z_{t_{k-1}} + \eta \sigma_{t_{k-1}} \widehat{\Delta W}_{t_{k-1}} $$
or, abusing the notation, write
$$ Z_{k+1} = \sum_i c_i e^{-\gamma_i \Delta t} Z_k^i - \lambda \frac{\Delta t^{1-\beta}}{1-\beta} Z_k + \eta \sigma_k \widehat{\Delta W}_k. $$
i 1 b
The factors Z_k^i can be found using the same implicit-explicit scheme as in the simple multifactor scheme:
$$ Z_{k+1}^i = \frac{1}{1 + \gamma_i \Delta t} \left( Z_k^i - \lambda Z_k \Delta t + \eta \sigma_k \Delta W_k \right). $$
The pairs of random variables (ΔW_{t_k}, ΔŴ_{t_k}), where
$$ \Delta W_{t_k} = W_{t_{k+1}} - W_{t_k}, \qquad \widehat{\Delta W}_{t_k} := \int_{t_k}^{t_{k+1}} (t_{k+1} - s)^{-\beta} \, dW_s, $$
can, exactly like in the hybrid scheme, be generated as jointly normal with the covariance matrix given by
$$ \begin{bmatrix} \Delta t & \frac{1}{1-\beta} (\Delta t)^{1-\beta} \\ \frac{1}{1-\beta} (\Delta t)^{1-\beta} & \frac{1}{1-2\beta} (\Delta t)^{1-2\beta} \end{bmatrix}. $$
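For concreteness, here is a minimal Python sketch of one path of the multifactor hybrid recursion above. The correlated pairs (ΔW_k, ΔŴ_k) are assumed to be given as inputs, drawn jointly normal with the covariance matrix above (for example via its Cholesky factor, as done for the quasi-Monte Carlo arrays in Section 5.3.1); the function name and the array layout are ours.

import numpy as np

def simulate_hybrid_multifactor(c, g, z0_i, lam, eta, a, b, c_param, beta, dt, dW, dW_hat):
    # One path of Z_t and sigma_t with the multifactor hybrid recursion (sketch).
    N = len(dW)
    decay = np.exp(-g * dt)                 # factors e^{-gamma_i dt}
    sing = dt ** (1 - beta) / (1 - beta)    # weight of the locally exact singular term
    Zi = np.array(z0_i, dtype=float)        # factors Z_k^i
    Z = np.empty(N + 1)
    sigma = np.empty(N + 1)
    Z[0] = np.dot(c, Zi)
    sigma[0] = np.sqrt(a * (Z[0] - b) ** 2 + c_param)
    for k in range(N):
        # multifactor part plus the singular drift and the fractionally weighted increment
        Z[k + 1] = (np.dot(c * decay, Zi) - lam * sing * Z[k]
                    + eta * sigma[k] * dW_hat[k])
        sigma[k + 1] = np.sqrt(a * (Z[k + 1] - b) ** 2 + c_param)
        # implicit-explicit update of the factors, driven by the plain increment dW_k
        Zi = (Zi - lam * Z[k] * dt + eta * sigma[k] * dW[k]) / (1.0 + g * dt)
    return Z, sigma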

Figure 4.9. Multifactor simulations of the quadratic rough Heston model.
5. Calibration

Even though the quadratic rough Heston model is notable for allowing a simultaneous fit of the SPX and VIX volatility surfaces, we only calibrate to SPX smiles in this thesis.

5.1 Provided data


We are provided with historical market data for SPX from 2021-06-18, with
records containing information such as:
• The ask price, the bid price, and the mid price of the options, i.e., the price at which the seller offers the option, the price the buyer is willing to pay, and the average of the two.
• Implied volatilities (IV) derived from the ask / bid / mid prices by using
the Black-Scholes formula.
• The price of the underlying asset (i.e. S&P 500 Index).
• The expected future price of the underlying asset, as implied by the op-
tion’s price and other market variables.
• The strike price of the option.
• The expiration date of the option.
• The total number of option contracts traded during a specific time period
("volume").
• The exercise style of the option, either "American" or "European."
• Expected dividend yield for the underlying asset, over the life of the
option.
• The expected future price of the underlying asset, as implied by the interest rate and the dividend yield.

5.1.1 SPX vs SPXW


Both SPX and SPXW options are based on the S&P 500 Index, but they
have different expiration dates and trading characteristics.
SPX options (also known as standard or monthly options) have expiration
dates that typically fall on the third Friday of each month.
SPXW options ("S&P 500 Weekly options") have shorter expiration cycles than standard monthly options. SPXW options expire every week, usually on Fridays, except when a standard monthly expiration falls in the same week, in which case the weekly expiration is on the preceding Thursday.

When choosing which of the two option types to use for calibration, we
need to consider:
1. Time horizon: SPXW is more suitable for shorter timeframes, and SPX
is more suitable for analyzing long-term market behavior.
2. Liquidity: SPX options have higher trading volume than SPXW, which
leads to more accurate pricing and narrower bid-ask spreads.
In this thesis, we choose to calibrate using SPX only.

5.1.2 Recovering interest rates


Notably, the interest rate is missing from our data. We can, however, re-
cover it as follows, using other fields such as the underlying price of the asset,
the dividend yield of the underlying asset, and the implied forward price of the
asset.
Let T denote time to maturity expressed in years, i.e.,
T = (expiry_date - 2021-06-18) / 365.25.
Note that
implied_forward_price
= underlying_asset_price
* exp((r-dividend_yield / 100) * T).
Then we can find the interest rate
r = log (implied_forward_price / underlying_asset_price) / T
+ dividend_yield / 100.
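As a concrete illustration, the recovery can be done per record as in the Python snippet below; the field names (expiry_date, implied_forward_price, underlying_asset_price, dividend_yield) are hypothetical stand-ins for the corresponding columns of the data set, and the numbers in the example record are made up.

import numpy as np
import pandas as pd

def recover_rate(row, valuation_date=pd.Timestamp("2021-06-18")):
    # Back out the continuously compounded interest rate from one option record.
    T = (row["expiry_date"] - valuation_date).days / 365.25
    return (np.log(row["implied_forward_price"] / row["underlying_asset_price"]) / T
            + row["dividend_yield"] / 100.0)

# Example with a single hypothetical record.
record = pd.Series({
    "expiry_date": pd.Timestamp("2021-12-17"),
    "implied_forward_price": 4200.0,
    "underlying_asset_price": 4160.0,
    "dividend_yield": 1.4,               # quoted in percent, as in the data description
})
print(recover_rate(record))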

5.2 Calibration process


We primarily use the Nelder-Mead optimization method [20] to find optimal
values of the parameters (as implemented in the scipy.optimize library).
Our calibration process roughly looks as follows:
1. Generate a large array of Gaussian random variables with the covariance
matrix (4.9).
2. Take a set of calibration parameters ν.
3. Generate the volatility process σ_t for the given ν using the multifactor hybrid scheme.
4. Construct the forward price process F_t using (4.3).
5. Group the options by their expiry date; for each expiry date t_i, note the interest rate r_i and the dividend yield q_i.
6. For each expiry date, calculate the simulated risk-neutral prices
$$ S_t = e^{(r_i - q_i) t} F_t. $$

7. For the given expiry date, for all available strikes, calculate the option
prices based on the simulated stock prices.
8. From the simulated option prices, calculate the implied volatilities using the py_vollib library (based on the "Let's Be Rational" algorithm).
9. Calculate the objective function for each expiry date, and take a sum of
the results.
10. Proceed to the next step of the optimization method.
However, in order to use the Nelder-Mead method, we first need to come up with a reasonable initial guess for the parameters. It turns out that simply taking arbitrary points in the parameter space will most likely lead to numerical overflow errors, which keep appearing even after multiple iterations of the Nelder-Mead algorithm. To address this, we first use the Differential evolution method [23] until we get a somewhat satisfactory approximation of the volatility smile, and then we use the resulting parameter values as the initial guess for the Nelder-Mead method. Note that an implementation of the Differential evolution method is also available in the scipy.optimize library.
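A minimal sketch of how the two optimizers can be chained with scipy is shown below. The objective here is a runnable placeholder for the calibration error of Section 5.3, and the parameter bounds are hypothetical; only the structure of the two-stage optimization is the point.

import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(params):
    # Placeholder: the real objective simulates the model and returns the error
    # from Section 5.3; a simple quadratic keeps this sketch runnable.
    return float(np.sum((np.asarray(params) - 0.2) ** 2))

# Hypothetical bounds for (lambda, eta, a, b, c, Z0^0, ..., Z0^{m-1}) with m = 3.
bounds = [(0.1, 5.0), (0.1, 3.0), (0.01, 1.0), (-0.5, 0.5), (1e-4, 0.1)] + [(-0.5, 0.5)] * 3

# Stage 1: Differential evolution to find a reasonable region of the parameter space.
de_result = differential_evolution(objective, bounds, maxiter=30, popsize=15,
                                   tol=1e-3, seed=42, polish=False)

# Stage 2: Nelder-Mead started from the Differential evolution result.
nm_result = minimize(objective, de_result.x, method="Nelder-Mead",
                     options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-6})
print(nm_result.x, nm_result.fun)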

5.3 Objective function


Multiple choices of the objective function are possible. The most obvious candidate is the normalized mean-square error, which is also frequently used in the literature (e.g., [12]):
$$ F(\nu) = \frac{1}{\# \mathcal{O}_{\mathrm{SPX}}} \sum_{o \in \mathcal{O}_{\mathrm{SPX}}} \left( \sigma_{o,\mathrm{mid}} - \sigma_{o,\nu} \right)^2. $$
Here, ν denotes our set of calibration parameters, O_SPX the given set of SPX options, σ_{o,mid} the market mid implied volatility of the option o, and σ_{o,ν} the implied volatility of the option o in the quadratic rough Heston model with the given parameters ν.
This choice has certain drawbacks. For example, options that are far in-
the-money or far out-of-the-money are often accompanied by a higher degree
of uncertainty and larger bid-ask spreads in implied volatilities. By using an
objective function that treats all options equally, we end up reducing the accu-
racy of the calibration for at-the-money options while trying to fit the options
which even the market cannot price with certainty.
To address this, we can use the bid-ask spread as a proxy measure of how important a certain strike price is:
$$ F(\nu) = \frac{1}{\# \mathcal{O}_{\mathrm{SPX}}} \sum_{o \in \mathcal{O}_{\mathrm{SPX}}} \frac{\left( \sigma_{o,\mathrm{mid}} - \sigma_{o,\nu} \right)^2}{\varepsilon + \left| \sigma_{o,\mathrm{bid}} - \sigma_{o,\mathrm{ask}} \right|}. $$
Here, the smaller the spread, the higher the contribution of the option to the total error; ε is a small constant (e.g., 0.001) added to avoid division by zero.

As an alternative to using the bid-ask spread values, we can use the trading volume as the weights.
Finally, we can choose to ignore σ_{o,mid} completely and only penalize the cases when σ_{o,ν} leaves the bid-ask corridor [σ_{o,bid}, σ_{o,ask}]:
$$ F(\nu) = \frac{1}{\# \mathcal{O}_{\mathrm{SPX}}} \sum_{o \in \mathcal{O}_{\mathrm{SPX}}} \left| \sigma_{o,\mathrm{bid}} - \sigma_{o,\nu} \right|_+^2 + \left| \sigma_{o,\nu} - \sigma_{o,\mathrm{ask}} \right|_+^2, $$
where |x|_+ denotes max(x, 0).


We can combine these various approaches, but notice that beyond a certain point, changing the objective function alone does not yield a qualitative improvement in the approximation.
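To make the three variants concrete, here is a small Python sketch that computes them from arrays of bid, ask, mid, and model implied volatilities; the array-based interface is our own simplification of the per-option sums above.

import numpy as np

def mse_objective(iv_mid, iv_model):
    # Plain normalized mean-square error over all options.
    return np.mean((iv_mid - iv_model) ** 2)

def spread_weighted_objective(iv_mid, iv_model, iv_bid, iv_ask, eps=1e-3):
    # Weight each squared error by the inverse bid-ask spread (tighter spread = more weight).
    return np.mean((iv_mid - iv_model) ** 2 / (eps + np.abs(iv_bid - iv_ask)))

def corridor_objective(iv_model, iv_bid, iv_ask):
    # Penalize only the distance by which the model implied vol leaves the bid-ask corridor.
    below = np.maximum(iv_bid - iv_model, 0.0)
    above = np.maximum(iv_model - iv_ask, 0.0)
    return np.mean(below ** 2 + above ** 2)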

5.3.1 Variance reduction


When simulating the volatility smile, we observe significant variation in its shape across different simulation sets unless a large number of simulations (on the order of 60,000) is conducted per set. Only at this scale do the simulation outcomes begin to stabilize and resemble each other, which allows calibration to reuse the same set of Gaussians without regenerating them at each optimization step.
However, a large number of simulations significantly increases the computation time required for calibration. Furthermore, overfitting issues still persist: it is possible that, after completing the calibration process, a new simulation produces an error several times larger than the error obtained at the end of the calibration.
In an attempt to mitigate these issues, we employ quasi-Monte Carlo tech-
niques (for additional details, see [13]) to generate large quasi-random Gaus-
sian arrays. Specifically, we make use of the low-discrepancy Sobol and Hal-
ton sequences, which are designed to cover the multi-dimensional space more
uniformly than simple random sampling.
We generate the quasi-random Gaussians as follows. Suppose we need to perform N simulations with M time steps. Since we need both ΔW_k and ΔŴ_k, the resulting array should have dimensions N × M × 2. We start by generating a Sobol (or Halton) sequence of length N of 2M-dimensional vectors. Then we apply the inverse of the cumulative distribution function of the normal distribution element-wise and reshape, to obtain an N × M × 2 array of quasi-random standard normal variables. Finally, we calculate the Cholesky decomposition of the matrix (4.9) and apply it to each of the N × M two-dimensional vectors, to make sure that they have the given covariance matrix.
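A sketch of this construction using scipy's quasi-Monte Carlo module is given below; cov_2x2 plays the role of the 2×2 covariance matrix (4.9) of (ΔW_k, ΔŴ_k), built here as in the multifactor hybrid section, and the function name is ours.

import numpy as np
from scipy.stats import norm, qmc

def quasi_gaussian_increments(n_sims, n_steps, dt, beta, scramble=True, seed=0):
    # N x M x 2 array of quasi-random (Delta W, Delta W hat) pairs from a Sobol sequence.
    sampler = qmc.Sobol(d=2 * n_steps, scramble=scramble, seed=seed)
    u = sampler.random(n_sims)                     # Sobol points in [0, 1)^{2M}
    # Inverse normal CDF element-wise, then reshape to N x M x 2.
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12)).reshape(n_sims, n_steps, 2)
    # Correlate each 2-vector with the Cholesky factor of the covariance matrix.
    cov_2x2 = np.array([
        [dt,                              dt ** (1 - beta) / (1 - beta)],
        [dt ** (1 - beta) / (1 - beta),   dt ** (1 - 2 * beta) / (1 - 2 * beta)],
    ])
    L = np.linalg.cholesky(cov_2x2)
    return z @ L.T

# 4096 paths (a power of two, as recommended for Sobol) with 250 time steps.
increments = quasi_gaussian_increments(n_sims=4096, n_steps=250, dt=1 / 250, beta=0.4)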
To compare the performance of quasi-Monte Carlo simulations with Monte
Carlo simulations, we repeat the given number of simulations five times each,
and plot the results on the same graph. As we see in Figures 5.1 - 5.3, with
quasi-Monte Carlo methods we can run roughly 4 times fewer simulations,

without losing much accuracy (especially if we look at at-the-money strike
prices).

Figure 5.1. Monte Carlo vs quasi-Monte Carlo when the number of simulations is
low.

Figure 5.2. Lower variance of implied volatility with quasi-Monte Carlo simulations.

Figure 5.3. Monte Carlo vs quasi-Monte Carlo when the number of simulations is
high.

By looking at the scatter plots of ΔW against ΔŴ, we can explore the practical differences between the Halton and Sobol Gaussian sequences. For small time steps, Halton sequences perform well, demonstrating good coverage of the sample space. However, for larger time steps they exhibit clustering similar to ordinary Monte Carlo samples (Figure 5.4), although they are still no worse than classical Monte Carlo samples.

In contrast, Sobol sequences perform very well for both large and small time steps, resulting in smooth, well-distributed plots. However, for a small fraction of the time steps we observe undesirable moiré patterns (Figure 5.5) in the scatter plots. Nevertheless, since the instances of moiré patterns are infrequent and the general quality of the samples is good, we prefer to use Sobol sequences.

Figure 5.4. An example of the Sobol sampling performing better than Halton’s.

Figure 5.5. An example of the Sobol sampling exhibiting a moiré pattern.

5.4 Calibrating β
Unlike the rest of the calibration parameters, calibrating β is complicated. This is because:
1. The generated Gaussians depend on β via their covariance matrix (4.9).
2. The exponential kernel decomposition (4.14) depends on β.
3. As a consequence of 2, the roles of the calibration parameters z_0^i (see (4.16)) also depend on β.
To address this, we perform calibration in two steps. First, we set z_0^i = 0 for all i and let β vary. To avoid regenerating Gaussian arrays with a new covariance matrix at each step, we do the following:
1. At the beginning of the calibration process we generate an array with dimensions N × M × 2, consisting of independent standard Gaussians.
2. Whenever β and the covariance matrix change, we transform the array by using the same procedure that we used for generating correlated quasi-Gaussians (multiplying with the Cholesky decomposition of the covariance matrix).
Once this step is finished, we fix β and let z_0^i vary.
With the market data that we used, the value of β that we found was β ≈ 0.395, which corresponds to the Hurst parameter H ≈ 0.105.

5.5 Summary of the algorithm


Putting together all the steps above, we get the following:
1. Use the Differential evolution method until we get a reasonably close fit for the volatility smile.
2. Run the Nelder-Mead algorithm, using the results of the previous step as the initial guess, setting z_0^i ≡ 0 and allowing β to vary.
3. Using the results of the previous step, fix β, let z_0^i vary, and run the Nelder-Mead algorithm again.

5.6 Results
We obtain a fairly good fit to the volatility surface, see, e.g., Figures 5.6-5.8.

Figure 5.6. Simulated volatility surface, 28 to 119 days.

Figure 5.7. Simulated volatility surface, 154 to 245 days.

Figure 5.8. Simulated volatility surface, 273 to 336 days.

We note that we are getting a better fit for short expiries. This is in line
with the fact that the quadratic rough Heston model is typically used to model
options with short expiries.

The fit becomes even better if we limit ourselves to a single maturity date (Figures 5.9-5.10).

Figure 5.9. Comparison of calibrations, 28 days to maturity: (a) calibrated for a single maturity date; (b) calibrated for all maturity dates.

We notice that the three discretization schemes (Hybrid, Multifactor, and Multifactor hybrid), despite representing the same model, give very different results even when the same calibration parameters are used, so calibration has to be done separately for each of these methods. This means that at least two of these three methods (or our implementations of them) are not particularly accurate.
Figure 5.10. Comparison of calibrations, 336 days to maturity: (a) calibrated for a single maturity date; (b) calibrated for all maturity dates.

We also notice that the simulated prices are clearly not Gaussian, see Figure 5.11.

Figure 5.11. Histogram of simulated stock prices after t days.

5.6.1 Dependence of the smile on the calibration parameters


To gain an intuition for the role of the calibration parameters, we can let each parameter vary over a reasonably selected range while keeping the other calibration parameters fixed, and see how this affects the volatility smile.
We start by varying λ and η, see Figure 5.12. When we vary λ, the smile pivots around a specific point and becomes flatter when λ is high and steeper when λ is low. In the case of η, the curve pivots around a different point and the dependence of the curvature on η is the opposite: higher values of η correspond to steeper smiles.
Next, we turn to the variation of a, b, and c, as shown in Figure 5.13. The parameter a appears to make the smile steeper and also shifts it vertically, without affecting the x-coordinate of the minimum of the volatility smile. The parameter b shifts the smile horizontally (and thus affects the x-coordinate of the minimum of the smile). The parameter c both shifts the smile vertically and flattens it somewhat. The dependence of the smile on the parameter β is shown in Figure 5.14.

Figure 5.12. Variation of λ and η.

Figure 5.13. Variation of a, b and c.

Figure 5.14. Variation of β.

6. Conclusion

The field of simulating rough volatility models is very active, with multiple
competing approaches and ideas, and a large number of recent papers.
We have studied the quadratic rough Heston model, which is relatively sim-
ple to implement (compared to other rough Heston models), and yet provides
a fairly good fit for the volatility surface.
We have implemented the model using three different schemes that are commonly used for simulating rough volatility models. In particular, we simplified and implemented Rømer's hybrid multifactor scheme, as well as the Beylkin-Monzón algorithm, in Python.
We also found that quasi-Monte Carlo methods can be used efficiently to speed up the calibration process in rough volatility models. We have discovered that different schemes may require different sets of calibration parameters (meaning that at least some of these schemes may not be accurate). We have also found that completely different sets of calibration parameters may yield equally good fits to the market data, suggesting that there may be some redundancy in the model parameters.
Unfortunately, we have not calibrated VIX options, even though this is known to be one of the strengths of the quadratic rough Heston model. This was mostly due to theoretical complications when simulating VIX option prices. The complications are not insurmountable, however, and given more time, this presents a promising direction for future work.

References

[1] Eduardo Abi Jaber. Lifting the Heston model. Quantitative Finance, 19(12):1995–2013, 2019.
[2] Eduardo Abi Jaber and Omar El Euch. Multifactor approximation of rough
volatility models. SIAM Journal on Financial Mathematics, 10(2):309–349,
2019.
[3] Mikkel Bennedsen, Asger Lunde, and Mikko S. Pakkanen. Hybrid scheme for
Brownian semistationary processes. Finance and Stochastics, 21(4):931–965,
2017.
[4] Gregory Beylkin and Lucas Monzón. On approximation of functions by exponential sums. Applied and Computational Harmonic Analysis, 19(1):17–48, 2005.
[5] Tomas Björk. Arbitrage theory in continuous time. Oxford University Press, 3rd
edition, 2009.
[6] Fischer Black and Myron Scholes. The pricing of options and corporate
liabilities. Journal of political economy, 81(3):637–654, 1973.
[7] Laure Coutin and Philippe Carmona. Fractional Brownian motion and the
Markov property. Electronic Communications in Probability, 3:12, 1998.
[8] Aditi Dandapani, Paul Jusselin, and Mathieu Rosenbaum. From quadratic
Hawkes processes to super-Heston rough volatility models with Zumbach
effect. Quantitative Finance, 21(8):1235–1247, 2021.
[9] Omar El Euch, Masaaki Fukasawa, and Mathieu Rosenbaum. The
microstructural foundations of leverage effect and rough volatility. Finance and
Stochastics, 22:241–280, 2018.
[10] Omar El Euch and Mathieu Rosenbaum. The characteristic function of rough
Heston models. Mathematical Finance, 29(1):3–38, 2019.
[11] Jim Gatheral, Thibault Jaisson, and Mathieu Rosenbaum. Volatility is rough.
Quantitative Finance, 18(6):933–949, 2018.
[12] Jim Gatheral, Paul Jusselin, and Mathieu Rosenbaum. The quadratic rough
Heston model and the joint S&P 500/VIX smile calibration problem, 2020.
arXiv:2001.01789.
[13] P. Glasserman. Monte Carlo Methods in Financial Engineering. Applications
of mathematics: stochastic modelling and applied probability. Springer, 2004.
[14] Steven Heston. A closed-form solution for options with stochastic volatility
with applications to bond and currency options. Review of Financial Studies,
6:327–343, 1993.
[15] John Hull and Alan White. The Pricing of Options on Assets with Stochastic
Volatilities. The Journal of Finance, 42(2):281–300, June 1987.
[16] S.C. Lim and V.M. Sithi. Asymptotic properties of the fractional Brownian
motion of Riemann-Liouville type. Physics Letters A, 206(5):311–317.
[17] Paul E Lynch and Gilles O Zumbach. Market heterogeneities and the causal
structure of volatility. Quantitative Finance, 3(4):320, 2003.

[18] Benoit B Mandelbrot and John W Van Ness. Fractional Brownian motions,
fractional noises and applications. SIAM review, 10(4):422–437, 1968.
[19] D. Marinucci and P.M. Robinson. Alternative forms of fractional Brownian
motion. Journal of Statistical Planning and Inference, 80(1):111–122.
[20] J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. The
Computer Journal, 7(4):308–313, 01 1965.
[21] Mathieu Rosenbaum and Jianfei Zhang. Deep calibration of the quadratic rough
heston model, 2022. arXiv:2107.01611.
[22] Sigurd Emil Rømer. Hybrid multifactor scheme for stochastic Volterra
equations with completely monotone kernels. SSRN Electronic Journal, May
2020.
[23] Rainer Storn and Kenneth Price. Differential Evolution – A Simple and
Efficient Heuristic for global Optimization over Continuous Spaces. Journal of
Global Optimization, 11(4):341–359, 1997.
