
Analyzing Credit Risk: Modeling and Calibrating Yield Curves

Authors: Bilal Belkadi and Youssef El Ayadi

April 2024

Table of Contents

1 Regulatory context
1.1 Solvency II
1.2 Economic Scenario Generators
1.2.1 Risk Neutral Approach
1.2.2 Real World Approach
1.3 Solvency II Metrics
1.3.1 Best Estimate (BE)
1.3.2 Solvency Capital Requirement (SCR)
1.3.3 Minimum Capital Requirement (MCR)
1.3.4 Own Risk and Solvency Assessment (ORSA)

2 Credit Risk Indicators
2.1 Recovery Rate
2.2 Loss Given Default (LGD)
2.3 Spread

3 Pricing of a Zero Coupon Corporate Bond
3.1 Framework
3.1.1 Arbitrage-Free Market
3.1.2 Market Efficiency Hypothesis
3.2 Pricing
3.2.1 Lemma (Change of Filtration)
3.2.2 Definition
3.2.3 Zero-Coupon Pricing
3.2.4 Spread Formula
3.2.5 Probability of Default (PD)
3.2.6 Poisson Point Process
3.2.7 Proposition
3.2.8 Definition
3.2.9 Definition
3.2.10 Modeling the Intensity of Default
3.2.11 Intensity of Default and the Probability of Survival
3.2.12 Spread Reformulation
3.2.13 Stochastic Intensity

4 Interest Rate Models
4.1 The Vasicek Model
4.2 The Cox-Ingersoll-Ross Model (CIR)

5 Presentation of the Calibration
5.1 Data
5.2 Construction of a Yield Curve

6 Vasicek
6.1 Specifics of the Vasicek Model
6.2 Analytical Solution
6.3 Exploring the Maximum Likelihood Estimation Method for Calibration Purposes

7 Calibration using the Cox-Ingersoll-Ross model
7.1 Diffusion Parameters
7.2 Properties of the Model
7.3 Calibration of the Model
7.3.1 Methodology

8 Results
9 Martingale Test
10 Observations from the Graphs
11 R code for CIR optimization
12 R code for the martingale test

Introduction

Modeling the dynamics of the term structure of interest rates is an important part of measuring and managing a portfolio's exposure to adverse interest rate movements. Since the advent of arbitrage-free pricing theory, this problem has continued to occupy the efforts of both academics and practitioners.
Moreover, since the beginning of the valuation of asset classes that are influenced by changes in interest rates, such as bonds, mortgages, fixed income securities, and interest rate derivatives (interest rate swaps (1), caps and floors (2), interest rate options (3), forward rate agreements (4)), the finance industry has paid particular attention to mathematical models for pricing and hedging assets whose underlying parameters are interest rates.
Several continuous-time approaches have been developed and used by practitioners and academics to price interest-rate assets, and both communities continue to work on producing more reliable approaches.
The Black-Scholes model rapidly established itself as the go-to and most popular model for valuing equity-type assets (stocks, equity options, etc.); its general framework assumes that the price of the underlying asset (usually a stock) follows a lognormal distribution. Variations of the Black-Scholes model, such as the Black-Derman-Toy (BDT) model or the Black-Karasinski (BK) model, were adapted to the valuation of interest-rate derivatives such as bond options and swaps. However, in many respects interest rate derivatives differ substantially from equities, which makes the use of Black-type models such as BDT and BK unrealistic.
To illustrate this point, consider the simple case of a bond. Unlike a share, its price at maturity is always known. As a result, the geometric Brownian motion used to model stock prices is not suitable for this scenario. Note also that the price of a bond over its lifetime depends on the interest rate, which makes it very complex to replicate dynamically. These differences are even more pronounced for complex interest-rate derivatives such as bond options, caps, floors, and interest-rate swaps. Thus, assessing and hedging interest rate risk is a delicate task, made all the more challenging by the difficulty of choosing a model from the vast scientific literature on the subject, especially since doubts have recently been raised in various published studies about the ability of some of the most widely used models to reliably describe interest rate dynamics.
Another reason, which holds significant weight from a market perspective,
1. Interest rate swaps are derivative contracts in which two parties exchange cash flows based on different interest rates.
2. Caps and floors are derivative contracts that allow the buyer to receive payments when interest rates rise above or fall below specified levels, respectively.
3. Interest rate options are derivative contracts that give the holder the right, but not the obligation, to buy or sell an interest rate-related asset at a predetermined price within a specified period.
4. Forward rate agreements (FRAs) are over-the-counter derivative contracts that allow two parties to lock in an interest rate on a notional amount of funds to be exchanged at a future date.

pertains to the structure of the vanilla market within interest rate derivatives. This market predominantly consists of caps, floors, and swaps, with pricing commonly conducted in the Black framework. Under this framework, the respective forward Libor (5) and swap rate underlyings are assumed to follow a lognormal distribution, while the discount factors remain non-stochastic. Consequently, for hedging purposes, the market standard is to view these vanilla instruments as independent entities: the volatility matrix used for pricing swaptions generally has no impact on the volatility curve associated with the cap and floor market. However, they can be indirectly related through the underlying interest rates and market dynamics. Furthermore, the simultaneous assumption of lognormal behavior in both the Libor and swap rates poses a mathematical challenge, because the observable rates are right-skewed and the tails of their distribution are thicker than in the lognormal framework.
Despite these complexities, the overarching objective of interest rate modeling remains consistent: to establish a framework that enables the pricing of a wide range of interest rate-sensitive asset classes in a coherent and methodical manner.
In an insurance context, and more particularly in life insurance, credit risk is one of the main risks that insurers have to take into account as part of their risk management strategies. It mainly affects corporate bonds, which are generally the products most commonly held by insurers. In addition to the complexity of the underlying mathematical theory behind modeling and calibrating interest-rate curves while taking into account all the risk factors involved, an insurer has to deal with issues related to regulation (the Solvency II regime) and with the nature of the commitments held in the insurer's portfolio: life insurance contracts often have maturities of the order of years, and sometimes decades, unlike most other pure interest-rate derivatives. Therefore, in an insurance asset management department, the assessment of the company's level of commitment to its clients, whether they are insured customers or creditors, and the determination of its solvency indicators, notably within the regulatory framework of Solvency II, are intimately linked to the modeling of the risk-free yield curve used in its economic scenario generator.
Given the specific context of life insurance and the long contract maturities compared to other financial contracts, it appears that the interest-rate models developed by traders of short-term contracts, for which the maturity of transactions is relatively short, particularly in the context of high-frequency trading, are not directly suitable for the long-term component
5. London Interbank Offered Rate. It is one of the most widely used benchmark interest rates globally and serves as a reference rate for various financial products and contracts, including loans, derivatives, mortgages, and bonds. Other popular benchmarks include EURIBOR, the Eurozone benchmark rate; TIBOR, the Japanese yen benchmark rate; HIBOR, the Hong Kong dollar benchmark rate; SIBOR, the Singapore dollar benchmark rate; SONIA, the British pound sterling overnight rate; SOFR, the U.S. dollar secured overnight rate; and BBSW, the Australian dollar benchmark rate.

of the commitments of insurance companies. In this academic project, we seek to price a zero-coupon corporate bond using the Cox-Ingersoll-Ross (CIR) (6) model, and then move to the calibration of the diffusion parameters (7) based on different yield curves of existing market bonds varying in rating (from AAA to BBB). This model allows us to calibrate diffusion parameters based on European corporate bonds relative to the benchmark used for European government bonds, which is usually the yield on the 10-year German government bond.
To this end, the project is divided into three main parts:
1. The first part is a mathematical review of the concepts encountered throughout the project. It is mainly an overview of the basics of credit risk modeling theory and of the selection of rate models currently in use on the market.
2. The second part is the central part of the project. We present the theory underlying our proposed model, and the calibration work carried out on the model's diffusion parameters.
3. The third part validates the work presented in the second part by using a martingale test to assess the efficiency of our financial model and to provide a conclusive overview of the modeling and calibration work.

6. The Cox-Ingersoll-Ross (CIR) model is a mathematical model used to describe the evolution of interest rates over time. It is an extension of the Vasicek model and is widely used in fixed income and interest rate modeling. The CIR model is characterized by mean reversion, where interest rates tend to revert to a long-term mean level, and by a volatility that varies with the level of interest rates.
7. In the context of the CIR model, the diffusion parameters are the parameters that govern the stochastic process describing the evolution of interest rates: the mean reversion speed, the long-term mean interest rate, and the volatility of interest rates. Calibrating the diffusion parameters involves estimating them from observed market data, such as the yield curves of existing bonds, in order to accurately model the behavior of interest rates over time.

1 Regulatory context

1.1 Solvency II
Solvency II is a set of regulatory requirements implemented by the European Union (EU) to standardize and strengthen the prudential regulation of insurance companies and ensure their financial stability. It became effective on January 1, 2016, replacing the previous Solvency I regime. Solvency II aims to create a harmonized regulatory framework across EU member states, enhance consumer protection, and improve the overall resilience of the insurance industry.
The main pillars of Solvency II are:
1. Quantitative Requirements: this pillar sets out the quantitative capital requirements that insurance companies must hold to cover their risks adequately. It introduces a risk-based approach to capital adequacy, where capital requirements are determined based on the underlying risks of insurance and reinsurance activities. These requirements are calculated using sophisticated risk models that take into account factors such as market risk, credit risk, underwriting risk, and operational risk.
2. Qualitative Requirements: this pillar focuses on governance, risk management, and transparency. It requires insurance companies to establish robust risk management systems and governance structures to identify, assess, and manage their risks effectively. It also emphasizes the importance of disclosure and transparency to stakeholders, including policyholders, regulators, and investors.
3. Supervisory Review Process: this pillar involves ongoing supervision by national regulatory authorities to ensure compliance with Solvency II requirements. It includes regular assessments of insurance companies' risk management practices, internal models, and capital adequacy. Supervisors have the authority to intervene if they identify weaknesses or deficiencies in an insurer's risk management or financial position.
Solvency II represents a significant shift in regulatory approach compared to its predecessor, Solvency I, by introducing a more risk-sensitive and forward-looking framework. It aims to promote financial stability, enhance market confidence, and protect policyholders by ensuring that insurance companies maintain adequate capital buffers to withstand adverse events and meet their obligations over the long term.

1.2 Economic Scenario Generators


An economic scenario generator (ESG) for interest rates is a mathematical model used to simulate future interest rate movements under different economic scenarios. It is a key tool in financial modeling, risk management, and the valuation of financial products such as bonds, derivatives, and structured products.
The ESG takes into account various factors that influence interest rates, including macroeconomic indicators, central bank policies, market expectations, and historical data. It uses stochastic processes to generate a range of possible interest rate paths over time, reflecting uncertainty and randomness in the economic environment.
ESGs are typically used by financial institutions, insurance companies, asset managers, and regulators for a variety of purposes, including:
Asset Liability Management (ALM): to help financial and insurance institutions assess the impact of interest rate changes on their balance sheets and evaluate the effectiveness of their asset and liability strategies under different economic scenarios.
Risk Management: to quantify and manage interest rate risk in investment portfolios, hedging strategies, and derivative positions. ESGs enable institutions to stress-test their portfolios against adverse market conditions and assess the potential impact on their financial health.
Valuation and Pricing: to value and price fixed income securities, interest rate derivatives, mortgage-backed securities, and other interest rate-sensitive instruments. ESGs provide inputs for option pricing models, yield curve construction, and scenario analysis.
Capital Planning and Stress Testing: economic scenario generators are used in regulatory stress testing exercises to assess the resilience of financial institutions to adverse economic scenarios, including changes in interest rates. They help regulators evaluate the adequacy of capital buffers and risk management practices.
Overall, economic scenario generators play a crucial role in financial decision-making by providing insights into the dynamics of interest rates and their impact on financial markets and institutions. They enable stakeholders to make informed decisions, manage risks effectively, and navigate the complexities of the global economy. There are two possible approaches to ESGs: the risk-neutral approach and the real-world approach. The following sections detail their respective characteristics.

1.2.1 Risk Neutral Approach


The risk-neutral world is a framework in which all discounted price processes are martingales; it is a hypothetical environment whose probabilities are adjusted to take risk aversion into account. In this universe, the expected return on all assets equals the risk-free rate. Risk-neutral economic scenario generators (ESGs) are specifically calibrated to market prices to reflect this environment. The Solvency II standard mandates the use of this framework in the assessment of provisions as well as in the calculation of the solvency ratio. By adopting this approach, insurers can meet regulatory requirements and enhance their ability to assess and manage risks prudently and effectively.
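The martingale property described above can be illustrated in miniature: under the risk-neutral measure, the Monte Carlo average of the stochastic discount factor must reproduce the zero-coupon bond price. The Python sketch below does this for an illustrative Vasicek short rate (the model is presented later in the report); every parameter value here is an assumption chosen for the example.

```python
import numpy as np

# Vasicek short rate under Q: dr_t = a * (b - r_t) dt + sigma dW_t
a, b, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 5.0
n_steps, n_paths = 500, 20_000
dt = T / n_steps

rng = np.random.default_rng(42)
r = np.full(n_paths, r0)
integral = np.zeros(n_paths)          # accumulates int_0^T r_s ds per path
for _ in range(n_steps):
    integral += r * dt
    r = r + a * (b - r) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

mc_price = np.exp(-integral).mean()   # Monte Carlo estimate of E[D(0, T)]

# Closed-form Vasicek zero-coupon price for comparison
B = (1 - np.exp(-a * T)) / a
A = np.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
analytic_price = A * np.exp(-B * r0)

print(mc_price, analytic_price)       # the two estimates should agree closely
```

This is exactly the kind of consistency check performed in the martingale test of Section 9.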

1.2.2 Real World Approach


The risk-neutral world, especially for risky assets, generates returns that deviate from the reality observed in financial markets. Therefore, when more realistic scenarios are needed, the use of an economic scenario generator (ESG) based on the real world becomes essential. A real-world ESG typically relies on historical data. Unlike its risk-neutral counterpart, it is subjective, as it is heavily influenced by the historical data chosen for its calibration (depth, time interval, financial index, etc.). Furthermore, to better match market conditions, a real-world ESG usually allows for the inclusion of expert judgment and macroeconomic analysis. Thus, in this universe, each generated scenario reflects a plausible state of the financial market, taking into account historical characteristics and expert opinions. This approach provides a more realistic perspective and enables financial market professionals to explore a wider range of potential outcomes, thereby enhancing their decision-making and risk management.

1.3 Solvency II Metrics


Insurance regulators, such as the European Insurance and Occupational Pensions Authority (EIOPA), require insurance companies to assess their solvency position under different economic scenarios. ESGs are used to calculate regulatory capital requirements under the Solvency II Standard Formula, such as the SCR (Solvency Capital Requirement), the BE (Best Estimate), and the MCR (Minimum Capital Requirement), and to ensure compliance with regulatory capital adequacy standards.
The formulas for the Solvency II quantitative metrics vary depending on the regulatory framework and the specific risk factors considered in the analysis.

1.3.1 Best Estimate (BE)


As defined by the Solvency II directive and the European Insurance and Occupational Pensions Authority (EIOPA), the Best Estimate represents the insurance company's best estimate of its future liabilities and cash flows, taking into account expected future events and assumptions about future economic and demographic conditions. It is calculated using actuarial techniques and reflects the company's view of the most likely outcome. The Best Estimate forms the basis for calculating the technical provisions, which represent the present value of future insurance liabilities net of future premiums and expenses.

1.3.2 Solvency Capital Requirement (SCR)


The Solvency Capital Requirement is the amount of capital that an insurance company is required to hold to ensure that it can meet its obligations to policyholders with a specified level of confidence over a defined time horizon. It is calculated as the difference between the value of assets and the value of liabilities, with adjustments for risk factors such as market risk, credit risk, underwriting risk, and operational risk. The SCR is designed to provide a buffer against unexpected adverse events and to ensure the financial stability and solvency of insurance companies.

1.3.3 Minimum Capital Requirement (MCR)
The Minimum Capital Requirement is the minimum amount of capital that an insurer is required to hold at all times to ensure its ongoing solvency. It is set at a level sufficient to cover the basic risks faced by the insurer and to prevent insolvency under normal operating conditions.

1.3.4 Own Risk and Solvency Assessment (ORSA)


ORSA is a risk management framework that requires insurers to assess their own risks and solvency position in relation to their risk appetite and capital resources. It involves a comprehensive evaluation of the company's risk profile, including both quantitative and qualitative assessments, to ensure that the company has adequate capital to support its business activities and meet regulatory requirements.

2 Credit Risk Indicators

Before moving to our conceptual framework, we introduce the foundations of credit risk theory and the indicators associated with it.

Credit risk, also known as counterparty risk, is the risk associated with a debtor's inability to repay, resulting in financial loss. A portion of the loss can be recovered by the lending party through various legal procedures. The amount recovered from the total loss is referred to as the recovery amount. Credit risk thus consists of the following main risks:
- Default Risk: the risk that a borrower will fail to repay their debt obligations in full or on time.
- Downgrade Risk: the risk that the credit rating of a borrower or issuer will be downgraded, indicating a higher likelihood of default.
- Counterparty Risk: the risk that the counterparty in a financial transaction will default or fail to fulfill their obligations, resulting in financial loss.

2.1 Recovery Rate


The recovery rate is the proportion of a loan or investment that is recovered after a borrower defaults on their obligation. It represents the amount of funds or assets that can be recovered through various recovery procedures, such as selling collateral or pursuing legal action.
The recovery rate is expressed as a percentage and is typically estimated from historical data, statistical models, or expert judgment. It is a key parameter in credit risk modeling and is used to calculate the Loss Given Default (LGD), which represents the portion of the exposure that is not recovered after default.

2.2 Loss Given Default (LGD)
The Loss Given Default (LGD) is a measure used in credit risk analysis to quantify the extent of the financial loss incurred when a borrower or counterparty defaults on their obligations. The LGD represents the proportion of the exposure that is not recovered after default.
The formula for the Loss Given Default (LGD) is:

LGD = (1 − α) × 100%

where:
LGD = Loss Given Default (expressed as a percentage)
α = recovery rate (expressed as a decimal, typically between 0 and 1)

This formula states that the LGD equals 100% minus the recovery rate expressed as a percentage; it is the percentage of the exposure that is not recovered after default. A higher LGD implies a greater financial loss for the lender or investor in the event of default.
Furthermore, a higher recovery rate implies a lower LGD, as more of the exposure is recovered after default, reducing the overall loss for the lender or investor. Conversely, a lower recovery rate results in a higher LGD and greater losses for the lender or investor. Therefore, accurately estimating the recovery rate is essential for assessing credit risk, determining capital requirements, and making informed lending and investment decisions.

2.3 Spread
The interest rate spread, also known as the credit spread or yield spread, is the difference between the yields of two different fixed-income securities, usually of similar maturities but with different credit ratings or risk levels. It represents the compensation investors demand for holding securities with higher credit risk compared to risk-free securities.
The formula for the interest rate spread between two securities is:

Spread = Yield of Higher-Risk Security − Yield of Risk-Free Security

Alternatively, it can be expressed as:

Spread = Yield on Corporate Bonds − Yield on Treasury Bonds

where:
- Yield of Higher-Risk Security: the yield earned on the bond or security with higher credit risk, such as a corporate bond.
- Yield of Risk-Free Security: the yield earned on a risk-free security, typically a government or Treasury bond.

The interest rate spread reflects the additional compensation investors require for taking on credit risk. A wider spread indicates higher perceived credit risk, while a narrower spread suggests lower perceived risk. It is an important indicator used by investors, analysts, and policymakers to assess credit market conditions and investor sentiment.
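The LGD and spread formulas above reduce to one-line computations; a minimal Python sketch with hypothetical input values:

```python
# Credit risk indicators from the formulas above (hypothetical inputs)
recovery_rate = 0.40                  # alpha, as a decimal

# LGD = (1 - alpha) x 100%
lgd = (1 - recovery_rate) * 100
print(f"LGD: {lgd:.0f}%")             # LGD: 60%

# Spread = yield on corporate bonds - yield on treasury bonds
corporate_yield = 0.045               # assumed 4.5% corporate bond yield
risk_free_yield = 0.025               # assumed 2.5% government bond yield
spread_bp = (corporate_yield - risk_free_yield) * 10_000
print(f"Spread: {spread_bp:.0f} basis points")   # Spread: 200 basis points
```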

3 Pricing of a Zero Coupon Corporate Bond

3.1 Framework
3.1.1 Arbitrage-Free Market
According to Wikipedia, arbitrage is the practice of taking advantage of a difference in prices in two or more markets: striking a combination of matching deals to capitalize on the difference, the profit being the difference between the market prices at which the unit is traded. When used by academics, an arbitrage is a transaction that involves no negative cash flow in any probabilistic or temporal state and a positive cash flow in at least one state; in simple terms, it is the possibility of a risk-free profit.

Mathematically, an arbitrage opportunity is defined as a portfolio with V_0 = 0 satisfying

P(V_t ≥ 0) = 1 and P(V_t ≠ 0) > 0, 0 < t ≤ T,

where V_t denotes the portfolio value at time t and T is the time at which the portfolio ceases to be available on the market. This means that the value of the portfolio is never negative, and is guaranteed to be strictly positive at least once over its lifetime.
Conversely, a market is arbitrage-free when no such portfolio exists: any portfolio with V_0 = 0 and P(V_t ≥ 0) = 1 for all 0 < t ≤ T must also satisfy P(V_t > 0) = 0.
In a market where profitable arbitrage opportunities are absent, prices are considered to be in an arbitrage equilibrium, or arbitrage-free state. Such an equilibrium is a fundamental prerequisite for achieving a broader economic equilibrium. Within quantitative finance, the no-arbitrage assumption is leveraged to compute a unique risk-neutral price for derivatives.

We will suppose in the rest of the project that the markets in which we operate are arbitrage-free, and we price our zero-coupon corporate bond by discounting its future cash flows at the corresponding discount rates. In doing so, a more accurate price can be obtained than with a simple present-value pricing approach.

3.1.2 Market Efficiency Hypothesis
The efficient-market hypothesis (EMH) is a hypothesis in financial economics stating that asset prices reflect all available information. A direct implication is that it is impossible to "beat the market" consistently on a risk-adjusted basis, since market prices should react only to new information.

3.2 Pricing
We assume that the risk-free asset B(t) is defined by

dB(t) = r_t B(t) dt, with B(0) = 1,

where the interest rate r_t is represented by a positive stochastic process. As a consequence, we have:

B(t) = exp( ∫_0^t r_s ds ).

The stochastic discount factor is then defined as:

D(t, T) = B(t) / B(T) = exp( − ∫_t^T r_s ds ).

Thus, the price at time t of a zero-coupon bond with maturity T is given by:

P(t, T) = E^Q[ D(t, T) | F_t ],

where Q is the risk-neutral probability measure. More generally, the price (value) X_t of an asset at time t generating a payoff X_T at time T is given by:

X_t = E^Q[ D(t, T) · X_T | F_t ].

3.2.1 Lemma (Change of Filtration)
We define the filtration that contains both the market information and the default information as G_t = F_t ∨ σ({τ ≤ u}, u ≤ t). Then, for any integrable random variable Y and for any t ≤ T:

E^Q[ 1_{τ>T} Y | G_t ] = E^Q[ 1_{τ>t} 1_{τ>T} Y | G_t ] = 1_{τ>t} · E^Q[ 1_{τ>T} Y | F_t ] / Q(τ > t | F_t).

3.2.2 Definition:
Let Q(A | F_t) be defined P-almost surely for A ∈ F (it is not defined on null sets N_A ∈ F with Q(N_A) = 0). There exists a regular conditional probability Q_t(A)(ω), A ∈ F, ω ∈ Ω, such that:
- Q_t(·)(ω) is a probability measure on (Ω, F) for every ω ∈ Ω;
- for every A ∈ F, the map ω ↦ Q_t(A)(ω) is F_t-measurable;
- Q(A | F_t)(ω) = Q_t(A)(ω), P-almost surely, for every A ∈ F.
As a consequence, we have:

Q_t(τ > T | τ > t) := Q_t({τ > T} ∩ {τ > t}) / Q_t(τ > t) = Q_t(τ > T) / Q_t(τ > t).
3.2.3 Zero-Coupon Pricing
Suppose that, following the default of the counterparty, the holder of a bond (with a nominal value of 1 euro) recovers a fraction α (the recovery) at the instant of default τ. The loss in case of payment default, denoted LGD for Loss Given Default, is therefore 1 − α. The discounted payoff, taking into account a default at time τ, is written as:

CF_corp(t, T) = D(t, T) · 1_{τ>T} + α · D(t, τ) · 1_{τ≤T}.

If instead the recovery is paid at maturity T, we have:

CF_corp(t, T) = D(t, T) · (1 − 1_{τ≤T}) + α · D(t, T) · 1_{τ≤T}
             = D(t, T) − (1 − α) · D(t, T) · 1_{τ≤T}
             = D(t, T) − LGD · D(t, T) · 1_{τ≤T}
             = (LGD + α) · D(t, T) − LGD · D(t, T) · 1_{τ≤T}
             = α · D(t, T) + LGD · (1 − 1_{τ≤T}) · D(t, T)
             = α · D(t, T) + LGD · D(t, T) · 1_{τ>T},

using LGD + α = 1. The price at time t of the risky zero-coupon bond is therefore given by:

P_corp(t, T) · 1_{τ>t} = E^Q[ CF_corp(t, T) | G_t ]
                       = E^Q[ α · D(t, T) + LGD · D(t, T) · 1_{τ>T} | G_t ]
                       = α · E^Q[ D(t, T) | G_t ] + LGD · E^Q[ D(t, T) · 1_{τ>T} | G_t ].

Using the change-of-filtration lemma, we have:

P_corp(t, T) = α · P(t, T) + 1_{τ>t} · LGD · E^Q[ 1_{τ>T} D(t, T) | F_t ] / Q(τ > t | F_t),

hence the result (when the discount factor and the default time are independent):

P_corp(t, T) = P(t, T) · ( α + 1_{τ>t} · LGD · Q_t(τ > T | τ > t) ).

3.2.4 Spread Formula:
In the previous section, we saw that the price of a risky corporate zero-coupon bond can be written as a linear function of the issuer's default probability. Assuming that the interest rate is deterministic, the spread S of this bond (on the event {τ > t}) can be written as:

S_corp(t, T) = − ln( 1 − LGD · (1 − Q_t(τ > T | τ > t)) ) / (T − t).
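The pricing and spread formulas of Sections 3.2.3 and 3.2.4 can be checked numerically. The Python sketch below assumes a deterministic flat risk-free rate and illustrative values for the recovery rate and the survival probability, and verifies that the spread implied by the two prices coincides with the closed-form expression:

```python
import numpy as np

# Illustrative inputs (deterministic flat risk-free rate, on the event tau > t)
r, t, T = 0.02, 0.0, 5.0        # flat short rate, valuation date, maturity
alpha = 0.40                    # recovery rate
lgd = 1 - alpha                 # Loss Given Default
q_survival = 0.90               # Q_t(tau > T | tau > t)

p_riskfree = np.exp(-r * (T - t))                  # P(t, T) for a flat rate
p_corp = p_riskfree * (alpha + lgd * q_survival)   # corporate zero-coupon price

# Spread obtained two ways: from the prices, and from the closed formula
s_from_prices = -np.log(p_corp / p_riskfree) / (T - t)
s_formula = -np.log(1 - lgd * (1 - q_survival)) / (T - t)
print(p_corp, s_from_prices, s_formula)   # the two spreads coincide
```

The equality holds because α + LGD · q = 1 − LGD · (1 − q) whenever LGD = 1 − α.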

3.2.5 Probability of Default (PD)
With the regular conditional probability Q_t introduced in the definition above, the survival probability over [t, T] is:

Q_t(τ > T | τ > t) := Q_t({τ > T} ∩ {τ > t}) / Q_t(τ > t) = Q_t(τ > T) / Q_t(τ > t),

and the probability of default over [t, T] is its complement:

PD(t, T) = 1 − Q_t(τ > T | τ > t).

3.2.6 Poisson Point Process

We would like to model the default intensity using a Poisson point process.
First, we describe its construction and definition.
We start with the construction of a Poisson point process (Nt).
For a fixed value of λ > 0, let (τn)n≥1 be a sequence of i.i.d. (independent
and identically distributed) random variables with exponential distribution
E(λ). We set:
- T0 = 0;
- for all n ≥ 1, Tn = τ1 + . . . + τn;
- and for all t ≥ 0, Nt = sup{n ≥ 0 | Tn ≤ t}.
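This construction translates directly into code. The following Python sketch (the intensity and horizon are illustrative) builds the jump times from i.i.d. exponential interarrivals and recovers Nt:

```python
import random

def poisson_jump_times(lam, horizon, rng):
    """Jump times T_n = tau_1 + ... + tau_n of a homogeneous Poisson process,
    built from i.i.d. Exp(lam) interarrival times, kept up to the horizon."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return times
        times.append(t)

def count_at(times, t):
    """N_t = sup{ n >= 0 | T_n <= t }."""
    return sum(1 for u in times if u <= t)

# Averaged over many paths, N_1 should be close to lambda (here 2.0),
# consistent with N_1 ~ Poisson(lambda).
rng = random.Random(42)
mean_n1 = sum(count_at(poisson_jump_times(2.0, 1.0, rng), 1.0)
              for _ in range(5000)) / 5000
```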

3.2.7 Proposition
The process (Nt) defined above satisfies the following properties:
1. N0 = 0;
2. For all ω ∈ Ω, the function t ↦ Nt(ω) is right-continuous;
3. (Nt) has independent increments: for all n ∈ N* and for all 0 ≤ t0 <
   . . . < tn, the random variables Nt1 − Nt0, . . . , Ntn − Ntn−1 are indepen-
   dent;
4. Stationarity: for all s ≥ 0 and t > 0, the random variable Ns+t − Ns follows a
   Poisson distribution with parameter λt.
We then have the following definition:

3.2.8 Definition
The process (Nt) is a Poisson point process with parameter λ > 0.

3.2.9 Definition
We call hazard function of the Poisson process the function Λ defined as:

Λ(t) = ∫_0^t λ(s) ds

3.2.10 Modeling the Intensity of Default
The default instant is defined as the first jump of a time-inhomogeneous Poisson
process (cf. Brigo & Mercurio on short-rate models). We then define the
default intensity λ(t) as the risk-neutral intensity of this process.
The instant of default τ can then be written as:

τ = inf{ t ≥ 0 | ∫_0^t λ(s) ds ≥ ξ }

where ξ is a random variable with an exponential distribution of parameter 1.
In other words, the probability that the instant of default τ belongs to an
interval of the form [t, t + ∆t[ is given by:

Q(τ ∈ [t, t + ∆t[ | τ > t, Ft) = λ(t)∆t

The intensity of the Poisson point process can be:
1. Constant: τ is the instant of the first jump of a time-homogeneous Poisson
   process;
2. Deterministic: τ is the instant of the first jump of a time-inhomogeneous
   Poisson process. This choice allows us to capture the term structure of
   spreads; there is no uncertainty in the intensity and therefore no volatility
   in credit spreads.
3. Stochastic: τ is the instant of the first jump of a Cox process. This choice
   makes it possible to capture the time-varying nature of the parameter λ:
   the term structure of credit spreads, the uncertainty in the intensity and
   therefore the volatility of credit spreads are all captured.
In what follows, we first consider the intensity as a deterministic function, then
as a random variable. The question of which model to choose in the stochastic
case will arise later.

3.2.11 Intensity of Default and the Probability of Survival

1. Definition
A time-inhomogeneous Poisson process Nt is a Poisson process whose
intensity varies with time. Defining the hazard function as

Λ(t) = ∫_0^t λ(s) ds,

the process is obtained from a standard Poisson point process (Tt) by the time
change

Nt = T_Λ(t).

Thus, since Nt = T_Λ(t), the instant of the first jump τ of N occurs when T
jumps, at time Λ(τ).
Recall that for a homogeneous Poisson process, τ ∼ Exp(λ) and Λ(τ) = λτ ∼
Exp(1); this implies that for a time-inhomogeneous Poisson process, Λ(τ) :=
ξ ∼ Exp(1). It also follows that:

τ = Λ⁻¹(ξ)
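The identity τ = Λ⁻¹(ξ) gives a direct way to sample default times. A Python sketch, using the purely illustrative affine intensity λ(t) = a + bt, chosen here only because Λ is then invertible in closed form:

```python
import math, random

def default_time(a, b, xi):
    """tau = Lambda^{-1}(xi) for the illustrative affine intensity
    lambda(t) = a + b*t, i.e. Lambda(t) = a*t + b*t**2/2, with xi ~ Exp(1)."""
    return (-a + math.sqrt(a * a + 2.0 * b * xi)) / b

# The empirical survival rate Q(tau > t) should match exp(-Lambda(t)).
rng = random.Random(0)
a, b, t, n = 0.02, 0.01, 5.0, 20000
survived = sum(1 for _ in range(n)
               if default_time(a, b, rng.expovariate(1.0)) > t) / n
target = math.exp(-(a * t + 0.5 * b * t * t))
```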

3.2.12 Spread Reformulation

1. Definition
We call probability of survival the probability, knowing the complete
filtration Gt, that there is no default of payment:

S(t, T) = 1 − Qt(τ ≤ T | τ > t)
        = 1 − [Qt(Λ(τ) > Λ(t)) − Qt(Λ(τ) > Λ(T))] / Qt(Λ(τ) > Λ(t))
        = 1 − 1 + Qt(Λ(τ) > Λ(T)) / Qt(Λ(τ) > Λ(t))
        = e^{−Λ(T)} / e^{−Λ(t)}
        = e^{−∫_0^T λ(s)ds} / e^{−∫_0^t λ(s)ds}
        = e^{−∫_t^T λ(s)ds}

It follows that the spread can be written:

Scorp(t, T) = − ln(1 − LGD · (1 − Qt(τ > T | τ > t))) / (T − t)
            = − ln(1 − LGD · (1 − S(t, T))) / (T − t)
            = − ln(1 − LGD · (1 − e^{−∫_t^T λ(s)ds})) / (T − t)

This last formula will be used for our modeling.
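As a quick illustration, take a constant intensity λ; then S(t, T) = e^{−λ(T−t)}, and the spread −ln(1 − LGD(1 − S))/(T − t) is roughly λ · LGD for small λ (the so-called credit triangle). A Python sketch with illustrative values:

```python
import math

def spread_constant_intensity(lam, lgd, t, T):
    """S(t,T) = -ln(1 - LGD*(1 - e^{-lam*(T-t)})) / (T - t), constant lambda."""
    survival = math.exp(-lam * (T - t))
    return -math.log(1.0 - lgd * (1.0 - survival)) / (T - t)

# Illustrative values: lambda = 1%, LGD = 60%, 5-year horizon.
s = spread_constant_intensity(lam=0.01, lgd=0.6, t=0.0, T=5.0)
```

The result is close to 0.01 · 0.6 = 60 basis points, as the approximation suggests.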

3.2.13 Stochastic Intensity

1. Definition
Denote by Ft the filtration containing market information and by λt a right-
continuous stochastic process adapted to Ft. We call Cox process the process
Λ(t) defined as follows:

Λ(t) = ∫_0^t λs ds

2. Definition
The default intensity λt is a stochastic process that satisfies:
1. λt is adapted to Ft;
2. λt is right-continuous;
3. λt is strictly positive for all t.

3. Proposition
The Cox process, conditionally on Ft, preserves the structure of the Poisson
process, and all the results seen in the previous section for λ(t) remain
valid for λt.
In particular, we have Λ(τ) = ξ ∼ Exp(1), with ξ independent of Ft. In
a similar way, we have for the probability of survival:

Q(τ > t) = Q(Λ(τ) > Λ(t)) = Q(ξ > Λ(t)) = E[Q(ξ > Λ(t) | Ft^λ)] = E[e^{−∫_0^t λs ds}]

which is completely analogous to the price of a zero-coupon bond with a
stochastic short-rate process λs.

4 Interest Rate Models

4.1 The Vasicek Model


Vasicek (1977) assumed that the instantaneous spot rate under the real-world
measure evolves as an Ornstein-Uhlenbeck process with constant coefficients.
For a suitable choice of the market price of risk, this is equivalent to assuming
that r follows an Ornstein-Uhlenbeck process with constant coefficients under
the risk-neutral measure as well, that is

dr(t) = k[θ − r(t)]dt + σdW(t),   r(0) = r0,

where r0, k, θ, σ are positive constants and W(t) is a Brownian motion.

By integrating this equation, we obtain, for each s ≤ t,

r(t) = r(s)e^{−k(t−s)} + θ(1 − e^{−k(t−s)}) + σ ∫_s^t e^{−k(t−u)} dW(u),

so that r(t) conditional on Fs is normally distributed with mean and variance
given respectively by

E{r(t) | Fs} = r(s)e^{−k(t−s)} + θ(1 − e^{−k(t−s)})
Var{r(t) | Fs} = (σ²/2k)(1 − e^{−2k(t−s)}).
This implies that, for each time t, the rate r(t) can be negative with positive
probability. The possibility of negative rates is indeed a major drawback of the
Vasicek model. However, achieving analytical tractability when assuming other
distributions for the process r is hardly feasible.
As a consequence of the conditional expectation above, the short
rate r exhibits mean reversion, since the expected rate tends, as t goes to
infinity, to the value θ. The fact that θ can be regarded as a long-term average
rate could also be inferred from the dynamics of the Ornstein-Uhlenbeck process
itself. Additionally, we notice that the drift of the process r is positive whenever
the short rate is below θ and negative otherwise, so that r is consistently nudged
towards the level θ, on average, over time.
The price of a pure-discount bond (with a nominal value of 1 euro) is obtained
by computing the risk-neutral expectation of the stochastic discount factor. We
obtain

P(t, T) = A(t, T) e^{−B(t,T) r(t)},

where

A(t, T) = exp{ (θ − σ²/(2k²)) [B(t, T) − T + t] − (σ²/(4k)) B(t, T)² }
B(t, T) = (1/k) [1 − e^{−k(T−t)}].

4.2 The Cox Ingersoll Ross Model (CIR)


The general equilibrium approach developed by Cox, Ingersoll and Ross
(1985) led to the introduction of a "square-root" term in the diffusion coef-
ficient of the instantaneous short-rate dynamics proposed by Vasicek (1977).
The resulting model has been a benchmark for many years because of its ana-
lytical tractability and the fact that, contrary to the Vasicek (1977) model, the
instantaneous short rate cannot become negative. The model formulation under
the risk-neutral measure Q is

dr(t) = k(θ − r(t))dt + σ √r(t) dW(t),   r(0) = r0,

with:
- k: speed of mean reversion parameter. It measures the rate at which the
  interest rate returns to its long-term level θ;
- θ: long-term level. This is the value that the interest rate tends to reach
  over the long term;
- σ: volatility. It represents the amplitude of the variation of the interest
  rate;
- dWt: the differential of the Brownian motion.
The condition

2kθ > σ²

has to be imposed to ensure that the origin is inaccessible to the process, so
that we can guarantee that r remains strictly positive.

As we saw in the last chapter, the survival probability under a stochastic
default intensity is completely analogous to the price of a zero-coupon bond
with a stochastic short-rate process λs.
We reformulate the CIR equation as:

dλt = k(θ − λt)dt + σ √λt dW(t)

where λt represents the intensity of default at time t.
This equation tells us that the default intensity varies over time according to
two main factors:
1. The tendency to converge towards the long-term mean θ, controlled by
   the drift term k(θ − λt)dt.
2. The condition 2kθ > σ² guarantees that the solution of this SDE remains
   strictly positive, thus ensuring that our intensity of default remains rea-
   listic from a financial point of view. This condition also guarantees that
   the process remains stable and converges towards its long-term mean.
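For simulation purposes, this SDE can be discretized with an Euler scheme; the square root requires some care near zero. A Python sketch using a simple truncation inside the square root (the parameter values are illustrative and satisfy the Feller condition 2kθ > σ²):

```python
import math, random

def simulate_cir_intensity(lam0, k, theta, sigma, T, n_steps, rng):
    """Euler scheme for d(lambda) = k(theta - lambda)dt + sigma*sqrt(lambda)dW.
    The intensity is floored at zero inside the drift and diffusion terms so
    that the discretized path stays well defined."""
    dt = T / n_steps
    lam, path = lam0, [lam0]
    for _ in range(n_steps):
        lam_pos = max(lam, 0.0)
        lam += k * (theta - lam_pos) * dt \
               + sigma * math.sqrt(lam_pos * dt) * rng.gauss(0.0, 1.0)
        path.append(lam)
    return path

# Illustrative parameters: 2*k*theta = 0.018 > sigma^2 = 0.0025.
rng = random.Random(7)
path = simulate_cir_intensity(0.02, 0.3, 0.03, 0.05, T=30.0, n_steps=3000, rng=rng)
```

This is one common discretization choice; the appendix's R code instead clips the simulated value at zero after each step, which serves the same purpose.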

5 Presentation of the Calibration

In finance, model calibration involves adjusting the model's parameters so
that the theoretical results best match the data observed in the markets. His-
torically, the calibration of interest rate models is of crucial importance as it
allows financial institutions to better estimate and manage the risks associated
with rate fluctuations. These models are used to evaluate a variety of financial
instruments, ranging from simple bonds to more complex interest rate deriva-
tives.
In a historical context, the calibration of interest rate models has gained
importance with the increased sophistication of financial markets and the need
for accurate pricing of financial instruments. Models like those of Vasicek and
CIR allow capturing the dynamics of interest rates by considering factors such
as interest rate risk, instrument maturity, and default probability.
The Vasicek model, introduced in the 1970s, was one of the first to consider
the interest rate as a stochastic process. This model and its extensions, including
the CIR model developed in the 1980s, have enabled an understanding of the
random nature of interest rates and provided analytical formulas for pricing
bonds and other derivatives. The calibration of these models, through methods
such as least squares or maximum likelihood, is fundamental to align theory with
practice, thereby enabling more informed decision-making in investment and
risk hedging. Historically, the calibration of interest rate models is particularly
important because these models are at the core of credit risk assessment. Indeed,
the interest rate embedded in bond prices and other debt instruments reflects
not only monetary policy expectations and economic conditions but also the
credit risk associated with the debt issuer.
Accurate calibration allows for the differentiation of interest rate compo-
nents related to general market movements from those specific to the issuer's
risk. This provides a more refined assessment of risk premiums and helps finan-
cial institutions better manage their credit portfolios. Moreover, calibration can
reveal deviations from normal market conditions, which may signal changes in
credit risk perception.
The Vasicek and CIR models also allow for stress scenario simulation and
testing the resilience of portfolios to extreme events. By adjusting parameters to
match historical data, risk managers can assess the likelihood of default and po-
tential exposure to losses, which is crucial for strategic planning and compliance
with regulatory capital requirements.

5.1 Data

(a) Yield Curve for AAA Bonds (b) Yield Curve for AA Bonds

(a) Yield Curve for A Bonds (b) Yield Curve for BBB Bonds

(a) Yield Curve for BB Bonds (b) Yield Curve for B Bonds

A yield curve, also known as a term structure of interest rates, is a graph
that shows the interest rates at different maturities for bonds of similar credit
quality but with different maturity dates. It is crucial for understanding the
current state and future expectations of the debt market.
Yield curves can vary based on the credit rating of the bond issuers. Credit
rating, often indicated by notations like AAA, AA, A, BBB, BB, B, etc., is
an indicator of the solvency of the company or government issuing the bond.
Ratings are assigned by rating agencies such as Standard and Poor's, Moody's,
and Fitch Ratings, and they reflect the risk associated with the bond. The higher
the rating, the lower the presumed default risk, and therefore the lower the
interest rate generally demanded by investors.
- AAA: This rating is the highest possible and indicates the best credit quality.
  Issuers rated AAA are judged to have an extremely strong capacity to repay
  their debt.
- AA: This is a very good credit rating, although slightly more risky than AAA.
- A: Bonds with this rating have a good repayment capacity but are somewhat
  more sensitive to adverse economic conditions.
- BBB: This is the lowest rating considered "investment grade". It indicates
  that the issuer has an adequate repayment capacity, but future economic
  factors or changes in circumstances could affect this capacity.
- BB, B: These ratings indicate a speculative grade; they carry a higher credit
  risk and, consequently, offer higher interest rates. An issuer with a BB or B
  rating is more likely to face financial difficulties.

5.2 Construction of a Yield Curve


Our yield curves present 30 points, one for each maturity. Now, we would
like to fill in the gaps between our points to create a continuous curve that can
provide an estimate of rates for any maturity. To achieve this, we turned to
the cubic spline method.

Cubic Splines
A cubic spline is a sequence of piecewise third-degree polynomials defined
over adjacent intervals. For a set of data points (x0, y0), (x1, y1), . . . , (xn, yn),
with x0 < x1 < . . . < xn, the cubic spline S(x) is constructed as follows:
1. On each interval [xi, xi+1], S(x) is a cubic polynomial, yielding n poly-
   nomials for the n intervals.
2. Each cubic polynomial can be expressed by the following equation:

   Si(x) = ai + bi(x − xi) + ci(x − xi)² + di(x − xi)³

   where ai, bi, ci, di are the coefficients to be determined for interval i.
3. Continuity of the spline and of its derivatives at the interior points xi is
   ensured by:
   - Continuity: Si−1(xi) = Si(xi) = yi;
   - Continuity of the first derivative: S′i−1(xi) = S′i(xi);
   - Continuity of the second derivative: S′′i−1(xi) = S′′i(xi).

4. Boundary conditions for a natural spline are defined by:

   S′′(x0) = 0 and S′′(xn) = 0

5. A system of linear equations is formed from these conditions and is solved
   to obtain the coefficients of the polynomials.
6. The practical solution of these equations is often performed using ma-
   trix methods that exploit the tridiagonal structure of the corresponding
   matrices, making the algorithm numerically efficient.
The result is a cubic spline function S(x) that is continuous, with conti-
nuous first and second derivatives, providing a smooth fit to the data.
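The algorithm above can be implemented directly. The sketch below is in Python with hypothetical rate points (our actual computations use R's spline function); it solves the natural-spline tridiagonal system with the Thomas algorithm:

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Build the natural cubic spline through (xs, ys); return it as a callable.

    The knot second derivatives M_i solve the tridiagonal system
    h_{i-1} M_{i-1} + 2(h_{i-1}+h_i) M_i + h_i M_{i+1} = rhs_i,
    with natural boundary conditions M_0 = M_n = 0."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1)          # sub-diagonal
    b = [1.0] * (n + 1)          # diagonal (rows 0 and n encode M = 0)
    c = [0.0] * (n + 1)          # super-diagonal
    d = [0.0] * (n + 1)          # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):    # Thomas algorithm: forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)          # back substitution; M_0 = M_n = 0
    for i in range(n - 1, 0, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(x):
        i = min(max(bisect_right(xs, x) - 1, 0), n - 1)
        d1, d0 = xs[i + 1] - x, x - xs[i]
        return ((m[i] * d1 ** 3 + m[i + 1] * d0 ** 3) / (6.0 * h[i])
                + (ys[i] / h[i] - m[i] * h[i] / 6.0) * d1
                + (ys[i + 1] / h[i] - m[i + 1] * h[i] / 6.0) * d0)
    return s

# Hypothetical rate points (maturity in years, rate), purely illustrative.
xs = [1.0, 2.0, 3.0, 5.0, 10.0, 20.0, 30.0]
ys = [0.037, 0.033, 0.031, 0.030, 0.031, 0.034, 0.036]
curve = natural_cubic_spline(xs, ys)
```

The resulting curve passes exactly through the knots and can be evaluated at any intermediate maturity.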

6 Vasicek

The Vasicek model is a single-factor model for the term structure of interest
rates, which describes the evolution of short-term interest rates. It was introdu-
ced by Oldřich Vašíček in 1977 and is widely used for its analytical simplicity
and its ability to adapt to different shapes of yield curves.

6.1 Specifics of the Vasicek Model


The Vasicek model is characterized by the following stochastic differential
equation for the instantaneous interest rate rt:

drt = k(θ − rt )dt + σdWt , (1)

where:
- k > 0 is the speed of mean reversion parameter;
- θ is the long-term average interest rate, towards which rt gravitates;
- σ is the volatility of the interest rate;
- Wt is a standard Brownian motion, representing the uncertainty and the
  random component of the interest rate.
One of the important features of the Vasicek model is that it allows for
negative interest rates, a direct consequence of the Gaussian distribution of the
short rate. It is also known for its mean reversion, a property that drives interest
rates towards their long-term average.

6.2 Analytical Solution


The Vasicek model has an analytical solution for the price of a zero-coupon
bond, which makes it useful for valuing interest rate derivatives. The price of a
zero-coupon bond at time t for a maturity T is given by :

P (t, T ) = A(t, T )e−B(t,T )rt , (2)

where A(t, T ) and B(t, T ) are deterministic functions of time and model para-
meters.

6.3 Exploring the Maximum Likelihood Estimation Method for Calibration Purposes

All our detailed and justified handwritten calculations are included in
the bibliography [5].
The transition density of the discretized process is:

f(r_{i+1} | r_i, a, b, σ) = (1/√(2πσ²)) exp( −(r_{i+1} − r_i e^{−a} − b(1 − e^{−a}))² / (2σ²) )

The likelihood function L is:

L(a, b, σ) = ∏_{t=1}^{n−1} f(r_{t+1} | r_t, a, b, σ)

ln L(a, b, σ) = Σ_{t=1}^{n−1} ln f(r_{t+1} | r_t, a, b, σ)
              = Σ_{t=1}^{n−1} ln[ (1/√(2πσ²)) exp( −(r_{t+1} − r_t e^{−a} − b(1 − e^{−a}))² / (2σ²) ) ]
              = −((n−1)/2) ln(2π) − ((n−1)/2) ln(σ²) − (1/(2σ²)) Σ_{t=1}^{n−1} (r_{t+1} − r_t e^{−a} − b(1 − e^{−a}))²

Setting ∂ ln L(a, b, σ)/∂a = 0:

−(1/σ²) Σ_{i=1}^{n−1} (r_i − b) e^{−a} (r_{i+1} − r_i e^{−a} − b(1 − e^{−a})) = 0

After expanding and simplifying, here is what we obtain:

a = −ln( Σ_{i=1}^{n−1} (r_{i+1} − b)(r_i − b) / Σ_{i=1}^{n−1} (r_i − b)² )        (3)

Setting ∂ ln L(a, b, σ)/∂b = 0:        (4)

(1/σ²)(1 − e^{−a}) Σ_{i=1}^{n−1} (r_{i+1} − r_i e^{−a} − b(1 − e^{−a})) = 0

b = Σ_{i=1}^{n−1} (r_{i+1} − r_i e^{−a}) / ((n − 1)(1 − e^{−a}))

 
Setting ∂ ln L(a, b, σ)/∂σ = 0:

σ² = (1/(n − 1)) Σ_{i=1}^{n−1} (r_{i+1} − r_i e^{−a} − b(1 − e^{−a}))²
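The three estimating equations are coupled (a depends on b and conversely), so in practice they can be solved by a simple fixed-point iteration. A Python sketch, which includes the (n − 1) normalization in the b and σ² estimators and checks the result on a simulated path (all parameter values are illustrative):

```python
import math, random

def fit_vasicek_ar1(r, tol=1e-10, max_iter=500):
    """Maximum-likelihood estimates (a, b, sigma) for the discretized Vasicek
    dynamics r_{i+1} = r_i e^{-a} + b (1 - e^{-a}) + eps, eps ~ N(0, sigma^2).
    The coupled equations for a and b are solved by fixed-point iteration."""
    n = len(r)
    x, y = r[:-1], r[1:]
    b = sum(r) / n                        # initial guess: sample mean
    e_a = 0.0
    for _ in range(max_iter):
        num = sum((yi - b) * (xi - b) for xi, yi in zip(x, y))
        den = sum((xi - b) ** 2 for xi in x)
        e_a = num / den                   # e^{-a}
        b_new = sum(yi - xi * e_a for xi, yi in zip(x, y)) / ((n - 1) * (1.0 - e_a))
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = -math.log(e_a)
    resid = [yi - xi * e_a - b * (1.0 - e_a) for xi, yi in zip(x, y)]
    sigma = math.sqrt(sum(e * e for e in resid) / (n - 1))
    return a, b, sigma

# Recover known parameters (a = 0.1, b = 0.03, sigma = 0.002) from a path.
rng = random.Random(1)
a0, b0, s0 = 0.1, 0.03, 0.002
r = [0.03]
for _ in range(10000):
    r.append(r[-1] * math.exp(-a0) + b0 * (1.0 - math.exp(-a0)) + rng.gauss(0.0, s0))
a_hat, b_hat, s_hat = fit_vasicek_ar1(r)
```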

7 Calibration using the Cox-Ingersoll-Ross model

The Cox-Ingersoll-Ross (CIR) model is widely used in finance to model the
evolution of interest rates. It was developed as an extension of the Vasicek model
to better capture the non-constant, level-dependent variance observed in real
interest rate data. The stochastic differential equation that governs the CIR
model is as follows:

drt = α(θ − rt)dt + σ √rt dWt

7.1 Diffusion Parameters


 α : Speed of mean reversion coecient. It measures the rate at which the
interest rate returns to its long-term level θ.
 θ : Long-term level. This is the value that the interest rate tends to reach
over the long term.
 σ : Volatility. It represents the random variation of the interest rate.
 Wt : Standard Brownian motion. It represents the random noise in the
process.

7.2 Properties of the Model


- The CIR model is governed by a stochastic differential equation in which
  the volatility of the interest rate is modeled by the square root of the
  interest rate itself.
- The equation ensures that the interest rate rt remains non-negative, which
  is an important characteristic in the financial context.
- The CIR model allows for modeling the level-dependent variance observed
  in real long-term interest rate data.
- In practice, the CIR model is often used to estimate interest rate para-
  meters from observed data and to evaluate financial derivative products
  such as variable rate bonds.
The Cox-Ingersoll-Ross (CIR) model is a powerful tool for modeling the
evolution of interest rates over time. By accounting for the level-dependence
of the interest-rate volatility, it captures important characteristics of real data
and thus is a crucial element of quantitative financial modeling.

7.3 Calibration of the Model


- The Cox-Ingersoll-Ross (CIR) model is used to model the evolution of
  interest rates.
- Calibration of this model allows for the adjustment of theoretical rates
  to market-observed rates.
Objective of Calibration
- Objective: minimize the gap between modeled spreads and market spreads:

  min_{λ0,κ,θ,σ} ‖Spread_model(λ0, κ, θ, σ) − Spread_market‖²

Constraints and Optimization Method

- Constraints on the parameters ensure the positivity of rates.
- The Nelder-Mead method is used for the optimization.

7.3.1 Methodology
The calibration process proceeded as follows (the code is provided in
the bibliography):
1. We began by defining the set of maturities for which we have market-
   observed rates.
2. For each credit rating category, we used cubic splines to smooth the
   observed rate curves and interpolate rates for a wider range of maturities.
3. With these interpolated data, we calculated the spreads relative to a
   benchmark rate curve.
4. We then defined a survival probability function based on the parameters
   of the CIR model to be calibrated: λ, k, θ, and σ.
5. An objective function was defined to measure the discrepancy between
   the observed spreads and those generated by the model.
6. The Nelder-Mead optimization algorithm was used to find the parameters
   that minimize this objective function.
7. About 200 iterations were performed for each credit rating category to
   find the optimal set of parameters.
8. The results were analyzed and compared to ensure the coherence and
   accuracy of the model.
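The pieces of this pipeline (the closed-form CIR survival probability, the model spread, and the penalized least-squares objective) can be sketched as follows in Python. The appendix gives our actual R implementation; here the spread formula of Section 3 with an assumed LGD of 0.6 is used, and the "market" spreads are synthetic, generated from known parameters so that the objective vanishes at the truth:

```python
import math

def cir_survival(lam0, k, theta, sigma, m):
    """Closed-form CIR survival probability S(0, m) = A(m) * exp(-lam0 * B(m));
    A and B are the usual CIR bond-price coefficients."""
    h = math.sqrt(k * k + 2.0 * sigma * sigma)
    den = (k + h) * math.expm1(h * m) + 2.0 * h
    A = (2.0 * h * math.exp((k + h) * m / 2.0) / den) ** (2.0 * k * theta / sigma ** 2)
    B = 2.0 * math.expm1(h * m) / den
    return A * math.exp(-lam0 * B)

def model_spread(params, m, lgd=0.6):
    """Spread implied by the CIR intensity via the formula of section 3.2.12."""
    lam0, k, theta, sigma = params
    s = cir_survival(lam0, k, theta, sigma, m)
    return -math.log(1.0 - lgd * (1.0 - s)) / m

def objective(params, market, maturities, lgd=0.6):
    """Sum of squared spread gaps, with a penalty term when the Feller
    condition 2*k*theta > sigma^2 is violated (as in our R code)."""
    _, k, theta, sigma = params
    feller = min(0.0, 2.0 * k * theta - sigma ** 2)
    gaps = sum((model_spread(params, m, lgd) - s_mkt) ** 2
               for m, s_mkt in zip(maturities, market))
    return gaps + 1000.0 * feller ** 2

# Synthetic "market" spreads generated from known, illustrative parameters.
true_params = (0.008, 0.22, 0.023, 0.0001)
mats = list(range(1, 31))
market = [model_spread(true_params, m) for m in mats]
```

Feeding objective to a Nelder-Mead routine (optimx in our R code, or scipy.optimize.minimize with method="Nelder-Mead" in Python) implements step 6.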

8 Results

The optimal parameters found by the calibration algorithm for each credit
rating category are presented in Table 1.
The calibration of the CIR model proved to be more robust than that of the
Vasicek model, accurately reflecting the observed term structure of interest
rates in the market.

(a) AAA bonds (b) AA bonds

(c) A bonds (d) BBB bonds

Figure 4 – Comparison of CIR Model Spread vs. Market Spread

(a) BB bonds (b) B bonds

Figure 5 – Comparison of CIR Model Spread vs. Market Spread (continued)

Table 1 – Results of the CIR Model Calibration
9 Martingale Test

The martingale test ensures that, under the risk-neutral probability measure,
the discounted simulated processes are indeed martingales. In this way,
we can determine whether or not there is any arbitrage opportunity.
Our test will allow us to verify the proximity of the average spread trajec-
tories of our model to those of our benchmark, Germany, for the same maturities
and for the different bond ratings.
We recall the formula for the stochastic discount factor:

D(t, T) = exp( −∫_t^T rs ds ),

with rt the interest rate.


If our model is correct, we should end up with the martingale property, which
states:

E[D(0, t)] ≈ P^M(0, t)

with P^M(0, t) the market price at time 0 of the zero-coupon bond of maturity t.
To simplify the procedure, we restrict ourselves to a martingale test on
the probability of survival, since it is proportionally linked to the price by the
formula in the pricing chapter.
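For a model with a closed-form bond price, the martingale test amounts to checking E[e^{−∫₀ᵗ r ds}] ≈ P(0, t) by Monte Carlo. A Python sketch, illustrated with the Vasicek model of Section 4.1 (parameter values illustrative; the exact Gaussian transition is used for r and a trapezoidal rule for the integral):

```python
import math, random

def vasicek_zcb(r0, T, k, theta, sigma):
    """Closed-form Vasicek price P(0,T) = A * exp(-B * r0) (section 4.1)."""
    B = (1.0 - math.exp(-k * T)) / k
    A = math.exp((theta - sigma ** 2 / (2.0 * k ** 2)) * (B - T)
                 - sigma ** 2 * B ** 2 / (4.0 * k))
    return A * math.exp(-B * r0)

def mc_discount(r0, T, k, theta, sigma, n_paths, n_steps, rng):
    """Monte Carlo estimate of E[exp(-int_0^T r dt)], simulating r with its
    exact Gaussian transition and integrating with the trapezoidal rule."""
    dt = T / n_steps
    e = math.exp(-k * dt)
    sd = sigma * math.sqrt((1.0 - e * e) / (2.0 * k))
    total = 0.0
    for _ in range(n_paths):
        r, integral = r0, 0.0
        for _ in range(n_steps):
            r_next = r * e + theta * (1.0 - e) + sd * rng.gauss(0.0, 1.0)
            integral += 0.5 * (r + r_next) * dt
            r = r_next
        total += math.exp(-integral)
    return total / n_paths

rng = random.Random(3)
p_exact = vasicek_zcb(0.03, 5.0, 0.5, 0.04, 0.01)
p_mc = mc_discount(0.03, 5.0, 0.5, 0.04, 0.01, n_paths=2000, n_steps=50, rng=rng)
```

Agreement between the Monte Carlo average and the closed form (up to statistical and discretization error) is exactly what the martingale property requires.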

(a) Martingale test, AAA bonds (b) Martingale test, AA bonds

(c) Martingale test, A bonds (d) Martingale test, BBB bonds

Figure 6 – Martingale test of the survival probability by rating

(a) Martingale test, BB bonds (b) Martingale test, B bonds

Figure 7 – Martingale test of the survival probability by rating (continued)

10 Observations from the Graphs

The graphs present the results of a martingale test applied to compare mar-
ket zero-coupon bond prices with those generated by our calibrated Cox-
Ingersoll-Ross (CIR) model. The martingale test is employed to verify whether
the price sequence forecast by a model aligns with the characteristics of a
martingale, implying that the best predictor of tomorrow's price is today's
price, absent any predictable trend.
- The x-axis (Maturity) represents the time until the zero-coupon instru-
  ments' maturity.
- The y-axis (Prix ZC, the zero-coupon price) indicates the instruments'
  current prices across varying maturities.
- The plotted blue line (Market) illustrates the current market prices of
  zero-coupon instruments as a function of their maturity.
- The plotted green line (Model) depicts the prices predicted by the cali-
  brated CIR model.
Both lines are in close proximity, suggesting that the calibrated CIR model
accurately mirrors the market prices of the instruments. This indicates a
reasonably successful calibration of the CIR model, as the generated prices
conform well to those observed in the market.
We observe that the survival probability of our model overlaps that of the
market data within a 95% confidence interval, except for the last B-rated bond,
which may be due to the high interest rate or the low rating of the bond (we
could not tell for sure). By comparing the model's survival probability curve
with the market data, we can validate that the model adequately captures the
behavior of the bond prices presented in our yield curve data and that the
discounted price of the bond indeed follows a martingale process.

11 R Code for the CIR Optimization

x=c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30) # maturities


#AAA
AAA=c( 3.7, 3.314, 3.091, 3.018, 2.953, 2.949, 2.964, 2.985, 3.021, 3.089, 3.257, 3.383, 3
)
spline_resultAAA <- spline(x, AAA/100, n = length(x)*2)
SplineAAA=spline_resultAAA$y
#DEUBENCH
DEUBENCH=c(3.3070,2.8460,2.5720,2.4290,2.3530,2.3230,2.3130,2.2750,2.3130,2.3560,2.4880,2.
spline_resultDEUBENCH <- spline(x, DEUBENCH/100, n = length(x)*2)
SplineDEUBENCH=spline_resultDEUBENCH$y
#AA
AA=c(3.7, 3.416, 3.239, 3.161, 3.129, 3.131, 3.156, 3.19, 3.226, 3.26, 3.35, 3.437, 3.519,
spline_resultAA <- spline(x, AA/100, n = length(x)*2)
SplineAA=spline_resultAA$y
#A
A=c(3.781, 3.549, 3.368, 3.304, 3.297, 3.31, 3.331, 3.356, 3.386, 3.416, 3.495, 3.588, 3.7
spline_resultA <- spline(x, A/100, n = length(x)*2)
SplineA=spline_resultA$y
#BBB
BBB=c(3.86,3.72,3.55,3.5,3.52,3.57,3.61,3.67,3.70,3.72,3.77,3.81,3.99,4.15,4.41)
spline_resultBBB <- spline(x, BBB/100, n = length(x)*2)
SplineBBB=spline_resultBBB$y
#BB
BB=c(4.68,4.74,4.66,4.72,4.74,4.74,4.73,4.73,4.74,4.74,4.81,4.85,5.07,5.29,5.62)
spline_resultBB <- spline(x, BB/100, n = length(x)*2)
SplineBB=spline_resultBB$y
#B
B=c(5.14,5.4,5.79,5.75,5.61,5.55,5.52,5.49,5.47,5.5,5.69,5.88,6.14,6.35,6.63)
spline_resultB <- spline(x, B/100, n = length(x)*2)
SplineB=spline_resultB$y

Spline=data.frame(spline_resultAAA$x,SplineAAA,SplineAA,SplineA,SplineBBB,SplineBB,SplineB
Spline=Spline[,-1]

prob_survie=function(lambda, k, theta, sigma, m){


h=sqrt(k^2+2*sigma^2)
a1=2*h*exp((k+h)*m/2)
a2=(k+h)*(exp(h*m)-1)+2*h
A=(a1/a2)^(2*k*theta/sigma^2)
a3=2*(exp(h*m)-1)
B=a3/a2
prob_survie=A*exp(-lambda*B)
}

#spread

spread=Spline-SplineDEUBENCH
colnames(spread)=c('AAA','AA','A','BBB','BB','B')

install.packages("optimx")
library(optimx)

optim=function(col){
parametres=function(col){
x0 <- runif(4, 0, 0.05)

while (TRUE) {
fct_obj=function(x){
lambda=x[1]
k=x[2]
theta=x[3]
sigma=x[4]

# Compute the survival probability and the CIR spread with the new parameters


spread_cir=rep(NA,30)
for (i in 1:30) {
spread_cir[i]=log10(prob_survie(lambda,k,theta,sigma,i)^(-1/i))
}

# Compute the gap between the market spread and the CIR spread


ecart=(spread[[col]]-spread_cir)^2

# Compute the Feller condition


feller=min(0,2*k*theta-sigma^2)
if (!is.finite(feller)) {
return(Inf) # return an infinite value to signal a failure
}

ecart=sum(ecart) + 1000 * feller^2


return(ecart)
}

# Optimize the parameters


result=optimx(par=x0,fn=fct_obj,method="Nelder-Mead")

# Extract the optimized parameters


lambda=result$p1
k=result$p2
theta=result$p3

sigma=result$p4
feller=2*k*theta-sigma^2
ecart=result$value

# Check the Feller condition and the parameter bounds


if (feller> 0 & lambda>0 & k>0 & sigma>0 & theta>0 & lambda<1 & k<1 & sigma<1 & theta<1
break
} else {
x0=runif(4,0,0.5) # new starting values if the conditions are not met
}
}
param=data.frame(lambda, k, theta, sigma, feller,ecart)
colnames(param)=c('lambda','k','theta','sigma','feller','ecart')
return(param)
}
meilleurs_parametres <- NULL
meilleur_ecart <- Inf
for(i in 1:200){
x=parametres(col)
ecart_test=x$ecart
if (ecart_test < meilleur_ecart) {
meilleur_ecart <- ecart_test
meilleurs_parametres <- x
}
print(i)
}
lambda=meilleurs_parametres$lambda
k=meilleurs_parametres$k
theta=meilleurs_parametres$theta
sigma=meilleurs_parametres$sigma
spread_model=rep(NA,30)
for (i in 1:30) {
spread_model[i]=log10(prob_survie(lambda,k,theta,sigma,i)^(-1/i))
}
plot(seq(1,30,1),spread_model,type='l',col='red',ylim=c(0,0.04),xlab="Maturité",ylab='Spr
lines(spread[[col]],col='blue')
legend("bottomright", legend=c("Spread CIR","Spread Marché"), col=c("red","blue"), lty=c(
return(meilleurs_parametres)
}
AAA=optim('AAA')
AA=optim('AA')
A=optim('A')
BBB=optim('BBB')
BB=optim('BB')

B=optim('B')

tableau_optim=rbind(AAA,AA,A,BBB,BB,B)
rownames(tableau_optim)=c("AAA","AA","A","BBB","BB","B")
print(tableau_optim)

12 R Code for the Martingale Test

x=c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30) # maturities


AAA=c( 3.7, 3.314, 3.091, 3.018, 2.953, 2.949, 2.964, 2.985, 3.021, 3.089, 3.257, 3.383, 3
spline_resultAAA <- spline(x, AAA/100, n = length(x)*2)
SplineAAA=spline_resultAAA$y
#DEUBENCH
DEUBENCH=c(3.3070,2.8460,2.5720,2.4290,2.3530,2.3230,2.3130,2.2750,2.3130,2.3560,2.4880,2.
spline_resultDEUBENCH <- spline(x, DEUBENCH/100, n = 30)
SplineDEUBENCH=spline_resultDEUBENCH$y
#AA
AA=c(3.7, 3.416, 3.239, 3.161, 3.129, 3.131, 3.156, 3.19, 3.226, 3.26, 3.35, 3.437, 3.519,
spline_resultAA <- spline(x, AA/100, n = length(x)*2)
SplineAA=spline_resultAA$y
#A
A=c(3.781, 3.549, 3.368, 3.304, 3.297, 3.31, 3.331, 3.356, 3.386, 3.416, 3.495, 3.588, 3.7
spline_resultA <- spline(x, A/100, n = length(x)*2)
SplineA=spline_resultA$y
#BBB
BBB=c(3.86,3.72,3.55,3.5,3.52,3.57,3.61,3.67,3.70,3.72,3.77,3.81,3.99,4.15,4.41)
spline_resultBBB <- spline(x, BBB/100, n = length(x)*2)
SplineBBB=spline_resultBBB$y
#BB
BB=c(4.68,4.74,4.66,4.72,4.74,4.74,4.73,4.73,4.74,4.74,4.81,4.85,5.07,5.29,5.62)
spline_resultBB <- spline(x, BB/100, n = length(x)*2)
SplineBB=spline_resultBB$y
#B
B=c(5.14,5.4,5.79,5.75,5.61,5.55,5.52,5.49,5.47,5.5,5.69,5.88,6.14,6.35,6.63)
spline_resultB <- spline(x, B/100, n = length(x)*2)
SplineB=spline_resultB$y

Spline=data.frame(spline_resultAAA$x,SplineAAA,SplineAA,SplineA,SplineBBB,SplineBB,SplineB
Spline=Spline[,-1]

spread=Spline-SplineDEUBENCH
colnames(spread)=c('AAA','AA','A','BBB','BB','B')

simul_cir=function(params, m){
lambda0 = params[1]
k = params[2]
theta = params[3]
sigma = params[4]
cir = lambda0
for(i in 2:(m+1)){
lambda = k*(theta-cir[i-1])+sigma*sqrt(cir[i-1])*rnorm(1) + cir[i-1]
cir = c(cir, max(lambda,0))}
return(cir)}

params=c( 0.007812234, 0.21894157,0.02312722 , 1.135399e-04)


mat = 30
aaa= c(0,spread[,1])
gap = rep(aaa[1]-params[1],mat+1)

lambda0 = params[1]
k = params[2]
theta = params[3]
sigma = params[4]
h = sqrt(k^2+2*sigma^2)

f_cir=function(m){
f=(2*k*theta*(exp(m*h)-1))/(2*h+(k+h)*(exp(m*h)-1)) + lambda0*(4*h^(2)*exp(m*h))/(2*h+(k+
return(f)}

for(i in 1:mat){
gap[i+1] =aaa[i+1]-f_cir(i)}

matrice_proba_survie = NULL
for(i in 1:1000){

s = simul_cir(params, mat)
s = s+gap
proba_survie = cumprod(1/(1+s))
matrice_proba_survie = cbind(matrice_proba_survie, proba_survie)}

library(ggplot2)

proba_survie = apply(matrice_proba_survie, 1, mean)

deviation = apply(matrice_proba_survie, 1, sd)

data = data.frame(Maturité = 0:mat, Model = proba_survie, Market = cumprod(1/(1+aaa)))


ggplot(data, aes(x = Maturité)) +
geom_line(aes(y = Model, color = "Model"), size = 1) +
geom_line(aes(y = Market, color = "Market"), size = 1) +
geom_ribbon(aes(ymin = proba_survie - qnorm(1 - 0.05 / 2) * deviation / sqrt(1000),
ymax = proba_survie + qnorm(1 - 0.05 / 2) * deviation / sqrt(1000)),fill = "grey", alpha =
labs(title = "Test de Martingalite Prix ZC B",
x = "Maturité",
y = "Prix ZC") +
scale_color_manual(values = c("Model" = "green", "Market" = "blue")) +
theme(legend.position = "right")

Bibliography

References

[1] D. Brigo and F. Mercurio, Classical Time-Homogeneous Short-Rate Models.
Springer Finance. [Online]. Available:
https://www.researchgate.net/profile/Florina-Halasan-2/
publication/238503196_INTEREST_RATE_THEORY_AND_
CONSISTENCY_PROBLEMS/links/5424ead80cf26120b7ac4a98/
INTEREST-RATE-THEORY-AND-CONSISTENCY-PROBLEMS.pdf.
[2] A. Laghraib, Modèle de diffusion des taux sans risque à long
terme dans une optique assurance et gestion ALM. [Online].
Available: https://www.institutdesactuaires.com/docs/mem/
9a24d8403d3da6369dfeb13a8ad5ed5d.pdf.
[3] F. Halasan, INTEREST RATE THEORY AND CONSISTENCY
PROBLEMS. M.Sc. Thesis, Al. I. Cuza University, Romania, 2001.
[Online]. Available: https://www.researchgate.net/profile/
Florina-Halasan-2/publication/238503196_INTEREST_RATE_THEORY_
AND_CONSISTENCY_PROBLEMS/links/5424ead80cf26120b7ac4a98/
INTEREST-RATE-THEORY-AND-CONSISTENCY-PROBLEMS.pdf.
[4] A. Alfonsi, Modélisation en risque de crédit. Calibration et discrétisation
de modèles financiers. PhD Thesis. [Online]. Available: https://cermics.
enpc.fr/~alfonsi/These.pdf.

[5] Calculations and justifications for the maximum likelihood estimators:
https://www.dropbox.com/scl/fo/a3qk8bee5guvvf8x0wrq5/
AH6SvZmhhIEcC1UAkGtKDw4?rlkey=xcwkqkvw4xr2bi47f4e79v535&st=
cqcpafim&dl=0.
