
Financial theory and models

Important distributions

The moment generating function
The exponential distribution
The Bernoulli and binomial distributions
The Poisson distribution and its characterisation
The normal and the multi-normal distributions

Frank Hansen
Department of Economics
Copenhagen University
2022
Moment generating function
Definition
Let X be a random variable and suppose that exp(tX) has finite mean
for every t in an open interval I with 0 ∈ I.
We then define

    ψ(t) = E[e^{tX}],   t ∈ I,

and call it the moment generating function for X.

Notice that if X is bounded then the moment generating function is
defined for every t ∈ R.
We also notice that ψ(0) = E[1] = 1.
By using linearity of the expectation operator we may calculate

    ψ'(t) = E[(d/dt) e^{tX}] = E[X e^{tX}],   t ∈ I,

and thus obtain ψ'(0) = E[X].
Higher moments
We may continue the differentiation. Since X does not depend on t we have

    ψ''(t) = E[X^2 e^{tX}],   t ∈ I.

In particular, ψ''(0) = E[X^2].

This line of argument may be continued and we obtain:


Theorem
Let X be a stochastic variable with moment generating function ψ
defined in an open interval I with 0 ∈ I.
Then ψ is infinitely differentiable, and X has moments of any order.
The moments are given by

    E[X^n] = ψ^(n)(0),   n = 1, 2, . . . .

We state without proof that two distributions are identical if their
moment generating functions coincide in an open interval around 0.
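To see the theorem in action, here is a minimal sketch (assuming sympy is available) that recovers the moments of a Bernoulli variable with parameter p from its moment generating function ψ(t) = p e^t + 1 − p, derived on a later slide; every moment of a variable taking only the values 0 and 1 equals p.

```python
import sympy as sp

# Recover E[X^n] = psi^(n)(0) for a Bernoulli(p) variable.
t, p = sp.symbols('t p')
psi = p * sp.exp(t) + 1 - p          # MGF of the Bernoulli distribution

for n in range(1, 5):
    moment = sp.diff(psi, t, n).subs(t, 0)
    print(f"E[X^{n}] =", sp.simplify(moment))   # prints p for every n
```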
Linear combinations of independent variables

Theorem
Let X1, . . . , Xn be independent stochastic variables with moment
generating functions ψ1, . . . , ψn defined in open intervals I1, . . . , In.
The moment generating function of X = a1 X1 + · · · + an Xn is

    ψ(t) = ψ1(a1 t) · · · ψn(an t),   t ∈ a1^{-1} I1 ∩ · · · ∩ an^{-1} In.

Proof. By definition

    ψ(t) = E[e^{tX}] = E[e^{t(a1 X1 + ··· + an Xn)}] = E[ ∏_{i=1}^n e^{t ai Xi} ].

The stochastic variables e^{t a1 X1}, . . . , e^{t an Xn} are also independent, thus

    ψ(t) = E[e^{t a1 X1}] · · · E[e^{t an Xn}] = ψ1(a1 t) · · · ψn(an t).
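As a quick numerical illustration (a sketch with illustrative parameters, assuming numpy), one can check the factorisation by Monte Carlo for two independent Bernoulli variables:

```python
import numpy as np

# Monte Carlo check that the MGF of a1*X1 + a2*X2 factors as
# psi1(a1*t) * psi2(a2*t) for independent Bernoulli(p) variables.
rng = np.random.default_rng(0)
p, a1, a2, t = 0.3, 2.0, -1.0, 0.5
x1 = rng.binomial(1, p, size=1_000_000)
x2 = rng.binomial(1, p, size=1_000_000)

empirical = np.mean(np.exp(t * (a1 * x1 + a2 * x2)))
theoretical = (p * np.exp(a1 * t) + 1 - p) * (p * np.exp(a2 * t) + 1 - p)
print(empirical, theoretical)   # the two numbers agree to a few decimals
```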

The exponential distribution

A non-negative stochastic variable X : S → [0, ∞) is said to be
exponentially distributed with parameter λ > 0 if it has a density
function fX of the form

    fX(x) = λ e^{−λx},   x ≥ 0.

Notice that it is a density since

    ∫_0^∞ λ e^{−λx} dx = [−e^{−λx}]_0^∞ = 1.

The distribution function is given by

    P(X ≤ x) = ∫_0^x λ e^{−λt} dt = [−e^{−λt}]_0^x = 1 − e^{−λx}.
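A short numerical sketch (assuming numpy and scipy; λ and x are arbitrary test values) confirming both computations:

```python
import numpy as np
from scipy.integrate import quad

# Check that the exponential density integrates to one and that
# P(X <= x) = 1 - exp(-lam*x) for illustrative values of lam and x.
lam, x = 1.5, 2.0
density = lambda u: lam * np.exp(-lam * u)

total, _ = quad(density, 0, np.inf)
cdf, _ = quad(density, 0, x)
print(total)                        # ~ 1.0
print(cdf, 1 - np.exp(-lam * x))    # the two values coincide
```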

The expectation of the exponential distribution
The moment generating function ψ for the exponential distribution, with
parameter λ > 0, is given by

    ψ(t) = E[e^{tX}] = ∫_{−∞}^∞ e^{tx} fX(x) dx = ∫_0^∞ e^{tx} λ e^{−λx} dx
         = λ ∫_0^∞ e^{(t−λ)x} dx = [ λ/(t−λ) · e^{(t−λ)x} ]_{x=0}^{x=∞} = λ/(λ−t)

for −∞ < t < λ. We thus obtain

    ψ'(t) = λ/(λ−t)^2,

implying that the mean

    E[X] = ψ'(0) = 1/λ.

Higher moments of the exponential distribution
We calculate the second derivative of ψ and obtain

    ψ''(t) = 2λ/(λ−t)^3.

Therefore, the second moment

    E[X^2] = ψ''(0) = 2/λ^2

and the variance

    Var[X] = E[X^2] − E[X]^2 = 2/λ^2 − 1/λ^2 = 1/λ^2.

More generally,

    ψ^(n)(t) = n! λ/(λ−t)^{n+1},   t < λ.

The nth moment of the exponential distribution is thus

    E[X^n] = ψ^(n)(0) = λ^{−n} n!.
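The moment formula can be verified symbolically; a sketch assuming sympy:

```python
import sympy as sp

# Differentiate psi(t) = lam/(lam - t) n times at t = 0 and compare
# with the closed form n!/lam^n.
t = sp.symbols('t')
lam = sp.symbols('lam', positive=True)
psi = lam / (lam - t)

for n in range(1, 5):
    moment = sp.diff(psi, t, n).subs(t, 0)
    print(n, sp.simplify(moment - sp.factorial(n) / lam**n))  # prints 0
```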
The binomial distribution
A random variable X is said to have the Bernoulli distribution with
parameter p, for 0 ≤ p ≤ 1, if it only takes the values 0 and 1 and

    P[X = 1] = p and P[X = 0] = 1 − p.

The moment generating function is

    ψ(t) = E[e^{tX}] = e^t P[X = 1] + e^0 P[X = 0] = p e^t + 1 − p.

The binomial distribution with parameters (n, p) is the distribution of
the sum X of n independent Bernoulli distributed variables, each with
parameter p. The variable X takes integer values from 0 to n and

    P[X = k] = (n choose k) p^k (1 − p)^{n−k},   k = 0, 1, . . . , n.

The moment generating function is

    ψ(t) = (p e^t + 1 − p)^n.
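Differentiating this moment generating function at 0 yields the familiar mean np; a symbolic sketch assuming sympy:

```python
import sympy as sp

# The mean of a binomial(n, p) variable from its MGF.
t, p = sp.symbols('t p')
n = sp.symbols('n', positive=True, integer=True)
psi = (p * sp.exp(t) + 1 - p) ** n

mean = sp.diff(psi, t).subs(t, 0)
print(sp.simplify(mean))   # n*p
```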

Sum of binomially distributed variables

Let X1, . . . , Xk be independent stochastic variables.
We assume that each Xi is binomially distributed with parameters
(ni, p) for i = 1, . . . , k.
The moment generating function for Xi is thus

    ψi(t) = (p e^t + 1 − p)^{ni},   i = 1, . . . , k.

Since the stochastic variables are independent, the moment generating
function ψ for the sum X = X1 + · · · + Xk is given by

    ψ(t) = ψ1(t) · · · ψk(t) = (p e^t + 1 − p)^{n1 + ··· + nk}.

Since the moment generating function determines the distribution, we
derive that X is binomially distributed with parameters (n, p), where
n = n1 + · · · + nk.
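A Monte Carlo sketch of this result (illustrative parameters, assuming numpy and scipy):

```python
import numpy as np
from scipy.stats import binom

# A sum of independent binomial(n1, p) and binomial(n2, p) draws should
# follow the binomial(n1 + n2, p) distribution.
rng = np.random.default_rng(1)
n1, n2, p = 5, 7, 0.4
s = rng.binomial(n1, p, 200_000) + rng.binomial(n2, p, 200_000)

for k in range(4):
    print(k, np.mean(s == k), binom.pmf(k, n1 + n2, p))  # close agreement
```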

The Poisson distribution
A stochastic variable X that takes integer values x = 0, 1, 2, . . . is
Poisson distributed with parameter λ > 0 if

    P[X = x] = e^{−λ} λ^x/x!,   x = 0, 1, 2, . . . .

The moment generating function ψ is given by

    ψ(t) = E[e^{tX}] = Σ_{x=0}^∞ e^{tx} P[X = x] = Σ_{x=0}^∞ e^{tx} e^{−λ} λ^x/x!
         = e^{−λ} Σ_{x=0}^∞ (λ e^t)^x/x! = e^{−λ} exp(λ e^t).

Since ψ'(t) = e^{−λ} exp(λ e^t) λ e^t we obtain that the mean

    E[X] = ψ'(0) = λ.
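The same computation carried out symbolically, together with the variance (a sketch assuming sympy):

```python
import sympy as sp

# Mean and variance of the Poisson distribution from its MGF
# psi(t) = exp(-lam) * exp(lam * e^t) = exp(lam*(e^t - 1)).
t = sp.symbols('t')
lam = sp.symbols('lam', positive=True)
psi = sp.exp(-lam) * sp.exp(lam * sp.exp(t))

m1 = sp.diff(psi, t).subs(t, 0)
m2 = sp.diff(psi, t, 2).subs(t, 0)
print(sp.simplify(m1))           # lam
print(sp.simplify(m2 - m1**2))   # lam, so Var[X] = lam as well
```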

Characterisation of the Poisson distribution
Suppose that phone calls arrive independently and at random.
Let X denote the stochastic variable that counts the number of phone
calls arriving in a certain time interval.

Theorem
X is Poisson distributed with parameter λ = E[X].

Outline of proof:
We subdivide the time interval into n slots of equal length.
We choose n so large that the probability of two phone calls arriving in
the same slot becomes very small.
(i) The probability p that a phone call arrives in any chosen slot is

    p = λ/n, where λ = E[X].

Continuation of proof I
(ii) The probability that exactly k phone calls arrive in the given time
interval is given by

    P[X = k] = (n choose k) p^k (1 − p)^{n−k}

since the phone calls arrive independently.

(iii) The probability that k + 1 phone calls arrive is then

    P[X = k+1] = (n choose k+1) p^{k+1} (1 − p)^{n−k−1}
               = (n choose k) p^k (1 − p)^{n−k} · (n−k)p/((k+1)(1−p))
               = P[X = k] · (n−k)p/((k+1)(1−p)).

For n large compared to k we approximate

    P[X = k+1] ≈ P[X = k] · λ/(k+1).
Continuation of proof II
(iv) Recursively, we now calculate

    P[X = 1] = P[X = 0] λ
    P[X = 2] = P[X = 1] λ/2 = P[X = 0] λ^2/2
    ...
    P[X = k] = P[X = k−1] λ/k = P[X = 0] λ^k/k!

and since the sure event has probability one we obtain

    Σ_{k=0}^∞ P[X = k] = P[X = 0] Σ_{k=0}^∞ λ^k/k! = P[X = 0] e^λ = 1.

In conclusion,

    P[X = 0] = e^{−λ} and thus P[X = k] = e^{−λ} λ^k/k!,

showing that X is Poisson distributed with parameter λ.
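The limit behind the approximation in step (iii) can be seen numerically; a sketch assuming scipy, with an illustrative λ:

```python
from scipy.stats import binom, poisson

# Binomial(n, lam/n) probabilities approach Poisson(lam) probabilities
# as the number of slots n grows.
lam = 3.0
for n in (10, 100, 10_000):
    print(n, [round(binom.pmf(k, n, lam / n), 4) for k in range(4)])
print('Poisson', [round(poisson.pmf(k, lam), 4) for k in range(4)])
```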
Sum of independent Poisson distributed variables

Let X1, . . . , Xk be independent stochastic variables.
We assume that each Xi is Poisson distributed with parameter λi for
i = 1, . . . , k.
The moment generating function for Xi is thus

    ψi(t) = e^{−λi} exp(λi e^t),   i = 1, . . . , k.

Since the stochastic variables are independent, the moment generating
function ψ for the sum X = X1 + · · · + Xk is given by

    ψ(t) = ψ1(t) · · · ψk(t) = e^{−(λ1+···+λk)} exp((λ1 + · · · + λk) e^t).

Since the moment generating function determines the distribution, we
derive that X is Poisson distributed with parameter λ = λ1 + · · · + λk.

The normal distribution
A stochastic variable X : S → R is said to be normally distributed with
parameters (m, σ) if it has a density function fX of the form

    fX(x) = 1/(σ√(2π)) · exp(−(x − m)^2/(2σ^2)),

where σ > 0 and m ∈ R.
We must prove that the positive function fX is a density. We make the
substitution y = (x − m)/σ and obtain

    ∫_{−∞}^∞ fX(x) dx = ∫_{−∞}^∞ 1/(σ√(2π)) · exp(−(x − m)^2/(2σ^2)) dx
                      = 1/√(2π) ∫_{−∞}^∞ exp(−y^2/2) dy = 1.

A normally distributed variable X is said to be standard normally
distributed if the parameters (m, σ) = (0, 1).
The moment generating function
The moment generating function is

    ψ(t) = E[e^{tX}] = ∫_{−∞}^∞ e^{tx} fX(x) dx
         = ∫_{−∞}^∞ 1/(σ√(2π)) · exp(tx − (x − m)^2/(2σ^2)) dx.

We complete the square and obtain

    tx − (x − m)^2/(2σ^2) = mt + σ^2 t^2/2 − (x − (m + σ^2 t))^2/(2σ^2).

Therefore,

    ψ(t) = exp(mt + σ^2 t^2/2) · 1/(σ√(2π)) ∫_{−∞}^∞ exp(−(x − (m + σ^2 t))^2/(2σ^2)) dx
         = exp(mt + σ^2 t^2/2).
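A numerical sketch of this identity for concrete parameter values (assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

# Integrate e^{tx} f_X(x) numerically and compare with
# exp(m*t + sigma^2 * t^2 / 2) for illustrative m, sigma, t.
m, sigma, t = 0.5, 2.0, 0.3

def integrand(x):
    return np.exp(t * x) * np.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

value, _ = quad(integrand, -np.inf, np.inf)
print(value, np.exp(m * t + sigma**2 * t**2 / 2))   # the values agree
```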
Mean and variance
Theorem
A normally distributed stochastic variable X with parameters (m, σ) has

    E[X] = m and Var[X] = σ^2.

Proof. The derivative of the moment generating function is

    ψ'(t) = (m + σ^2 t) exp(mt + σ^2 t^2/2) = (m + σ^2 t) ψ(t).

The mean is therefore given by E[X] = ψ'(0) = m ψ(0) = m.
The second derivative is

    ψ''(t) = σ^2 ψ(t) + (m + σ^2 t) ψ'(t),

thus E[X^2] = ψ''(0) = σ^2 + m^2. Finally, the variance

    Var[X] = E[X^2] − E[X]^2 = σ^2.

Linear combinations of independent normal variables
Let X1, . . . , Xk be independent stochastic variables.
We assume that each Xi is normally distributed with parameters
(mi, σi) for i = 1, . . . , k.
The moment generating function for Xi is

    ψi(t) = exp(mi t + σi^2 t^2/2),   i = 1, . . . , k.

Since the stochastic variables are independent, the moment generating
function ψ for the linear combination X = a1 X1 + · · · + ak Xk is given by

    ψ(t) = ψ1(a1 t) · · · ψk(ak t)
         = exp((a1 m1 + · · · + ak mk) t + (a1^2 σ1^2 + · · · + ak^2 σk^2) t^2/2).

We derive that X is normally distributed with parameters

    m = a1 m1 + · · · + ak mk and σ = (a1^2 σ1^2 + · · · + ak^2 σk^2)^{1/2}.
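A Monte Carlo sketch with two variables and illustrative coefficients (assuming numpy):

```python
import numpy as np

# A linear combination of independent normals is normal with the
# stated mean and standard deviation.
rng = np.random.default_rng(2)
a1, a2 = 2.0, -1.5
m1, s1, m2, s2 = 1.0, 0.5, -2.0, 1.2
x = a1 * rng.normal(m1, s1, 500_000) + a2 * rng.normal(m2, s2, 500_000)

print(x.mean(), a1 * m1 + a2 * m2)                      # means agree
print(x.std(), np.sqrt(a1**2 * s1**2 + a2**2 * s2**2))  # std devs agree
```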

An affine function of a normally distributed variable

Let X be a normally distributed stochastic variable with mean m and
variance σ^2, and consider for constants a and b the stochastic variable

    Y = aX + b.

This is a special case of the situation just considered.
Since X and the constant variable b are independent, we realise that
Y is normally distributed with mean

    E[Y] = am + b

and variance

    Var[Y] = a^2 σ^2.

Samples from the normal distribution
A random sample (X1, . . . , Xn) from the normal distribution with
parameters (m, σ) is a vector of independent stochastic variables
X1, . . . , Xn which are normally distributed with the same parameters (m, σ).
The stochastic variable

    X̄n = (X1 + · · · + Xn)/n

is called the sample mean. Since the variables are independent it
follows that X̄n is normally distributed with mean

    E[X̄n] = (1/n)(m + · · · + m) = m

and variance

    Var[X̄n] = (1/n^2)(σ^2 + · · · + σ^2) = σ^2/n.
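A simulation sketch of the variance reduction (illustrative parameters, assuming numpy):

```python
import numpy as np

# The sample mean of n i.i.d. normals has mean m and variance sigma^2/n.
rng = np.random.default_rng(3)
m, sigma, n = 1.0, 2.0, 25
samples = rng.normal(m, sigma, size=(100_000, n))
xbar = samples.mean(axis=1)

print(xbar.mean(), m)             # ~ m
print(xbar.var(), sigma**2 / n)   # ~ sigma^2 / n
```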
The multi-normal distribution
Consider a vector valued stochastic variable X : S → Rn.
X is said to be multi-normally distributed if there is a vector
m = (m1, . . . , mn) ∈ Rn and a positive definite n × n matrix A = (aij)
such that the density function of X can be written in the form

    pX(x1, . . . , xn) = √(det A)/(2π)^{n/2} · exp(−(1/2)(A(x − m) | (x − m)))
                      = √(det A)/(2π)^{n/2} · exp(−(1/2) Σ_{i,j=1}^n (xi − mi) aij (xj − mj))

for x = (x1, . . . , xn) ∈ Rn. Then the probability

    P[X1 ≤ x1, . . . , Xn ≤ xn] = ∫_{−∞}^{x1} · · · ∫_{−∞}^{xn} pX(u1, . . . , un) du1 · · · dun.
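A two-dimensional sketch (assuming numpy and scipy) checking that this density, with A chosen as the inverse of a covariance matrix S as on the next slide, matches scipy's multivariate normal pdf:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Evaluate the density above with A = S^{-1} at an illustrative point x.
m = np.array([1.0, -0.5])
S = np.array([[2.0, 0.6], [0.6, 1.0]])     # covariance matrix
A = np.linalg.inv(S)
x = np.array([0.3, 0.2])

d = x - m
p = np.sqrt(np.linalg.det(A)) / (2 * np.pi) ** (len(m) / 2) * np.exp(-0.5 * d @ A @ d)
print(p, multivariate_normal(mean=m, cov=S).pdf(x))   # identical values
```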

The vector of means and the covariance matrix
The vector of means is given by

    (E[X1], . . . , E[Xn]) = (m1, . . . , mn).

The covariance matrix is

    SX = (Sij)_{i,j=1}^n = A^{−1},

where

    Sij = E[(Xi − mi)(Xj − mj)] for i, j = 1, . . . , n.

We say that X is N(m, S) distributed.
The covariance matrix SX is diagonal if X1, . . . , Xn are independent.
We mention without proof the following result:
If the covariance matrix SX of a multi-normally distributed variable
X = (X1, . . . , Xn) is diagonal, then X1, . . . , Xn are independent.
Transformation of a multi-normally distributed vector

Let X = (X1, . . . , Xn) be an N(m, S)-distributed stochastic vector, and let
B be an invertible n × n matrix.

Theorem
The stochastic vector Y defined by

    Y′ = BX′

is N(Bm′, BSB′) multi-normally distributed.

Proof: Let Vol(x1, . . . , xn) denote the volume of the polytope generated
by the vectors x1, . . . , xn. We consider in particular the vectors

    xi = ti ei,   i = 1, . . . , n,

where (e1, . . . , en) is the canonical basis in Rn and t1, . . . , tn > 0.

Proof
If B is an invertible n × n matrix then

    Vol(Bx1, . . . , Bxn) = t1 · · · tn Vol(Be1, . . . , Ben)
                          = t1 · · · tn Vol(B(1), . . . , B(n)) = t1 · · · tn · |det B|,

where B(1), . . . , B(n) denote the columns of B.

Suppose the stochastic vector X takes values in B^{−1}u + ∆u, where
∆u is the polytope generated by (t1 e1, . . . , tn en) with volume

    Vol(∆u) = t1 · · · tn.

Then the stochastic vector BX′ takes values in u + B∆u. Note that the
polytope B∆u has volume Vol(B∆u) = t1 · · · tn · |det B|. We therefore
obtain the relationship

    f_{BX′}(u) = fX(B^{−1}u) · |det B|^{−1},   u ∈ Rn,

between the density functions for X and BX′.


Continuation of proof

Suppose now X is N(m, S) multi-normally distributed. Then by setting
A = S^{−1} we obtain

    f_{BX′}(u) = |det B|^{−1} fX(B^{−1}u)
               = |det B|^{−1} · √(det A)/(2π)^{n/2} · exp(−(1/2)(A(B^{−1}u − m) | (B^{−1}u − m)))
               = √(det((B′)^{−1} A B^{−1}))/(2π)^{n/2} · exp(−(1/2)((B′)^{−1} A B^{−1}(u − Bm) | (u − Bm))).

It follows that BX′ is multi-normally distributed with Bm′ as vector of
means and

    ((B′)^{−1} A B^{−1})^{−1} = B A^{−1} B′ = B S B′
as covariance matrix.
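A Monte Carlo sketch of the theorem in two dimensions (illustrative matrices, assuming numpy):

```python
import numpy as np

# For Y' = B X' with X multi-normal with covariance S, the sample
# covariance of Y approaches B S B'.
rng = np.random.default_rng(4)
m = np.array([1.0, -0.5])
S = np.array([[2.0, 0.6], [0.6, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 3.0]])    # invertible illustrative matrix

X = rng.multivariate_normal(m, S, size=200_000)
Y = X @ B.T                                # row-wise Y' = B X'

print(Y.mean(axis=0), B @ m)               # vector of means ~ B m
print(np.cov(Y.T))                         # ~ B S B'
print(B @ S @ B.T)
```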

