
Lecture 2: ARMA(p,q) models

(part 3)

Florian Pelgrin

University of Lausanne, École des HEC


Department of mathematics (IMEA-Nice)

Sept. 2011 - Jan. 2012


Motivation

Characterize the main properties of ARMA(p,q) models.

Estimate ARMA(p,q) models.


Road map

1 ARMA(1,1) model
Definition and conditions
Moments
Estimation

2 ARMA(p,q) model
Definition and conditions
Moments
Estimation

3 Application

4 Appendix


1. ARMA(1,1)
1.1. Definition and conditions

Definition
A stochastic process (X_t)_{t∈Z} is said to be a mixed autoregressive moving average process of order (1,1), ARMA(1,1), if it satisfies the following equation for all t:

X_t = µ + φ X_{t−1} + ε_t + θ ε_{t−1},

or, equivalently,

Φ(L) X_t = µ + Θ(L) ε_t,

where φ ≠ 0, θ ≠ 0, µ is a constant term, (ε_t)_{t∈Z} is a weak white noise process with zero mean and variance σ_ε² (ε_t ∼ WN(0, σ_ε²)), Φ(L) = 1 − φL, and Θ(L) = 1 + θL.


[Figure: four simulated ARMA(1,1) paths of 300 observations each, with (φ; θ) = (−0.5; −0.5), (−0.5; 0.5), (0.9; −0.5), and (0.9; 0.5).]
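The qualitative patterns in these panels are easy to reproduce. Below is a minimal simulation sketch (not part of the original slides), assuming numpy is available; the four parameter pairs and the sample size of 300 match the panels above, while the seed and burn-in length are arbitrary choices.

    import numpy as np

    def simulate_arma11(phi, theta, n=300, mu=0.0, sigma=1.0, burn=100, seed=0):
        """Simulate X_t = mu + phi*X_{t-1} + eps_t + theta*eps_{t-1}."""
        rng = np.random.default_rng(seed)
        eps = rng.normal(0.0, sigma, size=n + burn)
        x = np.zeros(n + burn)
        for t in range(1, n + burn):
            x[t] = mu + phi * x[t - 1] + eps[t] + theta * eps[t - 1]
        return x[burn:]  # drop the burn-in so the path is close to stationarity

    # The four parameter pairs (phi; theta) used in the panels above
    for phi, theta in [(-0.5, -0.5), (-0.5, 0.5), (0.9, -0.5), (0.9, 0.5)]:
        path = simulate_arma11(phi, theta)
        print(phi, theta, round(path.std(), 2))  # persistence shows up as a larger spread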


The properties of an ARMA(1,1) process are a mixture of those of AR(1) and MA(1) processes:

The (stability) stationarity condition is that of an AR(1) process (or ARMA(1,0) process):

|φ| < 1.

The invertibility condition is that of an MA(1) process (or ARMA(0,1) process):

|θ| < 1.

The representation of an ARMA(1,1) process is fundamental or causal if:

|φ| < 1 and |θ| < 1.

The representation of an ARMA(1,1) process is said to be minimal and causal if:

|φ| < 1, |θ| < 1 and φ ≠ −θ (so that the AR and MA polynomials have no common root).


If (X_t) is stable, and thus weakly stationary, then (X_t) has an infinite moving average representation (MA(∞)):

X_t = µ/(1 − φ) + (1 − φL)^{−1}(1 + θL) ε_t
    = µ/(1 − φ) + Σ_{k=0}^{∞} a_k ε_{t−k}

where:

a_0 = 1
a_k = φ^k + θ φ^{k−1}, for k ≥ 1.
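For a concrete check (not from the slides), the weights a_k can be computed from this closed form and, assuming statsmodels is installed, compared with the general-purpose routine arma2ma; the parameter values below are illustrative.

    import numpy as np
    from statsmodels.tsa.arima_process import arma2ma

    phi, theta = 0.9, 0.5   # illustrative values with |phi| < 1 and |theta| < 1
    K = 10                  # number of MA(infinity) weights to compute

    # Closed form from the slide: a_0 = 1, a_k = phi**k + theta*phi**(k-1) for k >= 1
    a = np.array([1.0] + [phi**k + theta * phi**(k - 1) for k in range(1, K)])

    # Cross-check: arma2ma takes the lag polynomials (1 - phi*L) and (1 + theta*L)
    a_check = arma2ma(np.array([1.0, -phi]), np.array([1.0, theta]), lags=K)

    assert np.allclose(a, a_check)
    print(np.round(a, 4))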


If (X_t) is invertible, then (X_t) has an infinite autoregressive representation (AR(∞)):

(1 − θ* L)^{−1} (1 − φL) X_t = µ/(1 − θ*) + ε_t

i.e.

X_t = µ/(1 − θ*) + Σ_{k=1}^{∞} b_k X_{t−k} + ε_t

where θ* = −θ, and:

b_k = −θ*^k + φ θ*^{k−1} = (φ + θ)(−θ)^{k−1}.


1.2. Moments

Definition
Let (X_t) denote a stationary stochastic process that has a fundamental ARMA(1,1) representation, X_t = µ + φX_{t−1} + ε_t + θε_{t−1}. Then:

E[X_t] = µ/(1 − φ) ≡ m

γ_X(0) ≡ V(X_t) = [(1 + 2φθ + θ²)/(1 − φ²)] σ_ε²

γ_X(1) ≡ Cov[X_t, X_{t−1}] = [(φ + θ)(1 + φθ)/(1 − φ²)] σ_ε²

γ_X(h) = φ γ_X(h − 1) for |h| > 1.

Proof: See Appendix 1.


Definition
The autocorrelation function of an ARMA(1,1) process satisfies:

ρ_X(h) = 1                                    if h = 0
ρ_X(h) = (φ + θ)(1 + φθ)/(1 + 2φθ + θ²)       if |h| = 1
ρ_X(h) = φ ρ_X(h − 1)                         if |h| > 1.
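The piecewise formula above can be evaluated directly; a short sketch (not from the slides), assuming numpy and statsmodels, compares it with the theoretical ACF returned by arma_acf for an illustrative parameter pair.

    import numpy as np
    from statsmodels.tsa.arima_process import arma_acf

    phi, theta = 0.9, 0.5   # illustrative parameters (not from the slides)
    H = 6                   # highest lag to evaluate

    # Closed form from the slide
    rho = np.empty(H + 1)
    rho[0] = 1.0
    rho[1] = (phi + theta) * (1 + phi * theta) / (1 + 2 * phi * theta + theta**2)
    for h in range(2, H + 1):
        rho[h] = phi * rho[h - 1]

    # Cross-check against statsmodels (AR polynomial 1 - phi*L, MA polynomial 1 + theta*L)
    rho_check = arma_acf([1.0, -phi], [1.0, theta], lags=H + 1)

    assert np.allclose(rho, rho_check)
    print(np.round(rho, 4))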


The autocorrelation function of an ARMA(1,1) process exhibits exponential decay towards zero: it does not cut off but gradually dies out as h increases.

The autocorrelation function of an ARMA(1,1) process displays the shape of that of an AR(1) process for |h| > 1.


Partial Autocorrelation :
The partial autocorrelation function of an ARMA(1,1) process will
gradually die out (the same property as a moving average model).


Estimation
Same techniques as before, especially those of MA models.

Yule-Walker estimator: the extended Yule-Walker equations could in principle be used to estimate the AR coefficients, but the MA coefficients need to be estimated by other means.

In the presence of moving average components, the least squares estimator becomes nonlinear and the corresponding estimator is the conditional nonlinear least squares estimator (see the estimation of MA(q) models). It has to be solved with numerical methods.

Under an explicit distributional assumption for the error term, the conditional or exact maximum likelihood estimator can be computed (also using numerical optimization methods).

Other methods are also available: the Kalman filter, the generalized method of moments, etc. (see the sketch below).
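As an illustration of the maximum likelihood route, here is a minimal sketch (not from the slides), assuming statsmodels is installed; the true parameters mimic the data generating process of the table on the next slide (µ = 0, φ = 0.9, θ = 0.5), and the sample size is an arbitrary choice.

    from statsmodels.tsa.arima_process import ArmaProcess
    from statsmodels.tsa.arima.model import ARIMA

    # Simulate an ARMA(1,1) with phi = 0.9 and theta = 0.5 (Gaussian innovations)
    proc = ArmaProcess(ar=[1.0, -0.9], ma=[1.0, 0.5])
    x = proc.generate_sample(nsample=500)

    # ARMA(1,1) = ARIMA(1,0,1) with a constant, estimated by maximum likelihood
    res = ARIMA(x, order=(1, 0, 1), trend="c").fit()
    print(res.summary())   # reports the constant, AR(1), MA(1), AIC and BIC, as in the table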

Estimation of ARMA(p,q) models (true DGP: µ = 0, φ = 0.9, and θ = 0.5)

Term Coefficient Std. Error t-Statistic p-value Akaike info criterion Schwarz criterion
ARMA(1,1)
C 0.4532 0.7558 0.5996 0.5490 2.8841 2.9134
AR(1) 0.9112 0.0184 49.4884 0.0000
MA(1) 0.4639 0.0409 11.3368 0.0000
ARMA(2,2)
C 0.3137 0.6942 0.4518 0.6516 2.8865 2.9288
AR(1) 1.4836 0.2655 5.5879 0.0000
AR(2) -0.5230 0.2422 -2.1598 0.0313
MA(1) -0.1233 0.2666 -0.4625 0.6439
MA(2) -0.2837 0.1245 -2.2784 0.0231
ARMA(2,1)
C 0.3941 0.7431 0.5303 0.5961 2.8857 2.9195
AR(1) 0.8581 0.0964 8.8992 0.0000
AR(2) 0.0491 0.0937 0.5244 0.6002
MA(1) 0.5047 0.0843 5.9848 0.0000
AR(2)
C 0.4276 0.6628 0.6451 0.5192 2.9206 2.9459
AR(1) 1.2819 0.0419 30.5971 0.0000
AR(2) -0.3523 0.0416 -8.4649 0.0000
AR(1)
C 0.2857 0.9394 0.3042 0.7611 3.0559 3.0728
AR(1) 0.9467 0.0135 70.1399 0.0000
MA(2)
C 0.6545 0.2043 3.2042 0.0014 3.6527 3.6779
MA(1) 1.3803 0.0329 41.9665 0.0000
MA(2) 0.6740 0.0324 20.8027 0.0000
MA(4)
C 0.6259 0.2633 2.3768 0.0178 3.2039 3.2461
MA(1) 1.4334 0.0408 35.1707 0.0000
MA(2) 1.2250 0.0657 18.6595 0.0000
MA(3) 0.8552 0.0658 12.9989 0.0000
MA(4) 0.4131 0.0407 10.1483 0.0000
Note: C, AR(j), and MA(j) are respectively the estimates of the constant term, the jth autoregressive term, and the jth moving average term.


2. ARMA(p,q)
2.1. Definition and conditions

Definition
A stochastic process (X_t)_{t∈Z} is said to be a mixed autoregressive moving average process of order (p,q), ARMA(p,q), if it satisfies the following equation for all t:

X_t = µ + φ_1 X_{t−1} + · · · + φ_p X_{t−p} + ε_t + θ_1 ε_{t−1} + · · · + θ_q ε_{t−q},

or, equivalently,

Φ(L) X_t = µ + Θ(L) ε_t,

where φ_p ≠ 0, θ_q ≠ 0, µ is a constant term, (ε_t)_{t∈Z} is a weak white noise process with zero mean and variance σ_ε² (ε_t ∼ WN(0, σ_ε²)), Φ(L) = 1 − φ_1 L − · · · − φ_p L^p, and Θ(L) = 1 + θ_1 L + · · · + θ_q L^q.


Main idea of ARMA(p,q) models

Approximate the Wold representation of a stationary time series by a parsimonious parametric model.

Pure AR and MA models can be cumbersome because one may need a high-order model with many parameters to describe the data dynamics adequately (see the effective Fed funds rate application).

By mixing AR and MA terms into a more compact form, the number of parameters is kept small.


The properties of an ARMA(p,q) process are a mixture of those of AR(p) and MA(q) processes:

The (stability) stationarity conditions are those of an AR(p) process (or ARMA(p,0) process): all roots of

z^p Φ(z^{−1}) = z^p − φ_1 z^{p−1} − · · · − φ_p = 0

satisfy |z_i| < 1 for i = 1, · · · , p.

The invertibility conditions are those of an MA(q) process (or ARMA(0,q) process): all roots of

z^q Θ(z^{−1}) = z^q + θ_1 z^{q−1} + · · · + θ_q = 0

satisfy |z̃_i| < 1 for i = 1, · · · , q (a numerical root check is sketched below).

The representation of an ARMA(p,q) process is fundamental or causal if it is stable and invertible.

The representation of an ARMA(p,q) process is said to be minimal and causal if it is stable, invertible, and the characteristic polynomials z^p Φ(z^{−1}) and z^q Θ(z^{−1}) have no common roots.
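A small numerical sketch (not from the slides), assuming numpy: it checks stability and invertibility by computing the roots of z^p Φ(z^{−1}) and z^q Θ(z^{−1}); the coefficients are taken from the ARMA(2,2) column of the estimation table above.

    import numpy as np

    phi = [1.4836, -0.5230]     # phi_1, phi_2 from the ARMA(2,2) fit reported earlier
    theta = [-0.1233, -0.2837]  # theta_1, theta_2 from the same fit

    # z^p Phi(z^{-1}) = z^p - phi_1 z^{p-1} - ... - phi_p
    ar_roots = np.roots([1.0] + [-c for c in phi])
    # z^q Theta(z^{-1}) = z^q + theta_1 z^{q-1} + ... + theta_q
    ma_roots = np.roots([1.0] + list(theta))

    print("stable     :", bool(np.all(np.abs(ar_roots) < 1)))   # stationarity condition
    print("invertible :", bool(np.all(np.abs(ma_roots) < 1)))   # invertibility condition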


Definition
The representation of a mixed autoregressive moving average process of order (p,q) defined by:

X_t = µ + φ_1 X_{t−1} + · · · + φ_p X_{t−p} + ε_t + θ_1 ε_{t−1} + · · · + θ_q ε_{t−q},

is said to be a minimal causal (fundamental) representation, in which case (ε_t) is the innovation process, if:
(i) All the roots of the characteristic equation associated with Φ, z^p − φ_1 z^{p−1} − · · · − φ_p = 0, are of modulus less than one: |z_i| < 1 for i = 1, · · · , p;
(ii) All the roots of the characteristic equation associated with Θ, z^q + θ_1 z^{q−1} + · · · + θ_q = 0, are of modulus less than one: |z̃_i| < 1 for i = 1, · · · , q;
(iii) The characteristic polynomials z^p Φ(z^{−1}) and z^q Θ(z^{−1}) have no common roots.


If (X_t) is stable, and thus weakly stationary, then (X_t) has an infinite moving average representation (MA(∞)):

X_t = µ/(1 − Σ_{k=1}^{p} φ_k) + Φ(L)^{−1} Θ(L) ε_t
    = µ/(1 − Σ_{k=1}^{p} φ_k) + Σ_{k=0}^{∞} a_k ε_{t−k}

where:

a_0 = 1 and Σ_{k=0}^{∞} |a_k| < ∞.


If (X_t) is invertible, then (X_t) has an infinite autoregressive representation (AR(∞)):

Θ(L)^{−1} Φ(L) X_t = µ/(1 − Σ_{k=1}^{q} θ*_k) + ε_t

i.e.

X_t = µ/(1 − Σ_{k=1}^{q} θ*_k) + Σ_{k=1}^{∞} b_k X_{t−k} + ε_t

where θ*_k = −θ_k.


2.2. Moments of an ARMA(p,q)

The properties of the moments of an ARMA(p,q) process are also a mixture of those of AR(p) and MA(q) processes.

The mean is the same as that of an AR(p) model (with a constant term):

E(X_t) = µ/(1 − Σ_{k=1}^{p} φ_k) ≡ m.


Autocorrelation:
The autocorrelation function of an ARMA(p,q) process exhibits exponential decay towards zero (possibly with damped oscillations): it does not cut off but gradually dies out as h increases.

The autocorrelation function of an ARMA(p,q) process displays the shape of that of an AR(p) process for |h| > max(p, q + 1).

Partial Autocorrelation:
The partial autocorrelation function of an ARMA(p,q) process will gradually die out (the same property as an MA(q) model).


2.3. Estimation

Same techniques as in previous models...


1 Conditional least squares method
2 Maximum likelihood estimator (conditional or exact)
3 Generalized method of moments
4 Etc


3. Application

Effective Fed funds rate: 1970:01–2010:01 (monthly observations).

An ARMA(1,2) better captures the dynamics of the effective Fed funds rate.

ML estimation of the effective Fed funds rate: ARMA(1,2)
Coefficients Estimates Std. Error P-value
µ 6.225 1.263 0.000
φ 0.967 0.012 0.000
θ1 0.488 0.048 0.000
θ2 0.082 0.048 0.088
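A sketch of how such a fit could be reproduced (not from the slides), assuming pandas_datareader, statsmodels, and internet access; the FRED series code FEDFUNDS is an assumption about the data source, and the software's parameterization of the constant term may differ from µ above.

    import pandas_datareader.data as web
    from statsmodels.tsa.arima.model import ARIMA

    # Monthly effective federal funds rate from FRED (assumed series code)
    ffr = web.DataReader("FEDFUNDS", "fred", start="1970-01-01", end="2010-01-01")["FEDFUNDS"]

    # ARMA(1,2) = ARIMA(1,0,2) with a constant, estimated by maximum likelihood
    res = ARIMA(ffr, order=(1, 0, 2), trend="c").fit()
    print(res.params)   # constant, ar.L1, ma.L1, ma.L2, sigma2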


[Figure: residual, actual, and fitted series from the ARMA(1,2) specification of the effective Fed funds rate, 1970–2005.]


[Figure: diagnostics of the ARMA(1,2) specification for the effective Fed funds rate; actual vs. theoretical autocorrelation and partial autocorrelation functions, lags up to 24.]


[Figure: impulse response function of the estimated ARMA(1,2) specification for the effective Fed funds rate; impulse response and accumulated response with ± 2 standard error bands, horizons up to 24.]
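The plotted impulse responses are the MA(∞) weights of the estimated model. A short sketch (not from the slides), assuming statsmodels, recomputes them from the point estimates reported earlier (φ = 0.967, θ1 = 0.488, θ2 = 0.082); the standard error bands are not reproduced.

    import numpy as np
    from statsmodels.tsa.arima_process import arma2ma

    phi, theta1, theta2 = 0.967, 0.488, 0.082   # ML estimates from the application slide

    # Impulse responses = MA(infinity) weights of the estimated ARMA(1,2)
    irf = arma2ma(np.array([1.0, -phi]), np.array([1.0, theta1, theta2]), lags=25)
    accumulated = np.cumsum(irf)   # accumulated response

    print(np.round(irf[:4], 3))    # 1, then phi + theta1 = 1.455, then phi*1.455 + theta2 = 1.489, ...
    print(np.round(accumulated[-1], 1))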


4. Appendix

1. Moments of an ARMA(1,1).


1. Moments of an ARMA(1,1)

The properties of the moments of an ARMA(1,1) process are a mixture of those of AR(1) and MA(1) processes.

The mean is the same as that of an AR(1) model (with a constant term):

E(X_t) = E(µ + φX_{t−1} + ε_t + θε_{t−1})
       = µ + φE(X_{t−1}) + E(ε_t) + θE(ε_{t−1})
       = µ + φE(X_t)

since E(X_t) = E(X_{t−j}) for all j (stationarity property) and E(ε_{t−j}) = 0 for all j (white noise). Therefore,

E(X_t) = µ/(1 − φ) ≡ m.


Autocovariances
Trick: proceed in the same way and note that the Yule-Walker equations are the same as those of an AR(1) for |h| > 1.

For h = 0:

γ_X(0) = E[(X_t − m)(X_t − m)]
       = E[(φ(X_{t−1} − m) + ε_t + θε_{t−1})(X_t − m)]
       = φE[(X_{t−1} − m)(X_t − m)] + E[ε_t(X_t − m)] + θE[ε_{t−1}(X_t − m)]
       = φγ_X(1) + σ_ε² + θE[(φ(X_{t−1} − m) + ε_t + θε_{t−1})ε_{t−1}]
       = φγ_X(1) + σ_ε² + θφE[(X_{t−1} − m)ε_{t−1}] + θE[ε_t ε_{t−1}] + θ²E[ε²_{t−1}]
       = φγ_X(1) + σ_ε² + θφσ_ε² + 0 + θ²σ_ε²
       = φγ_X(1) + σ_ε²(1 + θ(φ + θ)),

where the term φE[(X_{t−1} − m)(X_t − m)] is the AR(1) part, and E[ε_t(X_t − m)] = E[(X_{t−1} − m)ε_{t−1}] = σ_ε² since (ε_t) is the innovation process.


For h = 1:

γ_X(1) = E[(X_t − m)(X_{t−1} − m)]
       = E[(φ(X_{t−1} − m) + ε_t + θε_{t−1})(X_{t−1} − m)]
       = φE[(X_{t−1} − m)(X_{t−1} − m)] + E[ε_t(X_{t−1} − m)] + θE[ε_{t−1}(X_{t−1} − m)]
       = φγ_X(0) + θσ_ε²,

where φγ_X(0) is the AR(1) part.

Solving for γ_X(0) and γ_X(1):

γ_X(0) ≡ V(X_t) = [(1 + 2φθ + θ²)/(1 − φ²)] σ_ε²
γ_X(1) ≡ Cov(X_t, X_{t−1}) = [(φ + θ)(1 + φθ)/(1 − φ²)] σ_ε².
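As a quick numerical check (not from the slides), the two closed-form expressions can be compared with sample moments of a long simulated path, assuming numpy; the parameter values are illustrative.

    import numpy as np

    phi, theta, sigma2 = 0.9, 0.5, 1.0   # illustrative parameters
    n = 200_000                          # long sample so moments are close to their limits

    rng = np.random.default_rng(1)
    eps = rng.normal(0.0, np.sqrt(sigma2), size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

    gamma0 = (1 + 2 * phi * theta + theta**2) / (1 - phi**2) * sigma2
    gamma1 = (phi + theta) * (1 + phi * theta) / (1 - phi**2) * sigma2

    print(round(x.var(), 2), round(gamma0, 2))                             # sample vs. theoretical variance
    print(round(float(np.cov(x[1:], x[:-1])[0, 1]), 2), round(gamma1, 2))  # sample vs. theoretical gamma_X(1)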


For |h| > 1:

γ_X(h) = E[(X_t − m)(X_{t−h} − m)]
       = E[(φ(X_{t−1} − m) + ε_t + θε_{t−1})(X_{t−h} − m)]
       = φE[(X_{t−1} − m)(X_{t−h} − m)] + E[ε_t(X_{t−h} − m)] + θE[ε_{t−1}(X_{t−h} − m)]
       = φγ_X(h − 1),

which is the AR(1) part, since E[ε_{t−j}(X_{t−h} − m)] = 0 for all h > j ((ε_t) is the innovation process).

The autocovariance of order h thus satisfies the same difference (recurrence) equation as in an AR(1) model; only the initial value γ_X(1) changes!

