
ECMT3150: The Econometrics of Financial Markets

1a. Linear Time Series Analysis

Simon Kwok
University of Sydney

Semester 1, 2022
Outline

1. AR Model
1.1 Model Properties
1.2 Autocorrelation Function (ACF)
1.3 Model Estimation
1.4 Model Selection
1.5 Goodness-of-fit
1.6 Forecasting
Autoregressive (AR) Model

$\{y_t\}$ follows an AR($p$) model if
$$y_t = \phi_0 + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t,$$
where $\{\varepsilon_t\}$ is $\mathrm{wn}(0, \sigma^2_\varepsilon)$.

Suppose $\{y_t\}$ is covariance stationary. The unconditional (long-run) mean is
$$\mu \equiv E(y_t) = \frac{\phi_0}{1 - \sum_{i=1}^{p} \phi_i}. \tag{1}$$
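As a quick illustration (added; the parameter values are assumptions, not from the slides), the long-run mean in (1) can be checked by simulation. The sketch below simulates a stationary AR(2) with $\phi_0 = 1$, $\phi_1 = 0.5$, $\phi_2 = 0.3$, for which $\mu = 1/(1 - 0.5 - 0.3) = 5$:

```python
import numpy as np

# Simulate y_t = phi0 + phi1*y_{t-1} + phi2*y_{t-2} + eps_t with Gaussian
# white noise. Parameter values are hypothetical (chosen to be stationary).
rng = np.random.default_rng(0)
phi0, phi1, phi2, sigma_eps = 1.0, 0.5, 0.3, 1.0
T = 100_000

y = np.zeros(T)
for t in range(2, T):
    y[t] = phi0 + phi1 * y[t - 1] + phi2 * y[t - 2] + sigma_eps * rng.standard_normal()

mu = phi0 / (1 - phi1 - phi2)     # equation (1)
print(y[T // 2:].mean(), mu)      # sample mean (after burn-in) should be close to 5
```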
AR Model

Write the AR($p$) model in the demeaned form:
$$y_t = \mu + u_t, \qquad u_t = \sum_{i=1}^{p} \phi_i u_{t-i} + \varepsilon_t. \tag{2}$$

Define the $p$th-order polynomial function
$$\phi(x) = \phi_1 x + \phi_2 x^2 + \cdots + \phi_p x^p.$$

Let $L$ be the lag operator such that $L u_t = u_{t-1}$, $L^i u_t = u_{t-i}$, and $Lc = c$ for any constant $c$.

We can rewrite (2) as
$$[1 - \phi(L)]\, u_t = \varepsilon_t.$$
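For concreteness (added), with $p = 2$ the operator form expands back to the model: $[1 - \phi_1 L - \phi_2 L^2] u_t = u_t - \phi_1 u_{t-1} - \phi_2 u_{t-2} = \varepsilon_t$.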
Autocovariance and Autocorrelation Functions

Suppose $\{y_t\}$ is covariance stationary.

- For a given integer $j$, the lag-$j$ autocovariance function is defined as
$$\gamma_j = \mathrm{Cov}(y_t, y_{t-j}).$$
In particular, the variance is $\gamma_0 = \mathrm{Var}(y_t)$.
- For a given integer $j$, the lag-$j$ autocorrelation function (ACF) is defined as
$$\rho_j = \mathrm{Corr}(y_t, y_{t-j}) = \frac{\mathrm{Cov}(y_t, y_{t-j})}{\mathrm{Var}(y_t)} = \frac{\gamma_j}{\gamma_0}.$$

By convention, $\rho_0 \equiv 1$.

Both the autocovariance function and ACF are symmetric functions: $\gamma_j = \gamma_{-j}$ and $\rho_j = \rho_{-j}$.
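A minimal sketch (added) of how these quantities are estimated from data; the $1/T$ scaling at every lag is a standard convention and an assumption here, since the slide defines only the population quantities:

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocovariances gamma_hat_j and autocorrelations rho_hat_j,
    with gamma_hat_j = (1/T) * sum_t (y_t - ybar)(y_{t-j} - ybar)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    gamma = np.array([np.sum(d[j:] * d[:T - j]) / T for j in range(max_lag + 1)])
    rho = gamma / gamma[0]        # rho_0 = 1 by construction
    return gamma, rho
```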
AR(1) Model

AR(1) model:
$$y_t = \mu + u_t, \qquad u_t = \phi_1 u_{t-1} + \varepsilon_t. \tag{3}$$

Q: What are the autocovariance and autocorrelation functions of $y_t$?

Multiply both sides of (3) by $u_{t-j}$ for $j \geq 1$, and take expectations:
$$E(u_t u_{t-j}) = \phi_1 E(u_{t-1} u_{t-j}) + E(\varepsilon_t u_{t-j}),$$
i.e.,
$$\gamma_j = \phi_1 \gamma_{j-1}. \tag{4}$$

Note that $E(\varepsilon_t u_{t-j}) = E[E(\varepsilon_t \mid \mathcal{F}_{t-j})\, u_{t-j}] = 0$. Equation (4) is the Yule-Walker equation of AR(1).

Set $j = 1$: $\gamma_1 = \phi_1 \gamma_0$. As $\gamma_0 = \mathrm{Var}(u_t) = \frac{\sigma^2_\varepsilon}{1 - \phi_1^2}$, we can solve for $\gamma_1$: $\gamma_1 = \frac{\sigma^2_\varepsilon \phi_1}{1 - \phi_1^2}$.

For $j > 1$, $\gamma_j = \phi_1 \gamma_{j-1}$. By recursive substitution, we have $\gamma_j = \frac{\sigma^2_\varepsilon \phi_1^j}{1 - \phi_1^2}$.

By symmetry, the lag-$j$ autocovariance function is $\gamma_j = \frac{\sigma^2_\varepsilon \phi_1^{|j|}}{1 - \phi_1^2}$ for all integers $j$.
ACF of AR(1)

Recall that the ACF is $\rho_j = \gamma_j / \gamma_0$. The ACF of AR(1) is
$$\rho_j = \frac{\sigma^2_\varepsilon \phi_1^{|j|}}{1 - \phi_1^2} \Big/ \frac{\sigma^2_\varepsilon}{1 - \phi_1^2} = \phi_1^{|j|}$$
for all integers $j$.

- When $0 < \phi_1 < 1$, the ACF decays smoothly to zero.
- When $-1 < \phi_1 < 0$, the ACF alternates between positive and negative values, but its magnitude decays smoothly to zero.
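For example (added): with $\phi_1 = 0.8$ the ACF at lags 1, 2, 3 is $0.8, 0.64, 0.512$; with $\phi_1 = -0.8$ it is $-0.8, 0.64, -0.512$, alternating in sign while decaying at the same rate.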
ACF of AR(1)

[Figure: ACF plots of AR(1) processes; the image did not survive extraction.]
ACF of AR(2)

Let $y_t \sim$ AR(2): $y_t = \mu + u_t$ and $u_t = \phi_1 u_{t-1} + \phi_2 u_{t-2} + \varepsilon_t$.

Ex: Show that the autocovariance function is
$$\gamma_0 = \frac{\sigma^2_\varepsilon}{D}(1 - \phi_2), \qquad \gamma_1 = \frac{\sigma^2_\varepsilon}{D}\phi_1,$$
$$\gamma_j = \phi_1 \gamma_{j-1} + \phi_2 \gamma_{j-2} \quad \text{for } j \geq 2, \qquad \gamma_{-j} = \gamma_j,$$
where $D \equiv (1 + \phi_2)(1 + \phi_1 - \phi_2)(1 - \phi_1 - \phi_2)$, and that the ACF is
$$\rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\phi_1}{1 - \phi_2}, \qquad \rho_j = \phi_1 \rho_{j-1} + \phi_2 \rho_{j-2} \quad \text{for } j \geq 2, \qquad \rho_{-j} = \rho_j.$$
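A small sketch (added) of the ACF recursion above, using the same hypothetical coefficients as the earlier simulation:

```python
import numpy as np

def ar2_acf(phi1, phi2, max_lag):
    """Theoretical ACF of a stationary AR(2) via the recursion on this slide:
    rho_1 = phi1/(1 - phi2), rho_j = phi1*rho_{j-1} + phi2*rho_{j-2} for j >= 2."""
    rho = np.empty(max_lag + 1)
    rho[0] = 1.0
    rho[1] = phi1 / (1 - phi2)
    for j in range(2, max_lag + 1):
        rho[j] = phi1 * rho[j - 1] + phi2 * rho[j - 2]
    return rho

print(ar2_acf(0.5, 0.3, 5))   # rho_0, ..., rho_5 for the hypothetical AR(2)
```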
ACF of AR(2)

The polynomial function of AR(2) is $\phi(x) = \phi_1 x + \phi_2 x^2$. Consider the polynomial equation
$$1 - \phi(x) = 0, \qquad \text{i.e.,} \qquad 1 - \phi_1 x - \phi_2 x^2 = 0.$$
The equation has two roots
$$x = \frac{-\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2\phi_2}.$$

- When the roots are real ($\phi_1^2 + 4\phi_2 \geq 0$), the ACF decays smoothly.
- When the roots are complex ($\phi_1^2 + 4\phi_2 < 0$), the ACF is oscillating with an average period
$$k = \frac{2\pi}{\cos^{-1}\!\big(\phi_1 / (2\sqrt{-\phi_2})\big)}.$$

Let the complex roots be $a \pm ib$. The period $k$ can be solved from $\cos(2\pi/k) = \frac{a}{\sqrt{a^2 + b^2}}$, where $a = -\frac{\phi_1}{2\phi_2}$ and $b = \frac{\sqrt{-(\phi_1^2 + 4\phi_2)}}{2\phi_2}$.
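For example (added): with $\phi_1 = 1$ and $\phi_2 = -0.5$, $\phi_1^2 + 4\phi_2 = -1 < 0$, so the roots are complex and $k = 2\pi / \cos^{-1}\!\big(1/(2\sqrt{0.5})\big) = 2\pi / (\pi/4) = 8$: the ACF oscillates with an average period of 8 lags.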
ACF of AR(2)

[Figure: ACF plots of AR(2) processes; the image did not survive extraction.]
PACF

Consider the following regressions:
$$y_t = \phi_{0,1} + \phi_{1,1} y_{t-1} + e_{1t}$$
$$y_t = \phi_{0,2} + \phi_{1,2} y_{t-1} + \phi_{2,2} y_{t-2} + e_{2t}$$
$$y_t = \phi_{0,3} + \phi_{1,3} y_{t-1} + \phi_{2,3} y_{t-2} + \phi_{3,3} y_{t-3} + e_{3t}$$
$$\vdots$$

The lag-$j$ population partial autocorrelation function (PACF) is defined as $\phi_{j,j}$ for each $j = 1, 2, \ldots$. The lag-$j$ sample PACF is the ordinary least squares (OLS) estimate $\hat{\phi}_{j,j}$.

If $\{y_t\}$ follows an AR($p$) process, then $\hat{\phi}_{p,p} \to \phi_{p,p}$ as $T \to \infty$. In particular, $\hat{\phi}_{\ell,\ell} \to 0$ as $T \to \infty$ for $\ell > p$. The asymptotic variance of $\hat{\phi}_{\ell,\ell}$ is $\frac{1}{T}$ for $\ell > p$.
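A minimal sketch (added) of the sample PACF exactly as defined above: for each lag $j$, regress $y_t$ on a constant and its first $j$ lags by OLS and keep the last coefficient. np.linalg.lstsq serves as the OLS solver:

```python
import numpy as np

def sample_pacf(y, max_lag):
    """Sample PACF: phi_hat_{j,j} from the OLS regression of y_t on
    (1, y_{t-1}, ..., y_{t-j}), for j = 1, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    pacf = []
    for j in range(1, max_lag + 1):
        # Design matrix: constant plus lags 1..j, for t = j+1, ..., T
        X = np.column_stack(
            [np.ones(T - j)] + [y[j - i:T - i] for i in range(1, j + 1)]
        )
        beta, *_ = np.linalg.lstsq(X, y[j:], rcond=None)
        pacf.append(beta[-1])     # coefficient on y_{t-j}, i.e. phi_hat_{j,j}
    return np.array(pacf)
```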
AR Model Estimation

Given the time series data $\{y_t\}_{t=1}^{T}$, we can estimate an AR($p$) model by the conditional least squares method. Conditional on the first $p$ values $y_1, \ldots, y_p$, we run the regression
$$y_t = \phi_0 + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \varepsilon_t$$
for $t = p+1, p+2, \ldots, T$.

Number of observations $= T - p$.
Number of parameters $= p + 1$.
Degrees of freedom $= (T - p) - (p + 1) = T - 2p - 1$.

Let $\hat{\phi}_0, \ldots, \hat{\phi}_p$ be the OLS coefficient estimates. The residual series is $\{\hat{\varepsilon}_t\}_{t=p+1}^{T}$, where
$$\hat{\varepsilon}_t = y_t - \hat{\phi}_0 - \hat{\phi}_1 y_{t-1} - \cdots - \hat{\phi}_p y_{t-p}.$$

If $\{\varepsilon_t\}$ is homoskedastic and serially uncorrelated, the variance of $\varepsilon_t$ is consistently estimated by the sample variance of the residuals
$$\hat{\sigma}^2_\varepsilon = \frac{1}{T - 2p - 1} \sum_{t=p+1}^{T} \hat{\varepsilon}_t^2.$$
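A sketch (added) of conditional least squares following the setup above; as in the PACF sketch, np.linalg.lstsq does the OLS step:

```python
import numpy as np

def fit_ar(y, p):
    """Conditional least squares for an AR(p): regress y_t on
    (1, y_{t-1}, ..., y_{t-p}) for t = p+1, ..., T."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack(
        [np.ones(T - p)] + [y[p - i:T - i] for i in range(1, p + 1)]
    )
    phi, *_ = np.linalg.lstsq(X, y[p:], rcond=None)   # (phi_0, ..., phi_p)
    resid = y[p:] - X @ phi                           # residual series
    dof = (T - p) - (p + 1)                           # = T - 2p - 1
    sigma2_eps = np.sum(resid**2) / dof               # hat sigma_eps^2
    return phi, resid, sigma2_eps
```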
AR Model Selection

- Akaike information criterion:
$$\mathrm{AIC} = -\frac{2}{T} \ln(\text{likelihood}) + \frac{2}{T} \times (\text{no. of parameters}).$$
- The Schwarz-Bayesian information criterion:
$$\mathrm{BIC} = -\frac{2}{T} \ln(\text{likelihood}) + \frac{\ln(T)}{T} \times (\text{no. of parameters}).$$

For an AR($p$) model with Gaussian errors, AIC and BIC become
$$\mathrm{AIC} = \ln(\tilde{\sigma}^2_p) + \frac{2p}{T} + \text{constant},$$
$$\mathrm{BIC} = \ln(\tilde{\sigma}^2_p) + \frac{p \ln(T)}{T} + \text{constant},$$
where $\tilde{\sigma}^2_p$ is the maximum likelihood estimate of the error variance: $\tilde{\sigma}^2_p = \frac{1}{T} \sum_{t=p+1}^{T} \hat{\varepsilon}_t^2$.
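A sketch (added) of order selection with these criteria, reusing the hypothetical fit_ar from the estimation sketch; the order minimizing each criterion is chosen:

```python
import numpy as np

def select_ar_order(y, p_max):
    """Choose the AR order by AIC/BIC, using the Gaussian-error forms
    above with the constants dropped."""
    T = len(y)
    aic, bic = {}, {}
    for p in range(1, p_max + 1):
        _, resid, _ = fit_ar(y, p)          # fit_ar: sketch from the estimation slide
        sigma2_p = np.sum(resid**2) / T     # MLE of the error variance
        aic[p] = np.log(sigma2_p) + 2 * p / T
        bic[p] = np.log(sigma2_p) + p * np.log(T) / T
    return min(aic, key=aic.get), min(bic, key=bic.get)
```

For any realistic sample size, the BIC penalty $\ln(T)/T$ exceeds the AIC penalty $2/T$, so BIC tends to select a smaller order than AIC.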
Goodness-of-fit

- $R^2 = 1 - \frac{\mathrm{SSE}}{\mathrm{SST}} = 1 - \frac{\sum_{t=p+1}^{T} \hat{\varepsilon}_t^2}{\sum_{t=p+1}^{T} (y_t - \bar{y})^2}$.
By SST = SSR + SSE, we have $0 \leq R^2 \leq 1$.
- Adjusted $R^2 = 1 - \frac{\frac{1}{T-2p-1} \sum_{t=p+1}^{T} \hat{\varepsilon}_t^2}{\frac{1}{T-p-1} \sum_{t=p+1}^{T} (y_t - \bar{y})^2} = 1 - \frac{\hat{\sigma}^2_\varepsilon}{\hat{\sigma}^2_y}$, where $\hat{\sigma}^2_y$ is the sample variance of $y_t$.

Adjusted $R^2$ may fall outside the $[0, 1]$ interval.
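A short sketch (added) of both statistics, again relying on the hypothetical fit_ar:

```python
import numpy as np

def ar_fit_stats(y, p):
    """R^2 and adjusted R^2 for a fitted AR(p), as defined on this slide."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    _, resid, sigma2_eps = fit_ar(y, p)       # sigma2_eps uses 1/(T - 2p - 1)
    dev = y[p:] - y[p:].mean()                # deviations over t = p+1, ..., T
    r2 = 1 - np.sum(resid**2) / np.sum(dev**2)
    sigma2_y = np.sum(dev**2) / (T - p - 1)   # sample variance of y_t
    adj_r2 = 1 - sigma2_eps / sigma2_y
    return r2, adj_r2
```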
Time Series Forecasting

The $\ell$-step ahead forecast $\hat{y}_t(\ell)$ is obtained by minimizing the conditional mean squared error:
$$\hat{y}_t(\ell) = \arg\min_g E[(y_{t+\ell} - g)^2 \mid \mathcal{F}_t].$$
The solution is $\hat{y}_t(\ell) = E[y_{t+\ell} \mid \mathcal{F}_t]$.

Interpretation: the projection of $y_{t+\ell}$ onto the information set at time $t$.

The forecast error is
$$e_t(\ell) = y_{t+\ell} - \hat{y}_t(\ell).$$

Two types of forecasts in practice:

- Conditional forecast: compute $\hat{y}_t(\ell)$ using estimated parameters, without accounting for parameter uncertainty.
- Unconditional forecast: explicitly account for parameter uncertainty; this widens the confidence interval around $\hat{y}_t(\ell)$.
Forecasting with AR model

Suppose $\{y_t\}$ follows a stationary AR($p$) with errors $\{\varepsilon_t\} \sim \mathrm{wn}(0, \sigma^2_\varepsilon)$.

For linear models, we consider linear projection. Here, $\mathcal{F}_t$ represents the history of $\{y_t\}$ (or equivalently the history of $\{\varepsilon_t\}$) up to time $t$, i.e., $\mathcal{F}_t = \{y_t, y_{t-1}, \ldots\} = \{\varepsilon_t, \varepsilon_{t-1}, \ldots\}$.

For $\ell = 1$,
$$\hat{y}_t(1) = E[y_{t+1} \mid \mathcal{F}_t] = E[\phi_0 + \phi_1 y_t + \cdots + \phi_p y_{t-p+1} + \varepsilon_{t+1} \mid \mathcal{F}_t] = \phi_0 + \phi_1 y_t + \cdots + \phi_p y_{t-p+1},$$
as $y_t, \ldots, y_{t-p+1}$ are known given $\mathcal{F}_t$, and $E[\varepsilon_{t+1} \mid \mathcal{F}_t] = E[\varepsilon_{t+1} \mid \varepsilon_t, \varepsilon_{t-1}, \ldots] = 0$.¹

The forecast error is $e_t(1) = y_{t+1} - \hat{y}_t(1) = \varepsilon_{t+1}$, with variance $\mathrm{Var}[e_t(1)] = \sigma^2_\varepsilon$.

¹ This would be invalid if the conditional expectation is a nonlinear projection, which is typically the case for a nonlinear model. A stronger mds (martingale difference sequence) assumption is required to ensure $E(\varepsilon_{t+1} \mid \mathcal{F}_t) = 0$.
Forecasting with AR model

For $\ell = 2$,
$$\hat{y}_t(2) = E[y_{t+2} \mid \mathcal{F}_t] = E[\phi_0 + \phi_1 y_{t+1} + \phi_2 y_t + \cdots + \phi_p y_{t-p+2} + \varepsilon_{t+2} \mid \mathcal{F}_t]$$
$$= \phi_0 + \phi_1 E[y_{t+1} \mid \mathcal{F}_t] + \phi_2 y_t + \cdots + \phi_p y_{t-p+2} = \phi_0 + \phi_1 \hat{y}_t(1) + \phi_2 y_t + \cdots + \phi_p y_{t-p+2}.$$

The forecast error is $e_t(2) = y_{t+2} - \hat{y}_t(2) = \phi_1 e_t(1) + \varepsilon_{t+2} = \phi_1 \varepsilon_{t+1} + \varepsilon_{t+2}$, with variance
$$\mathrm{Var}[e_t(2)] = \phi_1^2 \mathrm{Var}(\varepsilon_{t+1}) + \mathrm{Var}(\varepsilon_{t+2}) + 2\phi_1 \mathrm{Cov}(\varepsilon_{t+1}, \varepsilon_{t+2}) = \phi_1^2 \sigma^2_\varepsilon + \sigma^2_\varepsilon + 0 = (1 + \phi_1^2)\sigma^2_\varepsilon.$$
Forecasting with AR model

Q: Let $\ell > p$. What is the $\ell$-step ahead forecast of a stationary AR($p$) model? What happens to the forecast as $\ell \to \infty$?

A: With $\ell > p$, the $\ell$-step ahead forecast is
$$\hat{y}_t(\ell) = \phi_0 + \phi_1 \hat{y}_t(\ell - 1) + \cdots + \phi_p \hat{y}_t(\ell - p). \tag{5}$$

By stationarity, $\hat{y}_t(\ell)$ converges to a limit $b$ as $\ell \to \infty$. The limit must satisfy (5), yielding the solution
$$b = \frac{\phi_0}{1 - \phi_1 - \cdots - \phi_p},$$
which is $\mu = E(y_t)$. This is the mean-reverting property.
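A final sketch (added) of the recursion in (5) for the hypothetical AR(2) used throughout these sketches; iterating the forecast from arbitrary starting values drives it toward $\mu = 5$, illustrating mean reversion:

```python
import numpy as np

# Hypothetical AR(2) parameters, carried over from the earlier sketches
phi0, phi = 1.0, np.array([0.5, 0.3])
mu = phi0 / (1 - phi.sum())               # long-run mean = 5

history = [8.0, 7.0]                      # hypothetical (y_t, y_{t-1}), newest first
forecasts = []
for _ in range(40):                       # recursion (5): yhat_t(l), l = 1, ..., 40
    yhat = phi0 + phi @ np.array(history)
    forecasts.append(yhat)
    history = [yhat, history[0]]          # newest forecast shifts in

print(forecasts[0], forecasts[-1], mu)    # forecasts approach mu as l grows
```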
