VAR and VEC Models

The document discusses stationary and nonstationary time series, spurious regression, and unit root tests. It introduces vector autoregressive (VAR) and vector error correction (VECM) models. The Johansen maximum likelihood procedure for testing cointegration in multivariate systems using a VAR approach is also covered.

Week 5 & 6

FINC7325
VAR and VECM Models

Source: Studenmund (2014), Chris Brooks (2013), and others


Note: These slides are prepared for lecture purposes and are not to be
circulated to others or used for publication
Stationary and Nonstationary Time Series
• A time series variable Xt is stationary if:
1. the mean of Xt is constant over time,
2. the variance of Xt is constant over time, and
3. the simple correlation coefficient between Xt and Xt-k depends on the length of the lag (k) but on no other variable (for all k).
• If one or more of these properties is not met, then Xt is nonstationary.
• This problem is known as nonstationarity.
Stationary and Nonstationary
Time Series (continued)
• Some variables are nonstationary because they rapidly increase over time.
• Adding a time trend to the regression model can help avoid spurious regression in this case.
• Unfortunately, for many variables, this does not alleviate nonstationarity.
• Nonstationarity often takes the form of a “random walk.”

Stationary and Nonstationary
Time Series (continued)
• A random walk is a time-series variable in which next period’s value equals this period’s value plus a stochastic error term.
• Suppose:
Yt = γYt-1 + vt
where vt is a classical error term.
• If |γ| < 1, then Yt is stationary.
• If |γ| > 1, then Yt is nonstationary.
• If γ = 1, then Yt is nonstationary (unit root).
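The three cases above can be illustrated by simulation. The following is a minimal sketch (NumPy only; the coefficient value 0.5 is an illustrative assumption, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
v = rng.standard_normal(n)           # classical error term v_t

# Stationary case: Y_t = 0.5*Y_{t-1} + v_t  (|gamma| < 1)
y_stat = np.zeros(n)
for t in range(1, n):
    y_stat[t] = 0.5 * y_stat[t - 1] + v[t]

# Unit-root case (random walk): Y_t = Y_{t-1} + v_t  (gamma = 1)
y_rw = np.cumsum(v)

# The stationary series fluctuates around a constant mean;
# the random walk wanders without returning to any fixed level.
```

Plotting the two series side by side makes the contrast visible: the stationary series crosses its mean repeatedly, while the random walk drifts.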
Spurious Correlation and Nonstationarity
• Spurious correlation is a strong relationship between two or
more variables that is not caused by a real underlying causal
relationship.

• A regression in which the dependent variable and one or more of the independent variables are spuriously correlated is a spurious regression.
• The t-scores and overall fit are likely to be overstated and untrustworthy.
• One cause of spurious correlation is nonstationarity.

Spurious Regression
Example: Price and tuition
• Price is the price (in dollars) of a gallon of gas in Portland, OR.
• Tuition is the tuition (in dollars) for a semester of study at Occidental College in Los Angeles, CA.
Detection and test of nonstationarity
• Graphical analysis, using a time series plot:
- a stationary series shows random fluctuations above and below the historical mean of the series.
- a non-stationary series shows a trend (stochastic trend/deterministic trend).
• Correlogram (ACF):
- stationary series: tails off quickly and falls inside the 95% Bartlett CI.
- nonstationary series: tails off extremely slowly and remains significant even at high lags.
Detection and test of nonstationarity
• Unit Root Test (Dickey and Fuller, 1979)
- Based on the process Yt = Yt-1 + εt, which is a specific case of the AR(1) process Yt = φ1Yt-1 + εt.
- If φ1 = 1, then Yt has a unit root and is nonstationary.
- The basic objective of the test is to test the null hypothesis that φ = 1 in:
yt = φyt-1 + εt
against the one-sided alternative φ < 1. So we have
H0: series contains a unit root (nonstationary)
vs. H1: series is stationary.
We usually use the regression:
Δyt = δyt-1 + εt
so that a test of φ = 1 is equivalent to a test of δ = 0 (since φ − 1 = δ).
Detection and test of nonstationarity
• Augmented Dickey-Fuller test
- The test above is only valid if εt is white noise. In particular, εt will be autocorrelated if there is autocorrelation in the dependent variable of the regression (Δyt) which we have not modelled. The solution is to “augment” the test using p lags of the dependent variable. The model is now written:

Δyt = δyt-1 + Σ(i=1 to p) αiΔyt-i + ut

- The same critical values from the DF tables are used as before. A problem now arises in determining the optimal number of lags of the dependent variable. There are two ways:
- use the frequency of the data to decide
- use information criteria
- Use MacKinnon (1991) critical values.
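The ADF regression above can be estimated directly by OLS. The sketch below is my own minimal implementation for illustration (`adf_tstat` is a hypothetical helper, not from the slides); the resulting t-ratio on δ must be compared against DF/MacKinnon critical values, not the standard normal:

```python
import numpy as np

def adf_tstat(y, p=1):
    """t-ratio on delta in  Δy_t = δ·y_{t-1} + Σ_{i=1..p} αi·Δy_{t-i} + u_t
    (no deterministic terms; compare with DF/MacKinnon critical values)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    Y = dy[p:]                                   # regressand Δy_t
    cols = [y[p:-1]]                             # lagged level y_{t-1}
    for i in range(1, p + 1):                    # p lagged differences
        cols.append(dy[p - i:len(dy) - i])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (len(Y) - X.shape[1])   # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

rng = np.random.default_rng(0)
e = rng.standard_normal(400)
y_stationary = np.zeros(400)
for t in range(1, 400):
    y_stationary[t] = 0.5 * y_stationary[t - 1] + e[t]
y_unit_root = np.cumsum(e)

print(adf_tstat(y_stationary))   # strongly negative: reject H0 (unit root)
print(adf_tstat(y_unit_root))    # usually small in magnitude: cannot reject H0
```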
Testing for Higher Orders of Integration
• Consider the simple regression:
Δyt = δyt-1 + εt
We test H0: δ = 0 vs. H1: δ < 0.
• If H0 is rejected, we simply conclude that yt does not contain a unit root.
• But what do we conclude if H0 is not rejected? The series contains a unit root, but is that it? No! What if yt ∼ I(2)? We would still not have rejected. So we now need to test
H0: yt ∼ I(2) vs. H1: yt ∼ I(1)
We would continue to test for a further unit root until we rejected H0.
• We now regress Δ²yt on Δyt-1 (plus lags of Δ²yt if necessary).
• Now we test H0: Δyt ∼ I(1), which is equivalent to H0: yt ∼ I(2).
• So in this case, if we do not reject (unlikely), we conclude that yt is at least I(2).
The Phillips-Perron Test
• Phillips and Perron have developed a more comprehensive theory of unit root nonstationarity. The tests are similar to ADF tests, but they incorporate an automatic correction to the DF procedure to allow for autocorrelation and heteroskedasticity in the residuals.
• The tests usually give the same conclusions as the ADF tests, and the calculation of the test statistics is complex.
‘Introductory Econometrics for Finance’ © Chris Brooks 2013
Criticism of Dickey-Fuller and Phillips-Perron-type tests
• The main criticism is that the power of the tests is low if the process is stationary but with a root close to the non-stationary boundary.
e.g. the tests are poor at deciding whether φ = 1 or φ = 0.95, especially with small sample sizes.
• If the true data generating process (dgp) is
yt = 0.95yt-1 + εt
then the null hypothesis of a unit root should be rejected.
• One way to get around this is to use a stationarity test as well as the unit root tests we have looked at.
Stationarity tests
• Stationarity tests have
H0: yt is stationary
versus H1: yt is non-stationary
so that by default, under the null, the data will appear stationary.
• One such stationarity test is the KPSS test (Kwiatkowski, Phillips, Schmidt and Shin, 1992).
• Thus we can compare the results of these tests with the ADF/PP procedure to see if we obtain the same conclusion.
• It is a right-tailed test; thus, reject the null hypothesis if the test statistic is larger than the critical value.
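A minimal sketch of the KPSS level-stationarity statistic follows (my own illustrative implementation, not the slides' code; the Bartlett-kernel bandwidth rule used here is one common choice, and the approximate 5% critical value of 0.463 applies to the constant-only case):

```python
import numpy as np

def kpss_stat(y, lags=None):
    """KPSS statistic for H0: y is stationary around a constant.
    Right-tailed test: reject H0 if the statistic exceeds the
    critical value (approx. 0.463 at the 5% level for this case)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                  # residuals from regression on a constant
    S = np.cumsum(e)                  # partial sums of residuals
    if lags is None:
        lags = int(4 * (T / 100) ** 0.25)
    lrv = e @ e / T                   # long-run variance, Bartlett kernel
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1)
        lrv += 2.0 * w * (e[l:] @ e[:-l]) / T
    return (S @ S) / (T ** 2 * lrv)

rng = np.random.default_rng(1)
stationary = rng.standard_normal(500)
random_walk = np.cumsum(rng.standard_normal(500))
print(kpss_stat(stationary), kpss_stat(random_walk))
```

Note the reversal of roles relative to ADF/PP: here the nonstationary series is the one that produces a large (rejecting) statistic.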
Stationarity tests (cont’d)
• A comparison:
ADF / PP: H0: yt ∼ I(1) vs. H1: yt ∼ I(0)
KPSS: H0: yt ∼ I(0) vs. H1: yt ∼ I(1)
• 4 possible outcomes:
1. Reject H0 (ADF/PP) and do not reject H0 (KPSS)
2. Do not reject H0 and reject H0
3. Reject H0 and reject H0
4. Do not reject H0 and do not reject H0
Introduction to VAR-VECM Models
• A VAR model involves multiple endogenous variables and therefore has more than one equation.
• Each equation uses as its explanatory variables lags of all the variables and possibly a deterministic trend.
• VAR models are usually applied to stationary series, i.e., the first differences of the original series; because of this, there is always a possibility of losing information about the relationship among the integrated series.
• Therefore, differencing the series to make them stationary is one solution, but at the cost of ignoring possibly important (“long run”) relationships between the levels. A better solution is to test whether the levels regressions are trustworthy (“cointegration”). The usual approach is to use Johansen’s method for testing whether or not cointegration exists. If the answer is “yes”, then a vector error correction model (VECM), which combines levels and differences, can be estimated instead of a VAR in levels.
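To make the "lags of all the variables" idea concrete, here is a minimal sketch of estimating a bivariate VAR(1) equation-by-equation with OLS (the coefficient matrix A1 is an illustrative assumption, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
A1 = np.array([[0.5, 0.1],           # hypothetical coefficient matrix
               [0.2, 0.3]])

# Simulate the stationary VAR(1): x_t = A1 x_{t-1} + u_t
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A1 @ x[t - 1] + rng.standard_normal(2)

# Each equation regresses one variable on lags of ALL the variables (OLS);
# stacking the two equations recovers the full coefficient matrix.
X, Y = x[:-1], x[1:]
A1_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(A1_hat)                        # should be close to A1
```

Equation-by-equation OLS is valid here because every equation has the same regressors.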
Introduction to VAR Model
VAR Model
VECM form of VAR
Cointegration of variables in VAR
[slide equations not reproduced]
Multivariate Approach to Cointegration
• Using the Johansen Maximum Likelihood (ML) procedure, it is possible to obtain more than a single cointegrating relationship.
• If there is evidence of more than one cointegrating relationship, which one
should be used?
• There are two separate tests for cointegration, which can give different results.
• Given that this is a maximum likelihood based test (Engle-Granger is OLS based),
it requires a large sample.
• The multivariate test is based on a VAR, not a single OLS estimation.
The Johansen ML Procedure
• This is based on a VAR approach to cointegration
• All the variables are assumed to be endogenous (although it is
possible to include exogenous variables)
• The test relies on the relationship between the rank of a matrix and
its eigenvalues or characteristic roots.
You do not need to understand the mechanics of this approach, just
how to use it and how to interpret the results
Johansen ML Approach
• The approach to testing for cointegration in a multivariate system is similar to the ADF test, but requires the use of a VAR approach:

xt = A1xt-1 + ut
Δxt = (A1 − I)xt-1 + ut
Δxt = Πxt-1 + ut

where: Π = (A1 − I)
Johansen ML Approach
• Where, in a system of g variables:
xt and ut are g × 1 vectors,
A1 is a g × g matrix of parameters,
I is a g × g identity matrix.
Johansen ML Approach
• The rank of π equals the number of cointegrating vectors
• If π consists of all zeros, as with the ADF test, the rank of the
matrix equals zero, all of the xs are unit root processes, implying
the variables are not cointegrated.
• As with the ADF test, the equation can also include lagged
dependent variables, although the number of lags included is
important and can affect the result. This requires the use of the
Akaike or Schwarz-Bayesian criteria to ensure an optimal lag
length.
JJ (Johansen and Juselius) developed a sequential procedure to test the
number of cointegrating relationships:
• 1) Joint (trace) test
• 2) Maximum eigenvalue test
Differences Between the Two Test Statistics
• The Trace test is a joint test; the null hypothesis is that the number of cointegrating vectors is less than or equal to r, against a general alternative hypothesis that there are more than r.
• The Maximal Eigenvalue test conducts separate tests on each
eigenvalue. The null hypothesis is that there are r cointegrating
vectors present against the alternative that there are (r + 1) present.
• The distribution of both test statistics is non-standard.
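For reference, the two statistics take the standard forms (where the λ̂i are the estimated eigenvalues of the Π matrix, ordered from largest to smallest, and T is the sample size):

```latex
\lambda_{trace}(r) = -T \sum_{i=r+1}^{g} \ln(1 - \hat{\lambda}_i)

\lambda_{max}(r, r+1) = -T \ln(1 - \hat{\lambda}_{r+1})
```

Both are compared with non-standard (Johansen) critical values; testing proceeds sequentially from r = 0 upward until the null is no longer rejected.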
The π Matrix
• As mentioned, r is the rank of π and determines the number of
cointegrating vectors.
• When r = 0 there are no cointegrating vectors
• If there are k variables in the system of equations, there can be a
maximum of k-1 cointegrating vectors.
The π Matrix
• Π is defined as the product of two matrices, α and β′, of dimension (g × r) and (r × g) respectively. The β gives the long-run coefficients of the cointegrating vectors; the α is known as the adjustment parameter and is similar to an error correction term. The relationship can be expressed as:

Π = αβ′
Test Statistics
• There are two test statistics produced by the Johansen ML procedure.
• These are the trace test and the maximal eigenvalue test.
• Both can be used to determine the number of cointegrating vectors
present, although they don’t always indicate the same number of
cointegrating vectors.
The Approach to Multivariate Cointegration and
VAR-VECMs
1) Test the variables for stationarity using the usual ADF tests.
2) If all the variables are I(1), or integrated of the same order, proceed with the cointegration analysis.
3) Use the AIC or SBIC to determine the number of lags in the VAR (the order of the VAR).
4) Test for cointegration with the VECM (lag order reduced by 1). Use the trace and maximal eigenvalue tests to determine the number of cointegrating vectors present.
5) If there is no cointegration, run a VAR in first differences; a VECM is not required.
6) If there is cointegration, assess the long-run β coefficients and the adjustment α coefficients.
7) Produce the VECM for all the endogenous variables in the model and use it to carry out Granger causality tests over the short and long run.
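The cointegration-testing step above can be sketched as follows: a minimal illustration of the Johansen eigenvalue step for a VAR(1) with no deterministic terms (the simulated data and all parameter choices are my own illustrative assumptions; a real application would also include lagged differences and deterministic components):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
w = np.cumsum(rng.standard_normal(T))       # shared I(1) stochastic trend
x = np.column_stack([w + rng.standard_normal(T),         # x1
                     0.5 * w + rng.standard_normal(T)])  # x2, cointegrated with x1

# Johansen eigenvalue step for  Δx_t = Π x_{t-1} + u_t
R0 = np.diff(x, axis=0)                     # Δx_t
R1 = x[:-1]                                 # x_{t-1}
n = len(R0)
S00 = R0.T @ R0 / n
S11 = R1.T @ R1 / n
S01 = R0.T @ R1 / n

# Eigenvalues of S11^{-1} S10 S00^{-1} S01 (squared canonical correlations)
M = np.linalg.solve(S11, S01.T) @ np.linalg.solve(S00, S01)
lam = np.sort(np.linalg.eigvals(M).real)[::-1]

# Trace statistics, to be compared with Johansen critical values
trace_r0 = -n * np.sum(np.log(1 - lam))     # H0: r = 0
trace_r1 = -n * np.log(1 - lam[1])          # H0: r <= 1
print(lam, trace_r0, trace_r1)
```

With one genuine cointegrating relation in the simulated system, the first trace statistic is large (reject r = 0) while the second is small (do not reject r ≤ 1).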
Long-Run Relationships
Granger Causality
• Granger causality is a circumstance in which one time-series
variable consistently and predictably changes before another
variable.
• Don’t be lured into thinking Granger causality proves
economic causality.
• If one variable precedes (“Granger causes”) another, we can’t
be sure the first variable “causes” the other to change.
• There are a number of different tests for Granger causality.
Granger Causality (continued)
• Expansion of the test Granger originally suggested: test whether A Granger-causes Y.
• First, estimate the original model:
Yt = β0 + β1Yt-1 + … + βpYt-p + α1At-1 + … + αpAt-p + εt
• Then test the null hypothesis that the coefficients of the lagged A’s are jointly equal to zero with an F-test.
• If you reject the null, then A is said to Granger-cause Y.
• You can run the test in the other direction.
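A minimal sketch of this F-test (simulated data; the one-lag specification and coefficient values are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
a = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):                       # DGP in which A Granger-causes Y
    y[t] = 0.3 * y[t - 1] + 0.8 * a[t - 1] + rng.standard_normal()

def rss(Y, X):
    """Residual sum of squares from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

ones = np.ones(T - 1)
Y = y[1:]
X_r = np.column_stack([ones, y[:-1]])           # restricted: lags of Y only
X_u = np.column_stack([ones, y[:-1], a[:-1]])   # unrestricted: plus lags of A
q = 1                                           # restrictions (one lag of A)
F = ((rss(Y, X_r) - rss(Y, X_u)) / q) / (rss(Y, X_u) / (len(Y) - X_u.shape[1]))
print(F)   # large F: reject the null, so A Granger-causes Y
```

Running the same comparison with the roles of a and y swapped tests causality in the other direction.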
Short-run Granger causality in VAR
• While long-run causality is tested through the significance of the EC term, the short-run causal relationship is tested through the coefficients of the first-differenced terms by applying the Granger causality test.
• Causality may occur both in the long run and the short run. Cointegration does not necessarily mean long-run causality is significant.
• If variables are not cointegrated, causality can occur in the short run.
• The outcome of causality tests can be sensitive to the lag order of the VAR/VECM as well as to the number of cointegrating relations binding the variables.
• Please take note that the existence of cointegration does not necessarily indicate the presence of short-run causality, and vice versa.
Granger Causality – direction graph
Determination of Ordering
[diagram not reproduced]
• Note: This approach (based on significance levels) was shared by Dr Zainudin during the workshop.
Introduction to Innovation Accounting
Impulse Response
Generalized Impulse Response
Cholesky Decomposition
Variance Decomposition
• The variance decomposition indicates the amount of information each variable contributes to the other variables in the autoregression. It determines how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables.
• It may be termed an out-of-sample causality test. It separates the variation in an endogenous variable in the VAR model into the component shocks to the VAR.
• The ordering of variables is necessary! It can be based on economic theory or on the directional impact found in Granger causality tests.
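The computation can be sketched as follows, for a hypothetical VAR(1) with a Cholesky ordering that places variable 1 first (all parameter values are illustrative assumptions, not the slides' example):

```python
import numpy as np

A1 = np.array([[0.5, 0.2],            # hypothetical VAR(1) coefficients
               [0.1, 0.4]])
Sigma_u = np.array([[1.0, 0.3],       # hypothetical error covariance
                    [0.3, 1.0]])
P = np.linalg.cholesky(Sigma_u)       # orthogonalisation: variable 1 ordered first

H = 10                                # forecast horizon
Phi = [np.eye(2)]                     # MA coefficients: Phi_i = A1^i for a VAR(1)
for _ in range(1, H):
    Phi.append(A1 @ Phi[-1])

# FEVD: share of variable i's H-step forecast error variance due to shock j
contrib = np.zeros((2, 2))
for ph in Phi:
    contrib += (ph @ P) ** 2          # squared orthogonalised impulse responses
fevd = contrib / contrib.sum(axis=1, keepdims=True)
print(fevd)                           # each row sums to 1
```

Reordering the variables changes P and therefore the decomposition, which is why the ordering must be justified by theory or by the Granger causality results.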
Variance Decomposition: based on Granger causality
Variables: lklci, lsti, lshci, ljnk, lkospi, lsp
[table not reproduced]
