
Working papers in transport, tourism, information technology and microdata analysis

Testing Seasonal Unit Roots in Data at Any Frequency

an HEGY approach

Xiangli Meng
Changli He

Nr: 2012:08

Editor: Hasan Fleyeh


ISSN: 1650-5581
© Authors
Testing Seasonal Unit Roots in Data at Any Frequency, an HEGY approach

Xiangli Meng Changli He

Dalarna University

Abstract
This paper generalizes the HEGY-type test to detect seasonal unit roots in data at any frequency, based on the seasonal unit root tests for univariate time series by Hylleberg, Engle, Granger and Yoo (1990). We first introduce the seasonal unit roots, and then derive the mechanism of the HEGY-type test for data at any frequency. Thereafter we provide the asymptotic distributions of our test statistics when different test regressions are employed. We find that the F-statistics for testing conjugate unit roots have the same asymptotic distributions. We then compute the finite-sample and asymptotic critical values for daily and hourly data by a Monte Carlo method. The power and size properties of our test for hourly data are investigated; we find that including lag augmentations in the auxiliary regression without lag elimination yields the smallest size distortion, and that tests with seasonal dummies included in the auxiliary regression have more power than tests without them. Finally, we apply our test to hourly wind power production data in Sweden and find no seasonal unit roots in the series.

1 Introduction
Data collected periodically usually exhibit seasonality: a series is seasonal if the spectrum of the process has peaks at certain frequencies. Modeling the seasonality is a common way of dealing with such data. Three approaches are most widely used for modeling seasonal time series: deterministic, stationary, and nonstationary seasonal processes. The differences lie in how the seasonal pattern reacts to shocks. In deterministic seasonal processes shocks have no effect on the seasonal pattern; in stationary seasonal processes they have a temporary effect which diminishes as time passes; but in nonstationary processes shocks have a non-diminishing effect, causing permanent changes to the seasonal pattern and an increasing variance of the series. Therefore the nonstationary seasonal process raises the most concern, and testing for seasonal unit roots has high priority in the modeling procedure: misspecifying the type of seasonality can cause severe bias in modeling and forecasting.

Many tests have been proposed for seasonal unit roots, such as the Dickey-Hasza-Fuller (DHF) test by Dickey, Hasza and Fuller (1984) and the OCSB test by Osborn, Chui, Smith and Birchenhall (1988). Among these seasonal unit root tests, the HEGY test proposed by Hylleberg, Engle, Granger and Yoo (1990) has the advantage of testing the seasonal unit root at each frequency separately, and is therefore widely applied. The HEGY test was first proposed for quarterly data; Franses (1990) and Beaulieu and Miron (1992) extended it to monthly data. However, the HEGY test is not available for data at other frequencies, such as hourly and daily data, so it is imperative to extend the test to data at other frequencies, which is the focus of this paper. We propose an HEGY-type test for seasonal unit roots in data at any frequency. Centering on this test, we provide the test procedure and the asymptotic distributions of the statistics, and analyze the power and size of the test for hourly data. Based on the power and size properties we compare the performance of different methods of choosing lag augmentations, and the performance of the test when deterministic components are or are not included. Finally, we apply our test to hourly wind power production data in Sweden and find no seasonal unit roots in the series.

The rest of the paper is organized as follows. Section 2 introduces seasonal unit roots. In Section 3 the test equations and the procedure for testing seasonal unit roots are presented; the asymptotic distributions of the test statistics are also given there. Section 4 provides the finite-sample and asymptotic critical values of the HEGY-type test for hourly and daily data. In Section 5 the finite-sample properties of the tests are investigated. In Section 6 we apply our test to hourly wind power production data in Sweden. Concluding remarks are given in Section 7.

2 Seasonal unit roots


Consider a basic autoregressive polynomial ϕ(B) of the form

ϕ(B) = 1 − B^S    (2.1)

where B is the lag operator and S is the number of time periods in a seasonal pattern which repeats regularly. For example, S=4 for quarterly data, where the seasonal pattern repeats itself every year, and S=24 for hourly data. S can also be odd, such as S=7 for daily data. The equation ϕ(z) = 0 has S roots on the unit circle:

z_k = e^{2πik/S} = cos(2kπ/S) + i sin(2kπ/S),    k = 0, 1, 2, ..., S − 1    (2.2)

where i is the imaginary unit. Each root z_k in (2.2) is related to a specific frequency 2kπ/S. When k = 0, the root z_k in (2.2) is called the non-seasonal unit root. The other roots z_k in (2.2) are called seasonal unit roots.

Except for the roots z_k in (2.2) at frequencies 0 and π, the roots at the other frequencies come in pairs at conjugate frequencies. We re-order the S frequencies of z_k by putting conjugate frequencies together.

a) When S is an even number, the S frequencies are ordered as:

θ_m = 0,                  m = 1
θ_m = mπ/S,               m = 2, 4, ..., S − 2
θ_m = 2π − (m − 1)π/S,    m = 3, 5, ..., S − 1
θ_m = π,                  m = S                    (2.3)

In (2.3), θ_m and θ_{m+1} are conjugate frequencies if m = 2, 4, ..., S − 2.


b) When S is an odd number, there is no unit root at frequency π, and the S frequencies are ordered as:

θ_m = 0,                  m = 1
θ_m = mπ/S,               m = 2, 4, ..., S − 1
θ_m = 2π − (m − 1)π/S,    m = 3, 5, ..., S        (2.4)

In (2.4), θ_m and θ_{m+1} are conjugate frequencies if m = 2, 4, ..., S − 1.


For both cases (2.3) and (2.4), the unit root corresponding to frequency θ_m is:

u_m = cos θ_m + i sin θ_m    (2.5)

The frequency θ_m also indicates the number of cycles of u_m within one seasonal pattern, given by θ_m S/(2π). For example, consider hourly data where S=24. Setting m=2, u_2 = cos(π/12) + i sin(π/12) = (√6+√2)/4 + ((√6−√2)/4)i. Its frequency is θ_2 = π/12, and it corresponds to θ_2 S/(2π) = 1 cycle in every 24 hours.

We adopt the following notation for simplification. For m = 1 we still write m = 1. For m = S when S is even, we write m = π. For the rest, when m is even, i.e., m = 2, 4, ..., S − 2 in (2.3) and m = 2, 4, ..., S − 1 in (2.4), we write m = m_even; when m is odd, i.e., m = 3, 5, ..., S − 1 in (2.3) and m = 3, 5, ..., S in (2.4), we write m = m_odd. This notation is used throughout the paper.
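To make the ordering concrete, the frequencies in (2.3)-(2.4) and the roots in (2.5) can be sketched in a few lines of Python. This is our own illustration, not part of the paper, and the function name `ordered_frequencies` is invented:

```python
import numpy as np

def ordered_frequencies(S):
    """Order the S unit-root frequencies as in (2.3) (S even) and (2.4) (S odd),
    so that conjugate frequencies theta_m and theta_{m+1} sit next to each other."""
    theta = np.empty(S)
    theta[0] = 0.0                               # m = 1: frequency 0
    top = S - 1 if S % 2 == 0 else S             # last m covered by the pair rules
    for m in range(2, top + 1):
        if m % 2 == 0:                           # m = m_even
            theta[m - 1] = m * np.pi / S
        else:                                    # m = m_odd: conjugate of theta_{m-1}
            theta[m - 1] = 2 * np.pi - (m - 1) * np.pi / S
    if S % 2 == 0:
        theta[S - 1] = np.pi                     # m = S: frequency pi (even S only)
    return theta

theta = ordered_frequencies(24)                  # hourly data
roots = np.cos(theta) + 1j * np.sin(theta)       # unit roots u_m of (2.5)
```

For S = 24 this reproduces the ordering used for hourly data in Section 4.1: θ_2 = π/12, θ_3 = 23π/12, and θ_24 = π, with every u_m a 24th root of unity.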

As discussed above, seasonal unit roots in a time series permanently change the seasonal pattern of the series and make its variance increase linearly. Therefore testing for seasonal unit roots precedes modeling the seasonality. However, the HEGY test is only available for data at certain frequencies, such as quarterly and monthly. In order to detect seasonal unit roots in data at any frequency, we extend the HEGY test to data at any frequency in the following section.

3 HEGY-type test
3.1 The HEGY-type testing equations
Let {y_t : t ∈ Z+} be a univariate time series satisfying a pth-order autoregressive model:

ϕ(B)y_t = ε_t    (3.1)

where ϕ(B) is the autoregressive polynomial of order p, S ≤ p ≤ ∞, and {ε_t} is a sequence of independent and identically distributed variables with mean 0 and variance σ² (0 < σ² < ∞), denoted ε_t ∼ iid(0, σ²). Note that the case p = ∞ is discussed in Section 5.

To carry out a seasonal unit root test at any frequency, we begin with the decomposition of the polynomial ϕ(B) in (3.1), following the decomposition technique in HEGY (1990); it is stated in Lemma 1.

Lemma 1: Consider the autoregressive polynomial ϕ(u) in model (3.1). Assume that ϕ(u) is expanded at the S unit roots u_m in (2.5), m = 1, ..., S. Then ϕ(u) in (3.1) can be decomposed as:

ϕ(u) = Σ_{m=1}^S τ_m ϕ_m(u) + ϕ*(u)(1 − u^S)    (3.2)

where ϕ_m(u) = (u/u_m) Π_{j=1, j≠m}^S (1 − u/u_j) for m = 1, ..., S, and ϕ*(u) in (3.2) is a remainder polynomial of order p − S. For details of the proof of Lemma 1, see Appendix I.

The following testing equation of the HEGY-type test is derived by applying the decomposition in Lemma 1.

Lemma 2: Consider a univariate seasonal time series {y_t : t ∈ Z+} with frequency S. Assume y_t satisfies the autoregressive model (3.1). Then model (3.1) can be written in the form:

ϕ*(B)(1 − B^S)y_t = Σ_{m=1}^S ρ_m x_{m,t} + ε_t    (3.3)

where ε_t ∼ iid(0, σ²) and x_{m,t} = ζ_m(B)y_t, with

ζ_m(B) = Σ_{j=1}^S cos(jθ_m)B^j,        m = 1, m_even, π
ζ_m(B) = Σ_{j=1}^S sin(jθ_{m−1})B^j,    m = m_odd        (3.4)

Proof: See Appendix I.

Model (3.3) can be used for testing the seasonal unit roots of {y_t} in (3.1). We discuss the details as follows:

(a) Misspecification of ϕ*(B): Assume that the order of ϕ(B) in (3.1) satisfies S ≤ p ≤ ∞. By Lemma 1 the order of the remainder polynomial ϕ*(B) is p − S ≥ 0. If ϕ*(B) is chosen to be constant while in fact it is not, the residuals of regression (3.3) are serially correlated. The practical choice of ϕ*(B) in (3.3) is discussed in subsection 4.2.

(b) Properties of the regressors: It follows from Lemma 2 that the S regressors x_{m,t} in (3.3) are orthogonal to each other, i.e., Σ_t x_{m1,t} x_{m2,t} = 0 when m1 ≠ m2. Each regressor x_{m,t} is related to the specific frequency θ_m. In practice we observe the univariate series y_t, and the regressors x_{m,t} are obtained by applying the filters ζ_m(B) to y_t. Noting (3.4), we can express each x_{m_even} through the x_{m_odd} corresponding to its conjugate frequency; the relationship is x_{m_even,t} = (cos θ_{m_even}/sin θ_{m_even}) x_{m_odd,t} − (1/sin θ_{m_even}) x_{m_odd,t−1}, where m_odd = m_even + 1. For example, in the hourly-data example above with m = 2, x_{2,t} = (2+√3)x_{3,t} − (√6+√2)x_{3,t−1}. Thus we can derive the x_{m_odd} first and then use them to derive the x_{m_even}.
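The construction of the regressors can be sketched as follows. This is a simplified illustration of applying the filters ζ_m(B) in (3.4) by direct summation; `hegy_regressors` is our own name:

```python
import numpy as np

def hegy_regressors(y, theta):
    """Build x_{m,t} = zeta_m(B) y_t per (3.4).
    y: observed series; theta: frequencies ordered as in (2.3)/(2.4).
    Returns X with X[t - S, m - 1] = x_{m,t} for t = S, ..., len(y) - 1."""
    S = len(theta)
    T = len(y)
    X = np.zeros((T - S, S))
    for m in range(1, S + 1):
        if m == 1 or m % 2 == 0:               # m = 1, m_even, pi: cosine filter
            coef = [np.cos(j * theta[m - 1]) for j in range(1, S + 1)]
        else:                                  # m = m_odd: sine filter at theta_{m-1}
            coef = [np.sin(j * theta[m - 2]) for j in range(1, S + 1)]
        for t in range(S, T):
            X[t - S, m - 1] = sum(c * y[t - j] for j, c in zip(range(1, S + 1), coef))
    return X
```

On any input series, the pairwise relation of remark (b) holds exactly for these filtered series, e.g. x_{2,t} = (cos θ_2 · x_{3,t} − x_{3,t−1}) / sin θ_2 for hourly data.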

(c) Testing seasonal unit roots: For the polynomial ϕ(z) in (3.1), ϕ(z) = 0 has a unit root at frequency θ_m if and only if the parameter of the related regressor x_{m,t} in (3.3) equals 0. Thus testing for the presence of a seasonal unit root at frequency θ_m is equivalent to testing whether the corresponding parameter ρ_m of x_{m,t} in (3.3) is zero.

For the testing procedures, we give the remarks below:

(d) Estimation: Assume that the residuals ε_t in (3.3) are iid(0, σ²) and that all roots of ϕ*(B) lie outside the unit circle. The parameters in the auxiliary regression (3.3) can then be estimated by ordinary least squares.

(e) Testing unit roots at frequencies 0 and π: The parameter ρ_1 corresponds to the unit root at frequency 0, and ρ_π corresponds to the unit root at frequency π. To verify the presence of these unit roots, we test whether the two parameters equal 0. The null hypothesis is

H_{0m}: ρ_m = 0

against the alternative H_{am}: ρ_m < 0, where m = 1 or π. The test statistics used here are t-statistics: t_m = ρ̂_m/σ̂_{ρm}, where ρ̂_m is the OLS estimator of ρ_m and σ̂_{ρm} is its sample standard error. If the null hypothesis is not rejected, the test indicates that a unit root exists at that frequency. When S is odd, we only need to test whether the first parameter ρ_1 equals 0.

(f) Testing complex unit roots: Because these unit roots come in conjugate pairs, the regressors and frequencies also appear in pairs. Unit roots exist at a pair of frequencies only if both parameters are zero, which leads to a joint test for each pair. The null hypothesis is:

H_{0m}: ρ_m = ρ_{m+1} = 0

against the alternative H_{am}: ρ_m ≠ 0 or ρ_{m+1} ≠ 0, where m = m_even. The F-statistics F_{m,m+1} = (t_m² + t_{m+1}²)/2 are used, where the t-statistics t_m and t_{m+1} are derived in the same way as t_1 and t_π. If the null hypothesis is not rejected, the test indicates that unit roots exist at the corresponding pair of frequencies. The derivation of the F-statistics is given in Lemma 3.

Lemma 3:

F_{m,m+1} = (t_m² + t_{m+1}²)/2

Proof: The F-statistic can be derived from

F_m = (1/(2σ̂²)) (R_m β̂ − r)′ [R_m (X′X)^{−1} R_m′]^{−1} (R_m β̂ − r)

where β̂ = [ρ̂_1, ρ̂_2, ..., ρ̂_S]′, r = [0, 0]′, and R_m = [u_m, u_{m+1}]′, with u_i an S-vector with 1 in the ith element and 0 elsewhere; X = [x_1, x_2, ..., x_S] with x_i = [x_{i,1}, x_{i,2}, ..., x_{i,T}]′, i = 1, 2, ..., S. Since the x_i are orthogonal to each other, [R_m (X′X)^{−1} R_m′]^{−1} = diag(Σ_{j=1}^T x²_{m,j}, Σ_{j=1}^T x²_{m+1,j}). Thus F_m = (1/(2σ̂²))(ρ̂_m² Σ_{j=1}^T x²_{m,j} + ρ̂_{m+1}² Σ_{j=1}^T x²_{m+1,j}) = (t_m² + t_{m+1}²)/2.


Another strategy for testing conjugate unit roots is to test H_{0(m+1)}: ρ_{m+1} = 0 against H_{a(m+1)}: ρ_{m+1} ≠ 0 by the t-statistic t_{m+1} in (e), where m = m_even. If the null hypothesis is not rejected, then examine H*_{0m}: ρ_m = 0 against H*_{am}: ρ_m < 0 by the t-statistic t_m in (e). If the null hypothesis is again not rejected, there are unit roots at the corresponding pair of frequencies. Compared with this strategy, the F-statistics involve a simpler procedure, so we focus on the F-statistic strategy in this paper.

An advantage of the testing procedure is that we only need to estimate (3.3) once to test all S unit roots. Based on the test results, we can choose an appropriate differencing filter to render the series stationary. The existence of unit roots at frequencies 0 and π indicates the filters (1 − B) and (1 + B) respectively, and the existence of unit roots at the conjugate frequencies θ_m and θ_{m+1} indicates the filter (1 − u_m B)(1 − u_{m+1} B) = 1 − 2cos(θ_m)B + B². The operator chosen to difference the data is the product of the filters corresponding to the existing unit roots.

(g) Testing seasonal integration of order S: The HEGY-type test can also test for the presence of all S unit roots as a whole, which leads to a joint test of all S parameters. The null hypothesis is:

H_0: ρ_1 = ρ_2 = ... = ρ_S = 0

against the alternative H_a: the series is seasonally stationary. An F-type test statistic is used here: F_all = (1/S) Σ_{m=1}^S t_m². The proof of this equation is similar to that of Lemma 3. The overall test is sensitive to the absence of unit roots at certain frequencies, because stationarity at one frequency or one pair of frequencies invalidates the null hypothesis even when there are unit roots at all the other frequencies. Therefore, one should consider both the F_all test in (g) and the joint F-tests in (f) to decide whether the operator 1 − B^S is needed to render the series stationary.
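Steps (d)-(g) can be sketched in a few lines of Python, assuming no deterministic terms and ϕ*(B) = 1 (no lag augmentation). `hegy_statistics` is our own name, and the resulting statistics would still have to be compared with the critical values of Section 4:

```python
import numpy as np

def hegy_statistics(y, X, S):
    """OLS estimation of (3.3) with phi*(B) = 1.  y: raw series; X: matrix whose
    column m-1 holds x_{m,t} for t = S..len(y)-1 (len(y)-S rows).
    Returns the t-statistics t_m, the pair F-statistics F_{m,m+1} of Lemma 3
    for m = m_even, and F_all of remark (g)."""
    T, k = X.shape
    dy = y[S:] - y[:-S]                          # (1 - B^S) y_t
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (T - k)             # residual variance estimate
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se                                # t_m = rho_hat_m / se(rho_hat_m)
    evens = range(2, S, 2) if S % 2 == 0 else range(2, S + 1, 2)
    F_pairs = {m: 0.5 * (t[m - 1] ** 2 + t[m] ** 2) for m in evens}  # Lemma 3
    F_all = np.mean(t ** 2)                      # F_all = (1/S) sum t_m^2
    return t, F_pairs, F_all
```

For hourly data (S = 24) this yields t_1, t_24 (= t_π), eleven pair F-statistics, and F_all from a single regression.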

(h) Deterministic components in the test equation: Our test also applies when there are deterministic components in the series. In this case test equation (3.3) is amended to contain deterministic components such as a constant, a time trend and seasonal dummies, which leads to test equation (3.5):

ϕ*(B)(1 − B^S)y_t = Σ_{m=1}^S ρ_m x_{m,t} + c_0 + c_1 t + Σ_{i=2}^S c_i D_{i,t} + ε_t    (3.5)

where c_0 is a constant, t is the time trend, and D_{i,t}, i = 2, 3, ..., S, are seasonal dummy variables which equal 1 if y_t is at the ith time unit in a seasonal period and 0 otherwise. When (3.5) is employed in our test, the test procedures (d)-(g) remain the same as when (3.3) is employed, but the distributions of the test statistics change; see the discussion in the following subsection.
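The deterministic part of (3.5) can be sketched as a design matrix; `deterministic_terms` is our own helper name, and the first season serves as the base category:

```python
import numpy as np

def deterministic_terms(T, S, trend=True, dummies=True):
    """Deterministic regressors of (3.5): constant c_0, time trend t,
    and seasonal dummies D_{i,t}, i = 2..S (season 1 is the base)."""
    cols = [np.ones(T)]                         # constant c_0
    if trend:
        cols.append(np.arange(1, T + 1.0))      # time trend t
    if dummies:
        season = np.arange(T) % S               # position within the seasonal period
        for i in range(2, S + 1):
            cols.append((season == i - 1).astype(float))   # D_{i,t}
    return np.column_stack(cols)

D = deterministic_terms(T=480, S=24)            # intercept + trend + 23 dummies
```

These columns are simply appended to the regressor matrix of (3.3) before running OLS.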

3.2 Asymptotic distributions of the HEGY-type test statistics

This subsection gives the asymptotic distributions of the test statistics of subsection 3.1, derived by following Beaulieu and Miron (1992), Chan and Wei (1988) and Hamilton (1994). First, Theorem 1 gives the asymptotic distributions of the t-statistics when no deterministic component exists in the test regression (3.3); the asymptotic distributions of the F-statistics can then be derived from those of the t-statistics. Second, Theorem 2 gives the asymptotic distributions of the test statistics when different deterministic components are included in testing equation (3.5).

In order to derive the asymptotic distributions of our test statistics, the following assumptions are needed:

Assumption 1: In (3.3) and (3.5), {ε_t} is a martingale difference sequence with respect to an increasing sequence of σ-fields {z_t} satisfying the two conditions below:

E{ε_t² | z_{t−1}} = σ²  a.s.

sup_t E{|ε_t|^{2+δ} | z_{t−1}} < ∞  a.s., for some δ > 0

where σ² is finite.

Assumption 2: For the autoregressive polynomial ϕ(z) in (3.1), it is assumed that ϕ(z) = 0 has no repeated unit roots at the frequencies θ_m, m = 1, 2, ..., S.

This assumption ensures that ϕ(z_m) = 0 if and only if the parameter of the related regressor x_{m,t} equals 0; it thus ensures that testing for the presence of seasonal unit roots is equivalent to testing whether the corresponding parameters in (3.3) are zero.

Assumption 3: All the roots of ϕ*(z) = 0 lie outside the unit circle.

In the special case where ϕ(B) = (1 − B^S)(1 − β_1 B − β_2 B² − ··· − β_p B^p), this assumption requires all roots of 1 − β_1 B − β_2 B² − ··· − β_p B^p = 0 to lie outside the unit circle. The augmentations do not affect the asymptotic distributions, according to Beaulieu and Miron (1992), as long as they are correctly specified.

With the assumptions above, we derive Theorem 1:

Theorem 1: Consider the regression model (3.3) with Assumptions 1-3 fulfilled. Under the hypothesis H_{0m}: ρ_m = 0, m = 1, π, the t-statistics t_m have the asymptotic distributions:

t_m →L ∫₀¹ W_m(r)dW_m(r) / [∫₀¹ W_m(r)² dr]^{1/2}

Under the hypothesis H_{0m}: ρ_m = ρ_{m+1} = 0, m = m_even, the t-statistics have the asymptotic distributions:

t_m →L [∫₀¹ W_m(r)dW_m(r) + ∫₀¹ W_{m+1}(r)dW_{m+1}(r)] / [∫₀¹ W_m(r)² dr + ∫₀¹ W_{m+1}(r)² dr]^{1/2},    m = m_even

t_m →L [∫₀¹ W_m(r)dW_{m−1}(r) − ∫₀¹ W_{m−1}(r)dW_m(r)] / [∫₀¹ W_{m−1}(r)² dr + ∫₀¹ W_m(r)² dr]^{1/2},    m = m_odd

where →L stands for convergence in distribution and W_m, m = 1, m_even, m_odd, π, are mutually independent standard Brownian motions. For details of the proof see Appendix II.

We can see that t_1 and t_π have the same asymptotic distribution, while the asymptotic distributions of t_{m_even} and t_{m_odd} differ from each other. The F-statistics for the joint tests, F_{m,m+1} = (t_m² + t_{m+1}²)/2, m = m_even, all have the same asymptotic distribution. The asymptotic distribution of the F-statistic for integration of order S is obtained in the same way as that of F_{m,m+1}, from F_all = (1/S) Σ_{m=1}^S t_m², and varies across different S.
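For intuition, the limit law of t_1 and t_π (the familiar Dickey-Fuller functional) can be approximated by discretizing a standard Brownian motion. A rough sketch, with step and replication counts chosen arbitrarily; the function name is ours:

```python
import numpy as np

def simulate_t1_limit(n_rep=2000, n_step=1000, seed=0):
    """Draw from the approximate law of int_0^1 W dW / (int_0^1 W^2 dr)^{1/2},
    using the Ito identity int_0^1 W dW = (W(1)^2 - 1)/2."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_rep)
    for i in range(n_rep):
        dW = rng.standard_normal(n_step) / np.sqrt(n_step)
        W = np.cumsum(dW)                    # W(j/n_step), j = 1..n_step
        num = 0.5 * (W[-1] ** 2 - 1.0)       # int_0^1 W dW
        den = np.sqrt(np.mean(W ** 2))       # (int_0^1 W(r)^2 dr)^{1/2}, Riemann sum
        draws[i] = num / den
    return draws

draws = simulate_t1_limit()
# the 5% quantile of the draws should be near the T = infinity entry of Table 1
```

This is the same device used for the asymptotic critical values in Section 4, only with far fewer steps and replications.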

Next we consider the case when there are deterministic components in the test equation, i.e., (3.5) is employed for the test. Theorem 2 gives the asymptotic distributions of the t-statistics.

Theorem 2: Consider the regression model (3.5) with Assumptions 1-3 fulfilled. Under the hypothesis H_{0m}: ρ_m = 0, m = 1, π, the t-statistics t_m have the asymptotic distributions:

t_1 →L [∫₀¹ W_1(r)dW_1(r) − W_1(1)∫₀¹ W_1(r)dr + 1_t W_N*] / [∫₀¹ W_1(r)² dr − (∫₀¹ W_1(r)dr)² + 1_t W_N**]^{1/2}

t_π →L [∫₀¹ W_π(r)dW_π(r) − 1_µ W_π(1)∫₀¹ W_π(r)dr] / [∫₀¹ W_π(r)² dr − 1_µ (∫₀¹ W_π(r)dr)²]^{1/2}

Under the hypothesis H_{0m}: ρ_m = ρ_{m+1} = 0, m = m_even, the t-statistics have the asymptotic distributions:

t_m →L [∫₀¹ W_m(r)dW_m(r) + ∫₀¹ W_{m+1}(r)dW_{m+1}(r) + 1_µ W_cos*] / [∫₀¹ W_m(r)² dr + ∫₀¹ W_{m+1}(r)² dr + 1_µ W_cos**]^{1/2},    m = m_even

t_m →L [∫₀¹ W_m(r)dW_{m−1}(r) − ∫₀¹ W_{m−1}(r)dW_m(r) + 1_µ W_sin*] / [∫₀¹ W_{m−1}(r)² dr + ∫₀¹ W_m(r)² dr + 1_µ W_sin**]^{1/2},    m = m_odd

where

W_N* = 3W_1(1)∫₀¹ W_1(r)dr − 6W_1(1)∫₀¹ rW_1(r)dr − 6[∫₀¹ W_1(r)dr]² + 12 ∫₀¹ rW_1(r)dr ∫₀¹ W_1(r)dr

W_N** = −3(∫₀¹ W_1(r)dr)² + 12 ∫₀¹ W_1(r)dr ∫₀¹ rW_1(r)dr − 12(∫₀¹ rW_1(r)dr)²

W_cos* = −W_m(1)∫₀¹ W_m(r)dr − W_{m+1}(1)∫₀¹ W_{m+1}(r)dr,
W_cos** = −(∫₀¹ W_m(r)dr)² − (∫₀¹ W_{m+1}(r)dr)²

W_sin* = −W_m(1)∫₀¹ W_{m−1}(r)dr − W_{m−1}(1)∫₀¹ W_m(r)dr,
W_sin** = −(∫₀¹ W_{m−1}(r)dr)² − (∫₀¹ W_m(r)dr)²

and 1_t and 1_µ are indicator functions: 1_t = 1 if a trend is included in (3.5) and 0 otherwise, and 1_µ = 1 if seasonal dummies are included in (3.5) and 0 otherwise. For the proof of Theorem 2, see Appendix III.

We can see from Theorem 2 that the deterministic components included in the testing equation affect the asymptotic distributions of the t-statistics. The included trend only affects the distribution of t_1, while the included dummies affect the distributions of all the t-statistics except t_1. Thus the inclusion of seasonal dummies affects the asymptotic distributions of F_{m,m+1}, m = m_even. Similarly to the conclusion from Theorem 1, the F-statistics F_{m,m+1} all have the same asymptotic distribution, because the t-statistics share the same distributions within m = m_even and within m = m_odd. F_all has a different distribution for each S.

4 Critical values for hourly and daily data.


In this part we focus on the critical values for the HEGY-type test. We first give the asymptotic and finite-sample critical values of the test statistics of Section 3 for hourly data, and then those for daily data.

4.1 Hourly data.


For hourly data there are 24 frequencies, kπ/12, k = 0, 1, ..., 23. We have to test unit roots at frequencies 0 and π and at 11 pairs of conjugate frequencies. As stated in Section 2, the frequencies are ordered as follows: m = 1 for frequency 0; m = 2, 4, ..., 22 (m = m_even) for frequencies π/12, 2π/12, ..., 11π/12; m = 3, 5, ..., 23 (m = m_odd) for frequencies 23π/12, 22π/12, ..., 13π/12; and m = 24 for frequency π. The finite-sample critical values for our test are derived by simulating data from the model y_t = y_{t−24} + ε_t, where ε_t ∼ N(0,1), and then estimating the test regressions (3.3) and (3.5) to obtain the values of the statistics. The procedure is repeated 10,000 times, yielding the finite-sample critical values. The critical values of F_{m,m+1}, m = m_even, are derived by pooling the 11 F-statistics, which have the same asymptotic distribution, and taking quantiles. The critical values of the asymptotic distributions are calculated by setting T = 5000 to simulate a Brownian motion W(r) on [0,1], with the number of replications again set to 10,000. The finite-sample and asymptotic critical values when (3.3) is employed for the test are reported in Tables 1-4. Because different deterministic components in (3.5) lead to different distributions of the statistics, we provide critical values for 4 cases with different deterministic components included in (3.5): intercept only; intercept and trend; intercept and seasonal dummies; and intercept, trend and seasonal dummies. For the critical values of these 4 cases see Appendix V.
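The finite-sample simulation just described can be sketched as follows. This is a compressed illustration with a small replication count and only the t_1 statistic (the paper uses 10,000 replications and all statistics); the function name is ours:

```python
import numpy as np

def simulate_t1_critical_values(S=24, T=120, n_rep=300, seed=0):
    """Monte Carlo finite-sample quantiles of t_1 under y_t = y_{t-S} + eps_t,
    estimating regression (3.3) with phi*(B) = 1."""
    rng = np.random.default_rng(seed)
    # frequencies ordered as in (2.3), and filter coefficients of (3.4)
    theta = np.empty(S)
    theta[0], theta[S - 1] = 0.0, np.pi
    for m in range(2, S):
        theta[m - 1] = m * np.pi / S if m % 2 == 0 else 2 * np.pi - (m - 1) * np.pi / S
    J = np.arange(1, S + 1)
    C = np.array([np.cos(J * theta[m - 1]) if (m == 1 or m % 2 == 0)
                  else np.sin(J * theta[m - 2]) for m in range(1, S + 1)])
    t1 = np.empty(n_rep)
    for i in range(n_rep):
        eps = rng.standard_normal(T + S)
        y = np.zeros(T + S)
        for t in range(S, T + S):
            y[t] = y[t - S] + eps[t]                       # null DGP
        lags = np.stack([y[S - j:T + S - j] for j in J], axis=1)  # y_{t-1}..y_{t-S}
        X = lags @ C.T                                     # regressors x_{m,t}
        dy = y[S:] - y[:-S]
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        s2 = resid @ resid / (T - S)
        se1 = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
        t1[i] = beta[0] / se1
    return np.quantile(t1, [0.01, 0.05, 0.10])
```

With enough replications the 5% quantile for S = 24, T = 120 should approach the corresponding entry of Table 1.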

Table 1: Critical values for t1 of hourly data. No deterministic part included


Sample size T Probability that t1 is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.29 -1.96 -1.68 -1.38 0.89 1.28 1.60 2.00
T=240 -2.44 -2.08 -1.81 -1.49 0.90 1.29 1.60 1.95
T=360 -2.43 -2.10 -1.82 -1.50 0.89 1.28 1.59 1.95
T=480 -2.44 -2.12 -1.82 -1.52 0.89 1.27 1.59 1.96
T= ∞ -2.51 -2.20 -1.95 -1.64 0.86 1.25 1.62 1.98

Table 2: Critical values for the t24 of hourly data. No deterministic part included
Sample size T Probability that t24 is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.35 -1.99 -1.72 -1.42 0.90 1.25 1.58 2.02
T=240 -2.49 -2.13 -1.85 -1.51 0.90 1.26 1.61 2.00
T=360 -2.55 -2.18 -1.89 -1.53 0.89 1.27 1.61 2.00
T=480 -2.61 -2.23 -1.92 -1.57 0.88 1.26 1.62 2.00
T= ∞ -2.52 -2.22 -1.93 -1.60 0.88 1.27 1.64 2.00

Table 3: Critical values for the Fm,m+1 , m = meven of hourly data. No deterministic part included
Sample size T Probability that Fm,m+1 , m = meven is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.01 0.02 0.05 0.10 2.06 2.67 3.29 4.11
T=240 0.01 0.03 0.05 0.11 2.23 2.88 3.53 4.40
T=360 0.01 0.03 0.05 0.11 2.25 2.90 3.59 4.51
T=480 0.01 0.03 0.05 0.11 2.34 3.01 3.69 4.57
T= ∞ 0.01 0.03 0.05 0.12 2.40 3.06 3.74 4.83

Table 4: Critical values for Fall of hourly data. No deterministic part included
Sample size T    Probability that Fall = (1/24) Σ_{m=1}^{24} t_m² is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.39 0.45 0.51 0.58 1.28 1.41 1.55 1.75
T=240 0.44 0.50 0.56 0.63 1.36 1.51 1.64 1.79
T=360 0.45 0.51 0.57 0.66 1.40 1.54 1.65 1.85
T=480 0.47 0.53 0.59 0.67 1.43 1.57 1.68 1.87
T= ∞ 0.49 0.56 0.63 0.71 1.46 1.59 1.74 1.89

4.2 Daily data.


For daily data there are 7 frequencies, 2kπ/7, k = 0, 1, ..., 6. We have to test a unit root at frequency 0 and at 3 pairs of conjugate frequencies. The frequencies are arranged as introduced in Section 2, i.e., m = 1 for frequency 0; m = 2, 4, 6 (m = m_even) for frequencies 2π/7, 4π/7, 6π/7; and m = 3, 5, 7 (m = m_odd) for frequencies 12π/7, 10π/7, 8π/7. The data generating process is the same as for hourly data, except that the regressors differ according to (3.4) and the model is y_t = y_{t−7} + ε_t, where ε_t ∼ N(0,1). The asymptotic distributions of the test statistics are derived in the same way as for hourly data. The critical values for the case without deterministic components are given in Tables 5-7; the critical values for the other cases are given in Appendix V.

Table 5: Critical values for the t1 for daily data. No deterministic part included
Sample size T Probability that t1 is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 -2.55 -2.18 -1.88 -1.56 0.91 1.28 1.59 1.99
T=280 -2.47 -2.17 -1.91 -1.58 0.88 1.26 1.60 1.99
T=420 -2.55 -2.24 -1.95 -1.61 0.89 1.26 1.61 2.00
T=560 -2.54 -2.23 -1.94 -1.63 0.89 1.25 1.60 1.97
T= ∞ -2.51 -2.21 -1.94 -1.64 0.86 1.25 1.62 1.97

Table 6: Critical values for the Fm,m+1 , m = meven for daily data. No deterministic part included
Sample size T Probability that Fm,m+1 , m = meven is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.01 0.03 0.06 0.12 2.31 2.99 3.68 4.50
T=280 0.01 0.03 0.06 0.12 2.38 3.07 3.71 4.66
T=420 0.01 0.03 0.06 0.11 2.38 3.08 3.72 4.68
T=560 0.01 0.03 0.06 0.12 2.40 3.14 3.81 4.70
T= ∞ 0.01 0.03 0.06 0.12 2.41 3.14 3.80 4.75

Table 7: Critical values for Fall for daily data. No deterministic part included
Sample size T    Probability that Fall = (1/7) Σ_{m=1}^{7} t_m² is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.20 0.27 0.34 0.43 1.74 2.02 2.31 2.63
T=280 0.20 0.27 0.34 0.45 1.77 2.06 2.35 2.68
T=420 0.19 0.26 0.33 0.45 1.80 2.08 2.36 2.73
T=560 0.21 0.28 0.35 0.45 1.81 2.10 2.39 2.70
T= ∞ 0.22 0.29 0.35 0.46 1.82 2.12 2.40 2.72

5 Size and power studies for hourly data

In this section we give the size and power analysis of our test for hourly data. Many papers have studied the size and power of the HEGY test; e.g., Ghysels, Lee and Noh (1994) studied the size and power of the HEGY test for quarterly data, and Rodrigues and Osborn (1999) did so for monthly data. In their studies, the HEGY test suffers from size distortions when there is a strong seasonal moving average innovation in the series, and lag augmentations can reduce the size distortions. Based on the following size studies, the HEGY-type test for hourly data also suffers from size distortion due to a strong negative moving average component in the series. To study the size of our test, we focus on the size distortion caused by moving average components, and find that among the commonly used methods of choosing ϕ*(B), including lags without lag elimination performs best in reducing size distortion.

Another important practical issue is whether to include deterministic components, especially seasonal dummies, in applications. For the power of our test, we investigate the effect of including seasonal dummies in the test regression on the power of our test, and find that unless there are evident signs that there is no deterministic seasonality in the series, it is prudent to include dummies in the testing equation.

Size of the test. The reason for these distortions is easy to see. In the test regressions (3.3) and (3.5), when the order of ϕ(B) is greater than S, the augmentation ϕ*(B) is needed to accommodate the serial correlation in the residuals. When there is a moving average component in the series, the order of ϕ*(B) would have to be infinite to render the residuals white noise; therefore a finite-order ϕ*(B) in practice changes the distributions of the test statistics. Appropriate augmentations attenuate these biases. There are two commonly used methods for choosing the augmentations:

M1: Include a certain number of lags in the auxiliary regression first, and then exclude the lags whose parameters are not significantly different from zero.

M2: Include lags without eliminating those whose parameters are not significantly nonzero; the number of lags included is decided by the AIC criterion.

M1 is recommended by HEGY (1990) and Beaulieu and Miron (1992). Ghysels, Lee and Noh (1994) find that M2 performs better than M1 for monthly data. We study the size of our test with the two kinds of augmentations above, and show that M2 is more appropriate for attenuating the size distortions in hourly data.

The data generating process (DGP) is

(1 − B^24)y_t = ε_t + θ_i ε_{t−i}

where ε_t ∼ iid N(0,1), i = 1, 24, and we choose θ_1 = ±0.5, ±0.9 and θ_24 = ±0.5, ±0.9. The sample size is 480, and the size estimates are based on the 5% critical values of the previous subsection. We employ test equation (3.5) with only a constant included, i.e., c_i = 0, i = 1, 2, ..., S. Five different kinds of augmentations are used: no augmentation; 24 lag augmentations; 48 lag augmentations; 48 lag augmentations with lag elimination; and a prewhitening procedure. In the lag elimination process, we keep the lags with parameters significantly different from zero at the 0.1 level. These procedures are repeated 10,000 times each, yielding the size estimates in Tables 8-11.
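The DGP can be sketched as below (our helper name; only the θ_24 variant is shown, the θ_1 variant being analogous). When θ_24 is close to −1 the MA factor 1 + θ_24 B^24 nearly cancels 1 − B^24, which is the source of the large size distortions:

```python
import numpy as np

def simulate_sma_dgp(T, theta24, seed=0, burn=480):
    """Simulate (1 - B^24) y_t = eps_t + theta24 * eps_{t-24}, eps_t ~ N(0,1),
    discarding an initial burn-in stretch."""
    rng = np.random.default_rng(seed)
    n = T + burn
    eps = rng.standard_normal(n + 24)
    u = eps[24:] + theta24 * eps[:-24]     # seasonal MA(24) innovation
    y = np.zeros(n)
    for t in range(24, n):
        y[t] = y[t - 24] + u[t]            # seasonal random walk driven by u
    return y[burn:]

y = simulate_sma_dgp(480, theta24=-0.9, seed=2)
```

Feeding such series to the test regressions with the five augmentation schemes reproduces the size experiment of Tables 8-11.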

Based on the size estimates in Tables 8-11: when there are no lag augmentations (Table 8), the size distortions are quite large. When 24 or 48 lag augmentations are included in the auxiliary regression (Tables 9 and 10), the size estimates are all around 0.05, except when there is a strong negative seasonal MA component, i.e., (1 − B^24)y_t = ε_t − 0.9ε_{t−24}. Even with a strong negative seasonal MA component in the series, M2 performs better, and the distortions are reduced sharply when 48 lags are included. The lag elimination process (Table 11) shows size distortions for all statistics in most DGPs, except those with θ_24 = ±0.5. The size estimates suggest that M2 better accommodates serial correlation in the residuals, and that when there is a strong negative seasonal MA component in the series, more lag augmentations are needed.

Table 8: Empirical size of HEGY-type test statistics at the 5% level with SARIMA models. No augmentation.
Frequencies
0    π    π/12, 23π/12    π/6, 11π/6    π/4, 7π/4    π/3, 5π/3    5π/12, 19π/12
θ1 = 0.5 0.02 0.37 0.17 0.23 0.12 0.12 0.21
θ1 = −0.5 0.54 0.01 0.29 0.22 0.18 0.14 0.12
θ24 = 0.5 0.03 0.01 0.14 0.14 0.14 0.14 0.14
θ24 = −0.5 0.31 0.35 0.36 0.35 0.35 0.37 0.35
θ1 = 0.9 0.02 0.76 0.44 0.56 0.36 0.16 0.54
θ1 = −0.9 1.00 0.27 0.31 0.17 0.12 0.09 0.08
θ24 = 0.9 0.03 0.01 0.17 0.17 0.17 0.18 0.17
θ24 = −0.9 0.99 1.00 1.00 1.00 1.00 1.00 1.00
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
θ1 = 0.5 0.11 0.22 0.14 0.17 0.30 0.29 0.59
θ1 = −0.5 0.12 0.11 0.12 0.11 0.12 0.12 0.69
θ24 = 0.5 0.14 0.14 0.14 0.14 0.14 0.14 0.36
θ24 = −0.5 0.36 0.36 0.34 0.37 0.36 0.35 0.98
θ1 = 0.9 0.34 0.56 0.19 0.31 0.73 0.73 0.97
θ1 = −0.9 0.08 0.08 0.07 0.08 0.08 0.08 1.00
θ24 = 0.9 0.17 0.17 0.17 0.16 0.17 0.17 0.48
θ24 = −0.9 1.00 1.00 1.00 1.00 1.00 1.00 1.00

Table 9: Empirical size of HEGY-type test statistics at the 5% level with SARIMA models. 24 lag augmentations.
Frequencies
0 | π | π/12, 23π/12 | π/6, 11π/6 | π/4, 7π/4 | π/3, 5π/3 | 5π/12, 19π/12
θ1 = 0.5 0.04 0.04 0.05 0.05 0.05 0.04 0.05
θ1 = −0.5 0.04 0.04 0.05 0.05 0.05 0.04 0.04
θ24 = 0.5 0.08 0.07 0.05 0.06 0.05 0.05 0.05
θ24 = −0.5 0.08 0.11 0.08 0.08 0.09 0.08 0.08
θ1 = 0.9 0.03 0.02 0.05 0.04 0.05 0.05 0.05
θ1 = −0.9 0.07 0.05 0.05 0.05 0.05 0.05 0.05
θ24 = 0.9 0.11 0.10 0.07 0.07 0.07 0.06 0.07
θ24 = −0.9 0.49 0.77 0.93 0.93 0.91 0.94 0.91
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
θ1 = 0.5 0.05 0.05 0.05 0.05 0.04 0.05 0.05
θ1 = −0.5 0.05 0.04 0.04 0.05 0.05 0.04 0.04
θ24 = 0.5 0.05 0.06 0.05 0.06 0.05 0.05 0.08
θ24 = −0.5 0.08 0.09 0.08 0.09 0.08 0.08 0.18
θ1 = 0.9 0.04 0.04 0.04 0.04 0.05 0.05 0.04
θ1 = −0.9 0.05 0.04 0.05 0.04 0.05 0.04 0.04
θ24 = 0.9 0.07 0.07 0.06 0.07 0.07 0.07 0.14
θ24 = −0.9 0.93 0.93 0.90 0.93 0.93 0.91 1.00

Table 10: Empirical size of HEGY-type test statistics at the 5% level with SARIMA models. 48 lag augmentations.
Frequencies
0 | π | π/12, 23π/12 | π/6, 11π/6 | π/4, 7π/4 | π/3, 5π/3 | 5π/12, 19π/12
θ1 = 0.5 0.03 0.04 0.05 0.05 0.05 0.05 0.04
θ1 = −0.5 0.04 0.04 0.05 0.04 0.04 0.04 0.05
θ24 = 0.5 0.04 0.03 0.05 0.06 0.05 0.05 0.05
θ24 = −0.5 0.04 0.06 0.04 0.04 0.04 0.04 0.04
θ1 = 0.9 0.04 0.03 0.04 0.04 0.05 0.04 0.04
θ1 = −0.9 0.04 0.04 0.05 0.05 0.04 0.04 0.04
θ24 = 0.9 0.03 0.02 0.08 0.07 0.07 0.07 0.07
θ24 = −0.9 0.20 0.40 0.62 0.59 0.57 0.62 0.56
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
θ1 = 0.5 0.04 0.05 0.04 0.04 0.04 0.04 0.04
θ1 = −0.5 0.04 0.04 0.04 0.04 0.04 0.04 0.03
θ24 = 0.5 0.05 0.05 0.05 0.05 0.05 0.05 0.05
θ24 = −0.5 0.04 0.04 0.04 0.04 0.04 0.04 0.03
θ1 = 0.9 0.04 0.04 0.05 0.05 0.05 0.05 0.04
θ1 = −0.9 0.04 0.04 0.04 0.04 0.04 0.05 0.03
θ24 = 0.9 0.08 0.07 0.06 0.07 0.07 0.07 0.09
θ24 = −0.9 0.60 0.62 0.54 0.62 0.59 0.58 1.00

Table 11: Empirical size of HEGY-type test statistics at the 5% level with SARIMA models. 48 lag augmentations with lag elimination.
Frequencies
0 | π | π/12, 23π/12 | π/6, 11π/6 | π/4, 7π/4 | π/3, 5π/3 | 5π/12, 19π/12
θ1 = 0.5 0.02 0.27 0.16 0.21 0.12 0.12 0.17
θ1 = −0.5 0.52 0.02 0.18 0.14 0.12 0.10 0.09
θ24 = 0.5 0.04 0.03 0.05 0.05 0.05 0.05 0.05
θ24 = −0.5 0.04 0.06 0.04 0.04 0.04 0.04 0.04
θ1 = 0.9 0.02 0.70 0.43 0.53 0.36 0.15 0.50
θ1 = −0.9 1.00 0.02 0.23 0.12 0.08 0.06 0.05
θ24 = 0.9 0.03 0.02 0.07 0.07 0.07 0.07 0.07
θ24 = −0.9 0.22 0.43 0.64 0.60 0.57 0.62 0.61
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
θ1 = 0.5 0.09 0.16 0.11 0.12 0.20 0.18 0.40
θ1 = −0.5 0.10 0.10 0.10 0.11 0.12 0.12 0.55
θ24 = 0.5 0.06 0.06 0.06 0.05 0.06 0.06 0.05
θ24 = −0.5 0.05 0.05 0.05 0.05 0.04 0.04 0.04
θ1 = 0.9 0.32 0.50 0.15 0.26 0.65 0.65 0.94
θ1 = −0.9 0.06 0.06 0.07 0.07 0.07 0.08 1.00
θ24 = 0.9 0.13 0.07 0.08 0.07 0.07 0.07 0.10
θ24 = −0.9 0.65 0.66 0.58 0.66 0.63 0.62 1.00

Power of test. For the power analysis, we focus on the influence of including seasonal dummies in test equation (3.5) on the power of our test statistics. The data generating process is a seasonal ARIMA(1,0,0)_24 model with different seasonal intercepts:

(1 − 0.9B^24)y_t = Σ_{i=1}^{24} α_i D_{it} + ε_t

where α_i, i = 1, 2, ..., 24 are (0.9, −0.6, 0.1, 0.5, −0.3, 0, 0.8, −0.3, −0.5, 0.6, 0.9, −0.7, 0, 0.5, 0, 0.1, 0.2, −0.4, 0.7, −0.2, −0.8, 0, 0.1, 0). First we include only a constant in (3.5), i.e., c_i = 0, i = 1, 2, ..., S, denoted as Case A. Next we include a constant and seasonal dummies in (3.5), i.e., c_1 = 0, denoted as Case B. The augmentations used are 0 lags and 24 lags without lag elimination. The sample size is 480, the nominal size is 5%, and each process is replicated 10,000 times, yielding the results in Table 12.
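The power-study DGP can be generated as follows; a minimal sketch assuming the dummies D_{it} align with t mod 24 (the exact phase does not matter for the simulation, and the function name and burn-in are our own choices).

```python
import numpy as np

# the 24 seasonal intercepts alpha_i from the power-study DGP
alpha = [0.9, -0.6, 0.1, 0.5, -0.3, 0.0, 0.8, -0.3, -0.5, 0.6, 0.9, -0.7,
         0.0, 0.5, 0.0, 0.1, 0.2, -0.4, 0.7, -0.2, -0.8, 0.0, 0.1, 0.0]

def simulate_sar_dgp(T=480, S=24, phi=0.9, seed=0):
    """Simulate (1 - phi*B^S) y_t = alpha_i + e_t, a stationary seasonal
    AR(1) with deterministic seasonal intercepts."""
    rng = np.random.default_rng(seed)
    burn = 10 * S
    y = np.zeros(T + burn)
    for t in range(len(y)):
        y_lag = y[t - S] if t >= S else 0.0    # seasonal AR(1) term
        y[t] = phi * y_lag + alpha[t % S] + rng.standard_normal()
    return y[burn:]

y = simulate_sar_dgp()
```

Since |phi| < 1, the series has deterministic seasonality but no seasonal unit root, so rejection frequencies of the seasonal tests on such series estimate power.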

From the estimates in Table 12, the F_all statistic has poor power in all the data generating processes, and when there is a seasonal intercept in the DGP but not in the test equation, the F_all statistic has very low power. When Case B is employed for testing seasonal unit roots, F_all performs better than in Case A. The F-statistics for conjugate frequencies have power close to 1, with similar power estimates across frequencies. When Case A is used, so that there are no seasonal dummies in the regressions, the power of the F_{m,m+1} statistics is smaller than when Case B is employed. The power of the t-statistics for testing unit roots at frequencies 0 and π is about the same in the two cases. Therefore the inclusion of seasonal dummies is a prudent decision in practice, especially for F_all. A similar conclusion is derived by Rodrigues and Osborn (1999). The results also suggest that one should combine the results of the F_{m,m+1} statistics with the F_all statistic to decide whether the seasonal difference (1 − B^S) is needed to render the series stationary.

Table 12: Empirical power of HEGY-type test statistics at the 5% level for SARIMA(1,0,0) with seasonal dummies. DGP with seasonal intercepts α_i.
Frequencies
0 | π | π/12, 23π/12 | π/6, 11π/6 | π/4, 7π/4 | π/3, 5π/3 | 5π/12, 19π/12
Case A 0 lag 0.94 0.88 0.85 0.85 0.86 0.85 0.86
Case A 24 lag 0.95 0.90 0.88 0.89 0.88 0.89 0.89
Case B 0 lag 0.92 0.92 0.95 0.96 0.96 0.95 0.96
Case B 24 lag 0.94 0.94 0.97 0.97 0.97 0.97 0.97
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
Case A 0 lag 0.86 0.87 0.89 0.86 0.85 0.86 0.37
Case A 24 lag 0.89 0.90 0.91 0.89 0.88 0.88 0.55
Case B 0 lag 0.96 0.96 0.96 0.95 0.96 0.96 0.65
Case B 24 lag 0.97 0.97 0.97 0.97 0.97 0.97 0.77

6 Testing seasonal unit roots in hourly wind power production data in Sweden
Our test is applied to hourly wind power production data in Sweden. Wind conditions in Sweden differ between winter and summer, so we separate the year into a warm season and a cold season. The warm season covers April to September and the cold season covers the rest of the year. We test for seasonal unit roots in the production data in the warm and cold seasons separately. For the cold season, the data start at 0 o'clock on Nov 1, 2008 and end at 23 o'clock on Feb 28, 2009, giving 2880 observations over 120 days. For the warm season, the data start at 0 o'clock on May 1, 2009 and end at 23 o'clock on Aug 31, 2009, giving 2952 observations over a 123-day period. The data are plotted in Figure 1. We include a constant and seasonal dummies in (3.5), considering that there is no evident trend in either series. The lag order p is chosen by the AIC criterion, without lag elimination, to be 33 for the cold season and 30 for the warm season. Since the F-statistics for conjugate unit roots perform better than the t-statistics, we only provide the F-statistics for conjugate unit roots in Table 13.

From the results in Table 13 we can see that the F-statistics for conjugate unit roots are all significantly nonzero at the 5% level, as is t_π, so there is no seasonal unit root in either series. The estimates of t_1 for the two series are not significant at the 5% level, so there is a unit root at frequency 0. The differencing filter (1 − B) is needed to render both series stationary.


Figure 1: Hourly wind power production data in Sweden 2008-2009.

Table 13: Testing for seasonal unit roots in wind production data

Frequency
0 | π | π/12, 23π/12 | π/6, 11π/6 | π/4, 7π/4 | π/3, 5π/3 | 5π/12, 19π/12
cold season 0.02 -12.75* 128.66* 117.36* 114.88* 127.23* 111.08*
warm season 0.49 -12.23* 122.67* 119.60* 123.18* 145.36* 138.15*
π/2, 3π/2 | 7π/12, 17π/12 | 2π/3, 4π/3 | 3π/4, 5π/4 | 5π/6, 7π/6 | 11π/12, 13π/12 | F_all
cold season 124.61* 127.75* 99.35* 117.02* 124.68* 114.31* 115.68*
warm season 131.01* 150.14* 141.52* 142.37* 161.67* 116.91* 130.62*
* denotes significance at the 5% level

7 Concluding remarks
In this paper we propose an HEGY-type test for seasonal unit roots in data at any frequency. Seasonal unit roots in a univariate time series make the series nonstationary, and misspecification of the seasonal patterns in the modeling process leads to seriously biased results. Among the many approaches to detecting seasonal unit roots, the HEGY-type test has the advantage of testing for the presence of seasonal unit roots at different frequencies separately, so it can detect particular types of nonstationarity in applications. We use the technique of the HEGY test to derive the test regression for data at any frequency, and then provide the testing procedure. The finite-sample and asymptotic distributions of the test statistics are also given. Since the inclusion of deterministic components such as a constant, a trend and seasonal dummies affects the finite-sample and asymptotic distributions of the test statistics, we provide the distributions for the different cases.

The analysis of the power and size of our test for hourly data gives suggestions on how to choose augmentations and deterministic components in the test regression. According to the simulation results, including lags without lag elimination performs better, and it is better to include more lag augmentations when there is a strong negative seasonal moving average component in the series. We also find that including seasonal dummies is prudent unless there is a strong indication that no deterministic seasonality exists.

Our decomposition is similar to that of Chan and Wei (1988); specifically, x_{1,t} and x_{π,t} are the same as u_t and v_t in their paper. Chan and Wei pointed out that the limiting distributions of the estimators of the model y_n = φ_1 y_{n−1} + φ_2 y_{n−2} + ... + φ_p y_{n−p} + ε_n do not have a closed form when there are (non-)seasonal unit roots in the series, and that the distributions may be extremely complicated. They therefore gave up deriving such expressions and instead derived the asymptotic properties of their decomposition factors, such as u_t and v_t; thus they did not provide a strategy for testing seasonal unit roots. The HEGY-type test bypasses the asymptotic distributions of the estimators in the autoregressive model, factorizes the autoregressive polynomial (1 − φ_1 B − φ_2 B^2 − ... − φ_p B^p), and then uses the factors to test for the existence of seasonal unit roots. Our results still do not lead to the asymptotic distributions of the estimators of an autoregressive model containing seasonal unit roots, but they may be helpful in deriving them.

8 Appendix
Appendix I: Decomposition process.
Proof of Lemma 1:
To carry out a seasonal unit root test in quarterly data, HEGY (1990) decompose φ(B). The decomposition is based on the following result from approximation theory:

Lagrange interpolation polynomial: Given a function f(x) and a set of n points x_1, ..., x_n lying in (a, b), −∞ < a < b < ∞, the Lagrange interpolation polynomial is defined by

L(x) = Σ_{k=1}^n f(x_k) l_k(x)

where l_k(x) = ω(x) / [(x − x_k) ω′(x_k)], ω(x) = (x − x_1)(x − x_2) ··· (x − x_n), and ω′(x) is the derivative of ω(x).

L(x) is used to approximate f(x), and its deviation is given by f(x) − L(x) = [f^(n)(ξ)/n!] ω(x), where f^(n)(ξ) is the nth derivative of f evaluated at some ξ ∈ (a, b). With the Lagrange interpolation polynomial and its deviation, the function f(x) has the decomposition:

f(x) = Σ_{k=1}^n f(x_k) l_k(x) + [f^(n)(ξ)/n!] ω(x)
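As a concrete check of the interpolation formula, the sketch below evaluates L(x) = Σ_k f(x_k) l_k(x) in Python. Since the test function is a quadratic and three nodes are used, the remainder term vanishes and L reproduces f exactly; the function name and node choice are illustrative.

```python
import numpy as np

def lagrange_interp(f, nodes, x):
    """Evaluate the Lagrange interpolation polynomial L(x) = sum_k f(x_k)*l_k(x)."""
    nodes = np.asarray(nodes, dtype=float)
    total = 0.0
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        lk = np.prod((x - others) / (xk - others))   # l_k(x)
        total += f(xk) * lk
    return total

# with n nodes, L reproduces any polynomial of degree < n exactly,
# so the remainder f^(n)(xi)/n! * omega(x) vanishes here
f = lambda x: 2 * x**2 - 3 * x + 1
val = lagrange_interp(f, [0.0, 1.0, 2.0], 1.5)
# f(1.5) = 2*2.25 - 4.5 + 1 = 1.0
```

Note that the product form of l_k used here is algebraically identical to ω(x)/[(x − x_k)ω′(x_k)] above.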
We give the proof for the case when S is even first. Expand the autoregressive polynomial φ(B) at the S unit roots u_m, m = 1, 2, ..., S, defined in Section 2. Let ω(B) = Π_{m=1}^S (B − u_m) = Π_{m=1}^S (1 − B/u_m) and τ_m = −f(u_m)/[ω′(u_m) u_m]. We then get:

φ(u) = Σ_{m=1}^S τ_m u (1 − u^S)/(1 − u/u_m) + [f^(S)(B)/S!] (1 − u^S)
     = Σ_{m=1}^S τ_m (1 − u^S)[u/(1 − u/u_m) − 1] + [f^(S)(B)/S! + Σ_{m=1}^S τ_m] (1 − u^S)

The last equality is derived by subtracting and adding Σ_{m=1}^S τ_m (1 − u^S). Denote φ_m(u) = (u/u_m) Π_{j=1, j≠m}^S (1 − u/u_j) for m = 1, 2, ..., S, and φ*(u) = f^(S)(B)/S! + Σ_{m=1}^S τ_m; we then obtain (3.2).

When S is odd, let ω(B) = −Π_{m=1}^S (1 − B/u_m), τ_m = f(u_m)/[ω′(u_m) u_m] and φ*(u) = −f^(S)(B)/S! + Σ_{m=1}^S τ_m; with the same procedure we obtain (3.2).
Proof of Lemma 2:
For (3.2), it is obvious that, for m = 1, φ_1(B) = B(1 + B + B^2 + ... + B^{S−1}), and for m = π, φ_π(B) = (−B)[1 − B + B^2 + ... + (−B)^{S−1}].

Next we consider the φ_m(B) related to frequencies other than 0 and π. These frequencies form conjugate pairs, so the φ_m(B) need to be considered in pairs. For m = m_even,

τ_m φ_m(B) = τ_m u_{m+1} B (1 − u_m B) Π_{j=1, j≠m,m+1}^S (1 − B/u_j)
           = τ_m B (cos θ_m − i sin θ_m − B) Π_{j=1, j≠m,m+1}^S (1 − B/u_j)

where i is the imaginary unit. Similarly we get

τ_{m+1} φ_{m+1}(B) = τ_{m+1} B (cos θ_m + i sin θ_m − B) Π_{j=1, j≠m,m+1}^S (1 − B/u_j)

Make the substitution 2τ_m = ρ_m + iρ_{m+1}, 2τ_{m+1} = ρ_m − iρ_{m+1}. Then we have:

τ_m φ_m(B) + τ_{m+1} φ_{m+1}(B) = [ρ_m B(cos θ_m − B) + ρ_{m+1} B sin θ_m] Π_{j=1, j≠m,m+1}^S (1 − B/u_j)
                                = ρ_m ζ_m(B) + ρ_{m+1} ζ_{m+1}(B)      (8.1)

We give the two trigonometric identities below:

(cos θ_m − x) Π_{j=1, j≠m,m+1}^S (1 − x/u_j) = Σ_{j=1}^S cos(jθ_m) x^{j−1}      (8.2)

sin θ_m Π_{j=1, j≠m,m+1}^S (1 − x/u_j) = Σ_{j=1}^S sin(jθ_m) x^{j−1}      (8.3)

Since Π_{j=1}^S (1 − x/u_j) = 1 − x^S and (1 − x/u_m)(1 − x/u_{m+1}) = 1 − 2cos θ_m x + x^2, (8.2) is equivalent to (cos θ_m − x)(1 − x^S) = Σ_{j=1}^S cos(jθ_m) x^{j−1}(1 − 2cos θ_m x + x^2). Using the product-to-sum identity cos a cos b = (1/2)[cos(a + b) + cos(a − b)], the equation is easily proved. For (8.3), by a similar argument, (8.3) is equivalent to sin θ_m (1 − x^S) = (1 − 2cos θ_m x + x^2) Σ_{j=1}^S sin(jθ_m) x^{j−1}, which can be proved in the same way.

With (8.2) and (8.3), we can express the two polynomials in (8.1):

ζ_m(B) = Σ_{j=1}^S cos(jθ_m) B^j,   ζ_{m+1}(B) = Σ_{j=1}^S sin(jθ_m) B^j,   where m = m_even.

When m = 1, we can also write ζ_1(B) = φ_1(B) = Σ_{j=1}^S cos(jθ_1) B^j with θ_1 = 0. Similarly, when m = π, ζ_π(B) = φ_π(B) = Σ_{j=1}^S cos(jπ) B^j. So for m = 1, π we have the same form as for m = m_even.

With the process above, Lemma 2 is proved.
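Numerically, the polynomials ζ_m(B) of Lemma 2 act on a series y_t as finite cosine- and sine-weighted sums of its lags. A small Python sketch (the array layout is our own; the first S observations are dropped for lack of presample values):

```python
import numpy as np

def hegy_regressors(y, S, theta):
    """Build the transformed series of Lemma 2:
    cosine regressor  x_t = sum_{j=1}^S cos(j*theta) * y_{t-j}
    sine regressor    x_t = sum_{j=1}^S sin(j*theta) * y_{t-j}."""
    j = np.arange(1, S + 1)
    cosw = np.cos(j * theta)
    sinw = np.sin(j * theta)
    # y[t-S:t][::-1] = [y_{t-1}, y_{t-2}, ..., y_{t-S}]
    xc = np.array([cosw @ y[t - S:t][::-1] for t in range(S, len(y))])
    xs = np.array([sinw @ y[t - S:t][::-1] for t in range(S, len(y))])
    return xc, xs
```

For θ = 0 the cosine regressor reduces to the moving sum Σ_{j=1}^S y_{t−j}, the regressor associated with the zero-frequency unit root, and the sine regressor is identically zero.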

Appendix II Asymptotic distributions of test statistics when (3.3) is employed for


test.

A2.1 Matrix Decomposition

In the process of deriving the asymptotic distributions of the test statistics, the following matrices are used:
 
A_m =
[ cos(0·θ_m)       cos(θ_m)         cos(2θ_m)       ...  cos((S−1)θ_m) ]
[ cos((S−1)θ_m)    cos(0·θ_m)       cos(θ_m)        ...  cos((S−2)θ_m) ]
[ cos((S−2)θ_m)    cos((S−1)θ_m)    cos(0·θ_m)      ...  cos((S−3)θ_m) ]
[ ...                                                                  ]
[ cos(θ_m)         cos(2θ_m)        cos(3θ_m)       ...  cos(0·θ_m)    ]

C_m =
[ sin(0·θ_{m−1})    sin(θ_{m−1})      sin(2θ_{m−1})    ...  sin((S−1)θ_{m−1}) ]
[ sin((S−1)θ_{m−1}) sin(0·θ_{m−1})    sin(θ_{m−1})     ...  sin((S−2)θ_{m−1}) ]
[ sin((S−2)θ_{m−1}) sin((S−1)θ_{m−1}) sin(0·θ_{m−1})   ...  sin((S−3)θ_{m−1}) ]
[ ...                                                                         ]
[ sin(θ_{m−1})      sin(2θ_{m−1})     sin(3θ_{m−1})    ...  sin(0·θ_{m−1})    ]
We provide the decomposition theory for the matrices A_m and C_m in this part. The decomposition is used to lower the dimensions of A_m and C_m, because they are not of full rank. The technique used is the singular value decomposition (SVD), which factorizes A_m and C_m into three matrices: A_m = U^m D^m (V^m)′, C_m = U^{m*} D^{m*} (V^{m*})′. Consider A_m: the matrix D^m is a diagonal matrix with diagonal elements λ_1, ..., λ_S, the square roots of the eigenvalues of A_m′A_m, ordered from largest to smallest (the singular values). Both U^m and V^m are unitary matrices. Denote the jth columns of U^m and V^m by u_j^m and v_j^m. The decomposition of C_m is defined in the same way; denote the jth columns of U^{m*} and V^{m*} by u_j^{m*} and v_j^{m*}. The columns of U^m are the eigenvectors of A_m A_m′ and the columns of V^m are the eigenvectors of A_m′A_m, and they satisfy u_j^{m*} = λ_j^{−1} C_m v_j^{m*}.

The following property is used in obtaining the eigenvalues and eigenvectors of the matrices above. It can be found in many books on circulant matrices, so we state it directly.

Property of circulant matrices: Suppose A is a circulant matrix of the form

A =
[ a_0      a_1      a_2      ...  a_{S−1} ]
[ a_{S−1}  a_0      a_1      ...  a_{S−2} ]
[ a_{S−2}  a_{S−1}  a_0      ...  a_{S−3} ]
[ ...                                     ]
[ a_1      a_2      a_3      ...  a_0     ]

The eigenvalues of A are λ_k = Σ_{j=0}^{S−1} a_j e^{i·2πjk/S}, k = 0, 1, ..., S − 1, and the corresponding eigenvectors are w_k = (1/√S)(1, e^{i·2πk/S}, e^{i·4πk/S}, ..., e^{i·2(S−1)πk/S})′, where i is the imaginary unit.
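The property is easy to verify numerically: the DFT of the first row gives the eigenvalues, and the Fourier basis gives the eigenvectors. A quick Python check with illustrative numbers:

```python
import numpy as np

def circulant(a):
    """Circulant matrix whose first row is a (each subsequent row shifted right)."""
    S = len(a)
    return np.array([[a[(c - r) % S] for c in range(S)] for r in range(S)])

a = np.array([1.0, 2.0, 0.0, -1.0, 0.5, 3.0])
S = len(a)
A = circulant(a)

idx = np.arange(S)
# eigenvalues: lambda_k = sum_j a_j * exp(i*2*pi*j*k/S)
lam = np.array([np.sum(a * np.exp(1j * 2 * np.pi * idx * k / S)) for k in range(S)])
# eigenvector matrix: column k is w_k = (1/sqrt(S)) * (1, w^k, w^{2k}, ...)
W = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / S) / np.sqrt(S)
# then A @ W equals W @ diag(lam), i.e. A w_k = lam_k w_k
```

The check `A @ W == W * lam` confirms every w_k is an eigenvector with eigenvalue λ_k.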
I) First we consider θm equal to 0 and π . The matrixes needed are A1 and Aπ , and we have
m m m 0
the factorization Am = U D (V ) , m = 1, π . We could derive that there is only 1 non-zero

singular value for both A1 and Aπ which is λ1 = S . Thus only the rst column of U and V
0 0 m
matter. Because A1 and Aπ are symmetric, A1 A1 and Aπ Aπ are symmetric, so U = V m.
0
It is also easy to get that the eigen vector corresponding to singular value λ1 . For A1 A1 ,
0
u11 = v11 = √1s (1, 1, ..., 1)0 , and for Aπ Aπ , uπ1 = v1π = √1s (1, −1, ..., 1, −1)0 . Therefore we get the
1 0 1 1 0 1 2 0 2 2 0 2
equation, (u1 ) u1 = (v1 ) v1 = 1, (u1 ) u1 = (v1 ) v1 = 1.

II) Next we consider the matrix A_m for m = m_even.

D: We have A_m′A_m = (S/2)A_m, which is also a circulant matrix. Denote β_k = (2π/S)(k − 1), k = 1, 2, ..., S. The eigenvalues of (S/2)A_m are

λ_k = (S/2) Σ_{j=0}^{S−1} cos(jθ_m) e^{jβ_k i}
    = (S/4) Σ_{j=0}^{S−1} cos[j(θ_m + β_k)] + (S/4) Σ_{j=0}^{S−1} cos[j(θ_m − β_k)]
      + i(S/4) Σ_{j=0}^{S−1} sin[j(θ_m + β_k)] − i(S/4) Σ_{j=0}^{S−1} sin[j(θ_m − β_k)]
    = (S/4) Σ_{j=0}^{S−1} cos[j(θ_m + β_k)] + (S/4) Σ_{j=0}^{S−1} cos[j(θ_m − β_k)]

For a given θ_m, λ_k is non-zero only when β_k = θ_m or β_k = 2π − θ_m, in which case λ_k = S²/4. Thus the singular values of A_m are λ_1 = λ_2 = S/2.

V: The eigenvectors corresponding to λ_1 and λ_2 are ϱ_1^m = (1/√S)[1, e^{iθ_m}, e^{2iθ_m}, ..., e^{(S−1)iθ_m}]′ and ϱ_2^m = (1/√S)[1, e^{−iθ_m}, e^{−2iθ_m}, ..., e^{−(S−1)iθ_m}]′. Make the linear transformations:

v_1^m = (ϱ_1^m + ϱ_2^m)/√2 = √(2/S) [1, cos θ_m, cos(2θ_m), ..., cos((S−1)θ_m)]′
v_2^m = (ϱ_1^m − ϱ_2^m)/(√2 i) = √(2/S) [0, sin θ_m, sin(2θ_m), ..., sin((S−1)θ_m)]′

and we obtain the first two columns of V^m.

U: The columns of U^m are derived from the relationship u_j^m = λ_j^{−1} A_m v_j^m, so we get:

u_1^m = √(2/S) [1, cos θ_m, cos(2θ_m), ..., cos((S−1)θ_m)]′
u_2^m = √(2/S) [0, sin θ_m, sin(2θ_m), ..., sin((S−1)θ_m)]′

Therefore (u_1^m)′v_1^m = (u_2^m)′v_2^m = 1 and (u_1^m)′v_2^m = (u_2^m)′v_1^m = 0.
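The claims above — exactly two non-zero singular values, both equal to S/2, with the cosine vector as eigenvector — can be confirmed numerically. A sketch for S = 24 and θ_m = 5π/12, one of the conjugate frequencies appearing in the tables:

```python
import numpy as np

S = 24
theta = 5 * np.pi / 12          # = 2*pi*5/S, with 0 < theta < pi
# A_m: circulant matrix with (r, c) entry cos((c - r) * theta)
A = np.array([[np.cos((c - r) * theta) for c in range(S)] for r in range(S)])

sv = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
# the cosine eigenvector v_1^m = sqrt(2/S) * [1, cos(theta), ..., cos((S-1)theta)]'
v1 = np.sqrt(2 / S) * np.cos(theta * np.arange(S))
# A @ v1 should equal (S/2) * v1, and sv should be (S/2, S/2, 0, ..., 0)
```

The same check with the sine vector v_2^m, or with the sine matrix C_m, works analogously.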

III) Then we consider the decomposition of C_m for m = m_odd.

We have C_m′C_m = (S/2)A_{m−1}, so C_m has the same singular values as A_{m−1}, namely λ_1 = λ_2 = S/2, and the first two columns of V^{m*} are the same as those of V^{m−1}, i.e., v_1^{m−1} and v_2^{m−1}. With the relationship u_j^{m*} = λ_j^{−1} C_m v_j^{m*}, we get

u_1^{m*} = −√(2/S) [0, sin θ_{m−1}, sin(2θ_{m−1}), ..., sin((S−1)θ_{m−1})]′
u_2^{m*} = √(2/S) [1, cos θ_{m−1}, cos(2θ_{m−1}), ..., cos((S−1)θ_{m−1})]′

Thus we have the relationships (u_1^{m*})′v_1^{m*} = (u_2^{m*})′v_2^{m*} = 0, (u_1^{m*})′v_2^{m*} = −1, and (u_2^{m*})′v_1^{m*} = 1.

A2.2 Proof of Theorem 1:

We give the proof for the case when S is even; the case when S is odd is proved in the same way. The asymptotic distributions of the t-statistics for hourly data were proved by Beaulieu and Miron (1992); we restate them for general S. Under Assumption 1, we give the convergences below. J denotes the number of periods, i.e., J = T/S, and p = 1, 2, ..., S. Denote B(r) = [B_1(r), B_2(r), ..., B_S(r)]′, where the B_i(r) are mutually independent standard Brownian motions.

(1/σ) T^{−1/2} Σ_{j=0}^{J−1} ε_{Sj+p} →_L (1/√S) B_p(1)      (8.4)

(1/σ) J^{−3/2} Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+p} →_L ∫_0^1 B_p(r) dr      (8.5)

(1/σ²) J^{−1} Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q} ε_{Sj+p} →_L ∫_0^1 B_q(r) dB_p(r)      (8.6)

W_1(1) = (1/√S) Σ_{p=1}^{S} B_p(1)      (8.7)

where →_L denotes convergence in distribution. (8.4) is proved by Chan and Wei (1988) under Assumption 1. (8.5) can be found in White (2001, page 179) under Assumption 1 and (8.4). (8.6) is derived by Chan and Wei (1988) under the same assumptions as (8.5).
For the OLS estimators of regression (3.3),

ρ̂ = [Σ_{t=1}^T X_t X_t′]^{−1} [Σ_{t=1}^T X_t ε_t]      (8.8)

where ρ̂ = [ρ̂_1, ρ̂_2, ..., ρ̂_S]′ and X_t = [x_{1,t}, x_{2,t}, ..., x_{S,t}]′.

For m = 1, π, m_even, the following expression is used:

x_{m,t} = Σ_{i=1}^S cos(iθ_m) B^i y_t = Σ_{i=1}^S cos(iθ_m) Σ_{h=0}^{[t/S]} ε_{Sh+p−i} = Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{h} ε_{Sh+q}

The last step comes from the substitution i = p − q. Thus we have:

Σ_{t=0}^T x_{m,t} ε_t = Σ_{j=0}^{J−1} Σ_{p=1}^S Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{h=0}^{j} ε_{Sh+q} ε_{Sj+p}
= Σ_{p=1}^S Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q} ε_{Sj+p}
= Σ_{p=1}^S Σ_{q=1}^S cos(θ_m(p − q)) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{S(h−1)+q} ε_{Sj+p} + Σ_{p=2}^S Σ_{q=1}^{p−1} cos(θ_m(p − q)) Σ_{j=0}^{J−1} ε_{Sj+q} ε_{Sj+p}

The second part converges in probability to 0 when divided by J under Assumption 1; then, with (8.6), Σ_{t=0}^T x_{m,t} ε_t converges as

J^{−1} Σ_{t=0}^T x_{m,t} ε_t →_L σ² Σ_{p=1}^S Σ_{q=1}^S cos(θ_m(p − q)) ∫_0^1 B_q(r) dB_p(r) = σ² ∫_0^1 B′ A_m dB = σ² ∫_0^1 B′ U^m D^m (V^m)′ dB

For the elements of the matrix Σ_{t=1}^T X_t X_t′:

J^{−2} Σ_{t=0}^T x²_{m,t} = J^{−2} Σ_{j=0}^{J−1} Σ_{p=1}^S [Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{h} ε_{Sh+q}]² = J^{−2} Σ_{j=0}^{J−1} E_j′ A_m′ A_m E_j →_L σ² ∫_0^1 B′(r) U^m D^m D^m (U^m)′ B(r) dr

where E_j = [Σ_{h=0}^{j} ε_{Sh+q−1}, Σ_{h=0}^{j} ε_{Sh+q−2}, ..., Σ_{h=0}^{j} ε_{Sh+q−S}]′.

For the off-diagonal elements, with m_1 ≠ m_2, similarly,

J^{−2} Σ_{t=0}^T x_{m_1,t} x_{m_2,t} = J^{−2} Σ_{j=0}^{J−1} E_j′ A_{m_1}′ A_{m_2} E_j

(A_{m_1} and A_{m_2} are replaced by C_{m_1} and C_{m_2} for m = m_odd). Because A_{m_1}′A_{m_2} = A_{m_1}′C_{m_2} = C_{m_1}′A_{m_2} = C_{m_1}′C_{m_2} = 0, we have J^{−2} Σ_{t=0}^T x_{m_1,t} x_{m_2,t} = 0. Thus the regressors are orthogonal, and the estimators become

ρ̂_m = Σ_{t=1}^T x_{m,t} ε_t / Σ_{t=1}^T x²_{m,t},   σ̂²_{ρ_m} = s_T² / Σ_{t=0}^T x²_{m,t}

where s_T² is the estimator of σ².

We have t_m = ρ̂_m / σ̂_{ρ_m} = Σ_{t=0}^T x_{m,t} ε_t / [s_T (Σ_{t=0}^T x²_{m,t})^{1/2}]. The convergences of the numerator and denominator are given above, and the asymptotic distributions of the t-statistics are derived by the continuous mapping theorem:

t_m →_L [σ²J ∫_0^1 B′ U^m D^m (V^m)′ dB(r)] / [σ²J (∫_0^1 B′(r) U^m D^m D^m (U^m)′ B(r) dr)^{1/2}],   m = 1, π, m_even      (8.9)

From the decomposition in A2.1, when m = 1 we have u_1^1 = v_1^1 = (1/√S)(1, 1, ..., 1)′, so we can write B′(r)u_1^1 = (v_1^1)′B(r) = W_1(r). When m = π, we have u_1^π = v_1^π = (1/√S)(1, −1, ..., 1, −1)′, so B′(r)u_1^π = (v_1^π)′B(r) = W_π(r), where W_1(r) and W_π(r) are mutually independent Brownian motions. Then, for the numerator and denominator of the t-statistics in (8.9), when m = 1, π,

B′(r) U^m D^m (V^m)′ dB(r) = B′(r) u_1^m S (v_1^m)′ dB(r) = S W_m(r) dW_m(r)
B′(r) U^m D^m D^m (U^m)′ B(r) = B′(r) u_1^m S² (u_1^m)′ B(r) = S² W_m(r)²

When m = m_even, we have v_1^m = u_1^m = √(2/S)[1, cos θ_m, cos(2θ_m), ..., cos((S−1)θ_m)]′ and v_2^m = u_2^m = √(2/S)[0, sin θ_m, sin(2θ_m), ..., sin((S−1)θ_m)]′, so B′(r)u_1^m = (v_1^m)′B(r) = W_m(r) and B′(r)u_2^m = (v_2^m)′B(r) = W_{m+1}(r). Because v_1^m and v_2^m are orthogonal, W_m(r) and W_{m+1}(r) are independent. We have the following equations:

B′(r) U^m D^m (V^m)′ dB(r) = (S/2) B′(r) u_1^m (v_1^m)′ dB(r) + (S/2) B′(r) u_2^m (v_2^m)′ dB(r)
                           = (S/2) W_m(r) dW_m(r) + (S/2) W_{m+1}(r) dW_{m+1}(r)

B′(r) U^m D^m D^m (U^m)′ B(r) = (S²/4) B′(r) u_1^m (u_1^m)′ B(r) + (S²/4) B′(r) u_2^m (u_2^m)′ B(r)
                              = (S²/4) W_m(r)² + (S²/4) W_{m+1}(r)²

Substituting the terms above into (8.9) gives the asymptotic distributions of t_1, t_π and t_{m_even}.
For m = m_odd, the asymptotic distributions of the t-statistics are derived in the same way. We get (8.10):

t_m →_L [σ²J ∫_0^1 B′(r) U^{m*} D^{m*} (V^{m*})′ dB(r)] / [σ²J (∫_0^1 B′(r) U^{m*} D^{m*} D^{m*} (U^{m*})′ B(r) dr)^{1/2}],   m = m_odd      (8.10)

From the decomposition derived in A2.1, we get −B′(r)u_1^{m*} = (v_2^{m*})′B(r) = W_m(r) and B′(r)u_2^{m*} = (v_1^{m*})′B(r) = W_{m−1}(r), so

B′(r) U^{m*} D^{m*} (V^{m*})′ dB(r) = (S/2) B′(r) u_1^{m*} (v_1^{m*})′ dB(r) + (S/2) B′(r) u_2^{m*} (v_2^{m*})′ dB(r)
                                    = −(S/2) W_m(r) dW_{m−1}(r) + (S/2) W_{m−1}(r) dW_m(r)

B′(r) U^{m*} D^{m*} D^{m*} (U^{m*})′ B(r) = (S²/4) B′(r) u_1^{m*} (u_1^{m*})′ B(r) + (S²/4) B′(r) u_2^{m*} (u_2^{m*})′ B(r)
                                          = (S²/4) W_{m−1}(r)² + (S²/4) W_m(r)²

Substituting the terms above into (8.10) gives the asymptotic distribution of t_{m_odd}. Theorem 1 is proved.

Appendix III. Asymptotic distribution of HEGY type test statistics when (3.5) is
employed for test.
In this section we consider the asymptotic distributions of the t-statistics when there are deterministic components in the auxiliary regression (3.5). We derive the asymptotic distributions of the numerators of the t-statistics first; the asymptotic distributions of the denominators can be derived in the same way. Then, with the continuous mapping theorem, we derive the asymptotic distributions of the t-statistics. The following lemma is needed in our proof.

Lemma 4:

(a) (1/σ) T^{−3/2} Σ_{t=0}^T x_{m,t} →_L ∫_0^1 W_1(r) dr for m = 1, and 0 elsewhere.

(b) (1/σ) T^{−5/2} Σ_{t=0}^T t x_{m,t} →_L ∫_0^1 r W_1(r) dr for m = 1, and 0 elsewhere.

(c) (1/σ²) S^{−1} Σ_{p=1}^S x̄_{m,p} ε̄_p →_L
    W_m(1) ∫_0^1 W_m(r) dr,   m = 1, π
    (1/2) W_m(1) ∫_0^1 W_m(r) dr + (1/2) W_{m+1}(1) ∫_0^1 W_{m+1}(r) dr,   m = m_even
    (1/2) W_m(1) ∫_0^1 W_{m−1}(r) dr − (1/2) W_{m−1}(1) ∫_0^1 W_m(r) dr,   m = m_odd

(d) (1/σ²) (TS)^{−1} Σ_{p=1}^S x̄²_{m,p} →_L
    (∫_0^1 W_m(r) dr)²,   m = 1, π
    (1/4)(∫_0^1 W_m(r) dr)² + (1/4)(∫_0^1 W_{m+1}(r) dr)²,   m = m_even
    (1/4)(∫_0^1 W_{m−1}(r) dr)² + (1/4)(∫_0^1 W_m(r) dr)²,   m = m_odd

where ε̄_p = J^{−1} Σ_{j=0}^{J−1} ε_{Sj+p} and x̄_{m,p} = J^{−1} Σ_{j=0}^{J−1} x_{m,Sj+p}.
Proof of (a): When m = 1, π, m_even, the left side of (a) can be rewritten as:

(1/σ) T^{−3/2} Σ_{t=1}^T x_{m,t} = (1/σ) T^{−3/2} Σ_{p=1}^S Σ_{j=0}^{J−1} x_{m,Sj+p}
= (1/σ) T^{−3/2} Σ_{p=1}^S Σ_{j=0}^{J−1} Σ_{q=p−S}^{p−1} cos((p − q)θ_m) Σ_{h=0}^{j} ε_{Sh+q}
= (1/σ) T^{−3/2} Σ_{p=1}^S Σ_{q=p−S}^{p−1} cos((p − q)θ_m) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q}

With the convergence in (8.5), we have

(1/σ) T^{−3/2} Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q} →_L (1/(S√S)) ∫_0^1 B_i(r) dr

where i = q when q ≥ 0 and i = S + q when q < 0. For the trigonometric part we have, for any u_q,

Σ_{p=1}^S cos((p − q)θ_m) u_q = S u_q for m = 1, and 0 for m = π, m_even.

With the convergence above and the sum of the cosine part:

(1/σ) T^{−3/2} Σ_{p=1}^S Σ_{q=p−S}^{p−1} cos((p − q)θ_m) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q} →_L (1/√S) Σ_{i=1}^S ∫_0^1 B_i(r) dr for m = 1, and 0 for m = π, m_even.

When m = m_odd, following a similar procedure and using Σ_{p=1}^S sin((p − q)θ_{m−1}) u_q = 0, we get

(1/σ) T^{−3/2} Σ_{t=1}^T x_{m,t} = (1/σ) T^{−3/2} Σ_{p=1}^S Σ_{q=p−S}^{p−1} sin((p − q)θ_{m−1}) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q} = 0

Noting that (1/√S) Σ_{p=1}^S ∫_0^1 B_p(r) dr = ∫_0^1 (1/√S) Σ_{p=1}^S B_p(r) dr, and substituting with (8.7), (a) is proved.

Proof of (b): We directly apply the results of Park and Phillips (1988) to get the convergence when m = 1:

T^{−5/2} Σ_{t=0}^T t x_{m,t} = T^{−5/2} Σ_{t=0}^T Σ_{i=0}^{t−1} t ε_i →_L ∫_0^1 r W_1(r) dr

When m = π, m_even,

T^{−5/2} Σ_{t=0}^T t x_{m,t} = T^{−5/2} Σ_{p=1}^S Σ_{j=0}^{J−1} (Sj + p) Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{h=0}^{j} ε_{Sh+q}
= T^{−5/2} J Σ_{p=1}^S p x̄_{m,p} + T^{−5/2} S Σ_{p=1}^S Σ_{j=0}^{J−1} j Σ_{q=1}^S cos(θ_m(p − q)) Σ_{h=0}^{j} ε_{S(h−1)+q}
+ T^{−5/2} S Σ_{p=2}^S Σ_{j=0}^{J−1} j Σ_{q=1}^{p−1} cos(θ_m(p − q)) ε_{Sj+q}

The first and third terms above vanish as T grows large. The second term can be rewritten as:

Σ_{p=1}^S Σ_{j=0}^{J−1} j Σ_{q=1}^S cos(θ_m(p − q)) Σ_{h=0}^{j} ε_{S(h−1)+q} = Σ_{j=0}^{J−1} Σ_{q=1}^S [Σ_{p=1}^S j cos(θ_m(p − q))] Σ_{h=0}^{j} ε_{S(h−1)+q}

As in (a), Σ_{p=1}^S j cos(θ_m(p − q)) Σ_{h=0}^{j} u_{Sh+q} = 0, so the second term equals 0.

When m = m_odd, the same procedure gives T^{−5/2} Σ_{t=0}^T t x_{m,t} = 0.
Proof of (c): When m = π, m_even, we have the expression below:

Σ_{j=0}^{J−1} x_{m,Sj+p} = Σ_{j=0}^{J−1} Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{h=0}^{j} ε_{Sh+q} = Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q}

Then for the left part of (c):

T^{−1} J Σ_{p=1}^S x̄_{m,p} ε̄_p = T^{−1} J Σ_{p=1}^S (J^{−1} Σ_{j=0}^{J−1} x_{m,Sj+p})(J^{−1} Σ_{j=0}^{J−1} ε_{Sj+p})
= (1/S) Σ_{p=1}^S [Σ_{q=p−S}^{p−1} cos(θ_m(p − q)) J^{−3/2} Σ_{j=0}^{J−1} Σ_{h=0}^{j} ε_{Sh+q}] (J^{−1/2} Σ_{j=0}^{J−1} ε_{Sj+p})

With (8.4) and (8.5), let Z = [∫_0^1 B_1(r)dr, ∫_0^1 B_2(r)dr, ..., ∫_0^1 B_S(r)dr]′ and B = [B_1(1), B_2(1), ..., B_S(1)]′; the convergence of the term above is then obtained:

T^{−1} J Σ_{p=1}^S x̄_{m,p} ε̄_p →_L (σ²/S) Σ_{p=1}^S Σ_{q=1}^S cos(θ_m(p − q)) Z_q B_p = (σ²/S) Z′ A_m B

With the decomposition in A2.1, in the same way as the proof of Lemma 1, we derive the convergence for m = 1, π, m_even.

With the same procedure, for m = m_odd we derive the following convergence:

T^{−1} J Σ_{p=1}^S x̄_{m,p} ε̄_p →_L (σ²/S) Σ_{p=1}^S Σ_{q=1}^S sin(θ_{m−1}(p − q)) Z_q B_p = (σ²/S) Z′ C_m B

As for m = 1, π, m_even, we derive the convergence for m = m_odd. (c) is proved.

Proof of (d): In the proof of (c), replace ε̄_p with x̄_{m,p}, and we get

T^{−2} J Σ_{p=1}^S x̄²_{m,p} = X′ A_m′ A_m X (or X′ C_m′ C_m X for m_odd)

Following steps similar to the proof of (c), together with the decomposition theory in A2.1, the proof is finished.

Proof of Theorem 2:
Regressing y_t, x_{m,t} and ε_t on the deterministic components included in (3.5), we can re-express (3.5) as

φ*(B)(1 − B^S) ỹ_t = Σ_{m=1}^S ρ_m x̃_{m,t} + ε̃_t      (8.11)

where ỹ_t = y_t − c_{0,y} − c_{1,y}t − Σ_{i=2}^S c_{i,y}D_{i,t}, x̃_{m,t} = x_{m,t} − c_{0,m} − c_{1,m}t − Σ_{i=2}^S c_{i,m}D_{i,t}, and ε̃_t = ε_t − c_{0,ε} − c_{1,ε}t − Σ_{i=2}^S c_{i,ε}D_{i,t}.

Asymptotically the vector of seasonal dummies is orthogonal to the vector of trend terms (Beaulieu and Miron, 1992), so we can express the terms above with trend and seasonal means:

x̃_{m,t} = x_{m,t} − c_{1,m}(t − t̄) − d_{m,t}, where d_{m,t} = c_{0,m} + Σ_{i=2}^S c_{i,m}D_{i,t} + c_{1,m}t̄ is the seasonal mean of x_{m,t}, i.e., x̄_{m,p} when t is the pth observation in a cycle;

ε̃_t = ε_t − c_{1,ε}(t − t̄) − d_{ε,t}, where d_{ε,t} = c_{0,ε} + Σ_{i=2}^S c_{i,ε}D_{i,t} + c_{1,ε}t̄.

Note: (i) When (3.5) includes only a constant, c_{1,y} = c_{1,m} = c_{1,ε} = 0 and c_{i,y} = c_{i,m} = c_{i,ε} = 0 for i = 2, ..., S.
(ii) When (3.5) includes a constant and trend, c_{i,y} = c_{i,m} = c_{i,ε} = 0 for i = 2, ..., S.
(iii) When (3.5) includes a constant and seasonal dummies, c_{1,y} = c_{1,m} = c_{1,ε} = 0.

Due to the asymptotic orthogonality, the coefficients are

c_{1,m} = Σ_{t=1}^T (x_{m,t} − x̄_m)(t − t̄) / Σ_{t=1}^T (t − t̄)² ≈ (12 Σ_{t=1}^T x_{m,t}t − 6 x̄_m T²) / T³
c_{1,ε} = Σ_{t=1}^T (ε_t − ε̄)(t − t̄) / Σ_{t=1}^T (t − t̄)² ≈ (12 Σ_{t=1}^T ε_t t − 6 ε̄ T²) / T³

The approximations come from Σ_{t=1}^T (t − t̄)² ≈ T³/12 and T + 1 ≈ T. The same expressions and approximations of these terms are also derived by Beaulieu and Miron (1992).


With (8.11), the t-statistics can be expressed as

t_m = ρ̂_m / σ̂_{ρ_m} = Σ_{t=0}^T x̃_{m,t} ε̃_t / [s_T (Σ_{t=0}^T x̃²_{m,t})^{1/2}]

For the numerator:

Σ_{t=1}^T x̃_{m,t} ε̃_t = Σ_{t=1}^T (x_{m,t} − c_{1,m}(t − t̄) − d_{m,t})(ε_t − c_{1,ε}(t − t̄) − d_{ε,t})
= Σ_{t=1}^T x_{m,t}ε_t − Σ_{t=1}^T x_{m,t}c_{1,ε}(t − t̄) − Σ_{t=1}^T x_{m,t}d_{ε,t} − Σ_{t=1}^T c_{1,m}(t − t̄)ε_t
+ Σ_{t=1}^T c_{1,m}c_{1,ε}(t − t̄)² + Σ_{t=1}^T d_{ε,t}c_{1,m}(t − t̄) − Σ_{t=0}^T d_{m,t}ε_t
+ Σ_{t=0}^T d_{m,t}c_{1,ε}(t − t̄) + Σ_{t=0}^T d_{m,t}d_{ε,t}

T
P PT PT PS T
P
For the terms above, xm,t dε,t = t=0 dm,t εt = t=0 dm,t dε,t = J p=1 xm,p εp , c1,m c1,ε (t−
t=1 t=1
T T
t.)2 = c1,m
P P
(t − t)(εt − ε) = c1,m εt (t − t)
t=1 t=1
PT T
P
Next we show that t=1 dm,t c1,ε (t − t) and xm c1,ε (t − t̄) converge to 0 when T goes to
t=1
innity.

T S J−1 S
X X X T +1 X S+1
dm,t c1,ε (t − t) = c1,ε xm,p (Sj + p − ) = c1,ε xm,p J(p − )
t=1 p=1 j=0
2 p=1
2
T S S
12 X T X S+1 6 X S+1
= ( εt t)( xm,p (p − )) − 3 ε xm,p J(p − )
T 3 t=1 S p=1 2 T p=1 2

The last term comes from substituting c1,ε with its approximation term. P For the rst term,

P
εt t 1 S S+1
T 3/2
converge according to Hamilton (See Hamilton, 1994, page 486),
TS p=1 xm,p (p − 2 ) <
PS 2
PS 2
p=1 xm,p (2p−S−1)
2T S
+ p=1 8T S both converge to 0 when T goes to innity (Lemma 4 (d)), so the rst

term converges to 0 when T goes to innity. Similarly, the second term also converges to 0 as T

goes to innity.
T
P
With the same method, xm c1,ε (t − t̄) also converges to 0 as T goes to innity.
t=1
Then for the numerator of the t-statistics becomes:

$$\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_t = \sum_{t=1}^{T} x_{m,t}\varepsilon_t - J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p - \sum_{t=1}^{T} x_{m,t}c_{1,\varepsilon}(t-\bar{t}) \qquad (8.12)$$

For the last term, again substituting $c_{1,\varepsilon}$ with its approximation, we get:

$$-\sum_{t=1}^{T} x_{m,t}c_{1,\varepsilon}(t-\bar{t}) = -3T\bar{x}_m\bar{\varepsilon} - 12T^{-3}\left(\sum_{t=1}^{T}x_{m,t}t\right)\left(\sum_{t=1}^{T}t\varepsilon_t\right) + 6T^{-1}\bar{x}_m\sum_{t=1}^{T}t\varepsilon_t + 6T^{-1}\bar{\varepsilon}\sum_{t=1}^{T}x_{m,t}t$$

Substituting the expression above into (8.12), we get (8.13):

$$\begin{aligned}
\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_t ={}& \sum_{t=1}^{T}x_{m,t}\varepsilon_t - J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p - 3T\bar{x}_m\bar{\varepsilon} - 12T^{-3}\left(\sum_{t=1}^{T}x_{m,t}t\right)\left(\sum_{t=1}^{T}t\varepsilon_t\right) \\
& + 6T^{-1}\bar{x}_m\sum_{t=1}^{T}t\varepsilon_t + 6T^{-1}\bar{\varepsilon}\sum_{t=1}^{T}x_{m,t}t \qquad (8.13)
\end{aligned}$$

Dividing each term by $T$, (8.13) can be re-expressed as:

$$\begin{aligned}
& T^{-1}\sum_{t=1}^{T}x_{m,t}\varepsilon_t - \frac{1}{S}\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p - 3\left(T^{-3/2}\sum_{t=1}^{T}x_{m,t}\right)\left(T^{-1/2}\sum_{t=1}^{T}\varepsilon_t\right) - 12\left(T^{-5/2}\sum_{t=1}^{T}x_{m,t}t\right)\left(T^{-3/2}\sum_{t=1}^{T}t\varepsilon_t\right) \\
& + 6\left(T^{-3/2}\sum_{t=1}^{T}x_{m,t}\right)\left(T^{-3/2}\sum_{t=1}^{T}t\varepsilon_t\right) + 6\left(T^{-1/2}\sum_{t=1}^{T}\varepsilon_t\right)\left(T^{-5/2}\sum_{t=1}^{T}x_{m,t}t\right) \qquad (8.14)
\end{aligned}$$

We directly use the convergence results in Hamilton (see Hamilton, 1994, page 486):

$$\frac{1}{\sigma}T^{-1/2}\sum_{t=1}^{T}\varepsilon_t \xrightarrow{L} W_1(1) \qquad (8.15)$$

$$\frac{1}{\sigma}T^{-3/2}\sum_{t=1}^{T}t\varepsilon_t \xrightarrow{L} W_1(1) - \int_0^1 W_1(r)\,dr \qquad (8.16)$$

With (8.15), (8.16) and Lemma 4, we obtain the asymptotic distribution of each term, and thus the asymptotic distribution of the numerator of the t-statistic when constant, trend and seasonal dummies are included in regression (3.5).
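The two limits (8.15) and (8.16) can be checked by simulation; the sketch below (my own illustration, assuming iid standard normal errors so that $\sigma = 1$) verifies that the finite-sample variances are close to $\mathrm{Var}[W_1(1)] = 1$ and $\mathrm{Var}[W_1(1) - \int_0^1 W_1(r)\,dr] = 1/3$:

```python
import numpy as np

# Monte Carlo check of (8.15) and (8.16) with iid N(0,1) errors:
# T^{-1/2} sum eps_t has limiting variance Var[W1(1)] = 1, and
# T^{-3/2} sum t*eps_t has limiting variance
# Var[W1(1) - int_0^1 W1(r) dr] = 1 + 1/3 - 2*(1/2) = 1/3.
rng = np.random.default_rng(0)
T, reps = 500, 20000
eps = rng.standard_normal((reps, T))
t = np.arange(1, T + 1)
s1 = eps.sum(axis=1) / np.sqrt(T)        # statistic in (8.15)
s2 = (eps * t).sum(axis=1) / T ** 1.5    # statistic in (8.16)
print(s1.var(), s2.var())                # approximately 1 and 1/3
```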

When different deterministic components are included in (3.5), certain changes to (8.13) are needed:

When only a constant is included in (3.5), $c_{1,y}=c_{1,m}=c_{1,\varepsilon}=0$ and $c_{i,y}=c_{i,m}=c_{i,\varepsilon}=0$ for $i=2,\ldots,S$. Then $d_{m,t}=c_{0,m}=\bar{x}_m$ and $d_{\varepsilon,t}=c_{0,\varepsilon}=\bar{\varepsilon}$, so $J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p=\sum_{t=1}^{T}d_{m,t}d_{\varepsilon,t}=T\bar{x}_m\bar{\varepsilon}$, and (8.13) changes to $\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_t=\sum_{t=1}^{T}x_{m,t}\varepsilon_t-T\bar{x}_m\bar{\varepsilon}$.

When constant and trend are included in (3.5), $c_{i,y}=c_{i,m}=c_{i,\varepsilon}=0$ for $i=2,\ldots,S$. As above, $J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p=T\bar{x}_m\bar{\varepsilon}$, and (8.13) changes to $\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_t=\sum_{t=1}^{T}x_{m,t}\varepsilon_t-T\bar{x}_m\bar{\varepsilon}-\sum_{t=1}^{T}x_{m,t}c_{1,\varepsilon}(t-\bar{t})$.

When constant and seasonal dummies are included in (3.5), $c_{1,y}=c_{1,m}=c_{1,\varepsilon}=0$, and (8.13) changes to $\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_t=\sum_{t=1}^{T}x_{m,t}\varepsilon_t-J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_p$.
So we can express (8.13) compactly as

$$\begin{aligned}
\sum_{t=1}^{T}\tilde{x}_{m,t}\tilde{\varepsilon}_{t} ={}& \sum_{t=1}^{T}x_{m,t}\varepsilon_{t} - T\bar{x}_{m}\bar{\varepsilon} - 1_{\mu}\left[J\sum_{p=1}^{S}\bar{x}_{m,p}\bar{\varepsilon}_{p} - T\bar{x}_{m}\bar{\varepsilon}\right] \\
& - 1_{t}\left[3T\bar{x}_{m}\bar{\varepsilon} + 12T^{-3}\left(\sum_{t=1}^{T}x_{m,t}t\right)\left(\sum_{t=1}^{T}t\varepsilon_{t}\right) - 6T^{-1}\bar{x}_{m}\sum_{t=1}^{T}t\varepsilon_{t} - 6T^{-1}\bar{\varepsilon}\sum_{t=1}^{T}x_{m,t}t\right] \qquad (8.17)
\end{aligned}$$

where the indicator $1_{\mu}$ equals 1 when seasonal dummies are included in (3.5) and $1_{t}$ equals 1 when a trend is included.
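The constant-only case of the decomposition above is an exact finite-sample identity, not just an approximation, which can be confirmed numerically (my own illustration; the series are simulated here purely to exercise the algebra):

```python
import numpy as np

# Exact check of the constant-only case: demeaning both series gives
#   sum (x_t - xbar)(eps_t - epsbar) = sum x_t*eps_t - T * xbar * epsbar.
rng = np.random.default_rng(1)
T = 200
eps = rng.standard_normal(T)
x = np.cumsum(rng.standard_normal(T))    # a random-walk regressor
lhs = np.sum((x - x.mean()) * (eps - eps.mean()))
rhs = np.sum(x * eps) - T * x.mean() * eps.mean()
print(abs(lhs - rhs))                    # zero up to floating-point rounding
```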

With Lemma 4, (8.15) and (8.16), the asymptotic distributions of the numerators of the t-statistics are derived.

For the denominator, applying the same procedure with $\varepsilon_t$ replaced by $x_{m,t}$, the asymptotic distribution of the denominator follows from Lemma 4. Then the continuous mapping theorem yields the asymptotic distributions of the t-statistics. This completes the proof of Theorem 2.

Appendix IV
A4.1 Critical values for hourly data when different deterministic components are included.
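The finite-sample critical values tabulated below come from Monte Carlo simulation. The following sketch (my own illustration, assuming iid N(0,1) errors and a pure random-walk null; the paper's actual auxiliary regression (3.5) uses all transformed regressors jointly) shows the idea for the $t_1$ statistic with only an intercept, whose limiting quantiles match the Dickey-Fuller-type entries in Table 14:

```python
import numpy as np

# Monte Carlo sketch: simulate random walks under the null, regress
# dy_t on [1, y_{t-1}], and collect the t-statistic on y_{t-1}.
# Quantiles of the collected t-statistics give critical values.
rng = np.random.default_rng(42)
T, reps = 480, 4000
tstats = np.empty(reps)
for r in range(reps):
    y = np.cumsum(rng.standard_normal(T + 1))       # random walk under the null
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(T), ylag])         # intercept + lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (T - 2)                    # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1]) # std. error of rho-hat
    tstats[r] = beta[1] / se
print(np.quantile(tstats, 0.05))   # near the 5% entries of Table 14
```

More replications (the tables here would need tens of thousands) shrink the simulation error of the tabulated quantiles.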

Table 14: Critical values for the t1 of hourly data. Only intercept included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -3.21 -2.85 -2.55 -2.26 -0.29 0.03 0.34 0.72
T=240 -3.22 -2.92 -2.70 -2.43 -0.41 -0.04 0.25 0.65
T=360 -3.28 -3.00 -2.89 -2.49 -0.44 -0.10 0.24 0.65
T=480 -3.35 -3.08 -2.82 -2.52 -0.44 -0.10 0.23 0.66
T= ∞ -3.41 -3.14 -2.89 -2.57 -0.44 -0.07 0.26 0.61

Table 15: Critical values for the t24 of hourly data. Only intercept included

Sample size T Probability that t24 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.26 -1.92 -1.66 -1.36 0.91 1.28 1.65 2.01
T=240 -2.52 -2.11 -1.81 -1.49 0.90 1.26 1.63 1.99
T=360 -2.52 -2.16 -1.84 -1.52 0.89 1.25 1.63 1.98
T=480 -2.53 -2.19 -1.90 -1.57 0.86 1.23 1.60 1.96
T= ∞ -2.60 -2.24 -1.96 -1.63 0.90 1.28 1.61 2.03

Table 16: Critical values for the F-statistics of hourly data. Only intercept included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.01 0.03 0.05 0.11 2.02 2.62 3.24 4.08
T=240 0.01 0.03 0.05 0.11 2.22 2.86 3.52 4.41
T=360 0.01 0.03 0.05 0.11 2.30 2.95 3.60 4.51
T=480 0.01 0.03 0.05 0.11 2.33 3.00 3.68 4.58
T= ∞ 0.01 0.03 0.05 0.11 2.41 3.13 3.81 4.60

Table 17: Critical values for the F-statistics of hourly data. Only intercept included
Sample size T Probability that $F_{all} = \frac{1}{24}\sum_{k=1}^{24} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.43 0.49 0.54 0.61 1.32 1.48 1.62 1.79
T=240 0.48 0.55 0.61 0.69 1.43 1.58 1.71 1.91
T=360 0.49 0.55 0.62 0.70 1.50 1.62 1.75 1.95
T=480 0.51 0.57 0.64 0.72 1.52 1.66 1.81 2.00
T= ∞ 0.54 0.61 0.68 0.77 1.57 1.72 1.85 2.00

Table 18: Critical values for the t1 of hourly data. Intercept and trend included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -3.63 -3.33 -3.06 -2.79 -0.99 -0.68 -0.41 -0.10
T=240 -3.82 -3.51 -3.25 -2.96 -1.14 -0.84 -0.61 -0.28
T=360 -3.78 -3.51 -3.27 -2.99 -1.16 -0.88 -0.60 -0.27
T=480 -3.91 -3.57 -3.35 -3.05 -1.19 -0.92 -0.64 -0.34
T= ∞ -3.98 -3.66 -3.40 -3.12 -1.25 -0.95 -0.69 -0.30

Table 19: Critical values for the t24 of hourly data. Intercept and trend included

Sample size T Probability that t24 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.28 -1.94 -1.67 -1.37 0.91 1.29 1.59 1.95
T=240 -2.40 -2.08 -1.80 -1.49 0.97 1.34 1.66 2.03
T=360 -2.48 -2.14 -1.83 -1.52 0.88 1.26 1.59 2.02
T=480 -2.52 -2.20 -1.92 -1.57 0.90 1.27 1.58 2.00
T= ∞ -2.58 -2.27 -1.97 -1.64 0.91 1.31 1.60 2.02

Table 20: Critical values for the F-statistics of hourly data. Intercept and trend included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.01 0.03 0.05 0.10 2.01 2.63 3.25 4.09
T=240 0.01 0.03 0.05 0.11 2.21 2.85 3.51 4.38
T=360 0.01 0.03 0.05 0.11 2.29 2.97 3.62 4.50
T=480 0.01 0.03 0.05 0.11 2.32 3.00 3.69 4.58
T= ∞ 0.01 0.03 0.06 0.11 2.46 3.16 3.88 4.64

Table 21: Critical values for the F-statistics of hourly data. Intercept and trend included
Sample size T Probability that $F_{all} = \frac{1}{24}\sum_{k=1}^{24} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.54 0.61 0.67 0.76 1.53 1.67 1.86 2.27
T=240 0.54 0.61 0.67 0.75 1.54 1.68 1.81 2.01
T=360 0.56 0.65 0.69 0.78 1.54 1.69 1.85 2.06
T=480 0.57 0.64 0.71 0.80 1.59 1.74 1.89 2.06
T= ∞ 0.61 0.69 0.76 0.85 1.66 1.82 1.94 2.11

Table 22: Critical values for the t1 of hourly data. Intercept and dummies included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.89 -2.56 -2.32 -2.03 -0.20 0.07 0.33 0.68
T=240 -3.14 -2.89 -2.61 -2.32 -0.38 -0.04 0.23 0.60
T=360 -3.27 -2.94 -2.67 -2.39 -0.40 -0.07 0.22 0.57
T=480 -3.27 -2.98 -2.70 -2.42 -0.39 -0.05 0.23 0.62
T= ∞ -3.45 -3.13 -2.87 -2.57 -0.47 -0.10 0.21 0.57

Table 23: Critical values for the t24 of hourly data. Intercept and dummies included

Sample size T Probability that t24 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.99 -2.73 -2.45 -2.07 -0.23 -0.06 0.37 0.75
T=240 -3.24 -2.91 -2.64 -2.36 -0.34 -0.04 0.25 0.64
T=360 -3.30 -2.98 -2.74 -2.45 -0.39 -0.04 0.26 0.63
T=480 -3.30 -3.00 -2.76 -2.47 -0.39 -0.05 0.23 0.63
T= ∞ -3.46 -3.15 -2.87 -2.58 -0.44 -0.12 0.20 0.55

Table 24: Critical values for the F-statistics of hourly data. Intercept and dummies included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.05 0.12 0.23 0.42 3.68 4.45 5.23 6.20
T=240 0.09 0.20 0.35 0.62 4.71 5.57 6.42 7.48
T=360 0.09 0.21 0.39 0.68 5.02 5.95 6.83 7.97
T=480 0.10 0.23 0.41 0.71 5.15 6.11 7.11 8.10
T= ∞ 0.09 0.23 0.44 0.77 5.60 6.60 7.57 8.54

Table 25: Critical values for the F-statistics of hourly data. Intercept and dummies included
Sample size T Probability that $F_{all} = \frac{1}{24}\sum_{k=1}^{24} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 1.02 1.12 1.23 1.35 2.50 2.72 2.93 3.20
T=240 1.45 1.58 1.70 1.87 3.15 3.37 3.57 3.85
T=360 1.60 1.74 1.88 2.02 3.38 3.60 3.81 4.04
T=480 1.66 1.80 1.93 2.09 3.46 3.69 3.89 4.16
T= ∞ 1.89 2.03 2.17 2.33 3.76 4.00 4.21 4.42

Table 26: Critical values for the t1 of hourly data. Intercept, trend and dummies included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -3.34 -3.05 -2.80 -2.53 -0.79 -0.50 -0.33 -0.12
T=240 -3.59 -3.34 -3.09 -2.82 -1.08 -0.80 -0.56 -0.27
T=360 -3.77 -3.45 -3.18 -2.91 -1.14 -0.82 -0.54 -0.24
T=480 -3.85 -3.51 -3.25 -2.99 -1.16 -0.87 -0.59 -0.26
T= ∞ -3.98 -3.68 -3.41 -3.11 -1.22 -0.92 -0.65 -0.30

Table 27: Critical values for the t24 of hourly data. Intercept, trend and dummies included

Sample size T Probability that t24 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 -2.89 -2.54 -2.31 -2.05 -0.17 0.15 0.45 0.83
T=240 -3.13 -2.86 -2.62 -2.33 -0.34 0.01 0.29 0.66
T=360 -3.34 -3.00 -2.71 -2.41 -0.34 0.00 0.27 0.63
T=480 -3.29 -2.99 -2.72 -2.45 -0.40 -0.07 0.24 0.63
T= ∞ -3.38 -3.08 -2.84 -2.55 -0.44 -0.07 0.29 0.70

Table 28: Critical values for the F-statistics of hourly data. Intercept, trend and dummies included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 0.05 0.12 0.23 0.42 3.68 4.46 5.21 6.22
T=240 0.09 0.20 0.37 0.63 4.65 5.53 6.35 7.41
T=360 0.10 0.22 0.40 0.69 4.98 5.90 6.78 7.88
T=480 0.10 0.23 0.41 0.71 5.17 6.11 7.02 8.11
T= ∞ 0.10 0.25 0.45 0.82 5.55 6.56 7.61 8.87

Table 29: Critical values for the F-statistics of hourly data. Intercept, trend and dummies included
Sample size T Probability that $F_{all} = \frac{1}{24}\sum_{k=1}^{24} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=120 1.03 1.16 1.26 1.38 2.59 2.81 3.02 3.34
T=240 1.52 1.65 1.77 1.93 3.23 3.46 3.67 3.90
T=360 1.66 1.82 1.94 2.11 3.45 3.65 3.86 4.09
T=480 1.74 1.89 2.03 2.20 3.54 3.77 3.99 4.22
T= ∞ 1.88 2.06 2.21 2.40 3.84 4.07 4.28 4.54

A4.2 Critical values for daily data when different deterministic components are included.

Table 30: Critical values for the t1 of daily data. Only intercept included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 -3.40 -3.05 -2.78 -2.51 -0.38 -0.02 0.29 0.60
T=280 -3.45 -3.10 -2.83 -2.52 -0.40 -0.06 0.23 0.55
T=420 -3.40 -3.12 -2.88 -2.56 -0.43 -0.06 0.26 0.60
T=560 -3.41 -3.12 -2.89 -2.55 -0.44 -0.08 0.25 0.61
T= ∞ -3.41 -3.13 -2.91 -2.57 -0.45 -0.09 0.26 0.61

Table 31: Critical values for the F-statistics of daily data. Only intercept included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.01 0.03 0.05 0.11 2.29 3.01 3.73 4.64
T=280 0.01 0.03 0.06 0.11 2.34 3.03 3.70 4.55
T=420 0.01 0.03 0.05 0.11 2.37 3.08 3.75 4.59
T=560 0.01 0.03 0.06 0.12 2.40 3.10 3.82 4.70
T= ∞ 0.01 0.03 0.05 0.12 2.40 3.13 3.83 4.70

Table 32: Critical values for the F-statistics of daily data. Only intercept included
Sample size T Probability that $F_{all} = \frac{1}{7}\sum_{k=1}^{7} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.27 0.35 0.45 0.58 2.09 2.42 2.76 3.19
T=280 0.28 0.37 0.46 0.59 2.13 2.43 2.75 3.13
T=420 0.27 0.36 0.45 0.59 2.18 2.50 2.81 3.23
T=560 0.27 0.37 0.47 0.61 2.19 2.50 2.80 3.17
T= ∞ 0.28 0.39 0.48 0.63 2.19 2.55 2.83 3.21

Table 33: Critical values for the t1 of daily data. Intercept and trend included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 -3.96 -3.64 -3.35 -3.07 -1.18 -0.88 -0.58 -0.26
T=280 -3.96 -3.66 -3.40 -3.10 -1.20 -0.89 -0.61 -0.27
T=420 -3.93 -3.62 -3.37 -3.11 -1.24 -0.93 -0.66 -0.30
T=560 -3.94 -3.66 -3.40 -3.11 -1.24 -0.93 -0.66 -0.29
T= ∞ -3.96 -3.66 -3.40 -3.12 -1.25 -0.95 -0.69 -0.30

Table 34: Critical values for the F-statistics of daily data. Intercept and trend included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.01 0.03 0.05 0.11 2.30 2.98 3.67 4.62
T=280 0.01 0.03 0.05 0.11 2.35 3.05 3.76 4.68
T=420 0.01 0.03 0.05 0.12 2.39 3.08 3.78 4.60
T=560 0.01 0.03 0.05 0.12 2.42 3.10 3.80 4.60
T= ∞ 0.01 0.03 0.06 0.12 2.47 3.15 3.81 4.61

Table 35: Critical values for the F-statistics of daily data. Intercept and trend included
Sample size T Probability that $F_{all} = \frac{1}{7}\sum_{k=1}^{7} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.42 0.54 0.66 0.81 2.50 2.85 3.18 3.60
T=280 0.41 0.53 0.66 0.83 2.54 2.89 3.22 3.66
T=420 0.44 0.55 0.68 0.82 2.57 2.91 3.24 3.62
T=560 0.42 0.55 0.66 0.83 2.56 2.92 3.25 3.69
T= ∞ 0.45 0.56 0.67 0.84 2.55 2.92 3.26 3.67

Table 36: Critical values for the t1 of daily data. Intercept and dummies included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 -3.33 -3.04 -2.76 -2.47 -0.38 -0.01 0.29 0.63
T=280 -3.36 -3.07 -2.76 -2.49 -0.39 -0.03 0.25 0.56
T=420 -3.39 -3.09 -2.83 -2.52 -0.40 -0.05 0.21 0.63
T=560 -3.41 -3.08 -2.83 -2.54 -0.44 -0.09 0.23 0.60
T= ∞ -3.45 -3.13 -2.87 -2.57 -0.47 -0.10 0.22 0.61

Table 37: Critical values for the F-statistics of daily data. Intercept and dummies included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.11 0.23 0.41 0.72 5.23 6.20 7.18 8.27
T=280 0.10 0.23 0.43 0.75 5.33 6.38 7.36 8.51
T=420 0.11 0.25 0.45 0.77 5.51 6.53 7.50 8.61
T=560 0.11 0.25 0.44 0.77 5.53 6.55 7.51 8.63
T= ∞ 0.12 0.26 0.44 0.77 5.60 6.58 7.57 8.65

Table 38: Critical values for the F-statistics of daily data. Intercept and dummies included
Sample size T Probability that $F_{all} = \frac{1}{7}\sum_{k=1}^{7} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.92 1.15 1.36 1.62 4.11 4.57 4.98 5.45
T=280 0.99 1.22 1.44 1.70 4.23 4.69 5.14 5.64
T=420 0.98 1.24 1.44 1.70 4.29 4.79 5.17 5.64
T=560 1.04 1.26 1.47 1.73 4.33 4.78 5.19 5.70
T= ∞ 1.06 1.25 1.49 1.76 4.32 4.82 5.21 5.72

Table 39: Critical values for the t1 of daily data. Intercept, trend and dummies included

Sample size T Probability that t1 is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 -3.91 -3.60 -3.32 -3.03 -1.17 -0.88 -0.58 -0.29
T=280 -3.93 -3.62 -3.36 -3.10 -1.22 -0.92 -0.63 -0.27
T=420 -3.94 -3.65 -3.39 -3.07 -1.22 -0.92 -0.66 -0.32
T=560 -3.93 -3.64 -3.39 -3.10 -1.24 -0.92 -0.65 -0.33
T= ∞ -3.99 -3.66 -3.41 -3.11 -1.24 -0.92 -0.66 -0.33

Table 40: Critical values for the F-statistics of daily data. Intercept, trend and dummies included

Sample size T Probability that Fm,m+1 , m = meven is less than entry


0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 0.10 0.22 0.40 0.70 5.22 6.17 7.11 8.23
T=280 0.11 0.24 0.42 0.74 5.41 6.40 7.35 8.60
T=420 0.11 0.26 0.45 0.77 5.48 6.47 7.47 8.62
T=560 0.12 0.26 0.45 0.78 5.53 6.53 7.47 8.63
T= ∞ 0.10 0.26 0.48 0.85 5.55 6.56 7.59 8.80

Table 41: Critical values for the F-statistics of daily data. Intercept, trend and dummies included
Sample size T Probability that $F_{all} = \frac{1}{7}\sum_{k=1}^{7} t_k^2$ is less than entry
0.01 0.025 0.05 0.10 0.90 0.95 0.975 0.99
T=140 1.08 1.34 1.56 1.84 4.43 4.92 5.34 5.91
T=280 1.21 1.44 1.66 1.93 4.60 5.06 5.49 6.08
T=420 1.23 1.46 1.69 2.00 4.65 5.14 5.59 6.05
T=560 1.23 1.47 1.69 1.99 4.68 5.16 5.60 6.16
T= ∞ 1.25 1.48 1.75 2.02 4.74 5.21 5.71 6.18

References

Beaulieu, J.J., & Miron, J.A., 1992. Seasonal unit roots in aggregate U.S. data. National Bureau of Economic Research, Technical Paper No. 126.

Chan, N.H., & Wei, C.Z., 1988. Limiting distributions of least squares estimates of unstable autoregressive processes. Annals of Statistics, 16, 367-401.

Dickey, D.A., Hasza, D.P., & Fuller, W.A., 1984. Testing for unit roots in seasonal time series. Journal of the American Statistical Association, 79, 355-367.

Franses, P.H., 1990. Testing for seasonal unit roots in monthly data. Technical Report 9032/A, Econometric Institute, Erasmus University, Rotterdam.

Franses, P.H., 1991. Seasonality, nonstationarity and forecasting of monthly time series. International Journal of Forecasting, 7, 199-208.

Ghysels, E., Lee, H.S., & Noh, J., 1994. Testing for seasonal unit roots in seasonal time series. Journal of Econometrics, 62, 415-442.

Hamilton, J.D., 1994. Time Series Analysis. Princeton: Princeton University Press.

Hylleberg, S., Engle, R.F., Granger, C.W.J., & Yoo, B.S., 1990. Seasonal integration and cointegration. Journal of Econometrics, 44, 215-238.

Osborn, D.R., Chui, A.P.L., Smith, J.P., & Birchenhall, C.R., 1988. Seasonality and the order of integration for consumption. Oxford Bulletin of Economics and Statistics, 50, 361-377.

Park, J.Y., & Phillips, P.C.B., 1988. Statistical inference in regressions with integrated processes: Part 1. Econometric Theory, 4, 468-497.

Psaradakis, Z., 1997. Testing for unit roots in time series with nearly deterministic seasonal variation. Econometric Reviews, 16(4), 421-439.

Rodrigues, P.M.M., & Osborn, D.R., 1999. Performance of seasonal unit root tests for monthly data. Journal of Applied Statistics, 26, 985-1004.

White, H., 2001. Asymptotic Theory for Econometricians. San Diego: Academic Press.
