ECONOMIC ISSUES, PROBLEMS AND PERSPECTIVES SERIES

ECONOMIC FORECASTING

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or
by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no
expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No
liability is assumed for incidental or consequential damages in connection with or arising out of information
contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in
rendering legal, medical or any other professional services.
ECONOMIC ISSUES, PROBLEMS AND
PERSPECTIVES SERIES

Trust, Globalisation and Market Expansion


Jacques-Marie Aurifeille, Christopher Medlin, and Clem Tisdell
2009. ISBN: 978-1-60741-812-2

TARP in the Crosshairs: Accountability in the Troubled Asset Relief Program


Paul W. O'Byrne (Editor)
2009. ISBN: 978-1-60741-807-8

TARP in the Crosshairs: Accountability in the Troubled Asset Relief Program


Paul W. O'Byrne (Editor)
2009. ISBN: 978-1-60876-705-2 (Online book)

Government Interventions in Economic Emergencies


Pablo Sastre (Editor)
2010. ISBN: 978-1-60741-356-1

NAFTA Stock Markets: Dynamic Return and Volatility Linkages


Giorgio Canarella, Stephen M. Miller and Stephen K. Pollard
2010. ISBN: 978-1-60876-498-3

Economic Forecasting
Alan T. Molnar (Editor)
2010. ISBN: 978-1-60741-068-3
ECONOMIC ISSUES, PROBLEMS AND PERSPECTIVES SERIES

ECONOMIC FORECASTING

ALAN T. MOLNAR
EDITOR

Nova Science Publishers, Inc.


New York
Copyright © 2010 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or
transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical
photocopying, recording or otherwise without the written permission of the Publisher.

For permission to use material from this book please contact us:
Telephone 631-231-7269; Fax 631-231-8175
Web Site: http://www.novapublishers.com

NOTICE TO THE READER


The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or
implied warranty of any kind and assumes no responsibility for any errors or omissions. No
liability is assumed for incidental or consequential damages in connection with or arising out of
information contained in this book. The Publisher shall not be liable for any special,
consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or
reliance upon, this material. Any parts of this book based on government reports are so indicated
and copyright is claimed for those parts to the extent applicable to compilations of such works.

Independent verification should be sought for any data, advice or recommendations contained in
this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage
to persons or property arising from any methods, products, instructions, ideas or otherwise
contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the
subject matter covered herein. It is sold with the clear understanding that the Publisher is not
engaged in rendering legal or any other professional services. If legal or any other expert
assistance is required, the services of a competent person should be sought. FROM A
DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE
AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Economic forecasting / editor, Alan T. Molnar.


p. cm.
Includes index.
ISBN 978-1-61122-478-8 (eBook)
1. Economic forecasting--Mathematical models. I. Molnar, Alan T.
HB3730.E243 2009
330.01'5195--dc22
2009032429

Published by Nova Science Publishers, Inc. New York


CONTENTS

Preface vii

Chapter 1 Temporal Disaggregation of Time Series—A Review 1
Jose M. Pavía

Chapter 2 Econometric Modelling and Forecasting of Private Housing Demand 29
James M.W. Wong and S. Thomas Ng

Chapter 3 Signal and Noise Decomposition of Nonstationary Time Series 55
Terence C. Mills

Chapter 4 A Cost of Capital Analysis of the Gains from Securitization 107
Hugh Thomas and Zhiqiang Wang

Chapter 5 The Nonparametric Time-Detrended Fisher Effect 135
Heather L.R. Tierney

Chapter 6 Forecasting Ability and Strategic Behavior of Japanese Institutional Forecasters 163
Masahiro Ashiya

Chapter 7 Qualitative Survey Data on Expectations. Is There an Alternative to the Balance Statistic? 181
Oscar Claveria

Chapter 8 Analyst Origin and Their Forecasting Quality on the Latin American Stock Markets 191
Jean-François Bacmann and Guido Bolliger

Chapter 9 Modeling and Forecasting Income Tax Revenue: The Case of Uzbekistan 213
Marat Ibragimov, Rustam Ibragimov and Nishanbay Sirajiddinov

Chapter 10 Forecasting the Unconditional and Conditional Kurtosis of the Asset Returns Distribution 229
Trino Manuel Ñíguez, Javier Perote and Antonio Rubia

Chapter 11 Transporting Turkish Exam Takers: A New Use for an Old Model 249
Nurhan Davutyan and Mert C. Demir

Index 263
PREFACE

Economic forecasting is the process of making predictions about the economy as a whole
or in part. Such forecasts may be made in great detail or may be very general. In either case,
they describe the expected future behaviour of all or part of the economy and help form the
basis of planning. Economic forecasting is of immense importance because any economic
system is a stochastic entity of great complexity, and one vital to national development in the
information age. Forecasts are required for two basic reasons: the future is uncertain, and the
full impact of many decisions taken now is not felt until later. Consequently, accurate
predictions of the future improve the efficiency of the decision-making process. In
particular, knowledge of future demand for products and services is imperative to all
industries, since it is a prerequisite for any viable corporate strategy. This new and important
book gathers the latest research from around the globe in this field and related topics, such as
the econometric modeling and forecasting of private housing demand and the nonparametric
time-detrended Fisher effect.
One of the biggest concerns of an economic analyst is to understand the conditions that an
economy is experiencing at any given time, monitoring them properly in order to anticipate
possible changes. However, even though social and economic life has quickened and become
more turbulent, many relevant economic variables are not available at the desired frequency.
A great number of methods, procedures and algorithms have therefore been proposed in the
literature specifically to transform a low-frequency series into a high-frequency one.
Moreover, a non-negligible number of proposals have also been adapted to deal with the
related problems of interpolation, distribution and extrapolation of time series. Thus, in order
to bring some order to the subject and to convey the current state of the art on the topic,
Chapter 1 offers a review of the historical evolution of the temporal disaggregation problem,
analysing and classifying the proposals. This permits one to decide which method to use
under which circumstances, and it concludes by identifying the topics in need of further
development and commenting on possible future research directions in this field.
Governments, corporations and institutions all need to prepare various types of forecasts
before any policies or decisions are made. In particular, since housing constitutes a
significant sector of an economy, the importance of predicting the movement of the private
residential market is undeniable. However, it is well recognised that housing demand is
volatile and may fluctuate dramatically according to general economic conditions. As
globalisation continues to dissolve boundaries across the world, more economies are
increasingly subjected to external shocks. Fluctuations in the level of housing demand
frequently cause significant rippling effects in the economy, as the housing sector is
associated with many other economic sectors. Econometric models are thus developed to
assist policy-makers and relevant stakeholders in assessing future housing demand in order
to formulate suitable policies.
With the rapid development of econometric approaches, their robustness and
appropriateness as a modelling technique for examining the dynamic relationship between
the housing market and its determinants have become evident. Chapter 2 applies
cointegration analysis and Johansen and Juselius's vector error correction (VEC) model
framework to housing demand forecasting in Hong Kong. The volatility of demand in
response to dynamic changes in relevant macro-economic and socio-economic variables is
considered. In addition, an impulse response function and a variance decomposition analysis
are employed to trace the sensitivity of housing demand over time to shocks in the
macro-economic and socio-economic variables. This econometric time-series modelling
approach surpasses other methodologies for forecasting purposes through its dynamic nature
and its sensitivity to a variety of factors affecting the output of the economic sector, taking
into account indirect and local inter-sectoral effects.
Empirical results indicated that housing demand and the associated economic factors
(housing prices, mortgage rate, and GDP per capita) are cointegrated in the long run. Other
key macro-economic and socio-economic indicators, including income, inflation, stock
prices, employment and population, were also examined but found to be insignificant in
influencing housing demand. A dynamic and robust housing demand forecasting model is
developed using the VEC model. Housing prices and the mortgage rate are found to be the
most important and significant factors determining the quantity of housing demanded.
Findings from the impulse response analyses and variance decomposition under the VEC
model further confirm that the housing price terms have a relatively large and sensitive
impact, although at different time intervals, on the volume of housing transactions in Hong
Kong. Addressing these two attributes is critical to the formulation of both short- and long-
term housing policies that can satisfy the expected demand effectively.
The research contributes knowledge to the academic field, as the area of housing demand
forecasting using advanced econometric modelling techniques is currently under-explored.
This study has developed a theoretical model that traces the cause-and-effect chain between
housing demand and its determinants, which is relevant to the current needs of the real estate
market and significant to the economy's development. It is envisaged that the results of this
study could enhance the understanding of advanced econometric modelling methodologies,
the factors affecting housing demand, and various housing economic issues.
The decomposition of a time series into components representing trend, cycle, seasonal,
etc., has a long history. Such decompositions can provide a formal framework in which to
model an observed time series and hence enable forecasts of both the series and its
components to be computed along with estimates of precision and uncertainty. Chapter 3
provides a short historical background to time series decomposition before setting out a
general framework. It then discusses signal extraction from ARIMA and unobserved
component models. The former includes the Beveridge-Nelson filter and smoother and
canonical decompositions. The latter includes general structural models and their associated
state space formulations and the Kalman filter, the classical trend filters of Henderson and
Macaulay that form the basis of the X-11 seasonal adjustment procedure, and band-pass and
low-pass filters such as the Hodrick-Prescott, Baxter-King and Butterworth filters. An
important problem for forecasting is to be able to deal with finite samples and to be able to
adjust filters as the end of the sample (i.e., the current observation) is reached. Trend
extraction and forecasting under these circumstances for a variety of approaches will be
discussed and algorithms presented. The variety of techniques will be illustrated by a
sequence of examples that use typical economic time series.
In Chapter 4 we study the gains that securitizing companies enjoy. We express the
gains as a spread between two costs of capital: the weighted cost of capital of the asset-selling
firm and the all-in, weighted average cost of the securitization. We calculate the spread for
1,713 securitizations and regress those gains on asset-seller characteristics. We show that the
gains are increasing in the amount of asset-backed securities originated but decreasing in
the percentage of the balance sheet that the originator has outstanding. Companies that off-lay
the risk of the sold assets (i.e., those retaining no subordinate interest in the SPV) pick their
best assets to securitize. Companies that do not off-lay risk gain more from securitization the
less liquid they are. We find that securitization is a substitute for leverage and that those
companies that use more conventional leverage benefit less from securitization.
Chapter 5 uses frontier nonparametric VAR techniques to investigate whether the Fisher
Effect holds in the U.S. The Fisher Effect is examined taking into account structural breaks
and nonlinearities between nominal interest rates and inflation, which are trend-stationary in
the two samples examined. The nonparametric time-detrended test for the Fisher Effect is
formed from the cumulative orthogonal dynamic multiplier ratios of inflation to nominal
interest rates. If the Fisher Effect holds, this ratio statistically approaches one as the horizon
goes to infinity. The nonparametric techniques developed in this chapter conclude that the
Fisher Effect holds for both samples examined.
Chapter 6 investigates the effect of forecasting ability on forecasting bias among
Japanese GDP forecasters. Trueman (1994, Review of Financial Studies, 7(1), 97-124) argues
that an incompetent forecaster tends to discard his private information and release a forecast
that is close to the prior expectation and the market average forecast. Clarke and Subramanian
(2006, Journal of Financial Economics, 80, 81-113) find that a financial analyst issues bold
earnings forecasts if and only if his past performance is significantly different from that of his
peers. This chapter examines a twenty-eight-year panel of annual GDP forecasts and obtains
evidence supportive of Clarke and Subramanian (2006). Our results indicate that conventional
tests of rationality are biased toward rejecting the rational expectations hypothesis.
As explained in Chapter 7, Monte Carlo simulations make it possible to isolate the
measurement error introduced by incorrect assumptions when quantifying survey results. By
means of a simulation experiment, we test whether a variation of the balance statistic
outperforms the balance statistic in tracking the evolution of agents' expectations, and
whether it produces more accurate forecasts of the quantitative variable used as a benchmark.
Chapter 8 investigates the relative performance of local, foreign, and expatriate financial
analysts on Latin American emerging markets. We measure analysts' relative performance
along three dimensions: (1) forecast timeliness, (2) forecast accuracy and (3) impact of
forecast revisions on security prices. Our main findings can be summarized as follows.
Firstly, there is strong evidence that foreign analysts supply timelier forecasts than their
peers. Secondly, analysts working for foreign brokerage houses (i.e., expatriate and foreign
ones) produce less biased forecasts than local analysts. Finally, after controlling for analysts'
timeliness, we find that foreign financial analysts' upward revisions have a greater impact on
stock returns than both followers' and local lead analysts' forecast revisions. Overall, our
results suggest that investors should rely on the research produced by analysts working for
foreign brokerage houses when they invest in Latin American emerging markets.
Income tax revenue depends crucially on the wage distribution across and within
industries. However, many transition economies present a challenge for sound econometric
analysis due to data unavailability. Chapter 9 presents an approach to modeling and
forecasting income tax revenues in an economy where data on individual wages within
industries are missing. We consider situations where only aggregate industry-level data
and sample observations for a few industries are available. Using the example of the Uzbek
economy in 1995-2005, we show how an econometric analysis of wage distributions and the
implied tax revenues can be conducted in such settings. One of the main conclusions of the
chapter is that the distributions of wages and the implied tax revenues in the economy are
well approximated by Gamma distributions with semi-heavy tails that decay more slowly
than those of Gaussian variables.
Chapter 10 analyzes the out-of-sample ability of different parametric and semiparametric
GARCH-type models to forecast the conditional variance and the conditional and
unconditional kurtosis of three types of financial assets (stock index, exchange rate and
Treasury Note). For this purpose, we consider the Gaussian and Student-t GARCH models by
Bollerslev (1986, 1987), and two different time-varying conditional kurtosis GARCH models
based on the Student-t and a transformed Gram-Charlier density.
Chapter 11 argues that the transportation model of linear programming can be used to
administer the Public Personnel Language Exam of Turkey in many different locations
instead of just one, as is the current practice. It shows the resulting system to be much less
costly. Furthermore, once the decision about the number of locations is made, the resulting
system can be managed in either a centralized or a decentralized manner. A mixed mode of
management is outlined, some historical perspectives on the genesis of the transportation
model are offered, and some ideas regarding the reasons for the current wasteful practices are
presented. The possibility of applying the same policy reform in other MENA (Middle East
and North Africa) countries is discussed briefly.
In: Economic Forecasting ISBN: 978-1-60741-068-3
Editor: Alan T. Molnar, pp. 1-27 © 2010 Nova Science Publishers, Inc.

Chapter 1

TEMPORAL DISAGGREGATION
OF TIME SERIES—A REVIEW

Jose M. Pavía
Department of Applied Economics
University of Valencia, Spain
E-mail address: pavia@uv.es

Abstract
One of the biggest concerns of an economic analyst is to understand the conditions that an
economy is experiencing at any given time, monitoring them properly in order to anticipate
possible changes. However, even though social and economic life has quickened and become
more turbulent, many relevant economic variables are not available at the desired frequency.
A great number of methods, procedures and algorithms have therefore been proposed in the
literature specifically to transform a low-frequency series into a high-frequency one.
Moreover, a non-negligible number of proposals have also been adapted to deal with the
related problems of interpolation, distribution and extrapolation of time series. Thus, in order
to bring some order to the subject and to convey the current state of the art on the topic, this
chapter offers a review of the historical evolution of the temporal disaggregation problem,
analysing and classifying the proposals. This permits one to decide which method to use
under which circumstances, and it concludes by identifying the topics in need of further
development and commenting on possible future research directions in this field.

Keywords: Interpolation, Temporal Distribution, Extrapolation, Benchmarking, Forecasts.

1. Introduction
The problem of increasing the frequency of a time series has concerned economic
analysts for a long time. Nevertheless, the subject did not start to receive the required
attention among economists before the 1970s, despite more frequent information being of
great importance for both modelling and forecasting. Indeed, according to Zellner and
Montmarquette (1971, p. 355): “When the behavior of individuals, firms or other economic
entities is analyzed with temporally aggregated data, it is quite possible that a distorted view
of parameters’ value, lag structures and other aspects of economic behavior can be obtained.
Since policy decisions usually depend critically on views regarding parameter values, lag
decisions, etc., decisions based on results marred by temporal aggregation effects can
produce poor results”. Unfortunately, it is not unusual for some relevant variables not to be
available with the desired timeliness and frequency. Delays in the process of managing and
gathering more frequent data, the extra costs entailed in collecting variables more frequently,
and practical limitations on obtaining some statistics with higher regularity deprive analysts of
the valuable help that more frequent records would provide in performing closer and more
accurate short-term analysis. Certainly, having statistical series available at a higher
frequency would permit more timely and more precise analysis of the economy (and/or of a
company's situation), making it easier to anticipate changes and to react to them. It is not
surprising, therefore, that a number of methods, procedures and algorithms have been
proposed from different perspectives in order to increase the frequency of some critical
variables.
Obviously, business and economics are not the sole areas where such techniques are useful.
Fields as diverse as engineering, oceanography, astronomy and geology also use these
techniques and take advantage of these strategies in order to improve the quality of their
analyses. Nevertheless, this chapter will concentrate on the contributions made and used
within the economic field. There are many examples, in both macro- and microeconomics,
where having more frequent data available could be useful. As examples I cite the following:
(i) agents who deal within a certain region have annual aggregated information about the
economic evolution of the region (e.g., annual regional accounts), although they would prefer
the same information quarterly rather than annually in order to perform better short-term
analysis; (ii) in some areas, demographic figures are known every ten years, although it would
be valuable to have them annually, allowing a better match between population needs and the
provision of public services; (iii) a big company keeps quarterly records of its raw material
needs, although it would be much more useful to have that information monthly, or even
weekly, to better manage its costs; or, (iv) in econometric modelling, where some of the
relevant series are only available at lower frequencies, it can be convenient to disaggregate
these series before estimating the model, instead of estimating the complete model at the
lower frequency with the resulting loss of information (Lütkepohl, 1984; Nijman and
Palm, 1985, 1990) and of efficiency in the estimation of the parameters of the model (Palm
and Nijman, 1984; Weiss, 1984; or, Nijman and Palm, 1988a, 1988b).
In general, inside this framework and depending on the kind of variable handled (either
flow or stock), two main problems can be posed: the distribution problem and the
interpolation problem. The distribution problem appears when the observed values of a flow
low-frequency series of length T must be distributed among kT values, such that the temporal
sum of the estimated high-frequency series fits the values of the low-frequency series. The
interpolation problem consists in generating a high-frequency series whose values coincide
with those of the low-frequency series at the temporal moments where the latter is observed.
In both cases, when estimates are extended beyond the period covered by the low-frequency
series, the problem is called extrapolation. Extrapolation is used, therefore, to forecast values
of the high-frequency series when no temporal constraints from the short series are available,
although in some cases (especially in multivariate contexts) other forms of constraints can
exist. Furthermore, related to distribution problems, one finds benchmarking and balancing
(mainly used in management and by official statistical agencies to adjust the values of a
high-frequency series of ballpark figures, usually obtained employing sampling techniques,
to a more accurate low-frequency series) and other temporal disaggregation problems where
the temporal aggregation function is different from the sum function. In any case, despite the
great quantity of procedures for temporal disaggregation of time series proposed in the
literature, fulfilment of the constraints derived from the observed low-frequency series is
the norm in the subject.
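To make these definitions concrete, the following is a minimal sketch in Python, with
made-up numbers that are not from the chapter. It encodes the annual-sum constraint of the
distribution problem for a flow variable; a naive even spread satisfies the constraint but
exhibits exactly the year-to-year steps that motivate the smoothing methods of Section 2.

```python
import numpy as np

# Hypothetical example: T = 3 annual totals of a flow variable,
# to be distributed among kT = 12 quarterly values (k = 4).
annual = np.array([100.0, 104.0, 112.0])
T, k = len(annual), 4

# Aggregation matrix C (T x kT): each row sums the k quarters of one year,
# so any admissible quarterly series y must satisfy C @ y = annual.
C = np.kron(np.eye(T), np.ones(k))

# A naive "distribution": spread each annual total evenly over its quarters.
y_naive = np.repeat(annual / k, k)

# The temporal constraint holds, but the series has spurious steps between years.
assert np.allclose(C @ y_naive, annual)
print(y_naive)   # [25. 25. 25. 25. 26. 26. 26. 26. 28. 28. 28. 28.]
```

For a stock variable, each row of C would instead select the single quarter at which the
series is observed, which yields the interpolation problem.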
Although temporal disaggregation methods are currently used in a great variety of
business and economic problems, most of the procedures have been developed and refined in
connection with short-term analysis. They have in fact been linked to the need to solve
problems related to the production of coherent quarterly national accounts (see, e.g., OECD,
1996; Eurostat, 1999; or, Bloem et al., 2001) and quarterly regional accounts (see, e.g.,
Pavía-Miralles and Cabrer-Borrás, 2007; and, Zaier and Trabelsi, 2007). Indeed, it is quite
probable that some of the fruitful developments expected in this topic will come from solving
the new challenges posed in this area. Nevertheless, in addition to the methods specifically
proposed to estimate quarterly and monthly accounts, a significant number of methods
suggested for estimating missing observations have also been adapted to this issue. Thus,
classifying the large variety of methods proposed in the literature emerges as a necessary
requirement for a systematic and ordered study of the numerous alternatives. In fact, a
classification is in itself a proper tool for a suitable selection of the technique in each
particular situation since, as DiFonzo and Filosa (1987, p. 10) pointed out: “It also seemed
opportune to stress the crucial importance of the fact that differing algorithms though derived
from the same research field and using the same basic information, can give rise to series
with different cyclical, seasonal and stochastic properties”.
A first division arises according to the plane from which the problem is faced: the
frequency domain or the time domain. This division, however, is not well balanced, since the
temporal perspective has been by and large the more popular. On the one hand, the
procedures that deal with the problem from the temporal plane will be analyzed in Sections 2,
3, and 4. On the other hand, the methods that try to solve the problem from the spectral point
of view will be introduced in Section 5.

Another possible criterion of division is the use or not of related series, usually called
indicators. Economic events tend to become visible in different ways and to affect many
dimensions. Economic series are therefore correlated variables that do not evolve in an
isolated way. Consequently, it is not unusual that some variables available at high frequency
display fluctuations similar to those (expected) for the target series. Some methods try to take
advantage of this fact to temporally distribute the target series. Thus, the use or not of
indicators has been considered as another criterion for classification.
The procedures which deal with the series in an isolated way, computing the missing
data of the disaggregated high-frequency series taking into account only the information
given by the objective series, have been grouped under the name of methods that do not use
indicators. Different approaches and strategies have been employed to solve the problem
within this group of proposals. The first algorithms proposed were quite mechanical and
distributed the series by imposing some properties considered “interesting”. Step by step,
nevertheless, new methods (theoretically founded on the ARIMA representation of the series
to be disaggregated) progressively appeared, introducing more flexibility into the process.
Section 2 is devoted to the methods classified in this group.
Complementary to the group of techniques that do not use indicators are the procedures
based on indicators, which exploit the economic relationships between indicators and
objective series to temporally distribute the target series. This group comprises an extensive
and varied collection of methods that have had enormous success and have been widely
used. In fact, as Chow and Lin (1976, p. 720) remarked: “...there are likely to be some
related series, including dummy variables, which can usefully serve as regressors. One
should at least use a single dummy variable identically equal to one; its coefficient gives the
mean of the time series.” Moreover, as Guerrero and Martínez (1995, p. 360) said: “It is
our belief that, in practice, one can always find some auxiliary data. These data might simply
be an expected trend and seasonal behaviour of the series to be disaggregated”. Hence, the
great success of these procedures is not surprising, and the utilization of indicator-based
procedures is the rule among the agencies and governmental statistical institutes that
estimate quarterly and monthly national accounts using indirect methods. These procedures
are presented in Section 3.
Finally, and independently of their use or not of indicators, the methods that use the
Kalman filter for the estimation of the unavailable values have been grouped in another
category. The great flexibility offered by the representation of temporal processes in state
space, and the enormous possibilities these representations present for properly dealing with
log-transformations and with dynamic approximations to the issue, broadly justify a section
of their own. The procedures based on this algorithm can be found in Section 4.
It is clear that alternative classifications could be reached if different criteria had been
followed, and that any classification runs the risk of being inadequate and a bit artificial.
Moreover, the categorization chosen does not avoid the problem of deciding where to place
some procedures or methods, which could belong to different groups and whose location
turns out to be extremely complicated. Nevertheless, the classification of the text has been
chosen because it is the belief of the author that it clarifies and eases the exposition.
Furthermore, it should be noted that no mathematical expressions have been included in the
text, in order to make the exposition quicker and easier to follow; the chapter makes a verbal
review of the several alternatives. The interested reader can consult specific mathematical
details in the numerous references cited throughout the chapter, or consult Pavía-Miralles
(2000a), who offers a revision of many of the procedures suggested before 1997, unifying
the mathematical terms.
In short, the structure of the chapter is as follows. Section 2 introduces the methods that
do not use related information. Section 3 describes the procedures based on indicators. This
section has been divided into subsections in order to handle the great quantity of methods
proposed using the related variable approach. Section 4 deals with the methods that use the
Kalman filter. Section 5 shows the procedures based on spectral developments. And finally,
Section 6 offers some concluding remarks and comments on possible future research
directions in the subject.

2. Methods that Do not Use Indicators


Despite the use of indicators being the most popular approach to the problem of temporal
disaggregation of time series, a remarkable number of methods have also been proposed in
the econometric and statistical literature that try to face the problem using exclusively the
observed low-frequency values of the time series itself. This approach comprises purely
mathematical methods and more theoretically founded model-based methods relying on the
Autoregressive Integrated Moving Average (ARIMA) representation of the series to be
disaggregated.

The first methods proposed in this group were developed as mere instruments, without
any theoretical justification, for the elaboration of quarterly (or monthly) national accounts.
These first procedures were purely ad-hoc mathematical algorithms to derive a smooth path
for the unobserved series. They constructed the high-frequency series (from now on, and
without loss of generality, assumed quarterly in order to lighten the language) from the
low-frequency series (again without loss of generality, assumed annual) according to the
properties that the series to be built was supposed to follow, imposing the annual constraints.
The design of these primary methods, nevertheless, was influenced from those first days
by the need to solve an issue that appears recurrently in the subject and that every method
suggested to disaggregate time series must tackle: the problem of spurious steps. To prevent
undesired discontinuities from one year to the next, the pioneers proposed making the
quarterly estimates belonging to a particular year depend on several annual data points. The
disaggregation methods proposed by Lisman and Sandée (1964), Zani (1970), and Greco
(1979) were devised to estimate the quarterly series corresponding to year t as a weighted
average of the annual values of periods t-1, t and t+1. They estimate the quarterly series
through a fixed weight structure, and the difference among these methods lies in their choice
of the weight matrix. Lisman and Sandée (1964) calculated the weight matrix by requiring
that the estimated series verify some a priori “interesting” properties. Zani (1970) assumed
that the curve of the quarterly estimates lies on a second-degree polynomial passing through
the origin. And Greco (1979) extended Zani's proposal to polynomials of other degrees.
Furthermore, Glejser (1966) expanded Lisman and Sandée (1964) to the case of distributing
quarterly or annual series into monthly ones, and later Almon (1988) provided, in the
econometric computer package G, a method to convert annual figures into quarterly series,
assuming that a cubic polynomial is fitted to each successive set of two points of the
low-frequency series. All these methods, however, are univariate, and it was necessary to
wait two more decades for a solution to the multivariate problem from this approach. Just
recently, Zaier and Trabelsi (2007) have extended, for both stock and flow variables, Almon's
univariate polynomial method to the multivariate case.
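To make the fixed-weight structure concrete, the sketch below applies a weight matrix of
the Lisman-Sandée type in Python. The numerical weights are invented for this illustration
(they are not the published Lisman and Sandée values); all that matters is that the columns
sum to (0, 1, 0), so that the four quarters of year t add up exactly to the annual total of year t.

```python
import numpy as np

# Illustrative fixed weights (NOT the published Lisman-Sandee values).
# Row q gives the weights of quarter q on the annual totals (A[t-1], A[t], A[t+1]).
# Columns sum to (0, 1, 0), so the four quarterly estimates add up to A[t].
W = np.array([[ 0.10, 0.20, -0.05],
              [ 0.05, 0.27, -0.07],
              [-0.05, 0.28,  0.02],
              [-0.10, 0.25,  0.10]])

def quarterly_from_annual(annual):
    """Estimate the quarters of the interior years t = 1..T-2 of a flow series."""
    out = []
    for t in range(1, len(annual) - 1):
        out.extend(W @ annual[t - 1:t + 2])
    return np.array(out)

annual = np.array([100.0, 104.0, 112.0, 118.0])
q = quarterly_from_annual(annual)          # quarters for the two interior years
print(q.reshape(-1, 4).sum(axis=1))        # [104. 112.] -> annual totals respected
```

The first and last years of the sample need separate endpoint weights, since they lack a
neighbouring annual value on one side; this boundary issue is handled more formally by the
later methods discussed below.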
Using also an ad-hoc mathematical approach, although with a different line of thinking,
Boot et al. (1967) proposed building the quarterly series by solving an optimization problem.
In particular, their method constructs the quarterly series as the solution of the minimization
of the sum of squares of either the first or the second differences of the (unknown)
consecutive quarterly values, under the condition that the annual aggregation of the estimated
series adds up to the available annual figures. Although the Boot et al. algorithms largely
reduced the subjective charge of the preceding methods, their way of solving the problem of
spurious steps was still somewhat subjective and therefore not free of criticism; on this point,
one can consult, among others, Ginsburg (1973), Nijman and Palm (1985), DiFonzo and
Filosa (1987), and Pavía-Miralles (2000a).
In the same way that the polynomial procedures were generalized, the Boot et al.
approach was also extended, in both flexibility and the number of series handled. Cohen et al.
(1971) extended Boot et al.'s work by introducing flexibility in two ways: on the one hand,
they dealt with any possible pair of high and low frequencies; on the other hand, they
considered the minimization of the sum of squares of the ith differences between successive
subperiod values (not only first and second differences). The multivariate extension,
nevertheless, was introduced in Pavía et al. (2000).
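Both the original Boot et al. criterion and its generalization to higher-order differences
reduce to the same computation: a quadratic program with linear equality constraints, solvable
in one shot from its first-order (KKT) conditions. The sketch below, with illustrative numbers
and a function name of our own, is one way to write it in Python.

```python
import numpy as np

def boot_et_al(annual, k=4, h=1):
    """Distribute annual totals into k subperiods by minimizing the sum of
    squared h-th differences of the subperiod series (Boot et al., 1967;
    arbitrary difference order h as in Cohen et al., 1971)."""
    T = len(annual)
    n = k * T
    # D: (n-h) x n matrix of h-th differences; the loss is ||D y||^2 = y' Q y.
    D = np.eye(n)
    for _ in range(h):
        D = np.diff(D, axis=0)
    Q = D.T @ D
    # C: aggregation constraints C y = annual.
    C = np.kron(np.eye(T), np.ones(k))
    # KKT system for: min y'Qy  subject to  Cy = annual.
    kkt = np.block([[2 * Q, C.T], [C, np.zeros((T, T))]])
    rhs = np.concatenate([np.zeros(n), annual])
    return np.linalg.solve(kkt, rhs)[:n]

annual = np.array([100.0, 104.0, 112.0])
y = boot_et_al(annual, h=1)
print(np.round(y, 2))                  # smooth quarterly path, no spurious steps
print(y.reshape(-1, 4).sum(axis=1))    # [100. 104. 112.] annual sums preserved
```

With h = 1 the result is the flattest quarterly path consistent with the annual totals; h = 2
instead penalizes changes in slope, giving a smoother, more polynomial-like profile.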
In addition to the abovementioned ad-hoc mathematical algorithms, many other
methods can also be classified within the group of techniques that base the estimates
exclusively on the observed data of the target series. Among them, this section highlights
Doran (1974) and Stram and Wei (1986). On the one hand, Doran (1974) assumed that
there is a part of the sample period where the target series is observed at its higher
frequency (generally towards the end of the sample) and proposed using this
subsample to estimate the temporal characteristics of the series and then employing this
information to obtain the unavailable values. This strategy, however, is not optimal, as
Chow and Lin (1976) proved. Chow and Lin (1976) adapted the estimator suggested in Chow
and Lin (1971) to the same situation treated by Doran and showed that Doran's method
generated estimates with larger mean square errors. On the other hand, Stram and Wei (1986)
proposed obtaining the target series by minimizing a quadratic function defined by the inverse
covariance matrix associated with the quarterly stationary ARMA(p,q) process obtained by
differencing the non-stationary one. In particular, they suggested fitting an ARIMA
model to the low-frequency series, selecting an ARIMA(p,d,q) model (an ARIMA process
with autoregressive order p, integration order d, and moving average order q) for the quarterly
values compatible with the annual model, and minimizing the loss function in the dth
differences of the objective series, with the annual series as constraint.
Stram and Wei's proposal, moreover, made it possible to reassess Boot et al. (1967). As
they showed, the Boot et al. algorithm is equivalent to using their procedure under the
assumption that the series to be estimated follows an integrated process of order one or two
(i.e., I(1) or I(2)). According to Rodríguez-Feijoo et al. (2003), however, the Stram and Wei
method only performs well when the series are long enough to permit a proper estimation of
the ARIMA process. In this line, to weigh the advantages and disadvantages of these methods
and decide which to use under which circumstances, Rodríguez-Feijoo et al. (2003) can be
consulted: they performed a simulation exercise analysing the methods proposed by Lisman
and Sandée (1964), Zani (1970), Boot et al. (1967), Denton (1971) (in its variant without
indicators), Stram and Wei (1986), and Wei and Stram (1990).
Finally, it must be noted that although the approach using ARIMA processes has
borne much other fruit, no further proposals are presented in this section. On the one
hand, those procedures based on ARIMA models which take advantage of the state-space
representation and use the Kalman filter and smoothing techniques to estimate both the
coefficients of the process and the unobserved values of the high-frequency series have
been placed in Section 4. On the other hand, those approaches that try to deduce the ARIMA
process of the high-frequency series from the ARIMA model of the low-frequency series, as a
strategy to estimate the missing values, are presented in Section 3. Notice that this last strategy
can be seen as a particular case of a dynamic regression model with missing observations in
the dependent variable and without indicators, whose general situation with indicators is
introduced in the next section.

3. Methods Based on Indicators


The methods based on related variables have been the most popular, the most widely
used, and the most successful; a great number of procedures can thus be found within this
category. Compared with the algorithms not based on indicators, related-variable procedures
have been credited with two principal advantages: (i) they rest on better-founded construction
hypotheses, which favours validation of the results; and (ii) they make use of relevant
economic and statistical information, and are therefore more efficient. Although, as
Nasse (1973) observed, in return they hide an implicit hypothesis according to which the
annual relationship is accepted as also holding on a quarterly basis. Nevertheless, as
Friedman (1962, p. 731) pointed out: ‘a particular series Y is of course chosen to use in
interpolation because its intrayearly movements are believed to be highly correlated with the
intrayearly movements of X’. In any case, in addition to Nasse's observation, it must be noted
that the resulting estimates depend crucially on the indicators chosen, and therefore special
care should be taken in selecting them. To address this issue, Chang and Liu (1951) tried as
early as 1951 to establish some criteria the indicators should fulfil; nevertheless, the debate,
far from closed, has remained open for decades (see, e.g., Nasse 1970, 1973; Bournay and
Laroque, 1979; INE, 1993; OECD, 1996; or, Pavía et al., 2000). For example, although the
use of indicators to predict the probable evolution of key series throughout the quarters of a
year is the rule among the countries that estimate quarterly accounts by indirect methods (a
summary of the indicators used by the different national statistics agencies can be found in
OECD, 1996, pp. 22-37), there are no apparent universal criteria for selecting them. This,
however, does not mean that no sound criteria have been proposed. In particular, in relation
to the elaboration of quarterly accounts, Pavía-Miralles and Cabrer-Borrás (2007, p. 161)
pointed out that “…indicators, available monthly or quarterly, that verified —at least in an
approximate way— the following properties: (a) economic implication, (b) representation or
greatest coverage, (c) maintenance of a ‘constant’ relation with the regional series being
estimated, (d) quick availability and short lag, (f) acceptable series length, and (g) smooth
profile or predominance of the trend-cycle signal” must be chosen; to which one could add
statistical quality and an intrayear evolution similar to that of the objective series. Despite the
great debate about indicators, very few tests of their validity can be found in the literature.
As an exception, INE (1993, p. 12) offers a statistical test of the accuracy of the indicators
selected to estimate quarterly accounts.
This section has been divided into three subsections to better manage the large quantity of
procedures classified in this category. The first subsection is devoted to those procedures,
called adjusting methods, which, given an initial approximation of the target series, adjust its
values using some penalty function in order to fulfil the annual constraints. Subsection 2
presents the procedures that take advantage of structural or econometric models (including
some techniques using dynamic regression models in the identification of the relationship
linking the series to be estimated and the (set of) related time series) to approximate the
incompletely observed variables. According to Jacob (1994), the structural model may take
the form of a (simple) time series model or a regression model with other variables, where the
estimates are obtained as a by-product of the parameter estimation of the model. Finally,
subsection three shows those methods, named optimal methods, which jointly obtain the
estimates of both parameters and quarterly series by combining the target annual series and
quarterly indicators and incorporating the annual constraints in the process of estimation;
these are basically Chow and Lin (1971) and its extensions, of which a brief numerical sketch
is given below.
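The following is a minimal sketch of a Chow-Lin (1971) type estimator in Python, under
the simplifying assumption of a known AR(1) parameter for the quarterly disturbances (in
practice this parameter is estimated from the aggregated model); the indicator matrix, the
numbers, and the function name are all illustrative.

```python
import numpy as np

def chow_lin(annual, X, rho=0.75, k=4):
    """Chow-Lin (1971) type distribution of annual totals using quarterly
    indicators X (n x m), assuming AR(1) quarterly errors with known rho."""
    n = X.shape[0]
    T = len(annual)
    C = np.kron(np.eye(T), np.ones(k))          # annual-sum aggregation
    # AR(1) covariance of the quarterly disturbances: V[i, j] ~ rho^|i-j|.
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Xa, Va = C @ X, C @ V @ C.T                 # aggregated regressors/covariance
    Vai = np.linalg.inv(Va)
    # GLS estimate of beta on the observable annual model.
    beta = np.linalg.solve(Xa.T @ Vai @ Xa, Xa.T @ Vai @ annual)
    # BLUE: quarterly fit plus annual GLS residuals distributed across quarters.
    return X @ beta + V @ C.T @ Vai @ (annual - Xa @ beta)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(12), np.linspace(1, 12, 12) + rng.normal(0, .2, 12)])
annual = np.array([100.0, 104.0, 112.0])
y = chow_lin(annual, X)
print(y.reshape(-1, 4).sum(axis=1))             # annual totals reproduced exactly
```

Aggregating the output with C reproduces the annual totals exactly, because the annual GLS
residuals are distributed back to the quarters through the assumed error covariance.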

3.1. Adjusting Methods

In general, the adjusting methods are composed of two stages: in the first step an initial
approximation of the objective series is obtained; in the second step these first estimates are
adjusted by imposing the constraints derived from the available, more reliable annual series.
The initial estimates are reached using either sampling procedures or some kind of
relationship between indicators and target series. When the initial estimates come from
surveys, in addition to adjusting procedures the so-called benchmarking and balancing
techniques are usually employed (see, e.g., Dagum and Cholette, 2006; and, Särndal, 2007);
although it must be noted that the frontier between benchmarking and adjustment algorithms
is unclear and somewhat artificial (see, DiFonzo and Marini, 2005). Among the options that
use related variables to obtain initial approximations, both non-correlated and correlated
strategies can be found. The non-correlated proposals, historically the first ones, do not
explicitly take into account the existing correlation between target series and indicators
(Friedman (1962) can be consulted for a wide summary of those first algorithms). The
correlated strategies, on the other hand, usually assume a linear relationship between the
objective series and the indicators, from which an initial high-frequency series is obtained.

Once the initial approximation is available, it is adjusted to make it congruent with the
observed annual series. The discrepancies between the two annual series (the observed series
and the series obtained by annual aggregation of the initial quarterly estimates) are then
removed. A great quantity of adjustment procedures can be found in the literature. Bassie
(1958, pp. 653-61) proposed distributing annual discrepancies by a structure of fixed weights.
Such a structure is calculated taking into account the discrepancies corresponding to two
successive years and assuming that the weight function follows a third-degree polynomial.
Despite its having no theoretical support, and Bassie himself recognizing that the method
spawns series with irregularities and cyclical components different from the initial
approximations when the annual discrepancies are too big (OCDE, 1966, p. 21), Bassie's
proposal has historically been applied to series of the Italian economy (ISCO, 1965; ISTAT,
1983), and Finland and Denmark currently use variants of this method to adjust their
quarterly GDP series (OECD, 1996, p. 19).
Vangrevelinghe (1966) took a different approach. His proposal (primarily suggested to
estimate the French quarterly household consumption series) consists of (i) applying Lisman
and Sandée (1964) to both the objective annual series and the indicator annual series to
obtain, respectively, an initial approximation and a control series, and then (ii) modifying the
initial estimate by adding the discrepancies between the observed quarterly indicator and the
control series, using as scale factor the Ordinary Least Squares (OLS) estimator of the linear
observed annual model. Later, minimal variations of Vangrevelinghe's method were proposed
by Ginsburg (1973) and Somermeyer et al. (1976). Ginsburg suggested obtaining the initial
estimates using Boot et al. (1967) instead of Lisman-Sandée, while Somermeyer et al.
proposed generalizing Lisman and Sandée (1964) by allowing the weight structure to be
different for each quarter and year, with the weight structure obtained, using annual
constraints, from a linear model.
One of the most successful methods in the area (not only among adjusting procedures) is
the approach proposed by Denton in 1971. The fact that, according to DiFonzo (2003a, p. 2),
short-term analysis in general and quarterly accounts in particular need disaggregation
techniques that are “…flexible enough to allow for a variety of time series to be treated easily,
rapidly and without too much intervention by the producer;” and that “the statistical
procedures involved should be run in an accessible and well known, possibly user friendly,
and well sounded software program, interfacing with other relevant instruments typically
used by data producers (i.e. seasonal adjustment, forecasting, identification of regression
models,…)” explains the great attractiveness of methods such as Denton (1971) and Chow
and Lin (1971) among analysts and statistical agencies (see, e.g., Bloem et al., 2001; and
Dagum and Cholette, 1994, 2006), even though more sophisticated procedures generally
yield better estimates (Pavía-Miralles and Cabrer-Borrás, 2007).
Denton (1971) suggested adjusting the initial estimates by minimizing a loss function
defined by a quadratic form; the choice of the symmetric matrix determining the specific
quadratic form of the loss function is therefore the crucial element of Denton's proposal.
Denton concentrated on the solutions obtained by minimizing the hth differences between
the series to be estimated and the initial approximation, and found Boot et al. (1967) to be a
particular case of his algorithm. Later on, Cholette (1984) proposed a slight modification to
this family of functions to avoid dependence on the initial conditions. The main extensions of
the Denton approach, however, were those of Hillmer and Trabelsi (1987), Trabelsi and
Hillmer (1990), Cholette and Dagum (1994), DiFonzo (2003d) and DiFonzo and Marini
(2005), which made the algorithm more flexible and extended it to the multivariate case.
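A minimal sketch of the basic additive first-difference variant follows, written so that the
correction (rather than the level) is smooth; the boundary treatment here ignores initial
conditions, which is closer to Cholette's (1984) modification than to Denton's original
formulation, and all numbers are illustrative.

```python
import numpy as np

def denton_additive(prelim, annual, k=4, h=1):
    """Adjust a preliminary quarterly series so that its annual sums match the
    benchmarks, minimizing the h-th differences of the correction
    (additive Denton-type movement preservation)."""
    n, T = len(prelim), len(annual)
    D = np.eye(n)
    for _ in range(h):
        D = np.diff(D, axis=0)     # h-th difference operator
    Q = D.T @ D
    C = np.kron(np.eye(T), np.ones(k))
    # min (y - prelim)' Q (y - prelim)  subject to  C y = annual  (KKT system).
    kkt = np.block([[2 * Q, C.T], [C, np.zeros((T, T))]])
    rhs = np.concatenate([2 * Q @ prelim, annual])
    return np.linalg.solve(kkt, rhs)[:n]

prelim = np.array([24., 25., 26., 27., 25., 26., 27., 28.])  # e.g. from an indicator
annual = np.array([100.0, 110.0])                            # more reliable benchmarks
y = denton_additive(prelim, annual)
print(np.round(y, 2))
print(y.reshape(-1, 4).sum(axis=1))   # [100. 110.]
```

Because only the correction is penalized, the intra-year profile of the preliminary series
(typically inherited from an indicator) survives the benchmarking, which is the
movement-preservation principle referred to below.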
Hillmer and Trabelsi (1987) and Trabelsi and Hillmer (1990) worked on the problem of
adjusting a univariate high-frequency series using data obtained from different sampling
sources, and found Denton (1971) and Cholette (1984) to be particular cases of their proposal.
In particular, they relaxed the requirements on the low-frequency series, permitting it to be
observed with error; in compensation, they had to assume known the temporal structure of
the errors caused by sampling the low-frequency series (see also Weale, 1992). When the
benchmarks are observed without error, the problem transforms into minimizing the
discrepancies between the initial estimates and the annual series according to a loss function
of the quadratic-form type (Trabelsi and Hillmer, 1990). In these circumstances, they showed
that the method of minimizing the hth differences proposed by Denton (1971) and Cholette
(1984) implicitly admits: (i) that the ratio between the variances of the observation errors and
the ARIMA modelling errors of the initial approximation tends to zero; and (ii) that the
observation errors follow an I(h) process, with either null initial conditions (in Denton's
approach) or initial values beginning in a remote past (in Cholette's method).
In sample surveys, most time series data come from repeated surveys whose sample
designs usually generate autocorrelated and heteroscedastic errors. Thus, Cholette and
Dagum (1994) introduced a regression model to take this into account explicitly and showed
that the gain in efficiency from using a more complex model varies with the ARMA model
assumed for the survey errors. In this line, Chen and Wu (2006) showed, through a simulation
exercise and assuming that the survey error series follows an AR(1) process, that Cholette and
Dagum (1994) and Dagum et al. (1998) have great advantages over the Denton method and
are robust to misspecification of the survey error model. On the other hand, the multivariate
extension of the Denton method was proposed in DiFonzo (2003d) and DiFonzo and Marini
(2005) under a general accounting constraint system. They assumed a set of linear
relationships among target variables and indicators from which initial estimates are obtained;
then, applying the movement-preservation principle of the Denton approach subject to the
whole set of contemporaneous and temporal aggregation relationships, they reach estimates
of all the series verifying all the constraints.
Although Denton (1971) and DiFonzo and Marini (2005) do not require any reliability
measurement of the survey error series, the need for one in many other proposals led
Guerrero (1990, p. 30) to propose an alternative approach after writing that “These
requirements are reasonable for a statistical agency:…but they might be very restrictive for a
practitioner who occasionally wants to disaggregate a time series”. In particular, to
overcome some arbitrariness in the choice of the stochastic structure of the high-frequency
disturbances, Guerrero (1990) and Guerrero and Martínez (1995) developed a new adjustment
procedure assuming that the initial approximation and the objective series share the same
ARIMA model. More specifically, they combined an ARIMA-based approach with the use of
high-frequency related series in a regression model to obtain the Best Linear Unbiased
Estimate (BLUE) of the objective series verifying the annual constraints. This approach
permits an automatic ‘revision’ of the estimates with each new observation (which takes a
recursive form in Guerrero and Martínez, 1995). This feature marks an important difference
from the other procedures, where the estimates obtained for periods relatively far away from
the end of the sample are in practice ‘fixed’. Likewise, the multivariate extension of this
approach was also provided by Guerrero, who, together with Nieto (Guerrero and Nieto,
1999), suggested a procedure for estimating unobserved values of multiple time series whose
temporal and contemporaneous aggregates are known, using vector autoregressive models.
Under this approach, moreover, it must be noted that even though the problem can be cast
into a state-space formulation, the usual assumptions underlying Kalman filtering are not
fulfilled in this case, and therefore the Kalman filter approach cannot be applied directly.
A very interesting variant in this framework emerges when log-transformations are taken. Indeed, in many circumstances it is strongly recommended to use logarithms or other transformations of the original data (for example, most time series become stationary after applying first differences to their logarithms) to achieve better time series models and also, as Aadland (2000) showed through a simulation experiment, to obtain more accurate disaggregates, because “…the failure to account for data transformations may lead to serious errors in estimation” (Aadland, 2000, p. 141). However, since the logarithmic transformation is not additive, the annual aggregation constraint cannot be directly applied in a distribution problem.
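The point is easy to verify numerically. In the illustrative fragment below, naively distributing the log of an annual total across quarters and back-transforming violates the original constraint, which is why the constraint must be handled on the original scale:

```python
import numpy as np

y_annual = 435.0
z = np.full(4, np.log(y_annual) / 4)   # distribute the log "additively"
x = np.exp(z)                          # back-transform to the original scale
print(x.sum())                         # ~18.3, nowhere near 435: the linear
                                       # annual constraint is lost under logs
```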
The problem of dealing with log-transformed variables in the distribution framework was first considered by Pinheiro and Coimbra (1993), and later treated, among others, in Proietti (1998) and Aadland (2000). Proietti (1998) tackled the problem of adjusting estimated values to fulfil temporal aggregation constraints: he proposed to obtain initial estimates by applying the exponential function to the approximations reached using a linear relationship between the log-transformation of the target series and the indicators, and then, in a second step, to adopt Denton's algorithm to get the final values. According to DiFonzo (2003a), however, this last step could be unnecessary, as “the disaggregated estimates present
only negligible discrepancies with the observed aggregated values” (DiFonzo, 2003a, p. 17). On the other hand, when the linear relationship is expressed in terms of the rate of change of the target variable (i.e., using the logarithmic difference), initial estimates for the non-transformed values of the objective variable can be obtained using Fernández (1981), with a further adjustment (using Denton's formula) for flow or index variables eventually performed to exactly fulfil the temporal aggregation constraints (DiFonzo, 2003a, 2003b).
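A minimal sketch of this two-step logic follows, reusing the denton_adjust function defined earlier; the regression-in-logs step (a crude fit of repeated annual averages on the indicator) and all names are illustrative assumptions rather than the exact specifications of the cited papers.

```python
import numpy as np

def log_then_benchmark(ind, y):
    """Step 1: preliminary quarterly estimates from a linear relationship in
    logs between a quarterly indicator and a (repeated) annual anchor, then
    exponentiated. Step 2: exact benchmarking via Denton's algorithm."""
    z = np.log(ind)
    anchor = np.repeat(np.log(y / 4.0), 4)   # crude annual-level target in logs
    b, a = np.polyfit(z, anchor, 1)          # slope b, intercept a
    p = np.exp(a + b * z)                    # back-transformed preliminaries
    return denton_adjust(p, y)               # enforce the annual totals exactly
```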

3.2. Structural and Econometric Model Methods

Economic theory posits functional relationships among variables, and econometric models express those relationships by means of equations. Models based on annual data conceal higher-frequency information and are not considered sufficiently informative for policy makers. Building quarterly and monthly macroeconometric models is therefore imperative and responds to one of their traditional motivations: the demand for high-frequency forecasts. Sometimes the frequencies of the variables taking part in the model are not homogeneous, and expressing the model in the lowest common frequency almost never offers an acceptable approximation. Indeed, with the aim of forecasting, Jacobs (2004) showed that it is preferable to deal with the quarterly model with missing quarterly observations than to generate quarterly predictions by disaggregating the annual forecasts from the annual model: the quarterly estimator based on approximations proves more efficient (even if biased) than the annual estimator. Thus, putting the model in the desired frequency and using the same model not only to estimate the unknown parameters but also to estimate the non-observed values of the target series represents, in general, a good alternative for forecasting. Furthermore, according to Vanhonacker (1990), it is also preferable to estimate the missing observations simultaneously with the econometric model rather than to interpolate the unavailable values beforehand in order to handle the high-frequency equations directly, because “…its effects on subsequent econometric analysis can be serious: parameter estimates can be severely (asymptotically) biased…” (Jacobs, 2004, p. 5).
Many econometric models can be formulated, and therefore many strategies may be adopted to estimate the missing observations. As examples of this variety of model-based approaches one could cite, among others, Drettakis (1973), Sargan and Drettakis (1974), Dagenais (1973, 1976), Dempster et al. (1977), Hsiao (1979, 1980), Gourieroux and Monfort (1981), Palm and Nijman (1982, 1984), Conniffe (1983), Wilcox (1983), Nijman and Palm (1986, 1990), and Dagum et al. (1998).
Drettakis (1973) formulated a multiequational dynamic model of the United Kingdom economy, with one of the endogenous variables observed only annually for part of the sample, and obtained estimates of the parameters and the unobserved values by full-information Maximum Likelihood (ML). Sargan and Drettakis (1974) extended Drettakis (1973) to the case in which more than one series is unobserved and introduced an improvement to reduce the computational burden of the estimation procedure. The use of ML was also followed in Hsiao (1979, 1980) and Palm and Nijman (1982). As an example, Palm and Nijman (1982) derived the ML estimator when data are subject to different temporal aggregations and compared its sample variance with those obtained after applying the estimator proposed by Hsiao, Generalized Least Squares (GLS) and Ordinary Least Squares
(OLS). On the other hand, GLS estimators were employed by Dagenais (1973), Gourieroux and Monfort (1981), and Conniffe (1983) for models with missing observations in the exogenous variables and, therefore, probably with a heteroscedastic and serially correlated disturbance term.
In the extension to dynamic regression models, the ML approach was again used in Palm and Nijman's works. Nijman and Palm (1986) considered a simultaneous equations model, not completely specified, of the Dutch labour market, with some variables observed only annually, and proposed to obtain initial estimates for those variables using the univariate quarterly ARIMA process that, congruent with the multiequational model, is derived from the observed annual series. These initial estimates were then used to estimate the model parameters by ML. Palm and Nijman (1984) studied the problem of parameter identification, and Nijman and Palm (1986, 1990) that of estimation. To estimate the parameters they proposed two ML-based alternatives. The first consisted of building the likelihood function from the forecast errors, using the Kalman filter. The second consisted of applying the EM algorithm adapted to incomplete samples, an adaptation developed in a wide-ranging paper by Dempster et al. (1977) building on Hartley (1958). Dagum et al. (1998), on the other hand, presented a general dynamic stochastic regression model, which permits dealing with the most common short-term data treatments (including interpolation, benchmarking, extrapolation and smoothing), and showed that the GLS estimator is the minimum variance linear unbiased estimator (see also Dagum and Cholette, 2006). Other temporal disaggregation procedures based on dynamic models (e.g., Santos Silva and Cardoso, 2001; Gregoir, 2003; or DiFonzo, 2003a) will be considered in the next subsection, since they can be viewed as dynamic extensions of Chow and Lin (1971), although they could also be placed in the previous subsection because they follow the classical two-step approach of adjustment methods.

3.3. Optimal Methods

Optimal methods get their name from the estimation strategy they adopt. Such procedures directly incorporate the restrictions derived from the observed annual series into the estimation process to jointly obtain the BLUE of both parameters and quarterly series. To do so, a linear relationship between target series and indicators is usually assumed. This group of methods is one of the most widely used; in fact, its root proposal (Chow and Lin, 1971) has served as a basis for many statistical agencies (see, e.g., ISTAT, 1985; INE, 1993; Eurostat, 1998; or DiFonzo, 2003a) and analysts (e.g., Abeysinghe and Lee, 1998; Abeysinghe and Rajaguru, 1999; Pavía and Cabrer, 2003; and Norman and Walker, 2007) to quarterly distribute annual accounts and to provide flash estimates of quarterly growth, among other tasks.
Although many links between adjustment and optimal procedures exist, as DiFonzo and Filosa (1987, p. 11) indicated, “(i) … compared to optimal methods, adjustment methods make an inefficient (and sometimes, biased) use of the indicators; (ii) the various methods have a different capability of providing statistically efficient extrapolation…”, which points to optimal methods as more suitable for performing short-term analysis using forecasts. In compensation, the solution of this sort of method crucially depends on the correlation structure assumed for the errors of the linear relationship. In fact, many proposals differ only
in that point. All of them, nevertheless, seek to avoid spurious steps in the estimated series.
Friedman (1962) was the first to apply this approach. In particular, for the case of a stock variable, he obtained (assuming a linear relationship between target series and indicators) the BLUE of both the coefficients and the objective series. Nevertheless, it was Chow and Lin (1971) who, extending Friedman's result, wrote probably the most influential and cited paper on this subject. They obtained the BLUE of the objective series for interpolation, distribution and extrapolation problems using a common notation. They focused on the case of converting a quarterly series into a monthly one and assumed an AR(1) hypothesis for the errors in order to avoid unjustified discontinuities in the estimated series. Under this hypothesis, the covariance matrix is governed by the first-order autoregressive coefficient of the high-frequency disturbance series, which is unknown; hence, to apply the method it has to be estimated beforehand. Chow and Lin (1971) suggested exploiting the functional relationship between the first-order autoregressive coefficients of the low- and high-frequency errors to estimate it. Specifically, they proposed an iterative procedure to estimate the monthly AR(1) coefficient from the ratio between elements (1,2) and (1,1) of the quarterly error covariance matrix.
The Chow-Lin strategy of relating the first-order autoregressive coefficients of the high- and low-frequency error series, however, cannot be completely generalized to any pair of frequencies (Acosta et al., 1977), and consequently several other stratagems have been followed to solve the issue. In line with the Chow-Lin approach, DiFonzo and Filosa (1987) obtained, for the annual-quarterly case, a function relating the two autoregressive coefficients. The problem with the relation reached by DiFonzo and Filosa is that it only has a unique solution for non-negative annual autoregressive coefficients. Despite this, Cavero et al. (1994) and IGE (1997) took advantage of such a relation to suggest two iterative procedures for applying the Chow-Lin method in the quarterly-annual case with AR(1) errors. Cavero et al. (1994) even provided a solution for applying the method when an initial negative estimate of the autoregressive coefficient is obtained. To handle the problem of the sign, however, Bournay and Laroque (1979) had already proposed to estimate the autoregressive coefficient through a two-step algorithm in which, in the first step, the element (1,3) of the covariance matrix of the annual errors is used to determine the sign of the autoregressive coefficient. In addition to the above possibilities, strategies based on maximum likelihood (under the hypothesis of normality for the errors) have also been tried. Examples of this approach can be found in Barbone et al. (1981), ISTAT (1985), and Quilis (2005).
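The sketch below condenses this machinery: it computes the GLS coefficients and the BLUE of the quarterly series under the AR(1) hypothesis, selecting the autoregressive coefficient by a grid search over the Gaussian likelihood of the aggregated model (one of the ML strategies just mentioned) rather than by Chow and Lin's original iterative scheme. All names and the grid are assumptions made for illustration.

```python
import numpy as np

def chow_lin(y, X, s=4):
    """Distribute annual y (length m) with quarterly indicators X (n x k,
    n = s*m) assuming AR(1) high-frequency errors; rho maximizes the
    aggregated model's Gaussian log-likelihood over a grid."""
    m, (n, k) = len(y), X.shape
    C = np.kron(np.eye(m), np.ones((1, s)))      # temporal aggregation matrix
    i = np.arange(n)
    best, fit = -np.inf, None
    for rho in np.linspace(-0.95, 0.95, 191):
        V = rho ** np.abs(i[:, None] - i[None, :]) / (1 - rho ** 2)
        Va = C @ V @ C.T                         # covariance of the annual errors
        Va_inv = np.linalg.inv(Va)
        Xa = C @ X
        beta = np.linalg.solve(Xa.T @ Va_inv @ Xa, Xa.T @ Va_inv @ y)
        u = y - Xa @ beta                        # annual GLS residuals
        sigma2 = u @ Va_inv @ u / m
        ll = -0.5 * (m * np.log(sigma2) + np.linalg.slogdet(Va)[1])
        if ll > best:
            best, fit = ll, (V, Va_inv, beta, u)
    V, Va_inv, beta, u = fit
    # BLUE: regression fit plus annual residuals distributed through V C'
    return X @ beta + V @ C.T @ (Va_inv @ u)
```

By construction the result aggregates exactly to y, since applying C to the output returns the annual fit plus the full annual residuals.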
Although the AR(1) temporal error structure has been the most extensively analyzed, other structures for the errors have also been proposed. Among the stationary structures, Schmidt (1986) held MA(1), AR(2), AR(4), and a mix of AR(1) and AR(4) processes as reasonable possibilities for the annual-quarterly case, although the Monte Carlo evidence in Pavía et al. (2003) showed that assuming an AR(1) hypothesis for the disturbance term does not significantly affect the quality of the estimates, even when the disturbances follow other stationary structures. As regards extensions towards non-stationary structures, Fernández (1981) and Litterman (1983) can be cited. On the one hand, Fernández (1981, p. 475) recommended using Denton's approach, proposing to “estimate regression coefficients using annual totals of the dependent variables, and then apply these coefficients to the high frequency series to obtain preliminary estimates…” which are afterwards “'adjusted' following the approach of Denton”, and showed that such an approach to the
problem is equivalent to using the Chow-Lin method with a random walk hypothesis for the errors, a hypothesis he defended (“a random walk hypothesis for a series of residuals ... should not be considered unrealistic”, Fernández, 1981, p. 475) with support from the results in Nelson and Gould (1974) and Fernández (1976). On the other hand, Litterman (1983) studied the problem of monthly disaggregating a quarterly series and extended the Chow-Lin method to the case in which the residual series follows a Markov random walk. Litterman did not, however, solve the problem of estimating the parameter of the Markov process in small samples. Fortunately, Silver (1986) found a solution to this problem and extended Litterman's proposal to the case of annual series and quarterly indicators.
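Within the same GLS machinery, Fernández's solution amounts to replacing the stationary AR(1) covariance with the (scaled) pseudo-covariance implied by random-walk errors; a minimal sketch, assuming the implicit zero initial condition of his derivation, is:

```python
import numpy as np

def fernandez_cov(n):
    """Pseudo-covariance (up to scale) of random-walk errors: (D'D)^{-1},
    with D the full-rank n x n first-difference matrix."""
    D = np.eye(n) - np.eye(n, k=-1)   # 1 on the diagonal, -1 just below it
    return np.linalg.inv(D.T @ D)
```

Plugging this matrix in place of V in the chow_lin sketch above (and dropping the grid search, since no free parameter remains) yields the Fernández estimator; Litterman's variant would, loosely speaking, insert an additional Markov parameter into the difference structure.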
Despite DiFonzo and Filosa's abovementioned words about the superiority of optimal methods over adjustment procedures, all the previous methods can be obtained as solutions of a quadratic-linear optimization problem (Pinheiro and Coimbra, 1993), in which the metric matrix defining the loss function is the inverse of the high-frequency error covariance matrix. Other structures for the disturbances could therefore, in theory, be easily managed. In particular, in order to improve the disaggregated estimates, the high-frequency error covariance matrix could be estimated from the data, following Wei and Stram (1990), by imposing an ARIMA structure and using its relationship with the low-frequency covariance matrix. Despite this, low-order AR models are still systematically chosen in practice because (i) the covariance matrix of the high-frequency disturbances cannot, in general, be uniquely identified from the low-frequency one, and (ii) the typical sample sizes occurring in economics usually yield poor estimates of the low-frequency error covariance matrix (Rossana and Seater, 1995; Proietti, 1998; DiFonzo, 2003a). In fact, the Monte Carlo evidence presented in Chan (1993) showed that this approach is likely to perform comparatively badly when the low-frequency sample size is below 40 (not at all an infrequent size in economics).
The estimates obtained according to Chow and Lin's approach, however, are only completely satisfactory when the temporal aggregation constraint is linear and there are no lagged dependent variables in the regression. Thus, to improve the accuracy of the estimates by taking into account the dynamic specifications usually encountered in applied econometric work, several authors (e.g., Salazar et al., 1997a, 1997b; Santos Silva and Cardoso, 2001; Gregoir, 2003) have proposed generalizing the Chow-Lin approach (including the Fernández and Litterman extensions) through the use of linear dynamic models, which permits performing temporal disaggregation with more robust results in a broad range of circumstances. In this line, Santos Silva and Cardoso (2001), following the way initiated by Salazar et al. (1997a, 1997b) and Gregoir (2003), proposed an extension of Chow-Lin, by means of a well-known transformation developed to deal with distributed lag models (e.g., Klein, 1958; Harvey, 1990), which is particularly adequate when the series used are stationary or cointegrated (see also DiFonzo, 2003c). Their extension, furthermore, compared to Salazar et al. and Gregoir, solves the estimation problems in the first low-frequency period and produces disaggregated estimates and standard errors in a straightforward way (something very difficult to implement in a computer program under the initial proposals). Two empirical applications of this procedure, in addition to a panoramic review of this approach, can be found in DiFonzo (2003a, 2003b), while Quilis (2003) offers a MATLAB library to perform it. This library complements the MATLAB libraries provided by the Instituto Nacional de Estadística (Quilis, 2002) to run Boot et al. (1967), Denton (1971), Chow and Lin (1971), Fernández (1981), Litterman (1983), DiFonzo (1990) and DiFonzo (2003d).
The Chow-Lin approach and its abovementioned extensions are all univariate; thus, to handle problems with J (>1) series to be estimated, multivariate extensions are required. In these situations, apart from the low-frequency temporal constraints, some additional cross-sectional, transversal or contemporaneous aggregates of the high-frequency target series are usually available. To deal with this issue, different procedures (extending the Chow-Lin method) have been proposed in the literature. Rossi (1982) was the first to face this problem. Rossi assumed that the contemporaneous quarterly aggregate of the J series is known and proposed a two-step estimation procedure. In the first step, he suggested applying the Chow-Lin method to each of the J series in isolation, imposing only the corresponding annual constraint and assuming white noise residuals. In the second step, he proposed applying the Chow-Lin procedure again, imposing the observed contemporaneous aggregate series as a constraint and assuming a white noise error vector, to simultaneously estimate the J series using the series estimated in the first step as indicators. This strategy, however, as DiFonzo (1990) pointed out, does not guarantee the fulfilment of the temporal restrictions.
Attending to Rossi's limitation, DiFonzo (1990) generalized the Chow-Lin estimator and obtained the BLUE of the J series, fulfilling the temporal and transversal restrictions simultaneously. As in Chow-Lin, DiFonzo (1990) again found that the estimated series crucially depend on the structure assumed for the disturbances. Nevertheless, he only offered a practical solution under the hypothesis of temporally uncorrelated errors. That hypothesis, unfortunately, is inadequate because it can produce spurious steps in the estimated series. In order to solve this, Cabrer and Pavía (1999) and Pavía-Miralles (2000b) introduced a structure for the disturbances in which each of the J error series follows either an AR(1) process or a random walk, with shocks that are only contemporaneously correlated. Pavía-Miralles (2000b), additionally, extended the estimator obtained in DiFonzo (1990) to situations with more general contemporaneous aggregations and provided an algorithm to handle such a complex disturbance structure in empirical work. Finally, DiFonzo (2003d) proposed simplifying Pavía-Miralles (2000b) by suggesting a multivariate random walk structure for the error vector, and Pavía-Miralles and Cabrer-Borrás (2007) extended Pavía-Miralles's (2000b) proposal to deal with the extrapolation issue.
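To fix ideas about the constraint set these multivariate extensions must satisfy, the fragment below stacks the temporal and the contemporaneous (transversal) aggregation conditions for J quarterly series of n = 4m observations each, arranged one after another in a single vector; this is purely an illustrative construction.

```python
import numpy as np

def stacked_constraints(J, m, s=4):
    """Matrix H such that H @ x collects all known aggregates, where x stacks
    the J high-frequency series (each of length n = s*m) one after another."""
    n = s * m
    Ct = np.kron(np.eye(J), np.kron(np.eye(m), np.ones((1, s))))  # annual sums
    Cc = np.kron(np.ones((1, J)), np.eye(n))  # contemporaneous (cross) sums
    return np.vstack([Ct, Cc])
```

Note that, when the observed aggregates are mutually consistent, m of the stacked rows are redundant (the annual totals of the contemporaneous aggregate appear in both blocks), a redundancy the cited estimators must deal with.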

4. Methods Based on the Representation in the State Space


One approach to the study of time series is to consider the series as a realisation of a stochastic process with a particular generating model (e.g., an ARIMA process), which depends on some parameters. In order to predict how the series will behave in the future, or to rebuild the series by estimating the missing observations, it is necessary to know the model parameters. The Kalman filter takes advantage of the temporal sequence of the series to implement, through a set of mathematical equations, a predictor-corrector type estimator, which is optimal in the sense that it minimizes the estimated error covariance when certain presumed conditions are met. In particular, it is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements. Within the temporal disaggregation problem, this approach appears very promising due to its great versatility, and it presents the additional advantage of making it possible to estimate both unadjusted and seasonally adjusted series simultaneously.
Among the different approaches to approximating the population parameters of the data generating process, ML stands out. The likelihood function of the stochastic process can be calculated in a relatively simple and very operative way by means of the Kalman filter: under a Gaussian distribution assumption for the series, the density of the process can be easily derived from the forecast errors, and prediction errors can be computed in a straightforward way by representing the process in the state space, where the Kalman filter can be applied. In general, the pioneering methods based on the state space representation assumed an ARIMA process for the objective series and computed the likelihood of the process through the Kalman filter, employing the fixed-point smoothing algorithm (details of this algorithm can be consulted, among others, in Anderson and Moore, 1979, and Harvey, 1981, and, for the multivariate extension, in Harvey, 1989) to estimate the unavailable values.
Despite the representation of a temporal process in the state space not being unique, the majority of the proposals to adapt the Kalman filter to handle missing observations can be reduced to the one proposed by Jones (1980). Jones suggested building the likelihood function by excluding the prediction errors associated with those periods in which no observation exists, and proposed using the forecasts obtained in the previous instant to keep the Kalman filter equations running. Among others, this pattern was followed by Harvey and Pierse (1984), Ansley and Kohn (1985), Kohn and Ansley (1986), Al-Osh (1989), Harvey (1989), and Gómez and Maravall (1994). Beyond Jones's approach, other routes can be found. DeJong (1989) developed a new filter and some smoothing algorithms which allow interpolating the unobserved values with simpler computational and analytical expressions. Durbin and Quenneville (1997) used state space models to adjust a monthly series obtained from a survey to an annual benchmark. Gómez et al. (1999) followed the strategy of estimating missing observations by treating them as outliers, while Gudmundsson (1999) introduced a prescribed multiplicative trend into the problem of quarterly disaggregating an annual flow series using its state space representation.
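A compact sketch of Jones's skipping strategy for a univariate model is given below, with a local level (random walk plus noise) specification chosen purely for illustration: at missing (NaN) observations the update step is skipped and no prediction error enters the likelihood.

```python
import numpy as np

def kalman_loglik_missing(y, q, h, a0=0.0, p0=1e7):
    """Local level model: state a_t = a_{t-1} + eta_t (variance q) and
    observation y_t = a_t + eps_t (variance h). Missing values contribute
    nothing to the log-likelihood; the filter just keeps predicting."""
    a, P, ll = a0, p0, 0.0             # near-diffuse initialization
    filtered = []
    for obs in y:
        P = P + q                      # prediction step (always performed)
        if not np.isnan(obs):          # update step, only if observed
            F = P + h                  # prediction-error variance
            v = obs - a                # one-step-ahead prediction error
            K = P / F                  # Kalman gain
            a, P = a + K * v, (1 - K) * P
            ll += -0.5 * (np.log(2 * np.pi * F) + v * v / F)
        filtered.append(a)
    return np.array(filtered), ll
```

Maximizing the returned log-likelihood over q and h (e.g., with a numerical optimizer) then gives the ML estimates on which the interpolations are based.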
Jones (1980), the pioneer in the estimation of missing observations from the state space representation, treated, using a representation proposed by Akaike (1974), the case of a stock variable assumed to follow a stationary ARMA process. Later on, Harvey and Pierse (1984), also dealing with stationary series, extended Jones's proposal, using another representation due to Akaike (1978), to the case of flow variables. Likewise, they adapted the algorithm to the case in which the target series follows a regression model with stationary residuals, and dealt with the problem of working with logarithms of the variable. Furthermore, Harvey and Pierse also extended the procedure to the case of stock variables following non-stationary ARIMA processes, although in this case they required the target variable to be available at the high frequency for a large enough subperiod of the sample.
In the non-stationary case, however, when Harvey and Pierse's hypothesis is not verified, building the likelihood of the process becomes difficult: problems arise in transforming the process into a stationary one and in defining the initial conditions. In order to solve this, Ansley and Kohn (1985) proposed considering a diffuse initial distribution in the pre-sample, while Kohn and Ansley (1986) suggested transforming the observations in order to define the likelihood of the process. Kohn and Ansley's transformation made it possible to generalize the previous results (including those reached by
Harvey and Pierse), although at the cost of destroying the sequentiality of the series, altering both the smoothing and filtering algorithms. Fortunately, Gómez and Maravall (1994) overcame this difficulty, making it possible to use the classical tools to deal with non-stationary processes whatever the structure of the missing observations.
However, although the proposals of Kohn and Ansley (1986) and Gómez and Maravall (1994) extended the issue to the treatment of regression models with non-stationary residuals (allowing related variables to be included in this framework), they did not deal explicitly with the case of flow variables. Indeed, it was Al-Osh (1989) who handled this problem and extended the solution to non-stationary flow series. Al-Osh, moreover, suggested using the Kalman filter for the recursive estimation of the unobserved values as a tool to overcome the problem of estimates changing as the available sample grows. In this line, Cuche and Hess (2000) used information contained in related series, within a general Kalman filter framework, to estimate monthly Swiss GDP from the quarterly series, while Liu and Hall (2001) estimated a monthly US GDP series from quarterly values after testing several state space representations through a Monte Carlo experiment to identify which variant of the model gives the best estimates; they found that the simpler representations did almost as well as the more complex ones.
Most of the above proposals, however, consider the temporal structure (the ARIMA process) of the objective series to be known. In practice it is unknown, and the orders of the process must be specified in order to proceed. To solve this, several strategies have been followed. Some attempts have tried to infer the process of a high-frequency series from the observed process of the low-frequency one (e.g., Nijman and Palm, 1985; Al-Osh, 1989; Guerrero and Martínez, 1995), while many other studies have concentrated on analyzing the effect of aggregation on a high-frequency process (e.g., among others, Telser, 1967; Amemiya and Wu, 1972; Tiao, 1972; Wei, 1978; Lütkepohl, 1984; Stram and Wei, 1986; and, more recently, Rossana and Seater, 1995) and on studying its effect on stock variables observed at fixed time steps (among others, Quenouille, 1958; Werner, 1982; or Weiss, 1984). Fortunately, the necessary and sufficient conditions under which the aggregate and/or disaggregate series can be expressed by the same class of model were derived by Hotta and Vasconcellos (1999).
Both multivariate and dynamic extensions have also been tackled within this framework, although they are still incipient. On the one hand, the multivariate approach started by Harvey (1989) was continued by Moauro and Savio (2005), who suggested a multivariate seemingly unrelated time series equations model, using the Kalman filter to estimate the high-frequency series when several constraints exist. The framework they proposed is flexible enough to allow for almost any kind of temporal disaggregation problem involving both raw and seasonally adjusted time series. On the other hand, Proietti (2006) offered a dynamic extension providing, among other contributions, a systematic treatment of Litterman (1983), which helps explain the difficulties commonly encountered in practice when estimating Litterman's model.

5. Approaches from the Frequency Domain


From the previous sections it can be deduced that a great amount of energy has been devoted to dealing with the issue from the temporal perspective. Great efforts have also been made in the frequency domain, although they have been less successful and have therefore borne less fruit. In particular, the greatest efforts have been invested in estimating the spectral density function, or spectrum, of the series, the main tool for analysing a temporal process in the frequency domain. The estimation of the spectrum of the series has been undertaken from both angles: the parametric and the non-parametric perspective.
Jones (1962) and Parzen (1961, 1963) were pioneers in the study of missing observations from the frequency domain. They analyzed the problem under a systematic scheme for the observed (and therefore also for the unobserved) values. Jones (1962), one of the pioneers in studying the problem of estimating the spectrum, treated the case of estimating the spectral function of a stationary stock series sampled systematically. This problem was also faced by Parzen (1963), who introduced the term amplitude modulation, the key element on which later spectral developments were based in their search for solutions. The amplitude modulation is defined as a series of zeros and ones over the sample period: its value is one in those periods where the series is observed and zero where it is not.
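In code, the definition reads as follows, with a systematic every-fourth-observation scheme used as the assumed example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)                 # complete, partly unobserved series
a = (np.arange(200) % 4 == 0).astype(float)  # amplitude modulation: 1 = observed
y = a * x                                    # the observed (modulated) series
```

Spectral estimators for incomplete series then correct the periodogram of the modulated series y for the known (or modelled) properties of the modulation a.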
Different schemes for the amplitude modulation have been considered in the literature. Scheinok (1965) considered the case in which the amplitude modulation follows a Bernoulli random scheme. This random scheme was extended to others by Bloomfield (1970, 1973). More recently, Toloi and Morettin (1993) obtained estimators of the spectral function for three types of modulation sequences: deterministic, random and correlated random. Dunsmuir and Robinson (Dunsmuir, 1981; Dunsmuir and Robinson, 1981a, 1981b), on the other hand, followed a different route: they assumed an ARIMA process and estimated its parameters with the help of the spectral approximation to the likelihood function.
Although the great majority of patterns for the missing observations can apparently be treated from the frequency domain, not all of them have a solution. This is due to the impossibility of completely estimating the autocovariances of the process in many practical situations. In this sense, Clinger and Van Ness (1976) studied the situations in which it is possible to estimate all the autocovariances. On this particular point, Dunsmuir's (1981, p. 620) words must be remembered: “… (the estimators) are asymptotically efficient when compared to the Gaussian maximum likelihood estimate if the proportion of missing data is asymptotically negligible.” Hence, the problem of disaggregating an annual time series into quarterly figures is one of those that still lack a satisfactory solution from this perspective. Nevertheless, from a related approach, Gudmundsson (2001) has made some advances, proposing a method to estimate (under some restrictive hypotheses and in a continuous way) a flow variable. Likewise, the efforts made to employ spectral tools to estimate the missing values using the information given by a group of related variables have required so many restrictive hypotheses that their use has not been advisable until now.

6. Conclusion
As can easily be inferred from the references and the sections above, a huge number of procedures, methods and algorithms have been proposed in the literature to try to solve the problem of transforming a low-frequency series into a high-frequency one. The first group of methods, which built series through ad hoc procedures, was progressively superseded, with methods based on indicators gaining the preference of researchers.
Within this group of methods, the Chow-Lin procedure and all its multiple extensions stand out. Interesting solutions have also been proposed from the state space framework, whose great flexibility makes it a proper tool for dealing with the future challenges of the subject and for handling situations of missing observations different from those analysed in the current document. By contrast, within the methods proposed from the frequency domain the progress made does not seem encouraging. Nevertheless, none of the proposals should be discarded hastily because, according to Marcellino (2007), pooling estimates obtained from different procedures can improve the quality of the disaggregated series.
Broadly speaking, an analysis of the historical evolution of the topic seems to point towards techniques using dynamic regression models, and techniques using formulations in terms of unobserved components models/structural time series and the Kalman filter, as the two research lines that will hold a pre-eminent position in the future. On the one hand, the extension of the topic to deal with multivariate dynamic models is still waiting to be tackled; on the other hand, the state space methodology offers the generality required to address a variety of inferential issues that have not been dealt with previously. In this sense, both approaches could be combined in order to solve one of the main open problems in the area: namely, to jointly estimate several high-frequency series of rates when the low-frequency series of rates, some transversal constraints and several related variables are available. An example is the issue of regionally distributing the quarterly national growth of a country when the annual regional growth series are known, several high-frequency regional indicators are available and, moreover, both the regional and the sectoral structure of weights change quarterly and/or annually.
Furthermore, a new emerging approach, which takes into account the more recent developments of the econometric literature (e.g., data mining, dynamic common component analyses, or time series model environments) and takes advantage of continuous advances in computer hardware and software by making use of the large datasets now available, will likely turn up in the future as a main line in the subject. Indeed, as Angelini et al. (2006, p. 2693) point out: “Existing methods … are either univariate or based on a very limited number of series, due to data and computing constraints … until the recent past. Nowadays large datasets are readily available, and models with hundreds of parameters are easily estimated”. In this line, Proietti and Moauro (2006) dealt with a dynamic factor model using the Kalman filter to construct an index of coincident US economic indicators, while Angelini et al. (2006) modelled a large dataset with a factor model and developed an interpolation procedure that exploits the estimated factors as a summary of all the available information. This last piece of research also shows that this strategy clearly improves on univariate approaches.

References
Aadland, D.M. (2000). Distribution and Interpolation using Transformed Data. Journal of
Applied Statistics, 27, 141-156.
Abeysinghe, T., & Lee, C. (1998). Best Linear Unbiased Disaggregation of Annual GDP to
Quarterly Figures: The Case of Malaysia. Journal of Forecasting, 17, 527-537.
Abeysinghe, T., & Rajaguru, G. (1999). Quarterly Real GDP Estimates for China and
ASEAN4 with a Forecast Evaluation. Journal of Forecasting, 18, 33-37.
Acosta, L.R., Cortigiani, J.L., & Diéguez, M.B. (1977). Trimestralización de Series
Económicas Anuales. Buenos Aires: Banco Central de la República Argentina,
Departamento de Análisis y Coordinación Estadística.
Akaike, H. (1974). Markovian Representation of Stochastic Processes and its Application to the Analysis of Autoregressive Moving Average Processes. Annals of the Institute of Statistical Mathematics, 26, 363-387.
Akaike, H. (1978). Covariance Matrix Computation of the State Variable of a Stationary
Gaussian Process. Annals of the Institute of Statistical Mathematics, 30, 499-504.
Almon, C. (1988). The Craft of Economic Modeling. Boston: Ginn Press.
Al-Osh, M. (1989). A Dynamic Linear Model Approach for Disaggregating Time Series Data. Journal of Forecasting, 8, 85-96.
Amemiya, T., & Wu, R.Y. (1972). The Effect of Aggregation on Prediction in the
Autoregressive Model. Journal of the American Statistical Association, 67, 628-632.
Anderson, B.D.O., & Moore, J.B. (1979). Optimal Filtering. Englewood Cliffs, New Jersey: Prentice-Hall.
Angelini, E., Henry, J., &. Marcellino, M. (2006). Interpolation and Backdating with a Large
Information Set. Journal of Economic Dynamics and Control, 30, 2693-2724.
Ansley, C.F., & Kohn, R. (1985). Estimating, Filtering and Smoothing in State Space Models
with Incompletely Specified Initial Conditions. Annals of Statistics, 13, 1286-1316.
Bassie, V.L. (1958). Economic Forecasting. New York: McGraw-Hill, pp. 653-661.
Barbone, L., Bodo, G., & Visco, I. (1981). Costi e Profitti in Senso Stretto: un’Analisi du
Serie Trimestrali, 1970-1980, Bolletino della Banca d’Italia, 36, 465-510.
Bloem, A.M., Dippelsman, R.J., & Mæhle, N.Ø. (2001). Quarterly National Accounts
Manual. Concepts, Data Sources, and Compilation. Washington DC: International
Monetary Fund.
Bloomfield, P. (1970). Spectral Analysis with Randomly Missing Observations. Journal of
the Royal Statistical Society, Ser. B, 32, 369-380.
Bloomfield, P. (1973). An Exponential Model for the Spectrum of a Scalar Time Series.
Biometrika, 60, 217-226.
Boot, J.C.G., Feibes, W., & Lisman, J.H. (1967). Further Methods of Derivation of Quarterly
Figures from Annual Data. Journal of the Royal Statistical Society, Ser. C, 16, 65-75.
Bournay, J., & Laroque, G. (1979). Réflexions sur la Méthode d'Élaboration des Comptes Trimestriels. Annales de l'INSEE, 36, 3-29.
Cabrer, B., & Pavía, J.M. (1999). Estimating J(>1) Quarterly Time Series in Fulfilling Annual
and Quarterly Constraints. International Advances in Economic Research, 5, 339-350.
Cavero, J., Fernández-Abascal, H., Gómez, I., Lorenzo, C., Rodríguez, B., Rojo, J.L., & Sanz,
J.A. (1994). Hacia un Modelo Trimestral de Predicción de la Economía Castellano-
Leonesa. El Modelo Hispalink CyL. Cuadernos Aragoneses de Economía, 4, 317-343.
Chan, W. (1993). Disaggregation of Annual Time-Series Data to Quarterly Figures: A
Comparative Study. Journal of Forecasting, 12, 677-688.
Chang, C.G., & Liu, T.C. (1951). Monthly Estimates of Certain National Product Components, 1946-49. The Review of Economics and Statistics, 33, 219-227.
Chen, Z.G., & Wu, K.H. (2006). Comparison of Benchmarking Methods with and without a
Survey Error Model. International Statistical Review, 74, 285-304.
Cholette, P.A. (1984). Adjusting Sub-Annual Series to Yearly Benchmarks. Survey Methodology, 10, 35-49.
Cholette, P.A., & Dagum, E.B. (1994). Benchmarking Time Series With Autocorrelated
Survey Errors. International Statistical Review, 62, 365-377.
Chow, G.C., & Lin, A. (1971). Best Linear Unbiased Interpolation, Distribution, and Extrapolation of Time Series by Related Series. The Review of Economics and Statistics, 53, 372-375.
Chow, G.C., & Lin, A. (1976). Best Linear Unbiased Estimation of Missing Observations in
an Economic Time Series. Journal of the American Statistical Association, 71, 719-721.
Clinger, W., & Van Ness, J.W. (1976). On Unequally Spaced Time Points in Time Series.
Annals of Statistics, 4, 736-745.
Cohen, K.J., Müller, M., & Padberg, M.W. (1971). Autoregressive Approaches to
Disaggregation of Time Series Data. Journal of the Royal Statistical Society, Ser. C, 20,
119-129.
Conniffe, D. (1983). Small-Sample Properties of Estimators of Regression Coefficients Given
a Common Pattern of Missing Data. Review of Economic Studies, 50, 111-120.
Cuche, N.A., & Hess, M.K. (2000). Estimating Monthly GDP in a General Kalman Filter
Framework: Evidence from Switzerland. Economic and Financial Modelling, 7, 1-37.
Dagenais, M. G. (1973). The Use of Incomplete Observations in Multiple Regression
Analysis: a Generalized Least Squares Approach. Journal of Econometrics, 1, 317-328.
Dagenais, M. G. (1976). Incomplete Observations and Simultaneous Equations Models.
Journal of Econometrics, 4, 231-241.
Dagum, E.B., Cholette, P.A., & Chen, Z.G. (1998). A Unified View of Signal Extraction,
Benchmarking, Interpolation and Extrapolation of Time Series. International Statistical
Review, 66, 245-269.
Dagum, E.B., & Cholette, P.A. (2006). Benchmarking, Temporal Distribution and
Reconciliation Methods for Time Series. New York: Springer Verlag.
DeJong, P. (1989). Smoothing and Interpolation with the State-Space Model. Journal of the
American Statistical Association, 84, 1085-1088.
Dempster, A.P., Laird, N.M., & Rubin, D.B. (1977). Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Ser. B, 39, 1-38.
Denton, F.T. (1971). Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization. Journal of the American Statistical Association, 66, 99-102.
DiFonzo, T. (1990). The Estimation of M Disaggregate Time Series when Contemporaneous
and Temporal Aggregates are Known. The Review of Economics and Statistics, 72, 178-182.
DiFonzo, T. (2003a). Temporal Disaggregation of Economic Time Series: Towards a
Dynamic Extension. Luxembourg: Office for Official Publications of the European
Communities.
DiFonzo, T. (2003b). Temporal Disaggregation Using Related Series: Log-Transformation
and Dynamic Extensions. Rivista Internazionale di Scienze Economiche e Commerciali,
50, 371-400.
DiFonzo, T. (2003c). Constrained Retropolation of High-Frequency Data Using Related Series: A Simple Dynamic Model Approach. Statistical Methods and Applications, 12, 109-119.
DiFonzo, T. (2003d). Temporal Disaggregation of System of Time Series When the
Aggregates Are Known. In: Barcellan R., & Mazzi, G.L. (Eds.), INSEE-Eurostat
Quarterly National Accounts Workshop, Paris-Bercy, December 1994 (pp. 63-78).
Luxembourg: Office for Official Publications of the European Communities. Available at
http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-AN-03-014/EN/KS-AN-03-014-EN.PDF.
DiFonzo, T., & Filosa, R. (1987). Methods of Estimation of Quarterly National Account
Series: A Comparison. Paper presented at the “Journée Franco-Italienne de Comptabilité Nationale (Journée de Statistique)”, Lausanne, 18-20 May 1987, pp. 1-69.
DiFonzo, T., & Marini, M. (2005). Benchmarking Systems of Seasonally Adjusted Time
Series. Journal of Business Cycle Measurement and Analysis, 2, 84-123.
Doran, H.E. (1974). Prediction of Missing Observations in the Time Series of an Economic Variable. Journal of the American Statistical Association, 69, 546-554.
Drettakis, E.G. (1973). Missing Data in Econometric Estimation. Review of Economic Studies, 40, 537-552.
Dunsmuir, W. (1981). Estimation for Stationary Time Series when Data Are Irregularly
Spaced or Missing. In Findley D. F. (Ed.), Applied Time Series Analysis II (pp. 609-649).
New York: Academic Press.
Dunsmuir, W., & Robinson, P.M. (1981a). Parametric Estimators for Stationary Time Series
with Missing Observations. Advances in Applied Probability, 13, 129-146.
Dunsmuir, W., & Robinson, P.M. (1981b). Estimation of Time Series Models in the Presence
of Missing Data. Journal of the American Statistical Association, 76, 456-467.
Durbin, J., & Quenneville, B. (1997). Benchmarking by State Space Models. International Statistical Review, 65, 23-48.
Eurostat (1999). Handbook of Quarterly National Accounts. Luxembourg: European
Commission.
Fernández, R. B. (1976). Expectativas Adaptativas vs. Expectativas Racionales en la
Determinación de la Inflación y el Empleo. Cuadernos de Economía, 40, 37-58.
Fernández, R.B. (1981). A Methodological Note on the Estimation of Time Series. The Review of Economics and Statistics, 63, 471-478.
Friedman, M. (1962). The Interpolation of Time Series by Related Series. Journal of the
American Statistical Association, 57, 729-757.
Ginsburgh, V.A. (1973). A Further Note on the Derivation of Quarterly Figures Consistent
with Annual Data. Journal of the Royal Statistical Society, Ser. C, 22, 368-374.
Glejser, H. (1966). Une Méthode d’Evaluation de Donnés Mensuelles à Partir d’Indices
Trimestriels ou Annuels. Cahiers Economiques de Bruxelles, 29, 45-54.
Gómez, V., & Maravall, A. (1994). Estimation, Prediction and Interpolation for
Nonstationary series with the Kalman Filter. Journal of the American Statistical
Association, 89, 611-624.
Gómez, V., Maravall, A., & Peña, D. (1999). Missing Observations in ARIMA Models. Skipping Strategy versus Additive Outlier Approach. Journal of Econometrics, 88, 341-363.
Gourieroux, C., & Monfort, A. (1981). On the Problem of Missing Data in Linear Models.
Review of Economic Studies, 48, 579-586.
Greco, C. (1979). Alcune Considerazioni Sui Criteri di Calcolo di Valori Trimestrali di
Tendenza di Serie Storiche Annuali. Annali della Facoltà di Economia e Commercio, 4,
135-155.
Gregoir, S. (2003). Propositions pour une Désagrégation Temporelle Basée sur des Modèles
Dynamiques Simples. In Barcellan R., & Mazzi, G.L. (Eds.), INSEE-Eurostat Quarterly
National Accounts workshop, Paris-Bercy, December 1994 (pp. 141-166). Luxembourg:
Office for Official Publications of the European Communities. Available at
http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-AN-03-014/EN/KS-AN-03-014-EN.PDF.
Gudmundsson, G. (1999). Disaggregation of Annual Flow Data with Multiplicative Trends.
Journal of Forecasting, 18, 33-37.
Gudmundsson, G. (2001). Estimation of Continuous Flows from Observed Aggregates.
Journal of the Royal Statistical Society, Ser. D, 50, 285-293.
Guerrero, V. M. (1990). Temporal Disaggregation of Time Series: An ARIMA-Based
Approach. International Statistical Review, 58, 29-46.
Guerrero, V.M., & Martínez, J. (1995). A Recursive ARIMA-Based Procedure for
Disaggregating a Time Series Variable Using Concurrent Data. Test, 4, 359-376.
Guerrero, V.M., & Nieto, F.H. (1999). Temporal and Contemporaneous Disaggregation of
Multiple Economic Time Series. Test, 8, 459-489.
Hartley, H.O. (1958). Maximum Likelihood Estimation from Incomplete Data. Biometrics,
14, 174-194.
Harvey, A.C. (1981). Time Series Models. Oxford: Philip Allan.
Harvey A.C. (1989). Forecasting, Structural Time Series and the Kalman Filter. Cambridge:
Cambridge University Press.
Harvey, A.C. (1990). The Econometric Analysis of Time Series. Deddington: Philip Allan.
Harvey, A.C., & Pierse, R.G. (1984). Estimating Missing Observations in Economic Time
Series. Journal of the American Statistical Association, 79, 125-131.
Hillmer, S.C., & Trabelsi, A. (1987). Benchmarking of Economic Time Series. Journal of the
American Statistical Association, 82, 1064-1071.
Hotta, L.K., & Vasconcellos, K.L. (1999). Aggregation and Disaggregation of Structural
Time Series Models. Journal of Time Series Analysis, 20, 155-171.
Hsiao, C. (1979). Linear Regression Using Both Temporally Aggregated and Temporally
Disaggregated Data. Journal of Econometrics, 10, 243-252.
Hsiao, C. (1980). Missing Data and Maximum Likelihood Estimation. Economics Letters, 6,
249-253.
IGE (1997). Contabilidade Trimestral de Galicia. Metodoloxía e Series Históricas 1980-
1991. Santiago de Compostela: Instituto Galego de Estadística.
INE (1993). Contabilidad Nacional Trimestral de España. Metodología y Serie Trimestral
1970-1992. Madrid: Instituto Nacional de Estadística.
ISCO (1965). L'Aggiustamento delle Stime nei Conti Economici Trimestrali. Rassegna del
Valori Interni dell'Istituto, 5, 47-52.
ISTAT (1983). I Conti Economici Trimestrali dell'Italia 1970-1982. Supplemento al
Bollettino Mensile di Statistica, 12.
ISTAT (1985). I Conti Economici Trimestrali dell'Italia, anni 1970-1984. Supplemento al Bollettino Mensile di Statistica, 14.
Jacobs, J. (2004). ‘Dividing by 4’: A Feasible Quarterly Forecasting Method?, CCSO Series
22, Center for Cyclical and Structural Research, Groningen. Available at
http://www.eco.rug.nl/ccso/CCSOseries/ccso22.pdf.
Jones, R. H. (1962). Spectral Analysis with Regularly Missed Observations. Annals of
Mathematical Statistics, 33, 455-461.
Jones, R. H. (1980). Maximum Likelihood Fitting of ARMA Models to Time Series with
Missing Observations. Technometrics, 22, 389-395.
Klein, L.R. (1958). The Estimation of Distributed Lags. Econometrica, 26, 553-565.
Kohn, R., & Ansley, C.F. (1986). Estimation, Prediction, and Interpolation for ARIMA
Models with Missing Data. Journal of the American Statistical Association, 81, 751-761.
Lisman, J.H.C. & Sandee, J. (1964). Derivation of Quarterly Figures from Annual Data.
Journal of the Royal Statistical Society, Ser. C, 13, 87-90.
Litterman, R.B. (1983). A Random Walk, Markov Model for Distribution of Time Series.
Journal of Business and Economic Statistics, 1, 169-173.
Liu, H., & Hall, S.G. (2001). Creating High-Frequency National Accounts with State-Space Modelling: A Monte Carlo Experiment. Journal of Forecasting, 20, 441-449.
Lütkepohl, H. (1984). Linear Transformations of Vector ARMA Processes. Journal of Econometrics, 26, 283-293.
Marcellino, M. (2007). Pooling-Based Data Interpolation and Backdating. Journal of Time
Series Analysis, 28, 53–71.
Moauro, F., & Savio, G. (2005). Temporal Disaggregation Using Multivariate Structural Time Series Models. The Econometrics Journal, 8, 214-234.
Nasse, P. (1970). Peut-on Suivre l'Évolution Trimestrielle de la Consommation? Economie et Statistique, 8, 33-52.
Nasse, P. (1973). Le Système des Comptes Nationaux Trimestriels. Annales de l'INSEE, 14, 127-161.
Nelson, P., & Gould, G. (1974). The Stochastic Properties of the Income Velocity of Money.
American Economic Review, 64, 405-418.
Nijman, Th., & Palm, F.C. (1985). Séries Temporelles Incomplètes en Modélisation Macroéconomique. Cahiers du Séminaire d'Économétrie, 29, 141-168.
Nijman, Th., & Palm F.C. (1986). The Construction and Use of Approximations for Missing
Quarterly Observations: A Model Approach. Journal of Business and Economic
Statistics, 4, 47-58.
Nijman, Th., & Palm, F.C. (1988a). Efficiency Gains due to Missing Data Procedures in Regression Models. Statistical Papers, 29, 249-256.
Nijman, Th., & Palm, F.C. (1988b). Consistent Estimation of Regression Models with Incompletely Observed Exogenous Variables. The Annals of Economics and Statistics, 12, 151-175.
Nijman, Th., & Palm, F.C. (1990). Predictive Accuracy Gain From Disaggregate Sampling in
ARIMA Models. Journal of Business and Economic Statistics, 8, 189-196.
Norman, D., & Walker, T. (2007). Co-movement of Australian State Business Cycles. Australian Economic Papers, 46, 360-374.
OECD (1966). La Comptabilité Nationale Trimestrielle. Series Etudes Economiques, 21.
OECD (1996). Sources and Methods used by the OECD Member Countries. Quarterly
National Accounts. Paris: OECD Publications.
Palm, F.C., & Nijman, Th. (1982). Linear Regression using both Temporally Aggregated and
Temporally Disaggregated Data. Journal of Econometrics, 19, 333-343.
Palm, F.C., & Nijman, Th. (1984). Missing Observations in the Dynamic Regression Model.
Econometrica, 52, 1415-1435.
Parzen, E. (1961). Mathematical Considerations in the Estimation of Spectra. Technometrics,
3, 167-190.
Parzen, E. (1963). On Spectral Analysis with Missing Observations and Amplitude
Modulation. Sankhyä, A25, 383-392.
Pavía Miralles, J.M. (2000a). La Problemática de Trimestralización de Series Anuales. Valencia: Universidad de Valencia.
Pavía Miralles, J.M. (2000b). Desagregación Conjunta de Series Anuales: Perturbaciones AR(1) Multivariante. Investigaciones Económicas, XXIV, 727-737.
Pavía, J.M., Cabrer, B., & Felip, J.M. (2000). Estimación del VAB Trimestral No Agrario de la Comunidad Valenciana. Valencia: Generalitat Valenciana.
Pavía, J.M., & Cabrer, B. (2003). Estimación Congruente de Contabilidades Trimestrales
Regionales: Una Aplicación. Investigación Económica, LXII, 119-141.
Pavía-Miralles, J.M., & Cabrer-Borrás, B. (2007). On Estimating Contemporaneous Quarterly Regional GDP. Journal of Forecasting, 26, 155-177.
Pavía, J.M., Vila, L.E., & Escuder, R. (2003). On the Performance of the Chow-Lin
Procedure for Quarterly Interpolation of Annual Data: Some Monte-Carlo Analysis.
Spanish Economic Review, 5, 291-305.
Pinheiro, M., & Coimbra, C. (1993). Distribution and Extrapolation of Time Series by Related Series Using Logarithms and Smoothing Penalties. Economia, 17, 359-374.
Proietti, T. (1998). Distribution and Interpolation Revisited: A Structural Approach.
Statistica, LVIII, 411-432.
Proietti, T. (2006). Temporal Disaggregation by State Space Methods: Dynamic Regression Methods Revisited. The Econometrics Journal, 9, 357-372.
Proietti, T., & Moauro, F. (2006). Dynamic Factor Analysis with Nonlinear Temporal
Aggregation Constraints. Journal of the Royal Statistical Society, Ser. C, 55, 281-300.
Quenouille, M.H. (1958). Discrete Autoregressive Schemes with Varying Time-Intervals. Metrika, 1, 21-27.
Quilis, E. (2002). A MATLAB Library of Temporal Disaggregation Methods: Summary.
Madrid: Instituto Nacional de Estadística. http://www.ine.es/.
Quilis, E. (2003). Desagregación Temporal Mediante Modelos Dinámicos: El Método de
Santos Silva y Cardoso. Boletín Trimestral de Coyuntura, 88, 1-11.
Quilis, E. (2005). Benchmarking Techniques in the Spanish Quarterly National Accounts.
Luxembourg: Office for Official Publications of the European Communities.
Rodríguez-Feijoo, S., Rodríguez-Caro A., & Dávila-Quintana, D. (2003). Methods for
Quarterly Disaggregation without Indicators; A Comparative Study Using Simulation.
Computational Statistics and Data Analysis, 43, 63–78.
Rossana, R.J., & Seater, J.J. (1995). Temporal Aggregation and Economic Time Series.
Journal of Business and Economic Statistics, 13, 441-451.
Rossi, N. (1982). A Note on the Estimation of Disaggregate Time Series when the Aggregate
is Known. The Review of Economics and Statistics, 64, 695-696.
Salazar, E.L., Smith, R.J., & Weale, R. (1997a). A Monthly Indicator of GDP. London:
National Institute of Economic and Social Research/NIESR Discussion Papers.
Salazar, E.L., Smith, R.J., & Weale, R. (1997b). Interpolation using a Dynamic Regression
Model: Specification and Monte Carlo Properties. London: National Institute of
Economic and Social Research/NIESR Discussion Papers.
Santos Silva, J.M.C., & Cardoso, F.N. (2001). The Chow-Lin Method Using Dynamic Models. Economic Modelling, 18, 269-280.
Särndal, C.E. (2007). The Calibration Approach in Survey Theory and Practice. Survey Methodology, 33, 99-120.
Sargan, J.D., & Drettakis, E.G. (1974). Missing Data in an Autoregressive Model.
International Economic Review, 15, 39-59.
Scheinok, P.A. (1965). Spectral Analysis with Randomly Missed Observations: The Binomial
Case. Annals of Mathematical Statistics, 36, 971-977.
Schmidt, J.R. (1986). A General Framework for Interpolation, Distribution and Extrapolation of Time Series by Related Series. In Regional Econometric Modelling (pp. 181-194). Boston: Kluwer Nijhoff Publishing.
Silver, J.L. (1986). Two Results Useful for Implementing Litterman’s Procedure for
Interpolating a Time Series. Journal of Business and Economic Statistics, 4, 129-130.
Somermeyer, J., Jansen, R., & Louter, J. (1976). Estimating Quarterly Values of Annually
Know Variables in Quarterly Relationships. Journal of the American Statistical
Association, 71, 588-595.
Stram, D.O., & Wei, W.W.S. (1986). Temporal Aggregation in the ARIMA process. Journal
of Time Series Analysis, 39, 279-292.
Telser, L.G. (1967). Discrete Samples and Moving Sums in a Stationary Stochastic Process.
Journal of the American Statistical Association, 62, 489-499
Tiao, G.C. (1972). Asymptotic Behavior of Time Series Aggregates. Biometrika, 59, 523-531.
Toloi, C.M.C. & Morettin, P.A. (1993). Spectral Analysis for Amplitude-Modulated Time
Series. Journal of Time Series Analysis, 14, 409-432.
Trabelsi, A. & Hillmer, S.C. (1990). Bench-marking Time Series with Reliable Bench-Marks.
Journal of the Royal Statistical Society, Ser. C, 39, 367-379.
Vangrevelinghe, G. (1966). L'Evolution à Court Terme de la Consommation des Ménages:
Connaisance, Analyse et Prévision. Etudes et Conjoncture, 9, 54-102.
Vanhonacker, W.R. (1990). Estimating Dynamic Response Models when the Data are Subject
to Different Temporal Aggregation. Marketing Letters, 1, 125-137.
Weale, M. (1992). Estimation of Data Measured with Error and Subject to Linear
Restrictions. Journal of Applied Econometrics, 7, 167-174.
Wei, W.W.S. (1978). Some Consequences of Temporal Aggregation in Seasonal Time Series
Models. In Zellner A. (Ed) Seasonal Analysis of Economic Time Series (pp. 433-448)
Washington, DC: Government Printing Office.
Wei, W.W.S. (1981). Effect of Systematic Sampling on ARIMA Models. Communications in
Statistics, A10, 2389-2398.
Wei, W.W.S., & Stram, D.O. (1990). Disaggregation of Time Series Models. Journal of the
Royal Statististical Society, Ser. B, 52, 453–467.
Weiss, A.A. (1984). Systematic Sampling and Temporal Aggregation in Time Series Models.
Journal of Econometrics, 26, 271-281
Temporal Disaggregation of Time Series—A Review 27

Werner, H.J. (1982). On the Temporal Aggregation in Discrete Dynamical Systems. In


Drenick, R.F.. & Kozin, F. (Eds.) System Modeling and Optimatization (pp. 819-825).
New York: Springer-Verlag.
Zaier, L., & Trabelsi, A. (2007) Polynomial Method for Temporal Disaggregation of
Multivariate Time Series, Communications in Statistics - Simulation and Computation,
36, 741-759.
Zani, S. (1970). Sui Criteri di Calcolo Dei Valori Trimestrali di Tendenza Degli Aggregati
della Contabilitá Nazionale. Studi e Ricerche, VII, 287-349.
Zellner, A., & Montmarquette, C. (1971). A Study of Some Aspects of Temporal Aggregation
Problems in Econometric Analyses. The Review of Economics and Statistics, 53, 335-
342.
In: Economic Forecasting ISBN: 978-1-60741-068-3
Editor: Alan T. Molnar, pp. 29-54 © 2010 Nova Science Publishers, Inc.

Chapter 2

ECONOMETRIC MODELLING AND FORECASTING OF PRIVATE HOUSING DEMAND

James M.W. Wong¹ and S. Thomas Ng²


Department of Civil Engineering
The University of Hong Kong, Pokfulam, Hong Kong

Abstract
Governments, corporations and institutions all need to prepare various types of forecasts
before any policies or decisions are made. In particular, as the private residential market is a
significant sector of the economy, the importance of predicting its movement is
undeniable. However, it is well recognised that housing demand is volatile and may
fluctuate dramatically according to general economic conditions. As globalisation continues to
dissolve boundaries across the world, more economies are increasingly subjected to external
shocks. Frequently the fluctuations in the level of housing demand can cause significant
rippling effects in the economy as the housing sector is associated with many other economic
sectors. The development of econometric models is thus postulated to assist policy-makers
and relevant stakeholders to assess the future housing demand in order to formulate suitable
policies.
With the rapid development of econometric approaches, their robustness and
appropriateness as a modelling technique in the context of examining the dynamic relationship
between the housing market and its determinants are evident. This study applies
cointegration analysis together with Johansen and Juselius’s vector error correction (VEC)
model framework to housing demand forecasting in Hong Kong. The volatility of the demand
in response to dynamic changes in relevant macro-economic and socio-economic variables is
considered. In addition, an impulse response function and a variance decomposition analysis
are employed to trace the sensitivity of the housing demand over time to the shocks in the
macro-economic and socio-economic variables. This econometric time-series modelling
approach surpasses other methodologies by its dynamic nature and sensitivity to a variety of
factors affecting the output of the economic sector for forecasting purposes, taking into
account indirect and local inter-sectoral effects.

¹ E-mail address: jmwwong@hkucc.hku.hk; Tel: Int+ (852) 2241 5348; Fax: Int+ (852) 2559 5337. Postdoctoral Fellow, Department of Civil Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong.
² E-mail address: tstng@hkucc.hku.hk. Associate Professor, Department of Civil Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong.
Empirical results indicated that the housing demand and the associated economic
factors (housing prices, the mortgage rate, and GDP per capita) are cointegrated in the long run.
Other key macro-economic and socio-economic indicators, including income, inflation, stock
prices, employment, population, etc., are also examined but found to be insignificant in
influencing the housing demand. A dynamic and robust housing demand forecasting model is
developed using the VEC model. Housing prices and the mortgage rate are found to be the most
important and significant factors determining the quantity of housing demanded. Findings from
the impulse response analyses and variance decomposition under the VEC model further
confirm that the housing price term has a relatively large and sensitive impact, although at
different time intervals, on the volume of housing transactions in Hong
Kong. Addressing these two attributes is critical to the formulation of both short- and long-
term housing policies that could satisfy the expected demand effectively.
The research contributes knowledge to the academic field as currently the area of housing
demand forecasting using advanced econometric modelling techniques is under-explored. This
study has developed a theoretical model that traces the cause-and-effect chain between the
housing demand and its determinants, which is relevant to the current needs of the real estate
market and is significant to the economy’s development. It is envisaged that the results of this
study could enhance the understanding of using advanced econometric modelling
methodologies, factors affecting housing demand and various housing economic issues.

Keywords: Economic forecasting, housing demand, impulse response analysis, econometrics, vector error-correction modeling.

Introduction
Economic forecasting is of immense importance, as any economic system is a
deterministic-stochastic entity of great complexity that is vital to national development in
the information age (Hoshmand, 2002). Holden et al. (1990) state that forecasts are required
for two basic reasons: the future is uncertain; and the full impact of many decisions taken now
might not be felt until later. Consequently, accurate predictions of the future would improve
the efficiency of the decision-making process. In particular, the knowledge of future demand
for products and services is imperative to all industries since it is a prerequisite for any viable
corporate strategy (Akintoye and Skitmore, 1994).
Among the many aspects of economic forecasting, demand for residential properties has
always been of great interest not only to policy-makers in the government, but also to
business leaders and even the public, especially in a country with land scarcity like Hong
Kong (HK). Private residential properties make up a major constituent of private-sector
wealth and play a significant part in the whole economy (Case and Glaester, 2000; Heiss and
Seko, 2001). Their large linkage effect on the economy and their anchoring function for most
household activities also amplify their financial importance. In addition, housing demand has
traditionally been a target for large-scale government interference. Hence, understanding both
the short- and long-term future housing demand is a prerequisite for enlightened housing
policy.
The Asian financial crisis that started in July 1997 revealed that the overbuilding
of housing in HK can cause serious financial distress to the overall economy. Foremost
among those taking the brunt of the shock was the real estate brokerage sector. Others who
might also be seriously impacted include decorators, lawyers, bankers, retailers, contractors,
sellers of construction materials, and inevitably real estate developers (Tse and Webb, 2004).
Not only would the real estate sector be hampered, but it may also give rise to unemployment,
deteriorating fiscal revenues (partially due to the drop in land sales) and sluggish retail sales.
It is therefore wise and sensible to incorporate the real estate sector into a full macro-economic
model of an economy.
However, models developed for analysing the housing demand per se are of limited
reliability because they cannot capture the full set of interactions with the rest of the economy.
A review of several academic papers (Arnott, 1987; Follain and Jimenez, 1985; Smith et al.,
1988) reveals the narrow focus of the neoclassical economic modelling of housing demand.
These studies have concentrated on the functional forms, one-equation versus simultaneous
equation systems, or measurement issues about a limited range of housing demand
determinants, principally price and income factors. Some other estimations are made
according to a projection of flats required for new households (e.g., population growth, new
marriages, new immigrants) and existing families (e.g., those affected by redevelopment
programmes). No doubt the demographic change would have certain implications on housing
demand, yet one should not ignore the impacts of economic change on the desire for property
transactions if housing units are significantly viewed as investment assets (Lavender, 1990;
Tse, 1996).
Consequently, the most feasible research strategy to advance our understanding of
housing consumption decisions lies in furthering the modelling of housing demand
determinants to include a more conceptually comprehensive analysis of the impact of
demographic and economic indicators on housing consumption decisions. However, Baffor-
Bonnie (1998) stated that modelling the supply of, or demand for, housing within any period
of time may not be an easy task because the housing market is subject to a dynamic
interaction of both economic and non-economic variables.
The choice of a suitable forecasting technique is therefore critical to the generation of
accurate forecasts (Bowerman and O’Connell, 1993). Amongst the variety of methodologies,
econometric modelling is one of the dominant methodologies of estimating macro-economic
variables. Econometric modelling is readily comprehensible and has remained popular with
economists and policy-makers because of its structured modelling basis and outstanding
forecasting performance (Lütkepohl, 2004). This methodology is also preferred to others
because of its dynamic nature and sensitivity to a variety of factors affecting the level and
structure of employment, not to mention its ability to take into account the indirect and local
inter-sectoral effects (Pindyck and Rubinfeld, 1998). With the rapid development of
econometric approaches, their robustness and appropriateness as a modelling technique in the
context of examining the dynamic relationship between the housing market and its determinants
are evident.
The aim of this study is, through the application of the econometric modelling techniques,
to capture the past behaviour and historical patterns of the private housing demand in HK by
considering the volatility of the demand in response to dynamic changes in macro-economic and socio-
economic variables for forecasting purposes. The structure of the paper is as follows: the
theoretical background regarding the relationship of the private housing sector and the
relevant economic variables is hypothesised in the next section. The research method and data
are then presented. The results of the empirical analyses are subsequently discussed prior to
concluding remarks.
Housing Demand and Macro-economic Variables


Like any other business sector, the real estate market tends to move in a fluctuating
pattern. In contrast to a standard sine wave in physical science, real estate market
fluctuations are typically complicated and exhibit much more stochastic
patterns, as shown in Figure 1. Fluctuations in the real estate market do not occur at regular
time intervals and do not last for the same periods of time, and each of their amplitudes also
varies (Chin and Fan, 2005). As the econometric approach is proposed for developing a
housing demand forecasting model, this section first attempts to identify the key determinants
of housing demand.

Figure 1. Number of Registrations of Sale and Purchase Agreements of Private Residential Units in HK
(1995Q3-2008Q2).

The neoclassical economic theory of the consumer was previously applied to housing
(Muth, 1960; Olsen, 1969) which relates to the role of consumer preferences in housing
decisions to the income and price constraints faced by the household. The theory postulates
that rational consumers attempt to maximise their utility with respect to different goods and
services including housing in which they can purchase within the constraints imposed by
market prices and their income (Megbolugbe et al., 1991). The general form of the housing
demand equation is:

Q = ƒ(Y, Ph, Po) (1)

where Q is housing consumption, Y is household income, Ph is the price of housing, and Po is
a vector of prices of other goods and services.
The link between income and housing decisions is indisputable for most households.
Income is fundamental to explaining housing demand because it is the source of funds for
homeowners’ payments of mortgage principal and interest, property taxes and other relevant
expenses (Megbolugbe et al., 1991). Hendershott and Weicher (2002) stressed that the
demand for housing is strongly related to real income. Green and Hendershott (1996) also
estimated the household housing demand equations relating the demand to the income and
education of the household. Hui and Wong (2007) confirmed that household income Granger
causes the demand for private residential real estate, irrespective of the level of the housing
stock. Kenny (1999), on the other hand, found that the estimated vector exhibits a positive
sensitivity of housing demand to income based on a vector error correction model (VECM).
A number of economists agree that permanent income is the conceptually correct measure of
income in modelling housing decisions and housing demand. Yet, most economists often use
current income in their housing demand equations because of difficulties in measuring
permanent income (see Chambers and Schwartz, 1988; Muth, 1960; Gillingham and
Hagemann, 1983).
Demand for housing may decline when the housing price increases (Tse et al., 1999).
Mankiw and Weil (1989) formulated a simple model which indicates a negative relationship
between the US real house price and housing demand. However, in a broader view, trend of
property price may also incorporate inexplicable waves of optimism, such as expected income
and economic changes, changes in taxation policy, foreign investment flows, etc. (Tse et al.,
1999). For example, an expected rise in income will increase the aspiration of home owning
as well as the incentive of investing in property, resulting in positive relationship between
housing demand and the price.
The principal features of housing as a commodity that distinguish it from most other
goods traded in the economy are its relatively high cost of supply, its durability, its
heterogeneity, and its spatial immobility (Megbolugbe et al., 1991). Initially, neoclassical
economic modelling of housing market as shown in Eq. [1] ignored many of these unique
characteristics of housing. Indeed, these characteristics make housing a complex market to
analyse. Some research considered user costs, especially on how to model the effects of taxes,
inflation, and alternative mortgage designs on housing demand decisions.
If the interest rate in the economy falls, everything else being equal, the real user cost
of a unit of housing services will fall and the quantity of housing services demanded may
rise. Follain (1981) demonstrated that at high interest rates, the household’s liquidity
constraints tend to dampen housing demand. Kenny (1999) also found that the estimated
vector exhibits a negative sensitivity of housing demand to interest rates. Harris (1989) and
Tse (1996), however, demonstrated that a declining real interest rate tends to stimulate house
prices and thereby leads to decreases in the rent-to-value ratio and housing demand.
Housing demand also depends on the inflation rate in a fundamental way (Hendershott
and Hu 1981, 1983). As inflation rises, more investors are drawn into the property market,
expecting continued appreciation to hedge against inflation (Kenny, 1999). Tse et al. (1999)
stressed that the housing demand should therefore include those with adequate purchasing
power to occupy available housing units as well as those who desire to buy a house for renting or
price appreciation. For instance, the inflation experienced by HK in the early 1990s was a
period of rising speculative activities in the housing market. In addition, owner-occupied
housing is largely a non-taxed asset and mortgage interest is partially deductible (Hendershott
and White, 2000). As a result, when inflation and thus nominal interest rates rise, the tax
subsidy reduces the real after-tax cost of housing capital and increases aggregate housing
demand (Summers, 1981; Dougherty and Van Order, 1982). Harris (1989) suggested that
housing consumers tend to respond to declining real costs rather than rising nominal costs. In
this context, consumers’ expectations about price appreciation and inflation are supposed to
be an important factor in determining the demand for housing.
Previous research studies (e.g. Killingsworth, 1990; Akintoye and
Skitmore, 1994) found that building and business cycles are closely related. Bon (1989)
related building cycles to business or economic cycles and postulated how economic
fluctuations affect fluctuations in building activity. Swings in the general economy and stock
market may thereby be treated as indicators of the prospective movement in the housing
market and vice versa (Ng et al., 2008). Maclennan and Pryce (1996) also suggested that
economic change shapes the housing system and that recursive links run back from housing to
the economy. Housing investment is a sufficiently large fraction of total investment activity in
the economy (about a third of total gross investment) to have important consequences for the
economy as a whole and vice versa (Pozdena, 1988, p. 159).
One of the crucial foundations for residential development is employment, which serves
not only as a lead indicator of future housing activity but also as an up-to-date general
economic indicator (Baffor-Bonnie, 1998). The decrease in the employment that results from
this process tends to reduce the demand for new housing. The macro implications for real
estate activity and employment have been explored at some length in the literature, and the
general consensus is that the level of employment growth tends to produce real estate cycles
(Smith and Tesarek, 1991; Sternlieb and Hughes, 1977). Baffor-Bonnie (1998) applied a
nonstructural vector autoregressive (VAR) model to support earlier studies that employment
changes explain real estate cycles of housing demand.
In addition, a number of studies consistently have shown that housing demand is also
driven mainly by demographic factors in a longer term (Rosen, 1979; Krumm, 1987;
Goodman, 1988; Weicher and Thibodeau, 1988; Mankiw and Weil, 1989; Liu et al., 1996).
Population growth captures an increase in potential housing demand, especially if the growth
stems mainly from the home buying age group with significant income (Reichert, 1990). In a
separate study, Muellbauer and Murphy (1996) also showed that demographic changes
together with the interest rate are the two important factors causing the UK house price boom
of the late 1980s. They found that demographic trends were favourable, with stronger
population growth in the key house buying age group. Tse (1997), on the other hand, argued
that in the steady state, rate of construction depends mainly upon the rate of household
formation. Growth of population and number of households are proposed to be included as
independent variables in the econometric study.
As discussed above, based on a comprehensive literature review of modelling specifications, the
demand for housing services can be derived by assuming utility maximisation on the part of
homeowners and wealth maximisation on the part of investors. The specific factors that
determine the demand for housing have been previously identified and are summarised in Eq.
[2].

Q = ƒ(Y, Ph, MR, CPI, GDP, Ps, U, POP) (2)

where
Q represents the quantity of housing sold;
Y is real household income;
Ph is the real price of housing;
MR measures the real mortgage interest rate;
CPI is the consumer price index to proxy inflation;
GDP is the real gross domestic product;
Ps is the stock price level proxied by the Hang Seng Index;
U is the unemployment rate; and
POP is the total resident population.
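
To make the specification concrete, the following minimal sketch shows how the candidate series of Eq. [2] might be assembled for analysis in Python with pandas. The file name hk_quarterly.csv and the column labels are hypothetical placeholders, not the authors' actual data source.

```python
# Hypothetical sketch: gather the candidate series of Eq. [2] as columns
# of a quarterly pandas DataFrame. File and column names are placeholders.
import pandas as pd

data = pd.read_csv("hk_quarterly.csv", parse_dates=["quarter"],
                   index_col="quarter")
# Expected columns, one per variable in Eq. [2]:
# Q   - quantity of housing sold (transactions)
# Y   - real household income          Ph  - real housing price index
# MR  - real mortgage interest rate    CPI - consumer price index
# GDP - real GDP                       Ps  - Hang Seng Index
# U   - unemployment rate              POP - resident population
print(data[["Q", "Y", "Ph", "MR", "CPI", "GDP", "Ps", "U", "POP"]].describe())
```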

Methodology
The Econometric Model

In light of the above predictions that there will be a sluggish adjustment on the housing
demand, any empirical attempt to model the housing market must clearly distinguish the long-
run from the short-run information in the data. Recent advances in econometrics, in particular
the development of cointegration analysis and vector error correction (VEC) model, have
proven useful in help distinguishing an equilibrium as opposed to a disequilibrium
relationships among economic variables (Kenny, 1999). Adopting simple statistical methods
such as regression or univariate time series analysis like auto-regressive integrated moving
average (ARIMA) models may be only reliable for the short-term forecast of economic time
series (Tse, 1997) and may give rise to large predictive errors as they are very sensitive to
‘noise’ (Quevedo et al., 1988; Tang et al., 1991).
This study employs the Johansen cointegration technique in order to assess the extent to
which the HK housing market possesses the self-equilibrating mechanisms discussed above,
i.e. well-behaved long-run housing demand relationships. The HK market provides a
particularly interesting case study because there have been large-scale fluctuations in the price
of owner-occupied dwellings over recent years. The econometric analysis takes an aggregate
or macro-economic perspective and attempts to identify equilibrium relationships using key
macro variables. In particular, the analysis will examine: (i) the impact of monetary policy,
i.e. interest rates, on developments in the housing market; (ii) the effects of rising real
incomes on house prices; (iii) the nature and speed of price adjustment in the housing market;
(iv) effect of demographical change to the demand for housing; and (v) the nature and speed
of stock adjustment in the housing market.
The Johansen multivariate approach to cointegration analysis and VEC modelling
technique seems particularly suitable for the analysis of the above relationship as shown in
Eq. [2] because it is a multivariate technique which allows for the potential endogeneity of all
variables considered (Kenny, 1999). In common with other cointegration techniques, the
objective of this procedure is to uncover the stationary relationships among a set of non-
stationary data. Such relationships have a natural interpretation as long-run equilibrium
relationships in an economic sense. A VEC model is a restricted vector autoregression (VAR) that has
cointegration restrictions built into its specification (Lütkepohl, 2004). The VEC framework
developed by Johansen (1988) and extended by Johansen and Juselius (1990) provides a
multivariate maximum likelihood approach that permits the determination of the number of
cointegration vectors and does not depend on arbitrary normalisation rules, contrary to the
earlier error correction mechanism proposed by Engle and Granger (1987).
The Johansen and Juselius VEC modelling framework is adopted for housing
demand forecasting because of its dynamic nature, its sensitivity to a variety of factors
affecting the demand, and its ability to take into account indirect and local inter-sectoral effects.
Applying conventional VAR techniques may lead to spurious results if the variables in the
system are nonstationary (Crane and Nourzad, 1998). The mean and variance of a
nonstationary or integrated time series, which has a stochastic trend, depend on time. Any
shocks to the variable will have permanent effects on it. A common procedure to render the
series stationary is to transform it into the first differences. Nevertheless, the model in its first
difference level will be misspecified if the series are cointegrated and converged to stationary
long-term equilibrium relationships (Engle and Granger, 1987). The VEC specification allows
investigating the dynamic co-movement among variables and the simultaneous estimation of
the speed with which the variables adjust in order to re-establish the cointegrated long-term
equilibrium, a feature unavailable in other forecasting models (Masih, 1995). Such estimates
should prove particularly useful for analysis of the effect of alternative monetary and housing
market policies. Empirical studies (e.g. Anderson et al., 2002; Darrat et al., 1999; Kenny,
1999; Wong et al., 2007) have also shown that the VEC model achieved a high level of
forecasting accuracy in the field of macro-economics.
The starting point for deriving an econometric model of housing demand is to establish
the properties of the time series measuring the demand and its key determinants. Testing for
cointegration among variables was preceded by tests for the integrated order of the individual
series set, as only variables integrated of the same order may be cointegrated. Augmented
Dickey-Fuller (ADF) unit root tests, developed by Dickey and
Fuller (1979) and extended by Said and Dickey (1984), were employed based on the following
auxiliary regression:

Δy_t = α + δt + γ y_{t−1} + Σ_{i=1}^{p} β_i Δy_{t−i} + u_t        (3)

The variable Δy_{t−i} expresses the lagged first differences, u_t is the error term (the lagged
differences absorb serial correlation in the errors), and α, δ, β_i and γ are the parameters to be
estimated. This augmented specification was used to test H₀: γ = 0 vs. Hₐ: γ < 0 in the autoregressive (AR) process.
The specification in the ADF tests was determined by a ‘general to specific’ procedure,
initially estimating a regression with constant and trend and testing their significance.
Additionally, a sufficient number of lagged first differences were included to remove any
serial correlation in the residuals. In order to determine the number of lags in the regression,
an initial lag length of eight quarters was selected, and the eighth lag was tested for
significance using the standard asymptotic t-ratio. If the lag is insignificant, the lag length is
reduced successively until a significant lag length is obtained. Critical values simulated by
MacKinnon (1991) were used for the unit root tests.
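
As an illustrative sketch only (not the authors' code), this pre-testing step can be scripted with statsmodels; its autolag='t-stat' option implements the same general-to-specific rule described above, dropping the longest lag until it is significant. The data file and variable subset are the hypothetical placeholders from the earlier sketch.

```python
# Hedged sketch of the ADF unit-root tests: constant and trend included
# ('ct'), initial lag length of eight, lags dropped until the last one is
# significant (autolag='t-stat'), as in the 'general to specific' rule.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

data = pd.read_csv("hk_quarterly.csv", parse_dates=["quarter"],
                   index_col="quarter")          # hypothetical file

def adf_report(series, name):
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(
        series.dropna(), maxlag=8, regression="ct", autolag="t-stat")
    print(f"{name:>5}: ADF = {stat:7.3f}  p = {pvalue:.3f}  "
          f"lags = {usedlag}  5% cv = {crit['5%']:.3f}")

for name in ["Q", "Ph", "MR", "GDP"]:            # illustrative subset
    adf_report(data[name], name)                 # levels: I(1)?
    adf_report(data[name].diff(), "d" + name)    # first differences: I(0)?
```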
Cointegration analysis and VEC model were then applied to derive housing demand
specification. The econometric model attempts to link housing demand to variables in
equilibrium identified with economic theory. Although many economic time series may have
Econometric Modelling and Forecastin of Private Housing Demand 37

stochastic or deterministic trends, groups of variables may drift together.
analysis allows the derivation of long-run equilibrium relationships among the variables. If
the economic theory is relevant, it is expected that the specific set of suggested variables are
interrelated in the long run. Hence, there should be no tendency for the variables to drift apart
increasingly as time progresses, i.e. the variables in the model form a unique cointegrating
vector.
To test for the cointegration, the maximum likelihood procedures of Johansen and
Juselius were employed. Provided that the variables in the housing demand function are integrated of the
same order, these variables may cointegrate if there exist one or more stationary linear
combinations among them. A VAR specification was used to model each variable as a
function of all the lagged endogenous variables in the system. Johansen (1988) suggests that
the process yt is defined by an unrestricted VAR system of order (p):

y_t = δ + A₁y_{t−1} + A₂y_{t−2} + … + A_p y_{t−p} + u_t,   t = 1, 2, 3, …, T        (4)

where y_t is a vector of I(1) variables, the A_i are estimable parameter matrices, and u_t ~ niid(0, Σ) is
a vector of impulses which represents the unanticipated movements in y_t. However, such a
model is only appropriate if each of the series in y_t is integrated of order zero, I(0), meaning
that each series is stationary (Price, 1998). Using Δ = (I − L), where L is the lag operator, the
above system can be reparameterised in the VEC model as:

Δy_t = δ + Π y_{t−1} + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + u_t        (5)

where

Π = Σ_{i=1}^{p} A_i − I,   Γ_i = − Σ_{j=i+1}^{p} A_j        (6)

Δy_t is an I(0) vector, δ is the intercept, the matrices Γ_i reflect the short-run aspects of the
relationship among the elements of y_t, and the matrix Π captures the long-run information.
The number of linear combinations of y_t that are stationary can be determined by the rank of
Π, which is denoted as r. If there are k endogenous variables, Granger’s representation
theorem asserts that if the coefficient matrix Π has reduced rank r < k, then there exist k × r
matrices, α and β, each with rank r, such that Π = αβ′ and β′y_t is stationary.
The order r is determined by the trace statistic and the maximum eigenvalue statistic.
The trace statistic tests the null hypothesis of r cointegrating relations against the alternative
of k cointegrating relations, where k is the number of endogenous variables, for r = 0, 1, …,
k–1. The alternative of k cointegrating relations corresponds to the case where none of the
series has a unit root and a stationary VAR may be specified in terms of the levels of all of the
series. The trace statistic for the null hypothesis of r cointegrating relations is computed as:

LR_tr(r | k) = −T Σ_{i=r+1}^{k} log(1 − λ_i)        (7)
38 James M.W. Wong and S. Thomas Ng

for r = 0, 1, …, k−1, where T is the number of observations used for estimation, and λ_i is the i-
th largest estimated eigenvalue of the Π matrix in Eq. [6]; this is the test of H₀(r) against H₁(k).
The maximum eigenvalue statistic tests the null hypothesis of r cointegrating relations
against the alternative of r+1 cointegrating relations. This test statistic is computed as:

LR_max(r | r+1) = −T log(1 − λ_{r+1}) = LR_tr(r | k) − LR_tr(r+1 | k)        (8)

for r = 0, 1, …, k-1.
The model will be rejected where Π has full rank, i.e. r = k, since in such a situation
y_t is stationary and has no unit root, so no error-correction representation can be derived. If the rank of Π is
zero, this implies that the elements of y_t are not cointegrated, and thus no stationary long-run
relationship exists. As a result, the conventional VAR model in first-differenced form shown
in Eq. [4] is an alternative specification.
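
A hedged sketch of the Johansen trace and maximum eigenvalue tests of Eqs. [7] and [8], using statsmodels' coint_johansen, is given below; the variable subset, deterministic term and lag order are illustrative assumptions rather than the chapter's final specification.

```python
# Hedged sketch of the Johansen cointegration tests (Eqs. [7]-[8]).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("hk_quarterly.csv", parse_dates=["quarter"],
                   index_col="quarter")              # hypothetical file
endog = data[["Q", "Ph", "MR", "GDP"]]               # illustrative subset

# det_order=0: constant term; k_ar_diff: number of lagged differences in
# the VEC representation (chosen by AIC/SBC in practice).
joh = coint_johansen(endog, det_order=0, k_ar_diff=2)

print("r  trace      5% cv    max-eig    5% cv")
for r in range(endog.shape[1]):
    print(f"{r}  {joh.lr1[r]:8.2f}  {joh.cvt[r, 1]:7.2f}"
          f"  {joh.lr2[r]:8.2f}  {joh.cvm[r, 1]:7.2f}")
```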
The choice of lag lengths in cointegration analysis was decided by multivariate forms of
the Akaike information criterion (AIC) and the Schwarz Bayesian criterion (SBC). The AIC and
SBC values³ are model selection criteria developed for maximum likelihood techniques. In
minimising the AIC and SBC, the natural logarithm of the residual sum of squares, adjusted
for sample size and the number of parameters included, is minimised. Based on the
assumption that Π does not have full rank, the estimated long-run housing demand in HK
can be computed by normalising the cointegration vector as a demand function.
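
This lag-length search can be sketched with statsmodels' select_order helper, which reports the orders chosen by the AIC and SBC (BIC); again, the inputs are the hypothetical placeholders used above.

```python
# Hedged sketch: choose the number of lagged differences by AIC and SBC
# (BIC), as described above, using statsmodels' VECM lag-order helper.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import select_order

data = pd.read_csv("hk_quarterly.csv", parse_dates=["quarter"],
                   index_col="quarter")              # hypothetical file
endog = data[["Q", "Ph", "MR", "GDP"]]               # illustrative subset

sel = select_order(endog, maxlags=8, deterministic="ci")
print("lags chosen by AIC:", sel.aic, " by SBC/BIC:", sel.bic)
```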
While the cointegrating vectors determine the steady-state behaviour of the variables in
the vector error correction model, the dynamic response of the housing demand to the
underlying permanent and transitory shocks was then completely determined by the sample
data without restriction. One motivation for the VEC(p) form is to consider the relation
β′y_t = c as defining the underlying economic relations and to assume that the agents react to the
disequilibrium error β′y_t − c through the adjustment coefficient α to restore equilibrium; that
is, they satisfy the economic relations. The cointegrating vectors β are the long-run parameters
(Lütkepohl, 2004).
Estimation of a VEC model proceeded by first determining one or more cointegrating
relations using the aforementioned Johansen procedures. The first difference of each
endogenous variable was then regressed on a one period lag of the cointegrating equation(s)
and lagged first differences of all of the endogenous variables in the system. The VEC model
can be written as the following specification:

Δd_t = δ + α(β′y_{t−1} + ρ₀) + Σ_{i=1}^{p} γ_{1,i} Δy_{1,t−i} + Σ_{i=1}^{p} γ_{2,i} Δy_{2,t−i} + … + Σ_{i=1}^{p} γ_{j,i} Δy_{j,t−i} + u_t        (9)

where y_t are the I(1) independent variables, d_t is the quantity of housing sold, α is the adjustment
coefficient, β holds the long-run parameters of the VEC function, and the γ_{j,i} reflect the short-run
aspects of the relationship between the independent variables and the target variable.
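
To illustrate the estimation step, the sketch below fits a VEC model of the form of Eq. [9] with statsmodels and produces forecasts and impulse responses; the cointegration rank and lag order are assumptions for the example, not the chapter's estimated values.

```python
# Hedged sketch: estimate the VEC model of Eq. [9], then forecast and
# trace impulse responses. Rank and lag order are illustrative choices.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

data = pd.read_csv("hk_quarterly.csv", parse_dates=["quarter"],
                   index_col="quarter")              # hypothetical file
endog = data[["Q", "Ph", "MR", "GDP"]]               # illustrative subset

model = VECM(endog, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()

print(res.alpha)     # adjustment coefficients (alpha in Eq. [9])
print(res.beta)      # long-run cointegrating vector (beta in Eq. [9])

forecast = res.predict(steps=8)      # eight quarters ahead
irf = res.irf(periods=12)            # impulse response functions
irf.plot()                           # requires matplotlib
```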

³ AIC = T ln(residual sum of squares) + 2k; SBC = T ln(residual sum of squares) + k ln(T),
where T is the sample size and k is the number of parameters included.