
Time Series Models

Dr. Sriparna Guha


Stationarity and Non-stationarity
• A time series is said to be stationary when its statistical properties are constant and there is no seasonality in the series. For a time series to be stationary:
 The mean of the time series is constant.

 The standard deviation of the time series is constant.

 There is no trend or seasonality in the time series.

• In a non-stationary time series, the statistical properties change over time, and there is a trend and/or a seasonality component.
Stationarity and Non-stationarity
• In a stationary time series, the statistical properties of the data points are consistent
and do not depend on the time at which they are observed. This means that the
relationships and patterns observed in the data are reliable and can be used to make
accurate forecasts.

• In contrast, a non-stationary time series has statistical properties that change over time, which can make it difficult to draw reliable inferences or make accurate forecasts. As the statistical properties of the data keep changing, any model or analysis based on a non-stationary time series may not provide reliable results.
Stationarity and Non-stationarity
• Stationarity is a very important property in time series analysis. In ARIMA time series forecasting, the first step is to determine the number of differences required to make the series stationary, because the model cannot produce reliable forecasts from non-stationary time series data.

• Therefore, analyzing stationary data is easier and more reliable than non-stationary
data. Stationary data allow for the use of simpler models and statistical techniques, as
well as more accurate predictions. Using non-stationary data can lead to inaccurate
and misleading forecasts, as the underlying statistical properties of the data keep
changing with time.
[Source: Hyndman & Athanasopoulos, 2018]
Stationarity and Non-stationarity
[Figure not reproduced: nine example time series, labelled (a) through (i)]

❑ Seasonality can be observed in series (d), (h), and (i)

❑ A trend can be observed in series (a), (c), (e), (f), and (i)

❑ Series (b) and (g) are stationary

Stationarity and Non-stationarity
• It can be difficult to tell whether a time series is stationary or not. Several methods can be used to test for stationarity. Here are some common techniques (a short R sketch follows the list):

• Summary Statistics: Calculate the mean and standard deviation of the time series and check whether they are consistent over time. If they are constant, the time series may be considered stationary.

• Unit root tests (e.g. the Augmented Dickey-Fuller (ADF) test): A statistical test that checks for the presence of a unit root in the time series. If the test indicates that there is no unit root, then the time series is likely stationary.

• The KPSS test (run as a complement to the unit root tests): Unlike the unit root tests, the KPSS test takes stationarity (around a level or a deterministic trend) as its null hypothesis. If the test fails to reject that null, the time series may be considered stationary.
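A minimal sketch of the first and third checks in R (the series x is simulated here, not data from the slides):

library(tseries)

set.seed(1)
x <- cumsum(rnorm(300))        # simulated random walk: non-stationary by construction

# Summary-statistics check: compare mean and sd across the two halves of the series
half1 <- x[1:150]; half2 <- x[151:300]
c(mean(half1), mean(half2))    # means drift apart for a non-stationary series
c(sd(half1), sd(half2))        # so do the standard deviations

# KPSS test; H0: the series is level-stationary
kpss.test(x)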
What is ADF testing?
• The Augmented Dickey-Fuller test is a type of statistical test called a unit root test.

• In probability theory and statistics, a unit root is a feature of some stochastic processes (such as random walks) that can cause problems in statistical inference involving time series models. A unit root is a characteristic of a time series that makes it non-stationary.

• The ADF test is conducted with the following hypotheses:

• Null Hypothesis (H0): The series is non-stationary, i.e. the series has a unit root.

• Alternative Hypothesis (H1): The series is stationary, i.e. the series has no unit root.

• If we fail to reject the null hypothesis, the test provides evidence that the series is non-stationary.
ADF testing in R
• library(tseries)

The test statistic and p-value come out to -1.6549 and 0.7039, respectively. Since the p-value is greater than 0.05, we fail to reject the null hypothesis. This implies that the time series is non-stationary: in simple words, it possesses some time-dependent structure and does not possess constant variance over time.
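A minimal sketch of the call behind this slide (the slide's own dataset is not shown, so a simulated non-stationary series stands in for it):

library(tseries)

set.seed(1)
x <- cumsum(rnorm(200))   # a random walk: non-stationary by construction

# Augmented Dickey-Fuller test; H0: the series has a unit root (is non-stationary)
adf.test(x)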
Converting non-stationary to stationary
To detrend time series data, certain transformation techniques are commonly used; they are listed as follows (a short R sketch follows the list):

• Log transforming of the data

• Taking the square root of the data

• Taking the cube root

• Proportional change
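A quick sketch of these transformations in R, assuming a strictly positive series (the data here are hypothetical):

set.seed(1)
x <- cumsum(abs(rnorm(100))) + 1   # hypothetical strictly positive series

x_log  <- log(x)                   # log transform
x_sqrt <- sqrt(x)                  # square root
x_cbrt <- x^(1/3)                  # cube root
x_prop <- diff(x) / head(x, -1)    # proportional change between consecutive points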
Autocorrelation
• Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself over successive time intervals.

• Autocorrelation measures the relationship between a variable's current value and its past values (a short sketch follows this list).

• An autocorrelation of +1 represents a perfect positive correlation, while an autocorrelation of -1 represents a perfect negative correlation.

• Technical analysts can use autocorrelation to measure how much influence past prices for a security have on its future price.
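As an illustrative sketch (simulated AR(1) data, not data from the slides), the lag-1 autocorrelation can be computed directly, or inspected across many lags with acf():

set.seed(1)
x <- as.numeric(arima.sim(model = list(ar = 0.7), n = 200))  # series with positive autocorrelation

cor(x[-1], x[-length(x)])   # lag-1 autocorrelation: close to +0.7 here
acf(x)                      # sample autocorrelation at successive lags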
Autocorrelation in Technical Analysis
• Although autocorrelation is usually something to correct for before further data analysis, it can still be useful in technical analysis, which looks for patterns in historical data. Autocorrelation analysis can be applied together with momentum factor analysis.

• Autocorrelation can be useful for technical analysis because technical analysis is most concerned with the trends of, and relationships between, security prices using charting techniques. This is in contrast with fundamental analysis, which focuses instead on a company's financial health or management.

• Technical analysts can use autocorrelation to figure out how much of an impact past prices
for a security have on its future price.

• Autocorrelation can help determine if there is a momentum factor at play with a given stock.
If a stock with a high positive autocorrelation posts two straight days of big gains, for
example, it might be reasonable to expect the stock to rise over the next two days, as well.
Autocorrelation Test
• The most common method of testing for autocorrelation is the Durbin-Watson test. Durbin-Watson detects autocorrelation in the residuals from a regression analysis.

• The Durbin-Watson statistic always lies in the range 0 to 4. Values closer to 0 indicate a greater degree of positive autocorrelation, values closer to 4 indicate a greater degree of negative autocorrelation, and values near the middle (around 2) suggest little autocorrelation (a minimal computation follows).
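The statistic has a simple closed form; here is a minimal sketch (the helper function name dw_stat is ours, not from any package):

# Durbin-Watson statistic from regression residuals e:
# d = sum((e[t] - e[t-1])^2) / sum(e[t]^2), which always lies in [0, 4]
dw_stat <- function(e) sum(diff(e)^2) / sum(e^2)

set.seed(1)
x <- rnorm(50); y <- 2 * x + rnorm(50)
e <- residuals(lm(y ~ x))
dw_stat(e)   # a value near 2 indicates little autocorrelation in these residuals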
Example of Autocorrelation
• Let's assume Mr. X is looking to determine whether a stock's returns in his portfolio exhibit autocorrelation; that is, whether the stock's returns relate to its returns in previous trading sessions.

• If the returns exhibit autocorrelation, Mr. X could characterize it as a momentum stock, because past returns seem to influence future returns.

• Mr. X runs a regression with the prior trading session's return as the independent variable and the current return as the dependent variable. He finds that returns one day prior have a positive autocorrelation of 0.8.

• Since 0.8 is close to +1, past returns seem to be a very good positive predictor of future returns for this particular stock.

• Therefore, Mr. X can adjust his portfolio to take advantage of the autocorrelation, or momentum, by continuing to hold his position or accumulating more shares.
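A sketch of the regression Mr. X runs, with simulated returns standing in for the portfolio data (the 0.8 autocorrelation is built into the simulation, so the slope should land near it):

set.seed(1)
returns <- as.numeric(arima.sim(model = list(ar = 0.8), n = 250))  # hypothetical daily returns

prev <- head(returns, -1)   # prior session's return (independent variable)
curr <- tail(returns, -1)   # current session's return (dependent variable)

summary(lm(curr ~ prev))    # slope estimate near 0.8, mirroring the example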
Self-Test
• Find the Durbin-Watson statistic for the data in Figure 1
Durbin-Watson Test in R
• We will use the dwtest() function available in the "lmtest" R package to perform the Durbin-Watson (DW) test. The dwtest() function takes the fitted regression model and returns the DW test statistic (d) and a p-value.
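A minimal sketch of that workflow (the regression and data are hypothetical; the slides do not show them):

library(lmtest)

set.seed(1)
x <- rnorm(50)
y <- 2 * x + rnorm(50)
fit <- lm(y ~ x)   # the fitted regression model to be tested

# Durbin-Watson test; H0: no autocorrelation in the residuals
dwtest(fit)        # prints the DW statistic (d) and a p-value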
Durbin-Watson Test in R
• INTERPRETATION

• In the Durbin-Watson critical values table, the lower and upper critical values are 1.24 (dL) and 1.43 (dU) for N = 22 at 5% significance. Since the Durbin-Watson test statistic (DW = 1.13) is below 1.24 (DW < dL), we reject the null hypothesis of no autocorrelation.

• The p-value obtained from the Durbin-Watson test is significant (d = 1.13, p = 0.005 < 0.05), so we reject the null hypothesis. Hence, we conclude that the residuals are autocorrelated.
Introduction to ARIMA
• ARIMA stands for auto-regressive integrated moving average. It's a way of modelling time series data for forecasting (i.e., for predicting future points in the series), in such a way that:

• a pattern of growth/decline in the data is accounted for (hence the "auto-regressive" part)

• the rate of change of the growth/decline in the data is accounted for (hence the "integrated" part)

• noise between consecutive time points is accounted for (hence the "moving average" part)
Introduction to ARIMA
• ARIMA, or Autoregressive Integrated Moving Average, is a set of models that explains a time series using its own previous values, given by its lags (Autoregressive), and lagged errors (Moving Average), while handling non-stationarity through differencing (the "Integrated" part). In other words, ARIMA assumes that the time series is described by autocorrelations in the data rather than by trends and seasonality. In this context, we define trends and seasonality as follows:

• Trend: A time series has a trend if there is an overall long-term increase or decrease in the data, which is not necessarily linear.

• Seasonality: A time series has seasonality when it is affected by seasonal factors such as the time of the year or the day of the week. Seasonality is apparent as a pattern that recurs at a fixed frequency.
Model Components
• As previously mentioned, ARIMA models are built from the following key components:

• AR: Autoregression. A model that uses the dependent relationship between an observation and some number of lagged observations.

• I: Integrated. The use of differencing of raw observations (e.g. subtracting an observation from the observation at the previous time step) in order to make the time series stationary (a one-line sketch follows this list).

• MA: Moving Average. A model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations.
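The differencing in the "I" component is a single call in R; a one-line sketch on simulated data:

set.seed(1)
x  <- cumsum(rnorm(100))   # non-stationary random walk
dx <- diff(x)              # first difference: x[t] - x[t-1], stationary here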
Model Components
• Each of these components is explicitly specified in the model as a parameter. A standard notation of ARIMA(p,d,q) is used, where the parameters are substituted with integer values to quickly indicate the specific ARIMA model being used:

• p: The number of lag observations included in the model, also called the lag order (deals with the window of Xt)

• d: The number of times that the raw observations are differenced, also called the degree of differencing (deals with the order of differencing of Xt)

• q: The size of the moving average window, also called the order of the moving average (deals with the residuals)

Given this, the general case of ARIMA(p,d,q) can be written as:

X_t = c + φ_1 X_(t-1) + … + φ_p X_(t-p) + θ_1 ε_(t-1) + … + θ_q ε_(t-q) + ε_t

Or in words:

Predicted X_t = Constant + Linear combination of lags of X (up to p lags) + Linear combination of lagged forecast errors (up to q lags), provided that the time series has already been differenced (up to d terms) to ensure stationarity.
Model parameters p, d, and q and Special Cases
• Before we discuss how to determine the p, d, and q that best represent a time series, let's first take a look at special cases of ARIMA models that help illustrate the formulation of the ARIMA equation.

• Case 1: ARIMA(p,0,0) = autoregressive model: if the series is stationary and autocorrelated, perhaps it can be predicted as a multiple of its own previous value, plus a constant.

• The forecasting equation for ARIMA(1,0,0) is:

X_t = c + φ_1 X_(t-1) + ε_t
Model parameters p, d, and q and Special Cases
• Case 2: ARIMA(0,0,q) = moving average model: if the series is stationary but is correlated with the errors of previous values, we can regress on the past forecast errors.

• The forecasting equation for ARIMA(0,0,1) is given by:

X_t = c + ε_t + θ_1 ε_(t-1)
Model parameters p, d, and q and Special Cases
• Case 3: ARIMA(0,1,0) = random walk: if the series is non-stationary, then the simplest model that we can use is a random walk model, which is given by:

X_t = X_(t-1) + ε_t
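All three special cases can be fit with base R's arima() by setting (p,d,q) accordingly (the data below are simulated; the slides show no dataset):

set.seed(1)
x <- as.numeric(arima.sim(model = list(ar = 0.6), n = 200))  # simulated series

fit_ar <- arima(x, order = c(1, 0, 0))   # Case 1: AR(1), i.e. ARIMA(1,0,0)
fit_ma <- arima(x, order = c(0, 0, 1))   # Case 2: MA(1), i.e. ARIMA(0,0,1)
fit_rw <- arima(x, order = c(0, 1, 0))   # Case 3: random walk, ARIMA(0,1,0)

predict(fit_ar, n.ahead = 5)             # five-step-ahead forecast from the AR(1) fit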
