Chapter 5
Regression analysis helps us to examine how the typical value of the dependent variable changes
when any one of the independent variables changes, while the others are held
constant (ceteris paribus).
The first step is choosing the dependent variable; this step is determined by
the purpose of the research.
After choosing the dependent variable, it is logical to proceed in the following
sequence:
1. Review the literature and develop the theoretical model
2. Specify the model: Select the independent variables and the
functional form
3. Hypothesize the expected signs of the coefficients
4. Collect the data. Inspect and clean the data
5. Estimate and evaluate the equation
6. Document the results
5.2 Multiple Regression Analysis of Cross-sectional Data
See the difference between the regression results obtained using Stata and EViews below.
• Note: If not, transform the variables into logs or first differences, repeat the
same steps, and check whether the probability values are still insignificant
(a minimal transformation sketch in Python follows below).
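As a rough illustration of these transformations, here is a minimal Python/pandas sketch; the series y and x1 and their values are hypothetical, not taken from the workfile used in these slides.

import numpy as np
import pandas as pd

# hypothetical data for illustration only
df = pd.DataFrame({
    "y":  [12.1, 13.4, 15.0, 16.2, 18.9, 21.3, 24.8, 27.1],
    "x1": [3.2, 3.5, 4.1, 4.4, 5.0, 5.6, 6.3, 6.9],
})

df["lny"]  = np.log(df["y"])    # log transformation, analogous to lny=log(y) in EViews
df["dy"]   = df["y"].diff()     # first difference: y(t) - y(t-1)
df["dlny"] = df["lny"].diff()   # log difference, an approximate growth rate

print(df)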
EViews test output (comparing the restricted and unrestricted equations):

                     Value       df      Probability
t-statistic          1.059549    5       0.3378
F-statistic          1.122645    (1, 5)  0.3378
Likelihood ratio     2.025563    1       0.1547

F-test summary:
                     Sum of Sq.  df      Mean Squares
Test SSR             9.518767    1       9.518767
Restricted SSR       51.91314    6       8.652191
Unrestricted SSR     42.39438    5       8.478875

LR test summary:
                     Value       df
Restricted LogL      -22.42432   6
Unrestricted LogL    -21.41154   5
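As a quick arithmetic check (using only standard textbook formulas, nothing beyond the table above), the reported F and likelihood-ratio statistics can be reproduced from the restricted and unrestricted sums of squares and log-likelihoods:

# verify the F and LR statistics in the table above
restricted_ssr   = 51.91314    # restricted model, 6 degrees of freedom
unrestricted_ssr = 42.39438    # unrestricted model, 5 degrees of freedom
q = 1                          # number of restrictions tested

F = ((restricted_ssr - unrestricted_ssr) / q) / (unrestricted_ssr / 5)
print(F)    # about 1.1226, matching the reported F-statistic

restricted_logl   = -22.42432
unrestricted_logl = -21.41154
LR = 2 * (unrestricted_logl - restricted_logl)
print(LR)   # about 2.0256, matching the reported likelihood ratio

Because only one restriction is tested, the squared t-statistic equals the F-statistic: 1.059549^2 ≈ 1.1226.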
Test for multicollinearity
Steps: Estimate the equation, go to View, Coefficient Diagnostics, Variance Inflation Factors.
Null hypothesis, Ho: no multicollinearity
Alternative hypothesis, Ha: multicollinearity
• If multicollinearity is a problem, one remedy is to obtain more data and rerun the
regression using the same procedure as before (a VIF sketch in Python follows below).
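Outside EViews, the same variance inflation factors can be computed with statsmodels. This is a minimal sketch on hypothetical regressors x1, x2 and x3; the names and numbers are illustrative only.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# hypothetical regressors for illustration only
df = pd.DataFrame({
    "x1": [3.2, 3.5, 4.1, 4.4, 5.0, 5.6, 6.3, 6.9, 7.2, 7.8],
    "x2": [1.1, 1.3, 1.2, 1.6, 1.9, 2.2, 2.1, 2.5, 2.8, 3.0],
    "x3": [0.4, 0.6, 0.5, 0.9, 0.8, 1.1, 1.3, 1.2, 1.6, 1.7],
})

X = sm.add_constant(df)   # add the intercept column
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)   # a common rule of thumb treats VIF > 10 as a sign of serious multicollinearity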
Original regression: Y = β0 + β1X1 + β2X2 + β3X3 + u
Auxiliary regression on the squared residuals (saved as RSS): RSS = θ0 + θ1X1 + θ2X2 + θ3X3 + v
One such method is the Breusch-Pagan-Godfrey LM test for heteroskedasticity:
H0: θ1 = θ2 = θ3 = 0 (homoskedasticity)
Ha: at least one θj ≠ 0 (heteroskedasticity)
Steps: Go to Proc and save the residual series, generate RSS (the squared residuals),
regress RSS on the auxiliary variables, and read off the results.
The 5% critical value is χ²(k - 1) = χ²(4 - 1) = χ²(3) = 7.8147, which EViews computes with:
chis=@qchisq(.95,3)
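The same Breusch-Pagan-Godfrey LM test can be run in Python with statsmodels; the sketch below uses simulated stand-ins for y, X1, X2 and X3, so only the workflow (not the numbers) carries over.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy.stats import chi2

# simulated stand-ins for the workfile variables
rng = np.random.default_rng(0)
n = 10
X = pd.DataFrame({"x1": rng.normal(5, 1, n),
                  "x2": rng.normal(3, 1, n),
                  "x3": rng.normal(1, 1, n)})
y = 2 + 0.5 * X["x1"] + 0.3 * X["x2"] - 0.2 * X["x3"] + rng.normal(0, 1, n)

Xc = sm.add_constant(X)
ols = sm.OLS(y, Xc).fit()

# LM test: regress the squared residuals on the regressors (the auxiliary regression above)
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(ols.resid, Xc)
print(lm_stat, lm_pval)

# 5% critical value used in the slide, the same number as @qchisq(.95,3) in EViews
print(chi2.ppf(0.95, 3))    # 7.8147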
As a remedy (weighted least squares), choose the 'Inverse std. dev.' weight type and write
your prior information, say X2^(-.5), in the weight series box.
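A minimal weighted least squares sketch of this remedy, again on hypothetical data, where the error variance is assumed to grow with X2 so that a weight series X2^(-0.5) (on the standard-deviation scale) is appropriate:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10
x2 = rng.uniform(1, 5, n)
x1 = rng.normal(5, 1, n)
# error standard deviation proportional to sqrt(x2), purely for illustration
y = 2 + 0.5 * x1 + 0.3 * x2 + rng.normal(0, 1, n) * np.sqrt(x2)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
# statsmodels takes variance weights, so 1/x2 corresponds to the X2^(-0.5) weight series
wls = sm.WLS(y, X, weights=1.0 / x2).fit()
print(wls.params)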
[EViews test equation output header: Dependent Variable: RESID^2; Method: Least Squares; Sample: 1 10; Included observations: 10]
[EViews main equation output header: Dependent Variable: Y; Method: Least Squares; Sample: 1 10; Included observations: 10]
5.3 Time Series Data Analysis
Time series data consist of observations generated over time.
Time series regression can be problematic because of non-stationarity.
Two approaches to time series (cointegration) analysis are the Johansen
cointegration test and the ARDL bounds test for cointegration.
5.3.1 The Johansen cointegration approach
After importing the raw data into EViews, we can convert the variables into
logarithms by following these steps:
Go to Quick,
Open Generate Series by Equation,
Write your equation in the box that appears, e.g. lny=log(y), and
Click OK.
A Python sketch of the log transformation and the Johansen test follows below.
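For comparison, a hedged Python sketch of the same workflow: convert the series to logs and apply the Johansen cointegration test from statsmodels. The series y and x and their values are simulated for illustration, and the choices det_order=0 (constant) and k_ar_diff=1 (one lagged difference) are assumptions, not taken from the slides.

import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# two simulated series sharing a common stochastic trend
rng = np.random.default_rng(2)
n = 40
trend = np.cumsum(rng.normal(size=n))
y = np.exp(10 + 0.1 * trend + rng.normal(scale=0.05, size=n))
x = np.exp(8 + 0.1 * trend + rng.normal(scale=0.05, size=n))

data = pd.DataFrame({"lny": np.log(y), "lnx": np.log(x)})   # lny=log(y), as in EViews

result = coint_johansen(data, det_order=0, k_ar_diff=1)
print("Trace statistics:   ", result.lr1)
print("5% critical values: ", result.cvt[:, 1])   # columns hold the 90%, 95% and 99% values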
[Figure: CUSUM stability test with 5% significance bounds]
To see structural breaks in the data trend, draw the graph. Steps: Select the
variables, Open as Group, go to View, Graph; under the graph options choose the basic
type 'Line & Symbol' and click OK. You get the graph shown below.
[Figure: line graph of the series over 1985-2020, vertical axis roughly 20.5 to 24.5]
From the trend, there is a structural break at year 1992.
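Outside EViews, the same visual check can be reproduced with matplotlib; in this sketch the yearly values are simulated with a deliberate level shift in 1992, so only the plotting approach (not the data) mirrors the figure above.

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1985, 2021)
rng = np.random.default_rng(3)
# simulated log-scale series with a level shift from 1992 onwards
series = 21 + 0.05 * (years - 1985) + np.where(years >= 1992, 0.8, 0.0) + rng.normal(0, 0.1, years.size)

plt.plot(years, series, marker="o")   # the "Line & Symbol" style chosen in the EViews graph options
plt.axvline(1992, linestyle="--")     # candidate break year
plt.xlabel("Year")
plt.title("Visual check for a structural break")
plt.show()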
According to the Hausman test (reported in EViews under 'Correlated Random Effects'),
we do not reject the null hypothesis, because the chi-square statistic for the
cross-section random effects is insignificant at any conventional level of significance;
we therefore choose random effects over fixed effects (a sketch of the underlying
statistic follows below).
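For intuition, the Hausman statistic behind this decision can be written out directly. In the numpy sketch below, the fixed-effects and random-effects estimates and their covariance matrices are hypothetical placeholders, not the values from the EViews output.

import numpy as np
from scipy.stats import chi2

# hypothetical slope estimates and covariance matrices
b_fe = np.array([0.52, 0.31])                        # fixed effects
b_re = np.array([0.50, 0.30])                        # random effects
V_fe = np.array([[0.010, 0.002], [0.002, 0.008]])
V_re = np.array([[0.006, 0.001], [0.001, 0.005]])

diff = b_fe - b_re
H = diff @ np.linalg.inv(V_fe - V_re) @ diff         # Hausman chi-square statistic
p_value = chi2.sf(H, df=diff.size)

print(H, p_value)   # a large p-value, as here, favours random effects over fixed effects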