Mixed Analysis of Variance Models With SPSS
1
Outline
1. Classification of Effects
2. Random Effects
   1. Two-Way Random Layout
   2. Solutions and estimates
3. General linear model
   1. Fixed Effects Models
      1. The one-way layout
4. Mixed Model theory
   1. Proper error terms
5. Two-way layout
6. Full-factorial model
   1. Contrasts with interaction terms
   2. Graphing Interactions
2
Outline-Cont’d
3
Definition of Mixed Models by Their Component Effects
1. Mixed models contain both fixed and random effects.
2. Fixed effects: factors for which the only levels under consideration are contained in the coding of those effects.
3. Random effects: factors for which the levels contained in the coding of those factors are a random sample of the total number of levels in the population for that factor.
4
Examples of Fixed and Random Effects
1. Fixed effects:
   1. Sex: both male and female are included in the factor sex.
   2. Age group: minor and adult are both included in the factor age group.
2. Random effects:
   1. Subject: the sample is a random sample of the target population.
5
Classification of effects
1. There are main effects: linear explanatory factors.
2. There are interaction effects: joint effects over and above the component main effects.
6
Interactions are Crossed Effects
All of the cells are filled: each level of X is crossed with each level of Y.

             Variable Y
Variable X
  Level 1    X11  X12  X13  X14
  Level 2    X21  X22  X23  X24
7
Classification of Effects-cont'd
Hierarchical designs have nested effects. Nested effects are those with subjects within groups.
An example would be patients nested within doctors and doctors nested within hospitals.
This could be expressed as patients(doctors) and doctors(hospitals); a syntax sketch follows below.
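As a rough illustration, this nesting can be specified in SPSS MIXED syntax by stacking SUBJECT specifications. This is a minimal sketch, assuming an outcome variable named outcome and identifier variables hospital and doctor (all names illustrative):

* Random intercepts for hospitals and for doctors nested within hospitals.
MIXED outcome
  /RANDOM=INTERCEPT | SUBJECT(hospital) COVTYPE(VC)
  /RANDOM=INTERCEPT | SUBJECT(hospital*doctor) COVTYPE(VC)
  /METHOD=REML
  /PRINT=SOLUTION TESTCOV.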
8
Nesting of patients within Doctors and Doctors within Hospitals
[Diagram: patients nested within doctors, and doctors nested within Hospital 1 and Hospital 2]
9
Between- and Within-Subjects Effects
• Such effects may sometimes be fixed or random. Their classification depends on the experimental design.
• Between-subjects effects are those for which a subject is in one group or another, but not in both. Experimental group is a fixed effect because the manager is considering only those groups in his experiment. One group is the experimental group and the other is the control group. Therefore, this grouping is a fixed between-subjects effect.
10
Between-Subjects Effects
• Gender: one is either male or female, but not both.
• Group: one is either in the control, experimental, or comparison group, but not more than one.
11
Within-Subjects Effects
12
Repeated Observations are Within-Subjects Effects
Repeated measures design: the same experimental group is measured at pre-test, post-test, and follow-up.
y_{ij} = \mu + \alpha_i + \beta_j + \varepsilon_{ij}

where
y_{ij} = observation for the ith level of \alpha and the jth level of \beta
\mu = grand mean (an unknown fixed parameter)
\alpha_i = effect of the ith value of \alpha (a_i)
\beta_j = effect of the jth value of \beta (b_j)
\varepsilon_{ij} = experimental error \sim N(0, \sigma^2)
14
A factorial model
y_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + e_{ij}
The interaction or crossed effect is the joint effect, over and
above the individual main effects. Therefore, the main effects
must be in the model for the interaction to be properly
specified.
(\alpha\beta)_{ij} = E(y_{ij}) - (\mu + \alpha_i + \beta_j)
15
Higher-Order
Interactions
If 3-way interactions are in the
model, then the main effects
and all lower order interactions
must be in the model for the 3-
way interaction to be properly
specified. For example, a
3-way interaction model would
be:
y_{ijk} = \mu + a_i + b_j + c_k + (ab)_{ij} + (ac)_{ik} + (bc)_{jk} + (abc)_{ijk} + e_{ijk}
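As a sketch of how a full three-way factorial of this form might be requested with SPSS syntax (the variable names y, a, b, and c are illustrative):

* Full factorial with all main effects and lower- and higher-order interactions.
UNIANOVA y BY a b c
  /METHOD=SSTYPE(3)
  /DESIGN=a b c a*b a*c b*c a*b*c.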
16
The General Linear
Model
• In matrix terminology, the
general linear model may be
expressed as
Y = X\beta + \varepsilon

where
Y = the observed data vector
X = the design matrix
\beta = the vector of unknown fixed effect parameters
\varepsilon = the vector of errors
17
Assumptions
E(\varepsilon) = 0
\mathrm{var}(\varepsilon) = \sigma^2 I
\mathrm{var}(Y) = \sigma^2 I
E(Y) = X\beta
18
General Linear Model
Assumptions-cont’d
1. Residual Normality.
2. Homogeneity of error variance
3. Functional form of Model:
Linearity of Model
4. No Multicollinearity
5. Independence of observations
6. No autocorrelation of errors
7. No influential outliers
We have to test for these to be sure that the model is
valid.
We will discuss the robustness of the model in face
of violations of these assumptions.
We will discuss recourses when these assumptions are violated.
19
Explanation of these
assumptions
1. Functional form of Model: Linearity of
Model: These models only analyze the
linear relationship.
2. Independence of observations
3. Representativeness of sample
4. Residual Normality: So the alpha
regions of the significance tests are
properly defined.
5. Homogeneity of error variance: So the
confidence limits may be easily found.
6. No Multicollinearity: multicollinearity prevents efficient
estimation of the parameters.
7. No autocorrelation of errors:
Autocorrelation inflates the R², F, and t
tests.
8. No influential outliers: They bias the
parameter estimation.
20
Diagnostic tests for these
assumptions
1. Functional form of Model: Linearity of Model: pair plot
2. Independence of observations: runs test
3. Representativeness of sample: inquire about sample design
4. Residual Normality: SK or SW test
5. Homogeneity of error variance: graph of ZRESID * ZPRED
6. No Multicollinearity: correlation of the X's
7. No autocorrelation of errors: ACF
8. No influential outliers: leverage and Cook's D (a syntax sketch follows below)
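Several of these diagnostics can be requested from SPSS's REGRESSION procedure. A minimal sketch, assuming a dependent variable y and predictors x1 and x2 (names illustrative):

* Residual plot, Durbin-Watson, and saved outlier statistics.
REGRESSION
  /DEPENDENT y
  /METHOD=ENTER x1 x2
  /SCATTERPLOT=(*ZRESID ,*ZPRED)
  /RESIDUALS DURBIN HISTOGRAM(ZRESID)
  /SAVE SRESID LEVER COOK.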
21
Testing for outliers
22
Studentized Residuals

e_i^s = \frac{e_i}{\sqrt{s_{(i)}^2 (1 - h_i)}}

where
e_i^s = studentized residual
s_{(i)} = standard deviation with the ith observation deleted
h_i = leverage statistic
23
Influence of Outliers
24
Leverage and the Hat
matrix
1. The hat matrix transforms Y into the predicted scores (see the formula below).
2. The diagonals of the hat matrix indicate which observations will be outliers or not.
3. The diagonals are therefore measures of leverage.
4. Leverage is bounded by two limits: 1/n and 1. The closer the leverage is to unity, the more leverage the value has.
5. The trace of the hat matrix = the number of parameters in the model.
6. When the leverage > 2p/n, there is high leverage, according to Belsley et al. (1980), cited in Long, J. F., Modern Methods of Data Analysis (p. 262). For smaller samples, Velleman and Welsch (1981) suggested 3p/n as the criterion.
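For reference, the hat matrix described in the list above is the usual projection matrix (a standard result, not shown on the original slide):

H = X(X'X)^{-1}X', \qquad \hat{Y} = HY, \qquad h_i = H_{ii}, \qquad \mathrm{trace}(H) = p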
25
Cook’s D
1. Another measure of
influence.
2. This is a popular one. The
formula for it is:
\text{Cook's } D_i = \frac{1}{p} \cdot \frac{h_i}{1 - h_i} \cdot \frac{e_i^2}{s^2 (1 - h_i)}
26
Cook’s D in SPSS
27
What to do with outliers
28
Decomposition of the
Sums of Squares
1. Mean deviations are computed
when means are subtracted from
individual scores.
1. This is done for the total, the
group mean, and the error terms.
2. Mean deviations are squared and
these are called sums of squares
3. Variances are computed by
dividing the Sums of Squares by
their degrees of freedom.
4. The total Variance = Model
Variance + error variance
29
Formula for Decomposition
of Sums of Squares
y_{ij} - \bar{y}_{..} = (y_{ij} - \bar{y}_{.j}) + (\bar{y}_{.j} - \bar{y}_{..})
(total effect) = (error within) + (model effect)
30
Variance
Decomposition
Dividing each of the sums of
squares by their respective
degrees of freedom yields the
variances.
\frac{SS_{total}}{n-1}, \quad \frac{SS_{error}}{n-k}, \quad \frac{SS_{model}}{k-1}

Total variance = error variance + model variance.

F = \frac{\text{model variance}}{\text{error variance}} \quad \text{in fixed effects models}
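A small worked example with made-up numbers: suppose SS_model = 40 with k - 1 = 2 degrees of freedom and SS_error = 60 with n - k = 30 degrees of freedom. Then:

F = \frac{40/2}{60/30} = \frac{20}{2} = 10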
31
Proportion of Variance
Explained
R² = proportion of variance explained.
SS_total = SS_model + SS_error
Divide both sides by SS_total:
SS_model/SS_total = 1 - SS_error/SS_total
R² = 1 - SS_error/SS_total
32
The Omnibus F test
33
Testing different Levels of
a Factor against one
another
• Contrasts are tests of the mean of one level of a factor against other levels.

H_0: \mu_1 = \mu_2 = \mu_3

H_a: \mu_1 \neq \mu_2, \quad \mu_2 \neq \mu_3, \quad \mu_1 \neq \mu_3
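Pairwise comparisons of factor levels of this kind can be requested with the EMMEANS subcommand. A minimal sketch, assuming an outcome score and a three-level factor group (names illustrative):

* Compare the estimated marginal means of the three group levels.
MIXED score BY group
  /FIXED=group | SSTYPE(3)
  /METHOD=REML
  /EMMEANS=TABLES(group) COMPARE(group) ADJ(BONFERRONI).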
34
Contrasts-cont’d
F = \frac{\hat{\beta}' L (L' \hat{V} L)^{-1} L' \hat{\beta}}{\mathrm{rank}(L)}
35
Construction of the F
tests in different models
The F test is a ratio of two variances (mean squares). It is constructed by dividing the MS of the effect to be tested by the MS of the appropriate denominator term. The division should leave only the effect being tested as the remainder.
36
Data format
37
Data Format for Mixed
Models is Long
38
Conversion of Wide to
Long Data Format
• Click on Data in the header bar
• Then click on Restructure in the drop-down menu
39
A restructure wizard
appears
Select restructure selected variables into cases
and click on Next
40
A Variables to Cases: Number of
Variable Groups dialog box appears.
We select one and click on next.
41
We select the repeated
variables and move them
to the target variable box
42
After moving the repeated variables into the target variable
box, we move the fixed variables into the Fixed variable
box, and select a variable for case id—in this case, subject.
Then we click on Next
43
A create index variables dialog box
appears. We leave the number of index
variables to be created at one and click on
next at the bottom of the box
44
When the following box
appears we just type in
time and select Next.
45
When the options dialog box appears, we select the
option for dropping variables not selected.
We then click on Finish.
46
We thus obtain our data
in long format
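The same wide-to-long restructuring can be done with syntax via VARSTOCASES. A minimal sketch, assuming repeated measures named pretest, posttest, and followup, an identifier subject, and a fixed variable group (all names illustrative):

* Stack the three repeated measures into one variable with a time index.
VARSTOCASES
  /MAKE score FROM pretest posttest followup
  /INDEX=time(3)
  /KEEP=subject group
  /NULL=KEEP.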
47
The Mixed Model
48
The Mixed Model
y = X\beta + Zu + \varepsilon

where
\beta = fixed-effects parameter estimates
X = fixed-effects design matrix
u = random-effects parameter estimates
Z = random-effects design matrix
\varepsilon = errors

E\begin{bmatrix} u \\ \varepsilon \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\qquad
\mathrm{Var}\begin{bmatrix} u \\ \varepsilon \end{bmatrix} = \begin{bmatrix} G & 0 \\ 0 & R \end{bmatrix}
51
Random Effects
Covariance Structure
• This defines the structure of
the G matrix, the random
effects, in the mixed model.
• Possible structures permitted
by current version of SPSS:
– Scaled Identity
– Compound Symmetry
– AR(1)
– Huynh-Feldt
52
Structures of Repeated
effects (R matrix)-cont’d
Variance Components:

\begin{bmatrix}
\sigma_1^2 & 0 & 0 & 0 \\
0 & \sigma_2^2 & 0 & 0 \\
0 & 0 & \sigma_3^2 & 0 \\
0 & 0 & 0 & \sigma_4^2
\end{bmatrix}

AR(1):

\sigma^2
\begin{bmatrix}
1 & \rho & \rho^2 & \rho^3 \\
\rho & 1 & \rho & \rho^2 \\
\rho^2 & \rho & 1 & \rho \\
\rho^3 & \rho^2 & \rho & 1
\end{bmatrix}

Compound Symmetry:

\begin{bmatrix}
\sigma^2 + \sigma_1^2 & \sigma_1^2 & \sigma_1^2 & \sigma_1^2 \\
\sigma_1^2 & \sigma^2 + \sigma_1^2 & \sigma_1^2 & \sigma_1^2 \\
\sigma_1^2 & \sigma_1^2 & \sigma^2 + \sigma_1^2 & \sigma_1^2 \\
\sigma_1^2 & \sigma_1^2 & \sigma_1^2 & \sigma^2 + \sigma_1^2
\end{bmatrix}
53
Structures of Repeated
Effects (R matrix)
Huynh-Feldt:

\begin{bmatrix}
\sigma_1^2 & \frac{\sigma_1^2 + \sigma_2^2}{2} - \lambda & \frac{\sigma_1^2 + \sigma_3^2}{2} - \lambda \\
\frac{\sigma_2^2 + \sigma_1^2}{2} - \lambda & \sigma_2^2 & \frac{\sigma_2^2 + \sigma_3^2}{2} - \lambda \\
\frac{\sigma_3^2 + \sigma_1^2}{2} - \lambda & \frac{\sigma_3^2 + \sigma_2^2}{2} - \lambda & \sigma_3^2
\end{bmatrix}
54
Structures of Repeated
effects (R matrix) –con’td
Unstructured:

\begin{bmatrix}
\sigma_1^2 & \sigma_1\sigma_2\rho_{12} & \sigma_1\sigma_3\rho_{13} \\
\sigma_2\sigma_1\rho_{21} & \sigma_2^2 & \sigma_2\sigma_3\rho_{23} \\
\sigma_3\sigma_1\rho_{31} & \sigma_3\sigma_2\rho_{32} & \sigma_3^2
\end{bmatrix}
55
The R matrix defines the correlation among repeated random effects

R =
\begin{bmatrix}
\sigma_1^2 & \sigma_1 & \cdots & \sigma_1 & \sigma_1 \\
\sigma_1 & \sigma_1^2 & \cdots & \sigma_1 & \sigma_1 \\
\vdots & & \ddots & & \vdots \\
\sigma_1 & \sigma_1 & \cdots & \sigma_1^2 & \sigma_1 \\
\sigma_1 & \sigma_1 & \cdots & \sigma_1 & \sigma_1^2
\end{bmatrix}
56
GLM Mixed Model
57
Mixed Analysis of a Fixed
Effects model
SPSS tests these fixed effects just as it does with the GLM
Procedure with type III sums of squares.
\theta_{i+1} = \theta_i - s H^{-1} g

where
g = gradient (vector of first derivatives)
H = Hessian matrix of second derivatives
s = increment of step parameter
59
Estimation: Minimization
of the objective functions
ML(G, R):    \frac{1}{2}\log|V| + \frac{n}{2}\log(r'V^{-1}r) + \frac{n}{2}\bigl(1 + \log(2\pi/n)\bigr)

REML(G, R):  \frac{1}{2}\log|V| + \frac{1}{2}\log|X'V^{-1}X| + \frac{n-p}{2}\log(r'V^{-1}r) + \frac{n-p}{2}\bigl(1 + \log(2\pi/(n-p))\bigr)

where r = y - X(X'V^{-1}X)^{-}X'V^{-1}y
      p = rank of X
60
Significance of
Parameters
L is a linear combination

H_0: L\beta = 0

t = \frac{L\hat{\beta}}{\sqrt{L C L'}}

where

C = \begin{bmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{bmatrix}^{-1}
61
Test one covariance
structure against the other
with the IC
• The rule of thumb is smaller is
better
• -2LL
• AIC Akaike
• AICC Hurvich and Tsai
• BIC Bayesian Info Criterion
• Bozdogan’s CAIC
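In syntax, this amounts to fitting the same model twice with different COVTYPE keywords and comparing the resulting -2LL, AIC, and BIC. A minimal sketch, assuming outcome score, repeated factor time, and identifier subject (names illustrative):

* Fit with compound symmetry, then with AR(1); compare the information criteria.
MIXED score BY time
  /FIXED=time | SSTYPE(3)
  /REPEATED=time | SUBJECT(subject) COVTYPE(CS)
  /METHOD=REML.
MIXED score BY time
  /FIXED=time | SSTYPE(3)
  /REPEATED=time | SUBJECT(subject) COVTYPE(AR1)
  /METHOD=REML.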
62
Measures of Lack of fit:
The information Criteria
-2LL is called the deviance. It is a measure of the sum of squared errors.
AIC = -2LL + 2p (p = # of parameters)
BIC = Schwarz Bayesian Information Criterion = -2LL + p log(n)
AICC = Hurvich and Tsai's small-sample correction of AIC: -2LL + 2p(n/(n-p-1))
CAIC = -2LL + p(log(n) + 1)
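A quick arithmetic check with made-up numbers (natural log): if -2LL = 200, p = 4, and n = 100, then:

AIC = 200 + 2(4) = 208
BIC = 200 + 4 log(100) ≈ 218.4
AICC = 200 + 2(4)(100/95) ≈ 208.4
CAIC = 200 + 4(log(100) + 1) ≈ 222.4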
63
Procedures for Fitting the
Mixed Model
• One can use the LR test or compare the information criteria. The smaller the information criterion, the better the model.
• We try to go from a larger to a smaller information criterion when we fit the model.
64
LR test
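The original slide's content (output) is not reproduced here. As a reminder of the underlying computation (a standard result, not taken from the slide), the likelihood-ratio test for two nested covariance structures is:

LR = (-2LL_{reduced}) - (-2LL_{full}) \sim \chi^2_{df}, \qquad df = \text{difference in the number of covariance parameters}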
69
Click on continue
We specify subjects and
repeated effects with the
next dialog box
71
Click on continue
We select the Fixed
effects to be tested
72
Move them into the model box,
selecting main effects, and type III
sum of squares
Click on continue
73
When the Linear Mixed
Models dialog box
appears, select random
74
Under random effects, select
scaled identity as covariance
type and move subjects over into
combinations
Click on continue
75
Select Statistics and check off the following options in the dialog box that appears.
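The dialog-box choices above correspond roughly to the following MIXED syntax. This is a sketch, assuming a long-format dataset with outcome score, within-subjects factor time, between-subjects factor group, an identifier subject, and a random intercept for subjects (all names illustrative):

* Mixed model with fixed main effects, a random subject intercept, and AR(1) repeated effects.
MIXED score BY group time
  /FIXED=group time | SSTYPE(3)
  /RANDOM=INTERCEPT | SUBJECT(subject) COVTYPE(ID)
  /REPEATED=time | SUBJECT(subject) COVTYPE(AR1)
  /METHOD=REML
  /PRINT=SOLUTION TESTCOV G R.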
77
You will get your tests
78
Estimates of Fixed effects
and covariance
parameters
79
R matrix
80
Rerun the model with
different nested covariance
structures and compare the
information criteria
81
GLM vs. Mixed

GLM has:
• means
• lsmeans
• SS types 1, 2, 3, and 4
• estimates using OLS or WLS
• one has to program the correct F tests for random effects
• loses cases with missing values

Mixed has:
• lsmeans
• SS types 1 and 3
• estimates using maximum likelihood (ML), a method-of-moments estimator (MIVQUE0), or restricted maximum likelihood (REML)
• gives correct standard errors and confidence intervals for random effects
• automatically provides correct standard errors for the analysis
• can handle missing values
82