Multivariate Exams

multivariate statistics


Multivariate analysis refers to statistical techniques that simultaneously examine three or more variables in relation to the subject under investigation, with the aim of identifying or clarifying the relationships between them. It is a set of techniques for analysing data sets that contain more than one variable, and these techniques are especially valuable when working with correlated variables. They provide empirical methods for information extraction, regression, and classification, and are used to find patterns and correlations across multiple factors by analysing two or more variables at once.

MULTIPLE DISCRIMINANT ANALYSIS by DR. HEMAL PANDYA

INTRODUCTION
• The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936.
• Discriminant analysis is a dependence technique.
• Discriminant analysis is used to predict group membership.
• The technique classifies individuals/objects into one of several alternative groups on the basis of a set of predictor (independent) variables.
• The dependent variable in discriminant analysis is categorical and measured on a nominal scale, whereas the independent variables are interval or ratio scaled.
• When the dependent variable has two groups (categories), it is a case of two-group discriminant analysis.
• When the dependent variable has more than two groups (categories), it is a case of multiple discriminant analysis.
INTRODUCTION
• Discriminant analysis is applicable in situations in which the total sample can be divided into groups based on a non-metric dependent variable.
• Example: male-female; high-medium-low
• The primary objectives of multiple discriminant analysis are to understand group differences and to predict the likelihood that an entity (individual or object) will belong to a particular class or group based on several independent variables.
ASSUMPTIONS OF DISCRIMINANT ANALYSIS
• NO Multicollinearity
• Multivariate Normality
• Independence of Observations
• Homoscedasticity
• No Outliers
• Adequate Sample Size
• Linearity (for LDA)
ASSUMPTIONS OF DISCRIMINANT ANALYSIS
• The assumptions of discriminant analysis are as under. The analysis is quite sensitive to outliers, and the size of the smallest group must be larger than the number of predictor variables.
• No multicollinearity: If one of the independent variables is very highly correlated with another, or one is a function (e.g., the sum) of other independents, then the tolerance value for that variable will approach 0 and the matrix will not have a unique discriminant solution. There must also be low multicollinearity among the independents. To the extent that the independents are correlated, the standardized discriminant function coefficients will not reliably assess the relative importance of the predictor variables. Predictive power can decrease with increased correlation between predictor variables. Logistic regression may offer an alternative to DA, as it usually involves fewer violations of assumptions.
• Multivariate normality: Independent variables are normal for each level of the grouping variable. It is assumed that the data (for the variables) represent a sample from a multivariate normal distribution. You can examine whether or not variables are normally distributed with histograms of frequency distributions. However, note that violations of the normality assumption are not "fatal" and the resultant significance tests are still reliable as long as non-normality is caused by skewness and not by outliers (Tabachnick and Fidell 1996).
• Independence: Participants are assumed to be randomly sampled, and a participant's score on one variable is assumed to be independent of scores on that variable for all other participants. It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions, and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).
ASSUMPTIONS UNDERLYING DISCRIMINANT ANALYSIS
• Sample size: Unequal sample sizes are acceptable, but the sample size of the smallest group needs to exceed the number of predictor variables. As a rule of thumb, the smallest group should have at least 20 cases for a few (4 or 5) predictors. The maximum number of independent variables is n - 2, where n is the sample size. While this low sample size may work, it is not encouraged; generally it is best to have 4 or 5 times as many observations as independent variables.
• Homogeneity of variance/covariance (homoscedasticity): Variances among group variables are the same across levels of predictors. This can be tested with Box's M statistic. It has been suggested, however, that linear discriminant analysis be used when covariances are equal, and that quadratic discriminant analysis may be used when covariances are not equal. DA is very sensitive to heterogeneity of variance-covariance matrices. Before accepting final conclusions for an important study, it is a good idea to review the within-groups variances and correlation matrices. Homoscedasticity is evaluated through scatterplots and corrected by transformation of variables.
ASSUMPTIONS UNDERLYING DISCRIMINANT ANALYSIS
• No outliers: DA is highly sensitive to the inclusion of outliers. Run a test for univariate and multivariate outliers for each group, and transform or eliminate them. If one group in the study contains extreme outliers that impact the mean, they will also increase variability. Overall significance tests are based on pooled variances, that is, the average variance across all groups. Thus, the significance tests of the relatively larger means (with the large variances) would be based on the relatively smaller pooled variances, resulting erroneously in statistical significance.
• DA is fairly robust to violations of most of these assumptions, but it is highly sensitive to violations of multivariate normality and to outliers.

Example
• Heavy product users from light users
• Males from females
• National brand buyers from private label buyers
• Good credit risks from poor credit risks
Objectives
• To find the linear combinations of variables that discriminate
between categories of dependent variable in the best possible
manner.
• To find out which independent variables are relatively better in
discriminating between groups.
• To determine the statistical significance of the discriminant function
and whether any statistical difference exists among groups in
terms of the predictor variables.
• To evaluate the accuracy of classification, i.e., the percentage of
cases that the model is able to classify correctly.
Discriminant Analysis Model
• The mathematical form of the discriminant analysis model is:
Y = b0 + b1X1 + b2X2 + b3X3 + … + bkXk + ε
where, Y = dependent variable
bs = coefficients of the independent variables
Xs = predictor or independent variables
• Y should be a categorical variable, coded as 0, 1 (or 1, 2, 3), similar to a dummy variable.
• The X variables should be continuous.
• The coefficients should maximize the separation between the groups of the dependent variable.
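To make the model form concrete, here is a minimal sketch of fitting a linear discriminant function in Python with scikit-learn. The data are randomly generated and the two-group 0/1 coding is an assumption for illustration only; it is not the data discussed in these notes.

# Minimal sketch: two-group linear discriminant function on hypothetical data
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(20, 3)),    # group 0: three predictors X1, X2, X3
               rng.normal(1, 1, size=(20, 3))])   # group 1: shifted means
y = np.array([0] * 20 + [1] * 20)                 # dependent variable coded 0/1

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.intercept_)   # plays the role of b0 in the equation above
print(lda.coef_)        # play the role of b1, ..., bk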
Accuracy of classification
• The classification of the existing data points is done using the
equation, and the accuracy of the model is determined.
• This output is given by the classification matrix (also called
confusion matrix), which tells what percentage of the existing data
points is correctly classified by the model.
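Continuing the hypothetical lda, X and y objects from the sketch above, the classification (confusion) matrix and the percentage correctly classified can be obtained as follows; this is an illustrative sketch, not the procedure used in the original material.

# Classification matrix and hit ratio for the fitted discriminant model
from sklearn.metrics import confusion_matrix, accuracy_score

y_pred = lda.predict(X)               # classify the existing data points
print(confusion_matrix(y, y_pred))    # rows = actual group, columns = predicted group
print(accuracy_score(y, y_pred))      # proportion of points classified correctly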
Relative importance of independent variable
• Suppose we have two independent variables X1 and X2.
• How do we know which one is more important in discriminating
between groups?
• Coefficients of both the variables will provide the answer.
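One common way to make the coefficients directly comparable is to standardize the predictors before fitting, so that the size of each coefficient reflects relative discriminating power. A sketch, continuing the same hypothetical data:

# Standardize predictors so the discriminant coefficients can be compared
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)
lda_std = LinearDiscriminantAnalysis().fit(X_std, y)
print(lda_std.coef_)   # a larger |coefficient| suggests a more important discriminator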
Predicting the group membership for a new data point
• For any new data point that we want to classify into one of the groups, the coefficients of the equation are used to calculate its discriminant score Y.
• A decision rule is then formulated to determine the cutoff score, which is usually the midpoint of the mean discriminant scores of the two groups.
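The midpoint decision rule can be sketched directly from the discriminant scores of the two groups; again this continues the hypothetical fitted model and assumes group 1 has the higher mean score.

# Midpoint cutoff rule for classifying a new observation (two groups)
scores = lda.decision_function(X)        # discriminant score for each existing case
mean0 = scores[y == 0].mean()            # mean discriminant score, group 0
mean1 = scores[y == 1].mean()            # mean discriminant score, group 1
cutoff = (mean0 + mean1) / 2             # midpoint of the two group means

new_point = [[0.4, 0.6, 0.5]]            # hypothetical new data point
new_score = lda.decision_function(new_point)[0]
print(1 if new_score > cutoff else 0)    # assign to group 1 above the cutoff, group 0 below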
Coefficients
• There are two types of coefficients.
1. Standardized coefficients
2. Unstandardized coefficients
• Main difference standardized coefficient will not have constant
‘a’.

What is Multivariate Analysis?

Multivariate analysis is a statistical technique used to analyse datasets that involve multiple variables. Unlike univariate analysis, which examines one variable at a time, multivariate analysis considers the relationships between multiple variables simultaneously.

• It aims to uncover patterns, trends, and dependencies within the data, providing a more comprehensive understanding of the underlying structure.
• Multivariate analysis encompasses a wide range of methods, including regression analysis, factor analysis, cluster analysis, and principal component analysis.
• These techniques enable researchers to explore the relationships between variables, identify important factors or components, classify observations into groups, and make predictions or interpretations based on the data.

This approach finds applications in various fields, such as social sciences, marketing research, finance, healthcare, and environmental science. It helps in understanding consumer behaviour, market segmentation, risk analysis, disease prediction, and environmental impact assessment, among others.

MANOVA Assumptions

In order to use MANOVA, the following assumptions must be met:

• Observations are randomly and independently sampled from the population
• Each dependent variable has an interval measurement
• Dependent variables are multivariate normally distributed within each group of the independent variables (which are categorical)
• The population covariance matrices of each group are equal (this is an extension of the homogeneity of variances required for univariate ANOVA)

These assumptions are similar to those for Hotelling’s T-square test (see
Hotelling’s T-square for Two samples). In particular, we test for multivariate
normality and homogeneity of covariance matrices in a similar fashion.
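For orientation, a one-way MANOVA itself can be fitted in Python with statsmodels as sketched below. The data frame is randomly generated and the column names (Soil, Yield, Water, Herbicide) are assumptions chosen to mirror the example discussed later; the sketch only shows where the assumption checks that follow would fit in.

# Minimal one-way MANOVA sketch on a hypothetical data frame
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Soil": np.repeat(["loam", "sandy", "salty", "clay"], 8),   # grouping (independent) variable
    "Yield": rng.normal(75, 5, 32),                             # dependent variables
    "Water": rng.normal(30, 4, 32),
    "Herbicide": rng.normal(5, 1, 32),
})
fit = MANOVA.from_formula("Yield + Water + Herbicide ~ Soil", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, Hotelling-Lawley, Roy's root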

Multivariate normality: If the samples are sufficiently large (say at least 20 elements for each dependent × independent variable combination), then the Multivariate Central Limit Theorem holds and we can assume the multivariate normality assumption holds. If not, we would need to check that the data (or residuals) for each group are multivariate normally distributed. Fortunately, as for Hotelling's T-square test, MANOVA is not very sensitive to violations of multivariate normality provided there aren't any (or at least many) outliers.

Univariate normality: We start by trying to show that the sample data for each
combination of independent and dependent variables is (univariate) normally
distributed (or at least symmetric). If there is a problem here, then the
multivariate normality assumption may be violated (of course you may find that
each variable is normally distributed but the random vectors are not multivariate
normally distributed).
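The same per-combination check can be sketched in Python with a Shapiro-Wilk test for each group and dependent variable; the df data frame and its column names continue the hypothetical MANOVA sketch above.

# Shapiro-Wilk test of univariate normality for each group x dependent-variable combination
from scipy.stats import shapiro

for dv in ["Yield", "Water", "Herbicide"]:
    for soil, grp in df.groupby("Soil"):
        stat, p = shapiro(grp[dv])
        print(dv, soil, round(p, 3))   # small p-values flag a possible departure from normality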

For Example 1 of Manova Basic Concepts, for each dependent variable we can
use the Real Statistics ExtractCol function to extract the data for that variable
by group and then use the Descriptive Statistics and Normality data analysis
tool contained in the Real Statistics Resource Pack.

E.g. for the Water dependent variable (referring to Figure 1 of Manova Basic Concepts and Figure 3 of Real Statistics Manova Support), highlight the range F5:I13 and enter the array formula =ExtractCol(A3:D35,"water"). Then enter Ctrl-m and select Descriptive Statistics and Normality from the menu. When the dialog box appears, enter F5:I13 in the Input Range and choose the following options: Column headings included with data, Descriptive Statistics, Box Plot and Shapiro-Wilk, and then click on OK. The resulting output is shown in Figure 1.

Figure 1 – Tests for Normality for Water

The descriptive statistics don’t show any extreme values for the kurtosis or
skewness. We can see that the box plots are reasonably symmetric and there
aren’t any prevalent outliers. Finally, the Shapiro-Wilk test shows that none of
the samples shows a significant departure from normality.

The results are pretty similar for Yield. Also, the results for Herbicide show that
the sample is normally distributed, but the box plot shows that there may be a
potential outlier. The kurtosis value shown in the descriptive statistics for loam
is 3.0578, which indicates a potential outlier. We return to this issue shortly.

We can also construct QQ plots for each of the 12 combinations of groups and
dependent variables using the QQ Plot data analysis tool provided by the Real
Statistics Resource Pack. For example, we generate the Water × Clay QQ plot
as follows: press Ctrl-m, select the QQ Plot from the menu and then enter
F6:F13 (from Figure 1) in the Input Range and click on OK. The chart that
results, as displayed in Figure 2, shows a pretty good fit with the normal
distribution assumption (i.e. the points lie close to the straight line).

Figure 2 – QQ plot Water × Clay
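An equivalent QQ plot can be produced in Python with scipy's probplot; the subset below assumes the hypothetical df data frame from the earlier sketch, with the clay-soil Water values standing in for the range used above.

# QQ plot for one group x dependent-variable combination (hypothetical subset)
import matplotlib.pyplot as plt
from scipy.stats import probplot

water_clay = df.loc[df["Soil"] == "clay", "Water"]
probplot(water_clay, dist="norm", plot=plt)   # points close to the line support normality
plt.title("QQ plot Water x Clay")
plt.show()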

Multivariate normality: It is very difficult to show multivariate normality. One indicator is to construct scatter plots for the sample data for each pair of dependent variables. If the distribution is multivariate normal, the cross-sections in two dimensions should be in the form of an ellipse (or a straight line in the extreme case). E.g. for Yield × Water, highlight the range B4:C35 and then select Insert > Charts|Scatter. The resulting chart is shown in Figure 3.

Figure 3 – Scatter plot for Yield × Water


To produce the scatter plot for Water × Herbicide, similarly highlight the range
C4:D35 and select Insert > Charts|Scatter. The case of Yield × Herbicide is a
bit more complicated: highlight the range B4:D35 (i.e. all three columns of
data) and select Insert > Charts|Scatter as before. The resulting chart is shown
in Figure 4.

Figure 4 – Scatter plots for Yield × Water and Yield × Herbicide

Series 1 (in blue) represents Yield × Water and series 2 (in red) represents Yield
× Herbicide. Click on any of the points in series 1 and hit the Delete (or
Backspace) key. This erases the blue series and only the desired red series
remains. Adding the title and removing the legend produces the scatter chart in
Figure 5.

Figure 5 – Scatter plot for Yield × Herbicide

All three scatter plots are reasonably elliptical, supporting the case for
multivariate normality.
Outliers: As mentioned above, the multivariate normality assumption is
sensitive to the presence of outliers. Here we need to be concerned with both
univariate and multivariate outliers. If outliers are detected they can be dealt
with in a fashion similar to the univariate case.

Univariate outliers: For the univariate case, generally we need to look at data
elements with a z-score of more than 3 or less than -3 (or 2.5 for smaller
samples, say less than 80 elements). For data that is normally distributed
(which, of course, we are assuming is true of our data), the probability of a z-
score of more than +3 or less than -3 is 2*(1–NORMSDIST(3)) = 0.0027 (i.e.
about 1 in 370). The probability of a z-score of more than 2.5 or less than -2.5 is
0.0124 (i.e. about 1 in 80).

The values 2.5 and 3.0 are somewhat arbitrary, and different estimates can be used instead. In any case, even if a data element can be classified as a potential outlier based on this criterion, it doesn't mean that it should be thrown away. The data element may be perfectly reasonable (e.g. in a sample of, say, 1,000 elements, you would expect at least one potential outlier 1 – (1 – .0027)^1000 = 93.3% of the time).
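The z-score screening and the tail probabilities quoted above can be reproduced with a short sketch; the Herbicide column again refers to the hypothetical data frame used earlier.

# Flag univariate outliers by z-score and reproduce the tail probabilities above
import numpy as np
from scipy.stats import norm

x = df["Herbicide"].to_numpy()
z = (x - x.mean()) / x.std(ddof=1)
print(np.where(np.abs(z) > 3)[0])    # indices of potential outliers (use 2.5 for small samples)

print(2 * (1 - norm.cdf(3)))         # about 0.0027, i.e. roughly 1 in 370
print(1 - (1 - 0.0027) ** 1000)      # about 0.933: chance of at least one such value in 1,000 cases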

Since we suspect that there is an outlier in the herbicide sample, we will concentrate on the data in that sample. We first use the supplemental array formula =ExtractCol(A3:D35,"herbicide") to extract the herbicide data. We then look at the box plot and investigate the outliers for that sample using Real Statistics' Descriptive Statistics and Normality data analysis tool as follows: enter Ctrl-m, select Descriptive Statistics and Normality from the menu, enter F33:I41 in the Input Range and choose the Column headings included with data, Box Plot and Outliers and Missing Data options. The output is shown in Figure 6.
Figure 6 – Investigation of potential outliers in Herbicide data

As mentioned previously, the Box Plot (see Figure 6) for herbicide in Example
1 of Manova Basic Concepts indicates a potential outlier, namely the data
element in cell G38. The z-score for this entry is given by the formula (cell
S13).

STANDARDIZE(G38,AVERAGE(G34:G41),STDEV(G34:G41)) = 2.16

This value is still less than 2.5, and so we aren’t too concerned. In fact, the
report shows there are no potential outliers.

Multivariate outliers: Multivariate outliers are harder to spot graphically, and so we test for these using the Mahalanobis distance squared. For any data sample X with k dependent variables (here, X is a k × n matrix) with covariance matrix S, the Mahalanobis distance squared, D2, of any k × 1 column vector Y from the mean vector Ȳ of X (i.e. the center of the hyper-ellipse) is given by

D2 = (Y − Ȳ)T S−1 (Y − Ȳ)

where the superscript T denotes the transpose and S−1 is the inverse of the covariance matrix S.

Since the data in standard format is represented by an n × k matrix, we look at the row-equivalent version of the above formula: for any data sample X with k dependent variables with covariance matrix S, the Mahalanobis distance squared, D2, of any 1 × k row vector Y is given by

D2 = (Y − Ȳ) S−1 (Y − Ȳ)T
To check for outliers we calculate D2 for all the row vectors in the sample. This
can be done using the Real Statistics MANOVA data analysis tool, this time
choosing the Outliers options (see Figure 1 of Real Statistics Manova
Support). The output is displayed in Figure 7.

Figure 7 – Using Mahalanobis D2 to identify outliers

Here the covariance matrix for the sample data (range I4:K6) is calculated by
the array formula

=MMULT(TRANSPOSE(B4:D35-I14:K14),
B4:D35-I14:K14)/(COUNT(B4:B35)-1)

Or simply COV(B4:D35) using the supplemental function COV. The inverse of the covariance matrix (range I9:K11) is then calculated by the array formula MINVERSE(I4:K6).

The values of D2 can now be calculated as described above. E.g. D2 for the first
sample element (cell F4) is calculated by the formula
=MMULT(B4:D4-$I$14:$K$14,MMULT($I$9:$K$11,
TRANSPOSE(B4:D4-$I$14:$K$14)))

The values of D2 play the same role as the z-scores in identifying multivariate
outliers. Since the original data is presumed to be multivariate normal, by
Property 3 of Multivariate Normal Distribution Basic Concepts, the distribution
of the values of D2 is chi-square with k (= the number of dependent variables)
degrees of freedom. Usually, any data element whose p-value is < .001 is
considered to be a potential outlier. As in the univariate case, this cutoff is
somewhat arbitrary.
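The same screening can be sketched in Python: compute D2 for every row, convert to chi-square p-values with k degrees of freedom, and flag rows below the .001 cutoff. The column names again follow the hypothetical data frame from the earlier sketches.

# Mahalanobis distance squared and chi-square p-values for multivariate outliers
import numpy as np
from scipy.stats import chi2

X = df[["Yield", "Water", "Herbicide"]].to_numpy()
center = X.mean(axis=0)                                 # mean vector (center of the hyper-ellipse)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))          # inverse of the sample covariance matrix

diff = X - center
d2 = np.einsum("ij,jk,ik->i", diff, S_inv, diff)        # D2 for every 1 x k row vector
p = chi2.sf(d2, df=X.shape[1])                          # k degrees of freedom
print(np.where(p < 0.001)[0])                           # rows flagged as potential multivariate outliers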

For Example 1 of Manova Basic Concepts, the p-values are displayed in column
G of Figure 7. E.g. the p-value of the first sample element is calculated by the
formula

=CHIDIST(F4,COUNTA($B$3:$D$3))

Any element that is a potential outlier is indicated by an asterisk in column H. We note that none of the p-values in column G is less than .001 and so there are no potential multivariate outliers.

If the yield value for the first sample element (cell B4) is changed from 76.7 to 176.7, then the D2 value in cell F4 would change to 96.76 and the p-value would now become 7.71E-21, which is far below .001. While the value 176.7 might be correct, it would be so much higher than the other yields obtained that we would probably suspect that it was a typing mistake and check whether the correct value is 76.7.

Real Statistics Functions: The following functions are supplied by the Real Statistics Resource Pack:

MDistSq(R1, R2): the Mahalanobis distance squared between the 1 × k row vector R2 and the mean vector of the sample contained in the n × k range R1

MOUTLIERS(R1, alpha): when alpha = 0 or is omitted, returns an n × 2 array whose first column contains the Mahalanobis distance squared between each vector in R1 (i.e. each row of the n × k array R1) and the center of the hyper-ellipse defined by the data elements in R1, and whose second column contains the corresponding p-values based on the chi-square test described above. When alpha > 0, only those values for which p-value < alpha are returned.

For example, the Mahalanobis distance squared between the row vector R2 = (50, 25, 5) and the mean of the sample R1 = B4:D35 is MDistSq(R1, R2) = 1.072043.
Referring to Figure 7, the output from the array formula
=MOUTLIERS(B4:D35) is identical to the data shown in range F4:G35.

Homogeneity of covariance matrices: As for Hotelling's T-square test, MANOVA is not so sensitive to violations of this assumption provided the covariance matrices are not too different and the sample sizes are equal.

If the sample sizes are unequal (generally if the largest sample is more than 50%
bigger than the smallest), Box’s Test can be used to test for homogeneity of
covariance matrices (see Box’s Test). This is an extension of Bartlett’s Test as
described in Homogeneity of Variances. As mentioned there, caution should be
exercised and many would recommend not using this test since Box’s Test is
very sensitive to violations of multivariate normality.

If the larger samples also have the larger variances, then the MANOVA test tends to be robust for Type I errors (with a loss in power). If the smaller samples have the larger variances, then you should have more confidence when retaining the null hypothesis than when rejecting it. Also, you should use a more stringent test statistic (Pillai's trace instead of Wilks' lambda).

Since the sample sizes for Example 1 of Manova Basic Concepts are equal, we
probably don’t need to use the Box Test, but we could perform the test using the
Real Statistics MANOVA data analysis tool, this time choosing the Box Test
option (see Figure 1 of Real Statistics Manova Support). The output is shown
in Figure 8.

Figure 8 – Box’s Test

Since the p-value for the Box Test is .715, which is far higher than the
commonly used value of α = .001, we conclude there is no evidence that the
covariance matrices are significantly unequal.
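Box's M is not built into scipy, but it can be computed directly from the group covariance matrices using the usual chi-square approximation. The sketch below is an illustration of that approximation on the hypothetical data frame, not the Real Statistics routine.

# Box's M test for homogeneity of covariance matrices (chi-square approximation)
import numpy as np
from scipy.stats import chi2

def box_m(groups):
    """groups: list of (n_i x p) arrays, one array per group."""
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)

    M = (ns.sum() - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1 / (ns - 1)) - 1 / (ns.sum() - k))
    stat = M * (1 - c)                     # chi-square approximation to Box's M
    dof = p * (p + 1) * (k - 1) / 2
    return stat, chi2.sf(stat, dof)

groups = [g[["Yield", "Water", "Herbicide"]].to_numpy()
          for _, g in df.groupby("Soil")]
print(box_m(groups))   # compare the p-value against alpha = .001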

Collinearity: MANOVA extends ANOVA when multiple dependent variables need to be analyzed. It is especially useful when these dependent variables are correlated, but it is also important that the correlations not be too high (i.e. greater than .9) since, as in the univariate case, collinearity results in instability of the model.

The correlation matrix for the data in Example 1 of Manova Basic Concepts is
given in range R29:T31 of Figure 2 of Real Statistics Manova Support. We see
that none of the off-diagonal values are greater than .9 (or less than -.9) and so
we don’t have any problems with collinearity.
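The same collinearity check can be sketched in Python by inspecting the correlation matrix of the dependent variables and flagging any off-diagonal value above .9 in absolute value; df is again the hypothetical data frame from the earlier sketches.

# Correlation matrix of the dependent variables; |r| > .9 would signal collinearity
corr = df[["Yield", "Water", "Herbicide"]].corr()
print(corr)
print((corr.abs() > 0.9) & (corr.abs() < 1.0))   # True off-diagonal entries indicate a problem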
