Dr. Dame Presentation Last
Studies
Department Of EDPM
Normality
Data have a normal distribution (or are at least symmetric).
Homogeneity of variances
Linearity
Independence
Data are independent.
Collinearity
Refers to a situation where two or more predictor variables are closely related to one another.
Normality of Distribution
For example, t-tests, F-tests, and regression analyses all require, in some sense, that the numeric variables are approximately normally distributed.
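As a minimal sketch of checking this symmetry assumption (Python is assumed here for illustration): sample skewness near 0 suggests a roughly symmetric distribution. Libraries such as scipy.stats offer formal normality tests (e.g. Shapiro-Wilk); the moment-based statistic below uses only the standard library.

```python
from statistics import mean, pstdev

def sample_skewness(xs):
    """Third standardized moment: 0 for perfectly symmetric data."""
    m, s = mean(xs), pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

symmetric = [1, 2, 2, 3, 3, 3, 4, 4, 5]   # bell-like, symmetric sample
skewed = [1, 1, 1, 2, 2, 3, 5, 9, 14]     # long right tail

print(sample_skewness(symmetric))  # 0.0 (exactly symmetric)
print(sample_skewness(skewed))     # clearly positive
```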
Parametric Procedure vs. Equivalent Non-parametric Test
Parametric: the samples must have the same population variance.
Non-parametric: data are changed from scores to ranks or signs; the analysis focuses on the difference between medians; the variable under study has underlying continuity.
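The score-to-rank conversion mentioned above can be sketched as follows (a stdlib-Python illustration, with tied scores receiving the average of the ranks they span, as rank-based tests such as Mann-Whitney U conventionally do):

```python
def to_ranks(scores):
    """Convert raw scores to ranks, averaging the ranks of tied values."""
    ordered = sorted(scores)
    positions = {}
    for i, v in enumerate(ordered, start=1):
        positions.setdefault(v, []).append(i)
    # each value gets the mean of the rank positions it occupies
    return [sum(positions[v]) / len(positions[v]) for v in scores]

print(to_ranks([10, 20, 20, 30]))  # [1.0, 2.5, 2.5, 4.0]
```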
Applications/Limitations
Parametric tests are used in a variety of applications, including medical research, market research, and the social sciences. This test is mainly useful when:
the given data are quantitative and continuous;
the data follow a normal distribution;
the data are measured on approximate ratio or interval scales of measurement.
For example, when comparing the means of two independent samples, the variances of the two distributions should be approximately equal.

Advantages
Parametric tests are preferred because non-parametric tests tend to be less sensitive at detecting differences between samples or an effect of the independent variable on the dependent variable.
The power efficiency of the non-parametric test is lower than that of its parametric counterpart.
A larger sample size is required for the non-parametric test than for the parametric test (Robson, 1994).
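The equal-variances condition can be eyeballed with a simple ratio check, sketched below in stdlib Python (the example data are invented for illustration; formal alternatives include Levene's test):

```python
from statistics import variance

# hypothetical scores for two independent samples
group_a = [12, 14, 15, 15, 16, 18]
group_b = [12, 13, 15, 16, 16, 18]

# ratio of larger to smaller sample variance; values near 1
# support the homogeneity-of-variances assumption
ratio = max(variance(group_a), variance(group_b)) / \
        min(variance(group_a), variance(group_b))
print(round(ratio, 2))  # 1.2
```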
Statistical Analysis of Parametric Tests
1. T-test (Test of Significance)
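What a two-sample t-test computes can be sketched by hand (a stdlib-Python illustration of the equal-variance, pooled form; in practice a library routine such as scipy.stats.ttest_ind does this and also supplies the p-value):

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """t statistic for two independent samples, assuming equal variances."""
    na, nb = len(a), len(b)
    # pooled estimate of the common population variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

control = [1, 2, 3]     # hypothetical group scores
treatment = [2, 3, 4]
print(round(pooled_t(control, treatment), 4))  # -1.2247
```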
2. Analysis of Co-variance (ANCOVA)
Analysis of co-variance is useful for experimental psychologists where, for various reasons, it is impossible or difficult to equate experimental and control groups at the start, a situation which often obtains in actual experiments.
It is defined as "the function of two correlated factors and their analysis into corresponding parts".
Practically, analysis of co-variance is a technique to adjust the initial scores to the final scores, so that the net effect can be analyzed.
The analysis of variance technique is used to analyse and test the significance of the difference among final scores or initial scores.
3. F-ratio (ANOVA-Single Factor)
A useful technique for testing the difference between the means of multiple independent samples.
It tests the differences among the means of the samples by examining the amount of variation between the samples relative to the amount of variation within the samples.
If the 'F' value worked out is equal to or exceeds the 'F' limit value (from tables), the difference is significant.
The F-test is an effective way to determine whether the means of more than two samples are too different to attribute to sampling error. It consists of the following operations:
The sum of the scores and the sum of the squares of the scores are obtained.
The variance of the scores of all samples combined into one composite group is known as the total group variance.
4. Multivariate Analysis of Variance (MANOVA)
5. Regression Test
Regression is used to predict or explain the variation in one (dependent) variable based on another variable, and to determine the strength and character of the relationship between one dependent variable (usually denoted by Y) and a series of other variables (known as independent variables). Linear regression is the most common form of this technique. Linear regression establishes the linear relationship between two variables based on a line of best fit.

KEY TAKEAWAYS
A regression is a statistical technique that relates a dependent variable to one or more independent (explanatory) variables.
A regression model is able to show whether changes observed in the dependent variable are associated with changes in one or more of the explanatory variables.
It does this by essentially fitting a best-fit line and seeing how the data are dispersed around this line.
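The best-fit line itself comes from the closed-form least-squares formulas, sketched here in stdlib Python for the single-predictor case:

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]        # lies exactly on y = 2x + 1
print(fit_line(xs, ys))     # (2.0, 1.0)
```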
Regression Analysis Types
6. Correlation Test
A correlational study measures two variables and assesses the statistical relationship (i.e., the correlation) between them, with little or no effort to control extraneous variables. It is used to describe the strength and direction of the relationship between two variables.
There are many reasons that researchers interested in statistical relationships between variables would choose to conduct a correlational study rather than an experiment. The first is that they do not believe that the statistical relationship is a causal one, or are not interested in causal relationships.
The strength of a correlation between quantitative variables is typically measured using a statistic called Pearson's Correlation Coefficient (or Pearson's r). Pearson's r is a good measure only for linear relationships.

Types of correlation:
1. Positive correlation: an increase in one variable leads to a rise in the other, and a decrease in one variable is accompanied by a decrease in the other.
2. Negative correlation: if there is an increase in one variable, the other will show a decrease, and vice versa.
3. No correlation: there is no relationship between the two variables.

Characteristics of correlational research:
Non-experimental: it does not manipulate variables with a scientific methodology to either agree or disagree with a hypothesis.
Backward-looking: it looks back at historical data and observes events in the past.
Dynamic: the patterns between two variables from correlational research are never constant and are always changing.
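Pearson's r can be sketched from its definition as a sum of standardized cross-products (a stdlib-Python illustration with invented study-hours/score data; scipy.stats.pearsonr is the usual library route and also returns a p-value):

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys))
    return num / den

hours = [1, 2, 3, 4, 5]            # hypothetical hours studied
scores = [52, 58, 63, 70, 77]      # scores rising with hours
print(round(pearson_r(hours, scores), 3))  # close to +1
```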
1 DATA COLLECTION
Both cost and scope information must be identified