Parametric and Non-Parametric Statistical Tests
Parametric and non-parametric statistical tests are commonly employed in behavioural research. A parametric statistical test is one which specifies certain conditions about the parameters of the population from which a sample is taken. Such tests are considered more powerful than non-parametric statistical tests and should be used whenever their basic requirements or assumptions are met. These assumptions concern the nature of the population distribution as well as the type of measurement scale used in quantifying the data.
3. The samples drawn from a population must have equal variances, a condition that is especially important when the samples are small. When the different samples taken from the same population have equal or nearly equal variances, the condition is known as homogeneity of variance. Statistically speaking, homogeneity of variance means that there should not be a significant difference among the variances of the different samples (a simple check of this condition is sketched after this list).
4. The variables must be expressed in interval or ratio scales. Nominal measures (that
is, frequency counts) and ordinal measures (that is, rankings) do not qualify for a
parametric statistical test.
5. The variable under study should be continuous.
Examples of parametric tests are the z test, the t test and the F test.
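As an illustration only, the following sketch applies two of the tests just named, the t test and the F test, to made-up interval-scale scores using Python's scipy.stats, and checks the homogeneity-of-variance condition from point 3 with Levene's test. The data, the group names and the 0.05 convention are assumptions for the example, not material from the text.

```python
# A minimal sketch (assumed example data) of the parametric tests named above,
# using scipy.stats. Levene's test is one common way to check the
# homogeneity-of-variance condition mentioned in point 3.
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11, 14]   # hypothetical interval-scale scores
group_b = [18, 17, 16, 19, 15, 20, 17, 18]
group_c = [14, 13, 15, 16, 12, 15, 14, 13]

# Homogeneity of variance: a non-significant result (p > 0.05) is consistent
# with the assumption that the group variances do not differ significantly.
lev_stat, lev_p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test: W = {lev_stat:.3f}, p = {lev_p:.3f}")

# t test for the difference between two independent means.
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t test: t = {t_stat:.3f}, p = {t_p:.3f}")

# F test (one-way analysis of variance) across the three groups.
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)
print(f"F test: F = {f_stat:.3f}, p = {f_p:.3f}")
```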
A non-parametric statistical test is one which does not specify any conditions about the parameters of the population from which the sample is drawn. Since these tests do not make specific and precise assumptions about the form of the population distribution, they are also known as distribution-free statistics. Non-parametric statistics do not impose rigid conditions as parametric statistical tests do, although certain assumptions are associated with them: the variables under study should be continuous and the observations should be independent. These assumptions, however, are neither as rigid nor as elaborate as those of a parametric statistical test. Examples of non-parametric tests are the chi-square test, the Mann-Whitney U test, Kendall's tau and Kendall's coefficient of concordance.
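For comparison, here is a similar sketch, again with assumed data and scipy.stats, of three of the non-parametric tests just listed: the chi-square test on frequency counts, the Mann-Whitney U test on two independent samples, and Kendall's tau on two sets of rankings. Kendall's coefficient of concordance is omitted because scipy.stats does not provide it directly.

```python
# A minimal sketch (assumed data) of some of the non-parametric tests named above.
from scipy import stats

# Chi-square test on a 2 x 2 table of frequency counts (nominal data).
table = [[30, 20],
         [15, 35]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.3f}, p = {chi_p:.3f}, df = {dof}")

# Mann-Whitney U test on two independent samples of ordinal scores.
scores_a = [3, 5, 2, 6, 4, 7, 5]
scores_b = [8, 6, 9, 7, 10, 8, 9]
u_stat, u_p = stats.mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")

# Kendall's tau for the agreement between two sets of rankings.
rank_x = [1, 2, 3, 4, 5, 6, 7]
rank_y = [2, 1, 4, 3, 6, 5, 7]
tau, tau_p = stats.kendalltau(rank_x, rank_y)
print(f"Kendall's tau: tau = {tau:.3f}, p = {tau_p:.3f}")
```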
A non-parametric statistical test is ordinarily used under conditions such as the following:
1. The shape of the distribution of the population from which the sample is drawn is not known to be normal.
2. The variables have been quantified on the basis of nominal measures (or frequency
counts).
3. The variables have been quantified on the basis of ordinal measures (or ranking).
Because non-parametric statistical tests are based upon frequency counts or rankings rather than on measured values, they are less precise and less likely to reject a null hypothesis when it is false. That is why a non-parametric statistical test is generally used only when the parametric assumptions cannot be met. Some statisticians, however, argue that non-parametric statistical tests are more powerful and have greater merit than parametric tests because their validity does not rest upon assumptions about the population distribution. They further argue that the parametric assumptions are often ignored by researchers, and that there is evidence for certain parametric statistical tests, such as the t test and the F test, that violation of the assumptions, particularly when the sample is large, does not affect the power of the tests. Moreover, for some population distributions non-parametric statistical tests are superior in power to parametric statistical tests.
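The last claim can be examined with a small simulation. The sketch below uses assumed settings (a heavy-tailed population, a sample size of 20 and 2,000 replications) rather than anything reported in the text, and estimates how often the t test and the Mann-Whitney U test each detect a real difference in location.

```python
# A rough Monte Carlo sketch (assumed settings) comparing the power of the
# t test and the Mann-Whitney U test when the populations are heavy-tailed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, shift, trials, alpha = 20, 1.0, 2000, 0.05
t_rejects = u_rejects = 0

for _ in range(trials):
    # Heavy-tailed populations with a genuine difference in location.
    x = rng.standard_t(df=2, size=n)
    y = rng.standard_t(df=2, size=n) + shift
    if stats.ttest_ind(x, y).pvalue < alpha:
        t_rejects += 1
    if stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        u_rejects += 1

# For a distribution like this the U test typically rejects the false null
# hypothesis more often than the t test, illustrating the point above.
print(f"t test power:         {t_rejects / trials:.2f}")
print(f"Mann-Whitney U power: {u_rejects / trials:.2f}")
```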
6. Impact of sample size: When the sample size is 10 or smaller, non-parametric statistics are easier, quicker and more efficient than parametric statistics. If the assumptions of parametric statistics are violated with such small samples, the results are likely to be badly affected; for samples of this size, therefore, non-parametric statistics are superior to parametric statistics. The reader should note that as the sample size increases, non-parametric statistics become more time-consuming, more laborious and less efficient than parametric statistics.
7. Statistical efficiency: Non-parametric tests are often more convenient than parametric tests. If the data meet all the assumptions of non-parametric statistics but not those of parametric statistics, then non-parametric statistics have statistical efficiency equal to that of parametric statistics. If both parametric and non-parametric statistics are applied to data which fulfil all the assumptions of parametric tests, the distribution-free statistics are more efficient with a small sample size but become less and less efficient as the sample size increases.
Parametric and Nonparametric Statistics
A parameter, as we learned in an earlier chapter, is a population value. If all the scores of a defined population are available and a mean is calculated, this mean is a parameter. Similarly, the variance and the standard deviation of a population are parameters. Even when it is not possible to calculate such population measures, they are still referred to as parameters. A statistic, on the other hand, is a measure calculated from a sample. Whenever statistical tests, parametric or nonparametric, are used, certain assumptions are made. Nonparametric statistical tests are hemmed in by fewer and less stringent assumptions than parametric tests. They are, in particular, free of assumptions about the characteristics or the form of the distributions of the populations from which research samples are drawn; thus they are also called distribution-free tests. As Siegel puts it, "A nonparametric statistical test is a test whose model does not specify conditions about the parameters of the population from which the sample was drawn."
Assumptions of Normality
The most famous assumption behind the use of many parametric statistics is the assumption of normality. In using the t and F tests, for example, it is assumed that the samples with which we work have been drawn from populations that are normally distributed. It is said that, if the populations from which samples are drawn are not normal, then statistical tests that depend on the normality assumption are vitiated, and the conclusions drawn from the sampled observations and their statistics are called into question. When in doubt about the normality of a population, or when one knows that the population is not normal, one should, it is said, use a nonparametric test that does not make the normality assumption. Some teachers urge students of education and psychology to use only nonparametric tests, on the questionable ground that most educational and psychological populations are not normal. The issue is not this simple.
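When one is in doubt about the normality of a population, the doubt can at least be examined on the sample itself. The sketch below is a minimal illustration, assuming scipy.stats, made-up scores and the Shapiro-Wilk test (a common normality test, though not one named in this passage).

```python
# A minimal sketch (assumed data) of checking the normality assumption with
# the Shapiro-Wilk test before deciding between parametric and nonparametric tests.
from scipy import stats

# Hypothetical scores; the extreme value makes the sample noticeably skewed.
scores = [52, 48, 55, 60, 47, 51, 49, 58, 95, 50, 53, 46]

w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")

if p_value < 0.05:
    # Evidence against normality: a distribution-free test may be the safer choice.
    print("Normality doubtful; consider a nonparametric test such as the Mann-Whitney U.")
else:
    print("No evidence against normality; a t or F test remains reasonable.")
```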
Homogeneity of Variance
The next most important assumption is the homogeneity of variance assumption. In analysis of variance it is assumed that the variances within the groups are statistically the same; that is, variances are assumed to be homogeneous from group to group, within the bounds of random variation. If this is not true, the F test is vitiated, and there is good reason for this statement. We saw earlier that the within-groups variance is an average of the variances within the two, three, or more groups of measures. If the variances differ widely, such averaging is questionable. The effect of widely differing variances is to inflate the within-groups variance; consequently, an F test may be non-significant when in reality there are significant differences between the means.
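The inflation described here can be made concrete with a small numerical sketch. The data below are invented, and the groups are of equal size so that the within-groups mean square is simply the average of the separate group variances; one group with a very large spread swamps the error term, and the F test fails to detect the clear difference between the other group means.

```python
# A minimal sketch (assumed data, equal group sizes) of how one group with a
# very large variance inflates the within-groups variance used in the F test.
import numpy as np
from scipy import stats

group_1 = np.array([10.0, 11.0, 9.0, 10.5, 9.5])    # small spread around 10
group_2 = np.array([14.0, 13.5, 14.5, 13.0, 15.0])  # small spread around 14
group_3 = np.array([12.0, 2.0, 22.0, 30.0, -6.0])   # very large spread around 12

groups = [group_1, group_2, group_3]
within_variances = [g.var(ddof=1) for g in groups]

# With equal group sizes the within-groups mean square is the average of the
# separate group variances, so the heterogeneous third group inflates it.
print("Group variances:        " + ", ".join(f"{v:.2f}" for v in within_variances))
print(f"Within-groups variance: {np.mean(within_variances):.2f}")

# The inflated error term makes F small even though groups 1 and 2 clearly differ.
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```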
To return to the evidence on normality and homogeneity, Lindquist says that the F distribution is amazingly insensitive to the form of the distribution of criterion measures in the parent population. Lindquist also says, on the basis of Norton's data, that unless variances are so heterogeneous as to be readily apparent, that is, unless relatively large differences exist, the effect on the F test will probably be negligible. Boneau confirms this: he says that in a large number of research situations the probability statements resulting from the use of t and F tests, even when these two assumptions are violated, will be highly accurate. In brief, in most cases in education and psychology it is probably safer, and usually more effective, to use parametric tests rather than nonparametric tests. It was concluded that parametric procedures are the standard tools of psychological statistics, although nonparametric procedures are useful minor techniques.
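Boneau's point about the accuracy of the probability statements can also be illustrated by simulation. The sketch below uses assumed settings (an exponential population, samples of 30 and 5,000 replications) and simply checks whether the t test's Type I error rate stays near the nominal 0.05 level when the normality assumption is clearly violated.

```python
# A rough sketch (assumed settings) of the kind of robustness Boneau describes:
# the Type I error rate of the t test when both samples come from a markedly
# skewed (exponential) population and the null hypothesis is in fact true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, trials, alpha = 30, 5000, 0.05
false_rejects = 0

for _ in range(trials):
    # Same skewed population for both groups, so every rejection is a false alarm.
    x = rng.exponential(scale=1.0, size=n)
    y = rng.exponential(scale=1.0, size=n)
    if stats.ttest_ind(x, y).pvalue < alpha:
        false_rejects += 1

# Despite the violated normality assumption, the observed rate should stay
# close to the nominal 0.05 level for samples of this size.
print(f"Observed Type I error rate: {false_rejects / trials:.3f}")
```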