Statistics Assignment II
SHORT QUESTIONS
What is population?
Population: A population is the totality or collection of all objects, items, or individuals on which observations are taken, on the basis of some characteristic of the objects, in any field of inquiry.
There are two types of population:
1. Finite population.
2. Infinite population.
What is a sample?
Sample: A sample is a part of a population that is selected and considered for study.
Parameter vs. statistic:
Parameter: Any numerical value that describes a characteristic of a population is called a parameter. It is denoted by Greek letters (for example, mu for the population mean). A population parameter is more accurate than a sample statistic.
Statistic: Any numerical value that describes a characteristic of a sample is called a statistic. It is denoted by small letters of the English alphabet. A sample statistic is less accurate than a population parameter.
What is sampling?
Sampling: Sampling is the process of selecting a part of the population (a sample) for study. Probability sampling is also called random sampling, while non-probability sampling is also called non-random sampling.
Sampling error: A sampling error is a statistical error that arises from the act of taking a sample; it occurs when the result obtained from the sample differs from the result for the whole population. Sampling error is influenced by the sample size and the sampling scheme.
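A minimal simulation sketch in Python (illustrative, made-up population values) of how sampling error behaves: the sample mean differs from the population mean, and the gap tends to shrink as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative population of 100,000 values with mean about 50.
population = rng.normal(loc=50.0, scale=10.0, size=100_000)
pop_mean = population.mean()

for n in (10, 100, 1000):
    sample = rng.choice(population, size=n, replace=False)
    # Sampling error: the result from the sample differs from the population result.
    print(n, abs(sample.mean() - pop_mean))
```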
Sampling bias: According to Gillian Fournier, sampling bias, also known as selection bias, is an error in choosing the participants for a scientific study.
In other words, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population are less likely to be included than others.
Test of significance: A test of significance is a procedure which enables us to decide whether to accept or reject a hypothesis, or to determine whether the observed samples differ significantly from the expected results.
According to C.R. Kothari, it is the procedure which enables us to decide, on the basis of a sample, whether the deviation between the observed sample statistic and the hypothetical parameter value, or between two independent sample statistics, is significant or might be attributed to sampling fluctuations.
What is variance?
Variance: Variance is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of (random) numbers is spread out from its average value. Variance is an important tool in the sciences, where statistical analysis of data is common.
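A minimal sketch in Python (illustrative sample values) showing variance as the mean squared deviation from the mean; ddof=1 gives the sample variance, which divides by n - 1 instead of n.

```python
import numpy as np

data = np.array([4.0, 7.0, 6.0, 9.0, 4.0])   # illustrative values

mean = data.mean()
# Variance: the average squared deviation of the values from their mean.
pop_var = np.mean((data - mean) ** 2)        # population variance (divide by n)
sample_var = np.var(data, ddof=1)            # sample variance (divide by n - 1)

print(pop_var, sample_var)
```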
Coefficient of correlation: Correlation coefficients are used in statistics to measure how strong a relationship is between two variables. There are several types of correlation coefficients.
Pearson's correlation is the correlation coefficient commonly used in linear regression.
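A minimal sketch (illustrative paired data, using the scipy library) computing Pearson's correlation coefficient between two variables.

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative variable X
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # illustrative variable Y

# Pearson's r measures the strength of the linear relationship between X and Y.
r, p_value = stats.pearsonr(x, y)
print(r, p_value)
```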
In hypothesis testing there are two types of hypothesis:
1. Null hypothesis.
2. Alternative hypothesis.
Null hypothesis: The null hypothesis states that there is no significant difference between the parameter and the statistic.
Alternative hypothesis: Any hypothesis which differs from the null hypothesis is called an alternative hypothesis.
Level of significance: In testing a given hypothesis, the maximum probability with which we would be willing to risk a type-I error is called the level of significance of the test.
Degrees of freedom: In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary. For example, a sample variance computed from n observations has n - 1 degrees of freedom, because once the sample mean is fixed only n - 1 of the deviations can vary freely.
Explain type-I and type-II errors.
Type-I error: In hypothesis testing, if we reject a hypothesis when it should be accepted, it is called a type-I error.
Type-II error: In hypothesis testing, if we accept a hypothesis when it should be rejected, it is called a type-II error.
Power of a test: The ability to correctly reject a false null hypothesis is called the power of a test.
In other words, the power of a statistical test gives the likelihood of rejecting the null hypothesis when the null hypothesis is false.
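A minimal simulation sketch (illustrative settings, using scipy) estimating the type-I error rate and the power of a two-sample t-test at a 5% level of significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000   # illustrative significance level, sample size, repetitions

def rejection_rate(true_shift):
    # Fraction of simulated experiments in which H0 (equal means) is rejected.
    rejections = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_shift, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / trials

print("type-I error rate (H0 true):  ", rejection_rate(0.0))   # close to alpha
print("power (H0 false, shift = 0.8):", rejection_rate(0.8))   # probability of a correct rejection
```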
Parametric test: Parametric tests are those tests that are stated in terms of making assumptions about population parameters.
Non-parametric test: Non-parametric tests are those tests that do not compare population parameters and make fewer assumptions than parametric tests.
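A minimal sketch (illustrative group data, using scipy) contrasting a parametric test (independent-samples t-test, which assumes roughly normal populations) with a non-parametric alternative (Mann-Whitney U test, which only uses ranks).

```python
from scipy import stats

group_a = [12.1, 14.3, 13.8, 15.0, 12.9, 14.1]   # illustrative measurements
group_b = [15.2, 16.8, 15.9, 17.1, 16.0, 15.5]

# Parametric: compares means under normality assumptions.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric: compares ranks, so it makes fewer assumptions.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

print(t_p, u_p)
```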
t-test vs. F-test:
1. In a t-test the sample should be relatively small, while in an F-test the sample size can be 30 or more.
2. A t-test is used to test differences between two groups, while an F-test is used to test differences among more than two groups.
3. In a t-test data are obtained from two groups; in an F-test data are obtained from more than two groups.
4. Correlated and uncorrelated t-tests are types of the t-test, while one-way and two-way analysis of variance are types of the F-test.
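A minimal sketch (made-up group data, using scipy) running a t-test on two groups and a one-way ANOVA F-test on three groups.

```python
from scipy import stats

g1 = [5.1, 5.8, 6.2, 5.5, 6.0]   # illustrative data from group 1
g2 = [6.9, 7.2, 6.8, 7.5, 7.0]   # group 2
g3 = [8.1, 7.9, 8.4, 8.0, 8.3]   # group 3

# t-test: compares the means of exactly two groups.
t_stat, t_p = stats.ttest_ind(g1, g2)

# F-test (one-way analysis of variance): compares the means of more than two groups.
f_stat, f_p = stats.f_oneway(g1, g2, g3)

print(t_p, f_p)
```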
Main effect: When an independent variable individually affects the dependent variable, it is called a main effect.
Interaction effect: When independent variables jointly affect the dependent variable, it is called an interaction effect.
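A minimal sketch (made-up data, using the pandas and statsmodels packages) of a two-way analysis of variance: the C(a) and C(b) rows of the table reflect the main effects of the two independent variables, and the C(a):C(b) row reflects their interaction effect on y.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data: two independent variables (a, b) and one dependent variable (y).
df = pd.DataFrame({
    "a": ["low", "low", "high", "high"] * 4,
    "b": ["ctrl", "treat"] * 8,
    "y": [3.1, 5.2, 4.0, 9.1, 3.3, 5.0, 4.2, 9.4,
          2.9, 5.1, 3.8, 8.9, 3.0, 5.3, 4.1, 9.2],
})

model = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction term
```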
Goodness of fit: Goodness of fit is the extent to which the observed data match the values expected by theory. The chi-square test and the coefficient of determination can be used to determine goodness of fit.
In other words, the goodness of fit of a statistical model describes how well it fits a set of observations.
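A minimal sketch (illustrative counts, using scipy) of a chi-square goodness-of-fit test: the observed counts are compared against the counts expected under the theory of a fair six-sided die.

```python
from scipy import stats

observed = [18, 22, 16, 25, 19, 20]        # illustrative die-roll counts
expected = [sum(observed) / 6] * 6         # a fair die implies equal expected counts

# A large chi-square statistic (small p-value) means the data fit the theory poorly.
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2, p)
```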
Factor analysis: Factor analysis is a class of procedures that allows researchers to observe a group of variables that tend to be correlated with each other and to identify the underlying dimensions that explain these correlations.
According to C.R. Kothari, factor analysis is a technique applicable when there is a systematic interdependence among a set of observed or manifest variables and the researcher is interested in finding out something more fundamental or latent which creates this commonality.
What is uniqueness?
Uniqueness: Uniqueness is a state or condition wherein someone or something is unlike anything else in comparison. In factor analysis, the uniqueness of a variable is the proportion of its variance that is not accounted for by the common factors (that is, one minus the communality).
What is communality?
Communality: The concept of communality in factor analysis shows how much of each variable is accounted for by the underlying factors taken together.
A high value of communality means that not much of the variable is left over after whatever the factors represent is taken into consideration.
Factor loading: Factor loading is the correlation between the original variables and the factors.
According to C.R. Kothari, factor loadings are those values which explain how closely the variables are related to each one of the factors discovered. They are also known as factor-variable correlations.
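A minimal sketch (randomly generated illustrative data, using scikit-learn's FactorAnalysis) showing how factor loadings are obtained and how each variable's communality can be computed as the sum of its squared loadings, with the leftover variance as its uniqueness.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Illustrative data: 6 observed variables driven by 2 latent factors plus noise.
latent = rng.normal(size=(200, 2))
weights = rng.normal(size=(2, 6))
X = latent @ weights + 0.5 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=2).fit(X)

loadings = fa.components_.T                  # one row per variable, one column per factor
communality = (loadings ** 2).sum(axis=1)    # variance shared with the common factors
uniqueness = fa.noise_variance_              # variance unique to each variable

print(loadings)
print(communality)
print(uniqueness)
```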
Significant difference: A difference which arises due to some reason other than sampling fluctuation is known as a significant difference.
Insignificant difference: A difference which arises due to sampling fluctuation is known as an insignificant difference.