
Chi-square and F Distributions

Distributions
• There are many theoretical distributions, both continuous and discrete. Howell calls these test statistics.
• We use four test statistics a lot: z (unit normal), t, chi-square (χ²), and F.
• z and t are closely related to the sampling distribution of means; chi-square and F are closely related to the sampling distribution of variances.
Sampling distribution
Some important sampling distributions, which
are commonly used, are:
(1) sampling distribution of mean;
(2) sampling distribution of proportion;
(3) student’s ‘t’ distribution;
(4) F distribution; and
(5) Chi-square distribution.

Control of Dispersion
• Taguchi defined quality as deviation from target.
• The cost of poor quality is proportional to the square of the deviation from target.
• Quality can be achieved only by reducing variation.
• Recent developments in quality engineering include topics such as tolerance design and robust design.
Control of Dispersion (Cont.)
• When research is undertaken to control dispersion, success must be measured.
• This requires estimating the population variance from the sample variance.
• If the sample variance V is computed with divisor n, V is not an unbiased estimate of the population variance.
• (n / (n − 1)) · V is an unbiased point estimate of the population variance, as the sketch below illustrates.
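A quick numerical illustration (a Python sketch, not part of the original slides; the sample values are made up):

    # Biased (divisor n) vs. unbiased (divisor n - 1) variance estimates.
    import numpy as np

    sample = np.array([4.1, 5.0, 3.8, 4.6, 5.3, 4.4])
    n = len(sample)

    v_biased = sample.var(ddof=0)    # divisor n: biased low for the population variance
    v_unbiased = sample.var(ddof=1)  # divisor n - 1: unbiased point estimate

    # Multiplying the biased estimate by n / (n - 1) gives the unbiased one.
    assert np.isclose(v_biased * n / (n - 1), v_unbiased)
    print(v_biased, v_unbiased)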
Interval Estimate of Variance
• Further distributions, the chi-square distribution and the F distribution, are required for interval estimation of the population variance from the sample variance.
Chi-square
• The chi-square test is an important test among the several tests of significance developed by statisticians.
• Chi-square, symbolically written as χ² (pronounced "kai-square"), is a statistical measure used in the context of sampling analysis for comparing a variance to a theoretical variance.
• As a non-parametric test, it "can be used to determine if categorical data shows dependency or the two classifications are independent.
• It can also be used to make comparisons between theoretical populations and actual data when categories are used."
• The test is, in fact, a technique through which a researcher can
(i) test the goodness of fit;
(ii) test the significance of association between two attributes; and
(iii) test the homogeneity or the significance of population variance.
Chi-square as a Test for Comparing Variance
• The chi-square value is often used to judge the significance of population variance, i.e., we can use the test to judge whether a random sample has been drawn from a normal population with mean μ and a specified variance σ²p.
• The test is based on the χ²-distribution, i.e., it deals with collections of values that involve adding up squares.
• If we take each one of a collection of sample variances, divide it by the known population variance, and multiply the quotient by (n – 1), where n is the number of items, we obtain a χ²-distribution:

    χ² = (n – 1) σ²s / σ²p
Chi-square
• Then, by comparing the calculated value of χ² with its table value for (n – 1) degrees of freedom at a given level of significance, we may either accept H0 or reject it.
• If the calculated value of χ² is equal to or less than the table value, the null hypothesis is accepted; otherwise the null hypothesis is rejected.
• This test is based on the chi-square distribution, which is not symmetrical and whose values are all positive; one must simply know the degrees of freedom to use the distribution.
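A minimal sketch of this accept/reject procedure in Python, assuming made-up sample data and a hypothesized population variance; SciPy's chi-square percent-point function stands in for the printed table:

    # Chi-square test for a hypothesized population variance.
    import numpy as np
    from scipy import stats

    sample = np.array([102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1, 100.6])
    sigma0_sq = 2.0                    # hypothesized population variance (assumption)
    n = len(sample)
    s_sq = sample.var(ddof=1)          # unbiased sample variance

    chi2_stat = (n - 1) * s_sq / sigma0_sq
    table_value = stats.chi2.ppf(0.95, df=n - 1)   # table value at alpha = 0.05

    # Accept H0 if the calculated value is at or below the table value.
    print("reject H0" if chi2_stat > table_value else "accept H0")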
Chi-square Distribution
• The χ²-distribution is not symmetrical and all of its values are positive. To make use of this distribution, one is required to know the degrees of freedom, since for different degrees of freedom we have different curves.
• The smaller the number of degrees of freedom, the more skewed the distribution.
Chi-square
The distribution of chi-square depends on one parameter, its degrees of freedom (df or v). As df gets large, the curve becomes less skewed and more nearly normal.
HYPOTHESIS TESTING FOR COMPARING A VARIANCE TO SOME HYPOTHESISED POPULATION VARIANCE

• The test we use for comparing a sample variance to some theoretical or hypothesised population variance is different from the z-test or the t-test.

• The test we use for this purpose is known as the chi-square test, and the test statistic, symbolised as χ² and known as the chi-square value, is worked out.

• The chi-square value to test the null hypothesis H0: σ²s = σ²p is worked out as:

    χ² = (n – 1) σ²s / σ²p
Test of Goodness of Fit
• As a test of goodness of fit, the χ² test enables us to see how well the assumed theoretical distribution (such as the binomial, Poisson, or normal distribution) fits the observed data.
• When some theoretical distribution is fitted to the given data, we are always interested in knowing how well this distribution fits the observed data.
• The chi-square test can answer this. If the calculated value of χ² is less than the table value at a certain level of significance, the fit is considered a good one, which means that the divergence between the observed and expected frequencies is attributable to fluctuations of sampling.
• But if the calculated value of χ² is greater than its table value, the fit is not considered a good one.
Multinomial Experiments
A multinomial experiment is a probability experiment consisting of a fixed number of trials in which there are more than two possible outcomes for each independent trial. (Unlike the binomial experiment, in which there are only two possible outcomes.)

Example:
A researcher claims that the distribution of favorite pizza toppings among teenagers is as shown below. Each outcome is classified into one of the categories, and the probability for each possible outcome is fixed.

    Topping      Frequency, f
    Cheese       41%
    Pepperoni    25%
    Sausage      15%
    Mushrooms    10%
    Onions       9%
Chi-Square Goodness-of-Fit Test
A chi-square goodness-of-fit test is used to test whether a frequency distribution fits an expected distribution.
To calculate the test statistic for the chi-square goodness-of-fit test, the observed frequencies and the expected frequencies are used.
The observed frequency O of a category is the frequency for the category observed in the sample data.
The expected frequency E of a category is the calculated frequency for the category. Expected frequencies are obtained assuming the specified (or hypothesized) distribution. The expected frequency for the ith category is

    Ei = n·pi

where n is the number of trials (the sample size) and pi is the assumed probability of the ith category.
Chi-Square Goodness-of-Fit Test
For the chi-square goodness-of-fit test to be used, the following must be true.
1. The observed frequencies must be obtained by using a random sample.
2. Each expected frequency must be greater than or equal to 10 (5 in some texts).

The Chi-Square Goodness-of-Fit Test
If the conditions listed above are satisfied, then the sampling distribution for the goodness-of-fit test is approximated by a chi-square distribution with k – 1 degrees of freedom, where k is the number of categories. The test statistic for the chi-square goodness-of-fit test is

    χ² = Σ (O – E)² / E

where O represents the observed frequency of each category and E represents the expected frequency of each category. The test is always a right-tailed test.
Chi-Square Goodness-of-Fit Test
Performing a Chi-Square Goodness-of-Fit Test
In Words                                          In Symbols
1. Identify the claim. State the null and         State H0 and Ha.
   alternative hypotheses.
2. Specify the level of significance.             Identify α.
3. Identify the degrees of freedom.               d.f. = k – 1
4. Determine the critical value.                  Use Table 6 in Appendix B.
5. Determine the rejection region.

Continued.
Chi-Square Goodness-of-Fit Test
Performing a Chi-Square Goodness-of-Fit Test
In Words                                          In Symbols
6. Calculate the test statistic.                  χ² = Σ (O – E)² / E
7. Make a decision to reject or fail to           If χ² is in the rejection region,
   reject the null hypothesis.                    reject H0. Otherwise, fail to
                                                  reject H0.
8. Interpret the decision in the context
   of the original claim.
Observed and Expected Frequencies
Example:
200 teenagers are randomly selected and asked what their favorite pizza topping is. The results are shown below.
Find the observed frequencies and the expected frequencies.

    Topping     Results (n = 200)   % of teenagers   Observed Frequency   Expected Frequency
    Cheese      78                  41%              78                   200(0.41) = 82
    Pepperoni   52                  25%              52                   200(0.25) = 50
    Sausage     30                  15%              30                   200(0.15) = 30
    Mushrooms   25                  10%              25                   200(0.10) = 20
    Onions      15                  9%               15                   200(0.09) = 18
Chi-Square Independence Test
A chi-square independence test is used to test the independence of
two variables. Using a chi-square test, you can determine whether
the occurrence of one variable affects the probability of the
occurrence of the other variable.

For the chi-square independence test to be used, the following must be true.
1. The observed frequencies must be obtained by using a random
sample.
2. Each expected frequency must be greater than or equal to 5.
Test of Independence
• For instance, we may be interested in knowing whether a new medicine is effective in controlling fever; the χ² test will help us decide this issue.

• We proceed with the null hypothesis that the two attributes (viz., new medicine and control of fever) are independent, which means that the new medicine is not effective in controlling fever.

• We first calculate the expected frequencies and then work out the value of χ². If the calculated value of χ² is less than the table value at a certain level of significance for the given degrees of freedom, we conclude that the null hypothesis stands, which means that the two attributes are independent or not associated (i.e., the new medicine is not effective in controlling the fever).

• But if the calculated value of χ² is greater than its table value, our inference would be that the null hypothesis does not hold good, which means the two attributes are associated and the association is not due to chance but exists in reality (i.e., the new medicine is effective in controlling the fever and as such may be prescribed).
Test of Independence
• χ² is not a measure of the degree of relationship or of the form of relationship between two attributes; it is simply a technique for judging the significance of such association or relationship between two attributes.

• In order to apply the chi-square test, either as a test of goodness of fit or as a test of the significance of association between attributes, the observed as well as the theoretical (expected) frequencies must be grouped in the same way, and the theoretical distribution must be adjusted to give the same total frequency as the observed distribution.
Chi-Square Independence Test
The Chi-Square Independence Test
If the conditions listed are satisfied, then the sampling distribution
for the chi-square independence test is approximated by a chi-
square distribution with
(r – 1)(c – 1)
degrees of freedom, where r and c are the number of rows and
columns, respectively, of a contingency table. The test statistic for
the chi-square independence test is
    χ² = Σ (O – E)² / E

where O represents the observed frequencies and E represents the expected frequencies. The test is always a right-tailed test.
Chi-Square Goodness-of-Fit Test
Example:
A researcher claims that the distribution of favorite pizza toppings among teenagers is as shown below. 200 randomly selected teenagers are surveyed.

    Topping     Frequency, f
    Cheese      41%
    Pepperoni   25%
    Sausage     15%
    Mushrooms   10%
    Onions      9%

Using α = 0.01 and the observed and expected values previously calculated, test the researcher's claim using a chi-square goodness-of-fit test.
Continued.
Chi-Square Goodness-of-Fit Test
Example continued:
H0: The distribution of pizza toppings is 41% cheese, 25% pepperoni, 15% sausage, 10% mushrooms, and 9% onions. (Claim)
Ha: The distribution of pizza toppings differs from the claimed or expected distribution.

Because there are 5 categories, the chi-square distribution has k – 1 = 5 – 1 = 4 degrees of freedom.

With d.f. = 4 and α = 0.01, the critical value is χ²0 = 13.277.

Continued.
Chi-Square Goodness-of-Fit Test
Example continued:
[Figure: chi-square curve with right-tailed rejection region, α = 0.01, critical value χ²0 = 13.277]

    Topping     Observed Frequency   Expected Frequency
    Cheese      78                   82
    Pepperoni   52                   50
    Sausage     30                   30
    Mushrooms   25                   20
    Onions      15                   18

    χ² = Σ (O – E)²/E
       = (78 – 82)²/82 + (52 – 50)²/50 + (30 – 30)²/30 + (25 – 20)²/20 + (15 – 18)²/18
       ≈ 2.025

Fail to reject H0.
There is not enough evidence at the 1% level to reject the researcher's claim.
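The same computation can be checked in a couple of lines with SciPy (a sketch, not part of the slides); it reproduces χ² ≈ 2.025 for these observed and expected counts:

    # Goodness-of-fit check for the pizza-topping example.
    from scipy import stats

    observed = [78, 52, 30, 25, 15]
    expected = [82, 50, 30, 20, 18]   # 200 times the claimed proportions

    chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
    print(chi2_stat, p_value)         # chi2 is about 2.025; p > 0.01, so fail to reject H0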
Contingency Tables
An r × c contingency table shows the observed frequencies for two variables. The observed frequencies are arranged in r rows and c columns. The intersection of a row and a column is called a cell.

The following contingency table shows a random sample of 321 fatally injured passenger vehicle drivers by age and gender. (Adapted from Insurance Institute for Highway Safety)

                                  Age
    Gender   16–20   21–30   31–40   41–50   51–60   61 and older
    Male     32      51      52      43      28      10
    Female   13      22      33      21      10      6
Expected Frequency
Assuming the two variables are independent, you can use the contingency table to find the expected frequency for each cell.

Finding the Expected Frequency for Contingency Table Cells
The expected frequency for a cell Er,c in a contingency table is

    Expected frequency Er,c = (Sum of row r) × (Sum of column c) / (Sample size)
Expected Frequency
Example:
Find the expected frequency for each "Male" cell in the contingency table for the sample of 321 fatally injured drivers. Assume that the variables, age and gender, are independent.

                                  Age
    Gender   16–20   21–30   31–40   41–50   51–60   61 and older   Total
    Male     32      51      52      43      28      10             216
    Female   13      22      33      21      10      6              105
    Total    45      73      85      64      38      16             321

Continued.
Expected Frequency
Example continued:

    Expected frequency Er,c = (Sum of row r) × (Sum of column c) / (Sample size)

    E1,1 = (216 × 45)/321 ≈ 30.28    E1,2 = (216 × 73)/321 ≈ 49.12    E1,3 = (216 × 85)/321 ≈ 57.20
    E1,4 = (216 × 64)/321 ≈ 43.07    E1,5 = (216 × 38)/321 ≈ 25.57    E1,6 = (216 × 16)/321 ≈ 10.77
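All twelve expected cells can be produced at once from the row and column totals; a NumPy sketch (an illustration, not from the slides) uses an outer product:

    # Expected frequencies under independence:
    # E[r, c] = (row total r) * (column total c) / (sample size).
    import numpy as np

    observed = np.array([[32, 51, 52, 43, 28, 10],    # Male
                         [13, 22, 33, 21, 10, 6]])    # Female

    row_totals = observed.sum(axis=1)   # [216, 105]
    col_totals = observed.sum(axis=0)   # [45, 73, 85, 64, 38, 16]
    n = observed.sum()                  # 321

    expected = np.outer(row_totals, col_totals) / n
    print(np.round(expected, 2))        # first row: 30.28, 49.12, 57.20, ...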
Chi-Square Independence Test
Performing a Chi-Square Independence Test
In Words                                          In Symbols
1. Identify the claim. State the null and         State H0 and Ha.
   alternative hypotheses.
2. Specify the level of significance.             Identify α.
3. Identify the degrees of freedom.               d.f. = (r – 1)(c – 1)
4. Determine the critical value.                  Use Table 6 in Appendix B.
5. Determine the rejection region.

Continued.
Chi-Square Independence Test
Performing a Chi-Square Independence Test
In Words                                          In Symbols
6. Calculate the test statistic.                  χ² = Σ (O – E)² / E
7. Make a decision to reject or fail to           If χ² is in the rejection region,
   reject the null hypothesis.                    reject H0. Otherwise, fail to
                                                  reject H0.
8. Interpret the decision in the context
   of the original claim.
Chi-Square Independence Test
Example:
The following contingency table shows a random sample of 321 fatally injured passenger vehicle drivers by age and gender. The expected frequencies are displayed in parentheses. At α = 0.05, can you conclude that the drivers' ages are related to gender in such accidents?

                                  Age
    Gender   16–20       21–30       31–40       41–50       51–60       61 and older   Total
    Male     32 (30.28)  51 (49.12)  52 (57.20)  43 (43.07)  28 (25.57)  10 (10.77)     216
    Female   13 (14.72)  22 (23.88)  33 (27.80)  21 (20.93)  10 (12.43)  6 (5.23)       105
    Total    45          73          85          64          38          16             321
Chi-Square Independence Test
Example continued:
Because each expected frequency is at least 5 and the drivers were randomly selected, the chi-square independence test can be used to test whether the variables are independent.

H0: The drivers' ages are independent of gender.
Ha: The drivers' ages are dependent on gender. (Claim)

d.f. = (r – 1)(c – 1) = (2 – 1)(6 – 1) = (1)(5) = 5

With d.f. = 5 and α = 0.05, the critical value is χ²0 = 11.071.

Continued.
Chi-Square Independence Test
Example continued:
[Figure: chi-square curve with right-tailed rejection region, α = 0.05, critical value χ²0 = 11.071]

    O    E       O – E    (O – E)²   (O – E)²/E
    32   30.28   1.72     2.9584     0.0977
    51   49.12   1.88     3.5344     0.0720
    52   57.20   –5.20    27.0400    0.4727
    43   43.07   –0.07    0.0049     0.0001
    28   25.57   2.43     5.9049     0.2309
    10   10.77   –0.77    0.5929     0.0551
    13   14.72   –1.72    2.9584     0.2010
    22   23.88   –1.88    3.5344     0.1480
    33   27.80   5.20     27.0400    0.9727
    21   20.93   0.07     0.0049     0.0002
    10   12.43   –2.43    5.9049     0.4751
    6    5.23    0.77     0.5929     0.1134

    χ² = Σ (O – E)²/E ≈ 2.84

Fail to reject H0.
There is not enough evidence at the 5% level to conclude that age is dependent on gender in such accidents.
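The whole worked example collapses to a single SciPy call (a sketch, not part of the slides); chi2_contingency computes the expected frequencies internally and reproduces χ² ≈ 2.84 with 5 degrees of freedom:

    # Chi-square independence test for the age-by-gender table.
    import numpy as np
    from scipy import stats

    observed = np.array([[32, 51, 52, 43, 28, 10],
                         [13, 22, 33, 21, 10, 6]])

    chi2_stat, p_value, df, expected = stats.chi2_contingency(observed)
    print(chi2_stat, df, p_value)   # about 2.84 with df = 5; p > 0.05, fail to reject H0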
§ 10.3
Comparing Two Variances
F-Distribution
Let s²1 and s²2 represent the sample variances of two different populations. If both populations are normal and the population variances σ²1 and σ²2 are equal, then the sampling distribution of

    F = s²1 / s²2

is called an F-distribution.
There are several properties of this distribution.
1. The F-distribution is a family of curves, each of which is determined by two types of degrees of freedom: the degrees of freedom corresponding to the variance in the numerator, denoted d.f.N, and the degrees of freedom corresponding to the variance in the denominator, denoted d.f.D.
Continued.
F-Distribution
Properties of the F-distribution continued:
2. F-distributions are positively skewed.
3. The total area under each curve of an F-distribution is equal to 1.
4. F-values are always greater than or equal to 0.
5. For all F-distributions, the mean value of F is approximately equal to 1.

[Figure: F-distribution curves for (d.f.N, d.f.D) = (1, 8), (8, 26), (16, 7), and (3, 11), plotted over F from 0 to 4]
Critical Values for the F-Distribution
Finding Critical Values for the F-Distribution
1. Specify the level of significance α.
2. Determine the degrees of freedom for the numerator, d.f.N.
3. Determine the degrees of freedom for the denominator, d.f.D.
4. Use Table 7 in Appendix B to find the critical value. If the hypothesis test is
   a. one-tailed, use the α F-table.
   b. two-tailed, use the ½α F-table.
Critical Values for the F-Distribution
Example:
Find the critical F-value for a right-tailed test when α = 0.05, d.f.N = 5, and d.f.D = 28.

Appendix B: Table 7: F-Distribution (α = 0.05)
    d.f.D     d.f.N: Degrees of freedom, numerator
    (denom.)  1       2       3       4       5       6
    1         161.4   199.5   215.7   224.6   230.2   234.0
    2         18.51   19.00   19.16   19.25   19.30   19.33
    …
    27        4.21    3.35    2.96    2.73    2.57    2.46
    28        4.20    3.34    2.95    2.71    2.56    2.45
    29        4.18    3.33    2.93    2.70    2.55    2.43

The critical value is F0 = 2.56.
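The table lookup can be reproduced with the F-distribution's percent-point function (a SciPy sketch, not part of the slides):

    # Critical F-value for a right-tailed test, alpha = 0.05, d.f.N = 5, d.f.D = 28.
    from scipy import stats

    f_critical = stats.f.ppf(0.95, dfn=5, dfd=28)
    print(round(f_critical, 2))   # 2.56, matching Table 7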


Critical Values for the F-Distribution
Example:
Find the critical F-value for a two-tailed test when α = 0.10, d.f.N = 4, and d.f.D = 6. Use ½α = ½(0.10) = 0.05.

Appendix B: Table 7: F-Distribution (α = 0.05)
    d.f.D     d.f.N: Degrees of freedom, numerator
    (denom.)  1       2       3       4       5       6
    1         161.4   199.5   215.7   224.6   230.2   234.0
    2         18.51   19.00   19.16   19.25   19.30   19.33
    3         10.13   9.55    9.28    9.12    9.01    8.94
    4         7.71    6.94    6.59    6.39    6.26    6.16
    5         6.61    5.79    5.41    5.19    5.05    4.95
    6         5.99    5.14    4.76    4.53    4.39    4.28
    7         5.59    4.74    4.35    4.12    3.97    3.87

The critical value is F0 = 4.53.
Two-Sample F-Test for Variances
A two-sample F-test is used to compare two population variances σ²1 and σ²2 when a sample is randomly selected from each population. The populations must be independent and normally distributed. The test statistic is

    F = s²1 / s²2

where s²1 and s²2 represent the sample variances with s²1 ≥ s²2. The degrees of freedom for the numerator is d.f.N = n1 – 1 and the degrees of freedom for the denominator is d.f.D = n2 – 1, where n1 is the size of the sample having variance s²1 and n2 is the size of the sample having variance s²2.
Two-Sample F-Test for Variances
Using a Two-Sample F-Test to Compare σ²1 and σ²2
In Words                                          In Symbols
1. Identify the claim. State the null and         State H0 and Ha.
   alternative hypotheses.
2. Specify the level of significance.             Identify α.
3. Identify the degrees of freedom.               d.f.N = n1 – 1
                                                  d.f.D = n2 – 1
4. Determine the critical value.                  Use Table 7 in Appendix B.

Continued.
Two-Sample F-Test for Variances
Using a Two-Sample F-Test to Compare σ²1 and σ²2
In Words                                          In Symbols
5. Determine the rejection region.
6. Calculate the test statistic.                  F = s²1 / s²2
7. Make a decision to reject or fail to           If F is in the rejection region,
   reject the null hypothesis.                    reject H0. Otherwise, fail to
                                                  reject H0.
8. Interpret the decision in the context
   of the original claim.
Two-Sample F-Test
Example:
A travel agency's marketing brochure indicates that the standard deviations of hotel room rates for two cities are the same. A random sample of 13 hotel room rates in one city has a standard deviation of $27.50, and a random sample of 16 hotel room rates in the other city has a standard deviation of $29.75. Can you reject the agency's claim at α = 0.01?

Because 29.75 > 27.50, s²1 = 885.06 and s²2 = 756.25.

H0: σ²1 = σ²2 (Claim)
Ha: σ²1 ≠ σ²2
Continued.
Two-Sample F-Test
Example continued:
This is a two-tailed test with ½α = ½(0.01) = 0.005, d.f.N = 15, and d.f.D = 12.

The critical value is F0 = 4.72.
[Figure: F-distribution with rejection region beyond F0 = 4.72, ½α = 0.005]

The test statistic is

    F = s²1 / s²2 = 885.06 / 756.25 ≈ 1.17.

Fail to reject H0.
There is not enough evidence at the 1% level to reject the claim that the standard deviations of the hotel room rates for the two cities are the same.
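SciPy has no single-call two-sample variance F-test, so a short sketch of the computation from the summary statistics (values taken from the example) looks like this:

    # Two-sample F-test for the hotel-rate example.
    from scipy import stats

    s1_sq, n1 = 29.75**2, 16   # larger sample variance goes in the numerator
    s2_sq, n2 = 27.50**2, 13

    F = s1_sq / s2_sq                                            # about 1.17
    f_critical = stats.f.ppf(1 - 0.005, dfn=n1 - 1, dfd=n2 - 1)  # two-tailed at alpha = 0.01
    print(F, f_critical, F > f_critical)                         # fail to reject H0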
§ 10.4
Analysis of Variance
One-Way ANOVA
One-way analysis of variance is a hypothesis-testing technique
that is used to compare means from three or more populations.
Analysis of variance is usually abbreviated ANOVA.

In a one-way ANOVA test, the following must be true.
1. Each sample must be randomly selected from a normal, or
approximately normal, population.
2. The samples must be independent of each other.
3. Each population must have the same variance.
One-Way ANOVA

    Test statistic = Variance between samples / Variance within samples

1. The variance between samples MSB measures the differences related to the treatment given to each sample and is sometimes called the mean square between.
2. The variance within samples MSW measures the differences related to entries within the same sample. This variance, sometimes called the mean square within, is usually due to sampling error.
One-Way ANOVA
One-Way Analysis of Variance Test
If the conditions listed are satisfied, then the sampling distribution for the test is approximated by the F-distribution. The test statistic is

    F = MSB / MSW.

The degrees of freedom for the F-test are
d.f.N = k – 1
and
d.f.D = N – k
where k is the number of samples and N is the sum of the sample sizes.
Test Statistic for a One-Way ANOVA
Finding the Test Statistic for a One-Way ANOVA Test
In Words                                          In Symbols
1. Find the mean and variance of each             x̄ = Σx / n,  s² = Σ(x – x̄)² / (n – 1)
   sample.
2. Find the mean of all entries in all            x̿ = Σx / N
   samples (the grand mean).
3. Find the sum of squares between the            SSB = Σ ni(x̄i – x̿)²
   samples.
4. Find the sum of squares within the             SSW = Σ (ni – 1)s²i
   samples.
Continued.
Test Statistic for a One-Way ANOVA
Finding the Test Statistic for a One-Way ANOVA Test
In Words                                          In Symbols
5. Find the variance between the samples.         MSB = SSB / (k – 1) = SSB / d.f.N
6. Find the variance within the samples.          MSW = SSW / (N – k) = SSW / d.f.D
7. Find the test statistic.                       F = MSB / MSW
Performing a One-Way ANOVA Test
Performing a One-Way Analysis of Variance Test
In Words                                          In Symbols
1. Identify the claim. State the null and         State H0 and Ha.
   alternative hypotheses.
2. Specify the level of significance.             Identify α.
3. Identify the degrees of freedom.               d.f.N = k – 1
                                                  d.f.D = N – k
4. Determine the critical value.                  Use Table 7 in Appendix B.
Continued.
Performing a One-Way ANOVA Test
Performing a One-Way Analysis of Variance Test
In Words                                          In Symbols
5. Determine the rejection region.
6. Calculate the test statistic.                  F = MSB / MSW
7. Make a decision to reject or fail to           If F is in the rejection region,
   reject the null hypothesis.                    reject H0. Otherwise, fail to
                                                  reject H0.
8. Interpret the decision in the context
   of the original claim.
ANOVA Summary Table
• A table is a convenient way to summarize the results of a one-way ANOVA test.

    Variation   Sum of squares   Degrees of freedom   Mean squares         F
    Between     SSB              d.f.N                MSB = SSB / d.f.N    MSB / MSW
    Within      SSW              d.f.D                MSW = SSW / d.f.D
Performing a One-Way ANOVA Test
Example:
The following table shows the salaries of randomly selected
individuals from four large metropolitan areas. At α = 0.05, can
you conclude that the mean salary is different in at least one of the
areas? (Adapted from US Bureau of Economic Analysis)

Pittsburgh Dallas Chicago Minneapolis


27,800 30,000 32,000 30,000
28,000 33,900 35,800 40,000
25,500 29,750 28,000 35,000
29,150 25,000 38,900 33,000
30,295 34,055 27,245 29,805

Continued.
Performing a One-Way ANOVA Test
Example continued:
H0: μ1 = μ2 = μ3 = μ4
Ha: At least one mean is different from the others. (Claim)

Because there are k = 4 samples, d.f.N = k – 1 = 4 – 1 = 3.

The sum of the sample sizes is


N = n1 + n2 + n3 + n4 = 5 + 5 + 5 + 5 = 20.

d.f.D = N – k = 20 – 4 = 16

Using α = 0.05, d.f.N = 3, and d.f.D = 16,


the critical value is F0 = 3.24.
Continued.
Performing a One-Way ANOVA Test
Example continued:
To find the test statistic, the following must be calculated.

    x̿ = Σx / N = (140745 + 152705 + 161945 + 167805) / 20 = 31160

    MSB = Σ ni(x̄i – x̿)² / d.f.N
        = [5(28149 – 31160)² + 5(30541 – 31160)² + 5(32389 – 31160)² + 5(33561 – 31160)²] / (4 – 1)
        ≈ 27874206.67

Continued.
Performing a One-Way ANOVA Test
Example continued:

    MSW = Σ (ni – 1)s²i / d.f.D
        = [(5 – 1)(3192128.94) + (5 – 1)(13813030.08) + (5 – 1)(24975855.83) + (5 – 1)(17658605.02)] / (20 – 4)
        ≈ 14909904.97

    F = MSB / MSW = 27874206.67 / 14909904.97 ≈ 1.870

Test statistic 1.870 < critical value 3.24.
Fail to reject H0.
There is not enough evidence at the 5% level to conclude that the mean salary is different in at least one of the areas.
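As a cross-check (a sketch, not part of the slides), scipy.stats.f_oneway reproduces the F statistic of about 1.87 from the raw salary data:

    # One-way ANOVA for the four-city salary example.
    from scipy import stats

    pittsburgh  = [27800, 28000, 25500, 29150, 30295]
    dallas      = [30000, 33900, 29750, 25000, 34055]
    chicago     = [32000, 35800, 28000, 38900, 27245]
    minneapolis = [30000, 40000, 35000, 33000, 29805]

    F, p_value = stats.f_oneway(pittsburgh, dallas, chicago, minneapolis)
    print(F, p_value)   # F is about 1.87; p > 0.05, so fail to reject H0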
Normality Assumption
• We assume normal distributions to figure sampling distributions and thus p levels.
• Violations of normality have minor implications for testing means, especially as N gets large.
• Violations of normality are more serious for testing variances. Look at your data before conducting this test; you can test for normality.
Review
• You have sampled 25 children from an elementary school 5th-grade class and measured the height of each. You wonder whether these children are more variable in height than typical children. Their variance in height is 4. Compute a confidence interval for this variance. If the variance of height among 5th-grade children nationally is 2, do you consider this sample ordinary?
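One way to work this review problem (a sketch under the usual normality assumption, not from the slides): a 95% confidence interval for σ² is built from the chi-square distribution with n – 1 degrees of freedom.

    # Confidence interval for a population variance:
    # (n-1)s^2 / chi2_upper <= sigma^2 <= (n-1)s^2 / chi2_lower.
    from scipy import stats

    n, s_sq, alpha = 25, 4.0, 0.05
    df = n - 1

    lower = df * s_sq / stats.chi2.ppf(1 - alpha / 2, df)
    upper = df * s_sq / stats.chi2.ppf(alpha / 2, df)
    print(lower, upper)   # roughly (2.44, 7.74); the national value 2 falls below
                          # the interval, so the sample looks unusually variable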
The F Distribution (1)
• The F distribution is the ratio of two variance estimates:

    F = s²1 / s²2 = est. σ²1 / est. σ²2

• Also the ratio of two chi-squares, each divided by its degrees of freedom:

    F = [χ²(v1) / v1] / [χ²(v2) / v2]

In our applications, v2 will be larger than v1, and v2 will be larger than 2. In such a case, the mean of the F distribution (its expected value) is v2 / (v2 – 2).
F Distribution (2)
• F depends on two parameters: v1 and v2 (df1
and df2). The shape of F changes with these.
Range is 0 to infinity. Shaped a bit like chi-
square.
• F tables show critical values for df in the
numerator and df in the denominator.
• F tables are 1-tailed; can figure 2-tailed if you
need to (but you usually don’t).
F Table – Critical Values
Numerator df: dfB

    dfW         1      2      3      4      5
    5     5%    6.61   5.79   5.41   5.19   5.05
          1%    16.3   13.3   12.1   11.4   11.0
    10    5%    4.96   4.10   3.71   3.48   3.33
          1%    10.0   7.56   6.55   5.99   5.64
    12    5%    4.75   3.89   3.49   3.26   3.11
          1%    9.33   6.94   5.95   5.41   5.06
    14    5%    4.60   3.74   3.34   3.11   2.96
          1%    8.86   6.51   5.56   5.04   4.70

e.g., the critical value of F at alpha = .05 with 3 and 12 df is 3.49.
Testing Hypotheses about 2 Variances
• Suppose H0: σ²1 ≤ σ²2; H1: σ²1 > σ²2
  – Note: 1-tailed.
• We find
  N1 = 16, s²1 = 5.8; N2 = 16, s²2 = 1.7
• Then df1 = df2 = 15, and

    F = s²1 / s²2 = 5.8 / 1.7 = 3.41

Going to the F table with 15 and 15 df, we find that for alpha = .05 (1-tailed) the critical value is 2.40. Therefore the result is significant.
A Look Ahead
• The F distribution is used in many statistical
tests
– Test for equality of variances.
– Tests for differences in means in ANOVA.
– Tests for regression models (slopes relating one
continuous variable to another like SAT and
GPA).
Relations among Distributions – the
Children of the Normal
• Chi-square is drawn from the normal. N(0,1)
deviates squared and summed.
• F is the ratio of two chi-squares, each divided
by its df. A chi-square divided by its df is a
variance estimate, that is, a sum of squares
divided by degrees of freedom.
• F = t². If you square t, you get an F with 1 df in the numerator:

    t²(v) = F(1, v)
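A quick numerical check of this identity (a sketch, not from the slides): squaring a two-tailed t critical value gives the corresponding right-tailed F critical value with 1 numerator df.

    # Verify t^2(v) = F(1, v) at matched tail probabilities.
    from scipy import stats

    v = 10
    t_crit = stats.t.ppf(0.975, df=v)             # two-tailed t at alpha = 0.05
    f_crit = stats.f.ppf(0.95, dfn=1, dfd=v)      # right-tailed F at alpha = 0.05
    print(round(t_crit**2, 4), round(f_crit, 4))  # both about 4.9646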
Review
• How is F related to the normal distribution? To chi-square?
• Suppose we have 2 samples and we want to know whether they were drawn from populations where the variances are equal. Sample 1: N = 50, s² = 25; Sample 2: N = 60, s² = 30. How can we test? What is the best conclusion for these data?
