Item Analysis
This document provides an overview of item analysis and test analysis procedures. It describes how to conduct a simplified item analysis by sorting test papers into upper, lower, and middle groups based on score and comparing the number of students in the upper and lower groups who answered each item correctly. This helps evaluate the difficulty and discriminating ability of each item. The document also briefly outlines common methods for describing test scores through measures of central tendency and variability, and methods for determining the reliability and validity of a test.
Item analysis is the systematic evaluation of the effectiveness of each item of a test. Item analysis can tell us:
The difficulty of the item
The discriminating power of the item
The effectiveness of each alternative

Simplified Item-Analysis Procedure
There are a number of different item-analysis procedures that might be applied (Downie). For informal achievement tests used in teaching, only the simplest of procedures seems warranted. The following steps outline a simple but effective procedure. We shall use 32 test papers to illustrate the steps.
1. Arrange the test papers (all 32 papers) in order from the highest score to the lowest score.
2. Select approximately one-third of the papers with the highest scores and call this the upper group (10 papers). Select the same number of papers with the lowest scores and call this the lower group (10 papers). Set the middle group of papers aside (12 papers). Although these could be included in the analysis, using only the upper and lower groups simplifies the procedure.
3. For each item, count the number of students in the upper group who selected each alternative. Make the same count for the lower group.
4. Estimate item discriminating power by comparing the number of students in the upper and lower groups who got the item right. Note in our sample item above that six students in the upper group and two students in the lower group selected the correct answer. This indicates positive discrimination, since the item differentiates between students in the same way that the total test score does. That is, students with high scores on the test (upper group) got the item right more frequently than students with low scores on the test (lower group). Although analysis by inspection may be all that is necessary for most purposes, an index of discrimination can easily be computed. Simply subtract the number in the lower group who got the item right from the number in the upper group who got the item right, and divide by the number in each group. Thus, for our sample item, the computation would be as follows:
Index of discrimination = (6 - 2) / 10 = 0.40
(A short code sketch following this procedure illustrates the computation.)
5. Determine the effectiveness of the distracters by comparing the number of students in the upper and lower groups who selected each incorrect alternative. A good distracter will attract more students from the lower group than from the upper group. Thus, for our illustrative item-analysis data from step 3, alternatives A and D are functioning effectively, alternative C is poor since it attracted more students from the upper group, and alternative E is completely ineffective since it attracted no one.
An analysis such as this is useful in evaluating a test item, and, when combined with an inspection of the item itself, it provides helpful information for improving the item.
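To make the arithmetic in steps 4 and 5 concrete, here is a minimal Python sketch of the procedure. The deck gives only the correct-answer counts (six in the upper group, two in the lower group, with 10 papers per group), so the per-alternative tallies below, and the assumption that B is the keyed answer, are hypothetical values chosen to match the pattern described above.

```python
# Minimal sketch of the simplified item-analysis procedure described above.
# The per-alternative counts are hypothetical; only the correct-answer counts
# (6 upper, 2 lower, groups of 10) are stated in the text.

GROUP_SIZE = 10  # papers in each of the upper and lower groups

# Hypothetical tallies for one five-alternative item (keyed answer assumed to be B).
upper_counts = {"A": 1, "B": 6, "C": 3, "D": 0, "E": 0}
lower_counts = {"A": 3, "B": 2, "C": 1, "D": 4, "E": 0}
correct = "B"

def discrimination_index(upper_right, lower_right, group_size):
    """Upper-group correct minus lower-group correct, divided by the group size."""
    return (upper_right - lower_right) / group_size

def difficulty_estimate(upper_right, lower_right, group_size):
    """Proportion of the combined upper and lower groups answering correctly
    (one common way to approximate item difficulty from the two groups)."""
    return (upper_right + lower_right) / (2 * group_size)

D = discrimination_index(upper_counts[correct], lower_counts[correct], GROUP_SIZE)
P = difficulty_estimate(upper_counts[correct], lower_counts[correct], GROUP_SIZE)
print(f"Discrimination index D = {D:.2f}")    # (6 - 2) / 10 = 0.40
print(f"Approximate difficulty P = {P:.2f}")  # (6 + 2) / 20 = 0.40

# Step 5: a good distracter draws more lower-group than upper-group students.
for alt in upper_counts:
    if alt == correct:
        continue
    u, l = upper_counts[alt], lower_counts[alt]
    if u == 0 and l == 0:
        verdict = "ineffective (attracted no one)"
    elif l > u:
        verdict = "functioning (more lower-group choices)"
    else:
        verdict = "poor (as many or more upper-group choices)"
    print(f"Alternative {alt}: upper={u}, lower={l} -> {verdict}")
```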
Simplified Methods of Treating Test Scores
Test scores are normally described by two measures:
1. Average score, or measure of central tendency.
2. Spread of scores, or measure of variability.
Three types of averages:
1. Mean
2. Median
3. Mode
Two ways of describing variability:
1. Range
2. Standard deviation
Determining the mean and standard deviation
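As an illustration of these measures, the sketch below computes the mean, range, and standard deviation for a small set of scores; the score values themselves are hypothetical, not taken from the deck.

```python
import math

# Hypothetical raw scores for illustration only.
scores = [12, 15, 15, 17, 18, 20, 21, 23, 24, 25]

n = len(scores)
mean = sum(scores) / n                   # measure of central tendency
score_range = max(scores) - min(scores)  # simplest measure of variability

# Standard deviation: square root of the mean squared deviation from the mean.
variance = sum((x - mean) ** 2 for x in scores) / n
std_dev = math.sqrt(variance)

print(f"Mean = {mean:.2f}")
print(f"Range = {score_range}")
print(f"Standard deviation = {std_dev:.2f}")
```

The sketch uses the population form of the standard deviation (dividing by n); dividing by n - 1 instead gives the sample estimate that is also commonly reported for classroom score sets.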
Validity and Reliability
Valid tests measure what they were actually designed to measure.
Tests of validity:
1. Content
2. Criterion-related
3. Construct
Reliable tests measure what they were designed to measure consistently.
Methods of determining reliability:
1. Test-retest method.
2. Equivalent-forms method.
3. Test-retest with equivalent forms.
4. Internal consistency method.
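The test-retest and equivalent-forms methods both come down to correlating two sets of scores earned by the same students. The sketch below computes a Pearson correlation for that purpose; the paired scores are hypothetical. The same routine, applied to test scores and an outside criterion measure, would also give a rough sketch of criterion-related validity evidence.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for the same ten students on two administrations of the test.
first_testing  = [12, 15, 15, 17, 18, 20, 21, 23, 24, 25]
second_testing = [13, 14, 16, 16, 19, 19, 22, 22, 25, 24]

reliability = pearson_r(first_testing, second_testing)
print(f"Test-retest reliability estimate: r = {reliability:.2f}")
```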
REFERENCES
www.cte.cornell.edu
www.msu.edu/dept/soweb/writitem.html#stem
Hubbard, Evaluation in Education
Whitfield, R.C., Criteria of Quality for Multiple Choice Questions
Lindquist, Educational Measurement
ijcrb.webs.com
Interdisciplinary Journal of Contemporary Research