Assessment of Learning Notes

The document discusses different types of assessment including formative and summative assessment, and how assessment can be used for, as, and of learning. It also covers topics like constructing tests, learning outcomes, and ensuring constructive alignment between outcomes, content, methodology, and assessment tasks. The purpose of assessment is to improve the teaching-learning process and allow students to experience developing quality assessments.


Assessment of Learning

by:

DR. ARCHIEVAL A. RODRIGUEZ

Arellano University

Warm Up
Define the following:

1.) assessment
2.) measurement
3.) evaluation
Measurement

• It refers to the quantitative aspect of evaluation. It involves outcomes that can be quantified statistically. It can also be defined as the process of determining and differentiating information about the attributes or characteristics of things.
Evaluation
• It is the qualitative aspect of determining
the outcomes of learning. It involves
value judgment. Evaluation is more
comprehensive than measurement. In
fact, measurement is one aspect of
evaluation.
Learning Outcomes
• Distinguish among assessment FOR, AS and OF
learning
• Determine constructive alignment of learning
outcomes and assessment tasks
• Construct tests following the guidelines of test
construction
Assessment FOR Learning

• We assess to ensure learning
• We assess for purposes of learning

FORmative Assessment
Assessment FOR Learning

When done?
- At the beginning of a lesson
- During the lesson

What assessment tools?
• Formative
- Observation
- Q and A
- Written test
Assessment AS Learning

• Assessment as a way of learning: SELF-ASSESSMENT
Assessment AS Learning

• When you do self-assessment, you learn to be:
*self-motivated
*self-directed based on goals
*independent

• When you use a scoring rubric, you learn

Assessment OF Learning

• To measure learning
Assessment OF Learning
When done?

At the end of the unit, grading period, or quarter


Check for Understanding

• Compare formative and summative assessment by means of a metaphor.
Example

Formative assessment is test-driving a car before buying it, while summative assessment is the long drive after buying it.
Assessment OF Learning

• It focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes the use of testing for measuring knowledge, comprehension and other thinking skills.
• It allows the student to go through the standard steps in test construction for quality assessment. Students will experience how to develop rubrics for performance-based and portfolio assessment.
Constructive Alignment

Learning Outcomes → Content → Methodology → Assessment Task

Outcome/Competency
• Knowledge, Skills and Attitude
• That the learner demonstrates at the end of the lesson

Declarative - the "what" of something
Procedural - the "how to do" of something

KNOWLEDGE
DECLARATIVE: Factual, Conceptual, Principles
PROCEDURAL: Process, Procedure
Bloom’s and Anderson’s Cognitive
Domain

Creating
Evaluating
Analyzing
Applying
Understanding
Remembering
Dave's Psychomotor Domain

Naturalization
Articulation
Precision
Manipulation
Imitation
Krathwohl's Affective Domain

Characterization
Organization
Valuing
Responding
Receiving
The New Taxonomy (Marzano and
Kendall, 2007)

Self-system thinking
Metacognition
Knowledge utilization
Analysis
Comprehension
Retrieval
Constructive Alignment

Learning Outcome          Assessment Task
- Declarative Knowledge   - Written Test / Oral
- Procedural Knowledge
Constructive Alignment
Test

• It consists of questions or
exercises or other devices for
measuring the outcomes of
learning.
CLASSIFICATION OF TESTS
1. According to manner of response
a. oral
b. written
2. According to methods of preparation
a. subjective/essay
b. Objective
3. According to the nature of answer
a. Personality tests
b. Intelligence test
c. Aptitude test
d. Achievement or summative test
e. Sociometric test
f. Diagnostic or formative test
g. Trade or vocational test
• Objective tests are tests which have definite answers and
therefore are not subject to personal bias.
• Teacher-made tests or educational tests are constructed by the
teachers based on the contents of different subjects taught.
• Diagnostic tests are used to measure a student’s strengths and
weaknesses, usually to identify deficiencies in skills or
performance.
• Formative and summative are terms often used with evaluation,
but they may also be used with testing. Formative testing is done
to monitor student’s attainment of the instructional objectives.
Formative testing occurs over a period of time and monitors
student progress. Summative testing is done at the conclusion of
instruction and measures the extent to which students have
attained the desired outcomes.
• Standardized tests are already valid, reliable and objective.
Standardized tests are tests for which contents have been
selected and for which norms or standards have been
established. Psychological tests and government national
examinations are examples of standardized tests.
• Standards or norms are the goals to be achieved expressed in
terms of the average performance of the population tested.
• Criterion-referenced measure is a measuring device with a
predetermined level of success or standard on the part of the
test-takers. For example, a score of 75 percent of all the test
items could be considered a satisfactory performance.
• Norm-referenced measure is a test that is scored on the basis of
the norm or standard level of accomplishment by the whole
group taking the test. The grades of the students are based on
the normal curve of distribution.
CRITERIA OF A GOOD EXAMINATION
A good examination must pass the following criteria:
Validity
Validity refers to the degree to which a test measures what it is intended
to measure. It is the usefulness of the test for a given measure. A valid
test is always reliable. To test the validity of a test, it should be pretested
in order to determine if it really measures what it intends to measure or
what it purports to measure.
Reliability
Reliability pertains to the degree to which a test yields consistent
results. The test of reliability is the consistency of the results when the
test is administered to different groups of individuals with similar
characteristics in different places at different times. Also, the results are
almost similar when the test is given to the same group of individuals on
different days and the coefficient of correlation is not less than 0.85.
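The test-retest check described above can be sketched in code: correlate the scores from two administrations of the same test and compare the coefficient against the 0.85 cutoff. This is a minimal illustration; the score lists are hypothetical.

```python
# Test-retest reliability as a Pearson correlation between two
# administrations of the same test to the same group of students.
# The 0.85 threshold follows the text; the scores are made up.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

first_try  = [78, 85, 62, 90, 71, 88, 95, 67]   # hypothetical scores, day 1
second_try = [80, 83, 65, 92, 70, 90, 94, 69]   # same students, a later day

r = pearson_r(first_try, second_try)
print(f"test-retest r = {r:.3f}, reliable: {r >= 0.85}")
```

A coefficient near 1.0 indicates the test ranks the same students the same way on both occasions, which is the consistency the text describes.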
• Objectivity
Objectivity is the degree to which personal bias is eliminated in
the scoring of the answers. When we refer to the quality of
measurement, essentially we mean the amount of information
contained in a score generated by the measurement.
Norm-Referenced and Criterion-Referenced Measurement

Norm-Referenced Interpretation
stems from the desire to differentiate
among individuals or to discriminate among the
individuals of some defined group on whatever is
being measured. In norm-referenced measurement,
an individual’s score is interpreted by comparing it to
the scores of a defined group, often called the
normative group. Norms represent the scores
earned by one or more groups of students who have
taken the test.
Norm-referenced interpretation is a
relative interpretation based on an individual’s
position with respect to some group, often called
the normative group. Norms consist of the scores,
usually in some form of descriptive statistics, of
the normative group.
In norm-referenced interpretation, the
individual’s position in the normative group is of
concern; thus, this kind of positioning does not
specify the performance in absolute terms. The
norm being used is the basis of comparison, and
the individual score is designated by its position in
the normative group.
Achievement Tests as an Example.
Most standardized achievement tests, especially those
covering several skills and academic areas, are primarily
designed for norm-referenced interpretations. However, the
form of results and the interpretations of these tests are
somewhat complex and require concepts not yet introduced in
this text. Scores on teacher-constructed tests are often given
norm-referenced interpretations. Grading on the curve, for
example, is a norm-referenced interpretation of test scores on
some type of performance measure. Specified percentages of
scores are assigned the different grades, and an individual’s
score is positioned in the distribution of scores. (we mention
this only as an example; we do not endorse this procedure)
Suppose an algebra teacher has a total of 150
students in five classes, and the classes have a
common final examination. The teacher decides that
the distribution of letter grades assigned to the final
examination performance will be 10 percent As, 20
percent Bs, 40 percent Cs, 20 percent Ds, and 10
percent Fs. (Note that the final examination grade is
not necessarily the course grade.) Since the grading is
based on all 150 scores, do not assume that 3 students
in each class will receive As on the final examination.
James receives a score on the final exam such that
21 students have higher scores and 128 students have
lower scores. What will James’s letter grade be on the
exam? The top 15 scores will receive As, and the next
30 scores (20 percent of 150) will receive Bs. Counting
from the top score down, James’s score is positioned
22nd, so he will receive a B on the final examination.
Note that in this interpretation example, we did not
specify James’s actual numerical score on the exam.
That would have been necessary in order to determine
that his score positioned 22nd in the group of 150
scores. But in terms of the interpretation of the score,
it was based strictly on its position in the total group
of scores.
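The grading-on-the-curve procedure in this example can be sketched as a short program. The 10-20-40-20-10 split and James's rank come from the text; the raw scores themselves are hypothetical, since only rank position matters for the interpretation.

```python
# Norm-referenced (grading-on-the-curve) letter grades.
# Grades depend only on a score's rank within the group, not its value.

def curve_grades(scores, split=(0.10, 0.20, 0.40, 0.20, 0.10)):
    """Return {student_index: letter} using the given grade fractions."""
    n = len(scores)
    # student indices ordered from highest score to lowest
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    cutoffs, total = [], 0
    for frac in split:
        total += round(frac * n)
        cutoffs.append(total)          # cumulative rank cutoffs: A, B, C, D, F
    grades = {}
    for rank, idx in enumerate(order):
        for cut, letter in zip(cutoffs, "ABCDF"):
            if rank < cut:
                grades[idx] = letter
                break
    return grades

# 150 hypothetical distinct scores; student 21 has exactly 21 higher scores,
# so, like James, that student is positioned 22nd from the top.
scores = list(range(150, 0, -1))
grades = curve_grades(scores)
print(grades[21])   # prints B
```

Because the interpretation is purely positional, doubling every raw score would leave every letter grade unchanged.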
Criterion-Referenced Interpretation
-it means referencing an individual’s performance to
some criterion that is a defined performance level.
The individual’s score is interpreted in absolute rather
than relative terms. The criterion, in this situation,
means some level of specified performance that has
been determined independently of how others might
perform.
A second meaning for criterion-referenced
involves the idea of a defined behavioral
domain—that is, a defined group of behaviors.
The learner’s performance on a test is referenced
to a specifically defined group of behaviors. The
criterion in this situation is the desired behaviors.
Criterion-referenced
interpretation is an absolute rather than relative
interpretation, referenced to a defined body of
learner behaviors, or, as is commonly done, to
some specified level of performance.
Criterion-referenced tests require the specification of learner
behaviors prior to constructing the test. The behaviors should be
readily identifiable from instructional objectives. Criterion-
referenced tests tend to focus on specific learner behaviors, and
usually only a limited number are covered on any one test.
Suppose before the test is
administered an 80-percent-correct criterion is established as the
minimum performance required for mastery of each objective. A
student who does not attain the criterion has not mastered the skill
sufficiently to move ahead in the instructional sequence. To a large
extent, the criterion is based on teacher judgment. No magical,
universal criterion for mastery exists, although some curriculum
materials that contain criterion-referenced tests do suggest criteria
for mastery. Also, unless objectives are appropriate and the criterion
for achievement relevant, there is little meaning in the attainment
of a criterion, regardless of what it is.
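The 80-percent mastery criterion described above reduces to a simple per-objective check. A minimal sketch, with hypothetical objective names and item counts:

```python
# Criterion-referenced mastery check: a student masters an objective when
# at least 80% of its items are answered correctly (the criterion used in
# the text; the text notes the cutoff itself is a teacher judgment).

MASTERY_CRITERION = 0.80

def mastered(correct, total, criterion=MASTERY_CRITERION):
    """True when the student's proportion correct meets the criterion."""
    return total > 0 and correct / total >= criterion

# hypothetical per-objective results: (items correct, items on the objective)
results = {
    "add two five-digit numbers": (9, 10),
    "multiply three-digit numbers": (7, 10),
}
for objective, (correct, total) in results.items():
    verdict = "mastered" if mastered(correct, total) else "reteach"
    print(f"{objective}: {correct}/{total} -> {verdict}")
```

Note the contrast with the curve-grading example: the verdict for one student never depends on how other students performed.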
Distinctions between Norm-Referenced and
Criterion-Referenced Tests
Although interpretations, not characteristics,
provide the distinction between norm-referenced
and criterion-referenced tests, the two types do tend
to differ in some ways. Norm-referenced tests are
usually more general and comprehensive and cover a
large domain of content and learning tasks. They are
used for survey testing, although this is not their
exclusive use.
Criterion-referenced tests focus on a specific
group of learner behaviors. To show the
contrast, consider an example. Arithmetic skills
represent a general and broad category of
student outcomes and would likely be measured
by a norm-referenced test. On the other hand,
behaviors such as solving addition problems with
two five-digit numbers or determining the
multiplication products of three- and four-digit
numbers are much more specific and may be
measured by criterion-referenced tests.
A criterion-referenced test tends to focus
more on sub skills than on broad skills. Thus,
criterion-referenced tests tend to be shorter. If
mastery learning is involved, criterion-referenced
measurement would be used. Norm-referenced
test scores are transformed to positions within the
normative group. Criterion-referenced test scores
are usually given in the percentage of correct
answers or another indicator of mastery or the
lack thereof.
Criterion-referenced tests tend to lend
themselves more to individualizing instruction
than do norm-referenced tests. In
individualizing instruction, a student’s
performance is interpreted more
appropriately by comparison to the desired
behaviors for that particular student, rather
than by comparison with the performance of a
group.
Norm-referenced test items tend to be of
average difficulty. Criterion-referenced tests have
item difficulty matched to the learning tasks. This
distinction in item difficulty is necessary because
norm-referenced tests emphasize the discrimination
among individuals and criterion-referenced tests
emphasize the description of performance. Easy
items, for example, do little for discriminating among
individuals, but they may be necessary for describing
performance.
Finally, when measuring attitudes, interests, and
aptitudes, it is practically impossible to interpret the
results without comparing them to a reference group.
The reference groups in such cases are usually typical
students or students with high interests in certain
areas. Teachers have no basis for anticipating these
kinds of scores; therefore, in order to ascribe meaning
to such a score, a referent group must be used. For
instance, a score of 80 on an interest inventory has no
meaning in itself. On the other hand, if a score of 80 is
the typical response by a group interested in
mechanical areas, the score takes on meaning.
STAGES IN TEST CONSTRUCTION
I. Planning the Test
A. Determining the Objectives
B. Preparing the Table of Specifications
C. Selecting the Appropriate Item Format
D. Writing the Test Items
E. Editing the Test Items
II. Trying Out the Test
A. Administering the First Tryout, then Item Analysis
B. Administering the Second Tryout, then Item Analysis
C. Preparing the Final Form of the Test
III. Establishing the Test Validity
IV. Establishing the Test Reliability
V. Interpreting the Test Scores
MAJOR CONSIDERATIONS IN TEST CONSTRUCTION
The following are the
major considerations in test construction:
Type of Test
Our usual idea of testing is an in-class
test that is administered by the teacher; however,
there are many variations on this theme: group tests,
individual tests, written tests, oral tests, speed tests,
power tests, pretests and posttests. Each of these
has different characteristics that must be considered
when the tests are planned.
If it is a take-home test rather than an in-class
test, how do you make sure that students work
independently, have equal access to sources and
resources, or spend a sufficient but not enormous
amount of time on the task? If it is a pretest, should
it exactly match the posttest so that a gain score can
be computed, or should the pretest contain items
that are diagnostic of prerequisite skills and
knowledge? If it is an achievement test, should
partial credit be awarded, should there be penalties
for guessing, or should points be deducted for
grammar and spelling errors?
Obviously, the test plan must include a wide array of
issues. Anticipating these potential problems allows the
test constructor to develop positions or policies that are
consistent with his or her testing philosophy. These can
then be communicated to students, administrators,
parents, and others who may be affected by the testing
program. Make a list of the objectives, the subject
matter taught, and the activities undertaken. These are
contained in the daily lesson plans of the teacher and in
the references or textbooks used. Such tests are usually
very indirect methods that only approximate real-world
applications. The constraints in classroom testing are
often due to time and the development level of the
students.
Test Length
A major decision in the test planning is how many items
should be included on the test. There should be enough to cover
the content adequately, but the length of the class period or the
attention span or fatigue limits of the students usually restrict
the test length. Decisions about test length are usually based on
practical constraints more than on theoretical considerations.
Most teachers want test
scores to be determined by how much the student understands
rather than by how quickly he or she answers the questions.
Thus, teachers prefer power tests, where at least 90 percent of
the students have time to attempt 90 percent of the test items.
Just how many items will fit into a given test occasion is something
that is learned through experience with similar groups of
students.
Item Formats

Determining what kind of items to include on the test is a


major decision. Should they be objectively scored formats such
as multiple choice or matching type? Should they cause the
students to organize their own thoughts through short answer
or essay formats? These are important questions that can be
answered only by the teacher in terms of the local context, his
or her students, his or her classroom, and the specific purpose
of the test. Once the planning decisions are made, the item
writing begins. This task is often the one most feared by
beginning test constructors. However, the procedures are more
common sense than formal rules.
POINTS TO BE CONSIDERED IN PREPARING A TEST
1. Are the instructional objectives clearly defined?
2. What knowledge, skills and attitudes do you want to measure?
3. Did you prepare a table of specifications?
4. Did you formulate well defined and clear test items?
5. Did you employ correct English in writing the items?
6. Did you avoid giving clues to the correct answer?
7. Did you test the important ideas rather than the trivial?
8. Did you adapt the test’s difficulty to your student’s ability?
9. Did you avoid using textbook jargons?
10. Did you cast the items in positive form?
11. Did you prepare a scoring key?
12. Does each item have a single correct answer?
13. Did you review items?
GENERAL PRINCIPLES IN CONSTRUCTING DIFFERENT TYPES
OF TESTS
1. The test items should be selected very carefully. Only important
facts should be included.
2. The test should have extensive sampling of items.
3. The test items should be carefully expressed in simple, clear,
definite, and meaningful sentences.
4. There should be only one possible correct answer for each test item.
5. Each item should be independent. Leading clues to other items
should be avoided.
6. Lifting sentences from books should not be done to encourage
thinking and understanding.
7. The first person personal pronouns I and we should not be used.
8. Various types of test items should be made to avoid monotony.
9. Majority of the test items should be of moderate difficulty. Few
difficult and few easy items should be included.
10. The test items should be arranged in an ascending order of
difficulty. Easy items should be at the beginning to encourage the
examinee to pursue the test and the most difficult items should be
at the end.
11. Clear, concise, and complete directions should precede all types of
test. Sample test items may be provided for expected responses.
12. Items which can be answered by previous experiences alone
without knowledge of the subject matter should not be included.
13. Catchy words should not be used in the test items.
14. Test items must be based upon the objectives of the course and
upon the course content.
15. The test should measure the degree of achievement or determine
the difficulties of facts.
16. The test should emphasize ability to apply and
use facts as well as knowledge of facts.
17. The test should be of such length that it can be completed
within the time allotted by all or nearly all of the pupils. The
teacher should take the test herself to determine its
approximate time allotment.
18. Rules governing good language expression, grammar, spelling,
punctuation, and capitalization should be observed in all items.
19. Information on how scoring will be done should be provided.
20. Scoring keys for correcting and scoring tests should be provided.
POINTERS TO BE OBSERVED IN CONSTRUCTING
AND SCORING THE DIFFERENT TYPES OF TESTS

RECALL TYPES
1. Simple recall type
a. This type consists of questions calling for a single
word or expression as an answer.
b. Items usually begin with who, where, when, and
what.
c. Score is the number of correct answers.
2. Completion type
a. Only important words or phrases should be omitted to
avoid confusion.
b. Blanks should be equal lengths.
c. The blank, as much as possible, is placed near or at the
end of the sentence.
d. The articles a, an, and the should not be provided before the
omitted word or phrase to avoid clues for answers.
e. Score is the number of correct answers.
3. Enumeration type
a. The exact number of expected answers should be
stated.
b. Blanks should be of equal lengths.
c. Score is the number of correct answers.

4. Identification type

a. The items should make an examinee think of a word,
number or group of words that would complete the
statement or answer the problem.

b. Score is the number of correct answers.


RECOGNITION TYPES
1. True-false or alternate-response type
a. Declarative sentences should be used.

b. The number of “true” and “false” items should be more or


less equal.

c. The truth or falsity of the sentence should not be too


evident.

d. Negative statements should be avoided.

e. The “modified true-false” is preferable to the
“plain true-false”.
f. In arranging the items, avoid the regular
recurrence of “true” and “false” statements.

g. Avoid using specific determiners like: all,
always, never, none, nothing, most, often,
some, etc., and avoid weak qualifiers like: may,
sometimes, as a rule, in general, etc.
h. Minimize the use of qualitative terms like: few, great,
many, more, etc.

i. Avoid leading clues to answers in all items.

j. Score is the number of correct answers in “modified
true-false” and right answers minus wrong answers in
“plain true-false”.
2. Yes-No type
a. The items should be in interrogative sentences.
b. The same rules as in “true-false” are applied.

3. Multiple-response type
a. There should be three to five choices. The number of choices used in
the first item should be the same number of choices in all the items of
this type of test.
b. The choices should be numbered or lettered so that only the number or
letter can be written on the blank provided.

c. If the choices are figures, they should be arranged in ascending order.

d. Avoid the use of “a” or “an” as the last word prior to the listing of the
responses.
e. Random occurrence of responses should be employed.
f. The choices, as much as possible, should be at the
end of the statements.

g. The choices should be related in some way or should


belong to the same class.

h. Avoid the use of “none of these” as one of the choices.

i. Score is the number of correct answers.


4. Best answer type
a. There should be three to five choices, all
of which are right but vary in their degree of
merit, importance or desirability.
b. The other rules for multiple-response
items are applied here.
c. Score is the number of correct answers.
5. Matching type

a. There should be two columns. Under “A” are the


stimuli which should be longer and more descriptive
than the responses under column “B”. The response
may be a word, a phrase, a number, or a formula.
Matching Type
b. The stimuli under column “A” should be numbered
and the responses under column “B” should be
lettered. Answers will be indicated by letters only on
lines provided in column “A”.
Matching Type
c. The number of pairs usually should not exceed
twenty items. Fewer than ten introduces chance
elements. Twenty pairs may be used, but more than
twenty is decidedly wasteful of time.
d. The number of responses in column “B” should be
two or more than the number of items in Column “A” to
avoid guessing.
e. Only one correct matching for each item should be
possible.
f. Matching sets should neither be too long nor too
short.
g. All items should be on the same page to avoid
turning of pages in the process of matching pairs.

h. Score is the number of correct answers.


C. ESSAY TYPE EXAMINATIONS

1. Comparison of two things


2. Explanation of the use or meaning of a statement
or passage
3. Analysis
4. Decision for or against
5. Discussion
2 Types of Essay Questions

1. Restricted
2. Non-restricted/Extended
Classify
1. According to Erikson, what is the most critical
stage in the development of the child? Why?

2. Get 3 examples of simile referring to 3


politicians.

3. What are the 4 food groups? Give two


examples for each.
4. Discuss the possible consequences of the
passage of the CPD Bill into law.

5. Given current economic realities and current


government economic policies, discuss the
future of global recession in the Philippines for
the next 4 years.
How to construct essay examinations
1. Determine the objectives or essentials for each
question to be evaluated.
2. Phrase questions in simple, clear and concise
language.
3. Suit the length of the questions to the time available
for answering the essay examination. The teacher
should try to answer the test herself.
4. Scoring:
a. Have a model answer in advance
b. Indicate the number of points for each
question.
c. Score a point for each essential.
Exercise
• True or False
1. Baguio City is in the province of Benguet and
is the national capital of the Philippines.
2. Tuberculosis is not a non-communicable
disease.
3. All bacteria cause disease.
4. The Philippine Constitution is better than the
Malaysian Constitution.
5. The Raven was written by Edgar Allen Poe.
6. The postulation of capillary effectuation
promotes elucidation of how plant substances
ascend in incommodious veins.
7. Many voted for Roxas last election.
8. All boys develop later than girls.
9. Developmental pattern acquired during early
childhood are not unchangeable.

10. Executive usually suffer from hyperacidity.

11. Twelve-year basic education is good for


Filipino children.
Exercise
Multiple Choice

1. A figure with eight sides is called an _______.


A. pentagon C. octagon
B. quadrilateral D. heptagon
2. Clay can be used for
A. making hollow blocks
B. making pots
C. garden soil

3. Jesus taught that the greatest attribute of


God is
A. justice C. sin
B. power D. love
4. The Roman Empire __________.
A. had no central government
B. had no definite territory
C. had no common religion
D. had no heroes
5. As compared to the autos of the 1960’s, autos
in the 1980’s
A. travelling slower.
B. bigger interiors.
C. to use less fuel.
D. contain more safety measures.
6. Which of the following is categorized in
Bloom’s cognitive taxonomy of objectives?
A. Creating C. All of the above
B. Rote learning D. None of the above
7. Jose Rizal was born on
A. June 11, 1861
B. June 19, 1869
C. June 19, 1861
D. June 16, 1861
Exercise – Matching Type of Test
Column A Column B
1. Poly A. Sides
2. Triangle B. Eight-sided polygon
3. Pentagon C. Ten-sided polygon
4. Square D. Closed plane figure
5. Decagon E. Polygon made of 3 segments.
6. Hexagon F. Polygon with 4 equal segments
7. Isosceles triangle G. Five-sided polygon
8. Octagon H. A triangle with two equal sides
9. Gons I. A six-sided polygon
10. Circle J. Many
In column 1 are works and writings in American Literature and in column 2
are their authors. Write the letter of the author which corresponds to his work
on the blank provided before each number. In some cases, an answer may
be repeated.
Column 1 Column 2
1. The Alhambra A. Cooper
2. The Pioneers B. Dana
3. The Guardian Angel C. Emerson
4. Two Years Before the Mast D. Holmes
5. Moby Dick E. Irving
6. The World in a Man of War F. James
7. The Last of the Mohicans G. Melville
8. The American Scholar H. Mark Twain
(Clemens)
9. The Autocrat of the Breakfast Table I. Wharton
10. Tom Sawyer
Exercise – Completion Test
1. The ____ produced by the ____ is used by
green ____ to change the ____ and ____ into
____. This process is called ____.
2. Ernest Hemingway wrote ______.
3. _____ is an example of leafy vegetable.
4. Jose Rizal was born on June __, 1861.
5. A word that describes a noun or a pronoun is
an _____.

6. An action word is called a _____.

7. That which is made up of sentence related to


a topic sentence is called a _______________.
Exercise - Essay
1. Write down everything that you learned from
this course.

2. What are the factors that affect the


development of children.

3. What is genetic engineering?


Item Analysis
• Difficulty Index

    DF = (Ru + Rl) / N

where Ru is the number of students in the upper group who answered the item correctly, Rl is the number in the lower group who answered correctly, and N is the total number of students in both groups.

• Discrimination Index

    DI = (Ru - Rl) / (N/2)

Description
• Difficulty Index
0.85 or above - Easy
0.51 - 0.84 - Moderate
0.50 or below - Hard

• Discrimination Index
above 0.30 - Good
0.10 - 0.30 - Fair
below 0.10 - Poor
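The difficulty index (proportion of all students answering the item correctly, (Ru + Rl)/N) and the discrimination index ((Ru - Rl)/(N/2), comparing the upper and lower scoring groups) can be computed and classified against the cutoffs above. A minimal sketch; the group sizes and counts are hypothetical.

```python
# Item analysis for one test item, using the upper-group/lower-group
# formulas: Ru and Rl are correct-response counts in the upper and lower
# groups, N is the total number of students in both groups.

def difficulty_index(ru, rl, n):
    """DF = (Ru + Rl) / N: proportion of students who got the item right."""
    return (ru + rl) / n

def discrimination_index(ru, rl, n):
    """DI = (Ru - Rl) / (N/2): how well the item separates the groups."""
    return (ru - rl) / (n / 2)

def describe(df, di):
    """Classify both indices with the cutoffs given in the text."""
    difficulty = "Easy" if df >= 0.85 else "Moderate" if df >= 0.51 else "Hard"
    discrimination = "Good" if di > 0.30 else "Fair" if di >= 0.10 else "Poor"
    return difficulty, discrimination

# 30 students in the upper group, 30 in the lower group (N = 60);
# 24 upper-group and 12 lower-group students answered correctly.
df = difficulty_index(24, 12, 60)        # 0.60
di = discrimination_index(24, 12, 60)    # 0.40
print(df, di, describe(df, di))          # Moderate difficulty, Good discrimination
```

An item everyone answers correctly (DF = 1.0) would classify as Easy with Poor discrimination, matching the text's point that easy items describe performance but do little to discriminate among individuals.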
