
PSTMLS

Evaluation in Health Science Education Part 1

OBJECTIVES:
● Discuss the nature of educational evaluation according to its different purposes and types
● Determine the various areas in health science education where educational evaluation can be applied
● Distinguish the different evaluation models and their respective uses
● Identify the different requisites for good evaluation instruments
● Explain different steps in conducting educational evaluation

WHAT IS TEACHING?
Definition
● Teaching is the process of attending to people’s needs, experiences and feelings, and intervening so that
they learn particular things, and go beyond the given.
● To transfer skills (knowledge, know-how, and interpersonal skills) to a learner, a student, or any other
audience within the setting of an educational institution, a teacher must engage in the activity of teaching.
● Teaching goes hand in hand with learning, the process through which a pupil assimilates this knowledge. Teaching is one part of the broader notion of education.

Relevance
● Provides individuals with the opportunity to acquire knowledge, skills, and values that are essential for
personal and societal development.
● Helps individuals to develop critical thinking, problem-solving, and communication skills, which are
required for success in various aspects of life.
● Contributes to the development of a well-informed and productive society by promoting lifelong learning
and facilitating the transfer of knowledge from one generation to another.

Application to classroom setup


● Assess students' understanding of a particular subject, identify areas where they need further instruction, and evaluate the effectiveness of the teaching methods used.
● Tests can be used as formative assessments to guide instructional decisions or as summative assessments to measure learning outcomes.
● Provide feedback to students and parents on their progress and inform grading decisions.

WHAT IS A TEST?

Definition

● A test is a method of evaluating someone's knowledge, abilities, or skills in a particular subject or area.
● A procedure intended to establish the quality, performance, or reliability of something, especially before it
is taken into widespread use.

Relevance
● Testing is a part of learning and lets students “show what they know” and what they can do.
● Test results show student strengths and weaknesses: you will learn which subject areas your student excels in and which need further work.
Application to classroom setup
● Assess students' understanding of the material and their ability to apply it.
● Identify areas of strengths and weaknesses in the curriculum, and to guide instructional decision-making.
● Promote effective teaching and learning in the classroom.

WHAT IS ASSESSMENT?
● Assessment is the systematic basis for making inferences about the learning and development of
students.
● It is the process of defining, selecting, designing, collecting, analyzing, interpreting, and using information
to increase students' learning and development.
Relevance
● Assessment plays an important role in the process of learning and motivation.
● The types of assessment tasks that we ask our students to do determine how students will approach the
learning task and what study behaviors they will use.
Application to classroom setup
● Evaluate students' learning progress and identify areas where additional support may be needed.
● Assessments can take many forms, such as tests, quizzes, essays, or projects. Teachers may use
assessments to set goals for their students, measure their achievement, and adjust their teaching
methods to better support their students' needs.

HOW IS A TEST RELATED TO ASSESSMENT?

● Assessment is the systematic process of documenting and using empirical evidence on knowledge, skills, attitudes, and beliefs. A test, in contrast, is used to evaluate someone's understanding of something in order to ascertain what they are aware of or what they have learned; it measures the level of skill or knowledge that has been reached.
● An evaluative device or procedure in which a sample of an examinee’s behavior in a specified domain is
obtained and subsequently evaluated and scored using a standardized process (The Standards for
Educational and Psychological Testing, 1999)
● ASSESSMENT - is used during and after the instruction has taken place. After you've received the results of your assessment, you can interpret them and, if needed, alter the instruction.
● TESTS - are done after the instruction has taken place; they are a way to conclude the instruction and obtain results. Unlike assessment results, test results do not have to be interpreted.

TYPES OF ASSESSMENT

Assessment can be classified as either normative or criterion referenced:

1. Normative assessment
● According to Abbatt (1992), normative assessment describes the achievement of learners relative to their previous performance or to a specific group of learners.
- Compares the scores of learners in the same domain.
- It has limited scope and is therefore not used to assess the standard of an educational system or the whole population.
Example: Norm-referenced results may also be used to determine eligibility for advanced programs, scholarships, or college admission.

2. Criterion-referenced assessment
● the students’ achievement is judged against an absolute and pre-determined standard or pass
grade (Abbatt, 1992; Downing & Yudkowsky, 2009)
- It relates learners’ scores to the behavior expected of learners with such a score.
- Aims at testing mastery levels among learners.
Example: Learners who answer enough questions correctly pass the test because they meet the expected standard.
- Criterion-referenced tests have been compared to driver’s-license exams, which require would-be drivers to achieve a minimum passing score to earn a license.

Assessment of meaningful learning can be approached in various forms and contexts. It can be either product
(summative) or process (formative) assessment.

1. Formative Assessment

● Any task or activity which creates feedback (or feed forward) for students about their learning
- enhance learners’ learning outcomes through their engagement with a variety of
assessment activities and feedback

Example: Quizzes, group discussions, and self-assessments

- Observations of students during a role-play scenario, exit slips after a motor development activity, and conversations with students about health skills.
- In curriculum implementation, there are three forms of formative assessment: teacher-designed assessment, self-assessment, and peer assessment.
a. Teacher designed assessment
- To evaluate the learning progress of their learners and to identify the gaps in their learning
- Helps teachers to accordingly adjust their teaching and learning activities to meet the needs of
their learners
b. Self-Assessment
- Essential to the learning process, as it enhances students’ ability to develop an overview of their own work so that they can manage and control it for themselves.
- Learners are able to plan their own learning, identify their own strengths and weaknesses, determine target areas for remedial action, and develop meta-cognitive and other personal and professional transferable skills.
c. Peer Assessment
- Strengthens the student’s voice and improves communication between students and their teachers during the teaching and learning process, enabling students to recognize their own learning needs and to inform the teacher about them.
- Helps teachers adjust their teaching to meet the identified individual needs, which is critical to formative assessment, and helps students internalize the assessment criteria.

2. Summative assessment

- Used to evaluate the overall understanding of students at the end of a learning unit or course.
- It provides a clear indication of a student’s overall understanding of the material, and can be used to
compare student performance across different sections of a course or different courses

Example: Can take the form of exams, projects, or presentations, and are often used to determine a
student’s final grade in a course

Two other types of assessment commonly used in health science education are diagnostic assessment and authentic assessment.

1. Diagnostic Assessment
- Used to identify areas of weakness or gaps in knowledge before the learning process begins.
- Allows educators to tailor the learning experience to the individual needs of each student, and can help minimize the frustration and sense of being overwhelmed that arise when students struggle with material they don't understand.

Example: Take a variety of forms, including pre-tests, surveys, or interviews.

2. Authentic assessment
- focuses on real-world, relevant tasks that require critical thinking, problem-solving, and application
of knowledge
- Designed to assess the practical, transferable skills that are essential for success in the
healthcare industry.

Example: Case studies, simulations, and performances

EPISTEMOLOGY OF ASSESSMENT AND EVALUATION

● EPISTEMOLOGY - comes from two Greek words: "episteme," meaning knowledge, and "logos," meaning study.
● A branch of philosophy that examines the nature of knowledge and belief, and how we acquire and justify
knowledge (Kornblith, 2018).
● It investigates questions such as what constitutes knowledge, how we can differentiate knowledge from
opinion or belief, and what criteria we can use to justify knowledge claims.
● The epistemology of evaluation and assessment refers to the underlying assumptions and beliefs
about knowledge and learning that inform the design, implementation, and interpretation of evaluation and
assessment practices (Ercikan & Roth, 2019).
- It is an important area of inquiry for educators and researchers, as it provides insight into the
underlying assumptions and beliefs that shape educational practice.
● Evaluation - refers to the systematic assessment and analysis of educational programs, policies, or
practices, with the aim of improving educational quality and effectiveness (Scriven, 2018). Evaluation
helps educators to identify strengths and weaknesses in their programs.
● Teachers engage in a series of important activities such as testing, measurement, grading, and
assessment to determine students' knowledge.
○ Testing - act of administering a set of questions to students at the end of instruction.
○ Measurement - determining the degree to which a learner possesses a certain attribute (Popham,
1993)
○ Grading - assigning each student a corresponding grade.
○ Assessment - refers to the collection of data and organizing them to measure how the learners
have achieved the expected levels of competencies (Best and Khan, 1989).
■ This is applied to evaluation of student achievement.

WHAT IS EVALUATION?
Definition
● Evaluation is the process of rating or valuing a learning experience in accordance with specific criteria to
figure out the extent of knowledge or ability that is acquired and utilized. The intent of an evaluation is to
judge the efficacy of learning (Goel, 2021).
● Evaluation is a vital component of the teaching-learning process. It supports educators and students in
enhancing instruction and learning (Ifeoma, 2022).
● It fosters achievement among learners, educational standing, and helps form value judgment. The
process of teaching and learning is bound to involve a form of evaluation since judgments must be taken
throughout all areas of educational endeavor (Ifeoma, 2022).

Relevance
Importance of Educational Evaluation (Villegas, n.d.):
Educational evaluation serves as crucial to the teaching-learning process.
1. Diagnostic: Helps an educator in determining problems and solving them with his students.
2. Remedial: A teacher may support students in developing their personalities and making the required
behavioral adjustment.
3. To Make Education Goals Clear: Through evaluation, a teacher can show how a learner's behavior
has changed.
4. It Offers Guidance: Advice can only be given following a complete evaluation that takes into account
all facets of aptitude, interest, intelligence, etc.
5. Classification Aid: Evaluation gives teachers a way to classify their students and determine where each student stands.
6. Helpful for Improving the Learning and Teaching Process: Through evaluation, an educator can enhance a student's personality and learning, and can also gauge the effectiveness of his or her instruction.

PRINCIPLES OF EVALUATION

Roles of educational evaluation


1. To provide a basis for decision making and policy formulation
2. To assess student achievement
3. To evaluate curricula
4. To accredit schools
5. To monitor expenditures of public funds, and
6. To improve educational materials and programs

Purposes of educational evaluation (Worthen and Sanders 1987)


1. To determine ways by which to systematically improve an educational product or phenomenon. This can be in terms of identifying needs, selecting the best strategies from among the known ones, monitoring changes as they occur, and measuring the impact of these changes.
2. To establish the cost-benefit analysis of the program being evaluated. Especially in educational programs that require scarce state appropriations, evaluations should be built into these programs to justify continued appropriations.
3. To test the applicability of known theories on student development. The need for systematic and often
subtle information to supplant or confirm casual observations is what generates the need for evaluation.
4. To appraise the quality of the school programs and to constantly seek ways of improving that quality. This
is a professional responsibility of educators.
5. To satisfy the need of funding agencies for reports and updates to legitimize their decisions and improve
their public relations through credible, empirical decision making.

PRINCIPLES OF ASSESSMENT

Practicality
- Every good assessment has to be practical and feasible to implement. Practicality refers to the ease of
design and use for both teachers and learners (Brown, 2004).
- When we talk about practicality, we talk about the simplicity of the design and the simplicity of the scoring. Because time is a limited commodity for teachers, the following must be considered:

1. Time required. Practical assessments are time-efficient. The time constraints should be appropriate for the test; it should not take too long for the person being assessed, as that would be troublesome and impractical. Gather only as much information as you need. The time required should include how long it takes to make the assessment and how long it takes to score the results.
A test is impractical when:
- It is too long, and
- It takes several hours to grade.
2. Scoring. Scoring systems should be straightforward and easy to understand; that is, the assessment should be easy to score and interpret.
- Use the easiest method of scoring appropriate to the method and purpose of the
assessment.
3. Accessibility. The assessment should be designed to be accessible to all students and
individuals, regardless of their visual, hearing, or motor impairments. This means that the design of
the assessment ensures that every individual being assessed can fully participate and
demonstrate their knowledge and skills.
4. Complexity of administration. The directions and procedures should be clear. The layout should
be easy to follow and understand.
- Assessments that require long and complicated instructions are less efficient because they may cause misunderstandings among students, making the results less reliable and valid.
5. Cost. The assessments should be cost-effective. The resources should be used efficiently while
still providing reliable and valid results.
6. Ethical considerations. The assessments should be designed with respect for the privacy and
dignity of the individuals being assessed. Ethical considerations may include: obtaining consent
forms and maintaining confidentiality.

Reliability

- This refers to the consistency with which a test measures what it is measuring. To put it simply, reliability is
the degree to which an assessment tool produces stable and consistent results (Phelan & Wren, 2007).
- Does the test produce constant, consistent, and repeatable results? Consistency or similarity of results
will be obtained or estimated through the following methods (Abarquez 1989):

1. Test-retest method: Same people, different times. The same test is given twice at different
times. A reliable test will show similar scores obtained by the examinees in either test.

Ex. If a group of BSMLS students take a PMLS2 Lec test just before the end of semester and one
when they return to school at the beginning of the next, the tests should produce broadly the same
results.

2. Alternate form method: Different people, same time, different test. Two equivalent or parallel
forms of a test are administered at the same time. Equivalent forms of a test may be done by
giving the same item but arranged differently in the second set.

Ex. Different sets during exams and quizzes taken at the same time

3. Comparing results from different raters: Different people, same test. A wide variation in
results may point to a highly unreliable evaluation tool. This method is also described as inter-rater
reliability.
Ex. Judges in a pageant (ex. Miss Universe). For the academic setting, an example would be
when professors collaborate for a final performance task (ex. NSTP & Theology).

4. Internal Consistency Reliability: Different questions, same construct. Evaluates individual questions in comparison with one another for their ability to give consistently appropriate results.

Ex. In the PMLS2 Lec Midterm Exam, if a student gets one item correct about a particular topic, for example items about Phlebotomy, it is expected that the student gets similar items correct.
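These consistency estimates are commonly quantified with statistics such as the Pearson correlation (for test-retest reliability) and Cronbach's alpha (for internal consistency). The sketch below uses tiny made-up score sets purely for illustration; a real reliability study would need a much larger sample:

```python
# Illustrative reliability statistics computed from hypothetical scores.

def pearson_r(x, y):
    """Test-retest reliability: correlation between two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Internal consistency: `items` holds one list per question,
    with one score per student in each list."""
    def var(v):  # sample variance
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / (len(v) - 1)
    k = len(items)
    item_vars = sum(var(item) for item in items)
    totals = [sum(col) for col in zip(*items)]  # each student's total score
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# The same five students tested on two occasions (test-retest method):
first  = [80, 72, 90, 65, 85]
second = [82, 70, 88, 68, 84]
print(round(pearson_r(first, second), 2))  # close to 1: stable results

# Four right/wrong items answered by five students (internal consistency):
items = [[1, 0, 1, 1, 0],
         [1, 0, 1, 1, 1],
         [1, 1, 1, 0, 0],
         [1, 0, 1, 1, 0]]
print(round(cronbach_alpha(items), 2))  # about 0.7
```

A coefficient near 1 indicates highly consistent results; 0.7 is often cited as a rough minimum for acceptable internal consistency, though the appropriate threshold depends on the stakes of the test.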

As with validity, you must also be aware of threats to the reliability of a test. These include:

1. Unclear directions,

2. Insufficient time allotment,

3. Lengthy examination,

4. Presence of distractions, disturbances during test administration,

5. Ambiguous test items, and

6. Lack of objectivity in scoring leading to wide disagreement among raters

Validity

- Validity refers to what characteristic the test measures and how well the test measures that characteristic.
- It will tell you how good a test is for a particular situation
- This refers to the degree to which correct inferences can be made based on the information obtained from
the given data.
- Validity of any data can be established through five basic means:
1. Face Validity - concerned with whether a measurement appears pertinent and appropriate for the subject matter it is evaluating.
2. Content Validity -measure the concepts and skills in the standards at the indicated levels of
cognitive complexity. Every item in a high-quality assessment goes through a rigorous
development process with several levels of review, which ensures that item content is clear,
accurate, and relevant.

Content validity is essential to drawing valid conclusions. The conclusions drawn will be useless if a teacher is unsure of what an assessment is measuring. In other words, the assessment will have fallen short of its main goal, which is to offer insight into what the test-taker knows and is capable of.

3. Criterion-Related Validity - evaluates the precision with which a test measures the outcome it was intended to measure.

An outcome can be an illness, a behavior, or a performance. Concurrent validity assesses tests and criterion variables in the present, while predictive validity assesses them in the future.

4. Construct Validity - concerns how accurately a test assesses the idea it was intended to assess. It is essential to proving a method's general validity.

This is crucial for studying traits like intelligence, self-confidence, or happiness that cannot be directly measured or observed. To measure such constructs, you need a variety of observable or quantifiable indicators; otherwise, you run the risk of injecting research bias into your work.

5. External Validity- is the degree to which the results of a study may be applied to different
circumstances, subjects, locations, and measurements.

The goal of scientific study is to generate knowledge that can be applied to the real world. You cannot
extrapolate laboratory results to other people or the actual world if the external validity is low. Research
biases including undercoverage bias will affect these findings.
Authenticity

- “The degree of correspondence of a given language test task to the features of a target language task” (Bachman & Palmer, 1996)
- Language learners are motivated to perform when they are faced with tasks that reflect real-world situations and contexts
● AUTHENTIC TEST
○ Contains language that is as natural as possible
○ Items are contextualized rather than isolated
○ Includes meaningful, relevant, interesting topics
○ Provides thematic organization to items, such as through a storyline or episode
○ Offers tasks that replicate real-world tasks
● Listening comprehension sections feature natural language with hesitations, white noise, and interruptions
● More tests offer items that are episodic, in that they are sequenced to form meaningful units, paragraphs, or stories
REFERENCES

Ercikan, K., & Roth, W. M. (2019). Epistemology of assessment. In R. J. Mislevy, M. R. Wilson, M. T. Behrens, &
J. R. DiCerbo (Eds.), Designing assessments for learning with advances in technology and psychometrics
(pp. 3-22). Springer.

Goel, T. (2021, June 15). Assessment and Evaluation in Learning. LinkedIn. Retrieved April 18, 2023, from
https://www.linkedin.com/pulse/assessment-evaluation-learning-taruna-goel

Ifeoma, E. F. (2022, October). The Role of Evaluation in Teaching and Learning Process in Education. ARCN
Journals. Retrieved April 18, 2023, from https://arcnjournals.org/images/2726145223713511.pdf

Kornblith, H. (2018). Epistemology. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018
ed.). Retrieved from https://plato.stanford.edu/archives/sum2018/entries/epistemology/

Phelan, C., & Wren, J. (2007). Exploring Reliability in Academic Assessment. Retrieved from
https://www.uni.edu/chfasoa/reliabilityandvalidity.htm

Sana, E. A. (2010). Teaching and learning in the health sciences. UP Press. Retrieved April 11, 2023.

Scriven, M. (2018). Evaluation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018
ed.). Retrieved from https://plato.stanford.edu/archives/sum2018/entries/evaluation/

Villegas, F. (n.d.). Educational Evaluation: What Is It & Importance. QuestionPro. Retrieved April 18, 2023, from
https://www.questionpro.com/blog/educational-evaluation/
