AoL1 Module 1
▪ to promote self-reflection and personal accountability among students about their own learning; and
▪ to provide bases for the profiling of student performance on the learning competencies
Meanwhile, Mentz & Lubbe (2021) defined assessment as “the process of gathering
information” wherein the assessor’s intention directly affects tools, methods, and
strategies incorporated into that process. Further, Popham (2017) describes assessment as a formal attempt to determine students’ status with respect to educational variables of interest.
These definitions clearly show that assessment must be done with clearly stated aims
and objectives.
Here are other definitions of assessment:
▪ “a process for documenting, in measurable terms, the knowledge, skills, attitudes, and beliefs”
▪ “the collection of relevant information that may be relied on for making decisions” (Fenton,
1996)
Testing also includes all the physical procedures done in the administration of the tests.
Penn State University (2017) states that evaluation is “the process of making
judgments based on criteria and evidence.”
This is in line with an earlier definition by Madaus & Keillor (1988): the use of
any test in making critical decisions affecting an individual or a group of
individuals. Results of such a test can lead to “punishments, rewards, or
advancement of individuals or programs”.
Federation University Australia (2023) stated four functions of assessment in education:
1. Certification – Marking and certifying students’ achievement through a collection of activities
such as assignments, exams, and performance tasks.
2. Quality assurance – Ensuring institutional and academic standards are scrutinized through careful
study of students’ output.
3. Learning – Completion of assessment tasks engages students in the learning process and provides
them with “formative and diagnostic functions” relevant to their progress.
4. Lifelong learning – “Developing students' ability to self-assess and self-regulate their learning
beyond formal requirements.”
In a lecture by Choo (2014), seven basic principles of assessment were discussed,
which are as follows:
• Reliability
• Validity
• Practicality
• Authenticity
• Objectivity
• Interpretability
• Washback Effect
Brown (2010, cited in Choo, 2014) states that an assessment tool is reliable if it yields consistent and dependable scores across administrations, scorers, and conditions.
It is important to note that scores change depending on the test-takers, the scorers
(usually teachers), and other factors, which makes reliability a critical factor in
properly assessing learning.
There are different ways to establish the reliability of an assessment tool; these
ways will be discussed in later modules. However, several factors directly affect
the reliability of any assessment tool. Choo (2014) has enumerated them as
follows.
Figure (Choo, 2014): Factors affecting assessment reliability, namely teacher and student factors, environment factors, test factors, test administration factors, and marking factors.
▪ Longer tests are generally more reliable (see the formula sketch after this list).
▪ Some test-takers may rely on guessing answers. Having more items will not reduce the
tendency to guess, but it will result in a more accurate score because a test-taker does
not usually guess on an entire exam or task.
▪ Further, objective tests (tests whose items have an objectively correct answer)
provide consistency in scoring, as correct answers are not open to interpretation.
▪ Not all tests that are reliable are valid, as consistent scores may still fail to measure what the test is intended to measure.
▪ Test items created from the same subject matter still have variability.
▪ Practicality considers the time and effort involved in both design and scoring.
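The first point above, that longer tests tend to be more reliable, can be illustrated with the Spearman-Brown prophecy formula from classical test theory. This formula is not part of Choo’s (2014) lecture and is offered here only as a sketch; it estimates the reliability of a test lengthened by a factor of n, assuming the added items are comparable in quality to the original ones:

\[ \rho_{\text{new}} = \frac{n\,\rho_{\text{old}}}{1 + (n - 1)\,\rho_{\text{old}}} \]

For example, doubling a test (n = 2) whose reliability is 0.60 gives 2(0.60) / (1 + 0.60) = 0.75, so the lengthened test is expected to be more reliable.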
Interpreting scores is the process of attaching meaning to them. Is a score of
7/12 a “good” score for a student?
This process requires the interpreter to have “knowledge about the test,
which can be obtained by studying the manual or related tools along with
current research literature with respect to its use.”
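As a brief worked illustration (the figures here are hypothetical, not from the module), a raw score can first be converted to a percentage:

\[ \frac{7}{12} \approx 0.583 = 58.3\% \]

Whether 58.3% counts as “good” still depends on knowledge about the test, for instance the passing standard set for it or how other test-takers performed on it.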
▪ Positively influence what and how teachers teach and learners learn
Initial assessment aims to give teachers relevant information about learners, such
as their specific assessment requirements or needs. This assessment also ensures
that learners are in the proper program and is thus done before any learning
begins (Gravells, 2014). Initial assessments also ensure that any entry requirements
to a program are satisfied. A simple question such as “What are your perceptions of
Filipino Literature?” is an example of how an initial assessment may be conducted.
Gravells (2014) states that initial assessment can:
▪ ascertain why the learner wants to take the program along with their capability to achieve
▪ find out the expectations and motivations of your learner
▪ enable learners to demonstrate their current level of skills, knowledge and understanding
▪ ensure learners are on the right program at the right level
▪ identify gaps in skills, knowledge and understanding to highlight areas to work on
▪ identify any specific requirements or support the learner may need
DepEd added that this assessment occurs to make “appropriate decisions about
future learning or job suitability”.
Meanwhile, UNESCO Program on Teaching and Learning for a
Sustainable Future (UNESCO-TLSF, cited in DepEd, 2015) states
that summative assessment is for the “benefit of the people rather
than of the learner” as this assessment measures whether a test taker
can demonstrate or show certain skills, knowledge, or capabilities.
Gravells (2014) remarks that summative assessments have “a
tendency to teach purely what is required to achieve a pass which
does not maximize a learner’s ability or potential; and may not help
them in life or in work as they are not able to put theories into
practice”, which emphasizes the need for authenticity in summative
assessments.
Gravells (2014) defined holistic assessment as “a method of assessing several
aspects of a qualification, unit, program or job specification at the same time.”
This is also characterized as “a more efficient and quicker system”, since one piece
of good-quality evidence or a carefully planned observation can cover several of these
aspects at once. Further, this
assessment allows learners to integrate knowledge and skills through
demonstration aided or supplemented by questioning for further scrutiny.
At this juncture, McMillan’s (2017) enumeration of the different types of assessment methods is used;
these methods are:
▪ Selected-response
▪ Constructed-response
▪ Performance-based
▪ Essay
▪ Oral Questioning
▪ Teacher Observations
▪ Student Self-assessment
Selected-response tests show students a question or item and a set of responses
from which they may choose. Tests using this method include multiple-choice,
binary-choice (otherwise known as true or false), identification, and matching-type
tests. These are usually objective (items have only one correct or best answer),
which makes them easier to score because results do not depend on judges’ biases and
scoring only involves counting the “correct” answers selected by the learners.
This assessment method requires learners to construct or produce their own answers or
outputs to a question or item. Some constructed-response tests are brief and require
students to write short, concise answers to items. Examples of these are fill-in-the-
blanks, short-answer tests, or simple mathematical problem-solving. Brief constructed-
response items are also objective in that there is most often a single correct answer or
correct idea that needs to be seen in the learners’ responses.
Performance tasks are a type of constructed-response test that requires learners to make
an “extensive and elaborate response”. These assessments are well defined and,
according to McMillan (2017), ask students to “create, produce, or do something, often
in settings that involve real-world application of knowledge and skills through which
students show their proficiency.” Responses in this method can be a performance or a
product. Examples of performances include dances, speeches, and recitals, while
examples of products include portfolios, paintings, posters, and models.
These constructed-response items require students to write extended and
comprehensive responses that range from a few paragraphs (restricted
response) to multiple pages (extended response). McMillan
(2017) asserts that “restricted-response essay items include limits to the
content and nature of the answer, whereas extended-response items allow
greater freedom in response”.
In the classroom, teachers informally ask learners several questions
for formative purposes. In a more formalized format, questioning is
used “to test or to determine student understanding through interviews
or conferences” (McMillan, 2017).
Naturally, teachers observe their students whenever a learning episode is
occurring; doing so is an obvious and natural part of classroom practice.
Teachers should note, however, that non-verbal cues
such as “squinting, inattention, looks of frustration, and other cues” are
more useful than verbal ones (McMillan, 2017). Further, teacher
observations are also useful in assessing classroom conditions and
instruction.
This method of assessment has students judge their own performance against
established standards. Meanwhile, self-report inventories
are forms or questionnaires asking about students’ attitudes and beliefs about
themselves or others (McMillan, 2017). Peer assessment, in which students
evaluate their classmates, may also be used, but it is prone to problems.
Different institutions use varied methods for assessment in
accordance with policies or standards unique to them. An example of
an exhaustive list of assessment methods in use can be found here.
It is recommended to browse it.
Figure retrieved from Mentz & Lubbe (2021).