
Centre of Change and Innovation

MONITORING & EVALUATION - LECTURE NOTES

SESSION 1: OVERVIEW OF MONITORING AND EVALUATION

(i) What is Monitoring and Evaluation?


 Monitoring and Evaluation (M&E) is the continuous gathering and analysis of information to determine whether progress is being made towards pre-specified goals and objectives, and to highlight any unintended (positive or negative) effects of a project/program and its activities.

(ii) What is Monitoring?


 Monitoring is a continuous process of collecting, analyzing, documenting, and reporting information on progress towards set project objectives. It helps identify trends and patterns, adapt strategies, and inform project or program management decisions.

(iii) What is Evaluation?


 Evaluation is a periodic assessment, as systematic and objective as
possible, of an ongoing or completed project, program or policy, its
design, implementation, and results. It involves gathering, analyzing,
interpreting, and reporting information based on credible data. The
aim is to determine the relevance and fulfillment of objectives,
developmental efficiency, effectiveness, impact, and sustainability.

(iv) Purpose/Importance of Monitoring and Evaluation

Timely and reliable M&E provides information to:
• Support project/program implementation with accurate, evidence-
based reporting that informs management and decision-making to guide
and improve project/program performance.
• Contribute to organizational learning and knowledge sharing by
reflecting upon and sharing experiences and lessons.
• Uphold accountability and compliance by demonstrating whether or not our work has been carried out as agreed, in compliance with established standards and with any other stakeholder requirements.
• Provide opportunities for stakeholder feedback.
• Promote and celebrate project/program work by highlighting accomplishments and achievements, building morale, and contributing to resource mobilization.



• Support strategic management by providing information to inform the setting and adjustment of objectives and strategies.
• Build the capacity, self-reliance, and confidence of stakeholders, especially beneficiaries and implementing staff and partners, to effectively initiate and implement development initiatives.

(v) Characteristics of monitoring and evaluation


Monitoring tracks changes in program performance or key outcomes over
time. It has the following characteristics:
• Conducted continuously
• Keeps track and maintains oversight
• Documents and analyzes progress against planned program
activities
• Focuses on program inputs, activities, and outputs
• Looks at processes of program implementation
• Considers program results at the output level
• Considers continued relevance of program activities to resolving the
health problem
• Reports on program activities that have been implemented
• Reports on immediate results that have been achieved

Evaluation is a systematic approach used to attribute changes in specific outcomes to program activities. It has the following characteristics:
• Conducted at important program milestones
• Provides in-depth analysis
• Compares planned with actual achievements
• Looks at processes used to achieve results
• Considers results at outcome level and in relation to cost
• Considers the overall relevance of program activities for resolving
health problems
• References implemented activities
• Reports on how and why results were achieved
• Contributes to building theories and models for change
• Attributes program inputs and outputs to observed changes in
program outcomes and/or impact

(vi) Key benefits of Monitoring and Evaluation


a. Provide regular feedback on project performance and show any
need for ‘midcourse’ corrections
b. Identify problems early and propose solutions
c. Monitor access to project services and outcomes by the target
population;
d. Evaluate achievement of project objectives, enabling the tracking
of progress towards achievement of the desired goals
e. Incorporate stakeholder views and promote participation,
ownership, and accountability
f. Improve project and programme design through feedback
provided from baseline, mid-term, terminal and ex-post
evaluations



g. Inform and influence organizations through analysis of the
outcomes and impact of interventions, and the strengths and
weaknesses of their implementation, enabling development of a
knowledge base of the types of interventions that are successful
(i.e. what works, what does not and why).
h. Provide the evidence basis for building consensus between
stakeholders

SESSIONS 2 & 3: SELECTING INDICATORS, BASELINES AND TARGETS


a) The indicator: "An indicator is defined as a quantitative measurement of an objective to be achieved, a resource mobilized, an output accomplished, an effect obtained or a context variable (economic, social or environmental)". An indicator provides the precise information needed to assess whether intended changes have occurred. Indicators can be either quantitative (numeric) or qualitative (descriptive observations). Indicators are typically taken directly from the log frame but should be checked in the process to ensure they are SMART (specific, measurable, achievable, relevant and time-bound).
b) The Indicator definition- key terms in the indicator that need further
detail for precise and reliable measurement.
c) The methods/sources- identifies sources of information and data
collection methods and tools, such as the use of secondary data, regular
monitoring or periodic evaluation, baseline or end-line surveys, and
interviews.
d) The frequency/schedules -how often the data for each indicator will be
collected, such as weekly, monthly, quarterly, annually, etc.
e) The person(s) responsible- lists the people responsible and accountable
for the data collection and analysis, e.g. community volunteers, field
staff, project/program managers, local partner(s), and external
consultants.
f) The information use/audience - identifies the primary use of the
information and its intended audience. Some examples of information
used for indicators include:
• Monitoring project/program implementation for decision-making
• Evaluating impact to justify intervention
• Identifying lessons for organizational learning and knowledge-
sharing
• Assessing compliance with donor or legal requirements
• Reporting to senior management, policy-makers, or donors for
strategic planning
• Accountability to beneficiaries, donors, and partners
• Advocacy and resource mobilization.
g) Types of Indicators

 Context indicators measure an economic, social, or environmental variable concerning an entire region, sector, or group and the project location, as well as relevant national and regional policies and programs. They describe the situation before the project starts (the baseline), using data drawn primarily from official statistics.



 Input indicators measure the human and financial resources, physical facilities, equipment and supplies that enable the implementation of a program.
 Process indicators reflect whether a program is being carried out as
planned and how well program activities are being carried out.
 Output indicators relate to activities and are measured in physical or monetary units; they capture the results of program efforts (inputs and processes/activities) at the program level.
 Outcome indicators measure the program’s level of success in
improving service accessibility, utilization, or quality.
 Result indicators- direct and immediate effects arising from the project
activities that provide information on changes of the direct project
beneficiaries.
 Impact indicators refer to the long-term, cumulative effects of programs
over time, beyond the immediate and direct effects on beneficiaries
 Exogenous indicators are those that cover factors outside the control of
the project but which might affect its outcome.
 Proxy indicators – an indirect way to measure the subject of interest

h) Characteristics of Good Indicators.


a) Specific – focused and clear
b) Measurable - quantifiable and reflecting change
c) Attainable - reasonable in scope and achievable within set time
frame
d) Relevant - pertinent to the review of performance
e) Time-Bound/Trackable - progress can be charted chronologically
Indicators should also be CREAM: Clear, Relevant, Economical, Adequate, and Monitorable.

i) Baselines and Targets


• A baseline is qualitative or quantitative information that provides
data at the beginning of, or just before, the implementation of an
intervention.
• Targets are established for each indicator by starting from the baseline level and adding the desired level of improvement in that indicator, as sketched below.
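
For illustration, the following minimal Python sketch shows how a target can be derived from a baseline and a desired level of improvement, and how an observed value can be compared against it during monitoring. All figures are hypothetical and not taken from these notes.

```python
# Minimal sketch: deriving a target from a baseline and tracking progress.
# All figures are hypothetical examples, not taken from the lecture notes.

baseline = 40.0             # e.g. 40% of households with access to safe water at project start
desired_improvement = 25.0  # desired increase over the project period (percentage points)

target = baseline + desired_improvement  # performance target for the indicator
print(f"Baseline: {baseline}%, Target: {target}%")

# During monitoring, compare the latest measurement with baseline and target.
latest_value = 52.0  # hypothetical mid-term measurement
progress = (latest_value - baseline) / (target - baseline) * 100
print(f"Progress towards target: {progress:.1f}%")
```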

SESSION 4: FRAMEWORKS FOR EVALUATION - THE LOGICAL FRAMEWORK APPROACH (LFA)

Four types of frameworks dominate the M&E field:
a) Conceptual frameworks are also known as theoretical or causal
frameworks.
b) Results-based frameworks are also known as strategic frameworks and
serve as a management tool with an emphasis on results.
The purpose of results frameworks is to increase focus, select strategies,
and allocate resources accordingly.
The levels in a results framework are commonly defined as follows:
• Impact – The higher-order objective to which a development intervention is intended to contribute.
• Outcome – The likely or achieved short-term and medium-term effects of an intervention's outputs.
• Output – The products, capital goods, and services that result from a development intervention; may also include changes resulting from the intervention that are relevant to the achievement of outcomes.
• Activity – Actions taken or work performed through which inputs, such as funds, technical assistance, and other types of resources, are mobilized to produce specific outputs.
• Inputs – The financial, human, and material resources used for the development intervention.

c) Logical frameworks are also known as Log Frames and are commonly
used to help set clear program objectives and define indicators of
success. They also outline the critical assumptions on which a project is
based, similar to the results framework.
d) Logic models also known as M&E frameworks are commonly used to
present a clear plan for the use of resources to meet the desired goals
and objectives. They are a useful tool for presenting programmatic and
evaluation components.
The choice of a particular type of framework—whether a conceptual
framework, results framework, logical framework, or logic model—depends on
the program’s specific needs, the M&E team’s preferences, and donor
requirements.
In particular, the LFA is a systematic planning procedure for complete project
cycle management, a participatory Planning, Monitoring & Evaluation tool;
 A tool for planning a logical set of interventions;
 A tool for appraising a Programme document;
 A concise summary of the Programme;
 A tool for monitoring progress made with regard to the delivery of
outputs and activities;
 A tool for evaluating the impact of Programme outputs, e.g. progress
in achieving purpose and goal.

The logframe matrix has four columns:
• Narrative summary – a snapshot of the different levels of the project objectives, known as the "hierarchy of objectives".
• Objectively verifiable indicators (OVI) – how will we know we have been successful?
• Means of verification (MOV) – how will we check our reported results?
• Assumptions/Risks – what assumptions underlie the structure of our project, and what is the risk that they will not prevail?

The rows of the matrix are:
• Goal (Impact) – longer-term effects; the general or overall objective.
• Purpose – why are we doing this? The direct and immediate effects/objectives/outcomes/results.
• Outputs – what are the deliverables? The goods and services produced; the operational objectives.
• Activities – what tasks will we undertake to deliver the outputs? At this level the indicator column records the Inputs (by what means do we carry out the activities) and the verification column records the Cost (what does it cost?).
• Pre-conditions – what needs to be fulfilled before activities can start.

SESSION 5a: MONITORING CRITERIA


a) Project monitoring & control cycle.
To achieve effective control over project implementation, it is necessary to assess progress at regular intervals in terms of physical completion of scheduled activities, actual cost incurred in performing those activities, and achievement of desired performance levels, comparing the status with the plans to find deviations. This assessment process is known as 'monitoring'.

[Figure: Project monitoring and control cycle – the plan is compared with actual status; where variances are found, revised schedules, budgets and estimates to complete feed into an action plan.]

Key elements of project monitoring and control


 Project Status reporting
 Conducting a project review with stakeholders
 Controlling schedule variances
 Controlling scope and change requests
 Controlling budget
 Tracking and mitigating risks

b) Types of monitoring
A project/program usually monitors a variety of things according to its specific informational needs. These monitoring types often occur simultaneously as part of an overall project/program monitoring system.
Table 1: Common types of monitoring
• Results monitoring: Tracks effects and impacts to determine if the project/program is on target towards its intended results (inputs, activity, outputs, outcomes, impact, assumptions/risks monitoring) and whether there may be any unintended impact (positive or negative).
• Process (activity) monitoring: Tracks the use of inputs and resources,
the progress of activities, how activities are delivered – the efficiency in
time and resources and the delivery of outputs
• Compliance monitoring: Ensures compliance with, say, donor
regulations and expected results, grant and contract requirements, local
governmental regulations and laws, and ethical standards.
• Context (situation) monitoring: Tracks the setting in which the
project/program operates, especially as it affects identified risks and
assumptions, and any unexpected considerations that may arise,
including the larger political, institutional, funding, and policy context
that affect the project/program.
• Beneficiary monitoring: Tracks beneficiary perceptions of a
project/program. It includes beneficiary satisfaction or complaints with
the project/program, including their participation, treatment, access to
resources, and their overall experience of change.
• Financial monitoring: Accounts for costs by input and activity within
predefined categories of expenditure, to ensure implementation is
according to the budget and time frame.
• Organizational monitoring: Tracks the sustainability, institutional
development, and capacity building in the project/program and with its
partners.

c) Monitoring Questions and the Log Frame



SESSION 5 b. EVALUATION CRITERIA FOR PROJECTS
a) Five-Part Evaluation Criteria
• Relevance - Was/is the project a good idea given the situation to
improve? Was the logic of the project correct? Why or Why Not? -The
validity of the Overall Goal and Project Purpose at the evaluation
stage.
• Effectiveness - Have the planned results been achieved? Why or Why
Not? -The degree to which the Project Purpose has been achieved by
the project Outputs.
• Efficiency - Have resources been used in the best possible way? Why
or Why Not? -The productivity in project implementation. The degree
to which Inputs have been converted into Outputs.
• Impact - To what extent has the project contributed towards its
longer-term goals? Why or Why Not? Have there been any
unanticipated positive or negative consequences of the project? Why
did they arise? -Positive and negative changes produced, directly or
indirectly, as a result of the Implementation of the project.
• Sustainability – Can the outcomes be sustained after project funding ends, to ensure continued impacts? Why or Why Not? -The durability of the benefits and development effects produced by the project after its completion.



b) Evaluation Questions and the LogFrame

SESSION 6a: TYPES OF EVALUATION

Three ways of classifying evaluations:
 When it is done - Ex-ante evaluation; Formative evaluation;
Summative – end of project, and Ex-Post evaluation.
 Who is doing it - External evaluation; Internal evaluation or self-
assessment
 What methodology or technicality is used- Real-time evaluations
(RTEs); Meta-evaluations; Thematic evaluations; Cluster/sector
evaluations; Impact evaluations
The details are as follows: -
a) Ex-ante evaluation: Conducted before the implementation of a
project as part of the planning. Needs assessment determines who
needs the program, how great the need is, and what might work to
meet the need. Implementation (feasibility) evaluation monitors the
fidelity of the program or technology delivery, and whether or not the
program is realistically feasible within the programmatic constraints
b) Formative evaluation: Conducted during the implementation of the
project. Used to determine the efficiency and effectiveness of the implementation process, to improve performance and to assess
compliance. Provides information to improve processes and learn
lessons. Process evaluation investigates the process of delivering
the program or technology, including alternative delivery procedures.
Outcome evaluations investigate whether the program or
technology caused demonstrable effects on specifically defined
target outcomes. Cost-effectiveness and cost-benefit analysis
address questions of efficiency by standardizing outcomes in terms
of their dollar costs and values
c) Midterm evaluations are formative in purpose and occur midway
through implementation.
d) Summative evaluation: Conducted at the end of the project to assess
the state of project implementation and achievements at the end of
the project. Collate lessons on content and implementation process.
Occur at the end of project/program implementation to assess
effectiveness and impact.
e) Ex-post evaluation: Conducted after the project is completed. Used to
assess the sustainability of project effects, and impacts. Identifies
factors of success to inform other projects. Conducted sometime
after implementation to assess long-term impact and sustainability.
f) External evaluation: Initiated and controlled by the donor as part of a
contractual agreement. Conducted by independent people – who are
not involved in implementation. Often guided by project staff
g) Internal or self-assessment: Internally guided reflective processes.
Initiated and controlled by the group for its learning and
improvement. Sometimes done by consultants who are outsiders to
the project. Need to clarify ownership of information before the
review starts
h) Real-time evaluations (RTEs): are undertaken during
project/program implementation to provide immediate feedback for
modifications to improve ongoing implementation.
i) Meta-evaluations: are used to assess the evaluation process itself.
Some key uses of meta-evaluations include: taking inventory of
evaluations to inform the selection of future evaluations; combining
evaluation results; checking compliance with evaluation policy and
good practices; assessing how well evaluations are disseminated and
utilized for organizational learning and change, etc.
j) Thematic evaluations: focus on one theme, such as gender or
environment, typically across a number of projects, programs or the
whole organization.
k) Cluster/sector evaluations: focus on a set of related activities,
projects or programs, typically across sites and implemented by
multiple organizations
l) Impact evaluations: are broader and assess the overall or net effects -- intended or unintended -- of the program or technology as a whole. They focus on the effect of a project/program rather than on its management and delivery. Therefore, they typically occur after project/program completion, during a final evaluation or an ex-post evaluation. However, impact may be measured during project/program implementation in longer projects/programs and when feasible.

SESSION 6b: EVALUATION MODELS AND APPROACHES


• Behavioral Objectives Approach. -“Is the program, product, or process
achieving its objectives?” Focuses on the degree to which the objectives
of a program, product, or process have been achieved.
• The Four-Level Model or Kirkpatrick Model -“What impact did the
training have on participants in terms of their reactions, learning,
behavior, and organizational results?” Often used to evaluate training
and development programs and focuses on four levels of training
outcomes: reactions, learning, behavior, and results.
o Reaction - Did trainees find the experience valuable and feel good about the instructor, the topic, the material, its presentation, and the venue?
o Learning - How much has their knowledge increased as a result of the training?
o Behavior - Have trainees changed their behavior based on the training they received?
o Results - Was the training good for the business, good for the employees, or good for the bottom line?
• Management Models-“What management decisions are required
concerning the program”. The evaluator’s job is to provide information
to management to help them in making decisions about programs,
products, etc. Daniel Stufflebeam's CIPP Model has been very popular.
CIPP stands for context evaluation, input evaluation, process evaluation,
and product evaluation.
Context evaluation includes examining and describing the context of the program you are evaluating, conducting a needs and goals assessment, determining the objectives of the program, and determining whether the proposed objectives will be sufficiently responsive to the identified needs. It helps in making program planning decisions.

Input evaluation includes activities such as a description of the program inputs and resources, a comparison of how the program might perform compared to other programs, a prospective benefit/cost assessment (i.e., deciding whether you think the benefits will outweigh the costs of the program before the program is implemented), an evaluation of the proposed design of the program, and an examination of what alternative strategies and procedures for the program should be considered and recommended.

Process evaluation includes examining how a program is being implemented, monitoring how the program is performing, auditing the program to make sure it is following required legal and ethical guidelines, and identifying defects in the procedural design or in the implementation of the program.

Product evaluation includes determining and examining the general and specific anticipated and unanticipated outcomes of the program (which requires using impact or outcome assessment techniques).
• Responsive Evaluation. -“What does the program look like to different
people?” - Calls for evaluators to be responsive to the information needs
of various audiences or stakeholders.



• Goal-Free Evaluation. -“What are all the effects of the program,
including any side effects?”-Focuses on the actual outcomes rather than
the intended outcomes of a program. Thus, the evaluator is unaware of
the program’s stated goals and objectives.
• Adversary/Judicial Approaches. “What are the arguments for and
against the program? “These Adopt the legal paradigm to program
evaluation, where two teams of evaluators representing two views of
the program’s effects argue their cases based on the evidence (data)
collected. Then, a judge or a panel of judges decides which side has
made a better case and makes a ruling.
• Consumer-Oriented Approaches. “Would an educated consumer
choose this program or product?”- helps consumers to choose among
competing programs or products.
• Expertise/Accreditation Approaches. “How would professionals rate
this program?”- The accreditation model relies on expert opinion to
determine the quality of programs. The purpose is to provide
professional judgments of quality.
• Utilization-Focused Evaluation. "What are the information needs of stakeholders, and how will they use the findings?" - Evaluation done for and with specific, intended primary users for specific, intended uses. Assumes stakeholders will have a high degree of involvement in many, if not all, phases of the evaluation.
• Participatory/Collaborative Evaluation. - “What are the information
needs of those closest to the program?”- Engaging stakeholders in the
evaluation process, so they may better understand evaluation and the
program being evaluated and ultimately use the evaluation findings for
decision-making purposes.
• Empowerment Evaluation. "What information is needed to foster improvement and self-determination?" - The use of evaluation concepts, techniques, and findings to foster improvement and self-determination; evaluation acts as a catalyst for learning in the workplace and a social activity in which evaluation issues are constructed by, and acted on by, organization members.
• Organizational Learning. “What are the information and learning needs
of individuals, teams, and the organization in general?” ongoing and
integrated into all work practices
• Theory-Driven Evaluation. - “How is the program supposed to work?
What are the assumptions underlying the program’s development and
implementation?”- Focuses on theoretical rather than methodological
issues to use the “program’s rationale or theory as the basis of an
evaluation to understand the program’s development and impact” using
a plausible model of how the program is supposed to work.
• Success Case Method. “What is really happening?”- focuses on the
practicalities of defining successful outcomes and success cases and
uses some of the processes from theory-driven evaluation to determine
the linkages, which may take the form of a logic model, an impact
model, or a results map. Evaluators using this approach gather stories within the organization to determine what is happening and what is being achieved.
THE EVALUATION PROCESS
Evaluation operates within multiple domains and serves a variety of functions
at the same time. Moreover, it is subject to budget, time, and data constraints
that may force the evaluator to sacrifice many of the basic principles of
impact evaluation design. Before entering into the details of evaluation
methods it is important for the reader to have a clear picture of the way an
evaluation procedure works.

(i) The M&E Plan/strategy


A comprehensive planning document for all monitoring and evaluation
activities within a program. This plan documents the key M&E questions
to be addressed: what indicators will be collected, how, how often, from
where, and why; baseline values, targets, and assumptions; how data
are going to be analyzed/interpreted; and how/how often the report will
be developed and distributed.

Typically, the components of an M&E plan are:


• Establishing goals and objectives
• Setting the specific M&E questions
• Determining the activities to be implemented
• The methods and designs to be used for monitoring and evaluation
• The data to be collected
• The specific tools for data collection
• The required resources
• The responsible parties to implement specific components of the plan
• The expected results
• The proposed timeline
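
To make these components concrete, here is a minimal, hypothetical sketch of how one indicator entry in an M&E plan might be represented as structured data. The field names and values are invented for illustration and are not taken from these notes.

```python
# Hypothetical example of a single indicator entry in an M&E plan.
# Field names and values are illustrative only.

me_plan_entry = {
    "objective": "Increase access to safe drinking water in the target district",
    "me_question": "Is household access to safe water improving as planned?",
    "indicator": "% of households with access to safe drinking water",
    "baseline": 40,            # value at project start
    "target": 65,              # desired value by end of project
    "data_source": "Household survey",
    "collection_method": "Structured questionnaire",
    "frequency": "Annually",
    "responsible": "Field M&E officer",
    "information_use": "Progress reporting to management and donors",
}

# A simple summary line a project team might print for reporting.
print(f"{me_plan_entry['indicator']}: baseline {me_plan_entry['baseline']}%, "
      f"target {me_plan_entry['target']}% ({me_plan_entry['frequency']})")
```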

(ii) Monitoring and Evaluation Cycle

Step 1 – Identify the purpose and scope of the M&E system


• Formulating objectives
• Selecting Indicators
• Setting baselines and targets
Step 2 – Plan for data collection and management



• Major sources of data- secondary data, primary data-sample surveys,
project output data, qualitative studies- PRA, mapping, KIIs, FGDs,
observation, checklists, external assessments, participatory
assessments
• planning for data collection - prepare data collection guidelines, pre-
test data collection tools, train data collectors, address ethical issues

Step 3 – Plan for data analysis


Step 4 – Plan for information reporting and utilization
Step 5 – Plan for M&E human resources and capacity building
Step 6 – Prepare the M&E budget

(iii) Setting up an M&E system often involves the following aspects

a) Assess the existing readiness and capacity for monitoring and evaluation.
b) Review current capacity within (or outsourced without) the
organization and its partners which will be responsible for project
implementation, covering: technical skills, managerial skills,
existence and quality of data systems, available technology and
existing budgetary provision.
c) Establish the purpose and scope: Why is M&E needed and how comprehensive should the system be? What should be the scope and rigour, and should the M&E process be participatory?
d) Identify and agree with main stakeholders the outcomes and
development objective(s).
Set a development goal and the project purpose or expected
outcomes, outputs, activities and inputs. Indicators, baselines and
targets are similarly derived
e) Select key indicators i.e the qualitative or quantitative variables
that measure project performance and achievements for all levels
of project logic with respect to inputs, activities, outputs, outcomes
and impact, as well as the wider environment, requiring pragmatic
judgment in the careful selection of indicators.
f) Develop an evaluation framework - set out the methods, approaches and evaluation designs (experimental, quasi-experimental and non-experimental) to be used to address the question of whether change observed through monitoring indicators can be attributed to the project interventions.
g) Set baselines and planning for results -The baseline is the first
measurement of an indicator, which sets the pre-project condition
against which change can be tracked and evaluated.
h) Select data collection methods as applicable.
i) Setting targets and developing a results framework- A target is a
specification of the quantity, quality, timing and location to be
realized for a key indicator by a given date. Starting from the
baseline level for an indicator the desired improvement is defined
taking account of planned resource provision and activities, to
arrive at a performance target for that indicator.
j) Plan monitoring, data analysis, communication, and reporting: the Monitoring and Evaluation Plan.
k) Implementation monitoring, tracking the inputs, activities and outputs in annual or multiyear work plans, and 'results monitoring', tracking achievement of outcomes and impact, are both needed. The demands for information at each level of management need to be established, responsibilities allocated, and plans made for:
i. what data are to be collected and when;
ii. how data are collected and analyzed;
iii. who collects and analyses data;
iv. who reports information;
v. when?
l) Facilitating the necessary conditions and capacities to sustain the
M&E System - organizational structure for M&E, partner’s
responsibilities and information requirements, staffing levels and
types, responsibilities and internal linkages, incentives and training
needs, relationships with partners and stakeholders, horizontal and
vertical lines of communication and authority, physical resource
needs and budget.

SESSION 8: EVALUATION DESIGN

Developing an evaluation design includes:


• Determining what type of design is required to answer the questions
posed
• Selecting a methodological approach and data collection instruments
• Selecting a comparison group
• Sampling
• Determining timing, sequencing, and frequency of data collection

Evaluation research may adopt two general methodological approaches, quantitative or qualitative, or combine both in a mixed-methods design. Quantitative designs normally take the form of experimental designs. Qualitative evaluation approaches are non-experimental approaches which answer 'why' and 'how' questions.

The following are brief descriptions of the most commonly used evaluation
(and research) designs.

• One-Shot Design: The evaluator gathers data following an intervention or program. For example, a survey of participants might be administered after they complete a workshop.
• Retrospective Pretest: As with the one-shot design, the evaluator collects data at one time but asks for recall of behaviour or conditions prior to, as well as after, the intervention or program.
• One-Group Pre-test-Post-test Design: The evaluator gathers data prior to and following the intervention or program being evaluated.
• Time Series Design: The evaluator gathers data prior to, during, and after the implementation of an intervention or program.
• Pre-test-Post-test Control-Group Design: The evaluator gathers data on two separate groups prior to and following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention. The other group, called the control group, does not receive the intervention. (A minimal analysis sketch follows this list.)
• Post-test-Only Control-Group Design: The evaluator collects data from two separate groups following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention or program, while the other group, typically called the control group, does not. Data are collected from both of these groups only after the intervention.
• Case Study Design: When evaluations are conducted for the purpose of understanding the program's context, participants' perspectives, the inner dynamics of situations, and questions related to participants' experiences, and where generalization is not a goal, a case study design, with an emphasis on the collection of qualitative data, might be most appropriate. Case studies involve in-depth descriptive data collection and analysis of individuals, groups, systems, processes, or organizations. In particular, the case study design is most useful when you want to answer how and why questions and when there is a need to understand the particulars, uniqueness, and diversity of the case.
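
As referenced in the design list above, the following minimal Python sketch illustrates the logic of a pre-test-post-test control-group design: the change in the treatment group is compared with the change in the control group. The scores are invented for illustration and are not data from these notes.

```python
# Minimal sketch of analysing a pre-test-post-test control-group design.
# All scores are hypothetical illustrations.
from statistics import mean

treatment_pre = [52, 48, 61, 55, 50]
treatment_post = [68, 63, 74, 70, 66]
control_pre = [53, 47, 60, 54, 51]
control_post = [56, 50, 62, 57, 53]

# Average change within each group.
treatment_change = mean(treatment_post) - mean(treatment_pre)
control_change = mean(control_post) - mean(control_pre)

# The difference between the two changes is a simple estimate of the
# effect attributable to the intervention (assuming comparable groups).
estimated_effect = treatment_change - control_change
print(f"Treatment change: {treatment_change:.1f}")
print(f"Control change:   {control_change:.1f}")
print(f"Estimated effect: {estimated_effect:.1f}")
```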

Decisions for Designing an Evaluation Study



SESSION 9: METHODS OF EVALUATION AND TOOLS

a) Evaluation Methods
Informal and less-structured methods
• Conversation with concerned individuals
• Community interviews
• Field visits
• Reviews of records
• Key informant interviews
• Participant observation
• Focus group interviews

Formal and more-structured methods


• Direct observation
• Questionnaires
• One-time survey
• Panel survey
• Census
• Field experiments

Common evaluation methods, approaches and tools, with remarks on their use:

• Case study: A detailed description of individuals, communities, organizations, events, programmes, time periods or a story. Remarks: Useful in evaluating complex situations and exploring qualitative impact; helps to illustrate findings and includes comparisons (commonalities). Only when combined (triangulated) with other case studies or methods can one draw conclusions about key principles.
• Checklist: A list of items used for validating or inspecting whether procedures/steps have been followed, or the presence of examined behaviours. Remarks: Allows for a systematic review that can be useful in setting benchmark standards and establishing periodic measures of improvement.
• Community book: A community-maintained document of a project belonging to a community. It can include written records, pictures, drawings, songs or whatever community members feel is appropriate. Remarks: Where communities have low literacy rates, a memory team is identified whose responsibility it is to relate the written record to the rest of the community in keeping with their oral traditions.
• Community interviews/meeting: A form of public meeting open to all community members. Remarks: Interaction is between the participants and the interviewer, who presides over the meeting and asks questions following a prepared interview guide.
• Direct observation: A record of what observers see and hear at a specified site, using a detailed observation form. Observation may be of physical surroundings, activities or processes. It is a good technique for collecting data on behavioural patterns and physical conditions. Remarks: An observation guide is often used to reliably look for consistent criteria, behaviours, or patterns.
• Document review: A review of documents (secondary data) that can provide cost-effective and timely baseline information and a historical perspective of the project/programme. Remarks: It includes written documentation (e.g. project records and reports, administrative databases, training materials, correspondence, legislation and policy documents) as well as videos, electronic data or photos.
• Focus group discussion: Focused discussion with a small group (usually eight to 12 people) of participants to record attitudes, perceptions and beliefs relevant to the issues being examined. Remarks: A moderator introduces the topic and uses a prepared interview guide to lead the discussion and extract conversation, opinions and reactions.
• Interviews: An open-ended (semi-structured) interview is a technique for questioning that allows the interviewer to probe and pursue topics of interest in depth (rather than just "yes/no" questions). A closed (structured) interview systematically follows carefully organized questions (prepared in advance in an interviewer's guide) that only allow a limited range of answers, such as "yes/no" or a rating/number on a scale. Remarks: Replies can easily be numerically coded for statistical analysis.
• Key informant interview: An interview with a person having special information about a particular topic. Remarks: These interviews are generally conducted in an open-ended or semi-structured fashion.
• Laboratory testing: Precise measurement of a specific objective phenomenon, e.g. infant weight or a water quality test.
• Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using non-probability sampling techniques. Remarks: Structured questionnaires with a limited number of closed-ended questions are used to generate quantitative data that can be collected and analyzed quickly.
• Most significant change (MSC): A participatory monitoring technique based on stories about significant changes, rather than indicators. Remarks: The stories give a rich picture of the impact of development work and provide the basis for dialogue over key objectives and the value of the development programme.
• Participant observation: A technique first used by anthropologists (those who study humankind); it requires the researcher to spend considerable time (days) with the group being studied and to interact with them as a participant in their community. Remarks: This method gathers insights that might otherwise be overlooked but is time-consuming.
• Participatory rapid (or rural) appraisal (PRA): Uses community engagement techniques to understand community views on a particular issue. Remarks: It is usually done quickly and intensively, over a two- to three-week period. Methods include interviews, focus groups and community mapping; tools include stakeholder analysis, participatory rural appraisal, beneficiary assessment, and participatory monitoring and evaluation.
• Questionnaire: A data collection instrument containing a set of questions organized in a systematic way, as well as a set of instructions for the data collector/interviewer about how to ask the questions. Remarks: Typically used in a survey.
• Rapid appraisal (or assessment): A quick, cost-effective technique to gather data systematically for decision-making, using quantitative and qualitative methods, such as site visits, observations, and sample surveys. Remarks: This technique shares many of the characteristics of participatory appraisal (such as triangulation and multidisciplinary teams) and recognizes that Indigenous knowledge is a critical consideration for decision-making. Methods include key informant interviews, focus group discussions, community group interviews, direct observation, and mini-surveys.
• Statistical data review: A review of population censuses, research studies, and other sources of statistical data.
• Story: An account or recital of an event or a series of events. A success story illustrates impact by detailing an individual's positive experiences in his or her own words. Remarks: A learning story focuses on the lessons learned through an individual's positive and negative experiences (if any) with a project/program.
• Formal survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g. persons, beneficiaries, adults). An enumerated survey is one in which the survey is administered by someone trained (a data collector/enumerator) to record responses from respondents. A self-administered survey is a written survey completed by the respondent, either in a group setting or in a separate location; respondents must be literate. Remarks: Includes multi-topic or single-topic household/living standards surveys, client satisfaction surveys, and core welfare indicators questionnaires. Public expenditure tracking surveys track the flow of public funds and the extent to which resources actually reach the target groups. Sampling-related methods cover the sample frame, sample size and sampling method, e.g. random (simple, systematic or stratified) and non-random (purposive, cluster and quota sampling).
• Visual techniques: Participants develop maps, diagrams, calendars, timelines, and other visual displays to examine the study topics. Participants can be prompted to construct visual responses to questions posed by the interviewers, e.g. by constructing a map of their local area. Remarks: This technique is especially effective where verbal methods can be problematic due to low-literacy or mixed-language target populations, or in situations where the desired information is not easily expressed in either words or numbers.
• Cost-benefit and cost-effectiveness analysis: Assesses whether or not the costs of an activity can be justified by the outcomes and impacts. Remarks: Cost-benefit analysis measures both inputs and outputs in monetary terms; cost-effectiveness analysis measures inputs in monetary terms and outputs in non-monetary terms. (A worked arithmetic sketch follows this list.)
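
As referenced above, the following minimal Python sketch shows the basic arithmetic behind cost-benefit and cost-effectiveness analysis. The figures are invented for illustration and are not data from these notes.

```python
# Minimal sketch of cost-benefit and cost-effectiveness arithmetic.
# All figures are hypothetical illustrations.

total_cost = 120_000.0          # project cost in monetary units
monetized_benefits = 180_000.0  # benefits valued in the same monetary units
outcome_units = 600             # non-monetary outcome, e.g. households reached

# Cost-benefit analysis: both inputs and outputs in monetary terms.
net_benefit = monetized_benefits - total_cost
benefit_cost_ratio = monetized_benefits / total_cost

# Cost-effectiveness analysis: inputs in monetary terms, outputs in non-monetary terms.
cost_per_outcome = total_cost / outcome_units

print(f"Net benefit: {net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"Cost per household reached: {cost_per_outcome:,.2f}")
```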

Advantages and disadvantages of selected M&E tools/methods:

Survey
Advantages: Good for gathering descriptive data; can cover a wide range of topics; relatively inexpensive to use; can be analyzed using a variety of existing software.
Disadvantages: Self-report may lead to biased reporting; data may provide a general picture but lack depth; may not provide adequate information on context.

Case studies
Advantages: Provide a rich picture of what is happening, as seen through the eyes of many individuals; allow a thorough exploration of interactions between treatment and contextual factors; can help explain changes or facilitating factors that might otherwise not emerge from the data.
Disadvantages: Require a sophisticated and well-trained data collection and reporting team; can be costly in terms of the demands on time and resources; individual cases may be over-interpreted or overgeneralized.

Interviews
Advantages: Usually yield the richest data, details and new insights; permit face-to-face contact with respondents; provide the opportunity to explore topics in depth; allow the interviewer to experience the affective as well as cognitive aspects of responses; allow the interviewer to explain or help clarify questions, increasing the likelihood of useful responses; allow the interviewer to be flexible in administering the interview to particular individuals or in particular circumstances.
Disadvantages: Expensive and time-consuming; need well-qualified, highly trained interviewers; the interviewee may distort information through recall error, selective perceptions or a desire to please the interviewer; flexibility can result in inconsistencies across interviews; the volume of information is very large and may be difficult to transcribe and reduce.

b) PARTICIPATORY M&E
Participatory evaluation is a partnership approach to evaluation in which
stakeholders actively engage in developing the evaluation and all phases of
its implementation. Participatory evaluations often use rapid appraisal techniques, including the following:
• Key Informant Interviews - Interviews with a small number of individuals
who are most knowledgeable about an issue.
• Focus Groups - A small group (8-12) is asked to openly discuss ideas,
issues and experiences.
• Mini-surveys - A small number of people (25-50) is asked a limited
number of questions.
• Neighborhood Mapping - Pictures show the location and types of
changes in an area to be evaluated.



• Flow Diagrams - A visual diagram shows proposed and completed changes in systems.
• Photographs - Photos capture changes in communities that have occurred over time.
• Oral Histories and Stories - Stories capture progress by focusing on one
person’s or organization’s account of change.

E.g. Specific applications of the focus group method in evaluations.


• Identifying and defining problems in project implementation
• Pretesting topics or ideas
• Identifying project strengths, weaknesses, and recommendations
• Assisting with an interpretation of quantitative findings
• Obtaining perceptions of project outcomes and impacts
• Generating new ideas

SESSION 10: DATA ANALYSIS AND REPORTING

The term “data” refers to raw, unprocessed information while “information,”


or “strategic information,” usually refers to processed data or data presented
in some sort of context.
• Data –primary or secondary- is a term given to raw facts or figures
before they have been processed and analyzed.
• Information refers to data that has been processed and analyzed for
reporting and use.
• Data analysis is the process of converting collected (raw) data into
usable information.

(i) Quantitative and Qualitative data


• Quantitative data measures and explains what is being studied with
numbers (e.g. counts, ratios, percentages, proportions, average scores,
etc).
• Qualitative data explains what is being studied with words (documented
observations, representative case descriptions, perceptions, opinions of
value, etc).
• Quantitative methods tend to use structured approaches (e.g. coded
responses to surveys) which provide precise data that can be
statistically analyzed and replicated (copied) for comparison.
• Qualitative methods use semi-structured techniques (e.g. observations
and interviews) to provide an in-depth understanding of attitudes,
beliefs, motives and behaviors. They tend to be more participatory and
reflective in practice.

Quantitative data is often considered more objective and less biased than
qualitative data but recent debates have concluded that both quantitative
and qualitative methods have subjective (biased) and objective (unbiased)
characteristics.



Therefore, a mixed-methods approach is often recommended that can utilize
the advantages of both, measuring what happened with quantitative data and
examining how and why it happened with qualitative data.

(ii) Some Data Quality Issues in Monitoring and Evaluation


• Coverage: Will the data cover all of the elements of interest?
• Completeness: Is there a complete set of data for each element of
interest?
• Accuracy: Have the instruments been tested to ensure the validity and
reliability of the data?
• Frequency: Are the data collected as frequently as needed?
• Reporting schedule: Do the available data reflect the periods of
interest?
• Accessibility: Are the data needed collectible/retrievable?
• Power: Is the sample size big enough to provide a stable estimate or
detect change?
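
For the power/sample-size question, a common first check is the sample size needed to estimate a proportion with a given margin of error. The sketch below uses a simplified formula, n = z^2 * p * (1 - p) / e^2, and assumes simple random sampling; the values are hypothetical.

```python
# Minimal sketch: sample size needed to estimate a proportion.
# Simplified formula n = z^2 * p * (1 - p) / e^2, assuming simple random sampling.
# Values are hypothetical illustrations.

z = 1.96  # z-score for 95% confidence
p = 0.5   # assumed proportion (0.5 gives the most conservative estimate)
e = 0.05  # desired margin of error (+/- 5 percentage points)

n = (z ** 2) * p * (1 - p) / (e ** 2)
print(f"Required sample size: {round(n)} respondents")  # approximately 384
```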

(iii) Data Analysis


Quantitative or qualitative research methods, or a complementary combination of both approaches, are used. Analysis may include:
• Content or textual analysis, making inferences by objectively and systematically identifying specified characteristics of messages.
• Statistical descriptive techniques, the most common of which include: graphical description (histograms, scatter-grams, bar charts, ...); tabular description (frequency distributions, cross tabs, ...); parametric description (mean, median, mode, standard deviation, skewness, kurtosis, ...).
• Statistical inferential techniques, which involve generalizing from a sample to the whole population and testing hypotheses. Hypotheses are stated in mathematical or statistical terms and tested through one- or two-tailed tests (t-test, chi-square, Pearson correlation, ...).
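
To make the distinction concrete, here is a minimal Python sketch of descriptive statistics followed by a simple inferential test comparing two groups. The values are invented examples, not data from these notes, and the inferential step assumes SciPy is installed.

```python
# Minimal sketch: descriptive statistics plus a two-sample t-test.
# All values are hypothetical illustrations.
from statistics import mean, median, stdev
from scipy import stats  # requires SciPy to be installed

# Hypothetical indicator scores for two groups of respondents.
group_a = [62, 70, 68, 74, 66, 71, 69, 65]
group_b = [58, 61, 64, 60, 63, 59, 62, 57]

# Descriptive statistics.
print("Group A:", mean(group_a), median(group_a), round(stdev(group_a), 2))
print("Group B:", mean(group_b), median(group_b), round(stdev(group_b), 2))

# Inferential statistics: two-tailed independent-samples t-test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```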

SESSION 11: TERMS OF REFERENCE IN M&E AND THE EVALUATION REPORT TEMPLATE

(i) Terms of Reference in Evaluation


Evaluation organizers are usually the ones who are in charge of a particular project and want to have the project evaluated to better manage project operations. The responsibilities of the evaluation organizers differ from those of the evaluators, who are usually consultants contracted for the evaluation.

Tasks of the evaluation organizers include:


• Preparing the TOR; TOR is a written document presenting the purpose
and scope of the evaluation, the methods to be used, the standard
against which performance is to be assessed or analyses are to be
conducted, the resources and time allocated, and reporting
requirements. TOR also defines the expertise and tasks required of a
contractor as an evaluator and serves as job descriptions for the
evaluator.



• Appointing evaluator(s);
• Securing budget for the evaluation;
• Monitoring the evaluation work;
• Providing comments on the draft;
• Publicizing the evaluation report; and
• Providing feedback from the results to concerned parties.

The role of the evaluator includes:


• Preparing the detailed evaluation design;
• Collecting and analyzing information; and
• Preparing an evaluation report.

The role of Management includes:


• Management response
• Action on recommendations
• Tracking status of the implementation of recommendations

Management Response Template
Prepared by:
Reviewed by:

Evaluation recommendation 1.
Management response:
Key action(s) | Time frame | Responsible unit(s) | Tracking (Comments, Status)
1.1
1.2

Evaluation recommendation 2.
Management response:
Key action(s) | Time frame | Responsible unit(s) | Tracking (Comments, Status)
2.1
2.2

ii) Evaluation Report Template

There is no single universal format for an evaluation report, but the following template is intended to serve as a guide for preparing meaningful, useful, and credible evaluation reports that meet quality standards. It suggests the content that should be included in a quality evaluation report but does not purport to prescribe a definitive section-by-section format that all evaluation reports should follow.

E.g.
Formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)

Or detailed:
I. Summary sections
A. Abstract
B. Executive summary
II. Background
A. Problems or needs addressed
B. Literature review
C. Stakeholders and their information needs
D. Participants
E. Project's objectives
F. Activities and components
G. Location and planned longevity of the project
H. Resources used to implement the project
I. Project's expected measurable outcomes
J. Constraints
III. Evaluation study questions
A. Questions addressed by the study
B. Questions that could not be addressed by the study (when relevant)
IV. Evaluation procedures
A. Sample
1. Selection procedures
2. Representativeness of the sample
3. Use of comparison or control groups, if applicable
B. Data collection
1. Methods
2. Instruments
C. Summary matrix
1. Evaluation questions
2. Variables
3. Data gathering approaches
4. Respondents
5. Data collection schedule
V. Findings
A. Results of the analyses organized by study question
VI. Conclusions
A. Broad-based, summative statements
B. Recommendations, when applicable

Or:
Table of contents
Executive summary
• Introduction
• Evaluation scope, focus, and approach
• Project facts
• Findings, Lessons Learned
o Findings
o Lessons Learned
• Conclusions and recommendations



o Conclusions
o Recommendations
• Annexes/appendices

Or as per organizational requirements (Modified from UNDP, 2009, Handbook on Planning, Monitoring and Evaluating for Development Results).

The report should also include the following:

1. Title and opening pages—Should provide the following basic information:
• Name of the evaluation intervention
• Time frame of the evaluation and date of the report
• Country/Organization/Entity of the evaluation intervention
• Names and organizations of evaluators
• Name of the organization commissioning the evaluation
• Acknowledgements

2. Table of contents
 Should always include lists of boxes, figures, tables, and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
• Briefly describe the intervention (the project(s), programme(s), policies
or other interventions) that was evaluated.
• Explain the purpose and objectives of the evaluation, including the
audience for the evaluation and the intended uses.
• Describe key aspects of the evaluation approach and methods.
• Summarize principal findings, conclusions, and recommendations.
5. Introduction
Should:
• Explain why the evaluation was conducted (the purpose), why the
intervention is being evaluated at this point, and why it addressed the
questions it did.
• Identify the primary audience or users of the evaluation, what they
wanted to learn from the evaluation and why, and how they are
expected to use the evaluation results.
• Identify the intervention (the project(s), programme(s), policies or other
interventions) that was evaluated—see the upcoming section on the
description of the intervention.
• Acquaint the reader with the structure and contents of the report and
how the information contained in the report will meet the purposes of
the evaluation and satisfy the information needs of the report’s intended
users.
6. Description of the intervention/project/process/program—Provide the
basis for report users to understand the logic and assess the merits of the
evaluation methodology, and to understand the applicability of the evaluation
results.
The description needs to provide sufficient detail for the report user to derive
meaning from the evaluation. The description should:
• Describe what is being evaluated, who seeks to benefit, and the problem
or issue it seeks to address.
• Explain the expected results map or results framework, implementation
strategies, and the key assumptions underlying the strategy.
• Link the intervention to national priorities, development partner
priorities, corporate strategic plan goals, or other project, programme,
organizational, or country-specific plans and goals.
• Identify the phase in the implementation of the intervention and any
significant changes (e.g., plans, strategies, logical frameworks) that
have occurred over time, and explain the implications of those changes
for the evaluation.
• Identify and describe the key partners involved in the implementation
and their roles.
• Describe the scale of the intervention, such as the number of
components (e.g., phases of a project) and the size of the target
population for each component.
• Indicate the total resources, including human resources and budgets.
• Describe the context of the social, political, economic, and institutional
factors, and the geographical landscape within which the intervention
operates and explain the effects (challenges and opportunities) those
factors present for its implementation and outcomes.
• Point out design weaknesses (e.g., intervention logic) or other
implementation constraints (e.g., resource limitations).
7. Evaluation scope and objectives - The report should provide a clear
explanation of the evaluation’s scope, primary objectives, and main questions.
• Evaluation scope—The report should define the parameters of the
evaluation, for example, the time period, the segments of the target
population included, the geographic area included, and which
components, outputs or outcomes were and were not assessed.
• Evaluation objectives—The report should spell out the types of decisions
evaluation users will make, the issues they will need to consider in
making those decisions, and what the evaluation will need to achieve to
contribute to those decisions.
• Evaluation criteria—The report should define the evaluation criteria or
performance standards used. The report should explain the rationale for
selecting the particular criteria used in the evaluation.
• Evaluation questions—Evaluation questions define the information that
the evaluation will generate. The report should detail the main
evaluation questions addressed by the evaluation and explain how the
answers to these questions address the information needs of users.
8. Evaluation approach and methods - The evaluation report should describe in
detail the selected methodological approaches, theoretical models, methods,
and analysis; the rationale for their selection; and how, within the constraints
of time and money, the approaches and methods employed yielded data that
helped answer the evaluation questions and achieved the evaluation purposes.
The description should help the report users judge the merits of the methods
used in the evaluation and the credibility of the findings, conclusions, and
recommendations.



The description of the methodology should include a discussion of each of the
following:
• Data sources—The sources of information (documents reviewed and
stakeholders), the rationale for their selection, and how the information
obtained addressed the evaluation questions.
• Sample and sampling frame—If a sample was used: the sample size and
characteristics; the sample selection criteria (e.g., single women, under
45); the process for selecting the sample (e.g., random, purposive); if
applicable, how comparison and treatment groups were assigned; and
the extent to which the sample is representative of the entire target
population, including discussion of the limitations of the sample for
generalizing results (a minimal illustrative sampling sketch follows this list).
• Data collection procedures and instruments— Methods or procedures used
to collect data, including discussion of data collection instruments (e.g.,
interview protocols), their appropriateness for the data source, and
evidence of their reliability and validity.
• Performance standards/indicators—The standard or measure that will be
used to evaluate performance relative to the evaluation questions (e.g.,
national or regional indicators, rating scales).
• Stakeholder engagement—Stakeholders’ engagement in the evaluation
and how the level of involvement contributed to the credibility of the
evaluation and the results.
• Ethical considerations—The measures taken to protect the rights and
confidentiality of informants
• Background information on evaluators—The composition of the evaluation
team, the background and skills of team members, and the
appropriateness of the technical skill mix, gender balance, and
geographical representation for the evaluation.
• Major limitations of the methodology—These should be identified and
openly discussed, including their implications for the evaluation and the
steps taken to mitigate them.
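To make the sampling bullet above concrete, the following is a minimal, illustrative Python sketch; the beneficiary register, its attributes, and the sample size are assumed for illustration only and are not part of any standard M&E toolkit.

import random
from collections import Counter

# Assumed, illustrative register of beneficiaries: (id, sex, county)
beneficiaries = [
    (1, "F", "Kisumu"), (2, "M", "Kisumu"), (3, "F", "Nakuru"),
    (4, "M", "Nakuru"), (5, "F", "Kisumu"), (6, "M", "Nakuru"),
]

random.seed(2024)                            # fixed seed so the draw can be reproduced
sample = random.sample(beneficiaries, k=3)   # simple random sample of 3 beneficiaries

# Crude representativeness check: compare the sex mix in the sample and the population
population_mix = Counter(sex for _, sex, _ in beneficiaries)
sample_mix = Counter(sex for _, sex, _ in sample)
print("Population:", population_mix)
print("Sample:", sample_mix)

A purposive sample, by contrast, would be drawn against explicit selection criteria (e.g., only female-headed households), and those criteria should be documented in the report.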
9. Data analysis—The report should describe the procedures used to analyze the
data collected to answer the evaluation questions. It should detail the various
steps and stages of analysis that were carried out, including the steps to
confirm the accuracy of data and the results. The report also should discuss
the appropriateness of the analysis to the evaluation questions. Potential
weaknesses in the data analysis and gaps or limitations of the data should be
discussed, including their possible influence on the way findings may be
interpreted and conclusions drawn.
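As an illustration of one simple analysis step consistent with this guidance, the short Python sketch below averages respondent ratings per evaluation question so that findings can be reported question by question; the question labels and ratings are assumed, not drawn from any actual evaluation.

from statistics import mean

# Assumed ratings on a 1-5 scale, keyed by evaluation question
ratings = {
    "Q1 Relevance": [4, 5, 3, 4],
    "Q2 Effectiveness": [3, 3, 4, 2],
}

for question, scores in ratings.items():
    print(f"{question}: n={len(scores)}, mean rating={mean(scores):.1f}")

In a real evaluation the same idea scales up: each evaluation question gets its own summary of the relevant quantitative and qualitative evidence, together with a note on data gaps or limitations.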
10. Findings and conclusions—The report should present the evaluation findings
based on the analysis and conclusions drawn from the findings.
• Findings—Should be presented as statements of fact that are based on
analysis of the data. They should be structured around the evaluation
criteria and questions so that report users can readily make the
connection between what was asked and what was found. Variances
between planned and actual results should be explained, as well as
factors affecting the achievement of intended results. Assumptions or
risks in the project or programme design that subsequently affected
implementation should be discussed.
• Conclusions—Should be comprehensive and balanced, and highlight the
strengths, weaknesses and outcomes of the intervention. They should
be well substantiated by the evidence and logically connected to
evaluation findings. They should respond to key evaluation questions
and provide insights into the identification of and/or solutions to
important problems or issues pertinent to the decision making of
intended users.
11. Recommendations—The report should provide practical, feasible
recommendations directed to the intended users of the report about what
actions to take or decisions to make. The recommendations should be
specifically supported by the evidence and linked to the findings and
conclusions around
key questions addressed by the evaluation. They should address sustainability
of the initiative and comment on the adequacy of the project exit strategy, if
applicable.
12. Lessons learned—As appropriate, the report should include a discussion of
lessons learned from the evaluation, that is, new knowledge gained from the
particular circumstance (the intervention, context, outcomes, or even the
evaluation methods) that is applicable to a similar context. Lessons should be
concise and based on specific evidence presented in the report.
13. Report annexes—Suggested annexes should include the following to provide
the report user with supplemental background and methodological details that
enhance the credibility of the report:
• ToR for the evaluation
• Additional methodology-related documentation, such as the evaluation
matrix and data collection instruments (questionnaires, interview
guides, observation protocols, etc.) as appropriate
• List of individuals or groups interviewed or consulted and sites visited
• List of supporting documents reviewed
• Project or programme results map or results framework
• Summary tables of findings, such as tables displaying progress towards
outputs, targets, and goals relative to established indicators
• Short biographies of the evaluators and justification of team composition
• Code of conduct signed by evaluators

SESSION 12: BEST PRACTICES, EMERGING TRENDS & M&E CAPACITY BUILDING
IN KENYA
(i) Monitoring Best Practices
• Data well-focused to specific audiences and uses (only what is
necessary and sufficient).
• Systematic, based upon predetermined indicators and assumptions.
• Also look for unanticipated changes in the project/program and its
context, including any changes in project/program assumptions/risks;
this information should be used to adjust project/program
implementation plans.
• Be timely, so information can be readily used to inform project/program
implementation.
• Be participatory, involving key stakeholders – this reduces costs and builds
understanding and ownership.
• Share monitoring information not only with project/program management but
also, when possible, with beneficiaries, donors, and any other relevant stakeholders.

(ii) Good M&E Principles for Projects



• Participation: encourage participation “by all who wish to participate
and/or who might be affected by the review.”
• Decision Making: “Projects will utilize a structured decision-making
process.”
• Value People: “Projects are not intended to result in a loss of employees
but may result in employees being re-deployed to other activities within
the department.”
• Measurement: for accountability; measures should be accurate,
consistent, flexible, and comprehensive but not onerous
• Integrated Program/Process Planning and Evaluation: incorporated into
yearly business plans
• Ethical Conduct/Openness: consider ethical implications, respect and
protect the rights of participants
• Program/Process Focus: focus on improving the program, activity, or
process
• Clear and Accurate Reporting of Facts and Review Results
• Timely Communication of Information and Review Results to Affected
Parties
• Multi-Disciplinary Team Approach: include a range of knowledge and
experience; seek assistance from outside of the team as required
• Customer and Stakeholder Involvement: “External and internal
customers and stakeholders related to a project should be identified and
consulted, if possible, throughout the project.”

(iii) Basic Ethics to expect from an evaluator


• Systematic Inquiry – Evaluators conduct systematic, data-based
inquiries about whatever is being evaluated.
• Competence – Evaluators provide competent performance to
stakeholders.
• Integrity/honesty – Evaluators ensure the honesty and integrity of the
entire evaluation process.
• Respect for people – Evaluators respect the security, personal dignity,
autonomy, and self-worth of respondents, program participants, clients,
and other stakeholders with whom they interact, including recognition of
and special protections for those with diminished autonomy, such as
children or prisoners.
• Responsibilities for general and public welfare – Evaluators clarify and
consider the diversity of interests and values that may be related to the
general and public welfare.
• Beneficence: the obligation to protect people from harm by maximizing
anticipated benefits and minimizing potential risks of harm
• Justice: The benefits and burdens of research should be distributed
fairly. In other words, one segment of society—the poor or people of one
ethnicity—should not be the only subjects in research designed to benefit
everyone.

(iv) Key Success Factors of Monitoring and Evaluation System



• Clear linkage with the strategic objectives
• Clear statements of measurable objectives for the project and its
components.
• A structured set of indicators covering: inputs, processes, outputs,
outcomes, impact, and exogenous factors.
• Data collection mechanisms capable of monitoring progress over
time, including baselines and a means to compare progress and
achievements against targets (an illustrative sketch follows this list).
• Availability of baselines and realistic results framework
• Clear mechanisms for reporting and use of M&E results in decision-
making.
• Sustainable organizational arrangements for data collection,
management, analysis, and reporting.
• A good evaluation process should have six characteristics:
o stakeholder involvement,
o impartiality,
o usefulness,
o technical adequacy,
o cost effectiveness, and
o timely dissemination and feedback.
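The Python sketch below illustrates the point above about comparing progress against baselines and targets; the indicator names, baselines, targets, and achieved values are assumed purely for illustration.

# Illustrative only: indicator names, baselines, targets, and achieved values are assumed
indicators = [
    # (indicator, baseline, target, achieved)
    ("Households with access to safe water (%)", 40, 80, 65),
    ("Children fully immunized (%)", 55, 90, 88),
]

for name, baseline, target, achieved in indicators:
    progress = (achieved - baseline) / (target - baseline) * 100
    print(f"{name}: {progress:.0f}% of the way from baseline to target")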

(v) Factors contributing to the failure of M&E Systems


• Poor system design in terms of collecting more data than is needed or
can be processed.
• Inadequate staffing of M&E both in terms of quantity and quality
• Missing or delayed baseline studies. Strictly, these should be done before
the start of project implementation if they are to facilitate "with and
without project" comparisons and evaluation.
• Delays in processing data, often as a result of inadequate processing
facilities and staff shortages.
• Personal computers can process data easily and quickly but making the
most of these capabilities requires the correct software and capable
staff.
• Inadequate utilization of results

(vi) Status of M&E in Kenya


• Establishment of a National Monitoring and Evaluation Policy
• Monitoring and evaluation is defined as ‘ a management tool that
ensures that policy, program, and project results are achieved by
gauging performance against plans; and drawing lessons from the
experience of interventions for future implementation effectiveness
while fostering accountability to the people of Kenya’. (GOK, Monitoring
and evaluation policy in Kenya, 2012)
• Directorate of M&E created in 2003
• National Integrated M&E system- implementation coordinated by
Directorate of M&E, Department of Planning to monitor implementation
of the Economic Recovery Strategy



• Rationale for M&E policy – the Constitution of Kenya provides the basis for
M&E under Articles 10, 56, 174, 185, 201, 203, 225, 226 and 227
• Challenges include: -
i. Weak M&E culture – it is hard to determine whether M&E influences
decision-making, and M&E budgets are not aligned to
projects/programs
ii. Weak M&E reporting structures, and multiple, uncoordinated M&E
systems within and among institutions, which make it difficult to
obtain full and harmonized results-based information.
iii. Weak institutional, managerial, and technical capacities-
evaluations not adequately conducted
iv. Untimely, rarely analyzed data and low utilization of data/
information
v. Lack of M&E policy and legal framework
• Capacity development to complement the policy:
o Technical and managerial capacity – equip officers with M&E skills and
provide backstopping on M&E for state and non-state actors
o Standardize M&E activities
o MED in collaboration with local training institutions shall develop a
curriculum to guide the delivery of certificate, diploma, graduate,
master, and post-graduate diploma courses
o MED to spearhead real-time reporting through uploading,
downloading, and data analysis on ICT database platforms
o Institutional capacity
 Units charged with M&E
 Necessary enabling infrastructure at national and devolved
levels
• Technical oversight committee
• National steering committee
• Ministerial M&E committees
• County M&E committees
• National and County Stakeholders fora
 Funds designated for M&E activities
 Non-state actors (NGOs, civil society, and the private sector) to be
supported by MED in their M&E capacity development

EXERCISES

Exercise 1: Identify 5 key indicators and complete an indicator matrix for a
project/program you are familiar with (an illustrative sketch for capturing the matrix electronically follows the table below).



Indicator | Indicator Definitions | Methods/Sources | Person/s Responsible | Frequency/Schedule | Data Analysis | Information Use
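For learners who prefer to keep the matrix in electronic form, the minimal Python sketch below represents one row of the matrix; the field names simply mirror the column headings above, and the sample values are assumed.

from dataclasses import dataclass

@dataclass
class IndicatorRow:  # field names mirror the matrix columns above
    indicator: str
    indicator_definition: str
    methods_sources: str
    person_responsible: str
    frequency_schedule: str
    data_analysis: str
    information_use: str

# Assumed sample row for illustration
row = IndicatorRow(
    indicator="% of trained farmers adopting the promoted practice",
    indicator_definition="Trained farmers using the practice / all trained farmers x 100",
    methods_sources="Follow-up household survey",
    person_responsible="M&E officer",
    frequency_schedule="Every 6 months",
    data_analysis="Descriptive statistics by county",
    information_use="Quarterly progress report",
)
print(row)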

Exercise 2: Identify a suitable project and complete a logical framework


Narrative Summary | Verifiable Indicators (OVI) | Means of Verification (MOV) | Important Assumptions
GOAL | | |
PURPOSE | | |
OUTPUTS | | |
ACTIVITIES | Inputs | |

Exercise 3: Identify a suitable project and complete an Evaluation Grid using
the five evaluation criteria, which are Relevance, Effectiveness, Efficiency,
Impact, and Sustainability.

Exercise 4: Identify a suitable project and complete an Evaluation Matrix using
the five evaluation criteria, which are Relevance, Effectiveness, Efficiency,
Impact, and Sustainability.

Relevant evaluation criteria | Key Questions | Specific Sub-Questions | Data Sources | Data Collection Methods/Tools | Indicators/Success Standard | Methods for Data Analysis
Relevance | | | | | |
Effectiveness | | | | | |
Efficiency | | | | | |
Impact | | | | | |
Sustainability | | | | | |



Exercise 5: Identify 5 evaluation methods/techniques and complete an
Evaluation Method/Technique Matrix in regard to a suitable project/program.

Evaluation Method/Technique | What are they | What can it be used for | Advantages | Disadvantages | Resources required
Formal surveys | Used to collect standardized information from samples | Baseline data, comparing different groups, changes over time, etc. | Findings from sampled items can be applied to the wider target group | Data processing and analysis can be a bottleneck | Finances, technical and analytical skills
Rapid appraisal methods | | | | |
Participatory methods | | | | |

Exercise 6: Identify 5 evaluation models/approaches and complete an
Evaluation Model/Approaches Matrix

Evaluation Model/Approach | What are some examples or situations in which you would use this approach? | What conditions need to exist to use this approach? | What are some limitations of this approach?
Goal-free evaluation | | |
Kirkpatrick Four-level approach | | |

Exercise 7: Evaluation Models


a) Applying Kirkpatrick's Four-Level Approach to Evaluate Training
Sales training covers basic topics, such as how to begin the sales discussion, how to
ask the right questions, and how to ask for the sale. Although the trainer believes
that the training will be successful, you have been requested to evaluate the training
program. You decide to use the Kirkpatrick four-level approach (an illustrative sketch of the four levels follows the table below).

What aspects of the training will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?
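One way to organize an answer is to note, for each of Kirkpatrick's four levels, what would be measured and how; the level names (Reaction, Learning, Behavior, Results) are standard, but the example measures in the Python sketch below are assumed for illustration.

# Kirkpatrick's four levels with assumed, illustrative measures for the sales training
kirkpatrick_plan = {
    "Reaction": "End-of-course satisfaction questionnaire",
    "Learning": "Pre/post test on questioning and closing techniques",
    "Behavior": "Supervisor observation of sales calls after 3 months",
    "Results": "Change in sales conversion rate per trainee",
}

for level, measure in kirkpatrick_plan.items():
    print(f"{level}: {measure}")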

b) Applying the CIPP evaluation model (Context, Input, Process, Product)


What aspects of the project will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?



Exercise 8: Identify 5 evaluation designs and complete an Evaluation Design
Matrix
Evaluation Design | When would you use this design? | What data collection methods might you use? | What are some limitations of this design?
Retrospective Pre-test | | |
Case Study Design | | |

==== END ====
