PMIZ M&E Handout
University of Zimbabwe
Preamble
This module introduces the concepts and values of monitoring and evaluation in
project/programme implementation. It will equip students with basic knowledge on monitoring
and evaluation.
The module (PMIZ Module 4 CPM M&E) is taught for 32 hours. During that period students
complete 2 assignments, 2 practicals and a 1-hour class test. Practicals cover group work and
class activities, which are recorded.
Aim of the Module: At the end of the module, students should be able to apply the fundamental
tools and techniques of project monitoring and evaluation in a formal project environment.
OBJECTIVES:
At the end of the course, students should be able to:
1. Define the terms monitoring and evaluation.
2. Explain the role of project monitoring and evaluation within the PMBOK framework.
3. Discuss the characteristics of an effective M & E system.
4. Describe the various tools and techniques of tracking project execution and performance against
the project quality plan.
5. Discuss the importance of project change control mechanisms and their effects on the project
performance metrics.
6. Describe the tools and techniques for reporting project progress to the various stakeholders.
7. Discuss the importance of implementing corrective actions and continuous improvement to non-
conforming activities within the project.
2. TYPES OF EVALUATIONS
- Describe briefly these approaches to evaluation – (outcome evaluation, impact evaluation,
performance evaluation, participatory evaluation, empowerment evaluation and feminist
evaluation).
- State the classes of evaluation – internal and external.
- Describe the various types of evaluation – ex-ante evaluation, mid-term evaluation, impact
evaluation, formative evaluation (monitoring), summative evaluation (ex-post evaluation).
- Identify and state the advantages and disadvantages of each of the evaluation types.
- Outline the different fields of evaluation (programme / project evaluation, policy evaluation,
legislation evaluation, technical assistance evaluation, organisation or institutional evaluation,
product / service evaluation and proposal assessment).
Over the years, many organisations have implemented various projects in different regions.
However, for an objective assessment of project performance, monitoring and evaluation tools
need to be applied. Thus, a set of project design and management tools has been adopted by
development organisations such as the EU, GTZ, DANIDA and others.
A project is a short-term building block for any development activity, while a programme comprises
many projects and is long-term. For example, the land reform programme in Zimbabwe consists of
projects such as the farm mechanisation, input supply, farm tillage and farmer training projects.
The way in which programmes or projects are planned and carried out follows a sequence
beginning with an agreed strategy, which leads to an idea for a specific action, which then is
formulated, implemented, and evaluated with a view to improving the strategy and further action.
This sequence is called the project cycle.
[Figure: the project cycle - Programming, Identification, Formulation, Appraisal/Financing, Implementation, Evaluation and Exit]
Identification
Problems, needs and interests of possible stakeholders are analysed, and ideas for projects and other
actions are identified and formulated in broad terms. This involves a study of the project context to
obtain an idea of the relevance, feasibility and sustainability of the proposal.
Formulation
During the formulation phase the promoters and project leaders engage in an intensive and
participatory process of information collection and analysis followed by a planning process that
includes operational issues such as activity and resource scheduling. This phase of the cycle leads
to final project proposals that can be submitted for a funding decision.
Appraisal
With reference to pre-determined criteria, the project proposals are analysed and prioritised, and
project cost-benefit analyses are conducted. The outcome of the appraisal phase is an acceptance
or rejection of the project, i.e. implementation and funding decisions.
Evaluation
The aim of an evaluation is to determine the relevance, effectiveness, efficiency, impact and
sustainability of the intervention. An evaluation should provide information that is credible and
useful, enabling the incorporation of lessons learned into the decision-making process of both
recipients and donors. Such an evaluation can be conducted at the end of the implementation phase
(final evaluation) or afterwards (ex-post evaluation). In addition to the various project partners,
selected external institutions and independent experts are important actors during this phase.
The outcome may consist of lessons learned and feedback that is channelled into
future programme frameworks.
Although the term "monitoring and evaluation" tends to get run together as if it were only one thing,
monitoring and evaluation are, in fact, two distinct sets of organisational activities, related but not
identical.
Importance of monitoring
Determines the progress in implementing the project
Continuously identifies and resolves any problems arising during project implementation
Continuously tracks trends in project activities.
Tracks project outcomes
Importance of evaluation
To analyse gaps in performance
To criticize our own work
To make our work more effective
To help us see where we are going and whether we need to change
To be able to share our experiences
To see if our work is costing too much and achieving too little
To see where our strengths and weaknesses are
Monitoring and evaluation compared:

Monitoring: uses data from project records, e.g. an attendance list for a training session.
Evaluation: uses survey data, e.g. beneficiary or population-level measurements of project outcomes and impacts.

Monitoring: indicators are tracked regularly and frequently; e.g. data on inputs (especially expenditures in food and cash) are reported on a quarterly basis.
Evaluation: indicators are assessed infrequently, e.g. outcome and impact indicators.

Monitoring: results are used to see whether the expected changes have indeed occurred during the Life of Activity (LOA).
Evaluation: results are also used to decide whether the project warrants continuation.
Relevance: are the objectives and methods appropriate to solve the challenge at hand?
Efficiency tells you that the input into the work is appropriate in terms of the output. This
could be input in terms of money, time, staff, equipment and so on. When you run a project
and are concerned about its replicability or about going to scale, then it is very important to get the
efficiency element right.
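To make the efficiency element concrete, here is a minimal sketch in Python that compares the unit cost of outputs for two projects; the project names and figures are hypothetical, invented purely for illustration.

# Hypothetical comparison of efficiency as cost per unit of output.
# All figures are illustrative, not drawn from a real project.
projects = {
    "Project A": {"cost_usd": 50_000, "boreholes_drilled": 10},
    "Project B": {"cost_usd": 90_000, "boreholes_drilled": 15},
}

for name, p in projects.items():
    unit_cost = p["cost_usd"] / p["boreholes_drilled"]
    print(f"{name}: USD {unit_cost:,.0f} per borehole")

On this measure, Project A (USD 5,000 per borehole) converts inputs into outputs more efficiently than Project B (USD 6,000 per borehole), other things being equal.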
Impact tells you whether or not what you did made a difference to the problem situation you were
trying to address. In other words, was your strategy useful? Did ensuring that teachers were better
qualified improve the pass rate in the final year of school? Before you decide to get bigger, or to
replicate the project elsewhere, you need to be sure that what you are doing makes sense in terms
of the impact you want to achieve. From this it should be clear that monitoring and evaluation are
best done when there has been proper planning against which to assess progress and achievements.
2. TYPES OF EVALUATIONS
The process of determining a project's effectiveness, merit or impact can be put into various
classes. These categories are chiefly influenced by the phase of the project life cycle. The
following types are worth noting:
Outcome evaluation. A type of evaluation that determines if, and by how much, intervention
activities or services achieved their intended outcomes. An outcome evaluation attempts to
attribute observed changes to the intervention tested. Note: An outcome evaluation is
methodologically rigorous and generally requires a comparative element in its design, such as a
control or comparison group, although it is possible to use statistical techniques in some instances
when control/comparison groups are not available (e.g., for the evaluation of a national program).
Impact evaluation. A type of evaluation that assesses the rise and fall of impacts, such as disease
prevalence and incidence, as a function of HIV programs/interventions. Impacts on a population
seldom can be attributed to a single program/intervention; therefore, an evaluation of impacts on a
population generally entails a rigorous design that assesses the combined effects of a number of
programs/ interventions for at-risk populations.
Categories of evaluation
Evaluation can also be understood by the level of independence of the evaluators and their nature
of relationship with the organisation or firm carrying out the assessment. This leads us to two main
branches of evaluation.
Internal evaluation
An evaluation of an intervention conducted by a unit and/or individuals who report to the
management of the organization responsible for the financial support, design and/or
implementation of the intervention.
External evaluation
Type of evaluation designed to make an independent assessment of impacts of interventions
usually through engaging an external consultant or outsourcing. It can also be used to assess the
performance of internal evaluators.
It is also important to note that evaluation is meant to assess the efficacy of any developmental
intervention. Therefore, there are different fields of evaluation determined by the nature of the
intervention, that is: programme, project or policy evaluation; legislation evaluation; technical assistance evaluation; organisation or institutional evaluation; product or service evaluation; and proposal assessment.
Goal refers to the sectoral or national objectives for which the project is designed to contribute,
e.g. increased incomes, improved nutritional status, reduced crime. It can also be referred to as
describing the expected impact of the project. The goal is thus a statement of intention that
explains the main reason for undertaking the project.
Purpose refers to what the project is expected to achieve in terms of development outcome.
Examples might include increased agricultural production, higher immunization coverage, cleaner
water, or improved local management systems and capacity. There should generally be only one
purpose statement.
Component objectives: where the project/program is relatively large and has a number of
components, it is useful to give each component an objective statement. These statements should
provide a logical link between the outputs of that component and the project purpose. Poorly
stated objectives limit the capacity of M&E to provide useful assessments for decision-making,
accountability and learning purposes.
Outputs refer to the specific results and tangible products (goods and services) produced by
undertaking a series of tasks or activities. Each component should have at least one contributing
output, and often have up to four or five. The delivery of project outputs should be largely under
project management's control.
Activities refer to all the specific tasks undertaken to achieve the required outputs. There are many
tasks and steps to achieve an output. However, the logical frame matrix should not include too
much detail on activities because it becomes too lengthy. If detailed activity specification is
required, this should be presented separately in an activity schedule/Gantt chart format and not in
the matrix itself.
It is worth noting that the above hierarchy of objectives represents a simplified framework and should be interpreted
in a suitably flexible manner. For example, ex-post evaluation assesses whether or not the purpose,
component objectives and outputs have been achieved. Project/program reviews are concerned
with performance in output delivery and the extent of achieving objectives.
Indicators
Indicators provide the quantitative and qualitative details to a set of objectives. They are statements about the situation that will exist when an objective is reached; therefore, they are measures used to demonstrate changes in certain conditions or in the results of an activity, a project or a programme. In addition, they provide evidence of the progress of programme or project activities in the attainment of development objectives. Indicators should be pre-established, i.e. during the project design phase. When a direct measure is not feasible, indirect or proxy indicators may be used.

Indicators should be directly linked to the level of assessment (e.g. output indicators, outcome indicators or impact indicators). Output indicators show the immediate physical and financial outputs of the project. Early indications of impact (outcomes) may be obtained by surveying beneficiaries' perceptions about project services. Impact refers to long-term developmental change; measures of change often involve complex statistics about economic or social welfare and depend on data that are gathered from beneficiaries. Indicators should also be clearly phrased to include change in a situation within a geographical location, time frame, target etc. A popular code for remembering the characteristics of good indicators is SMART.
S: Specific
M: Measurable
A: Attainable (i.e., can be checked)
R: Relevant (reflect changes in the situation)
T: Trackable (can be tracked over a specific period of time)
Source: ITAD, Monitoring and the Use of Indicators, consultancy report to
Monitoring and evaluation should be part of your planning process. It is very difficult to go back
and set up monitoring and evaluation systems once things have begun to happen. You need to
begin gathering information about performance and in relation to targets from the word go. The
first information gathering should, in fact, take place when you do your needs assessment (see the
toolkit on overview of planning, the section on doing the ground work). This will give you the
information you need against which to assess improvements over time. When you do your
planning process, you will set indicators. These indicators provide the framework for your
monitoring and evaluation system. They tell you WHAT you want to know and the kinds of
information it will be useful to collect.
To judge whether our work is making a difference, our monitoring and evaluation system must
give us information about:
Who is benefiting from what we do? How much are they benefiting?
Are beneficiaries passive recipients or does the process enable them to have some control over
their lives?
Are there lessons in what we are doing that have a broader impact than just what is happening
on our project?
Can what we are doing be sustained in some way for the long-term, or will the impact of our
work cease when we leave?
Are we getting optimum outputs for the least possible amount of inputs?
The activities defined during the formulation phase are generally not detailed enough to implement
the project directly. These activities often need to be broken down into 'sub-activities'
(contributing to the implementation of the activities, just as the activities contribute to the
results). While over-planning needs to be avoided, operational planning should generally cover
items such as activity and resource scheduling. Gantt charts or work-plan tables are therefore
mostly used (see the example of an activity chart, and the sketch below).
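As a minimal sketch of such a work-plan table, the following Python fragment prints a simple text Gantt chart; the activities, durations and week numbers are hypothetical and would be replaced by the project's own activity schedule.

# Print a simple text Gantt chart for a hypothetical activity schedule.
# Each entry is (activity, start week, end week); '#' marks active weeks.
activities = [
    ("Recruit field staff", 1, 2),
    ("Train enumerators", 3, 4),
    ("Baseline survey", 5, 8),
    ("Data entry and analysis", 8, 10),
]

weeks = 10
print("Activity".ljust(26) + "".join(f"W{w}".ljust(4) for w in range(1, weeks + 1)))
for name, start, end in activities:
    row = "".join("####" if start <= w <= end else " .  " for w in range(1, weeks + 1))
    print(name.ljust(26) + row)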
Indicators
Indicators describe the project's objectives in operationally measurable terms (quantity, quality,
target group(s), time, place). Indicators are a measurable or tangible sign that something has been
done. So, for example, an increase in the number of students passing is an indicator of an improved
culture of learning and teaching. The means of verification (proof) is the officially published list of
passes.
Indicators are an essential part of a monitoring and evaluation system because they are what you
measure and/or monitor. Through the indicators you can ask and answer questions such as:
Who?
How many?
How often?
How much?
Developing indicators
Often, the formulation of indicators is not an easy task. This might be the case in many projects
that pursue qualitative or intangible outputs. In such cases the definition of appropriate indicators
may involve considerable interaction among stakeholders. It might be possible as well that more
than one indicator will be needed to sufficiently describe a result or objective. It will not always be
possible to find indicators that fulfil all these criteria. In that case, 'proxy-indicators' might be
resorted to.
It is not always feasible to formulate indicators at the level of the overall objectives. As stated
above, the overall objectives refer to changes at the level of society to which the project intends to
contribute. The indicators should refer to the specific 'contribution' of the project to each of these
general objectives. However, in most cases, the project's contribution is relatively small and, more
importantly, difficult to isolate. It is then not meaningful to formulate indicators, and the
corresponding cell can remain blank. Alternatively, indicators may be formulated without further
operationalisation, i.e. without trying to measure the project's performance against them.
DEVELOPING INDICATORS
Step 1: Identify the problem situation you are trying to address. The following might be
problems:
Economic situation (unemployment, low incomes etc)
Social situation (housing, health, education etc)
Cultural or religious situation (not using traditional languages, low attendance at religious
services etc)
Political or organisational situation (ineffective local government, faction fighting etc)
Step 2: Develop a vision for how you would like the problem areas to be. This will give you
impact indicators.
What will tell you that the vision has been achieved?
What signs will you see that you can measure that will "prove" that the vision has been
achieved? For example, if your vision was that the people in your community would be
healthy, then you can use health indicators to measure how well you are doing. Has the infant
mortality rate gone down? Do fewer women die during childbirth? Has the HIV/AIDS
infection rate declined?
Quantitative measurement tells you "how much or how many". How many people attended a
workshop, how many people passed their final examinations, how much a publication cost, how
many people were infected with HIV, how far people have to walk to get water or firewood.
Quantitative measurement can be expressed in absolute numbers (3241 women in the sample are
infected) or as a percentage (50% of households in the area have television aerials). It can also be
expressed as a ratio (one doctor for every 30 000 people). One way or another, you get
quantitative (number) information by counting or measuring.
Qualitative measurement tells you how people feel about a situation or about how things are done
or how people behave. So, for example, although you might discover that 50% of the teachers in a
school are unhappy about the assessment criteria used, this is still qualitative information, not
quantitative information. You get qualitative information by asking, observing, interpreting.
Some people find quantitative information comforting – it seems solid and reliable and "objective".
They find qualitative information unconvincing and "subjective". It is a mistake to say that
"quantitative information speaks for itself". It requires just as much interpretation in order to make
it meaningful as does qualitative information. It may be a "fact" that enrolment of girls at schools
in some developing countries is dropping – counting can tell us that, but it tells us nothing about
why this drop is taking place. In order to know that, you would need to go out and ask questions –
to get qualitative information.
4. PROJECT CONTROL AND REPORTING
The Project Manager and team leaders need to be able to control the work of the team members.
They will need access to detailed information on such things as:
what people are doing,
how much time, resources and budget are being consumed,
how efficiently work is being completed and resources are being utilised,
which inter-dependencies and timing issues will impact upon progress,
what are the future projections for the timetable and for resourcing requirements.
The senior leadership team, people such as the Steering Committee, Project Director, Project
Sponsors etc, need to understand how the project is progressing. They need:
to be able to take executive action if problems are building up,
to adjust resource and investment levels where necessary,
to know when and where their support and sponsorship need to be applied to drive
progress forward,
to ensure that the timetable and demands of this project are compatible with their overall
programme of projects and other activities,
to understand what benefits will be delivered and when.
To address these needs, the project must establish effective methods for:
control
collation of information, and
communication.
Sampling method refers to the way that observations are selected from a population to be in the
sample for a sample survey. The reason for conducting a sample survey is to estimate the value of
some attribute of a population.
Population. The total number of items under study.
Types of Samples
As a group, sampling methods fall into one of two categories.
Probability samples. With probability sampling methods, each population element has a
known (non-zero) chance of being chosen for the sample.
Non-probability samples. With non-probability sampling methods, the chance of any particular
element being chosen is not known; volunteer, convenience and quota samples fall into this
category.
Volunteer sample. A volunteer sample is made up of people who select themselves. Suppose,
for example, that a news show asks viewers to participate in an on-line poll. This would be a
volunteer sample. The sample is chosen by the viewers, not by the survey administrator.
Convenience sample. A convenience sample is made up of people who are easy to reach.
Consider the following example. A pollster interviews shoppers at a local mall. If the mall
was chosen because it was a convenient site from which to solicit survey participants
and/or because it was close to the pollster's home or business, this would be a convenience
sample.
Quota sampling. A quota sample depends upon the choices and decisions of the researcher
conducting the research. The researcher may select appropriate candidates among many
aspirants, and the aspirants should be called in equal proportion. For example, if the researcher
has selected 5 female candidates in the age group of 16 to 50 years, then 5 male candidates
within similar age limits must also be selected, so that the thoughts and views of both genders
are known equally.
Stratified sampling. With stratified sampling, the population is first divided into non-overlapping
groups (strata) and a probability sample is drawn from each stratum. As an example, suppose we
conduct a national survey. We might divide the population into groups or strata based on
geography - north, east, south and west. Then, within each stratum, we might randomly select
survey respondents.
Cluster sampling. With cluster sampling, every member of the population is assigned to
one, and only one, group. Each group is called a cluster. A sample of clusters is chosen,
using a probability method (often simple random sampling). Only individuals within
sampled clusters are surveyed.
Note the difference between cluster sampling and stratified sampling. With stratified sampling,
the sample includes elements from each stratum. With cluster sampling, in contrast, the sample
includes elements only from sampled clusters.
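To make these distinctions concrete, here is a minimal sketch in Python using only the standard library; the population of households, the regions (strata) and the villages (clusters) are hypothetical, and the sample sizes are arbitrary.

import random

# Hypothetical population: 200 households, each tagged with a region
# (stratum) and a village (cluster).
regions = ["north", "east", "south", "west"]
population = [
    {"id": i, "region": regions[i % 4], "village": f"village_{i % 10}"}
    for i in range(200)
]

# Simple random sample: every household has an equal, known chance.
simple_sample = random.sample(population, 20)

# Stratified sample: draw households from EVERY region.
stratified_sample = []
for region in regions:
    stratum = [hh for hh in population if hh["region"] == region]
    stratified_sample.extend(random.sample(stratum, 5))

# Cluster sample: randomly choose whole villages, then survey ONLY
# the households in the chosen villages.
chosen_villages = random.sample([f"village_{v}" for v in range(10)], 2)
cluster_sample = [hh for hh in population if hh["village"] in chosen_villages]

print(len(simple_sample), len(stratified_sample), len(cluster_sample))

Note how the stratified sample contains households from all four regions, while the cluster sample contains households from only the two sampled villages.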
Bias: a distortion that results in the information not being representative of the true situation on
the ground.
Sources of bias
1. defective research instruments, e.g. fixed or closed questions on situations about which we do
not have full knowledge
2. open-ended questions without adequate guidance on how to answer or ask them
3. vaguely phrased questions
4. poor sequencing of questions
5. measurements taken without standardised scales
Qualitative approach
Involves the identification and exploration of a number of often-related variables that give an
insight into the nature and causes of certain problems and into the consequences of those
problems for those affected.
Examples are unstructured, often open-ended interviews and focus group discussions.
Quantitative approach
Comprises techniques that quantify the size, distribution and association of certain variables in a
population.
Data can be obtained from structured or pre-coded questions.
Observations
Definition: a technique that involves the systematic selection, watching and recording of the
behaviour and characteristics of living beings, objects or phenomena. Much of it is used in
studies of human behaviour.
Two types of observation techniques:
1. Participant observation: the observer takes part in the situation being observed.
2. Non-participant observation: the observer watches the situation, openly or concealed, but does
not participate.
Advantages: can give additional, more accurate information; can check on information collected
through questionnaires.
Limitations: very time consuming; bias can be introduced during participation.
Interviewing
An interview is a data collection technique that involves oral questioning of respondents, either
individually or as a group. Responses are recorded in writing or by electronic media, e.g. audio
or video recording.
Advantages (of flexible, unstructured interviewing): high degree of flexibility; interview
schedules can cater for a broad range of topics; additional questions can be asked on the spot.
Disadvantages (of rigid, structured interviewing): low degree of flexibility; a fixed list of
questions in a standard sequence.
Structured interviews follow a fixed set of questions; unstructured interviews do not have any
pre-prepared questions; semi-structured interviews combine the two, with the interviewer asking
some set questions but adding others in order to follow a line of inquiry that comes up.
Questionnaire
Definition: a data collection tool in which written questions are presented to be answered by the
respondents.
Postal: a variation of the postal questionnaire is the internet questionnaire.
Gives a broader range of responses
Prepare your questionnaire and post/mail it to the targeted respondents.
They return it to you by post.
Include a stamped, addressed return envelope.
Have clear instructions on how the questions are to be answered.
Canvassing: the respondent gives oral responses at a certain place or time, which are recorded
as closely as possible in their own words.
Most censuses and surveys use this approach.
Data analysis
Score sheets
Frequency tables
Cross-tabulations
Measures of central tendency: mean, median, mode
Measures of dispersion: range, standard deviation, variance
Statistical tests of relationships: correlation analyses (a minimal sketch of these calculations
follows below)
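The following minimal sketch shows how these summary measures could be computed with Python's standard statistics module; the yield and attendance figures are invented for illustration, and statistics.correlation requires Python 3.10 or later.

import statistics

# Hypothetical monitoring data: maize yield (t/ha) for ten beneficiary
# farmers, and the number of training sessions each farmer attended.
yields = [1.8, 2.4, 2.1, 3.0, 2.4, 1.5, 2.8, 2.4, 1.9, 2.6]
sessions = [2, 4, 3, 6, 4, 1, 5, 4, 2, 5]

# Measures of central tendency
print("mean:  ", statistics.mean(yields))
print("median:", statistics.median(yields))
print("mode:  ", statistics.mode(yields))

# Measures of dispersion
print("range: ", max(yields) - min(yields))
print("stdev: ", statistics.stdev(yields))     # sample standard deviation
print("var:   ", statistics.variance(yields))  # sample variance

# Statistical test of relationship: Pearson correlation between
# training attendance and yield (Python 3.10+).
print("corr:  ", statistics.correlation(sessions, yields))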
Data Presentation
1. Graphs: show detailed results in an annotated form.
2. Plotted curves: used for data where the relationship between two variables can be presented
as a continuum; the standard error can be calculated.
3. Scatter diagrams: used to visualise the relationship between individual data values for two
variables, as a preliminary part of correlation analysis.
4. Bar graphs: represent the frequency distribution of discrete qualitative or quantitative data;
an alternative is a line graph.
5. Pie charts: illustrate portions of a whole.
6. Tables: columns allow cross-comparison; give each table a brief descriptive title; tables should
communicate information and allow comparison, not simply put results on paper.
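As a minimal sketch of two of these presentation formats, the fragment below draws a bar graph and a pie chart with the matplotlib library (assumed to be installed); the district names, household counts and budget shares are invented for illustration.

import matplotlib.pyplot as plt

# Hypothetical evaluation results: households reached per district
# (bar graph) and project budget shares (pie chart).
districts = ["District A", "District B", "District C", "District D"]
households = [320, 210, 150, 400]
budget_labels = ["Inputs", "Training", "Transport", "Administration"]
budget_shares = [45, 25, 20, 10]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: frequency distribution of discrete data.
ax1.bar(districts, households)
ax1.set_title("Households reached per district")
ax1.set_ylabel("Households")

# Pie chart: portions of a whole.
ax2.pie(budget_shares, labels=budget_labels, autopct="%1.0f%%")
ax2.set_title("Budget shares")

plt.tight_layout()
plt.show()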