PMIZ M&E Handout

This document outlines a module on project monitoring and evaluation taught by the Project Management Institute of Zimbabwe. The 32-hour module aims to equip students with basic knowledge of monitoring and evaluation. It covers key concepts such as defining monitoring and evaluation, the project cycle, and tools for monitoring project implementation and evaluating outcomes. Students learn techniques for tracking project progress, reporting results, and incorporating lessons learned to improve future projects.

PROJECT MANAGEMENT INSTITUTE OF ZIMBABWE [PMIZ]

PMIZ Module 4 CPM M&E

CERTIFICATE IN PROJECT MONITORING AND EVALUATION


By

N. Chanza and N. Mujere

University of Zimbabwe

Preamble
This module introduces the concepts and values of monitoring and evaluation in
project/programme implementation. It will equip students with basic knowledge on monitoring
and evaluation.

The module (PMIZ Module 4 CPM M&E) is taught over 32 hours. During that period students
complete 2 assignments, 2 practicals and one 1-hour class test. Practicals cover groupwork and
class activities, which are recorded.

Aim of the Module: At the end of the module, students should be able to apply the fundamental
tools and techniques of project monitoring and evaluation in a formal project environment.

OBJECTIVES:
At the end of the course, students should be able to:
1. Define the terms monitoring and evaluation.
2. Explain the role of project monitoring and evaluation within the PMBOK framework.
3. Discuss the characteristics of an effective M & E system.
4. Describe the various tools and techniques of tracking project execution and performance against
the project quality plan.
5. Discuss the importance of project change control mechanisms and their effects on the project
performance metrics.
6. Describe the tools and techniques for reporting project progress to the various stakeholders.
7. Discuss the importance of implementing corrective actions and continuous improvement to non-
conforming activities within the project.

PROJECT MANAGEMENT INSTITUTE OF ZIMBABWE (PMIZ) M&E Learning Resources


CONTENT

1. INTRODUCTION TO MONITORING AND EVALUATION (M&E)


- Define "monitoring" and "evaluation".
- Outline the rationale for monitoring and evaluation.
- Define key outcome terms – effectiveness, efficiency, relevance, impact and sustainability.
- Compare monitoring and evaluation.
- State the aims of project monitoring.
- Identify the reasons for conducting project evaluation.

2. TYPES OF EVALUATIONS
- Describe briefly these approaches to evaluation – (outcome evaluation, impact evaluation,
performance evaluation, participatory evaluation, empowerment evaluation and feminist
evaluation).
- State the classes of evaluation – internal and external.
- Describe the various types of evaluation – ex-ante evaluation, mid-term evaluation, impact
evaluation, formative evaluation (monitoring), summative evaluation (ex-post evaluation).
- Identify and state the advantages and disadvantages of each of the evaluation types.
- Outline the different fields of evaluation (programme / project evaluation, policy evaluation,
legislation evaluation, technical assistance evaluation, organisation or institutional evaluation,
product / service evaluation and proposal assessment).

Designing an M & E System


- Identify the different characteristics of an effective M & E system.
- Briefly explain the different steps in designing a Monitoring System.
- Design an evaluation process system (terms of reference /feasibility study /project proposal).
- State the problems emanating from decisions of what needs to be controlled and suggest
possible means of avoiding them.

3. PLANNING A MONITORING & EVALUATION SYSTEM (M & E)


- Explain what is meant by "planning an M & E system".
- Outline the principles of planning.
- Explain the levels of monitoring and evaluation (inputs, activities, outputs, outcomes and
impacts).
- Identify and explain the steps of planning for monitoring and evaluation.
- Explain briefly the steps in developing indicators.
- Classify information (quantitative and qualitative).
- Identify the participants in monitoring and evaluation.

4. MONITORING AND EVALUATION TECHNIQUES


- Identify the project integration management components.
- Draw a simple Logical Framework Matrix.
- Draw simple Gantt Charts / Load Charts.
- Discuss CPA/ Pert Network Analysis (Key Terms and Logical Relationships).
- Outline the need for earned value and calculate related indicators.
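The earned value indicators named in the outline above can be sketched in a few lines. This is a minimal illustration using the standard EVM definitions (PV, EV, CV, SV, CPI, SPI, EAC); the budget and percentage figures are invented for the example and are not from this handout.

```python
# Hedged sketch of standard earned value management (EVM) indicators.
# All input numbers below are illustrative, not real project data.

def earned_value_indicators(bac, planned_pct, earned_pct, actual_cost):
    """Compute EVM indicators from the Budget At Completion (BAC),
    planned and earned percent complete, and Actual Cost (AC)."""
    pv = bac * planned_pct           # Planned Value (budgeted cost of work scheduled)
    ev = bac * earned_pct            # Earned Value (budgeted cost of work performed)
    cv = ev - actual_cost            # Cost Variance (> 0 means under budget)
    sv = ev - pv                     # Schedule Variance (> 0 means ahead of schedule)
    cpi = ev / actual_cost           # Cost Performance Index
    spi = ev / pv                    # Schedule Performance Index
    eac = bac / cpi                  # Estimate At Completion (simple CPI method)
    return {"PV": pv, "EV": ev, "CV": cv, "SV": sv,
            "CPI": round(cpi, 2), "SPI": round(spi, 2), "EAC": round(eac, 2)}

# Hypothetical project: $100,000 budget, 50% planned, 40% earned, $45,000 spent.
print(earned_value_indicators(bac=100_000, planned_pct=0.50,
                              earned_pct=0.40, actual_cost=45_000))
```

With these example figures, both CPI and SPI fall below 1, flagging a project that is over cost and behind schedule at the monitoring date.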



5. PROJECT TRACKING AND CONTROL
- State the benefits of managing the implementation of change.
- Outline the change management process.
- Explain and illustrate some change management instruments.
- Narrate the project communication change process.
- Discuss the components of project closeout [historical data, project closeout report, questionnaire
approach, recommendations and storage].

Data Collection and Analysis Procedures


- Define the terms "population" and "sample".
- Identify and give examples of the sources of M & E information (primary and secondary).
- Describe the following sampling techniques: random sampling, stratified sampling, cluster
sampling and quota sampling.
- Discuss the following data collection instruments: questionnaires, observation, interviews and
brainstorming.
- Identify the data analysis types: scoring procedures, tabulation procedures and descriptive
statistics (graphing data, measures of central tendency, measures of dispersion and computer
applications).
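The descriptive statistics named in the list above (measures of central tendency and dispersion) can be computed with Python's standard library. The scores below are a hypothetical data set invented for this example.

```python
# Sketch only: central tendency and dispersion for a small, invented
# set of indicator values, using Python's built-in statistics module.
import statistics

scores = [12, 15, 15, 18, 20, 22, 25]  # hypothetical survey scores

mean = statistics.mean(scores)          # central tendency: arithmetic mean
median = statistics.median(scores)      # central tendency: middle value
mode = statistics.mode(scores)          # central tendency: most frequent value
stdev = statistics.stdev(scores)        # dispersion: sample standard deviation
data_range = max(scores) - min(scores)  # dispersion: range

print(mean, median, mode, round(stdev, 2), data_range)
```

In an M&E report these summaries would typically accompany the graphed data rather than replace it.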

6. PROGRESS REPORT WRITING.


- Define a progress report.
- List the parts of a progress report: title page, purpose statement, executive summary,
introduction, the time pattern or the task pattern (discussion, work completed and work
remaining, or data capture proforma), conclusion, references and appendices.
- Develop a progress report document.



1. INTRODUCTION TO PROJECT MONITORING AND EVALUATION

Over the years, many organisations have implemented various projects in different regions.
However, an objective assessment of project performance requires monitoring and evaluation
tools to be applied. Thus, a set of project design and management tools has been adopted by
development organisations such as the EU, GTZ, DANIDA and others.

A project is a short-term building block for any development activity, while a programme comprises
many projects and is long-term. For example, the land reform programme in Zimbabwe consists of
projects such as farm mechanisation, input supply, farm tillage and farmer training.

The concept of the project cycle:

The way in which programmes or projects are planned and carried out follows a sequence
beginning with an agreed strategy, which leads to an idea for a specific action, which then is
formulated, implemented, and evaluated with a view to improving the strategy and further action.
This sequence is called the project cycle.

[Figure: The project cycle. Programming → Identification → Formulation → Appraisal/Financing → Implementation (with formative and mid-term evaluation) → Summative Evaluation → Exit]


Programming
Programming is concerned with the establishment of general principles and guidelines for projects
and programmes.

Identification
Problems, needs and interests of possible stakeholders are analysed, and ideas for projects and other
actions are identified and formulated in broad terms. This involves a study of the project context to
obtain an idea of the relevance, feasibility and sustainability of the proposal.

Formulation
During the formulation phase the promoters and project leaders engage in an intensive and
participatory process of information collection and analysis followed by a planning process that
includes operational issues such as activity and resource scheduling. This phase of the cycle leads
to final project proposals that can be submitted for a funding decision.

Appraisal

With reference to pre-determined criteria, the project proposals are analysed and prioritised.
Project cost-benefit analyses are conducted. The outcome of the appraisal phase is an acceptance
or rejection of projects, i.e. implementation or funding decisions.

Implementation, monitoring and mid-term evaluation


In this project phase all actors are involved. Project activities are undertaken as planned, and proper
monitoring of the output delivery, implementation process, management and assumptions allows
for timely corrections and adaptations as and when required. During implementation, mid-term
evaluations may be conducted to review the extent to which results and objectives are being
attained. Progress reports are produced, and the planned implementation process is adjusted as
appropriate to ensure the achievement of the intended objectives.

Evaluation
The aim of an evaluation is to determine the relevance, effectiveness, efficiency, impact and
sustainability of the intervention. An evaluation should provide information that is credible and
useful, enabling the incorporation of lessons learned into the decision-making process of both
recipients and donors. Such an evaluation can be conducted at the end of the implementation phase
(final evaluation) or afterwards (ex-post evaluation). In addition to the various project partners,
selected external institutions and independent experts are important actors during this phase.
The outcome may consist of lessons learned and feedback that is channelled into future
programme frameworks.



MONITORING AND EVALUATION

Although the term "monitoring and evaluation" tends to be run together as if it refers to only one
thing, monitoring and evaluation are, in fact, two distinct but related sets of organisational
activities.

Monitoring is the systematic collection and analysis of information as a project progresses.


It is aimed at improving the efficiency and effectiveness of a project or organisation. It is based on
targets set and activities planned during the planning phases of work. It helps to keep the work on
track, and can let management know when things are going wrong. If done properly, it is an
invaluable tool for good management, and it provides a useful base for evaluation. It enables you
to determine whether the resources you have available are sufficient and are being well used,
whether the capacity you have is sufficient and appropriate, and whether you are doing what you
planned to do. Monitoring involves:
 Establishing indicators of efficiency, effectiveness and impact;
 Setting up systems to collect information relating to these indicators;
 Collecting and recording the information;
 Analysing the information; and
 Using the information to inform day-to-day management.
Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks
at what you set out to do, at what you have accomplished, and how you accomplished it.
Evaluation involves:
 Looking at what the project or organisation intended to achieve – what difference did it want to
make? What impact did it want to make?
 Assessing its progress towards what it wanted to achieve, its impact targets.
 Looking at the strategy of the project or organisation. Did it have a strategy? Was it effective
in following its strategy? Did the strategy work? If not, why not?
 Looking at how it worked. Was there an efficient use of resources? What were the opportunity
costs of the way it chose to work? How sustainable is the way in which the project or
organisation works? What are the implications for the various stakeholders in the way the
organisation works?

Why do we monitor and evaluate?

Importance of monitoring
 Determines the progress in implementing the project
 Continuously identifies and resolves any problems arising during project implementation
 Continuously tracks the trends of project activities.
 Tracks project outcomes

Importance of evaluation
 To analyse gaps in performance
 To criticize our own work
 To make our work more effective
 To help us see where we are going and whether we need to change
 To be able to share our experiences
 To see if our work is costing too much and achieving too little
 To see where our strengths and weaknesses are



 To help us make better plans for the future
 To compare with others
 To be able to improve our monitoring methods

Differences between monitoring and evaluation


Monitoring:
- Continuous tracking of the project.
- Looks at operational implementation.
- Uses data from project records, e.g. an attendance list for a training session.
- Indicators are tracked regularly and frequently; e.g. data on inputs (especially expenditures
on food and cash) are reported on a quarterly basis.
- Results are used to see whether the expected changes have occurred during the Life of
Activity (LOA).

Evaluation:
- Periodic.
- Looks at population effects.
- Uses survey data, e.g. beneficiary- or population-level measurements of project outcomes
and impacts.
- Indicators are assessed infrequently, mainly outcome and impact indicators.
- Results are also used to decide whether the project warrants continuation.

What do we monitor and evaluate?


What monitoring and evaluation have in common is that they are geared towards learning
from what you are doing and how you are doing it, by focusing on:

Relevance: are the objectives and methods appropriate to solve the challenge at hand?
Efficiency tells you whether the input into the work is appropriate in terms of the output. This
could be input in terms of money, time, staff, equipment and so on. When you run a project
and are concerned about its replicability or about going to scale, then it is very important to get the
efficiency element right.

Effectiveness is a measure of the extent to which a development programme or project achieves


the specific objectives it set. If, for example, we set out to improve the qualifications of all the
high school teachers in a particular area, did we succeed?

Impact tells you whether or not what you did made a difference to the problem situation you were
trying to address. In other words, was your strategy useful? Did ensuring that teachers were better
qualified improve the pass rate in the final year of school? Before you decide to get bigger, or to
replicate the project elsewhere, you need to be sure that what you are doing makes sense in terms
of the impact you want to achieve. From this it should be clear that monitoring and evaluation are
best done when there has been proper planning against which to assess progress and achievements.



Sustainability tells you whether beneficiaries will continue enjoying project benefits when external
funding or support has ceased.

2. TYPES OF EVALUATIONS

The process of determining a project's effectiveness, merit or impact can be put into various
classes. These categories are chiefly influenced by the phase of the project life cycle. The
following types are worth noting:

Process evaluation. A type of evaluation that focuses on program/ intervention implementation,


including, but not limited to, access to services; whether services reach the intended population;
how services are delivered; client satisfaction and perceptions about needs and services; and
management practices. In addition, a process evaluation might provide an understanding of
cultural, socio-political, legal, and economic contexts that affect implementation of the
program/intervention.

Outcome evaluation. A type of evaluation that determines if, and by how much, intervention
activities or services achieved their intended outcomes. An outcome evaluation attempts to
attribute observed changes to the intervention tested. Note: An outcome evaluation is
methodologically rigorous and generally requires a comparative element in its design, such as a
control or comparison group, although it is possible to use statistical techniques in some instances
when control/comparison groups are not available (e.g., for the evaluation of a national program).

Impact evaluation. A type of evaluation that assesses the rise and fall of impacts, such as disease
prevalence and incidence, as a function of HIV programs/interventions. Impacts on a population
seldom can be attributed to a single program/intervention; therefore, an evaluation of impacts on a
population generally entails a rigorous design that assesses the combined effects of a number of
programs/ interventions for at-risk populations.

Categories of evaluation
Evaluation can also be understood by the level of independence of the evaluators and their nature
of relationship with the organisation or firm carrying out the assessment. This leads us to two main
branches of evaluation.

Internal evaluation
An evaluation of an intervention conducted by a unit and/or individuals who report to the
management of the organization responsible for the financial support, design and/or
implementation of the intervention.

External evaluation
Type of evaluation designed to make an independent assessment of impacts of interventions
usually through engaging an external consultant or outsourcing. It can also be used to assess the
performance of internal evaluators.

It is also important to note that evaluation is meant to assess the efficacy of any developmental
intervention. Therefore, there are different fields of evaluation determined by the nature of the
intervention, that is, programme, project or policy evaluation; legislation evaluation, technical



assistance evaluation, organisation or institutional evaluation, product or service evaluation and
proposal assessment.

4. Monitoring and Evaluation Techniques

The Logical Framework


The Logical Framework Approach (LFA) is a long established activity design methodology
used by a range of major multilateral and bilateral donors for systematically analysing a
development situation.
LFA can be used throughout management of interventions in the following aspects:
• identifying and assessing activity options
• preparing the activity design in a systematic and logical way
• appraising activity designs
• implementing approved Activities, and
• monitoring, reviewing and evaluating activity progress and performance.

The general structure of a Logframe Matrix is shown in Figure 1 below.


Figure 1: General structure of a Logframe Matrix (columns: Project Description, Indicators,
Means of Verification (MOV), Assumptions)

Goal
- Project Description: The broader development impact to which the project/program
contributes, at a national and/or sectoral level.
- Indicators: Measures of the extent to which a contribution to the goal has been made. Used
during evaluation.
- MOV: Sources of information and methods used to collect and report it.

Purpose
- Project Description: The development outcome expected at the end of the project. All
components will contribute to this.
- Indicators: Conditions at the end of the project indicating that the Purpose has been
achieved. Used for project completion and evaluation.
- MOV: Sources of information and methods used to collect and report it.
- Assumptions: Assumptions concerning the purpose/goal linkage.

Component Objectives
- Project Description: The expected outcome of producing each component's outputs.
- Indicators: Measures of the extent to which component objectives have been achieved. Used
during review and evaluation.
- MOV: Sources of information and methods used to collect and report it.
- Assumptions: Assumptions concerning the component objective/purpose linkage.

Outputs
- Project Description: The direct measurable results (goods and services) of the project which
are largely under project management's control.
- Indicators: Measures of the quantity and quality of outputs and the timing of their delivery.
Used during monitoring and review.
- MOV: Sources of information and methods used to collect and report it.
- Assumptions: Assumptions concerning the output/component objective linkage.

Activities
- Project Description: The tasks carried out to implement the project and deliver the identified
outputs.
- Indicators: Implementation/work program targets. Used during monitoring.
- MOV: Sources of information and methods used to collect and report it.
- Assumptions: Assumptions concerning the activity/output linkage.

A brief description of the terminology is given below:
Project description provides a narrative summary of what the project intends to achieve and how.
It describes the means by which desired ends are to be achieved.

Goal refers to the sectoral or national objectives for which the project is designed to contribute,
e.g. increased incomes, improved nutritional status, reduced crime. It can also be referred to as
describing the expected impact of the project. The goal is thus a statement of intention that
explains the main reason for undertaking the project.

Purpose refers to what the project is expected to achieve in terms of development outcome.
Examples might include increased agricultural production, higher immunization coverage, cleaner
water, or improved local management systems and capacity. There should generally be only one
purpose statement.
Component Objectives: Where the project/program is relatively large and has a number of
components, it is useful to give each component an objective statement. These statements should
provide a logical link between the outputs of that component and the project purpose. Poorly
stated objectives limit the capacity of M&E to provide useful assessments for decision-making,
accountability and learning purposes.
Outputs refer to the specific results and tangible products (goods and services) produced by
undertaking a series of tasks or activities. Each component should have at least one contributing
output, and often has up to four or five. The delivery of project outputs should be largely under
project management's control.

Activities refer to all the specific tasks undertaken to achieve the required outputs. There are many
tasks and steps to achieve an output. However, the logical frame matrix should not include too
much detail on activities because it becomes too lengthy. If detailed activity specification is
required, this should be presented separately in an activity schedule/Gantt chart format and not in
the matrix itself.

Inputs refer to the resources required to undertake the activities and produce the outputs, e.g.,
personnel, equipment and materials. The specific inputs should not be included in the matrix
format.
Assumptions refer to conditions which could affect the progress or success of the project, but over
which the project manager has no direct control, e.g. price changes, rainfall, political situation,
etc. An assumption is a positive statement of a condition that must be met in order for project
objectives to be achieved. A risk is a negative statement of what might prevent objectives being
achieved.
Indicators refer to the information that helps us determine progress towards meeting project
objectives. An indicator should provide, where possible, a clearly defined unit of measurement
and a target detailing the quantity, quality and timing of expected results. Indicators should be
relevant, independent, and capable of being precisely and objectively defined, in order to
demonstrate that the objectives of the project have been achieved (see below).
Means of verification (MOVs). Means of verification should clearly specify the expected source
of the information we need to collect. We need to consider how the information will be collected
(method), who will be responsible, and the frequency with which the information should be
provided. In short MOVs specify the means to ensure that the indicators can be measured
effectively, i.e. specification of the indicators, types of data, sources of information, and collection
techniques.
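To make the relationship between levels, indicators, MOVs and assumptions concrete, one row of a logframe matrix can be represented as a small data structure. This is an illustrative sketch only; the field names and all example values are invented for this example and are not part of the handout.

```python
# Illustrative sketch: one logframe row as a data class, so each level
# keeps its indicators, means of verification (MOV) and assumptions
# together. All example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    level: str                                    # Goal, Purpose, Output, ...
    description: str                              # narrative summary
    indicators: list = field(default_factory=list)
    mov: list = field(default_factory=list)       # means of verification
    assumptions: list = field(default_factory=list)

purpose = LogframeRow(
    level="Purpose",
    description="Increased agricultural production in the district",
    indicators=["Average maize yield (t/ha) per season, with target and timing"],
    mov=["Annual crop assessment survey, conducted by the project M&E officer"],
    assumptions=["Rainfall remains within the normal range"],
)

# A complete logframe is then an ordered list of rows, Goal first.
logframe = [purpose]
print(purpose.level, purpose.indicators[0])
```

Keeping each indicator next to its MOV in this way enforces the rule, stated above, that every indicator must name its information source, collection method and frequency.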
2.2 Link Between the Logical Frame and Monitoring and Evaluation
The horizontal logic of the matrix helps establish the basis for monitoring and evaluating the
project by asking how outputs, objectives, purpose and goal can be measured, and what are the
suitable indicators. The following table summarizes the link between the logical frame and
monitoring and evaluation.
Logical frame hierarchy / Type of monitoring and evaluation activity / Indicators:
- Goal: ex-post evaluation; impact indicators.
- Purpose: program review; outcome indicators.
- Component Objectives: periodic and final evaluation; outcome indicators.
- Outputs: monitoring/periodic evaluation; output indicators.
- Activities/Inputs: monitoring; output indicators.

It is worth noting that the above table represents a simplified framework and should be interpreted
in a suitably flexible manner. For example, ex-post evaluation assesses whether or not the purpose,
component objectives and outputs have been achieved. Project/program reviews are concerned
with performance in output delivery and the extent of achieving objectives.
Indicators
Indicators provide the quantitative and qualitative details to a set of
objectives. They are statements about the situation that will exist when an
objective is reached, therefore, they are measures used to demonstrate
changes in certain conditions or results of an activity, a project or a program.
In addition, they provide evidence of the progress of program or project
activities in the attainment of development objectives. Indicators should be
pre-established, i.e. during the project design phase. When a direct measure
is not feasible, indirect or proxy indicators may be used.
Indicators should be directly linked to the level of assessment (e.g. output
indicators, outcome indicators or impact indicators). Output indicators show
the immediate physical and financial outputs of the project. Early
indications of impact (outcomes) may be obtained by surveying beneficiaries'
perceptions about project services. Impact refers to long-term developmental
change. Measures of change often involve complex statistics about economic
or social welfare and depend on data that are gathered from beneficiaries.
They should also be clearly phrased to include change in a situation within a
geographical location, time frame, target etc. A popular code for
remembering the characteristics of good indicators is SMART.
S: Specific
M: Measurable
A: Attainable (i.e., can be checked)
R: Relevant (reflect changes in the situation)
T: Trackable (can be tracked over a specific period of time)
Source: ITAD, Monitoring and the Use of Indicators, consultancy report to DG VIII, European Commission, Brussels, 1996.
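As a rough illustration of the SMART code above, the sketch below flags which criteria an indicator record appears to satisfy. The field names and the mapping of fields to criteria are assumptions made for this example, not a standard validation method.

```python
# Hedged sketch: a checklist-style helper mapping fields of an indicator
# record to the SMART criteria. The mapping is deliberately simplistic
# and illustrative; real indicator appraisal needs expert judgement.

def smart_checklist(indicator: dict) -> dict:
    """indicator uses hypothetical keys: unit, target, baseline,
    deadline, objective_link."""
    return {
        "Specific":   bool(indicator.get("unit")),            # defined unit of measurement
        "Measurable": indicator.get("target") is not None,    # quantified target
        "Attainable": indicator.get("baseline") is not None,  # baseline allows checking
        "Relevant":   bool(indicator.get("objective_link")),  # tied to an objective level
        "Trackable":  bool(indicator.get("deadline")),        # time frame for tracking
    }

example = {"unit": "% of teachers with required qualification", "target": 80,
           "baseline": 55, "deadline": "end of year 3", "objective_link": "Purpose"}
print(smart_checklist(example))
```

An indicator record missing any of these fields would show a False flag, prompting the designer to tighten the statement before the project baseline is set.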
Notes and Comments:
1. Classifying project objectives into different levels requires management to develop systems
that provide information at all levels, from basic accounting through sophisticated studies, in
order to measure project outcomes.
2. There are different means for measuring project indicators:
Input indicators can be provided from management and accounting records. Input indicators
are used mainly by managers closest to the tasks of implementation.
Output indicators are directly linked to project activities and inputs. Management of the
project have control of project activities and their direct results or outputs. Those outputs can
be verified through internal record keeping and analysis.
By contrast, the achievement of project objectives normally depends on a number of factors.
Some might be controlled by the project; others cannot. For example, the response of
beneficiaries to project services is beyond the control of the project. Responses of beneficiaries
regarding benefits brought to them by the project require consultation and data collection.
Project outcomes are often measured through the assessment of indicators that focus on
whether beneficiaries have access to project services, their level of usage and their satisfaction
with services. Such evidence can also be provided through impact research,
e.g. changes in health status or improvements in income.
3. Exogenous indicators focus on general social, economic and environmental factors that are
outside the control of the project but which might affect its outcome. Those factors might include
the performance of the sector in which the project operates. Gathering data on project indicators
and the wider environment places an additional burden on the project's M&E effort.
4. The importance of indicators may change during project implementation. For example, at an
early stage of the project, monitoring and evaluation focus on input and process
indicators. Emphasis shifts later to outputs and impact. In other words, emphasis is first placed
on indicators of implementation progress, and later on indicators of development results.
M&E designers should examine existing record keeping and reporting procedures used by the
project authorities in order to assess the capacity to generate the data that will be needed.
5. Some of the impact indicators, such as mortality rates or improvement of the household
income, are hard to attribute to the project in a cause-effect relation. In general, the higher the
objective, the more difficult the cause-effect linkages become. Project impact will almost
certainly be the result of a variety of factors, including that of the project itself. In such situations,
the evaluation team might use comparisons with the situation before the project (baseline data),
or with areas not covered by the project.
6. To maximize the benefits of M&E, the project should develop mechanisms to incorporate the
findings, recommendations and lessons learned from evaluations into the various phases of the
program or project cycle.

3. PLANNING A MONITORING AND EVALUATION SYSTEM

Monitoring and evaluation should be part of your planning process. It is very difficult to go back
and set up monitoring and evaluation systems once things have begun to happen. You need to
begin gathering information about performance in relation to targets from the word go. The
first information gathering should, in fact, take place when you do your needs assessment (see the
toolkit on the overview of planning, the section on doing the groundwork). This will give you the
information against which to assess improvements over time. When you do your planning, you
will set indicators. These indicators provide the framework for your monitoring and evaluation
system: they tell you WHAT you want to know and the kinds of information it will be useful to
collect. In this section we look at:

What do we want to know?


What we want to know is linked to what we think is important. In development work, what we think
is important is linked to our values. This includes looking at indicators for both internal issues and
external issues. So, what we need to know is: is what we are doing, and how we are doing it,
meeting the requirements of these values?

In order to answer this question, our monitoring and evaluation system must give us information
about:
 Who is benefiting from what we do? How much are they benefiting?
 Are beneficiaries passive recipients or does the process enable them to have some control over
their lives?
 Are there lessons in what we are doing that have a broader impact than just what is happening
on our project?
 Can what we are doing be sustained in some way for the long-term, or will the impact of our
work cease when we leave?
 Are we getting optimum outputs for the least possible amount of inputs?

Levels of monitoring and evaluation


Broadly, monitoring is defined as the regular collection of information to assess progress in the
implementation of the work plan, and evaluation as the periodic collection of information to assess
progress in changing the practices and well-being of target populations. When designed together,
these two functions should capture the various moments in the life of the project as resources are
transformed into outcomes and impacts.

The sequence, described in the diagrams below, generally is as follows: the project first mobilizes
a set of inputs (human and financial resources, equipment, etc), which it submits to processes
(training sessions, infrastructure building) that generate outputs (e.g. number of people trained;
kilometers of road built). Outputs in turn translate into outcomes (e.g. increased knowledge;
improved practices) at the beneficiary level—outcomes which, once spread to the rest of the
population, result in population-level impacts (reduced malnutrition; improved incomes; improved
yields, etc). The M&E system must reflect this sequence closely, using verifiable indicators. In
addition, the M&E system should track external factors such as rainfall, policies and market prices
in order to warn against, and mitigate, their possible negative influence on local conditions.
Having such external data will also help put the project into context when explaining results.

INPUTS: the basic resources necessary - the financial, material and human resources needed to
implement the operation or project, e.g. policies, people, equipment, money, logistics,
management.

ACTIVITIES: the actions taken or work performed through which inputs are mobilised to produce
specific outputs, e.g. training, services.

OUTPUTS: results at the programme level (a measure of programme activities) - the products,
capital goods and services which result directly from an operation, e.g. knowledge.

OUTCOMES: results at the level of the target population - the medium-term results of an
operation's outputs, e.g. changed behaviour, safer practices.

IMPACT: the ultimate effect of the project in the long term - the positive and negative, intended
or unintended long-term results produced by an operation, either directly or indirectly, e.g.
reductions in mortality, morbidity, HIV prevalence, TB incidence.

How will we get information?


This refers to sources of different kinds of primary and secondary information. Your methods for
collecting information need to be built into your action planning. You should aim to have a
steady stream of information flowing into the project or organisation about the work and how it is
done, without overloading anyone. The information you collect must mean something: don't
collect information to keep busy; only do it to find out what you want to know, and then make sure
that you store the information in such a way that it is easy to access. Usually you can use the
reports, minutes, attendance registers and financial statements that are part of your work anyway
as a source of monitoring and evaluation information. However, sometimes you need special tools,
simple but useful, to add to the basic information collected in the natural course of your work.
Some of the more common ones are:
_ Case studies
_ Recorded observation
_ Diaries
_ Recording and analysis of important incidents (called "critical incident analysis")
_ Structured questionnaires
_ One-on-one interviews
_ Focus groups
_ Sample surveys
_ Systematic review of relevant official statistics.

Who should be involved?


You need to identify the major stakeholders: the target groups and beneficiaries whose challenges
will be addressed by the intervention. Almost everyone in the organisation or project will be
involved in some way in collecting information that can be used in monitoring and evaluation.

Steps for planning for monitoring and evaluation


1. Collect baseline data
2. Determine objectives
3. List activities, their relationship to the strategy, and the tasks they will require
4. Select indicators and set targets
5. Collect data
6. Analyse data
7. Take action

Components of operational planning

The activities defined during the formulation phase are generally not detailed enough for
implementing the project. They often need to be broken down into 'sub-activities' (which
contribute to the implementation of the activities, just as the activities contribute to the
results). While over-planning should be avoided, the following items should generally be
covered during operational planning:

 adequate timing of activities


 adequate division of tasks and responsibilities
 adequate estimation of means and a precise cost calculation

Therefore, Gantt charts or work plan tables are mostly used (see the example activity chart below).

Description of activity    1st month    2nd month    3rd month    4th month
1.
2.
3.

Indicators
Indicators describe the project's objectives in operationally measurable terms (quantity, quality,
target group(s), time, place). Indicators are measurable or tangible signs that something has been
done. So, for example, an increase in the number of students passing is an indicator of an improved
culture of learning and teaching. The means of verification (proof) is the officially published list of
passes.

A good indicator should be CREAM:

 C - Clear
 R - Relevant
 E - Economic
 A - Achievable/attainable
 M - Measurable

Indicators are an essential part of a monitoring and evaluation system because they are what you
measure and/or monitor. Through the indicators you can ask and answer questions such as:
 Who?
 How many?
 How often?
 How much?

For example

Objective: To reduce pollution load of wastewater discharged into the Mukuvisi

Select the indicator: Concentration of heavy metal compounds (Pb, Cd, Hg)

Define the targets:

Quantity: concentration of heavy metal compounds is reduced by 75% compared to year X levels…
Quality: …so as to meet the limits for irrigation water…
Target group: …used by urban farmers in Harare…
Place: …in the Mukuvisi river section of the southern districts of Harare…
Time: …2 years after the project has started.
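Once a target is defined in these terms, monitoring can check measured values against it mechanically. A minimal Python sketch; the baseline figure and the function name are invented for illustration:

```python
# Hypothetical check of the target above: a 75% reduction against year-X levels.
baseline_mg_per_l = 12.0        # heavy metal concentration in year X (invented figure)
target_reduction = 0.75

def target_met(measured_mg_per_l: float) -> bool:
    """True if the measured concentration is at least 75% below the baseline."""
    return measured_mg_per_l <= baseline_mg_per_l * (1 - target_reduction)

print(target_met(2.5))   # 2.5 <= 3.0: target met
print(target_met(4.0))   # 4.0 > 3.0: target missed
```

The same pattern works for any quantitative target: store the baseline and the required change, then compare each new measurement against the implied threshold.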

Developing indicators
Often, the formulation of indicators is not an easy task. This is the case in many projects
that pursue qualitative or intangible outputs. In such cases the definition of appropriate indicators
may involve considerable interaction among stakeholders, and more than one indicator may be
needed to sufficiently describe a result or objective. It will not always be possible to find
indicators that fulfil all of these criteria. In that case, 'proxy indicators' might be
resorted to.

It is not always feasible to formulate indicators at the level of the overall objectives. As stated
above, the overall objectives refer to changes at the level of society to which the project intends to
contribute. The indicators should refer to the specific 'contribution' of the project to each of these
general objectives. However, in most cases the project's contribution is relatively small and, more
importantly, difficult to isolate. It is then not meaningful to formulate indicators, and the
corresponding cell can remain blank. Alternatively, indicators may be formulated without further
operationalisation, i.e. without trying to measure the project's performance against them.

DEVELOPING INDICATORS

Step 1: Identify the problem situation you are trying to address. The following might be
problems:
 Economic situation (unemployment, low incomes etc)
 Social situation (housing, health, education etc)
 Cultural or religious situation (not using traditional languages, low attendance at religious
services etc)
 Political or organisational situation (ineffective local government, faction fighting etc)

Step 2: Develop a vision for how you would like the problem areas to look. This will give you
impact indicators.
 What will tell you that the vision has been achieved?
 What measurable signs will "prove" that the vision has been achieved? For example, if your
vision was that the people in your community would be healthy, then you can use health
indicators to measure how well you are doing. Has the infant mortality rate gone down? Do
fewer women die during childbirth? Has the HIV/AIDS

infection rate been reduced? If you can answer "yes" to these questions, then progress is being
made.

Step 3: Develop a process vision for how you want things to be achieved. This will give you
process indicators. If, for example, you want success to be achieved through community efforts
and participation, then your process vision might include things like: community health workers
trained and offering a competent service used by all; the community organising clean-up events on
a regular basis; and so on.

Step 4: Develop indicators for effectiveness.


For example, if you believe that you can increase the secondary school pass rate by upgrading
teachers, then you need indicators that show you have been effective in upgrading the teachers e.g.
evidence from a survey in the schools, compared with a baseline survey.

Step 5: Develop indicators for your efficiency targets.


Here you can set indicators such as: planned workshops are run within the stated timeframe, costs
for workshops are kept to a maximum of US$ 2.50 per participant, no more than 160 hours in total
of staff time to be spent on organising a conference; no complaints about conference organisation
etc.
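Efficiency targets of this kind can be checked automatically once actuals are recorded. A small sketch; both the target limits and the actual figures are invented for illustration:

```python
# Hypothetical efficiency targets (modelled on the examples above) checked
# against invented actual figures once the work is done.
targets = {"cost_per_participant_usd": 2.50, "staff_hours_conference": 160}
actuals = {"cost_per_participant_usd": 2.10, "staff_hours_conference": 172}

# A target is met when the actual figure does not exceed the stated limit.
results = {name: actuals[name] <= limit for name, limit in targets.items()}
for name, met in results.items():
    print(f"{name}: {'met' if met else 'MISSED'}")
```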

DIFFERENT KINDS OF INFORMATION – QUANTITATIVE AND QUALITATIVE


Information used in monitoring and evaluation can be classified as:
 Quantitative; or
 Qualitative.

Quantitative measurement tells you ―how much or how many‖. How many people attended a
workshop, how many people passed their final examinations, how much a publication cost, how
many people were infected with HIV, how far people have to walk to get water or firewood.

Quantitative measurement can be expressed in absolute numbers (3241 women in the sample are
infected) or as a percentage (50% of households in the area have television aerials). It can also be
expressed as a ratio (one doctor for every 30 000 people). One way or another, you get
quantitative (number) information by counting or measuring.
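These three ways of expressing quantitative data are simple arithmetic on counts. A short sketch; the household and doctor counts are invented to match the figures quoted in the text:

```python
# Absolute number, percentage and ratio, from raw counts.
infected_women = 3241                                      # an absolute number

households_with_tv, households_total = 450, 900            # invented counts
percentage = 100 * households_with_tv / households_total   # 50.0 -> "50% of households"

people, doctors = 30_000, 1
people_per_doctor = people // doctors                      # "one doctor for every 30 000 people"

print(infected_women, f"{percentage:.0f}%", f"1:{people_per_doctor}")
```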

Qualitative measurement tells you how people feel about a situation or about how things are done
or how people behave. So, for example, although you might discover that 50% of the teachers in a
school are unhappy about the assessment criteria used, this is still qualitative information, not
quantitative information. You get qualitative information by asking, observing, interpreting.

Some people find quantitative information comforting: it seems solid, reliable and "objective".
They find qualitative information unconvincing and "subjective". It is a mistake to say that
"quantitative information speaks for itself": it requires just as much interpretation to make it
meaningful as qualitative information does. It may be a "fact" that enrolment of girls at schools
in some developing countries is dropping; counting can tell us that, but it tells us nothing about
why the drop is taking place. In order to know that, you would need to go out and ask questions,
that is, to get qualitative information.

The monitoring and evaluation process requires a combination of quantitative and qualitative
information in order to be comprehensive. For example, we need to know what the school
enrolment figures for girls are, as well as why parents do or do not send their children to school.
Perhaps enrolment figures are higher for girls than for boys because a particular community sees
schooling as a luxury and prefers to train boys for traditional and practical tasks such as taking
care of animals. In this case, the higher enrolment of girls does not necessarily indicate higher
regard for girls.

4. PROJECT TRACKING AND CONTROL

The Project Manager and team leaders need to be able to control the work of the team members.
They will need access to detailed information on such things as:
 what people are doing,
 how much time, resources and budget are being consumed,
 how efficiently work is being completed and resources are being utilised,
 which inter-dependencies and timing issues will impact upon progress,
 what are the future projections for the timetable and for resourcing requirements.
The senior leadership team, people such as the Steering Committee, Project Director, Project
Sponsors etc, need to understand how the project is progressing. They need:
 to be able to take executive action if problems are building up,
 to adjust resource and investment levels where necessary,
 to know when and where their support and sponsorship need to be applied to drive
progress forward,
 to ensure that the timetable and demands of this project are compatible with their overall
programme of projects and other activities,
 to understand what benefits will be delivered and when.
To address these needs, the project must establish effective methods for:
 control
 collation of information, and
 communication.


How much control and reporting do we need?


The Project Manager can be in a "no-win" scenario. A Project Manager is expected to be
completely in touch with all aspects of progress, performance, expectations, issues, etc. At the
same time, the senior leadership will not wish to see the Project Manager or team members
seemingly wasting time doing administrative tasks. Equally, the team members will often consider
that such administrative tasks distract them from their work.
To make a success of the project control process, the Project Manager needs to achieve two
objectives:
 to balance the needs for information against the time, effort, and emotional costs of
collecting and collating the data so as to achieve optimum benefit from the process, and
 to communicate clearly and effectively to all participants why this information is vital to
the success of the project and, therefore, why the participants' time is well spent in
contributing accurate, valuable data.
The complexity of the project is often a major factor in this decision. A small team based in a
single location can sometimes be managed with no specific process or procedures simply by the
Project Manager taking a continuous interest in what the participants are doing and what they have
achieved. Unless there is a requirement for audit information, for example where a third party is
billing for time or deliverables, the project could be managed without documenting the individual
participants' work and progress.
Best practice, however, is normally to require some form of documentation and submission from
the individuals. It encourages them to think about what they are supposed to be achieving and
whether they could be doing better. It also provides the basic information required to monitor
progress, to demonstrate to the senior leadership or any interested third parties that the job is
being properly managed, and to justify any financial costs, payments or joint venture reporting.
In terms of reporting, it likewise makes sense to provide some level of formalised feedback to the
senior leadership and other interested parties; the level of detail should match their needs so as to
provide optimum benefit.

Control and reporting during the project


The project control and reporting process should run throughout the project. The main routine
aspects are:
 timesheets
 collation of information
 reporting
 meetings and communication.

Timesheets
Timesheets are the normal way of collecting source data about progress from the individual
participants. In most projects nowadays, you will be able to use automated tools for the collection
and analysis of such data. Various tools exist which can take the detailed project plan and tracking
data to create electronic forms through an Intranet web page or client/server systems. This
removes much of the pain from the physical process. It also means that standing data, previous
totals and expected work items can be pre-completed on the form.
The information you need to collect will depend partly on your approach to planning the project,
and partly on the investment you have decided to make in the collection and analysis of progress
information. A classic timesheet could be implemented as a paper-based system, through the
e-mailing of spreadsheet forms, or in a fully automated project tracking tool.

5. DATA COLLECTION AND ANALYSIS PROCEDURES

Sampling method refers to the way that observations are selected from a population to be included
in the sample for a sample survey. The reason for conducting a sample survey is to estimate the
value of some attribute of a population.
 Population: the total set of items under study.
 Sample: a fraction of the population selected for observation.

Consider this example. A public opinion pollster wants to know the percentage of voters that
favour a flat-rate income tax. The actual percentage among all the voters is a population parameter.
The estimate of that percentage, based on sample data, is a sample statistic.
The quality of a sample statistic (i.e., its accuracy, precision and representativeness) is strongly
affected by the way that sample observations are chosen; that is, by the sampling method.

Types of Samples
As a group, sampling methods fall into one of two categories.
 Probability samples. With probability sampling methods, each population element has a
known (non-zero) chance of being chosen for the sample.

 Non-probability samples. With non-probability sampling methods, we do not know the


probability that each population element will be chosen, and/or we cannot be sure that each
population element has a non-zero chance of being chosen.
Non-probability sampling methods offer two potential advantages - convenience and cost. The
main disadvantage is that non-probability sampling methods do not allow you to estimate the

extent to which sample statistics are likely to differ from population parameters. Only probability
sampling methods permit that kind of analysis.
Non-Probability Sampling Methods
Two of the main types of non-probability sampling methods are voluntary samples and
convenience samples.
 Voluntary sample. A voluntary sample is made up of people who self-select into the
survey. Often, these folks have a strong interest in the main topic of the survey.

 Suppose, for example, that a news show asks viewers to participate in an on-line poll. This
would be a voluntary sample: the sample is chosen by the viewers, not by the survey
administrator.

 Convenience sample. A convenience sample is made up of people who are easy to reach.
Consider the following example. A pollster interviews shoppers at a local mall. If the mall
was chosen because it was a convenient site from which to solicit survey participants
and/or because it was close to the pollster's home or business, this would be a convenience
sample.
Quota sampling. A quota sample depends upon the choices of the researcher conducting the
study. The researcher selects appropriate candidates from among many aspirants, calling on each
group in a set proportion. For example, if the researcher has selected 5 female candidates between
the ages of 16 and 50 years, then 5 male candidates within the same age limits are also required,
so that the thoughts and views of both genders are represented equally.
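The quota rule described here can be sketched in a few lines of Python; the respondent names and the quota of two per gender are invented for illustration:

```python
# Quota sampling sketch: take the first respondents encountered in each quota
# cell (here, gender) until every cell's quota is filled.
respondents = [
    ("Anna", "F"), ("Brian", "M"), ("Chipo", "F"), ("David", "M"),
    ("Eva", "F"), ("Farai", "M"), ("Grace", "F"), ("Henry", "M"),
]
quota_per_group = 2

sample, counts = [], {"F": 0, "M": 0}
for name, gender in respondents:
    if counts[gender] < quota_per_group:      # cell still has room
        sample.append((name, gender))
        counts[gender] += 1

print(sample)  # two women and two men, in order of appearance
```

Note that the selection is driven by order of encounter, not by chance, which is why quota sampling is a non-probability method.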

Probability Sampling Methods


The main types of probability sampling methods are simple random sampling, stratified sampling,
cluster sampling, multistage sampling, and systematic random sampling. The key benefit of
probability sampling methods is that the known selection probabilities make it possible to measure
how far sample statistics are likely to differ from the population parameters, so that valid
statistical conclusions can be drawn.
 Simple random sampling. Simple random sampling refers to any sampling method that
has the following properties.

o The population consists of N objects.


o The sample consists of n objects.
o If all possible samples of n objects are equally likely to occur, the sampling method
is called simple random sampling.
There are many ways to obtain a simple random sample. One way would be the lottery
method. Each of the N population members is assigned a unique number. The numbers are
placed in a bowl and thoroughly mixed. Then, a blind-folded researcher selects n numbers.
Population members having the selected numbers are included in the sample.
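The lottery method is exactly what Python's `random.sample` automates. A sketch, assuming a population of 100 numbered members and a sample size of 10:

```python
import random

# Lottery method, automated: assign each member a number and draw n of them,
# so that every possible sample of size n is equally likely.
population = list(range(1, 101))   # N = 100 members, numbered 1..100
n = 10

random.seed(42)                    # fixed seed only so the example is repeatable
sample = random.sample(population, n)

print(sorted(sample))              # 10 distinct members of the population
```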

 Stratified sampling. With stratified sampling, the population is divided into groups, based
on some characteristic. Then, within each group, a probability sample (often a simple
random sample) is selected. In stratified sampling, the groups are called strata.

As an example, suppose we conduct a national survey. We might divide the population into
groups or strata based on geography: north, east, south, and west. Then, within each stratum,
we might randomly select survey respondents.
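The stratified procedure above can be sketched as a per-stratum simple random sample; the region names and member labels are invented:

```python
import random

# Stratified sampling sketch: divide the population into strata, then draw a
# simple random sample within each stratum, so every stratum is represented.
strata = {
    "north": ["N1", "N2", "N3", "N4"],
    "east":  ["E1", "E2", "E3", "E4"],
    "south": ["S1", "S2", "S3", "S4"],
    "west":  ["W1", "W2", "W3", "W4"],
}

random.seed(1)
sample = {region: random.sample(members, 2) for region, members in strata.items()}

for region, chosen in sample.items():
    print(region, chosen)
```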

 Cluster sampling. With cluster sampling, every member of the population is assigned to
one, and only one, group. Each group is called a cluster. A sample of clusters is chosen,
using a probability method (often simple random sampling). Only individuals within
sampled clusters are surveyed.

Note the difference between cluster sampling and stratified sampling. With stratified sampling,
the sample includes elements from each stratum. With cluster sampling, in contrast, the sample
includes elements only from sampled clusters.
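That difference shows up directly in code. A sketch of cluster sampling with invented village clusters: whole clusters are drawn at random, and everyone in a drawn cluster is surveyed:

```python
import random

# Cluster sampling sketch: every member belongs to exactly one cluster;
# clusters are drawn at random, and only members of drawn clusters are surveyed.
clusters = {
    "village_a": ["a1", "a2", "a3"],
    "village_b": ["b1", "b2"],
    "village_c": ["c1", "c2", "c3", "c4"],
    "village_d": ["d1", "d2"],
}

random.seed(7)
chosen = random.sample(list(clusters), 2)            # sample clusters, not people
surveyed = [m for c in chosen for m in clusters[c]]  # everyone in those clusters

print(chosen, surveyed)
```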

 Multistage sampling. With multistage sampling, we select a sample by using


combinations of different sampling methods.
For example, in Stage 1, we might use cluster sampling to choose clusters from a
population. Then, in Stage 2, we might use simple random sampling to select a subset of
elements from each chosen cluster for the final sample.
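The two-stage example can be sketched directly; the district names and sizes are invented:

```python
import random

# Multistage sampling sketch: Stage 1 samples clusters; Stage 2 draws a simple
# random subset of members from each cluster chosen in Stage 1.
clusters = {
    "district_1": list(range(0, 10)),
    "district_2": list(range(10, 20)),
    "district_3": list(range(20, 30)),
}

random.seed(3)
stage1 = random.sample(list(clusters), 2)                    # choose 2 clusters
stage2 = {c: random.sample(clusters[c], 3) for c in stage1}  # 3 members from each

print(stage1, stage2)
```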

 Systematic random sampling. With systematic random sampling, we create a list of


every member of the population. From the list, we randomly select the first sample element
from the first k elements on the population list. Thereafter, we select every kth element on
the list.
This method is different from simple random sampling since every possible sample of n elements
is not equally likely.
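A sketch of systematic random sampling for the same hypothetical population of 100, with n = 10 and therefore k = 10:

```python
import random

# Systematic random sampling sketch: choose a random start among the first k
# list positions, then take every k-th element of the population list.
population = list(range(1, 101))   # N = 100
n = 10
k = len(population) // n           # sampling interval, k = 10

random.seed(5)
start = random.randrange(k)        # random index among the first k elements
sample = population[start::k]      # every k-th element from the start

print(sample)                      # 10 evenly spaced members
```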

Bias and data collection

Bias is a distortion that results in the information not being representative of the true situation
on the ground.

Sources of bias
1. defective research instruments, e.g. fixed or closed questions about situations of which we do
not have full knowledge
2. open-ended questions without adequate guidance on how to answer or ask them
3. vaguely phrased questions
4. poor sequencing of questions
5. measurements taken without standardised scales

How to avoid bias


1. carefully plan the data collection process

2. pre-test the entire instrument, or some sections of it

Qualitative approach

Qualitative approaches involve the identification and exploration of a number of often related
variables that give insight into the nature and causes of certain problems, and into the
consequences of those problems for the people affected.
Examples are unstructured interviews (often open-ended) and focus group discussions.

Quantitative approach
Quantitative data can be obtained from structured or pre-coded questions. Quantitative techniques
quantify the size, distribution and association of certain variables in a population.

Data collection instruments

Observations
Definition: a technique that involves systematically selecting, watching and recording the
behaviour and characteristics of living beings, objects or phenomena.
 Much observation is used in studies of human behaviour.
 There are two types of observation techniques:
1. Participant observation: the observer takes part in the situation being observed.
2. Non-participant observation: the observer watches the situation, openly or concealed, but does
not participate.
 Advantages: can give additional, more accurate information; can check on information
collected through questionnaires.
 Limitations: very time consuming; bias can be introduced during participation.

Interviewing
 An interview is a data collection technique that involves the oral questioning of
respondents, either individually or as a group.
 Responses are recorded in writing or by electronic media, e.g. audio or video recording.
 Advantages: a high degree of flexibility; interview schedules can cover a broad range of
topics; additional questions can be asked on the spot.
 Disadvantages: structured interviews allow a low degree of flexibility, with a fixed list of
questions in a standard sequence.

Structured interviews follow a fixed set of questions; unstructured interviews do not have any pre-
prepared questions; and semi-structured interviews combine the two, with the interviewer asking
some set questions but adding others in order to follow a line of inquiry that comes up.

Questionnaire
Definition: a data collection tool in which written questions are presented to be answered by the
respondents.

Postal questionnaires (a variation of the postal questionnaire is the internet questionnaire):
 give access to a broader range of respondents
 prepare your questionnaire and post/mail it to the targeted respondents
 respondents return it to you by post
 include a stamped return envelope for the reply
 have clear instructions on how the questions are to be answered

Canvassing: the respondent gives oral responses at a certain place and time, recorded as closely
as possible in their own words. Most data censuses and surveys use this approach. The
questionnaire can also be hand delivered.

For written questionnaires the instructions must be very clear, except in canvassing, where the
enumerator can explain them. There are two ways of designing written questionnaires:
 Open-ended: the respondent has full freedom to express their views.
 Close-ended: each question comes with a list of possible answers, and the respondent ticks the
response they agree with. Respondents are not free to answer in their own terms but are
guided; such questionnaires can be time consuming to design.

Focus Group Discussions


A focus group discussion is a group discussion among 6-12 persons, guided by a facilitator, during
which the group members talk freely about a certain topic. The purpose of an FGD is to obtain
in-depth information on the concepts, perceptions and ideas of the group.

Data analysis

 Score sheets
 Frequency tables
 Cross tabulations
 Measures of central tendency: mean, median, mode
 Measures of dispersion: range, standard deviation, variance
 Statistical tests of relationships: correlation analyses

Data Presentation
1. Graphs
Show detailed results in an annotated form

2. Plotted curves
Used for data where relationship between 2 variables can be presented as a continuum
You can calculate the standard error

3. Scatter diagrams
Used to visualize the relationship between individual data values for 2 variables after a
preliminary part of correlation analysis

4. Bar graphs
Represents frequency distribution of discrete qualitative or quantitative data
Alternative is a line graph

5. Histograms
Represent frequency distribution of continuous variables. Columns are adjacent to each other in
contrast to a bar graph where columns are separate because the axis of a bar represents discrete
data

6. Pie charts
Illustrates portions of a whole

7. Tables:
Columns allow cross comparison
Have a brief descriptive title
Communicate information and allow comparison not simply to put results on paper

6. PROGRESS REPORT WRITING


The following are various sections of an M&E report:
• Cover page: task, date of submission
• Executive summary: the shorter the better
• Preface/acknowledgements: thank contributors
• Contents page: contents and page numbers
• Section 1: Introduction: background to the project, the evaluation, the evaluation team,
actual processes and problems
• Section 2: Methodology: data collection, analysis and sampling techniques
• Section 3: Findings
• Section 4: Conclusions: draw conclusions from the findings
• Section 5: Recommendations: the way forward to address weaknesses and build on strengths
• Appendices: TORs, sample questionnaire, interview guide, FGD guide, list of persons
interviewed, map of the area, etc.

