Diploma in Monitoring and Evaluation Module III
MODULE 3
TRAINING MANUAL
Table of Contents
Chapter 1: Introduction to Project Management
Chapter 2: Project Identification and Formulation
Chapter 3: Project Appraisal
Chapter 4: Project Planning and Scheduling
Chapter 5: Project Team Management
Chapter 6: Indicators
Chapter 7: Project Management Techniques of Monitoring
Assignment One
Chapter 8: Understanding the Initiative
Chapter 9: Stakeholder Analysis
Chapter 10: Importance of Monitoring and Evaluation
Chapter 11: Cluster Development
Chapter 12: Community-Based Participatory Research
Chapter 13: Participatory Evaluation
Chapter 14: Why Should You Have an Evaluation Plan?
Chapter 15: Project MEAL Framework
Chapter 16: MEAL Planning and Budgeting
Chapter 16: Baseline and Evaluation Design and Management
Chapter 17: Methods of Data Collection in MEAL
Assignment Two
CHAPTER 1
INTRODUCTION TO PROJECT MANAGEMENT

Project Characteristics
Objectives: A project has a set of objectives or a mission; once the objectives
are achieved, the project is treated as complete.
Life cycle: A project has a life cycle consisting of the following phases:
conception, definition, planning and organizing, implementation, and clean-up.
Definite time limit: A project has a definite time limit; it cannot continue
forever. What represents the end is normally spelt out in the set of
objectives.
Uniqueness: Every project is unique, and no two projects are similar. Even
if the plants are exactly identical or are merely duplicated, the location,
the infrastructure, the agencies and the people make each project
unique.
Teamwork: A project normally spans diverse areas, each with personnel
specialized in their respective fields; any project calls for the
services of experts from a host of disciplines. Coordination among the
diverse areas calls for teamwork. Hence a project can be implemented
only through teamwork.
Complexity: A project is a complex set of thousands of interrelated elements.
These vary in terms of technology, equipment and materials,
machinery and people, work culture and ethics, but they remain
interrelated; unless this is so, they either do not belong to the project
or will never allow the project to be completed.
Sub-contracting: Some of the activities may be entrusted to sub-contractors
to reduce the complexity of the project. Sub-contracting is
advantageous if it reduces the complexity of the project so that the
project manager can coordinate the remaining activities of the project
more effectively. The greater the complexity of the project, the larger
the extent to which sub-contracting will be resorted to.
Risk and uncertainty: Every project has risks and uncertainty associated
with it; the degree of risk and uncertainty varies from project to project.
Customer-specific nature: A project is always customer-specific, because the
products produced or services offered by the project must necessarily be
customer-oriented. It is the customer who decides upon the product to be
produced or the services to be offered; hence it is the responsibility of any
organization to take up projects/services that are suited to customer needs.
Change: Changes occur throughout the lifespan of a project as a natural
outcome of many environmental factors. They may vary from minor changes to
major changes which have a big impact or even change the very nature of the
project.
Forecasting: Forecasting the demand for any product or service that the
project is going to produce is an important aspect. All projects involve
forecasting, and in view of the importance attached to it, forecasts
must be accurate and based on sound fundamentals.
Optimality: A project is always aimed at optimum utilization of resources
for the overall development of the organization and the economy, because
resources are scarce and have a cost.
Control mechanism: All projects have a pre-designed control
mechanism to ensure completion within the time schedule and the
estimated cost, and at the desired level of quality and reliability.
Ability to handle project management software tools/packages
A flair for humour
Solving issues/problems immediately, without postponing them
Initiative and risk-taking ability
Familiarity with the organization
Tolerance for differences of opinion, delay and ambiguity
Knowledge of technology
Conflict-resolving capacity
human factors
Solutions
Ensure that the client's specifications are clear and understandable.
Preparation of the project brief should take the objectives set out in the previous
exercise and translate them into targets and goals. The brief should be agreed
with the sponsor/client and communicated to the project manager.
Establishment of success criteria
Hard criteria
These are tangible and measurable and can be expressed in quantitative terms. They
pose the question of "what" should be achieved, and include the following:
Performance specifications: these may be set out in terms of the ability to
deal with certain demands.
Specific quality standards: this could relate to the achievement of a favourable
report from an outside inspection agency.
Meeting deadlines
Cost or budget constraints: completing the project within the cost limit or
budget which has been determined.
Resource constraints: e.g. making use of existing premises or the existing labour force.
Soft criteria
These are often intangible, qualitative and difficult to measure; they tend
to ask the question "how". They include the following:
Demonstrating co-operation: this is about showing that the project
team can work together effectively and without an undue degree of
conflict. It can be an important consideration to develop and
implement solutions for the organization which have an element of
consensus and stem from a co-operative attitude.
Presenting a positive image: this is important, though difficult to
quantify.
Achieving a total quality approach: this is more about the
adoption of a philosophy of continuous improvement than the
achievement of specific performance targets on quality.
Gaining total project commitment: this is about how the project is
managed and the attitude of the project team to it.
Ensuring that ethical standards are maintained.
Showing an appreciation of risk: this ensures that no
unacceptable risks are taken in the pursuit of other project
objectives.
2. Constraints on the Completion of a Project
Time: there is a relationship between the time taken for a project and its
cost, and a trade-off between the two constraining factors may be necessary.
Resource availability: there is always a budget for the project, and this will be a
major constraint. While the overall resources available may in theory be
sufficient to complete the project, there may be difficulties arising from the
way in which the project has been scheduled, i.e. a number of activities may be
scheduled to take place at the same time, and this may not be possible given
the amount of resources available.
Quality factors: this refers to whether the project delivers the goods to the
right quality. The following techniques can be used to overcome these problems:
o Budgeting, and the corresponding control of the project budget through
budgetary control procedures
o Project planning and control techniques, e.g. Gantt charts and network
analysis.
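As a sketch of the first planning technique above, a simple text-based Gantt chart can be produced without any charting package; the task names, start weeks and durations below are purely illustrative, not from this manual:

```python
# Minimal text Gantt chart. Tasks are (name, start week, duration in weeks);
# all names and numbers here are illustrative assumptions.
tasks = [
    ("Design",  0, 3),
    ("Procure", 2, 4),
    ("Build",   4, 5),
    ("Test",    8, 2),
]

def gantt_lines(tasks, width=10):
    """Render one bar per task: leading spaces for the start, '#' for duration."""
    return [f"{name:<10}|{(' ' * start + '#' * dur):<{width}}|"
            for name, start, dur in tasks]

for line in gantt_lines(tasks):
    print(line)
```

Overlapping bars in the printout make simultaneous resource demands, the scheduling difficulty noted above, easy to spot at a glance.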
Conception phase
This is the phase during which the project idea germinates. Ideas may come
from the following sources:
The need to find a solution to certain problems
Non-utilization of available funds, plant capacity or expertise, or simply
unfulfilled aspirations
Surveying the environment
Ideas put across by well-wishers
The ideas need to be put into shape before they can be considered and
compared with competing ideas, and need to be examined in light of
objectives and constraints. If this phase is avoided or truncated, the project
will have innate defects and may eventually become a liability for the
investors. A well-conceived project goes a long way towards successful
implementation and operation; ideas may undergo some changes as the
project progresses, because pertinent data may not be available at inception.
Definition phase
This phase develops the idea generated during the conception phase and
produces a document describing the project in sufficient detail, covering all
aspects necessary for the customer and for financial institutions to make up
their minds on the project idea. The areas to be examined during this phase are:
Raw materials: quantitative and qualitative evaluation
Plant size/capacity: enumeration of plant capacity for the entire plant
and for the main departments
Location and site: description of the location, supported by a map
Technology/process selection: selection of the optimum technology, reasons
for the selection and a description of the selected technology
Project layout: selection of the optimum layout, reasons for the selection
and appropriate drawings
Utilities: fuel, power, water, telephone, etc.
Manpower and organization pattern
Financial analysis
Implementation schedule
This phase clears some of the ambiguities and uncertainties associated
with the formulation made during the conceptual phase; it also
establishes, in clear terms, the risk involved in going ahead with the
project. A project can either be accepted or dropped at this stage.
Planning and Organizing Phase
This phase includes the following
a) Project infrastructure and enabling services
b) System design and basic engineering package
c) Organization and manpower
d) Schedule and budgets
e) Licensing and governmental clearances
f) Finance
g) Systems and procedures
h) Identification of project manager
i) Design basis, general conditions for purchase and contracts
j) Site preparation and investigations
k) Work packaging
Thus this phase involves preparation for the project to take off
smoothly. It is often taken as part of the implementation phase,
since it does not limit itself to paperwork and thinking but includes many
activities, including field work. It is essential that this phase is carried
out completely, as it forms the basis for the next phase, i.e. the
implementation phase.
Implementation Phase
Preparation of specifications for equipment and machinery, ordering of
equipment, lining up construction contractors, trial runs, testing, etc. take
place during this phase. As far as the volume of work is concerned, 80-85%
of the project work is done in this phase alone. Because the bulk of the work
falls in this phase, it needs to be completed as fast as possible and with
minimum resources.
Project Clean-Up Phase
This is a transition phase in which the hardware built with the active
involvement of various agencies is physically handed over for production
to a different agency that was not so involved earlier. Drawings,
documents, files, and operation and maintenance manuals are catalogued and
handed over to the customer. Project accounts are closed, materials
reconciliation is carried out, and outstanding payments are made and dues
collected during this phase. Essentially this is the handing over of the
project to the customer.
The approximate distribution of effort across the phases is:
Conception and definition phases: 4% of effort
Planning and organizing phase: 8% of effort
Implementation phase: 85% of effort
Clean-up phase: 3% of effort
Tools and Techniques for Project Management
1. Project selection techniques
i. Cost-benefit analysis
ii. Risk and sensitivity analysis
2. Project execution planning techniques
i. Work breakdown structure (WBS)
ii. Project execution plan (PEP)
iii. Project responsibility matrix
iv. Project management manual
3. Project scheduling and coordinating techniques
i. Bar charts
ii. Life cycle charts
iii. Line of balance (LOB)
iv. Networking techniques (PERT/CPM)
4. Project monitoring and progress techniques
i. Project measurement technique (PROMPT)
ii. Performance monitoring technique (PERMIT)
iii. Updating, reviewing and reporting technique (URT)
5. Project cost and productivity control techniques
i. Productivity budgeting techniques
ii. Value engineering (VE)
iii. Cost/WBS
6. Project communication and clean-up techniques
i. Control room
ii. Computerized information systems
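The work breakdown structure and Cost/WBS techniques listed above can be sketched together as nested data with a cost roll-up; the work packages and cost figures below are hypothetical:

```python
# Hypothetical WBS: branches are dicts, leaves are estimated costs.
wbs = {
    "Project": {
        "1 Engineering":  {"1.1 Design": 40_000, "1.2 Drawings": 15_000},
        "2 Procurement":  {"2.1 Equipment": 120_000, "2.2 Materials": 60_000},
        "3 Construction": {"3.1 Civil works": 90_000, "3.2 Erection": 70_000},
    }
}

def rollup(node):
    """Cost/WBS roll-up: sum leaf costs under any node of the structure."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))                              # whole-project estimate
print(rollup(wbs["Project"]["2 Procurement"]))  # one work package
```

The same traversal works at any level, so the cost of a single work package and of the whole project come from the one function.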
CHAPTER 2
PROJECT IDENTIFICATION AND FORMULATION
e) Price Trend
This may give an indication of the demand-supply relationship. If the general
price level of a particular product is increasing steadily, it indicates a
demand-supply gap. A further detailed study may be undertaken to ascertain the
extent of the demand-supply gap.
f) Data from Various Sources
Various publications of government, banks and financial institutions, consultancy
organizations, manufacturers' associations, export promotion councils, research
institutions and international agencies contain data and statistics which may
point to prospective ventures.
g) Research Laboratories
These are concerned with identifying new products or processes that offer a new
avenue for commercial exploitation; care should be taken to ensure viability.
h) Consumption Abroad
Entrepreneurs who are willing to take higher risks can identify projects for
the manufacture of products, or the supply of services, which are new to the
country but extensively used abroad.
i) Identify Unfulfilled Psychological Needs
Consumer goods like cosmetics, bathing soap, toothpaste, etc. are examples. New
products of this group being introduced and accepted by consumers indicate
unfulfilled psychological needs of the consumers.
j) Plan Outlays and Government Guidelines
Government plan outlays in different sectors give useful pointers towards
possible investment opportunities. They indicate potential demand for goods and
services by different sectors of the economy.
k) Analysis of Economic and Social Trends
E.g. the growing desire for leisure points to investment opportunities in
recreational activities, rest-houses, resorts, etc.; the growing awareness of the
value of time points to growing demand for fast foods, high-speed vehicles,
better modes of transport, ready-made garments, etc.
l) Possibility of Reviving Sick Units
In any economy there are many industrial units that may have become sick;
such units might still have the capacity to become financially viable. An
entrepreneur with the required entrepreneurial skills can take over a
weak or sick unit, revive it and turn it around.
Project Preparation
After having identified a project that appears worthwhile, the project promoter
has to analyze the project further to ensure that it has potential and that the
investment in it will not go to waste but will yield attractive returns.
Project preparation consists of four stages, viz.
a) Pre-feasibility study
b) Functional studies or support studies
c) Feasibility study
d) Detailed project analysis/report
a) Pre-feasibility study
It has the following main objectives:
i) To determine whether the project offers a promising investment
opportunity
ii) To determine whether there are any aspects of the project that are critical,
requiring in-depth investigation by way of market surveys, laboratory
tests, pilot plant tests, etc.
4. The plant location
5. The plant capacity
6. Manpower requirement
7. Investment required and the returns expected
If the pre-feasibility study indicates that the project is worthwhile, the
feasibility study is undertaken. If the pre-feasibility study indicates certain
areas of the project that need a detailed study, such studies are taken up before
the feasibility study; they are known as support studies or functional studies.
c) Feasibility study
Technical feasibility: for projects concerning manufacturing activities, the
technology proposed to be adopted needs careful consideration. Technical
feasibility can be evaluated by answering the following questions:
o Is the technology proposed to be adopted the latest?
o What is the likelihood of the proposed technology becoming obsolete
in the near future?
o Is the proposed technology a proven technology?
o Is the proposed technology available indigenously?
o In the case of imported technology, is the technology freely available?
The aim is to analyze whether the proposed technology is capable
of producing the intended goods/services to the required
specifications and to the complete satisfaction of consumers.
Economic viability
This establishes whether the investment made in the project will give a
satisfactory return to the economy, in terms of the raw materials used, the
community as a whole, the investments made, etc.
Commercial Feasibility
This is in relation to the sales volume of products/services, quality, price and
consumer acceptability.
Financial feasibility
This examines the workability of the project proposal with respect to raising
the finance needed to meet the investment required for the project. It consists
of calculations of the cost of debt and equity and the anticipated profit, to
check whether the financial benefits expected are in excess of the financial
costs involved.
b) Details of land, building and plant and machinery
c) Details of infrastructural facilities
d) Raw materials requirement /availability
e) Effluents produced by the project and their treatment
f) Labour requirement / availability
5) Schedule of implementation of the project
6) Project cost
7) Means of financing the project
8) Working capital requirement/ arrangements made
9) Marketing and selling arrangements
10) Profitability and cash- flow estimates
11) Mode of repayment of term loan
12) Government approvals, local body consents and other statutory permissions.
13) Details of collateral security that can be offered to financial institution.
communication of reality to the project participants. Also determine what
deadlines are tied to higher level objectives, or have critical links into
schedules of other projects in the organization's portfolio.
2. Communication deficit - Many project managers and team members do not
provide enough information to enough people, and the infrastructure or
culture for good communication is often lacking. Solution: Determine proper
communication flows for project members and develop a checklist of what
information (reports, status, etc.) needs to be conveyed to project participants.
The communications checklist should also have an associated schedule of
when each information dissemination should occur.
3. Scope changes - As most project managers know, an evil nemesis "The Scope
Creep" is usually their number one enemy who continually tries to take
control. Solution: There is no anti-scope-creep spray in our PM utility belts,
but as with many project management challenges, document what is
happening or anticipated to happen. Communicate what is being requested,
the challenges related to these changes, and the alternate plans, if any, to the
project participants (stakeholders, team, management, and others).
4. Resource competition - Projects usually compete for resources (people,
money, time) against other projects and initiatives, putting the project
manager in the position of being in competition. Solution: Portfolio
Management - ask upper level management to define and set project priority
across all projects. Also realize that some projects seemingly are more
important only due to the importance and political clout of the project
manager, and these may not be aligned with the organization's goals and
objectives.
5. Uncertain dependencies - As the project manager and the team determine
project dependencies, assessing the risk or reliability behind these linkages
usually involves trusting someone else's assessment. "My planner didn't think
that our area could have a hurricane the day of the wedding, and now we're
out of celebration deposits for the hall and the band, and the cost of a
honeymoon in Tahiti!" Solution: Have several people - use brainstorming
sessions - pick at the plan elements and dependencies, doing "what if?"
scenarios. Update the list of project risk items if necessary based on the
results.
6. Failure to manage risk - A project plan often has some risks included, simply
listed, but no further review happens unless instigated by an event later on.
Solution: Once a project team has assessed risks, it can either (1) act to
reduce the chance of a risk occurring or (2) plan its response to the risk
occurrence after it happens.
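A minimal risk register supporting this kind of review can score each listed risk as probability times impact, so the team revisits the highest scores first; the risks and 1-5 ratings below are invented for illustration:

```python
# Illustrative risk register; probability and impact on assumed 1-5 scales.
risks = [
    {"risk": "Key supplier delay",   "probability": 4, "impact": 3},
    {"risk": "Design rework",        "probability": 2, "impact": 5},
    {"risk": "Currency fluctuation", "probability": 3, "impact": 2},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]  # simple exposure score

# Highest exposure first: these are the risks to reduce or plan responses for.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["risk"]:<22} score={r["score"]}')
```

Sorting by score turns the "simply listed" risks into a review order, so the register gets revisited rather than filed away.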
7. Insufficient team skills - The team members for many projects are assigned
based on their availability, and some people assigned may be too proud or
simply not knowledgeable enough to tell the manager that they are not
trained for all of their assigned work. Solution: Starting with the project
manager role, document the core set of skills needed to accomplish the
expected workload, and honestly bounce each person's skills against the list
or matrix. Using this assessment of the team, guide the team towards
competency with training, cross-training, additional resources, external
advisors, and other methods to close the skills gap.
8. Lack of accountability - The project participants and related players are not
held accountable for their results - or lack of achieving all of them. Solution:
Determine and use accountability as part of the project risk profile. These
accountability risks will be then identified and managed in a more visible
manner.
9. Customers and end-users are not engaged during the project. Project teams
can get wound up in their own world of internal deliverables, deadlines, and
process, and the people on the outside do not get to give added input during
the critical phases. Solution: Discuss and provide status updates to all project
participants - keep them informed! Invite (and encourage) stakeholders,
customers, end-users, and others to periodic status briefings, and provide an
update to those that did not attend.
10. Vision and goals not well-defined - The goals of the project (and the reasons
for doing it), along with the sub-projects or major tasks involved, are not
always clearly defined. Clearly communicating these vague goals to the
project participants becomes an impossible task. Some solutions and ideas to
thrash vagueness: Determine which parts of a project are not understood by
the team and other project participants - ask them or note feedback and
questions that come up. Check the project documentation as prepared, and
tighten up the stated objectives and goals - an editor has appropriate skills to
find vague terms and phrasing. Each project is, hopefully, tied into the
direction, strategic goals, and vision for the whole organization, as part of the
portfolio of projects for the organization.
CHAPTER 3
PROJECT APPRAISAL
Introduction
Project appraisal is a process of detailed examination of several aspects of a given
project before recommending it. The institution that is going to fund the
project has to satisfy itself before providing financial assistance for the project. It
has to ensure that the investment in the proposed project will generate sufficient
returns on the investments made. The various aspects of project appraisal are:
Technical Appraisal
Technical appraisal broadly involves a critical study of the following aspects.
a) Scale of operations
Scale of operation is signified by the size of the plant. The plant size
mainly depends on the market for the output of the project.
b) Selection of process/technology: the choice of technology depends on the
number and types available, and also on the quality and quantity of products
proposed to be manufactured.
c) Raw materials
Products can be manufactured using alternative raw materials and
alternative processes. The process of manufacture may sometimes vary with
the raw materials chosen.
d) Technical know-how
When the technical know-how for the project is provided by expert
consultants, it must be ascertained whether the consultants have the
requisite knowledge and experience, and whether they have already executed
similar projects successfully. Care should be taken to avoid self-styled,
inexperienced consultants.
e) Product mix
Consumers differ in their needs and preferences. Hence variations in size
and quality of products are necessary to satisfy the varying needs and
preferences of customers. In order to enable the project to produce goods
of varying size, nature and quality as per the requirements of the customers,
the production facilities should be planned with an element of flexibility.
Such flexibility in the production facilities will help the organization to
change the product mix as per customer requirements, which is very
essential for the survival and growth of any organization.
Commercial Appraisal
This is concerned with the market for the product/service. Commercial appraisal (or
market appraisal) of a project is done by studying the commercial prospects of
the product/service offered by the project from the following angles:
Demand for the product
Supply position for the product
Distribution channels
Pricing of the product
Government policies
Economic Appraisal
Economic appraisal measures the effect of the project on the whole economy. In the
overall interest of the country, the limited stocks of capital and foreign exchange
should be put to the best possible use. Policy makers are therefore concerned with
where the scarce resources can be directed to maximize the economic growth of the
country, and they make their choice based on economic return.
Financial Appraisal
This includes appraising the project using financial tools, which include but are
not limited to the following:
Discounted cash flow techniques
Net present value (NPV) method
Internal rate of return (IRR)
Profitability index method
Benefit-cost ratio method
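Two of these tools, NPV and the profitability index, can be sketched in a few lines; the outlay, cash inflows and 10% discount rate below are assumed figures for illustration, not data from this manual:

```python
# Assumed project: year-0 outlay followed by four annual net cash inflows.
outlay = 100_000.0
inflows = [30_000.0, 40_000.0, 50_000.0, 40_000.0]
rate = 0.10  # assumed discount rate

def pv_of_inflows(rate, inflows):
    """Discount each year-t inflow back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1))

def npv(rate, outlay, inflows):
    """Net present value: PV of inflows minus the initial outlay."""
    return pv_of_inflows(rate, inflows) - outlay

def profitability_index(rate, outlay, inflows):
    """PV of inflows per unit of outlay; values above 1 favour acceptance."""
    return pv_of_inflows(rate, inflows) / outlay

print(round(npv(rate, outlay, inflows), 2))
print(round(profitability_index(rate, outlay, inflows), 3))
```

With these assumed figures the NPV is positive and the profitability index exceeds 1, so both criteria would point towards acceptance of the project.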
Management Appraisal
Management is the most important factor that can make a project either a success
or a failure. A good project in the hands of poor management may fail, while a
not-so-good project in the hands of effective management may succeed. Banks and
financial institutions that lend money for financing projects therefore lay great
emphasis on management appraisal.
Lending institutions look at two points before committing their funds to project
financing:
a) The capacity of the project to repay the loan, along with the interest, within
the stipulated period of time
b) The willingness of the borrower to repay the loan
The following are some of the factors that reflect the managerial capabilities of
the persons concerned:
Industrial relations prevailing in the enterprise
Morale of employees and the prevailing superior-subordinate relationship
Labour turnover
Labour unrest
Productivity of employees, etc.
Social Cost Benefit Analysis
There are some projects that may not offer attractive returns as far as commercial
profitability is concerned, but such projects are still undertaken because they have
social implications, e.g. roads, railways, bridges, irrigation and power projects.
Ecological Analysis
In recent years, environmental concerns have assumed a great deal of significance,
and rightly so. Ecological analysis should be done particularly for major projects
which have significant ecological implications (like power plants and irrigation
schemes) and for environment-polluting industries (like bulk drugs, chemicals, and
leather processing). The key questions raised in ecological analysis are:
CHAPTER 4
PROJECT PLANNING AND SCHEDULING
Project planning calls for detailing the project into activities, estimating resources
and time for each activity and describing activity interrelationships. Scheduling
requires the details of starting and completion dates for each activity. Control
requires not only current status information but also insight into possible trade-offs
when difficulties arise.
3. Minimization of total cost
4. Minimization of total time.
5. Minimization of cost for a given total time.
6. Minimization of time for a given cost.
7. Minimization of idle resources.
8. Minimization of production delays, interruptions and conflicts.
Definition of terms
- Activity: the actual work to be done; represented by arcs (arrows) in the
network.
- Event: marks the start or end of an activity; usually denoted by nodes in the
network.

[Start Event] --Activity--> [Finish Event]
PERT and CPM are useful in planning, analyzing, scheduling and controlling the
progress and completion of large projects.
PERT and CPM consist of the following steps:
1. Analyse the project and break it down into specific activities and events.
2. Determine the interdependence of the activities, i.e. the order of precedence,
and sketch the network.
3. Assign estimates of time and cost to all the activities of the network.
4. Identify the critical path, i.e. the path requiring the longest time, which
determines the time needed to complete the project.
5. Monitor, evaluate and control the progress of the project, checking whether
the project can be "crashed", i.e. completed in a shorter period.
- PERT and CPM are similar in nature, except that PERT is concerned with
the way activity time is estimated, while CPM is concerned with the cost
estimates for completing the various activities.
- PERT activity time estimates are probabilistic, i.e. there are three different
time estimates:
Most optimistic time, denoted by a (or o)
Most likely time, i.e. the most realistic time required to complete the
activity, denoted by m
Most pessimistic time, denoted by b
- In CPM the activity times are deterministic, i.e. assumed known under
specified conditions, and are given in two sets: normal cost and crash cost,
normal time and crash time.
- Given the three time estimates for an activity (most optimistic, most likely
and most pessimistic), the expected time estimate te is the weighted average
of a, m and b, calculated as

te = (a + 4m + b) / 6

while the standard deviation of the distribution of time estimates for completing
the activity is given by

σ = (b − a) / 6
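As a quick check, these two formulas can be computed directly. The values below are illustrative and happen to match activity 1-2 of Example 4 further on:

```python
# PERT three-time estimates for a single activity (illustrative values).
a, m, b = 3, 6, 9          # optimistic, most likely, pessimistic
te = (a + 4 * m + b) / 6   # expected time: weighted average of a, m, b
sd = (b - a) / 6           # standard deviation of the activity time
print(te, sd)              # → 6.0 1.0
```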
Note:
i. By convention, the network flows from left to right i.e. time and progress of
the project flow from left to right .
ii. The event that marks the beginning of the entire project is called the source
event, while the event that marks the completion of the entire project is called
the terminal event.
iii. No event is complete until all activities leading to that event are complete.
iv. Loops or cycles are not permitted in networks
v. In order to incorporate technological or managerial requirements, it is
sometimes necessary to insert a dummy activity into the network model; a
dummy activity does not require any time, effort or resource for its
completion.
vi. In a network, the longest path is called the critical path; paths other than
the critical path are called non-critical paths.
Example 1
6-8 Staffing 2
7-8 Purchasing 4
8-9 Installation 3
5-8 Safety licensing 1
9-10 Testing equipment 1
7-10 Training staff 2
Draw the appropriate PERT network and determine the earliest expected event
time (TE), latest allowable event time (TL) and slack time (S) for each event.
[Network diagram for Example 1: events 1-10 connected by the activities listed
above]
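The TE/TL/slack computation asked for in Example 1 can be sketched with a forward pass (earliest event times) followed by a backward pass (latest event times). Because the full activity list for Example 1 is not reproduced here, the small network and durations below are assumed for illustration only; events are numbered so that every arc runs from a lower to a higher number, which lets a simple sort stand in for a topological order:

```python
# Forward/backward pass to compute TE, TL and slack for each event.
# Network and durations are assumed, purely for illustration.
activities = {          # (start event, end event): duration in weeks
    (1, 2): 4, (2, 3): 3, (2, 4): 6, (3, 5): 2, (4, 5): 5,
}
events = sorted({e for arc in activities for e in arc})

TE = {e: 0 for e in events}                     # earliest expected event time
for (i, j), d in sorted(activities.items()):    # arcs in ascending event order
    TE[j] = max(TE[j], TE[i] + d)

end = max(events)
TL = {e: TE[end] for e in events}               # latest allowable event time
for (i, j), d in sorted(activities.items(), reverse=True):
    TL[i] = min(TL[i], TL[j] - d)

slack = {e: TL[e] - TE[e] for e in events}      # S = TL - TE
print(slack)                                    # → {1: 0, 2: 0, 3: 6, 4: 0, 5: 0}
```

Events with zero slack lie on the critical path (here 1-2-4-5).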
Example 2
Draw a network for a project of erection of steel works for a shed. The various
elements of the project are as under:
Activity code   Description              Prerequisites
A               Erect site workshop      None
B               Fence site               None
C               Bend reinforcement       A
D               Dig foundation           B
E               Fabricate steel works    A
F               Install concrete plant   B
G               Place reinforcement      C, D
H               Concrete foundation      G, F
I               Paint steel works        E
J               Erect steel work         H, I
K               Give finishing touch     J
Solution.
[Network diagram: A and B start in parallel; C and E follow A; D and F follow B;
G follows C and D; H follows G and F; I follows E; J follows H and I; K follows J]
Example 3.
Draw a network diagram from the following activities.
Activity Immediate predecessor Activity Immediate predecessor
A None G C
B A H C&D
C A I E&F
D A J G&H
E B K I&J
F C
Solution.
[Network diagram: 1 -A→ 2; 2 -B→ 3; 2 -C→ 4; 2 -D→ 5; 3 -E→ 6; 4 -F→ 6;
4 -G→ 7; 5 -H→ 7; 6 -I→ 8; 7 -J→ 8; 8 -K→ 9]
Example 4
Given the following time estimates: Calculate
(a) Expected time te, for individual activities
(b) The completion time of the project
(c) The standard deviation of the project completion time.
(d) The time in which the management can be 95% confident of completing the
project
Activity   a   b    m    te = (a + 4m + b)/6   σ = (b − a)/6
1-2        3   9    6    6                     1
1-3        6   12   9    9                     1
2-4        4   8    6    6                     2/3
3-5        1   4    3    17/6                  1/2
4-5        5   9    7    7                     2/3
5-6        5   15   10   10                    5/3
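Using the te and σ values from the table, parts (b)-(d) can be computed by summing expected times along the longest path (1-2-4-5-6) and adding variances along that same path; 1.645 is the one-sided z-value for 95% confidence:

```python
import math

# (te, sigma) for each activity, taken from the table above
acts = {
    "1-2": (6, 1),          "1-3": (9, 1),      "2-4": (6, 2 / 3),
    "3-5": (17 / 6, 1 / 2), "4-5": (7, 2 / 3),  "5-6": (10, 5 / 3),
}
# The longest (critical) path through this network is 1-2-4-5-6
critical = ["1-2", "2-4", "4-5", "5-6"]

T = sum(acts[a][0] for a in critical)          # expected completion time: 29
var = sum(acts[a][1] ** 2 for a in critical)   # variances add along the path
sigma = math.sqrt(var)                         # project standard deviation
T95 = T + 1.645 * sigma                        # 95%-confidence completion time
print(T, round(sigma, 2), round(T95, 1))       # → 29 2.16 32.6
```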
Example 5
Present the following activities in the form of a network chart and determine:
a. The critical path
b. The earliest and latest expected times
c. The completion time
Activity   Optimistic (a)   Most expected (m)   Pessimistic (b)
1–2 4 8 12
2–3 1 4 7
2–4 8 12 16
3–5 3 5 7
4–5 0 0 0
4–6 3 6 9
5–7 3 6 9
5–8 4 6 8
7–9 4 8 12
8–9 2 5 8
9 – 10 4 10 16
6 - 10 4 6 8
Solution.
[Network diagram: events 1-10 connected by the arcs listed in the activity table]
[Graph: activity cost versus time, showing the crash point (crash time, crash
cost) and the normal point (normal time, normal cost)]
The manager has to trade off between high cost and minimum time, or low cost and
maximum time. The slope of the line joining the two points gives the cost-time
trade-off, i.e. how much additional cost will be incurred by saving one unit of
time in completing an activity.
With CPM, the idea is to design a programme that yields the minimum project
completion time with the least increase in cost over normal costs.
Example 6
Given the following data, determine:
a) The normal critical path
b) The crash critical path
c) The minimum project completion time with the least increase in cost over
normal costs.
Activity   Normal time (weeks)   Crash time (weeks)   Normal cost (shs)   Crash cost (shs)   Change in cost per week (shs)
1-2 10 7 1000 1600 200
1-3 15 10 2000 3000 200
2-4 8 6 1800 2600 400
2-5 20 16 4500 5300 200
3-6 30 20 7200 9600 240
4-5 14 12 5000 6000 500
5-6 12 9 3300 4500 400
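The "change in cost per week" column is the cost slope of each activity, (crash cost − normal cost) ÷ (normal time − crash time). A short sketch reproducing that column:

```python
# Cost slope (extra cost per week saved) for each activity, reproducing
# the "change in cost per week" column of the table above.
data = {  # activity: (normal_time, crash_time, normal_cost, crash_cost)
    "1-2": (10, 7, 1000, 1600),  "1-3": (15, 10, 2000, 3000),
    "2-4": (8, 6, 1800, 2600),   "2-5": (20, 16, 4500, 5300),
    "3-6": (30, 20, 7200, 9600), "4-5": (14, 12, 5000, 6000),
    "5-6": (12, 9, 3300, 4500),
}
slope = {a: (cc - nc) // (nt - ct) for a, (nt, ct, nc, cc) in data.items()}
print(slope["3-6"])   # → 240
```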
Procedure
1) Identify the normal critical path and crash critical path
2) On the normal critical path identify the least expensive activity to crash.
Crash this activity and observe any changes in the critical path; if there is
no change, crash the next least expensive activity.
1st crash: On the normal critical path (1-3-6, 45 weeks) the least expensive
activity is 1-3, so we crash it by five weeks (from 15 weeks to 10 weeks),
reducing the path to 40 weeks. Comparing this with the total times of the
other paths, the critical path changes to 1-2-4-5-6 (44 weeks).
2nd crash: The least expensive activity is 1 - 2, so we crash it by 3 weeks (10
weeks to 7 weeks). Hence the critical path is still 1-2-4-5-6.
3rd crash: On this path, activities 2-4, 4-5 and 5-6 are still uncrashed; the
least expensive are 2-4 and 5-6 (400 shs per week each). We crash 5-6 by 3
weeks (12 to 9) as it yields the larger reduction in completion time. The
critical path changes to 1-3-6 (40 weeks).
4th crash: We have already crashed activity 1-3, so we now crash activity
3-6 by 10 weeks (30 to 20) at a cost of 240 shs per week. The critical path
reverts to 1-2-4-5-6 (38 weeks).
5th crash: On this path the uncrashed activities are 2-4 and 4-5; activity 2-4
is the least expensive, so we crash it by 2 weeks (400 shs per week). We now
have two critical paths: 1-2-5-6 and 1-2-4-5-6 (36 weeks each).
6th crash: Comparing the two critical paths 1-2-5-6 and 1-2-4-5-6, only two
uncrashed activities remain (2-5 and 4-5). We crash the least expensive,
2-5 (200 shs per week); the critical path is now 1-2-4-5-6 alone.
7th crash: The only uncrashed activity on the current critical path is 4-5;
crashing it leaves the critical path as 1-2-4-5-6 (34 weeks), and all the
activities on it are now crashed.
3) Examine non-critical paths and uncrash activities on such paths, starting
with the most expensive, up to the point after which further uncrashing
would create a longer critical path.
On the non-critical path 1-3-6, activity 3-6 is the most expensive
(240 shs per week), so we uncrash it by 4 weeks.
On the non-critical path 1-2-5-6, activities 1-2 and 5-6 cannot be
uncrashed as they are on the crash critical path, but we can
uncrash 2-5 (200 shs per week) by 2 weeks.
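Steps 1 and 2 of this procedure can be sketched as a week-by-week greedy loop: repeatedly crash the cheapest crashable activity on the current longest path until that path is fully crashed. This is a simplification of the worked example above: it crashes one week at a time and omits the step-3 uncrashing, which lowers cost without changing the minimum completion time:

```python
# Greedy week-by-week crashing sketch for the Example 6 network.
acts = {  # activity: (normal_time, crash_time, cost_slope_per_week)
    "1-2": (10, 7, 200),  "1-3": (15, 10, 200), "2-4": (8, 6, 400),
    "2-5": (20, 16, 200), "3-6": (30, 20, 240), "4-5": (14, 12, 500),
    "5-6": (12, 9, 400),
}
paths = [  # all start-to-finish paths in this small network
    ["1-2", "2-4", "4-5", "5-6"],
    ["1-2", "2-5", "5-6"],
    ["1-3", "3-6"],
]
t = {a: nt for a, (nt, _, _) in acts.items()}   # current durations
extra_cost = 0                                  # crashing cost over normal

def plen(p):
    return sum(t[a] for a in p)                 # current length of a path

while True:
    longest = max(paths, key=plen)
    crashable = [a for a in longest if t[a] > acts[a][1]]
    if not crashable:
        break                                   # longest path fully crashed
    cheapest = min(crashable, key=lambda a: acts[a][2])
    t[cheapest] -= 1                            # crash one week at a time
    extra_cost += acts[cheapest][2]

print(max(plen(p) for p in paths))              # → 34 (minimum completion time)
```

The loop reaches the same 34-week minimum as the worked example, because the crash critical path 1-2-4-5-6 cannot be shortened below 7 + 6 + 12 + 9 = 34 weeks.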
In Summary
CHAPTER 5
PROJECT TEAM MANAGEMENT
After analyzing the information, project managers can identify and resolve
problems, reduce conflicts, and improve overall teamwork.
Work Performance Information – Work performance information is gathered
by observing team members' performance as they participate in meetings,
follow up on action items, and communicate with others.
Performance Reports – Performance reports depict project performance
information compared against the project plan. This provides a basis for
determining whether corrective or preventive actions are needed to ensure
successful project delivery.
Managing a team involves making justifiable decisions about how to address the
issues and problems that arise as part of project work. There are some tools and
techniques you can use to manage the project team. Those tools and techniques are:
Observation and Conversation - Observation and conversation involve the
project manager monitoring indicators such as progress toward project goals,
interpersonal relationships, and pride in the accomplishments and work of
project team members.
Project Performance Appraisals - Project performance appraisals are a vehicle
that enables team members to receive feedback from supervisors.
Performance appraisals can be used to clarify team member
responsibilities and to develop training plans and future goals.
Conflict Management - Conflict management involves the reduction of
destructive disagreements within the project team. The project manager can
allow the problem to resolve itself or use informal and formal interventions
before the conflict damages the project.
Issue Log - An issue log is a list of action items and the names of the team
members responsible for carrying them out. Issue logs provide project
managers with a way to monitor outstanding items.
Often in the course of a project, it is necessary to make changes to the way the
project is executed. The outputs of the Managing a Project team process are:
Requested Changes – Requested changes are staffing changes, either planned
or unplanned, which can impact the project plan. When staffing changes
threaten to disrupt the project plan, the change needs to be processed
through integrated change control.
Recommended Corrective Actions - Recommended corrective actions may
include adding or removing a team member, outsourcing some work,
additional training, or actions relating to disciplinary processes.
Recommended Preventive Actions - Recommended preventive actions are
taken to reduce the impact of anticipated problems. Such actions might
include cross-training a replacement before a team member leaves the project,
clarifying roles to ensure that all project tasks are carried out, or adding
personal time in anticipation of extra work which may be needed to meet
project deadlines.
Organizational Process Asset Updates – Organizational process asset
updates are either inputs to team members' performance appraisals or
lessons-learned documentation.
Staffing Management Plan Updates – The staffing management plan is a
subsidiary plan of the project management plan. It is updated to reflect
staffing-related approved change requests.
Resource Availability – Resource availability information lists when team
members are available to partake in team development activities.
Developing a synergistic project team means knowing who your team
members are, helping them build on their strengths and overcome their
weaknesses, while promoting productive working relationships within the
team. There are common tools and techniques to develop a project team. They
are:
General Management Skills – The soft skills or interpersonal skills help
motivate a team’s performance and collaboration through empathy, influence,
communication, creativity, and facilitation.
Training – Training encompasses improving the skills and knowledge of
team members. Possible training methods are classroom training, online
learning, on-the-job training, mentoring or coaching.
Team-Building Activities – Team-building activities encourage
communication, trust, and collaboration among teammates.
Ground Rules – Ground rules establish clear expectations regarding
acceptable behavior by project team members. The overall team's commitment
to ground rules decreases misunderstanding and increases productivity.
Co-Location – Co-location is the placement of all or most of the active team
members in the same physical location to increase team-building
opportunities.
Recognition and Rewards – Recognition and rewards improve project work
by acknowledging and rewarding desired behaviors.
Establish Empathy – Being empathic means listening to and understanding how
the individual team member is feeling. A simple process can be
used to establish empathy: encourage openness, restate concerns, reflect, and
summarize.
team's effectiveness has improved. The single output from developing a project team
is team performance assessment.
Team Performance Assessment – As the team’s performance improves some
indicators to measure the team’s effectiveness are:
o Improvements in skills which allow an individual to perform assigned
activities with increased effectiveness.
o Improvements in competencies and sentiments which will help the
team perform better as a group
o Reduced staff turnover rate.
The truth is that when teams fail, the fault often rests with a flawed process for
getting them started. We might argue that it is "management's" fault because "they"
haven't designed an effective process. But don't team members also share
responsibility for making teams successful? By learning the process ourselves, we
can go a long way toward building effective teams.
Steps to Team Success
The first and most important step for creating effective teams is to create a
charter. This process is called "chartering." Chartering is the process by which the
team is formed, its mission or task described, its resources allocated, its goals set, its
membership committed, and its plans made. It is the process of "counting the costs"
that it will take for a team to achieve its goals and deciding whether the organization
is really committed to getting there. A good charter creates a recipe or roadmap for
the team as it carries out its charge. It can assist in facilitating the learning of the
team and its members as they work to improve the effectiveness of this and future
team efforts.
1. What is the purpose for creating the team? The most important contributing factor
is a clear and elevating goal. Further, the relationship between goal setting and
task performance is probably the most robust finding in the research literature of
the behavioral sciences. The more completely the purpose of the team can be
identified, the more likely management, team members, and the rest of the
organization will support it in accomplishing its objectives.
2. What kind of team is needed? There are different kinds of teams for different
kinds of goals. Is the team meant to accomplish a task, manage or improve a
process, come up with a new product idea or design, solve a problem, or make a
decision?
3. Will the team be manager led or self-managed? Who, if anyone, is in charge? That
will depend on the task and the maturity of the members. If it is self-managed or
leaderless, who will be responsible for facilitating the team's progress toward its
goal?
4. What skills are needed to accomplish the goal? An inventory of critical knowledge
and expertise should be undertaken. It is essential that teams have as members,
or have access to as ad hoc resources, people who can supply the
necessary competence to achieve the objectives.
5. How will members be selected? This is more difficult than it might seem. Often
there are internal political, deployment, or logistical barriers. We want the right
balance of thinkers and doers. We want people who will follow through. We want
to use known resources but develop new competence in the organization. We
want enough diversity of opinion to get all the "cards on the table" without
creating unnecessary conflict. How will the personalities of the various players fit?
Can the company afford to have them take time away from other priorities? Bad
choices here can doom the results.
6. What resources will be necessary to achieve the objectives? Is management willing
to devote the time as well as the financial, human and intellectual capital
necessary to get the job done? Counting the costs and deciding that it is worth
those costs is crucial. In self-managed or leaderless teams, these are questions that
need to be answered by team members both individually and collectively. Are
they willing to commit their time, talents, and effort to that goal to the extent
necessary?
7. What are the boundaries? Management needs to identify the parameters within
which the team is expected to operate. How much time will the team be given?
How often are the members expected to meet? What is the scope of their concern?
(It's sometimes useful when creating process improvement teams to identify
change recommendations that are off-limits. For example, it is common for teams
to come back with a recommendation that more staff is the solution. By limiting
such recommendations, at least at first, the team is forced to look for solutions that
deal more with the process.)
8. What process will the team use to get results? Once the team has been formed and
the members selected, management (and especially the team itself) must determine
how it will go about getting the job done. When and where will the team meet?
How will it meet (face-to-face or some kind of virtual arrangement)? What
maintenance roles will the members agree are important and how will they assign
those? How will the members communicate with one another? What happens if a
member can't be at a meeting but has an assignment due? What are expectations
regarding participation in meetings?
9. How will equal commitment be secured? A frank discussion about the level of
commitment members are willing to give is key to achieving success. Do they
share an equal view as to the importance of the goal? Are they personally willing
to expend the effort necessary to get the desired result? What circumstances might
limit their ability to perform up to the expectations of others? Getting all this out
on the table early on can avoid conflicts down the road.
10. How will we plan for conflict? The best way to minimize the amount of
unproductive conflict is to conduct a frank discussion about potential discord.
Two of the most common examples of conflict in teams result when members
don't pull their weight and follow through on assignments and commitments, or
when one or more members try to over-control and dominate the group. By
identifying these and other potential conflicts and agreeing beforehand how
members will deal with them, a team can minimize the disruption to goal
achievement. In essence, you're giving one another permission to do the kind of
confrontation that is necessary to get past the conflicts.
11. What will be done to get the job done? The Project Plan: early on, there's a need
to analyze the task, break it down into subtasks, establish the timeline, make and
accept assignments, and get started. Teams usually make this the first step, but it
is really the final step of the "chartering" process.
12. How will success be evaluated, and how will we learn from the process? How
will we know what mid-course corrections need to be made to the process or
plan? How will we measure our progress? What can we do to learn from this
experience about how to make not only this team better, but future teams as
well: both those we serve on individually and those the company forms? By
planning how and when the team will reflect on the process they are going
through or have gone through, the individuals, the team, and the larger
organization all benefit.
There is a directly proportional relationship between the amount of time and
intellectual effort we spend chartering our teams and the likelihood that those teams will
achieve their goals. Going about this process in a conscious, reflective manner often
is the deciding factor in achieving optimal results.
Work Expectations
The second area for focus with ground rules is work expectations. People join teams
with very different ideas about the work involved in being a member of the team.
Few people will deliberately perform poorly, but team members need information
about the standards of the team. For example, it is common for people to send out
information about the topic of a meeting and then never reach that topic at the
meeting or never refer to the information provided. If there is not a positive
consequence for meeting preparation, participants will not read materials sent prior
to meetings.
On the other hand, some meetings ask for people to give their interpretations,
opinions, and recommendations based on the material provided prior to the
meeting. In this case, participants are very likely to be prepared. Having had either
one of these experiences, or any of the various experiences in between, will define
what a team member thinks she or he is accountable for in a team meeting.
Common questions teams address in their ground rules involving work
expectations include:
What is the quality of work expected?
What is the quantity of work expected?
How is the timeliness of work defined?
What does it mean to come prepared to a meeting?
Confidentiality
The last issue for team ground rules that we will discuss concerns confidentiality
and support. Nothing can destroy trust in a team more quickly than to have team
discussions shared with those outside the team. When team members hear
summaries of what occurred in the team, they often feel that their comments are
misrepresented or misinterpreted or, at the very least, that they would like to speak
for themselves. To avoid these problems, team members need to decide how they
will represent the meeting discussion to others. Some teams choose a spokesperson
for the team.
To develop useful guidelines the team needs to discuss questions such as the
following:
What topics are to be considered confidential?
How will team members identify confidential information?
How should team members treat this information?
How should team members portray team meetings to outsiders?
Who should be the spokesperson for the group?
Who should receive meeting minutes?
The discussion on confidentiality also requires a discussion on enforcement and
consequence.
How will the team address instances where a team member has violated the
confidentiality norm?
What will be the consequence of such an action?
Ground Rules
Ground rules are prescriptions for team communication. They must arise from the
team and be freely committed to by all team members. The following is a guide to
effective team communication.
Be a good listener.
Keep an open mind.
Participate in the discussion.
Ask for clarification.
Give everyone a chance to speak.
Deal with particular rather than general problems.
Don't be defensive if your idea is criticized.
Be prepared to carry out group decisions.
All comments remain in the meeting room.
Everyone is an equal in the discussion session
Be polite-don't interrupt.
CONFLICT MANAGEMENT
What is conflict? Is it the same as a disagreement or an argument? Typically,
conflict is characterized by three elements:
1) Interdependence,
2) Interaction, and
3) Incompatible goals.
achieve goals, their subjective views and opinions about how to best achieve
those goals will lead to conflict of some degree. Harmony occurs only when
conflict is acknowledged and resolved.
2. Conflicts and disagreements are the same. Disagreement is usually temporary and
limited, stemming from misunderstanding or differing views about a specific
issue rather than a situation's underlying values and goals. Conflicts are more
serious and usually are rooted in incompatible goals.
3. Conflict is the result of personality problems. Personalities themselves are not
the cause of conflict. While people of different personality types may approach
situations differently, true conflict develops from and is reflected in behavior,
not personality.
4. Conflict and anger are the same thing. While conflict and anger are closely
linked in most people's minds, they don't necessarily go hand in hand.
Conflict involves both issues and emotions; the issue and the participants
determine what emotions will be generated. Serious conflicts can develop that
do not necessarily result in anger. Other emotions are just as likely to surface:
fear, excitement, sadness, frustration, and others.
and know when to use the data collected.
6. Negotiate. Suggest partial solutions or compromises identified by
both parties. Continue to emphasize common goals of both parties
involved.
7. Solidify adjustments. Review, summarize, and confirm areas of agreement.
Resolution involves compromise.
Accommodation
Accommodation is the opposite of competition and contains an element of self-
sacrifice. An accommodating person neglects his or her own concerns to satisfy the
concerns of the other person.
Use accommodation when:
1. The issue is more important to the other person than it is to you.
2. You discover that you are wrong.
3. Continued competition would be detrimental and you know you can't win.
4. Preserving harmony without disruption is the most important
consideration.
Accommodation should not be used if an important issue is at stake that needs to
be addressed immediately.
Compromise
The objective of compromise is to find an expedient, mutually acceptable solution
that partially satisfies both parties. It falls in the middle between competition and
accommodation. Compromise gives up more than competition does, but less than
accommodation. Compromise is appropriate when all parties are satisfied with
getting part of what they want and are willing to be flexible. Compromise is mutual.
All parties should receive something, and all parties should give something up.
Use compromise when:
1. The goals are moderately important but not worth the use of more
assertive strategies
2. People of equal status are equally committed.
3. You want to reach temporary settlement on complex issues.
4. You want to reach expedient solutions on important issues.
5. You need a backup mode when competition or collaboration doesn't work.
Compromise doesn't work when initial demands are too great from the
beginning and there is no commitment to honor the compromise.
Competition
An individual who employs the competition strategy pursues his or her own
concerns at the other person's expense. This is a power-oriented strategy used in
situations in which eventually someone wins and someone loses. Competition
enables one party to win. Before using competition as a conflict resolution strategy,
you must decide whether or not winning this conflict is beneficial to individuals or
the group.
Use competition when:
1. You know you are right.
2. You need a quick decision.
3. You meet a steamroller type of person and you need to stand up for your own
rights.
Competition will not enhance a group's ability to work together. It reduces
cooperation.
Collaboration
Collaboration is the opposite of avoidance. It is characterized by an attempt to work
with the other person to find some solution that fully satisfies the concerns of both.
This strategy requires you to identify the underlying concerns of the two individuals
in conflict and find an alternative that meets both sets of concerns. This strategy
encourages teamwork and cooperation within a group. Collaboration does not create
winners and losers and does not presuppose power over others. The best decisions
are made by collaboration.
Use collaboration when:
1. Others' lives are involved.
2. You don't want to have full responsibility.
3. There is a high level of trust.
4. You want to gain commitment from others.
5. You need to work through hard feelings, animosity, etc.
Collaboration may not be the best strategy to use if time is limited and people must
act before they can work through their conflict, or there is not enough trust, respect,
or communication among the group for collaboration to occur.
we least expect it.
7. Agree to disagree. In spite of your differences, if you maintain respect for
one another and value your relationship, you will keep disagreements
from interfering with the group.
8. Don't insist on being right. There are usually several right solutions to
every problem.
CHAPTER 6
INDICATORS
Outcome indicators help to answer two fundamental questions: “How will we know
success or achievement when we see it? Are we moving toward achieving our
desired outcomes?” These are the questions that are increasingly being asked of
governments and organizations across the globe. Consequently, setting appropriate
indicators to answer these questions becomes a critical part of our 10-step model.
Developing key indicators to monitor outcomes enables managers to assess the
degree to which intended or promised outcomes are being achieved. Indicator
development is a core activity in building a results-based M&E system. It drives all
subsequent data collection, analysis, and reporting. There are also important
political and methodological considerations involved in creating good, effective
indicators.
will help managers identify those parts of an organization or government that may,
or may not, be achieving results as planned. By measuring performance indicators
on a regular, determined basis, managers and decision makers can find out whether
projects, programs, and policies are on track, off track, or even doing better than
expected against the targets set for performance. This provides an opportunity to
make adjustments, correct course, and gain valuable institutional and project,
program, or policy experience and knowledge. Ultimately, of course, it increases the
likelihood of achieving the desired outcomes.
For example, in the case of the outcome “to improve student learning,” an outcome
indicator regarding students might be the change in student scores on school
achievement tests. If students are continually improving scores on achievement tests,
it is assumed that their overall learning outcomes have also improved. Another
example is the outcome “reduce at-risk behavior of those at high risk of contracting
HIV/AIDS.” Several direct indicators might be the measurement of different risky
behaviors for those individuals most at risk.
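The on-track/off-track judgement described above can be sketched as a comparison of an indicator's latest value against its baseline and target. All names, numbers, and the 0.5 "on track" threshold below are illustrative assumptions, not part of any standard:

```python
# Hypothetical sketch: classify indicator progress against a target.
def status(baseline, target, actual):
    """Classify progress by the share of the baseline-to-target gap closed."""
    progress = (actual - baseline) / (target - baseline)
    if progress >= 1:
        return "target met or exceeded"
    if progress >= 0.5:            # illustrative threshold for "on track"
        return "on track"
    return "off track"

print(status(60, 80, 72))   # test scores: 60% of the gap closed → "on track"
print(status(90, 30, 88))   # under-five mortality (lower is better) → "off track"
```

Because progress is measured as a share of the gap between baseline and target, the same check works whether the target is an increase or a decrease.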
just a single stakeholder group. Just as important, the indicators have to be relevant
to the managers, because the focus of such a system is on performance and its
improvement.
not indicate the size of the success. Assessing the significance of an outcome
typically requires data on both number and percent" (Hatry 1999).
Qualitative indicators should be used with caution. Public sector management is not
just about documenting perceptions of progress. It is about obtaining objective
information on actual progress that will aid managers in making more well-informed
strategic decisions, aligning budgets, and managing resources. Actual progress
matters because, ultimately, M&E systems will help to provide information back to
politicians, ministers, and organizations on what they can realistically expect to
promise and accomplish. Stakeholders, for their part, will be most interested in
actual outcomes, and will press to hold managers accountable for progress toward
achieving the outcomes.
Performance indicators should be relevant to the desired outcome, and not affected
by other issues tangential to the outcome. The economic cost of setting indicators
should be considered. This means that indicators should be set with an
understanding of the likely expense of collecting and analyzing the data.
For example, in the National Poverty Reduction Strategy Paper (PRSP) for the
Kyrgyz Republic, there are about 100 national and subnational indicators spanning
more than a dozen policy reform areas. Because every indicator involves data
collection, reporting, and analysis, the Kyrgyz government will need to design and
build 100 individual M&E systems just to assess progress toward its poverty
reduction strategy. For a poor country with limited resources, this will take some
doing. Likewise, in Bolivia the PRSP initially contained 157 national-level indicators.
It soon became apparent that building an M&E system to track so many indicators
could not be sustained. The present PRSP draft for Bolivia now has 17 national level
indicators.
Indicators ought to be adequate. They should not be too indirect, too much of a
proxy, or so abstract that assessing performance becomes complicated and
problematic.
Caution should also be exercised in setting indicators according to the ease with
which data can be collected. “Too often, agencies base their selection of indicators on
how readily available the data are, not how important the outcome indicator is in
measuring the extent to which the outcomes sought are being achieved” (Hatry 1999,
p. 55).
Use of Proxy Indicators
You may not always be precise with indicators, but you can strive to be
approximately right. Sometimes it is difficult to measure the outcome indicator
directly, so proxy indicators are needed. Indirect, or proxy, indicators should be
used only when data for direct indicators are not available, when data collection will
be too costly, or if it is not feasible to collect data at regular intervals. However,
caution should be exercised in using proxy indicators, because there has to be a
presumption that the proxy indicator is giving at least approximate evidence on
performance (box 3.1).
by two-thirds the under-five mortality rate between the years 1990 and 2015.
Indicators include
a) under-five mortality rate;
b) infant mortality rate; and
c) proportion of one-year-old children immunized against measles.
In light of regional financial crises in various parts of the world, the IMF is in the
process of devising a set of Financial Soundness Indicators.
These are indicators of the current financial health and soundness of a given
country’s financial institutions, corporations, and households. They include
indicators of capital adequacy, asset quality, earnings and profitability, liquidity,
and sensitivity to market risk (IMF 2003).
On a more general level, the IMF also monitors and publishes a series of
macroeconomic indicators that may be useful to governments and organizations.
These include output indicators, fiscal and monetary indicators, balance of
payments, external debt indicators, and the like.
There are a number of pros and cons associated with using predesigned indicators:
Pros:
• They can be aggregated across similar projects, programs, and policies.
• They reduce costs of building multiple unique measurement systems.
• They make possible greater harmonization of donor requirements.
Cons:
• They often do not address country specific goals.
• They are often viewed as imposed, as coming from the top down.
• They do not promote key stakeholder participation and ownership.
• They can lead to the adoption of multiple competing indicators.
There are difficulties in deciding on what criteria to employ when one chooses one
set of predesigned indicators over another.
Predesigned indicators may not be relevant to a given country or organizational
context. There may be pressure from external stakeholders to adopt predesigned
indicators, but it is our view that indicators should be internally driven and tailored
to the needs of the organization and to the information requirements of the
managers, to the extent possible. For example, many countries will have to use some
predesigned indicators to address the MDGs, but each country should then
disaggregate those goals to be appropriate to their own particular strategic objectives
and the information needs of the relevant sectors.
Constructing Indicators
Constructing indicators takes work. It is especially important that competent
technical, substantive, and policy experts participate in the process of indicator
construction. All perspectives need to be taken into account—substantive, technical,
and policy—when considering indicators. Are the indicators substantively feasible,
technically doable, and policy relevant? Going back to the example of an outcome
that aims to improve student learning, it is very important to make sure that
education professionals, technical people who can construct learning indicators, and
policy experts who can vouch for the policy relevance of the indicators, are all
included in the discussion about which indicators should be selected.
Indicators should be constructed to meet specific needs. They also need to be a direct
reflection of the outcome itself. And over time, new indicators will probably be
adopted and others dropped. This is to be expected. However, caution should be
used in dropping or modifying indicators until at least three measurements have
been taken.
Taking at least three measurements helps establish a baseline and a trend over time.
Two important questions should be answered before changing or dropping an
indicator: Have we tested this indicator thoroughly enough to know whether it is
providing information to effectively measure against the desired outcome? Is this
indicator providing information that makes it useful as a management tool?
It should also be noted that in changing indicators, baselines against which to
measure progress are also changing. Each new indicator needs to have its own
baseline established the first time data are collected for it.
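The baseline-and-trend idea can be sketched simply in Python. The values below are invented for illustration: with at least three measurements, a direction of change becomes visible, rather than a single-point comparison against the baseline.

```python
# Invented measurements for one indicator (e.g. % of students meeting a
# learning benchmark), taken at three successive collection points.
measurements = [62.0, 64.5, 68.0]

baseline = measurements[0]  # the first data point collected becomes the baseline
changes = [b - a for a, b in zip(measurements, measurements[1:])]
trend = "improving" if all(c > 0 for c in changes) else "mixed/declining"
print(baseline, changes, trend)  # → 62.0 [2.5, 3.5] improving
```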
Performance indicators can and should be used to monitor outcomes and provide
continuous feedback and streams of data throughout the project, program, or policy
cycle. In addition to using indicators to monitor inputs, activities, outputs, and
outcomes, indicators can yield a wealth of performance information about the
process of and progress toward achieving these outcomes. Information from
indicators can help to alert managers to performance discrepancies, shortfalls in
reaching targets, and other variabilities or deviations from the desired outcome.
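A monitoring system can automate such alerts. The sketch below flags indicators whose latest value deviates from its target by more than a set tolerance; the indicator names, targets, actuals, and tolerance are all invented for illustration.

```python
# Invented indicator data: (indicator name, target, actual value this period).
indicators = [
    ("Under-5 mortality per 1,000", 45, 52),
    ("Measles immunization %", 90, 93),
    ("Net primary enrolment %", 95, 88),
]

def shortfalls(data, tolerance=0.05):
    """Flag indicators whose actual value deviates from the target by more
    than `tolerance`, expressed as a fraction of the target. The sketch
    flags deviation in either direction for a manager to review, since for
    some indicators (e.g. mortality) a higher value is worse."""
    flagged = []
    for name, target, actual in data:
        gap = (actual - target) / target
        if abs(gap) > tolerance:
            flagged.append((name, round(gap, 3)))
    return flagged

print(shortfalls(indicators))
# → [('Under-5 mortality per 1,000', 0.156), ('Net primary enrolment %', -0.074)]
```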
CHAPTER 7
In the past, a company typically decided to undertake a project effort, assigned the
project and the "necessary" resources to a carefully selected individual and assumed
they were using some form of project management. Organizational implications
were of little importance. Although the basic concepts of project management are
simple, applying these concepts to an existing organization is not. Richard P. Olsen,
in his article "Can Project Management Be Defined?" defined project management as
"…the application of a collection of tools and techniques…to direct the use of diverse
resources toward the accomplishment of a unique, complex, one-time task within
time, cost, and quality constraints. Each task requires a particular mix of these tools
and techniques structured to fit the task environment and life cycle (from conception
to completion) of the task."
The project management process typically includes four key phases: initiating the
project, planning the project, executing the project, and closing the project. An
outline of each phase is provided below.
2. Establishing a relationship with the customer. The understanding of your
customer's organization will foster a stronger relationship between the two of
you.
3. Establishing the project initiation plan. Defines the activities required to organize
the team while working to define the goals and scope of the project.
4. Establishing management procedures. Concerned with developing team
communication and reporting procedures, job assignments and roles, project
change procedure, and how project funding and billing will be handled.
5. Establishing the project management environment and workbook. Focuses on the
collection and organization of the tools that you will use while managing the
project.
Planning the Project
The project management techniques related to the project planning phase include:
1. Describing project scope, alternatives, and feasibility. The understanding of the
content and complexity of the project. Some relevant questions that should be
answered include:
o What problem/opportunity does the project address?
o What results are to be achieved?
o What needs to be done?
o How will success be measured?
o How will we know when we are finished?
2. Dividing the project into tasks. This technique is also known as the work
breakdown structure. This step is done to ensure an easy progression between
tasks.
3. Estimating resources and creating a resource plan. This helps to gather and
arrange resources in the most effective manner.
4. Developing a preliminary schedule. In this step, you are to assign time estimates
to each activity in the work breakdown structure. From here, you will be able
to create the target start and end dates for the project.
5. Developing a communication plan. The idea here is to outline the
communication procedures between management, team members, and the
customer.
6. Determining project standards and procedures. The specification of how various
deliverables are produced and tested by the project team.
7. Identifying and assessing risk. The goal here is to identify potential sources of
risk and the consequences of those risks.
8. Creating a preliminary budget. The budget should summarize the planned
expenses and revenues related to the project.
9. Developing a statement of work. This document will list the work to be done and
the expected outcome of the project.
10. Setting a baseline project plan. This should provide an estimate of the project's
tasks and resource requirements.
Executing the Project
The project management techniques related to the project execution phase include:
1. Executing the baseline project plan. The job of the project manager is to initiate
the execution of project activities, acquire and assign resources, orient and
train new team members, keep the project on schedule, and assure the quality
of project deliverables.
2. Monitoring project progress against the baseline project plan. Using Gantt and
PERT charts, which will be discussed in detail further on in this paper, can
assist the project manager in doing this.
3. Managing changes to the baseline project plan.
4. Maintaining the project workbook. Maintaining complete records of all project
events is necessary. The project workbook is the primary source of
information for producing all project reports.
5. Communicating the project status. This means that the entire project plan should
be shared with the entire project team and any revisions to the plan should be
communicated to all interested parties so that everyone understands how the
plan is evolving.
Closing Down the Project
The project management techniques related to the project closedown phase include:
1. Closing down the project. In this stage, it is important to notify all interested
parties of the completion of the project. Also, all project documentation and
records should be finalized so that the final review of the project can be
conducted.
2. Conducting post project reviews. This is done to determine the strengths and
weaknesses of project deliverables, the processes used to create them, and the
project management process.
3. Closing the customer contract. The final activity is to ensure that all contractual
terms of the project have been met.
The techniques listed above in the four key phases of project management enable a
project team to:
• Link project goals and objectives to stakeholder needs.
• Focus on customer needs.
• Build high-performance project teams.
• Work across functional boundaries.
• Develop work breakdown structures.
• Estimate project costs and schedules.
• Meet time constraints.
• Calculate risks.
• Establish a dependable project control and monitoring system.
Tools
Project management is a challenging task with many complex responsibilities.
Fortunately, there are many tools available to assist with accomplishing the tasks
and executing the responsibilities. Some require a computer with supporting
software, while others can be used manually. Project managers should choose a
project management tool that best suits their management style. No one tool
addresses all project management needs. Program Evaluation Review Technique
(PERT) and Gantt Charts are two of the most commonly used project management
tools and are described below. Both of these project management tools can be
produced manually or with commercially available project management software.
Program Evaluation and Review Technique (PERT) is a scheduling method
originally designed to plan a manufacturing project by employing a network of
interrelated activities, coordinating optimum cost and time criteria. PERT
emphasizes the relationship between the time each activity takes,
the costs associated with each phase, and the resulting time and cost for the
anticipated completion of the entire project.
PERT was first developed in 1958 by the U.S. Navy Special Projects Office on the
Polaris missile system. Existing integrated planning on such a large scale was
deemed inadequate, so the Navy pulled in the Lockheed Aircraft Corporation and
the management consulting firm of Booz, Allen, and Hamilton. Traditional
techniques such as line of balance, Gantt charts, and other systems were eliminated,
and PERT evolved as a means to deal with the varied time periods it takes to finish
the critical activities of an overall project.
PERT is a planning and control tool used for defining and controlling the tasks
necessary to complete a project. PERT charts and Critical Path Method (CPM) charts
are often used interchangeably; the only difference is how task times are computed.
Both charts display the total project with all scheduled tasks shown in sequence. The
displayed tasks show which ones are in parallel, those tasks that can be performed at
the same time. A graphic representation called a "Project Network" or "CPM
Diagram" is used to portray graphically the interrelationships of the elements of a
project and to show the order in which the activities must be performed.
PERT planning involves the following steps:
1. Identify the specific activities and milestones. The activities are the tasks of the
project. The milestones are the events that mark the beginning and the end of
one or more activities.
2. Determine the proper sequence of activities. This step may be combined with #1
above since the activity sequence is evident for some tasks. Other tasks may
require some analysis to determine the exact order in which they should be
performed.
3. Construct a network diagram. Using the activity sequence information, a
network diagram can be drawn showing the sequence of the successive and
parallel activities. Arrowed lines represent the activities and circles or
"bubbles" represent milestones.
4. Estimate the time required for each activity. Weeks are a commonly used unit of
time for activity completion, but any consistent unit of time can be used. A
distinguishing feature of PERT is its ability to deal with uncertainty in activity
completion times. For each activity, the model usually includes three time
estimates:
o Optimistic time - the shortest time in which the activity can be
completed.
o Most likely time - the completion time having the highest probability.
o Pessimistic time - the longest time that an activity may take.
From this, the expected time for each activity can be calculated using the
following weighted average:
Expected Time = (Optimistic + 4 x Most Likely + Pessimistic) / 6
This helps to bias time estimates away from the unrealistically short
timescales normally assumed.
5. Determine the critical path. The critical path is determined by adding the times
for the activities in each sequence and determining the longest path in the
project. The critical path determines the total calendar time required for the
project. The amount of time that a non-critical path activity can be delayed
without delaying the project is referred to as slack time.
If the critical path is not immediately obvious, it may be helpful to determine the
following four times for each activity:
o ES - Earliest Start time
o EF - Earliest Finish time
o LS - Latest Start time
o LF - Latest Finish time
These times are calculated using the expected time for the relevant activities.
The earliest start and finish times of each activity are determined by working
forward through the network and determining the earliest time at which an
activity can start and finish considering its predecessor activities. The latest
start and finish times are the latest times that an activity can start and finish
without delaying the project. LS and LF are found by working backward
through the network. The difference in the latest and earliest finish of each
activity is that activity's slack. The critical path then is the path through the
network in which none of the activities have slack.
The variance in the project completion time can be calculated by summing the
variances in the completion times of the activities in the critical path. Given
this variance, one can calculate the probability that the project will be
completed by a certain date assuming a normal probability distribution for
the critical path. The normal distribution assumption holds if the number of
activities in the path is large enough for the central limit theorem to be
applied.
6. Update the PERT chart as the project progresses. As the project unfolds, the
estimated times can be replaced with actual times. In cases where there are
delays, additional resources may be needed to stay on schedule and the PERT
chart may be modified to reflect the new situation. An example of a PERT
chart is provided below:
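As a complement, the calculations in steps 4 and 5 can be sketched in Python for a small invented network. The activity names, three-point estimates, and the deadline are illustrative, and the per-activity variance uses the conventional PERT estimate ((Pessimistic − Optimistic)/6)²:

```python
from math import erf, sqrt

def expected_time(o, m, p):
    """Three-point (PERT) estimate: (Optimistic + 4 x Most Likely + Pessimistic) / 6."""
    return (o + 4 * m + p) / 6

def variance(o, p):
    """Conventional PERT activity variance: ((Pessimistic - Optimistic) / 6) ** 2."""
    return ((p - o) / 6) ** 2

# Invented network: name -> (optimistic, most likely, pessimistic, predecessors).
# Activities are listed so that every activity appears after its predecessors.
network = {
    "A": (2, 3, 4, []),
    "B": (3, 5, 13, ["A"]),
    "C": (1, 2, 3, ["A"]),
    "D": (2, 4, 6, ["B", "C"]),
}
t = {n: expected_time(o, m, p) for n, (o, m, p, _) in network.items()}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for n, (_, _, _, preds) in network.items():
    ES[n] = max((EF[p] for p in preds), default=0)
    EF[n] = ES[n] + t[n]

# Backward pass: latest finish (LF) and latest start (LS).
end = max(EF.values())
LS, LF = {}, {}
for n in reversed(list(network)):
    succs = [s for s, (_, _, _, ps) in network.items() if n in ps]
    LF[n] = min((LS[s] for s in succs), default=end)
    LS[n] = LF[n] - t[n]

# Zero-slack activities form the critical path.
critical = [n for n in network if abs(LF[n] - EF[n]) < 1e-9]

# Probability of finishing by an (invented) deadline, assuming the sum of
# critical-path times is approximately normal (central limit theorem).
deadline = 15
z = (deadline - end) / sqrt(sum(variance(network[n][0], network[n][2]) for n in critical))
prob = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
print(end, critical, round(prob, 2))  # → 13.0 ['A', 'B', 'D'] 0.86
```

Here activity C has slack (it can slip without delaying the project), so only A, B, and D are critical.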
Benefits of using a PERT chart or the Critical Path Method include:
• Improved planning and scheduling of activities.
• Improved forecasting of resource requirements.
• Identification of repetitive planning patterns which can be followed in other
projects, thus simplifying the planning process.
• Ability to see, and thus reschedule, activities to reflect interproject
dependencies and resource limitations following known priority rules.
PERT also provides the following: the expected project completion time, the
probability of completion before a specified date, the critical path activities
that impact completion time, the activities that have slack time and can lend
resources to critical path activities, and activity start and end dates.
Gantt charts are used to show calendar time task assignments in days, weeks or
months. The tool uses graphic representations to show start, elapsed, and
completion times of each task within a project. Gantt charts are ideal for tracking
progress. The number of days actually required to complete a task that reaches a
milestone can be compared with the planned or estimated number. The actual
workdays, from actual start to actual finish, are plotted below the scheduled days.
This information helps target potential timeline slippage or failure points. These
charts serve as a valuable budgeting tool and can show dollars allocated versus
dollars spent.
To draw up a Gantt chart, follow these steps:
1. List all activities in the plan. For each task, show the earliest start date,
estimated length of time it will take, and whether it is parallel or sequential. If
tasks are sequential, show which stages they depend on.
2. Head up graph paper with the days or weeks through completion.
3. Plot tasks onto graph paper. Show each task starting on the earliest possible
date. Draw it as a bar, with the length of the bar being the length of the task.
Above the task bars, mark the time taken to complete them.
4. Schedule activities. Schedule them in such a way that sequential actions are
carried out in the required sequence. Ensure that dependent activities do not
start until the activities they depend on have been completed. Where possible,
schedule parallel tasks so that they do not interfere with sequential actions on
the critical path. While scheduling, ensure that you make best use of the
resources you have available, and do not over-commit resources. Also, allow
some slack time in the schedule for holdups, overruns, failures, etc.
5. Present the analysis. In the final version of your Gantt chart, combine your
draft analysis (#3 above) with your scheduling and analysis of resources (#4
above). This chart will show when you anticipate that jobs should start and
finish. An example of a Gantt chart is provided below:
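In addition, the drawing steps above can be mimicked with a minimal text rendering in Python. The task names and schedule are invented for illustration; a real chart would normally come from project management software.

```python
def render_gantt(tasks):
    """Return one text row per task; '#' marks scheduled weeks and
    '.' marks weeks in which the task is not active."""
    weeks = max(start + length for _, start, length in tasks)
    rows = []
    for name, start, length in tasks:
        bar = "." * start + "#" * length + "." * (weeks - start - length)
        rows.append(f"{name:<10}{bar}")
    return rows

# Invented schedule: (task, earliest start week, duration in weeks).
# "Test" is deliberately scheduled in parallel with the tail of "Build".
tasks = [("Scope", 0, 2), ("Design", 2, 3), ("Build", 5, 4), ("Test", 7, 2)]
print("\n".join(render_gantt(tasks)))
```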
Benefits of using a Gantt chart include:
• Makes it easy to develop "what if" scenarios.
• Enables better project control by promoting clearer communication.
• Becomes a tool for negotiations.
• Shows the actual progress against the planned schedule.
• Can report results at appropriate levels.
• Allows comparison of multiple projects to determine risk or resource
allocation.
• Rewards the project manager with more visibility and control over the
project.
ASSIGNMENT ONE
1. a) Outline the responsibilities of a project manager.
b) Discuss each of the phases of the project life cycle.
2. State and explain the project preparation process
3. You have been tasked to identify a project, state and explain various sources
through which the project can be identified.
4. a) State and explain the tools for managing a project team.
b) Explain the inputs for developing a team.
5. What are the qualities of a good indicator? Give an example
CHAPTER 8
To produce credible information that will be useful for decision makers, evaluations
must be designed with a clear understanding of the initiative, how it operates, how
it was intended to operate, why it operates the way it does and the results that it
produces. It is not enough to know what worked and what did not work (that is,
whether intended outcomes or outputs were achieved or not). To inform action,
evaluations must provide credible information about why an initiative produced the
results that it did and identify what factors contributed to the results (both positive
and negative). Understanding exactly what was implemented and why provides the
basis for understanding the relevance or meaning of project or programme results.
Therefore, evaluations should be built on a thorough understanding of the initiative
that is being evaluated, including the expected results chain (inputs, outputs and
intended outcomes), its implementation strategy, its coverage, and the key
assumptions and risks underlying the Results Map or Theory of Change.
anticipated outputs or outcomes, or make it difficult to measure the attainment of
intended outputs or outcomes or the contribution of outputs to outcomes. In
addition, understanding the political, cultural and institutional setting of the
evaluation can provide essential clues for how best to design and conduct the
evaluation to ensure the impartiality, credibility and usefulness of evaluation results.
The purpose and timing of an evaluation should be determined at the time of
developing an evaluation plan (see Chapter 3 for more information). The purpose
statement can be further elaborated at the time a ToR for the evaluation is drafted to
inform the evaluation design.
Focusing the Evaluation
Evaluation Scope
The evaluation scope narrows the focus of the evaluation by setting the boundaries
for what the evaluation will and will not cover in meeting the evaluation purpose.
The scope specifies those aspects of the initiative and its context that are within the
boundaries of the evaluation. The scope defines, for example:
• The unit of analysis to be covered by the evaluation, such as a system of
related programmes, policies or strategies, a single programme involving a
cluster of projects, a single project, or a subcomponent or process within a
project
• The time period or phase(s) of the implementation that will be covered
• The funds actually expended at the time of the evaluation versus the total
amount allocated
• The geographical coverage
• The target groups or beneficiaries to be included
The scope helps focus the selection of evaluation questions to those that fall within
the defined boundaries.
CHAPTER 9
STAKEHOLDER ANALYSIS
Stakeholders therefore go beyond the target group, and extend to those that may
have something to bring to assist the project, or those that may resist the project
taking place. When identifying stakeholders, it is important to consider potentially
marginalized groups, such as women, the elderly, youth, the disabled and the poor,
so that they are represented in the process, especially if the issue will affect their
lives.
You should aim to identify the motivation or constraints to change from the aspect
of the target group(s), so that you can better understand the underlying causes to the
issue you seek to overcome. This is particularly important if you have more than one
target group, or a diverse group (e.g. urban and rural households). You can use
relevant and up to date information from the literature review, as well as directly
engaging stakeholders to complete the stakeholder analysis.
Stakeholder analysis is used to understand who the key actors are around a given
issue and to gauge the importance of different groups' interests and potential
influence. It also serves to highlight groups who are most affected by a given issue
and least able to influence the situation.
A capacity-building approach to the projects should seek to increase primary
stakeholders’ influence over the achievement of a goal (i.e. move primary
stakeholders towards sector 1 in the diagram below).
                     Win
               1     |     2
 Influence ----------+---------- Be influenced
               3     |     4
                    Lose
Two axes (influence/be influenced and win/lose) divide the diagram into four
areas:
Sector 1: Those who can influence the situation and benefit from it; examples:
Outsiders: local and international NGOs, political factions;
Primary stakeholders: influential actors (e.g. leaders).
Sector 2: Those who are influenced by the changes and will benefit from them;
examples:
Primary stakeholders;
Non-primary stakeholders who will nonetheless gain from the project’s
outcomes.
Sector 3: Those who cannot influence the achievement of a goal and will be affected
negatively by it; examples:
Primary stakeholders and outsiders whose status or relative wealth is changed by
an activity.
Sector 4: Those who can influence but will lose from the achievement of a goal. This
is an important area to consider, as it will include those who actively oppose the
achievement of a project; examples:
External factions of local leaders among the primary stakeholders opposed to
change of their status.
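The four sectors amount to a simple two-axis classification, which can be sketched in Python. The stakeholder names and their attributes below are invented for illustration; in practice the judgements would come from the stakeholder analysis itself.

```python
def classify(can_influence, benefits):
    """Map a stakeholder onto the four sectors of the win/lose x
    influence/be-influenced matrix described above."""
    if can_influence:
        return 1 if benefits else 4  # influential: benefits -> 1, loses -> 4
    return 2 if benefits else 3      # influenced: benefits -> 2, loses -> 3

# Invented stakeholders: (name, can influence?, benefits from the goal?).
stakeholders = [
    ("Local NGO", True, True),
    ("Rural households", False, True),
    ("Displaced traders", False, False),
    ("Opposing faction", True, False),
]
for name, infl, win in stakeholders:
    print(name, "-> sector", classify(infl, win))
```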
Matrix for stakeholder analysis
Source: Benjamin Crosby (March 1992), “Stakeholder Analysis: A Vital Tool for
Strategic Managers”.
When evaluations are used effectively, they support programme improvements,
knowledge generation and accountability.
Supporting programme improvements—Did it work or not, and why? How
could it be done differently for better results?
The interest is on what works, why and in what context. Decision makers, such as
managers, use evaluations to make necessary improvements, adjustments to the
implementation approach or strategies, and to decide on alternatives. Evaluations
addressing these questions need to provide concrete information on how
improvements could be made or what alternatives exist to address the necessary
improvements.
Building knowledge for generalizability and wider-application—What can we
learn from the evaluation? How can we apply this knowledge to other
contexts?
The main interest is in the development of knowledge for global use and for
generalization to other contexts and situations. When the interest is on knowledge
generation, evaluations generally apply more rigorous methodology to ensure a
higher level of accuracy in the evaluation and the information being produced to
allow for generalizability and wider-application beyond a particular context.
CHAPTER 10
The interest here is on determining the merit or worth and value of an initiative and
its quality. An effective accountability framework requires credible and objective
information, and evaluations can deliver such information. Evaluations help ensure
that UNDP goals and initiatives are aligned with and support the Millennium
Declaration, MDGs, and global, national and corporate priorities. UNDP is
accountable for providing evaluative evidence that links UNDP contributions to the
achievement of development results in a given country and for delivering services
that are based on the principles of human development. By providing such objective
and independent assessments, evaluations in UNDP support the organization’s
accountability towards its Executive Board, donors, governments, national partners
and beneficiaries.
The intended use determines the timing of an evaluation, its methodological
framework, and level and nature of stakeholder participation. Therefore, the use has
to be determined at the planning stage.
Monitoring and Evaluation is important because
• it provides the only consolidated source of information showcasing project
progress;
• it allows actors to learn from each other’s experiences, building on expertise
and knowledge;
• it often generates (written) reports that contribute to transparency and
accountability, and allows for lessons to be shared more easily;
• it reveals mistakes and offers paths for learning and improvements;
• it provides a basis for questioning and testing assumptions;
• it provides a means for agencies seeking to learn from their experiences and
to incorporate them into policy and practice;
• it provides a way to assess the crucial link between implementers and
beneficiaries on the ground and decision-makers;
• it adds to the retention and development of institutional memory;
• it provides a more robust basis for raising funds and influencing policy.
Points to note:
For any monitoring and evaluation to be useful, the organization must ensure
that the evaluation is:
1) Independent—Management must not impose restrictions on the scope,
content, comments and recommendations of evaluation reports. Evaluators
must be free of conflicts of interest.
2) Intentional—The rationale for an evaluation and the decisions to be based on
it should be clear from the outset.
3) Transparent—Meaningful consultation with stakeholders is essential for the
credibility and utility of the evaluation.
4) Ethical—Evaluation should not reflect personal or sectoral interests.
Evaluators must have professional integrity, respect the rights of institutions
and individuals to provide information in confidence, and be sensitive to the
beliefs and customs of local social and cultural environments.
5) Impartial—Removing bias and maximizing objectivity are critical for the
credibility of the evaluation and its contribution to knowledge.
6) Of high quality—All evaluations should meet minimum quality standards
defined by the Evaluation Office.
7) Timely—Evaluations must be designed and completed in a timely fashion so
as to ensure the usefulness of the findings and recommendations.
8) Used—Evaluation is a management discipline that seeks to provide
information to be used for evidence-based decision making. To enhance the
usefulness of the findings and recommendations, key stakeholders should be
engaged in various ways in the conduct of the evaluation.
One step in the planning process includes understanding and recognizing the
interests of stakeholders in the evaluation. The stakeholders include community
leaders, evaluators, and funders, and you will want to know how the evaluation will be
used by each of them.
The evaluation should respond to the interests of those three stakeholders, and
nothing is more productive than designing it together. The evaluation can serve the
community leaders' interests, the funders' interests, and the evaluators' interests in a
single useful product, if you know what they want before you start. It's important to
define the stakeholder’s interests in using the evaluation so that it can focus on
optimally answering questions important to all of them.
What do we mean by needs and interests? Needs and interests are those qualities
which community leadership, evaluators, and funders see as important for doing
their jobs well. Because each of these stakeholders is looking at the evaluation from a
unique perspective, it helps to recognize those differences, and incorporate them
into the evaluation.
For starters, let's consider why you'd want to conduct an evaluation in the first
place.
There are many basic reasons why stakeholders want an evaluation:
To be accountable as a public operation
To assist those who are receiving grants to improve
To improve a foundation's grant making
To assess the quality or impact of funded programs
To plan and implement new programs
To disseminate innovative programs
To increase knowledge
A stakeholder may want an evaluation for one, two or all of these reasons.
Evaluators may want to increase knowledge, funders may want to improve grant
making and community leaders may want to assess quality. Community leaders
may not want to do more than answer a phone interview by a student intern, evaluators
may be interested in systematic, disciplined inquiry, and funders may look for
accountability.
When it comes time for evaluation, you don't have to be specialists in order to make
good decisions about what you will do. You should, however, be knowledgeable
about uses of evaluations and how they match the many interests involved so that
you can make informed choices.
Who are Community Leaders, Evaluators and Funders?
Community Leaders
May include staff, administrators, committee chairpersons, agency personnel and
civic leaders, and trustees of an initiative. They may have little knowledge of
evaluation, and may not feel they have much time to provide data or read data reports. Yet,
the evaluation must be responsive, useful and sensitive to their decision-making
requirements. They often are interested in how to improve the functioning of their
initiative.
Evaluators
Are often professionals, though anyone can design and implement an evaluation.
There are several professional associations that support evaluators and have
established standards of practice, such as the American Evaluation
Association or its Collaborative, Participatory, and Empowerment Evaluation
Topical Interest Group. Evaluators can be private consultants, university or
foundation staff, or a member of the initiative. Evaluators are often interested in the
systematic production of useful, reliable information.
Funders
Are those individuals or organizations that provide financial support for the
initiative. They might include program officers or other representatives of
government agencies, foundations, or other sources of financial support. Some
funders have built a formal evaluation into their regular activities, but they are in the
minority. Funders are often interested in whether the use of their funds is having an
impact on the problems facing communities.
that will be convincing and useful to the target audiences. Knowing this will help
you decide what information is needed and the tools you could use to obtain it.
While you may know your group does good work, chances are good that other
important members of the community do not know what you do. Consequently,
others who have supported and encouraged your efforts will want to know what
has worked, and what hasn't worked; and what should change and what should
stay the same. Because these groups or individuals might be instrumental in
assisting your work, financially or otherwise, it makes good sense to include their
needs in the evaluation process.
Even more important is the requirement that the information is used effectively. The
question is: To whom is it useful? If there's no direction to your information
gathering, you can collect just about anything you want, but so what? If it doesn't
matter to anyone else, it is meaningless. If I collect information about the number of
people that my agency serves, I may find that useful, especially if I'm reimbursed for
that number. But what if someone really wanted to know whether the efforts of the
agencies in town had an impact on a health problem? The number of people my
agency serves might not be that useful.
The interests, which help us determine the information we need, lead us to
develop tools to collect it. In other words, it is the interests of the stakeholders that
shape the inquiry.
What are the interests of the stakeholders?
What information will help them?
What questions will you ask to get that information?
What tools will help you collect it?
In the long run, including the stakeholders in the process will lead to greater
collaboration and organizational capacity to solve community problems.
Understanding stakeholders' interests will enable you to employ your resources
better. Knowing what everyone wants and needs will help you plan the optimal
evaluation.
When Should You Understand The Interests Of These Groups?
You will want to identify stakeholders from the get-go. By going through a process
of stakeholder identification before you begin evaluating, you will be able to obtain
their views and incorporate their ideas and needs into the evaluation itself.
Of course, the sooner you identify the needs and interests of those groups, the
sooner you will be able to gain understanding of the different issues each group is
interested in without wasting time or money. You also have to be watchful so that if
interests change, you can adapt to those changes in a timely manner and keep your
evaluation valid.
First and foremost, you and other members of the group will need to sit down, pour
a cup of Joe, and grab a pencil. Think about the individuals and groups that have
needs that should be addressed in the evaluation. You should try to figure out what
their interests are.
Of course, some people may ask, "Why them? Our group's interests and needs
should be the focus of the evaluation." In one sense, this is true. One of the main
purposes of the evaluation process is to provide feedback and ideas to the
group itself so members can improve and strengthen their efforts. But remember,
everyone is in this together: the community, the funders, and those who will conduct the
evaluation. You want the best information possible: information that will help you
make the best decisions.
To identify stakeholders, ask yourself these questions:
Who provides funding for our initiative?
Who will conduct the evaluation?
Who do we collaborate with?
Once you know who these people are, find out what they want. To do this, let's take
a look at the groups. Then, we'll talk about the specific needs and interests of
members of each of these groups.
Community Leaders
What will this group need from your evaluation? The information should be:
Clear and understandable: They may have limited knowledge about the
goings-on of your group, or about evaluations. Immediately, then, you know
that the evaluation must be clear and understandable.
Efficient: They probably have a variety of different responsibilities which
demand their time and consideration, so they won't want to waste time
reading information irrelevant to their needs.
Responsive: They may include decision-makers that can affect the future of
your group. Therefore, your evaluation needs to be responsive to their
decision-making requirements.
Sensitive: They will want to know what the initiative has accomplished, so
the evaluation should be sensitive to the activities and accomplishments of
the initiative.
Useful: They will include decision-makers for the initiative, so the evaluation
needs to show them how their efforts can be improved.
Evaluators
They will be assessing the effectiveness of the initiative in meeting its goals.
What do they need to get out of the evaluation?
Input: The evaluation team needs to receive input from the initiative's clients,
including community leadership, funders, and members of the initiative
itself, in order to know what the clients want to learn about the initiative.
Accurate and Complete Information: In a similar vein, the evaluators need
accurate and complete information in order to answer the questions posed by
the stakeholders.
Cooperation: Finally, the evaluators will need cooperation from participants
and officials in order to obtain needed data.
Funders
They will need:
Clear and timely reports: Because of their responsibilities for making
decisions concerning the continuation of financial support, the funders will
need information about the progress of the initiative.
Evidence of community change and impact: Funders will need to be able to
measure the success of the initiative and report this to their own trustees or
constituents.
Some of the questions that you can ask stakeholders to match them with their
interests in an evaluation are:
What are the evaluation's strengths and weaknesses?
Do you think the evaluation is moving toward its desired outcomes?
Which kind of implementation problems came up, and how are they being
addressed?
How are staff and clients interacting?
What is happening that wasn't expected?
What do you like, dislike, or would like to change in the evaluation process?
From the answers you get, you can determine what each party wants out of the
evaluation. You can also group those who have similar interests. For instance, you
may find out that a community leader and an evaluator are interested in improving
their managing abilities through the evaluation process. You have made a match,
and those two will work toward a common goal.
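The grouping step above can be sketched in code. The following is a minimal, hypothetical illustration (the stakeholder names and interest labels are invented for the example, not drawn from this manual): each stakeholder's stated interests are recorded, and any interest named by two or more stakeholders is flagged as a match.

```python
# A minimal sketch (with hypothetical data) of grouping stakeholders by
# shared interests: stakeholders who name the same interest are matched
# so they can work toward a common goal.
from collections import defaultdict

def match_by_interest(stakeholder_interests):
    """Map each interest to the stakeholders who named it, keeping
    only interests shared by two or more stakeholders."""
    matches = defaultdict(list)
    for stakeholder, interests in stakeholder_interests.items():
        for interest in interests:
            matches[interest].append(stakeholder)
    return {i: s for i, s in matches.items() if len(s) > 1}

# Hypothetical answers gathered through interviews or surveys:
answers = {
    "community leader": ["improve management", "show accomplishments"],
    "evaluator": ["improve management", "systematic inquiry"],
    "funder": ["accountability", "show accomplishments"],
}

print(match_by_interest(answers))
# → {'improve management': ['community leader', 'evaluator'],
#    'show accomplishments': ['community leader', 'funder']}
```

A table on paper serves the same purpose for a small group; the point is simply to make shared interests visible before designing the evaluation.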
Here are some ways of determining interests and matching them with
stakeholders:
Interviews: Get a representative from each group of stakeholders and ask
away. Be direct in your questions so that you can quickly get to the point:
what interests this stakeholder has and whether they match other
stakeholders' interests.
Surveys: You can send out a written questionnaire to assess how the
stakeholders rank their interests and which group wants what. The survey
must be succinct and direct, asking clear questions about the evaluation in
terms of quality and goals. Survey results are easy to utilize and can be
helpful for the evaluation presentation.
Phone surveys: They can save you time and money, if you're doing them
locally. You can use the same questions you would use in a written survey,
but leave more space for commentary, as people tend to talk more when
speaking to a person on the phone. Just be sure your phone surveys don't
stray from your objective.
Brainstorm sessions: Arrange a meeting with stakeholder representatives and
brainstorm interests and possibilities for the evaluation's outcome. Bring up
problems such as continuity of the program, obtaining funds, coordinating
activities, and attracting staff, and let stakeholders have their say. Everybody
will come out from the brainstorming session with new ideas and a much
better notion of everybody else's ideas.
Besides these methods, you should always conduct a survey after the completion of
the evaluation. This will benefit external audiences and decision-makers. Remember,
if changes need to be made, don't be afraid to follow through with them.
Remember that decisions about how to improve a program tend to be made in small,
incremental steps based on specific findings aimed at making the evaluation a better
process for all stakeholders involved.
Now armed with this list of needs and interests, you can find or develop the tools to
obtain useful information. The next sections will explore ways to select an evaluation
team and present some key questions for the evaluation process. Later, we will be
discussing how to evaluate your community initiative!
In Summary
Once you have a clear idea of what each stakeholder really wants, you are very
likely to succeed in your evaluation. Be sure to revisit the interests of all
the stakeholders frequently so that you don't lose sight of what you're looking for
with your evaluation. The hard part of your evaluation work starts now!
CHAPTER 11
CLUSTER DEVELOPMENT
national counterpart agency and any other bodies that have a clear stake in
the initiative.
Phase 5 - Implementation:
Implementation refers to the entire set of joint actions that are required to
realize the long-term vision of the cluster. It is not the mandate of the
implementing agency or, more specifically, of the CDA, to directly carry out
all project activities. Rather, the CDA facilitates the undertaking of activities
through the establishment of partnerships with both private organizations
and other public institutions and, of course, based on the capabilities (present
and future) of the cluster firms.
Why Clusters
Clusters have gained increasing prominence in debates on economic development in
recent years. Governments worldwide regard clusters as potential drivers of
enterprise development and innovation. Cluster initiatives are also considered to be
efficient policy instruments in that they allow for a concentration of resources and
funding in targeted areas with a high growth and development potential that can
spread beyond the target locations (spillover and multiplier effects).
clustering provides enterprises with access to specialized suppliers and support
services, experienced and skilled labor and the knowledge sharing that occurs when
people meet and talk about business.
Clusters are also particularly promising environments for SME development. Due to
their small size, SMEs individually are often unable to realize economies of scale and
thus find it difficult to take advantage of market opportunities that require the
delivery of large stocks of standardized products or compliance with international
standards. They also tend to have limited bargaining power in inputs purchase, do
not command the resources required to buy specialized support services, and have
little influence in the definition of support policies and services.
Within clusters, SMEs can realize shared gains through the organization of joint
actions between cluster enterprises (e.g., joint bulk inputs purchase or joint
advertising, or shared use of equipment) and between enterprises and their support
institutions (e.g., provision of technical assistance by business associations or
investments in infrastructure by the public sector). The advantage accruing to the
cluster from such collective efforts is referred to as collective efficiency.
Development Principles
The underlying concern of the UNIDO Approach is the stimulation of pro-poor
growth, defined as a pattern of economic growth that creates opportunities for the
poor, and generates the conditions for them to take advantage of those
opportunities. In order to improve cluster performance, UNIDO addresses economic
and non-economic issues, especially those related to the fostering of human
and social capital with a view to enhancing labor force production capacities and
increasing economic participation. Such an approach requires measures that are
aimed at empowering marginalized groups, improving access to employment
opportunities, and supporting the well-being of entrepreneurs and employees as
well as the development of their skills to boost productivity and enhance innovation
capacity.
Involve public and private sector actors based on their respective capacities and
competencies
While the role of the public sector in supporting a cluster development initiative
normally includes responding to demands from within the cluster for changes in the
business environment, undertaking larger-scale infrastructure development,
providing an adequate framework for education and broader skills development,
and coordinating and supporting brokering activities, the private sector can play an
active role in: mobilizing human and financial resources to be invested in innovative
ventures that increase the growth potential of the cluster; providing business
development and financial services on a commercially viable and sustainable basis;
and establishing and/or participating in representative bodies that voice the
interests of the business community in dialogues with the public sector. A local
public-private forum, e.g. in the context of the Cluster Commission or another
suitable dialogue mechanism, can also ensure that cluster development initiatives
within a country or region are linked with other public support programmes for
private sector development.
develop a tailor-made monitoring system for each cluster development initiative. A
project evaluation, typically carried out at the end of a cluster development initiative
(for longer projects, a mid-term evaluation is recommended), assesses several
aspects of an intervention, including its relevance, efficiency, effectiveness, impact
and sustainability, in order to appraise its overall usefulness.
CHAPTER 12
Della realized that in order to collect accurate data, she needed to find researchers
who would be trusted by people in the neighborhoods she was concerned about.
What if she recruited researchers from among the people in those neighborhoods?
She contacted two ministers she knew, an African American doctor who practiced in
a black neighborhood, and the director of a community center, as well as using her
own family connections. Within two weeks, she had gathered a group of
neighborhood residents who were willing to act as researchers. They ranged from
high school students to grandparents, and from people who could barely read to
others who had taken college courses.
The group met several times at the hospital to work out how they were going to
collect information from the community. Della conducted workshops in research
methods and in such basic skills as how to record interviews and observations. The
group discussed the problem of recording for those who had difficulty writing, and
came up with other ways of logging information. They decided they would each
interview a given number of residents about their food shopping and eating habits,
and that they would also observe people’s buying patterns in neighborhood stores
and fast food restaurants. They set a deadline for finishing their data gathering, and
went off to learn as much as they could about the food shopping and eating behavior
of people in their neighborhoods.
As the data came in, it became clear that people in the neighborhoods would be
happy to buy more nutritious food, but it was simply too difficult to get it. They
either had to travel long distances on the bus, since many didn’t have cars, or find
time after a long work day to drive to another, often unfamiliar, part of the city and
spend an evening shopping. Many also had the perception that healthy food was
much more expensive, and that they couldn’t afford it.
Ultimately, the data that the group of neighborhood residents had gathered went
into a report written by Della and other professionals on the hospital staff. The
report helped to convince the city to provide incentives to supermarket chains to
locate in neighborhoods where healthy food was hard to find.
The group that Della had recruited had become a community-based participatory
research team. Working with Della and others at the hospital, they helped to
determine what kind of information would be useful, and then learned how to
gather it. Because they were part of the community, they were trusted by residents;
because they shared other residents’ experience, they knew what questions to ask
and fully understood the answers, as well as what they were seeing when they
observed.
This section is about participatory action research: what it is, why it can be effective,
who might use it, and how to set up and conduct it.
done by community members, so that research results both come from and go
directly back to the people who need them most and can make the best use of them.
There are several levels of participatory research. At one end of the spectrum is
academic or government research that nonetheless gathers information directly from
community members. The community members are those most directly affected by
the issue at hand, and they may (or may not) be asked for their opinions about what
they need and what they think will help, as well as for specific information. In that
circumstance, the community members don’t have any role in choosing what
information is sought, in collecting data, or in analyzing the information once it’s
collected. (At the same time, this type of participatory research is still a long step
from research that is done at second or third hand, where all the information about a
group of people is gathered from statistics, census data, and the reports of observers
or of human service or health professionals.)
The opposite end of the participatory research continuum from the first level
described involves community members creating their own research group –
although they might seldom think of it as such – to find out about and take action on
a community issue that affects them directly.
In this section, we’ll concern ourselves with the latter two types of participatory
research – those that involve community members directly in planning and carrying
out research, and that lead to some action that can influence the issue studied. This is
what is often defined as community-based participatory research. There are certainly
scenarios where other types of participatory research are more appropriate, or easier
to employ in particular situations, but it’s CBPR that we’ll discuss here.
Employing CBPR for purposes of either evaluation or long-term change can be a
good idea for reasons of practicality, personal development, and politics.
On the practical side, community-based participatory research can often get you
the best information possible about the issue, for reasons including:
People in an affected population are more liable to be willing to talk and give
straight answers to researchers whom they know, or whom they know to be
in circumstances similar to their own, than to outsiders with whom they have
little in common
People who have actually experienced the effects of an issue – or an
intervention – may have ideas and information about aspects of it that
wouldn’t occur to anyone studying it from outside. Thus, action researchers
from the community may focus on elements of the issue, or ask questions or
follow-ups, that outside researchers wouldn’t, and get crucial information
that other researchers might find only by accident, or perhaps not at all
People who are deeply affected by an issue, or participants in a program, may
know intuitively, or more directly, what’s important when they see or hear it.
What seems an offhand comment to an outside researcher might reveal its
real importance to someone who is part of the same population as the person
who made the comment.
Action researchers from the community are on the scene all the time. Their
contact both with the issue or intervention and with the population affected
by it is constant, and, as a result, they may find information even when
they’re not officially engaged in research.
Findings may receive more community support because community members
know that the research was conducted by people in the same circumstances as
their own
When you’re conducting an evaluation, these advantages can provide you with a
more accurate picture of the intervention or initiative and its effects. When you’re
studying a community issue, all these advantages can lead to a true understanding
of its nature, its causes, and its effects in the community, and can provide a solid
basis for a strategy to resolve it. And that, of course, is the true goal of community
research – to identify and resolve an issue or problem, and to improve the quality of
life for the community as a whole.
In the personal development sphere, CBPR can have profound effects on the
development and lives of the community researchers, particularly when those who
benefit from an intervention, or who are affected by an issue, are poor or otherwise
disadvantaged, lack education or basic skills, and/or feel that the issue is far beyond
their influence. By engaging in research, they not only learn new skills, but see
themselves in a position of competence, obtain valuable knowledge and information
about a subject important to them, and gain the power and the confidence to exercise
control over this aspect of their lives.
understand that they have something to say. Furthermore, the research and
other skills and the self-confidence that people acquire in a community-based
participatory research process can carry over into other parts of their lives,
giving them the ability and the assurance to understand and work to control
the forces that affect them. Research skills, discipline, and analytical thinking
often translate into job skills, making participatory action researchers more
employable. Most important, people who have always seen themselves as
bystanders or victims gain the capacity to become activists who can transform
their lives and communities.
Community-based participatory research has much in common with the work of the
Brazilian political and educational theoretician and activist, Paulo Freire. In Freire’s
critical education process, oppressed people are encouraged to look closely at their
circumstances, and to understand the nature and causes of their oppressors and
oppression. Freire believes that with the right tools – knowledge and critical
thinking ability, a concept of their own power, and the motivation to act – they can
undo that oppression. Many people see this as the “true” and only reason for
supporting action research, but we see many other reasons for doing so, and list
some of them both above and below.
Action research is often used to consider social problems – welfare reform or
homelessness, for example – but can be turned to any number of areas with positive
results.
Some prime examples:
The environment. It was a community member who first asked the questions
and started the probe that uncovered the fact that the Love Canal
neighborhood in Niagara Falls, NY, had been contaminated by the dumping
of toxic waste.
Medical/health issues. Action research can be helpful in both undeveloped
and developed societies in collecting information about health practices,
tracking an epidemic, or mapping the occurrence of a particular condition, to
name three of numerous possibilities.
Political and economic issues. Citizen activists often do their own research to
catch corrupt politicians or corporations, trace campaign contributions, etc.
Just as it can be used for different purposes, CBPR can be structured in different
ways. The differences have largely to do with who comes up with the idea in the
first place, and with who controls, or makes the decisions about, the research. Any of
these possibilities might involve a collaboration or partnership, and a community
group might well hire or recruit as a volunteer someone with research skills to help
guide their work.
Some common scenarios:
Academic or other researchers devise and construct a study, and employ
community people as data collectors and/or analysts.
A problem or issue is identified by a researcher or other entity (a human
service organization, for instance), and community people are recruited to
engage in research on it and develop a solution.
A community based organization or other group gathers community people
to define and work on a community issue of their choosing, or to evaluate a
community intervention aimed at them or people similar to them.
A problem is identified by a community member or group, others who are
affected and concerned gather around to help, and the resulting group sets
out to research and solve the problem on its own.
Researchers who are members of the community know the history and
relationships surrounding a program or an issue, and can therefore place it in
context.
People experiencing an issue or participating in an intervention know what’s
important to them about it – what it disrupts, what parts of their lives it
touches, how they have changed as a result, etc. That knowledge helps them
to formulate interview questions that get to the heart of what they – as
researchers – are trying to learn.
Action research, by involving community members, creates more visibility for the
effort in the community.
Researchers are familiar to the community, will talk about what they’re doing (as
will their friends and relatives), and will thus spread the word about the effort.
Community members are more likely to accept the legitimacy of the research and
buy into its findings if they know it was conducted by people like themselves,
perhaps even people they know.
Citizens are more apt to trust both the truthfulness and the motives of their friends
and neighbors than those of outsiders.
Action research trains citizen researchers who can turn their skills to other
problems as well.
People who discover the power of research to explain conditions in their
communities, and to uncover what’s really going on, realize that they can conduct
research in other areas than the one covered by their CBPR project. They often
become community activists, who work to change the conditions that create
difficulty for them and others. Thus, the action research process may benefit the
community not only by addressing particular issues, but by – over the long term –
creating a core of people dedicated to improving the overall quality of its citizens’
lives.
Skills learned in the course of action research carry over into other areas of
researchers’ lives.
Both the skills and the confidence gained in a CBPR project can be transferred to
employment, education, child-rearing, and other aspects of life, greatly improving
people’s prospects and well-being.
A participatory action research process can help to break down racial, ethnic, and
class barriers.
CBPR can remove barriers in two ways. First, action research teams are often
diverse, crossing racial, ethnic, and class lines. As people of different backgrounds
work together, this encourages tolerance and friendships, and often removes the fear
and distrust. In addition, as integral contributors to a research or evaluation effort,
community researchers interact with professionals, academics, and community
leaders on equal footing. Once again, familiarity breaks down barriers, and allows
all groups to see how much the others have to offer. It also allows for people to
understand how much they often misjudge others based on preconceptions, and to
begin to consider everyone as an individual, rather than as “one of those.”
A member of the Changes Project, a CBPR project that explored the impact of
welfare reform on adult literacy and ESOL (English as a Second or Other Language)
learners, wrote in the final report: “What I learned from working in this project first
off is, none of us are so great that change couldn’t help us be better people... I
walked into the first meeting thinking I was the greatest thing to hit the pike and
found that I, too, had some prejudices that I was not aware of. I thought that no one
could ever tell me I wasn’t the perfect person to sit in judgment of others because I
never had a negative thought or prejudiced bone in my body. Well, lo and behold, I
did, and seeing it through other people’s eyes I found that I, too, had to make some
changes in my opinions.”
Action research helps people better understand the forces that influence their
lives.
Just as Paulo Freire found in his work in Latin America, community researchers,
sometimes as a direct result of their research, and sometimes as a side benefit, begin
to analyze and understand how larger economic, political, and social forces affect
their own lives. This understanding helps them to use and control the effects of those
forces, and to gain more control over their own destinies.
Community based action research can move communities toward positive social
change.
All of the rationales described above for employing CBPR act to restructure
the relationships and the lines of power in a community. They contribute to the
mutual respect and understanding among community members and the deep
understanding of issues that in turn lead to significant and positive social change.
People directly affected by the issue or intervention. They can act as
advocates not just for addressing the issue, but for recognizing and
implementing the solution or intervention that best meets the actual needs of
the population affected.
Academics with an interest in the issue or intervention in question.
Academics who have studied the issue often have important information that
can help a CBPR team better understand the data it collects. They usually
have research skills as well, and can help to train other team members. At the
same time, they can learn a great deal from community-based researchers –
about the community and communities in general, about approaching people,
about putting assumptions and preconceptions aside – and perhaps, as a
result, increase the effectiveness of their own research.
It’s important that they be treated, and treat everyone else, as equals. Everyone on a
team has to view other members as colleagues, not as superiors or inferiors, or as
more or less competent or authoritative. This can be difficult on both sides – i.e.
making sure that officials, academics, or other professionals don’t look down on
community members, and that community members don’t automatically defer to (or
distrust) them. It may take some work to create an environment in which everyone
feels equally respected and valued, but it’s worth the effort. Both the quality of the
research and the long-term learning by team members will benefit greatly from the
effort. (There are some circumstances where actual equality among all team
members is not entirely possible. When community members are hired as
researchers, for instance, the academic or other researcher who pays the bills has to
exercise some control over the process. That doesn’t change the necessity of all team
members being viewed as colleagues and treated with respect.)
Health, human service, and public agency staff and volunteers. Like the
previous two groups, these people have both a lot to offer and – often – a lot
to learn that will make them more sensitive and more effective at their jobs in
the long run. They may have a perspective on issues in the community that
residents lack because of their closeness to the situation. At the same time,
they may learn more about the lives of those they work with, and better
understand their circumstances and the pressures that shape their lives.
Community members at large. This category brings us back to the statement
at the beginning of this portion of the section that members of all sectors of
the community should have the opportunity to be involved. That statement
covers the knowledge, skills, and talent that different people bring to the
endeavor; the importance of buy-in by all sectors of the community if any
long-term change is to be accomplished; and what team members learn and
bring back to their families, friends, and neighbors as a result of their
involvement.
Qualitative Research
Relies on information that can’t be expressed in mathematical terms – descriptions,
opinions, anecdotes, the comments of those affected by the issue under study, etc.
The results of qualitative research are usually expressed as a narrative or set of
conclusions, with the analysis backed up by quotes, observation notes, and other
non-numerical data.
Quantitative Research
Depends on numbers – the number of people served by an intervention, for
instance, the number that completed the program, the number that achieved some
predetermined outcome (lowered blood pressure, employment for a certain period,
citizenship), scores on academic or psychological or physical tests, etc. These
numbers are usually then processed through one or more statistical operations to tell
researchers exactly what they mean. (Some statistics may, for instance, help
researchers determine precisely what part of an intervention was responsible for a
particular behavior change.)
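The simple statistical operations this passage refers to can be sketched in a few lines of code. The data below are entirely invented, purely to illustrate the kind of counts and rates a quantitative evaluation might report:

```python
# A minimal, hypothetical sketch of a quantitative summary: people served,
# program completions, and a predetermined outcome. Data are invented.

# Each record: (completed_program, achieved_outcome) for one participant
participants = [
    (True, True), (True, False), (True, True),
    (False, False), (True, True), (False, False),
]

served = len(participants)
completed = sum(1 for done, _ in participants if done)
achieved = sum(1 for done, outcome in participants if done and outcome)

completion_rate = completed / served   # share of those served who finished
outcome_rate = achieved / completed    # share of completers who met the outcome

print(f"Served: {served}")
print(f"Completion rate: {completion_rate:.0%}")
print(f"Outcome rate among completers: {outcome_rate:.0%}")
```

In practice, real evaluations use real participant records, and the statistics chosen depend on the questions the evaluation is asking.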
It may seem that quantitative research is more accurate, but that’s not always the
case, especially when the research deals with human beings, who don’t always do
what you expect them to. It’s often important to get other information in order to
understand exactly what’s going on.
Furthermore, sometimes there aren’t any numbers to work with. The Changes
Project was looking at the possible effects of a change in the welfare system on adult
learners. The project was conducted very early in the change process, in order to try
to head off the worst consequences of the new system. There was very little
quantitative information available at that point, and most of the project involved
collecting information about the personal experiences of learners on welfare.
In other words, neither quantitative nor qualitative methods are necessarily “better,”
but sometimes one is better than the other for a specific purpose. Often, a mix of the
two will yield the richest and most accurate information.
It’s probably best and most effective to use action research when:
There’s time to properly train and acclimate community researchers
The research and analysis necessary relies on interviews, experience,
knowledge of the community, and an understanding of the issue or
intervention from the inside, rather than on academic skills or an
understanding of statistics (unless you have the time and resources to teach
those skills or the team includes someone who has them)
You need an entry to the community or group from whom the information is
being gathered
You’re concerned with buy-in and support from the community
Part of the purpose of using CBPR is to have an effect on and empower the
community researchers
Part of the purpose of using CBPR is to set the stage for long-term social
change
The general steps in an action research process are to recruit and train a team; plan
and conduct the research; analyze the results and translate the findings into
recommendations; take, or bring about, action based on those recommendations;
evaluate the process; and follow up.
What follows assumes an ideal action research project with a structure, perhaps one
initiated by a health or human service organization. A community group that comes
together out of common interest would probably recruit by having people already
involved pull in their friends, and probably wouldn't do any formal training unless
it invited a researcher to help specifically in that way. The nature of your group
will help you determine how – or whether – you follow each of the steps below.
It often makes sense for at least half the team to be composed of people directly
affected by the issue or intervention in question. Those numbers both ensure good
contact with the population from which information needs to be gathered, and
make it less likely that community researchers will be overwhelmed or intimidated
by other (professional) team members or by the task.
Recruiting from within an organization or program may be relatively simple,
because the pool of potential researchers is somewhat of a captive audience: you
know where to find them, and you already have a relationship with them.
Recruiting from a more general population, on the other hand, requires attention to
some basic rules of communication.
Use language that your audience can understand, whether that means presenting
your message in a language other than English, or presenting it in simple,
clear English without any academic or other jargon.
Use the communication channels that your audience is most likely to pay attention to.
An announcement in the church that serves a large proportion of your
population, a program newsletter, or word-of-mouth might all be good
channels by which to reach a particular population.
Be culturally sensitive and appropriate. Couch your message in a form that is not
only respectful of your audience’s culture, but that also speaks to what is
important in that culture.
Go where your audience is. Meet with groups of people from the population you
want to work with, put out information in their neighborhoods or meeting
places. Don’t wait for them to come to you.
Given all this, the best recruitment method is still face-to-face contact by someone
familiar to the person being recruited.
Orientation helps to build the team, and to ensure that everyone sees it as a team of colleagues, rather than as
one group leading or dominating or – even worse – simply tolerating another. Each
person brings different skills and experience to the effort and has something to teach
everyone else. Emphasizing that from the beginning may be necessary, not only to
keep more educated members from dominating, but also to encourage less educated
members not to be afraid to ask questions and give their opinions.
Training is meant to pass on specific information and skills that people will need in
order to carry out the work of the research. There are as many models for training as
there are teams to be trained. As noted above, orientation might serve as all or part
of an introductory training session. Training can take place all at once – in one or
several multi-hour sessions on consecutive days – or over the whole period of the
project, with each training piece leading to the activity that it concerns. It might be
conducted by one person – who, in turn, could be someone from inside the
organization or an outside facilitator – by a series of experts in different areas, or by
the team members themselves. (In this last case, team members might, for instance,
determine what they need to know, and then decide on and implement an
appropriate way to learn it.)
Regardless of how it’s done, here are some general guidelines for training that are
usually worth following:
Find a comfortable space to hold the training
Provide, or make sure that people bring, food and drink
Take frequent, short breaks. It’s better for people’s concentration to take a three-
minute break every half hour than a 20-minute break every three hours
Structure the space for maximum participation and interaction - chairs in a circle,
room to move around, etc.
Vary the ways in which material is presented. People learn in a variety of ways –
by hearing, by seeing, by discussion, by example (watching others), and by
doing. The more of these methods you can include, the more likely you are to
hold people’s attention and engage everyone on the team.
Use the training to build your team. Training is a golden opportunity for people
to get to know and trust one another, and to absorb the guiding principles for
the work.
The actual content of the training will, of course, depend on the project you’re
undertaking, but general areas should probably include:
Necessary research skills. These might include interview techniques, Internet
searching, constructing a survey, and other basic research and information-
gathering methods.
Important information about the community or the intervention in question.
Meeting and negotiation skills. Many of the people on your team may not have
had the experience of participating in numerous meetings. They need time
and support both to develop meeting skills – following discussion, knowing
when it’s okay to interrupt, feeling confident enough to express their opinions
– and to become comfortable with the meeting process.
Preparing a report. This doesn’t necessarily mean drafting a formal document.
Depending upon the team members, a flow chart, a slide show, a video, or a
collage might be informative and powerful ways to convey research results,
as might oral testimony or a sound recording.
Making a presentation. Knowing what to expect, and learning how to make a
clear and cogent presentation can make the difference between having your
findings and recommendations accepted or rejected.
An evaluation can focus on process: What is actually being done, and how does that
compare with what the intervention or initiative set out to do? It can focus on
outcomes: Is the end result of the intervention what you intended it to be? Or it can
try to look at both, and to decide whether the process in fact works to gain the
desired outcome. An evaluation may also aim to identify specific elements of the
process that have to be changed, or to identify a whole new process to replace one
that doesn’t seem to be working.
Anticipate and prepare contingency plans for problems that might arise
An action research group, like any other, can have internal conflicts, as well as
conflicts with external forces. People may disagree, or worse; some people may drop
out, or may not do what they promised; people may not understand, or may choose
not to follow the procedures you’ve agreed on. There will need to be guidelines to
deal with each of these and other potential pitfalls.
Implement your Research Plan
Now that you've completed your planning, it's time to carry it out.
Follow Up
An action research project doesn’t end with the presentation, or even with action.
The purpose of the research often has as much to do with the learning of the team
members as it does with research results. Even where that’s not the case, the skills
and methods that action researchers learn need to be cemented, so they can carry
over to other projects.
Evaluate the research process. This should be a collaborative effort by all team
members, and might also include others (those who actually implement an
evaluated intervention, for instance). Did things go according to plan? What
were the strengths of the process? What were its weaknesses? Was the
training understandable and adequate? What other support would have been
helpful? What parts of the process should be changed?
Identify benefits to the community or group that came about (or may come about) as a
result of the research process. These may have to do with action, with making
the community more aware of particular issues, or with creating more
community activists.
Identify team members’ learning and perceptions of changes in themselves. Some
areas to consider are basic and other academic skills; public speaking;
meeting skills; self-confidence and self-esteem; ability to influence the world
and their own lives; and self-image (seeing themselves as proactive, rather
than acted upon, for example).
Maintain gains by keeping researchers involved. There are a number of ways to
keep the momentum of a CBPR team going, including starting another
project, if there’s a reason to do so; encouraging team members to be active on
other issues they care about (and to suggest some potential areas, and
perhaps make introductions that make it easier for them to do so); keeping
the group together as a (paid) research consortium; or consulting, as a group,
with other organizations interested in conducting action research.
CBPR is not always the right choice for an initiative or evaluation, but it’s always
worthy of consideration. If you can employ it in a given situation, the rewards can be
great.
Community-based participatory research can serve many purposes. It can supply
accurate and appropriate information to guide a community initiative or to evaluate
a community intervention. It can secure community buy-in and support for that
initiative or intervention. It can enhance participants’ personal development and
opportunities. It can empower those who are most affected by conditions or issues in
the community to analyze and change them. And, perhaps most important, it can
lead to long-term social change that improves the quality of life for everyone.
In Summary
Community-based participatory research is a process conducted by and for the
people most affected by the issue or intervention being studied or evaluated. It has
multiple purposes, including the empowerment of the participants, the gathering of
the best and most accurate information possible, garnering community support for
the effort, and social change that leads to the betterment of the community for
everyone.
As with any participatory process, CBPR can take a great deal of time and effort. The
participants are often economically and educationally disadvantaged, lacking basic
skills and other resources. Thus, training and support – both technical and personal
– are crucial elements in any action research process. With proper preparation,
however, participatory action research can yield not only excellent research results,
but huge benefits for the community over the long run.
CHAPTER 13
PARTICIPATORY EVALUATION
It's a good idea to build stakeholder participation into a project from the
beginning. One of the best ways to choose the proper direction for your work is to
involve stakeholders in identifying real community needs, and the ways in which a
project will have the greatest impact. One of the best ways to find out what kinds of
effects your work is having on the people it's aimed at is to include those on the
receiving end of information, services, or advocacy on your evaluation team.
Often, you can see most clearly what's actually happening through the eyes of those
directly involved - participants, staff, and others who take part in and carry out
a program, initiative, or other project. Previously, we have
discussed how you can involve those people in conducting research on the
community and choosing issues to address and directions to go in. This section is
about how you can involve them in the whole scope of the project, including its
evaluation, and how that's likely to benefit the project's final outcomes.
What Is Participatory Evaluation?
When most people think of evaluation, they think of something that happens at the
end of a project - that looks at the project after it's over and decides whether it was
any good or not. Evaluation actually needs to be an integral part of any project from
the beginning. Participatory evaluation involves all the stakeholders in a project -
those directly affected by it or by carrying it out - in contributing to the
understanding of it, and in applying that understanding to the improvement of the
work.
This approach to planning and evaluation isn't possible without mutual trust and
respect. These have to develop over time, but that development is made more
probable by starting out with an understanding of the local culture and customs -
whether you're working in a developing country or in an American urban
neighborhood. Respecting individuals and the knowledge and skills they have will
go a long way toward promoting long-term trust and involvement.
The other necessary aspect of any participatory process is appropriate training for
everyone involved. Some stakeholders may not even be aware that project research
takes place; others may have no idea how to work alongside people from different
backgrounds; and still others may not know what to do with evaluation results once
they have them. We'll discuss all of these issues - stakeholder involvement,
establishing trust, and training - as the section progresses.
The real purpose of an evaluation is not just to find out what happened, but to use
the information to make the project better.
Many who write about participatory evaluation combine the first two of these areas
into process evaluation, and add a third - impact evaluation - in addition to outcome
evaluation. Impact evaluation looks at the long-term results of a project, whether the
project continues, or does its work and ends.
Rural development projects in the developing world, for example, often exist simply
to pass on specific skills to local people, who are expected to then both practice those
skills and teach them to others. Once people have learned the skills - perhaps
particular cultivation techniques, or water purification - the project ends. If in five or
ten years, an impact evaluation shows that the skills the project taught are not only
still being practiced, but have spread, then the project's impact was both long-term
and positive.
In order for these areas to be covered properly, evaluation has to start at the very
beginning of the project, with assessment and planning.
What's the real goal, for instance, of a program to introduce healthier foods in school
lunches? It could be simply to convince children to eat more fruits, vegetables, and
whole grains. It could be to get them to eat less junk food. It could be to encourage
weight loss in kids who are overweight or obese. It could simply be to educate
them about healthy eating, and to persuade them to be more adventurous eaters.
The evaluation questions you ask both reflect and determine your goals for the
program. If you don't measure weight loss, for instance, then clearly that's not what
you're aiming at. If you only look at an increase in children's consumption of
healthy foods, you're ignoring the fact that if they don't cut down on something else
(junk food, for instance), they'll simply gain weight. Is that still better than not
eating the healthy foods? You answer that question by what you choose to examine
- if it is better, you may not care what else the children are eating; if it's not, then you
will care.
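The idea that what you measure determines what you can conclude can be made concrete with a small sketch. The students and the pre/post figures below are invented for illustration only:

```python
# Hypothetical pre/post data for a school-lunch program: daily servings of
# healthy food and weight in kg. All numbers are invented.
students = [
    {"healthy_pre": 1, "healthy_post": 3, "weight_pre": 52.0, "weight_post": 53.0},
    {"healthy_pre": 2, "healthy_post": 4, "weight_pre": 61.0, "weight_post": 61.5},
    {"healthy_pre": 0, "healthy_post": 2, "weight_pre": 70.0, "weight_post": 69.0},
]

n = len(students)
avg_healthy_change = sum(s["healthy_post"] - s["healthy_pre"] for s in students) / n
avg_weight_change = sum(s["weight_post"] - s["weight_pre"] for s in students) / n

# Measuring only consumption makes the program look like a clear success;
# also measuring weight tells a more complicated story.
print(f"Average change in healthy servings/day: {avg_healthy_change:+.1f}")
print(f"Average change in weight (kg): {avg_weight_change:+.1f}")
```

Looking only at the first figure, the program is a clear success; adding the second complicates the picture - which is exactly the point about choosing what to examine.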
Collecting information about the project
Making sense of that information
Deciding what to celebrate, and what to adjust or change, based on
information from the evaluation
It can give you a clearer picture of what's actually happening, making it
possible to correctly determine whether your project is effective or not, and to
understand how to change it to make it more so.
It can get you information you wouldn't get otherwise. When project
direction and evaluation depend, at least in part, on information from people
in the community, that information will often be more forthcoming if it's
asked for by someone familiar. Community people interviewing their friends
and neighbors may get information that an outside person wouldn't be
offered.
It tells you what worked and what didn't from the perspective of those
most directly involved - beneficiaries and staff. Those implementing the
project and those who are directly affected by it are most capable of sorting
out the effective from the ineffective.
It can tell you why something does or doesn't work. Beneficiaries are often
able to explain exactly why they didn't respond to a particular technique or
approach, thus giving you a better chance to adjust it properly.
It results in a more effective project. For the reasons just described, you're
much more apt to start out in the right direction, and to know when you need
to change direction if you haven't. The consequence is a project that addresses
the appropriate issues in the appropriate way, and accomplishes what it sets
out to do.
It empowers stakeholders. Participatory evaluation gives those who are often
not consulted - line staff and beneficiaries particularly - the chance to be full
partners in determining the direction and effectiveness of a project.
It can provide a voice for those who are often not heard. Project beneficiaries
are often low-income people with relatively low levels of education, who
seldom have - and often don't think they have a right to - the chance to speak
for themselves. By involving them from the beginning in project evaluation,
you ensure that their voices are heard, and they learn that they have the
ability and the right to speak for themselves.
It teaches skills that can be used in employment and other areas of life. In
addition to the development of basic skills and specific research capabilities,
participatory evaluation encourages critical thinking, collaboration, problem-
solving, independent action, meeting deadlines...all skills valued by
employers, and useful in family life, education, civic participation, and other
areas.
It bolsters self-confidence and self-esteem in those who may have little of
either. This category can include not only project beneficiaries, but also others
who may, because of circumstance, have been given little reason to believe in
their own competence or value to society. The opportunity to engage in a
meaningful and challenging activity, and to be treated as a colleague by
professionals, can make a huge difference for folks who are seldom granted
respect or given a chance to prove themselves.
It demonstrates to people ways in which they can take more control of their
lives. Working with professionals and others to complete a complex task with
real-world consequences can show people how they can take action to
influence people and events.
It encourages stakeholder ownership of the project. If those involved feel
the project is theirs, rather than something imposed on them by others, they'll
work hard both in implementing it, and in conducting a thorough and
informative evaluation in order to improve it.
It can spark creativity in everyone involved. For those who've never been
involved in anything similar, a participatory evaluation can be a revelation,
opening doors to a whole new way of thinking and looking at the world. To
those who have taken part in evaluation before, the opportunity to exchange
ideas with people who may have new ways of looking at the familiar can lead
to a fresh perspective on what may have seemed to be a settled issue.
It encourages working collaboratively. For participatory evaluation to work
well, it has to be viewed by everyone involved as a collaboration, where each
participant brings specific tools and skills to the effort, and everyone is valued
for what she can contribute. Collaboration of this sort not only leads to many
of the advantages described above, but also fosters a more collaborative spirit
for the future as well, leading to other successful community projects.
It fits into a larger participatory effort. When community assessment and the
planning of a project have been a collaboration among project beneficiaries,
staff, and community members, it only makes sense to include evaluation in
the overall plan, and to approach it in the same way as the rest of the project.
In order to conduct a good evaluation, its planning should be part of the
overall planning of the project. Furthermore, participatory process generally
matches well with the philosophy of community-based or grass roots groups
or organizations.
With all these positive aspects, participatory evaluation carries some negative ones
as well. Whether its disadvantages outweigh its advantages depends on your
circumstances, but whether you decide to engage in it or not, it's important to
understand what kinds of drawbacks it might have.
Ideally, leaders are people whom community members trust to represent them and steer them in the direction that best reflects those
interests. Sometimes, however, leaders are those who push their way to the
front, and try to confirm their own importance by telling others what to do.
By involving only leaders of a population or community, you run the risk of losing -
or never gaining - the confidence and perspective of the rest of the population,
which may dislike and distrust a leader of the second type, or may simply see
themselves shut out of the process. They may see the participatory evaluation as a
function of authority, and be uninterested in taking part in it. Working to recruit
"regular" people as well as, or instead of, leaders may be an important step for the
credibility of the process. But it's a lot of work and may be tough to sell.
You have to train people to understand evaluation and how the
participatory process works, as well as teaching them basic research
skills. There are really a number of potential disadvantages here. The obvious
one is that of time, which we've already raised - training takes time to
prepare, time to implement, and time to sink in. Another is the question of
what kind of training participants will respond to. Still another concerns
recruitment - will people be willing to put in the time necessary to prepare
them for the process, let alone the time for the process itself?
You have to get buy-in and commitment from participants. Given what
evaluators will have to do, they need to be committed to the process, and to
feel ownership of it. You have to structure both the training and the process
itself to bring about this commitment.
People's lives - illness, child care and relationship problems, getting the
crops in, etc. - may cause delays or get in the way of the evaluation. Poor
people everywhere live on the edge, which means they're engaged in a
delicate balancing act. The least tilt to one side or the other - a sick child, too
many days of rain in a row - can cause a disruption that may result in an
inability to participate on a given day, or at all. If you're dealing with a rural
village that's dependent on agriculture, for instance, an accident of weather
can derail the whole process, either temporarily or permanently.
You may have to be creative about how you get, record, and report
information. If some of the participants in an evaluation are non- or semi-
literate, or if participants speak a number of different languages (English,
Spanish, and Lao, for instance), a way to record information will have to be
found that everyone can understand, and that can, in turn, be understood by
others outside the group.
Funders and policy makers may not understand or believe in participatory
evaluation. At worst, this can lose you your funding, or the opportunity to
apply for funding. At best, you'll have to spend a good deal of time and effort
convincing funders and policy makers that participatory evaluation is a good
idea, and obtaining their support for your effort.
Some of these disadvantages could also be seen as advantages: the training people
receive blends in with their development of new skills that can be transferred to
other areas of life, for instance; coming up with creative ways to express ideas
benefits everyone; once funders and policy makers are persuaded of the benefits of
participatory process and participatory evaluation, they may encourage others to
employ it as well. Nonetheless, all of these potential negatives eat up time, which
can be crucial. If it's absolutely necessary that things happen quickly (which is true
not nearly as often as most of us think it is), participatory evaluation is probably not
the way to go.
Perhaps most important, you can gain the advantages of a participatory evaluation
without having to compensate for many of the disadvantages.
When you can convince funders that it's a good idea. Funders may specify
that they want an outside evaluation, or they may simply be dubious about
the value of participatory evaluation. In either case, you may have some
persuading to do in order to be able to use a participatory process. If you can
get their support, however, funders may like the fact that participatory
evaluation is often less expensive, and that it has added value in the form of
empowerment and transferable skills.
When there may be issues in the community or population that outside
evaluators (or program providers, for that matter) aren't likely to be aware
of. Political, social, and interpersonal factors in the community can skew the
results of an evaluation, and without an understanding of those factors and
their history, evaluators may have no idea that what they're finding out is
colored in any way. Evaluators who are part of the community can help sort
out the influence of these factors, and thus end up with a more accurate
evaluation.
When you need information that it will be difficult for anyone outside the
community or population to get. When you know that members of the
community or population in question are unwilling to speak freely to anyone
from outside, participatory evaluation is a way to raise the chances that you'll
get the information you need.
When part of the goal of the project is to empower participants and help
them develop transferable skills. Here, the participatory evaluation, as it
should in any case, becomes a part of the project itself and its goals.
When you want to bring the community or population together. In addition
to fostering a collaborative spirit, as we've mentioned, a participatory
evaluation can create opportunities for people who normally have little
contact to work together and get to know one another. This familiarity can
then carry over into other aspects of community life, and even change the
social character of the community over the long term.
Who should be involved in Participatory Evaluation?
We've referred continually to stakeholders - the people who are directly affected by
the project being evaluated. Who are the stakeholders? That varies from project to
project, depending on the focus, the funding, the intended outcomes, etc.
get to know one another in a context that might lead to better understanding
of community needs.
Others whose lives are affected by the project. The definition of this group
varies greatly from project to project. In general, it refers to people whose jobs
or other aspects of their lives will be changed either by the functioning of the
project itself, or by its outcomes.
An example would be landowners whose potential use of their
land would be affected by an environmental initiative or a
neighborhood plan.
Finding and training stakeholders to act as Participant Evaluators
Unfortunately, this stage isn't simply a matter of announcing a participatory
evaluation and then sitting back while people beat down the doors to be part of it.
In fact, it may be one of the more difficult aspects of conducting a participatory
evaluation.
Here's where the trust building we discussed earlier comes into play. The population
you're working with may be distrustful of outsiders, or may be used to promises of
involvement that turn out to be hollow or simply ignored. They may be used to
being ignored in general, and/or offered services and programs that don't speak to
their real needs. If you haven't already built a relationship to the point where people
are willing to believe that you'll follow through on what you say, now is the time to
do it. It may take some time and effort - you may have to prove that you'll still be
there in six months - but it's worth it. You're much more likely to have a successful
project, let alone a successful evaluation, if you have a relationship of mutual trust
and respect.
But let's assume you have that step out of the way, and that you've established good
relationships in the community and among the population you're working with, as
well as with staff of the project. Let's assume as well that these folks know very little,
if anything, about participatory evaluation. That means they'll need training in order
to be effective.
If, in fact, your evaluation is part of a larger participatory effort, the question arises
as to whether to simply employ the same team that did assessments and/or planned
the project, perhaps with some additions, as evaluators. That course of action has
both pluses and minuses. The team is already assembled, has developed a method
of working together, has some training in research methods, etc., so that they can hit
the ground running - obviously a plus.
The fact that they have a big stake in seeing the project be successful can work either
way: they may interpret their findings in the best possible light, or even ignore
negative information; or they may be eager to see exactly where and how to adjust
the work to make it go better.
Another issue is burnout. Evaluation will mean more time in addition to what an
assessment and planning team has already put in. While some may be more than
willing to continue, many may be ready for a break (or may be moving on to another
phase of their lives). If the possibility of assembling a new team exists, it will give
those who've had enough the chance to gracefully withdraw.
How you handle this question will depend on the attitudes of those involved, how
many people you actually have to draw on (if the recruitment of the initial team was
really difficult, you may not have a lot of choices), and what people committed to.
Ask people you've recruited to recommend - or recruit - others
In general, it's important for potential participant evaluators - particularly those
whose connection to the project isn't related to their employment - to understand the
commitment involved. An evaluation is likely to last a year, unless the project is
considerably shorter than that, and while you might expect and plan for some
dropouts, most of the team needs to be available for that long.
In order to make that commitment easier, discuss with participants what kinds of
support they'll need in order to fulfill their commitment - child care and
transportation, for instance - and try to find ways to provide it. Arrange meetings at
times and places that are easiest for them (and keep the number of meetings to a
minimum). For participants who are paid project staff, the evaluation should be
considered part of their regular work, so that it isn't an extra, unpaid, burden that
they feel they can't refuse.
How training gets carried out will vary with the needs and schedules of participants
and the project. It may take place in small chunks over a relatively long period of
time - weeks or months - or it might happen all at once in the course of a weekend
retreat, or be some combination of the two. There's no right or wrong way here. The first
option will probably make it possible for more people to take part; the second allows
for people to get to know one another and bond as a team, and a combination might
allow for both.
By the same token, there are many training methods, any or all of which might be
useful with a particular group. Training in meeting skills - knowing when and how
to contribute and respond, following discussion, etc. - may best be accomplished
through mentoring, rather than instruction. Interviewing skills may best be learned
through role playing and other experiential techniques. Some training - how to
approach local people, for example - might best come from participants themselves.
situation, etc.), what the conditions were, the date and time, any other factors
that influenced the interview or observation.
For people for whom writing isn't comfortable, where writing isn't
feasible, or where language is a barrier, there should be alternative
recording and reporting methods. Drawings, maps, diagrams, tape
recording, videos, or other imaginative ways of remembering
exactly what was said or observed can be substituted, depending
on the situation. In interviews, if audio or video recording is going
to be used, it's important to get the interviewee's permission first -
before the interviewer shows up with the equipment, so that there
are no misunderstandings.
Analyzing information. This includes critical thinking, what statistics can and
can't tell you, and other things to keep in mind when interpreting data.
Planning and Implementing the Project and Its Evaluation
There's an assumption here that all phases of a project will be participatory, so that
not only its evaluation, but its planning and the assessment that leads to it also
involve stakeholders (not necessarily the same ones who act as evaluators). If
stakeholders haven't been involved from the beginning, they don't have the deep
understanding of the purposes and structure of a project that they'd have of one
they've helped form. The evaluation that results, therefore, is likely to be less
perceptive - and therefore less valuable - than one of a project they've been involved
in from the start.
Naming a problem or goal refers to identifying the issue that needs to be addressed.
Framing it has to do with the way we look at it. If youth violence is conceived of as
strictly a law enforcement problem, for instance, that framing implies specific ways
of solving it: stricter laws, stricter enforcement, zero tolerance for violence, etc. If it's
framed as a combination of a number of issues - availability of hand guns,
unemployment and drug use among youth, social issues that lead to the formation
of gangs, alienation and hopelessness in particular populations, poverty, etc. - then
solutions may include employment and recreation programs, mentoring, substance
abuse treatment, etc., as well as law enforcement. The more we know about a
problem, and the more different perspectives we can include in our thinking about
it, the more accurately we can frame it, and the more likely we are to come up with
an effective solution.
Developing a Theory of Practice to Address the Problem
How do you conduct a community effort so that it has a good chance of solving the
problem at hand? Many communities and organizations answer this question by
throwing uncoordinated programs at the problem, or by assuming a certain
approach (law enforcement, as in our example, for instance) will take care of it. In
fact, you have to have a plan for creating, implementing, evaluating, adjusting, and
maintaining a solution if you want it to work.
Whatever you call this plan - a theory of practice, a logic model, or simply an
approach or process - it should be logical, consistent, consider all the areas that need
to be coordinated in order for it to work, and give you an overall guideline and a list
of steps to follow in order to carry it out.
Once you've identified an issue, for instance, one possible theory of practice
might be:
1. Form a coalition of organizations, agencies, and community members
concerned with the problem.
2. Recruit and train a participatory research team that includes
representatives of all stakeholder groups.
3. Have the team collect both statistical and qualitative, first-hand information
about the problem, and identify community assets that might help in
addressing it.
4. Use the information you have to design a solution that takes into account the
problem's complexity and context. This might be a single program or
initiative, or a coordinated, community-wide effort involving several
organizations, the media, and individuals. If it's closer to the latter, that's part
of the complexity you have to take into account. Coordination has to be
part of your solution, as do ways to get around the bureaucratic
roadblocks that might occur and methods to find the financial and
personnel resources you need.
5. Implement the solution.
6. Carry out monitoring and evaluation that will give you ongoing feedback
about how well you're meeting objectives, and what you should change to
improve your solution.
7. Use the information from the evaluation to adjust and improve the solution.
8. Go back to step 2 and do as much of it again as you need to until the problem
is solved, or - more likely, since many community problems never actually
disappear - indefinitely, in order to maintain and increase your gains.
Deciding What Evaluation Questions to Ask, And How to Ask Them to Get the
Information You Need
As we've discussed, choosing the evaluation questions essentially guides the work.
What you're really choosing here is what you're going to pay attention to. There
could be significant results from your project that you're never aware of, because
you didn't look for them - you didn't ask the questions to which those results would
have been the answers. That's why it's so important to select questions carefully:
they'll determine what you find.
Framing the problem is one element here - putting it in context, looking at it from all
sides, stepping back from your own assumptions and biases to get a clearer and
broader view of it. Another is envisioning the outcomes you want, and thinking
about what needs to change, and how, in order to reach them.
Framing is important in this activity as well. If you want simply to reduce youth
violence, stricter laws and enforcement might seem like a reasonable solution,
assuming you're willing to stick with them forever; if you want not only to reduce or
eliminate youth violence, but to change the climate that fosters it (i.e., long term
social change), the solution becomes much broader and requires, as we pointed out
above, much more than law enforcement. And a broader solution means more, and
more complex, evaluation questions.
In the first case, evaluation questions might be limited to some variation of: "Were
there more arrests and convictions of youthful offenders for violent crimes in the
time period studied, as compared to the last period for which there were records
before the new solution was put in place?" "Did youthful offenders receive harsher
sentences than before?" "Was there a reduction in violent incidents involving
youth?"
Looking at the broader picture, in addition to some of those questions, there might
be questions about counseling programs for youthful offenders to change their
attitudes and to help ease their transition back to civil society, drug and alcohol
treatment, control of handgun sales, changing community attitudes, etc.
Collecting Information
This is the largest part, at least in time and effort, of implementing an evaluation.
Various evaluators, depending on the information needed, may conduct any or all
of the following:
Research into census or other public records, as well as news archives, library
collections, the Internet, etc.
Individual and/or group interviews
Focus groups
Community information-sharing sessions
Surveys
Direct or participant observation
In some cases - particularly with unschooled populations in developing countries -
evaluators may have to find creative ways to draw out information. In some
cultures, maps, drawings, representations ("If this rock is the headman's house..."), or
even storytelling may be more revealing than the answers to straightforward
questions.
Analyzing the Information you've Collected
Once you've collected all the information you need, the next step is to make sense of
it. What do the numbers mean? What do people's stories and opinions tell you about
the project? Did you carry out the process you'd planned? If not, did it make a
difference, positive or negative?
In some cases, these questions are relatively easy to answer. If there were particular
objectives for serving people, or for beneficiaries' accomplishments, you can quickly
find out whether they were met or not. (We set out to serve 75 people, and we
actually served 82. We anticipated that 50 would complete the program, and 61
actually completed.)
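A check like this is simple arithmetic, and can be sketched in a few lines. The sketch below uses the illustrative figures from the example (75 served vs. 82 actual, 50 completions vs. 61 actual); the objective names and data structure are hypothetical, not part of any particular monitoring system.

```python
# Minimal sketch: comparing stated objectives with actual results.
# Objective names and figures are the illustrative ones from the text.
objectives = {"people served": 75, "program completions": 50}
actuals = {"people served": 82, "program completions": 61}

for name, target in objectives.items():
    actual = actuals[name]
    pct = 100 * actual / target
    status = "met" if actual >= target else "not met"
    print(f"{name}: {actual}/{target} ({pct:.0f}%) - {status}")
```

Both objectives come out "met" here; the harder cases discussed next are the ones where the numbers alone don't tell you what happened.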
In other cases, it's much harder to tell what your information means. What if
approximately half of interviewees say the project was helpful to them, and the other
half says the opposite? A result like that may leave you doing some detective work.
(Is there any ethnic, racial, geographic, or cultural pattern as to who is positive and
who is negative? Whom did each group work with? Where did they experience the
project, and how? Did members of each group have specific things in common?)
While collecting the information requires the most work and time, analyzing it is
perhaps the most important step in conducting an evaluation. Your analysis tells
you what you need to know in order to improve your project, and also gives you the
evidence you need to make a case for continued funding and community support.
It's important that it be done well, and that it makes sense of odd results like the
one just described. Here's where good training and good guidance in using critical
thinking and other techniques come in.
Process. This concerns the logistics of the project. Was there good coordination
and communication? Was the planning process participatory? Was the
original timeline for each stage of the project - outreach, assessment, planning,
implementation, evaluation - realistic? Were you able to find or hire the right
people? Did you find adequate funding and other resources? Was the space
appropriate? Did members of the planning and evaluation teams work well
together? Did the people responsible do what they were expected to do? Did
unexpected leaders emerge (in the planning group, for instance)?
Implementation. Did you do what you set out to do - reach the number of
people you expected to, use the methods you intended, provide the amount
and kind of service or activity that you planned for? This part of the
evaluation is not meant to assess effectiveness, but only whether the project
was carried out as planned - i.e., what you actually did, rather than what you
accomplished as a result. That comes next.
Outcomes. What were the results of what you did? Did what you hoped for
take place? If it did, how do you know it was a result of what you did, as
opposed to some other factor(s)? Were there unexpected results? Were they
negative or positive? Why did this all happen?
Using the Information to Celebrate What Worked, and to Adjust and Improve the
Project
While accountability is important - if the project has no effect at all, for example, it's
just wasted effort - the real thrust of a good evaluation is formative. That means it's
meant to provide information that can help to continue to form the project, reshape
it to make it better. As a result, the overall questions when looking at process,
implementation, and outcomes are: What worked well? What didn't? What changes
would improve the project?
Answering these questions requires further analysis, but should allow you to
improve the project considerably. In addition to dropping or changing and
adjusting those elements of the project that didn't work well, don't neglect those that
were successful. Nothing's perfect; even effective approaches can be made better.
Don't forget to celebrate your successes. Celebration recognizes the hard work of
everyone involved, and the value of your effort. It creates community support, and
strengthens the commitment of those involved. Perhaps most important, it makes
clear that people working together can improve the quality of life in the community.
If your project is successful, you may think your work is done. Think again -
community problems are only solved as long as the solutions are actively
practiced. The moment you turn your back, the conditions you worked so hard to
change can start to return to what existed before. The work - supported by
participatory research and evaluation - has to go on indefinitely to maintain and
increase the gains you've made.
In Summary
Participatory evaluation is a part of participatory research. It involves stakeholders
in a community project in setting evaluation criteria for it, collecting and analyzing
data, and using the information gained to adjust and improve the project.
Participatory process brings in the all-important multiple perspectives of those most
directly affected by the project, which are also most likely to be tied into community
history and culture. The information and insights they contribute can be crucial in a
project's effectiveness. In addition, their involvement encourages community buy-in,
and can result in important gains in skills, knowledge, and self-confidence and self-
esteem for the researchers. All in all, participatory evaluation creates a win-win
situation.
Conducting a participatory evaluation involves several steps:
Recruiting and training a stakeholder evaluation team
Naming and framing the problem
Developing a theory of practice to guide the process of the work
Asking the right evaluation questions
Collecting information
Analyzing information
Using the information to celebrate and adjust your work
The final step, as with so many of the community-building strategies and actions
described in the Community Tool Box, is to keep at it. Participatory research in
general, and participatory evaluation in particular, has to continue as long as the
work continues, in order to keep track of community needs and conditions, and to
keep adjusting the project to make it more responsive and effective. And the work
often has to continue indefinitely in order to maintain progress and avoid sliding
back into the conditions or attitudes that made the project necessary in the first
place.
CHAPTER 14
After many late nights of hard work, more planning meetings than you care to
remember, and many pots of coffee, your initiative has finally gotten off the ground.
Congratulations! You have every reason to be proud of yourself and you should
probably take a bit of a breather to avoid burnout. Don't rest on your laurels too
long, though--your next step is to monitor the initiative's progress. If your initiative
is working perfectly in every way, you deserve the satisfaction of knowing that. If
adjustments need to be made to guarantee your success, you want to know about
them so you can jump right in there and keep your hard work from going to waste.
And, in the worst case scenario, you'll want to know if it's an utter failure so you can
figure out the best way to cut your losses. For these reasons, evaluation is extremely
important.
There's so much information on evaluation out there that it's easy for community
groups to fall into the trap of just buying an evaluation handbook and following it to
the letter. This might seem like the best way to go about it at first glance--evaluation
is a huge topic, and it can be pretty intimidating. Unfortunately, if you resort to the
"cookbook" approach to evaluation, you might find that you collect a lot of data,
analyze it, and then just file it away, never to be seen or used again.
Instead, take a little time to think about what exactly you really want to know about
the initiative. Your evaluation system should address simple questions that are
important to your community, your staff, and (last but never least!) your funding
partners. Try to think about financial and practical considerations when asking
yourself what sort of questions you want answered. The best way to ensure that you
have the most productive evaluation possible is to come up with an evaluation plan.
Here are a few Reasons Why You Should Develop an Evaluation Plan:
It guides you through each step of the process of evaluation
It helps you decide what sort of information you and your stakeholders really
need
It keeps you from wasting time gathering information that isn't needed
It helps you identify the best possible methods and strategies for getting the
needed information
It helps you come up with a reasonable and realistic timeline for evaluation
Most importantly, it will help you improve your initiative!
What are the different types of stakeholders and what are their interests in your
evaluation?
We'd all like to think that everyone is as interested in our initiative or project as we
are, but unfortunately that isn't the case. For community health groups, there are
basically three groups of people who might be identified as stakeholders (those who
are interested, involved, and invested in the project or initiative in some way):
community groups, grant makers/funders, and university-based researchers. Take
some time to make a list of your project or initiative's stakeholders, as well as which
category they fall into.
What are the Types of Stakeholders?
Community groups: Hey, that's you! Perhaps this is the most obvious
category of stakeholders, because it includes the staff and/or volunteers
involved in your initiative or project. It also includes the people directly
affected by it--your targets and agents of change.
Grant makers and funders: Don't forget the folks that pay the bills! Most
grant makers and funders want to know how their money's being spent, so
you'll find that they often have specific requirements about things they want
you to evaluate. Check out all your current funders to see what kind of
information they want you to be gathering. Better yet, find out what sort of
information you'll need to have for any future grants you're considering
applying for. It can't hurt!
University-based researchers: This includes researchers and evaluators that
your coalition or initiative may choose to bring in as consultants or full
partners. Such researchers might be specialists in public health promotion,
epidemiologists, behavioral scientists, specialists in evaluation, or some other
academic field. Of course, not all community groups will work with
university-based researchers on their projects, but if you choose to do so, they
should have their own concerns, ideas, and questions for the evaluation. If
you can't quite understand why you'd include these folks in your evaluation
process, try thinking of them as auto mechanics--if you want them to help you
make your car run better, you will of course include them in the diagnostic
process. If you went to a mechanic and started ordering him around about
how to fix your car without letting him check it out first, he'd probably get
pretty annoyed with you. Same thing with your researchers and evaluators:
it's important to include them in the evaluation development process if you
really want them to help improve your initiative.
Each type of stakeholder will have a different perspective on your organization as
well as what they want to learn from the evaluation. Every group is unique, and you
may find that there are other sorts of stakeholders to consider with your own
organization. Take some time to brainstorm about who your stakeholders are before
you make your evaluation plan.
What decisions do they need to make, and how would they use the data to inform
those decisions?
You and your stakeholders will probably be making decisions that affect your
program or initiative based on the results of your evaluation, so you need to
consider what those decisions will be. Your evaluation should yield honest and
accurate information for you and your stakeholders; you'll need to be careful not to
structure it in such a way that it exaggerates your success, and you'll need to be
really careful not to structure it in such a way that it downplays your success!
Consider what sort of decisions you and your stakeholders will be making.
Community groups will probably want to use the evaluation results to help them
find ways to modify and improve your program or initiative. Grant makers and
funders will most likely be making decisions about how much funding to give you
in the future, or even whether to continue funding your program at all (or any
related programs). They may also think about whether to impose any requirements
on you to get that program (e.g., a grant maker tells you that your program may
have its funding decreased unless you show an increase of services in a given area).
University-based researchers will need to decide how they can best assist with plan
development and data reporting.
You'll also want to consider how you and your stakeholders plan to balance costs
and benefits. Evaluation should take up about 10--15% of your total budget. That
may sound like a lot, but remember that evaluation is an essential tool for improving
your initiative. When considering how to balance costs and benefits, ask yourself the
following questions:
What do you need to know?
What is required by the community?
What is required by funding?
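The 10--15% rule of thumb above is easy to check against your own figures. A quick sketch, using a hypothetical total budget and evaluation cost chosen purely for illustration:

```python
# Quick check of the 10-15%-of-budget rule of thumb for evaluation costs.
# Both figures below are hypothetical, for illustration only.
total_budget = 200_000
evaluation_cost = 27_500

share = evaluation_cost / total_budget
print(f"Evaluation share of budget: {share:.1%}")
if 0.10 <= share <= 0.15:
    print("Within the suggested 10-15% range.")
else:
    print("Outside the suggested range - revisit the evaluation plan or its costs.")
```

The rule is a guideline, not a requirement; the point is to budget for evaluation deliberately rather than treat it as an afterthought.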
Developing Evaluation Questions
For our purposes, there are four main categories of evaluation questions. Let's look
at some examples of possible questions and suggested methods to answer those
questions. Later on, we'll tell you a bit more about what these methods are and how
they work.
Planning and implementation issues: How well was the program or
initiative planned out, and how well was that plan put into practice?
o Possible questions: Who participates? Is there diversity among
participants? Why do participants enter and leave your programs? Are
there a variety of services and alternative activities generated? Do
those most in need of help receive services? Are community members
satisfied that the program meets local needs?
o Possible methods to answer those questions: monitoring system that tracks
actions and accomplishments related to bringing about the mission of
the initiative, member survey of satisfaction with goals, member
survey of satisfaction with outcomes.
Assessing attainment of objectives: How well has the program or initiative
met its stated objectives?
o Possible questions: How many people participate? How many hours are
participants involved?
o Possible methods to answer those questions: monitoring system (see
above), member survey of satisfaction with outcomes, goal attainment
scaling.
Impact on participants: How much and what kind of a difference has the
program or initiative made for its targets of change?
o Possible questions: How has behavior changed as a result of
participation in the program? Are participants satisfied with the
experience? Were there any negative results from participation in the
program?
o Possible methods to answer those questions: member survey of satisfaction
with goals, member survey of satisfaction with outcomes, behavioral
surveys, interviews with key participants.
Impact on the community: How much and what kind of a difference has the
program or initiative made on the community as a whole?
o Possible questions: What resulted from the program? Were there any
negative results from the program? Do the benefits of the program
outweigh the costs?
o Possible methods to answer those questions: Behavioral surveys, interviews
with key informants, community-level indicators.
Member survey of process: done during the initiative - how are you doing so
far?
Member survey of outcomes: done after the initiative is finished - how did you
do?
Goal attainment report
If you want to know whether your proposed community changes were truly
accomplished--and we assume you do--your best bet may be to do a goal attainment
report. Have your staff keep track of the date each time a community change
mentioned in your action plan takes place. Later on, someone compiles this
information (e.g., "Of our five goals, three were accomplished by the end of 1997.")
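Compiling a goal attainment report from those dated records is straightforward. A minimal sketch, in which the goal names and dates are entirely hypothetical (only the "three of five goals by the end of 1997" shape comes from the example above):

```python
from datetime import date

# Sketch of a goal attainment report: staff record the date each community
# change from the action plan actually takes place (None = not yet accomplished).
# Goal names and dates are hypothetical, for illustration only.
goals = {
    "new after-school program opened": date(1997, 3, 14),
    "curfew ordinance passed by city council": date(1997, 9, 2),
    "peer-mentoring network launched": date(1997, 11, 20),
    "teen job fair held": None,
    "community center hours extended": None,
}

deadline = date(1997, 12, 31)
done = [g for g, d in goals.items() if d is not None and d <= deadline]
print(f"Of our {len(goals)} goals, {len(done)} were accomplished by the end of 1997.")
```

Keeping the dates (rather than just a yes/no) also lets you report how quickly changes happened, which funders often ask about.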
Behavioral surveys
Behavioral surveys help you find out what sort of risk behaviors people are taking
part in and the level to which they're doing so. For example, if your coalition is
working on an initiative to reduce car accidents in your area, one risk behavior to
survey might be drunk driving.
Interviews with key participants
Key participants - leaders in your community, people on your staff, etc. - have
insights that you can really make use of. Interviewing them to get their viewpoints
on critical points in the history of your initiative can help you learn more about the
quality of your initiative, identify factors that affected the success or failure of certain
events, provide you with a history of your initiative, and give you insight which you
can use in planning and renewal efforts.
Community-level indicators of impact
These are tried-and-true markers that help you assess the ultimate outcome of your
initiative. For substance abuse coalitions, for example, the U.S. Center for Substance
Abuse Prevention (CSAP) and the Regional Drug Initiative in Oregon recommend
several proven indicators (e.g., single-nighttime car crashes, emergency transports
related to alcohol) which help coalitions figure out the extent of substance abuse in
their communities. Studying community-level indicators helps you provide solid
evidence of the effectiveness of your initiative and determine how successful key
components have been.
Setting Up a Timeline for Evaluation Activities
When does evaluation need to begin?
Right now! Or at least at the beginning of the initiative! Evaluation isn't something
you should wait to think about until after everything else has been done. To get an
accurate, clear picture of what your group has been doing and how well you've been
doing it, it's important to start paying attention to evaluation from the very start. If
you're already part of the way into your initiative, however, don't scrap the idea of
evaluation altogether--even if you start late, you can still gather information that
could prove very useful to you in improving your initiative.
Outline questions for each stage of development of the initiative
We suggest completing a table listing:
• Key evaluation questions (the five categories listed above, with more specific
questions within each category)
• Type of evaluation measures to be used to answer them (i.e., what kind of
data you will need to answer the question)
• Type of data collection (i.e., what evaluation methods you will use to collect
this data)
• Experimental design (a way of ruling out threats to the validity - e.g.,
believability - of your data, such as comparing the information you collect
with a similar group that is not doing things exactly the way you are doing
them)
With this table, you can get a good overview of what sort of things you'll have to do
in order to get the information you need.
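To make this concrete, such a planning table could be sketched as a simple data structure. This is only a minimal illustration; the questions, measures and methods shown are hypothetical examples drawn from the car-accident scenario above, not prescribed content.

```python
# A minimal sketch of the evaluation planning table as a list of rows.
# Every entry below is a hypothetical example, not prescribed content.
evaluation_plan = [
    {
        "question": "Has drunk driving decreased in the target area?",
        "measures": "Community-level indicators (e.g., nighttime crashes)",
        "data_collection": "Behavioral surveys; crash records",
        "design": "Comparison with a similar community not using this approach",
    },
    {
        "question": "Are key people in the community cooperating with our efforts?",
        "measures": "Stakeholder perceptions at critical points in the initiative",
        "data_collection": "Interviews with key participants",
        "design": "Pre/post comparison of interview responses",
    },
]

# A quick overview of what needs to be done to answer each question:
for row in evaluation_plan:
    print(f"{row['question']} -> {row['data_collection']}")
```

Keeping the plan in one place like this makes it easy to check that every key question has a measure, a collection method, and a design for ruling out threats to validity.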
When should evaluation end?
Shortly after the end of the project - usually when the final report is due. Don't wait
too long after the project has been completed to finish up your evaluation - it's best
to do this while everything is still fresh in your mind and you can still get access to
any information you might need.
What sort of products should you expect to get out of the evaluation?
The main product you'll want to come up with is a report that you can share with
everyone involved. What should this report include?
Effects expected by stakeholders: Find out what key people want to know. Be
sure to address any information that you know they're going to want to hear
about!
Differences in the behaviors of key individuals: Find out how your coalition's
efforts have changed the behaviors of your targets and agents of change.
Have any of your strategies caused people to cut down on risky behaviors, or
increase behaviors that protect them from risk? Are key people in the
community cooperating with your efforts?
Differences in conditions in the community: Find out what has changed. Is the
public aware of your coalition or group's efforts? Do they support you? What
steps are they taking to help you achieve your goals? Have your efforts
caused any changes in local laws or practices?
You'll probably also include specific tools (i.e., brief reports summarizing data),
annual reports, quarterly or monthly reports from the monitoring system, and
anything else that is mutually agreed upon between the organization and the
evaluation team.
Evaluation is a big task, so you want to get it right. What standards should you use to make sure
you do the best possible evaluation? In 1994, the Joint Committee on Standards for
Educational Evaluation issued a list of program evaluation standards that are widely
used to regulate evaluations of educational and public health programs. The
standards the committee outlined are for utility, feasibility, propriety, and accuracy.
Consider using evaluation standards to make sure you do the best evaluation
possible for your initiative.
CHAPTER 15
At the project level, the MEAL Framework marks some changes for Implementing
Partners. In the past, implementing partners were expected to report against the logframe.
With the new MEAL Framework, Implementing Partners will be expected to focus
their M&E efforts on:
The specific outcomes, indicators and questions that are relevant to the
particular project;
LIFT’s revised logframe indicators that are relevant to the project and as
agreed upon between the project and FMO;
What is working, what is not working, and why;
Learning and improving interventions based on the evidence gathered; and
Generating useful knowledge and evidence that can influence policy and
practice.
To help Implementing Partners build and use their own MEAL systems, IPs are
expected to develop and implement project MEAL Plans. Project MEAL Plans
include the following major components:
1) Project Theory of Change
2) MEAL Stakeholder Analysis (optional)
3) Project Measurement Framework (MF)
4) Project Evaluation and Learning Questions (ELQ)
5) Data Collection, Management and Analysis
6) Use of MEAL Results
7) MEAL Resources
Many country programmes have also developed MEAL systems and capacity which
include elements of Level Two above and use monitoring and accountability data to
drive decision making, programme improvement and learning. We aim to establish
this approach as standard for our country programmes, providing a strong
foundation for building a culture of quality.
Advocacy Measurement: Our Theory of Change places great emphasis on using our
‘voice’, and the voices of partners and children, to advocate for change for children.
It aims to influence others to achieve scale-up of proven interventions. Effective
advocacy is key to this, and our Advocacy Monitoring Tool (AMT) is the means by
which we record the sum of our collective efforts on this.
Through the annual reporting round the AMT captures around 300 separate
advocacy efforts each year. These range from local meetings aimed at improving
working relations with local authorities, e.g. ‘we met with District Social Welfare
officials to discuss the establishment of district level child protection referral
mechanisms’, to national level policy breakthroughs with profound impact on
children’s rights, such as ‘we successfully lobbied Ministry of Health to budget for
nutrition interventions within the new National Strategic Health Development Plan’.
The AMT provides an accurate snapshot of the range of advocacy work undertaken
and the policy change outcomes in a given year that were influenced by SC
advocacy work. More work is needed to highlight the impact of game-changing
policy breakthroughs on children’s lives.
Total Reach: This option can be used to estimate the number of children and
adults reached by our programmes. It allows us to produce consistent data that
drives annual estimates across projects/programmes for direct and indirect reach. If
you’ve already worked with Total Reach you will know that it’s quite a demanding
and involved process, and it needs to be because once the agency has committed to
coming up with credible estimates of reach, we need to be able to justify the figures
and take account of complicating factors like double counting, as well as the
difficulty of defining what actually constitutes ‘reach’.
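The double-counting problem mentioned above can be shown with a toy example. Assuming, purely hypothetically, that beneficiaries can be identified by a unique ID across projects, unique reach is the size of the union of the ID sets rather than the sum of per-project counts:

```python
# Hypothetical beneficiary IDs for two overlapping projects.
project_a = {"p001", "p002", "p003", "p004"}
project_b = {"p003", "p004", "p005"}  # p003 and p004 also appear in project_a

naive_total = len(project_a) + len(project_b)  # double-counts p003 and p004
unique_reach = len(project_a | project_b)      # each person counted once

print(naive_total)   # 7
print(unique_reach)  # 5
```

In practice agencies rarely have clean unique IDs across projects, which is exactly why producing credible Total Reach estimates is such a demanding and involved process.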
The global indicators were developed to measure progress towards the global
outcome statements in our strategy and although originally billed as ‘outcome
indicators’ they are in reality a mix of output and outcome level indicators, and they
vary quite widely in their type and complexity. Some require a single number or a
yes/no answer, while others require survey and sampling techniques to build the
data.
Global indicator sets have been used in other agencies with varying degrees of
success. Some peer agencies have abandoned them because it proved too difficult to
gather consistent data against them across multiple country programmes. Save the
Children has the advantage of only having a small number of indicators per
thematic area, and we remain very committed to building capacity and credibility in
reporting these indicators to help us better understand and communicate the results
of our work.
Evaluations
An evaluation is a systematic assessment of an ongoing or completed project,
programme or policy, its design, implementation and results. Evaluation also refers
to the process of determining the worth or significance of an activity, policy or
program. Evaluations should provide information that is credible and useful,
enabling the incorporation of lessons learned into the decision–making process of
both recipients and donors. Evaluation is different from monitoring, which is a
continuing assessment based on systematic collection of data on specific indicators
as well as other relevant information.
As an organization, our principal challenges with evaluations are:
1. Effectively capturing the learning from the vast knowledge base of our
evaluations so that it is used to inform our programme development
2. Ensuring management follow up the recommendations that emerge from
evaluations and reviews.
In a MEAL approach, monitoring data and evaluation findings are shared with
relevant stakeholders and explicitly fed back into programme decision making,
incorporating Accountability and Learning. MEAL hence represents a practical and
conceptual step beyond routine monitoring and evaluation. It involves a
commitment to using monitoring data and accountability feedback for the purposes
of programme quality improvement and decision making, and this is an approach
that we would like to promote and apply across all of Save the Children’s
programmes. As we described in the previous session, the MEAL approach relates
very closely to the components of programme quality outlined in the Programme
Quality Framework.
We’ve identified a number of core components to the MEAL approach, based on the
experience of a number of different organization programmes, as well as good
practice elsewhere in the sector. How these components are organized varies from
country to country, shaped by resourcing and structure as well as by management
preferences. Either way, it’s clear that ensuring capacity and commitment to MEAL
lies with the PDQ team.
Partners and the MEAL approach
A great deal of monitoring of Save the Children supported work is carried out by
our partners, so it’s important in promoting the MEAL approach that we also
consider partner capacity and resources to achieve this. One way to account for this
is to ensure that commitments to monitoring, accountability and quality standards
are clearly articulated in partnership agreements, and are explored in partner
assessments.
Accountability Mechanisms
Establishing effective accountability mechanisms is another crucial pillar of the
MEAL approach, including transparent sharing of information about the
organization and our objectives, and managing feedback and complaints from the
children and communities with which we work. There is a separate session on
accountability in this training, but we should note here that successful MEAL
systems must pay particular attention to listening and responding to beneficiaries,
and ensuring that their views are taken into account in programme development
and improvement, in particular through maintaining a register of feedback and
complaints.
Evaluation, Research and Learning
And finally, the ‘L’ bit of MEAL emphasizes the importance of deliberate
efforts on research and evaluation to reflect on operational and technical challenges
and achievements, and to use this learning for further quality improvement. As with
continual monitoring, learning does not happen by accident but requires
dedicated management support and commitment.
The MEAL Essential Standards
During 2013, Save the Children revised the MOS (Management Operating
Standards) to arrive at a shorter list of Essential Standards designed to build
consistency and compliance across country office programmes.
The Quality Framework has been developed to present both the Programme Quality
and the Operations Quality components of our work, and you can see from this
diagram how they fit together. Clearly the MEAL essential standards, as well as
others including partnership and advocacy, are crucial in achieving quality
outcomes in our programmes.
You may already be familiar with the seven MEAL standards, but here they are
again listed below. Note that some of the standards have qualifying statements
which help to explain them, as well as particular adaptations for use in emergencies,
the ‘humanitarian adaptations’.
1. Objectives and Indicators
Standard: Projects and programs have clearly defined objectives created using an
appropriate logframe, results or other framework. All relevant Global Indicators are
included in the program design.
Humanitarian adaptation / QS: In humanitarian responses, objectives and indicators
are in line with the quality criteria outlined in the Humanitarian standards.

2. M&E Plan and Budget
Standard: Projects and programs are covered by an M&E plan consistent with the
procedure, with appropriate resources budgeted to implement the plan.

3. Baseline
Standard: Projects and programs establish a baseline (or other appropriate
equivalent) as a comparison and planning base for monitoring and evaluations.
Humanitarian adaptation / QS: If a baseline cannot be established while prioritizing
delivery of a timely response, then an initial rapid assessment is carried out and
followed with in-depth multi-sector assessments in line with Humanitarian
standards and procedures. In a sudden onset emergency, initial rapid assessments
are undertaken within 24-72 hours and followed with in-depth multi-sector
assessments.

5. Evaluation
Standard: Projects and programs which meet thresholds outlined in the Evaluation
procedure are evaluated, with evaluation action plans developed and signed off by
an appropriate manager.
Qualifying statement: Evaluation and research reports are shared with relevant
Regional and Global Initiative colleagues for the purposes of effective central
archiving and knowledge management.

6. Learning
Standard: Evidence exists to demonstrate that MEAL data is used to inform
management decision making, improve programming and share learning within
and across programs and/or functional areas.
Qualifying statement: Evidence may include minutes of program meetings,
proposals which demonstrate learning from previous interventions, and feedback
from accountability mechanisms used for program development.
Humanitarian adaptation / QS: An Output Tracker is set up according to the
deadline in the Humanitarian Categorization procedures, and analysis against
targets and emerging patterns is shared with response team leadership at a
minimum monthly.
In addition to the Essential Standards there are a host of other procedures, guidance
and other resources to support compliance with these standards. Some of these are
explored during the course of this training, including the Evaluation Handbook, the
Accountability Guidance Pack, and the guidance on using the Global Indicators.
When thinking about our monitoring and evaluation functions and capacity, it’s
useful to reflect not only on why we collect this information, but for whom.
Probably the biggest driver of our monitoring and reporting is our donors. But there
are multiple other external stakeholders to whom we should be reporting for the
purposes of learning, transparency and wider communication. These include the
children and communities with which we work, and our host governments.
Internal monitoring and reporting can be frustrating if we have no idea where the
information is going, or what its purpose is. An organization has to try and simplify
the annual reporting process, and while exercises like total reach and the global
indicators require substantial time and effort, they do allow us to aggregate
information and tell a story globally about our work.
Perhaps most importantly, if we get it right, a principal audience for our monitoring
activities is our project teams themselves, given our aims to use monitoring data as
a way of informing decision making and improving quality.
1. Effective monitoring allows us not only to report on our grants, but also to
measure our progress towards ambitious goals in our strategy and to support
programme quality monitoring and continual improvement.
2. An organization has to develop a set of tools and approaches to support
monitoring and global reporting, including Total Reach, Advocacy
Measurement, Global Indicators and Output Tracking.
3. A MEAL approach is being promoted by the organization as a way of using
monitoring and accountability data for learning, quality improvement and
decision making.
4. Different countries adopt different approaches to support MEAL systems
depending on resourcing, structure and other management decisions.
5. The organization has to define the particular components and principles of a MEAL
approach and aims for this to become a routine way of working. At its heart is a
commitment across the organization to a culture of quality, transparency and
critical enquiry.
MEAL standards have been agreed by the organization and form part of the new
set of Essential Standards within the Quality Framework.
Sample MEAL Framework
CHAPTER 16
A clear theory of change shows how a project will contribute to the achievement of
LIFT’s purpose and/or programme level outcomes, as defined in the Call for
Proposals. The TOC is a visual tool to articulate and make explicit how a project’s
change process will take place. The TOC is therefore a project planning tool, an M&E
tool and a communication tool and assists with one or more of the following: (1)
defining the outcomes that an intervention aims to achieve; and (2) defining the
causal pathways through which a given set of changes is expected to come about.
Beyond this, the TOC can be used to (3) define the assumptions that underlie various
causal pathways; (4) develop a coherent and logical set of metrics or measures that
can be used to track change over time; (5) devise clear and useful evaluation and
learning questions; and (6) organize learning processes at various levels with a
diverse set of stakeholders.
The project TOC should show:
• LIFT’s programme level outcomes that the project intends to contribute to;
• The sequence of project outcomes that will lead to the LIFT programme level
outcomes;
• The outputs through which these project outcomes will be achieved (i.e. what
the project will do to bring about these changes);
• The major activities or interventions that will bring about the outputs; and
• The major causal connections between the different interventions, outputs
and outcomes.
LIFT’s programme level outcomes should be taken directly from those specified in
the relevant Programme Framework or Programme Theory of Change; however, the
wording can be altered slightly to better fit the project context. All project outcomes
and outputs should be as clear and specific as possible. They should mention the
specific actors concerned,
stating either who is doing the action at the intervention level or who undergoes the
change at the outcome level. The diagram needs to be clear enough for an outsider to
understand the logic of the project simply by following the flow of the boxes. The
TOC can then be used to develop the project Measurement Framework and the
Evaluation and Learning Questions.
MEAL Focus
Based on knowing what the project is about, it is important to then identify who
within the project needs what information and for what purpose. From this
information, you can then determine what of the theory of change needs to be
measured and reported, and what are the larger questions that the project needs to
answer. We use three tools to help us determine this focus: a MEAL stakeholder
analysis (optional), a project measurement framework, and a set of carefully
identified project evaluation and learning questions.
Table 1. MEAL Stakeholder Analysis (column headings)
• Who are our major stakeholders and what are their roles/levels of importance?
• What MEAL information do they need?
• How will they use that information (for what purpose)?
• When do they need the information and in what format?
After completing the stakeholder analysis, you then need to decide which
stakeholders’ information needs should be addressed and at which point in time.
This will depend, of course, on the roles of the various stakeholders and the context
in which the project operates, both of which may shift over time.
In selecting the appropriate indicators, first state those that you are required to
report to LIFT twice a year. These will include reporting on certain activities and
outputs, as well as some outcomes, which align with the indicators in LIFT’s logical
framework. This is important because LIFT is required to report on the achievement
of the overall logframe to the LIFT Fund Board on an annual and semi-annual basis.
Because different projects will report on different LIFT logframe indicators,
please check with your LIFT programme team to determine the specific indicators
your project is expected to report on. After including the required LIFT logframe
indicators, and based on your optional MEAL stakeholder analysis, then select
additional project indicators that are needed to track the other parts of your TOC.
Note that it is important to measure the major pieces of your TOC, but it is not
necessary to track everything in the TOC.
As can be seen in the sample Measurement Framework, projects should also specify
the annual targets (by calendar year) for each indicator. In the Measurement
Framework, projects also need to state what methods and tools will be used to
collect the needed data, who will collect it, and with what frequency.
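As an illustration only, one row of such a Measurement Framework might be captured like this. The indicator, targets, method and roles below are hypothetical examples, not taken from any actual LIFT framework:

```python
# One hypothetical Measurement Framework row: indicator, annual targets
# (by calendar year), collection method, responsible person and frequency.
mf_row = {
    "indicator": "Number of households adopting improved practices",
    "annual_targets": {2016: 500, 2017: 800, 2018: 1000},
    "method": "Household monitoring visits",
    "responsible": "Project M&E officer",
    "frequency": "Quarterly",
}

for year, target in sorted(mf_row["annual_targets"].items()):
    print(year, target)
```

Laying each indicator out this way makes it easy to check that every row answers all five questions: what is measured, what the annual targets are, how, by whom, and how often.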
LIFT overall has developed a series of seven Evaluation and Learning Questions,
which it will report on regularly and answer either throughout or at the end of the
Fund in 2018.
IPs are expected to clearly indicate which of the LIFT overall and programme level
questions their project will help to answer. A headline question for example,
relevant to all projects, will be to ask how effective and cost-effective a project has
been in achieving its outcomes. In addition to addressing the LIFT-level Evaluation
and Learning Questions, projects should identify additional questions based on their
own TOCs and specific learning priorities and interests.
In addition to stating their ELQs, projects should also outline why the questions are
important/of interest and the methods and approaches the project will use to
answer these questions. Ideally, a project should not propose more questions than it
can manage efficiently and report on properly. Note that projects are expected to
report in the LIFT semi-annual and annual reports their progress in answering their
Evaluation and Learning questions.
MEAL plans come in different forms and agencies refer to such tools using a variety
of terms, including M&E Plan, Performance Monitoring Plan, Monitoring Matrix
and others. However, a MEAL plan should not be confused with a project or
programme framework or results framework, a tool for project planning, design,
management, and performance assessment that illustrates project elements (such as
goals, objectives, outputs, outcomes), their causal relationships, and the external
factors that may influence success or failure of the project.
It also provides space to identify the indicators or variables we need to measure in
order to answer questions about the effects of our interventions, the data collection
tools we need to measure these variables, data collection and data management
processes (including key staff responsible for such processes), how resultant data
can be shared and the key audiences with whom such data should be shared.
With a completed MEAL plan and budget, we can quickly determine if our planned
MEAL activities are practical and relevant to the objectives of our project or
programme and establish relationships between different components of our work.
We can ensure technical and operational quality (e.g. timeliness of
project/programme activities) as well as accountability to multiple internal (e.g.
other departments within your office, Members, Country Offices, Global Initiatives,
International Board) and external (e.g. children and communities whom we serve,
donors, government partners) audiences. Because MEAL plans demonstrate our
successes for projects/programmes with proven results and document specific
weaknesses for less successful projects/programmes, they are useful tools for donors
to make funding decisions.
The components of a MEAL plan typically include:
• Qualitative information needed for monitoring
• Planned evaluations (projects/programme reviews, mid-term, final)
• Real-time reviews (in humanitarian responses)
• Evaluations of Humanitarian Action
• Training and capacity strengthening of key staff on MEAL activities
• Accountability Activities (e.g. information needs assessment, establishing and
maintaining complaints response mechanisms, ways of sharing information and
facilitating two way communications between beneficiaries and Save the
Children/our partners)
• Data Quality and Validation
• Roles and Responsibilities
• Management Information System(s)
• Communicating and using data
• Work plan
• MEAL budget (recommended 3-10% of total project/programme budget for
MEAL activities)
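The recommended 3-10% allocation can be sanity-checked with simple arithmetic. The total budget figure here is purely hypothetical:

```python
total_budget = 250_000  # hypothetical total project budget

# Recommended MEAL allocation band: 3% to 10% of the total budget.
meal_min = total_budget * 3 // 100   # lower bound of the recommended range
meal_max = total_budget * 10 // 100  # upper bound of the recommended range

print(meal_min)  # 7500
print(meal_max)  # 25000
```

If the MEAL line items in a proposal fall well outside this band, that is a prompt to revisit either the MEAL plan or the budget.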
project/programme (see Session 3 on Programme Frameworks, Objectives and
Indicators for more details).
While indicators are initially selected during the project/programme design phase,
they can change over time as the context evolves. Just as a project/programme can
change throughout the funding period, so can a MEAL plan. You need to ensure
there is a mechanism in place to guarantee the continued relevance of the MEAL
plan to the project/programme.
When setting targets for your indicators, you must focus on what your project/
programme can realistically achieve given your human and financial resources and
the environment in which you are operating. For each indicator, consider the
following factors: baseline levels, past trends, donor expectations and logistics to
achieve targets. In addition to the magnitude or size of change over time, when
setting specific project/programme targets, you must also decide on the direction of
change that is desired over time. You may also wish to establish annual or
intermediate targets for multi-year projects/programmes.
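The factors above (a baseline level, a desired direction and magnitude of change, and evenly spaced intermediate targets) can be combined with simple arithmetic. All figures here are hypothetical:

```python
# Hypothetical example: raise an indicator from a baseline of 40.0
# by 20.0 points over three years, with evenly spaced annual targets.
baseline = 40.0
desired_change = 20.0  # positive sign encodes the desired direction (an increase)
years = 3

final_target = baseline + desired_change
annual_targets = [
    round(baseline + desired_change * (y / years), 1) for y in range(1, years + 1)
]

print(final_target)    # 60.0
print(annual_targets)  # [46.7, 53.3, 60.0]
```

In practice intermediate targets are rarely evenly spaced; past trends, donor expectations and logistics may justify front- or back-loading the change.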
Step 3: Select your data collection methods and data sources for prioritized
indicators in your MEAL plan.
There are many ways to collect data. The choices you make will largely depend
upon your available budget and the availability of human resources. Key factors
here include the availability of skilled staff familiar with conducting monitoring and
evaluation activities, the appropriateness of the methodology for your MEAL
objectives, and the context of your project/programme (e.g. political, social,
security). You will also need to determine whether any special studies will be
conducted and what study design will be used. You should also carefully consider
the internal and external capacity to conduct any special studies (this includes
technical capacity as well as cost considerations).
Something that often is neglected when developing a MEAL plan is the need to
assess the MEAL technical capacity of your project/programme. When preparing
the MEAL plan, you must at least consider the existing data collection systems and
staff capacity in MEAL. The programme may have a MEAL unit with staff trained in
monitoring and evaluation methods that will be responsible for leading the
development and coordination of the MEAL plan. If this is not the case, then there
may be individuals who are motivated and have an interest in monitoring and
evaluation. It is important to identify those people even if they do not have a formal
monitoring and evaluation position; it is also important to work with and strengthen the
capacities of these people in MEAL. Assessing current capacity and using resources
that are already available will help us to avoid duplication of data collection and
reporting and collecting information that will not be used. Finally, no matter the size
of your project/programme, each data collection strategy you select should be as
rigorous as possible to ensure the data gathered are objective and unbiased or
impartial.
Plan your data collection methods and data sources from the very beginning of
your project/programme to conserve
limited human and financial resources and to ensure you are continuously learning
and improving your interventions throughout the project/programme cycle.
Consider the level of participation you require from each group, including children
and youth. How often should your team meet with key stakeholders? What form
will these meetings take – information sharing for particular groups, circulation of
final reports, individual discussions, or other fora? Will data be made publicly
available? Involving children and youth in meaningful and appropriate ways takes
careful preparation and planning. For example, if you plan to share the lessons
learned from a final project evaluation with children who participated in the project,
you cannot simply provide them with a copy of the final report. The key messages
will need to be simplified and adapted so they are understandable, relevant and age-
appropriate. This topic will be covered in greater detail in Session 8, Children’s
Participation in MEAL.
MEAL budgeting
MEAL activities requiring a budget
The budget allocated towards MEAL activities should include both sources of funding
and any existing gaps. MEAL-related costs that should be included in the project
budget might include the following:
• Baseline studies or data collection activities
• Establishing a monitoring and evaluation or Management Information System
(MIS)
• Training and capacity building for all staff and partners involved in developing
and implementing MEAL activities
• Ongoing, routine monitoring activities, including supervision visits, data
collection activities, printing of questionnaires, travel and refreshments for
respondents, and data inputting
• Mid-term review (if required)
• Final evaluation (if required)
• Learning events and dissemination activities, including publications and public
meetings where appropriate
• Staff time/salary for dedicated monitoring and evaluation personnel can also
be included.
The final allocation for MEAL costs will be highly dependent on the size and
complexity of the project and the project design (evaluation design). For example, a
small stand-alone intervention that is focusing on distribution of teaching materials
in one district (with concrete outputs and outcomes) may require less funding for
MEAL activities compared with a multi-country initiative focused on
implementation of national policies.
Summary of this Session
• A MEAL plan can be defined as a management tool that can be used to monitor
and evaluate interventions, projects or programmes. A MEAL plan is your
project or programme’s roadmap for implementing your activities as intended,
conducting monitoring and evaluation activities in a timely and efficient
fashion, and ensuring continuous learning throughout the project and
programme cycle.
• A MEAL plan should be developed with the Programme Management team
and relevant partners and stakeholders to ensure ownership and sense of
shared responsibility, especially where they are responsible for any aspect of
data collection.
CHAPTER 17
Introduction:
How do you know if a given intervention is working? How can you measure and
demonstrate if it is producing changes in a systematic manner? Are there different ways of
doing so? This session will answer these and other similar questions and will provide you
with useful skills to manage or actively participate in the process of conducting a baseline
study, needs assessments, project evaluations and real-time reviews.
Indeed, these questions and processes are key to improving the effectiveness and quality of
our programmes. Our dual-mandate Quality Framework includes explicit components
related to evaluation and learning, with the aim of improving programme quality and
measuring and demonstrating impact for stakeholders.
The Save the Children Baseline Essential Standard states that: ‘Projects and
programs establish a baseline (or other appropriate equivalent) as a comparison and
planning base for monitoring and evaluations’.
The information gathered (and analyzed) in the baseline consists of data on indicators
specifically chosen to monitor project performance on a regular basis. It also considers the
anticipated use of these indicators at a later time to investigate project effects and impacts.
Indicators (and methods of data collection and analysis) used in baseline surveys may be
qualitative or quantitative. Baseline studies can also serve to confirm the initial set of
indicators to ensure those indicators are the most appropriate to measure achieved project
results.
What is evaluation?
There are many definitions of evaluation: most of them agree that it is ‘a systematic
assessment of an on-going or completed project, programme or policy, its design,
implementation and results’.
Evaluation also refers to the process of determining the worth or significance of an activity,
policy or program. Evaluations should provide information that is credible and useful,
enabling the incorporation of lessons learned into the decision–making process of both
recipients and donors.
As laid out in the table below, evaluation is different from monitoring. Monitoring can
be defined as 'a continuing assessment based on systematic collection of data on specific
indicators as well as wider information on the implementation of projects'. Monitoring
enables managers and other stakeholders to measure progress towards the
stated objectives and to track the use of allocated funds.
Monitoring | Evaluation
...is ongoing | ...is periodical
Gathers information related to the programme regularly, on a day-to-day basis | Assesses the programme's design, processes and results
Provides information on whether and how the planned activities are implemented or not | Provides information on what the programme effects are
Refers to activities, outputs and intermediate results/outcomes | Refers to intermediate results/outcomes, the bigger/strategic objectives and the goal (in the case of an impact study)
Fosters informed re-design of project methodology | Can foster informed review of both the current project (mid-term evaluations) and new projects (final evaluations)
Collects data on project indicators | Uses data on project indicators collected through the monitoring process
Performed by project staff (internal) | Can be performed by an external evaluator, by a mixed team with internal and external evaluators, or by an internal team only
Table 1: Differences between monitoring and evaluation
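Monitoring's routine, day-to-day collection of indicator data (the "Collects data on project indicators" row above) can be sketched as a small record structure. This is an illustrative sketch only; the indicator name, target and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A project indicator tracked through routine monitoring (hypothetical example)."""
    name: str
    target: float
    actuals: list = field(default_factory=list)  # one entry per monitoring round

    def record(self, value: float) -> None:
        self.actuals.append(value)

    def progress(self) -> float:
        """Latest recorded value as a fraction of the target."""
        return self.actuals[-1] / self.target if self.actuals else 0.0

# Monitoring gathers the data regularly; an evaluation later reuses the same series.
enrolment = Indicator(name="children enrolled in school", target=400)
for round_value in [120, 210, 320]:  # three monitoring rounds
    enrolment.record(round_value)

print(f"{enrolment.progress():.0%} of target reached")  # 80% of target reached
```

The same series of recorded values then feeds an evaluation, which is one reason the table notes that evaluation uses data collected through the monitoring process.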
There are many types of evaluations and also a series of related terms, including: proof of
concept evaluation, post-programme evaluation, thematic evaluations, real-time evaluation,
review, ex-ante evaluation, ex-post evaluation, formative evaluation, summative evaluation,
mid-term evaluation, process evaluation, etc. These are described in more detail in the
Evaluation Handbook referenced at the end of this module.
Particularly worth mentioning here, however, are Real Time Reviews (RTRs) and Evaluation
of Humanitarian Action (EHA) in emergency contexts. RTRs combine two previously
separate exercises: Real Time Evaluations and Operational Reviews. Therefore, an RTR looks
at both programmatic and operational aspects of a humanitarian response. The primary
purpose of an RTR is to provide feedback in a participatory way in real time (i.e. during the
implementation of an emergency response, and during the review itself) to those executing
and managing a humanitarian response: this is designed to improve operational and
programmatic decision making. RTRs are internal exercises that aim to have a light touch so
as not to interfere with programming.
RTRs examine the initial phase of the response. The findings and recommendations should
help staff to make immediate adjustments to both programming and operations in order to
better meet the needs of beneficiaries.
Are there any steps that are different for a baseline study?
The main steps of an evaluation include:
• Involving stakeholders
• Designing and testing the evaluation tools
• Preparing a management response to the report
• Data management
• Agreeing a timeline
• Sharing findings with stakeholders
• Putting the evaluation team together
• Training
• Reviewing the evaluation report
In the design phase, we clarify the purpose of the survey, define the objectives and
questions, and decide on the design and methodology. It is here that you need to consider
issues like the level of rigor, data sources, sampling and data collection, and data quality. In
section 3 below I have presented different evaluation models and designs.
The planning and implementation processes include discussing how stakeholders will be
involved, preparing a budget and timeline, and forming the survey team. It is also here that
you should think about the management process, which includes defining roles and
responsibilities for the evaluation team, and preparing the terms of reference and contracts.
In the inception phase, you would then carry out more detailed planning and test the data
collection tools and other methods of measurement, involving children and stakeholders.
During the implementation of the study, in addition to the data collection, input, cleaning,
analysis and storage, writing of the evaluation report should also start.
Finally, it is important to take time and use resources to communicate and disseminate the
findings, plan follow up(s) on any recommendations and prepare the response to the
evaluation and the management action plan. This should also include ways to ensure that
learning informs future project or programme planning, design and implementation. Session
9 ‘Use of MEAL data’ explores in more detail how to use the evaluation results.
The process represented in the graph below can be adapted to different types of evaluations,
regardless of the context and the methodology chosen. This process can also be helpful when
planning baselines or end-line studies or different types of research, although as indicated
earlier in this session, these are different from evaluations.
You may want to have a break here before we explore different evaluation designs!
Choosing the evaluation approach and design
Evaluation models and approaches
In recent years conducting evaluations has become a frequent practice in the international
cooperation sector. Donors demand results and value for money, as well as transparency and
efficiency in the use of resources. There is a shared concern for measuring interventions to
see if what we are doing works, and to what extent. Evaluations are used for accountability
and decision making, but also for improvement and learning, or enlightenment for future
actions, in the words of Stufflebeam and Shinkfield (1987:23).
Evaluations try to answer not only questions such as 'Has the intervention achieved its
objectives?', but also 'How were the objectives achieved?' It is important to identify why you want to
conduct an evaluation and what will be the key questions you want the evaluation to
answer, so you can choose the most appropriate approach and design. For instance, if you
want to conduct an evaluation primarily to show results and to be accountable for the use of
resources, you may want to opt for a ‘criteria-based standardized model’. This approach is
widely used in the international development sector and it fits very well with objectives and
results-based planning tools such as the log-frame. This evaluation approach uses pre-
established criteria to organize the evaluation questions and the evaluation design, such as
the DAC criteria: effectiveness, efficiency, relevance, impact and sustainability. The
criteria-based model assesses the programme's performance against pre-established standards
that determine if the objective has been fulfilled or not.
However, there are other approaches that go beyond the fulfillment of objectives and focus
on other aspects of the intervention. For instance, Theory-based evaluation approaches
(initially proposed by Carol Weiss and adapted by other authors) allow a wider look at the
programme. This approach considers the intervention as a system with inter-related
components that need to be looked at during the evaluation to understand how the changes
are produced. It will help you to read the information relating to the programme's
‘black box’. This is particularly useful if you plan to use your evaluation for learning and
decision making or if you want to document your programme so that it can be replicated or
scaled up.
Outcome mapping focuses on changes in the behaviour of the people, groups and
organizations with whom a development programme works directly. It shifts away from
assessing the products of a programme to concentrate on changes in behaviors,
relationships, actions and activities.
Process tracing is a qualitative research method that attempts to identify the causal
processes – the causal chain and causal mechanism – between a potential cause or causes
(e.g. an intervention) and an effect or outcome (e.g. changes in local government practice).
The method involves evidencing the specific ways in which a particular cause produced (or
contributed to producing) a particular effect. A key step in process tracing is considering
alternative explanations for how the observed outcomes came about, and assessing what
evidence exists to support these.
Rights-based approaches organize the evaluation questions and design around the rights
that the intervention intends to promote, protect or fulfill. The United Nations Convention
on the Rights of the Child (1989) is the primary human rights framework for child-rights based
programming and evaluation. The rights and principles presented in the convention are
usually organized around four groups: survival and development; non-discrimination;
participation; and the best interest of the child. This approach puts children at the centre and
recognizes them as rights holders. It analyses power relations between groups, in particular
between duty bearers and rights holders, focusing on the most vulnerable groups, with a
particular interest in how principles such as non-discrimination and participation have been
integrated into the programme. It prioritizes sustainable results, looking at the root
causes of the problems instead of just considering their immediate causes.
Which one is the best approach?
It all depends on the purpose of your evaluation, what you expect the evaluation to
tell you and how you and other stakeholders are planning to use the evaluation. You
also need to check if the donor is requesting a particular approach or if you can
propose one yourself. These models can also be complementary, so you can
combine elements of several of them to create a tailored model. For instance, you can
organize the evaluation questions by groups of rights and sub-group them by criteria
and standards as well, including questions that relate to the process and the
structure and not only to the results.
It is not possible to be completely objective when evaluating programmes, so if you want your
findings to be reliable, it is important to be as systematic as possible. Here are some tips that
can help you:
• Reflect stakeholders’ information needs in your evaluation questions.
• Assure the quality of your data collection and analysis techniques
• Follow four separate steps of a systematic assessment: data analysis; data
interpretation; conclusions; and recommendations.
• Ensure the highest standards on ethics, stakeholder participation and transparency
Evaluation Design
If your evaluation will measure results, then once you have decided the evaluation approach
and the main evaluation questions, you will also need to choose the appropriate evaluation
design. This decision will also depend on elements such as the available resources (some
designs are more resource-intensive than others) and the implementation stage of the
programme. Some evaluation designs require conducting measurements before the
programme activities actually start (baselines) so it will not be possible to use them if you
only start planning for your evaluation when the programme is in an advanced
implementation stage.
Why do you need a good evaluation design?
When we conduct an evaluation we want to find out if there have been any changes
(gross effect), but also whether the changes were produced by the programme. This means: can
the changes be attributed to the programme? If so, how much of the change is
actually due to the programme activities (net effect)? This can only be established if we
have a sound evaluation design.
1. When you compare groups with each other and/or over time, you can
see their differences and rule out possible alternative explanations for the changes.
In order to do this, you can:
a) Compare the treatment group with a comparison or control group (not
receiving project activities/services)
b) Compare the programme group with itself over time
2. When using statistical analysis, it is possible to control for, or hold constant,
the effect of possible alternative explanations. This requires identifying the variables to be
controlled and gathering data on them, then using statistical analysis techniques to
separate the effects of each variable.
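One common way to hold a variable constant, in the spirit of point 2 above, is stratification: compare outcome changes separately within each level of the variable you want to control. A minimal sketch, with invented data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (level of the controlled variable, observed change in score)
records = [
    ("urban", 12), ("urban", 10), ("urban", 14),
    ("rural", 4),  ("rural", 6),  ("rural", 5),
]

# Group the outcome changes by the level of the variable we want to hold constant.
by_stratum = defaultdict(list)
for stratum, change in records:
    by_stratum[stratum].append(change)

# Comparing means within each stratum removes the confounding effect of
# location from the overall comparison.
for stratum, changes in sorted(by_stratum.items()):
    print(stratum, mean(changes))
```

Regression analysis achieves the same goal with more precision when several variables must be controlled at once, but the stratified comparison above conveys the core idea.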
From least to most robust:

1. Post-test only (treatment group only). Non-experimental design; little internal
validity; cannot calculate the net effect. In this model you take one measurement at the
end of the intervention with the treatment group. For instance, you can conduct a survey
with programme participants at the end of the activities. As there is only one
measurement, we cannot compare the data with other groups or with the group itself
before the activities, so it is not possible to establish whether there have been changes
and whether these are attributable to the programme. We can only document changes
perceived by participants to which the programme will have contributed.

2. Pre- and post-test (treatment group only). Non-experimental design; only possible to
extract the gross effect: the results that the programme is apparently producing, or as
perceived by those participating in the programme. This model compares the results
before and after the programme by measuring changes only in the treatment group. It is
not possible to know whether any other variables affected the intervention and the
changes, if any.

3. Post-test only with a non-equivalent comparison group (N-ECG). Non-experimental
design; it includes variables related to the programme as well as other factors, so we can
only talk about contribution, not attribution. The measurement only takes place at the
end of the programme, but both the treatment group and the comparison group (the
non-equivalent group) are measured. You can compare the results between both groups,
but there is no comparable data on the initial situation before the programme started.

4. Pre- and post-test with N-ECG. Quasi-experimental design. With this model you
perform measurements before and after the programme on both the treatment group
and the comparison group. It is very close to a real experiment, but it is not as robust
because the comparison group is not selected randomly. This is why it is called a
non-equivalent group.

5. Time series (longitudinal analysis). Medium internal validity. Here several
measurements are taken before the programme starts, while it is being implemented
and after it has finished, so you can have a broader picture of how the treatment group
is changing, if this is the case. The differences between measurements can be treated as
net effect, but we cannot be as sure as with the experimental design. A comparison
group (N-ECG) can be included or not.

6. Experiment with pre- and post-test. Experimental design; high internal validity;
calculates the net effect. This model is also called a 'Randomized Controlled Trial' (RCT)
and is considered the only real experiment. Here the individuals of both the treatment
group and the control group are selected randomly from the same group of individuals.
The treatment group receives the programme that is being tested, while the control
group receives an alternative treatment, a dummy treatment (placebo) or no treatment
at all. The groups are followed up to see how effective the experimental treatment was,
and any difference in response between the groups is assessed statistically.
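The random assignment at the heart of an RCT can be sketched in a few lines; the participant IDs, group size and seed below are hypothetical.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 eligible individuals

random.seed(42)  # fixed seed so the split is reproducible for this illustration
treatment = set(random.sample(participants, k=len(participants) // 2))
control = [p for p in participants if p not in treatment]

# Because every individual had the same chance of ending up in either group,
# pre-existing differences should balance out on average, which is what
# gives the design its high internal validity.
print(len(treatment), len(control))  # 10 10
```

In practice the assignment list is generated once, documented, and kept fixed for the duration of the trial.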
The non-experimental designs (models 1, 2 and 3) are not very robust and only
provide the gross effect of a programme, but they require fewer resources and the
analysis is not complex. Sometimes, if the programme has already started, models 1
and 3 are the only ones available to us. For this reason it is worth
planning the evaluation during the programme planning stage, so a pre-test can be
planned for and performed before the activities start.
Strictly speaking, impact evaluations are those which provide the net effect: this
means that they calculate the ‘amount of change’ that can be claimed by the
programme (attribution). Otherwise, we can only say that the programme has
contributed to the changes.
[Figure omitted: chart comparing the project group's scores at the beginning and end of the project.]
Figure 2: Difference between baseline and endline (example 2).
Source: elaborated by Larry Dershem
The same happens in the third example: even though both groups start at different
points, they both make the same increase (45-25=20 for the project group; 35-15=20
for the non-project group).
[Figure omitted: the project group rises from 25 to 45 and the non-project group from 15 to 35, so there is no difference in the size of the change.]
Figure 3: Difference between baseline and endline (example 3). Source: elaborated by
Larry Dershem
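The arithmetic in these examples is a difference-in-differences calculation: the net effect is the project group's change minus the comparison group's change. A minimal sketch using the example 3 figures (project group 25 to 45; non-project group 15 to 35):

```python
def difference_in_differences(project_before, project_after,
                              comparison_before, comparison_after):
    """Net effect = project group's change minus comparison group's change."""
    project_change = project_after - project_before
    comparison_change = comparison_after - comparison_before
    return project_change - comparison_change

# Example 3: both groups improve by 20 points, so the net effect is zero
# even though the project group's gross change looks substantial.
net_effect = difference_in_differences(25, 45, 15, 35)
print(net_effect)  # 0
```

If the comparison group had instead moved from 25 to 35 while the project group moved from 25 to 45, the same function would report a net effect of 10 points attributable to the programme.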
To summarize:
• There is a variety of programme evaluation approaches to choose from. The most
widely known and used in this sector is the criteria-based standardized model,
but you can also consider other options and methodologies, such as theory-based
models, outcome mapping, process tracing, contribution analysis and rights-based
approaches if they are better suited to your intervention and context.
• The evaluation design determines whether the evaluation can provide the gross
effect (changes that occurred) or the net effect (changes that can be attributed to
the programme).
• In this session we have explored the main quantitative evaluation designs and
discussed their degree of internal validity. From least to most robust, these are:
post-test only; pre- and post-test; post-test only with a non-equivalent control
group; pre- and post-test with a non-equivalent control group; time series; and
experiment with pre- and post-test.
Consulting with a wide range of stakeholders (and reaching agreements) can take
quite some time and significant preparation, particularly when involving children. If
you need to plan for an evaluation, make sure you allow enough time for each step.
A full evaluation process may take several months (though usually less in real-time
evaluations or evaluations of humanitarian interventions).
The evaluation manager coordinates the key elements of the evaluation process
(such as the stakeholder analysis, the evaluation questions, the field work plan,
finalization of data collection tools, management of data, report dissemination, etc.).
The evaluation manager also ensures that quality assurance takes place. This will
include looking at:
• the credibility of the findings
• how well the work addresses the aims of the evaluation
• the rationale and robustness of the evaluation design
• the way the sample was taken
• the data collection processes
• comparing the draft report with the ToR
As indicated, a ToR can also serve as a management tool throughout the evaluation
process and a reference against which the completed evaluation report and
deliverables are assessed before approval. The scope and level of detail of a ToR
might vary; in some cases, the ToR will entail a plan for the entire evaluation
process, while in others, ToRs can be developed for specific parts of the evaluation –
e.g. for the evaluation team or even individual team members. Furthermore, the
scope and level of detail in the ToR should be consistent with the size, complexity
and strategic importance of the project or programme being evaluated.
As you will have already guessed, stakeholders should be widely consulted while
the ToR is being developed, so their views are clearly reflected and their information
needs included. It will also help to clarify expectations around the process and to
motivate stakeholders to be fully involved. The success of the evaluation and its use
depends on it!
Very often the donor will provide a ToR template that should be followed. If this is
not the case and you need to develop your own, it can be useful to collect a few
examples for inspiration.
What skills should you ask for? Ideally, you would have an expert for each
thematic area addressed by the intervention, as well as an expert on evaluation and
methods. For our programmes, it is also particularly important that the team has the
skills to work with children and involve them in the process as much as possible,
although orientation and training may be needed for this. Evaluators should speak
the local language and know the country context as much as possible, so local
evaluators are usually preferred, although international evaluators could also
provide additional capacity to a local team.
If the team is composed of several evaluators, it is best to name a team leader. Teams
usually work better when members have worked together before.
When looking for external evaluators, it is considered good practice for transparency
to advertise your evaluation in different networks and platforms. This allows all
potential candidates to submit an Expression of Interest (EoI) or a proposal and
participate in a competitive process. This also increases your chances of getting the
best value for money.
The management response and action plans are the key ways you manage the use of
the evaluation results. The management response to the evaluation lists the key
recommendations, along with actions proposed by the evaluation and also the
response of the project or programme team.
The management response is often part of the evaluation report and hence a public
document.
The evaluation action plan is developed for our internal management purposes and
is more detailed than the management response. It is a requirement for Save the
Children as per the Evaluation Essential Standard and the Country Director is
accountable for its follow-up and implementation. In terms of content, it is similar to
the MEAL Action trackers described in the Use of MEAL Data module: the action
plan includes recommendations and lessons learned during the evaluation, together
with actions to address these with resource requirements (financial and/or
technical), a timeline and a list of who is accountable for which actions.
• For emergency responses, available baseline data in the Emergency
Preparedness Plan should be reviewed and used. In Category 1 or 2 rapid onset
emergencies, initial rapid assessments should be undertaken within 24-72 hours.
• Evaluation is the systematic and objective assessment of an on-going or
completed project, program or policy – its design, implementation and results.
CHAPTER 17
This chapter introduces the basics of research design and data collection. More
specifically, we will discuss the process of identifying research questions and
selecting appropriate methodologies, understanding the difference between
quantitative and qualitative data, and associated benefits and limitations. We will
give an overview of common methods and data analysis techniques for both
quantitative and qualitative research and finally discuss the interpretation of
findings using multiple data sources. The scope of this module is limited to concepts
that will enable learners to gain a broad understanding of the subject area. However,
we will include links to useful resources should learners wish to increase their
knowledge on a particular topic.
This session will take you through the process of developing research questions and
study objectives and linking them to an appropriate study design.
In this session you will learn how to develop a ‘situation analysis’ study to
understand the struggles and coping strategies of working street children in Karachi.
Any study is subject to time, resource and staff constraints, so keep that in mind
when you develop your research question.
A research question is meant to help you focus on the study purpose. A research
question should therefore define the investigation, set boundaries and provide some
level of direction.
In the process of developing a research question, you are likely to think of a number
of different research questions. It is useful to continually evaluate these questions, as
this will help you refine and decide on your final research question. You could, for
example, ask:
• Is there a good fit between the study purpose and the research question?
• Is the research question focused, clear and well-articulated?
• Can the research question be answered? Is it feasible – given time, resource
and staff constraints?
To further help you define your investigation it is useful to develop a few study
objectives. These objectives should be specific statements that reflect the steps you
will take to answer your research question. For the above case study, I would
include the following objectives:
• Map out the struggles and coping strategies of working street children in
Karachi
• Determine how socio-economic status impacts on children’s struggles and
coping strategies
• Identify differences between boys and girls as well as the cause of these
differences
• Discuss the implications of these findings for development programmes.
By addressing these four study objectives, you will automatically begin to ‘paint a
picture’ that answers your overarching research question.
Depending on the nature of your research question and study objectives, you may
begin to think about the direction you think the answers will take. For example, in
what ways do you think socio-economic status may determine the struggles of
working street children and their ability to cope with hardship?
As your study seeks to describe some features (struggles and coping strategies) of a
group of working street children at one specific point in time, you are in the process
of developing an exploratory study. Exploratory studies are useful for conducting
situation analyses and benefit from drawing on both qualitative and quantitative
methods. If you were developing a study to assess the impact of an intervention
supporting working street children in Karachi, you would likely benefit from
developing a study with a more experimental design with a before and after
intervention focus. For more detail on experimental evaluation designs, please
consult Session .
Promoting ethical and participatory research
After having determined the design of your study, it is time to think about how you
might best engage with the respondents of the study, many of whom will be
children. You will, for example, need to consider the following questions:
What might be the social and ethical implications of the respondent’s engagement
with you and the study? How can you best protect and safeguard their well-being
and interests? What are ethical and safe ways to involve children in research?
These questions are important to consider and resonate with Save the Children’s
child safeguarding policy. Broadly speaking, ethical research is about 'doing good and
avoiding harm' to those participating in the research. This is achieved primarily by
consulting communities in your areas of study and obtaining answers and practical
responses to the above questions. Make sure you follow up on their
recommendations. You also need to familiarize yourself with existing toolkits and
universal guidelines for conducting ethical research (see resources below) and use
this information to develop informed consent forms, which include:
i. An information sheet in the local language, explaining: who you and Save
the Children are, including your contact details; the purpose of the interview or
exercise; whether they have to take part; what will happen if they do not want to
participate; what will happen if they agree to participate; how long it will take; how
confidentiality will be assured; what they will get out of it; risks associated with
their participation; the approximate date of completion; and how the
information gathered will be used. If you will be involving non-literate groups you
need to think about how to communicate this information to them, for example in a
group discussion and/ or with visual materials.
ii. A consent form that includes statements that the participant has understood
what they will be involved in (e.g., 'I understand that if I decide at any time that I
don’t want to participate in this study, I can tell the researchers and will be
withdrawn from it immediately. This will not affect me in any way’. Or, to take
another instance: ‘I understand that reports from the findings of this study, using
information from all participants combined together, will be published.
Confidentiality and anonymity will be maintained and it will not be possible to
identify me from any publications’.
You need to prepare separate information consent forms for both children and
adults. If children under the age of 18 are participating in your study, you also need
to obtain informed consent from their guardians. Different data collection methods
require different informed consent forms. So it is important you tailor your
information sheets and consent forms to your specific study. More and more
organizations, including Save the Children UK, are setting up internal ethics
committees to support and guide staff in conducting ethical research. At the
end of this session we have included some resources providing you with additional
information.
Differences between quantitative and qualitative research and their application
Quantitative research
Quantitative research typically explores specific and clearly defined questions that
examine the relationship between two events, or occurrences, where the second
event is a consequence of the first event. Such a question might be: ‘what impact did
the programme have on children’s school performance?’ To test the causality or link
between the programme and children’s school performance, quantitative researchers
will seek to maintain a level of control of the different variables that may influence
the relationship between events and recruit respondents randomly. Quantitative
data is often gathered through surveys and questionnaires that are carefully
developed and structured to provide you with numerical data that can be explored
statistically and yield a result that can be generalized to some larger population.
Qualitative research
Research following a qualitative approach is exploratory and seeks to explain ‘how’
and ‘why’ a particular phenomenon, or programme, operates as it does in a
particular context. As such, qualitative research often investigates i) local knowledge
and understanding of a given issue or programme; ii) people’s experiences,
meanings and relationships and iii) social processes and contextual factors (e.g.,
social norms and cultural practices) that marginalize a group of people or impact a
programme. Qualitative data is non-numerical, covering images, videos, text and
people’s written or spoken words. Qualitative data is often gathered through
individual interviews and focus group discussions using semi-structured or
unstructured topic guides.
Summary of differences

Aspect | Qualitative research | Quantitative research
Type of knowledge | Subjective | Objective
Aim | Exploratory and observational | Generalisable and testing
Characteristics | Flexible; contextual portrayal; dynamic, continuous view of change | Fixed and controlled; independent and dependent variables; pre- and post-measurement of change
Sampling | Purposeful | Random
Data collection | Semi-structured or unstructured; narratives, quotations, descriptions | Structured; numbers, statistics
Nature of data | Values uniqueness, particularity | Values replication
Analysis | Thematic | Statistical
Although the table above illustrates qualitative and quantitative research as distinct
and opposite, in practice they are often combined or draw on elements from each
other. For example, quantitative surveys can include open ended questions.
Similarly, qualitative responses can be quantified. Qualitative and quantitative
methods can also support each other, both through a triangulation of findings and
by building on each other (e.g., findings from a qualitative study can be used to
guide the questions in a survey).
as part of your routine project monitoring activities, in a needs assessment or
baseline or as part of an evaluation exercise.
1 Individual interview
An individual interview is a conversation between two people that has a structure
and a purpose. It is designed to elicit the interviewee’s knowledge or perspective on
a topic. Individual interviews, which can include key informant interviews, are
useful for exploring an individual’s beliefs, values, understandings, feelings,
experiences and perspectives on an issue. Individual interviews also allow the
researcher to probe a complex issue, learning more about the contextual factors
that shape individual experiences.
3 Photovoice
Photovoice is a participatory method that enables people to identify, represent and
enhance their community, life circumstances or engagement with a programme
through photography and accompanying written captions. Photovoice involves
giving a group of participants cameras, enabling them to capture, discuss and
share stories they find significant.
4 Picture story
The picture story method enables children, in a fun and participatory way, to
communicate their perspectives on particular issues through a series of drawings
(story telling) they have made. The story telling can either be done in writing,
depending on the child’s level of literacy, or verbally with a researcher. The picture
story method is relatively quick and inexpensive, particularly if the draw-and-write
technique is adopted. The picture story method provides a non-threatening way to
explore children’s views on a particular issue (e.g. barriers to girls’ education) and to
begin to identify what can be done to address any struggles faced by children.
5 Identifying participants
Qualitative research often focuses on a limited number of respondents who have
been purposefully selected to participate because you believe they have in-depth
knowledge of an issue you know little about, such as:
• They have experienced first-hand your topic of study, e.g. working street children
• They show variation in how they respond to hardship, e.g. children who draw on
different protective mechanisms to cope with hardship on the street and in the
work place
• They have particular knowledge or expertise regarding the group under study,
e.g. social workers supporting working street children.
You can select a sample of individuals with a particular ‘purpose’ in mind in
different ways, including:
• Extreme or typical case sampling – learning from unusual or typical cases, e.g.
children who struggle with hardship as expected (typical) or those who do well
despite extreme hardship (unusual)
• Snowball sampling – asking others to identify people who will interview well,
because they are open and because they have an in-depth understanding about
the issue under study. For example, you may ask street children to identify other
street children you can talk to.
• Random purposeful sampling – if your purposeful sample size is large you can
randomly recruit respondents from it.
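The random purposeful approach in the last bullet can be sketched in a few lines of Python. The pool of forty named candidates is hypothetical; the point is simply that the random draw happens *within* a purposefully assembled pool:

```python
import random

# Hypothetical purposeful pool: individuals already identified as having
# first-hand knowledge of the issue under study.
purposeful_pool = ["respondent_%02d" % i for i in range(1, 41)]  # 40 candidates

random.seed(1)  # fixed seed so the sketch is reproducible

# Randomly recruit 10 respondents from the purposeful pool.
recruited = random.sample(purposeful_pool, k=10)
print(len(recruited))  # 10
```

`random.sample` draws without replacement, so no respondent can be recruited twice.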
Whilst purposeful sampling enables you to recruit individuals based on your study
objectives, this limits your ability to produce findings that represent your population
as a whole. It is therefore good practice for triangulation purposes to recruit a
variety of respondents (e.g., children, adults, service users and providers).
Qualitative data analysis ought to pay attention to the ‘spoken word’, context,
consistency and contradictions of views, frequency and intensity of comments, their
specificity as well as emerging themes and trends. We now explain three key
components of qualitative data analysis.
The process of reducing your data
There are two ways of analyzing qualitative data. One approach is to examine your
findings with a pre-defined framework, which reflects your aims, objectives and
interests. This approach is relatively easy and is closely aligned with policy and
programmatic research which has pre-determined interests. This approach allows
you to focus on particular answers and abandon the rest. We refer to this approach
as ‘framework analysis’ (Pope et al 2000). The second approach takes a more
exploratory perspective, encouraging you to consider and code all your data,
allowing for new impressions to shape your interpretation in different and
unexpected directions. We refer to this approach as thematic network analysis
(Attride-Stirling, 2001). More often than not, qualitative analysis draws on a mix of
both approaches.
Whichever approach guides you, the first thing you need to do is to familiarize
yourself with your data. This involves reading and re-reading your material (data) in
its entirety. Make notes of thoughts that spring to mind and write summaries of
each transcript or piece of data that you will analyze. As your aim is to condense all
of this information to key themes and topics that can shed light on your research
question, you need to start coding the material. A code is a word or a short phrase
that descriptively captures the essence of elements of your material (e.g. a quotation)
and is the first step in your data reduction and interpretation.
To help speed up your coding you can, after having read through all of your data,
develop a coding framework, which consists of a list of codes that you anticipate will
be used to index and divide your material into descriptive topics. If you are
approaching your data following the deductive framework approach, your coding
will be guided by a fixed framework (and you index your material according to
these pre-defined codes). If, however, you are following the more inductive thematic
network approach, you are likely to add new codes to your list as you progress with
the coding, continually developing your coding framework. Coding is a long, slow
and repetitive process, and you are encouraged to merge, split up or rename codes
as you progress. There is no fixed rule on how many codes you should aim for, but if
you have more than 100-120 codes, it is advisable that you begin to merge some of
your codes.
Once you have coded all of your material you need to start abstracting themes from
the codes. Go through your codes and group them together to represent common,
salient and significant themes. A useful way of doing this is to write your code
headings on small pieces of paper and spread them out on a table: this process will
give you an overview of the various codes and will also allow you to move them
around and cluster them together into themes. Look for underlying patterns and
structures – including differences between types of respondents (e.g., adults versus
children, men versus women) if analyzed together. Label these clusters of codes (and
perhaps even single codes), with a more interpretative and ‘basic theme’. Take a new
piece of paper, write the ‘basic theme’ label, and place it next to your cluster of
codes. If, for example, the codes ‘Torn uniform’ and ‘No school books’ appear in
your interview transcripts with working street children, they can be clustered
together as ‘Working street children lack school materials’ (see Figure 3).
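One simple way to record the clustering of codes into ‘basic themes’ is as a mapping from theme label to codes. The first theme and its two codes come from the example above; the second theme and its codes are hypothetical, added only to show the structure:

```python
# Illustrative coding framework: descriptive codes clustered into
# interpretative 'basic themes'.
basic_themes = {
    "Working street children lack school materials": [
        "Torn uniform",          # codes taken from the example in the text
        "No school books",
    ],
    "Working street children face risks at work": [   # hypothetical theme
        "Verbal abuse from customers",                # hypothetical codes
        "Long working hours",
    ],
}

# Reverse lookup: which basic theme does a given code belong to?
code_to_theme = {
    code: theme for theme, codes in basic_themes.items() for code in codes
}
print(code_to_theme["Torn uniform"])
```

The same structure can be nested one level further to group ‘basic themes’ under ‘organizing themes’, mirroring the table-method clustering described in the text.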
You may find that not all of your codes are of interest and relevance to your research
question and that you choose to only cluster 60 of your codes into ‘basic themes’ that
help shed light on your question. Let us say, for argument’s sake, that through this
process you identify 20 ‘basic themes’. Repeat this process with your basic themes.
Examine your ‘basic themes’ and cluster them together into higher order and more
interpretative ‘organizing themes’. Let us say, again for argument’s sake, that this
process reduces your 20 ‘basic themes’ to four ‘organizing themes’, two of which
represent struggles faced
by working street children (as exemplified by Figure 4) and two which give detail to
their coping strategies. Figure 4 also illustrates how you can transparently show how
you went from having descriptive codes to focusing on a few distinct, interpretative
and networked themes that you can use to begin answering parts of your research
question.
The method of cutting out codes and moving them around on a table is often
referred to as the ‘table method’. The ‘table method’ works particularly well for
smaller studies. If you have vast amounts of data (e.g. more than 20 interview
transcripts), you may find it helpful to use qualitative data analysis software, such as
NVivo or ATLAS.ti. These software packages are, however, not free and you will
require a license.
Statistical analysis is used to summarize and describe quantitative data, and graphs
or tables can be used to visually present raw data. This section will review the
commonly used methods/sources of quantitative data and the techniques used for
recruiting participants.
Quantitative methods
Quantitative data can be collected using a number of different methods and from
a variety of sources.
Accurate sampling requires a sample frame or list of all the units in our target
population. A unit is the individual, household or school (for example) from which
we are interested in collecting data. A sample frame for a household survey would
include all the households in the population identified by location or, in the case of
our case study, all of the working street children in Karachi.
Bias
The process of recruiting participants for quantitative research is quite different from
that of qualitative research. In order to ensure that our sample accurately represents
the population and enables us to make generalizations from our sample we must
fulfill a number of requirements.
Sampling bias can occur if decisions are made about sample selection that mean that
some individuals have a greater chance of being selected for the sample than others.
Sample bias is a major failing in our research design and can lead to inconclusive,
unreliable results. There are many different types of bias. For example, tarmac bias
refers to our tendency to survey those villages that are easily accessible by road. We
may be limited in our ability to travel to many places due to lack of roads, weather
conditions etc. which can create a bias in our sample.
Self-selection or non-response bias is one of the most common forms of bias and is
difficult to manage. Participation in questionnaire/surveys must be on a voluntary
basis. If only those people with strong views about the topic being researched
volunteer then the results of the study may not reflect the opinions of the wider
population creating a bias.
Therefore, there is a trade-off between sample precision and considerations of
optimal resource use.
There are no ‘rules of thumb’ when determining sample size for quantitative
research. It is not possible to say whether 10% of the population, for instance, would
provide an adequate sample, as this will be affected by a number of factors. You
should be wary of sample plans in research or evaluations that suggest sample size
can be calculated using a percentage of the population without further clarification
or rationale for this.
Statisticians will calculate sample size using a range of different equations, each of
which is appropriate for different research situations and contexts. It is important
to discuss the objectives of your research, expected results, data types, resources and
context with a statistician or technical advisor at the design stage of your research in
order to calculate an appropriate sample.
It is also useful to understand the two main statistics, which will be used to calculate
the sample size. These are the confidence interval or margin of error and the
confidence level.
The confidence interval is the acceptable range in which your estimate can lie. For
example, if you were using a sample to collect data estimating the percentage of
street children in Karachi who are engaged in harmful work you might set your
margin of error at 10%. This would mean that if, following the collection of your
data, you found that 75% of children in your sample are engaged in harmful work,
you would know that the real figure for the population lies within plus or minus 10
percentage points, i.e. anywhere between 65% and 85%.
If you are carrying out before and after intervention analysis to determine whether
your work has contributed to a change you will need to consider what size of effect
you anticipate occurring before you calculate your sample size. For example, if you
are carrying out a project which expects to reduce the number of children working
on the street from 75% to 70% you would not want to use a confidence interval of
10% as your estimate would not be precise enough to detect this change.
The level of confidence determines how sure you want to be that the actual
percentage (of children engaged in harmful work, for example) falls within your selected
confidence interval. As we are using a sample and not asking every single child
individually we are always making an estimate of the real value and we can never
be 100% confident. A level of confidence of 95% is commonly used, which means
that there is a 5% chance that the actual percentage will fall outside the selected
confidence interval.
When deciding on what confidence interval to use in your sample size calculation it
is important to remember that whilst a larger range gives you a smaller sample size,
a smaller range gives you greater precision in your results. Selecting a lower level of
confidence will also give a smaller sample size but also decrease the reliability of the
data. Unfortunately there is no simple answer and you need to review the values
used on a case-by-case basis. Remember, however, that if the sample is too small
then this will lead to inconclusive results, which cannot provide us with the
information that we need. If the sample is too large, however, it may be impossible
to collect and resources will be wasted.
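As one illustration of the kind of equation a statistician might apply, the widely used formula for estimating a single proportion combines the confidence level (via its z-score), the expected proportion and the margin of error. The input values below (95% confidence, a conservative expected proportion of 0.5, and a 10% margin of error) are chosen only for illustration, not as a recommendation:

```python
import math

def sample_size_for_proportion(z: float, p: float, e: float) -> int:
    """Common sample-size formula for estimating a single proportion.

    z: z-score for the chosen confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: margin of error expressed as a proportion (0.10 for +/- 10%)
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# 95% confidence, conservative p, 10% margin of error
print(sample_size_for_proportion(1.96, 0.5, 0.10))  # 97
```

Halving the margin of error to 5% roughly quadruples the required sample (385), which illustrates the trade-off between precision and resources discussed above. Real studies also adjust for finite populations, non-response and design effects, so treat this as a sketch of the logic rather than a complete recipe.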
Sampling methods
Stratified sampling: Stratified sampling is used when individuals in a population
can be split into distinct, non-overlapping groups. These groups are called ‘strata’.
Common strata are village, district, urban/rural etc.
required sample size of 60. In order to stratify our sample we need to calculate 30%
of 60.
Stratified sampling is beneficial when there are big differences between the strata, as
they can give a more accurate representation of the population and, if the sample
size is large enough, allow for further sub-set analysis.
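The proportional stratification mentioned above can be sketched as follows. The two strata, their population counts and the assumption that one stratum holds 30% of the population are illustrative; the overall sample size of 60 follows the example in the text:

```python
# Proportional stratified allocation: each stratum receives a share of
# the total sample equal to its share of the population.
population = {"urban": 300, "rural": 700}   # hypothetical stratum sizes
total_sample = 60                           # required sample size from the text

total_population = sum(population.values())
allocation = {
    stratum: round(total_sample * size / total_population)
    for stratum, size in population.items()
}
print(allocation)  # {'urban': 18, 'rural': 42}
```

The urban stratum holds 30% of the population, so it is allocated 30% of 60, i.e. 18 respondents; respondents are then drawn randomly within each stratum.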
Quantitative analysis
The methods we have described above help us to collect quantitative data, but is the
collection of data our end goal?
No, of course not! A large set of data sitting in a spreadsheet does not help us to
understand the characteristics of the population we are working with or describe the
changes brought about by our projects. We need to use the data to create
information.
In our case study example, we may have interviewed children working on the street
in Karachi and collected all the data together in a spreadsheet; however, we need to
analyze and summarize the data to answer our research questions. We need to
understand what percentage of children is involved in different work types. For
instance, we may want to understand if girls and boys carry out similar tasks or are
exposed to similar risks.
Statistics help us turn quantitative data into useful information to help with
decision-making. We can use statistics to summarize our data, describing patterns,
relationships and connections. Statistics can be descriptive or inferential. Descriptive
statistics help us to summarize our data whereas inferential statistics are used to
identify statistically significant differences between groups of data (such as
intervention and control groups in a randomized control study). In this module our
focus will be on descriptive rather than inferential statistics; the sections below give
a short introduction to the most common descriptive statistics.
Data structure
We generally collect data from a number of individuals or ‘units’. These units are
most often the children or adults that we are working with. However, our units
could also be hospitals or schools, for example. The different measurements,
questions or pieces of information that we collect from these individuals are the
variables.
Variables
There are two types of variables, numerical and categorical. It is important to
distinguish between these two types of variables, as the analysis that you do for each
type is slightly different.
Numerical variables are numbers. They can be counts (e.g. number of participants
at a training), measures (e.g. height of a child) or durations (e.g. age, time spent).
Categorical variables place each unit into one of a set of distinct groups or
categories (e.g. sex, or type of work carried out).
Categorical information is presented using a frequency table. The frequency table
shows us how many participants fall into each category. We can also then represent this as a
percentage or proportion of the total. Figure 5 shows an example frequency table for
the different types of work carried out by children working on the street in Karachi.
Frequency tables can be used to present findings in a report or can be converted into
a graph for a more visual presentation.
A proportion describes the relative frequency of each category and is calculated by
dividing each frequency by the total number.
Percentages are calculated by multiplying the proportion by 100. Proportions and
percentages can be easier to understand and interpret than examining raw frequency
data and are often added into a frequency table (see figure 6).
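Since the original frequency table (Figures 5 and 6) is not reproduced here, the counts below are hypothetical, but the calculation of proportions and percentages follows the description above exactly:

```python
# Hypothetical counts of work types among surveyed street children.
frequencies = {"street vending": 45, "begging": 30, "car washing": 25}

total = sum(frequencies.values())  # 100 children in this sketch

# Proportion = frequency / total; percentage = proportion * 100.
table = {
    work: {"frequency": count,
           "proportion": count / total,
           "percentage": 100 * count / total}
    for work, count in frequencies.items()
}
print(table["street vending"]["percentage"])  # 45.0
```

The proportions always sum to 1 (and the percentages to 100), which is a quick check worth running on any frequency table you produce.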
Data points
[Figure 7: diagram showing centre and spread for a set of data points]
The most common statistics used to describe the centre are the mean (commonly
known as the average) and the median. The median is the middle value in a data set,
half the data are greater than the median and half are less. The mean is calculated by
adding up all the values and then dividing by the total number of values.
Using our case study example – if you were to interview 23 street children and
record their age you might get a set of data as below. Each number is the age of an
individual child and the ages have been arranged in order.
3 3 4 4 5 7 7 8 9 10 10 11 12 12 12 13 13 14 14 15 15 15 16
The mean and the median would be different for this dataset. To calculate the
median you need to arrange the children in order of age and then find the mid-way
point. In this example the middle (12th) value is 11: 11 children are below the age of
11 and 11 are above it, so the median age is 11 years.
To calculate the mean you need to add up all the ages and then divide by the
number of children (23 in this example). The ages above sum to 232, so the mean is
232 ÷ 23 ≈ 10.1 years.
Spread is most easily described using the range of the data. This is the difference
between the minimum and maximum. The range of the example data above would
be 13 years (minimum = 3, maximum = 16).
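Using the 23 ages listed above, the centre and spread statistics can be checked with Python's standard library:

```python
import statistics

# The 23 ages from the case study, already sorted.
ages = [3, 3, 4, 4, 5, 7, 7, 8, 9, 10, 10, 11,
        12, 12, 12, 13, 13, 14, 14, 15, 15, 15, 16]

median_age = statistics.median(ages)   # middle (12th) value
mean_age = statistics.mean(ages)       # sum of ages divided by 23
age_range = max(ages) - min(ages)      # maximum minus minimum

print(median_age)          # 11
print(round(mean_age, 1))  # 10.1
print(age_range)           # 13
```

Note that the mean (≈10.1) sits below the median (11) here: the cluster of very young children pulls the average down, which is exactly why it is worth reporting both statistics.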
Other statistics describing spread are the interquartile range and standard
deviation.
The interquartile range is the difference between the upper quartile and lower
quartile. A quarter (or 25%) of the data lie above the upper quartile and a quarter of
the data lie below the lower quartile.
The standard deviation shows the average difference between each individual data
point (or age of child in our example) and the mean age. If all data points are close to
the mean then the standard deviation is low, showing that there is little difference
between values. A large standard deviation shows that there is a larger spread of
data. Calculating the standard deviation yourself is a little complex but this can also
be done easily in Microsoft Excel (see Computer Assisted Statistics Textbook for details
on how to calculate standard deviation).
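As an alternative to the Excel approach mentioned above, the interquartile range and standard deviation for the same 23 ages can be computed with Python's standard library. Note that `statistics.quantiles` with its default settings uses one common quartile convention; other conventions (including Excel's) can give slightly different quartile values:

```python
import statistics

# The 23 ages from the case study, already sorted.
ages = [3, 3, 4, 4, 5, 7, 7, 8, 9, 10, 10, 11,
        12, 12, 12, 13, 13, 14, 14, 15, 15, 15, 16]

# Quartiles: a quarter of the data lie below q1, a quarter above q3.
q1, q2, q3 = statistics.quantiles(ages, n=4)
iqr = q3 - q1

# Sample standard deviation: spread of individual ages around the mean.
sd = statistics.stdev(ages)

print(q1, q3, iqr)   # 7.0 14.0 7.0
print(round(sd, 2))  # 4.22
```

A standard deviation of about 4.2 years against a mean of about 10.1 confirms what the raw list suggests: the ages are widely spread rather than clustered around the average.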
Discussing results and drawing conclusions
The final stage of the research process is to interpret the findings, drawing
conclusions and making recommendations. When drawing conclusions you should review
and summarize your findings looking for explanatory patterns or relationships that
help answer your research questions.
If you have collected both quantitative and qualitative data you should compare and
contrast these findings when interpreting your work. The integration of quantitative
and qualitative research can give us a broader understanding of our research subject.
Quantitative research can describe magnitude and distribution of change, for
instance, whereas qualitative research gives an in-depth understanding of the social,
political and cultural context. Mixed methods research allows us to triangulate
findings, which can strengthen validity and increase the utility of our work.
You should also reflect on your findings in comparison to other research or
evaluation work in the area and consider whether findings were similar.
Limitations
When drawing conclusions and making recommendations it is important to
recognize the limitations of our data. In quantitative research, the level to which we
can generalize our findings to the wider population will depend upon the quality of
the sampling strategy used. You should be careful not to over-generalize results: for
example, suggesting a result is applicable for the whole country when only two out
of eight regions were sampled.
Findings from qualitative research should not be used to make inferences about a
wider population, but they can be used to provide examples of how or why
something happens in specific contexts.
It is also important that conclusions and recommendations are based on the data
collected rather than personal opinions. When reporting quantitative or qualitative
data, you can only make valid conclusions on the topics researched and for which
you have supporting evidence.
generated two ‘global themes’, they could each represent a findings chapter, with
‘organizing themes’ representing sub-headings, under which the ‘basic themes’ are
discussed and supported by plenty of quotations, which are extracted from your
codes. For quantitative data you may present frequency tables or graphs of variables
of interest. When presenting qualitative findings, it is important that you do not only
discuss and present a single and dominant view, but also acknowledge
contradictions and disagreements within the data. Please note that when presenting
qualitative data, you cannot claim causality or association. You are presenting
people’s perceptions and experiences of a phenomenon. As such, you have to be
careful about how you present a finding. You can, for example, say ‘some
respondents felt …’, ‘a common opinion was …’, ‘the perception of some adults was
…’, ‘this suggests a possible relationship between …’ and so forth.
A ‘Discussion’ section should highlight how the findings emerging from the study
corroborate, contradict or build on existing evidence, as well as giving detail to the
limitations of the study.
We hope you found this session useful and will draw on it to develop systematic
investigations that can be used to improve the quality, impact and accountability of
our programmes. Best of luck!
ASSIGNMENT TWO