Diploma in Monitoring and Evaluation Module III

The document is a training manual for an online diploma course on Monitoring and Evaluation at Intex Management Institute. It covers various aspects of project management, including project identification, planning, execution, and evaluation techniques. The manual emphasizes the importance of effective project management skills and outlines the project life cycle phases, characteristics of a good project manager, and reasons projects may fail.

INTEX MANAGEMENT INSTITUTE

MONITORING AND EVALUATION

ONLINE DIPLOMA COURSE

MODULE 3
TRAINING MANUAL

Intex Management Institute


Thithino Plaza, 4th Floor
P.O. Box 52477 – 00200, Nairobi, Kenya
Cell Phone: +254 722 525141 or +254 734 525141
Email: info@intexservices.co.ke, training@intexservices.co.ke
Website: www.intexservices.co.ke

Table of Contents
Chapter 1: Introduction to Project Management
Chapter 2: Project Identification and Formulation
Chapter 3: Project Appraisal
Chapter 4: Project Planning and Scheduling
Chapter 5: Project Team Management
Chapter 6: Indicators
Chapter 7: Project Management Techniques of Monitoring
Assignment One
Chapter 8: Understanding the Initiative
Chapter 9: Stakeholder Analysis
Chapter 10: Importance of Monitoring and Evaluation
Chapter 11: Cluster Development
Chapter 12: Community Based Participatory Research
Chapter 13: Participatory Evaluation
Chapter 14: Why Should You Have an Evaluation Plan?
Chapter 15: Project MEAL Framework
Chapter 16: MEAL Planning and Budgeting
Chapter 16: Baseline and Evaluation Design and Management
Chapter 17: Methods of Data Collection in MEAL
Assignment Two

CHAPTER 1

INTRODUCTION TO PROJECT MANAGEMENT


Introduction
A project is an interrelated set of activities that has a definite starting and
ending point and results in the accomplishment of a unique, often major, outcome.
Project management is the planning and control of the events that together comprise
the project. It aims to ensure the effective use of resources and the delivery of the
project objectives on time and within cost constraints.
An activity or task is the smallest unit of work effort within the project; it
consumes both time and resources, which are under the control of the project
manager. A project is a sequence of activities that has a definite start and finish, an
identifiable goal and an integrated system of complex but interdependent
relationships. The purpose of project management is to achieve successful
project completion with the resources available. A successful project is one that:
• Has been finished on time
• Is within its cost budget
• Performs to a technical/performance standard which satisfies the end user

Project Characteristics
• Objectives: A project has a set of objectives or a mission; once the objectives
are achieved, the project is treated as completed.
• Life cycle: A project has a life cycle consisting of the following stages:
Conception stage: where project ideas are conceived
Design stage: where the detailed design of the project is worked out
Implementation stage: where the project is implemented as per the design
Commissioning stage: where the project is commissioned after implementation;
commissioning indicates the end of its life cycle.
• Definite time limit: A project has a definite time limit; it cannot continue
forever. What represents the end would normally be spelt out in the
set of objectives.
• Uniqueness: Every project is unique and no two projects are similar. Even
if the plants are exactly identical or are merely duplicated, the location,
the infrastructure, the agencies and the people make each project unique.
• Team work: A project normally cuts across diverse areas, with
personnel specialized in their respective areas; any project calls for the
services of experts from a host of disciplines. Co-ordination among the
diverse areas calls for team work. Hence a project can be implemented
only with team work.
• Complexity: A project is a complex set of thousands of varieties. The
varieties are in terms of technology, equipment and materials,
machinery and people, work culture and ethics, but they remain
interrelated; unless this is so, they either do not belong to the project
or will never allow the project to be completed.
• Sub-contracting: Some of the activities may be entrusted to sub-contractors
to reduce the complexity of the project. Sub-contracting is
advantageous if it reduces the complexity of the project so that the
project manager can co-ordinate the remaining activities
more effectively. The greater the complexity of the project, the larger
the extent to which sub-contracting will be resorted to.
• Risk and uncertainty: Every project has risks and uncertainty associated
with it; the degree of risk and uncertainty varies from project to project.
• Customer-specific nature: A project is always customer specific, because
the products produced or services offered by the project must
be customer oriented. It is the customer who decides upon
the product to be produced or the services to be offered, and hence it is the
responsibility of any organization to go for projects/services that are
suited to customer needs.
• Change: Changes occur throughout the lifespan of a project as a natural
outcome of many environmental factors. The changes may vary from minor
changes to major changes which may have a big impact or even change
the very nature of the project.
• Forecasting: Forecasting the demand for any product/service that the
project is going to produce is an important aspect. All projects involve
forecasting, and in view of the importance attached to forecasting, forecasts
must be accurate and based on sound fundamentals.
• Optimality: A project is always aimed at optimum utilization of resources
for the overall development of the organization and the economy. This is
because resources are scarce and have a cost.
• Control mechanism: All projects have a pre-designed control
mechanism in order to ensure completion within the time
schedule and the estimated cost, while achieving the desired
level of quality and reliability.

Characteristics of a Good Project Manager


• Planning and organization skills
• Personnel management skills
• Communication skills
• Change orientation
• Ability to solve problems in their totality
• High energy levels; ability to work under pressure
• Ambition for achievement
• Ability to take suggestions
• Understanding of the views of project team members
• Ability to develop alternative actions quickly
• Knowledge of project management methods and tools
• Capacity for self-evaluation
• Effective time management
• Capacity to relate current events to project management
• Ability to handle project management software tools/packages
• A sense of humour
• Solving issues/problems immediately without postponing them
• Initiative and risk-taking ability
• Familiarity with the organization
• Tolerance for differences of opinion, delay and ambiguity
• Knowledge of technology
• Conflict-resolving capacity

Responsibilities of a Project Manager


1. To plan thoroughly all aspects of the project, with the active involvement
of all functional areas concerned, in order to obtain and maintain a realistic
plan that satisfies their commitment to performance.
2. To control the organization of the manpower needed by the project.
3. To control the basic technical definition of the project, ensuring that
"technical" versus "cost" trade-offs determine the specific areas where
optimization is necessary.
4. To lead the people and organizations assigned to the project at any given
point in time.
5. To monitor performance, cost and efficiency of all elements of the project and
of the project as a whole, exercising judgment and leadership in determining the
causes of problems and facilitating solutions.
6. To complete the project on schedule and within cost, these being the overall
standards by which the performance of the project manager is evaluated.

Reasons Why Projects Go Wrong


1. Project goals are not clearly defined
2. Constraints arising from:
• short time scales
• resource availability
• quality factors
• human factors

1. Problems with Project Goals


• The project sponsor or client has an inadequate idea of what the project is
about at the start.
• There may be a failure of communication between the client and the project
manager.
• The specification may be subject to constant change due to problems with
individual clients, environmental changes, etc.
• Project goals may be unrealistic and unachievable, and this may only be
realized once the project is underway.
• The client may become carried away with the idea of the project and may
be unable to see clearly what can be achieved.
• Projects may be highly complex and may have a number of objectives
that actually contradict each other.

Solutions
• Ensure that the client's specification is clear and understandable.
• Preparation of a project brief should take the objectives set out in the previous
exercise and translate them into targets and goals. This brief should be agreed
by the sponsor/client and communicated to the project manager.
• Establishment of success criteria:

Success criteria can be described as being hard or soft.

Hard Criteria
These are tangible and measurable and can be expressed in quantitative terms. They
pose the question "what" should be achieved, and include the following:
• Performance specifications: these may be set out in terms of the ability to
deal with certain demands.
• Specific quality standards: this could relate to the achievement of a favorable
report from an outside inspection agency.
• Meeting deadlines.
• Cost or budget constraints: completing the project within the cost limit or
budget which has been determined.
• Resource constraints: e.g. making use of existing premises or labour force.

Soft Criteria
These are often intangible, qualitative and difficult to measure; they
tend to ask the question "how". They include the following:
• Demonstrating co-operation: this is about showing that the
project team can work together effectively and without an undue degree of
conflict. It can be an important consideration to develop and
implement solutions for the organization which have an element of
consensus and stem from a co-operative attitude.
• Presenting a positive image: this is important though difficult to
quantify.
• Achieving a total quality approach: this is more about the
adoption of a philosophy of continuous improvement than the
achievement of a specific performance target on quality.
• Gaining total project commitment: this is about how the project is
managed and the attitude of the project team to it.
• Ensuring that ethical standards are maintained.
• Showing an appreciation of risk: this ensures that no
unacceptable risks are taken in the pursuit of other project
objectives.

~9~
2. Constraints on the Completion of Projects
• Time: there is a relationship between the time taken for the project and its
cost. A trade-off between the two constraining factors may be necessary.
• Resource availability: there is always a budget for the project, and this will be a
major constraint. While the overall resources available may in theory be
sufficient to complete the project, there may be difficulties arising out of
the way in which the project has been scheduled, i.e. there may be a number
of activities scheduled to take place at the same time and this may not be
possible given the amount of resources available.
• Quality factors: this refers to whether the project delivers the goods to the
right quality. The following techniques can be used to overcome these problems:
o Budgeting, and the corresponding control of the project budget through
budgetary control procedures
o Project planning and control techniques, e.g. Gantt charts and network
analysis.
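The network analysis mentioned above can be sketched as a minimal critical-path (CPM) forward pass: each activity's earliest finish is its own duration plus the latest earliest finish among its predecessors. The activity names, durations and dependencies below are invented for illustration, not taken from this manual.

```python
# Hedged sketch of the CPM forward pass. Activities, durations (days)
# and dependencies are invented examples.
activities = {  # name: (duration_in_days, [predecessor names])
    "design":   (5,  []),
    "procure":  (10, ["design"]),
    "build":    (8,  ["procure"]),
    "train":    (4,  ["design"]),
    "handover": (2,  ["build", "train"]),
}

earliest_finish = {}

def ef(name):
    # Earliest finish = own duration + latest earliest finish of predecessors.
    if name not in earliest_finish:
        duration, preds = activities[name]
        earliest_finish[name] = duration + max((ef(p) for p in preds), default=0)
    return earliest_finish[name]

project_duration = max(ef(a) for a in activities)
print(project_duration)  # 25 days: design -> procure -> build -> handover
```

The longest such chain is the critical path: any slippage on it slips the whole project, which is exactly the trade-off between time and resources described above.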

Project Life Cycle


All projects have to pass through the following five phases:
i Conception phase
ii Definition phase
iii Planning and organizing phase
iv Implementation phase
v Project clean-up phase

Conception phase
This is the phase during which the project idea germinates. Ideas
may come from the following sources:
• The need to find a solution to certain problems
• Non-utilization of available funds, plant capacity or expertise, or simply
unfulfilled aspirations
• Surveying the environment
• Ideas put across by well-wishers

The ideas need to be put into shape before they can be considered and
compared with competing ideas. They need to be examined in light of
objectives and constraints. If this phase is avoided or truncated, the project
will have innate defects and may eventually become a liability for the
investors. A well-conceived project will go a long way towards the successful
implementation and operation of a project; ideas may undergo some
changes as the project progresses because pertinent data may not be
available at inception.

Definition phase
This phase develops the idea generated during the conception phase and
produces a document describing the project in sufficient detail, covering all
aspects necessary for the customer and for financial institutions to make up
their minds on the project idea. The areas to be examined during this phase are:
• Raw materials: quantitative and qualitative evaluation
• Plant size/capacity: enumeration of plant capacity for the entire plant
and for the main departments
• Location and site: description of the location, supported by a map
• Technology/process selection: selection of the optimum technology, reasons
for the selection and description of the selected technology
• Project layout: selection of the optimum layout, reasons for the selection and
appropriate drawings
• Utilities: fuel, power, water, telephone, etc.
• Manpower and organization pattern
• Financial analysis
• Implementation schedule

This phase clears some of the ambiguities and uncertainties associated with
the formulation made during the conception phase. It also establishes, in
clear terms, the risks involved in going ahead with the project. A project can
either be accepted or dropped at this stage.

Planning and Organizing Phase
This phase includes the following:
a) Project infrastructure and enabling services
b) System design and basic engineering package
c) Organization and manpower
d) Schedule and budgets
e) Licensing and governmental clearances
f) Finance
g) Systems and procedures
h) Identification of the project manager
i) Design basis, general conditions for purchase and contracts
j) Site preparation and investigations
k) Work packaging

Thus this phase is concerned with preparation for the project to take off
smoothly. It is often taken as part of the implementation phase, since it does
not limit itself to paperwork and thinking but involves many activities,
including field work. It is essential that this phase is completed thoroughly,
as it forms the basis for the next phase, i.e. the implementation phase.

Implementation Phase
Preparation of specifications for equipment and machinery, ordering of
equipment, lining up construction contractors, trial runs, testing, etc. take
place during this phase. As far as the volume of work is concerned, 80-85%
of the project work is done in this phase alone. Because the bulk of the work
is done during this phase, it needs to be completed as fast as possible with
minimum resources.

Project Clean-Up Phase
This is a transition phase in which the hardware built with the active
involvement of various agencies is physically handed over for production
to a different agency which was not so involved earlier. Drawings,
documents, files, and operation and maintenance manuals are catalogued and
handed over to the customer. Project accounts are closed, materials
reconciliation is carried out, and outstanding payments are made and dues
collected during this phase. Essentially this is the handing over of the project
to the customer.

PROJECT LIFE CYCLE CURVES

[Figure: project life cycle effort curve. Approximate share of total effort by
phase: conception and definition phase 4%; planning and organizing phase 8%;
implementation phase 85%; clean-up phase 3%.]

Tools and Techniques for Project Management
1. Project selection techniques
Cost-benefit analysis
Risk and sensitivity analysis
2. Project execution planning techniques
Work breakdown structure (WBS)
Project execution plan (PEP)
Project responsibility matrix
Project management manual
3. Project scheduling and coordinating techniques
i. Bar charts
ii. Life cycle charts
iii. Line of balance (LOB)
iv. Networking techniques (PERT/CPM)
4. Project monitoring and progress techniques
i. Project measurement technique (PROMPT)
ii. Performance monitoring technique (PERMIT)
iii. Updating, reviewing and reporting technique (URT)
5. Project cost and productivity control techniques
i. Productivity budgeting techniques
ii. Value engineering (VE)
iii. Cost/WBS
6. Project communication and clean-up techniques
i. Control room
ii. Computerized information systems.
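The cost-benefit analysis listed under project selection can be sketched as a simple discounted-cash-flow comparison: discount each year's benefit back to present value, then compare the total with the initial cost. The discount rate and all cash figures below are invented assumptions for illustration.

```python
# Hedged cost-benefit sketch: present value of benefits vs. initial cost.
# All figures are illustrative, not from the manual.
def present_value(rate, amounts):
    # amounts[t] is the cash amount received at the end of year t + 1
    return sum(a / (1 + rate) ** (t + 1) for t, a in enumerate(amounts))

initial_cost = 100_000
benefits = [40_000, 50_000, 60_000]   # benefits over three years

pv_benefits = present_value(0.10, benefits)   # 10% discount rate assumed
npv = pv_benefits - initial_cost              # net present value
bcr = pv_benefits / initial_cost              # benefit-cost ratio

print(round(npv))     # 22765 -> positive, so the project passes selection
print(round(bcr, 2))  # 1.23  -> each unit of cost returns 1.23 in benefits
```

A positive NPV (equivalently, a benefit-cost ratio above 1) is the usual acceptance rule; sensitivity analysis then reruns the same calculation with the rate and cash figures varied.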

CHAPTER 2

PROJECT IDENTIFICATION AND FORMULATION


Introduction
Identifying a worthwhile new project is a complex problem; it involves careful study
from many different angles. The following are some of the sources from which new
project ideas may emerge.
a) Performance of Existing Industries
An analysis of the profitability and break-even points of different industries will
offer adequate information about the financial health of different industrial
sectors. One should also consider the stage of the business cycle in which the
different industries stand at a particular time. A well-performing industry might
have reached maturity and thus be in the process of decline, while a poorly
performing industry might have potential for growth.
b) Availability of Raw Materials
Easy availability of good-quality raw materials at cheaper prices is a definite
indication that a project that can make use of those materials can be
considered. The availability of minerals may lead to chemical industries,
while the availability of agricultural produce shows the potential for setting up
a food processing plant.
c) Availability of Skilled Labor
Based on the locally available skilled labor force, suitable industries that make
better use of the skilled manpower can be identified.
d) Import/Export Statistics
These may reveal potential that remains untapped. A higher proportion of imports
of a particular product and an increasing trend in its imports indicate that a product
which can serve as a substitute can be produced locally. Similarly, a higher
proportion of exports of a particular product and an increasing trend in its exports
indicate the higher export potential of that product.

e) Price Trend
This may give an indication of the demand-supply relationship. If the general
price level of a particular product is increasing steadily, it indicates a
demand-supply gap. A further detailed study may be undertaken to ascertain
the extent of the demand-supply gap.
f) Data from Various Sources
Various publications of governments, banks and financial institutions, consultancy
organizations, manufacturers' associations, export promotion councils, research
institutions and international agencies contain data and statistics which may
indicate prospective ventures.
g) Research Laboratories
These are concerned with identifying new products or processes that offer a new
avenue for commercial exploitation; care should be taken to ensure viability.
h) Consumption Abroad
Entrepreneurs who are willing to take higher risks can identify projects for
the manufacture of products or the supply of services which are new to the country
but extensively used abroad.
i) Identify Unfulfilled Psychological Needs
Consumer goods like cosmetics, bathing soap, toothpaste, etc. are examples. New
products of this group being introduced and accepted by consumers indicate
unfulfilled psychological needs of the consumers.
j) Plan Outlays and Government Guidelines
Government plan outlays in different sectors serve as useful pointers towards
possible investment opportunities. They indicate potential demand for goods and
services by different sectors of the economy.
k) Analysis of Economic and Social Trends
E.g. the growing desire for leisure points to investment opportunities in
recreational activities, rest-houses, resorts, etc.; the growing awareness of the
value of time points to growing demand for fast foods, high-speed vehicles,
better modes of transport, ready-made garments, etc.

l) Possibility of Reviving Sick Units
In any economy there are many industrial units that might have become sick;
these units might still have the capacity to become financially viable. An
entrepreneur with the required skills can take over a weak/sick unit, revive it
and turn it around.
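Source (a) above leans on break-even analysis when screening existing industries. As a minimal sketch, break-even output is the fixed cost divided by the contribution per unit (price minus variable cost); the figures below are invented for illustration.

```python
# Hedged break-even sketch: units needed before revenue covers all costs.
# Fixed cost, unit price and unit variable cost are illustrative assumptions.
def break_even_units(fixed_costs, unit_price, unit_variable_cost):
    contribution_per_unit = unit_price - unit_variable_cost
    return fixed_costs / contribution_per_unit

units = break_even_units(fixed_costs=500_000,
                         unit_price=250,
                         unit_variable_cost=150)
print(units)  # 5000.0 units must be sold to break even
```

An industry whose typical firms sell comfortably above this volume is financially healthy; one hovering near it signals the maturity or decline stage discussed in (a).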

Project Preparation
After having identified a project that appears worthwhile, the project
promoter has to analyze it further to ensure that it has potential and that the
investment in it would not go to waste but would yield attractive returns.
Project preparation consists of four stages, viz.
a) Pre-feasibility study
b) Functional studies or support studies
c) Feasibility study
d) Detailed project analysis/report

a) Pre-feasibility study
This has the following main objectives:
i) To determine whether the project offers a promising investment
opportunity
ii) To determine whether there are any aspects of the project that are
critical, requiring in-depth investigation by way of market surveys,
laboratory tests, pilot plant tests, etc.

The pre-feasibility study should examine:


1. The market potential for the selected products/services, the competitors in
the field and their market shares, the market forecast, and the trading practices
in the industry in terms of pricing, credit, distribution, government controls, etc.
2. The technologies available, the technology suitable for the project, the
manufacturing facilities required, etc.
3. The availability and sources of raw materials
4. The plant location
5. The plant capacity
6. Manpower requirements
7. The investment required and the returns expected

If the pre-feasibility study indicates that the project is worthwhile, the
feasibility study is undertaken. If the pre-feasibility study indicates certain
areas of the project that need detailed study, such studies are taken up before
the feasibility study; they are known as support studies or functional studies.

b) Support studies (functional studies)


Support studies may be conducted in any of the following areas:
• Market study
• Raw material/input study
• Project location study
• Plant size study
• Equipment selection study

c) Feasibility study
Technical feasibility: for projects concerning manufacturing activities, the
technology proposed to be adopted needs careful consideration. Technical
feasibility can be evaluated by answering the following questions:
o Is the technology proposed to be adopted the latest?
o What is the likelihood of the proposed technology becoming obsolete
in the near future?
o Is the proposed technology a proven technology?
o Is the proposed technology available indigenously?
o In the case of imported technology, is the technology available freely?

The aim is to analyze whether the proposed technology is capable of
producing the intended goods/services to the required specifications and to
the complete satisfaction of consumers.

Economic viability
This establishes whether the investment made in the project will give a
satisfactory return to the economy, in terms of the raw materials used, the
community as a whole, investments, etc.

Commercial Feasibility
This is assessed in relation to the sales volume of products/services, quality,
price and consumer acceptability.

Financial feasibility
This examines the workability of the project proposal in respect of raising
finance to meet the investment required for the project. It consists of
calculations of the cost of debt and equity and the anticipated profit, to check
whether the financial benefits expected are in excess of the financial costs
involved.
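The financial-feasibility check just described can be sketched by blending the cost of debt and the cost of equity into a single hurdle rate and comparing the project's anticipated return against it. The capital mix and rates below are invented assumptions, and tax effects and risk premia are ignored for simplicity.

```python
# Hedged sketch of a financial feasibility check. All figures are
# illustrative assumptions; taxes and risk premia are ignored.
def weighted_cost_of_capital(debt, equity, cost_of_debt, cost_of_equity):
    total = debt + equity
    return (debt / total) * cost_of_debt + (equity / total) * cost_of_equity

hurdle_rate = weighted_cost_of_capital(debt=600_000, equity=400_000,
                                       cost_of_debt=0.08, cost_of_equity=0.15)
anticipated_return = 0.12   # assumed expected profit rate of the project

print(round(hurdle_rate, 3))             # 0.108
print(anticipated_return > hurdle_rate)  # True -> benefits exceed financing cost
```

If the anticipated return falls below the blended financing cost, the proposal fails the financial-feasibility test even when it is technically and commercially sound.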

d) Detailed Project Report (DPR)


The main purpose of preparing the DPR is to formally communicate the
project promoter's decision to venture into a new project to financial
institutions for their perusal and to government departments for their
approval.
The main subdivisions of a DPR are:
1) General information about the project
2) Background and experience of the project promoters
3) Details and working results of industrial concerns already owned or
promoted by the project promoters
4) Details of the proposed project
• Plant capacity
• Manufacturing process
• Technical know-how/tie-up
• Management team for the project
b) Details of land, buildings, plant and machinery
c) Details of infrastructural facilities
d) Raw material requirements/availability
e) Effluents produced by the project and their treatment
f) Labor requirements/availability
5) Schedule of implementation of the project
6) Project cost
7) Means of financing the project
8) Working capital requirements/arrangements made
9) Marketing and selling arrangements
10) Profitability and cash-flow estimates
11) Mode of repayment of term loans
12) Government approvals, local body consents and other statutory permissions
13) Details of collateral security that can be offered to financial institutions

Project Management Challenges


Whether you are a new project manager or an experienced leader, project
management will continue to reveal itself as part art, part science, and part major
headache! The list below highlights some of the top project management challenges,
along with suggested solutions to help overcome them:
1. Unrealistic deadlines - Some would argue that the majority of projects have
"schedule slippage" as a standard feature rather than an anomaly. The
challenge of many managers becomes to find alternate approaches to the
tasks and schedules in order to complete a project "on time", or to get
approval for slipping dates out. An "absolute" time-based deadline such as a
government election, externally-scheduled event, or public holiday forces a
on-time completion (though perhaps not with 100% of desired deliverables).
But, most project timelines do eventually slip due to faulty initial deadlines
(and the assumptions that created them). Solution: Manage the stress of "the
immovable rock and the irresistible force" (i.e. the project deadline and the
project issues) with creative planning, alternatives analysis, and
communication of reality to the project participants. Also determine what
deadlines are tied to higher level objectives, or have critical links into
schedules of other projects in the organization's portfolio.
2. Communication deficit - Many project managers and team members do not
provide enough information to enough people, and many organizations lack
an infrastructure or culture for good communication. Solution: Determine proper
communication flows for project members and develop a checklist of what
information (reports, status, etc.) needs to be conveyed to project participants.
The communications checklist should also have an associated schedule of
when each information dissemination should occur.
3. Scope changes - As most project managers know, an evil nemesis "The Scope
Creep" is usually their number one enemy who continually tries to take
control. Solution: There is no anti-scope-creep spray in our PM utility belts,
but as with many project management challenges, document what is
happening or anticipated to happen. Communicate what is being requested,
the challenges related to these changes, and the alternate plans, if any, to the
project participants (stakeholders, team, management, and others).
4. Resource competition - Projects usually compete for resources (people,
money, time) against other projects and initiatives, putting the project
manager in the position of being in competition. Solution: Portfolio
Management - ask upper level management to define and set project priority
across all projects. Also realize that some projects seemingly are more
important only due to the importance and political clout of the project
manager, and these may not be aligned with the organization's goals and
objectives.
5. Uncertain dependencies - As the project manager and the team determine
project dependencies, assessing the risk or reliability behind these linkages
usually involves trusting someone else's assessment. "My planner didn't think
that our area could have a hurricane the day of the wedding, and now we're
out of celebration deposits for the hall and the band, and the cost of a
honeymoon in Tahiti!" Solution: Have several people - use brainstorming
sessions - pick at the plan elements and dependencies, doing "what if?"
scenarios. Update the list of project risk items if necessary based on the
results.
6. Failure to manage risk - A project plan often includes a simple list of risks,
but no further review happens unless instigated by an event later on.
Solution: Once a project team has assessed risks, they can either (1) act to
reduce the chance of the risk occurrence or (2) act or plan towards responding
to the risk occurrence after it happens.
7. Insufficient team skills - The team members for many projects are assigned
based on their availability, and some people assigned may be too proud or
simply not knowledgeable enough to tell the manager that they are not
trained for all of their assigned work. Solution: Starting with the project
manager role, document the core set of skills needed to accomplish the
expected workload, and honestly bounce each person's skills against the list
or matrix. Using this assessment of the team, guide the team towards
competency with training, cross-training, additional resources, external
advisors, and other methods to close the skills gap.
8. Lack of accountability - The project participants and related players are not
held accountable for their results - or for failing to achieve them. Solution:
Determine and use accountability as part of the project risk profile. These
accountability risks will be then identified and managed in a more visible
manner.
9. Customers and end-users are not engaged during the project. Project teams
can get wound up in their own world of internal deliverables, deadlines, and
process, and the people on the outside do not get to give added input during
the critical phases. Solution: Discuss and provide status updates to all project
participants - keep them informed! Invite (and encourage) stakeholders,
customers, end-users, and others to periodic status briefings, and provide an
update to those that did not attend.
10. Vision and goals not well-defined - The goals of the project (and the reasons
for doing it), along with the sub-projects or major tasks involved, are not
always clearly defined. Clearly communicating these vague goals to the
project participants becomes an impossible task. Some solutions and ideas to
thrash vagueness: Determine which parts of a project are not understood by
the team and other project participants - ask them or note feedback and
questions that come up. Check the project documentation as prepared, and
tighten up the stated objectives and goals - an editor has appropriate skills to
find vague terms and phrasing. Each project is, hopefully, tied into the
direction, strategic goals, and vision for the whole organization, as part of the
portfolio of projects for the organization.
Project leadership is a skill that takes time to develop in a person or
organization. Achieving success requires analyzing setbacks and failures in
order to improve. Focusing on each project's challenges and learning from
them will help to build a more capable and successful project management
capability.
CHAPTER 3

PROJECT APPRAISAL
Introduction
Project appraisal is a process of detailed examination of several aspects of a given
project before recommending the same. The institution that is going to fund the
project has to satisfy itself before providing financial assistance for the project. It
has to ensure that the investment on the proposed project will generate sufficient
returns on the investments made. The various aspects of project appraisal are:

Technical Appraisal
Technical appraisal broadly involves a critical study of the following aspects.
a) Scale of operations
Scale of operation is signified by the size of the plant. The plant size
mainly depends on the market for the output of the project.
b) Selection of process/technology: The choice of technology depends on the
number/ types available and also on the quality and quantity of products
proposed to be manufactured.
c) Raw materials
Products can be manufactured using alternative raw materials and with
alternative processes. The process of manufacture may sometimes vary with
the raw materials chosen.
d) Technical know-how
When the technical know-how for the project is provided by expert
consultants, it must be ascertained whether the consultant has the
requisite knowledge and experience and whether he has already executed
similar projects successfully. Care should be taken to avoid self-styled,
inexperienced consultants.

e) Product mix
Consumers differ in their needs and preferences. Hence variations in size
and quality of products are necessary to satisfy the varying needs and
preferences of customers. In order to enable the project to produce goods
of varying size nature and quality as per requirements of the customers,
the production facilities should be planned with an element of flexibility.
Such flexibility in the production facilities will help the organization to
change the product mix as per customer requirements, which is very
essential for the survival and growth of any organization.

f) Selection and procurement of plant and machinery: When selecting plant
and machinery, several factors need to be considered. They include the
output planned, machine hours required for each operation, machine
capacity, and the machines available in the market. Plant and machinery
form the backbone of any industry; the quality of output depends upon
the quality of the machinery used in processing the raw materials.

g) Plant layout: This is the arrangement of the various production facilities
within the production area. The plant layout should be so arranged as to
ensure a steady flow of production and minimize the overall cost. Some of
the considerations include future expansion, supervision required,
inspection, and safety requirements.

h) Location of projects: Several factors need to be considered in choosing the
location of a project. They include regional factors - raw materials,
proximity to market, availability of labor, availability of supporting
industries, availability of infrastructure facilities, climatic factors etc.
i) Project scheduling: This is the arrangement of the activities of the project
in the order of time in which they are to be performed.
Commercial Appraisal
This is concerned with the market for the product/service. Commercial appraisal
(or market appraisal of a project) is done by studying the commercial viability of
the product/service offered by the project from the following angles:
• Demand for the product
• Supply position for the product
• Distribution channels
• Pricing of the product
• Government policies

Economic Appraisal
Economic appraisal measures the effect of the project on the whole economy. In the
overall interest of the country, the limited stocks of capital and foreign exchange
should be put to the best possible use. Policy makers are therefore concerned with
where the scarce resources can be directed to maximize the economic growth of the
country, and they make this choice based on economic return.

Financial Appraisal
This includes appraising the project using financial tools which include, but are not
limited to, the following:

Discounted cash flow techniques
• Net present value method
• Internal rate of return method
• Profitability index method
• Benefit-cost ratio method

Non-discounted cash flow techniques
• Payback period method
• Accounting rate of return method
Management Appraisal
Management is the most important factor that can make a project either a success
or a failure. A good project in the hands of poor management may fail, while a
not-so-good project in the hands of effective management may succeed. Banks and
financial institutions that lend money for financing projects therefore lay great
emphasis on management appraisal.
Lending institutions look at two points before committing their funds to project
financing:
a) Capacity of project to repay the loan along with the interest within stipulated
period of time.
b) Willingness of the borrower to repay the loan

While the capacity to repay is assessed by technical, commercial and financial
appraisals, the willingness to repay is assessed by way of management appraisal.
Whereas the other appraisal techniques are quantitative and objective in nature,
management appraisal is purely qualitative and subjective in nature.

Integrity, foresightedness, leadership qualities, inter-personal relationships,
technical and financial skills, commitment, perseverance etc. are some of the
parameters that need to be studied in management appraisal.

The following are some of the factors that will reflect the managerial capabilities of
person concerned.
• Industrial relations prevailing in the enterprise
• Morale of employees and the prevailing superior-subordinate relationship
• Labour turnover
• Labour unrest
• Productivity of employees etc.
Social Cost Benefit Analysis
There are some projects that may not offer attractive returns as far as commercial
profitability is concerned. Still, such projects are undertaken since they have social
implications, e.g. road, railway, bridge, irrigation and power projects.

The objectives of social cost-benefit analysis include:

o Contribution of the project to the GDP (gross domestic product) of the economy.
o Contribution of the project to improving the benefit to the poorer sections of
society and reducing the regional imbalances in growth and development.
o Justification of the use of the scarce resources of the economy by the project.
o Contribution of the project to protecting/improving environmental conditions.

Ecological Analysis
In recent years, environmental concerns have assumed a great deal of significance -
and rightly so. Ecological analysis should be done particularly for major projects which
have significant ecological implications (like power plants and irrigation schemes)
and environment-polluting industries (like bulk drugs, chemicals, and leather
processing). The key questions raised in ecological analysis are:

• What is the likely damage caused by the project to the environment?
• What is the cost of the restoration measures required to ensure that the
damage to the environment is contained within acceptable limits?
CHAPTER 4

PROJECT PLANNING AND SCHEDULING

Developing a project network

A network is a graphical representation of a project showing the flow and sequence
of activities and events. There are two network models: the Critical Path Method
(CPM) and the Project Evaluation and Review Technique (PERT).

Network analysis entails a group of techniques used for presenting information
relating to time and resources so as to assist in the planning, scheduling and
controlling of projects. The information usually represented by a network includes
the sequences, interdependencies, interrelationships and criticality of the various
activities of the project.

Project planning calls for detailing the project into activities, estimating resources
and time for each activity and describing activity interrelationships. Scheduling
requires the details of starting and completion dates for each activity. Control
requires not only current status information but also insight into possible trade-offs
when difficulties arise.

The main objectives of project management can be described in terms of a successful
project, which has been finished on time, within the budgeted cost and to technical
specifications that satisfy the end users. The two main network techniques are:

• Critical Path Method (CPM)
• Project Evaluation and Review Technique (PERT)
Objectives of network analysis
The following are some of the objectives of network analysis:
1. It’s a powerful tool for planning, scheduling and control.
2. It shows in a simple way the interrelationship of the various activities
constituting a project.
3. Minimization of total cost
4. Minimization of total time.
5. Minimization of cost for a given total time.
6. Minimization of time for a given cost.
7. Minimization of idle resources.
8. To minimize production delays, interruptions and conflicts.

Definition of terms
- Activity: the actual work to be done, represented by arcs (arrows) in the
network.
- Event: marks the start or end of an activity, usually denoted by nodes in the
network.

(Start event) ----- Activity -----> (Finish event)

- A path is a series of adjacent activities leading from one event to another.
- Dummy activity: represented by a dashed arrow on the project network and
inserted to clarify the activity pattern in the following situations:
(i) when two or more activities share the same start and end events, so
that each activity can still be uniquely identified; and
(ii) when a precedence relationship cannot otherwise be shown in the
network.

PERT and CPM are useful in planning, analyzing, scheduling and controlling the
progress and completion of large projects.

PERT and CPM can be applied in areas such as:

- Research and development of new products
- Construction of plants, buildings and roads
- Maintenance of large and complex equipment
PERT and CPM consists of the following steps;
1. Analyse the project and break it down into specific activities and events
2. Determine the interdependence of the activities i.e. the order of precedence
and sketch the network.
3. Assign the estimates of time and cost to all the activities of the network.
4. Identify the critical path i.e. the longest time required to complete the project
5. Monitor, evaluate and control the progress of the project, by checking
whether the project can be “crashed” i.e. completed in a shorter period.
- PERT and CPM are similar in nature except that PERT is concerned with
the way activity time is estimated while CPM is concerned with the cost
estimates for completion of various activities.
- PERT activity time estimates are probabilistic, i.e. there are three different
time estimates:
• Most optimistic time, denoted by a (or o)
• Most likely time, i.e. the most realistic time required to complete the
activity, denoted by m
• Most pessimistic time, denoted by b
- In CPM the activity times are deterministic, i.e. estimated under specific
conditions, and are given in two sets: normal cost and crash cost, normal
time and crash time.
- Given the three time estimates for completing an activity, i.e. the most
optimistic, most likely and most pessimistic times, the expected time
estimate te is the weighted average of a, m and b and is calculated as

te = (a + 4m + b) / 6

while the standard deviation of the distribution of time estimates for completing
the activity is given by

σ = (b - a) / 6

The standard deviation for the whole project is σ = √(Σσ²), summing the variances
σ² of the activities on the critical path.
Note:
i. By convention, the network flows from left to right, i.e. time and progress of
the project flow from left to right.
ii. The event that marks the beginning of the entire project is called the source
event, while the event that marks the completion of the entire project is called
the terminal event.
iii. No event is complete until all activities leading to that event are complete.
iv. Loops or cycles are not permitted in networks
v. In order to incorporate technological or managerial requirements, it is
sometimes necessary to insert a dummy activity into the network model; a
dummy activity does not require any time, effort or resources for its
completion.
vi. In a network, the longest path is called the critical path; paths other than
the critical path are called non-critical paths.

Example 1

A national hospital is considering installing medical equipment. The following
activities have been identified.

Activity Description te(weeks)


1-2 Testing 3
2-3 Feasibility study 3
3-4 Government 1
approval
4-6 City license 1
2-7 Signing of contract 1
4-5 State license 1
5-10 Electrical work 3
6-8 Staffing 2
7-8 Purchasing 4
8-9 Installation 3
5-8 Safety licensing 1
9-10 Testing equipment 1
7-10 Training staff 2

Draw the appropriate PERT network and determine the earliest expected event
time (TE), latest allowable event time (TL) and slack time (S) for each event.

Calculation of TE: i.e. Earliest Expected event Time


i. For each event, identify the various paths that connect the network beginning
event to that event.
ii. Proceeding forward (left to right) add te of the activities along each such path.
iii. The longest chain or paths determines TE.

Calculation of latest Allowable event Time, TL


i. For each event, identify various paths that connect the network ending event
to that event,
ii. Proceeding backwards (right to left) subtract te of the activities along each
such path
iii. The longest such chain or path determines TL.

Calculation of S, the slack time

Slack time = TL - TE. This represents the time by which an event can be delayed
without affecting the timely realization of successor events. Events on the critical
path have no slack, i.e. TL = TE; a negative slack implies that the project is behind
time. The total slack of a non-critical path is given by (sum of te of the critical
path) - (sum of te of the non-critical path). The slack/float time can be allocated to
important work centers which require more time.
Diagram: [PERT network of events 1 to 10 connected by the activities listed above;
figure not reproduced in this text version.]
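The TE/TL/slack procedure above can be sketched in code. The snippet below is an illustrative helper (not part of the manual) that runs a forward and a backward pass over the Example 1 activity list; it assumes the events are numbered topologically, i.e. every activity runs from a lower-numbered to a higher-numbered event:

```python
# Forward/backward pass over the Example 1 network (illustrative sketch).
# activity (start event, end event) -> expected time te in weeks
activities = {
    (1, 2): 3, (2, 3): 3, (3, 4): 1, (4, 6): 1, (2, 7): 1,
    (4, 5): 1, (5, 10): 3, (6, 8): 2, (7, 8): 4, (8, 9): 3,
    (5, 8): 1, (9, 10): 1, (7, 10): 2,
}
nodes = sorted({n for arc in activities for n in arc})

# Forward pass: TE(j) = max over incoming arcs (i, j) of TE(i) + te(i, j)
TE = {nodes[0]: 0}
for j in nodes[1:]:
    TE[j] = max(TE[i] + t for (i, k), t in activities.items() if k == j)

# Backward pass: TL(i) = min over outgoing arcs (i, j) of TL(j) - te(i, j)
TL = {nodes[-1]: TE[nodes[-1]]}
for i in reversed(nodes[:-1]):
    TL[i] = min(TL[j] - t for (k, j), t in activities.items() if k == i)

slack = {n: TL[n] - TE[n] for n in nodes}  # zero along the critical path
```

For this network the project completes in TE(10) = 14 weeks; events 5 and 7 carry slack of 1 and 2 weeks respectively, and every other event lies on the critical path.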

Example 2
Draw a network for a project of erection of steel works for a shed. The various
elements of the project are as under:
Activity code | Description | Prerequisites
A | Erect site workshop | None
B | Fence site | None
C | Bend reinforcement | A
D | Dig foundation | B
E | Fabricate steel works | A
F | Install concrete plant | B
G | Place reinforcement | C, D
H | Concrete foundation | G, F
I | Paint steel works | E
J | Erect steel work | H, I
K | Give finishing touch | J
Solution: [network diagram placing activities A to K according to the precedence
table above; figure not reproduced in this text version.]
Example 3.
Draw a network diagram from the following activities.
Activity Immediate predecessor Activity Immediate predecessor
A None G C
B A H C&D
C A I E&F
D A J G&H
E B K I&J
F C

Solution: [network diagram with events 1 to 9 and activities A to K arranged
according to the precedence table; figure not reproduced in this text version.]
Example 4
Given the following time estimates: Calculate
(a) Expected time te, for individual activities
(b) The completion time of the project
(c) The standard deviation of the project completion time.
(d) The time in which the management can be 95% confident of completing the
project

Activity | a | b | m | te = (a + 4m + b)/6 | σ = (b - a)/6
1-2 | 3 | 9 | 6 | 6 | 1
1-3 | 6 | 12 | 9 | 9 | 1
2-4 | 4 | 8 | 6 | 6 | 2/3
3-5 | 1 | 4 | 3 | 17/6 | 1/2
4-5 | 5 | 9 | 7 | 7 | 2/3
5-6 | 5 | 15 | 10 | 10 | 5/3
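As a check on the table above, the te and σ columns can be recomputed and the project-level figures for parts (b)-(d) derived. This is an illustrative sketch, not part of the manual; the two event paths listed and the one-sided 95% z-value of 1.645 are assumptions of the sketch:

```python
import math
from fractions import Fraction

def expected_time(a, m, b):
    # te = (a + 4m + b) / 6
    return Fraction(a + 4 * m + b, 6)

def std_dev(a, b):
    # sigma = (b - a) / 6
    return Fraction(b - a, 6)

# activity -> (a, b, m), exactly as tabulated
data = {
    "1-2": (3, 9, 6), "1-3": (6, 12, 9), "2-4": (4, 8, 6),
    "3-5": (1, 4, 3), "4-5": (5, 9, 7), "5-6": (5, 15, 10),
}
te = {k: expected_time(a, m, b) for k, (a, b, m) in data.items()}
sd = {k: std_dev(a, b) for k, (a, b, m) in data.items()}

# The two paths through this network (read off the activity labels)
paths = {"1-2-4-5-6": ["1-2", "2-4", "4-5", "5-6"],
         "1-3-5-6": ["1-3", "3-5", "5-6"]}
length = {p: sum(te[a] for a in acts) for p, acts in paths.items()}
critical = max(length, key=length.get)  # the longer path is critical

# Project sigma: root of summed variances along the critical path
sigma_project = math.sqrt(sum(float(sd[a]) ** 2 for a in paths[critical]))
t95 = float(length[critical]) + 1.645 * sigma_project
```

With these figures the critical path is 1-2-4-5-6 at 29 weeks, the project standard deviation is about 2.16 weeks, and management can be roughly 95% confident of completion within about 32.6 weeks.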

Example 5
Present the following activities in the form of a network chart and determine
a. Critical path
b. Earliest and latest expected times, and
c. Completion time
Activity | Optimistic (a) | Most expected (m) | Pessimistic (b)
1–2 4 8 12
2–3 1 4 7
2–4 8 12 16
3–5 3 5 7
4–5 0 0 0
4–6 3 6 9
5–7 3 6 9
5–8 4 6 8
7–9 4 8 12
8–9 2 5 8
9 – 10 4 10 16
6 - 10 4 6 8

Solution: [network diagram of events 1 to 10 for the activities listed above; figure
not reproduced in this text version.]

Resource and cost scheduling

Critical Path Method


The main aim of PERT and CPM is to help the manager obtain a balance between
cost and time. The CPM network is deterministic, i.e. two sets of times and costs
are given: normal time and normal cost, as well as crash time and crash cost. The
relationship between time and cost is assumed linear: a decrease in time implies an
increase in cost.
[Cost-time graph: total cost falls linearly from the crash cost at the crash time to
the normal cost at the normal time.]

The manager has to trade off between high cost with minimum time and low cost
with maximum time. The slope of the above graph gives the cost-time tradeoff, i.e.
how much additional cost will be incurred by saving one unit of time in completing
an activity:

Slope of the cost-time line = change in cost / change in time
= (crash cost - normal cost) / (crash time - normal time)

With CPM the idea is to design a program that yields the minimum project
completion time with the least increase in costs over normal costs.

Example 6
Given the following data, determine:
(a) The normal critical path
(b) The crash critical path
(c) The minimum project completion time with the least increase in costs over
normal costs.
Activity | Normal time (weeks) | Crash time (weeks) | Normal cost (shs) | Crash cost (shs) | Change in cost per week
1-2 10 7 1000 1600 200
1-3 15 10 2000 3000 200
2-4 8 6 1800 2600 400
2-5 20 16 4500 5300 200
3-6 30 20 7200 9600 240
4-5 14 12 5000 6000 500
5-6 12 9 3300 4500 400
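The "change in cost per week" column can be derived from the other four columns. The short sketch below (not part of the manual) applies the cost-slope formula, taken as a positive cost per week of time saved, to the Example 6 data:

```python
# Cost slope = (crash cost - normal cost) / (normal time - crash time),
# i.e. the extra cost for each week saved on the activity.
# activity -> (normal time, crash time, normal cost, crash cost)
data = {
    "1-2": (10, 7, 1000, 1600), "1-3": (15, 10, 2000, 3000),
    "2-4": (8, 6, 1800, 2600),  "2-5": (20, 16, 4500, 5300),
    "3-6": (30, 20, 7200, 9600), "4-5": (14, 12, 5000, 6000),
    "5-6": (12, 9, 3300, 4500),
}
slope = {a: (cc - nc) // (nt - ct) for a, (nt, ct, nc, cc) in data.items()}
```

Each value matches the last column of the table, e.g. activity 3-6 costs an extra sh 240 for every week saved.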

Procedure
1) Identify the normal critical path and crash critical path

Path | Normal (weeks) | Crash (weeks)
1-3-6 45 30
1-2-5-6 42 32
1-2-4-5-6 44 34
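The path table in step 1 can be reproduced by summing activity times along each of the three paths (an illustrative sketch, not from the manual):

```python
# Durations of each path under normal and crash times (Example 6 data).
normal = {"1-2": 10, "1-3": 15, "2-4": 8, "2-5": 20,
          "3-6": 30, "4-5": 14, "5-6": 12}
crash = {"1-2": 7, "1-3": 10, "2-4": 6, "2-5": 16,
         "3-6": 20, "4-5": 12, "5-6": 9}
paths = {"1-3-6": ["1-3", "3-6"],
         "1-2-5-6": ["1-2", "2-5", "5-6"],
         "1-2-4-5-6": ["1-2", "2-4", "4-5", "5-6"]}

normal_len = {p: sum(normal[a] for a in acts) for p, acts in paths.items()}
crash_len = {p: sum(crash[a] for a in acts) for p, acts in paths.items()}
# The longest normal path (1-3-6, 45 weeks) is the normal critical path.
```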

2) On the normal critical path identify the least expensive activity to crash. Crash
this activity and observe any changes in the critical path; if there is no change,
crash the next least expensive activity.

1st crash: On the normal critical path the least expensive activity is 1-3, so
we crash it by five weeks (15 weeks to 10 weeks), reducing path 1-3-6 to 40
weeks. Comparing this with the total times of the other paths, the critical
path changes to 1-2-4-5-6 (44 weeks).
2nd crash: The least expensive activity is 1 - 2, so we crash it by 3 weeks (10
weeks to 7 weeks). Hence the critical path is still 1-2-4-5-6.

3rd crash: Activities 2-4, 4-5 and 5-6 are uncrashed; the least expensive
are 2-4 and 5-6. We crash 5-6 as it yields a larger reduction in
completion time. The critical path changes to 1-3-6 (40 weeks).

4th crash: We have crashed activity 1 – 3 and therefore we now crash activity
3 -6 by 10 weeks (30-20) at a cost of 240 sh per week. The critical path is now
1-2-4-5-6.

5th crash: On this path the uncrashed activities are 2-4, 4-5, activity 2-4 is least
expensive, and so we crash it by 2 weeks (400sh per week). We now have two
critical paths 1-2-5-6, 1-2-4-5-6

6th crash: Comparing the two critical paths 1-2-5-6 and 1-2-4-5-6, there are
only two uncrashed activities remaining (2-5 and 4-5). We crash the least
expensive, 2-5 (200 shs per week); the critical path is now 1-2-4-5-6.

7th crash: The only uncrashed activity on the current critical path is 4 – 5,
crashing it leaves the critical path to be 1-2-4-5-6, and all the activities are
crashed.

3) Examine non-critical paths and uncrash activities on such paths (starting with
the most expensive) to the point after which further uncrashing would create a
longer critical path.
• On the non-critical path 1-3-6, activity 3-6 is the most expensive (240 shs
per week), so we uncrash it by 4 weeks.
• On the non-critical path 1-2-5-6, activities 1-2 and 5-6 cannot be
uncrashed as they are included in the crash critical path, but we can
uncrash 2-5 (200 shs per week), so we uncrash it by 2 weeks.
In Summary

Activity | Time (weeks) | Cost (shs)
1-2 7 1600
1-3 10 3000
2-4 6 2600
2-5 18 4900
3-6 24 8640
4-5 12 6000
5-6 9 4500
Total 31240
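The summary schedule can be verified with a short script (an illustrative sketch, not from the manual): after the crash and uncrash steps, every path through the network takes the same time, so that time is the minimum project completion time, at the total cost shown above.

```python
# Verify the final (summary) schedule of Example 6.
times = {"1-2": 7, "1-3": 10, "2-4": 6, "2-5": 18,
         "3-6": 24, "4-5": 12, "5-6": 9}
costs = {"1-2": 1600, "1-3": 3000, "2-4": 2600, "2-5": 4900,
         "3-6": 8640, "4-5": 6000, "5-6": 4500}
paths = {"1-3-6": ["1-3", "3-6"],
         "1-2-5-6": ["1-2", "2-5", "5-6"],
         "1-2-4-5-6": ["1-2", "2-4", "4-5", "5-6"]}

lengths = {p: sum(times[a] for a in acts) for p, acts in paths.items()}
total_cost = sum(costs.values())
# All three paths take 34 weeks; the total cost is sh 31,240.
```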
CHAPTER 5

PROJECT TEAM MANAGEMENT

Managing a Project Team


Managing a project team involves tracking individual performance, providing
feedback, resolving issues, and coordinating changes to enhance overall project
performance. In a functional organization it is much like managing a department
team with the nuance of enhancing the project’s performance instead of meeting the
operational goals of the department.
The inputs to managing a project team yield an overview of project team
performance and assignments so the project manager can determine what the next
steps are. The inputs are:
• Organizational Process Assets – Organizational process assets are the
organization's policies, procedures, and systems which can be used to reward
the team during the course of a project.
• Project Staff Assignments – Project staff assignments are the list of project
duties for team members. Staff assignments are often used during the
monitoring and controlling process group to evaluate individual team
members.
• Roles and Responsibilities – The roles and responsibilities document is used
to determine what each team member should be focusing upon and
completing.
• Project Organization Charts – Project organization charts represent the
reporting relationships among the project team.
• Staffing Management Plan – The staffing management plan details when
team members are needed and lists training plans, certification requirements,
and compliance issues.
• Team Performance Assessment – Team performance assessments are the
documented formal or informal assessments of the project team's performance.
Common indicators are staff turnover rates, team dynamics, and skill levels.
After analyzing the information, project managers can identify and resolve
problems, reduce conflicts, and improve overall team work.
• Work Performance Information – Work performance information is gathered
by observing team members' performance while participating in meetings,
following up on action items, and communicating with others.
• Performance Reports – Performance reports depict project performance
information when compared to the project plan. This provides a basis for
determining if corrective actions or preventive actions are needed to assure a
successful project delivery.

Managing a team involves making justifiable decisions about how to address the
issues and problems that arise as part of project work. There are some tools and
techniques you can use to manage the project team. Those tools and techniques are:
• Observation and Conversation – Observation and conversation involve
project managers using indicators such as progress toward project goals,
interpersonal relationships, and the pride of project team members in their
accomplishments and work.
• Project Performance Appraisals – Project performance appraisals are a
vehicle which enables team members to receive feedback from supervisors.
Performance appraisals can be used to clarify team member
responsibilities and to develop training plans and future goals.
• Conflict Management – Conflict management involves the reduction of
destructive disagreements within the project team. The project manager can
allow the problem to resolve itself or use informal and formal interventions
before the conflict damages the project.
• Issue Log – An issue log is a list of action items and the names of the team
members responsible for carrying them out. Issue logs provide project
managers with a way to monitor outstanding items.
Often in the course of a project, it is necessary to make changes to the way the
project is executed. The outputs of the Managing a Project Team process are:
• Requested Changes – Requested changes are staffing changes, either planned
or unplanned, which can impact the project plan. When a staffing change has
the possibility of disrupting the project plan, the change needs to be
processed through integrated change control.
• Recommended Corrective Actions – Recommended corrective actions may
include the addition or removal of a teammate, outsourcing some work,
additional training, or actions relating to disciplinary processes.
• Recommended Preventive Actions – Recommended preventive actions are
taken to reduce the impact of anticipated problems. Such actions might
include cross-training a replacement before a team member leaves the project,
clarifying roles to ensure that all project tasks are carried out, or adding
personnel time in anticipation of extra work which may be needed to meet
project deadlines.
• Organizational Process Asset Updates – Organizational process asset
updates are either inputs to team members' performance appraisals or lessons
learned documentation.
• Staffing Management Plan Updates – The staffing management plan is a
subsidiary plan of the project management plan. The staffing management
plan is updated to reflect staffing-related approved change requests.

Developing a Project Team


Developing a project team improves the overall competencies and collaboration
among teammates. This improvement will eventually result in enhanced project
performance. The goals of developing a project team are to increase the team's skill
sets and increase trust among teammates.
The inputs to developing a project team are:
• Project Staff Assignments – Project staff assignments are a detailed list of
the people who are on the team.
• Staffing Management Plan – The staffing management plan identifies the
way to develop the team, guidelines documenting staff acquisition and
release, the timetable, training needs, recognition, compliance, and safety.
• Resource Availability – Resource availability information lists when team
members are available to partake in team development activities.

Developing a synergistic project team means knowing who your team members
are, helping them build upon their strengths and overcome their weaknesses,
while promoting productive working relationships within the team. There are
common tools and techniques to develop a project team. They are:
• General Management Skills – The soft skills or interpersonal skills help
motivate a team's performance and collaboration through empathy, influence,
communication, creativity, and facilitation.
• Training – Training encompasses improving the skills and knowledge of
team members. Possible training methods are classroom training, online
learning, on-the-job training, mentoring or coaching.
• Team-Building Activities – Team-building activities encourage
communication, trust, and collaboration among teammates.
• Ground Rules – Ground rules establish clear expectations regarding
acceptable behavior by project team members. The overall team's commitment
to ground rules decreases misunderstanding and increases productivity.
• Co-Location – Co-location is the placement of all or most of the active team
members in the same physical location to increase team-building
opportunities.
• Recognition and Rewards – Recognition and rewards improve project work
by recognizing and rewarding desired behaviors.
• Establish Empathy – Being empathic means listening to and understanding
how the individual team member is feeling. A simple process can be used to
establish empathy: encourage openness, restate concerns, reflect, and
summarize.

Developing a project team is an ongoing process that needs to be assessed
regularly. When tools and techniques have been used to develop the project
team, the project manager or project management team needs to assess whether the
team's effectiveness has improved. The single output from developing a project
team is the team performance assessment.
 Team Performance Assessment – As the team's performance improves, some
indicators to measure the team's effectiveness are:
o Improvements in skills which allow an individual to perform assigned
activities with increased effectiveness.
o Improvements in competencies and sentiments which will help the
team perform better as a group.
o Reduced staff turnover rate.

Team performance assessment allows project managers to recognize improvements,
but it also highlights any difficulties so they can be addressed. If improvements are
not noted or problems arise, you may need to revisit some of the tools and
techniques you used to develop your project team and start again.

The Key to Team Success: Creating a Team Charter
When something goes wrong in our organizations, the most common response is to
find someone to blame.
When teams fail in our organizations, we do the same thing. We look around for
people or things to blame. Examples of statements indicating blame are:
 People didn't carry their weight.
 The meetings were a waste of time.
 We couldn't get past the conflicts.
 We didn't know what we were supposed to do.
 Management won't pay any attention to the recommendations anyhow, so why
bother.

The truth is that when teams fail, the fault often rests with a flawed process for
getting them started. We might argue that it is "management's" fault because "they"
haven't designed an effective process. But don't team members also share
responsibility for making teams successful? By learning the process ourselves, we
can go a long way toward building effective teams.

Steps to Team Success
The first and most important step for creating effective teams is to create a
charter. This process is called "chartering." Chartering is the process by which the
team is formed, its mission or task described, its resources allocated, its goals set, its
membership committed, and its plans made. It is the process of "counting the costs"
that it will take for a team to achieve its goals and deciding whether the organization
is really committed to getting there. A good charter creates a recipe or roadmap for
the team as it carries out its charge. It can assist in facilitating the learning of the
team and its members as they work to improve the effectiveness of this and future
team efforts.

Elements of an Effective Charter
There's a fairly simple logic to building a team charter. Ask yourself questions about
all the various conditions, resources, attitudes, and behaviors that will be required in
order for the team to accomplish its goals, and answer them. Here's a list of some of
the most important questions:

1. What is the purpose for creating the team? The most important contributing factor
is a clear and elevating goal. Further, the relationship between goal setting and
task performance is probably the most robust finding in the research literature of
the behavioral sciences. The more completely the purpose of the team can be
identified, the more likely management, team members, and the rest of the
organization will support it in accomplishing its objectives.
2. What kind of team is needed? There are different kinds of teams for different
kinds of goals. Is the team meant to accomplish a task, manage or improve a
process, come up with a new product idea or design, solve a problem, or make a
decision?
3. Will the team be manager led or self-managed? Who, if anyone, is in charge? That
will depend on the task and the maturity of the members. If it is self-managed or
leaderless, who will be responsible for facilitating the team's progress toward its
goal?
4. What skills are needed to accomplish the goal? An inventory of critical knowledge
and expertise should be undertaken. It is essential that teams have as members,
or have access to as ad hoc resources, others who can supply the
necessary competence to achieve the objectives.
5. How will members be selected? This is more difficult than it might seem. Often
there are internal political, deployment, or logistical barriers. We want the right
balance of thinkers and doers. We want people who will follow through. We want
to use known resources but develop new competence in the organization. We
want enough diversity of opinion to get all the "cards on the table" without
creating unnecessary conflict. How will the personalities of the various players fit?
Can the company afford to have them take time away from other priorities? Bad
choices here can doom the results.
6. What resources will be necessary to achieve the objectives? Is management willing
to devote the time as well as the financial, human and intellectual capital
necessary to get the job done? Counting the costs and deciding that it is worth
those costs is crucial. In self-managed or leaderless teams, these are questions that
need to be answered by team members both individually and collectively. Are
they willing to commit their time, talents, and effort to that goal to the extent
necessary?
7. What are the boundaries? Management needs to identify the parameters within
which the team is expected to operate. How much time will the team be given?
How often are the members expected to meet? What is the scope of their concern?
(It's sometimes useful when creating process improvement teams to identify
change recommendations that are off-limits. For example, it is common for teams
to come back with a recommendation that more staff is the solution. By limiting
such recommendations, at least at first, the team is forced to look for solutions that
deal more with the process.)
8. What process will the team use to get results? Once the team has been formed and
the members selected, management, and especially the team itself, must determine
how it will go about getting the job done. When and where will the team meet?
How will it meet (face-to-face or some kind of virtual arrangement)? What
maintenance roles will the members agree are important and how will they assign
those? How will the members communicate with one another? What happens if a
member can't be at a meeting but has an assignment due? What are expectations
regarding participation in meetings?
9. How will equal commitment be secured? A frank discussion about the level of
commitment members are willing to give is key to achieving success. Do they
share an equal view as to the importance of the goal? Are they personally willing
to expend the effort necessary to get the desired result? What circumstances might
limit their ability to perform up to the expectations of others? Getting all this out
on the table early on can avoid conflicts down the road.
10. How will we plan for conflict? The best way to minimize the amount of
unproductive conflict is to conduct a frank discussion about potential discord.
Two of the most common examples of conflict in teams result when members
don't pull their weight and follow through on assignments and commitments, or
when one or more members try to over-control and dominate the group. By
identifying these and other potential conflicts and agreeing beforehand how
members will deal with them, a team can minimize the disruption to goal
achievement. In essence, you're giving one another permission to do the kind of
confrontation that is necessary to get past the conflicts.
11. What will be done to get the job done? The Project Plan: Early on, there's a need
to analyze the task, break it into subtasks, establish the timeline, make and accept
assignments, and get started. Teams usually make this the first step, but it's really
the final step of the "chartering" process.
12. How will success be evaluated, and how will we learn from the process? How
will we know what mid-course corrections need to be made to the process or
plan? How will we measure our progress? What can we do to learn from this
experience about how to make not only this team better, but future teams, both
those we serve on individually and those the company forms? By planning how
and when the team will reflect on the process it is going through or has gone
through, the individuals, the team, and the larger organization all benefit.

There is a directly proportional relationship between the amount of time and
intellectual effort we spend chartering our teams and the likelihood that those teams
will achieve their goals. Going about this process in a conscious, reflective manner is
often the deciding factor in achieving optimal results.

Work Expectations
The second area for focus with ground rules is work expectations. People join teams
with very different ideas about the work involved in being a member of the team.
Few people will deliberately perform poorly, but team members need information
about the standards of the team. For example, it is common for people to send out
information about the topic of a meeting and then never reach that topic at the
meeting or never refer to the information provided. If there is not a positive
consequence for meeting preparation, participants will not read materials sent prior
to meetings.
On the other hand, some meetings ask for people to give their interpretations,
opinions, and recommendations based on the material provided prior to the
meeting. In this case, participants are very likely to be prepared. Having had either
one of these experiences, or any of the various experiences in between, will define
what a team member thinks she or he is accountable for in a team meeting.
Common questions teams address in their ground rules involving work
expectations include:
 What is the quality of work expected?
 What is the quantity of work expected?
 How is the timeliness of work defined?
 What does it mean to come prepared to a meeting?

Confidentiality
The last issue for team ground rules that we will discuss concerns confidentiality
and support. Nothing can destroy trust in a team as quickly as having team
discussions shared with those outside the team. When team members hear
summaries of what occurred in the team, they often feel that their comments are
misrepresented or misinterpreted or, at the very least, that they would like to speak
for themselves. To avoid these problems, team members need to decide how they
will represent the meeting discussion to others. Some teams choose a spokesperson
for the team.

To develop useful guidelines the team needs to discuss questions such as the
following:
 What topics are to be considered confidential?
 How will team members identify confidential information?
 How should team members treat this information?
 How should team members portray team meetings to outsiders?
 Who should be the spokesperson for the group?
 Who should receive meeting minutes?
The discussion on confidentiality also requires a discussion on enforcement and
consequence.
 How will the team address instances where a team member has violated the
confidentiality norm?
 What will be the consequence of such an action?

Ground Rules
Ground rules are prescriptions for team communication. They must arise from the
team and be freely committed to by all team members. The following is a guide to
effective team communication.
 Be a good listener.
 Keep an open mind.
 Participate in the discussion.
 Ask for clarification.
 Give everyone a chance to speak.
 Deal with particular rather than general problems.
 Don't be defensive if your idea is criticized.
 Be prepared to carry out group decisions.
 All comments remain in the meeting room.
 Everyone is an equal in the discussion session.
 Be polite; don't interrupt.

CONFLICT MANAGEMENT
What is conflict? Is it the same as a disagreement or an argument? Typically,
conflict is characterized by three elements:
1) Interdependence,
2) Interaction, and
3) Incompatible goals.

We can define conflict as the interaction of interdependent people who perceive a
disagreement about goals, aims, and values, and who see the other party as potentially
interfering with the realization of these goals. Conflict is a social phenomenon that is
woven into the fabric of human relationships; therefore, it can only be expressed and
manifested through communication. We can only come into conflict with people
with whom we are interdependent; that is, only when we become dependent on one
another to meet our needs or goals, does conflict emerge.

Conflicts are differentiated in a number of ways. One method of distinguishing
among conflict situations is based on the context in which the conflict occurs.
Traditionally, conflict is viewed as occurring in the following three contexts:
 Interpersonal conflict exists between two individuals within a group.
 Intergroup conflict occurs between two groups within the larger social system.
 Interorganizational conflict occurs between two organizations.

Misconceptions about Conflict
Smith and Andrews (1989) suggest that people still hold negative opinions about the
advisability of conflict resolution because of the following misconceptions:
1. Harmony is normal and conflict is abnormal. This belief is erroneous. Conflict is
normal; in fact, it is inevitable. Whenever two people must interact in order to
achieve goals, their subjective views and opinions about how to best achieve
those goals will lead to conflict of some degree. Harmony occurs only when
conflict is acknowledged and resolved.
2. Conflicts and disagreements are the same. Disagreement is usually temporary and
limited, stemming from misunderstanding or differing views about a specific
issue rather than a situation's underlying values and goals. Conflicts are more
serious and usually are rooted in incompatible goals.
3. Conflict is the result of personality problems. Personalities themselves are not a
cause of conflict. While people of different personality types may approach
situations differently, true conflict develops from and is reflected in behavior,
not personality.
4. Conflict and anger are the same thing. While conflict and anger are closely
merged in most people's minds, they don't necessarily go hand in hand.
Conflict involves both issues and emotions; the issue and the participants
determine what emotions will be generated. Serious conflicts can develop that
do not necessarily result in anger. Other emotions are just as likely to surface:
fear, excitement, sadness, frustration, and others.

Phases of Conflict Management
When parties in conflict agree that conflict resolution is needed, they are more likely
to succeed if they move through prescribed phases to reach resolution (Johnson &
Johnson, 1994).
1. Collect data. Know exactly what the conflict is about and objectively
analyze the behavior of parties involved.
2. Probe. Ask open-ended, involved questions; actively listen; facilitate
communication.
3. Save face. Work toward a win/win resolution; avoid embarrassing
either party; maintain an objective (not emotional) level.
4. Discover common interests. This will help individuals redefine dimension
of the conflict and perhaps bring about a compromise.
5. Reinforce. Give additional support to common ideas of both parties
and know when to use the data collected.
6. Negotiate. Suggest partial solutions or compromises identified by
both parties. Continue to emphasize common goals of both parties
involved.
7. Solidify adjustments. Review, summarize, and confirm areas of agreement.
Resolution involves compromise.

Strategies for Coping With Conflict
Avoidance
Avoidance occurs when an individual fails to address the conflict, but rather
sidesteps, postpones, or simply withdraws. Some people attempt to avoid conflict by
postponing it, hiding their feelings, changing the subject, leaving the room, or
quitting the project.
Use avoidance when:
1. The stakes aren't that high and you don't have anything to lose.
2. You don't have time to deal with it.
3. The context isn't suitable to address the conflict; it isn't the right time or
place.
4. More important issues are pressing.
5. You see no chance of getting your concerns met.
6. You would have to deal with an angry, hotheaded person.
7. You are totally unprepared, taken by surprise, and you need time to think
and collect information.
8. You are too emotionally involved and the others around you can solve the
conflict more successfully.
Avoidance may not be appropriate when the issue is very important and postponing
resolution will only make matters worse. Avoiding conflict is generally not
satisfying to the individuals involved in a conflict, nor does it help the group resolve
a problem.

Accommodation
Accommodation is the opposite of competition and contains an element of self-
sacrifice. An accommodating person neglects his or her own concerns to satisfy the
concerns of the other person.
Use accommodation when:
1. The issue is more important to the other person than it is to you.
2. You discover that you are wrong.
3. Continued competition would be detrimental and you know you can't win.
4. Preserving harmony without disruption is the most important
consideration.
Accommodation should not be used if an important issue is at stake that needs to
be addressed immediately.

Compromise
The objective of compromise is to find an expedient, mutually acceptable solution
that partially satisfies both parties. It falls in the middle between competition and
accommodation. Compromise gives up more than competition does, but less than
accommodation. Compromise is appropriate when all parties are satisfied with
getting part of what they want and are willing to be flexible. Compromise is mutual.
All parties should receive something, and all parties should give something up.
Use compromise when:
1. The goals are moderately important but not worth the use of more
assertive strategies.
2. People of equal status are equally committed.
3. You want to reach temporary settlement on complex issues.
4. You want to reach expedient solutions on important issues.
5. You need a backup mode when competition or collaboration doesn't work.

Compromise doesn't work when initial demands are too great from the
beginning and there is no commitment to honor the compromise.

Competition
An individual who employs the competition strategy pursues his or her own
concerns at the other person's expense. This is a power-oriented strategy used in
situations in which eventually someone wins and someone loses. Competition
enables one party to win. Before using competition as a conflict resolution strategy,
you must decide whether or not winning this conflict is beneficial to individuals or
the group.
Use competition when:
1. You know you are right.
2. You need a quick decision.
3. You meet a steamroller type of person and you need to stand up for your own
rights.
Competition will not enhance a group's ability to work together. It reduces
cooperation.

Collaboration
Collaboration is the opposite of avoidance. It is characterized by an attempt to work
with the other person to find some solution that fully satisfies the concerns of both.
This strategy requires you to identify the underlying concerns of the two individuals
in conflict and find an alternative that meets both sets of concerns. This strategy
encourages teamwork and cooperation within a group. Collaboration does not create
winners and losers and does not presuppose power over others. The best decisions
are made by collaboration.
Use collaboration when:
1. Others' lives are involved.
2. You don't want to have full responsibility.
3. There is a high level of trust.
4. You want to gain commitment from others.
5. You need to work through hard feelings, animosity, etc.

Collaboration may not be the best strategy to use if time is limited and people must
act before they can work through their conflict, or there is not enough trust, respect,
or communication among the group for collaboration to occur.

Creative Ways to Manage Conflict


Conflict of some degree is inevitable when individuals or groups work together.
Before conflict evolves, decide to take positive steps to manage it. When it does
occur, discuss it openly with the group. Here are some useful guidelines to follow
when managing conflict:
1. Deal with one issue at a time. More than one issue may be involved in the
conflict, but someone in the group needs to provide leadership to identify
the issues involved. Then address one issue at a time to make the problem
manageable.
2. If there is a past problem blocking current communication, list it as one of
the issues in this conflict. It may have to be dealt with before the current
conflict can be resolved.
3. Choose the right time for conflict resolution. Individuals have to be
willing to address the conflict. We are likely to resist if we feel we are
being forced into negotiations.
4. Avoid reacting to unintentional remarks. Words like always and never
may be said in the heat of battle and do not necessarily convey what the
speaker means. Anger will increase the conflict rather than bring it closer
to resolution.
5. Avoid resolutions that come too soon or too easily. People need time to
think about all possible solutions and the impact of each. Quick answers
may disguise the real problem. All parties need to feel some satisfaction
with the resolution if they are to accept it. Conflict resolutions should not
be rushed.
6. Avoid name-calling and threatening behavior. Don't corner the opponent.
All parties need to preserve their dignity and self-respect. Threats usually
increase the conflict and payback can occur some time in the future when
we least expect it.
7. Agree to disagree. In spite of your differences, if you maintain respect for
one another and value your relationship, you will keep disagreements
from interfering with the group.
8. Don't insist on being right. There are usually several right solutions to
every problem.

CHAPTER 6

INDICATORS

Components and Indicator Monitoring
How will we know when we have achieved our desired outcomes? After examining
the importance of setting achievable and well-defined outcomes, and the issues and
process involved in agreeing upon those outcomes, we turn next to the selection of
key indicators. Outcome indicators are not the same as outcomes. Indicators are the
quantitative or qualitative variables that provide a simple and reliable means to measure
achievement, to reflect the changes connected to an intervention, or to help assess the
performance of an organization against the stated outcome. Indicators should be
developed for all levels of the results-based M&E system, meaning that indicators
are needed to monitor progress with respect to inputs, activities, outputs, outcomes,
and goals. Progress needs to be monitored at all levels of the system to provide
feedback on areas of success and areas in which improvement may be required.

Outcome indicators help to answer two fundamental questions: “How will we know
success or achievement when we see it? Are we moving toward achieving our
desired outcomes?” These are the questions that are increasingly being asked of
governments and organizations across the globe. Consequently, setting appropriate
indicators to answer these questions becomes a critical part of our 10-step model.
Developing key indicators to monitor outcomes enables managers to assess the
degree to which intended or promised outcomes are being achieved. Indicator
development is a core activity in building a results-based M&E system. It drives all
subsequent data collection, analysis, and reporting. There are also important
political and methodological considerations involved in creating good, effective
indicators.

Indicators Are Required for All Levels of Results-Based M&E Systems
Setting indicators to measure progress in inputs, activities, outputs, outcomes, and
goals is important in providing necessary feedback to the management system. It
will help managers identify those parts of an organization or government that may,
or may not, be achieving results as planned. By measuring performance indicators
on a regular, determined basis, managers and decision makers can find out whether
projects, programs, and policies are on track, off track, or even doing better than
expected against the targets set for performance. This provides an opportunity to
make adjustments, correct course, and gain valuable institutional and project,
program, or policy experience and knowledge. Ultimately, of course, it increases the
likelihood of achieving the desired outcomes.
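The regular on-track / off-track comparison described above can be sketched as a small helper. This is only an illustration, assuming a higher-is-better indicator and an arbitrary 10 percent tolerance; the function name and the figures in the example are hypothetical, not from the source.

```python
def indicator_status(actual, target, tolerance=0.10):
    """Classify measured performance against a performance target.

    Assumes a higher-is-better indicator; for indicators where lower is
    better (e.g., a mortality rate), the comparison would be inverted.
    The 10% tolerance band is an illustrative assumption.
    """
    if target <= 0:
        raise ValueError("target must be positive")
    ratio = actual / target
    if ratio >= 1 + tolerance:
        return "better than expected"
    if ratio >= 1 - tolerance:
        return "on track"
    return "off track"

# Hypothetical coverage indicator: target 80%, measured 76%
print(indicator_status(76, 80))  # on track (within 10% of target)
```

Running such a check at each reporting interval is one way to generate the regular feedback on projects, programs, and policies that the paragraph describes.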

Translating Outcomes into Outcome Indicators
When we consider measuring “results,” we mean measuring outcomes, rather than
only inputs and outputs. However, we must translate these outcomes into a set of
measurable performance indicators. It is through the regular measurement of key
performance indicators that we can determine if outcomes are being achieved.

For example, in the case of the outcome “to improve student learning,” an outcome
indicator regarding students might be the change in student scores on school
achievement tests. If students are continually improving scores on achievement tests,
it is assumed that their overall learning outcomes have also improved. Another
example is the outcome “reduce at-risk behavior of those at high risk of contracting
HIV/AIDS.” Several direct indicators might be the measurement of different risky
behaviors for those individuals most at risk.

As with agreeing on outcomes, the interests of multiple stakeholders should also be
taken into account when selecting indicators. We previously pointed out that
outcomes need to be translated into a set of measurable performance indicators. Yet
how do we know which indicators to select? The selection process should be guided
by the knowledge that the concerns of interested stakeholders must be considered
and included. It is up to managers to distill stakeholder interests into good, usable
performance indicators. Thus, outcomes should be disaggregated to make sure that
indicators are relevant across the concerns of multiple stakeholder groups—and not
just a single stakeholder group. Just as important, the indicators have to be relevant
to the managers, because the focus of such a system is on performance and its
improvement.

The “CREAM” of Good Performance Indicators
The “CREAM” of selecting good performance indicators is essentially a set of criteria
to aid in developing indicators for a specific project, program, or policy (Schiavo-
Campo 1999, p. 85). Performance indicators should be clear, relevant, economic,
adequate, and monitorable. CREAM amounts to an insurance policy, because the
more precise and coherent the indicators, the better focused the measurement
strategies will be.
 Clear – Precise and unambiguous
 Relevant – Appropriate to the subject at hand
 Economic – Available at a reasonable cost
 Adequate – Provide a sufficient basis to assess performance
 Monitorable – Amenable to independent validation
If any one of these five criteria is not met, formal performance indicators will suffer
and be less useful. Performance indicators should be as clear, direct, and
unambiguous as possible. Indicators may be qualitative or quantitative. In
establishing results-based M&E systems, however, we advocate beginning with a
simple and quantitatively measurable system rather than inserting qualitatively
measured indicators upfront.

Quantitative indicators should be reported in terms of a specific number (number,
mean, or median) or percentage. “Percents can also be expressed in a variety of
ways, e.g., percent that fell into a particular outcome category . . . percent that fell
above or below some targeted value . . . and percent that fell into particular outcome
intervals . . .” (Hatry 1999, p. 63). “Outcome indicators are often expressed as the
number or percent (proportion or rate) of something. Programs should consider
including both forms. The number of successes (or failures) in itself does not indicate
the rate of success (or failure)—what was not achieved. The percent by itself does
not indicate the size of the success. Assessing the significance of an outcome
typically requires data on both number and percent” (Hatry 1999).
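The advice to report both forms can be shown with a minimal helper; the trainee figures below are invented for the example.

```python
def outcome_both_forms(successes, total):
    """Report an outcome indicator as both a number and a percent.

    The count alone does not show the rate, and the rate alone does not
    show the scale, so the two are returned together, as the text advises.
    """
    if total <= 0:
        raise ValueError("total must be positive")
    return {"number": successes,
            "percent": round(100.0 * successes / total, 1)}

# Hypothetical example: 45 of 60 trainees passed a follow-up assessment
print(outcome_both_forms(45, 60))  # {'number': 45, 'percent': 75.0}
```

Reporting the pair lets a reader judge both the size and the rate of success at a glance.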

“Qualitative indicators/targets imply qualitative assessments . . . [that is],
compliance with, quality of, extent of and level of . . . . Qualitative indicators . . .
provide insights into changes in institutional processes, attitudes, beliefs, motives
and behaviors of individuals” (U.N. Population Fund 2000, p. 7). A qualitative
indicator might measure perception, such as the level of empowerment that local
government officials feel to adequately do their jobs. Qualitative indicators might
also include a description of a behavior, such as the level of mastery of a newly
learned skill. Although there is a role for qualitative data, it is more time consuming
to collect, measure, and distill, especially in the early stages. Furthermore,
qualitative indicators are harder to verify because they often involve subjective
judgments about circumstances at a given time.

Qualitative indicators should be used with caution. Public sector management is not
just about documenting perceptions of progress. It is about obtaining objective
information on actual progress that will aid managers in making better-informed
strategic decisions, aligning budgets, and managing resources. Actual progress
matters because, ultimately, M&E systems will help to provide information back to
politicians, ministers, and organizations on what they can realistically expect to
promise and accomplish. Stakeholders, for their part, will be most interested in
actual outcomes, and will press to hold managers accountable for progress toward
achieving the outcomes.

Performance indicators should be relevant to the desired outcome, and not affected
by other issues tangential to the outcome. The economic cost of setting indicators
should be considered. This means that indicators should be set with an
understanding of the likely expense of collecting and analyzing the data.
For example, in the National Poverty Reduction Strategy Paper (PRSP) for the
Kyrgyz Republic, there are about 100 national and subnational indicators spanning
more than a dozen policy reform areas. Because every indicator involves data
collection, reporting, and analysis, the Kyrgyz government will need to design and
build 100 individual M&E systems just to assess progress toward its poverty
reduction strategy. For a poor country with limited resources, this will take some
doing. Likewise, in Bolivia the PRSP initially contained 157 national-level indicators.
It soon became apparent that building an M&E system to track so many indicators
could not be sustained. The present PRSP draft for Bolivia now has 17 national-level
indicators.

Every indicator has cost and work implications. In essence, when we explore
building M&E systems, we are considering a new M&E system for every single
indicator. Therefore, indicators should be chosen carefully and judiciously.

Indicators ought to be adequate. They should not be too indirect, too much of a
proxy, or so abstract that assessing performance becomes complicated and
problematic.

Indicators should be monitorable, meaning that they can be independently validated
or verified, which is another argument in favor of starting with quantitative
indicators as opposed to qualitative ones. Indicators should be reliable and valid to
ensure that what is being measured at one time is what is also measured at a later
time— and that what is measured is actually what is intended.

Caution should also be exercised in setting indicators according to the ease with
which data can be collected. “Too often, agencies base their selection of indicators on
how readily available the data are, not how important the outcome indicator is in
measuring the extent to which the outcomes sought are being achieved” (Hatry 1999,
p. 55).

Use of Proxy Indicators
You may not always be precise with indicators, but you can strive to be
approximately right. Sometimes it is difficult to measure the outcome indicator
directly, so proxy indicators are needed. Indirect, or proxy, indicators should be
used only when data for direct indicators are not available, when data collection will
be too costly, or if it is not feasible to collect data at regular intervals. However,
caution should be exercised in using proxy indicators, because there has to be a
presumption that the proxy indicator is giving at least approximate evidence on
performance (box 3.1).

For example, if it is difficult to conduct periodic household surveys in dangerous housing areas, one could use the number of tin roofs or television antennas as a
proxy measure of increased household income. These proxy indicators might be
correctly tracking the desired outcome, but there could be other contributing factors
as well; for example, the increase in income could be attributable to drug money, or
income generated from the hidden market, or recent electrification that now allows
the purchase of televisions. These factors would make attribution to the policy or
program of economic development more difficult to assert.

The Pros and Cons of Using Predesigned Indicators


Predesigned indicators are those indicators established independently of an
individual country, organization, program, or sector context. For example, a number
of development institutions have created indicators to track development goals,
including the following:
• MDGs
• The United Nations Development Programme’s (UNDP’s)
Sustainable Human Development goals
• The World Bank’s Rural Development Handbook
• The International Monetary Fund’s (IMF’s) Financial Soundness Indicators.
The MDGs contain eight goals, with attendant targets and indicators assigned to
each. For example, Goal 4 is to reduce child mortality, while the target is to reduce by two-thirds the under-five mortality rate between the years 1990 and 2015.
Indicators include
a) under-five mortality rate;
b) infant mortality rate; and
c) proportion of one-year-old children immunized against measles.
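As a rough illustration of how the first two indicators are computed, the sketch below uses the simple ratio definition (deaths per 1,000 live births); the population figures are invented, and official statistics use cohort-based life-table rates rather than this simple ratio.

```python
# Illustrative calculation of two Goal 4 indicators.
# All figures are invented for the example.
live_births = 120_000        # live births registered in the period
deaths_under_five = 9_600    # deaths of children aged under five
deaths_under_one = 6_000     # deaths of infants aged under one

u5mr = deaths_under_five / live_births * 1_000  # under-five mortality rate
imr = deaths_under_one / live_births * 1_000    # infant mortality rate
print(u5mr, imr)  # expressed per 1,000 live births
```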

In light of regional financial crises in various parts of the world, the IMF is in the
process of devising a set of Financial Soundness Indicators.
These are indicators of the current financial health and soundness of a given
country’s financial institutions, corporations, and households. They include
indicators of capital adequacy, asset quality, earnings and profitability, liquidity,
and sensitivity to market risk (IMF 2003).

On a more general level, the IMF also monitors and publishes a series of
macroeconomic indicators that may be useful to governments and organizations.
These include output indicators, fiscal and monetary indicators, balance of
payments, external debt indicators, and the like.

There are a number of pros and cons associated with using predesigned indicators:
Pros:
• They can be aggregated across similar projects, programs, and policies.
• They reduce costs of building multiple unique measurement systems.
• They make possible greater harmonization of donor requirements.
Cons:
• They often do not address country specific goals.
• They are often viewed as imposed, as coming from the top down.
• They do not promote key stakeholder participation and ownership.
• They can lead to the adoption of multiple competing indicators.
There are difficulties in deciding on what criteria to employ when one chooses one
set of predesigned indicators over another.

Predesigned indicators may not be relevant to a given country or organizational
context. There may be pressure from external stakeholders to adopt predesigned
indicators, but it is our view that indicators should be internally driven and tailored
to the needs of the organization and to the information requirements of the
managers, to the extent possible. For example, many countries will have to use some
predesigned indicators to address the MDGs, but each country should then
disaggregate those goals to be appropriate to their own particular strategic objectives
and the information needs of the relevant sectors.

Ideally, it is best to develop indicators to meet specific needs while involving stakeholders in a participatory process. Using predesigned indicators can easily work against this important participatory element.

Constructing Indicators
Constructing indicators takes work. It is especially important that competent
technical, substantive, and policy experts participate in the process of indicator
construction. All perspectives need to be taken into account—substantive, technical,
and policy—when considering indicators. Are the indicators substantively feasible,
technically doable, and policy relevant? Going back to the example of an outcome
that aims to improve student learning, it is very important to make sure that
education professionals, technical people who can construct learning indicators, and
policy experts who can vouch for the policy relevance of the indicators, are all
included in the discussion about which indicators should be selected.

Indicators should be constructed to meet specific needs. They also need to be a direct
reflection of the outcome itself. And over time, new indicators will probably be
adopted and others dropped. This is to be expected. However, caution should be
used in dropping or modifying indicators until at least three measurements have
been taken.

Taking at least three measurements helps establish a baseline and a trend over time.
Two important questions should be answered before changing or dropping an
indicator: Have we tested this indicator thoroughly enough to know whether it is
providing information to effectively measure against the desired outcome? Is this
indicator providing information that makes it useful as a management tool?
It should also be noted that in changing indicators, baselines against which to
measure progress are also changing. Each new indicator needs to have its own
baseline established the first time data are collected for it.

In summary, indicators should be well thought through. They should not be changed or switched often (and never on a whim), as this can lead to chaos in the
overall data collection system. There should be clarity and agreement in the M&E
system on the logic and rationale for each indicator from top level decision makers
on to those responsible for collecting data in the field.

Performance indicators can and should be used to monitor outcomes and provide
continuous feedback and streams of data throughout the project, program, or policy
cycle. In addition to using indicators to monitor inputs, activities, outputs, and
outcomes, indicators can yield a wealth of performance information about the
process of and progress toward achieving these outcomes. Information from
indicators can help to alert managers to performance discrepancies, shortfalls in
reaching targets, and other variabilities or deviations from the desired outcome.

Thus, indicators provide organizations and governments with the opportunity to make midcourse corrections, as appropriate, to manage toward the desired
outcomes. Using indicators to track process and progress is yet another
demonstration of the ways that a result based M&E system can be a powerful public
management tool.
The central function of any performance measurement process is to provide regular, valid data on indicators of performance outcomes.

CHAPTER 7

PROJECT MANAGEMENT TECHNIQUES OF MONITORING

In the past, a company typically decided to undertake a project effort, assigned the
project and the "necessary" resources to a carefully selected individual and assumed
they were using some form of project management. Organizational implications
were of little importance. Although the basic concepts of project management are
simple, applying these concepts to an existing organization is not. Richard P. Olsen,
in his article "Can Project Management Be Defined?" defined project management as
"…the application of a collection of tools and techniques…to direct the use of diverse
resources toward the accomplishment of a unique, complex, one-time task within
time, cost, and quality constraints. Each task requires a particular mix of these tools
and techniques structured to fit the task environment and life cycle (from conception
to completion) of the task."

Employing project management technologies minimizes the disruption of routine business activities in many cases by placing under a single command all of the skills,
technologies, and resources needed to realize the project. The skills required depend
on each specific project and the resources available at that time. The greater the
amount of adjustments a parent organization must make to fulfill project objectives,
the greater chance exists for project failure. The form of project management will be
unique for every project endeavor and will change throughout the project.

The project management process typically includes four key phases: initiating the
project, planning the project, executing the project, and closing the project. An
outline of each phase is provided below.

Initiating the Project

The project management techniques related to the project initiation phase include:
1. Establishing the project initiation team. This involves organizing team members
to assist in carrying out the project initiation activities.

2. Establishing a relationship with the customer. The understanding of your
customer's organization will foster a stronger relationship between the two of
you.
3. Establishing the project initiation plan. Defines the activities required to organize
the team while working to define the goals and scope of the project.
4. Establishing management procedures. Concerned with developing team
communication and reporting procedures, job assignments and roles, project
change procedure, and how project funding and billing will be handled.
5. Establishing the project management environment and workbook. Focuses on the
collection and organization of the tools that you will use while managing the
project.
Planning the Project

The project management techniques related to the project planning phase include:
1. Describing project scope, alternatives, and feasibility. The understanding of the
content and complexity of the project. Some relevant questions that should be
answered include:
o What problem/opportunity does the project address?
o What results are to be achieved?
o What needs to be done?
o How will success be measured?
o How will we know when we are finished?
2. Divide the project into tasks. This technique is also known as the work
breakdown structure. This step is done to ensure an easy progression between
tasks.
3. Estimating resources and creating a resource plan. This helps to gather and
arrange resources in the most effective manner.
4. Developing a preliminary schedule. In this step, you are to assign time estimates
to each activity in the work breakdown structure. From here, you will be able
to create the target start and end dates for the project.

5. Developing a communication plan. The idea here is to outline the
communication procedures between management, team members, and the
customer.
6. Determining project standards and procedures. The specification of how various
deliverables are produced and tested by the project team.
7. Identifying and assessing risk. The goal here is to identify potential sources of
risk and the consequences of those risks.
8. Creating a preliminary budget. The budget should summarize the planned
expenses and revenues related to the project.
9. Developing a statement of work. This document will list the work to be done and
the expected outcome of the project.
10. Setting a baseline project plan. This should provide an estimate of the project's
tasks and resource requirements.
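To make step 2 (the work breakdown structure) and step 4 (the preliminary schedule) concrete, here is a minimal sketch; every task name and duration below is invented for illustration.

```python
# A work breakdown structure as a nested mapping of phases to tasks,
# with a duration estimate (in weeks) attached to each task.
# All names and figures are illustrative.
wbs = {
    "1. Needs assessment": {
        "1.1 Stakeholder interviews": 2,
        "1.2 Baseline survey": 3,
    },
    "2. Implementation": {
        "2.1 Training workshops": 4,
        "2.2 Field monitoring visits": 2,
    },
}

# A crude preliminary schedule length, assuming tasks run one after another.
total_weeks = sum(sum(tasks.values()) for tasks in wbs.values())
print(total_weeks)  # overall duration if nothing runs in parallel
```

In practice, tasks that can run in parallel shorten this total, which is exactly what the PERT and Gantt techniques later in this chapter help to analyze.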

Executing the Project

The project management techniques related to the project execution phase include:
1. Executing the baseline project plan. The job of the project manager is to initiate
the execution of project activities, acquire and assign resources, orient and
train new team members, keep the project on schedule, and assure the quality
of project deliverables.
2. Monitoring project progress against the baseline project plan. Using Gantt and PERT charts, which are discussed in detail later in this chapter, can assist the project manager in doing this.
3. Managing changes to the baseline project plan.
4. Maintaining the project workbook. Maintaining complete records of all project
events is necessary. The project workbook is the primary source of
information for producing all project reports.
5. Communicating the project status. This means that the entire project plan should
be shared with the entire project team and any revisions to the plan should be
communicated to all interested parties so that everyone understands how the
plan is evolving.

Closing Down the Project

The project management techniques related to the project closedown phase include:
1. Closing down the project. In this stage, it is important to notify all interested
parties of the completion of the project. Also, all project documentation and
records should be finalized so that the final review of the project can be
conducted.
2. Conducting post project reviews. This is done to determine the strengths and
weaknesses of project deliverables, the processes used to create them, and the
project management process.
3. Closing the customer contract. The final activity is to ensure that all contractual
terms of the project have been met.
The techniques listed above in the four key phases of project management enable a project team to:
• Link project goals and objectives to stakeholder needs.
• Focus on customer needs.
• Build high-performance project teams.
• Work across functional boundaries.
• Develop work breakdown structures.
• Estimate project costs and schedules.
• Meet time constraints.
• Calculate risks.
• Establish a dependable project control and monitoring system.

Tools
Project management is a challenging task with many complex responsibilities.
Fortunately, there are many tools available to assist with accomplishing the tasks
and executing the responsibilities. Some require a computer with supporting
software, while others can be used manually. Project managers should choose a
project management tool that best suits their management style. No one tool
addresses all project management needs. Program Evaluation Review Technique
(PERT) and Gantt Charts are two of the most commonly used project management tools and are described below. Both of these project management tools can be
produced manually or with commercially available project management software.
Program Evaluation and Review Technique (PERT) is a scheduling method
originally designed to plan a manufacturing project by employing a network of
interrelated activities, coordinating optimum cost and time criteria. PERT
emphasizes the relationship between the time each activity takes,
the costs associated with each phase, and the resulting time and cost for the
anticipated completion of the entire project.

PERT is an integrated project management system. These systems were designed to manage the complexities of major manufacturing projects, the extensive data
necessary for such industrial efforts, and the time deadlines created by defense
industry projects. Most of these management systems developed following World
War II, and each has its advantages.

PERT was first developed in 1958 by the U.S. Navy Special Projects Office on the
Polaris missile system. Existing integrated planning on such a large scale was
deemed inadequate, so the Navy pulled in the Lockheed Aircraft Corporation and
the management consulting firm of Booz, Allen, and Hamilton. Traditional
techniques such as line of balance, Gantt charts, and other systems were eliminated,
and PERT evolved as a means to deal with the varied time periods it takes to finish
the critical activities of an overall project.

PERT is a planning and control tool used for defining and controlling the tasks
necessary to complete a project. PERT charts and Critical Path Method (CPM) charts
are often used interchangeably; the only difference is how task times are computed.
Both charts display the total project with all scheduled tasks shown in sequence. The
displayed tasks show which ones are in parallel, those tasks that can be performed at
the same time. A graphic representation called a "Project Network" or "CPM
Diagram" is used to portray graphically the interrelationships of the elements of a
project and to show the order in which the activities must be performed.

PERT planning involves the following steps:
1. Identify the specific activities and milestones. The activities are the tasks of the
project. The milestones are the events that mark the beginning and the end of
one or more activities.
2. Determine the proper sequence of activities. This step may be combined with #1
above since the activity sequence is evident for some tasks. Other tasks may
require some analysis to determine the exact order in which they should be
performed.
3. Construct a network diagram. Using the activity sequence information, a
network diagram can be drawn showing the sequence of the successive and
parallel activities. Arrowed lines represent the activities and circles or
"bubbles" represent milestones.
4. Estimate the time required for each activity. Weeks are a commonly used unit of
time for activity completion, but any consistent unit of time can be used. A
distinguishing feature of PERT is its ability to deal with uncertainty in activity
completion times. For each activity, the model usually includes three time
estimates:
o Optimistic time - the shortest time in which the activity can be
completed.
o Most likely time - the completion time having the highest probability.
o Pessimistic time - the longest time that an activity may take.
From this, the expected time for each activity can be calculated using the following weighted average:
Expected Time = (Optimistic + 4 x Most Likely + Pessimistic) / 6
This helps to bias time estimates away from the unrealistically short timescales normally assumed.
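Written out, the standard PERT weighted average divides the sum by 6 because the weights (1 + 4 + 1) total 6. A minimal sketch, with illustrative estimates:

```python
# PERT three-point estimate; all figures are illustrative.
def pert_expected_time(optimistic, most_likely, pessimistic):
    """Weighted average of the three time estimates (weights 1, 4, 1)."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Conventional PERT spread estimate: one sixth of the range."""
    return (pessimistic - optimistic) / 6

# Example: an activity estimated at 4 weeks (optimistic),
# 6 weeks (most likely) and 11 weeks (pessimistic).
te = pert_expected_time(4, 6, 11)   # (4 + 24 + 11) / 6 = 6.5 weeks
sd = pert_std_dev(4, 11)            # 7 / 6, about 1.17 weeks
```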
5. Determine the critical path. The critical path is determined by adding the times
for the activities in each sequence and determining the longest path in the
project. The critical path determines the total calendar time required for the
project. The amount of time that a non-critical path activity can be delayed
without delaying the project is referred to as slack time.

If the critical path is not immediately obvious, it may be helpful to determine the
following four times for each activity:
o ES - Earliest Start time
o EF - Earliest Finish time
o LS - Latest Start time
o LF - Latest Finish time
These times are calculated using the expected time for the relevant activities.
The earliest start and finish times of each activity are determined by working
forward through the network and determining the earliest time at which an
activity can start and finish considering its predecessor activities. The latest
start and finish times are the latest times that an activity can start and finish
without delaying the project. LS and LF are found by working backward
through the network. The difference in the latest and earliest finish of each
activity is that activity's slack. The critical path then is the path through the
network in which none of the activities have slack.
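The forward and backward passes described above can be sketched as follows; the four-activity network and its durations are invented for illustration:

```python
# Minimal forward/backward-pass sketch for a PERT/CPM network.
# Activity names, durations and dependencies are illustrative.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for act in ["A", "B", "C", "D"]:  # topological order
    ES[act] = max((EF[p] for p in predecessors[act]), default=0)
    EF[act] = ES[act] + durations[act]

project_end = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
successors = {a: [b for b in predecessors if a in predecessors[b]] for a in durations}
LF, LS = {}, {}
for act in ["D", "C", "B", "A"]:  # reverse topological order
    LF[act] = min((LS[s] for s in successors[act]), default=project_end)
    LS[act] = LF[act] - durations[act]

# Slack = LF - EF; the critical path is the chain of zero-slack activities.
slack = {a: LF[a] - EF[a] for a in durations}
critical_path = [a for a in durations if slack[a] == 0]
print(critical_path)
```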

The variance in the project completion time can be calculated by summing the
variances in the completion times of the activities in the critical path. Given
this variance, one can calculate the probability that the project will be
completed by a certain date assuming a normal probability distribution for
the critical path. The normal distribution assumption holds if the number of
activities in the path is large enough for the central limit theorem to be
applied.
6. Update the PERT chart as the project progresses. As the project unfolds, the
estimated times can be replaced with actual times. In cases where there are
delays, additional resources may be needed to stay on schedule and the PERT
chart may be modified to reflect the new situation. An example of a PERT
chart is provided below:

Benefits to using a PERT chart or the Critical Path Method include:
• Improved planning and scheduling of activities.
• Improved forecasting of resource requirements.
• Identification of repetitive planning patterns which can be followed in other projects, thus simplifying the planning process.
• Ability to see, and thus reschedule, activities to reflect interproject dependencies and resource limitations following known priority rules.
• It also provides the following: expected project completion time, probability of completion before a specified date, the critical path activities that impact completion time, the activities that have slack time and that can lend resources to critical path activities, and activity start and end dates.
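The probability of completion before a specified date can be estimated from the critical path statistics described earlier, assuming a normal distribution of the path total. The figures below are invented for the example.

```python
from statistics import NormalDist

# Illustrative sketch: sum the expected times and variances of the
# critical-path activities, then evaluate the normal CDF at the target.
critical_path_expected = 6.5 + 8.0 + 5.5    # expected activity times (weeks)
critical_path_variance = 1.0 + 2.25 + 0.25  # activity variances, ((P - O) / 6) ** 2

mean = critical_path_expected        # 20.0 weeks in total
std = critical_path_variance ** 0.5  # about 1.87 weeks
target = 22                          # promised completion date, in weeks

p = NormalDist(mu=mean, sigma=std).cdf(target)
print(f"P(finish within {target} weeks) is about {p:.2f}")
```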

Gantt charts are used to show calendar time task assignments in days, weeks or
months. The tool uses graphic representations to show start, elapsed, and
completion times of each task within a project. Gantt charts are ideal for tracking
progress. The number of days actually required to complete a task that reaches a
milestone can be compared with the planned or estimated number. The actual
workdays, from actual start to actual finish, are plotted below the scheduled days.
This information helps target potential timeline slippage or failure points. These
charts serve as a valuable budgeting tool and can show dollars allocated versus
dollars spent.
To draw up a Gantt chart, follow these steps:
1. List all activities in the plan. For each task, show the earliest start date,
estimated length of time it will take, and whether it is parallel or sequential. If
tasks are sequential, show which stages they depend on.

2. Head up graph paper with the days or weeks through completion.
3. Plot tasks onto graph paper. Show each task starting on the earliest possible
date. Draw it as a bar, with the length of the bar being the length of the task.
Above the task bars, mark the time taken to complete them.
4. Schedule activities. Schedule them in such a way that sequential actions are
carried out in the required sequence. Ensure that dependent activities do not
start until the activities they depend on have been completed. Where possible,
schedule parallel tasks so that they do not interfere with sequential actions on
the critical path. While scheduling, ensure that you make best use of the
resources you have available, and do not over-commit resources. Also, allow
some slack time in the schedule for holdups, overruns, failures, etc.
5. Present the analysis. In the final version of your Gantt chart, combine your
draft analysis (#3 above) with your scheduling and analysis of resources (#4
above). This chart will show when you anticipate that jobs should start and
finish. An example of a Gantt chart is provided below:
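The drawing steps above can also be sketched as a rough text-based Gantt chart; the task names, start weeks and durations below are invented:

```python
# Rough sketch: plot each task as a bar on a weekly grid.
# All names and figures are illustrative.
tasks = [
    ("Define scope",   0, 2),   # (name, start week, duration in weeks)
    ("Build WBS",      2, 3),
    ("Estimate costs", 2, 2),   # runs in parallel with "Build WBS"
    ("Draft schedule", 5, 1),
]

total = max(start + length for _, start, length in tasks)
print("Task            " + "".join(f"{w:<3}" for w in range(1, total + 1)))
for name, start, length in tasks:
    bar = " " * (3 * start) + "=" * (3 * length)
    print(f"{name:<16}{bar}")
```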

Benefits of using a Gantt chart include:
• Gives an easy-to-understand visual display of the scheduled time of a task or activity.
• Makes it easy to develop "what if" scenarios.
• Enables better project control by promoting clearer communication.
• Becomes a tool for negotiations.
• Shows the actual progress against the planned schedule.
• Can report results at appropriate levels.
• Allows comparison of multiple projects to determine risk or resource allocation.
• Rewards the project manager with more visibility and control over the project.

ASSIGNMENT ONE
1. a) Outline the responsibilities of a project manager.
   b) Discuss each of the phases of the project life cycle.
2. State and explain the project preparation process.
3. You have been tasked to identify a project; state and explain various sources through which the project can be identified.
4. a) State and explain the tools for managing a project team.
   b) Explain the inputs for developing a team.
5. What are the qualities of a good indicator? Give an example.

CHAPTER 8

UNDERSTANDING THE INITIATIVE

To produce credible information that will be useful for decision makers, evaluations
must be designed with a clear understanding of the initiative, how it operates, how
it was intended to operate, why it operates the way it does and the results that it
produces. It is not enough to know what worked and what did not work (that is,
whether intended outcomes or outputs were achieved or not). To inform action,
evaluations must provide credible information about why an initiative produced the
results that it did and identify what factors contributed to the results (both positive
and negative). Understanding exactly what was implemented and why provides the
basis for understanding the relevance or meaning of project or programme results.
Therefore, evaluations should be built on a thorough understanding of the initiative
that is being evaluated, including the expected results chain (inputs, outputs and
intended outcomes), its implementation strategy, its coverage, and the key
assumptions and risks underlying the Results Map or Theory of Change.

The Evaluation Context

The evaluation context concerns two interrelated sets of factors that have bearing on the accuracy, credibility and usefulness of evaluation results:
• Social, political, economic, demographic and institutional factors, both internal and external, that have bearing on how and why the initiative produces the results (positive and negative) that it does and the sustainability of results.
• Social, political, economic, demographic and institutional factors within the environment and time frame of the evaluation that affect the accuracy, impartiality and credibility of the evaluation results.
Examining the internal and external factors within which a development initiative
operates helps explain why the initiative has been implemented the way it has and
why certain outputs or outcomes have been achieved and others have not. Assessing
the initiative context may also point to factors that impede the attainment of anticipated outputs or outcomes, or make it difficult to measure the attainment of
intended outputs or outcomes or the contribution of outputs to outcomes. In
addition, understanding the political, cultural and institutional setting of the
evaluation can provide essential clues for how best to design and conduct the
evaluation to ensure the impartiality, credibility and usefulness of evaluation results.

Guiding questions for defining the context


a) What is the operating environment around the project or programme?
b) How might factors such as history, geography, politics, social and economic
conditions, secular trends and efforts of related or competing organizations
affect implementation of the initiative strategy, its outputs or outcomes?
c) How might the context within which the evaluation is being conducted (for
example, cultural language, institutional setting, community perceptions, etc.)
affect the evaluation?
d) How does the project or programme collaborate and coordinate with other
initiatives and those of other organizations?
e) How is the programme funded? Is the funding adequate? Does the project or
programme have finances secured for the future?
f) What is the surrounding policy and political environment in which the project
or programme operates? How might current and emerging policy alternatives
influence initiative outputs and outcomes?

The Evaluation Purpose


All evaluations start with a purpose, which sets the direction. Without a clear and
complete statement of purpose, an evaluation risks being aimless and lacking
credibility and usefulness. Evaluations may fill a number of different needs. The
statements of purpose should make clear the following:
• Why the evaluation is being conducted at that particular point in time
• Who will use the information
• What information is needed
• How the information will be used

The purpose and timing of an evaluation should be determined at the time of
developing an evaluation plan (see Chapter 3 for more information). The purpose
statement can be further elaborated at the time a ToR for the evaluation is drafted to inform the evaluation design.
Focusing the Evaluation
Evaluation Scope
The evaluation scope narrows the focus of the evaluation by setting the boundaries
for what the evaluation will and will not cover in meeting the evaluation purpose.
The scope specifies those aspects of the initiative and its context that are within the
boundaries of the evaluation. The scope defines, for example:
• The unit of analysis to be covered by the evaluation, such as a system of related programmes, policies or strategies, a single programme involving a cluster of projects, a single project, or a subcomponent or process within a project
• The time period or phase(s) of the implementation that will be covered
• The funds actually expended at the time of the evaluation versus the total amount allocated
• The geographical coverage
• The target groups or beneficiaries to be included
The scope helps focus the selection of evaluation questions to those that fall within
the defined boundaries.

Evaluation Objectives and Criteria


Evaluation objectives are statements about what the evaluation will do to fulfill the
purpose of the evaluation. Evaluation objectives are based on careful consideration
of: the types of decisions evaluation users will make; the issues they will need to
consider in making those decisions; and what the evaluation will need to achieve in
order to contribute to those decisions. A given evaluation may pursue one or a
number of objectives. The important point is that the objectives derive directly from
the purpose and serve to focus the evaluation on the decisions that need to be made.

CHAPTER 9

STAKEHOLDER ANALYSIS

A stakeholder analysis provides a means to identify the relevant stakeholders and assess their views and support for the proposed project.

A stakeholder can be defined as any individual, group of people, institution or organization that may have a significant interest in the success or failure of a potential project around the issue of concern. These may be affected either positively or negatively by a proposed project.

Stakeholders therefore go beyond the target group, and extend to those that may
have something to bring to assist the project, or those that may resist the project
taking place. When identifying stakeholders, it is important to consider potentially
marginalized groups, such as women, the elderly, youth, the disabled and the poor,
so that they are represented in the process, especially if the issue will affect their
lives.

It is important to identify and understand the different stakeholders and their varying levels of interest and power to influence the project, and their motivation and capacity (resources/knowledge/skills) that they bring to the issue. Having these
matters identified and clarified will make the process of identifying the causes of the
problem and potential solutions much easier.

You should aim to identify the motivation or constraints to change from the aspect
of the target group(s), so that you can better understand the underlying causes to the
issue you seek to overcome. This is particularly important if you have more than one
target group, or a diverse group (e.g. urban and rural households). You can use
relevant and up-to-date information from the literature review, as well as directly engaging stakeholders, to complete the stakeholder analysis.

Stakeholder analysis is used to understand who the key actors are around a given
issue and to gauge the importance of different groups' interests and potential
influence. It also serves to highlight groups who are most affected by a given issue
and least able to influence the situation.

Stakeholder analysis should be focused on a single issue, e.g. girls’ education or
recruitment of child soldiers. It can serve as an analytical framework for processing
data or as a data collection exercise to be done in the field:
 based on review of existing information (documentary review);
 in group meetings;
 through key informant interviews (centrally or in the field).

It can serve in an assessment exercise, in a programme monitoring exercise (e.g. to
further probe positions/interests as the programme advances) and in an evaluation
(e.g. how have interests changed, supporting or impeding programme progress?).
What it can tell us
 Identify different groups that can be sources of information;
 Interpret perspectives provided by each group;
 Identify who could positively or negatively influence programme responses.

To support realistic programme planning and management, data collectors must look carefully:
 within the group of primary stakeholders, recognizing that this group is not
uniform but includes sub-groups with different characteristics (e.g. women,
children, leaders); and
 at the wider group of actors that might positively or negatively influence a
situation.

A "do no harm" perspective must foresee which non-primary stakeholder groups
might seek to benefit from a programme at the expense of primary stakeholders.

Direct capacity-building efforts

 A capacity-building approach to projects should seek to increase primary
stakeholders’ influence over the achievement of a goal (i.e. move primary
stakeholders towards sector 1 in the Venn diagram below).

[Figure: a vertical win/lose axis and a horizontal influence/be influenced axis
divide the stakeholder diagram into four numbered sectors, 1–4.]
Representing stakeholders as a Venn diagram


Two circles distinguish stakeholders:
 Primary stakeholders (those who will benefit from an intervention) are
represented inside the dotted oval;
 The wider context of stakeholders is represented by the larger oval.

Two axes (influence/be influenced and win/lose) divide the diagram into four
areas:
Sector 1: Those who can influence the situation and benefit from it; examples:
 Outsiders: local and international NGOs, political factions;
 Primary stakeholders: influential actors (e.g. leaders).

Sector 2: Those who are influenced by the changes and will benefit from it;
examples:
 Primary stakeholders;
 Non-primary stakeholders who will nonetheless gain from the project’s
outcomes.

Sector 3: Those who cannot influence the achievement of a goal and will be affected
negatively by it; examples:
 Primary stakeholders and outsiders whose status or relative wealth is changed by
an activity.
Sector 4: Those who can influence but will lose from the achievement of a goal. This
is an important area to consider, as it will include those who actively oppose the
achievement of a project; examples:
 External factions or local leaders among the primary stakeholders who are
opposed to a change in their status.
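As an illustration, the four sectors can be expressed as a simple decision rule over two questions: can the stakeholder influence the goal, and do they stand to win or lose from it? The sketch below is hypothetical (the function name and examples are not from any standard toolkit):

```python
# Hypothetical sketch: map a stakeholder's position on the two axes
# (influence / be influenced, win / lose) to the four sectors above.

def sector(can_influence: bool, wins: bool) -> int:
    """Return the Venn-diagram sector (1-4) for a stakeholder."""
    if can_influence and wins:
        return 1  # e.g. NGOs, political factions, influential leaders
    if not can_influence and wins:
        return 2  # e.g. most primary stakeholders
    if not can_influence and not wins:
        return 3  # affected negatively, unable to influence
    return 4      # can influence but loses: likely to oppose the project

# Local leaders whose status would change, but who hold influence:
print(sector(can_influence=True, wins=False))  # 4
```

Stakeholders landing in sector 4 deserve special attention in planning, since they have both the motive and the means to oppose the project.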

Matrix for stakeholder analysis

Identifying stakeholders requires consideration of who has influence, who is
affected, and particularly vulnerable groups.

To identify interests, consider:
 Expectations (positive and negative)
 Benefits or losses stakeholders are likely to face (power, status, economic
resources: financial and non-financial)
 Relations with other stakeholders
 Potential conflicts between interests and rights

Resources that can be mobilized in support of interests:
 Information
 Economic resources
 Status (also leadership)
 Legitimacy / authority
 Coercion

The matrix itself has four columns:

Stakeholder group or sub-group | Key interests | Programme/decision’s potential impact (+, -) | Potential influence

Source: Benjamin Crosby (March 1992), “Stakeholder Analysis: A Vital Tool for
Strategic Managers”
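In practice, the matrix can be held as simple structured records and queried during planning. The sketch below is illustrative only; the field names and example rows are invented, not taken from Crosby:

```python
# Illustrative stakeholder-analysis matrix as a list of records.
# Field names and rows are hypothetical examples.

matrix = [
    {"group": "Rural women", "key_interests": "access to services",
     "impact": "+", "influence": "low"},
    {"group": "Local leaders", "key_interests": "status, authority",
     "impact": "-", "influence": "high"},
    {"group": "Local NGO", "key_interests": "programme success",
     "impact": "+", "influence": "high"},
]

# Flag groups likely to oppose the programme: negatively affected
# yet able to exert high influence.
opponents = [row["group"] for row in matrix
             if row["impact"] == "-" and row["influence"] == "high"]
print(opponents)  # ['Local leaders']
```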

Importance of Evaluation and its uses


Evaluation is critical for any development project to progress towards advancing
human development. Through the generation of ‘evidence’ and objective
information, evaluations enable managers to make informed decisions and plan
strategically. The effective conduct and use of evaluation requires adequate human
and financial resources, sound understanding of evaluation and most importantly, a
culture of results-orientation, learning, inquiry and evidence-based decision
making.

When evaluations are used effectively, they support programme improvements,
knowledge generation and accountability.
 Supporting programme improvements—Did it work or not, and why? How
could it be done differently for better results?
The interest is on what works, why and in what context. Decision makers, such as
managers, use evaluations to make necessary improvements, adjustments to the
implementation approach or strategies, and to decide on alternatives. Evaluations
addressing these questions need to provide concrete information on how
improvements could be made or what alternatives exist to address the necessary
improvements.
 Building knowledge for generalizability and wider application—What can we
learn from the evaluation? How can we apply this knowledge to other
contexts?
The main interest is in the development of knowledge for global use and for
generalization to other contexts and situations. When the interest is in knowledge
generation, evaluations generally apply a more rigorous methodology to ensure a
higher level of accuracy in the evaluation and the information being produced,
allowing for generalizability and wider application beyond a particular context.

CHAPTER 10

IMPORTANCE OF MONITORING AND EVALUATION

Evaluations should not be seen as an event but as part of an exercise whereby
different stakeholders are able to participate in the continuous process of generating
and applying evaluative knowledge. UNDP managers, together with government
and other stakeholders, decide who participates in what part of this process
(analyzing findings and lessons, developing a management response to an
evaluation, disseminating knowledge) and to what extent they will be involved
(informed, consulted, actively involved, equal partners or key decision makers).
These are strategic decisions for UNDP managers that have a direct bearing on the
learning and ownership of evaluation findings. An evaluation framework that
generates knowledge, promotes learning and guides action is an important means of
capacity development and sustainability of results.
 Supporting accountability—Is UNDP doing the right things? Is UNDP doing
things right? Did UNDP do what it said it would do?

The interest here is on determining the merit or worth and value of an initiative and
its quality. An effective accountability framework requires credible and objective
information, and evaluations can deliver such information. Evaluations help ensure
that UNDP goals and initiatives are aligned with and support the Millennium
Declaration, MDGs, and global, national and corporate priorities. UNDP is
accountable for providing evaluative evidence that links UNDP contributions to the
achievement of development results in a given country and for delivering services
that are based on the principles of human development. By providing such objective
and independent assessments, evaluations in UNDP support the organization’s
accountability towards its Executive Board, donors, governments, national partners
and beneficiaries.

The intended use determines the timing of an evaluation, its methodological
framework, and the level and nature of stakeholder participation. The use therefore
has to be determined at the planning stage.
Monitoring and Evaluation is important because:
 it provides the only consolidated source of information showcasing project
progress;
 it allows actors to learn from each other’s experiences, building on expertise
and knowledge;
 it often generates (written) reports that contribute to transparency and
accountability, and allows for lessons to be shared more easily;
 it reveals mistakes and offers paths for learning and improvement;
 it provides a basis for questioning and testing assumptions;
 it provides a means for agencies to learn from their experiences and to
incorporate them into policy and practice;
 it provides a way to assess the crucial link between implementers and
beneficiaries on the ground and decision-makers;
 it adds to the retention and development of institutional memory;
 it provides a more robust basis for raising funds and influencing policy.
Points to note:
For any monitoring and evaluation to be useful, the organization must ensure
that the evaluation is:
1) Independent—Management must not impose restrictions on the scope,
content, comments and recommendations of evaluation reports. Evaluators
must be free of conflicts of interest.
2) Intentional—The rationale for an evaluation and the decisions to be based on
it should be clear from the outset.
3) Transparent—Meaningful consultation with stakeholders is essential for the
credibility and utility of the evaluation.
4) Ethical—Evaluation should not reflect personal or sectoral interests.
Evaluators must have professional integrity, respect the rights of institutions

and individuals to provide information in confidence, and be sensitive to the
beliefs and customs of local social and cultural environments.
5) Impartial—removing bias and maximizing objectivity are critical for the
credibility of the evaluation and its contribution to knowledge.
6) Of high quality—All evaluations should meet minimum quality standards
defined by the Evaluation Office.
7) Timely—Evaluations must be designed and completed in a timely fashion so
as to ensure the usefulness of the findings and recommendations.
8) Used—Evaluation is a management discipline that seeks to provide
information to be used for evidence-based decision making. To enhance the
usefulness of the findings and recommendations, key stakeholders should be
engaged in various ways in the conduct of the evaluation.

Understanding The Stakeholders In Evaluation


You want to know more about how your group is doing, but others you work with
want to know whether you are making a difference. Welcome to the world of
evaluation. If you are a community initiative, you will want to evaluate your effort.
You will need to devote some time and energy to planning the evaluation process.
Like many other aspects of community health and development, an evaluation will
ultimately be more beneficial if you spend the time and energy searching for ways to
successfully begin and complete an evaluation.

One step in the planning process includes understanding and recognizing the
interests of stakeholders in the evaluation. The stakeholders include community
leaders, evaluators, and funders, and you will want to know how the evaluation will be
used by each of them.

The evaluation should respond to the interests of those three stakeholders, and
nothing is more productive than designing it together. The evaluation can serve the
community leaders' interests, the funders' interests, and the evaluators' interests in a
single useful product, if you know what they want before you start. It's important to

define the stakeholders’ interests in using the evaluation so that it can focus on
optimally answering questions important to all of them.

What do we mean by needs and interests? Needs and interests are those qualities
which community leadership, evaluators, and funders see as important for doing
their jobs well. Because each of these stakeholders is looking at the evaluation from a
unique perspective, it helps to recognize those differences, and incorporate them
into the evaluation.

For starters, let's consider why you'd want to conduct an evaluation in the first
place.
There are many basic reasons why stakeholders want an evaluation:
 To be accountable as a public operation
 To assist those who are receiving grants to improve
 To improve a foundation's grant making
 To assess the quality or impact of funded programs
 To plan and implement new programs
 To disseminate innovative programs
 To increase knowledge
A stakeholder may want an evaluation for one, two or all of these reasons.
Evaluators may want to increase knowledge, funders may want to improve grant
making and community leaders may want to assess quality. Community leaders
may not want to answer more than a phone interview by a student intern, evaluators
may be interested in systematic, disciplined inquiry, and funders may look for
accountability.
When it comes time for evaluation, you don't have to be specialists in order to make
good decisions about what you will do. You should, however, be knowledgeable
about uses of evaluations and how they match the many interests involved so that
you can make informed choices.

Who are Community Leaders, Evaluators and Funders?
Community Leaders
May include staff, administrators, committee chairpersons, agency personnel and
civic leaders, and trustees of an initiative. They may have little knowledge of
evaluation, nor feel they have much time to provide data or read data reports. Yet,
the evaluation must be responsive, useful and sensitive to their decision-making
requirements. They often are interested in how to improve the functioning of their
initiative.

Evaluators
Are often professionals, though anyone can design and implement an evaluation.
There are several professional associations that support evaluators and have
established standards of practice, such as the American Evaluation
Association or its Collaborative, Participatory, and Empowerment Evaluation
Topical Interest Group. Evaluators can be private consultants, university or
foundation staff, or a member of the initiative. Evaluators are often interested in the
systematic production of useful, reliable information.

Funders
Are those individuals or organizations that provide financial support for the
initiative. They might include program officers or other representatives of
government agencies, foundations, or other sources of financial support. Some
funders have built a formal evaluation into their regular activities, but they are in the
minority. Funders are often interested in whether the use of their funds is having an
impact on the problems facing communities.

Why Should You Understand The Interests Of These Groups?


So, you understand the idea in principle, but why do you need to understand the
needs of leaders, evaluators and funders? The information needs of various groups
can be very different, so it's important to take into account the kinds of information

that will be convincing and useful to the target audiences. Knowing this will help
you decide what information is needed and the tools you could use to obtain it.
While you may know your group does good work, chances are good that other
important members of the community do not know what you do. Consequently,
others who have supported and encouraged your efforts will want to know what
has worked, and what hasn't worked; and what should change and what should
stay the same. Because these groups or individuals might be instrumental in
assisting your work, financially or otherwise, it makes good sense to include their
needs in the evaluation process.

Even more important is the requirement that the information is used effectively. The
question is: to whom is it useful? If there's no direction to your information
gathering, you can collect just about anything you want, but so what?
matter to anyone else, it is meaningless. If I collect information about the number of
people that my agency serves, I may find that useful, especially if I'm reimbursed for
that number. But what if someone really wanted to know if the efforts of the
agencies in town had an impact on a health problem. The number of people my
agency serves might not be that useful.
The interests, which help us determine the information we need, lead us to
develop tools to collect it. In other words, it is the interests of the stakeholders that
shape the inquiry.
 What are the interests of the stakeholders?
 What information will help them?
 What questions will you ask to get that information?
 What tools will help you collect it?
In the long run, including the stakeholders in the process will lead to greater
collaboration and organizational capacity to solve community problems.
Understanding stakeholders' interests will enable you to employ your resources
better. Knowing what everyone wants and needs will help you plan the optimal
evaluation.

When Should You Understand The Interests Of These Groups?
You will want to identify stakeholders from the get-go. By going through a process
of stakeholder identification before you begin evaluating, you will be able to obtain
their views and incorporate their ideas and needs into the evaluation itself.
Of course, the sooner you identify the needs and interests of those groups, the
sooner you will be able to gain understanding of the different issues each group is
interested in without wasting time or money. You also have to be watchful so that if
interests change, you can adapt to those changes in a timely manner and keep your
evaluation valid.

What are the Interests of These Groups?


How Do We Find These People?

First and foremost, you and other members of the group will need to sit down, pour
a cup of Joe, and grab a pencil. Think about the individuals and groups that have
needs that should be addressed in the evaluation. You should try to figure out what
their interests are.

Of course, some people may ask, "Why them? Our group's interests and needs
should be the focus of the evaluation." In one sense, this is true. One of the main
purposes of the evaluation process is to provide feedback and ideas for the
group itself so members can improve and strengthen their efforts. But remember,
everyone is in this together: the community, the funders and those who will conduct the
evaluation. You want the best information possible: information that will help you
make the best decisions.

But, at the same time, there are other factors to consider.


Over the course of your brainstorming session, you should identify as many
stakeholders as you can.

To identify stakeholders, ask yourself these questions:
 Who provides funding for our initiative?
 Who will conduct the evaluation?
 Who do we collaborate with?
Once you know who these people are, find out what they want. To do this, let's take
a look at the groups. Then, we'll talk about the specific needs and interests of
members of each of these groups.

Community Leaders
What will this group need from your evaluation? The information should be:
 Clear and understandable: They may have limited knowledge about the
goings-on of your group, or about evaluations. Immediately, then, you know
that the evaluation must be clear and understandable.
 Efficient: They probably have a variety of different responsibilities which
demand their time and consideration, so they won't want to waste time
reading information irrelevant to their needs.
 Responsive: They may include decision-makers that can affect the future of
your group. Therefore, your evaluation needs to be responsive to their
decision-making requirements.
 Sensitive: They will want to know what the initiative has accomplished, so
the evaluation should be sensitive to the activities and accomplishments of
the initiative.
 Useful: They will include decision-makers for the initiative, so the evaluation
needs to show them how their efforts can be improved.

Evaluators
They will be assessing the effectiveness of the initiative in meeting its goals.
What do they need to get out of the evaluation?
 Input: The evaluation team needs to receive input from the initiative's clients,
including community leadership, funders, and members of the initiative
itself, in order to know what the clients want to learn about the initiative.

 Accurate and Complete Information: In a similar vein, the evaluators need
accurate and complete information in order to answer the questions posed by
the stakeholders.
 Cooperation: Finally, the evaluators will need cooperation from participants
and officials in order to obtain needed data.

Funders
They will need:
 Clear and timely reports: Because of their responsibilities for making
decisions concerning the continuation of financial support, the funders will
need information about the progress of the initiative.
 Evidence of community change and impact: Funders will need to be able to
measure the success of the initiative and report this to their own trustees or
constituents.

How Do You Determine Interests?


Now that you know what interests you're looking for, you have to determine a way of
finding them. You need to match people with their interests, whether through
a survey, an interview, or some other method. Failure to determine interests
is often the source of problems and misunderstandings along the way, and can
become disastrous if it turns out that different stakeholders had different
expectations and priorities.

Some of the questions that you can ask stakeholders to match them with their
interests in an evaluation are:
 What are the evaluation's strengths and weaknesses?
 Do you think the evaluation is moving toward its desired outcomes?
 Which kind of implementation problems came up, and how are they being
addressed?
 How are staff and clients interacting?
 What is happening that wasn't expected?

 What do you like, dislike, or would like to change in the evaluation process?
From the answers you get, you can determine what each party wants out of the
evaluation. You can also group those who have similar interests. For instance, you
may find out that a community leader and an evaluator are interested in improving
their managing abilities through the evaluation process. You have made a match,
and those two will work toward a common goal.
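Grouping stakeholders who share an interest, as described above, can be sketched as a simple tally. The survey responses below are invented for illustration:

```python
# Hypothetical sketch: group stakeholders who reported the same
# interest, so those with a common goal can work together.
from collections import defaultdict

responses = [
    ("community leader", "improve management"),
    ("evaluator", "improve management"),
    ("funder", "accountability"),
]

by_interest = defaultdict(list)
for stakeholder, interest in responses:
    by_interest[interest].append(stakeholder)

for interest, group in by_interest.items():
    print(interest, "->", group)
# improve management -> ['community leader', 'evaluator']
# accountability -> ['funder']
```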

Here are some ways of determining interests and matching them with
stakeholders:
 Interviews: Get a representative from each group of stakeholders and ask
away. Be direct in your questions so that you can quickly get to the point:
what interests this stakeholder has and whether they match other
stakeholders' interests.
 Surveys: You can send out a written questionnaire to assess how the
stakeholders rank their interests and which group wants what. The survey
must be succinct and direct, asking clear questions about the evaluation in
terms of quality and goals. Survey results are easy to utilize and can be
helpful for the evaluation presentation.
 Phone surveys: They can save you time and money, if you're doing them
locally. You can use the same questions you would use in a written survey,
but leave more space for commentary, as people tend to talk more when
speaking to a person on the phone. Just be sure your phone surveys don't
stray from your objective.
 Brainstorm sessions: Arrange a meeting with stakeholder representatives and
brainstorm interests and possibilities for the evaluation's outcome. Bring up
problems such as continuity of the program, obtaining funds, coordinating
activities, and attracting staff, and let stakeholders have their say. Everybody
will come out from the brainstorming session with new ideas and a much
better notion of everybody else's ideas.

Besides these methods, you should always conduct a survey after the completion of
the evaluation. This will benefit external audiences and decision-makers. If changes
need to be made, don't be afraid to follow through with them. Remember that
decisions about how to improve a program tend to be made in small, incremental
steps based on specific findings, aimed at making the evaluation a better process for
all stakeholders involved.

Now armed with this list of needs and interests, you can find or develop the tools to
obtain useful information. The next sections will explore ways to select an evaluation
team and present some key questions for the evaluation process. Later, we will be
discussing how to evaluate your community initiative!

In Summary
Once you have a clear idea of what each stakeholder really wants, you are much
more likely to succeed in your evaluation. Be sure to revisit the interests of all
the stakeholders involved frequently so that you don't lose sight of what you're
looking for with your evaluation. The hard part of your evaluation work starts now!

CHAPTER 11

CLUSTER DEVELOPMENT

Clusters can be defined as sectoral and geographical concentrations of enterprises, in
particular small and medium enterprises (SMEs), faced with common opportunities
and threats, which can:
a. Give rise to external economies (e.g. specialized suppliers of raw materials,
components and machinery; sector specific skills etc.);
b. Favor the emergence of specialized technical, administrative and financial
services;
c. Create a conducive ground for the development of inter-firm cooperation and
specialization as well as of cooperation among public and private local
institutions to promote local production, innovation and collective learning.

UNIDO has been implementing technical cooperation projects based on a cluster
and network development (CND) approach which is built on three assumptions:
 that clustering and networking among enterprises promotes enterprise
competitiveness;
 that public policy can help to facilitate clustering and networking; and
 that support programmes targeting groups of enterprises are more cost-
efficient and cost-effective than those targeting individual enterprises.

UNIDO has formulated a modular approach to guide the formulation and
implementation of cluster development initiatives. Each module represents a critical
phase in the cluster development process:

 Phase 1 - Cluster selection:
The selection of clusters to be supported has to be made according to specific
and agreed-upon criteria. Such criteria should be determined in a transparent
process, and the ranking established by the implementing agency, the
national counterpart agency and any other bodies that have a clear stake in
the initiative.

 Phase 2 - Cluster governance, trust building and the CDA:


The Cluster Development Agent (CDA) is a neutral professional or broker
who facilitates the process of cluster and network development. S/he plays a
crucial, yet typically temporary role in developing a sustainable cluster
governance system and accompanies the cluster development initiative over
the subsequent development phases.

 Phase 3 - Cluster diagnostics:
A cluster diagnostic study or assessment forms the basis for developing a
strategic vision and action plan for the cluster. It is developed through a
participatory exercise guided by the CDA with a view to:
 developing an understanding of the socioeconomic and institutional
environment of a cluster;
 detecting potential leverage points for the intervention;
 providing a baseline for monitoring and evaluation; and
 building initial trust between the CDA and the cluster stakeholders.

 Phase 4 - Vision building and action planning:
Based on the diagnostic study, the cluster stakeholders develop a long-term
vision for the cluster and a detailed plan for joint actions aimed at realizing
the cluster vision over specified periods (short, medium and long term).

 Phase 5 - Implementation:
Implementation refers to the entire set of joint actions that are required to
realize the long-term vision of the cluster. It is not the mandate of the
implementing agency or, more specifically, of the CDA, to directly carry out
all project activities. Rather, the CDA facilitates the undertaking of activities

through the establishment of partnerships with both private organizations
and other public institutions and, of course, based on the capabilities (present
and future) of the cluster firms.

 Phase 6 - Monitoring and Evaluation (M&E):
While the M&E phase is the final one in the UNIDO cluster development
methodology, M&E activities have to start at the very outset of the
intervention. Importantly, the indicators against which progress can be
measured, as well as reporting lines and responsibilities, have to be
determined during the very early stages of a cluster and network
development initiative. As already mentioned, the diagnostic phase will
usually help with the development of a monitoring framework, the
specification of indicators and the establishment of a baseline. At the same
time, the vision building and action planning phase will have to be
considered too, to clearly reflect the strategic orientation of the initiative in
the monitoring framework. During the implementation phase, M&E activities
are performed regularly to assess progress and to undertake corrective action
where necessary.

Why Clusters
Clusters have gained increasing prominence in debates on economic development in
recent years. Governments worldwide regard clusters as potential drivers of
enterprise development and innovation. Cluster initiatives are also considered to be
efficient policy instruments in that they allow for a concentration of resources and
funding in targeted areas with a high growth and development potential that can
spread beyond the target locations (spillover and multiplier effects).

Examples of internationally renowned clusters, such as the Silicon Valley
cluster in California, the information technology cluster of Bangalore in India, or the
Australian and Chilean wine clusters, demonstrate that clusters are environments
where enterprises can develop a competitive and global edge, while at the same time
generating wealth and local economic development in the process. This is because

clustering provides enterprises with access to specialized suppliers and support
services, experienced and skilled labor and the knowledge sharing that occurs when
people meet and talk about business.

Clusters are also particularly promising environments for SME development. Due to
their small size, SMEs individually are often unable to realize economies of scale and
thus find it difficult to take advantage of market opportunities that require the
delivery of large stocks of standardized products or compliance with international
standards. They also tend to have limited bargaining power in inputs purchase, do
not command the resources required to buy specialized support services, and have
little influence in the definition of support policies and services.

Within clusters, SMEs can realize shared gains through the organization of joint
actions between cluster enterprises (e.g., joint bulk inputs purchase or joint
advertising, or shared use of equipment) and between enterprises and their support
institutions (e.g., provision of technical assistance by business associations or
investments in infrastructure by the public sector). The advantage accruing to the
cluster from such collective efforts is referred to as collective efficiency.

The UNIDO Approach to Cluster Development focuses in particular on overcoming
these impediments to SME competitiveness and on unleashing SMEs' growth and
sustainable development potential.

Development Principles
The underlying concern of the UNIDO Approach is the stimulation of pro-poor
growth, defined as a pattern of economic growth that creates opportunities for the
poor, and generates the conditions for them to take advantage of those
opportunities. In order to improve cluster performance, UNIDO addresses economic
and non-economic issues, especially those related to the fostering of human
and social capital with a view to enhancing labor force production capacities and
increasing economic participation. Such an approach requires measures that are

aimed at empowering marginalized groups, improving access to employment
opportunities, and supporting the well-being of entrepreneurs and employees as
well as the development of their skills to boost productivity and enhance innovation
capacity.

Facilitate the undertaking of joint actions to realize collective efficiency gains


The UNIDO Approach to cluster development focuses on initiatives that encourage
enterprises and institutions in selected clusters to undertake joint actions that could
ultimately yield benefits to the cluster as a whole and the communities in which they
are embedded. It does so by brokering and facilitating dialogue and by promoting
activities oriented at building consensus within the cluster. A distinctive feature of
the UNIDO Approach is that – instead of targeting relatively large and successful
enterprises and hoping that the benefits will trickle down to smaller enterprises in
the cluster - the cluster vision and action plan are devised by a representative group
of cluster stakeholders and thus comprise activities that tackle issues of relevance to
a majority of cluster stakeholders.

Provide targeted support to the cluster’s institutional support structure


The UNIDO Approach focuses on incentivizing public and private sector bodies to
more effectively promote cluster development and on strengthening their capacity
to do so. Support is given to relevant local, regional, and national institutions,
including chambers of commerce, local governments, NGOs, producer associations,
universities and training institutes and regional as well as local economic
development agencies to gradually assume a strong supporting role in the
development of the cluster. UNIDO also technically assists financial and non-
financial service providers (e.g., business development service (BDS) providers,
vocational schools and training institutes, large buyers and retailers, and the
suppliers of equipment and inputs) to make the services they offer more responsive
to demands from within clusters.

Involve public and private sector actors based on their respective capacities and
competencies
While the role of the public sector in supporting a cluster development initiative
normally includes responding to demands from within the cluster for changes in the
business environment, larger-scale infrastructure development, the provision of an
adequate framework for education and broader skills development, and the
coordination and support of brokering activities, the private sector can play an
active role in mobilizing human and financial resources to be invested in innovative
ventures that increase the growth potential of the cluster; providing business
development and financial services on a commercially viable and sustainable basis;
and establishing and/or participating in representative bodies that voice the
interests of the business community in dialogues with the public sector. A local
public-private forum, e.g. in the context of the Cluster Commission or another
suitable dialogue mechanism, can also ensure that cluster development initiatives
within a country or region are linked with other public support programmes for
private sector development.

Monitor and evaluate project results to improve efficiency and effectiveness, enhance accountability and demonstrate impact.
Monitoring and Evaluation (M&E) are an integral part of project management. They
provide detailed information about the project’s results assessing any changes –
both intended and unintended - an intervention may have produced. Understanding
the status quo is a prerequisite to determine the intervention strategy of a project. As
project staff and management can only take corrective measures if they are aware of
the outcomes produced and the (external) factors that influenced them, monitoring
information forms the basis for project-related decision making on a daily basis as
well as the coordination of actors and activities. In the UNIDO approach, the careful
construction of a cluster development initiative’s causal chain and the determination
of key performance indicators are critical steps to be undertaken right from the
beginning of a typical project. To facilitate this, UNIDO has developed step-by-step
guidelines, based on a generic causal chain and a pool of relevant indicators, to
develop a tailor-made monitoring system for each cluster development initiative. A
project evaluation, typically carried out at the end of a cluster development initiative
(for longer projects, a mid-term evaluation is recommended), assesses several
aspects of an intervention, including its relevance, efficiency, effectiveness, impact
and sustainability, in order to appraise its overall usefulness.
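To make the indicator idea concrete, the sketch below shows how monitoring data might be tabulated and compared against baselines and targets. The indicator names and figures are invented for illustration and are not drawn from UNIDO’s guidelines.

```python
# Hypothetical monitoring data: (baseline, target, latest value) per indicator.
# Names and numbers are illustrative only, not from any real project.
indicators = {
    "cluster firms adopting joint purchasing": (5, 25, 18),
    "workers completing skills training": (0, 200, 140),
}

for name, (baseline, target, current) in indicators.items():
    # Progress toward the target, measured from the baseline.
    progress = (current - baseline) / (target - baseline)
    print(f"{name}: {progress:.0%} of target reached")
```

In practice, a monitoring system would track many such indicators over time and feed the results into the project’s day-to-day decision making, as described above.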

CHAPTER 12

COMMUNITY BASED PARTICIPATORY RESEARCH

Della Roberts worked as a nutritionist at the Harperville Hospital. As an African
American, she was concerned about obesity among black children, and about the
fact that many of Harperville’s African American neighborhoods didn’t have access
to healthy food in stores or restaurants. She felt that the city ought to be doing
something to change the situation, but officials didn’t seem to see it as a problem.
Della decided to conduct some research to use as a base for advocacy.

Della realized that in order to collect accurate data, she needed to find researchers
who would be trusted by people in the neighborhoods she was concerned about.
What if she recruited researchers from among the people in those neighborhoods?
She contacted two ministers she knew, an African American doctor who practiced in
a black neighborhood, and the director of a community center, as well as using her
own family connections. Within two weeks, she had gathered a group of
neighborhood residents who were willing to act as researchers. They ranged from
high school students to grandparents, and from people who could barely read to
others who had taken college courses.

The group met several times at the hospital to work out how they were going to
collect information from the community. Della conducted workshops in research
methods and in such basic skills as how to record interviews and observations. The
group discussed the problem of recording for those who had difficulty writing, and
came up with other ways of logging information. They decided they would each
interview a given number of residents about their food shopping and eating habits,
and that they would also observe people’s buying patterns in neighborhood stores
and fast food restaurants. They set a deadline for finishing their data gathering, and
went off to learn as much as they could about the food shopping and eating behavior
of people in their neighborhoods.

As the data came in, it became clear that people in the neighborhoods would be
happy to buy more nutritious food, but it was simply too difficult to get it. They
either had to travel long distances on the bus, since many didn’t have cars, or find
time after a long work day to drive to another, often unfamiliar, part of the city and
spend an evening shopping. Many also had the perception that healthy food was
much more expensive, and that they couldn’t afford it.

Ultimately, the data that the group of neighborhood residents had gathered went
into a report written by Della and other professionals on the hospital staff. The
report helped to convince the city to provide incentives to supermarket chains to
locate in neighborhoods where healthy food was hard to find.

The group that Della had recruited had become a community-based participatory
research team. Working with Della and others at the hospital, they helped to
determine what kind of information would be useful, and then learned how to
gather it. Because they were part of the community, they were trusted by residents;
because they shared other residents’ experience, they knew what questions to ask
and fully understood the answers, as well as what they were seeing when they
observed.

This section is about participatory action research: what it is, why it can be effective,
who might use it, and how to set up and conduct it.

What Is Community-Based Participatory Research?


In simplest terms, community-based participatory research (for convenience, we’ll
primarily call it CBPR for the rest of this section) enlists those who are most affected
by a community issue – typically in collaboration or partnership with others who
have research skills – to conduct research on and analyze that issue, with the goal of
devising strategies to resolve it. In other words, community-based participatory
research adds to or replaces academic and other professional research with research
done by community members, so that research results both come from and go
directly back to the people who need them most and can make the best use of them.

There are several levels of participatory research. At one end of the spectrum is
academic or government research that nonetheless gathers information directly from
community members. The community members are those most directly affected by
the issue at hand, and they may (or may not) be asked for their opinions about what
they need and what they think will help, as well as for specific information. In that
circumstance, the community members don’t have any role in choosing what
information is sought, in collecting data, or in analyzing the information once it’s
collected. (At the same time, this type of participatory research is still a long step
from research that is done at second or third hand, where all the information about a
group of people is gathered from statistics, census data, and the reports of observers
or of human service or health professionals.)

At another level, academic or other researchers recruit or hire members of an
affected group – often because they are familiar with and known by the community
– to collect data. In this case, the collectors may or may not also help to analyze the
information that they have gathered.

A third level of participatory research has academic, government, or other
professional researchers recruiting members of an affected group as partners in a
research project. The community members work with the researchers as colleagues,
participating in the conception and design of the project, data collection, and data
analysis. They may participate as well in reporting the results of the project or study.
At this level, there is usually – though not always – an assumption that the research
group is planning to use its research to take action on an issue that needs to be
resolved.

The opposite end of the participatory research continuum from the first level
described involves community members creating their own research group –
although they might seldom think of it as such – to find out about and take action on
a community issue that affects them directly.

In this section, we’ll concern ourselves with the latter two types of participatory
research – those that involve community members directly in planning and carrying
out research, and that lead to some action that can influence the issue studied. This is
what is often defined as community-based participatory research. There are certainly
scenarios where other types of participatory research are more appropriate, or easier
to employ in particular situations, but it’s CBPR that we’ll discuss here.

Employing CBPR for purposes of either evaluation or long-term change can be a
good idea for reasons of practicality, personal development, and politics.

On the practical side, community-based participatory research can often get you
the best information possible about the issue, for at least the following reasons:
 People in an affected population are more liable to be willing to talk and give
straight answers to researchers whom they know, or whom they know to be
in circumstances similar to their own, than to outsiders with whom they have
little in common.
 People who have actually experienced the effects of an issue – or an
intervention – may have ideas and information about aspects of it that
wouldn’t occur to anyone studying it from outside. Thus, action researchers
from the community may focus on elements of the issue, or ask questions or
follow-ups, that outside researchers wouldn’t, and get crucial information
that other researchers might find only by accident, or perhaps not at all.
 People who are deeply affected by an issue, or participants in a program, may
know intuitively, or more directly, what’s important when they see or hear it.
What seems an offhand comment to an outside researcher might reveal its
real importance to someone who is part of the same population as the person
who made the comment.
 Action researchers from the community are on the scene all the time. Their
contact both with the issue or intervention and with the population affected
by it is constant, and, as a result, they may find information even when
they’re not officially engaged in research.
 Findings may receive more community support because community members
know that the research was conducted by people in the same circumstances as
their own.

When you’re conducting an evaluation, these advantages can provide you with a
more accurate picture of the intervention or initiative and its effects. When you’re
studying a community issue, all these advantages can lead to a true understanding
of its nature, its causes, and its effects in the community, and can provide a solid
basis for a strategy to resolve it. And that, of course, is the true goal of community
research – to identify and resolve an issue or problem, and to improve the quality of
life for the community as a whole.

In the personal development sphere, CBPR can have profound effects on the
development and lives of the community researchers, particularly when those who
benefit from an intervention, or who are affected by an issue, are poor or otherwise
disadvantaged, lack education or basic skills, and/or feel that the issue is far beyond
their influence. By engaging in research, they not only learn new skills, but see
themselves in a position of competence, obtain valuable knowledge and information
about a subject important to them, and gain the power and the confidence to exercise
control over this aspect of their lives.

Two common political results of the CBPR process:


 Through community-based participatory research, citizens can take more
control of the direction of their communities
 Community researchers – especially those who are poor or otherwise
disadvantaged – come to be viewed differently by professionals and those in
positions of power. They have vital information, and the ability to use it, and
thus become accepted as contributing members of the community, rather than
as voiceless observers or dependents. They have gained a voice, because they
understand that they have something to say. Furthermore, the research and
other skills and the self-confidence that people acquire in a community-based
participatory research process can carry over into other parts of their lives,
giving them the ability and the assurance to understand and work to control
the forces that affect them. Research skills, discipline, and analytical thinking
often translate into job skills, making participatory action researchers more
employable. Most important, people who have always seen themselves as
bystanders or victims gain the capacity to become activists who can transform
their lives and communities.

Community-based participatory research has much in common with the work of the
Brazilian political and educational theoretician and activist, Paulo Freire. In Freire’s
critical education process, oppressed people are encouraged to look closely at their
circumstances, and to understand the nature and causes of their oppression.
oppression. Freire believes that with the right tools – knowledge and critical
thinking ability, a concept of their own power, and the motivation to act – they can
undo that oppression. Many people see this as the “true” and only reason for
supporting action research, but we see many other reasons for doing so, and list
some of them both above and below.

Action research is often used to consider social problems – welfare reform or
homelessness, for example – but can be turned to any number of areas with positive
results.
Some prime examples:
 The environment. It was a community member who first asked the questions
and started the probe that uncovered the fact that the Love Canal
neighborhood in Niagara Falls, NY, had been contaminated by the dumping
of toxic waste.
 Medical/health issues. Action research can be helpful in both undeveloped
and developed societies in collecting information about health practices,
tracking an epidemic, or mapping the occurrence of a particular condition, to
name three of numerous possibilities.

 Political and economic issues. Citizen activists often do their own research to
catch corrupt politicians or corporations, trace campaign contributions, etc.

Just as it can be used for different purposes, CBPR can be structured in different
ways. The differences have largely to do with who comes up with the idea in the
first place, and with who controls, or makes the decisions about, the research. Any of
these possibilities might involve a collaboration or partnership, and a community
group might well hire or recruit as a volunteer someone with research skills to help
guide their work.
Some common scenarios:
 Academic or other researchers devise and construct a study, and employ
community people as data collectors and/or analysts.
 A problem or issue is identified by a researcher or other entity (a human
service organization, for instance), and community people are recruited to
engage in research on it and develop a solution.
 A community based organization or other group gathers community people
to define and work on a community issue of their choosing, or to evaluate a
community intervention aimed at them or people similar to them.
 A problem is identified by a community member or group, others who are
affected and concerned gather around to help, and the resulting group sets
out to research and solve the problem on its own.

Why would you use Community-Based Participatory Research?


We’ve already alluded to a number of reasons why CBPR could be useful in
evaluating a community intervention or initiative or addressing a community issue.
We’ll repeat them briefly here, and introduce others as well.
Action research yields better and more nearly complete and accurate information
from the community.
 People will speak more freely to peers, especially those they know personally,
than to strangers.

 Researchers who are members of the community know the history and
relationships surrounding a program or an issue, and can therefore place it in
context.
 People experiencing an issue or participating in an intervention know what’s
important to them about it – what it disrupts, what parts of their lives it
touches, how they have changed as a result, etc. That knowledge helps them
to formulate interview questions that get to the heart of what they – as
researchers – are trying to learn.

Involving the community in research makes it more likely that community needs will be met.


Action research makes a reasonable resolution or accurate evaluation more probable
in two ways. First, by involving the people directly affected by the issue or
intervention, it brings to bear the best information available about what’s actually
happening. Second, it encourages community buy-in and support for whatever
plans or interventions are developed. If people are involved in the planning and
implementation of solutions to community issues, they’ll feel they own the process,
and work to make it successful. It’s equitable, philosophically consistent with the
values of most grassroots and community-based organizations, and practical in that
it usually yields the best results.

Action research, by involving community members, creates more visibility for the
effort in the community.
Researchers are familiar to the community, will talk about what they’re doing (as
will their friends and relatives), and will thus spread the word about the effort.
Community members are more likely to accept the legitimacy of the research and
buy into its findings if they know it was conducted by people like themselves,
perhaps even people they know.
Citizens are more apt to trust both the truthfulness and the motives of their friends
and neighbors than those of outsiders.

Action research trains citizen researchers who can turn their skills to other
problems as well.
People who discover the power of research to explain conditions in their
communities, and to uncover what’s really going on, realize that they can conduct
research in other areas than the one covered by their CBPR project. They often
become community activists, who work to change the conditions that create
difficulty for them and others. Thus, the action research process may benefit the
community not only by addressing particular issues, but by – over the long term –
creating a core of people dedicated to improving the overall quality of its citizens’
lives.

Involvement in CBPR changes people’s perceptions of themselves and of what
they can do.
An action research project can have profound effects on community researchers who
are disadvantaged economically, educationally, or in other ways. It can contribute to
their personal development, help them develop a voice and a sense of their power to
change things, and vastly expand their vision of what’s possible for them and for the
community. Such an expanded vision leads to an increased willingness to take
action, and to an increase in their control over their lives.

Skills learned in the course of action research carry over into other areas of
researchers’ lives.
Both the skills and the confidence gained in a CBPR project can be transferred to
employment, education, child-rearing, and other aspects of life, greatly improving
people’s prospects and well-being.

A participatory action research process can help to break down racial, ethnic, and
class barriers.
CBPR can remove barriers in two ways. First, action research teams are often
diverse, crossing racial, ethnic, and class lines. As people of different backgrounds
work together, this encourages tolerance and friendships, and often removes the fear
and distrust. In addition, as integral contributors to a research or evaluation effort,
community researchers interact with professionals, academics, and community
leaders on equal footing. Once again, familiarity breaks down barriers, and allows
all groups to see how much the others have to offer. It also allows for people to
understand how much they often misjudge others based on preconceptions, and to
begin to consider everyone as an individual, rather than as “one of those.”

A member of the Changes Project, a CBPR project that explored the impact of
welfare reform on adult literacy and ESOL (English as a Second or Other Language)
learners, wrote in the final report: “What I learned from working in this project first
off is, none of us are so great that change couldn’t help us be better people... I
walked into the first meeting thinking I was the greatest thing to hit the pike and
found that I, too, had some prejudices that I was not aware of. I thought that no one
could ever tell me I wasn’t the perfect person to sit in judgment of others because I
never had a negative thought or prejudiced bone in my body. Well, lo and behold, I
did, and seeing it through other people’s eyes I found that I, too, had to make some
changes in my opinions.”

Action research helps people better understand the forces that influence their
lives.
Just as Paulo Freire found in his work in Latin America, community researchers,
sometimes as a direct result of their research, and sometimes as a side benefit, begin
to analyze and understand how larger economic, political, and social forces affect
their own lives. This understanding helps them to use and control the effects of those
forces, and to gain more control over their own destinies.

Community-based action research can move communities toward positive social
change.
All of the rationales described above for employing CBPR act to restructure the
relationships and the lines of power in a community. They contribute to the
mutual respect and understanding among community members and the deep
understanding of issues that in turn lead to significant and positive social change.

Who should be involved in community-based participatory research?


The short answer here is people from all sectors of the community, but there are
some specific groups that, under most circumstances, are important to include.
 People most affected by the issue or intervention under study. These are the
people whose inclusion is most important to a participatory effort – both
because it’s their inclusion that makes it participatory, and because of what
they bring to it. These folks, as we discussed earlier, are closest to the
situation, have better access to the population most concerned, and may have
insights others wouldn’t have. In addition, their support is crucial to the
planning and implementation of an intervention or initiative. That support is
much more likely to be forthcoming if they’ve been involved in research or
evaluation.
 Other members of the affected population. People who may not themselves
be directly affected by the issue or intervention, but who are trusted by the
affected population, can be useful members of a CBPR team.

A businessman from the Portuguese community in a small city was an invaluable
member of an action research team examining the need for services in that
community. He was quite successful, had graduated from college in the US, and
needed no services himself, but his fluency in Portuguese, his credentials as a trusted
member of the community, and his understanding of both the culture of the
Portuguese residents and the culture of health and human service workers brought a
crucial dimension to data gathering, analysis, and general information about the
community.
 Decision makers. Involving local officials, legislators, and other decision
makers from the very beginning can be crucial, both in securing their support,
and in making sure that what they support is in fact what’s needed. If they’re
part of the team, and have all the information that it gathers, they become
advocates not just for addressing the issue, but for recognizing and
implementing the solution or intervention that best meets the actual needs of
the population affected.
 Academics with an interest in the issue or intervention in question.
Academics who have studied the issue often have important information that
can help a CBPR team better understand the data it collects. They usually
have research skills as well, and can help to train other team members. At the
same time, they can learn a great deal from community-based researchers –
about the community and communities in general, about approaching people,
about putting assumptions and preconceptions aside – and perhaps, as a
result, increase the effectiveness of their own research.

It’s important that they be treated, and treat everyone else, as equals. Everyone on a
team has to view other members as colleagues, not as superiors or inferiors, or as
more or less competent or authoritative. This can be difficult on both sides – i.e.
making sure that officials, academics, or other professionals don’t look down on
community members, and that community members don’t automatically defer to (or
distrust) them. It may take some work to create an environment in which everyone
feels equally respected and valued, but it’s worth the effort. Both the quality of the
research and the long-term learning by team members will benefit greatly from the
effort. (There are some circumstances where actual equality among all team
members is not entirely possible. When community members are hired as
researchers, for instance, the academic or other researcher who pays the bills has to
exercise some control over the process. That doesn’t change the necessity of all team
members being viewed as colleagues and treated with respect.)
 Health, human service, and public agency staff and volunteers. Like the
previous two groups, these people have both a lot to offer and – often – a lot
to learn that will make them more sensitive and more effective at their jobs in
the long run. They may have a perspective on issues in the community that
residents lack because of their closeness to the situation. At the same time,
they may learn more about the lives of those they work with, and better
understand their circumstances and the pressures that shape their lives.
 Community members at large. This category brings us back to the statement
at the beginning of this portion of the section that members of all sectors of
the community should have the opportunity to be involved. That statement
covers the knowledge, skills, and talent that different people bring to the
endeavor; the importance of buy-in by all sectors of the community if any
long-term change is to be accomplished; and what team members learn and
bring back to their families, friends, and neighbors as a result of their
involvement.

When should you employ community-based participatory research?


There are times when action research may not be appropriate, and there are times
when it’s the best choice. How do you decide?
One criterion is the amount of time you have to do the research on the issue or
intervention. Action research may take longer than traditional methods, because of
the need for training, and because of the time it often takes for community
researchers to adjust to the situation (i.e. to realize that their opinions and intuitions
are important, even if they may not always be right, and that their conclusions are
legitimate). If your time is limited, CBPR may not be the right option
Another consideration is the type of research that’s necessary. Action research lends
itself particularly well to qualitative research. If you’re obligated to deliver
complicated, quantitative results to a funder, for instance, you may want to depend
on professional researchers or evaluators. Most CBPR isn’t oriented toward
producing results couched in terms of statistical procedures. (This isn’t to say that
action research teams can’t do quantitative research, but simply that it requires more
training, and therefore time, and may require an outside source or an academic team
member to crunch the numbers.)

Qualitative Research
Relies on information that can’t be expressed in mathematical terms – descriptions,
opinions, anecdotes, the comments of those affected by the issue under study, etc.
The results of qualitative research are usually expressed as a narrative or set of
conclusions, with the analysis backed up by quotes, observation notes, and other
non-numerical data.

(Almost anything can be expressed in terms of numbers in some way. Interviewers,


for instance, can count the number of references to a particular issue, or even record
the number of times that an interviewee squirmed in his chair. Qualitative research,
however, relies on elements that can’t be adequately – or, in many cases, at all –
described numerically. The number of squirms may say something about how
nervous an interviewee is, or it may indicate that he has to go to the bathroom. The
interviewer will probably be able to tell the difference, but the numbers won’t.)

Quantitative Research
Depends on numbers – the number of people served by an intervention, for
instance, the number that completed the program, the number that achieved some
predetermined outcome (lowered blood pressure, employment for a certain period,
citizenship), scores on academic or psychological or physical tests, etc. These
numbers are usually then processed through one or more statistical operations to tell
researchers exactly what they mean. (Some statistics may, for instance, help
researchers determine precisely what part of an intervention was responsible for a
particular behavior change.)
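As a purely illustrative sketch - the records, field names, and threshold below are all invented - quantitative data of this kind are typically reduced to summary figures before any statistical testing is applied:

```python
from statistics import mean

# Hypothetical participant records: did each person complete the
# program, and what was their systolic blood pressure at exit?
participants = [
    {"completed": True,  "exit_bp": 128},
    {"completed": True,  "exit_bp": 142},
    {"completed": False, "exit_bp": 150},
    {"completed": True,  "exit_bp": 131},
]

# Two common summary figures: the completion rate, and the mean
# outcome measure among those who completed.
completed = [p for p in participants if p["completed"]]
completion_rate = len(completed) / len(participants)
avg_exit_bp = mean(p["exit_bp"] for p in completed)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Average exit blood pressure (completers): {avg_exit_bp:.1f}")
```

Figures like these are the raw material that statistical procedures then work on; on their own they say nothing about *why* the numbers came out as they did, which is where qualitative information comes in.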

It may seem that quantitative research is more accurate, but that’s not always the
case, especially when the research deals with human beings, who don’t always do
what you expect them to. It’s often important to get other information in order to
understand exactly what’s going on.

Furthermore, sometimes there aren’t any numbers to work with. The Changes
Project was looking at the possible effects of a change in the welfare system on adult
learners. The project was conducted very early in the change process, in order to try
to head off the worst consequences of the new system. There was very little
quantitative information available at that point, and most of the project involved
collecting information about the personal experiences of learners on welfare.
In other words, neither quantitative nor qualitative methods are necessarily “better,”
but sometimes one is better than the other for a specific purpose. Often, a mix of the
two will yield the richest and most accurate information.

It’s probably best and most effective to use action research when:
 There’s time to properly train and acclimate community researchers
 The research and analysis necessary relies on interviews, experience,
knowledge of the community, and an understanding of the issue or
intervention from the inside, rather than on academic skills or an
understanding of statistics (unless you have the time and resources to teach
those skills or the team includes someone who has them)
 You need an entry to the community or group from whom the information is
being gathered
 You’re concerned with buy-in and support from the community
 Part of the purpose of using CBPR is to have an effect on and empower the
community researchers
 Part of the purpose of using CBPR is to set the stage for long-term social
change

How do you institute and carry out community-based participatory research?


Once you’ve decided to conduct an action research project, there are a number of
steps to take to get it up and running. You have to find and train the participants;
determine exactly what information you’re looking for and how to go about finding
it; plan and carry out your research; analyze and report on your findings; translate
the findings into recommendations; take, or bring about, action based on those
recommendations; evaluate the process; and follow up.

What follows assumes an ideal action research project with a structure, perhaps one
initiated by a health or human service organization. A community group that comes
together out of common interest would probably recruit by having people already
involved pull in their friends, and probably wouldn't do any formal training unless it
invited a researcher to help specifically in that way. The nature of your group
will help you determine how – or whether – you follow each of the steps below.

Recruit a Community Research Team


How you recruit a team will depend on the purpose of the project as well as on who
might be most effective in gaining and analyzing information. A team may already
exist, as in the example at the beginning of this section. Or a team may simply be a
group that gets together out of common concerns. Many CBPR projects aim for a
diverse team, with the idea that a mix of people will both provide the broadest range
of benefit and allow for the greatest amount of personal learning for team members.
Other projects may specifically draw only from a particular population – a language
minority, those served by a certain intervention, those experiencing a particular
physical condition.

It often makes sense for at least half the team to be composed of people directly
affected by the issue or intervention in question. Those numbers both assure good
contact with the population from which information needs to be gathered, and
make it less likely that community researchers will be overwhelmed or intimidated
by other (professional) team members or by the task.

Recruiting from within an organization or program may be relatively simple,
because the pool of potential researchers is somewhat of a captive audience: you
know where to find them, and you already have a relationship with them.
Recruiting from a more general population, on the other hand, requires attention to
some basic rules of communication.

 Use language that your audience can understand, whether that means presenting
your message in a language other than English, or presenting it in simple,
clear English without any academic or other jargon.
 Use the communication channels that your audience is most likely to pay attention to.
An announcement in the church that serves a large proportion of your
population, a program newsletter, or word-of-mouth might all be good
channels by which to reach a particular population.
 Be culturally sensitive and appropriate. Couch your message in a form that is not
only respectful of your audience’s culture, but that also speaks to what is
important in that culture.
 Go where your audience is. Meet with groups of people from the population you
want to work with, put out information in their neighborhoods or meeting
places. Don’t wait for them to come to you.
Given all this, the best recruitment method is still face-to-face contact by someone
familiar to the person being recruited.

Orient and Train the Research Team


Orientation and training may be part of the same process, or they might be separate.
The two have different purposes. Orientation is meant to give people an overall
picture of what is expected, and a chance to ask questions.
Orientation might include:
 Introductions all around, and an introductory activity to help team members
get to know one another
 Explanation of community-based participatory research, and basic
information about this project or evaluation
 Participants’ time commitment and the support available to them, if any. Are
child care, transportation, or other support services provided or paid for?
 An opportunity to ask questions, or to discuss any part of the project or
evaluation that team members don’t understand or agree with
Especially if the team is diverse, and especially if that diversity is one of education
and research experience, an important aspect of the orientation is to start building
the team, and to ensure that everyone sees it as a team of colleagues, rather than as
one group leading or dominating or – even worse – simply tolerating another. Each
person brings different skills and experience to the effort and has something to teach
everyone else. Emphasizing that from the beginning may be necessary, not only to
keep more educated members from dominating, but also to encourage less educated
members not to be afraid to ask questions and give their opinions.

Training is meant to pass on specific information and skills that people will need in
order to carry out the work of the research. There are as many models for training as
there are teams to be trained. As noted above, orientation might serve as all or part
of an introductory training session. Training can take place all at once – in one or
several multi-hour sessions on consecutive days – or over the whole period of the
project, with each training piece leading to the activity that it concerns. It might be
conducted by one person – who, in turn, could be someone from inside the
organization or an outside facilitator – by a series of experts in different areas, or by
the team members themselves. (In this last case, team members might, for instance,
determine what they need to know, and then decide on and implement an
appropriate way to learn it.)
Regardless of how it’s done, here are some general guidelines for training that are
usually worth following:
 Find a comfortable space to hold the training
 Provide, or make sure that people bring, food and drink
 Take frequent, short breaks. It’s better for people’s concentration to take a three-
minute break every half hour than a 20-minute break every three hours
 Structure the space for maximum participation and interaction - chairs in a circle,
room to move around, etc.
 Vary the ways in which material is presented. People learn in a variety of ways –
by hearing, by seeing, by discussion, by example (watching others), and by
doing. The more of these methods you can include, the more likely you are to
hold people’s attention and engage everyone on the team.
 Use the training to build your team. Training is a golden opportunity for people
to get to know and trust one another, and to absorb the guiding principles for
the work.

The actual content of the training will, of course, depend on the project you’re
undertaking, but general areas should probably include:
 Necessary research skills. These might include interview techniques, Internet
searching, constructing a survey, and other basic research and information-
gathering methods.
 Important information about the community or the intervention in question.
 Meeting and negotiation skills. Many of the people on your team may not have
had the experience of participating in numerous meetings. They need time
and support both to develop meeting skills – following discussion, knowing
when it’s okay to interrupt, feeling confident enough to express their opinions
– and to become comfortable with the meeting process.
 Preparing a report. This doesn’t necessarily mean drafting a formal document.
Depending upon the team members, a flow chart, a slide show, a video, or a
collage might be informative and powerful ways to convey research results,
as might oral testimony or a sound recording.
 Making a presentation. Knowing what to expect, and learning how to make a
clear and cogent presentation can make the difference between having your
findings and recommendations accepted or rejected.

Determine the questions the Research or Evaluation is meant to answer


The questions you choose to answer will shape your research, and there are many
possible questions - and types of answers - whether you're conducting research on a
community issue or evaluating an intervention.

An evaluation can focus on process: What is actually being done, and how does that
compare with what the intervention or initiative set out to do? It can focus on
outcomes: Is the end result of the intervention what you intended it to be? Or it can
try to look at both, and to decide whether the process in fact works to gain the
desired outcome. An evaluation may also aim to identify specific elements of the
process that have to be changed, or to identify a whole new process to replace one
that doesn’t seem to be working.

Research on a community issue also may be approached in a number of ways. You
may simply be trying to find out whether a certain condition exists in your
community, or to what extent it exists. You may be concerned with how, or how
much, it affects the community, or what parts of the community it affects. You may
be seeking a particular outcome, and the research questions you ask may be
designed to help you reach that outcome.

Plan and Structure your Research Activity


Given your time constraints, the capacity of your team, and the questions you’re
considering, plan your research.
Your plan should include:
 The kind and amount of information-gathering that best suits your project
(e.g., interviews, library research, surveys)
 Who will be responsible for what
 The timeline – i.e., deadlines for completing each phase of the plan
 How and by whom the information will be analyzed
 What the report of the research or evaluation will look like
 When, how, and to whom the report will be presented
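For teams that like to keep the plan explicit, the elements above can be sketched as a simple checklist structure. Everything here - the phase names, owners, and dates - is hypothetical, chosen only to show the shape such a plan might take:

```python
from datetime import date

# A minimal, hypothetical research plan: each phase records who is
# responsible for it and the deadline for completing it.
plan = [
    {"phase": "Design interview questions", "owner": "full team",
     "due": date(2024, 3, 1)},
    {"phase": "Conduct interviews", "owner": "community researchers",
     "due": date(2024, 4, 15)},
    {"phase": "Analyze responses", "owner": "analysis subgroup",
     "due": date(2024, 5, 10)},
    {"phase": "Draft and present report", "owner": "full team",
     "due": date(2024, 6, 1)},
]

# Listing the phases in deadline order gives a quick timeline check
# that the team can review at each meeting.
for step in sorted(plan, key=lambda s: s["due"]):
    print(f'{step["due"]}: {step["phase"]} ({step["owner"]})')
```

A plan written down this plainly - whatever form it takes - makes it much easier to notice when a phase has slipped, and who needs support to catch up.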

Anticipate and prepare contingency plans for problems that might arise
An action research group, like any other, can have internal conflicts, as well as
conflicts with external forces. People may disagree, or worse; some people may drop
out, or may not do what they promised; people may not understand, or may choose
not to follow the procedures you’ve agreed on. There will need to be guidelines to
deal with each of these and other potential pitfalls.

Implement your Research Plan
Now that you've completed your planning, it's time to carry it out.

Prepare and Present Your Report and Recommendations


The report, as explained previously, may be a written document, or may be in some
alternative form. If it’s an evaluation, it might be presented in one way to the staff of
the intervention being evaluated, and in another to funders or the community,
depending upon your purposes.
Some possibilities for presentation include:
 A press conference
 A community presentation
 A newspaper or newsletter article
 A written report to funders and/or other interested parties

Take, or Try to Bring About, Appropriate Action on the Issue or Intervention


Action can range from adjusting a single element of an intervention as a result of an
evaluation, to writing letters to the editor, advocating with legislators, taking direct
action (a demonstration, a lawsuit), and starting a community initiative that grows
into a national movement. In most cases, a CBPR effort is meant to lead to some kind
of action, even if that action is simply further research.

Follow Up
An action research project doesn’t end with the presentation, or even with action.
The purpose of the research often has as much to do with the learning of the team
members as it does with research results. Even where that’s not the case, the skills
and methods that action researchers learn need to be cemented, so they can carry
over to other projects.
 Evaluate the research process. This should be a collaborative effort by all team
members, and might also include others (those who actually implement an
evaluated intervention, for instance). Did things go according to plan? What
were the strengths of the process? What were its weaknesses? Was the
training understandable and adequate? What other support would have been
helpful? What parts of the process should be changed?
 Identify benefits to the community or group that came about (or may come about) as a
result of the research process. These may have to do with action, with making
the community more aware of particular issues, or with creating more
community activists.
 Identify team members’ learning and perceptions of changes in themselves. Some
areas to consider are basic and other academic skills; public speaking;
meeting skills; self-confidence and self-esteem; ability to influence the world
and their own lives; and self-image (seeing themselves as proactive, rather
than acted upon, for example).
 Maintain gains by keeping researchers involved. There are a number of ways to
keep the momentum of a CBPR team going, including starting another
project, if there’s a reason to do so; encouraging team members to be active on
other issues they care about (and to suggest some potential areas, and
perhaps make introductions that make it easier for them to do so); keeping
the group together as a (paid) research consortium; or consulting, as a group,
with other organizations interested in conducting action research.
CBPR is not always the right choice for an initiative or evaluation, but it’s always
worthy of consideration. If you can employ it in a given situation, the rewards can be
great.
Community-based participatory research can serve many purposes. It can supply
accurate and appropriate information to guide a community initiative or to evaluate
a community intervention. It can secure community buy-in and support for that
initiative or intervention. It can enhance participants’ personal development and
opportunities. It can empower those who are most affected by conditions or issues in
the community to analyze and change them. And, perhaps most important, it can
lead to long-term social change that improves the quality of life for everyone.

In Summary
Community-based participatory research is a process conducted by and for the
people most affected by the issue or intervention being studied or evaluated. It has
multiple purposes, including the empowerment of the participants, the gathering of
the best and most accurate information possible, garnering community support for
the effort, and social change that leads to the betterment of the community for
everyone.
As with any participatory process, CBPR can take a great deal of time and effort. The
participants are often economically and educationally disadvantaged, lacking basic
skills and other resources. Thus, training and support – both technical and personal
– are crucial elements in any action research process. With proper preparation,
however, participatory action research can yield not only excellent research results,
but huge benefits for the community over the long run.

CHAPTER 13

PARTICIPATORY EVALUATION

Experienced community builders know that involving stakeholders - the people
directly connected to and affected by their projects - in their work is tremendously
important. It gives them the information they need to design, and to adjust or
change, what they do to best meet the needs of the community and of the particular
populations that an intervention or initiative is meant to benefit. This is particularly
true in relation to evaluation.

As we have previously discussed, community-based participatory research can be
employed in describing the community, assessing community issues and needs,
finding and choosing best practices, and/or evaluation. We consider the topic
of participatory evaluation important enough to give it a section of its own, and to
show how it fits into the larger participatory research picture.

It's a good idea to build stakeholder participation into a project from the
beginning. One of the best ways to choose the proper direction for your work is to
involve stakeholders in identifying real community needs, and the ways in which a
project will have the greatest impact. One of the best ways to find out what kinds of
effects your work is having on the people it's aimed at is to include those on the
receiving end of information or services or advocacy on your evaluation team.

Often, you can see most clearly what's actually happening through the eyes of those
directly involved in it - participants, staff, and others who are involved in taking part
in and carrying out a program, initiative, or other project. Previously, we have
discussed how you can involve those people in conducting research on the
community and choosing issues to address and directions to go in. This section is
about how you can involve them in the whole scope of the project, including its
evaluation, and how that's likely to benefit the project's final outcomes.

What Is Participatory Evaluation?
When most people think of evaluation, they think of something that happens at the
end of a project - that looks at the project after it's over and decides whether it was
any good or not. Evaluation actually needs to be an integral part of any project from
the beginning. Participatory evaluation involves all the stakeholders in a project -
those directly affected by it or by carrying it out - in contributing to the
understanding of it, and in applying that understanding to the improvement of the
work.

Participatory evaluation, as we shall see, isn't simply a matter of asking stakeholders
to take part. Involving everyone affected changes the whole nature of a project from
something done for a group of people or a community to a partnership between the
beneficiaries and the project implementers. Rather than powerless people who are
acted on, beneficiaries become the copilots of a project, making sure that their real
needs and those of the community are recognized and addressed. Professional
evaluators, project staff, project beneficiaries or participants, and other community
members all become colleagues in an effort to improve the community's quality of
life.

This approach to planning and evaluation isn't possible without mutual trust and
respect. These have to develop over time, but that development is made more
probable by starting out with an understanding of the local culture and customs -
whether you're working in a developing country or in an American urban
neighborhood. Respecting individuals and the knowledge and skills they have will
go a long way toward promoting long-term trust and involvement.

The other necessary aspect of any participatory process is appropriate training for
everyone involved. Some stakeholders may not even be aware that project research
takes place; others may have no idea how to work alongside people from different
backgrounds; and still others may not know what to do with evaluation results once
they have them. We'll discuss all of these issues - stakeholder involvement,
establishing trust, and training - as the section progresses.
The real purpose of an evaluation is not just to find out what happened, but to use
the information to make the project better.

In Order To Accomplish This, Evaluation Should Include Examining At Least
Three Areas:
 Process. The process of a project includes the planning and logistical activities
needed to set up and run it. Did we do a proper assessment beforehand so we
would know what the real needs were? Did we use the results of the
assessment to identify and respond to those needs in the design of the
project? Did we set up and run the project within the timelines and other
structures that we intended? Did we involve the people we intended to? Did
we have or get the resources we expected? Were staff and others trained and
prepared to do the work? Did we have the community support we expected?
Did we record what we did accurately and on time? Did we monitor and
evaluate as we intended?
 Implementation. Project implementation is the actual work of running it. Did
we do what we intended? Did we serve or affect the number of people we
proposed to? Did we use the methods we set out to use? Was the level of our
activity what we intended (e.g., did we provide the number of hours of
service we intended to)? Did we reach the population(s) we aimed at? What
exactly did we provide or do? Did we make intentional or unintentional
changes, and why?
 Outcomes. The project's outcomes are its results - what actually happened as a
consequence of the project's existence. Did our work have the effects we
hoped for? Did it have other, unforeseen effects? Were they positive or
negative (or neither)? Do we know why we got the results we did? What can
we change, and how, to make our work more effective?

Many who write about participatory evaluation combine the first two of these areas
into process evaluation, and add another area - impact evaluation - alongside
outcome evaluation. Impact evaluation looks at the long-term results of a project,
whether the project continues, or does its work and ends.
Rural development projects in the developing world, for example, often exist simply
to pass on specific skills to local people, who are expected to then both practice those
skills and teach them to others. Once people have learned the skills - perhaps
particular cultivation techniques, or water purification - the project ends. If in five or
ten years, an impact evaluation shows that the skills the project taught are not only
still being practiced, but have spread, then the project's impact was both long-term
and positive.
In order for these areas to be covered properly, evaluation has to start at the very
beginning of the project, with assessment and planning.

In a Participatory Evaluation, Stakeholders should be involved in:


 Naming and framing the problem or goal to be addressed
 Developing a theory of practice (process, logic model) for how to achieve
success
 Identifying the questions to ask about the project and the best ways to ask
them - these questions will identify what the project means to do, and
therefore what should be evaluated

What's the real goal, for instance, of a program to introduce healthier foods in school
lunches? It could be simply to convince children to eat more fruits, vegetables, and
whole grains. It could be to get them to eat less junk food. It could be to encourage
weight loss in kids who are overweight or obese. It could simply be to educate
them about healthy eating, and to persuade them to be more adventurous eaters.
The evaluation questions you ask both reflect and determine your goals for the
program. If you don't measure weight loss, for instance, then clearly that's not what
you're aiming at. If you only look at an increase in children's consumption of
healthy foods, you're ignoring the fact that if they don't cut down on something else
(junk food, for instance), they'll simply gain weight. Is that still better than not
eating the healthy foods? You answer that question by what you choose to examine
- if it is better, you may not care what else the children are eating; if it's not, then you
will care.
 Collecting information about the project
 Making sense of that information
 Deciding what to celebrate, and what to adjust or change, based on
information from the evaluation

Why would (and why wouldn't) you use participatory evaluation?


Why would you use participatory evaluation? The short answer is that it's often the
most effective way to find out what you need to know, both at the beginning of and
throughout the course of a project. In addition, it carries benefits for both individual
participants and the community that other methods don't.

Some of the major advantages of participatory evaluation:


 It gives you a better perspective on both the initial needs of the project's
beneficiaries, and on its ultimate effects. If stakeholders, including project
beneficiaries, are involved from the beginning in determining what needs to
be evaluated and why - not to mention what the focus of the project needs to
be - you're much more likely to aim your work in the right direction, to

~ 133 ~
correctly determine whether your project is effective or not, and to
understand how to change it to make it more so.
 It can get you information you wouldn't get otherwise. When project
direction and evaluation depend, at least in part, on information from people
in the community, that information will often be more forthcoming if it's
asked for by someone familiar. Community people interviewing their friends
and neighbors may get information that an outside person wouldn't be
offered.
 It tells you what worked and what didn't from the perspective of those
most directly involved - beneficiaries and staff. Those implementing the
project and those who are directly affected by it are most capable of sorting
out the effective from the ineffective.
 It can tell you why something does or doesn't work. Beneficiaries are often
able to explain exactly why they didn't respond to a particular technique or
approach, thus giving you a better chance to adjust it properly.
 It results in a more effective project. For the reasons just described, you're
much more apt to start out in the right direction, and to know when you need
to change direction if you haven't. The consequence is a project that addresses
the appropriate issues in the appropriate way, and accomplishes what it sets
out to do.
 It empowers stakeholders. Participatory evaluation gives those who are often
not consulted - line staff and beneficiaries particularly - the chance to be full
partners in determining the direction and effectiveness of a project.
 It can provide a voice for those who are often not heard. Project beneficiaries
are often low-income people with relatively low levels of education, who
seldom have - and often don't think they have a right to - the chance to speak
for themselves. By involving them from the beginning in project evaluation,
you assure that their voices are heard, and they learn that they have the
ability and the right to speak for themselves.
 It teaches skills that can be used in employment and other areas of life. In
addition to the development of basic skills and specific research capabilities,
participatory evaluation encourages critical thinking, collaboration, problem-
solving, independent action, meeting deadlines...all skills valued by
employers, and useful in family life, education, civic participation, and other
areas.
 It bolsters self-confidence and self-esteem in those who may have little of
either. This category can include not only project beneficiaries, but also others
who may, because of circumstance, have been given little reason to believe in
their own competence or value to society. The opportunity to engage in a
meaningful and challenging activity, and to be treated as a colleague by
professionals, can make a huge difference for folks who are seldom granted
respect or given a chance to prove themselves.
 It demonstrates to people ways in which they can take more control of their
lives. Working with professionals and others to complete a complex task with
real-world consequences can show people how they can take action to
influence people and events.
 It encourages stakeholder ownership of the project. If those involved feel
the project is theirs, rather than something imposed on them by others, they'll
work hard both in implementing it, and in conducting a thorough and
informative evaluation in order to improve it.
 It can spark creativity in everyone involved. For those who've never been
involved in anything similar, a participatory evaluation can be a revelation,
opening doors to a whole new way of thinking and looking at the world. To
those who have taken part in evaluation before, the opportunity to exchange
ideas with people who may have new ways of looking at the familiar can lead
to a fresh perspective on what may have seemed to be a settled issue.
 It encourages working collaboratively. For participatory evaluation to work
well, it has to be viewed by everyone involved as a collaboration, where each
participant brings specific tools and skills to the effort, and everyone is valued
for what she can contribute. Collaboration of this sort not only leads to many
of the advantages described above, but also fosters a more collaborative spirit
for the future as well, leading to other successful community projects.
 It fits into a larger participatory effort. When community assessment and the
planning of a project have been a collaboration among project beneficiaries,
staff, and community members, it only makes sense to include evaluation in
the overall plan, and to approach it in the same way as the rest of the project.
In order to conduct a good evaluation, its planning should be part of the
overall planning of the project. Furthermore, participatory process generally
matches well with the philosophy of community-based or grass roots groups
or organizations.
With all these positive aspects, participatory evaluation carries some negative ones
as well. Whether its disadvantages outweigh its advantages depends on your
circumstances, but whether you decide to engage in it or not, it's important to
understand what kinds of drawbacks it might have.

The significant disadvantages of participatory evaluation include:


 It takes more time than conventional process. Because there are so many
people with different perspectives involved, a number of whom have never
taken part in planning or evaluation before, everything takes longer than if a
professional evaluator or a team familiar with evaluation simply set up and
conducted everything. Decision-making involves a great deal of discussion,
gathering people together may be difficult, evaluators need to be trained, etc.
 It takes the establishment of trust among all participants in the process. If
you're starting something new (or, all too often, even if the project is
ongoing), there are likely to be issues of class distinction, cultural differences,
etc., dividing groups of stakeholders. These can lead to snags and slowdowns
until they're resolved, which won't happen overnight. It will take time and a
good deal of conscious effort before all stakeholders feel comfortable and
confident that their needs and culture are being addressed.
 You have to make sure that everyone's involved, not just "leaders" of
various groups. All too often, "participatory" means the participation of an
already-existing power structure. Most leaders are actually that - people who
are most concerned with the best interests of the group, and whom others
trust to represent them and steer them in the direction that best reflects those
interests. Sometimes, however, leaders are those who push their way to the
front, and try to confirm their own importance by telling others what to do.

By involving only leaders of a population or community, you run the risk of losing -
or never gaining - the confidence and perspective of the rest of the population,
which may dislike and distrust a leader of the second type, or may simply see
themselves shut out of the process. They may see the participatory evaluation as a
function of authority, and be uninterested in taking part in it. Working to recruit
"regular" people as well as, or instead of, leaders may be an important step for the
credibility of the process. But it's a lot of work and may be tough to sell.
 You have to train people to understand evaluation and how the
participatory process works, as well as teaching them basic research
skills. There are really a number of potential disadvantages here. The obvious
one is that of time, which we've already raised - training takes time to
prepare, time to implement, and time to sink in. Another is the question of
what kind of training participants will respond to. Still another concerns
recruitment - will people be willing to put in the time necessary to prepare
them for the process, let alone the time for the process itself?
 You have to get buy-in and commitment from participants. Given what
evaluators will have to do, they need to be committed to the process, and to
feel ownership of it. You have to structure both the training and the process
itself to bring about this commitment.
 People's lives - illness, child care and relationship problems, getting the
crops in, etc. - may cause delays or get in the way of the evaluation. Poor
people everywhere live on the edge, which means they're engaged in a
delicate balancing act. The least tilt to one side or the other - a sick child, too
many days of rain in a row - can cause a disruption that may result in an
inability to participate on a given day, or at all. If you're dealing with a rural
village that's dependent on agriculture, for instance, an accident of weather
can derail the whole process, either temporarily or permanently.

 You may have to be creative about how you get, record, and report
information. If some of the participants in an evaluation are non- or semi-
literate, or if participants speak a number of different languages (English,
Spanish, and Lao, for instance), a way to record information will have to be
found that everyone can understand, and that can, in turn, be understood by
others outside the group.
 Funders and policy makers may not understand or believe in participatory
evaluation. At worst, this can lose you your funding, or the opportunity to
apply for funding. At best, you'll have to spend a good deal of time and effort
convincing funders and policy makers that participatory evaluation is a good
idea, and obtaining their support for your effort.

Some of these disadvantages could also be seen as advantages: the training people
receive blends in with their development of new skills that can be transferred to
other areas of life, for instance; coming up with creative ways to express ideas
benefits everyone; once funders and policy makers are persuaded of the benefits of
participatory process and participatory evaluation, they may encourage others to
employ it as well. Nonetheless, all of these potential negatives eat up time, which
can be crucial. If it's absolutely necessary that things happen quickly (which is true
not nearly as often as most of us think it is), participatory evaluation is probably not
the way to go.

When might you use Participatory Evaluation?


So when do you use participatory evaluation? Some of the reasons you might
decide it's the best choice for your purposes:
 When you're already committed to a participatory process for your project.
Evaluation planning can be included and collaboratively designed as part of
the overall project plan.
 When you have the time, or when results are more important than time. As
should be obvious from the last part of this section, one of the biggest
drawbacks to participatory evaluation is the time it takes. If time isn't what's
most important, you can gain the advantages of a participatory evaluation
without having to compensate for many of the disadvantages.
 When you can convince funders that it's a good idea. Funders may specify
that they want an outside evaluation, or they may simply be dubious about
the value of participatory evaluation. In either case, you may have some
persuading to do in order to be able to use a participatory process. If you can
get their support, however, funders may like the fact that participatory
evaluation is often less expensive, and that it has added value in the form of
empowerment and transferable skills.
 When there may be issues in the community or population that outside
evaluators (or program providers, for that matter) aren't likely to be aware
of. Political, social, and interpersonal factors in the community can skew the
results of an evaluation, and without an understanding of those factors and
their history, evaluators may have no idea that what they're finding out is
colored in any way. Evaluators who are part of the community can help sort
out the influence of these factors, and thus end up with a more accurate
evaluation.
 When you need information that it will be difficult for anyone outside the
community or population to get. When you know that members of the
community or population in question are unwilling to speak freely to anyone
from outside, participatory evaluation is a way to raise the chances that you'll
get the information you need.
 When part of the goal of the project is to empower participants and help
them develop transferable skills. Here, the participatory evaluation, as it
should in any case, becomes a part of the project itself and its goals.
 When you want to bring the community or population together. In addition
to fostering a collaborative spirit, as we've mentioned, a participatory
evaluation can create opportunities for people who normally have little
contact to work together and get to know one another. This familiarity can
then carry over into other aspects of community life, and even change the
social character of the community over the long term.

Who should be involved in Participatory Evaluation?
We've referred continually to stakeholders - the people who are directly affected by
the project being evaluated. Who are the stakeholders? That varies from project to
project, depending on the focus, the funding, the intended outcomes, etc.

There are a number of groups that are generally involved, however:


 Participants or beneficiaries. The people whom the project is meant to
benefit. That may be a specific group (people with a certain medical
condition, for instance), a particular population (recent Southeast Asian
immigrants, residents of a particular area), or a whole community. They may
be actively receiving a service (e.g., employment training) or may simply
stand to benefit from what the project is doing (violence prevention in a given
neighborhood). These are usually the folks with the greatest stake in the
project's success, and often the ones with the least experience of evaluation.
 Project line staff and/or volunteers. The people who actually do the work of
carrying out the project. They may be professionals, people with specific
skills, or community volunteers. They may work directly with project
beneficiaries as mentors, teachers, or health care providers; or they may
advocate for immigrant rights, identify open space to be preserved, or answer
the phone and stuff envelopes. Whoever they are, they often know more
about what they're doing than anyone else, and their lives can be affected by
the project as much as those of participants or beneficiaries.
 Administrators. The people who coordinate the project or specific aspects of
it. Like line staff and volunteers, they know a lot about what's going on, and
they're intimately involved with the project every day.
 Outside evaluators, if they're involved. In many cases, outside evaluators are
hired to run participatory evaluations. The need for their involvement is
obvious.
 Community officials. You may need the support of community leaders, or
you may simply want to give them and other participants the opportunity to
get to know one another in a context that might lead to better understanding
of community needs.
 Others whose lives are affected by the project. The definition of this group
varies greatly from project to project. In general, it refers to people whose jobs
or other aspects of their lives will be changed either by the functioning of the
project itself, or by its outcomes.
An example would be landowners whose potential use of their
land would be affected by an environmental initiative or a
neighborhood plan.

How do you conduct a Participatory Evaluation?


Participatory evaluation encompasses elements of designing the project as well as
evaluating it. What you evaluate depends on what you want to know and what
you're trying to do. Identifying the actual evaluation questions sets the course of the
project just as surely as a standardized testing program guides teaching. When these
questions come out of an assessment in which stakeholders are involved, the
evaluation is one phase of a community-based participatory research process.
A participatory evaluation really has two stages: One comprises finding and training
stakeholders to act as participant evaluators. The second - some of which may take
place before or during the first stage - encompasses the planning and
implementation of the project and its evaluation, and includes six steps:
 Naming and framing the issue
 Developing a theory of practice to address it
 Deciding what questions to ask, and how to ask them to get the information
you need
 Collecting information
 Analyzing the information you've collected
 Using the information to celebrate what worked, and to adjust and improve
the project
We'll examine both of these stages in detail.

Finding and training stakeholders to act as Participant Evaluators
Unfortunately, this stage isn't simply a matter of announcing a participatory
evaluation and then sitting back while people beat down the doors to be part of it.
In fact, it may be one of the more difficult aspects of conducting a participatory
evaluation.
Here's where the trust building we discussed earlier comes into play. The population
you're working with may be distrustful of outsiders, or may be used to promises of
involvement that turn out to be hollow or simply ignored. They may be used to
being ignored in general, and/or offered services and programs that don't speak to
their real needs. If you haven't already built a relationship to the point where people
are willing to believe that you'll follow through on what you say, now is the time to
do it. It may take some time and effort - you may have to prove that you'll still be
there in six months - but it's worth it. You're much more likely to have a successful
project, let alone a successful evaluation, if you have a relationship of mutual trust
and respect.

But let's assume you have that step out of the way, and that you've established good
relationships in the community and among the population you're working with, as
well as with staff of the project. Let's assume as well that these folks know very little,
if anything, about participatory evaluation. That means they'll need training in order
to be effective.

If, in fact, your evaluation is part of a larger participatory effort, the question arises
as to whether to simply employ the same team that did assessments and/or planned
the project, perhaps with some additions, as evaluators. That course of action has
both pluses and minuses. The team is already assembled, has developed a method
of working together, has some training in research methods, etc., so that they can hit
the ground running - obviously a plus.
The fact that they have a big stake in seeing the project be successful can work either
way: they may interpret their findings in the best possible light, or even ignore
negative information; or they may be eager to see exactly where and how to adjust
the work to make it go better.
Another issue is burnout. Evaluation will mean more time in addition to what an
assessment and planning team has already put in. While some may be more than
willing to continue, many may be ready for a break (or may be moving on to another
phase of their lives). If the possibility of assembling a new team exists, it will give
those who've had enough the chance to gracefully withdraw.
How you handle this question will depend on the attitudes of those involved, how
many people you actually have to draw on (if the recruitment of the initial team was
really difficult, you may not have a lot of choices), and what people committed to.

Recruit Participant Evaluators


There are many ways to accomplish this. In some situations, it makes the most sense
to put out a general call for volunteers; in others, to approach specific individuals
who are likely - because of their commitment to the project or to the population - to
be willing. Alternatively, you might approach community leaders or stakeholders to
suggest possible evaluators.
Some basic guidelines for recruitment include:
 Use communication channels and styles that reach the people you're aiming
at
 Make your message as clear as possible
 Use plain English and/or whatever other language(s) the population uses
 Put your message where the audience is
 Approach potential participants individually where possible - if you can find
people they know to recruit them, all the better
 Explain what people may gain from participation
 Be clear that they're being asked because they already have the qualities that
are necessary for participation
 Encourage people, but also be honest about the amount and extent of what
needs to be done
 Work out with participants what they're willing and able to do
 Try to arrange support - child care, for example - to make participation easier
 Ask people you've recruited to recommend - or recruit - others
In general, it's important for potential participant evaluators - particularly those
whose connection to the project isn't related to their employment - to understand the
commitment involved. An evaluation is likely to last a year, unless the project is
considerably shorter than that, and while you might expect and plan for some
dropouts, most of the team needs to be available for that long.

In order to make that commitment easier, discuss with participants what kinds of
support they'll need in order to fulfill their commitment - child care and
transportation, for instance - and try to find ways to provide it. Arrange meetings at
times and places that are easiest for them (and keep the number of meetings to a
minimum). For participants who are paid project staff, the evaluation should be
considered part of their regular work, so that it isn't an extra, unpaid, burden that
they feel they can't refuse.

Be careful to try to put together a team that's a cross-section of the stakeholder
population. As we've already discussed, if you recruit only "leaders" from among the
beneficiary population, for instance, you may create resentment in the rest of the
group, not get a true perspective of the thinking or perceptions of that group, and
defeat the purpose of the participatory nature of the evaluation as well. Even if the
leaders are good representatives of the group, you may want to broaden your
recruitment in the hopes of developing more community leadership, and
empowering those who may not always be willing to speak out.

Train Participant Evaluators


Participants, depending on their backgrounds, may need training in a number of
areas. They may have very little experience in attending and taking part in meetings,
for instance, and may need to start there. They may benefit from an introduction to
the idea of participatory evaluation, and how it works. And they'll almost certainly
need some training in data gathering and analysis.

How training gets carried out will vary with the needs and schedules of participants
and the project. It may take place in small chunks over a relatively long period of
time - weeks or months - it might happen all at once in the course of a weekend
retreat, or it might be some combination. There's no right or wrong way here. The first
option will probably make it possible for more people to take part; the second allows
for people to get to know one another and bond as a team, and a combination might
allow for both.

By the same token, there are many training methods, any or all of which might be
useful with a particular group. Training in meeting skills - knowing when and how
to contribute and respond, following discussion, etc. - may best be accomplished
through mentoring, rather than instruction. Interviewing skills may best be learned
through role playing and other experiential techniques. Some training - how to
approach local people, for example - might best come from participants themselves.

Some of the areas in which training might be necessary:


 The participatory evaluation process. How participatory evaluation works, its
goals, the roles people may play in the process, what to expect.
 Meeting skills. Following discussion, listening skills, handling disagreement or
conflict, contributing and responding appropriately, general ground rules and
etiquette, etc.
 Interviewing. Putting people at ease, body language and tone of voice, asking
open-ended and follow-up questions, recording what people say and other
important information, handling interruptions and distractions, group
interviews.
 Observation. Direct vs. participant observation, choosing appropriate times
and places to observe, relevant information to include, recording
observations.
 Recording information and reporting it to the group. What interviewees and those
observed say and do, the non-verbal messages they send, who they are (age,
situation, etc.), what the conditions were, the date and time, any other factors
that influenced the interview or observation.
For people for whom writing isn't comfortable, where writing isn't
feasible, or where language is a barrier, there should be alternative
recording and reporting methods. Drawings, maps, diagrams, tape
recording, videos, or other imaginative ways of remembering
exactly what was said or observed can be substituted, depending
on the situation. In interviews, if audio or video recording is going
to be used, it's important to get the interviewee's permission first -
before the interviewer shows up with the equipment, so that there
are no misunderstandings.
 Analyzing information. Critical thinking, what kinds of things statistics tell you,
other things to think about.
Planning and Implementing the Project and Its Evaluation

There's an assumption here that all phases of a project will be participatory, so that
not only its evaluation, but its planning and the assessment that leads to it also
involve stakeholders (not necessarily the same ones who act as evaluators). If
stakeholders haven't been involved from the beginning, they don't have the deep
understanding of the purposes and structure of a project that they'd have of one
they've helped form. The evaluation that results, therefore, is likely to be less
perceptive - and therefore less valuable - than one of a project they've been involved
in from the start.

Naming and Framing the Problem or Goal to Be Addressed


Identifying what you're evaluating defines what the project is meant to address and
accomplish. Community representatives and stakeholders, all those with something
to gain or lose, work together to develop a shared vision and mission. By collecting
information about community concerns and identifying available assets,
communities can understand which issues to focus a project on.

Naming a problem or goal refers to identifying the issue that needs to be addressed.
Framing it has to do with the way we look at it. If youth violence is conceived of as
strictly a law enforcement problem, for instance, that framing implies specific ways
of solving it: stricter laws, stricter enforcement, zero tolerance for violence, etc. If it's
framed as a combination of a number of issues - availability of hand guns,
unemployment and drug use among youth, social issues that lead to the formation
of gangs, alienation and hopelessness in particular populations, poverty, etc. - then
solutions may include employment and recreation programs, mentoring, substance
abuse treatment, etc., as well as law enforcement. The more we know about a
problem, and the more different perspectives we can include in our thinking about
it, the more accurately we can frame it, and the more likely we are to come up with
an effective solution.

Developing a Theory of Practice to Address the Problem
How do you conduct a community effort so that it has a good chance of solving the
problem at hand? Many communities and organizations answer this question by
throwing uncoordinated programs at the problem, or by assuming a certain
approach (law enforcement, as in our example) will take care of it. In
fact, you have to have a plan for creating, implementing, evaluating, adjusting, and
maintaining a solution if you want it to work.

Whatever you call this plan - a theory of practice, a logic model, or simply an
approach or process - it should be logical, consistent, consider all the areas that need
to be coordinated in order for it to work, and give you an overall guideline and a list
of steps to follow in order to carry it out.
Once you've identified an issue, for instance, one possible theory of practice
might be:
 Form a coalition of organizations, agencies, and community members
concerned with the problem.
 Recruit and train a participatory research team which includes
representatives of all stakeholder groups.
 Collect both statistical and qualitative, first-hand information about the
problem, and identify community assets that might help in addressing it.
 Use the information you have to design a solution that takes into account the
problem's complexity and context.
This might be a single program or initiative, or a coordinated,
community-wide effort involving several organizations, the media,
and individuals. If it's closer to the latter, that's part of the
complexity you have to take into account. Coordination has to be
part of your solution, as do ways to get around the bureaucratic
roadblocks that might occur and methods to find the financial and
personnel resources you need.
 Implement the solution.
 Carry out monitoring and evaluation that will give you ongoing feedback
about how well you're meeting objectives, and what you should change to
improve your solution.
 Use the information from the evaluation to adjust and improve the solution.
 Return to the second step (recruiting and training the research team) and
repeat as much of the process as you need to until the problem is solved, or -
more likely, since many community problems never actually disappear -
indefinitely, in order to maintain and increase your gains.

Deciding What Evaluation Questions to Ask, And How to Ask Them to Get the
Information You Need
As we've discussed, choosing the evaluation questions essentially guides the work.
What you're really choosing here is what you're going to pay attention to. There
could be significant results from your project that you're never aware of, because
you didn't look for them - you didn't ask the questions to which those results would
have been the answers. That's why it's so important to select questions carefully:
they'll determine what you find.

Framing the problem is one element here - putting it in context, looking at it from all
sides, stepping back from your own assumptions and biases to get a clearer and
broader view of it. Another is envisioning the outcomes you want, and thinking
about what needs to change, and how, in order to reach them.

Framing is important in this activity as well. If you want simply to reduce youth
violence, stricter laws and enforcement might seem like a reasonable solution,
assuming you're willing to stick with them forever; if you want not only to reduce or
eliminate youth violence, but to change the climate that fosters it (i.e., long term
social change), the solution becomes much broader and requires, as we pointed out
above, much more than law enforcement. And a broader solution means more, and
more complex, evaluation questions.

In the first case, evaluation questions might be limited to some variation of: "Were
there more arrests and convictions of youthful offenders for violent crimes in the
time period studied, as compared to the last period for which there were records
before the new solution was put in place?" "Did youthful offenders receive harsher
sentences than before?" "Was there a reduction in violent incidents involving
youth?"
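Questions like the last one often come down to a simple before-and-after comparison of recorded counts. A minimal sketch of that arithmetic (the incident figures here are hypothetical, not drawn from any real records):

```python
# Hypothetical counts of violent incidents involving youth, taken from
# police records for the period before and after the new solution.
incidents_before = 412
incidents_after = 347

# Absolute and relative change between the two periods.
change = incidents_after - incidents_before
percent_change = 100 * change / incidents_before

print(f"Change in incidents: {change} ({percent_change:.1f}%)")
```

A raw difference like this answers the narrow question only; it says nothing by itself about why the change occurred, which is exactly why the broader questions below matter.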

Looking at the broader picture, in addition to some of those questions, there might
be questions about counseling programs for youthful offenders to change their
attitudes and to help ease their transition back to civil society, drug and alcohol
treatment, control of handgun sales, changing community attitudes, etc.

Collecting Information
This is the largest part, at least in time and effort, of implementing an evaluation.
Various evaluators, depending on the information needed, may conduct any or all
of the following:
 Research into census or other public records, as well as news archives, library
collections, the Internet, etc.
 Individual and/or group interviews
 Focus groups
 Community information-sharing sessions
 Surveys
 Direct or participant observation
In some cases - particularly with unschooled populations in developing countries -
evaluators may have to find creative ways to draw out information. In some
cultures, maps, drawings, representations ("If this rock is the headman's house..."), or
even storytelling may be more revealing than the answers to straightforward
questions.

Analyzing the Information you've Collected
Once you've collected all the information you need, the next step is to make sense of
it. What do the numbers mean? What do people's stories and opinions tell you about
the project? Did you carry out the process you'd planned? If not, did it make a
difference, positive or negative?

In some cases, these questions are relatively easy to answer. If there were particular
objectives for serving people, or for beneficiaries' accomplishments, you can quickly
find out whether they were met or not. (We set out to serve 75 people, and we
actually served 82. We anticipated that 50 would complete the program, and 61
actually completed.)
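Checking objectives like these is straightforward arithmetic: compare each actual against its target. A minimal sketch, using the hypothetical figures from the example above:

```python
# Hypothetical targets and actuals from the example above.
objectives = {
    "people served": {"target": 75, "actual": 82},
    "completed program": {"target": 50, "actual": 61},
}

# For each objective, report the attainment rate and whether it was met.
for name, o in objectives.items():
    attainment = 100 * o["actual"] / o["target"]
    status = "met" if o["actual"] >= o["target"] else "not met"
    print(f"{name}: {o['actual']}/{o['target']} ({attainment:.0f}%) - {status}")
```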

In other cases, it's much harder to tell what your information means. What if
approximately half of interviewees say the project was helpful to them, and the other
half says the opposite? A result like that may leave you doing some detective work.
(Is there any ethnic, racial, geographic, or cultural pattern as to who is positive and
who is negative? Whom did each group work with? Where did they experience the
project, and how? Did members of each group have specific things in common?)
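That kind of detective work often starts with a simple cross-tabulation: tally the answers within each subgroup and see whether a pattern emerges. A minimal sketch, with entirely made-up interview records grouped by a hypothetical "neighborhood" attribute:

```python
from collections import Counter

# Hypothetical interview records: (neighborhood, found the project helpful?)
responses = [
    ("north", True), ("north", True), ("north", True),
    ("south", False), ("south", False), ("south", True),
]

# Tally helpful/unhelpful answers within each group to surface patterns.
tallies = Counter((group, helpful) for group, helpful in responses)
for group in sorted({g for g, _ in responses}):
    yes, no = tallies[(group, True)], tallies[(group, False)]
    print(f"{group}: {yes} helpful, {no} not helpful")
```

In real use the grouping attribute would be whatever ethnic, geographic, or cultural distinction the questions above suggest, and a split this stark would point to something worth investigating.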

While collecting the information requires the most work and time, analyzing it is
perhaps the most important step in conducting an evaluation. Your analysis tells
you what you need to know in order to improve your project, and also gives you the
evidence you need to make a case for continued funding and community support.
It's important that it be done well, and that it make sense of odd results like the
one directly above. Here's where good training and good guidance in using critical
thinking and other techniques come in.

In general, information-gathering and analysis should cover the three areas we
discussed early in the section: process, implementation, and outcomes. The purpose
here is both to provide information for improving the project and to provide
accountability to funders and the community.

 Process. This concerns the logistics of the project. Was there good coordination
and communication? Was the planning process participatory? Was the
original timeline for each stage of the project - outreach, assessment, planning,
implementation, evaluation - realistic? Were you able to find or hire the right
people? Did you find adequate funding and other resources? Was the space
appropriate? Did members of the planning and evaluation teams work well
together? Did the people responsible do what they were expected to do? Did
unexpected leaders emerge (in the planning group, for instance)?
 Implementation. Did you do what you set out to do - reach the number of
people you expected to, use the methods you intended, provide the amount
and kind of service or activity that you planned for? This part of the
evaluation is not meant to assess effectiveness, but only whether the project
was carried out as planned - i.e., what you actually did, rather than what you
accomplished as a result. That comes next.
 Outcomes. What were the results of what you did? Did what you hoped for
take place? If it did, how do you know it was a result of what you did, as
opposed to some other factor(s)? Were there unexpected results? Were they
negative or positive? Why did this all happen?

Using the Information to Celebrate What Worked, and to Adjust and Improve the
Project
While accountability is important - if the project has no effect at all, for example, it's
just wasted effort - the real thrust of a good evaluation is formative. That means it's
meant to provide information that can help to continue to form the project, reshape
it to make it better. As a result, the overall questions when looking at process,
implementation, and outcomes are: What worked well? What didn't? What changes
would improve the project?

Answering these questions requires further analysis, but should allow you to
improve the project considerably. In addition to dropping or changing and
adjusting those elements of the project that didn't work well, don't neglect those that
were successful. Nothing's perfect; even effective approaches can be made better.

Don't forget to celebrate your successes. Celebration recognizes the hard work of
everyone involved, and the value of your effort. It creates community support, and
strengthens the commitment of those involved. Perhaps most important, it makes
clear that people working together can improve the quality of life in the community.

There's a final element to participatory research and evaluation that can't be
ignored. Once you've started a project and made it successful, you have to maintain
it. The participatory research and evaluation has to continue - perhaps not with the
same team(s), but with teams representative of all stakeholders. Conditions change,
and projects have to adapt. Research into those conditions and continued evaluation
of your work will keep that work fresh and effective.

If your project is successful, you may think your work is done. Think again -
community problems are only solved as long as the solutions are actively
practiced. The moment you turn your back, the conditions you worked so hard to
change can start to return to what existed before. The work - supported by
participatory research and evaluation - has to go on indefinitely to maintain and
increase the gains you've made.

In Summary
Participatory evaluation is a part of participatory research. It involves stakeholders
in a community project in setting evaluation criteria for it, collecting and analyzing
data, and using the information gained to adjust and improve the project.
Participatory process brings in the all-important multiple perspectives of those most
directly affected by the project, which are also most likely to be tied into community
history and culture. The information and insights they contribute can be crucial in a
project's effectiveness. In addition, their involvement encourages community buy-in,
and can result in important gains in skills, knowledge, and self-confidence and self-
esteem for the researchers. All in all, participatory evaluation creates a win-win
situation.
Conducting a participatory evaluation involves several steps:
 Recruiting and training a stakeholder evaluation team
 Naming and framing the problem
 Developing a theory of practice to guide the process of the work
 Asking the right evaluation questions
 Collecting information
 Analyzing information
 Using the information to celebrate and adjust your work
The final step, as with so many of the community-building strategies and actions
described in the Community Tool Box, is to keep at it. Participatory research in
general, and participatory evaluation in particular, has to continue as long as the
work continues, in order to keep track of community needs and conditions, and to
keep adjusting the project to make it more responsive and effective. And the work
often has to continue indefinitely in order to maintain progress and avoid sliding
back into the conditions or attitudes that made the project necessary in the first
place.

CHAPTER 14

WHY SHOULD YOU HAVE AN EVALUATION PLAN?

After many late nights of hard work, more planning meetings than you care to
remember, and many pots of coffee, your initiative has finally gotten off the ground.
Congratulations! You have every reason to be proud of yourself and you should
probably take a bit of a breather to avoid burnout. Don't rest on your laurels too
long, though--your next step is to monitor the initiative's progress. If your initiative
is working perfectly in every way, you deserve the satisfaction of knowing that. If
adjustments need to be made to guarantee your success, you want to know about
them so you can jump right in there and keep your hard work from going to waste.
And, in the worst case scenario, you'll want to know if it's an utter failure so you can
figure out the best way to cut your losses. For these reasons, evaluation is extremely
important.

There's so much information on evaluation out there that it's easy for community
groups to fall into the trap of just buying an evaluation handbook and following it to
the letter. This might seem like the best way to go about it at first glance-- evaluation
is a huge topic and it can be pretty intimidating. Unfortunately, if you resort to the
"cookbook" approach to evaluation, you might find you end up collecting a lot of
data that you analyze and then end up just filing it away, never to be seen or used
again.

Instead, take a little time to think about what exactly you really want to know about
the initiative. Your evaluation system should address simple questions that are
important to your community, your staff, and (last but never least!) your funding
partners. Try to think about financial and practical considerations when asking
yourself what sort of questions you want answered. The best way to ensure that you
have the most productive evaluation possible is to come up with an evaluation plan.

Here are a few Reasons Why You Should Develop an Evaluation Plan:
 It guides you through each step of the process of evaluation
 It helps you decide what sort of information you and your stakeholders really
need
 It keeps you from wasting time gathering information that isn't needed
 It helps you identify the best possible methods and strategies for getting the
needed information
 It helps you come up with a reasonable and realistic timeline for evaluation
 Most importantly, it will help you improve your initiative!

When Should you Develop an Evaluation Plan?


As soon as possible! The best time to do this is before you implement the initiative.
After that, you can do it anytime, but the earlier you develop it and begin to
implement it, the better off your initiative will be, and the greater the outcomes will
be at the end.
Remember, evaluation is more than just finding out if you did your job. It is
important to use evaluation data to improve the initiative along the way.

What are the different types of stakeholders and what are their interests in your
evaluation?
We'd all like to think that everyone is as interested in our initiative or project as we
are, but unfortunately that isn't the case. For community health groups, there are
basically three groups of people who might be identified as stakeholders (those who
are interested, involved, and invested in the project or initiative in some way):
community groups, grant makers/funders, and university-based researchers. Take
some time to make a list of your project or initiative's stakeholders, as well as which
category they fall into.

What are the Types of Stakeholders?
 Community groups: Hey, that's you! Perhaps this is the most obvious
category of stakeholders, because it includes the staff and/or volunteers
involved in your initiative or project. It also includes the people directly
affected by it--your targets and agents of change.
 Grant makers and funders: Don't forget the folks that pay the bills! Most
grant makers and funders want to know how their money's being spent, so
you'll find that they often have specific requirements about things they want
you to evaluate. Check out all your current funders to see what kind of
information they want you to be gathering. Better yet, find out what sort of
information you'll need to have for any future grants you're considering
applying for. It can't hurt!
 University-based researchers: This includes researchers and evaluators that
your coalition or initiative may choose to bring in as consultants or full
partners. Such researchers might be specialists in public health promotion,
epidemiologists, behavioral scientists, specialists in evaluation, or some other
academic field. Of course, not all community groups will work with
university-based researchers on their projects, but if you choose to do so, they
should have their own concerns, ideas, and questions for the evaluation. If
you can't quite understand why you'd include these folks in your evaluation
process, try thinking of them as auto mechanics--if you want them to help you
make your car run better, you will of course include them in the diagnostic
process. If you went to a mechanic and started ordering him around about
how to fix your car without letting him check it out first, he'd probably get
pretty annoyed with you. Same thing with your researchers and evaluators:
it's important to include them in the evaluation development process if you
really want them to help improve your initiative.

Each type of stakeholder will have a different perspective on your organization as
well as what they want to learn from the evaluation. Every group is unique, and you
may find that there are other sorts of stakeholders to consider with your own
organization. Take some time to brainstorm about who your stakeholders are before
you make your evaluation plan.

What do they want to know about the Evaluation?


While some information from the evaluation will be of use to all three groups of
stakeholders, some will be needed by only one or two of the groups. Grant makers
and funders, for example, will usually want to know how many people were
reached and served by the initiative, as well as whether the initiative had the
community-level impact it intended to have. Community groups may want to use
evaluation results to guide them in decisions about their programs, and where they
are putting their efforts. University-based researchers will most likely be interested
in proving whether any improvements in community health were definitely caused
by your programs or initiatives; they may also want to study the overall structure of
your group or initiative to identify the conditions under which success may be
reached.

What decisions do they need to make, and how would they use the data to inform
those decisions?
You and your stakeholders will probably be making decisions that affect your
program or initiative based on the results of your evaluation, so you need to
consider what those decisions will be. Your evaluation should yield honest and
accurate information for you and your stakeholders; you'll need to be careful not to
structure it in such a way that it exaggerates your success, and you'll need to be
really careful not to structure it in such a way that it downplays your success!
Consider what sort of decisions you and your stakeholders will be making.
Community groups will probably want to use the evaluation results to help them
find ways to modify and improve your program or initiative. Grant makers and
funders will most likely be making decisions about how much funding to give you
in the future, or even whether to continue funding your program at all (or any
related programs). They may also think about whether to impose any requirements
on you to get that program (e.g., a grant maker tells you that your program may
have its funding decreased unless you show an increase of services in a given area).
University-based researchers will need to decide how they can best assist with plan
development and data reporting.

You'll also want to consider how you and your stakeholders plan to balance costs
and benefits. Evaluation should take up about 10--15% of your total budget. That
may sound like a lot, but remember that evaluation is an essential tool for improving
your initiative. When considering how to balance costs and benefits, ask yourself the
following questions:
 What do you need to know?
 What is required by the community?
 What is required by funding?
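As a quick illustration of how the 10-15% guideline translates into figures, here is a minimal sketch; the total budget used is a hypothetical example, not a recommendation:

```python
# A minimal sketch of the 10-15% guideline: given a total project budget,
# how much should be set aside for evaluation? The 200,000 figure is a
# hypothetical example.
def evaluation_budget_range(total_budget, low=0.10, high=0.15):
    """Return the (minimum, maximum) amount to reserve for evaluation."""
    return total_budget * low, total_budget * high

low, high = evaluation_budget_range(200_000)
print(f"Reserve roughly {low:,.0f} to {high:,.0f} for evaluation.")
```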

How do you Develop an Evaluation Plan?


There are four main steps to Developing an Evaluation Plan:
 Clarifying program objectives and goals
 Developing evaluation questions
 Developing evaluation methods
 Setting up a timeline for evaluation activities

Clarifying Program Objectives and Goals


The first step is to clarify the objectives and goals of your initiative. What are the
main things you want to accomplish, and how have you set out to accomplish them?
Clarifying these will help you identify which major program components should be
evaluated. One way to do this is to make a table of program components and
elements.

Developing Evaluation Questions
For our purposes, there are four main categories of evaluation questions. Let's look
at some examples of possible questions and suggested methods to answer those
questions. Later on, we'll tell you a bit more about what these methods are and how
they work.
 Planning and implementation issues: How well was the program or
initiative planned out, and how well was that plan put into practice?
o Possible questions: Who participates? Is there diversity among
participants? Why do participants enter and leave your programs? Are
there a variety of services and alternative activities generated? Do
those most in need of help receive services? Are community members
satisfied that the program meets local needs?
o Possible methods to answer those questions: monitoring system that tracks
actions and accomplishments related to bringing about the mission of
the initiative, member survey of satisfaction with goals, member
survey of satisfaction with outcomes.
 Assessing attainment of objectives: How well has the program or initiative
met its stated objectives?
o Possible questions: How many people participate? How many hours are
participants involved?
o Possible methods to answer those questions: monitoring system (see
above), member survey of satisfaction with outcomes, goal attainment
scaling.
 Impact on participants: How much and what kind of a difference has the
program or initiative made for its targets of change?
o Possible questions: How has behavior changed as a result of
participation in the program? Are participants satisfied with the
experience? Were there any negative results from participation in the
program?
o Possible methods to answer those questions: member survey of satisfaction
with goals, member survey of satisfaction with outcomes, behavioral
surveys, interviews with key participants.
 Impact on the community: How much and what kind of a difference has the
program or initiative made on the community as a whole?
o Possible questions: What resulted from the program? Were there any
negative results from the program? Do the benefits of the program
outweigh the costs?
o Possible methods to answer those questions: Behavioral surveys, interviews
with key informants, community-level indicators.

Developing Evaluation Methods


Once you've come up with the questions you want to answer in your evaluation, the
next step is to decide which methods will best address those questions. Here is a
brief overview of some common evaluation methods and what they work best for.
Monitoring and feedback system
This method of evaluation has three main elements:
 Process measures: these tell you about what you did to implement your
initiative;
 Outcome measures: these tell you about what the results were; and
 Observational system: this is whatever you do to keep track of the initiative
while it's happening.
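To make the three elements concrete, the sketch below models them as a single running log whose entries are tagged as process or outcome measures; the field names and sample entries are invented for illustration, not a prescribed format:

```python
from datetime import date

# A minimal observational system: one running log in which every entry is
# tagged either as a process measure (what you did to implement the
# initiative) or an outcome measure (what resulted). Field names and
# sample entries are invented for illustration.
log = []

def record(entry_date, kind, description):
    """Add one monitoring entry; kind must be 'process' or 'outcome'."""
    assert kind in ("process", "outcome")
    log.append({"date": entry_date, "kind": kind, "description": description})

record(date(2024, 3, 1), "process", "Held first community planning meeting")
record(date(2024, 6, 15), "outcome", "More members report access to services")

outcomes = [e for e in log if e["kind"] == "outcome"]
print(f"{len(log)} entries logged, {len(outcomes)} outcome measure(s) so far")
```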
Member surveys about the initiative
When Ed Koch was mayor of New York City, his trademark call of "How am I
doing?" was known all over the country. It might seem like an overly simple
approach, but sometimes the best thing you can do to find out if you're doing a good
job is to ask your members. This is best done through member surveys. There are
three kinds of member surveys you're most likely to need to use at some point:
 Member survey of goals: done before the initiative begins - how do your
members think you're going to do?
 Member survey of process: done during the initiative - how are you doing so
far?
 Member survey of outcomes: done after the initiative is finished - how did you
do?
Goal attainment report
If you want to know whether your proposed community changes were truly
accomplished-- and we assume you do--your best bet may be to do a goal attainment
report. Have your staff keep track of the date each time a community change
mentioned in your action plan takes place. Later on, someone compiles this
information (e.g., "Of our five goals, three were accomplished by the end of 1997.")
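The compilation step can be sketched as a simple count over dated goal records; the goal names and dates below are hypothetical:

```python
from datetime import date

# Each community change from the action plan, with the date it was
# accomplished (or None if still pending). All entries are hypothetical.
goals = {
    "Pass smoke-free workplace ordinance": date(1997, 5, 12),
    "Open after-school tutoring centre": date(1997, 9, 3),
    "Launch peer-mentoring programme": date(1997, 11, 20),
    "Establish community health fund": None,
    "Recruit 50 volunteer mentors": None,
}

def goal_attainment(goals, by):
    """Count goals accomplished on or before the given date."""
    done = sum(1 for d in goals.values() if d is not None and d <= by)
    return done, len(goals)

done, total = goal_attainment(goals, by=date(1997, 12, 31))
print(f"Of our {total} goals, {done} were accomplished by the end of 1997.")
```

Run on these sample records, it produces the same kind of summary sentence quoted above.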
Behavioral surveys
Behavioral surveys help you find out what sort of risk behaviors people are taking
part in and the level to which they're doing so. For example, if your coalition is
working on an initiative to reduce car accidents in your area, one risk behavior to
survey would be drunk driving.
Interviews with key participants
Key participants - leaders in your community, people on your staff, etc. - have
insights that you can really make use of. Interviewing them to get their viewpoints
on critical points in the history of your initiative can help you learn more about the
quality of your initiative, identify factors that affected the success or failure of certain
events, provide you with a history of your initiative, and give you insight which you
can use in planning and renewal efforts.
Community-level indicators of impact
These are tried-and-true markers that help you assess the ultimate outcome of your
initiative. For substance abuse coalitions, for example, the U.S. Center for Substance
Abuse Prevention (CSAP) and the Regional Drug Initiative in Oregon recommend
several proven indicators (e.g., single-nighttime car crashes, emergency transports
related to alcohol) which help coalitions figure out the extent of substance abuse in
their communities. Studying community-level indicators helps you provide solid
evidence of the effectiveness of your initiative and determine how successful key
components have been.

Setting Up a Timeline for Evaluation Activities
When does evaluation need to begin?
Right now! Or at least at the beginning of the initiative! Evaluation isn't something
you should wait to think about until after everything else has been done. To get an
accurate, clear picture of what your group has been doing and how well you've been
doing it, it's important to start paying attention to evaluation from the very start. If
you're already part of the way into your initiative, however, don't scrap the idea of
evaluation altogether--even if you start late, you can still gather information that
could prove very useful to you in improving your initiative.
Outline questions for each stage of development of the initiative
We suggest completing a table listing:
 Key evaluation questions (the four categories listed above, with more specific
questions within each category)
 Type of evaluation measures to be used to answer them (i.e., what kind of
data you will need to answer the question?)
 Type of data collection (i.e., what evaluation methods you will use to collect
this data)
 Experimental design (A way of ruling out threats to the validity - e.g.,
believability - of your data. This would include comparing the information
you collect to a similar group that is not doing things exactly the way you are
doing things.)
With this table, you can get a good overview of what sort of things you'll have to do
in order to get the information you need.
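One way to hold such a table is as a list of rows, one per key evaluation question; the entries below are illustrative placeholders drawn loosely from the question categories above, not required wording:

```python
# One row per key evaluation question. All entries are illustrative
# placeholders, not required wording.
plan = [
    {
        "question": "Do those most in need of help receive services?",
        "measure": "participation counts by target group",
        "collection": "monitoring system",
        "design": "compare with a similar community without the programme",
    },
    {
        "question": "Are members satisfied the program meets local needs?",
        "measure": "satisfaction ratings",
        "collection": "member survey of satisfaction with goals",
        "design": "pre/post comparison",
    },
]

for row in plan:
    print(f"Q: {row['question']}\n   data: {row['measure']}; "
          f"method: {row['collection']}; design: {row['design']}")
```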

When do feedback and reports need to be provided?


Whenever you feel it's appropriate. Of course, you will provide feedback and reports
at the end of the evaluation, but you should also provide periodic feedback and
reports throughout the duration of the project or initiative. In particular, since you
should provide feedback and reports at meetings of your steering committee or
overall coalition, find out ahead of time how often they'd like updates. Funding
partners will want to know how the evaluation is going as well.

When should evaluation end?
Shortly after the end of the project - usually when the final report is due. Don't wait
too long after the project has been completed to finish up your evaluation - it's best
to do this while everything is still fresh in your mind and you can still get access to
any information you might need.

What sort of products should you expect to get out of the evaluation?
The main product you'll want to come up with is a report that you can share with
everyone involved. What should this report include?
 Effects expected by stakeholders: Find out what key people want to know. Be
sure to address any information that you know they're going to want to hear
about!
 Differences in the behaviors of key individuals: Find out how your coalition's
efforts have changed the behaviors of your targets and agents of change.
Have any of your strategies caused people to cut down on risky behaviors, or
increase behaviors that protect them from risk? Are key people in the
community cooperating with your efforts?
 Differences in conditions in the community: Find out what has changed. Is the
public aware of your coalition or group's efforts? Do they support you? What
steps are they taking to help you achieve your goals? Have your efforts
caused any changes in local laws or practices?
You'll probably also include specific tools (i.e., brief reports summarizing data),
annual reports, quarterly or monthly reports from the monitoring system, and
anything else that is mutually agreed upon between the organization and the
evaluation team.

What sort of standards should you follow?


Now that you've decided you're going to do an evaluation and have begun working
on your plan, you've probably also had some questions about how to ensure that the
evaluation will be as fair, accurate, and effective as possible. After all, evaluation is a
big task, so you want to get it right. What standards should you use to make sure
you do the best possible evaluation? In 1994, the Joint Committee on Standards for
Educational Evaluation issued a list of program evaluation standards that are widely
used to regulate evaluations of educational and public health programs. The
standards the committee outlined are for utility, feasibility, propriety, and accuracy.
Consider using evaluation standards to make sure you do the best evaluation
possible for your initiative.

CHAPTER 15

PROJECT MEAL FRAMEWORK

At the project level, the MEAL Framework marks some changes for Implementing
Partners. In the past, implementing partners were expected to report against the logframe.
With the new MEAL Framework, Implementing Partners will be expected to focus
their M&E efforts on:

 The specific outcomes, indicators and questions that are relevant to the
particular project;
 LIFT’s revised logframe indicators that are relevant to the project and as
agreed upon between the project and FMO;
 What is working, what is not working, and why;
 Learning and improving interventions based on the evidence gathered; and
 Generating useful knowledge and evidence that can influence policy and
practice.

To help Implementing Partners build and use their own MEAL systems, IPs are
expected to develop and implement project MEAL Plans. Project MEAL Plans
include the following major components:
1) Project Theory of Change
2) MEAL Stakeholder Analysis (optional)
3) Project Measurement Framework (MF)
4) Project Evaluation and Learning Questions (ELQ)
5) Data Collection, Management and Analysis
6) Use of MEAL Results
7) MEAL Resources

Many country programmes have also developed MEAL systems and capacity which
include elements of Level Two above and use monitoring and accountability data to
drive decision making, programme improvement and learning. We aim to establish
this approach as standard for our country programmes, providing a strong
foundation for building a culture of quality.

The components of the M&E System (Level One)


You may already be familiar with the main components of the Save the Children
M&E System, which have been developed in recent years to allow us to measure and
demonstrate our reach and impact on some consistent measures across all country
programmes.
 Evaluations and Real Time Reviews
 Reporting
 Global Indicators Database
 Advocacy Measurement
 Total Reach and Output Tracking (in emergencies)

Advocacy Measurement: Our Theory of Change places great emphasis on using our
‘voice’, and the voices of partners and children, to advocate for change for children.
It aims to influence others to achieve scale-up of proven interventions. Effective
advocacy is key to this, and our Advocacy Monitoring Tool (AMT) is the means by
which we record the sum of our collective efforts on this.

Through the annual reporting round the AMT captures around 300 separate
advocacy efforts each year. These range from local meetings aimed at improving
working relations with local authorities, e.g. ‘we met with District Social Welfare
officials to discuss the establishment of district level child protection referral
mechanisms’, to national level policy breakthroughs with profound impact on
children’s rights, such as ‘we successfully lobbied Ministry of Health to budget for
nutrition interventions within the new National Strategic Health Development Plan’.

The AMT provides an accurate snapshot of the range of advocacy work undertaken
and the policy change outcomes in a given year that were influenced by SC
advocacy work. More work is needed to highlight the impact of game-changing
policy breakthroughs on children’s lives.

Total Reach: This option can be used to estimate the number of children and
adults reached by our programmes. It allows us to produce consistent data which
drives annual estimates across projects/programmes for direct and indirect reach. If
you’ve already worked with Total Reach you will know that it’s quite a demanding
and involved process, and it needs to be because once the agency has committed to
coming up with credible estimates of reach, we need to be able to justify the figures
and take account of complicating factors like double counting, as well as the difficult
definition of what actually constitutes ‘reach’.
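The double-counting problem, at least, lends itself to a simple illustration: if the same person appears on several projects' beneficiary lists, a naive sum overstates reach. The sketch below uses invented IDs and project names; the real Total Reach methodology is considerably more involved:

```python
# Beneficiary IDs per project; the same ID appearing in two projects
# means the same person was reached twice. All data is invented.
project_reach = {
    "nutrition": {"P001", "P002", "P003"},
    "education": {"P002", "P004"},
    "health":    {"P003", "P005", "P006"},
}

# Naive sum counts P002 and P003 twice; the set union counts each
# person exactly once.
naive_total = sum(len(ids) for ids in project_reach.values())
unique_total = len(set().union(*project_reach.values()))

print(f"Naive sum: {naive_total}, unique reach: {unique_total}")
```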

As we have developed a few years of experience in generating reach figures at
country, regional and global level, as well as by theme and sub-theme, we can see
that the quality and consistency of the data continues to improve, and interesting
trends in the changing story of our reach start to emerge. We can see how reach is
impacted by spikes in emergency response, or by a small number of very large
projects.
The other major ‘counting’ tool that we use is the Output Tracker, which is used in
our emergency response programmes to keep careful track of the activities’ outputs,
deliverables and beneficiary reach of the overall response. Session 11 deals with the
Total Reach methodology.

The global indicators were developed to measure progress towards the global
outcome statements in our strategy and although originally billed as ‘outcome
indicators’ they are in reality a mix of output and outcome level indicators, and they
vary quite widely in their type and complexity. Some require a single number or a
yes/no answer, while others require survey and sampling techniques to build the
data.

Global indicator sets have been used in other agencies with varying degrees of
success. Some peer agencies have abandoned them because it proved too difficult to
gather consistent data against them across multiple country programmes. Save the
Children has the advantage of only having a small number of indicators per
thematic area, and we remain very committed to building capacity and credibility in
reporting these indicators to help us better understand and communicate the results
of our work.

Evaluations
An evaluation is a systematic assessment of an ongoing or completed project,
programme or policy, its design, implementation and results. Evaluation also refers
to the process of determining the worth or significance of an activity, policy or
program. Evaluations should provide information that is credible and useful,
enabling the incorporation of lessons learned into the decision–making process of
both recipients and donors. Evaluation is different from monitoring, which is a
continuing assessment based on systematic collection of data on specific indicators
as well as other relevant information.
As an organization, our principal challenges with evaluations are:

1. Effectively capturing the learning from the vast knowledge base of our
evaluations so that it is used to inform our programme development
2. Ensuring management follow up the recommendations that emerge from
evaluations and reviews.

Components of the MEAL Approach


Complementing but moving beyond our Global M&E system components, the
MEAL approach aims to emphasize the collection and use of data to support decision
making, accountability and continual improvement. It seeks to ensure that
programmes are not only monitored and evaluated, but also that beneficiary
opinions are actively sought, that the quality of activities is assessed against
minimum standards, and that findings are shared with relevant stakeholders and
explicitly fed back into programme decision making, incorporating Accountability
and Learning. MEAL hence represents a practical and
conceptual step beyond routine monitoring and evaluation. It involves a
commitment to using monitoring data and accountability feedback for the purposes
of programme quality improvement and decision making, and this is an approach
that we would like to promote and apply across all of Save the Children’s
programmes. As we described in the previous session, the MEAL approach relates
very closely to the components of programme quality outlined in the Programme
Quality Framework.

We’ve identified a number of core components to the MEAL approach, based on the
experience of a number of different country programmes, as well as good practice
elsewhere in the sector.
These are:

Organizational Culture and Commitment


The key to success in applying the MEAL approach is a management team,
including the Country Director, which is committed to making the system work and
actively uses monitoring data for the purposes of programme quality improvement.
There’s obviously a big difference between monitoring just for reporting, and
actually taking action on the basis of what monitoring data is telling us. The
management team can drive a culture of critical inquiry, transparency and
accountability, and this is crucial if we are to succeed in MEAL.

Options for Country Office MEAL structure


Country teams have built MEAL capacity into their structures in different ways.
Some have invested in developing independent MEAL units, within Programme
Development and Quality (PDQ), with a mandate to monitor all projects against
agreed standards and provide management actions for improvement. Others have
preferred to apply these MEAL principles within the work and responsibilities of
existing project teams. Structure might be determined by available resources, as well
as by management preferences. Either way, it’s clear that responsibility for ensuring
capacity and commitment to MEAL lies with the PDQ team.
Partners and the MEAL approach

A great deal of monitoring of Save the Children supported work is carried out by
our partners, so it’s important in promoting the MEAL approach that we also
consider partner capacity and resources to achieve this. One way to account for this
is to ensure that commitments to monitoring, accountability and quality standards
are clearly articulated in partnership agreements, and are explored in partner
assessments.

Monitoring Minimum standards


The principle of monitoring projects against an agreed set of minimum standards is
a particular feature of the MEAL approach. Minimum standards should be project
specific, and must be established by the project team themselves, rather than from
elsewhere in the organization. They can form the basis of a ‘checklist’ which project
monitors can use to establish whether the commitments and expected quality
standards of a particular project are being met.

Accountability Mechanisms
Establishing effective accountability mechanisms is another crucial pillar of the
MEAL approach, including transparent sharing of information about the
organization and our objectives, and managing feedback and complaints from the
children and communities with which we work. There is a separate session on
accountability in this training, but we should note here that successful MEAL
systems must pay particular attention to listening and responding to beneficiaries,
and ensuring that their views are taken into account in programme development
and improvement, in particular through maintaining a register of feedback and
complaints.

Evaluation, Research and Learning
And finally there is the ‘L’ of MEAL, which emphasizes the importance of deliberate
efforts in research and evaluation to reflect on operational and technical challenges
and achievements, and to use this learning for further quality improvement. As with
continual monitoring, learning does not happen by accident but requires dedicated
management support and commitment.

All sorts of research happen across the organization's programmes. In the MEAL
approach we are particularly interested in building skills and capacity for
operational research,
which investigates operational issues as projects are progressing, and identifies
improvements and mid-course correction.
Learning in Action: A review of programme quality in Malawi revealed some
excellent examples of learning being put into practice, including this case of the
development of the Cash Transfer project and the use of e-payments and mobile
technology:
The social welfare cash transfer project was originally based on the Malawi
emergency cash transfer programme, which in turn was built on a pilot cash
transfer programme introduced by Concern in 2005. Learning on e-payment has
been sought from M-pesa in East Africa, including a study visit to Uganda to look
at scaling up and sustainability, and from Save the Children’s experience of cash
programming in Swaziland. Particular learning issues included:
• Ensuring that agents have enough money to meet demand (liquidity), by linking
them to other larger agents. The team did this learning/problem solving through
observation and discussion with the partner.
• Changing the project design from a focus on community phones (which didn’t
work) to individual phones (at a cost of just $10 each)
• Formative research into different delivery products (mobile, bank based) in
collaboration with University of Malawi Centre for Social Research and Oxford
Policy Management.
• Examining clients’ ability to use the phone, and community access to financial
institutions.

The MEAL Essential Standards
During 2013, Save the Children revised the MOS (Management Operating
Standards) to arrive at a shorter list of Essential Standards designed to build
consistency and compliance across country office programmes.

The Quality Framework has been developed to present both the Programme Quality
and the Operations Quality components of our work, and you can see from this
diagram how they fit together. Clearly the MEAL essential standards, as well as
others including partnership and advocacy, are crucial in achieving quality
outcomes in our programmes.

You may already be familiar with the seven MEAL standards, but here they are
again listed below. Note that some of the standards have qualifying statements
which help to explain them, as well as particular adaptations for use in emergencies,
the ‘humanitarian adaptations’.
1. Objectives and Indicators
Standard: Projects and programs have clearly defined objectives created using an
appropriate logframe, results or other framework. All relevant Global Indicators
are included in the program design.
Humanitarian adaptation / QS: In humanitarian responses, objectives and
indicators are in line with the quality criteria outlined in the Humanitarian
standards.

2. M&E Plan and Budget
Standard: Projects and programs are covered by an M&E plan consistent with the
procedure, with appropriate resources budgeted to implement the plan.

3. Baseline
Standard: Projects and programs establish a baseline (or other appropriate
equivalent) as a comparison and planning base for monitoring and evaluations.
Humanitarian adaptation / QS: If a baseline cannot be established while
prioritizing delivery of a timely response, then an initial rapid assessment is
carried out and followed up with in-depth multi-sector assessments in line with
Humanitarian standards and procedures. In a sudden onset emergency, initial
rapid assessments are undertaken within 24-72 hours and followed up with
in-depth multi-sector assessments.

5. Evaluation
Standard: Projects and programs which meet thresholds outlined in the Evaluation
procedure are evaluated, with evaluation action plans developed and signed off by
an appropriate manager.
Qualifying statement: Evaluation and research reports are shared with relevant
Regional and Global Initiative colleagues for the purposes of effective central
archiving and knowledge management.

6. Learning
Standard: Evidence exists to demonstrate that MEAL data is used to inform
management decision making, improve programming and share learning within
and across programs and/or functional areas.
Qualifying statement: Evidence may include minutes of program meetings,
proposals which demonstrate learning from previous interventions, and feedback
from accountability mechanisms used for program development.
Humanitarian adaptation / QS: An Output Tracker is set up according to the
deadline in the Humanitarian Categorization procedures, and analysis against
targets and emerging patterns is shared with response team leadership at a
minimum monthly. A Real Time Review (RTR) is conducted for each relevant
categorized response in line with the deadline in the Humanitarian Categorization
procedures; project and program implementation and the response strategy are
reviewed following the RTR.

7. Accountability
Standard: Monitoring includes systems which collect, document and respond to
the feedback, suggestions and complaints of beneficiaries. Project-related
information is shared effectively with beneficiaries.

In addition to the Essential Standards there are a host of other procedures, guidance
and other resources to support compliance with these standards. Some of these are
explored during the course of this training, including the Evaluation Handbook, the
Accountability Guidance Pack, and the guidance on using the Global Indicators.

Audiences and Information Needs


It’s a reasonable question to ask of any information collection exercise: ‘Who is this
information for?’ Our work puts a lot of emphasis on reporting and
documentation, and we generate a great deal of data. But in building our monitoring
and evaluation functions and capacity, it’s useful to reflect not only on why we
collect this information, but for whom.

Probably the biggest driver of our monitoring and reporting is our donors. But there
are multiple other external stakeholders to whom we should be reporting for the
purposes of learning, transparency and wider communication. These include the
children and communities with which we work, and our host governments.

Internal monitoring and reporting can be frustrating if we have no idea where the
information is going, or what its purpose is. An organization has to try to simplify
the annual reporting process, and while exercises like Total Reach and the Global
Indicators require substantial time and effort, they do allow us to aggregate
information and tell a story globally about our work.

Perhaps most importantly, if we get it right, a principal audience for our monitoring
activities is our project teams themselves, given our aim to use monitoring data as a
way of informing decision making and improving quality.

Summary of this Session

1. Effective monitoring allows us not only to report on our grants, but also to
measure our progress towards ambitious goals in our strategy and to support
programme quality monitoring and continual improvement.
2. An organization has to develop a set of tools and approaches to support
monitoring and global reporting, including Total Reach, Advocacy
Measurement, Global Indicators and Output Tracking.
3. A MEAL approach is being promoted by the organization as a way of using
monitoring and accountability data for learning, quality improvement and
decision making.
4. Different countries adopt different approaches to support MEAL systems
depending on resourcing, structure and other management decisions.

5. The organization defines the particular components and principles of a MEAL
approach and aims for this to become a routine way of working. At its heart is a
commitment across the organization to a culture of quality, transparency and
critical enquiry.

MEAL standards have been agreed by the organization and form part of the new
set of Essential Standards within the Quality Framework.
Sample MEAL Framework

CHAPTER 16

MEAL PLANNING AND BUDGETING


Introduction
In this session on MEAL Planning and Budgeting, you will learn the purpose and
function of a MEAL plan and how to develop a comprehensive MEAL plan and
budget for your project or programme.

Project MEAL Plans


Project Description
1.1 Project Overview
Give a brief summary of the project, including the background of the project and the
project’s overall goals and objectives.

1.2 Project Conceptual Framework


A project conceptual framework is a narrative description and/or a visual display
that explains the major actors, actions, and results in a project, and the relationships
among them. Most of LIFT’s new projects under the 2015-16 Calls for Proposals were
asked to develop a Theory of Change (TOC), which is a specific type of conceptual
framework. Projects that started prior to the recent Calls for Proposals may continue
to use their previously established logical frameworks, but may be asked to develop
full MEAL Plans depending on their stage of implementation. Please check with
your LIFT programme team if you are not sure whether your project needs to
develop a TOC and/or a MEAL Plan.

A clear theory of change shows how a project will contribute to the achievement of
LIFT’s purpose and/or programme level outcomes, as defined in the Call for
Proposals. The TOC is a visual tool to articulate and make explicit how a project’s
change process will take place. The TOC is therefore a project planning tool, an M&E
tool and a communication tool, and assists with one or more of the following: (1)
defining the outcomes that an intervention aims to achieve; and (2) defining the
causal pathways through which a given set of changes is expected to come about.

Beyond this, the TOC can be used to (3) define the assumptions that underlie various
causal pathways; (4) develop a coherent and logical set of metrics or measures that
can be used to track change over time; (5) devise clear and useful evaluation and
learning questions; and (6) organize learning processes at various levels with a
diverse set of stakeholders.

As such, a project TOC will need to show:

 LIFT’s programme level outcomes that the project intends to contribute to;
 The sequence of project outcomes that will lead to the LIFT programme level
outcomes;
 The outputs through which these project outcomes will be achieved (i.e. what
the project will do to bring about these changes);
 The major activities or interventions that will bring about the outputs; and
 The major causal connections between the different interventions, outputs
and outcomes.

LIFT’s programme level outcomes should be taken directly from those specified in
the relevant Programme Framework or Programme Theory of Change; however, the
wording can be altered slightly to better fit the project context. All project outcomes
and outputs should be as clear and specific as possible. They should mention the
specific actors concerned,
stating either who is doing the action at the intervention level or who undergoes the
change at the outcome level. The diagram needs to be clear enough for an outsider to
understand the logic of the project simply by following the flow of the boxes. The
TOC can then be used to develop the project Measurement Framework and the
Evaluation and Learning Questions.

MEAL Focus
Based on knowing what the project is about, it is important to then identify who
within the project needs what information and for what purpose. From this
information, you can then determine which parts of the theory of change need to be
measured and reported, and what larger questions the project needs to
answer. We use three tools to help us determine this focus: a MEAL stakeholder
analysis (optional), a project measurement framework, and a set of carefully
identified project evaluation and learning questions.

Project MEAL stakeholder analysis (optional)


When developing MEAL Plans, LIFT projects are encouraged but not required to
conduct a MEAL stakeholder analysis. To be strategic in deciding what M&E
information to collect, it is often helpful to identify who are the most important
stakeholders to the project, what M&E information they need, how you want them
to use that information, when they need that information and, depending on your
audience, in what format.

In addition, and in keeping with the development principles of participation and
ownership, it is important to involve major stakeholder groups in the different
phases of MEAL, including design and planning, data collection, data analysis and
interpretation, and use of results. Research shows that involving project stakeholders
in monitoring and evaluation can lead to better quality data, better use of results,
greater ownership of M&E processes and the program overall, and capacity building
in M&E skills. A fundamental way to involve project stakeholders in M&E is to
conduct a stakeholder analysis and decide with them what information should be
collected and the larger questions that need to be answered.

Table 1. MEAL Stakeholder Analysis (column headings)
• Who are our major stakeholders and what are their roles/levels of importance?
• What MEAL information do they need?
• How will they use that information (for what purpose)?
• When do they need the information and in what format?

After completing the stakeholder analysis, you then need to decide which
stakeholders’ information needs should be addressed and at which point in time.
This will depend, of course, on the roles of the various stakeholders and the context
in which the project operates, both of which may shift over time.

Project Measurement Framework


IPs (implementing partners) will need to develop a clear Measurement Framework (or, more simply, a data
collection plan) that sets out what data they will collect to track the achievement of
the outputs and outcomes defined in their TOC. Project Measurement Frameworks
should include indicators that are a combination of outreach, output, and outcome
measures, as well as any other performance metrics that are relevant to the project
(e.g. financial ratios for micro-finance institutions, return on investment (RoI)
calculations for proposed business models). Indicators may be quantitative or
qualitative in nature.

In selecting the appropriate indicators, first state those that you are required to
report to LIFT twice a year. These will include reporting on certain activities and
outputs, as well as some outcomes, which align with the indicators in LIFT’s logical
framework. This is important because LIFT is required to report on the achievement
of the overall logframe to the LIFT Fund Board on an annual and semi-annual basis.
Because different projects will report on different LIFT logframe indicators,
please check with your LIFT programme team to determine the specific indicators
your project is expected to report on. After including the required LIFT logframe
indicators, and based on your optional MEAL stakeholder analysis, then select
additional project indicators that are needed to track the other parts of your TOC.

Note that it is important to measure the major pieces of your TOC, but it is not
necessary to track everything in the TOC.

As can be seen in the sample Measurement Framework, projects also should specify
the annual targets (by calendar year) for each indicator. In the Measurement
Framework, projects also need to state what methods and tools will be used to
collect the needed data, who will collect it, and with what frequency.
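The elements of a Measurement Framework row described above (the output/outcome being tracked, its indicator, annual targets by calendar year, the collection method, who collects the data, and how often) can be pictured as a simple record. The sketch below is purely illustrative; the field names and example values are assumptions, not a LIFT-prescribed format.

```python
# Illustrative sketch of one Measurement Framework row, as described above.
# Field names and values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class IndicatorRow:
    outcome: str          # output/outcome from the TOC being tracked
    indicator: str        # quantitative or qualitative indicator
    annual_targets: dict  # calendar year -> target value
    method: str           # data collection method/tool
    responsible: str      # who collects the data
    frequency: str        # how often it is collected

row = IndicatorRow(
    outcome="Households adopt improved farming practices",
    indicator="% of target households using at least one promoted practice",
    annual_targets={2016: 30, 2017: 50},
    method="Annual household survey",
    responsible="Project M&E officer",
    frequency="Annually",
)
print(row.annual_targets[2017])  # → 50
```

A structure like this makes it easy to check that every major output/outcome in the TOC has at least one indicator with targets and a named data collector.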

Some organizations use a different format for their measurement framework.


Projects are free to submit any format they wish, as long as they include as a
minimum the major outputs/outcomes from their TOCs, indicators, annual targets,
and data collection methods/tools.

Project Evaluation and Learning Questions


As part of LIFT’s greater focus on programme learning, it is important to ask
questions about the workings of the TOC, including the hows and whys, and about
possible effects outside of the TOC. To get to these types of questions, LIFT is asking
for meaningful Evaluation and Learning Questions (ELQ) at the LIFT overall,
programme and project levels.

LIFT overall has developed a series of seven Evaluation and Learning Questions,
which it will report on regularly and answer either throughout or at the end of the
Fund in 2018.

IPs are expected to clearly indicate which of the LIFT overall and programme level
questions their project will help to answer. A headline question relevant to all
projects, for example, will be how effective and cost-effective a project has
been in achieving its outcomes. In addition to addressing the LIFT-level Evaluation
and Learning Questions, projects should identify additional questions based on their
own TOCs and specific learning priorities and interests.

In addition to stating their ELQs, projects should also outline why the questions are
important/of interest and the methods and approaches the project will use to
answer these questions. Ideally, a project should not propose more questions than it
can manage efficiently and report on properly. Note that projects are expected to
report in the LIFT semi-annual and annual reports their progress in answering their
Evaluation and Learning questions.

What is the purpose of a MEAL plan?


A MEAL plan is your project or programme’s roadmap to implementing your
MEAL-related activities as intended, in a timely and efficient fashion, and to ensure
continuous learning throughout the project and programme cycle. Specifically, the
organization defines a MEAL plan as a management tool that can be used to monitor
and evaluate interventions, projects or programmes. Save the Children’s Essential
Standards state that all projects/programmes should be covered by a MEAL plan
with appropriate resources in place to implement the plan.

MEAL plans come in different forms and agencies refer to such tools using a variety
of terms, including M&E Plan, Performance Monitoring Plan, Monitoring Matrix
and others. However, a MEAL plan should not be confused with a project or
programme framework or results framework, a tool for project planning, design,
management, and performance assessment that illustrates project elements (such as
goals, objectives, outputs, outcomes), their causal relationships, and the external
factors that may influence success or failure of the project.

Overall, a comprehensive MEAL plan demonstrates the linkages between the
activities we implement and the changes we seek to achieve. A comprehensive
MEAL plan simplifies complex information by transforming programming strategy
into concrete and practical activities. It forces us to think systematically about each
of the components of our project/programme.

It also provides space to identify the indicators or variables we need to measure in
order to answer questions about the effects of our interventions, the data collection
tools we need to measure these variables, data collection and data management
processes (including key staff responsible for such processes), how resultant data
can be shared and the key audiences with whom such data should be shared.

With a completed MEAL plan and budget, we can quickly determine if our planned
MEAL activities are practical and relevant to the objectives of our project or
programme and establish relationships between different components of our work.
We can ensure technical and operational quality (e.g. timeliness of
project/programme activities) as well as accountability to multiple internal (e.g. other
departments within your office, Members, Country Offices, Global Initiatives,
International Board) and external (e.g. children and communities whom we serve,
donors, government partners) audiences.
successes for projects/programmes with proven results and document specific
weaknesses for less successful projects/programmes, they are useful tools for donors
to make funding decisions.

What is in a MEAL Plan?


The content and the degree of details in a MEAL plan may vary depending on the
size and nature of the project/programme. Several projects could be captured within
one programme-level MEAL plan. In fact, MEAL plans can exist at any level,
including country, regional and global. The MEAL plan should cover the entire
length of the project/programme.

There are recommended categories to be included in MEAL plans, but no prescribed
format, because the actual template depends on the donor or other context-specific
stakeholders. That said, the following components should be considered in MEAL
plans and budgets:
• Key project/programme information
• Objectives and Indicators for monitoring, with baselines
• Qualitative information needed for monitoring
• Planned evaluations (projects/programme reviews, mid-term, final)
• Real-time reviews (in humanitarian responses)
• Evaluations of Humanitarian Action
• Training and capacity strengthening of key staff on MEAL activities
• Accountability Activities (e.g. information needs assessment, establishing and
maintaining complaints response mechanisms, ways of sharing information and
facilitating two-way communication between beneficiaries and Save the
Children/our partners)
• Data Quality and Validation
• Roles and Responsibilities
• Management Information System(s)
• Communicating and using data
• Work plan
• MEAL budget (recommended 3-10% of total project/programme for MEAL
activities)
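As a rough illustration, the recommended categories above can serve as a checklist when drafting or reviewing a MEAL plan. The sketch below is hypothetical, not an official template; the category names are abbreviated from the list above.

```python
# Hypothetical checklist (not an official template) for checking whether a
# draft MEAL plan covers the recommended categories listed above.
RECOMMENDED_CATEGORIES = [
    "Key project/programme information",
    "Objectives and indicators, with baselines",
    "Qualitative information needed for monitoring",
    "Planned evaluations",
    "Accountability activities",
    "Data quality and validation",
    "Roles and responsibilities",
    "Management information system(s)",
    "Communicating and using data",
    "Work plan",
    "MEAL budget",
]

def missing_categories(plan_sections):
    """Return the recommended categories not yet covered by a draft plan."""
    covered = {s.lower() for s in plan_sections}
    return [c for c in RECOMMENDED_CATEGORIES if c.lower() not in covered]

draft = ["Key project/programme information", "Work plan", "MEAL budget"]
gaps = missing_categories(draft)
print(f"{len(gaps)} categories still to address")  # → 8 categories still to address
```

Even as a paper exercise, walking through such a checklist helps reveal gaps before the plan is finalized with the donor.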

How do I get started?


Step 1: Identify your project/programme goals.
The first step in creating a MEAL plan is to identify your programme or project goals
and activities that you plan to implement. A good MEAL plan begins with the
specific objectives of the project/programme. In fact, a comprehensive MEAL plan
can serve as an effective tool in creating a coherent project/programme proposal as
it can reveal gaps in planning and any underlying assumptions we are making.
What are the products, materials, and capital goods (outputs), short- or medium-
term results (outcomes) and long-term results (impacts) your project/programme
intends to achieve? These goals and activities are typically specified during the
proposal writing phase. Usually this information is articulated in your programme
framework document and is described in the form of a results framework, logical
framework or any other types of framework that were used to design the
project/programme (see Session 3 on Programme Frameworks, Objectives and
Indicators for more details).

Next, consider the context of your project/programme. No project/programme
takes place in isolation. It is important to understand the national and local policies
as well as the administrative structures that can affect the implementation of our
project/programme. Any contextual information that can be collected prior to the
start of the project/programme will provide a critical basis for understanding the
risks, if any, associated with successful implementation of MEAL activities.

Step 2: Identify your project/programme indicators and targets.


The second step in creating a comprehensive MEAL plan is to identify the indicators
you will use to measure the progress and achievements of your project/programme,
as well as targets. Underlying all MEAL plans is the need to determine what it is we
are trying to measure and why. Identifying and selecting both quantitative and
qualitative indicators is critical prior to beginning MEAL activities. Remember that
indicators must be:
• SMART (Specific, Measurable, Achievable, Realistic/Relevant, Time-Bound)
• Linked to your project/programme MEAL plan
• Useful for programme decision-making
• Consistent with international standards & other reporting requirements (as
appropriate)
• Realistic to collect (feasible).

While indicators are initially selected during the project/programme design phase,
they can change over time as the context evolves. Just as a project/programme can
change throughout the funding period, so can a MEAL plan. You need to ensure
there is a mechanism in place to guarantee the continued relevance of the MEAL
plan to the project/programme.

When setting targets for your indicators, you must focus on what your project/
programme can realistically achieve given your human and financial resources and
the environment in which you are operating. For each indicator, consider the
following factors: baseline levels, past trends, donor expectations and logistics to
achieve targets. In addition to the magnitude or size of change over time, when
setting specific project/programme targets, you must also decide on the direction of
change that is desired over time. You may also wish to establish annual or
intermediate targets for multi-year projects/programmes.
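As a worked illustration of the arithmetic behind baselines, targets and direction of change, the hypothetical sketch below expresses progress as the fraction of the planned baseline-to-target change achieved so far. It is not part of any official guidance.

```python
# Hypothetical sketch: progress towards a target, measured against a baseline.
# Because progress is computed relative to (target - baseline), the same
# formula works whether the desired direction of change is up or down.

def progress_towards_target(baseline, target, actual):
    """Fraction of the planned baseline-to-target change achieved so far."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (actual - baseline) / (target - baseline)

# Indicator expected to rise: baseline 40%, target 80%, currently 60%
print(progress_towards_target(40, 80, 60))   # → 0.5 (halfway there)

# Indicator expected to fall (e.g. a dropout rate): baseline 20%, target 10%
print(progress_towards_target(20, 10, 15))   # → 0.5
```

Intermediate targets for multi-year projects can then be read off the same scale, e.g. 0.5 of the planned change by the end of year one.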

Step 3: Select your data collection methods and data sources for prioritized
indicators in your MEAL plan.

There are many ways to collect data. The choices you make will largely depend
upon your available budget and the availability of human resources. Key factors
here include the availability of skilled staff familiar with conducting monitoring and
evaluation activities, the appropriateness of the methodology for your MEAL
objectives, and the context of your project/programme (e.g. political, social,
security). You will also need to determine whether any special studies will be
conducted and what study design will be used. You should also carefully consider
the internal and external capacity to conduct any special studies (this includes
technical capacity as well as cost considerations).

Something that often is neglected when developing a MEAL plan is the need to
assess the MEAL technical capacity of your project/programme. When preparing
the MEAL plan, you must at least consider the existing data collection systems and
staff capacity in MEAL. The programme may have a MEAL unit with staff trained in
monitoring and evaluation methods that will be responsible for leading the
development and coordination of the MEAL plan. If this is not the case, then there
may be individuals who are motivated and have an interest in monitoring and
evaluation. It is important to identify those people even if they do not have a formal
monitoring and evaluation position; it is also important to work with and strengthen the
capacities of these people in MEAL. Assessing current capacity and using resources
that are already available will help us to avoid duplication of data collection and
reporting and collecting information that will not be used. Finally, no matter the size
of your project/programme, each data collection strategy you select should be as
rigorous as possible to ensure the data gathered are objective and unbiased or
impartial.

Step 4: Decide upon your data analysis, quality assurance/validation and
management strategy.
Every comprehensive MEAL plan should include a method for analyzing and
assuring the quality or validating the data you gather, both qualitative and
quantitative. Linked with this can be a capacity strengthening strategy to enhance
the MEAL-specific skills of staff. For example, workshops to raise awareness of the
MEAL plan, data collection methodologies and tool development, familiarity with
Excel or statistical software packages such as SPSS. The MEAL plan should also
specify, for each indicator, the individuals or units responsible for management and
implementation of the MEAL plan, as well as how the data will be received, stored
and accessed. It should describe “data flow” from initial collection to final storage
(including physical storage of original data collection tools and electronic databases)
and use.

Step 5: Identify your strategy for reporting and disseminating data.


When crafting your MEAL plan and finalizing the MEAL activities you will
implement, it is critical to consider if, when and how you will interact with
stakeholders and beneficiaries your project/programme serves with respect to your
MEAL activities (please see Session 7 on Accountability for more details). A
dissemination strategy is required that will specify how to deal with the results of
your MEAL activities, how widely you would like to circulate the results and the
stakeholders you wish to target. Stakeholders could include learners, teachers, policy
makers and other decision makers. As you will learn in Session 9, Use of MEAL
Data, it is critical to think about how you will use data gathered through different
data sources from the very beginning of your project/programme to conserve
limited human and financial resources and to ensure you are continuously learning
and improving your interventions throughout the project/programme cycle.

Consider the level of participation you require from each group, including children
and youth. How often should your team meet with key stakeholders? What form
will these meetings take – information sharing for particular groups, circulation of
final reports, individual discussions, or other fora? Will data be made publicly
available? Involving children and youth in meaningful and appropriate ways takes
careful preparation and planning. For example, if you plan to share the lessons
learned from a final project evaluation with children who participated in the project,
you cannot simply provide them with a copy of the final report. The key messages
will need to be simplified and adapted so they are understandable, relevant and age-
appropriate. This topic will be covered in greater detail in Session 8, Children’s
Participation in MEAL.

MEAL budgeting
MEAL activities requiring a budget
The budget allocated towards MEAL activities should include both sources of funding
and any existing gaps. MEAL-related costs that should be included in the project
budget might include the following:
• Baseline studies or data collection activities
• Establishing a monitoring and evaluation or Management Information System
(MIS)
• Training and capacity building for all staff and partners involved in developing
and implementing MEAL activities
• Ongoing, routine monitoring activities, including supervision visits, data
collection activities, printing of questionnaires, travel and refreshments for
respondents, and data inputting
• Mid-term review (if required)
• Final evaluation (if required)
• Learning events and dissemination activities, including publications and public
meetings where appropriate
• Staff time/salary for dedicated monitoring and evaluation personnel can also
be included.

MEAL as a percentage of project/programme budgets


As a rough guide, 3-10% of your total project budget should be allocated towards
MEAL activities. This should ensure that implementation is not adversely affected
while project/programme results can be documented reliably and credibly.

The final allocation for MEAL costs will be highly dependent on the size and
complexity of the project and the project design (evaluation design). For example, a
small stand-alone intervention that is focusing on distribution of teaching materials
in one district (with concrete outputs and outcomes) may require less funding for
MEAL activities compared with a multi-country initiative focused on
implementation of national policies.
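The 3-10% rule of thumb above is simple arithmetic; as an illustration (not an official budgeting tool), the bounds for a given total project budget can be computed like this:

```python
# Illustrative sketch: computing the recommended MEAL allocation range
# from a total project budget, using the 3-10% rule of thumb above.

def meal_budget_range(total_budget, low_pct=0.03, high_pct=0.10):
    """Return the (minimum, maximum) recommended MEAL allocation."""
    return total_budget * low_pct, total_budget * high_pct

low, high = meal_budget_range(250_000)  # e.g. a hypothetical $250,000 project
print(f"Recommended MEAL allocation: ${low:,.0f} - ${high:,.0f}")
# → Recommended MEAL allocation: $7,500 - $25,000
```

Where the final figure lands within that range depends, as noted above, on the size, complexity and evaluation design of the project.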
Summary of this Session
• A MEAL plan can be defined as a management tool that can be used to monitor
and evaluate interventions, projects or programmes. A MEAL plan is your
project or programme’s roadmap to implementing your activities as intended,
to conduct monitoring and evaluation activities in a timely and efficient
fashion, and to ensure continuous learning throughout the project and
programme cycle.

• Different organizations have recommended categories to be included in MEAL
plans, but no prescribed format. Always use a donor’s preferred template when
creating your MEAL plan; if your donor does not have a preferred template,
you can use Save the Children’s MEAL categories described in the Standard
Operating Procedure.

• A MEAL plan should be developed with the Programme Management team
and relevant partners and stakeholders to ensure ownership and sense of
shared responsibility, especially where they are responsible for any aspect of
data collection.

• MEAL activities should be allotted between 3-10% of your overall project or
programme budget. The exact amount will vary depending upon the scope of the
project and available resources.

CHAPTER 16

BASELINE AND EVALUATION DESIGN AND MANAGEMENT

Introduction:
How do you know if a given intervention is working? How can you measure and
demonstrate if it is producing changes in a systematic manner? Are there different ways of
doing so? This session will answer these and other similar questions and will provide you
with useful skills to manage or actively participate in the process of conducting a baseline
study, needs assessments, project evaluations and real-time reviews.

Indeed, these questions and processes are key to improving the effectiveness and quality of
our programmes. Our dual-mandate Quality Framework includes explicit components
related to evaluation and learning, with the aim of improving programme quality and
measuring and demonstrating impact for stakeholders.

The purpose and use of baselines and evaluations


Introduction
What is a baseline?
Baselines are data collected at the outset of a project (or an activity) to establish the pre-
project conditions against which future changes amongst a target population can be
measured.

The Save the Children Baselines Essential Standard states that: ‘Projects and
programs establish a baseline (or other appropriate equivalent) as a comparison and
planning base for monitoring and evaluations’.

The information gathered (and analyzed) in the baseline consists of data on indicators
specifically chosen to monitor project performance on a regular basis. It also considers the
anticipated use of these indicators at a later time to investigate project effects and impacts.
Indicators (and methods of data collection and analysis) used in baseline surveys may be
qualitative or quantitative. Baseline studies can also serve to confirm the initial set of
indicators to ensure those indicators are the most appropriate to measure achieved project
results.

What is evaluation?
There are many definitions of evaluation: most of them agree that it is ‘a systematic
assessment of an on-going or completed project, programme or policy, its design,
implementation and results’.

Evaluation also refers to the process of determining the worth or significance of an activity,
policy or program. Evaluations should provide information that is credible and useful,
enabling the incorporation of lessons learned into the decision–making process of both
recipients and donors.
As laid out in the table below, evaluation is different from monitoring. Monitoring can
be defined as ‘a continuing assessment based on systematic collection of data on specific
indicators as well as wider information on the implementation of projects’. Monitoring
enables managers and other stakeholders to measure progress in terms of achieving the
stated objectives and the use of allocated funds.

Monitoring:
• is ongoing
• gathers information related to the programme regularly, on a day-to-day basis
• provides information on whether and how the planned activities are implemented or not
• refers to activities, outputs and intermediate results/outcomes
• fosters informed re-design of the project methodology
• collects data on project indicators
• is performed by project staff (internal)

Evaluation:
• is periodical
• assesses the programme’s design, processes and results
• provides information on what the programme’s effects are
• refers to intermediate results/outcomes and the bigger/strategic objectives and goal (in the case of an impact study)
• can foster informed review of both the current project (mid-term evaluations) and new projects (final evaluations)
• uses data on project indicators collected through the monitoring process
• can be performed by an external evaluator, by a mixed team of internal and external evaluators, or by an internal team only

Table 1: Differences between monitoring and evaluation

There are many types of evaluations and also a series of related terms, including: proof of
concept evaluation, post-programme evaluation, thematic evaluations, real-time evaluation,
review, ex-ante evaluation, ex -post evaluation, formative evaluation, summative evaluation,
mid-term evaluation, process evaluation, etc. These are described in more detail in the
Evaluation Handbook referenced at the end of this module.

Particularly worth mentioning here, however, are Real Time Reviews (RTRs) and Evaluation
of Humanitarian Action (EHA) in emergency contexts. RTRs combine two previously
separate exercises: Real Time Evaluations and Operational Reviews. Therefore, an RTR looks
at both programmatic and operational aspects of a humanitarian response. The primary
purpose of an RTR is to provide feedback in a participatory way in real time (i.e. during the
implementation of an emergency response, and during the review itself) to those executing
and managing a humanitarian response: this is designed to improve operational and
programmatic decision making. RTRs are internal exercises that aim to have a light touch so
as not to interfere with programming.

RTRs examine the initial phase of the response. The findings and recommendations should
help staff to make immediate adjustments to both programming and operations in order to
better meet the needs of beneficiaries.

An Evaluation of Humanitarian Action is a systematic and impartial examination of
humanitarian action intended to provide lessons to improve policy and practice and enhance
accountability in emergency responses. It is just one type of evaluation with distinct
characteristics looking at the OECD DAC criteria but also, for example, our Emergency
benchmarks, and the context of an emergency that can make access to data more difficult,
etc.

Steps to conduct baselines and evaluations


The box below has some key steps that would need to be undertaken for conducting an
evaluation. On a piece of paper, list these in a correct order. Then also reflect on any steps
that may be missing.

Are there any steps that are different for a baseline study?
• Involving stakeholders
• Designing and testing the evaluation tools
• Preparing a management response to the report
• Data management
• Agreeing a timeline
• Sharing findings with stakeholders
• Putting the evaluation team together
• Training
• Reviewing the evaluation report
• Making an evaluation plan
• Developing an action plan
• Approving the evaluation report
• Setting a budget
• Writing an evaluation report
• Drawing up ToR

The study process


The chart below describes the key steps in a baseline or evaluation study process: from
designing the study, through planning and implementing it, to using the results.

In the design phase, we clarify the purpose of the survey, define the objectives and
questions, and decide on the design and methodology. It is here that you need to consider
issues like the level of rigor, data sources, sampling and data collection, and data quality.
Different evaluation models and designs are presented in section 3 below.

The planning and implementation processes include discussing how stakeholders will be
involved, preparing a budget and timeline, and forming the survey team. It is also here that
you should think about the management process, which includes defining roles and
responsibilities for the evaluation team, and preparing the terms of reference and contracts.
In the inception phase, you would then carry out more detailed planning and test the data
collection tools and other methods of measurement, involving children and stakeholders.

During the implementation of the study, in addition to the data collection, input, cleaning,
analysis and storage, writing of the evaluation report should also start.

Finally, it is important to take time and use resources to communicate and disseminate the
findings, plan follow up(s) on any recommendations and prepare the response to the
evaluation and the management action plan. This should also include ways to ensure that
learning informs future project or programme planning, design and implementation. Session
9 ‘Use of MEAL data’ explores in more detail how to use the evaluation results.

The process represented in the graph below can be adapted to different types of evaluations,
regardless of the context and the methodology chosen. This process can also be helpful when
planning baselines, end-line studies or different types of research, although, as indicated
earlier in this session, these are different from evaluations.

The evaluation process can be summarized in four phases:

1. Purpose and design: scope and objectives; context; audience; questions; type of
evaluation; criteria; design methodology; ToR/evaluation plan.

2. Planning: design (baseline, mid-term, end-line, post); stakeholders’ participation;
data collection methods and tools; sampling; timeline and budget; evaluation team;
roles and responsibilities.

3. Inception and evaluation work plan: training; child/stakeholder participation; data
collection; data analysis and interpretation; conclusions; recommendations; writing
the report; review and approval; implementation.

4. Sharing, use and follow-up: organisational response; publication, dissemination and
sharing; action plan; decision making; improving programme quality and
implementation; learning.

Table 2: Evaluation process. Source: Adapted from SC Evaluation Handbook and
corresponding M&E Training modules

You may want to have a break here before we explore different evaluation designs!
Choosing the evaluation approach and design

Evaluation models and approaches

In recent years conducting evaluations has become a frequent practice in the international
cooperation sector. Donors demand results and value for money, as well as transparency and
efficiency in the use of resources. There is a shared concern for measuring interventions to
see if what we are doing works, and to what extent. Evaluations are used for accountability
and decision making, but also for improvement and learning, or enlightenment for future
actions, in the words of Stufflebeam and Shinkfield (1987:23).

Evaluations try not only to answer questions such as ‘Has the intervention achieved its objectives?’,
but also, ‘How were the objectives achieved?’ It is important to identify why you want to
conduct an evaluation and what will be the key questions you want the evaluation to
answer, so you can choose the most appropriate approach and design. For instance, if you
want to conduct an evaluation primarily to show results and to be accountable for the use of
resources, you may want to opt for a ‘criteria-based standardized model’. This approach is
widely used in the international development sector and it fits very well with objectives and
results-based planning tools such as the log-frame. This evaluation approach uses pre-
established criteria to organize the evaluation questions and the evaluation design, such as
the DAC criteria: effectiveness, efficiency, relevance, impact and sustainability. The criteria-
based model assesses the programme’s performance against some pre-established standards
that determine if the objective has been fulfilled or not.

However, there are other approaches that go beyond the fulfillment of objectives and focus
on other aspects of the intervention. For instance, Theory-based evaluation approaches
(initially proposed by Carol Weiss and adapted by other authors) allow a wider look at the
programme. This approach considers the intervention as a system with inter-related
components that need to be looked at during the evaluation to understand how the changes
are produced. It will help you to read the information relating to the programme’s
‘black box’. This is particularly useful if you plan to use your evaluation for learning and
decision making, or if you want to document your programme so that it can be replicated or
scaled up.

In advocacy interventions, where the use of experimental or quasi-experimental methods is
not feasible, you would use specific approaches, such as outcome mapping, contribution
analysis and process tracing, each described below.

Outcome Mapping considers that development is essentially about people relating to each
other and their environment, so it focuses on people and organizations. It shifts away from
assessing the products of a programme to concentrate on changes in behaviors,
relationships, and also the actions and activities of the people and organizations with whom
a development program works directly.

Contribution analysis compares an intervention’s theory of change against the evidence in
order to come to conclusions about the contribution that it has made to observed outcomes.
It examines to what extent observed results are due to the activities of an advocacy initiative
rather than other factors, whether the advocacy initiative has made a difference, and
whether or not it has added value.

Process tracing is a qualitative research method that attempts to identify the causal
processes – the causal chain and causal mechanism – between a potential cause or causes
(e.g. an intervention) and an effect or outcome (e.g. changes in local government practice).
The method involves evidencing the specific ways in which a particular cause produced (or
contributed to producing) a particular effect. A key step in process tracing is considering
alternative explanations for how the observed outcomes came about, and assessing what
evidence exists to support these.

Rights-based approaches organize the evaluation questions and design around the rights
that the intervention intends to promote, protect or fulfill. The United Nations Convention on
the Rights of the Child (1989) is the primary human rights framework for child-rights based
programming and evaluation. The rights and principles presented in the convention are
usually organized around four groups: survival and development; non-discrimination;
participation; and the best interest of the child. This approach puts children at the centre and
recognizes them as rights holders. It analyses power relations between groups, in particular
between duty bearers and rights holders, focusing on the most vulnerable groups with a
particular interest on how principles such as non-discrimination and participation have been
integrated into the programme. It prioritizes sustainable results, looking at the real root
causes of the problems instead of just considering their immediate causes.

Which one is the best approach?
It all depends on the purpose of your evaluation, what you expect the evaluation to
tell you and how you and other stakeholders are planning to use the evaluation. You
also need to check if the donor is requesting a particular approach or if it is possible
for you to propose one. These models can also be complementary, so you can
combine elements of several of them to create a tailored model. For instance, you can
organize the evaluation questions by groups of rights and sub-grouping them as well
by criteria and standards, including questions that relate to the process and the
structure and not only to the results.

It is not possible to be completely objective when evaluating programmes, so if you want your
findings to be reliable, it is important to be as systematic as possible. Here are some tips that
can help you:
• Reflect stakeholders’ information needs in your evaluation questions.
• Assure the quality of your data collection and analysis techniques
• Follow four separate steps of a systematic assessment: data analysis; data
interpretation; conclusions; and recommendations.
• Ensure the highest standards on ethics, stakeholder participation and transparency

Evaluation Design
If your evaluation will measure results, then once you have decided the evaluation approach
and the main evaluation questions, you will also need to choose the appropriate evaluation
design. This decision will also depend on elements such as the available resources (some
designs are more resource-intensive than others) and the implementation stage of the
programme. Some evaluation designs require conducting measurements before the
programme activities actually start (baselines) so it will not be possible to use them if you
only start planning for your evaluation when the programme is in an advanced
implementation stage.

Why do you need a good evaluation design?
When we conduct an evaluation we want to find out if there have been any changes
(gross effect), but also if the changes are produced by the programme. This means: can
the changes be attributed to the programme? If yes, how much of the change is
actually due to the programme activities (net effect)? This can only be established if we
have a sound evaluation design.

Robust evaluation designs present high degrees of:


• Internal validity => the extent to which alternative explanations or hypotheses for
the observed changes are controlled (ruled out).
• External validity => the extent to which results can be generalized beyond the object
of study, so that the same conclusions can be applied to other subjects, places or
populations. This depends on the quality of the sampling, the margin of error we are
dealing with and the control of potential bias.
You can find further information on these concepts in session 6 –‘Methods of data collection
and analysis’ in this training module.
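For instance, the margin of error mentioned above can be approximated for a proportion estimated from a simple random sample. A minimal sketch (the 1.96 z-value corresponds to 95% confidence; the sample figures are invented for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p estimated from
    a simple random sample of size n (default: 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. 60% of a 400-person sample report improved access
print(round(margin_of_error(0.6, 400), 3))  # → 0.048, i.e. about ±4.8 points
```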
There are two strategies to ensure the internal validity of your evaluation design.
These can also be combined. You can (1) compare groups between themselves or over time
or you can (2) use statistical analysis.

1. When you compare groups between themselves and/or over time, you can
see their differences and eliminate the effect of possible alternative explanations
you may be considering for the changes.
In order to do this, you can:
a) Compare the treatment group with the comparison or control group (not
receiving project activities/services)
b) Compare the programme group with itself over time
2. When using statistical analysis it is possible to control, or hold constant, the
effect of possible alternative explanations. This requires identifying the variables
to be controlled and gathering data on them, then using statistical analysis
techniques to separate the effects of each variable.
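As an illustration of the second strategy, a regression can hold a covariate constant while estimating the programme effect. The sketch below uses fabricated data and ordinary least squares via NumPy; the variable names (age, outcome) are only examples:

```python
import numpy as np

# Fabricated data: a 0/1 programme-participation indicator, one
# covariate to control for (age), and an outcome score.
treatment = np.array([1, 1, 1, 1, 0, 0, 0, 0])
age       = np.array([10, 12, 11, 13, 10, 12, 11, 13])
outcome   = np.array([40, 46, 43, 49, 30, 36, 33, 39])

# Design matrix: intercept, treatment indicator, covariate.
X = np.column_stack([np.ones_like(age), treatment, age])

# Ordinary least squares: the coefficient on the treatment column
# estimates the programme effect with age's influence separated out.
coeffs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
intercept, treatment_effect, age_effect = coeffs
print(round(float(treatment_effect), 1))  # → 10.0
```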

What is a treatment group? (TG)
The group participating in the programme, receiving the services or goods.

What is a comparison group? (N-ECG)
This is a group with very similar (almost identical) characteristics to the treatment
group, but that is not participating in the programme. It is also called a non-
equivalent group: although almost identical, its members are not selected through
random sampling, so possible bias means it is not totally equivalent.

What is a control group? (ECG)
This group is also a comparison group – not participating in the programme – but its
members have been randomly selected. This means the group can be considered
completely equivalent to the treatment group.

In this session we will concentrate on the first methodological strategy (comparing
groups between themselves and/or over time). Let’s see what the best-known
quantitative evaluation designs for comparing groups are!

From least to most robust:

1. Post-test only (TG only) – non-experimental design; little internal validity.
In this model you take one measurement at the end of the intervention with the
treatment group. For instance, you can conduct a survey with programme
participants at the end of the activities. As there is only one measurement, we
cannot compare the data with other groups or with the group itself before the
activities, so it is not possible to establish whether there have been changes and
whether these are attributable to the programme. We can only document changes
perceived by participants, to which the programme will have contributed.

2. Pre- and post-test (TG only) – non-experimental design; only the gross effect can
be extracted, i.e. the results that the programme is apparently producing, or the
changes as perceived by those participating in the programme.
This model compares the results before and after the programme by measuring
changes only in the treatment group. It is not possible to know whether other
variables have affected the intervention and the changes, if any.

3. Post-test only with N-ECG – non-experimental design; the results include
variables related to the programme as well as other factors, so we can only talk
about contribution, not attribution.
The measurement only takes place at the end of the programme, but both the
treatment group and the comparison (non-equivalent) group are measured. You
can compare the results between the two groups, but there is no comparable data
on the initial situation, before the programme started.

4. Pre- and post-test with N-ECG – quasi-experimental design.
With this model you perform measurements before and after the programme on
both the treatment group and the comparison group. It is very close to the real
experiment, but not as robust, because the comparison group is not selected
randomly. This is why it is called a non-equivalent group.

5. Time series (longitudinal analysis) – quasi-experimental design; medium internal
validity.
Here several measurements are taken before the programme starts, while it is
being implemented and after it has finished, so you can have a broader picture of
how the treatment group is changing, if this is the case. The differences between
measurements can be treated as net effect, but we cannot be as sure as with the
experimental design. A comparison group (N-ECG) may or may not be included.

6. Experiment with pre- and post-test – experimental design; high internal validity;
calculates the net effect.
This model is also called a ‘Randomized Controlled Trial’ (RCT) and is considered
the only real experiment. Here the individuals of both the treatment group and the
control group are selected randomly from the same pool of individuals. The
treatment group receives the programme that is being tested, while the control
group receives an alternative treatment, a dummy treatment (placebo) or no
treatment at all. The groups are followed up to see how effective the experimental
treatment was, and any difference in response between the groups is assessed
statistically.

Table 3: Examples of quantitative evaluation designs. Source: self-elaborated

The non-experimental designs (models 1, 2 and 3) are not very robust and only
provide the gross effect of a programme, but require fewer resources and the
analysis is not complex. Sometimes, if the programme has already started, models 1
and 3 are the only ones that are available for us to do. For this reason it is worth
planning the evaluation during the programme planning stage, so a pre-test can be
planned for and performed before the activities start.

The experimental design (number 6) provides the soundest methodology to extract
the net effect, but it is expensive and can very often pose ethical problems, because
individuals may be excluded from a programme just so they can be treated as the
control group. For this reason Campbell introduced quasi-experimental models
(such as models 4 and 5) that provide an acceptable degree of internal validity while
posing fewer ethical issues.
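The difference between a control group (ECG) and a non-equivalent comparison group comes down to how members are assigned. A minimal sketch of the random allocation used in an RCT (the participant IDs are invented; real trials also involve consent, ethics review and often stratification):

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into treatment and control groups
    of equal size, making the groups statistically equivalent."""
    rng = random.Random(seed)        # fixed seed so the split is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment group, control group)

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs
treatment_group, control_group = randomize(participants)
print(len(treatment_group), len(control_group))  # → 10 10
```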

Strictly speaking, impact evaluations are those which provide the net effect: this
means that they calculate the ‘amount of change’ that can be claimed by the
programme (attribution). Otherwise, we can only say that the programme has
contributed to the changes.

Let’s see some practical examples:


In the example below, if we only look at the treatment group there seems
to be a 20-point gain (45-25=20). However, a closer look shows that the
comparison group, which has not received the programme, has gained 10 points (35-
25=10). This means that the programme has only produced a 10-point gain. The
other 10 points may be due to other factors, but we can only see this if we compare
the results between the two groups.
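The arithmetic behind comparing the two groups over time is often called the 'difference-in-differences'. A minimal sketch, using the figures from the example above:

```python
def net_effect(tg_pre, tg_post, cg_pre, cg_post):
    """Difference-in-differences: the change in the treatment group
    minus the change in the comparison group."""
    return (tg_post - tg_pre) - (cg_post - cg_pre)

# Treatment group moves 25 -> 45; comparison group moves 25 -> 35.
print(net_effect(25, 45, 25, 35))  # → 10 points attributable to the programme
```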

[Figure: line chart from the beginning to the end of the project. The project group
rises from 25 to 45 points while the comparison group rises from 25 to 35: a 10-point
difference attributable to the project.]

Figure 1: Difference between baseline and endline (example 1). Source: elaborated by
Larry Dershem
In the second example no real change can be attributed to the programme because,
even though there is a gain of 20 points, the same gain has been recorded in the
group that is not receiving the programme. Thus, changes are due to other variables
or correspond to a natural trend.
[Figure: line chart showing the project and non-project groups each making the same
20-point gain over the project period, so no difference is attributable to the project.]

Figure 2: Difference between baseline and endline (example 2).
Source: elaborated by Larry Dershem

The same happens in the third example: even though the two groups start at different
points, they both make the same increase (45-25=20 for the project group; 35-15=20
for the non-project group).

[Figure: line chart showing the project group rising from 25 to 45 and the non-project
group rising from 15 to 35: the same 20-point gain, so no difference is attributable to
the project.]

Figure 3: Difference between baseline and endline (example 3). Source: elaborated by
Larry Dershem

To summarize:
• There is a variety of programme evaluation approaches to choose from. The most
widely-known and used in this sector is the criteria-based standardized model.
But you can also consider other options and methodologies, such as theory-based
models, outcome mapping, process tracing, contribution analysis and rights-
based approaches if they are more adequate for your intervention and context.

• Evaluation designs determine whether the evaluation is able to provide the gross
effect (changes that occurred) or the net effect (changes that can be attributed to
the programme).

• In this session we have explored the main quantitative evaluation designs and
discussed their degree of internal validity. From least to most robust we have
seen: post-test only; pre- and post-test; post-test only with a non-equivalent
control group; pre- and post-test with a non-equivalent control group; time
series; and experiment with pre- and post-test.

Planning and managing baselines and evaluations


Introduction
As mentioned in the section above on evaluation process, conducting evaluations is
a process that starts with a discussion of the motivations for conducting the
evaluation. Additionally, the evaluation should only be considered finalized when
the results are disseminated, the evaluation findings are used and the
recommendations widely discussed.

Consulting with a wide range of stakeholders (and reaching agreements) can take
quite some time and significant preparation, particularly when involving children. If
you need to plan for an evaluation, make sure you allow enough time for each step.
A full evaluation process may take several months (though usually less in real-time
evaluations or evaluations of humanitarian interventions).

Who should participate in the evaluation process? Identifying and engaging stakeholders
In order to apply the principles of participation, transparency and accountability,
your evaluation process should clearly define a management structure and
stakeholder input, laying out who is going to do what.

The Evaluation Manager


All evaluations should appoint an evaluation manager. This person will assume
day-to-day responsibility for managing the evaluation process. Their role is distinct
from that of the evaluation team leader, who will lead on content and
implementation. The evaluation manager is also responsible for logistics and for
facilitating the broader evaluation process, including quality control. The evaluation
manager’s responsibilities are centered around deliverables in the evaluation plan
(such as the stakeholder analysis, the evaluation questions, the field work plan,
finalization of data collection tools, management of data, report dissemination, etc.).

The evaluation manager also ensures that quality assurance takes place. This will
include looking at:
 the credibility of the findings
 how well the work addresses the aims of the evaluation
 the rationale and robustness of the evaluation design
 the way the sample was taken
 the data collection processes
 comparing the draft report with the ToR

Preparing the Terms of Reference (TOR)


During the planning phase of your evaluation, baseline or real-time review, you will
need to prepare the Terms of Reference (ToR). This is a structured document that
describes the reasons for conducting the study (purpose), its objectives and its scope,
amongst other things. It should cover what we want to evaluate, why, and how it
should be organized.

As indicated, a ToR can also serve as a management tool throughout the evaluation
process and a reference against which the completed evaluation report and
deliverables are assessed before approval. The scope and level of detail of a ToR
might vary; in some cases, the ToR will entail a plan for the entire evaluation
process, while in others, ToRs can be developed for specific parts of the evaluation –
e.g. for the evaluation team or even individual team members. Furthermore, the
scope and level of detail in the ToR should be consistent with the size, complexity
and strategic importance of the project or programme being evaluated.

As you will have already guessed, stakeholders should be widely consulted while
the ToR is being developed, so their views are clearly reflected and their information
needs included. It will also help to clarify expectations around the process and to
motivate stakeholders to be fully involved. The success of the evaluation and its use
depends on it!

Very often the donor will provide a ToR template that should be followed. If this is
not the case and you need to develop your own, it can be useful to collect a few
examples for inspiration.

Recruiting and managing the evaluation team


A key factor in the success of the evaluation process is the evaluation team. When
planning the evaluation you will need to decide if it is more appropriate to have a
team of internal evaluators, external evaluators or a mix of both. External evaluators
are usually less influenced by the institutional context but may not be familiar with
the details of relevant organizational issues affecting the project. External evaluators
are usually perceived as more independent, although internal evaluators should not
be directly related to the intervention either. There is not a right or wrong answer for
this, so you may want to discuss with the reference groups what is the best
arrangement for your specific evaluation.

What skills should you ask for? The ideal case is to have an expert to evaluate each
thematic area addressed by the intervention, and also an expert on evaluation and
methods. For our programmes, it is also particularly important that the team has the
skills to work with children and involve them in the process as much as possible,
although orientation and training may be needed for this. Evaluators should speak
the local language and know the country context as much as possible, so local
evaluators are usually preferred, although international evaluators could also
provide additional capacity to a local team.

Evaluating is not only collecting and analyzing data. As noted by C. Weiss, it is a


process that happens within a political context. There are a number of interests
around the evaluation results that will influence the process and evaluators should
be sensitive to this reality.

If the team is composed of several evaluators, it is best to name a team leader. It
usually works better when members have worked together before.

When looking for external evaluators, it is considered good practice for transparency
to advertise your evaluation in different networks and platforms. This allows all
potential candidates to submit an Expression of Interest (EoI) or a proposal and
participate in a competitive process. This also increases your chances of getting the
best value for money.

The management response and action plans are the key ways you manage the use of
the evaluation results. The management response to the evaluation lists the key
recommendations, along with actions proposed by the evaluation and also the
response of the project or programme team.
The management response is often part of the evaluation report and hence a public
document.

The evaluation action plan is developed for our internal management purposes and
is more detailed than the management response. It is a requirement for Save the
Children as per the Evaluation Essential Standard and the Country Director is
accountable for its follow-up and implementation. In terms of content, it is similar to
the MEAL Action trackers described in the Use of MEAL Data module: the action
plan includes recommendations and lessons learned during the evaluation, together
with actions to address these with resource requirements (financial and/or
technical), a timeline and a list of who is accountable for which actions.

Summary of this Session


1. Concepts and definitions:
• Baseline studies are different from a needs assessment or situational analysis
• Only collect baseline data on your project/program indicators

• For emergency responses, available baseline data in the Emergency
Preparedness Plan should be reviewed and used. In Category 1 or 2 rapid onset
emergencies, initial rapid assessments should be undertaken within 24-72 hours
• Evaluation is the systematic and objective assessment of an on-going or
completed project, program or policy – its design, implementation and results.

2. When deciding the best evaluation approach and design, consider:


• The evaluation purpose
• The evaluation questions
• Time
• Resources
• Implementation stage
• Data availability
• Ethical issues
• Desired level of rigour

3. Planning and managing baselines and evaluations


• Planning baselines and evaluations is not only about preparing for data
collection and analysis. It’s a broader process that should be participatory. Make
sure you allow enough time for it
• A good ToR will include all the elements you need to plan and manage the
evaluation
• The higher the involvement and ownership from stakeholders, the higher the
chances are that the evaluation and baseline will be used.

CHAPTER 17

Methods of Data Collection in MEAL


Introduction
The quality and utility of monitoring, evaluation and research in our projects and
programmes fundamentally relies on our ability to collect and analyze quantitative
and qualitative data. Monitoring and evaluation plans, needs assessments, baseline
surveys and situational analyses are all located within a project cycle and require
high-quality data to inform evidence-based decision-making and programmatic
learning. To achieve this it is useful to reflect on research practices, which in a
monitoring, evaluation, accountability and learning context refers to the systematic
investigation of programmes. Although this session targets monitoring and
evaluation specialists, it is framed by the research agenda and will build on your
existing knowledge of using different data collection methods in your project work.

More specifically, we will discuss the process of identifying research questions and
selecting appropriate methodologies, understanding the difference between
quantitative and qualitative data, and associated benefits and limitations. We will
give an overview of common methods and data analysis techniques for both
quantitative and qualitative research and finally discuss the interpretation of
findings using multiple data sources. The scope of this module is limited to concepts
that will enable learners to gain a broad understanding of the subject area. However,
we will include links to useful resources should learners wish to increase their
knowledge on a particular topic.

Developing research questions and linking them to study designs


We have all had questions and experienced a desire to know more about the impact
and local impressions of our programmes as well as how people and culture
influence our activities. This curiosity to question and learn is integral to our
delivery of quality programmes. But how do we move from having an interest to
knowing more about a particular area, through to developing a research question(s)
and determining the right study design? The aim of this section is to guide you
through the process of developing research questions and study objectives and
linking them to an appropriate study design.

Case study: Working Street Children in Karachi, Pakistan


To assist with demonstrating the aim of this section, as well as exemplifying,
illustrating and linking the different topics described in this module, we will give
examples referring back to a simulated case study.

Poverty is forcing more and more children to seek work on the streets of Karachi,
enabling them to take an active role in sustaining themselves and their families.
Whilst most children live with family or relatives, some children live on the street
with no adult supervision or care. Many work as shoe-shiners and as beggars and
scavengers.
Furthermore, large numbers of children are picked up on the street to do ad hoc
domestic work, particularly girls, often performing physically-demanding tasks in
situations where they face risk of abuse and exploitation behind the walls of private
homes.
Regardless of the type of labor, working street children often miss out on regular
schooling and on opportunities that would enable them to pursue their right to a
‘normal’ childhood and a dream to escape poverty. They are often required to engage
in risky, heavy and age-inappropriate forms of labor, which, among other issues, can
have serious consequences for their physical and emotional health.

In this session you will learn how to develop a ‘situation analysis’ study to
understand the struggles and coping strategies of working street children in Karachi.

Developing research questions and study objectives


A key step in the planning of research is to be clear about its purpose and scope. The
purpose of this study is in part to reflect gaps in existing knowledge and in part to
inform future programmes. The scope of a research project is usually determined by
time, resource and staff constraints, so keep that in mind when you develop your
research question.

A research question is meant to help you focus on the study purpose. A research
question should therefore define the investigation, set boundaries and provide some
level of direction.

In the process of developing a research question, you are likely to think of a number
of different research questions. It is useful to continually evaluate these questions, as
this will help you refine and decide on your final research question. You could, for
example, ask:

• Is there a good fit between the study purpose and the research question?
• Is the research question focused, clear and well-articulated?
• Can the research question be answered? Is it feasible – given time, resource
and staff constraints?
To further help you define your investigation it is useful to develop a few study
objectives. These objectives should be specific statements that reflect the steps you
will take to answer your research question. For the above case study, we would
include the following objectives:
• Map out the struggles and coping strategies of working street children in
Karachi
• Determine how socio-economic status impacts on children’s struggles and
coping strategies
• Identify differences between boys and girls as well as the cause of these
differences
• Discuss the implications of these findings for development programmes.
By addressing these four study objectives, you will automatically begin to ‘paint a
picture’ that answers your overarching research question.

Depending on the nature of your research question and study objectives, you may
begin to think about the direction you think the answers will take. For example, in
what ways do you think socio-economic status may determine the struggles of
working street children and their ability to cope with hardship?

Figure 1 summarizes key steps for you to establish a study focus.


a) Be clear of your purpose
b) Define the scope of study
c) Develop a research question
d) Develop a list of research objectives
Figure 1: ‘Steps’ to establish a study focus
Deciding on a study design
Once you are happy with your research question and study objectives you can begin
to determine which study design is most appropriate to answer your question. There
are many different kinds of study designs for monitoring, evaluation and research.
They can either be exploratory and observational, meaning they try to explore and
observe what is happening in a given context, or they can be experimental, which
means they are aiming to test the impact of an intervention.

As your study seeks to describe some features (struggles and coping strategies) of a
group of working street children at one specific point in time, you are in the process
of developing an exploratory study. Exploratory studies are useful for conducting
situation analyses and benefit from drawing on both qualitative and quantitative
methods. If you were developing a study to assess the impact of an intervention
supporting working street children in Karachi, you would likely benefit from
developing a study with a more experimental design with a before and after
intervention focus. For more detail on experimental evaluation designs, please
consult Session .

Promoting ethical and participatory research
After having determined the design of your study, it is time to think about how you
might best engage with the respondents of the study, many of whom will be
children. You will, for example, need to consider the following questions:

What might be the social and ethical implications of the respondent’s engagement
with you and the study? How can you best protect and safeguard their well-being
and interests? What are ethical and safe ways to involve children in research?

These questions are important to consider and resonate with Save the Children’s
child safeguarding policy. Broadly said, ethical research is about ‘doing good and
avoiding harm’ to those participating in the research. This is achieved primarily by
consulting communities in your areas of study and obtaining answers and practical
responses to the above questions. Make sure you follow up on their
recommendations. You also need to familiarize yourself with existing toolkits and
universal guidelines for conducting ethical research (see resources below) and use
this information to develop informed consent forms, which include:
i. An information sheet in the local language, explaining: who you and Save
the Children are, including your contact details; the purpose of the interview or
exercise; whether they have to take part; what will happen if they do not want to
participate; what will happen if they agree to participate; how long it will take; how
confidentiality will be assured; what they will get out of it; risks associated with
their participation; the approximate date of completion; and how the
information gathered will be used. If you will be involving non-literate groups you
need to think about how to communicate this information to them, for example in a
group discussion and/ or with visual materials.

ii. A consent form that includes statements that the participant has understood
what they will be involved in (e.g., ‘I understand that if I decide at any time that I
don’t want to participate in this study, I can tell the researchers and will be
withdrawn from it immediately. This will not affect me in any way’. Or, to take
another instance: ‘I understand that reports from the findings of this study, using
information from all participants combined together, will be published.
Confidentiality and anonymity will be maintained and it will not be possible to
identify me from any publications’.

You need to prepare separate information consent forms for both children and
adults. If children under the age of 18 are participating in your study, you also need
to obtain informed consent from their guardians. Different data collection methods
require different informed consent forms. So it is important you tailor your
information sheets and consent forms to your specific study. More and more
organizations, including Save the Children UK, are setting up internal ethics
committees to support and guide staff to conduct ethical research. At the
end of this session we have included some resources providing you with additional
information.
Differences between quantitative and qualitative research and their application

Research is a systematic investigation that aims to generate knowledge about a
particular phenomenon. However, the nature of this knowledge varies and reflects
your study objectives. Some study objectives seek to make standardized and
systematic comparisons, others seek to study a phenomenon or situation in detail.
These different intentions require different approaches and methods, which are
typically categorized as either quantitative or qualitative. You have probably already
made decisions about using qualitative or quantitative data for monitoring and
evaluation. Perhaps you have had to choose between using a questionnaire or
conducting a focus group discussion in order to gather data for a particular
indicator.

Quantitative research
Quantitative research typically explores specific and clearly defined questions that
examine the relationship between two events, or occurrences, where the second
event is a consequence of the first event. Such a question might be: ‘what impact did
the programme have on children’s school performance?’ To test the causality or link
between the programme and children’s school performance, quantitative researchers
will seek to maintain a level of control of the different variables that may influence
the relationship between events and recruit respondents randomly. Quantitative
data is often gathered through surveys and questionnaires that are carefully
developed and structured to provide you with numerical data that can be explored
statistically and yield a result that can be generalized to some larger population.

Qualitative research
Research following a qualitative approach is exploratory and seeks to explain ‘how’
and ‘why’ a particular phenomenon, or programme, operates as it does in a
particular context. As such, qualitative research often investigates i) local knowledge
and understanding of a given issue or programme; ii) people’s experiences,
meanings and relationships and iii) social processes and contextual factors (e.g.,
social norms and cultural practices) that marginalize a group of people or impact a
programme. Qualitative data is non-numerical, covering images, videos, text and
people’s written or spoken words. Qualitative data is often gathered through
individual interviews and focus group discussions using semi-structured or
unstructured topic guides.

Summary of differences

                    Qualitative research                Quantitative research
Type of knowledge   Subjective                          Objective
Aim                 Exploratory and observational       Generalisable and testing
Characteristics     Flexible; contextual portrayal;     Fixed and controlled; independent
                    dynamic, continuous view of         and dependent variables; pre- and
                    change                              post-measurement of change
Sampling            Purposeful                          Random
Data collection     Semi-structured or unstructured;    Structured; numbers, statistics
                    narratives, quotations,
                    descriptions
Nature of data      Value uniqueness, particularity     Replication
Analysis            Thematic                            Statistical

Table 1: Key differences between qualitative and quantitative research

Although the table above illustrates qualitative and quantitative research as distinct
and opposite, in practice they are often combined or draw on elements from each
other. For example, quantitative surveys can include open ended questions.
Similarly, qualitative responses can be quantified. Qualitative and quantitative
methods can also support each other, both through a triangulation of findings and
by building on each other (e.g., findings from a qualitative study can be used to
guide the questions in a survey).

2 Methods for collecting and analyzing qualitative data


This section starts off by introducing you to four commonly used qualitative data
collection methods. These collection methods and many others are also described in
the Save the Children Evaluation Handbook, which also explains how to use them in
evaluation. It then explains how you may go about involving participants: this is
also known as sampling. The section ends with a discussion of a couple of
approaches to qualitative data analysis. You may have used some of these methods
as part of your routine project monitoring activities, in a needs assessment or
baseline or as part of an evaluation exercise.

1 Individual interview
An individual interview is a conversation between two people that has a structure
and a purpose. It is designed to elicit the interviewee’s knowledge or perspective on
a topic. Individual interviews, which can include key informant interviews, are
useful for exploring an individual’s beliefs, values, understandings, feelings,
experiences and perspectives of an issue. Individual interviews also allow the
researcher to probe a complex issue, learning more about the contextual factors
that govern individual experiences.

2 Focus group discussions


A focus group discussion is an organized discussion among 6 to 8 people. Focus
group discussions provide participants with a space to discuss a particular topic, in a
context where people are allowed to agree or disagree with each other. Focus group
discussions allow you to explore how a group thinks about an issue, the range of
opinions and ideas, and the inconsistencies and variations that exist in a particular
community in terms of beliefs and their experiences and practices. You should
therefore purposefully (the adjective is ‘purposive’) recruit participants for whom
the issue is relevant. Be clear about the benefits and limitations of recruiting
participants that represent either one population (e.g. school going girls) or a mix
(e.g. school going boys and girls), and whether or not they know each other.

3 Photovoice
Photovoice is a participatory method that enables people to identify, represent and
enhance their community, life circumstances or engagement with a programme
through photography and accompanying written captions. Photovoice involves
giving cameras to a group of participants, enabling them to capture, discuss and share
stories they find significant.

4 Picture story
The picture story method enables children, in a fun and participatory way, to
communicate their perspectives on particular issues through a series of drawings
(story telling) they have made. The story telling can either be done in writing,
depending on the child’s level of literacy, or verbally with a researcher. The picture
story method is relatively quick and inexpensive, particularly if the draw-and-write
technique is adopted. The picture story method provides a non-threatening way to
explore children’s views on a particular issue (e.g. barriers to girls’ education) and to
begin to identify what can be done to address any struggles faced by children.

5 Identifying participants
Qualitative research often focuses on a limited number of respondents who have
been purposefully selected to participate because you believe they have in-depth
knowledge of an issue you know little about, such as:

• They have experienced first-hand your topic of study, e.g. working street children
• They show variation in how they respond to hardship, e.g. children who draw on
different protective mechanisms to cope with hardship on the street and in the
work place
• They have particular knowledge or expertise regarding the group under study,
e.g. social workers supporting working street children.
You can select a sample of individuals with a particular ‘purpose’ in mind in
different ways, including:
• Extreme or typical case sampling – learning from unusual or typical cases, e.g.
children who, as expected, struggle with hardship (typical) or those who do well
despite extreme hardship (unusual)

• Snowball sampling – asking others to identify people who will interview well,
because they are open and because they have an in-depth understanding about
the issue under study. For example, you may ask street children to identify other
street children you can talk to.

• Random purposeful sampling – if your purposeful sample size is large you can
randomly recruit respondents from it.

Whilst purposeful sampling enables you to recruit individuals based on your study
objectives, this limits your ability to produce findings that represent your population
as a whole. It is therefore good practice, for triangulation purposes, to recruit a
variety of respondents (e.g., children, adults, service users and providers).

Qualitative data analysis


Qualitative data analysis is a process that seeks to reduce and make sense of vast
amounts of information, often from different sources, so that impressions that shed
light on a research question can emerge. It is a process where you take descriptive
information and offer an explanation or interpretation. The information can consist
of interview transcripts, documents, blogs, surveys, pictures, videos etc. You may
have been in the situation where you have carried out 6 focus group discussions but
then are not quite sure what to do with the 30 pages of notes you collected during
the process. Do you just highlight what seems most relevant or is there a more
systematic way of analyzing it?

Qualitative data analysis typically revolves around the impressions and
interpretations of key researchers. However, through facilitation, study participants
can also take an active role in identifying key themes emerging from the data.
Because qualitative analysis relies on researchers’ impressions, it is vital that
qualitative analysis is systematic and that researchers report on their impressions in a
structured and transparent form. This is particularly important considering the
common perception that qualitative research is not as reliable and sound as
quantitative research.

Qualitative data analysis ought to pay attention to the ‘spoken word’, context,
consistency and contradictions of views, frequency and intensity of comments, their
specificity as well as emerging themes and trends. We now explain three key
components of qualitative data analysis.

The process of reducing your data
There are two ways of analyzing qualitative data. One approach is to examine your
findings with a pre-defined framework, which reflects your aims, objectives and
interests. This approach is relatively easy and is closely aligned with policy and
programmatic research which has pre-determined interests. This approach allows
you to focus on particular answers and abandon the rest. We refer to this approach
as ‘framework analysis’ (Pope et al 2000). The second approach takes a more
exploratory perspective, encouraging you to consider and code all your data,
allowing for new impressions to shape your interpretation in different and
unexpected directions. We refer to this approach as thematic network analysis
(Attride-Stirling, 2001). More often than not, qualitative analysis draws on a mix of
both approaches.

Whichever approach guides you, the first thing you need to do is to familiarize
yourself with your data. This involves reading and re-reading your material (data) in
its entirety. Make notes of thoughts that spring to mind and write summaries of
each transcript or piece of data that you will analyze. As your aim is to condense all
of this information to key themes and topics that can shed light on your research
question, you need to start coding the material. A code is a word or a short phrase
that descriptively captures the essence of elements of your material (e.g. a quotation)
and is the first step in your data reduction and interpretation.

To help speed up your coding you can, after having read through all of your data,
develop a coding framework, which consists of a list of codes that you anticipate will
be used to index and divide your material into descriptive topics. If you are
approaching your data following the deductive framework approach, your coding
will be guided by a fixed framework (and you index your material according to
these pre-defined codes). If, however, you are following the more inductive thematic
network approach, you are likely to add new codes to your list as you progress with
the coding, continually developing your coding framework. Coding is a long, slow
and repetitive process, and you are encouraged to merge, split up or rename codes
as you progress. There is no fixed rule on how many codes you should aim for, but if
you have more than 100-120 codes, it is advisable that you begin to merge some of
your codes.

Once you have coded all of your material you need to start abstracting themes from
the codes. Go through your codes and group them together to represent common,
salient and significant themes. A useful way of doing this is to write your code
headings on small pieces of paper and spread them out on a table: this process will
give you an overview of the various codes and will also allow you to move them
around and cluster them together into themes. Look for underlying patterns and
structures – including differences between types of respondents (e.g., adults versus
children, men versus women) if analyzed together. Label these clusters of codes (and
perhaps even single codes), with a more interpretative and ‘basic theme’. Take a new
piece of paper, write the ‘basic theme’ label, and place it next to your cluster of
codes. If, for example, the codes ‘Torn uniform’ and ‘No school books’ appear in
your interview transcripts with working street children, they can be clustered
together as ‘Working street children lack school materials’ (see Figure 3).

Figure 3: From codes to basic themes
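The clustering step above can also be sketched programmatically. In the minimal sketch below, only the codes ‘Torn uniform’ and ‘No school books’ come from the example in the text; the other codes and theme labels are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical code-to-theme assignments. 'Torn uniform' and 'No school books'
# echo the example above; the remaining codes are invented for illustration.
code_to_theme = {
    "Torn uniform": "Working street children lack school materials",
    "No school books": "Working street children lack school materials",
    "Skips meals to save money": "Working street children face food insecurity",
    "Shares earnings with siblings": "Working street children face food insecurity",
}

# Group codes under each basic theme, mirroring the 'table method' clustering.
themes = defaultdict(list)
for code, theme in code_to_theme.items():
    themes[theme].append(code)

for theme, codes in sorted(themes.items()):
    print(f"{theme}: {', '.join(codes)}")
```

In practice the grouping is an interpretative judgement made by the researchers, not a mechanical lookup; the sketch only illustrates the bookkeeping of moving from many descriptive codes to a few basic themes.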

You may find that not all of your codes are of interest and relevance to your research
question and that you choose to only cluster 60 of your codes into ‘basic themes’ that
help shed light on your question. Let us say, for argument’s sake, that through this
process you identify 20 ‘basic themes’. Repeat this process with your basic themes.
Examine your ‘basic themes’ and cluster them together into higher order and more
interpretative ‘organizing themes’. Let us say, again for argument’s sake, that this
process reduces your 20 ‘basic themes’ to four ‘organizing themes’, two of which
represent struggles faced
by working street children (as exemplified by Figure 4) and two which give detail to
their coping strategies. Figure 4 also illustrates how you can transparently show how
you went from having descriptive codes to focusing on a few distinct, interpretative
and networked themes that you can use to begin answering parts of your research
question.

Figure 4: From codes to organizing and global themes

The method of cutting out codes and moving them around on a table is often
referred to as the ‘table method’. The ‘table method’ works particularly well for
smaller studies. If you have vast amounts of data (e.g. more than 20 interview
transcripts), you may find it helpful to use qualitative data analysis software, such as
Nvivo or Atlas.Ti. These software packages are, however, not free and you will
require a license.

Quantitative data and methods


Quantitative data is numerical and can be collected in a number of forms. The most
common forms of quantitative data used in Save the Children are shown below.
1. Units: number of staff that have been trained; number of children enrolled in
school for the first time
2. Prices: amount of money spent on a building, or the additional revenue of
farmers following a seed distribution programme
3. Proportions/percentages: proportion of the community that has access to a
service
4. Rates of change: percentage change in average household income over a
reporting period
5. Ratios: ratio of midwives or traditional birth attendants to families in a region
6. Scoring and ranking: scores given out of ten by project participants to rate the
quality of service they have received.
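To make one of these forms concrete, a rate of change (item 4 above) is computed from a baseline and an endline value. The income figures below are invented purely for illustration:

```python
# Illustrative (invented) figures: average household income at the start
# and end of a reporting period, in local currency units.
baseline_income = 12_000
endline_income = 13_800

# Percentage change in average household income over the reporting period.
pct_change = (endline_income - baseline_income) / baseline_income * 100
print(f"{pct_change:.1f}% change")  # 15.0% change
```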

Statistical analysis is used to summarize and describe quantitative data and graphs
or tables can be used to visually present raw data. This section will review the
commonly used methods/sources of quantitative data and the techniques used for
recruiting participants.

Quantitative methods
Quantitative data can be collected using a number of different methods and from
a variety of sources.

a) Surveys and questionnaires use carefully constructed questions, often with
ranking or scoring options or using closed-ended questions. A closed-ended
question limits respondents to a specified number of answers. For example,
this is the case in multiple-choice questions. Good quality design is
particularly important for quantitative surveys and questionnaires.
b) Biophysical measurements can include height and weight of a child
c) Project records are a useful source of data. For example, the number of
training events held and the number of participants attending
d) Service provider or facility data includes school attendance or health care
provider vaccination records
e) Service provider or facility assessments are often carried out during the
monitoring and evaluation of our projects

Sampling for quantitative methods


Commonly in our research or programmatic data collection, it is not possible, or
even desirable, to collect data from a whole target group or population. This could be
extremely difficult and expensive. Through accurate sampling of a subset of the
population we can reduce costs and gain a good representation from which we can
infer or generalize about the total population.

Accurate sampling requires a sample frame or list of all the units in our target
population. A unit is the individual, household or school (for example) from which
we are interested in collecting data. A sample frame for a household survey would
include all the households in the population identified by location or, in the case of
our case study, all of the working street children in Karachi.

Bias
The process of recruiting participants for quantitative research is quite different from
that of qualitative research. In order to ensure that our sample accurately represents
the population and enables us to make generalizations from our sample we must
fulfill a number of requirements.

Sampling bias can occur if decisions are made about sample selection that mean that
some individuals have a greater chance of being selected for the sample than others.
Sample bias is a major failing in our research design and can lead to inconclusive,
unreliable results. There are many different types of bias. For example, tarmac bias
relates to our tendency to survey those villages that are easily accessible by road. We
may be limited in our ability to travel to many places due to lack of roads, weather
conditions etc. which can create a bias in our sample.

Self-selection or non-response bias is one of the most common forms of bias and is
difficult to manage. Participation in questionnaire/surveys must be on a voluntary
basis. If only those people with strong views about the topic being researched
volunteer, then the results of the study may not reflect the opinions of the wider
population, creating a bias.

Simple random sampling


A simple random sample is the simplest way to select participants from a
population. Pulling names out of a hat or using an online random number generator
such as www.random.org can create a random sample. Using these methods means
that each individual in the population has the same chance of being selected for the
sample.
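As a quick illustration, a simple random sample can be drawn in a few lines of Python. The sketch below assumes a hypothetical sample frame of 527 working street children identified by ID numbers:

```python
import random

# Hypothetical sample frame: 527 working street children, identified by ID.
sample_frame = list(range(1, 528))

random.seed(42)                           # fix the seed so the draw is repeatable
sample = random.sample(sample_frame, 60)  # every ID has an equal chance

print(len(sample))       # 60 units drawn, without replacement
print(len(set(sample)))  # 60 (no ID appears twice)
```

In the field, the same principle applies whether you use software, a random number table or names pulled from a hat: every unit in the frame must have an equal chance of selection.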
How many? Sample size calculation
Calculating the most appropriate sample size is an important step in the research
process. A larger sample provides a more precise estimate of the ‘real’ situation but
the benefits of increased sample size get smaller as you near the total population.

Therefore, there is a trade-off between sample precision and considerations of
optimal resource use.

There are no ‘rules of thumb’ when determining sample size for quantitative
research. It is not possible to say whether 10% of the population, for instance, would
provide an adequate sample, as this will be affected by a number of factors. You
should be wary of sample plans in research or evaluations that suggest sample size
can be calculated using a percentage of the population without further clarification
or rationale for this.

Statisticians will calculate sample size using a range of different equations, each of
which is appropriate for different research situations and contexts. It is important
to discuss the objectives of your research, expected results, data types, resources and
context with a statistician or technical advisor at the design stage of your research in
order to calculate an appropriate sample.

It is also useful to understand the two main statistics used to calculate the sample
size: the confidence interval (or margin of error) and the confidence level.

The confidence interval is the acceptable range in which your estimate can lie. For
example, if you were using a sample to estimate the percentage of street children in
Karachi who are engaged in harmful work, you might set your margin of error at
10%. This would mean that if, after collecting your data, you found that 75% of the
children in your sample were engaged in harmful work, you would know that the
real figure for the population lies within plus or minus 10 percentage points, i.e.
anywhere between 65% and 85%.

If you are carrying out before and after intervention analysis to determine whether
your work has contributed to a change you will need to consider what size of effect

you anticipate occurring before you calculate your sample size. For example, if you
are carrying out a project which expects to reduce the number of children working
on the street from 75% to 70% you would not want to use a confidence interval of
10% as your estimate would not be precise enough to detect this change.
The level of confidence determines how sure you want to be that the actual
percentage (of children engaged in harmful work, for example) falls within your
selected confidence interval. Because we are using a sample rather than asking every
single child individually, we are always making an estimate of the real value and
can never be 100% confident. A confidence level of 95% is commonly used, which
means that there is a 5% chance that the actual percentage will not lie within the
selected confidence interval.

When deciding on what confidence interval to use in your sample size calculation it
is important to remember that whilst a larger range gives you a smaller sample size,
a smaller range gives you greater precision in your results. Selecting a lower level of
confidence will also give a smaller sample size but also decrease the reliability of the
data. Unfortunately there is no simple answer and you need to review the values
used on a case-by-case basis. Remember, however, that if the sample is too small
then this will lead to inconclusive results, which cannot provide us with the
information that we need. If the sample is too large, however, it may be impossible
to collect and resources will be wasted.
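As noted above, there is no single rule, but one widely used equation for estimating a single proportion is Cochran's formula, n = z²p(1−p)/e². The sketch below is illustrative only and is not a substitute for discussing your design with a statistician:

```python
import math

def cochran_sample_size(z: float, margin_of_error: float,
                        expected_proportion: float = 0.5) -> int:
    """Cochran's formula for a single proportion.

    z is the z-score for the chosen confidence level (1.96 for 95%);
    p = 0.5 is the most conservative choice, giving the largest sample.
    """
    n = (z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
    return math.ceil(n)

print(cochran_sample_size(1.96, 0.10))  # 97 respondents for a 10% margin of error
print(cochran_sample_size(1.96, 0.05))  # 385 respondents for a 5% margin of error
```

Note how precision is expensive: halving the margin of error roughly quadruples the sample, which is exactly the trade-off between precision and resources described above.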

Sampling methods
Stratified sampling: Stratified sampling is used when individuals in a population
can be split into distinct, non-overlapping groups. These groups are called ‘strata’.
Common strata are village, district, urban/rural etc.

In stratified sampling, the number of participants sampled from each stratum is
calculated proportionally to the total population. For example, suppose a population
of 100 people lives in two villages, with 30% in village A and 70% in village B, and
we have a required sample size of 60. To stratify our sample we calculate 30% and
70% of 60:

Number of people from village A in the sample = 60 × 0.3 = 18
Number of people from village B in the sample = 60 × 0.7 = 42

Stratified sampling is beneficial when there are big differences between the strata, as
they can give a more accurate representation of the population and, if the sample
size is large enough, allow for further sub-set analysis.
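The proportional allocation described above can be sketched as a small helper function (hypothetical code written for this example). Note that rounding can occasionally make the allocations sum to one more or one less than the target, which should be checked and adjusted:

```python
def stratified_allocation(strata_populations: dict, sample_size: int) -> dict:
    """Allocate a sample across strata in proportion to their populations."""
    total = sum(strata_populations.values())
    return {stratum: round(sample_size * population / total)
            for stratum, population in strata_populations.items()}

# The worked example from the text: 30 people in village A, 70 in village B
allocation = stratified_allocation({"Village A": 30, "Village B": 70}, 60)
print(allocation)  # {'Village A': 18, 'Village B': 42}
```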

Quantitative analysis
The methods we have described above help us to collect quantitative data, but is the
collection of data our end goal?

No, of course not! A large set of data sitting in a spreadsheet does not help us to
understand the characteristics of the population we are working with or describe the
changes brought about by our projects. We need to use the data to create
information.

In our case study example, we may have interviewed children working on the street
in Karachi and collected all the data together in a spreadsheet; however, we need to
analyze and summarize the data to answer our research questions. We need to
understand what percentage of children are involved in different types of work. For
instance, we may want to understand if girls and boys carry out similar tasks or are
exposed to similar risks.

Statistics help us turn quantitative data into useful information to help with
decision-making. We can use statistics to summarize our data, describing patterns,
relationships and connections. Statistics can be descriptive or inferential. Descriptive
statistics help us to summarize our data whereas inferential statistics are used to
identify statistically significant differences between groups of data (such as

intervention and control groups in a randomized control study). In this module our
focus is on descriptive statistics, and the sections below give a short introduction to
the most common ones.

Data structure
We generally collect data from a number of individuals or ‘units’. These units are
most often the children or adults that we are working with. However, our units
could also be hospitals or schools, for example. The different measurements,
questions or pieces of information that we collect from these individuals are the
variables.

Variables
There are two types of variables, numerical and categorical. It is important to
distinguish between these two types of variables, as the analysis that you do for each
type is slightly different.

Categorical variables are made up of a group of categories. Sex (male/female) is a
categorical variable, as is quality of training (good/bad/average).

Numerical variables are numbers. They can be counts (e.g. number of participants
at a training), measures (e.g. height of a child) or durations (e.g. age, time spent).

Analysis of categorical variables


Categorical data groups all units into distinct categories which can be summarized
by determining how many times a category occurs. For example, the number of
females in a group of participants. We describe this as the frequency of females in
the group.

This information is presented using a frequency table. The frequency table shows us
how many participants fall into each category. We can also then represent this as a

percentage or proportion of the total. Figure 5 shows an example frequency table for
the different types of work carried out by children working on the street in Karachi.

Type of work      Number of children
Street vendor              87
Car washing                92
Shoe-shiner                67
Scavenging                 98
Begging                   110
Domestic work              45
Other                      28
TOTAL                     527

Figure 5. Type of work for street children in Karachi

Frequency tables can be used to present findings in a report or can be converted into
a graph for a more visual presentation.
A proportion describes the relative frequency of each category and is calculated by
dividing each frequency by the total number.
Percentages are calculated by multiplying the proportion by 100. Proportions and
percentages can be easier to understand and interpret than examining raw frequency
data and are often added into a frequency table (see figure 6).

Type of work      Number of children   Percentage of children
Street vendor              87                16.51
Car washing                92                17.46
Shoe-shiner                67                12.71
Scavenging                 98                18.60
Begging                   110                20.87
Domestic work              45                 8.54
Other                      28                 5.31
TOTAL                     527               100.00
Figure 6. Types of work for street children in Karachi
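A frequency table like figure 6 can be produced directly from raw survey data. The sketch below uses Python's Counter on a made-up response list that reproduces the counts in the figure:

```python
from collections import Counter

# Hypothetical raw responses reproducing the counts above; in practice this
# list would come straight from the survey spreadsheet, one entry per child.
responses = (["Street vendor"] * 87 + ["Car washing"] * 92 +
             ["Shoe-shiner"] * 67 + ["Scavenging"] * 98 +
             ["Begging"] * 110 + ["Domestic work"] * 45 + ["Other"] * 28)

frequencies = Counter(responses)
total = sum(frequencies.values())

# Print each category with its frequency and percentage of the total
for work, count in frequencies.most_common():
    print(f"{work:<14} {count:>4} {100 * count / total:6.2f}%")
print(f"{'TOTAL':<14} {total:>4}")
```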

Analysis of numerical variables


Numerical data are commonly described by two statistics: the centre and the spread.
The centre describes a typical value and the spread describes the distance of the data
from the centre.

Figure 7. Diagram showing centre and spread for a set of data points

The most common statistics used to describe the centre are the mean (commonly
known as the average) and the median. The median is the middle value in a data set,
half the data are greater than the median and half are less. The mean is calculated by
adding up all the values and then dividing by the total number of values.

Using our case study example – if you were to interview 23 street children and
record their age you might get a set of data as below. Each number is the age of an
individual child and the ages have been arranged in order.

3 3 4 4 5 7 7 8 9 10 10 11 12 12 12 13 13 14 14 15 15 15 16

Mean ≈ 10.09    Median = 11

Figure 8. Mean and median age of children

The mean and the median would be different for this dataset. To calculate the
median you need to arrange the children in order of age and then find the mid-way
point. In this example, 11 children are below the age of 11 and 11 children are above
the age of 11.

To calculate the mean you need to add up all the ages and then divide by the
number of children (23 in this example).

So:

3+3+4+4+5+7+7+8+9+10+10+11+12+12+12+13+13+14+14+15+15+15+16 = 232

232/23 ≈ 10.09 = mean age of the children interviewed.

Spread is most easily described using the range of the data. This is the difference
between the minimum and maximum. The range of the example data above would
be 13 years (minimum = 3, maximum = 16).

Other statistics describing spread are the interquartile range and standard
deviation.

The interquartile range is the difference between the upper quartile and lower
quartile. A quarter (or 25%) of the data lie above the upper quartile and a quarter of
the data lie below the lower quartile.

The standard deviation shows the average difference between each individual data
point (or age of child in our example) and the mean age. If all data points are close to
the mean then the standard deviation is low, showing that there is little difference
between values. A large standard deviation shows that there is a larger spread of
data. Calculating the standard deviation yourself is a little complex but this can also
be done easily in Microsoft Excel (see Computer Assisted Statistics Textbook for details
on how to calculate standard deviation).
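All of the centre and spread statistics described in this section can be computed with Python's standard statistics module. Using the ages from the worked example above:

```python
import statistics

ages = [3, 3, 4, 4, 5, 7, 7, 8, 9, 10, 10, 11, 12,
        12, 12, 13, 13, 14, 14, 15, 15, 15, 16]

print(round(statistics.mean(ages), 2))   # 10.09 (the mean age)
print(statistics.median(ages))           # 11 (the middle, 12th, value)
print(max(ages) - min(ages))             # 13 (the range)

q1, _, q3 = statistics.quantiles(ages)   # quartiles (default n=4)
print(q3 - q1)                           # 7.0 (the interquartile range)
print(round(statistics.stdev(ages), 2))  # sample standard deviation
```

The same statistics are available in Microsoft Excel as AVERAGE, MEDIAN, QUARTILE and STDEV; the choice of tool matters less than understanding what each number tells you about the data.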

Discussing results and drawing conclusions
The final stage of the research process is to interpret the findings, making
conclusions and recommendations. When drawing conclusions you should review
and summarize your findings looking for explanatory patterns or relationships that
help answer your research questions.

Questions to consider when interpreting your findings:


• Did the research methodology and data collected answer the research
question? Do the findings support our hypotheses (quantitative)?
• How do the different findings interact? Do they explain each other or are
there contradictions?
• Can we triangulate the data from a number of different sources (different
stakeholders, different methodologies, external sources of information)?
• What were the limitations of the study and how do they affect the results?
• Are there any areas that require further research or follow up?
Mixed methods and triangulation

If you have collected both quantitative and qualitative data you should compare and
contrast these findings when interpreting your work. The integration of quantitative
and qualitative research can give us a broader understanding of our research subject.
Quantitative research can describe magnitude and distribution of change, for
instance, whereas qualitative research gives an in-depth understanding of the social,
political and cultural context. Mixed methods research allows us to triangulate
findings, which can strengthen validity and increase the utility of our work.

Triangulation means comparing a number of different data sources and methods to
confirm our findings. For example, we could compare the perspectives of teachers,
students and parents on the quality of schooling. Triangulation can bring strength to
our conclusions or identify areas for further work.

You should also reflect on your findings in comparison to other research or
evaluation work in the area and consider whether findings were similar.

Limitations
When drawing conclusions and making recommendations it is important to
recognize the limitations of our data. In quantitative research, the level to which we
can generalize our findings to the wider population will depend upon the quality of
the sampling strategy used. You should be careful not to over-generalize results: for
example, suggesting a result is applicable for the whole country when only two out
of eight regions were sampled.

Findings from qualitative research should not be used to make inferences about a
wider population but can be used to provide examples of how or why in specific
contexts.

It is also important that conclusions and recommendations are based on the data
collected rather than personal opinions. When reporting quantitative or qualitative
data, you can only make valid conclusions on the topics researched and for which
you have supporting evidence.

Displaying and reporting on your qualitative and quantitative data


Any research report must be guided by the transparency of the process through
which conclusions have been drawn. A report must therefore include:

• An ‘Introduction’ that argues for the importance of exploring a particular
research question, highlighting the gaps in, and limitations of, existing evidence.
• A ‘Methodology’ section that justifies your sampling strategy and the
research methods to be used to answer your research question: this gives detail to
the process through which data was collected and analyzed.
• A ‘Findings’ section that presents key findings emerging from the analysis
that answers the research question. If, for example, your qualitative data analysis

generated two ‘global themes’, they could each represent a findings chapter, with
‘organizing themes’ representing sub-headings, under which the ‘basic themes’ are
discussed and supported by plenty of quotations, which are extracted from your
codes. For quantitative data you may present frequency tables or graphs of variables
of interest. When presenting qualitative findings, it is important that you do not only
discuss and present a single and dominant view, but also acknowledge
contradictions and disagreements within the data. Please note that when presenting
qualitative data, you cannot claim causality and association. You are presenting
people’s perceptions and experiences of a phenomenon. As such, you have to be
careful about how you present a finding. You can, for example, say ‘some
respondents felt …’, ‘a common opinion was …’, ‘the perception of some adults was
…’, or ‘this suggests a possible relationship between …’, and so forth.

• A ‘Discussion’ section that highlights how the findings emerging from the study
either corroborate, contradict or build on existing evidence as well as giving detail to
the limitations of the study.

Summary of this Session


This session has taken you through the process of identifying research questions and
selecting appropriate methodologies. You now hopefully have a better
understanding of the difference between quantitative and qualitative data collection
methods and associated benefits and limitations. We also introduced you to some
common methods and techniques of data analysis for both quantitative and
qualitative research.

We hope you found this session useful and will draw on it to develop systematic
investigations that can be used to improve the quality, impact and accountability of
our programmes. Best of luck!

ASSIGNMENT TWO

1. You have been contracted by UNICEF to act as a consultant in a project (a joint
partnership between UNICEF and the Ministry of Gender and Children), a program
that gives direct funds to families caring for orphaned children, and to plan a
monitoring system for it.
a) What are the advantages of participatory evaluation methods?
b) Formulate the steps in planning a monitoring system.
2. Define and explain the key components of a MEAL framework.
3. Explain why drafting a MEAL plan and budget should involve all stakeholders.
4. Below is a list of key terms associated with MEAL plans – define each:
i. Data source
ii. Dissemination
iii. Indicator
iv. Validation
v. Accountability
5. a) Describe any seven factors that may lead to project failure.
b) List practical examples of what we use MEAL data for and how different
audiences may use this information.
