
Intervention and Evaluation

Intervention: A strategy or procedure that is intended to influence the behavior of people for the purpose of improving their functioning with respect to some social or practical problem. Interventions may or may not target people's behavior directly; for example, increasing awareness targets behavior indirectly. However, the ultimate goal of most interventions is behavior change.
Types of intervention

Personal Interventions: interventions that people carry out in the course of their daily lives, that is, when they use their knowledge of social psychology to improve their own circumstances or those of people around them.

Programmatic Interventions: commonly referred to as programs. A program is an "organized collection of activities designed to reach certain objectives." In the context of applied social psychology (ASP), the activities that comprise a program are directed toward addressing a social or practical problem with the objective of preventing, reducing, or eliminating its negative consequences. In some instances, interventions may be directed at reinforcing and strengthening a positive situation.

Trial Interventions: interventions that are implemented to determine whether the interventions, as designed, in fact have the intended positive consequences. These are known as program efficacy studies (Crano & Brewer, 2002).
There are two basic kinds of trial interventions: in the first, researchers design a study to test out a possible intervention strategy; in the second, an organization conducts a pilot program to determine its effectiveness before implementing it on a more permanent basis or on a wider scale.


Key Steps in intervention design and implementation:

Step 1: Identifying a problem. Programs are initiated to address social problems or practical problems. The first step in program design is to identify the existence and severity of a problem. A problem usually is identified and defined by stakeholders.
• Stakeholders are individuals or groups who have a vested interest in the possible development of a program in that they may be affected by it in some way.
• Stakeholders include not only the potential recipients of the program but also individuals such as program funders, administrators of the organizations responsible for delivering the program, program managers, and frontline staff members (i.e., the employees who actually carry out the program activities).
• Difficulties arise when different stakeholders disagree about whether a problem exists, how serious a problem is, or which problems should be given highest priority.


Needs Assessment: the process of establishing whether or not there is a need or problem sufficient to warrant the development of a program.
• It may be informal or formal in nature.
• ASP places more confidence in the conclusions of a formal needs assessment that relies on systematic research procedures for collecting data relevant to problem severity and prevalence.
• Problems may be investigated using a variety of qualitative and/or quantitative procedures.
• A formal needs assessment gauges the availability of existing programs or services as well as possible barriers to or gaps in service.
Step 2: Arriving at a Solution. To arrive at a solution, it is important to identify the factors responsible for causing the problem.
• Causal factors are of two types: precipitating factors (factors that trigger the problem) and perpetuating factors (those that sustain the problem and keep it from being solved).
• It is critical to distinguish between the two types because precipitating factors may not always be directly involved in the continuation of a problem.
• Once causal factors have been identified, the next step is to find out, often through a literature review, whether interventions that have effectively addressed the same needs already exist. Such interventions can guide the development of a solution to the current problem.
• If previous interventions cannot be located, then a solution must be developed independently, based on relevant social psychological theory and research evidence as well as theory and evidence from any other field that may contribute to a solution.
• Solutions to problems should be expressed as intervention hypotheses, which are "if-then" statements that summarize the intervention and the expected outcomes (e.g., "If at-risk students receive weekly peer mentoring, then their school attendance will increase").


Step 3: Setting goals and designing the intervention. Knowledge of goals and objectives serves to guide the selection of program activities.
• Goals refer to the ultimate or long-term outcomes that one hopes to accomplish through an intervention. Once goals have been established, it is important to define the program objectives.
• Objectives refer to short-term outcomes and intermediate-term changes that occur as a result of the intervention and are required for the attainment of the program goals.
• Goals refer to the ends, whereas objectives refer to the means or steps by which the ends are achieved.
• Once the goals and objectives have been set, the next step in intervention design is to determine the program activities.
• The process of specifying the various components of a program (goals, objectives, and activities) requires a sound rationale, often referred to as a program logic model.
• A program logic model is an explanation or a blueprint of how the program activities lead to the attainment of the program objectives and, in turn, how the objectives logically and operationally contribute to the eventual achievement of the program goals (Wholey, 1983).
• Logic models vary in complexity and detail, but all of them stress a "cause and effect" flow as expressed in the intervention hypothesis. Program logic is the glue that holds the activities, objectives, and goals together (see the sketch after this list).
• A logic model should be based on a theoretical rationale that explains the causal connections among its various components.
• From the point of view of intervention design, this means that one should be able to point to any component of the intervention and indicate not only what its contribution is but also why the effect should occur.
• The use of a program logic model ensures a careful, theoretically and empirically based articulation of the program and increases the likelihood of its success.
• Unfortunately, in applied settings, programs too often are designed without the formal articulation of a logic model.
• Some evaluators have begun to employ the notion of a theory of change model instead of program logic to underscore the need to make explicit an intervention's underlying theory, including the steps involved along the path to desired change, the assumptions being made, and the preconditions that may enable or inhibit the desired change (Mackinnon, Amott, & McGarvey, 2006).
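
To make the "cause and effect" flow concrete, the following is a minimal, hypothetical sketch in Python. All program names, components, and the check_links helper are invented for illustration (not taken from the sources cited above); the structure simply enforces that each activity serves an objective and each objective serves a goal, with a stated rationale for every causal link.

from dataclasses import dataclass

# Hypothetical logic-model sketch: each component names the higher-level
# component it feeds into and the rationale for why the effect should occur.
@dataclass
class Component:
    name: str
    contributes_to: str   # name of the higher-level component this feeds into
    rationale: str        # theory/evidence explaining the causal link

goals = [Component("Tobacco-free campus", "", "Ultimate program goal")]

objectives = [
    Component("Increase students' refusal skills",
              "Tobacco-free campus",
              "Social-influence research: refusal skills reduce uptake"),
]

activities = [
    Component("Weekly peer-led workshops",
              "Increase students' refusal skills",
              "Peer modeling is persuasive for adolescents"),
]

def check_links(lower, higher):
    """Flag any component whose causal link points to nothing above it."""
    names = {c.name for c in higher}
    for c in lower:
        if c.contributes_to not in names:
            print(f"Broken link: {c.name!r} -> {c.contributes_to!r}")

check_links(activities, objectives)  # every activity must serve an objective
check_links(objectives, goals)       # every objective must serve a goal

The point of the sketch is the traceability a logic model demands: one can point to any component, read off what it contributes to and why, and any component whose causal link points nowhere is flagged immediately.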
Step 4: Implementing the Intervention. Implementation refers to the actual process of enacting the intervention activities, that is, of delivering them to the recipients of the intervention.
• There are many practical details that might need to be in place to implement a program properly.
• Depending on the complexity of an intervention, as determined by factors such as its size and structure, practical details might include securing an appropriate facility, hiring staff members, ensuring adequate training, and developing things such as operating budgets, management structures, job descriptions, performance appraisal methods, promotional strategies, and cross-agency referral protocols.
• An intervention always should be designed and implemented in such a way that its degree of effectiveness can be evaluated.

Program Evaluation and Program Development

• Program evaluation and program development should be linked so that empirical information can influence decisions.
• Without that linkage, decisions about community programs are made with much misinformation and wishful thinking about what the actual effects of the program are.
• With such a linkage in place, even initially disappointing results can lead to systematic improvements in a program.
• Evaluation is essential to program improvement and effectiveness, especially when values and viewpoints conflict.
• Evaluation can provide information on real-world outcomes and impacts and inform debates on values, goals, and methods.
Common complaints and fears about program evaluation
• Evaluation can create anxiety among program staff;
• Staff may be unsure how to conduct an evaluation;
• Evaluation can interfere with program activities or compete with services for scarce resources;
• Evaluation results can be misused and misinterpreted, especially by program opponents.
Some typical answers given by NGOs and government agencies to the funder's question, "How does your program, supported by our grant money, actually accomplish its goals?":
Trust and Values: "You should trust us; what we do is valuable," etc.
Problem: there is no way of knowing the program's process or its results.
Process and Outputs: detailed documentation of activities and outputs.
Problem: providing services does not ensure their effectiveness; services may be inadequate or misdirected and may have unintended side effects.
Results-Based Accountability: agency staff and evaluators can show through program evaluation that a specific program achieved its intended effects, and the program can be modified to increase its effectiveness.
Problem: agency staff may not be trained evaluators, and if a program fails to show its intended results, it is unclear what steps could be taken and whether any such steps would be allowed or even possible.
Nonetheless, if done well, program evaluation can strengthen a program's quality as well as its ability to withstand criticism.
A Four-Step Model of Program Evaluation
Step 1: Identify Goals and Desired Outcomes
Goals represent what a project is striving for. Goals tend to be ambitious and set a framework for outcomes. Outcomes are more specific and represent what the project is accountable for. Goals can be general; outcomes must be specific and measurable (Schorr, 1997).
If a community program has promotion/prevention aims, its goals and outcomes concern the competencies to be promoted or the problems to be prevented.
Step 1 describes the program's primary goals, target group(s), and desired outcomes. For example:
Primary Goals: increasing teachers' and students' involvement in the corporate life of their educational institution;
Target Group(s): teachers, administrative staff, students, parents;
Desired Outcomes: increased adherence to rules and regulations, smooth functioning of the institution, a tobacco-free campus, zero tolerance for violence, etc.
Step 2: Process Evaluation, i.e., "What did the program actually do?"
Purposes of Process Evaluation:
First: monitoring program activities helps organize program efforts; it helps ensure that all parts of the program are conducted as planned; it helps the program use resources where they are needed; and it provides information to help manage the program and modify activities, leading to midcourse corrections that enhance the project's outcomes.
Second: information in a process evaluation provides accountability by documenting that the program is conducting the activities it promised to do.
Third: after a later evaluation of outcomes and impacts, the process evaluation can provide information about why the program worked or did not work.
Fourth: process evaluation can help you decide whether or not you are ready to assess the effects of your program.
Fifth: process evaluation helps keep track of changes that may have occurred during the implementation of a program.
Conducting a Process Evaluation
A process evaluation centers on two related questions: What were the intended and actual activities of the program? And, after its implementation, what did program planners and staff learn from their experiences?
Regarding activities: 'Who' was supposed to do 'What', with 'Whom', and 'When' and 'Where' was it to be done?
'Who' refers to the staff delivering the services, including the number of staff members, their qualifications, and any training required;
'What' refers to what the staff actually does;
'Whom' refers to the target groups for each activity;
'When' refers to the time and setting of the activity.
The more clearly these questions are answered, the more useful the process evaluation will be.
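
As an illustration of the Who/What/Whom/When questions, here is a minimal, hypothetical sketch in Python; the ActivityRecord structure and all data are invented, not part of any published process-evaluation instrument. It records planned and actual activities so that deviations can be flagged for midcourse correction.

from dataclasses import dataclass
from datetime import date

# Hypothetical process-evaluation record: one entry per program activity,
# capturing Who delivered What to Whom, and When/Where it took place.
@dataclass
class ActivityRecord:
    who: str       # staff delivering the service
    what: str      # the activity carried out
    whom: str      # target group for the activity
    when: date     # date of delivery
    where: str     # setting of the activity

planned = ActivityRecord("2 trained counselors", "refusal-skills workshop",
                         "9th-grade students", date(2024, 3, 4), "Room 12")
actual = ActivityRecord("1 counselor", "refusal-skills workshop",
                        "9th-grade students", date(2024, 3, 6), "Room 12")

# Comparing intended vs. actual activities flags deviations for midcourse
# correction (here, a staffing shortfall and a date slip).
for field_name in ("who", "what", "whom", "when", "where"):
    p, a = getattr(planned, field_name), getattr(actual, field_name)
    if p != a:
        print(f"Deviation in {field_name}: planned {p!r}, actual {a!r}")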
Step 3: Outcome Evaluation
Outcome evaluation assesses the immediate, short-term effects of a program. Outcome measures used in such an evaluation include self-report questionnaires, interviews with key informants, and behavioral observation ratings.
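
As a concrete illustration, here is a minimal, hypothetical sketch comparing pre- and post-program outcome measures against a stated objective; the questionnaire scores and the one-point target gain are invented for illustration, not drawn from the text.

# Hypothetical outcome-evaluation sketch: compare pre- and post-program
# scores on a self-report questionnaire (invented data) against an
# assumed objective of at least a one-point average gain.
pre_scores = [3, 4, 2, 5, 3, 4]    # baseline scores on a 1-7 scale
post_scores = [5, 5, 4, 6, 4, 6]   # scores after the program

def mean(xs):
    return sum(xs) / len(xs)

change = mean(post_scores) - mean(pre_scores)
target_gain = 1.0  # assumed objective, for illustration only

print(f"Average change: {change:+.2f}")
print("Objective met" if change >= target_gain else "Objective not met")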

Step 4: Impact Evaluation
Impact evaluation is concerned with the ultimate effects desired by a program. Archival data, based on records collected for other purposes, help assess impacts; examples include medical records, police records, and school records.
Linking Program Evaluation to Program Development
The frequent occurrence of disappointing results has spurred a strong movement for accountability in community and social programs. Continuous improvement of programs relies on the use of evaluation data to plan and implement program modifications.
Barriers that prevent program planners from using feedback effectively:
• Use of an outside evaluator, who, although more objective, may create an "us versus them" situation; the resulting communication gaps can limit the use of feedback;
• Feedback is usually provided only at the end of program implementation, which leaves no opportunity for midcourse corrections.
Nevertheless, program evaluation with efficient feedback can play an important role in program development; empowerment evaluation is one approach designed to achieve this.
Empowerment Evaluation (EE)
EE breaks down barriers inherent in traditional evaluation methods, promoting an empowerment and citizen participation perspective (Fetterman, 1996). EE is an evaluation approach that aims to increase the probability of achieving program success by:
(a) providing program stakeholders with tools for assessing the planning, implementation, and self-evaluation of their program; and
(b) mainstreaming evaluation as part of the planning and management of the program/organization (Wandersman, Snell-Johns, Lentz et al., 2005).
Role of Empowerment Evaluators:
• Collaborate with community members and program practitioners to determine program goals and implementation strategies;
• Serve as facilitators, providing technical assistance to teach community members and program staff to do self-evaluation; and
• Stress the importance of using information from the evaluation in ongoing program improvement.
EE Principles: The EE principles are a set of core beliefs that, as a
whole, communicate the underlying values of EE and guide the
work of empowerment evaluators. These ten principles are:
Principle 1: Improvement
Principle 2: Community ownership
Principle 3: Inclusion
Principle 4: Democratic participation
Principle 5: Social justice
Principle 6: Community knowledge
Principle 7: Evidence-based strategies
Principle 8: Capacity building
Principle 9: Organizational learning
Principle 10: Accountability

Getting to Outcomes (GTO)
In order to do EE, Wandersman, Imm, Chinman, and Kaftarian (1999, 2000) developed a 10-step approach to results-based accountability called Getting to Outcomes (GTO). By answering ten accountability questions, interventions can be guided toward results-based accountability and program improvement:
Accountability Questions and Strategies for Answering Them:
1. What are the needs and resources in your organization/community/state? (Strategy: needs assessment; resource assessment)
2. What are the goals, target population, and desired outcomes (objectives) for your organization/community/state? (Strategy: goal setting)
3. How does the intervention incorporate knowledge of science and best practices? (Strategy: science and best-practices literature)
4. How does the intervention fit with other programs already being offered? (Strategy: collaboration; cultural competence)
5. What capacities do you need to put this intervention into place with quality? (Strategy: capacity building)
6. How will the intervention be carried out? (Strategy: planning)
7. How will the quality of implementation be assessed? (Strategy: process evaluation)
8. How well did the intervention work? (Strategy: outcome and impact evaluation)
9. How will continuous quality improvement strategies be incorporated? (Strategy: total quality management; continuous quality improvement)
10. If the intervention is successful, how will it be sustained? (Strategy: sustainability; institutionalization)
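
As a simple illustration, the ten questions can be treated as a planning checklist. The sketch below is hypothetical Python with invented status values, and only the first few questions are shown; it pairs each question with its answering strategy and flags items that remain open.

# Hypothetical GTO planning checklist: each accountability question is
# paired with (strategy, answered?) so open items are easy to spot.
gto_checklist = [
    ("Needs and resources?", "Needs assessment; resource assessment", True),
    ("Goals, target population, desired outcomes?", "Goal setting", True),
    ("Incorporates science and best practices?",
     "Science and best-practices literature", False),
    ("Fit with existing programs?", "Collaboration; cultural competence", False),
    # ...the remaining six GTO questions follow the same pattern
]

for question, strategy, answered in gto_checklist:
    status = "done" if answered else "OPEN"
    print(f"[{status}] {question} -> {strategy}")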
Conclusion
Chelimsky (1997) described three purposes of evaluation:
• Program development, e.g., information collected to strengthen programs or institutions;
• Accountability, e.g., measurement of results or efficiency;
• Broader knowledge, e.g., increasing understanding of the factors underlying public problems.
The value of any evaluation approach depends upon the purpose of
the evaluation (Chelimsky, 1997; Patton, 1997).
Program evaluation concepts can be incorporated into program planning and program implementation. This may blur the boundaries between program development and program evaluation; however, it improves the process and increases the probability of successful results.
