
MONITORING AND EVALUATION SERIES

GLOSSARY OF COMMONLY USED M&E TERMINOLOGIES

By: Enock Warinda

2011

This is a living document and will be updated periodically as required.


Planning, Monitoring and Evaluation definitions


Many ASARECA staff and partners bring a wealth of PM&E experience to bear on the programs and projects for which they are responsible. A frequent obstacle to effective discussion of PM&E is the misunderstanding that results from a lack of agreed terminology. Many donor and implementing organizations have their own specific definitions of the terms commonly associated with PM&E. To facilitate communication inside ASARECA, the following section lists some key terms and establishes a common definition for each.

The growth of Monitoring and Evaluation (M&E) units in government, together with an increased supply of M&E expertise from the private sector and public institutions, calls for a common language on M&E. M&E is a relatively new practice that tends to be informed by varied ideologies and concepts. A danger for stakeholders is that these diverse ideological and conceptual approaches can exacerbate confusion and misalignment. The standardization of concepts and approaches in ASARECA is therefore particularly crucial for the enhancement of service delivery.

Please note that this glossary is not exhaustive. It should rather be viewed as an attempt to provide ASARECA members with a shared understanding of key terminology used in M&E. ASARECA is in the process of refining its M&E systems to improve the performance of its system of governance and the quality of its outputs, thus providing early-warning systems and mechanisms to respond speedily to problems as they arise.


Detailed Glossary
Accountability

The obligation to demonstrate that work has been conducted in compliance with agreed rules and standards, or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. In development terms, it refers to the obligation of partners to act according to clearly defined responsibilities, roles and performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and performance assessments. It also refers to planning for, and the monitoring, evaluation and reporting of, performance and compliance against agreed organizational standards and outcomes. Accountability enables ASARECA to answer to all stakeholders for results, impacts and the use of resources, and requires the fullest communication between the different programs, ASARECA core service units and responsibilities.

Action Research

An interactive inquiry process that balances problem-solving actions, implemented in a collaborative context, with data-driven collaborative analysis or research to understand underlying causes, enabling future predictions about personal and organizational change.

Activities

The actions taken to achieve the required outputs and accomplish the planned objective. Examples of activities include: conducting needs assessments and training on ...; establishing trials on ...; marketing value-added products of ...; negotiations and dialogue with ...; monitoring/evaluating program results; allocating funds to ...; providing technical assistance to ...; conducting information sessions, etc.

Activity Schedule

A graphic representation that sets out the timing, sequence and duration of project activities. It can also be used to identify milestones for monitoring progress and to assign responsibility for the achievement of milestones.

Advocacy

The act of representing or defending others (individuals, communities, etc.) and using evaluation results to promote and inform.

Aggregate

To aggregate is to put together (collapse) data from different sectors (such as men, women, households, communities, management practices, regions, locations, technologies, innovations, etc.) into one category. For example: putting together data from men and women to obtain household-level data, or collapsing data from numerous households into community-level data. This requires organization beforehand, at the levels of data coding, collection and computer input.

Agricultural Development Domain

A development domain refers to the spatial representation of preconditions or factors considered important for rural development. It can be characterized using stratification criteria that, based on theory and previous research, determine the comparative advantage of rural areas with respect to frequently occurring livelihood strategies. It is constructed by intersecting three spatial variables (agricultural potential, market access and population density) using a geographic information system (GIS).
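As an illustration of the Agricultural Development Domain entry above, the following is a minimal sketch in Python of how areas might be labeled by the three stratification variables. The variable names, thresholds and scores are hypothetical; a real analysis would intersect GIS layers rather than score single areas.

```python
# Hypothetical sketch: labeling an area by agricultural potential,
# market access and population density. Thresholds are illustrative only;
# a real study would derive them from GIS data and prior research.

def development_domain(agri_potential, market_access, pop_density):
    """Return a domain label built from high/low ratings of three variables."""
    potential = "high" if agri_potential >= 0.5 else "low"   # 0-1 score
    access = "good" if market_access >= 0.5 else "poor"      # 0-1 score
    density = "dense" if pop_density >= 100 else "sparse"    # persons/km^2
    return f"{potential} potential / {access} access / {density} population"

print(development_domain(agri_potential=0.7, market_access=0.3, pop_density=250))
# -> high potential / poor access / dense population
```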


Agricultural Performance Indicator (API)

The extent or level of contribution of agriculture to an economy or to the region. It is computed as follows:

API = Observed level of contribution / Target level of contribution

where the observed level of contribution is the contribution that agriculture makes to the economy in a particular time period (usually one year), and the target level of contribution is the maximum expected or planned contribution that agriculture could make to the economy given the economy's resource base. For example, if agriculture contributed USD 4 billion against a target of USD 5 billion, the API would be 4/5 = 0.8.

Analysis of Objectives

The identification and verification of the future desired benefits to which the beneficiaries attach priority. The output of an analysis of objectives is the objective tree.

Analysis of Strategies

The critical assessment of the alternative ways of achieving objectives, and the selection of one or more for inclusion in the proposed project.

Analytical Methods

Methods used to process and interpret information during an evaluation.

Appraisal

Within the context of ASARECA, appraisal refers to an overall assessment of the relevance, feasibility and potential sustainability of a development intervention prior to a funding decision. Its purpose is to enable decision-makers to decide whether the activity represents an appropriate use of resources.

Archival Records

The gleaning of information from existing records kept by your own or another institution in order to gather data for your evaluation.

Assessment

A process of making a judgment on the basis of the analysis of available information.

Assumptions

Hypotheses about factors or risks which could affect the progress or success of a development intervention. They are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain. They represent the fourth column of the Logframe matrix.

Attribution

The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention; that which is to be credited for the observed changes or results achieved. It represents the extent to which observed development effects can be attributed to a specific intervention, or to the performance of one or more partners, taking account of other interventions, (anticipated or unanticipated) confounding factors, or external shocks.

Attrition

A situation in which some members of the treatment or control group, or both (e.g. farmers and cluster groups), drop out of the sample. It also refers to the failure to collect data from a unit in subsequent rounds of a panel data survey. Attrition in the treatment group is generally higher the less desirable the intervention is.
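A minimal sketch, with hypothetical household IDs, of how attrition between a baseline and a follow-up round of a panel survey might be quantified:

```python
# Hypothetical sketch: attrition between two panel survey rounds.
baseline_ids = {"hh01", "hh02", "hh03", "hh04", "hh05"}
followup_ids = {"hh01", "hh03", "hh05"}  # hh02 and hh04 could not be re-surveyed

retained = baseline_ids & followup_ids
attrition_rate = 1 - len(retained) / len(baseline_ids)
print(f"Attrition rate: {attrition_rate:.0%}")  # Attrition rate: 40%
```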


Audit

An independent, objective assurance activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to assessing and improving the effectiveness of risk management, control and governance processes. Note: it is worth distinguishing between regularity (financial) auditing, which focuses on compliance with applicable statutes and regulations, and performance auditing, which is concerned with relevance, economy, efficiency and effectiveness. Internal auditing provides an assessment of internal controls undertaken by a unit reporting to management, while external auditing is conducted by an independent organization.

Baseline Data

A set of data that measures specific conditions (almost always the indicators chosen through the design process) before a project, initiative or program starts, or shortly after implementation begins. It provides a starting point against which to compare project performance over the life of the project. Example: if you are on a diet, your baseline is your weight on the day you begin; for a project, it might be the level of income at the start. If reliable historical data on your performance indicator exist, they should be used; otherwise, you will have to collect a set of baseline data at the first opportunity.

Baseline Study

An analysis describing the situation prior to a development intervention, against which progress can be assessed or comparisons made. The following questions should be considered when planning a baseline study:
- What information is already available?
- What will the study measure? Which data will effectively measure the indicators?
- Which methodology should be used to measure progress and results achieved against the project objectives?
- What logistical preparations are needed for collecting, analyzing, storing and sharing data?
- How will the data be analyzed?
- Who should be involved in conducting the study? Does the team have all the skills needed? If not, how will additional expertise be obtained?
- What will the financial and management costs of the study be? Are the estimated costs proportionate to the overall project costs?
- Are adequate quality control procedures in place?
- How will the study results and recommendations be used?

Point to remember: if an end-line study is planned, then both the baseline and end-line studies should use the same methods of sampling, data collection and analysis, and collect the same data (set of indicators) for comparison.

Benchmark

A reference point or standard against which performance or achievements can be assessed; the performance that has been achieved in the recent past by other comparable organizations, or what can reasonably be inferred to have been achieved in the circumstances.

Beneficiaries

The individuals, groups or organizations who, in their own view and whether targeted or not, benefit directly or indirectly from the interventions of ASARECA.

Bias


The extent to which the estimate of impact differs from the true value as a result of problems in the evaluation or sample design, rather than sampling error. Bias in sampling, for example, means ignoring or underrepresenting parts of the target population.

Capacity-building

The process through which capacity is created. This is an increasingly important cross-cutting issue in poverty reduction interventions.

Case Study

A methodological approach to describing a situation, individual, etc., that typically incorporates a number of data-gathering activities (e.g. interviews, observations, questionnaires) at selected sites or programs.

Causality Analysis

An analysis used in program formulation to identify the root causes of development challenges. It organizes the main data, trends and findings into relationships of cause and effect, and identifies root causes and their linkages, as well as the differentiated impact of the selected development challenges. A causality framework or causality tree analysis (or problem tree) can be used as a tool to cluster contributing causes and examine the linkages among them and their various determinants.

Coherence

Compliance with the policies, guidelines, priorities and approaches set by an institution.

Community

A group of people living in the same locality and sharing common characteristics.

Community of Practice

A network of people who work on similar processes or in similar disciplines and who come together to develop and share their knowledge in that field, for the benefit of both themselves and their organization. It may be created formally or informally, and members can interact online or in person.

Control

A verification that financial documents are exact and expenditures conform to norms and authorization procedures (financial control); or a management function to determine whether materials conform to technical specifications and to international norms (technical control).

Comparison Group

Individuals whose characteristics (such as race/ethnicity, gender and age) are similar to those of your program participants. These individuals may receive no services, or a different set of services, activities or products; in no instance do they receive the same service(s) as those you are evaluating. As part of the evaluation process, the experimental (or treatment) group and the control/comparison group are assessed to determine which type of services, activities or products provided by your program produced the expected changes.

Control Group

A group of individuals whose characteristics (such as race/ethnicity, gender and age) are similar to those of your program participants, but who do not receive the program (services, products or activities) you are evaluating. Participants are randomly assigned to either the treatment (or program) group or the control group. A control group is used to assess the effect of your program on participants as compared with similar individuals not receiving the services, products or activities you are evaluating. The same information is collected for people in the control group as for those in the experimental group.
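A minimal sketch of the random assignment that distinguishes a control group from a mere comparison group; the participant list and the seed are hypothetical.

```python
# Hypothetical sketch: randomly assigning participants to treatment and control.
import random

participants = [f"farmer_{i:02d}" for i in range(1, 21)]  # 20 hypothetical farmers
random.seed(42)              # fixed seed so the assignment can be reproduced
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]  # receive the program
control_group = participants[half:]    # do not receive the program

print(len(treatment_group), "treated;", len(control_group), "control")
```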


Correlation

The strength of the relationship between two (or more) variables. It can take two directions:
- Positive correlation: one variable tends to increase together with another variable.
- Negative correlation: one variable decreases as the other increases.
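A minimal sketch of both directions, using hypothetical figures (requires numpy; the Pearson coefficient ranges from -1 to +1):

```python
# Hypothetical sketch: positive and negative correlation.
import numpy as np

training_days = np.array([1, 2, 3, 4, 5, 6])
yield_t_ha = np.array([1.1, 1.4, 1.9, 2.2, 2.8, 3.1])  # rises with training days
crop_losses = np.array([0.9, 0.8, 0.6, 0.5, 0.3, 0.2])  # falls with training days

print(np.corrcoef(training_days, yield_t_ha)[0, 1])   # close to +1: positive
print(np.corrcoef(training_days, crop_losses)[0, 1])  # close to -1: negative
```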
Cost-Benefit Analysis

A form of economic analysis that takes into account the benefits and costs in commensurable and actual monetary values, and arrives at a single index to determine the value of a project. A financial cost-benefit analysis is made from the perspective of the project; an economic cost-benefit analysis is made from the perspective of the entire economy of which the aid activity is part; a social cost-benefit analysis also includes distributional considerations.

Cost-Effectiveness Analysis

An economic or social cost-benefit analysis that quantifies benefits without translating them into monetary terms. This analysis allows one to compare alternative ways of accomplishing the same objective(s), and to select, among the feasible activities, the one that will attain the objective at the least cost.

Counterfactual

An estimate of what the outcome (Y) would have been for a program or project participant in the absence of the program (P). In other words, it is the situation or condition which would hypothetically prevail for individuals, organizations or groups were there no development intervention. By definition, the counterfactual cannot be observed; therefore, it must be estimated using comparison groups.

Critical Assumption

A hypothesis about factors or risks which could affect the progress or success of a development intervention; an important factor outside management control that can strongly influence project implementation and success.

Note: Assumptions can also be understood as hypothesized conditions that bear on the validity of the evaluation itself, e.g. about the characteristics of the population when designing a sampling procedure for a survey. Assumptions are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain.

Data

Information stored in numerical or verbal form, in either hard or soft format:
- Hard data: precise, numerical information.
- Soft data: less precise, verbal information.
- Raw data: survey information before it has been processed and analyzed.
- Missing data: values or responses which fieldworkers were unable to collect (or which were lost before analysis).
- Gender-disaggregated data: information used to promote gender-balanced analyses.

Data Collection Method

The strategy and approach used to collect data. Methods include informal and formal surveys; direct and participatory observation; community interviews; focus groups; expert opinion; case studies; literature searches, etc. In collecting data, the following questions should be addressed:
- What type of data should we collect?
- When (and how often) should we collect data?
- What methods and tools will we use to collect data?
- Where do we get the data from?
- How do we ensure good-quality data?

What type of data should we collect? There are two main types of data, qualitative and quantitative, and the type of data most appropriate for a project will depend on the indicators developed.


Qualitative data consist of perceptions, experiences and opinions; common questions to collect qualitative information might begin with "How did...?", "In what way...?", "In your opinion...?", etc. Quantitative data involve numbers, percentages and ratios; common questions to collect quantitative information might start with "How many...?", "What proportion of...?", etc. The most common methods used in qualitative data collection are observation, focus group discussions and in-depth interviews. The most common methods of collecting quantitative data are quantitative surveys and secondary data review. Qualitative data can be categorized and quantified for the purpose of data analysis.

What methods and tools will we use to collect data? There are a variety of methods used to collect data; the most common are:

a) Surveys: These are used by evaluators to gather data on specific questions. Performance data, demographic information, satisfaction levels and opinions can be collected through surveys, which usually involve pre-set questions in a particular order or flow. The questions can be structured or semi-structured, open or close-ended in format. Surveys can be conducted face to face, by email or telephone, and they can also be self-administered.

b) Interviews: An interview is used when interpersonal contact is important and when follow-up of any interesting comments is desired. Interviews are best conducted face to face although, in some situations, telephone or online interviewing can be successful. In structured interviews, a carefully worded questionnaire is administered, and the emphasis is on obtaining answers to pre-prepared questions; interviewers are trained to deviate only minimally from the question wording to ensure uniformity of administration. For in-depth interviews, no rigid format is followed, although a series of open-ended questions is usually used to guide the conversation. There may be a trade-off between comprehensive coverage of topics and in-depth exploration of a more limited set of issues. In-depth interviews capture the respondent's perceptions in his or her own words, which allows the evaluator to understand the experience from the respondent's perspective. In both cases, it is good practice to prepare an interview guide and to hold mock interviews to estimate the time required, to amend difficult questions or wordings, and to ensure that none of the questions are leading or prompting a specific answer. In terms of qualitative data that can be used to gain insight into the achievements and challenges of a project, interviews with project staff and beneficiaries can be especially useful to answer questions like: How has the project affected you personally? Which aspects of the project have been successful, and why? Which aspects of the project have not been successful, and why? What needs to be done to improve the project?

c) Observation: In observation, data are gathered on activities, processes and behavior. Observation can provide evaluators with an opportunity to understand the context within which the project operates and to learn about issues that the participants or staff may themselves be unaware of or unwilling to discuss in an interview or focus group. It is a useful way to collect data on physical settings, interactions between individuals or groups, non-verbal cues, or the non-occurrence of something that is expected. Data collected through observation should be documented immediately. The descriptions must be factual, accurate and thorough without being opinionated or too detailed. The date and time of the observation should be recorded, and everything that the observer believes to be worth noting should be included; no information should be trusted to future recall.

d) Focus group discussions: This method combines elements of both interviewing and observation, and the explicit use of group interaction is believed to generate data and insights that are unlikely to emerge without the interaction found in the group. The facilitator can also observe group dynamics and gain insight into the respondents' behaviors, attitudes and relationships with one another. Focus groups involve a gathering of 8-12 people who share characteristics relevant to the evaluation questions, and the discussions should take a maximum of 90 minutes. Originally used as a market research tool to investigate the appeal of various products, the focus group technique has been adapted as a tool for data collection in many other sectors. FGDs are useful in answering the same type of questions as those posed in in-depth interviews, but within a social rather than individual setting. Specific applications of the focus group method in evaluations include:
- identifying and defining achievements and constraints in project implementation
- identifying project strengths, weaknesses and opportunities
- assisting with the interpretation of quantitative findings
- obtaining perceptions of project effects
- providing recommendations for similar future interventions
- generating new ideas

e) Document studies: Reviews of various documents that were not prepared for the purpose of the evaluation can provide insights into a setting and/or group of people that cannot be observed or noted in any other way. For example, external public records include census and vital statistics reports, county office records, newspaper archives and local business records that can assist an evaluator in gathering information about the larger community and relevant trends; these may be helpful in understanding the characteristics of the project participants and in making comparisons between communities. Examples of internal public records are organizational accounts, institutional mission statements, annual reports, budgets and policy manuals; they can help the evaluator understand the institution's resources, values, processes, priorities and concerns. Personal documents are first-person accounts of events and experiences, such as diaries, field notes, portfolios, photographs, artwork, schedules, scrapbooks, poetry, letters to the paper and quotes; they can help the evaluator understand an individual's perspective with regard to the project. Document studies are inexpensive, quick and unobtrusive; however, accuracy, authenticity and access always need to be considered.

Data Quality

The extent to which data adhere to the key dimensions of quality, namely:
- Accuracy
- Reliability
- Completeness
- Precision
- Timeliness
- Integrity: the security or protection of information from unauthorized access or revision
- Utility: the usefulness of the information for its intended users
- Objectivity: whether information is accurate, reliable and unbiased, and whether it is presented in an accurate, clear and unbiased manner

Data Quality Assessment and/or Assurance

The set of internal and external mechanisms and processes used to ensure that data meet the key dimensions of quality.

Data Quality Management

The establishment and deployment of roles, responsibilities, policies and procedures concerning the acquisition, maintenance, dissemination and disposition of data. It allows an organization to see how the data quality procedures put in place have improved the quality of the data.

Delphi Technique

The Delphi technique enables experts who live in different locations to engage in dialogue and reach consensus through an iterative process. Experts are asked specific questions; their answers are sent to a central source, summarized, and fed back to the experts. The experts then comment on the summary; they are free to challenge particular points of view or to add new perspectives by providing additional information. Because no one knows who said what, conflict is avoided.


Double Difference

The difference between the change in the outcome observed in the treatment group and the change observed in the control group; equivalently, the change over time in the difference in the outcome between the treatment and control groups. Double differencing removes selection bias resulting from time-invariant unobservables. Also called difference-in-differences:

DD = (Y_treatment, after - Y_treatment, before) - (Y_control, after - Y_control, before)
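A minimal numeric sketch of this computation; the yield figures are hypothetical.

```python
# Hypothetical sketch: double difference (difference-in-differences).
treat_before, treat_after = 2.0, 3.0      # mean yield (t/ha), treatment group
control_before, control_after = 2.0, 2.4  # mean yield (t/ha), control group

dd = (treat_after - treat_before) - (control_after - control_before)
print(f"Impact estimate (double difference): {dd:.1f} t/ha")  # 0.6 t/ha
```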


Effects

Intended or unintended changes resulting directly or indirectly from a development intervention:
- Primary effects: the changes brought about by an assistance effort to accomplish the specific objective of the intervention.
- Direct effects: the immediate costs and benefits of both the contributions to and the results of a project, without taking their effect on the economy into consideration.
- Indirect effects: the costs and benefits unleashed by the contributions to a project and by its results.
- External effects: the costs and benefits not taken into account in determining the expenditure and financial revenue of the aid programme.
- Intangible effects: costs and benefits that are thought to be pertinent but cannot be measured, and therefore cannot be included in the economic analysis; these effects are taken into account by sociological analyses.

Evaluability Assessment

A brief preliminary study undertaken to determine whether an evaluation would be useful and feasible. This type of preliminary study helps clarify the goals and objectives of the program or project, identify the data resources available, pinpoint gaps and identify data that need to be developed, and identify key stakeholders and clarify their information needs. It may also redefine the purpose of the evaluation and the methods for conducting it. By looking at the intervention as implemented on the ground, and at the implications for the timing and design of the evaluation, an evaluability assessment can save time and help avoid costly mistakes. Evaluability assessments are often conducted by a group including stakeholders such as implementers, evaluators and administrators. To conduct an evaluability assessment, the team:
- reviews materials that define and describe the intervention
- identifies modifications to the intervention
- interviews managers, Principal Investigators, scientists and other staff on their perceptions of the intervention's goals and objectives
- interviews stakeholders on their perceptions of, and level of satisfaction with, the intervention's goals and objectives
- develops and refines a theory-of-change model
- identifies sources of data and data collection methods
- identifies people and organizations that can implement any possible recommendations from the evaluation

Evaluation

A systematic and objective examination of a planned, ongoing or completed project or initiative at a given point in time. It commonly seeks to determine the efficiency, effectiveness, impact, sustainability and relevance of a project's or organization's objectives. It requires an in-depth review at specific points in the life of the project, usually at the mid-point or end. It verifies whether project objectives have been achieved or not. It is a management tool which can assist in evidence-based decision-making, and which provides valuable lessons for implementing organizations and their partners. Evaluation helps to answer questions such as:
- How relevant was our work in relation to the primary stakeholders and beneficiaries?
- To what extent were the project objectives achieved? What contributed to and/or hindered these achievements?
- Were the available resources (human, financial) utilized as planned and used in an effective way?
- What are the key results, including intended and unintended results?
- What evidence is there that the project has changed the lives of individuals and communities?
- How has the project helped to strengthen the management and institutional capacity of the organization?
- What is the potential for sustainability, expansion and replication of similar interventions?
- What are the lessons learned from the intervention, and how should they be utilized in future planning and decision-making?

There are several approaches to development evaluation.

Evaluation Criteria

When evaluating development programs and projects, it is useful to consider the following criteria:

1. Effectiveness

The extent to which an organization, policy, program or initiative is meeting its expected results; the degree of achievement of the planned specific objectives, and thus the extent to which the beneficiaries have reaped the planned benefits. In evaluating the effectiveness of a program or project, it is useful to consider the following questions:
- To what extent were the planned resources used to meet project objectives?
- What were the major factors influencing the achievement or non-achievement of the objectives?
- What was the rate of disbursement of project resources?
- To what extent did the interventions address the capacity needs identified?
- What was the quality of capacity built?
- To what extent were the capacity-building skills acquired utilized?

Related term: cost-effectiveness, the extent to which an organization, policy, program or initiative is using appropriate and efficient means to achieve its expected results, relative to alternative design and delivery approaches.

2. Efficiency

Efficiency measures the outputs, qualitative and quantitative, in relation to the inputs. It is an economic term used to assess the extent to which aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted. When evaluating the efficiency of a project or program, it is useful to consider the following questions:
- Were the activities cost-effective?
- Were objectives achieved on time?
- Was the project or program implemented in the most efficient way compared with alternatives?

3. Relevance

The extent to which the supported interventions are suited to the priorities and policies of the target group, recipient and donors. In evaluating the relevance of a project or program, it is useful to consider the following questions:
- To what extent are the objectives of the program or project still valid?
- Are the activities and outputs of the program or project consistent with the overall goal and the attainment of its objectives?
- Are the activities and outputs of the program or project consistent with the intended impacts and effects?

4. Impact

The positive or negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results, and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions. When evaluating the impact of a program or project, it is useful to consider the following questions:
- What happened as a result of the program or project?
- What real difference has the activity made to the beneficiaries?
- How many people have been affected?

5. Sustainability

Sustainability is concerned with measuring whether the benefits and effects generated by a project or program will continue after donor funding has been withdrawn and the project has terminated. Projects need to be environmentally as well as financially sustainable. When evaluating the sustainability of a program or project, it is useful to consider the following questions:
- To what extent did the benefits of the program or project continue after donor funding ceased?
- What were the major factors which influenced the achievement or non-achievement of sustainability?

Evaluation Framework Study

An assessment conducted at project start-up to verify that the conditions for monitoring and evaluating the project are in place. It includes a review of data availability and the possible collection of baseline data; the final selection of the indicators; agreement on the targets to be achieved and their measurement on the basis of the selected indicators; and the selection/provision of tools for data collection according to the selected or available sources of verification.

Evaluative Research

A type of study that uses standard social research methods for evaluative purposes; a specific research methodology; and an assessment process that employs special techniques unique to the evaluation of social programs.

Ex-ante Evaluation (Prospective Evaluation)

A prospective evaluation is conducted ex ante; that is, a proposed program is reviewed before it begins, in an attempt to analyze its likely success, predict its cost, and analyze alternative proposals and projections. Most prospective evaluations involve the following kinds of activities:
- a contextual analysis of the proposed program or policy
- a review of evaluation studies on similar programs or policies, and a synthesis of the findings and lessons from the past


- a prediction of likely success or failure, given a future context that is not too different from the past, and suggestions for strengthening the proposed program or policy if decision-makers want to go forward

Expected Results

An outcome that a program, policy or initiative is designed to produce.

Ex-post Evaluation (Analysis)

Evaluation of a development intervention after it has been completed; it may be undertaken directly after, or long after, completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other interventions. An ex-post evaluation includes not only the summative evaluation of the project itself (typically in terms of processes and outputs) but also an analysis of the project's impact on its environment and its contribution to wider (economic/societal/educational/community, etc.) goals and policies. It should also lay down a framework for future action, leading in turn to the next ex-ante study. In reality, ex-post evaluations often take so long to produce (in order to measure long-term impact) that they are too late to influence future planning.

External Evaluation

The evaluation of a development intervention conducted by entities and/or individuals outside the donor and implementing organizations. Note: an externally conducted evaluation is not necessarily an independent evaluation. If the evaluation is conducted externally but is funded by, and under the general oversight of, Program Managers and Principal Investigators, it is an internal evaluation and should not be deemed independent.

Feasibility Study

An assessment conducted during the appraisal phase to verify whether the proposed project is well founded and is likely to meet the needs of its intended target groups/beneficiaries. It should take into account all policy, technical, economic, financial, institutional, management, environmental, socio-cultural and gender-related aspects.

Feedback

The transmission of findings generated through the evaluation process to parties for whom they are relevant and useful, so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions, recommendations and lessons from experience.

Formative Evaluation

An evaluation intended to improve performance, most often conducted during the implementation phase of projects or programs. It can also be conducted for other reasons, such as compliance, legal requirements, or as part of a larger evaluation initiative. Learning how the program is being implemented, including its challenges and strong points, provides useful information for improving practice, rethinking how to go about things, and identifying future action steps. It includes several evaluation types, e.g.:
- Needs assessment: determines who needs the program, how great the need is, and what might work to meet the need
- Evaluability assessment: determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
- Structured conceptualization: helps stakeholders define the program or technology, the target population, and the possible outcomes


- Implementation evaluation: monitors the fidelity of the program or technology delivery
- Process evaluation: investigates the process of delivering the program or technology, including alternative delivery procedures

Gender

The social roles assigned to men and women based on their sex.

Gender Analysis

An assessment of the likely differences in the impacts of proposed policies, programmes or projects on women and men. It includes attention to their different roles; their differential access to and use of resources and their specific needs, interests and problems; and the barriers to the full and equitable participation of women and men in project activities and to the equitable distribution of the benefits obtained.

Goal

The sectoral, national or organizational objectives to which the project is designed to contribute. It can also be thought of as describing the expected impact of the project: a statement of intention that defines the main reason for undertaking the project.

Ground-truth

A test run, a pilot or a pre-test study. It implies testing a technology, an innovation or any activity in the setting where it will be used; it also refers to checking ideas or methods in the real world.

Hierarchy of Objectives

A tool that helps to analyze and communicate programme objectives and shows how local interventions should contribute to global objectives. It organizes these objectives into different levels (objectives, sub-objectives) in the form of a hierarchy or tree, thus showing the logical links between the objectives and their sub-objectives. It presents in a synthetic manner the various intervention logics derived from the regulation that link individual actions and measures to the overall goals of the intervention.

Horizontal Logic

Indicates the relation between the resources and the results of a project or programme through the identification of objectively verifiable indicators, and of means of verification for these indicators.

Inception Phase

The period from project start-up until the finalization of the updated work plan, Logframe Matrix and evaluation framework study. It extends between one and three months and ends with a first project report.

Independent Evaluation

An evaluation carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention. Note: the credibility of an evaluation depends in part on how independently it has been carried out. Independence implies freedom from political influence and organizational pressure; it is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings.

Indicator

An indicator is a marker of performance showing progress and helping to measure change. The word comes from the Latin in (towards) and dicare (make known). Types of indicators:


1. Input indicators: measure the provision of resources, for example the number of full-time staff working on the project.

2. Process indicators: provide evidence of whether the project is moving in the right direction to achieve the set objectives. They relate to the multiple activities carried out to achieve project objectives, e.g.:
- What has been done? Examples include training outlines, policies/procedures developed, number of varieties produced.
- Who and how many people have been involved? Examples include number of participants, proportion of ethnic groups, age groups, number of partner organizations involved.
- How well have things been done? Examples include the proportion of participants who report they are satisfied with the service or information provided, etc.

3. Output indicators: demonstrate the change at project level as a result of the activities undertaken. Examples include the number of demand-driven technologies generated, the number of policy options presented for legislation or decree, etc.

4. Outcome indicators: illustrate the change with regard to the beneficiaries of the project in terms of knowledge, attitudes, skills or behavior. These indicators can usually be monitored after a medium- to long-term period. Examples include the number of new-variety users in a community, etc.

5. Impact indicators: measure the long-term effect of a program, often at the national or population level. Examples include the percent change in total factor productivity, the percent change in selected crops, etc. Impact measurement requires rigorous evaluation methods, longitudinal study and an experimental design involving control groups in order to assess the extent to which any change observed can be directly attributed to project activities.

Other types of indicators:

1. Proxy indicators: provide supplementary information where direct measurement is unavailable or impossible to collect.

2. Quantitative and qualitative indicators: all the indicators discussed above can be categorized as qualitative or quantitative on the basis of the way they are expressed. Quantitative indicators are essentially numerical and are expressed in terms of absolute numbers, percentages, ratios, binary values (yes/no), etc. Qualitative indicators are narrative descriptions of phenomena, measured through people's opinions, beliefs and perceptions and the reality of people's lives in terms of non-quantitative facts. Qualitative information often explains the quantitative evidence, e.g.: What are the reasons for low levels of technology adoption? Why do so few men use the introduced varieties? What are the cultural determinants that contribute to the need for appropriate information packages? Qualitative information supplements quantitative data with a richness of detail that brings a project's results to life.

It is important to select a limited number of key indicators that will best measure any change in the project objectives and will not impose unnecessary data collection. As there is no standard list of indicators, each project will require a collaborative planning exercise to develop indicators related to each specific objective, on the basis of the needs, theme and requirements of the project.

Criteria of a Strong Performance Indicator
- Validity: Does the performance indicator actually measure the result?
- Reliability: Is the performance indicator a consistent measure over time?
- Sensitivity: When the result changes, will the performance indicator be sensitive to those changes?
- Simplicity: How easy will it be to collect and analyze the data? Does it present challenges? Is it complex? Does it need technical expertise to understand?
- Utility: Will the information be useful for program management (decision-making, learning and adjustment)?
- Affordability: Can the program afford to collect the information?
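To make the arithmetic behind tracking a quantitative indicator concrete, here is a minimal sketch of measuring progress against a baseline and a target; the indicator and all figures are hypothetical.

```python
# Hypothetical sketch: progress of an indicator from baseline toward target.

def share_of_target_achieved(baseline, current, target):
    """Share of the planned baseline-to-target change achieved so far."""
    return (current - baseline) / (target - baseline)

# Indicator: number of households using an improved variety.
progress = share_of_target_achieved(baseline=200, current=650, target=1000)
print(f"Progress toward target: {progress:.0%}")  # Progress toward target: 56%
```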

Innovation

A creative and interactive process of making improvements by successfully introducing something new into social and economic practice. It goes far beyond the confines of research labs, reaching users, suppliers and consumers everywhere: in government, business and non-profit organizations, across borders, across sectors and across institutions. The Oslo Manual defines four types of innovation:
- Product innovation: a good or service that is new or significantly improved. This includes significant improvements in technical specifications, components and materials, software in the product, user-friendliness or other functional characteristics.
- Process innovation: a new or significantly improved production or delivery method. This includes significant changes in techniques, equipment and/or software.
- Marketing innovation: a new marketing method involving significant changes in product design or packaging, product placement, product promotion or pricing.
- Organizational innovation: a new organizational method in business practices, workplace organization or external relations.

Innovation Policies (Agricultural)

Policies designed to enhance stakeholders' capacity to innovate in the agricultural sector. They operate on both the formal and informal sources of innovation. Based on the Innovation Systems Framework, innovation policies can be classified into three categories:
- policies designed to create and strengthen the formal organizations and institutions needed to generate and apply new or existing information
- policies that support and facilitate innovation among system actors, including farmers
- policies that integrate and intermediate among public, private and civil society actors engaged in innovation processes

Potential indicators on agricultural innovation policy include:
- expert assessments of policies on agricultural research, education, and extension/advisory services
- average distance of farm households to markets
- membership in international regimes, e.g. the International Union for the Protection of New Varieties of Plants (UPOV) or the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA)

Innovation System

A network of organizations, enterprises and individuals focused on bringing new products, new processes and new forms of organization into social and economic use, together with the institutions and policies that affect their behavior and performance. The Innovation System concept embraces not only the science suppliers but the totality and interaction of the actors involved in innovation. It gives more attention to:
- the interaction between research and related economic activities
- the attitudes and practices that promote interaction and the learning that accompanies it
- the creation of an enabling environment that encourages interaction and helps to put knowledge into socially and economically productive use

Inputs

The human, financial, material/physical and information resources used to produce outputs through activities and to accomplish outcomes.

Internal Evaluation


An evaluation of a development intervention conducted by a unit and/or individuals reporting to the management of the donor, partner or implementing organization.

Interval Scale

Measurements with defined and constant intervals between successive values (e.g. attitude measures and rankings). On an interval scale, all values are continuous.

Intervention Logic

The strategy underlying the project: the narrative description of the project at each of the four levels of the hierarchy of objectives used in the Logframe.

Joint Evaluation

An evaluation in which different donor agencies and/or partners participate. Note: there are various degrees of "jointness", depending on the extent to which individual partners cooperate in the evaluation process, merge their evaluation resources and combine their evaluation reporting. Joint evaluations can help overcome attribution problems in assessing the effectiveness of programs and strategies, the complementarity of efforts supported by different partners, the quality of aid coordination, etc.

Key Informants

People in a community, region or organization who, because of their position, are able to provide information or insights on some aspect relevant to the project. These informants play a key role in evaluation, especially qualitative evaluation, though it is important to bear in mind that they provide a subjective, one-sided perspective; the evaluators will therefore have to obtain information from a large number of key informants.

Key Performance Indicator (KPI)

A variable that allows the verification of changes in the development intervention, or shows results relative to what was planned. Key performance indicators may be selected from the overall objectively verifiable indicators, but should adequately and sufficiently measure the intended change, either singly or in combination.

Learning

The process by which knowledge and experience directly influence changes in behavior. It also refers to reflection on experiences to identify how a situation or future actions could be improved; this can be individual or group-based. Learning involves applying lessons learned to future actions, which provides the basis for another cycle of learning. Thus, we learn to:
- increase effectiveness and efficiency
- increase the ability to initiate and manage change
- utilize institutional knowledge and promote organizational learning
- improve cohesion among different units of the organization
- increase adaptability to opportunities, challenges and unpredictable events
- increase motivation, confidence and proactive learning

Lessons Learned

The conclusions extracted from reviewing a development program or project, or even its activities, by participants, managers, beneficiaries or evaluators, with implications for effectively addressing similar issues or problems in another setting. Frequently, lessons highlight strengths or weaknesses in preparation, design and implementation that affect performance, outcome and impact.

Logical Framework

A management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and


the assumptions or risks that may influence success and failure. It thus facilitates the planning, execution and evaluation of a development intervention. The following is the layout of the Logical Framework Matrix (the numbers indicate the order in which the cells are typically completed):
The matrix has four rows (Impact, Outcomes, Outputs, Activities) and four columns: Intervention logic; Objectively verifiable indicators (OVIs) of achievement; Sources and means of verification; and Assumptions.

Impact
- Intervention logic (13): What is the overall impact of the project?
- OVIs (14): What are the key indicators related to the impact?
- Sources and means of verification (15): What are the sources of information for these indicators?

Outcomes
- Intervention logic (9): What specific outcome is the action intended to achieve?
- OVIs (10): Which indicators clearly show that the outcome has been achieved?
- Sources and means of verification (11): What sources of information exist or can be collected? What methods are required to get this information?
- Assumptions (12): Which risks should be taken into consideration?

Outputs
- Intervention logic (5): Outputs are the results envisaged to achieve the specific objective.
- OVIs (6): Enumerate the outputs. What are the indicators to measure whether, and to what extent, the action achieves the expected outputs?
- Sources and means of verification (7): What are the sources of information for these indicators?
- Assumptions (8): What external conditions must be met to obtain the expected outputs on schedule?

Activities
- Intervention logic (1): What are the key activities to be carried out, and in what sequence, in order to produce the expected results? (Group the activities by result.)
- Means (2): What means are required to implement these activities, e.g. personnel, equipment, training, studies, supplies, operational facilities, etc.?
- Sources and means of verification (3): What are the sources of information about action progress?
- Assumptions (4): What pre-conditions are required before the action starts? What conditions outside the beneficiary's direct control have to be met for the implementation of the planned activities?
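As an illustration only, one way a single logframe row could be stored in a simple monitoring information system; the field names and example content are hypothetical, not part of any Logframe standard.

```python
# Hypothetical sketch: a logframe row as a data structure.
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    level: str                 # "Impact", "Outcome", "Output" or "Activity"
    intervention_logic: str
    indicators: list = field(default_factory=list)               # OVIs
    sources_of_verification: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

row = LogframeRow(
    level="Output",
    intervention_logic="Improved varieties released to farmers",
    indicators=["Number of demand-driven varieties released by year 3"],
    sources_of_verification=["Variety release committee records"],
    assumptions=["Seed multiplication partners remain operational"],
)
print(row.level, "-", row.indicators[0])
```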

Logical Framework Approach (LFA)

A methodology for planning, managing and evaluating programmes and projects, involving stakeholder analysis, problem analysis, analysis of objectives, analysis of strategies, and preparation of the Logframe Matrix and the activity and resource schedules.

Management Information System (MIS)

The creation, through a well-designed monitoring system, of regular feedback to management at the project and central levels on all key aspects of a project.

Means of Verification (MoV)

The expected source of the information we need to collect. MoVs should clearly specify this source; they ensure that the indicators can be measured effectively by specifying the types of data, sources of information and methods of collection.

Meta-Evaluation

An evaluation designed to aggregate findings from a series of evaluations. The term is also used to denote the evaluation of an evaluation, to judge its quality and/or assess the performance of the evaluators.

Mid-Term Review/Evaluation

The point at which progress to date is formally measured to see whether the original environment has changed in a way that affects the relevance of the original objectives. It is an opportunity to review these objectives if necessary, decide whether the project is on target in terms of its projected outputs, adjust working practices if necessary or, in certain circumstances, re-negotiate timescales or outputs. It is often not carried out at the mid-point at all, but at the end of a significant phase.

Milestones


Milestones correspond to process indicators. They are an indication of short- and medium-term objectives (usually activities) which facilitate the measurement of achievements throughout the project, rather than just at the end. They also indicate times when decisions should be taken or an action should be finished.

Monitoring

An ongoing and systematic collection and analysis of information to assist timely decision-making, ensure accountability and provide part of the data for evaluation and learning. Monitoring provides project and program managers with important information on progress, or lack of progress, in relation to project/program objectives. It helps to answer the following questions:
- How well are we doing?
- Are we doing the activities we planned to do?
- Are we following the designated timeline?
- Are we over- or under-spending?
- What are the strengths and weaknesses in the project?

Monitoring and Evaluation (M&E) Framework

A holistic approach that can address the program needs, monitor program processes and outputs, and evaluate goals and program/project objectives. It encompasses everything from the program planning processes down to the documentation and dissemination plan.

Monitoring and Evaluation (M&E) Plan

A comprehensive narrative document on all M&E activities (a summary of the M&E Framework). It addresses the key M&E questions; what indicators to measure; the sources, frequency and method of indicator data collection; baselines, targets and assumptions; how to analyze or interpret the data; the frequency and method for report development and distribution of the indicators; and how the components of the M&E System will function.

Monitoring and Evaluation (M&E) System

A framework of M&E principles, practices and standards to be used throughout ASARECA. It is also envisaged to function as an apex-level information system which draws from program and project systems to deliver useful M&E products for stakeholders.

Most Significant Change (MSC)

A system designed to record and analyze change in projects or programs where it is not possible to predict changes precisely beforehand, and where it is therefore difficult to set predefined indicators. It is also designed to ensure that the process of analyzing and recording change is as participatory as possible. It aims to identify significant changes brought about by a development intervention, especially in those areas where changes are qualitative and therefore not susceptible to statistical treatment. It relies on people at all stages of a project or program meeting to identify what they consider to be the most significant changes within predefined areas or domains. Its strength lies in its ability to produce information-rich stories that can be analyzed for lesson learning. It also involves a transparent process for the generation of stories that shows why and how each story was chosen. It is designed around purposive sampling: sampling to find the most interesting or revealing stories.

NARS

The National Agricultural Research Systems (NARS) comprise mainly the national institutes of agricultural research, universities, training and extension services, users of agricultural products, and civil society organizations (NGOs, producer organizations and the private sector). This system is important in the promotion of the new paradigm of integrated agricultural research for development (IAR4D).
In general, the implementation of the technical programmes (Agro-biodiversity, Livestock and Fisheries, Staple Crops, High Value Non-staple Crops, Natural Resources Management and Biodiversity, etc.) must be realized through networking between members of the NARS of ASARECA. Research activities carried out within this framework are mainly funded on a competitive basis.
However, commissioned research could be carried out, as the case may be, by specialized centers in the sub-region. The knowledge management and upscaling programme is carried out through networking at the level of NARS and through competitive funds and commissioned research, as the case may be.

Nominal Scale
Refers to classifications that form no natural sequence. Numbers are sometimes assigned to characteristics for identification, but they have no mathematical value and cannot be used in mathematical functions.

Objective
A specific statement detailing the desired accomplishments of a project. It is specified in terms of desired changes in behaviors and practices as a result of training or services provided by a project. Examples: reduction of malnutrition, increase in income, improvement in the environment.

Objective Tree
A diagrammatic representation of the situation in the future once problems have been remedied, following a problem analysis and showing a means-to-ends relationship.

Objectively Verifiable Indicators (OVI)
Indicators of the different levels of objectives; they form the second column of the logical framework. OVIs provide the basis for designing an appropriate monitoring system.

Ordinal Scale
Refers to measurements using classifications with a natural sequence (lowest to highest), but with undefined intervals. The values are discontinuous.
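The nominal/ordinal distinction matters when data are coded for analysis. As a minimal sketch (in Python with pandas; the category labels are hypothetical), an ordinal variable carries an ordering that analysis tools can use, while a nominal one does not:

```python
import pandas as pd

# Nominal: labels with no natural sequence (codes identify, nothing more).
region = pd.Categorical(["north", "south", "east"], ordered=False)

# Ordinal: a natural sequence, but with undefined intervals between values.
adoption = pd.Categorical(
    ["low", "high", "medium"],
    categories=["low", "medium", "high"],
    ordered=True,
)

print(adoption.min())   # 'low' -- the ordering is meaningful
print(region.ordered)   # False -- min/max would be undefined here
```
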
Outcome Mapping
A methodology for planning and assessing development programming that is oriented towards change and social transformation. It provides a set of tools to design and gather information on outcomes, defined as behavioral changes, of the change process. It helps a project or program learn about its influence on the progression of change in its direct partners, and therefore helps those in the assessment process think more systematically and pragmatically about what they are doing, and to adaptively manage variations in strategies to bring about the desired outcomes. It puts people and learning at the centre of development and accepts unanticipated changes as potential for innovation. It consists of three phases:

1. Intentional design: the program or project frames its activities based on the changes it intends to help bring about, and its actions are purposely chosen so as to maximize the effectiveness of its contributions to development.
   - Vision Statement: describes why the program or project is engaged in development and provides an inspirational focus. In drafting the vision, project implementers must be visionary, establishing a vivid beacon to motivate staff and highlight the ultimate purpose of their day-to-day work.
   - Mission Statement: describes how the program or project intends to support the vision, by stating the areas in which the program or project will work toward the vision. It does not list all the planned activities. In developing the mission statement, project implementers should consider not only how the program will support the achievement of outcomes by its boundary partners, but also how it will keep itself effective, efficient and relevant.
   - Boundary Partners: individuals, groups or organizations with whom the project or program interacts directly and with whom the program can anticipate some opportunities for influence (e.g. NGOs, indigenous groups, churches, community leaders, regional administration, private sector, academic and research institutions, international institutions, etc.). They are assumed to control change since they operate within different logic and responsibility systems.
   - Outcome Challenge: once the boundary partners have been identified, an outcome challenge statement is developed for each of them. Outcomes are the effects of the program being there, with a focus on how the behavior, relationships, activities or actions of an individual, group or institution will change if the program or project is extremely successful. They are phrased in a way that emphasizes behavioral change.
   - Progress Markers: a graduated set of statements describing a progression of changed behaviors in the boundary partner that will lead to the outcome challenge (e.g. the changes you expect to see, like to see and love to see).
   - Strategy Maps: a combination of strategies or activities aimed at the boundary partner (outputs, new skills, support needs) and at the environment of the partner (rules of the game, information availability, networking, etc.).
   - Organizational Practices: practices that determine an organization's effectiveness, foster creativity and innovation, assist partners and maintain the organization's niche.

2. Outcome and performance monitoring: provides a framework for the ongoing monitoring of the program's or project's actions in support of the outcomes, and of the boundary partners' progress towards the achievement of outcomes. It is based largely on systematized self-assessment.
   - Monitoring Priorities: what (information), who (will collect and use it), when (it should be collected), how (it will be collected and used), etc. This information is then collected with the following three tools:
   - Outcome Journals: data collection tools for monitoring the progress of a boundary partner in achieving progress markers over time. An outcome journal describes the level of change as low, medium or high, and provides a place to record who among the boundary partners exhibited the change. It includes information explaining the reasons for the change, the people and circumstances that contributed to it, evidence of the change, a record of unanticipated change, and lessons for the program or project, all recorded in order to keep a running track of the context for future analysis or evaluation.
   - Strategy Journals: data collection tools for monitoring the strategies a program uses to encourage change in the boundary partner. Planning and management questions that project implementers might consider during monitoring meetings after completing the strategy journal include: What are we doing well and what should we continue doing? What are we doing okay or badly and what can we improve? What strategies or practices do we need to add? What strategies or practices do we need to give up (those that have produced no results, or require too much effort or too many resources relative to the results obtained)? How are/should we be responding to the changes in boundary partners' behavior? Who is responsible? What are the timelines? Has any issue come up that we need to evaluate in greater depth? What? When? Why? How?
   - Performance Journal: a data collection tool for monitoring how well the program or project is carrying out its organizational practices. It records data on how the program is operating as an organization and fulfilling its mission. A single performance journal is created for the program and filled out during the regular monitoring meetings.

3. Evaluation planning: it helps the program or project to identify evaluation priorities and develop an evaluation plan.
   - Evaluation Plan: a short description of the main elements of an evaluation study to be conducted. It identifies who will use the evaluation, how and when; what questions should be answered; what information is needed; who will collect this information, how and when; and how much it will cost.

Outcomes
Outcomes are the changes within the community or among the researchers that can be attributed, at least in part, to the research process. Outcomes result both from meeting research objectives (outputs) and from the participatory research process itself. They can be negative or positive, expected or unexpected, and encompass both the functional effects of participatory research (e.g. greater adoption and diffusion of new technologies, changed farming practices, changes in institutions or management regimes) and the empowering effects (e.g.
increased community capacity, improved confidence or self-esteem, and improved ability to resolve conflict or solve problems). The desired outcomes of participatory research in natural resource management projects, for example, generally involve social transformation; many are diffuse, long term, and notoriously difficult to measure or to attribute to a particular research project or activity.

Three types of outcomes, related to the logic model, are defined as:
1. Immediate Outcome: an outcome that is directly attributable to a policy, program or initiative's outputs. In terms of time frame and level, these are short-term outcomes, often at the level of an increase in awareness of a target population. Examples include: increase in awareness/skills of ..., access to ...
2. Intermediate Outcome: an outcome that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes, often at the level of a change of behavior among a target population. Examples include: increased adoption of crop varieties in Country X, increased area under a technology or management practice in ..., etc.
3. Final Outcome: the highest-level outcome that can be reasonably attributed to a policy, program or initiative in a causal manner, and is the consequence of one or more intermediate outcomes having been achieved. These outcomes usually represent the raison d'être of a policy, program or initiative. They are long-term outcomes that represent a change of state of a target population. Final outcomes of individual programs, policies or initiatives contribute to higher-level departmental Strategic Outcomes.

Outputs
Direct products and services stemming from the activities of an organization, policy, program or initiative, and usually within the control of the organization itself. These products and services are delivered to project participants, thus helping to achieve the intermediate changes that result from accessing and using inputs. Examples of outputs are:
- The research activities undertaken, as well as the tangible products of the research
- Information, such as a profile of a community, or documentation of indigenous knowledge of plant species or local management practices (organized in a report, for example)
- Products, such as new techniques or technologies developed through farmer experimentation, new management regimes for common resources, new community institutions and organizations, or community development plans
- Measures such as the number of people trained, the number of farmers involved in on-farm experiments, pamphlets produced, research studies conducted, and the number of reports or publications of the research
Evaluators will assess the quality of the outputs (e.g. What was the nature of the activities? Were all those interested in the project able to participate? Are the outputs useful? For whom?).

Overall Objective
It explains why the project is important to society, in terms of long-term benefits to final beneficiaries as well as wider benefits to other groups. It may also help to show how a programme fits into the regional/sectoral policies of the government/organization concerned and of the donor community.

Participatory Approach
It refers to the involvement of project participants in the design, monitoring and evaluation of a project. It is particularly suitable for process projects, but requires specific skills to be implemented and is more time-consuming than other approaches.
On the other hand, the use of a participatory approach increases beneficiaries' ownership and therefore the potential sustainability of project results.

Participatory Evaluation
The evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.

Participatory Process
One or more processes in which the key stakeholders take part in specific decision-making and action, and over which they may exercise specific controls. It is often used to refer specifically to processes in which primary stakeholders take an active part in planning and decision-making, implementation, learning and evaluation. This often has the intention of sharing control over the resources generated and responsibility for their future use.

Partners
Individuals and/or organizations with whom/which ASARECA works cooperatively to achieve mutually agreed upon objectives, outputs and outcomes, and to secure stakeholder participation. Partners include: universities, community-based organizations, farmer organizations, the private sector, CG Centres, governments, civil society, non-governmental organizations, professional and business associations, multilateral organizations, private companies, etc.

Performance
The degree to which a development intervention or a development partner operates according to specific criteria/standards/guidelines, or achieves results in accordance with stated goals or plans.

Performance Indicator
A particular characteristic or dimension used to measure intended changes defined by an organization's Logframe or results framework. Performance indicators are used to observe progress and to measure actual results compared to expected results. They help to answer whether a project is progressing toward its objective, rather than why/why not such progress is being made. They are usually expressed in quantifiable terms, and should be objective and measurable (numeric values, percentages, scores and indices). Quantitative indicators are preferred in most cases, although in certain circumstances qualitative indicators are appropriate.

Performance Measurement
A system for assessing the performance of development interventions against stated goals. It also refers to the collection and interpretation of, and reporting on, data for performance indicators which measure how well programs or projects deliver outputs and contribute to the achievement of goals.

Performance Monitoring
The continuous process of collecting and analyzing data to measure the performance of a program, project, process or activity against expected results. A defined set of indicators is constructed to regularly track the key aspects of performance. Performance reflects effectiveness in converting inputs into outputs, outcomes and impacts.

Performance Monitoring Framework (PMF)
A plan to systematically collect relevant data over the lifetime of an investment to assess and demonstrate progress made in achieving expected results. It documents the major elements of the monitoring system and ensures that performance information is collected on a regular basis. It contains information on expected results, indicators, baseline data, targets, data sources, data collection methods, frequency, and the responsibility for data collection. For example:
- Data Sources: the individuals, organizations or publications from which data about a performance indicator will be obtained. Identify the data sources for each performance indicator that has been selected, and focus on existing sources to maximize value from existing data {beneficiaries, partner organizations, government documents, tracking sheets, partner statistical reports, consultants, ASARECA staff}.
- Data Collection Methods: represent HOW data about performance indicators are collected {e.g. analysis of records or documents, literature review, survey, interviews, focus group, questionnaire, pre- and post-intervention survey, comparative study, collection of anecdotal evidence, observing participants, etc.}.
- Frequency: how often the information about each performance indicator will be collected. Some indicators may be looked at regularly as part of ongoing performance management, while others will only be collected periodically for baseline, mid-term or final evaluations.
- Responsibility: who is responsible for collecting and validating the data {e.g. beneficiaries, local professionals, partner organizations, consultants, ASARECA staff, etc.}.
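As a minimal, hypothetical sketch of how one PMF row could be captured for analysis (in Python; the field names simply mirror the elements above, and all values are invented for illustration):

```python
# One hypothetical PMF row; field names mirror the elements listed above.
pmf = [
    {
        "expected_result": "Increased adoption of improved bean varieties",
        "indicator": "% of targeted farmers planting improved varieties",
        "baseline": "12% (2010 survey)",
        "target": "40% by 2013",
        "data_source": "Partner statistical reports; household survey",
        "collection_method": "Pre- and post-intervention survey",
        "frequency": "Annual",
        "responsibility": "Partner organizations / M&E unit",
    },
]

for row in pmf:
    print(f"{row['indicator']}: baseline {row['baseline']}, target {row['target']}")
```
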
Performance Monitoring Plan (PMP)
A detailed plan for managing the collection of data in order to monitor performance. It identifies the indicators to be tracked; specifies the source, method of collection and schedule of collection for each piece of data required; and assigns responsibility for collection to a specific office, team or individual. It contributes to the effectiveness of the performance monitoring system by assuring that comparable data will be collected on a regular and timely basis. It is mainly used for:
- Planning to monitor the achievement of program implementation
- Collecting and analyzing performance information to track progress towards planned results and outcomes
- Using performance information to improve management decision making and resource allocation
- Communicating results achieved, or not attained, to advance organizational learning
It clearly spells out the desired results, performance indicators, baselines and targets, and plans for data collection, analysis, reporting and utilization.

Performance Monitoring System
An organized approach or process for systematically monitoring the performance of a program, project, process or activity toward its objectives over time. Performance monitoring systems at ASARECA consist of, inter alia: performance indicators; performance baselines; performance targets for all result areas; means for tracking critical assumptions; performance monitoring plans to assist in managing the data collection process; and the regular collection of actual results data.
Planning
A broad description of the activities that would normally be carried out as part of project development, from start to finish, and the milestones that would generally be achieved along the way, such as signing sub-grant agreements, capacity building details, etc. The plan should also explain the different aspects that need to be addressed as part of project development, and illustrate the basic principles that are to be followed. The sequence of, and relationship between, main activities and milestones should also be described.

Portfolio Review
A required systematic analysis of the progress of an Objective, Output or Outcome by the M&E units. It focuses on both operational and strategic issues, and examines the robustness of the underlying development hypotheses and the impact of activities on results. It is intended to bring together various expertise and points of view to arrive at a conclusion as to whether the program is on track, or whether new actions are needed to improve the chances of achieving results. At a minimum, a portfolio review must examine the following:
a) Progress towards achievement of Objectives, Outputs or Outcomes, and expectations regarding future results achievements.
b) Evidence that the outputs of activities are adequately supporting the relevant outcomes and ultimately contributing to the achievement of the purpose and goals.
c) Adequacy of inputs for producing activity outputs, and efficiency of the processes leading to outputs.
d) Status and timeliness of the input mobilization effort.
e) Status of critical assumptions and causal relationships defined in the results framework, along with the related implications for performance towards outcomes and goals.
f) Status of related partner efforts that contribute to the achievement of results.
g) Pipeline levels and future research requirements.

Power
The probability of detecting an impact if one has occurred. The power of a test is equal to 1 minus the probability of a type II error, and ranges from 0 to 1. Popular levels of power are 0.8 and 0.9. High levels of power are more conservative and decrease the likelihood of a type II error. An impact evaluation has high power if there is a low risk of not detecting real program impacts, i.e. of committing a type II error.

Power Calculations
Power calculations indicate the sample size required for an evaluation to detect a given minimum desired effect. They depend on parameters such as power (and hence the tolerated likelihood of a type II error), significance level, variance, and the intra-cluster correlation of the outcome of interest.
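As a minimal sketch of such a calculation (in Python with statsmodels; the effect size and other parameters are illustrative choices, not values prescribed by this glossary):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a standardized effect of 0.3
# at a 5% significance level with 80% power (two-sided, two-sample t-test).
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(round(n_per_group))  # about 175 units (e.g. households) per group
```
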
Pre-Conditions
Conditions that have to be met before the project can commence; they typically involve the existence of funds from the donor agency, or the approval of a specific policy/law by the government.

Pre-Planning
The process of understanding the status, condition, trends and key issues affecting people and communities, ecosystems and institutions in a given geographic context at any level (local, national, regional, international).

Problem Analysis
A structured investigation of the negative aspects of a situation in order to establish cause-effect relationships.

Problem Tree
A diagrammatic representation of a negative situation, showing cause-effect relationships. It is the visual result of a problem analysis.
Process-Based Evaluation
Process-based evaluations are aimed at understanding how a program or project works. They are helpful in obtaining early warning of operational difficulties in newly implemented programs or projects, and can also be conducted at regular intervals to check that operations remain on track and follow established procedures. They seek to answer the following key questions: What are the actual steps and activities involved in delivering a good or service? How close are they to the agreed operation? Is the operation efficient? A process-based evaluation tries to establish the level of quality or success of the processes of a program: for example, the adequacy of administrative processes, accessibility of program benefits, clarity of the information campaign, internal dynamics of implementing organizations, their policy instruments, their service delivery mechanisms, their management practices, and the linkages among these. Numerous questions might be asked in a process-based evaluation, including:
- Is the program being implemented according to design?
- Are operational procedures appropriate to ensure the timely delivery of quality products or services?
- What is the level of compliance with the Operations Manual?
- Are there adequate resources (money, equipment, facilities, training, etc.) to ensure the timely delivery of quality products or services?
- Are there adequate systems (human resources, financial, management information, etc.) in place to support program operations?
- Are program clients receiving quality products and services?
- What is the general process that project beneficiaries go through with the products or projects? Are project beneficiaries satisfied with the processes and services?
- Are there any operational bottlenecks?
- Is the program or project reaching the intended population?
- Are program or project outreach activities adequate to ensure the desired level of target population participation?

Program Evaluation
Evaluation of a set of interventions marshaled to attain specific global, regional, country or sector development objectives. Note: a development program is a time-bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas.

Project
An intervention that consists of a set of planned and interrelated activities and tasks designed to achieve defined objectives within a given budget and a specified period of time. Projects in international programs are to address specific needs identified by communities and families.

Project Evaluation
A technique to review the current status of a project against plan, and to provide practical, comprehensive and forward-looking recommendations for corrective action where necessary. In general terms, project evaluation should consider: 1) project objectives in terms of cost, time and quality; 2) management; 3) organization; 4) systems and procedures; 5) suitability of contracts; 6) performance of consultants; 7) work to date in terms of cost, time and quality measured against plan.

Project Goal
The project goal (purpose; long-term or development objective) refers to what the project is expected to achieve in terms of significant improvements in the lives of the target population beyond the life of the project. Examples: demonstration of community solidarity as expressed through engagement in a number of defined activities; increased household income and reduced malnutrition in children, leading to improved quality of life as demonstrated by improved nutrition of all members of the household.

Project or Program Objective
The intended physical, financial, institutional, social, environmental or other development results to which a project or program is expected to contribute.
Project Partner
The organization in the project country with which the Heifer country program collaborates to achieve mutually agreed upon objectives. The organization works closely with the beneficiary group/community and may handle financial and operational aspects of the project. Partners may include host country governments, local and international NGOs, universities, professional and business associations, private businesses, etc.

Project Purpose
The central objective of the project; it represents what the project is expected to achieve by the end of the project with the resources available. By achieving its purpose, the project contributes to the overall objective.

Project Strategy
An overall framework of what a project will achieve and how it will be implemented.

Propensity Score Matching (PSM)
PSM is used to measure a program's effect on project participants relative to non-participants with similar characteristics. To use this technique, evaluators must first collect baseline data. They must then identify observable characteristics that are likely to be linked to the evaluation question {for example: Do farmers living near the experimental plots have higher yields from their farms than those further away?}. The observable characteristics may include gender, age, marital status, distance from home to experimental sites, etc. Once the variables are selected, the treatment group and the comparison group can be constructed by matching each person in the treatment group with the person in the comparison group who is most similar on the identified observable characteristics. The result is pairs of individuals or households that are as similar to one another as possible, except on the treatment variable.
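A minimal sketch of the two matching steps, using simulated data in Python (a logistic regression to estimate propensity scores, then nearest-neighbor matching on those scores; variable names and data are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Simulated data: X holds observable characteristics (e.g. age, distance
# to experimental plots); treated is 1 for participants, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
treated = rng.integers(0, 2, size=200)

# Step 1: estimate each unit's propensity score from the observables.
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the comparison unit with the
# closest propensity score.
t_scores = scores[treated == 1].reshape(-1, 1)
c_scores = scores[treated == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(c_scores)
_, match_idx = nn.kneighbors(t_scores)  # indices into the comparison group
```
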
Proxy Indicator
An appropriate indicator that is used to represent a less easily measurable one. For example, the condition of the house is a proxy indicator for income.

Purpose
Refers to what the project is expected to achieve in terms of its development outcome. It relates only to the beneficiaries, a specific area and a timeframe. It also refers to the publicly stated objectives of the development program or project.

Qualitative Data
Qualitative data deal with descriptions. They are data that can be observed, or self-reported, but not necessarily precisely measured. They normally describe people's knowledge, attitudes or behaviors, and are not usually summarized in numerical form. Examples of qualitative data include: the leadership role of women in a community; minutes from community meetings; general notes from observations; etc.

Qualitative Methods
These belong to the social science tradition and are based on the observation of people in their own territory, and interaction with them in their own language, on their own terms. Qualitative methods emphasize understanding reality as the persons being studied construe it. Most qualitative studies rely on descriptive rather than numerical or statistical analysis.

Quality Assurance
Encompasses any activity that is concerned with assessing and improving the merit or worth of a development intervention, or its compliance with given standards. Examples include: appraisal, RBM, reviews during implementation, evaluations, etc. It may also refer to the assessment of the quality of a portfolio and its development effectiveness.
Quantitative Data
Quantitative data are data that can be precisely measured. They are measured or measurable by, or concerned with, quantity, and are expressed in numbers or quantities. Examples include data on age, cost, length, area, volume, weight, number, etc.

Quasi-Experimental Design
Impact evaluation designs which create a control group using statistical procedures. The intention is to ensure that the characteristics of the treatment and control groups are identical in all respects other than the intervention, as would be the case in an experimental design.

Rapid Appraisal
Methods first developed in agriculture and rural development (where they are known as Rapid Rural Appraisal, RRA) to provide rapid and cost-effective means of assessing the conditions of a community or area at the time a project was planned. Since then, they have been extended to provide a rapid method of impact assessment and are now being used in other sectors such as health and nutrition. They are based on qualitative methods such as observation and semi-structured interviews. Currently one speaks of Participatory Rapid Appraisal (PRA), which puts more focus on process and on the ownership of the participants.

Reach
Reach refers to who is influenced (by the research) and who acts because of this influence. Reach is closely related to the concept of equity. Participatory research is assumed to improve reach to disadvantaged groups and communities by including them in defining research priorities and capacity-building activities, and by mobilizing them to act in their own interests, rather than treating them as passive objects intended to benefit from the research results.

Recommendations
Proposals aimed at enhancing the effectiveness, quality or efficiency of a development intervention; at redesigning the objectives; and/or at the reallocation of resources. Recommendations should be linked to conclusions.

Regression
In statistics, regression analysis includes any technique for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. In impact evaluation, regression analysis helps us understand how the typical value of the outcome indicator (Y, the dependent variable) changes when the assignment to the treatment or comparison group (P, an independent variable) is varied, while the characteristics of the beneficiaries (the other independent variables) are held fixed.

Regression Discontinuity Design
Regression Discontinuity Design (RDD) is a non-experimental evaluation method. It is suitable for programs that use a continuous index to rank potential beneficiaries, and that have a threshold along the index determining whether potential beneficiaries receive the program or not. The cutoff threshold for program eligibility provides a dividing point between the treatment and comparison groups.
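A minimal sketch of the estimation logic, with simulated data in Python (statsmodels): units at or above a hypothetical cutoff on the eligibility index receive the program, and the coefficient on the treatment dummy estimates the impact at the threshold. The data, cutoff and effect size are all invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: 'score' is the continuous eligibility index; units at
# or above the cutoff receive the program; the true effect is 1.5.
rng = np.random.default_rng(1)
df = pd.DataFrame({"score": rng.uniform(0, 100, 500)})
cutoff = 50
df["treated"] = (df["score"] >= cutoff).astype(int)
df["outcome"] = 2 + 0.05 * df["score"] + 1.5 * df["treated"] + rng.normal(0, 1, 500)

# Fit a linear trend on each side of the cutoff; the 'treated'
# coefficient is the estimated jump (program impact) at the threshold.
df["centered"] = df["score"] - cutoff
model = smf.ols("outcome ~ treated + centered + treated:centered", data=df).fit()
print(model.params["treated"])  # close to the simulated effect of 1.5
```
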
Reliability
Consistency or dependability of data and evaluation judgments, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data. Note: evaluation information is reliable when repeated observations using similar instruments under similar conditions produce similar results.
Reporting
The systematic and timely provision of essential information at periodic intervals. A report is also an official record of a given period in the life of a project, presenting a summary of project implementation and performance. Progress reports are essential mechanisms for informing partners and donors of the progress, difficulties and problems encountered, and the lessons learned, during the implementation of project activities. Reports are designed to:
- Enable the assessment of progress in the implementation process and the achievement of results
- Focus activities and therefore improve subsequent workplans
- Facilitate the replenishment of funds by donors
ASARECA projects not only report on performance, but also show how they are contributing to the key results of global programs (CAADP, MDGs, etc.).

Results
The output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention. The term also refers to the changes that occur as an effect of a development intervention, implying that a change of behavior by individuals, groups of people, organizations, government bodies or society has taken place.

Results Chain
The causal sequence for a development intervention that stipulates the necessary sequence to achieve desired objectives, beginning with inputs, moving through activities and outputs, and culminating in outcomes, impacts and feedback. In some agencies, reach is part of the results chain. Developing a results chain is an iterative process: planning starts with a clear view of the project purpose and outcomes and works backwards to the inputs, and the project is then implemented from the inputs to the outcomes.

Results Framework
The results framework represents the development hypothesis, including the results necessary to achieve intended objectives, their causal relationships and underlying assumptions. The framework also establishes an organizing basis for measuring, analyzing and reporting ASARECA results. It is typically presented both in narrative form and as a graphical representation.

Results-Based Management (RBM)
A comprehensive, lifecycle approach to management that integrates strategy, people, resources, processes and measurements to improve decision-making and drive change. It is a shift from focusing on inputs and activities (the resources and procedures) to focusing on outputs, outcomes, impact and the need for sustainable benefits (the results of what you do). Broadly, RBM involves:
a) Identifying clear and measurable results, aided by logical frameworks and based on appropriate problem analyses;
b) Selecting indicators that will be used to measure progress towards each result;
c) Setting explicit targets for each indicator, used to judge performance;
d) Developing performance monitoring systems to regularly collect data on actual results;
e) Reviewing, analyzing and reporting actual results vis-à-vis the targets;
f) Integrating evaluations to provide complementary performance information not readily available from performance monitoring systems;
g) Using performance information for internal management accountability, learning and decision-making processes, and also for external performance reporting to stakeholders and partners.

Review
An assessment of the performance of an intervention, periodically or on an ad hoc basis. Note: the term evaluation is frequently used for a more comprehensive and/or more in-depth assessment than a review. Reviews tend to emphasize operational aspects. Sometimes the terms review and evaluation are used as synonyms.

Results Statement
Outlines what a policy, program or investment is expected to achieve or contribute to. It describes the change stemming from ASARECA's contribution to a development activity in cooperation with others {e.g. enhanced utilization of agricultural research and development innovations in ECA; increased generation and uptake of demand-driven, gender-responsive agricultural technologies and innovations; facilitated policy options for enhancing the performance of the agricultural sector in ECA; etc.}.

Risk Analysis
An analysis or assessment of factors (called assumptions in the logframe) that affect, or are likely to affect, the successful achievement of an intervention's objectives. It is a detailed examination of the potential unwanted and negative consequences to human life, health, property or the environment posed by development interventions; a systematic process to provide information regarding such undesirable consequences; and the process of quantification of the probabilities and expected impacts of identified risks.

Risk Register
A list of the most important risks, the results of their analysis and a summary of additional risk response strategies. This register should be continuously updated and reviewed throughout the project life. Useful risk terminology:
- Risk refers to the effect of uncertainty on results (ISO 31000)
- Impact is the effect of the risk on the achievement of results
- Likelihood is the perceived probability of occurrence of an event or circumstance
- Risk level is Impact multiplied by Likelihood
- Risk response is the plan to manage a risk (by avoiding, reducing, sharing, transferring or accepting it)
- Risk Owner is the person who owns the process of coordinating, mitigating and gathering information about the specific risk, as opposed to the person who enacts the controls. Stated otherwise, it is the person or entity with the accountability and authority to resolve a risk incident (ISO 31000)
- Operational Risk is the potential impact on ASARECA's ability to operate effectively or efficiently
- Financial Risk is the potential impact on the ability to properly protect the funds
- Development Risk is the potential impact on the ability to achieve expected development results
- Reputation Risk is the potential impact arising from a reduction in ASARECA's reputation and in stakeholder confidence in ASARECA's ability to fulfill its mandate
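Since risk level is defined above as impact multiplied by likelihood, a minimal register sketch (in Python, with entirely hypothetical risks scored on illustrative 1-5 scales) might look like this:

```python
# Hypothetical register entries scored on illustrative 1-5 scales.
risks = [
    {"risk": "Delayed donor disbursement", "impact": 4, "likelihood": 3,
     "response": "reduce", "owner": "Finance unit"},
    {"risk": "Partner staff turnover", "impact": 3, "likelihood": 2,
     "response": "accept", "owner": "Program manager"},
]

# Risk level = impact x likelihood, per the terminology above.
for r in risks:
    r["level"] = r["impact"] * r["likelihood"]

# Review the register with the highest-level risks first.
for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f"{r['risk']}: level {r['level']}, response: {r['response']}")
```
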

Sample
A number of people, households, communities or other units that have been selected to estimate the characteristics of the population from which the units were drawn. A sample is generally used when carrying out impact evaluations. Sampling methods include:
- Cluster sample: groups selected as blocks, communities, sectors or other definable areas. The use of cluster samples reduces the time and cost of data collection, although it might lead to less precise statistical estimates.
- Purposive sample: respondents are selected according to given characteristics that are particularly relevant for the study. It is a very economical way to obtain information, but caution has to be used in making generalizations.
- Random sample: each unit has an equal chance of being selected. Generalizations can therefore be made directly from the sample to the total population without introducing a bias.
- Stratified sample: reduces the number of interviews required to achieve a given level of statistical precision in the estimation of population attributes. The primary units (households, individuals, etc.) are classified into groups (strata) according to their characteristics, and a sample is extracted from each stratum.
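A minimal sketch of simple random versus stratified selection, using a simulated sampling frame in Python with pandas (group-wise sampling assumes pandas 1.1 or later; the frame and strata are hypothetical):

```python
import pandas as pd

# Simulated sampling frame of households; 'district' defines the strata.
frame = pd.DataFrame({
    "household_id": range(1000),
    "district": ["A"] * 400 + ["B"] * 350 + ["C"] * 250,
})

# Simple random sample: every unit has an equal chance of selection.
random_sample = frame.sample(n=100, random_state=42)

# Stratified sample: draw 10% within each district, so every stratum
# is represented in proportion to its size.
stratified_sample = frame.groupby("district").sample(frac=0.10, random_state=42)
print(stratified_sample["district"].value_counts())  # A: 40, B: 35, C: 25
```
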
Sampling Frame
The most comprehensive list of units from the population of interest (the universe) that can be obtained. Differences between the sampling frame and the population of interest create a coverage (sampling) bias. In the presence of coverage bias, results from the sample do not have external validity for the entire population of interest.

Selection Bias
Selection bias occurs when the reasons for which an individual participates in a program are correlated with outcomes. This bias commonly occurs when the comparison group is ineligible for, or self-selects out of, treatment.

Significance Level
The significance level is usually denoted by the Greek letter α (alpha). Popular levels of significance are 5% (0.05), 1% (0.01) and 0.1% (0.001). If a test of significance gives a p-value lower than the α level, the null hypothesis is rejected. Such results are informally referred to as statistically significant. The lower the significance level, the stronger the evidence required. Choosing the level of significance is an arbitrary task, but for many applications a level of 5% is chosen for no better reason than that it is convenient.

Situation Definition or Needs Assessment or Feasibility Study
A process or set of processes of gathering information, analyzing it, and then making a judgment on the basis of that information about the current scenario of the community or theme that requires participation/intervention from ASARECA. It is a good snapshot of the current situation, necessary to design a project.

Sources of Verification
They represent the third column of the Logframe matrix and indicate where, and in what form, information on the achievement of the overall objective, the project purpose and the results can be found.

Spillover Effects
Also known as contamination of the comparison group. A spillover effect occurs when the comparison group is affected by the treatment administered to the treatment group, even though the treatment is not administered directly to the comparison group. If the spillover effect on the comparison group is negative (i.e. if it suffers because of the project), then the straight difference between outcomes in the treatment and comparison groups will yield an overestimate of the program impact. By contrast, if the spillover effect on the comparison group is positive (i.e. it benefits), then the difference will yield an underestimate of the project impact.

Stakeholders
Agencies, organizations, groups or individuals who have a direct or indirect stake or commitment in the programme or project design, implementation or benefits, or in its evaluation.

Stakeholders Analysis
Analysis that involves: the identification of all stakeholder groups likely to be affected by the proposed intervention; and the identification and analysis of their interests, problems, potentials, etc. The conclusions of this analysis are then integrated into the project design.

Statistical Power
The power of a statistical test is the probability that the test will reject the null hypothesis when the alternative hypothesis is true (i.e. that it will not make a type II error). As power increases, the chances of a type II error decrease. The probability of a type II error is referred to as the false negative rate (β). Power is therefore equal to 1 − β.

Strategic Outcome
A long-term and enduring benefit to ASARECA and its partners that stems from its mandate, vision and efforts. It represents the difference ASARECA wants to make for its stakeholders, and should be a clear, measurable outcome that is within its sphere of influence.

Summative Evaluation
A study conducted at the end of an intervention (or a phase of that intervention) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program. Related term: impact evaluation. Summative evaluation can be subdivided:
- Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
- Impact evaluation is broader and assesses the overall or net effects (intended or unintended) of the program or technology as a whole
- Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
- Secondary analysis re-examines existing data to address new questions or use methods not previously employed
- Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question

SWOT Analysis
Analysis of an organization's Strengths and Weaknesses, and the Opportunities and Threats that it faces. It is a tool that can be used during all phases of the project cycle.

Target
A measurable performance or success level that an organization, program or initiative plans to achieve within a specified time period. Targets can be either quantitative or qualitative and are appropriate for both outputs and outcomes (e.g. 70% of targeted farmers will use identified technologies in 2013). Each organization should therefore establish realistic targets for each performance indicator in relation to the baseline data already identified. This sets the expectations for performance over a fixed period of time. Targets belong only in the PMP and should not appear in outcome statements themselves. When targets are included in a result statement, they limit the ability to report against the achievement of that result by restricting success to an overly narrow window (i.e. the target itself). Reporting, in this context, becomes an exercise of justifying why the target was not met or was exceeded, instead of comparing expected outcomes to actual outcomes and discussing variance.

Target Group
The specific group for whom the intervention is planned and undertaken.

Technology (Agricultural)
Agricultural technology refers to all aspects of technology in agricultural production, processing, distribution, storage and exchange.

Terms of Reference {Scope of Work or Evaluation Mandate}
A structured document explaining the purpose of, and guiding principles for, an evaluation exercise. It specifies the methods to be used, the standards against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and the reporting requirements. It should answer the following questions:
- Why is the exercise necessary? Why now?
- What will be covered and what will not be covered?
- How will the findings and learning be used?
- Who should be involved?
- What questions should be asked?
- What methodologies should be used for data collection and analysis?
- How much time is available for planning, implementing and reporting?
- What resources are needed/available?
- How should the findings and recommendations be presented and disseminated?
- How will the results be used? How will learning from the evaluation be implemented?

It is always a good idea to include the following with terms of reference:


- Conceptual framework of the project or project logical framework
- Budget details
- Map of project sites
- List of projects/sites to be visited
- Evaluation mission schedule
- List of people to be interviewed
- Project statistics, documents and reports already available

Thematic Evaluation
Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions and sectors.

Time Series Analysis
Quasi-experimental designs that rely on relatively long series of repeated measurements of the outcome or output variables, taken before, during and after an intervention, in order to reach conclusions about the effect of the intervention.

Triangulation
The use of three or more theories, sources or types of information, or types of analysis, to verify and substantiate an assessment. Note: by combining multiple data sources, methods, analyses or theories, evaluators seek to overcome the bias that comes from single-informant, single-method, single-observer or single-theory studies.

Type I Error
An error committed when rejecting a null hypothesis even though the null hypothesis actually holds. In the context of an impact evaluation, a type I error is made when an evaluation concludes that a program has had an impact (i.e. the null hypothesis of no impact is rejected) even though in reality the program had no impact (i.e. the null hypothesis holds). The significance level determines the probability of committing a type I error.

Type II Error
An error committed when accepting (not rejecting) the null hypothesis even though the null hypothesis does not hold. In the context of an impact evaluation, a type II error is made when concluding that a program has had no impact (i.e. the null hypothesis of no impact is not rejected) even though the program did have an impact (i.e. the null hypothesis does not hold). The probability of committing a type II error is 1 minus the power level.
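The relationships among the significance level, the two error types and statistical power can be summarized compactly in standard notation, consistent with the definitions above:

```latex
\[
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true})
  \quad \text{(Type I error rate, the significance level)}
\]
\[
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})
  \quad \text{(Type II error rate)}, \qquad \text{Power} = 1 - \beta
\]
```
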
Unit of Analysis
The level at which analysis is done. For example, at the:
- Level of individual men or individual women
- Community or regional level
- Level of individual agricultural fields, household fields, or a village's communal land

Validity
The extent to which the data collection strategies and instruments measure what they purport to measure.

Value Chain (Agricultural)
The full range of activities that are required to bring a product (goods and services) through the different phases of production, delivery to final consumers, and final disposal after use. It includes design, production, marketing, distribution and support to get the product to the final consumer. It incorporates a range of activities within each phase, including both input supply and output marketing systems. The activities that comprise a value chain can be contained within a single firm or spread across many firms.

Variable
In statistical terminology, a variable is a symbol that stands for a value that may vary.

Vertical Logic
Designates the causal relationships between each level of a narrative summary (inputs to activities, activities to results, results to purpose, purpose to overall objective) and the critical assumptions affecting these linkages.

Work Plan
Contains a detailed list of activities to be performed to reach the project objectives, with clearly defined timeframes, responsibilities and resources. Financial allocations can also be made against the activities defined in the work plan.


References
1. CIDA, 2008: Results-Based Management Policy Statement.
2. DAC Network on Development Evaluation, 2010: Glossary of Key Terms in Evaluation and Results Based Management.
3. Dal Poz, M.R. et al. (eds.), 2009: Handbook on Monitoring and Evaluation of Human Resources for Health: With Special Applications for Low- and Middle-Income Countries.
4. IFPRI Discussion Paper 00732.
5. RBM Tools at CIDA: How-to Guide.
6. Spielman, D.J., 2008: How Innovative Is Your Agriculture? Using Innovation Indicators and Benchmarks to Strengthen National Agricultural Innovation Systems.
7. USAID: ADS 203.3.3.
