
DATA SCIENCE AND DESIGN THINKING FOR BETTER DEVELOPMENT OUTCOMES1

Copyright: Mast Measurement and Management. August 2022
Walid Madhoun walid@triplem.io
Johannes Wheeldon jwheeldon@gmail.com

Abstract
This paper is a proposal to rethink the results framework of development organizations. It proposes an integrative approach to development within the context of Design Thinking and the use of data science, and presents the potential for data science and design thinking to transform evaluation, including data collection, extraction, management, integration, analysis, interpretation, and reporting. Projects serve as the main delivery mechanism by which development mandates are achieved. The proposed integrative framework is more favorable than current approaches because it allows for adjustments at the project level. The proposal seeks to reframe the results framework by better defining levels of outcomes and substantiating (quantifying) the contribution of individual projects to overall institutional goals. While presenting concrete suggestions, this proposal is the foundation for further interdisciplinary study; the subject requires detailed design, testing of the feasibility of its assumptions, and piloting of the mechanisms proposed herein.

1. Introduction
International development organizations, whether bilateral or multilateral, employ relatively common approaches and tools to design, manage, and assess the effectiveness of their development interventions. These include the results framework, social and environmental safeguards, and other performance measurement instruments. Development organizations are constantly improving their processes and tools, thus giving rise to innovative variations in management and operations. The recent emergence of data science is an example of such innovation. As a promising tool to help better understand the contribution of the development sector, data science will reduce the costs of data collection and analysis, thus freeing professionals to focus on design, creative solutions, and more efficient implementation. Additionally, data science applications can be used to better explain what happened in projects, why it happened, who benefited, and who did not. There are different definitions of data science, some broad, some narrow. Data science is defined broadly for this article: it includes both data management and analysis, so our definition covers data collection, extraction, transformation, management, integration, analysis, interpretation, and reporting.

1 We would like to acknowledge and express gratitude for the valuable insights and advice provided by Mr. Michael Wodzicki
(Development Consultant), Mr. Andrej Hudoklin (COO at ADD Business Solutions), and Mr. Marko Skufca (Business Solutions
Director at ADD Business Solutions).


DEFINING DATA SCIENCE

Data science in this article includes data management and analysis, covering the process from collection to reporting.

1. Big Data: massive data collected through various transactions, whether from a single source or multiple sources.
2. Data Warehouse: a derivative of big data, but structured, transformed, and prepared for analysis.
3. Data Science Applications: Artificial Intelligence and Machine Learning are two common examples of advanced applications.

The whole process from collection to analysis and reporting relies on the availability, ability, and capacity to collect high-quality data.

In a recent article, Hejnowicz & Chaplowe (2021) note:

…the transformative potential and joint benefits of data science for development
evaluation remain clear: improved modeling capabilities, better service delivery
through enhanced targeting and granularity, broadly reduced levels of bias,
efficiency gains via cost reductions, and, just as important, the ability to have (near)
real-time information …

While there are interesting examples of data science applications in some projects and
operations, there is not yet a systemic (nor systematic) application of the various data science
techniques, tools, and instruments in development organizations. One challenge is that data
science was not designed for the development context where projects are underpinned by the
theory of change predicated on cultural, social, economic, and political factors. Development
operates in a non-binary context. Data science cannot read nuance nor capture the unique
socio-cultural drivers of persons and communities. Big data is more suitable for transactional
analysis, whereas the development sector seeks to understand the reason behind occurrences.
This can be achieved with applications using artificial intelligence and machine learning; however, these are not without constraints in the development sector. Artificial intelligence, and machine learning in particular, can misidentify targets and perpetuate rather than challenge inherent biases and social exclusion (BBC, 2020). Machine learning also needs a sufficiently large “learning data set” to train the algorithm, and in cases of innovative projects, there may be less data than is required. As Joshua Blumenstock, co-director of the Center for Effective Global Action (CEGA) at UC Berkeley, warns, it is important to avoid the “silver bullet fallacy” and not fall victim to the allure of data science as the answer to all questions. There are also legitimate worries that using Big Data in development will exacerbate the tendency to entrench Western values and ethics into development (Picciotto, 2020). These critiques, while important, do not suggest that data science should be overlooked by development professionals. Instead, data science requires judicious application based on learning that emerges from carefully defining the goals, applications, and utility within the development context.

Currently, data science is being used as part of global development research and somewhat rarely within some projects. There is a gap, however, between these two points on the spectrum that raises the question of how to use data science at the institutional level. When data science is used as part of global development research, it is applied haphazardly and tends to present applications in single operations or projects. New approaches are still emerging (ADB, 2020; DIME, 2020; OLC, 2020). More work is needed to properly situate and develop frameworks that can be tested in the
development context. While there is a critical mass of blog posts and short articles hyping the
merits of merging data science with the processes of development organizations, most of these
articles explore applications in the context of single operations or projects. However, none offer
meaningful insights on which application is most useful or how best to deploy the combination
of data science applications to serve broad institutional purposes. There is a need to cut
through the fog enveloping the space between development research and the discrete
application of data science. This requires exploring practical ways of integrating into the
development sector the tools made widely accessible by advances in Information and
Communications Technology (ICT).

There are two ways to fill the gap. First, the obvious one is to introduce data science
applications for collection and analysis at the institutional level through development research;
however, this is too broad and often takes place outside the development organization at
research centers like CEGA or inside the organization through specialized departments looking
at macro-level project information that relies on self-administered end-of-project assessments.2
The second option relates to how project-level data combined with findings of development
research can be translated through data science applications to improve institutional strategies
and performance. The second option requires adjustments to some of the existing processes
and tools used by the development organizations in delivering projects. To that end, this paper
proposes a methodology for integrating data science applications into the development sector
at the institutional level; it explores how data science can help absorb and translate the vast
amount of project and research knowledge to improve aid effectiveness and measure the
contribution of the institution to the development goals. To achieve this, the paper proposes
that there is a need to reframe the performance measurement framework to include new
elements that better connect the projects to the institutional objectives.

The Canadian government’s results framework offers a workable model for this reframing. The reframing is not merely a change in procedure but a shift from measuring project performance as stand-alone initiatives to acknowledging the practical ways that each project fits within a complex of interventions. Adopting and implementing modern data science technologies is not merely a technical challenge of fitting the new with the existing processes but, in large part, a cultural challenge as well. The use of Design Thinking in the project and institutional cycles will address this cultural change by helping to create an environment that focuses attention on the end user and promotes responsible data-driven innovation. To do so, however, these approaches must be theoretically informed and linked within the development literature.

2 For example, see the World Bank Implementation Completion Reports and the Independent Evaluation Group’s (IEG) Implementation Completion Report (ICR) Reviews. Other agencies follow a similar process under different names.


2. Literature Review
The complexity of international development extends to efforts to evaluate the provision of
assistance (Wheeldon, 2012: 79-83). Two central problems emerge from how the purpose of
any evaluation is defined. From one view, evaluation should be designed to measure outcomes
to provide reliable information for future programs and policies. Indeed, the increase in calls for evaluation, monitoring, and reporting has primarily been led by pressure from international organizations such as the Organisation for Economic Co-operation and Development, the World Bank, and other international agencies (Leeuw & Furubo, 2008). This
is linked to the first challenge. Critics observe that evaluation often exists merely to serve the
goals of international aid and the politics of the development agenda. As Carden (2013: 577)
reminds us, evaluation is “…borne out of the need of funding agencies.” Critics note that
evaluation, framed in this way, reflects the judgment and values of donor agencies and sponsoring countries and is thus neither useful nor responsive to in-country policymakers and other stakeholders (Ofir et al., 2013).

Moreover, in the past, evaluations failed to assess project inputs, outputs, and impacts in any meaningful way. In one of the most extensive reviews of its kind, Bollen and colleagues (2005) reviewed twenty-five evaluations completed over thirty years on USAID-funded projects, on topics ranging from the rule of law to civil society, governance, and general democracy in Africa, Asia, the Middle East, Europe and Eurasia, and Latin America and the Caribbean. They concluded that evaluations lacked methodological rigor, were missing
important information, and focused on immediate outcomes. In fact, “…single-group, posttest-
only design was the usual evaluation design, making it extremely difficult to attribute effects to
USAID interventions” (Bollen et al., 2005: 199). Significant issues remain. These have been
summarized as concerns about data quality, problems with identifying and operationalizing
variables, use of inappropriate research designs and associated methodologies, and the failure
to address social exclusion, researcher bias, and epistemological concerns about the nature of
evaluation itself (York & Bamberger, 2020).

A more recent approach to evaluation seeks to build on the evolving methodology of participatory action research (PAR). PAR has been defined as a means of engaging in research with people as opposed to research on people or for people (McIntyre, 2007: 16). In the
development context, this approach seeks to combine the local knowledge of those being
supported to engage with development-funded initiatives with the understanding of those
involved with development agencies and the broader developmental context (Wheeldon,
2012). This is an increasingly popular approach (Chouinard & Hopson, 2016: 255). However,
despite many “participatory” evaluations, these models often serve as performative efforts that
conceal Western assumptions about what counts in any evaluation and why (Cornwall &
Jewkes, 1995). For example, terms such as “participatory,” “ownership,” and “local relevance” can be used to make development assistance appear less prescriptive and more socially constructive while avoiding more meaningful reforms (Wheeldon, 2012). Those
more consequential efforts would carefully define culturally relevant evaluation criteria and
rethink both evaluators' roles and the purpose of evaluation itself. Moreover, it would require
bridging the gap between the concept of evaluation as a technocratic, accountability-based mechanism, on the one hand, with the aspiration, on the other, that evaluation can serve to empower communities to define development in ways that are meaningful to them (Chouinard & Hopson, 2016: 264).

Shifting the culture of evaluation is a longer-term endeavor. One approach might involve
collecting different kinds of data from a diverse set of development participants. For example,
development agencies might use big data, spatial disaggregation, and timeliness to pursue
evidence-based policymaking by monitoring outcomes and adjusting actions through an
iterative feedback loop that can assist development initiatives on the ground (Cohen & Kharas,
2018). Those engaging in on-the-ground activities might apply design thinking to ensure
participatory evaluative techniques are co-produced in meaningful and sustainable ways
(Hoolohan & Browne, 2020). Of immediate interest is how to update current practices in
performance measurement to ensure development agencies can begin to gather the kind of
data upon which more useful evaluations can proceed.

3. Institutional Cycle as Design Thinking


All development organizations employ the Institutional Cycle to define the processes of their
intervention. We make a distinction between Institutional Cycle and the project cycle. While
the project cycle refers exclusively to the components of the project, the Institutional Cycle is
far more complex. It refers to the institutional process of delivering its mandate primarily
through projects. The Institutional Cycle involves the development of the country strategy, the
consultative and advisory process, the individual project development and implementation, the
post-project set of evaluations ranging from summative to impact to meta-studies, and all the
other operations of the institution. Each institution has its own take on the Institutional Cycle,
but they are generally similar and can fit into the generic Institutional Cycle shown in Figure 1.


Figure 1: Generic Institutional Development Cycle

The Institutional Development Cycle has some similarities to Design Thinking, a business methodology growing in use and popularity, from which emerged the practice of User-Centered Design. This section will introduce the Design Thinking methodology and demonstrate its similarities to the development sector’s Institutional Cycle.

Design Thinking is important to this discussion because it provides a simple language with which to explore complex ideas. Design Thinking begins from the perspective of the user rather than the lens of organizational imperative. Importantly, the design thinking cycle and methodology can place the function of Monitoring and Evaluation along the whole cycle rather than as a discrete function that measures mostly outputs during implementation and effect only at the end. This has huge implications for how project results data is used for institutional strategies and how projects are designed, recalling that projects are the primary delivery mechanism of development. Design Thinking is a suitable environment through which to explore the application of data science within an institutional setting. Later in this section, Table 1 demonstrates the utility of perceiving the Institutional Cycle through the design thinking lens.

The development enterprise aims to improve the lives of people, whether by reducing poverty, improving governance, increasing economic mobility, etc. At the heart of the development enterprise is the end user: the citizen. Design Thinking provides the ideal environment in which to design and deliver development products and services. Additionally, Design Thinking requires data to help understand needs and learn from past attempts. It is this convergence that makes Design Thinking an ideal design and execution process for development organizations.


Design Thinking is the union of a variety of disciplines that brings together human, technological, and strategic needs. The history of this convergence dates back to the 1960s when scholars
attempted to apply scientific methodology to understanding the design process in solving what
Horst Rittel, a Design Theorist, coined as “wicked problems.” In the 1970s, Nobel Prize laureate
Herbert Simon became the grandfather of Design Thinking with his ideas of rapid prototyping
and testing through observation. Interestingly, his work focused on synthesizing human forms
of thinking in artificial intelligence. In 1987, Peter Rowe, while serving as Director of Urban
Design Programs at Harvard, published his book Design Thinking, not only coining the phrase
but also giving structure to the practice. In 1991, IDEO popularized Design Thinking by
developing easy-to-understand vernacular, logical, intuitive steps, and user-friendly toolkits
(Dam & Siang, 2020). Today Design Thinking is taught at prestigious design schools and is
accepted as an effective method to design products, services, and systems that improve the
lives of users and solve the “wicked problems” of our world. Design Thinking is still evolving as a
multidisciplinary practice as more sectors adopt it and, in some cases, recognize it in their
existing processes.

Design Thinking draws from what is desirable to the user, what is feasible (technologically, physically, legally, etc.), and what is economically viable (Figure 2). The combination of the three factors in the right measure is the goal that drives Design Thinking. The three factors are expanded into a process that is typically expressed in a linear manner but more recently has been presented as a cycle. The process can be recognized in many industries as it reinforces traits common to all: innovation, problem-solving, leadership, and creativity. These traits are not the objective of Design Thinking but the means to achieve its human-centered goal, which is to fully understand the needs of the final users to produce more relevant and useful products, services, and internal processes.

Figure 2: The three factors. Source: IDEO.COM

Figure 3: Design Thinking Process. Source: Stanford University Design School


The Design Thinking process was originally expressed by the Stanford University Design School,
as shown in Figure 3. However, though the definitions of the steps did not change, the
representation of the Design Thinking process later evolved to a cyclical model with the
addition of the “implement” step and was organized under three broad headings of
Understand, Explore, and Materialize as shown in Figure 4.

Figure 4: Design Thinking Cycle. Adapted from Gibbons 2016

When the development Institutional Cycle is placed alongside the design thinking cycle, we can
see that they match closely. The similarities do not stop at the definitions of the stages. Both
are highly iterative, both require open-minded approaches and patience to execute, and both
require collaboration and consultation with stakeholders. Importantly, to make useful decisions, design interventions, and measure performance, both rely on and visualize data to describe, illustrate, and predict change.


Table 1: Design Thinking and Institutional Cycle

Design Thinking3 | Development Institutional Cycle

Understand (Design Thinking) / Country Strategy (Institutional Cycle)

Design Thinking, Empathize (Research): Conduct research to understand the needs, context, history, long-term vision, and any other condition impacting the user. How, why, when, and with whom are all questions we need to ask to understand the needs of the user.
Institutional Cycle, Research: A typical development organization has an office and staff in countries of operations, with a team devoted to understanding the needs, context, history, long-term vision, and any other condition impacting the country of the eventual user of the development service.

Design Thinking, Define: Using the research to understand where the users’ problems exist and to pinpoint their needs. Ideas and opportunities for innovation begin to emerge here.
Institutional Cycle, Define: Using the research above, a typical development organization defines the sectors and themes based on its understanding of the specific needs and begins to imagine the type of interventions that may be appropriate.

Explore / Explore

Design Thinking, Ideate: This is the step where creative ideas to address the unmet user needs are identified in total freedom from conventional approaches. It is important here not to throw away seemingly farfetched ideas: there will be an opportunity to cull or refine them.
Institutional Cycle, Ideate: While the development sector may not give a name to this, it still takes place. Project ideas based on the previous steps begin to take shape in team meetings, scoping visits, and other consultations, such as policy advice, leading to the identification of project ideas.

Design Thinking, Prototype: The goal of this phase is to understand what might work and what simply will not. This is where feasibility and viability should be examined.
Institutional Cycle, Prototype: The project ideas are subject to feasibility and viability assessments through internal analyses and discussions with the recipient country. Some MDOs might support small-scale test projects and consultations alongside these projects to better understand the issues, constraints, and opportunities. At least one project idea emerges as most likely to succeed; typically, a concept paper is produced.

Materialize / Materialize

Design Thinking, Test: Consultation with users on 1-3 chosen prototypes will help establish which one(s) meet users’ needs. Testing will also help to confirm or dispel assumptions and conclusions on viability and feasibility.
Institutional Cycle, Test: The name of this step is misleading. In a development setting, this is where a team from the development organization tests the assumptions in its concept paper on the ground and develops the full project in cooperation and consultation with the team of the recipient country.

Design Thinking, Implement: Once a solution is chosen, it is implemented. This step is often misunderstood as an endpoint. Rather, it includes design corrections and a collection of lessons for future initiatives.
Institutional Cycle, Implement: Following the requisite approvals and signatures on agreements, the project begins implementation, complete with the full range of Monitoring and Evaluation (including formative and summative), design corrections, and collection of lessons for future initiatives.

3 Adapted from Gibbons, 2016.


Graphic representation of the two cycles suggests a sequential approach despite the repeated
claims of a cycle’s iterative nature: visually, we still perceive the consecutive steps. In practice,
however, the steps are overlapping at times and concurrent at others, as demonstrated in the
revised combined cycle shown in Figure 5.

Figure 5: Combined Design Thinking project life cycle, shown in actual application

Research is foundational, whether through events, actual investigation of a sector or theme, national or regional development strategies, a set of projects sharing similar qualities, etc. Research contributes to long-term and mid-term strategies and is evident in products such as country strategies or even more macro ones, such as the UNDP’s Human Development Report.

The next four steps draw from research and occur in an iterative manner, even though each has
a specific objective. They collectively lead to the development of projects targeting specific
reform topics; in some cases, they involve policy advisory engagements that produce policy
options or specific reform agendas and may eventually contribute to the development of one or
more projects - recalling that projects are the main delivery vehicle of the development sector.
The step labeled “implement” is the culmination of all its predecessors and refers to projects.
Projects are drawn from and contribute to all previous steps, including research. Projects merge
with research as the project enters its closing stages; it is here where monitoring and formative evaluation data are used to develop summative, impact, and meta evaluations in addition to
broader development research topics – which in turn help define more interventions.
4. The data scientist and the development professional
While Section 3 showed how Design Thinking can create an ideal environment for the design and execution of projects and institutional strategies, and how data science can support the cycle, this section addresses the capacity needs of the development professional in relation to data science. It is important to clarify the role of the development professional to avoid raising
capacity concerns among the staff and managers and to reduce potential resistance to change
caused by these concerns.

A successful data scientist needs a profound understanding of statistics and algorithms and a strong background in programming. The data science process involves collecting the data, cleaning it, undertaking exploratory analysis, building the analytic model, and finally deploying the model to produce facts about the past or predictive information about the future (Figure 6). This is where the work of the data scientist stops.

Figure 6: Data Science Process, drawn by Chanin Nantasenamat and Ken Jee
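To make those stages concrete, here is a minimal sketch of the five-step process in Python with pandas and scikit-learn. The file name, column names, and model choice are hypothetical illustrations, not a prescribed workflow.

```python
# A minimal sketch of the five stages described above: collect, clean,
# explore, model, deploy. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Collect: load raw survey records (hypothetical file).
raw = pd.read_csv("beneficiary_survey.csv")

# 2. Clean: drop incomplete rows and encode a categorical field.
clean = raw.dropna(subset=["age", "income", "used_service"])
clean["used_service"] = clean["used_service"].map({"yes": 1, "no": 0})

# 3. Explore: a quick look at distributions guides the model choice.
print(clean[["age", "income"]].describe())

# 4. Model: fit a simple classifier predicting service uptake.
X_train, X_test, y_train, y_test = train_test_split(
    clean[["age", "income"]], clean["used_service"], random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# 5. Deploy: in practice the fitted model would be served behind an API
# or a dashboard; here we simply report held-out accuracy.
print("held-out accuracy:", model.score(X_test, y_test))
```

Everything after this point, interpreting what the prediction means in a given country and sector, is where the development professional takes over.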
The data scientist may be able to address biases through programming, but she cannot remove
biases from the data science machinery altogether. There are numerous examples of problems
with data collection and biased predictions by machines, a notable one being the bias in facial
recognition software. Krittika D'Silva, a University of Cambridge computer-science researcher,
confirms that “numerous studies have shown that machine-learning algorithms, in particular
facial-recognition software, have racial, gendered, and age biases.” (BBC, 2020)

Data biases exist across the data science spectrum. Dr. Alexandra Olteanu, a post-doctoral researcher at Microsoft, used “fairness” as an example to show how machines fail at dealing with complex human issues. Asking an algorithm to determine individual fairness, where similar individuals are treated similarly, is an impossibly complex problem: how do we determine
individuals are treated similarly is an impossibly complex problem: how do we determine
similarities of individuals? Do we look at physical traits? Economic condition? Education level? A
combination of these? How about temporal factors? How do we weigh these factors? What
mathematical function can capture the wholeness of a person? (Olteanu, 2019) Machines know
only what we show them—they do not possess human intuition and curiosity, at least not yet.
Where the machine fails to answer these questions, the development professional can provide
some insights.


The development professional, as a user of the data science output, needs only a familiarity with statistics, algorithms, and ICT. However, they do need: (i) an understanding of the targeted sector and experience in its operations, processes, and nuances; (ii) an ability to formulate useful questions; (iii) an analytic capability that transcends statements of fact to reach for insights and applies a host of factors to machine-generated predictions; (iv) to be a natural integrative thinker4 (Martin, 2007); (v) an understanding of the historical and current political nuances; and (vi) the ability to communicate clearly and convincingly.

Once the data scientist produces the result from the machine, the development professional
must interpret the analyzed data to explain patterns, understand the nature of the predictions,
address biases, and consider acceptability of risks among other types of subjective analysis. In
the development sector, this subjective analysis may be infused by political, social, historical,
and/or cultural factors that cannot be measured by an algorithm. Even though these reviews
are by their nature subjective, the results of such interpretations can be validated through peer
review and a broad stakeholder consultative process.

The development professional need not become a data scientist, nor does the data scientist need to become a development professional. Rather, they complement each other to improve their joint output. There is nonetheless a need to provide introductory training to both groups to ensure that they are familiar with each other’s language and basic concepts and that they exchange knowledge on a regular basis. It is the development professional who is responsible for the collection of quality data; it is then passed to the data scientist, who processes it and sends it back to the development professional for interpretation and insights. Their work cannot be isolated in discrete tasks; it must be integrated.

5. Data science and the development organization


The purpose of this paper is to explore whether and how data science applications can be used in the development Institutional Cycle. The main thrust of the paper is that the organization should create an environment where its different operational units can use whatever applications of data science are relevant to improve performance at the functional level. The organization should also be prepared to absorb data from a variety of internal and external sources and convert it into higher-level insights that explain its progress towards its objectives. What does this look like? How might it work? These are some of the questions answered in this section.

5.1 Current application of data science in support of development organizations


Within the functional departments of development organizations there are already varied and
successful uses of data science, especially at the level of development research.

4 Integrative Thinking is a discipline proposed by Roger Martin, former dean of the Rotman School of Management at the University of Toronto, wherein the integrative thinker can create new innovative solutions from two or more seemingly contradictory options.


In 2017, the Asian Development Bank (ADB) created the Data for Development initiative to
develop the statistical capacity of governments in the Asia and Pacific region to meet the
increasing data demands for effective policymaking and for monitoring development goals. The
initiative used high-resolution satellite imagery, geospatial data, and machine-learning
algorithms in conjunction with traditional data sources and conventional survey methods to
estimate the magnitude of poverty in specific areas. Following the success of the Data for
Development initiative, statisticians from ADB’s Statistics and Data Innovation Unit worked with
the Philippine Statistics Authority, the National Statistical Office of Thailand, and the World
Data Lab to examine the feasibility of poverty mapping using satellite imagery and associated
geospatial data. This examination produced a report on Mapping Poverty Through Data
Integration and Artificial Intelligence. (ADB, 2020) This feasibility study demonstrated how
innovative approaches, sometimes supplemented by traditional data sources, can improve the
data collected and reduce the cost of collection and analysis.
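As a rough illustration of the kind of modeling such a feasibility study describes, the sketch below regresses area-level poverty rates on satellite-derived features. The features and figures are synthetic stand-ins, not the study’s actual inputs.

```python
# Hypothetical sketch of poverty mapping: satellite-derived features
# (e.g., nighttime light intensity, built-up area, road density) are
# used to predict survey-measured poverty rates for small areas.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features for 500 small areas; real work would extract these
# from imagery and geospatial layers.
X = rng.random((500, 3))
# Stand-in poverty rates from household surveys for the same areas.
y = 0.6 - 0.4 * X[:, 0] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

A model validated this way can then estimate poverty for areas that surveys never reached, which is where the cost savings described above arise.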

Recently, the World Bank released a Big Data in Action series through its Open Learning
Campus to explore data science applications in World Bank Group operations. (OLC, 2020) This
series includes interviews and case studies that illustrate how non-traditional data sources and
techniques are used in Bank sector studies; some notable examples include the following:

• Mining Big Data to Improve Transport Corridor Investments
• Remote Sensing and Machine Learning in Action for Country Diagnostics in North Macedonia
• Using Geospatial Data to Track Changes in Urbanization

In the area of service to staff, country partners, and other development practitioners, the
World Bank presented two interesting models for creating an enabling environment. The first is
designed specifically for impact evaluations. The Development Impact Evaluation (DIME) offers
projects a host of services for impact evaluation. In addition to training, toolkits, and other
advisory services, DIME offers to build through its country programs:

“…deep country-level data ecosystems to optimize operations and inform policy. The
spatially integrated, cross-sector data systems can include administrative data, from
land registries, road networks, infrastructure investments, tax payments, social
transfers; primary data from surveys, censuses, and crowdsourcing; remote sensing,
from nighttime lights, satellite imagery, and sensors. The data systems are tailored
to support a process of adaptive research and policymaking and adapted over time
to iteratively respond to policy needs. DIME teams create maps and dashboards,
tailored to the country context and specific policy interests; these user-friendly
outputs allow policymakers to interact directly with the data. A dedicated Analytics
team supports development and maintenance of data systems, guaranteeing data
quality and secure data transfer, storage, and handling.” (DIME, 2020)

The second specialized group at the World Bank provides to Bank staff and other development
practitioners a way to deal with the vast amount of unstructured and structured data, inside
and outside the World Bank. The Text and Data Analytics group (TDA) uses a variety of methods, including text clustering, sentiment analysis, topic identification, document classification, text
extraction, entity extraction, and concept mining. This is a service that is triggered by Bank staff
but can serve partner countries and development practitioners. Finally, at the time of writing
this paper, the World Bank has begun cataloguing data science expertise among its staff to
better understand the range of skills in this area across the organization with the aim of
possibly developing a new professional technical family.
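For a flavor of the methods the TDA group lists, the sketch below clusters a handful of invented document snippets with TF-IDF and k-means. It is purely illustrative and does not reflect the group’s actual tooling.

```python
# Illustrative sketch of text clustering / topic separation, two of the
# TDA-style methods named above. The documents are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "teacher training improved classroom instruction quality",
    "rail transit expanded commuter access to the city center",
    "new curricula and teaching materials reached rural schools",
    "road upgrades reduced travel time for freight corridors",
]

# Vectorize, then cluster: documents about education and transport
# should separate into two groups.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g., [0, 1, 0, 1]
```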

These examples show that data science in development research is being used effectively;
however, while the outputs (reports) provide important information to help shape policies and
projects, they fail to inform us about the performance of the development organization.
Furthermore, if in ten years the same measurements are taken again, there is little to connect
the change (positive or negative) to the efforts of the organization.

5.2 Data Science in Projects


Data science in this article includes data management and analysis, covering the process from
collection to reporting. It often involves Big Data or information drawn from massive data sets.
Data of this kind emerge from various sources and, by dint of their size, are viewed as more valuable than other information. Hejnowicz & Chaplowe (2021) identify four significant and
growing areas in the data science toolbox for evaluation: Big Data, Machine Learning (ML),
Artificial Intelligence (AI), and perhaps the lesser-known Digital Twins (DT). While an analysis of
the value of each of these is beyond the purview of this practice note, the utility of each is a
function of the availability, ability, and capacity to collect high-quality data.

There are two ways in which data science is used in projects, and in both cases it is infrequent.
The first is the use of data science in project activities.5 The second, which is integral to this
paper, is the use of data science in monitoring and evaluating projects. A primary reason for the low uptake of data science is the way in which M&E is treated in the project cycle by the Government (borrower) and the MDB (lender). Before addressing how data science can be used
to improve project performance measurement, we must first look at the project results
framework and examine if it adequately reflects the theory of change and where within this
framework data science can be most instructive.

Project ideas are founded on the country strategy which is itself founded on data from
development research and from other projects (sometimes from other countries and/or from
other organizations). The typical project design process does not consider M&E until the core of
the project is mostly designed and even then, the question of M&E focuses mostly on the
products (outputs) and not on the effects of these products on the end users.

5 An example of data science in project activities can be found in the Azerbaijan Judicial Services and Smart Infrastructure Project, wherein a data science approach was designed and implemented for measuring the performance of courts covered by the project. This paper’s author was the Assignment Team Leader. The resulting product, Court Pulse, was nominated for an award by the European Commission for the Efficiency of Justice (CEPEJ).


The list below, extracted from a current MDB project6, shows a typical set of indicators in the results framework:

Highest-level Result
• Number of beneficiaries who are engaged in the project output X years after completion of project support

Intermediate Results
• Number of monitoring reports produced and available online
• Number of beneficiaries supported by the project
• Number of communication and dissemination activities conducted per region
• Share of female beneficiaries

In the example above, there are no indicators measuring the effect of the investment. The
highest result indicator is expected to measure effect, but in this case (as in most cases) the
indicator measures engagement as an effect. Table 2 below provides an illustration using an
actual project. The MDB elevates an operation to the level of output, the output to the level of
intermediate result, and consequently, the intermediate result to the highest-level project
result. While this assures a favorable rating for the project, it is not developmentally sound as it
tells us nothing about the effect of the investment.

Table 2: Typical and Ideal Results Framework

Output
  Typical entry: Training program
  Ideal or appropriate entry: 100 teachers trained
  Note: “Training program” implies curricula, a manual, and/or trained teachers. Specificity here is beneficial.

Intermediate Result
  Typical entry: 100 teachers trained (skills upgraded)
  Ideal or appropriate entry: 75% of teachers still use the new skills two years after receiving the training.
  Note: 100 teachers trained is an output, not a result.

Highest-level project result
  Typical entry: 75% of trained teachers still use the new skills after two years.
  Ideal or appropriate entry: Improvement in students’ performance can be attributed to the teachers’ new skills acquired through the project.
  Note: 75% of trained teachers using new skills after two years is not a highest-level result but an intermediate result showing a degree of continuity that can infer sustainability.

This example illustrates the challenges with M&E in project design. The combined lender and
borrower design team favors quantitative measures of success as these are relatively easy to
collect. However, the ideal framework requires a different set of data collection methods. The
output requires counting of heads whereas the intermediate result requires interviews
complemented by observation of a representative sample to determine the degree to which
the output is used. The highest-level project result is more complex to measure and requires
human, financial, and material resources for research beyond counting participants and

6 The project is not cited to avoid appearing to critique an ongoing project and its team.


Notwithstanding the complexity, this type of indicator is measured using standard data collection instruments. In the case of this example, the 360-degree7 approach would provide a wealth of information on the students’ performance. The ideal results framework was applied in a World Bank-financed education reform project8. The project undertook annual evaluations during the life of the project and produced a vast amount of knowledge about the effect of teacher training on the academic and behavioral performance of students. Sadly, when the project ended, the data and lessons it produced were shelved.
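Measuring the “ideal” intermediate result from Table 2 need not be elaborate. Below is a minimal sketch assuming a hypothetical follow-up survey of trained teachers; the field names and figures are invented.

```python
# A minimal sketch of the intermediate-result indicator from Table 2:
# the share of trained teachers still using the new skills two years
# after training. Survey fields and values are hypothetical.
import pandas as pd

followup = pd.DataFrame({
    "teacher_id": [1, 2, 3, 4, 5],
    "observed_using_new_skills": [True, True, False, True, True],
})

share = followup["observed_using_new_skills"].mean()
print(f"{share:.0%} of sampled teachers still use the new skills")
# Compared against the 75% target, this tells the project whether the
# output (training) is translating into a durable change in behavior.
```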

Some may argue that the highest-level result within the ideal scenario presented above is not
within the scope of a project but in the domain of evaluations that examine the impact of the
project on the intended population. While this is a fair position to take given the current
practices, it is also inadequate. First, it is not unreasonable to expect a perceptible
improvement in the condition of the intended population following an investment, and second,
relegating the measurement of “effect” to summative or impact evaluations eliminates the
monitoring of progress towards “effect” through annual formative evaluations. Delaying monitoring of perceptible change removes a crucial tool for determining whether design alterations or redistribution of resources and effort are necessary. The task of the project
designers is to define a reasonable perceptible improvement and generate an indicator to
measure it effectively and efficiently, as well as devote resources to undertake this work. With
this understanding in mind, the availability of modern technology allows for project leaders to
better facilitate the use of data science at the project level and better assess a measure of
effect within the life of the project. This includes adopting a more real-time adaptive evaluation
that provides immediate feedback so that interventions can be nimbler and more responsive to
change in the short term (Hejnowicz & Chaplowe, 2021).
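A minimal sketch of what such real-time, formative monitoring could look like, with invented quarterly figures and thresholds:

```python
# Formative monitoring sketch: check an indicator each quarter and flag
# a design review as soon as progress falls below trajectory. All
# figures and targets are invented for illustration.
quarterly_riders = {"Q1": 8_000, "Q2": 9_500, "Q3": 9_200, "Q4": 12_000}
target_by_quarter = {"Q1": 7_500, "Q2": 10_000, "Q3": 12_500, "Q4": 15_000}

for quarter, observed in quarterly_riders.items():
    target = target_by_quarter[quarter]
    status = "on track" if observed >= target else "flag for design review"
    print(f"{quarter}: {observed} riders vs target {target} -> {status}")
```

The point is not the arithmetic but the cadence: the comparison happens during implementation, while there is still time to adjust the design.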

Over the medium and longer term, these findings can be validated or complemented by an
impact evaluation or other broader research initiatives. However, by postponing the measurement of effect to impact studies and focusing on simplistic outputs and quantitative outcomes, development organizations are missing critical data and failing to establish relevant baseline measures on which subsequent evaluation can be based. Defining down
development outputs and results ensures limited data is provided to institutions, which
hampers efforts to understand the impact of various kinds of investments and makes
effectiveness nearly impossible to measure responsibly. These approaches offer a means to
address at least some of the methodological critiques of existing development initiatives (Bollen
et al., 2005; Wheeldon, 2012; York & Bamberger, 2020). However, they will not on their own re-orient development in ways that center those whom the projects are meant to benefit.

This paper suggests realigning the results levels typical to most MDB results chains to the model
employed by the Government of Canada, shown below in Figure 7.

7 A combination of interviews with parents, students, teachers, and administrators, complemented by classroom observation, will provide sufficient information to measure the indicator.
8 Azerbaijan - Second Education Sector Development Project (English). Washington, D.C.: World Bank Group. (Closed March 31, 2016)


The distinguishing feature of the Canadian model is that it recognizes two levels of outcomes within the life of the project: an
immediate outcome that deals with change in capacity and capability, and an intermediate
outcome that measures change in behavior perceptible by the end of the project. The MDB
model does not recognize these two levels of outcomes and removes the measure of effect
from the responsibility of the project. (See Table 3 for a comparison of results chain labels.)

Figure 7: Results chain, Canada. Source: Results-Based Management Tip Sheet 2.1: Results Chains and Definitions

Table 3: Labels used in the results chains of the ADB and the World Bank in comparison to the Canadian model

ADB     | World Bank                  | Canadian model
Impact  | Project Development Outcome | Ultimate Outcome
(none)  | (none)                      | Intermediate Outcome
Outcome | Intermediate Outcome        | Immediate Outcome
Output  | Output                      | Output

A note on Table 3: Although the World Bank uses the term “intermediate outcome,” the content of this category is at best a very short-term (immediate) outcome but in most cases an output.
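One way to make the difference tangible is to encode the Canadian four-level chain as a simple data structure. The sketch below uses the labels from Figure 7 and, for illustration, the rail example developed later in this paper; the statements and indicator lists are assumptions, not official ADB or Canadian content.

```python
# Sketch of the Canadian results chain (Figure 7) as data. Statements
# and indicators are illustrative assumptions based on the rail example.
from dataclasses import dataclass, field

@dataclass
class ResultLevel:
    label: str        # e.g., "Immediate Outcome"
    statement: str    # what this level claims will change
    indicators: list = field(default_factory=list)

chain = [
    ResultLevel("Output", "Rolling stock and signaling operational",
                ["systems commissioned"]),
    ResultLevel("Immediate Outcome",
                "Affordable, safe, inclusive rail mobility enhanced",
                ["fare analysis", "safety records"]),
    ResultLevel("Intermediate Outcome",
                "Increased use of the system by the target population",
                ["mobile location records", "rider surveys"]),
    ResultLevel("Ultimate Outcome",
                "Improved access to employment, health, and education",
                ["impact studies (beyond the life of the project)"]),
]

for level in chain:
    print(f"{level.label}: {level.statement}")
```

The two middle levels are precisely what the MDB model collapses or omits, and they are where in-project measurement of effect would live.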

The focus on outputs and quantitative outcomes, and the postponement of the measurement of effect to impact studies, has limited the ability of development organizations to effectively deploy data science applications in projects (as we see in the example in Table 4) and has reduced performance measurement to compliance. In these circumstances, the M&E data submitted by the project to the institution is limited to participation levels and offers little to no insights on
the effect of the investment. The importance of this gap will become clearer as we discuss in
the next section how the institution could measure its effectiveness.

The Asian Development Bank’s (ADB) 2020 version of the Guidelines for Preparing and Using a
Design and Monitoring Framework addresses this issue by distinguishing “impact” from the
project’s results chain (ADB, 2020). However, it still does not go far enough in measuring effect within the project, thus depriving projects of the opportunity to propose minor design changes during implementation. In the ADB sample (Table 4), the output and outcome are measured by the project’s products. The outputs measure the products of several activities while the outcome measures combined outputs, namely an affordable and accessible rail system. In contrast to the ADB model, the social and economic effect of a modernized rail system can be measured as a direct effect within the life of the project; not doing so would be a missed opportunity. In the example used by the ADB, as the project begins to wind down there should be a perceptible increase in commuters using the trains, and we can begin to collect data on commuting trends (see Table 4) that would be instrumental to later impact evaluations.

In the example in Table 4, the suggested revision to the impact statement places the impact outside the direct responsibility of the project but still related to it. The impact statement used in the DMF is improved connectivity to social and economic opportunities. Connectivity is stated as the impact of the project, and if achieved, it implies the success of the investment. However, we are operating in a development context, so we want to improve connectivity to achieve a social and economic outcome. Connectivity is a means to achieve our developmental goal, which is employment and better access to government services, education, and healthcare. While connectivity can be measured during the life of the project, the impacts we seek from development investments are not. These are not measured during the life of the project and are not attributed to a single initiative but instead come about through the intervention of different projects. For example, if the project improves connectivity but there are no initiatives to support employment, encourage the growth of SMEs, or provide relevant skills training, then connectivity is of little value in a development context. Similarly, just as several projects from several sectors together contribute to a single institutional goal, one project can contribute to more than one institutional goal, as we see in Table 4 where the transport project improves access to health and to education.

In the current construct of results frameworks, the impact is in fact a measurable project result, not an impact, which creates a gap at the intermediary result level and removes the logical bridge from the operational to the strategic; importantly, it removes the incentive for borrowers and team leaders to investigate beyond participation levels. For the intermediary outcome they must now measure the type of participation, public opinion, and other quality-oriented questions. This is particularly important for the topic of our discussion: (i) the addition of the intermediate level of outcome creates more opportunities for the use of data science within projects; and (ii) adding an appropriate impact statement links the project to institutional outcomes more directly and increases the opportunity for data science. Table 4 provides a few examples of data science applications at the project level that give more depth to participation levels.

Moving the current impact statement to the intermediate outcome level, and developing an impact statement that recognizes a development result greater than the individual project, creates the imperative to use data science to measure performance at the level of projects.


Table 4: Impact vs. Effect, for a light-rail / metro project. Adapted from the ADB DMF (ADB 2020), Figure 3 (page 4) and Table 1 (page 5).

Impact (not part of the results chain). Relationship to project: longer-term development result.
  ADB example: Connectivity for all to social and economic opportunities in City A improved.
  Revised statement: Increased employment / increased access to health / education / justice, etc.
  Method(s) of measurement: Development research (impact studies, meta studies); AI (natural language processing, processing data from a multitude of sources).

Highest-level Project Outcome (not part of the ADB results framework). Relationship to project: direct.
  ADB example: (none)
  Revised statement (Intermediate Outcome): Increased use by the target population to access social and economic opportunities in City A (connectivity: adapted from the ADB’s impact statement).
  Method(s) of measurement: A good opportunity for data science applications: mobile location records (determine starting point and destination of commuters); user satisfaction surveys collected remotely at ticket points in stations, capturing data about the social and economic opportunities facilitated by the rail system and its importance thereto; going further, reduction in city traffic as more people use the rail.

Outcome (part of the results chain). Relationship to project: direct.
  ADB example: Affordable, safe, and inclusive mobility of urban rail-based transit users in City A enhanced.
  Revised statement (Immediate Outcome): Affordable, safe, and inclusive mobility of urban rail-based transit users in City A enhanced.
  Method(s) of measurement: Fare analysis; safety records; scheduling disruption records; accessibility for disadvantaged citizens (location of stations, wheelchair access, provision for hearing- and visually-impaired users, etc.).

Output (part of the results chain). Relationship to project: direct.
  ADB example and revised statement (unchanged): Signaling, train control, and telecommunication systems operational; rolling stock operational; institutional capacity of metro operations organizations strengthened.
  Method(s) of measurement: Project deliverables.
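As a sketch of the first application listed in Table 4, the snippet below infers rail uptake from entirely invented mobile location records by comparing each commuter’s dominant mode before and after the project; field names and figures are hypothetical.

```python
# Hypothetical sketch: infer modal shift to rail from mobile location
# records by comparing inferred travel mode before and after opening.
import pandas as pd

trips = pd.DataFrame({
    "user_id":       [1, 1, 2, 2, 3, 3],
    "year":          [2023, 2025, 2023, 2025, 2023, 2025],
    "mode_inferred": ["car", "rail", "car", "car", "bus", "rail"],
})

# One row per commuter with their inferred mode in each year.
modes = trips.pivot(index="user_id", columns="year", values="mode_inferred")
switched = ((modes[2023] != "rail") & (modes[2025] == "rail")).mean()
print(f"{switched:.0%} of observed commuters switched to rail")
```

Read against the intermediate outcome in Table 4, a rising share of switchers is direct, in-project evidence of increased use by the target population.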


5.3 Marshalling Data Science for Institutional Performance Measurement


Data science is being used effectively in development research but is limited in projects,
although there is tremendous potential for this as shown above. We have also explored a
suitable environment for data science through the combined design thinking life cycle. How can
these three elements be harmonized to contribute to measuring institutional performance? In
the MDB context, institutional performance rests in the effectiveness of its operations to meet
its strategic objectives. For the World Bank, the strategic objective is “ending extreme poverty
and promoting shared prosperity in a sustainable manner,” while for the Asian Development
Bank it is to “achieve a prosperous, inclusive, resilient, and sustainable Asia and the Pacific,
while sustaining its efforts to eradicate extreme poverty.” Effectiveness of their work must be
measured against these objectives. The suggestion to develop project impact statements that
recognize a development result greater than the individual project responds to this
measurement of effectiveness because it connects each project explicitly to one or more parts
of the objective statement.9

The ADB cautions that the impact level is separated from the results chain to show that its purpose is alignment, not performance measurement (DMF 2020, page 6). The example provided on page 7 fits this description, showing that the impact of the urban transport project is aligned to the institutional objective of ending poverty and promoting shared prosperity. However, Table 5 below tells a different story.

Table 4: Impact definition versus impact statement

Generic Impact Statement (ADB DMF (2020) Table 2, page 7): Jobs and economic activity increased

Specific Impact Statement (ADB DMF (2020) Table 1, page 5): Connectivity for all to social and economic opportunities in City A improved

Revised Impact Statement (Table 3, this paper): Increased employment / reduced poverty / increased access to health / education / justice, etc.

The specific impact statement in Table 4 does not align with the definition of impact in the DMF. Connectivity is a measurable and attributable performance metric; it is measured by kilometers of track, frequency of service, and number of users. It merely assumes alignment with the institutional objectives: if connectivity is secured, then shared wealth and poverty reduction will follow. This is a risky assumption based on a tenuous link between output and developmental outcome. The generic example, on the other hand, like the revised impact statement from Table 3, is more explicit in its alignment with institutional objectives while remaining outside the project as defined by the DMF. The specific impact statement also fails in its assumption that connectivity will invariably contribute to increased economic activity and access to social services; this holds only if other projects support development in those areas.

9 While such connections are made in the project’s official documentation, these play a bureaucratic compliance role.


This paper is not suggesting a radical departure from current practices in performance measurement, merely a recognition that measuring the performance of the institution using modern data science techniques requires an adjustment to the project results framework. A project should not merely show “alignment” with institutional objectives in key intervention areas; it must be quantifiably accountable for its contribution to the objectives of the sector and to the objectives of the organization. An urban transport project, for example, should be able to quantify its contribution to the institutional goal of increasing prosperity. It is here that data science will play a role.
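
One way to make this accountability concrete is the following hedged formalization (ours, not the ADB's): the change in an institutional goal metric G over a period can be decomposed as

\[
\Delta G \;\approx\; \sum_{p \in P} c_p \, \Delta y_p \;+\; \varepsilon
\]

where \(\Delta y_p\) is the measured outcome change delivered by project \(p\), \(c_p\) is an estimated coefficient linking that outcome to the goal metric, and \(\varepsilon\) absorbs exogenous drivers. Credibly estimating the coefficients \(c_p\) is precisely where data science enters, as discussed below.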

A reframing of M&E is necessary to achieve this change in the project results framework. The reframing is not merely a change in procedure, but a shift from measuring projects as stand-alone initiatives to acknowledging the practical ways in which each project fits into the complex of interventions. This requires greater resources for M&E within the project budget, a recognition of M&E’s centrality in project design as shown in the Design Thinking model, and, importantly, impressing upon borrowers the importance of this M&E approach. Whenever a new technology is introduced to an institution, the challenge is not merely technical; the bigger challenge is cultural. Despite the publications, guidance documents, and training on M&E, we must recognize that borrowers still perceive it as a matter of compliance rather than an integral part of the project cycle, and one that can have a positive impact on the borrower’s own systems. Projects strive to meet targets without understanding the effect of these targets on the population the projects are designed to serve. This is encouraged by the emphasis on disbursement, which is tied to adherence to the procurement plan, both of which are key metrics in the project performance rating.

The three key recommendations are: (i) adoption of a Design Thinking approach to the institutional cycle; (ii) introduction of an intermediate outcome level; and (iii) development of an impact (ultimate result) statement that reflects institutional goals and recognizes that they are achieved through multiple interventions from multiple sectors. Data science applications are useful at every level of the revised results framework, as shown in Figure 8.


Figure 8: Revised Results Framework with Data Science applications

The data science applications needed to measure and analyze the output, immediate outcome, and intermediate outcome levels are not complex and are mostly available through commercial providers; however, they require adaptation to suit the purpose. The complexity emerges in developing appropriate algorithms to measure the degree of contribution of a given project to the impact, and the degree to which the impact contributes to the institutional goal (notwithstanding, of course, the availability of quality data).10 To illustrate this problem using Figure 8, assume there are only four projects in a fictitious MDB’s portfolio, one in each of urban transport, TVET, energy, and SME development. How much does an urban transport improvement project contribute to employment in relation to the other projects contributing to the shared institutional impact? There are several approaches to address this question. We can consider assigning weights to project size, scope, reach, and type. The allotment of weights would require credible, evidence-based estimation, but even then it can become unmanageably complex. For example, the weight allotted to the size of a project can vary depending on the type of project: a large initiative exceeding US$50 million in value may be further removed from contributing to employment than a US$10 million project that is causally related to employment. This approach has too many variables to produce a credible weighting formula, as the sketch below illustrates.
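
The following minimal sketch (Python) illustrates the weighting approach and its fragility. Every number in it is hypothetical; the point is only that each coefficient would demand its own evidence-based justification.

    # Hypothetical weighted-contribution formula; all weights are invented.
    PROJECTS = {
        "urban_transport": {"size_musd": 50, "causal_proximity": 0.4, "reach": 0.8},
        "tvet":            {"size_musd": 10, "causal_proximity": 0.9, "reach": 0.3},
        "energy":          {"size_musd": 80, "causal_proximity": 0.2, "reach": 0.9},
        "sme_development": {"size_musd": 15, "causal_proximity": 0.8, "reach": 0.4},
    }

    def employment_contribution_shares(projects: dict) -> dict:
        # Size is deliberately excluded: as noted above, a US$50 million
        # project may be further removed from employment than a US$10
        # million one, so budget is a poor proxy for contribution.
        raw = {name: p["causal_proximity"] * p["reach"]
               for name, p in projects.items()}
        total = sum(raw.values())
        return {name: round(score / total, 3) for name, score in raw.items()}

    print(employment_contribution_shares(PROJECTS))
    # The shares shift substantially whenever any single weight is
    # re-estimated, which is what makes such a formula hard to defend.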

10 Most data science projects fail due to poor data quality, not poorly programmed applications.


Conversely, we can use machine learning to sift through the massive amount of historical data and records from projects financed by all MDBs and bilateral donors across the globe, capturing information that can lead to a reasonable estimate of the extent to which different types of projects contribute to impacts such as employment, and to institutional goals such as poverty reduction. While still complex, the data science approach remains more manageable, especially now that commercial applications are more advanced and adaptable and expertise in data science is increasing. The approach suggested here is also applicable to measuring the degree of contribution of project impacts to institutional goals. Elimination of poverty is not achieved solely through increased employment; it requires a host of social, governance, and economic initiatives. How much does access to quality, affordable education contribute to poverty alleviation? How about healthcare? In the same way that we can measure the contribution of projects to a shared impact, we can also measure the contribution of impacts to shared institutional goals, with data science offering the means to achieve credible measurement of contribution. A minimal sketch of this machine-learning approach follows.
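
The sketch below (Python, scikit-learn) shows one way such an estimate could be approached. The pooled dataset, its file name, and its column names are assumptions; the model choice is illustrative; and the resulting importances indicate association, not proven causal contribution.

    # Sketch: learn the association between project characteristics and an
    # observed impact metric from pooled historical records (hypothetical file).
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    history = pd.read_csv("pooled_mdb_project_records.csv")  # assumed dataset
    features = ["sector", "size_musd", "duration_years", "region"]
    X = pd.get_dummies(history[features])        # one-hot encode categoricals
    y = history["employment_change_pct"]         # observed impact metric

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    # Which characteristics move the impact metric most? Association only;
    # contribution claims still need evaluation designs behind them.
    imp = permutation_importance(model, X_test, y_test,
                                 n_repeats=20, random_state=0)
    ranked = sorted(zip(X.columns, imp.importances_mean),
                    key=lambda pair: -pair[1])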

Not only do we need to make modifications in processes, but alterations must also be made in how we obtain and manage data so that data science applications can function correctly. Agreements with other donors must be reached to share historical project and program data, which will need to be properly catalogued and stored in data warehouses.11 At the risk of delving too deeply into the technicalities of data management, it is important to recognize that the “data warehouse” should be able to handle all types of data, and to integrate, manage, and transform data coming from a multitude of sources. Finally, algorithms must be fed with quality data, which underscores the importance of the management side of the process. This very large data warehouse must be accessible to participating MDBs, bilateral donors, development researchers at universities, and think tanks, allowing them to design and apply their own algorithms and produce insights that complement the development organizations’ own analysis. In this way, despite a degree of centralization, open access ensures that innovation is not stifled and fosters greater integration in the measurement of performance and the improvement of interventions.

Figure 9: Simple rendering of data management architecture

11 Traditionally, a data warehouse is suitable only for structured data that has been integrated and prepared for reporting and advanced analysis. Extended definitions now exist, however, such as the logical data warehouse or extended data warehouse, in which the environment also includes unstructured and semi-structured data (extending to the level of Big Data).
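
As a minimal sketch of this integration requirement, the following (Python, pandas) harmonizes two donors’ differently structured project records into one common table. Every source schema and field name here is invented for illustration; the transform step is the point.

    # Invented source schemas; harmonization into a common layout.
    import pandas as pd

    COMMON = ["donor", "project_id", "sector", "approved_usd", "start_year"]

    a_records = pd.DataFrame({"prj_no": ["A-001"], "sector": ["transport"],
                              "amount_usd": [50_000_000], "yr": [2019]})
    b_records = pd.DataFrame({"id": ["B-17"], "theme": ["education"],
                              "amount_eur": [8_000_000], "eur_usd_rate": [1.1],
                              "approval_date": ["2020-03-15"]})

    def from_donor_a(raw: pd.DataFrame) -> pd.DataFrame:
        out = raw.rename(columns={"prj_no": "project_id",
                                  "amount_usd": "approved_usd",
                                  "yr": "start_year"})
        out["donor"] = "donor_a"
        return out[COMMON]

    def from_donor_b(raw: pd.DataFrame) -> pd.DataFrame:
        out = raw.rename(columns={"id": "project_id", "theme": "sector"})
        out["approved_usd"] = raw["amount_eur"] * raw["eur_usd_rate"]  # currency harmonization
        out["start_year"] = pd.to_datetime(raw["approval_date"]).dt.year
        out["donor"] = "donor_b"
        return out[COMMON]

    # One catalogued table, ready for the shared algorithms described above.
    warehouse = pd.concat([from_donor_a(a_records), from_donor_b(b_records)],
                          ignore_index=True)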

6. Conclusion
The combination of modifications suggested in this paper creates an environment that allows development organizations to draw the greatest advantage from modern data science applications, improving both project performance measurement and the measurement of development organizations’ effectiveness in achieving the changes envisioned in their strategies and objectives. Figure 10 shows the combination of modifications and their relationships. As noted earlier, Design Thinking is user-centered: it draws from what is desirable to the user, what is feasible (technologically, physically, legally, etc.), and what is economically viable. It is an ideal environment because it puts the citizen, the ultimate beneficiary, at the center of all the action and all measurement of performance. The modifications suggested to the institutional and project results frameworks are a necessary step to enable the effective use of data science. More specifically, the goal is to consolidate data science in the service of the institution, beyond its current sporadic use in projects and conceptual use in development research. The modifications do not suggest discarding any data science applications in use today; rather, they promote the institutionalization of these applications to serve common corporate objectives.

Figure 10: Performance measurement within the design thinking environment

This paper, despite its specific recommendations, is a proposal. A detailed design and feasibility study must be undertaken to determine whether it is workable, and other factors, such as efficiency and cost-effectiveness, must be incorporated. To actualize the proposal contained in this paper, the first step must be an interdisciplinary study that expands upon, examines, and tests the feasibility of the suggestions made here, which are summarized as follows:


1. Integrate Design Thinking into the organizational DNA.
2. Adjust the results framework to include a new intermediate result level.
3. Develop appropriate algorithms for data science applications to determine the extent of each project’s contribution to the organizational objectives.
4. Develop an appropriate database architecture to manage the massive amount of data needed to operate the algorithms.

Some of the questions the study will need to investigate include: How can these changes be introduced? What are the consequences of making these changes? What resources will be needed to make the changes and then operationalize them? The use of data science applications should not be a centralized function within a development organization; this would stifle innovation at the operational level and would impose standardization where it is unnecessary and counterproductive. As such, the study must find the balance where projects have room to explore relevant applications of data science, where development research takes place independently and is driven by transparency, and where the organization has access to all these sources to help shape its understanding of its effectiveness on the population it aims to serve.

In the current global climate, especially since the COVID-19 pandemic, it is necessary to quantify the effect of projects on the lives of the people targeted by corporate objectives. It is not enough to make tenuous connections to goals that improve people’s lives; we must show how, and how much, an investment has changed the condition of the communities it was designed to serve. We can no longer be satisfied with knowing how many people use a new highway; we must understand, not merely assume, how their lives have changed because of it. We now have the means to do this through improved data science applications, but we cannot simply superimpose modern methods on existing processes.

This paper does not suggest a radical departure from current practices in performance measurement, merely a recognition that measuring the performance of the institution using modern data science techniques requires an adjustment to the project results framework and a sharper focus on the end user through the lens of Design Thinking. Development organizations and data science applications both need some modification to make them fit for purpose; if we can do this correctly, it will help the international development sector design better interventions and help more people, more efficiently and more effectively than ever before.


Works Cited
ADB. (2020). Guidelines for preparing and using a design and monitoring framework. Metro Manila: Asian Development Bank.
ADB. (2020). Mapping poverty through data integration and artificial intelligence. Manila: Asian Development Bank.
BBC. (2020, June 24). Retrieved from https://www.bbc.com/news/technology-53165286
Bollen, K., Paxton, P., & Morishima, R. (2005). Assessing international evaluations: An example from USAID’s Democracy and Governance Program. American Journal of Evaluation, 26(2), 189-203.
Carden, F. (2013). Evaluation, not development evaluation. American Journal of Evaluation, 34(4), 576-579.
Chouinard, J. A., & Hopson, R. (2015). A critical exploration of culture in international development evaluation. Canadian Journal of Program Evaluation, 30(3), 248-276.
Cohen, J. L., & Kharas, H. (2018). Using big data and artificial intelligence to accelerate global development. Washington, DC: Brookings Institution.
Cornwall, A., & Jewkes, R. (1995). What is participatory research? Social Science & Medicine, 41(12), 1667-1676.
Dam, R. F., & Siang, T. (2020). Design thinking: Get a quick overview of the history. Retrieved from https://www.interaction-design.org/literature/article/design-thinking-get-a-quick-overview-of-the-history
DIME. (2020, July). DIME data services. World Bank. Retrieved from http://pubdocs.worldbank.org/en/221011596024068203/DIME-Data-Services.pdf
Gibbons, S. (2016, July 31). Design thinking 101. Retrieved from https://www.nngroup.com/articles/design-thinking/
Hejnowicz, A., & Chaplowe, S. (2021). Catching the wave: Harnessing data science to support evaluation’s capacity for making a transformational contribution to sustainable development. Canadian Journal of Program Evaluation / La Revue canadienne d’évaluation de programme, 162.
Hoolohan, C., & Browne, A. L. (2020). Design thinking for practice-based intervention: Co-producing the change points toolkit to unlock (un)sustainable practices. Design Studies, 67, 102-132.
Leeuw, F. L., & Furubo, J. E. (2008). Evaluation systems: What are they and why study them? Evaluation, 14(2), 157-169.
Martin, R. (2007). The opposable mind: How successful leaders win through integrative thinking. Boston: Harvard Business School Publishing.
McIntyre, A. (2007). Participatory action research. Sage Publications.
Ofir, Z., Kumar, S., & Kuzmin, A. (2013). Evaluation in developing countries: What makes it different. Emerging practices in international development evaluation, 11-24.
OLC. (2020). Big data in action: Applications for World Bank Group operations. World Bank Open Learning Campus. Retrieved from https://olc.worldbank.org/content/big-data-action-applications-world-bank-group-operations
Olteanu, A. (2019, June 5). How can we overcome the challenge of biased and incomplete data? (Knowledge@Wharton, Interviewer). Retrieved from https://knowledge.wharton.upenn.edu/article/big-data-ai-bias/
Picciotto, R. (2020). Towards a ‘New Project Management’ movement? An international development perspective. International Journal of Project Management, 38(8), 474-485.
Rowe, P. G. (1987). Design thinking. MIT Press.
Wheeldon, J. (2012). After the spring: Probation, justice reform and democratization from the Baltics to Beirut. Eleven International Publishing.
York, P., & Bamberger, M. (2020). Measuring results and impact in the age of big data: The nexus of evaluation, analytics, and digital technology. The Rockefeller Foundation.
