
THE POTENTIAL IMPACT OF ARTIFICIAL INTELLIGENCE ON EQUITY AND INCLUSION IN EDUCATION

OECD ARTIFICIAL INTELLIGENCE PAPERS
August 2024 No. 23

OECD EDUCATION WORKING PAPERS SERIES


Working Papers should not be reported as representing the official views of the OECD or of its member
countries. The opinions expressed and arguments employed herein are those of the author(s).
Working Papers describe preliminary results or research in progress by the author(s) and are published to
stimulate discussion on a broad range of issues on which the OECD works. Comments on Working Papers
are welcome, and may be sent to the Directorate for Education and Skills, OECD, 2 rue André-Pascal,
75775 Paris Cedex 16, France.
This document, as well as any data and map included herein, are without prejudice to the status of or
sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name
of any territory, city or area.
The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at
http://www.oecd.org/termsandconditions.
Comment on the series is welcome, and should be sent to edu.contact@oecd.org.
This working paper has been authorised by Andreas Schleicher, Director of the Directorate for Education
and Skills, OECD.
www.oecd.org/edu/workingpapers


Acknowledgements

This working paper was prepared as part of the OECD Education for Inclusive Societies project. The
authors would like to thank Hannah Borhan, Lucie Cerna, Shivi Chandra, Marc Fuster Rabella,
Paulo Santiago and Quentin Vidal for their valuable feedback and comments. Thanks to Eda Cabbar and
Daiana Torres Lima for their editorial work.


Abstract

This working paper reviews the impact of artificial intelligence (AI) on equity and inclusion in education,
focusing on learner-centred, teacher-led and other institutional AI tools. It highlights the potential of AI in,
e.g. adapting learning, while also addressing challenges such as access issues, inherent biases and the
need for comprehensive teacher training. The paper emphasises the importance of balancing the potential
benefits of AI with ethical considerations and the risk of exacerbating existing disparities. It highlights the
need to address privacy and ethical concerns, enhance cultural responsiveness, manage techno-ableism
and provide continuing professional learning in AI. Additionally, the paper stresses the importance of
maintaining educational integrity amidst growing commercial influence. It encourages research on AI tools’
implications for equity and inclusion to ensure that AI adoption in education supports a more equitable and
inclusive learning environment.


Table of contents

Acknowledgements
Abstract

1 Introduction

2 Definitions, guidelines and conceptualisations
Definitions of artificial intelligence, equity and inclusion
Guidelines and frameworks related to artificial intelligence in education
Taxonomy to analyse the impact of artificial intelligence on equity and inclusion in education

3 Learner-centred tools to support equity and inclusion
Opportunities of learner-centred AI tools for equity and inclusion
Challenges of learner-centred AI tools for equity and inclusion

4 Teacher-led tools to support equity and inclusion
Opportunities of teacher-led tools for equity and inclusion
Challenges of teacher-led tools for equity and inclusion

5 Other institutional tools that can foster equity and inclusion
Opportunities of institutional tools for equity and inclusion
Challenges of institutional tools for equity and inclusion

6 Conclusions
Embracing the potential for adaptive learning while addressing privacy, ethical and accountability issues
Recognising the potential to enhance cultural responsiveness while keeping in mind inherent biases
Balancing the potential for accessibility with challenges such as techno-ableism and impact on socio-emotional skills
Developing and improving teacher training in AI
Exploring how to maintain educational integrity amidst the growing commercial influence in the sector
Encouraging research on the implications of AI for equity and inclusion in education, and clarifying the role of institutions at the national level in its systematic implementation

References

Tables
Table 2.1. Conceptualising equity and inclusion regarding digital technologies in education
Table 2.2. AI techniques and technologies
Table 2.3. Taxonomy of AI tools in education
Table 4.1. Teacher activities and AI

Figures
Figure 2.1. Definitions of equity and inclusion in education
Figure 3.1. Quantity and quality of digital resources by socio-economic profile of schools (2022)
Figure 4.1. Continuing professional learning needs by school characteristics (2018)

Boxes
Box 3.1. Algorithmic biases
Box 4.1. Aspects of teaching that AI could support


1 Introduction
Artificial intelligence (AI) has sparked transformative possibilities in many facets of human life in the current
era of rapid technological advancement. AI tools continue to make headlines, while critiques also emerge,
citing algorithmic biases, privacy concerns, accountability issues, implications for equity and inclusion, and
others. As a general-purpose technology, AI is expected to transform and is already changing a wide range
of areas, from advertising, agriculture, and criminal justice, through education, finance, health, marketing,
science and security to transport (OECD, 2019[1]). Benefits of AI use in these areas include improving the
efficiency of decision making, saving costs and enabling better resource allocation (ibid.).
AI might also have profound impacts on education systems, including on equity and inclusion. Therefore,
this working paper delves into some debates around the connection between AI, equity and inclusion in
education. By exploring the opportunities and challenges that arise as AI tools reshape the educational
landscape, it aims to set the ground for a meaningful discourse on ensuring equitable and inclusive
education in times of AI.
To this end, the working paper has three objectives. First, it aims to provide policy makers with a
categorisation of AI tools that can support equity and inclusion in education. Following Holmes and
Tuomi (2022[2]), the AI tools have been categorised into learner-centred, teacher-led, and other institutional
tools. This taxonomy is particularly useful when addressing the question of who the primary user or the
primary beneficiary is.
Second, in categorising the AI tools and providing examples, the working paper aims to highlight that AI
solutions in various areas already exist, there is demand for them and they are likely already being used
by educational institutions across OECD countries. New AI tools are being introduced in classrooms
without much supervision or oversight in many countries. This kind of “unchecked adoption” of AI tools can
result in some schools, often those that can afford the technology, reaping some of the benefits (but also
potential risks) sooner than others. This leads to the final objective of this working paper, namely
underscoring that the use of AI tools in education occurs mainly without systematic oversight and
regulation. To this end, the paper outlines some of AI tools' significant opportunities and challenges. While
opportunities and challenges are categorised based on the learner-centred, teacher-led and other
institutional tools taxonomy, there is great overlap among them, and, ultimately, almost all the tools
discussed were created to help students learn and address students’ needs (whether directly or indirectly
by, e.g. assisting teachers). In particular, challenges mentioned in one section often extend and apply to
AI tools described in other sections.
While the working paper does not provide an exhaustive list of AI tools, many are already present in
schools, along with the opportunities and challenges they bring. As such, the question emerges of the
extent to which policy makers should aim to support or discourage the use of the tools from a centralised
perspective. While this working paper does not seek to provide a comprehensive answer at this early stage,
it is the right time to ponder this question.
The research for this paper was desk-based and conducted mainly in English. As such, the tools presented may
not be relevant in non-English-speaking countries. Nevertheless, the opportunities and challenges are
likely applicable regardless of location. Furthermore, little information is available on country-level
approaches to AI in education. This probably partially stems from the fact that few education systems have

implemented system-level guidance or policies. Future research should place a greater focus on this
aspect. Finally, the opportunities presented in this paper should be viewed more as hypotheses rather than
evidence-based evaluations. Indeed, for many of the (types of) AI tools, there are only a handful of robust
evaluations for the potential benefits or improvements in student learning and well-being (Holmes, 2023[3]).
Where these are available, they are referenced. The working paper focuses mostly on school education.
This field is evolving rapidly and new AI tools are emerging daily. Challenges outlined in this paper are
also being constantly addressed. In a year, some of the content will likely be out of date. As such, caution
is required when reading this analysis long after publication. That said, the information in this paper can
be used to take stock of where the field stands at present and, in a few years, of how it has evolved.
The working paper is structured as follows. The next section provides a framework for analysis in regard
to definitions and guidelines published on AI in education, as well as a taxonomy to analyse the impact on
equity and inclusion. Section 3 describes opportunities of learner-centred tools, such as adapting learning,
content enrichment, support for learners with special education needs, and information and advice.
However, these tools also face challenges such as ensuring access, combating techno-ableism1,
addressing bias, maintaining socio-emotional learning, and balancing AI integration with privacy and
accountability concerns.
In section 4, the paper elaborates on teacher-led tools. It discusses the potential of supporting teaching
with AI-powered robots, curating learning materials, assisting in assessment and classroom management,
identifying some special education needs, and providing continuing professional learning opportunities.
Yet, these benefits are weighed against challenges like the high costs of AI tools, the need to balance
commercial interests with educational objectives, and the imperative of equipping educators with the
necessary AI knowledge and skills.
In section 5, the paper explores institutional tools that can foster equity and inclusion, with opportunities
such as increasing the efficiency of admissions, better identifying students at risk of early leaving from
education and training, and data-based decisions. However, these tools present challenges, including
addressing the complexities and ethical considerations involved in their implementation. The final section
concludes and provides some overarching conclusions and policy implications.

1. Techno-ableism refers to a tendency to argue that technology is a “solution” for disability and, as such, that people with disabilities need to be “fixed” (Shew, 2020[74]).


2 Definitions, guidelines and conceptualisations

Definitions of artificial intelligence, equity and inclusion

Before discussing the impact of AI on equity and inclusion in education, it is necessary to define AI and
explore how AI can be applied in educational contexts in general. Defining AI is a crucial yet challenging
starting point in the ever-changing realm of technology. This working paper adopts the definition of the
OECD as recommended by the Council on Artificial Intelligence (OECD, 2023, p. 7[4]):

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

Other definitions stress AI’s potential meaning for society even further, e.g. referring to AI as “a set of
sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a
human being” (Council of Europe, 2024[5]).
Given the focus of this working paper on equity and inclusion in education, it is also essential to define
these (Figure 2.1). The concepts vary across the literature and in the interpretations of different education
systems (Cerna et al., 2021[6]; Varsik, 2022[7]). The OECD Education for Inclusive Societies project offers
a comprehensive insight into the critical elements encompassed within countries' definitions of equity and
inclusion (OECD, 2023[8]). In regard to equity, the project’s definition includes two complementary
approaches. First, horizontal equity reflects the overall fair provision of resources to each part of an
education system, providing similar resources to those alike. Second, vertical equity involves giving
additional resources to disadvantaged groups or schools based on their needs. Equitable education
systems are thus defined as those that ensure the achievement of educational potential regardless of
personal and social circumstances, including factors such as gender, ethnic origin, Indigenous
background, immigrant status, sexual orientation and gender identity, special education needs, and
giftedness (Cerna et al., 2021[6]; OECD, 2017[9]).
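To make the two approaches concrete, the sketch below (a minimal Python illustration; the base amount, weights and need categories are purely hypothetical, not drawn from any actual funding formula) gives every student the same base amount (horizontal equity) and tops it up for students with greater needs (vertical equity):

# Illustrative only: the base amount, weights and need categories are
# hypothetical, not taken from any real funding formula.
BASE_PER_STUDENT = 1000.0   # horizontal equity: the same base amount for everyone

NEED_WEIGHTS = {            # vertical equity: top-ups for greater needs
    "none": 0.0,
    "socio_economic_disadvantage": 0.25,
    "special_education_needs": 0.60,
}

def school_allocation(students):
    """Total funding for a school, given one need category per student."""
    return sum(BASE_PER_STUDENT * (1 + NEED_WEIGHTS[need]) for need in students)

# Four students: two without identified needs, one disadvantaged student,
# one student with special education needs -> 1000 + 1000 + 1250 + 1600 = 4850
print(school_allocation(["none", "none",
                         "socio_economic_disadvantage",
                         "special_education_needs"]))

Real funding formulas weight many more dimensions, but the two ingredients remain visible: a uniform base (horizontal) and need-based top-ups (vertical).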
Inclusion is defined as “an on-going process aimed at offering quality education for all while respecting
diversity and the different needs and abilities, characteristics and learning expectations of the students and
communities, eliminating all forms of discrimination” (UNESCO, 2009, p. 126[10]). More than a particular
policy or practice related to a specific group of students or individuals, this definition identifies an ethos of
inclusion and communities of learners, which does not only involve an individual dimension but also a
communal one. Inclusive education aims to respond to all students’ needs beyond school attendance and
achievement while improving all students’ well-being and participation (Cerna et al., 2021[6]).

Figure 2.1. Definitions of equity and inclusion in education

Equity
• Equitable education systems are those that ensure the achievement of educational potential
regardless of personal and social circumstances, including factors such as gender, ethnic
origin, Indigenous background, immigrant status, sexual orientation and gender identity,
special education needs, and giftedness.

Inclusion
• An on-going process aimed at offering quality education for all while respecting diversity and
the different needs and abilities, characteristics and learning expectations of the students
and communities, eliminating all forms of discrimination.

Note: The definitions were adopted by the Education for Inclusive Societies (and the previous Strength through Diversity) project. Other
organisations, projects, countries and researchers may use different definitions.
Source: OECD (2023[8]), Equity and Inclusion in Education: Finding Strength through Diversity, https://doi.org/10.1787/e9072e21-en and
UNESCO (2009[10]), Defining an Inclusive Education Agenda: Reflections around the 48th session of the International Conference on Education,
https://unesdoc.unesco.org/ark:/48223/pf0000186807 (accessed on 25 March 2024).

These definitions are supported by Gottschalk and Weise (2023[11]), who provide a detailed
conceptualisation for defining equity and inclusion in regard to digital technologies in education (Table 2.1).
Digital equity in education promotes fairness and equity in student access to digital technologies, skills,
uses and attitudes. As such, digital tools for equity in education provide additional learning resources for
students in need and help them participate fully in education (Gottschalk and Weise, 2023[11]). Digital
inclusion, in turn, overcomes barriers to participation based on student differences. Digital technologies for
inclusion are then adapted to acknowledge, accept and respect student differences. They also ensure that
students feel included, promote their well-being and sense of belonging, and ensure non-discrimination
(ibid.).

Table 2.1. Conceptualising equity and inclusion regarding digital technologies in education

Equity
• In education (digital equity in education): Promoting fairness and equity in access to digital technologies (including hardware, software, high-quality broadband, etc.), digital skills, uses and attitudes for all students.
• For equity in education (digital technologies for equity in education): Using digital technologies to promote equity in education, such as providing additional learning resources for students in need to promote equitable outcomes and help them participate fully in (digital) education.

Inclusion
• In education (digital inclusion in education): Overcoming barriers to participation in digital education based on student differences. This also involves ensuring that digital tools in education are designed and used to promote the participation and inclusion of all learners.
• For inclusion in education (digital technologies for inclusion in education): Adapting digital technologies and learning environments to promote inclusion in education, acknowledging, accepting and respecting student differences. Using digital technologies to promote inclusion should aim to ensure students feel included, promote belonging and a sense of well-being, while ensuring non-discrimination.

Source: Gottschalk and Weise (2023[11]), Digital equity and inclusion in education: An overview of practice and policy in OECD countries,
Table 1.1., https://doi.org/10.1787/7cb15030-en.

Having defined AI, equity and inclusion, it is worthwhile to explore how AI can be applied in educational
contexts before moving on to more specific cases. To that end, differentiating between AI techniques and
AI technologies can be a helpful approach (UNESCO, 2022[12]). The former refers to methods, approaches
and algorithms used in AI to solve specific tasks or problems (Table 2.2). They are the underlying
mathematical and computational processes that enable AI systems to learn, reason and make decisions.
AI technologies, in turn, encompass the hardware and software infrastructure that facilitates AI systems'
development, deployment and operation. Individual AI tools will then deploy AI techniques and
technologies to address a particular issue. For instance, intelligent tutoring systems (section Learner-
centred tools to support equity and inclusion) can use a variety of AI techniques (commonly, e.g. machine
learning) to train on vast amounts of data and then deploy AI technologies (commonly, e.g. chatbots) to
interact with the user.
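As a loose illustration of this division of labour (a deliberately trivial sketch, not a description of any real product: the mastery estimate and canned replies below are invented for this example), the first function plays the role of an AI technique, a statistical estimate learned from interaction counts, while the second plays the role of an AI technology, a chatbot-style interface that deploys it:

# Deliberately trivial, hypothetical sketch of the technique/technology split.

# "AI technique": a toy learned estimate. A real intelligent tutoring system
# would train statistical models on large amounts of interaction data.
def predict_mastery(correct, attempts):
    """Smoothed success rate (Laplace smoothing) as a mastery estimate."""
    return (correct + 1) / (attempts + 2)

# "AI technology": a chatbot-style interface that deploys the technique
# to interact with the learner.
def tutor_reply(correct, attempts):
    mastery = predict_mastery(correct, attempts)
    if mastery < 0.5:
        return "Let's revisit the basics with a worked example."
    if mastery < 0.8:
        return "Good progress - here is a slightly harder exercise."
    return "You seem ready for an enrichment task."

print(tutor_reply(correct=3, attempts=10))  # low mastery -> revision prompt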

Table 2.2. AI techniques and technologies

AI techniques
• Definition: Methods and approaches used to solve specific tasks or problems, as well as underlying mathematical and computational processes that enable AI systems to learn, reason and make decisions.
• Examples: Machine learning algorithms, deep learning, supervised and unsupervised learning, neural networks.

AI technologies
• Definition: Hardware and software instruments, frameworks and platforms that enable the implementation of AI techniques to create AI applications.
• Examples: Autonomous agents (avatars, chatbots, robots), image and speech recognition, natural language processing.

Source: UNESCO (2022[12]), K-12 AI curricula: a mapping of government-endorsed AI curricula, https://unesdoc.unesco.org/ark:/48223/pf0000380602 (accessed on 15 January 2024).

Guidelines and frameworks related to artificial intelligence in education

Given the globalised nature of technology developments, policies addressing the use of AI to foster equity
and inclusion in education are not necessarily constrained by country borders. Furthermore, international
policy frameworks can influence national directives. This section outlines some prominent guidelines and
frameworks focusing on AI in the context of equitable and inclusive education. Within the OECD, the
Council’s Recommendation on Artificial Intelligence lays the foundation for how governments and other
actors can develop a human-centric approach to trustworthy AI (OECD, 2023[4]). As a legal instrument, its
principles represent a common aspiration for OECD countries. In regard to equity and inclusion, the first
principle targets inclusive growth, sustainable development and well-being. Countries are called upon to
consider how AI can advance “the inclusion of underrepresented populations, reducing economic, social,
gender and other inequalities” (OECD, 2023, p. 7[4]). Furthermore, the OECD Secretariat has joined forces
with Education International, a global federation of teacher unions, to develop nine opportunities,
guidelines, and guardrails for the effective and equitable use of AI in education (OECD, 2023[13]). These
aim to help educational stakeholders navigate some of the fast-moving developments in AI, and a notable
focus is on equity of access and use (ibid.).
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) also provides several
guidelines and frameworks on AI in education. The Recommendation on the Ethics of Artificial Intelligence
marks a consensus among 193 member states concerning the core values, principles and policies that
should drive the advancement of AI (UNESCO, 2022[14]). It outlines practical approaches, such as tools,
methodologies and initiatives intended to maximise AI's beneficial influence on society while mitigating
associated risks (ibid.). Moreover, the Beijing Consensus on Artificial Intelligence and Education codifies
the agreements on the ethical use of AI in education (UNESCO, 2019[15]). It is complemented by guidelines
for policy makers on leveraging the opportunities and addressing the challenges and risks associated with
AI and education (UNESCO, 2021[16]). The guidelines outline the definitions, techniques and technologies
of AI, and analyse some emerging trends and implications of AI for teaching and learning. More recently,
UNESCO published guidance on generative AI in education and research, marking the first attempt to
create a global standard for the use of generative AI (UNESCO, 2023[17]). Additionally, UNESCO is working
on AI competency frameworks for students and teachers, as well as on a global survey on the
governmental use of AI as a public good for education, including existing AI competency frameworks and
continuing professional learning programmes on AI for teachers (UNESCO, 2023[18]).
Looking at other international organisations, the United Nations Children's Fund (UNICEF) provides policy
guidance on AI for children, presenting recommendations for building AI policies and systems that uphold
child rights (UNICEF, 2021[19]). The policy guidance advocates for children's rights within both government
and private sector, and seeks to enhance awareness of how AI systems can support and compromise
children’s rights (ibid.). These guidelines go beyond education and present a more holistic discussion on
how AI can impact children’s lives. Furthermore, the European Commission launched the Digital Education
Action Plan (2021-2027), a policy initiative that, among other things, includes ethical guidelines on the use
of AI and data in teaching and learning (European Commission, n.d.[20]). The guidelines are designed to
help teachers and educators understand AI tools' potential in education and raise awareness of possible
risks (ibid.).
Some national examples of guidelines and frameworks for AI in education can also be found. The
United States Department of Education published recommendations on the future of teaching and learning
in the context of AI. The recommendations also focus on using emerging AI technology for digital equity
and inclusion (U.S. Department of Education, Office of Educational Technology, 2023[21]). In England
(United Kingdom), the Department for Education published a policy paper on using generative AI in the
education sector, including large language models like ChatGPT and Google Bard (Department for
Education, 2023[22]). It discusses the potential of these tools to reduce workload and enhance teaching
while cautioning about their limitations and the need for professional judgment to ensure content accuracy
and appropriateness (ibid.). It also emphasises the importance of data privacy, intellectual property rights
and the integration of AI into formal assessments and future skills training (ibid.). In Norway, the Directorate
for Education and Training provides regularly updated guidance on integrating AI in schools, emphasising
the need for schools to evolve with society and technology (Directorate for Education and Training,
2024[23]). It highlights the rapid development of AI, along with its challenges and opportunities, and stresses
the importance of addressing these issues immediately and over the long term (ibid.). The Directorate also
outlines specific advice for schools on incorporating AI into education, including updating curricula to
prepare students for a future influenced by AI, emphasising critical thinking and ethical considerations, and
fostering a culture of experimentation and evaluation in pedagogical practice (ibid.).

Taxonomy to analyse the impact of artificial intelligence on equity and inclusion in education

Having defined the concepts and outlined some available guidelines and frameworks, this section explores
which conceptualisations and taxonomies are available in the literature. For instance, Pons (2023[24])
differentiated between the impacts inside and outside the classroom. Chen, Chen and Lin (2020[25])
considered the functions of AI in administration (e.g. AI can perform some administrative tasks faster or
more cost-effectively and can help teachers in data-driven work), instruction (e.g. analyse course
materials, help create learning plans), and learning (e.g. uncover learning shortcomings, apply intelligent
adaptive interventions). The primary focus of this working paper is to help policy makers orient themselves
in the vast array of tools and their impacts on equity and inclusion in education. To this end, the authors
adopted and adjusted the taxonomy by Holmes and Tuomi (2022[2]). The rationale for this taxonomy of AI
tools in education – categorising them into learner-centred, teacher-led and other institutional tools
(Table 2.3) – is primarily based on each tool's primary beneficiary and intended application. This taxonomy
allows for a clearer understanding of how AI is applied in different facets of the educational ecosystem,
addressing distinct challenges and objectives in each sector. Furthermore, this taxonomy comes with a
helpful categorisation of the vast amount of AI tools (Holmes, 2023[3]; Holmes and Tuomi, 2022[2]).2 This
provides a solid base to elaborate on the more specific focus of this working paper on equity and inclusion
in education. While this categorisation might be helpful in some contexts, overlaps exist among the
categories. For instance, learner-centred AI tools indirectly benefit and support teachers, as they can save
them time. Furthermore, it could be argued that most, if not all, of the tools discussed in this working paper
have been developed to improve student academic and well-being outcomes.

Table 2.3. Taxonomy of AI tools in education

Learner-centred tools to support equity and inclusion
• Purpose: Designed to enhance the learning experience of students.
• Examples of AI tools: Intelligent tutoring systems, AI-enabled simulations, AI-enabled tools to support students with special education needs, etc.

Teacher-led tools to support equity and inclusion
• Purpose: Assist teachers in their instructional and administrative roles.
• Examples of AI tools: AI-powered robots, assistants with assessment and classroom management, continuing professional learning coaches, etc.

Other institutional tools that can foster equity and inclusion
• Purpose: Aimed at addressing broader institutional objectives such as improving operational efficiency and managing admissions.
• Examples of AI tools: Smart admission systems, tools for identifying at-risk students and assistants with data-based decision making.

Note: Categories can overlap in regard to purpose and examples.
Source: Holmes and Tuomi (2022[2]), State of the art and practice in AI in education, https://doi.org/10.1111/ejed.12533.

Learner-centred AI tools (section 3) are designed to enhance students' learning experience. They can
provide adaptive learning and offer support in areas where students may struggle. This category includes
tools like intelligent tutoring systems, AI-enabled simulations, AI-enabled tools to support students with
special education needs and others. These technologies have not necessarily been designed for students
and to be used by students (Holmes and Tuomi, 2022[2]). Instead, they were often repurposed for learning
(ibid.).
Teacher-led AI tools (section 4) assist teachers in their instructional and administrative roles. They are
designed to streamline tasks like assessment, curation of learning materials and classroom management,
thereby enhancing teaching efficiency and effectiveness. AI-powered robots, tools that enable smart
curation of learning materials, assistants with assessment and classroom management, tools that help
identify some special education needs, and continuing professional learning coaches fall into this category.
Finally, other institutional tools (section 5) aim to address broader institutional objectives, such as
improving operational efficiency and managing admissions. They can be used at a higher administrative
level and impact the institution as a whole. Examples include smart admission systems, tools for identifying
students at risk of early leaving from education and training, and assistants with data-based decision
making.

2. Holmes (2023[3]) also provides an elaborate overview of AI tools in each of the categories.


3 Learner-centred tools to support equity and inclusion

Learner-centred AI tools are designed to improve students' educational experiences. They aim to enable
tailored learning experiences and furnish assistance in subjects where students might face difficulties.
They have the potential for adaptivity, enriching content, assistance in learning, and informing and advising
students. However, these tools also come with several challenges. These include access disparities,
dangers of techno-ableism, various inherent biases, socio-emotional implications, and privacy and
accountability concerns.

Opportunities of learner-centred AI tools for equity and inclusion

Learner-centred AI tools have the potential to mark a transformative moment in education, opening doors
to new opportunities for equity and inclusion. Intelligent tutoring systems exemplify this shift, offering
adaptive learning experiences that have the potential to enhance educational outcomes for a diverse
student body. Similarly, AI-enabled simulations can enrich content, making learning more engaging and
culturally rich, thereby catering to a varied student demographic. For learners with special education needs,
AI tools can provide additional support and equalise access to educational content. Furthermore,
AI-powered tools, such as chatbots, have the potential to play a role in promoting inclusivity. They can
offer rapid, universal access to information and support mental health. As these technologies evolve, they
might play an increasingly significant role in fostering inclusive and equitable learning environments.

Adapting learning

Adaptivity in learning, sometimes referred to as “personalisation”3, has been highlighted as one of the most
defining features of AI tools (Khosravi et al., 2022[26]). In particular, intelligent tutoring systems (ITS) can
significantly advance educational technology, combining AI techniques with pedagogical methods to tailor
instructional activities to individual learner profiles. These systems adjust content, pace and difficulty level
in real-time, responding to the unique characteristics, needs and performance of each student (Conati
et al., 2021[27]; Keleş et al., 2009[28]; Mousavinasab et al., 2018[29]). Indeed, adaptive learning is a significant
advantage of ITS (de la Higuera and Iyer, 2024[30]). Such adaptability can result in more inclusive education
responsive to the varied learning requirements of a diverse student body. For example, Carnegie
Learning's adaptive learning platform provides a customised learning experience that aims to adapt in
real-time to each student's interactions. Khan Academy’s Khanmigo offers AI one-on-one tutoring to
students by, e.g. mimicking a writing coach by giving prompts and suggestions to move students forward
as they write, debate and collaborate. Furthermore, individuals for whom English is not their first language
can benefit from AI tools that rewrite the text into grammatically correct and stylistically appropriate
English – provided that they understand, access, navigate, expertly prompt, corroborate, and ethically and
effectively incorporate text generated by AI tools (Warschauer et al., 2023[31]). Another key opportunity ITS
presents is catering to gifted students (Johns Hopkins Center for Talented Youth, 2023[32]). These systems
could provide enriched content along with adapted enrichment activities (Johns Hopkins Center for
Talented Youth, 2023[32]; Pons, 2023[24]). Such an approach matches gifted students’ academic abilities,
and promotes independent exploration and research, thus fostering a conducive learning environment for
their skills and talents (Rutigliano and Quarshie, 2021[33]).

3. Some researchers observed that the term “personalisation” is imprecise (Plass and Pawar, 2020[144]). For some educators, it can mean tailoring activities to each student; for others, it might mean giving learners voice and choice. Furthermore, many education technology products “personalise” in limited ways (U.S. Department of Education, Office of Educational Technology, 2023[21]).
This tailored approach has the potential to help struggling students catch up academically so they do not
remain disadvantaged due to educational setbacks. Indeed, there is some emerging evidence suggesting
that ITS help disadvantaged students and ethnic minorities (Huang et al., 2016[34]). This can potentially
address the gap in educational equity, as these students often lack access to individualised support that
can be pivotal in their academic development (OECD, 2020[35]). However, more research of higher quality
is needed, with a recent meta-analysis reporting mixed results (Wang et al., 2023[36]). In particular, a lack
of research is visible on the heterogeneous effects of ITS on diverse learners.
An indirect effect of ITS can be alleviating some tasks performed by school staff members, enabling them
to focus on more complex aspects of teaching and learning. This can enhance the quality of education and
contribute to a more sustainable workload for educators. For instance, Carnegie Learning's technology
aims to support teachers by providing detailed insights into student performance, enabling them to
intervene more effectively and efficiently. While system-level implementation of ITS remains rare, some
countries, such as Austria, Korea, Luxembourg and Türkiye, are pioneering these (OECD, 2023[37]).
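To illustrate what adjusting difficulty in real time can mean at its simplest (an Elo-style sketch with assumed constants and a logistic response model; production ITS rely on far richer learner models, e.g. Bayesian knowledge tracing), the loop below updates an ability estimate after each answer and picks the next item's difficulty to target roughly a 70% success rate:

# Hypothetical sketch of real-time difficulty adaptation; the constants and
# the Elo-style update are illustrative, not any product's actual algorithm.
import math

K = 0.1  # step size for ability updates (assumed value)

def expected_success(ability, difficulty):
    """Probability of a correct answer under a logistic response model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability, difficulty, correct):
    """Nudge the ability estimate by how surprising the outcome was."""
    return ability + K * (int(correct) - expected_success(ability, difficulty))

ability = 0.0
for difficulty, correct in [(0.0, True), (0.2, True), (0.5, False), (0.3, True)]:
    ability = update_ability(ability, difficulty, correct)
    # Pick the next item so the student should succeed about 70% of the time.
    next_item = ability - math.log(0.7 / 0.3)
    print(f"ability={ability:+.2f}  next item difficulty={next_item:+.2f}")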

Enriching content

AI-enabled simulations, encompassing game-based learning, chatbots, virtual reality (VR) and augmented
reality (AR), can offer interactive and immersive experiences that enhance learning. The integration of
AI-enabled simulations, tailored to cultural specificities, has the potential to make curriculum content more
tangible and engaging. For instance, in medical sciences, a VR heart anatomy system enhanced students'
anatomy learning experience and understanding, compared to traditional physical models (Alfalah et al.,
2018[38]). Varjo can help medical students prepare for challenging real-life scenarios. Chatbots, such as
ChatGPT, have been used in interactive medical simulations, for example forming independent diagnostic and
therapeutic impressions over an entire patient encounter (Scherr et al., 2023[39]). In science and history
education, AI-enabled simulations can foster the exploration of scientific phenomena, historical events and
cultural practices that are difficult or impossible to replicate in a physical classroom (Holmes, 2023[3]). This
aspect is particularly valuable for overcoming budgetary, geographical and physical constraints limiting
educational experiences. Several private companies offer solutions. Google Virtual Field Trips, among
others, aims to enable students to experience various environments: history and natural history,
geography, arts, science and technology. Other reviews have shown that AI-enabled simulations can
enhance learning and memory, although more research is needed (Papanastasiou et al., 2018[40]; Pellas,
Dengel and Christopoulos, 2020[41]). In particular, studies need to be conducted using more robust designs
and with control groups placed in appropriate settings (e.g. comparing AI-enabled simulations with older
simulation tools such as 2D simulations).
Furthermore, AI-enabled simulations can provide a supportive environment for students to develop
essential skills such as problem-solving, social interaction and collaboration (Dai and Ke, 2022[42]; Wu
et al., 2019[43]). These environments can be conducive to students with particular special education needs.
For instance, Brain Power, an AR system empowering people with autism, aims to help teach these
individuals social and cognitive skills. AR solutions can also help students with disabilities to play and
exercise with their peers. iGYM, for instance, is an AR system designed for school and community-based sport or
recreation facilities seeking to provide novel and accessible ways for people with motor disabilities and
their non-disabled peers to play and exercise together (Graf et al., 2019[44]).
AI-enabled simulations can also play a role in enhancing cultural diversity and individualising learning
contexts. For instance, these technologies can promote the appreciation of Indigenous and minority
cultures (Reihana et al., 2023[45]). Culturally contextualised digital technologies can enable more
meaningful learning experiences for students from diverse backgrounds. Indeed, Google Arts & Culture
can provide educators and students with extensive cultural content, including collections on Black and
Indigenous history and culture in the United States. This platform utilises interactive camera features,
making learning about cultural artefacts engaging and dynamic.

Assisting learners with special education needs

AI-enabled tools designed to support learners with special education needs (SEN) are technologies that
assist in overcoming a range of visual, auditory, physical and cognitive impairments (Holmes, 2023[3]). A
growing body of literature emphasises the role of AI in facilitating special needs education (Gottschalk and
Weise, 2023[11]; OECD, 2021[46]; Vincent-Lancrin and van der Vlies, 2020[47]). These AI tools aim to adapt
to individual needs and abilities, offering learning experiences potentially tailored to each student’s unique
skills and requirements (Hopcan et al., 2022[48]). They can make learning experiences more accessible
and enhance the educational process for students with various disabilities, impairments and difficulties. By
employing AI-enabled tools, educators can significantly improve the accessibility of educational content
and experiences for these students (Holmes, 2023[3]).
One of the potential benefits of these AI tools is the facilitation of including students with SEN in standard
classroom settings. Integrating AI tools into the classroom can allow students with SEN to participate
alongside their peers to a greater extent, contributing to a more diverse and inclusive learning community.
These tools have the potential to assist the students in accessing the curriculum, and also enrich the
educational experience for all students by fostering an environment of diversity and mutual understanding.
For example, students with visual and auditory impairments can benefit from AI tools that provide
customised support. A notable advancement in this area is the development of AI assistive devices for
learners with hearing impairments. Microsoft Translator, for instance, has created a device equipped with
a headset that translates speech signals into written captions in real-time. This device employs deep
learning and AI technologies, including VR and AR, to deliver a customised hearing experience featuring
sound scene analysis, sound protection, real-time language translation, etc. (Roach, 2018[49]). Moreover,
the tool supports translation to over 60 languages, making it beneficial to many students, with or without
SEN, who do not speak the language of instruction (ibid.). Similarly, Deaf AI is developing digital sign language
interpreters for real-time interpreting of voice to sign languages.
Other tools utilise AI to foster social communication skills in children with autism spectrum disorders
(OECD, 2021[46]). ECHOES, for instance, is a technology-enhanced learning environment where young
learners can explore and practise skills needed for successful social interaction, such as sharing attention
with others, turn-taking, initiating and responding to bids for interaction (Bernardini, Porayska-Pomsta and
Smith, 2014[50]). By integrating playful activities within a virtual "magic garden" and interaction with a virtual
character named Andy, ECHOES operates on the SCERTS model principles of Social Communication,
Emotional Regulation, and Transactional Support (ibid.). This approach demonstrates the potential of AI
tools in enhancing educational experiences for students with SEN by providing environments that stimulate
their unique learning requirements (Porayska-Pomsta et al., 2018[51]). Evaluation of the ECHOES
environment highlighted a nuanced increase in social initiations from children, both towards human
partners and the AI agent, underscoring the effectiveness of AI in engaging students with some SEN in
meaningful educational interactions (ibid.).


Informing, advising and supporting students

AI-powered chatbots are tools designed to simulate interactive conversations with human users by
adapting to new information and user interactions (Holmes, 2023[3]). Chatbots in education can provide
quick and universal access to information (Okonkwo and Ade-Ibijola, 2021[52]). Students may sometimes
prefer to use chatbots for information retrieval over traditional counselling methods. For instance, many
students would prefer not to have sexual education content delivered by familiar teachers, as it could “blur
boundaries and introduce awkwardness into the teacher-pupil relationship” (Pound, 2017, p. 1[53]). To this
end, the Roo chatbot aims to provide users with answers related to sexual education. Such chatbots can
offer immediate access to information and can keep students engaged and motivated (ibid.). The
implications this could have for education are yet to be determined. On the one hand, chatbots could make
sexual education more comprehensive and less “awkward”. On the other hand, the potential lack of
alignment with official curricula could be viewed as problematic.
From an equity standpoint, chatbots can present a budget-friendly solution for equitable distribution of
information and assistance. They can give all students instant access to essential details like class timings,
venue information, submission deadlines and educational materials. EduBot by INNODATATICS, for
instance, can offer help-desk support in, e.g. courses and curriculum at a school or higher education
institution. In addition to providing real-time responses, it features speech recognition and emotion analysis
that reads the user's emotions and aims to respond appropriately. CareerChat, a chatbot powered by AI,
aims to provide career support services to students, and save time, energy and resources for career
development professionals, thus enabling them to help students more effectively (Hughes, 2023[54]).
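At its simplest, the help-desk pattern behind such tools can be sketched as follows (a toy keyword matcher with invented answers; products like those named above rely on natural language processing and intent classification rather than keyword lookup):

# Toy help-desk chatbot: a keyword matcher over made-up school facts.
# Real systems use NLP and intent classification, not keyword lookup.
FAQ = {
    "deadline": "Assignment 2 is due Friday at 17:00.",
    "room": "Maths 101 meets in room B204.",
    "timetable": "Classes run Monday to Friday, 08:30-15:30.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I don't know that yet - I'll forward your question to the staff."

print(answer("Which room is Maths 101 in?"))  # -> "Maths 101 meets in room B204."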
AI-enabled tools, including chatbots, are also being used to detect and support student health issues.
These tools can analyse various data sources, such as behaviour patterns, sleep quality, heart rate and
academic performance, to identify signs of mental health struggles or well-being issues (Holmes, 2023[3]).
Implementing such tools can be particularly beneficial in disadvantaged areas, where resources fostering
well-being might not be easily accessible. Indeed, chatbots can offer 24/7 non-judgmental listening,
providing information about available resources, coping strategies and guidance to appropriate
professional help where needed (ibid.). This round-the-clock availability ensures that students have
constant access to support, which is particularly important in times of crisis or when immediate help is
required. Confidentiality and anonymity are often cited as advantages of chatbots, particularly for those
seeking support and information without the fear of stigmatisation (Abd-alrazaq et al., 2019[55]). This aspect
is crucial in creating an inclusive and supportive educational environment where all students feel
comfortable seeking help. For instance, the ADMINS project by the Institute of Educational Technology is
creating a chatbot assistant that can enable more effective access to support by providing an alternative
to filling in forms. By supporting dialogue, the assistant aims to guide the student to provide information
that helps the educational institution understand their needs, allow them to ask questions and understand
more about the available support. While chatbots show potential in this area, more research is needed to
confirm their clinically significant effects and safety (ibid.).

Challenges of learner-centred AI tools for equity and inclusion

Integrating AI in education confronts significant challenges in ensuring equity and inclusion. Issues of
access and the digital divide spotlight the need to bridge technological gaps and address socio-technical
factors contributing to the AI divide. Concurrently, techno-ableism might necessitate the involvement of
disabled individuals in AI development to create inclusive and empathetic educational tools. Compounding
these challenges are inherent biases in AI, reflecting societal prejudices, and requiring a diverse and
critically aware approach to AI implementation. Equally critical are the socio-emotional implications of AI
in education, including the potential reduction in human interaction and its impact on social skills and
mental health. Finally, integrating AI in educational settings raises essential data privacy and security
concerns, emphasising the need for informed consent, transparent AI systems and robust privacy
protection regulations. This section examines these complex issues, underscoring the importance of
navigating these challenges to realise the potential of learner-centred AI tools in creating equitable and
inclusive educational environments.

Accessing AI tools

The increasing integration of AI-powered education technologies can present challenges for equity and
inclusion in education, mainly due to varying degrees of access to the technology. This so-called “digital
divide” can present challenges in terms of technical components, socio-technical and social factors (Carter,
Liu and Cantrell, 2020[56]). Technical factors, such as technology availability, broadband speed and
computational data, are crucial for the effective use of AI tools. These technologies often require internet
access for optimal functionality, as the internet enables AI systems to access databases, online resources
and real-time data essential for up-to-date information. For instance, cloud-based AI services and
interactive tools depend heavily on internet connectivity to offer adaptive, real-time and enriched
educational experiences (ibid.). However, schools with high shares of socio-economically disadvantaged
students already report greater shortages and poorer quality of digital resources (Figure 3.1).
On average across OECD countries, almost 30% of students in disadvantaged schools had principals who reported
a lack of digital resources, or digital resources of inadequate or poor quality, in 2022. In contrast, less than 20% of
students in advantaged schools had principals who reported similar concerns.


Figure 3.1. Quantity and quality of digital resources by socio-economic profile of schools (2022)
Percentage of students in schools whose principal reported a lack of digital resources (panel A), or inadequate or
poor-quality digital resources (panel B), to some extent or a lot
Panel A: Lack of digital resources (e.g. desktop or laptop computers, Internet access, learning-management systems
or school learning platforms). Panel B: Inadequate or poor-quality digital resources (same examples). Each panel
shows shares for advantaged and disadvantaged schools by country, on a scale of 0 to 100%.
[Country-by-country bar charts not reproduced here.]

Note: * Caution is required when interpreting estimates because one or more PISA sampling standards were not met (see Reader’s Guide,
Annexes A2 and A4 in OECD (2023[57])). The PISA Index of Economic, Social and Cultural Status (ESCS) measures the schools’ socio-economic
profile. A socio-economically disadvantaged (advantaged) school is in the bottom (top) quarter of the ESCS index in the country.
Sorted in descending order of the percentage of students in disadvantaged schools whose principal reported a lack of digital resources, or
inadequate or poor-quality digital resources.
Source: OECD (2023[58]), PISA 2022 Results (Volume II): Learning During – and From – Disruption, Tables II.B1.5.19-20,
https://doi.org/10.1787/a97db61c-en.

Social factors include demographic and socio-economic characteristics of users that can impact who
accesses AI tools (e.g. gender, age, income, family size, educational level) (Carter, Liu and Cantrell,
2020[56]). These also include other social factors such as culture, regulations and policies that determine
access to AI tools (ibid.). These social factors are highly relevant to inequities in education as they can
determine who has access to and can benefit from AI technologies in learning environments. For instance,
disadvantaged students may have limited access to AI-enhanced educational tools. Cultural factors and
regulations also play a role in how AI is integrated into education systems, potentially creating disparities
in the quality of education received. Such socio-economic and cultural divides can lead to a widening gap
in educational outcomes, reinforcing existing inequalities.


Finally, socio-technical factors include skills, digital literacy, beliefs, mistrust, risk perceptions and privacy
concerns (Carter, Liu and Cantrell, 2020[56]). For instance, an earlier cycle of PISA revealed that a higher
percentage of advantaged students reported using information and communication technology outside of
school for reading news (70%) or obtaining practical information (74%) in comparison to disadvantaged
students (55 and 56%, respectively) (OECD, 2016[59]). Similarly, results showed that 93% of advantaged
students thought the internet was a good resource for obtaining information compared to 84% of
disadvantaged students (OECD, 2017[60]). Disadvantaged socio-economic background is also associated
with underperformance in computer and information literacy and computational thinking (Fraillon et al.,
2020[61]). Socio-economically disadvantaged students are also less likely to use the internet to e.g. search
for information about careers or higher education programmes than their more advantaged peers (OECD,
2019[62]). Access disparities also extend to cultural differences in the adoption of AI technologies. Students
from less trusting cultures may not access AI tools that support, e.g. health and well-being. These skills
and attitudes can influence how users interact with and perceive AI innovations, thus shaping inequities in
access to AI technologies.
Some AI technologies can present financial barriers for many schools and families. Thus, integrating AI
tools, such as game-based learning, VR and AR, can widen the gap between resource-rich and
resource-poorer schools and families. The cost of VR/AR equipment and content development can be
prohibitively expensive, leading to a situation where institutions that can afford these technologies provide
enhanced educational experiences to students who already have more advantages, thus exacerbating
existing inequalities. The cost of such tools can significantly influence the adoption and use of these tools
(Alzahrani, 2020[63]). Furthermore, the costs involved in installing, maintaining and repairing AI tools can
be a barrier for these schools. The equity impacts can also be less direct. For instance, even if more
research is needed on the actual learning impacts of these new tools, some schools could use them to
attract socio-economically advantaged students, leading to greater segregation between schools and
eventually higher performance gaps.
While access to AI tools represents a significant challenge, it needs to be viewed in a broader context, and
one should be cautious when drawing parallels between a “digital divide” and an “AI divide”. Access to the
internet is considered a net good for students, reflected in the human rights mandate to freedom
of opinion and expression “through any media and regardless of frontiers” (UN, n.d.[64]). From an equity
perspective, it is therefore necessary to bridge this digital divide, and lack of access is viewed as
problematic in principle. However, the same cannot yet be said of AI tools.
working paper, there is insufficient research on AI tools’ implications for equity and inclusion, necessitating
further and interdisciplinary collaboration to develop practical applications impacting learning outcomes
(Zhang and Aslan, 2021[65]).
Moreover, in some instances, there appear to be differences in how AI tools are used by schools. For
instance, AI is sometimes present in cameras with facial recognition technology to check who should be
allowed to enter a school building or identify someone who should not be there, according to a survey of
teachers in secondary schools in the United States (Laird, Dwyer and Grant-Chapman, 2023[66]). At the
same time, parents, teachers and students with diverse backgrounds continue to worry about school data
and technology practices and the digital footprints that are created in this way (ibid.). Meanwhile, some
socio-economically advantaged institutions try to limit screen time so that students can benefit from, e.g. relationship-rich education (Bowles, 2018[67]; Bowles, 2019[68]; Felten, 2020[69]). All of this suggests that access to AI tools may not necessarily be viewed as a “net good”.
These changes are happening in a broader context where the rapid advancement of AI can reshape global
economic and social landscapes. The current trajectory of AI development, primarily steered by
high-income countries, presents significant challenges in terms of exacerbating inequalities between
economies (Dutta and Lanvin, n.d.[70]). For instance, AI innovations predominantly cater to capital-intensive
applications in richer nations, potentially undermining the labour-intensive economic structures of poorer


countries (ibid.). This issue is further exacerbated by the fact that relatively cheap labour in the Global
South (including individuals in refugee camps) was sometimes used to train AI systems through labour-intensive, time-consuming and repetitive tasks (e.g. data labelling), often in adverse working conditions (e.g. working with toxic content) (Gray and Suri, 2019[71]; Jones, 2021[72]; Perrigo, 2023[73]).
Discussions about inequities in access to AI tools, therefore, need to consider the global context of AI
development.

Confronting techno-ableism

Techno-ableism refers to a tendency to argue that technology is a “solution” for a disability and, as such,
that people with disabilities need to be “fixed” (Shew, 2020[74]). Often, when AI is developed with disability
in mind, it is done so to help disabled individuals to better assimilate into an able-bodied and neurotypical
world. This approach inherently frames disability as an individual problem that technology, specifically AI,
can help to solve or mitigate (ibid.). In educational contexts, where foundational views and values are
formed, techno-ableism presents a significant challenge for equity and inclusion. Education systems shape
the perspectives of future generations, and when these systems are imbued with techno-ableism, they
perpetuate a narrow understanding of disability (Dolmage, 2017[75]). This approach is at odds with, for
instance, the social model of disability, which originated in the United Kingdom in the 1970s. This model
posits that disability is not necessarily an impairment of the body or brain but rather a relationship between
individuals with impairments and a discriminatory society (Shakespeare, 2004[76]; Smith and Smith,
2020[77]).
Techno-ableism in educational AI tools can inadvertently reinforce the notion that the problem lies with the
individual rather than addressing the societal structures that create barriers for people with disabilities
(Selwyn, 2023[78]). By focusing on “fixing” the individual, these tools can fail to challenge or change the
underlying societal discrimination and exclusion that define disability. Moreover, techno-ableism in
education can lead to a lack of support tailored to the diverse needs of disabled students. When
educational digital tools (including AI tools) are not designed with inclusion in mind, they are less likely to
be effective or relevant for students with diverse needs (Gottschalk and Weise, 2023[11]). This oversight
can further disadvantage or exclude those who are most vulnerable (ibid.).

Addressing the continuous challenge of bias

Bias in AI, particularly in equitable and inclusive education, poses a complex challenge that encompasses
a spectrum of issues from algorithmic biases to cultural insensitivity and stereotyping. AI tools, which can
inherit biases from training data or encode the biases of their developers and society, have the potential
to perpetuate and reinforce existing inequalities and discrimination towards specific groups (Baker and
Hawn, 2021[79]). Bias in AI can take various forms (Box 3.1) and can lead to allocative harms, affecting
those who receive resources or opportunities, and representational harms, such as denigration and
stereotyping based on gender, ethnicity or other characteristics (ibid.).


Box 3.1. Algorithmic biases


Broadly, six categories of algorithmic bias can be distinguished:
• Historical bias involves modelling decisions that replicate real-world inequalities, such as using
student demographics to predict grades, leading to improved accuracy at the expense of
potentially perpetuating bias. This bias can persist even if demographic information is not
explicitly included in models, as unintended proxies may influence predictions.
• Representation bias occurs when underrepresented groups in training data lead to poorer
predictions. For instance, Anderson, Boodhwani and Baker (2019[80]) found a college graduation
prediction model to work inadequately for Indigenous learners due to their small representation
in the assessed data.
• Measurement bias stems from selecting inadequate variables that do not validly represent the
intended factors, causing unequal predictions among different groups. For example, a model
predicting school violence may lead to unfair outcomes if the labelling process is prejudiced,
e.g. the same violent behaviour is flagged for members of one ethnic group but not for members
of another one.
• Aggregation bias results from combining several distinct groups in a single model, rendering
the model ineffective for some or all groups. For instance, a prediction model of student
performance trained on a combination of urban and rural students can create generalised
recommendations that fail to effectively address the specific learning needs of either group,
resulting in suboptimal or ineffective predictions.
• Evaluation bias occurs when the dataset on which models are tested fails to represent the
eventual application population for which the models are intended. Models in educational data
mining can be developed on non-representative populations and provide no information on what
populations they were tested on.
• Deployment bias arises when a model designed for one purpose is eventually used for another,
such as utilising a student disengagement identification model for grading participation.
Source: Suresh and Guttag (2021[81]), A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle,
https://doi.org/10.1145/3465416.3483305.
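
To make categories such as representation and evaluation bias concrete, the sketch below trains a simple prediction model and compares its accuracy and positive-prediction rate across two demographic groups, one of them deliberately underrepresented. This is a minimal, illustrative sketch using synthetic data and scikit-learn; the features, group labels and model are assumptions for demonstration, not a description of any system discussed in this paper.

# Minimal sketch of a group-wise fairness audit (illustrative only).
# Synthetic data stands in for real records; in practice, features,
# labels and group attributes would come from a carefully governed dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                           # e.g. prior grades, attendance
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])  # group B underrepresented
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Report accuracy and positive-prediction rate per group: large gaps can
# signal representation bias (too few group-B examples) or evaluation bias
# (test data unrepresentative of the deployment population).
for g in ["A", "B"]:
    mask = g_te == g
    acc = (pred[mask] == y_te[mask]).mean()
    pos_rate = pred[mask].mean()
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}, positive rate={pos_rate:.2f}")

An audit like this does not remove bias by itself, but it makes group-level gaps visible so that data collection, modelling or deployment choices can be revisited.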

Other adverse consequences of bias in AI include insensitivity towards human qualities like empathy,
ethics, solidarity, and concern for others and the environment (Selwyn, 2023[78]). For instance, AI
technologies developed primarily by Western and Chinese organisations can reinforce existing power
dynamics and disregard local contexts (Holmes, 2023[3]; Munn, 2023[82]). Some Indigenous groups
expressed concerns about human dignity, collective interests, communal integrity and environmental
impact, contrasting sharply with the dominant Western framings of AI (Munn, 2023[82]). Some AI tools have
also been criticised for perpetuating gender stereotypes, e.g. many AI personal assistants have
female-sounding voices and names that can reinforce traditional gender roles and discriminatory visions
(UNESCO/EQUALS Skills Coalition, 2019[83]). Indeed, generative AI, by its nature, is designed to generate
text based on the content it has been trained on. This approach often echoes prevailing opinions, reflecting
dominant viewpoints regardless of the user's location or background (Holmes, 2023[3]). Such a mechanism
can inadvertently amplify the marginalisation of already marginalised voices, as these AI systems might
not adequately represent diverse perspectives and experiences (ibid.).
All these issues can then permeate AI tools in education. Some studies have shown that AI can provide
more accurate predictions for some groups of students than others based on demographic characteristics


(Anderson, Boodhwani and Baker, 2019[80]; Gardner, Brooks and Baker, 2019[84]; Khosravi et al., 2022[26]).
Language bias was also observed, with AI models unfairly categorising students' posts in discussion
forums based on whether English was their first language (Sha et al., 2021[85]). Ethnically related misuses
and systematic discrimination through AI technologies are also a concern, particularly when AI is trained
on datasets reflecting historical biases and deployed in structurally racist settings (Benjamin, 2019[86]). This
can lead to AI-powered educational software, such as grading systems, favouring certain writing styles,
languages or cultural references, thereby penalising students from different ethnic and Indigenous
backgrounds (Anderson, Boodhwani and Baker, 2019[80]; Gardner, Brooks and Baker, 2019[84]).
Moreover, groups such as disadvantaged learners and students with SEN are often underrepresented in
the research on algorithmic bias (Baker and Hawn, 2021[79]). Research on algorithmic bias in regard to
students with intersecting identities is also insufficient (Baker and Hawn, 2021[79]; Cabrera et al., 2019[87]).
Furthermore, the effectiveness of ITS for diverse learners is largely unknown due to the lack of data on
their efficacy by gender or race (Martin et al., 2022[88]). VR/AR educational content and gamified
environments might reflect biases or stereotypes, potentially alienating or misrepresenting certain groups
of students (Holmes, 2023[3]). Chatbots can also suffer from underrepresentation of cultural diversity (ibid.).
Navigating the challenges related to bias will likely occupy the research and educational sector in the
upcoming years. There are two pertinent questions in this context: a) Is AI more or less biased than teachers?4 b) Are we more likely to successfully address AI’s biases or teachers’ biases? Both questions require increased attention from research and academia, as well as a nuanced understanding of the educational contexts in which teachers and AI are more or less biased. In regard to question a), while
research is only emerging, results suggest that AI, when applied as a trainer of a strategic board game, for
instance, can lower pre-existing gender gaps in tournament data relative to human trainers (Bao and
Huang, 2022[89]). The researchers argue that AI trainers' non-discriminatory emotional status can explain
the improvement in gender equality (ibid.). Moreover, a study from the People’s Republic of China exploring
fairness perceptions among higher education students suggests that AI algorithms are perceived as fairer
evaluators than teachers, particularly in formative evaluations (Chai et al., 2024[90]). This perception is
attributed to AI algorithms' higher perceived information transparency (ibid.). However, when explanations
for the evaluation process are provided, the gap in perceived fairness between AI and teachers diminishes,
indicating that transparency plays a significant role in fairness perception (ibid.). Some researchers have
also suggested that AI tools can help address teacher biases in assessment, although this has not been corroborated by robust evaluations (Gauthier et al., 2022[91]). Nevertheless, the nuanced nature of this field underscores
that some AI tools could serve as complementary to human judgment, potentially offering a pathway to
mitigate biases that teachers might hold.
In regard to question b), past efforts have succeeded in reducing both AI and teacher bias.
As mentioned before, biases can seep into AI tools through various channels, such as biased training data,
flawed algorithm design and lack of diverse representation in development teams. Addressing AI bias thus
involves diverse and representative data collection, thorough evaluation of training data, transparency in
AI development processes and continuous monitoring for bias throughout the lifecycle of AI systems
(Ferrara, 2023[92]; Nazer et al., 2023[93]). Additionally, fostering diversity and inclusivity within AI
development teams can help mitigate biases (ibid.). At the same time, it may not be possible or realistic to
reduce biases in datasets in some instances (European Union Agency for Fundamental Rights, 2022[94]).
For example, if data are heavily biased against certain groups, it may be difficult to “unbias” them or their
predictions (ibid.). To address teacher bias, successful approaches revealed that evaluators need to be
trained to recognise and address conscious and unconscious bias in the classroom (Alesina et al., 2024[95]; OECD, 2023[8]). The evaluation process needs to be adjusted to ensure that it is fair and equitable for all teachers, regardless of their background and the students they teach (OECD, 2023[8]).

4. Rater or evaluator bias (i.e. the tendency of raters to be influenced by non-performance factors when rating) has long been recognised as an issue in teachers’ processes to assess students (OECD, 2023[8]). Evaluator bias can be influenced by stereotypes, preconceptions and socio-economic factors that impact students’ academic performance (Milanowski, 2017[143]).

Assessing the impact on socio-emotional learning

Integrating AI in educational settings, while offering advancements in adaptive learning, also poses
challenges to students' socio-emotional learning, which is crucial for holistic development (Holmes, 2023[3];
UNESCO, 2023[17]). Indeed, the goal of inclusive education is to respond to all students’ needs, going
beyond school attendance and achievement, while improving all students’ well-being, including in the
domain of socio-emotional learning (OECD, 2023[8]). Some AI-driven tools might impede students’
sociability, sense of trust and empathy for others by inadvertently reducing the need for human interaction,
leading students to rely more on digital interfaces (Pons, 2023[24]; UNESCO, 2023[17]). This is all happening
in a context where some researchers have raised concerns in regard to the ability of AI tools to read
cognitive or emotional states, with disproportionate inaccuracies affecting people with disabilities or
different cultural backgrounds (Holmes and Porayska-Pomsta, 2022[96]). The socio-emotional gap caused
by AI technologies can thus significantly impact the inclusive aspects of education, which are fundamental
for fostering a sense of belonging and community within educational settings.
Furthermore, students excluded from social participation in school settings could turn to their “digital
friends” without tackling the roots of the problem. This raises concerns about the worsening of loneliness
and isolation, which are already significant, particularly among vulnerable student cohorts, and thus
hindering the inclusive processes in schools. AI tools, while beneficial in many ways, cannot replace the
nuanced understanding and empathy educators and support staff provide (Holmes, Bialik and Fadel,
2019[97]). Furthermore, this may erode crucial social interactions for building inclusion and engagement
within and beyond educational settings (Holmes, 2023[3]).

Balancing AI integration with privacy and accountability

Integrating AI in educational settings has raised concerns about data privacy and security, posing
significant challenges for equity and inclusion in education (Holmes et al., 2021[98]). Many AI technologies
gather and store vast amounts of student data. While beneficial for adaptive learning, such as ITS, this
information risks misuse and commercialisation, raising ethical and privacy concerns (Holmes, 2023[3];
Holmes et al., 2021[98]; Huang, 2023[99]). One key issue is the potential for information monopolies by the
designers of AI tools (Huang, 2023[99]). These platforms process and analyse personal data extensively,
which can inadvertently expose sensitive information, such as about a student's minority status (OECD,
2023[37]). This situation becomes more complex with younger learners, where obtaining informed consent
is challenging due to their limited capacity to understand the implications of data sharing (UNESCO,
2023[17]). Indeed, the risks to student privacy were starkly illustrated during the COVID-19 pandemic.
According to Human Rights Watch (2022[100]), many education technology products used data practices
that compromised children's rights. These products collected detailed personal information, including
location, activities, family information and socio-economic status. Children, parents and teachers were
often unaware of these practices (ibid.).
Accountability in AI technology usage in education is critical yet challenging to ensure. Porayska-Pomsta
and Rajendran (2019[101]) emphasise the importance of accountability for inclusion, diversity and fairness
in educational interventions and AI interactions. However, it remains unclear who bears responsibility when
AI technologies lead to discriminatory outcomes or incorrect guidance (Pedró et al., 2019[102]). These
concerns are particularly pertinent in educational settings, where inaccurate or biased AI-generated
responses can significantly affect students' learning and development. This uncertainty extends to various
educational AI applications, from ITS to chatbots (Holmes, 2023[3]). When AI provides incorrect or biased
information or advice, determining who is responsible for rectifying these errors and their consequences is
complex.


4 Teacher-led tools to support equity and inclusion

Teacher-led AI tools aim to enhance teaching effectiveness and efficiency across various functions. These include assisting in the classroom, curating learning materials, supporting student assessment and classroom management, identifying some special education needs, and opening new continuing professional learning opportunities. While promising, these tools also face challenges. This section focuses on costs, the commercialisation of education, and issues surrounding teacher training in regard to the use of AI.

Opportunities of teacher-led tools for equity and inclusion

Teacher-led AI tools have the potential to influence educational practices significantly, mirroring the impact
of learner-centred AI (see section 3). These tools encompass various functionalities to enhance teaching
efficiency and effectiveness. AI-powered robots can assist with classroom management and support
students with special education needs. At the same time, AI-driven curation of learning materials can adapt
and diversify educational content, overcoming language and cultural barriers. In assessment and
classroom management, AI tools have the potential to foster greater fairness and inclusivity by assisting
in evaluation methods. AI tools can also aid school staff in identifying some special education needs. The
success of these technologies depends on teachers' continuing professional learning. Indeed, deploying
AI-supported virtual facilitators acting as human instructors is a notable innovation in this field, offering
opportunities for training in the domain of AI and beyond.

Supporting teaching with AI-powered robots

AI-powered robots can offer a range of benefits, from teaching and promoting soft and social skills to
providing individualised learning support (OECD, 2021[46]). Under teachers’ or other school staff’s
guidance, they can advance diverse learning needs in educational settings, particularly for students
needing psychological and behavioural support (ibid.). Indeed, robots have been used to assist students with emotional, behavioural and neurodevelopmental disorders (Bertacchini et al., 2023[103]). Students can
interact with the robots without the fear of judgment or embarrassment, an aspect crucial for encouraging
repeated practice and enhancing learning outcomes (OECD, 2021[46]). This factor is significant for students
who might feel self-conscious or anxious in traditional learning environments (ibid.).
Teaching and enhancing soft skills and social skills can also be beneficial for children with autism spectrum
disorders. For instance, robots like NAO have been employed to help students practise social skills using therapeutic approaches such as Applied Behaviour Analysis. This method can help internalise social skills through
repeated practice in a non-threatening environment (Panke, 2023[104]; Woo et al., 2021[105]). NAO and
similar robots can personalise interactions by incorporating the learner's hobbies, interests, and names of
friends and family, which can positively impact student motivation and learning outcomes (OECD, 2021[46]).
Other robots are designed with the aim of amplifying learning for children with autism spectrum disorder,
positively impacting their well-being (Lemaignan et al., 2022[106]).


AI-powered robots are not intended to replace teachers or therapists (OECD, 2021[46]). Instead, they aim
to function as tools or extensions of human instruction, enhancing the educational experience rather than
substituting for human interaction and decision-making (ibid.). They can support and augment traditional
teaching methods, providing additional resources for students requiring more specialised attention (ibid.).
Indeed, the potential for AI to replace teachers is a topic of debate, with some arguing that AI can enhance
teaching and learning without replacing human educators (Chan and Tsi, 2023[107]). The limitations of
current AI applications suggest that AI cannot replace experienced teachers (Chan and Tsi, 2023[107];
Kolchenko, 2018[108]). The unique qualities of human teachers, such as critical thinking and creativity, make
them irreplaceable (ibid.). Moreover, it has been argued that AI cannot replace human expertise in teaching
existential reflection, norms and values, and a sense of self, history and society (Felix, 2020[109]).
Nevertheless, research also indicates that a substantial proportion of primary and lower secondary
teachers’ time could be automated and directed towards other tasks (Box 4.1).


Box 4.1. Aspects of teaching that AI could support


Based on the McKinsey Global Teacher and Student Survey, an analysis of AI tools’ impact on teacher tasks in primary and lower secondary education shows significant potential for automating certain aspects of
teaching. The survey included over 2 000 teachers from four countries with high adoption rates of
education technology: Canada, Singapore, the United Kingdom and the United States. The survey
results suggest that 20% to 40% of teacher hours are spent on activities that could be automated using
existing technology. This could translate to approximately 13 hours per week that teachers could
redirect toward activities that lead to higher student outcomes and teacher satisfaction (Table 4.1). The
area with the most considerable automation potential is one that teachers deal with before they even
get to the classroom: preparation. AI could also assist teachers with evaluation and feedback,
administration, student instruction and engagement, and continuing professional learning.
With the time saved through automation, teachers could use this opportunity to engage in more
personalised learning, direct coaching, mentoring and fostering one-on-one relationships with students.
This additional time can also support the development of 21st-century skills necessary for students to
thrive in an increasingly automated workplace. Furthermore, teachers could dedicate more time to
continuing professional learning and collaborative planning, enhancing their teaching methods and
student learning outcomes.

Table 4.1. Teacher activities and AI


Teacher activity | Hours per week in 2020 in the sample | Hours per week potentially with AI | Use of reallocated time for teachers
Preparation | 10.5 | 5.0 | Enhanced planning for personalised learning
Evaluation and feedback | 6.5 | 3.0 | More time for individualised student feedback
Administration | 5.0 | 2.5 | Engaging in professional development and collaboration
Student instruction and engagement | 14.5 | 2.0 | Fostering one-on-one relationships with students
Professional development | 3.0 | 0.5 | Greater use of direct coaching and mentoring

Note: Research was conducted before the widespread use of large language model chatbots, such as ChatGPT.
Source: Bryant et al. (2020[110]), How artificial intelligence will impact K-12 teachers, https://www.mckinsey.com/industries/education/our-
insights/how-artificial-intelligence-will-impact-k-12-teachers (accessed on 11 January 2024).

Curating learning materials

The curation of learning materials through AI tools represents a significant shift in educational practices,
offering the potential for supporting teaching and enhancing equity and inclusion in education. AI tools'
ability to identify, curate, adapt and translate learning content such as books, videos and websites can
reshape how educational resources are accessed and utilised in classrooms. In England
(United Kingdom), for instance, some educators use AI tools, such as ChatGPT and Copilot with Bing
Chat, to create learning resources, e.g. materials for lessons, and support students with learning outside
of the classroom (The Open Innovation Team and Department for Education, 2024[111]). Khanmigo, an


AI-powered teaching assistant, is designed to help educators create lesson plans, learning materials and
assessments. Another example of this shift is Korea’s initiative to implement AI-powered learning systems
and digital devices, replacing traditional paper-based textbooks in all public schools (Joo-heon, 2023[112]).
This “paradigm shift” is designed to transform the nation’s education culture by offering learning solutions
tailored to all children, including those with multicultural backgrounds and language difficulties (ibid.). The
digital textbooks, set to be distributed in 2025, will have smart tutoring systems, metaverse capabilities and
conversational AI (You-jin, 2023[113]). One of the key benefits of this approach is the support it offers to
students whose first language is not Korean. Language translation and interpretation tools integrated into
digital textbooks can help overcome language barriers, facilitating these students' integration into the
classroom (Joo-heon, 2023[112]). This technology allows students who are not proficient in the instruction
language to participate in learning processes. Such an approach can significantly enhance the educational
experience for these students, promoting better understanding and integration. Moreover, the Ministry’s
plan to train teachers for this digital shift, with a pilot programme for 400 teachers expanding to 1 500 by
2025, highlights the importance of preparing educators for this new educational landscape (You-jin,
2023[113]). Teacher training is crucial for effectively integrating AI tools in teaching, ensuring educators can
utilise these technologies to their full potential (see section Offering continuing professional learning
opportunities).

Assisting with assessment and classroom management

The ability of AI to create assessments, evaluate student responses and assign grades can impact
teaching efficiency, streamline the assessment process and enhance learning outcomes. By facilitating
assessments through various means, such as simulations, vocational hands-on assessments or essay
scoring, AI showcases its capability to refine traditional assessment methods (OECD, 2021[46]). These AI
tools can enable the use of real-time data and feedback, allowing for different learning and assessment
pathways (including the development of testing questions/items), and tailoring them to the needs of
students (ibid.). Moreover, some AI tools’ precision in assessment can lead to more consistent grading
(e.g. among teachers), thus promoting fairness in evaluation processes (Holmes, 2023[3]). The systematic
review by Salas-Pilco, Xiao and Oshima (2022[114]) highlights some AI tools' capacity to improve
performance and self-efficacy among socio-cultural minorities through personalised feedback, which is
crucial for fostering an inclusive educational environment where every student feels valued and
understood. Other researchers showed that human raters were generally better at giving high-quality
feedback to students compared to ChatGPT 3.5 (Steiss et al., 2024[115]). Nevertheless, they also argued
that ChatGPT shows promise at giving feedback, particularly when considering the trade-off between
quality and time (ibid.). More broadly, AI tools can help speed up the evaluation of large-scale assessments. In Czechia, for instance, an AI tool assists in evaluating some items in the standardised unified entrance examination to upper secondary schools (Deník N, 2024[116]).
Furthermore, integrating classroom analytics into AI tools’ capabilities offers a nuanced approach to
understanding and enhancing classroom dynamics. Classroom analytics can enhance classroom
management by, e.g. providing teachers with a dashboard that indicates struggling learners or suggests
the optimal timing for transitioning to new activities. This functionality not only aids classroom management
but can also lead to improved learning outcomes (Holstein, McLaren and Aleven, 2018[117]). Classroom
analytics extend to monitoring a variety of learning activities, thereby supporting the rich pedagogical
concept of "classroom orchestration" (OECD, 2021[46]). This approach does not replace the teacher's
decision making but empowers them with contextual information to make more informed choices,
acknowledging that factors such as illness, peer support or technical issues might influence a student's
performance (ibid.). ExamSoft, for example, is designed to provide robust reporting and analytics tools that
enable educators to align student performance with remediation efforts and measure course objectives
against accreditation standards. Zelexio is a cloud-based platform that aims to help teachers monitor
learners' progress and create evaluation grids for all assessments. AI can also assist teachers in identifying


instances of academic misconduct, including AI-generated outputs. Copyleaks, for instance, was
developed to detect potential plagiarism using a text-matching algorithm.

Aiding in the identification of some special education needs

Navigating the complexities of identifying special education needs, such as dysgraphia, requires a nuanced
approach that balances specialist assessments with the subtleties of each condition. Diagnosing
dysgraphia presents numerous challenges, stemming from the requirement of specialist assessment and
the variability of symptoms such as distorted handwriting and difficulties forming letters (OECD, 2021[46]).
The process is often lengthy, subjective and stressful for families, requiring standardised tests focusing
more on the output than the writing process. However, some AI tools offer promising opportunities for early
detection and intervention. Tools that enable teachers to recognise developmental differences provide the
potential for faster initiation of the diagnostic process. Such technologies can also provide specialists with
detailed information, facilitating more timely and accurate diagnoses. This is crucial as early intervention
tailored to an individual’s needs can significantly impact a learner’s educational journey and future (ibid.).
Several research teams have developed AI tools to ease the diagnostic assessment of dysgraphia. Some
systems can accurately detect dysgraphia in children using a standard tablet, overlaying a sheet of paper
to mimic traditional writing practices (Asselborn et al., 2018[118]). By analysing handwriting data from nearly
300 children, the tool achieved approximately 96% accuracy in dysgraphia detection. It identifies specific
handwriting features, such as pen tilt, pressure and speed variations, enabling a detailed analysis of the
writing process. This approach not only distinguishes dysgraphic handwriting from that of typically
developing children, but also facilitates the provision of targeted support (ibid.). In another example,
Dystech has introduced an AI tool for dyslexia detection based on audio records. The approach leverages
machine learning algorithms to analyse audio recordings of children reading aloud (Radford et al.,
2021[119]). This method focuses on extracting features from these recordings, such as variations in reading
speed, pronunciation accuracy and other audio cues that may indicate dyslexia. The AI tool has
demonstrated a high degree of accuracy, offering a non-invasive, fast and cost-effective means for early
dyslexia screening (ibid.). These examples do not aim to replace the role of teachers and other specialists
in diagnosing special education needs, but they can complement existing methods by providing an
additional layer of analysis that can be particularly useful in settings where access to specialised
assessment services is limited.
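
As a rough illustration of how such screening tools are built, the sketch below trains a classifier on handwriting-process features and reports cross-validated accuracy. It is a minimal sketch with synthetic data: the features echo those described above (pen tilt, pressure, speed variation), but the model, sample and labels are assumptions, not the actual method of Asselborn et al. (2018[118]) or Dystech.

# Minimal sketch of feature-based screening for handwriting difficulties.
# Synthetic data stands in for tablet recordings; real screening would use
# validated features and clinically labelled samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300  # roughly the sample size reported for the study described above

# Hypothetical per-child features: mean pen tilt, mean pressure,
# speed variability and pause frequency while writing.
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)  # 1 = flagged for follow-up (synthetic label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")

# In deployment, a flag would only trigger referral to a specialist,
# not a diagnosis: the tool complements, not replaces, expert assessment.

The point of the sketch is the workflow (recording process-level features, cross-validating a classifier and routing flags to specialists) rather than the numbers it produces.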

Offering continuing professional learning opportunities

AI-based teacher training and development present opportunities for enhancing teaching practices and
addressing educational disparities. Continuing professional learning programmes can yield positive
changes in teacher instruction and help reduce the achievement gap in student academic performance,
particularly when tailored to a specific subject and focusing on both content knowledge and pedagogical
content knowledge (Copur-Gencturk et al., 2024[120]; Scher and O’Reilly, 2009[121]). Furthermore,
programmes involving active, collaborative learning among educators have been shown to be most
effective (Darling-Hammond, Hyler and Gardner, 2017[122]). To this end, deploying AI-supported virtual
facilitators acting as human instructors is a notable innovation in this field, with a potential to fill some of
the continuing professional learning gaps related to diversity, equity and inclusion elaborated on in the
Equipping educators with knowledge and skills section. These facilitators present teaching-related
problems to educators and provide feedback on their responses, simulating a dynamic learning
environment for continuing professional learning (Copur-Gencturk et al., 2024[120]). Edthena, for instance,
employs an "AI Coach" to guide teachers through self-observation and action planning, potentially creating
a dynamic and interactive learning environment. Through tools like video coaching and video learning, the
platform aims to amplify coaching capacity and deliver feedback, thereby making data-driven decisions to
improve teaching practices (ibid.). Similarly, Copur-Gencturk et al. (2024[120]) developed an online


continuing professional learning programme with natural language processing integrated into the system.
Based on a randomised controlled trial, the authors showed that teachers who obtained personalised, real-time feedback from a virtual facilitator improved their students’ mathematics performance (ibid.).

Challenges of teacher-led tools for equity and inclusion

Challenges related to teacher-led AI tools encompass several areas significantly impacting equity and
inclusion in education. Firstly, the costs associated with AI tools, which are particularly burdensome for under-resourced schools, can lead to disparities in teachers’ access to such tools.
Secondly, the increasing commercialisation of education through AI tools raises concerns about prioritising
financial gains over educational outcomes, and additional worries regarding privacy, data security and
algorithmic bias. Lastly, the need to equip educators with the necessary knowledge and skills to integrate
AI effectively into their teaching practices is highlighted, acknowledging the disparities in training
opportunities. Additionally, these AI tools face similar challenges described in the section Challenges of
learner-centred AI tools for equity and inclusion.

Overcoming the costs of AI tools

Similar to learner-centred AI tools, one of the primary concerns with teacher-led tools is the inequity
stemming from the costs involved in installing, maintaining and repairing AI tools. These costs can be
prohibitive, especially for under-resourced schools, leading to disparities in the quality of education offered.
Schools in affluent areas or with better funding can afford (more) advanced digital tools, including AI tools,
and the associated costs, thereby enhancing their educational offerings and classroom experiences
(Gottschalk and Weise, 2023[11]). In contrast, schools with limited resources struggle to access these
technologies. For instance, the cost of advanced robotic systems can present a significant challenge. The
price of a robot like NAO (see section Supporting teaching with AI-powered robots) can reach EUR 7 200, and this does not include the cost of specific applications, which can vary significantly (United
Robotics Group, 2023[123]). This cost can be prohibitive for many schools, potentially exacerbating existing
inequalities in access to advanced educational technologies (OECD, 2021[46]). While the cost aspect is
apparent for physical AI applications such as robots, it may seem less intuitive for software-based solutions
like chatbots, e.g. ChatGPT. Yet, even here, costs play a role, albeit in a different form. To illustrate with ChatGPT: while the basic 3.5 version is free and the more advanced 4.0 version is free within limits, full access to the most advanced 4o version and its features requires additional investment (OpenAI, n.d.[124]). Thus, despite their generalist nature and broad utility, some AI tools still present cost-related barriers.
The disparity is not limited to within countries but extends across different education systems globally.
There is a notable inequity across countries in adopting and integrating AI in education. Some education systems plan to systematically offer AI-based classroom solutions (e.g. the Korean example in the section Curating learning materials). Other systems are yet to explore AI options, but barriers exist. For instance, at the basic level,
few countries even monitor and evaluate investments in digital education tools and resources (OECD,
2023[37]). This divide raises concerns about global educational equity and inclusion. Without concerted
efforts to address these disparities, AI in education risks exacerbating existing inequalities rather than
narrowing them. This is further complicated in a context where much of the research on AI tools in
education is concentrated in the United States (OECD, 2023[37]; Zhang and Aslan, 2021[65]). Finally, the
discussion on costs and access to AI tools needs to be balanced, viewed in a broader context when
drawing parallels between a “digital divide” and an “AI divide”, and caution needs to be exercised when
viewing AI tools as a “net good” given the often limited research (see section Accessing AI tools).


Striking a balance with the commercialisation of education

The increasing presence of AI tools in education has raised concerns about the commercialisation of this
sector (Holmes, 2023[3]). The involvement of corporate entities indicates a growing trend where commercial
interests could potentially overshadow educational and equity objectives (Holmes et al., 2021[98]). This
commercialisation raises critical questions about the primary focus of educational tools and resources.
While some AI tools have the potential to enhance educational quality and accessibility, there is a risk that
the profit motives of commercial entities could lead to a prioritisation of financial gains over educational
and equitable outcomes (Holmes, 2023[3]). This tension might necessitate careful management to ensure
that the development and deployment of AI tools in education primarily serve students' needs rather than
companies' commercial interests. The commercialisation of education through AI tools also brings
additional concerns regarding privacy, data security and the risk of algorithmic bias, elaborated on in the
section Challenges of learner-centred AI tools for equity and inclusion. As commercial entities gain access
to vast amounts of student data, privacy issues and the potential misuse of this data emerge (Holmes,
2023[3]). These concerns are particularly pertinent given the sensitive nature of educational data, which
can include personal and demographic information about students.
Moreover, as delineated in the OECD Digital Education Outlook, countries' flexible and varied procurement
strategies highlight a complex ecosystem where the commercialisation of education through AI tools could
further complicate the landscape. While some governments strive for economies of scale, security and
compliance with data protection regulations, the predominant trend is towards decentralised procurement
decisions, often leaving the selection of digital tools to local governments or schools (OECD, 2023[37]). This
decentralisation could exacerbate the risk of commercial interests overshadowing educational and equity
goals. The emphasis on procedural rather than substantive regulation in procurement and the challenges
of setting rigid standards that accommodate future innovations could inadvertently foster a market
environment where a few large providers dominate, potentially stifling innovation and leading to vendor
lock-in effects (ibid.).
The nuanced understanding of procurement strategies across different governance models suggests that
while there is potential for aligning digital tool selection with educational goals and equity, the reality is
fraught with challenges. Governments' efforts to foster interoperability, equity and effectiveness in digital
tools are commendable but remain limited in scope (OECD, 2023[37]). This scenario underscores the
importance of establishing good public-private partnerships and spaces for collaboration between schools
and the education technology sector to navigate commercialisation risks effectively. Aligning procurement
strategies with governance models and policy objectives is crucial to mitigate the impact of commercial
interests and ensure that AI in education serves its primary purpose of enhancing educational quality and
accessibility for all students.

Equipping educators with knowledge and skills

While AI tools offer opportunities for enhancing teaching practices and addressing disparities in education
(section Offering continuing professional learning opportunities), effective integration of AI-powered tools
also hinges on teachers' ability to use these technologies within their teaching strategies (Pons, 2023[24]).
This requirement demands significant investment in time and resources for teacher training, which can be
a substantial barrier for many educational institutions. The challenge lies not just in the need for training
but in the depth and quality of the training provided, particularly in reaping AI tools' benefits for equity and
inclusion. Some initiatives are addressing these issues. For instance, “AI4T – Artificial Intelligence for and by teachers” is a three-year experimental project aimed at improving the use of AI tools in education
(France Éducation international, n.d.[125]). The project involves a professional training pathway for teachers
in five European countries (France, Italy, Ireland, Luxembourg and Slovenia) focusing on innovative
continuing professional learning methods for teachers (ibid.). Still, within initial teacher education,
according to preliminary data from the OECD AI Policy Observatory, the number of AI courses among Education and Training higher education study programmes (in the English language) remains negligible (OECD, 2024[126]).
The complexity of equipping educators with knowledge and skills is heightened by the reality that there is
already a notable disparity in training opportunities for teachers in different educational settings, both within
some countries and globally. Gottschalk and Weise (2023[11]) have identified widespread challenges in
training teachers and staff for digital use and literacy. While not specific to AI, in 2018 an average of 17.7% of lower secondary teachers across OECD countries identified a high level of need for professional development in information and communication technology (ICT) skills for teaching (OECD, 2019[127]). On average across OECD countries, the situation did not differ by the concentration of students
with special education needs or socio-economically disadvantaged students (Figure 4.1). However, in
some education systems, such as in the United States, teachers reported a much higher need for
professional development in schools with relatively high shares of students with special education needs
and disadvantaged students. Furthermore, Marino et al. (2023[128]) emphasise the difficulties in equipping
educators with AI knowledge and skills in special needs education. Disparities in training opportunities can
contribute to widening the gap in the effective use of AI technologies in education, potentially exacerbating
inequalities. Students in disadvantaged schools are at risk of being left behind in the rapidly advancing
digital landscape due to their teachers' unmet needs in regard to high-quality training.


Figure 4.1. Continuing professional learning needs by school characteristics (2018)


Percentage of lower secondary teachers reporting a high level of need for professional development in ICT skills for
teaching
Panel A: By concentration of students with special needs (schools with less than or equal to 10% vs. more than 10% of such students). Panel B: By concentration of students from socio-economically disadvantaged homes (schools with less than or equal to 30% vs. more than 30% of such students).
[Figure: two horizontal bar charts comparing education systems, from Japan at the top to England (United Kingdom) at the bottom, including the OECD average; horizontal axis shows percentages from 0 to 50.]

Note: Students with special needs are those for whom a special learning need has been formally identified because they are mentally, physically
or emotionally disadvantaged. Socio-economically disadvantaged homes refer to homes lacking the basic necessities or advantages of life, such
as adequate housing, nutrition or medical care.
Sorted in descending order of the percentage of lower secondary teachers reporting a high level of need for professional development in schools
with less than or equal to 10% of students with special needs (panel A) or less than or equal to 30% of students from socio-economically
disadvantaged homes (panel B).
Source: OECD (2018[129]), TALIS 2018 Database, https://www.oecd.org/education/talis/talis-2018-data.htm (accessed on 25 January 2024).


5 Other institutional tools that can foster equity and inclusion

In addition to learner-centred and teacher-led AI tools, education systems and schools can utilise AI for
various other functions. This section focuses on three: a) increasing efficiency of higher education
admissions processes, b) better identifying students at risk of early leaving from education and training,
and c) assisting with data-based decision making. While not as visible as the other AI tools, these could potentially become the most influential in the future (Holmes, 2023[3]). They nonetheless face challenges, including inherent AI biases and privacy and accountability concerns. Given that these challenges have been elaborated on before, they are only mentioned briefly here.

Opportunities of institutional tools for equity and inclusion

AI-powered institutional tools present promising opportunities for enhancing equity and inclusion in
education systems. The application of AI in higher education admissions, for example, uses algorithms
and data analytics to streamline the admissions process, potentially reducing biases and improving
fairness in candidate selection. Similarly, AI tools designed to identify students at risk of leaving education
can provide crucial insights, allowing for timely school interventions. Additionally, AI can facilitate
data-based decision making, enabling more efficient and targeted distribution of resources to areas of
greater need.

Increasing efficiency of admissions in higher education

The application of AI tools in education admissions is a growing trend, particularly in higher education
(Holmes, 2023[3]). These AI tools, employing algorithms, machine learning and data analytics, analyse vast
amounts of data about applicants to create detailed candidate profiles and facilitate comparisons with
peers (ibid.). Applying complex pattern recognition, adaptability and learning from diverse data sources, AI
has the potential to, for instance, assess the qualities of applicants (Lira et al., 2023[130]). This approach
can improve prediction accuracy, speed up the admissions process, and, theoretically, reduce subjectivity
and bias inherent in human decision making, thereby enhancing the fairness and equity of the selection
process (ibid.). Indeed, implicit biases, e.g. racial bias, in some higher education institutions can impact
the decision to admit or reject an applicant (Capers et al., 2017[131]). For instance, the University of Texas
at Austin (United States) launched an AI system to recommend whether applicants should be accepted
based on test scores, academic background and textual input (Waters and Miikkulainen, 2014[132]). In
theory, this could reduce the subjectivity and bias of human decision makers and increase fairness. In
practice, however, this system had to be dropped precisely due to its various biases (Holmes et al.,
2022[133]). In another example, iSchool360 uses AI to enhance recruitment processes in higher education
institutions by automating filtering, identifying candidates and nudging them through the process. The
technology can potentially limit biases, save admissions teams’ time and make informed enrolment
decisions.


Identifying students at risk of early leaving from education and training

AI tools for identifying students at risk of dropping out analyse data on student attendance, grades, test
scores, behaviour and demographic information to identify patterns that may indicate a student is struggling
(Holmes, 2023[3]). As teachers are often burdened with heavy workloads, they may find it challenging to
identify and support at-risk students effectively. AI drop-out systems can alleviate some of this pressure by gathering and analysing data to better understand the factors leading to student disengagement and drop-out, and they offer a more efficient system for identifying at-risk students (Goel and Goyal, 2020[134]; Lee and Chung, 2019[135]).
under significant pressure, such as in schools with high shares of disadvantaged students. For instance,
Azure Machine Learning, a cloud based predictive analytics service, processes complex data, including
student performance and characteristics (such as gender, academic performance and socio-economic
background), school infrastructure and teacher capabilities (OECD, 2021[46]). This tool aims to uncover
drop-out patterns and identify high-risk students, highlighting over 60 reasons for early leaving. For example, outdated study materials, struggles in English or mathematics and dysfunctional school toilets
(particularly impacting girls) were identified as contributing factors in Andhra Pradesh (India) (ibid.). This
approach demonstrates how AI tools can enhance the capacity of education systems to pre-emptively
address factors leading to early leaving from education and training, thereby supporting at-risk students
more effectively. Such interventions are crucial for fostering more equitable educational environments,
especially in regions where resource constraints and socio-economic factors pose significant challenges
to student retention and success.
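
To make the mechanics of such early-warning systems concrete, the sketch below fits a simple risk model to attendance and grade data and flags the highest-risk students for human follow-up. It is a minimal sketch under stated assumptions: the variables, threshold and synthetic data are hypothetical and do not reproduce any of the systems cited above.

# Minimal sketch of an early-warning model for students at risk of
# leaving education early. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000

attendance = rng.uniform(0.5, 1.0, size=n)  # share of days attended
grade_avg = rng.normal(70, 12, size=n)      # average grade (0-100 scale)
X = np.column_stack([attendance, grade_avg])

# Synthetic label: low attendance and low grades raise drop-out risk.
logit = -8 * (attendance - 0.75) - 0.08 * (grade_avg - 70)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Flag the top 5% highest-risk students for follow-up by school staff.
threshold = np.quantile(risk, 0.95)
flagged = np.where(risk >= threshold)[0]
print(f"{len(flagged)} students flagged for early, human-led support")

# Flags are a prompt for human review and support, not a verdict:
# misclassification can itself stigmatise students, so outputs should
# be audited for group-level error gaps before any use.

In practice, such systems draw on many more variables and, crucially, should be paired with routine bias audits of the kind sketched after Box 3.1, since misclassification can disproportionately affect marginalised groups.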

Assisting with data-based decisions

Data-based decision making in education systems, involving the systematic collection, analysis and
interpretation of various data types, is crucial in improving student outcomes and fostering equity and
inclusion (OECD, 2023[8]). This approach includes the analysis of inputs like resources and teacher
characteristics, processes within the education system, and outcomes such as student achievement and
teacher well-being (Mezzanotte and Calvel, 2023[136]). Effective data usage can increase student learning
and achievement, ensuring that students from all backgrounds have equal opportunities for success
(Schildkamp, 2019[137]; van Geel et al., 2016[138]).
However, challenges exist in fully implementing data-based decision making in educational settings.
School staff and policy makers often lack data collection and analysis training, and time and budget
constraints can impede effective data usage (Schildkamp, 2019[137]). AI tools have the potential to bridge
this gap by analysing data to detect resourcing needs. With their capability to process extensive datasets
rapidly, identify patterns, forecast trends and communicate the results in “natural language”, some AI tools
have the potential to efficiently highlight areas needing attention, facilitating more innovative and targeted
distribution of resources (Teng, Zhang and Sun, 2022[139]). This advancement could ensure that schools
with greater needs receive the necessary support more timely and effectively, contributing to a more
equitable educational landscape.

Challenges of institutional tools for equity and inclusion

Challenges of institutional tools for equity and inclusion are similar to those mentioned in other sections
(see Challenges of learner-centred AI tools for equity and inclusion and Challenges of teacher-led tools for
equity and inclusion). One concern is the risk of algorithmic bias in AI systems. For instance, AI systems
used for admissions and identifying at-risk students analyse data on student performance, demographic
information and other factors, and can inadvertently perpetuate existing biases if the data or algorithms
themselves are biased (Lira et al., 2023[130]; Tay et al., 2022[140]). The system might favour certain


demographic groups over others based on historical data patterns, leading to unfair and discriminatory
practices (Lira et al., 2023[130]). Similarly, AI tools used to identify students at risk of dropping out may rely
on data that, due to general population distributions or imperfect sampling procedures, do not account well
for the experiences of marginalised groups. As a result, they might potentially misclassify students and
exacerbate inequalities (Gardner, Brooks and Baker, 2019[84]; Lee and Chung, 2019[135]).
Another critical challenge is the potential for unintended consequences, such as the stigmatisation or labelling of students based on AI-derived categorisations (Holmes, 2023[3]; OECD, 2023[37]). This issue can
lead to discrimination and exclusion, particularly for students from marginalised communities. For example,
students identified as at risk by AI systems might be unfairly labelled, affecting their educational
experiences and opportunities (Holmes, 2023[3]). Furthermore, privacy and data security concerns are
paramount when dealing with sensitive student information (Holmes and Porayska-Pomsta, 2022[96]). The
collection and analysis of extensive datasets by AI systems, while beneficial for understanding student
needs and optimising resource allocation, must be managed with strict adherence to data protection and
privacy standards (Holmes, Bialik and Fadel, 2019[97]; Holmes et al., 2021[98]; OECD, 2023[37]; Pedró et al.,
2019[102]; UNESCO, 2021[16]; UNESCO, 2023[17]). Without appropriate safeguards, there is a risk of data
breaches and misuse, potentially harming students who are most vulnerable (ibid.).
Another challenge, relevant to all three types of AI tools, concerns the generalisability and transferability of research findings in this area. Most studies are conducted by the developers of AI
tools, often from commercial entities, and with a limited participant pool (Holmes and Tuomi, 2022[2]).
Independent, large-scale studies, mainly from the United States, are rare, casting doubt on the broader
claims of AI in education (UNESCO, 2021[16]). Moreover, research tends to focus narrowly on AI's technical
efficacy in improving academic outcomes, overlooking its broader implications for classroom dynamics and
the wider educational ecosystem (ibid.). The discussion extends to AI tools' potential cognitive and
developmental impacts, with historical analogies to technology's influence on human cognition and specific
concerns about children's development (Gottschalk, 2019[141]). In fact, UNESCO (2021[16]) calls for more
systematic, interdisciplinary and cross-national research to thoroughly understand AI tools’ effects on
learning and educational practices.

6 Conclusions
This working paper delves into the potential of AI in fostering equity and inclusion in education, examining
learner-centred, teacher-led and institutional AI tools. It highlights the opportunities offered by AI tools,
such as adaptive learning experiences, enriched content, and improved efficiency in processes like
admissions and data-based decision making. However, it also addresses significant challenges, including
access issues, potential biases, the high costs associated with AI tools and the need for comprehensive
teacher training. Importantly, while acknowledging these new challenges, it is essential to view them in
light of the challenges already present in schools. For instance, we should not assume that “traditional”
teaching methods and tools are flawless. The paper emphasises the importance of weighing AI tools’
benefits for educational enhancement against the complexities and ethical considerations to avoid
exacerbating existing disparities or creating new ones. By highlighting this comparison, the paper aims to
present a balanced view that acknowledges both the promise and the pitfalls of integrating AI tools into
educational settings. While the previous sections delved into each category of opportunities and challenges
in greater detail, some overarching messages and policy implications are summarised below.

Embracing the potential for adaptive learning while addressing privacy, ethical
and accountability issues

AI tools are valued for their potential to provide adaptive learning experiences. They can offer a tailored
approach that caters to individual student needs, thereby enhancing the effectiveness and inclusivity of
education. Examples include ITS, AI-enabled simulations, AI-powered robots, AI-based systems that identify
students at risk of early leaving from education and training, and others. As such, they have the potential to
level the playing field for students with diverse needs.
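
As a stylised illustration of the adaptivity underpinning such tools, the sketch below updates a simple mastery estimate after each answer and selects the next task accordingly. The update rule, thresholds and task labels are illustrative assumptions rather than a description of any specific ITS.

```python
# A stylised sketch of adaptive task selection: track an estimate of the
# learner's mastery and choose the next task based on it. All numbers and
# labels are illustrative assumptions, not drawn from a specific ITS.
def update_mastery(mastery: float, correct: bool, step: float = 0.1) -> float:
    """Nudge the mastery estimate up or down after each answer."""
    return min(1.0, mastery + step) if correct else max(0.0, mastery - step)

def next_task(mastery: float) -> str:
    if mastery < 0.4:
        return "remedial exercise with a worked example"
    if mastery < 0.8:
        return "practice exercise at the current level"
    return "challenge exercise introducing the next concept"

mastery = 0.5
for correct in [True, True, False, True]:
    mastery = update_mastery(mastery, correct)
    print(f"mastery={mastery:.1f} -> {next_task(mastery)}")
```
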
However, developing these (and other) tools hinges on access to a wide range of data on student
characteristics. While beneficial for adaptive learning, this information risks misuse and commercialisation,
raising ethical and privacy concerns. Moreover, accountability in AI technology usage, i.e. determining who
is responsible when AI technologies lead to discriminatory outcomes or incorrect guidance, is challenging to ensure.
Inaccurate or biased AI-generated responses can have significant implications for students' learning. In
fact, concerns about the use of AI have led to the creation of a journal focused on “ethical, regulatory, and
policy implications that arise from the development of AI”, AI and Ethics (Springer Link, 2024[142]). Policy
makers and other stakeholders could thus embrace the potential of AI in education for adaptive learning
while evaluating privacy and ethical concerns, and accountability for responsible AI usage.

Recognising the potential to enhance cultural responsiveness while keeping in mind inherent biases

AI tools can be culturally responsive by, e.g. providing more targeted content. They can break down
barriers for students learning in a language different from the language of instruction. They can also
enhance teacher capacities directly and indirectly: directly, by helping teachers with, e.g. curating learning
materials, assessment, classroom management and identifying some special education needs; and indirectly, by freeing
teachers’ time that could be used more productively to support students’ needs. While these potentials are
not without caveats, they promise to foster inclusion in the classroom through cultural responsiveness.
While offering numerous potential benefits, AI tools are not without significant caveats, particularly
concerning bias. Bias in AI encompasses a range of issues, from algorithmic biases to cultural insensitivity,
potentially perpetuating inequalities and discrimination. These biases manifest in various forms, such as
historical, representation, measurement, aggregation, evaluation and deployment biases, each affecting
different aspects of AI tools’ application in education. For instance, AI systems might reinforce stereotypes,
neglect local contexts and Indigenous perspectives, and inadvertently favour certain demographic groups
over others. This includes language processing and assessment biases, which can disadvantage
non-native English speakers and students from diverse ethnic backgrounds. Addressing these
challenges requires balancing data protection and privacy concerns with active efforts to improve fairness
and equity by identifying and mitigating biases. This may require a nuanced collection of personal data to
pinpoint and address these biases effectively. Therefore, policy makers and other stakeholders should
recognise that while AI tools in education have the potential to enhance cultural responsiveness and foster
inclusion, inherent biases must be carefully managed. This might include adopting an adaptive and
forward-looking regulatory or guidance framework that keeps pace with rapid advancements in AI, ensuring
that efforts to mitigate biases and promote equity do not inadvertently hinder innovation or the uptake of
beneficial technologies in the classroom.
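
To make one mitigation strategy concrete, the sketch below illustrates reweighting, a common response to representation bias in which examples from under-represented groups receive proportionally larger training weights. The data and group labels are hypothetical, and reweighting is only one of many mitigation techniques discussed in the literature.

```python
# A minimal sketch of reweighting as a mitigation for representation bias:
# examples from under-represented groups receive larger training weights.
# Data and group labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.8], [0.6], [0.9], [0.7], [0.4], [0.3]])
y = np.array([1, 0, 1, 1, 0, 1])
group = np.array(["majority"] * 4 + ["minority"] * 2)

# Weight each example inversely to its group's share of the training data.
shares = {g: float((group == g).mean()) for g in np.unique(group)}
weights = np.array([1.0 / shares[g] for g in group])

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
print(model.predict_proba([[0.35]]))  # probability estimates for a new case
```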

Balancing the potential for accessibility with challenges such as techno-ableism and impact on socio-emotional skills

AI-enabled tools designed to support learners with SEN represent a significant opportunity in education. By
adapting learning experiences and enhancing accessibility, these tools can facilitate the inclusion of some
students with SEN in classroom settings, fostering a diverse and inclusive learning community. For
instance, AI assistive devices like real-time language translation headsets and digital sign language
interpreters offer advancements in supporting students with auditory and visual impairments, potentially
contributing to a more equitable educational environment.
Contrasting these opportunities, however, are significant challenges. Techno-ableism in AI tools risks
perpetuating a narrow view of disability, framing it as a problem to be fixed rather than addressing societal
barriers. This approach can lead to further exclusion and inadequate support for students with diverse
needs. Additionally, AI tools’ impact on socio-emotional learning presents multifaceted challenges. The
potential for increased loneliness and isolation, for instance, especially among vulnerable student cohorts,
highlights the need for human interaction in education. Therefore, policy makers and other stakeholders
should recognise that AI tools can not only enhance accessibility for learners but also raise challenges
such as techno-ableism and impacts on socio-emotional skills.

Developing and improving teacher training in AI

AI-based teacher training and continuing professional learning offer opportunities to enhance teaching
practices and address educational disparities. Innovations like AI-supported virtual facilitators and
platforms can provide dynamic and interactive environments for teacher training, amplifying coaching
capacity and enabling data-driven improvements in teaching practices.
However, AI tools lend themselves to many possible uses with disparate outcomes. Teachers’ mediation is thus vital
to maximising many of the benefits of AI tools, underscoring the need for continuing AI-related professional
learning (whether delivered with the help of AI tools or not). Moreover, equipping educators with the necessary
knowledge and skills to integrate AI effectively into their teaching is difficult. This challenge is compounded
by the significant investment in time and resources for teacher training, which can be a substantial barrier,
particularly for under-resourced educational institutions. The disparity in training opportunities is
pronounced within and across countries, with schools in disadvantaged communities often facing
shortages in continuing professional learning resources. This gap in training and development
opportunities risks exacerbating inequalities, as students in some schools may be left behind in an
increasingly digital educational landscape. Therefore, to fully realise the benefits of AI tools in education,
educators need AI training and continuing professional learning.

Exploring how to maintain educational integrity amidst the growing commercial influence in the sector

Some AI tools have the potential to further bolster equity by serving as a cost-effective resource that can
be readily scaled across schools. For example, AI tools could facilitate communication processes and
operate as self-services for learners and parents. The most common applications in this field are chatbots
used to advise learners and parents on administrative questions.
While some AI tools are scalable, concerns exist about the financial accessibility of others. For
instance, AI-enabled simulations (AR/VR) and AI-powered robots might not be accessible to all schools
that need them. Moreover, some issues arise around the increasing commercialisation of this sector. The
growing involvement of corporate entities in educational AI tools has led to concerns that commercial
interests might overshadow educational objectives. This trend raises critical questions about the primary
focus of educational tools, with the risk that the profit motives of commercial entities could prioritise financial
gains over educational outcomes. As these entities access vast amounts of sensitive student data, the
potential for misuse becomes a serious concern. Therefore, policy makers and other stakeholders could explore
options for maintaining educational integrity amidst the growing commercial influence in the sector.

Encouraging research on the implications of AI for equity and inclusion in education, and clarifying the role of institutions at the national level in its systematic implementation

The integration of AI tools in education, while promising, should not be viewed as a quick fix for educational
challenges. There is a notable lack of research on the implications of AI tools for equity and inclusion in
education. This includes a scarcity of data and robust evaluations. Interdisciplinary research involving
educators and educational researchers is essential for creating practical applications of AI that directly or
indirectly influence learning outcomes in educational settings (Zhang and Aslan, 2021[65]). To this end,
policy makers should encourage researchers to ask nuanced questions. For instance, in the domain of
bias, one of the high-stakes questions is not whether AI tools are biased, but whether they are more or
less biased than teachers in, e.g. assessment, and how such bias is amplified for specific subgroups.
In another example, rather than asking whether VR/AR tools improve, e.g. learning outcomes, it might be
more important to ask whether they improve outcomes more than traditional 2D tools already present in
many schools. In other words, rather than asking about the absolute value of AI tools, it might be relevant
to start asking about AI tools’ relative effects.
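
As a simple illustration of this relative framing, the sketch below compares hypothetical learning gains from a VR/AR tool with those from a traditional 2D tool using a standard effect size. The scores are invented for the example; a real study would, of course, require proper sampling and controls.

```python
# A minimal sketch of the "relative effects" framing: compare an AI tool
# against the traditional alternative rather than against no tool at all.
# The scores are hypothetical; Cohen's d serves as a simple effect size.
import math
import statistics

def cohens_d(a: list, b: list) -> float:
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

scores_vr = [72, 75, 78, 74, 80]   # learners using a VR/AR tool
scores_2d = [70, 74, 77, 73, 79]   # learners using an existing 2D tool

print(f"VR/AR vs 2D effect size: {cohens_d(scores_vr, scores_2d):.2f}")
```
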
Furthermore, research needs to be expanded to analyse the impact of AI at institutional, regional and
national levels over extended time frames. Studies are unevenly distributed across various AI tools,
focusing on ITS and adaptive learning systems, while other tools are less examined. Research is also
unevenly split between English- and non-English-speaking countries, stemming from the reality that many
AI tools target English speakers. However, wherever possible, educational research should also focus on
non-English-speaking jurisdictions and on the impacts of English- or other-language AI tools on educational
institutions.
Additionally, the role of institutions at the national (or sub-national) levels (e.g. ministries) in promoting or
hindering the use of AI tools in schools remains unclear, with few national (or sub-national) examples of
institutions or agencies with a mandate to regulate the systematic use of AI in education. Therefore, policy
makers and other stakeholders could encourage comprehensive research and evaluation of AI and its
implications for equity and inclusion, and clarify the role of central institutions or agencies in regulating its
systematic implementation.
The role of institutions at the national (or sub-national) level should also be clarified with regard to ensuring
equitable access to AI tools. In many jurisdictions, policy responsibilities for digital and traditional education
governance are devolved to lower levels of government, e.g. in the provision and procurement of digital
tools and resources (OECD, 2023[37]). The devolution of responsibilities has advantages, e.g. AI tools may
align more closely with local needs. However, from the equity perspective, it may result in discrepancies
in access to and use of AI tools. Therefore, policy makers could view this as an opportunity to introduce
responsibilities at higher governance levels. This may include assuming the role of provider (e.g. for digital
infrastructure) or standard setter (e.g. for procurement practices and continuing professional learning).

References

Abd-alrazaq, A. et al. (2019), “An overview of the features of chatbots in mental health: A [55]
scoping review”, International Journal of Medical Informatics, Vol. 132, p. 103978,
https://doi.org/10.1016/j.ijmedinf.2019.103978.

Alesina, A. et al. (2024), “Revealing Stereotypes: Evidence from Immigrants in Schools”, [95]
American Economic Review, Vol. 114/7, pp. 1916-1948,
https://doi.org/10.1257/aer.20191184.

Alfalah, S. et al. (2018), “A comparative study between a virtual reality heart anatomy system [38]
and traditional medical teaching modalities”, Virtual Reality, Vol. 23/3, pp. 229-234,
https://doi.org/10.1007/s10055-018-0359-y.

Alzahrani, N. (2020), “Augmented Reality: A Systematic Review of Its Benefits and Challenges in [63]
E-learning Contexts”, Applied Sciences, Vol. 10/16, p. 5660,
https://doi.org/10.3390/app10165660.

Anderson, H., A. Boodhwani and R. Baker (2019), Assessing the Fairness of Graduation [80]
Predictions, https://learninganalytics.upenn.edu/ryanbaker/EDM2019_paper56.pdf (accessed
on 19 January 2024).

Asselborn, T. et al. (2018), “Automated human-level diagnosis of dysgraphia using a consumer [118]
tablet”, npj Digital Medicine, Vol. 1/1, https://doi.org/10.1038/s41746-018-0049-x.

Baker, R. and A. Hawn (2021), “Algorithmic Bias in Education”, International Journal of Artificial [79]
Intelligence in Education, Vol. 32/4, pp. 1052-1092, https://doi.org/10.1007/s40593-021-
00285-9.

Bao, Z. and D. Huang (2022), “Can Artificial Intelligence Improve Gender Equality? Evidence [89]
from a Natural Experiment”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4202239.

Benjamin, R. (2019), Race After Technology: Abolitionist Tools for the New Jim Code, Polity. [86]

Bernardini, S., K. Porayska-Pomsta and T. Smith (2014), “ECHOES: An intelligent serious game [50]
for fostering social communication in children with autism”, Information Sciences, Vol. 264,
pp. 41-60, https://doi.org/10.1016/j.ins.2013.10.027.

Bertacchini, F. et al. (2023), “A social robot connected with chatGPT to improve cognitive [103]
functioning in ASD subjects”, Frontiers in Psychology, Vol. 14,
https://doi.org/10.3389/fpsyg.2023.1232177.

Bowles, N. (2019), Human Contact Is Now a Luxury Good, [68]
https://www.nytimes.com/2019/03/23/sunday-review/human-contact-luxury-screens.html
(accessed on 3 July 2024).

Bowles, N. (2018), The Digital Gap Between Rich and Poor Kids Is Not What We Expected, [67]
https://www.nytimes.com/2018/10/26/style/digital-divide-screens-schools.html (accessed on
3 July 2024).

Bryant, J. et al. (2020), How artificial intelligence will impact K-12 teachers, McKinsey & [110]
Company, https://www.mckinsey.com/industries/education/our-insights/how-artificial-
intelligence-will-impact-k-12-teachers (accessed on 11 January 2024).

Cabrera, A. et al. (2019), “FAIRVIS: Visual Analytics for Discovering Intersectional Bias in [87]
Machine Learning”, 2019 IEEE Conference on Visual Analytics Science and Technology
(VAST), https://doi.org/10.1109/vast47406.2019.8986948.

Capers, Q. et al. (2017), “Implicit Racial Bias in Medical School Admissions”, Academic [131]
Medicine, Vol. 92/3, pp. 365-369, https://doi.org/10.1097/acm.0000000000001388.

Carter, L., D. Liu and C. Cantrell (2020), “Exploring the Intersection of the Digital Divide and [56]
Artificial Intelligence: A Hermeneutic Literature Review”, AIS Transactions on Human-
Computer Interaction, Vol. 12/4, pp. 253-275, https://doi.org/10.17705/1thci.00138.

Cerna, L. et al. (2021), “Promoting inclusive education for diverse societies: A conceptual [6]
framework”, OECD Education Working Papers, No. 260, OECD Publishing, Paris,
https://doi.org/10.1787/94ab68c6-en.

Chai, F. et al. (2024), “Grading by AI makes me feel fairer? How different evaluators affect [90]
college students’ perception of fairness”, Frontiers in Psychology, Vol. 15,
https://doi.org/10.3389/fpsyg.2024.1221177.

Chan, C. and L. Tsi (2023), The AI Revolution in Education: Will AI Replace or Assist Teachers [107]
in Higher Education?, https://doi.org/10.48550/arXiv.2305.01185.

Chen, L., P. Chen and Z. Lin (2020), “Artificial Intelligence in Education: A Review”, IEEE [25]
Access, Vol. 8, pp. 75264-75278, https://doi.org/10.1109/access.2020.2988510.

Conati, C. et al. (2021), “Toward personalized XAI: A case study in intelligent tutoring systems”, [27]
Artificial Intelligence, Vol. 298, p. 103503, https://doi.org/10.1016/j.artint.2021.103503.

Copur-Gencturk, Y. et al. (2024), “The impact of an interactive, personalized computer-based [120]
teacher professional development program on student performance: A randomized controlled
trial”, Computers & Education, Vol. 210, p. 104963,
https://doi.org/10.1016/j.compedu.2023.104963.

Council of Europe (2024), Artificial Intelligence: Glossary, https://www.coe.int/en/web/artificial- [5]
intelligence/glossary (accessed on 15 January 2024).

Dai, C. and F. Ke (2022), “Educational applications of artificial intelligence in simulation-based [42]
learning: A systematic mapping review”, Computers and Education: Artificial Intelligence,
Vol. 3, p. 100087, https://doi.org/10.1016/j.caeai.2022.100087.

Darling-Hammond, L., M. Hyler and M. Gardner (2017), Effective Teacher Professional [122]
Development, https://learningpolicyinstitute.org/product/teacher-prof-dev (accessed on
22 January 2024).

de la Higuera, C. and J. Iyer (2024), AI for Teachers: an Open Textbook, [30]
https://www.ai4t.eu/book/ai-for-teachers-an-open-textbook-version-1-english/index (accessed
on 18 January 2024).

Deník N (2024), Cermat: I letos hodnotí část odpovědí uchazečů o studium na SŠ umělá [116]
inteligence [Cermat: Artificial intelligence is also evaluating part of the answers of applicants
to study at the secondary school this year], https://denikn.cz/minuta/1404813/ (accessed on
3 July 2024).

Department for Education (2023), Generative artificial intelligence (AI) in education, [22]
https://www.gov.uk/government/publications/generative-artificial-intelligence-in-
education/generative-artificial-intelligence-ai-in-education (accessed on 15 January 2024).

Directorate for Education and Training (2024), Råd om kunstig intelligens i skolen [Advice on [23]
artificial intelligence in schools], https://www.udir.no/kvalitet-og-
kompetanse/digitalisering/kunstig-intelligens-ki-i-skolen/#a210110 (accessed on
19 March 2024).

Dolmage, J. (2017), Academic Ableism, University of Michigan Press, [75]
https://doi.org/10.3998/mpub.9708722.

Dutta, S. and B. Lanvin (eds.) (n.d.), Network Readiness Index 2023, Portulans Institute, [70]
https://download.networkreadinessindex.org/reports/nri_2023.pdf (accessed on 3 July 2024).

European Commission (n.d.), Digital Education Action Plan (2021-2027), [20]
https://education.ec.europa.eu/focus-topics/digital-education/action-plan (accessed on
15 January 2024).

European Union Agency for Fundamental Rights (2022), Bias in algorithms – Artificial [94]
intelligence and discrimination, Publications Office of the European Union,
https://data.europa.eu/doi/10.2811/25847.

Felix, C. (2020), “The Role of the Teacher and AI in Education”, in Innovations in Higher [109]
Education Teaching and Learning, International Perspectives on the Role of Technology in
Humanizing Higher Education, Emerald Publishing Limited, https://doi.org/10.1108/s2055-
364120200000033003.

Felten, P. (2020), Relationship-Rich Education, Johns Hopkins University Press, [69]
https://doi.org/10.1353/book.78561.

Ferrara, E. (2023), “Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, [92]
Impacts, and Mitigation Strategies”, Sci, Vol. 6/1, p. 3, https://doi.org/10.3390/sci6010003.

Fraillon, J. et al. (2020), Preparing for Life in a Digital World, Springer International Publishing, [61]
Cham, https://doi.org/10.1007/978-3-030-38781-5.

France Éducation international (n.d.), AI4T - Artificial Intelligence for and by teachers, [125]
https://www.france-education-international.fr/en/expertises/cooperation-education/projets/ai4t-
artificial-intelligence-and-teachers?langue=en (accessed on 23 January 2024).

Gardner, J., C. Brooks and R. Baker (2019), “Evaluating the Fairness of Predictive Student [84]
Models Through Slicing Analysis”, Proceedings of the 9th International Conference on
Learning Analytics & Knowledge, https://doi.org/10.1145/3303772.3303791.

Gauthier, A. et al. (2022), “Is it time we get real? A systematic review of the potential of data- [91]
driven technologies to address teachers’ implicit biases”, Frontiers in Artificial Intelligence,
Vol. 5, https://doi.org/10.3389/frai.2022.994967.

Goel, Y. and R. Goyal (2020), “On the Effectiveness of Self-Training in MOOC Dropout [134]
Prediction”, Open Computer Science, Vol. 10/1, pp. 246-258, https://doi.org/10.1515/comp-
2020-0153.

Gottschalk, F. (2019), “Impacts of technology use on children: Exploring literature on the brain, [141]
cognition and well-being”, OECD Education Working Papers, No. 195, OECD Publishing,
Paris, https://doi.org/10.1787/8296464e-en.

Gottschalk, F. and C. Weise (2023), “Digital equity and inclusion in education: An overview of [11]
practice and policy in OECD countries”, OECD Education Working Papers, No. 299, OECD
Publishing, Paris, https://doi.org/10.1787/7cb15030-en.

Graf, R. et al. (2019), “iGYM”, Proceedings of the Annual Symposium on Computer-Human [44]
Interaction in Play, https://doi.org/10.1145/3311350.3347161.

Gray, M. and S. Suri (2019), Ghost Work: How to Stop Silicon Valley from Building a New Global [71]
Underclass, Harper Business.

Holmes, W. (2023), The Unintended Consequences of Artificial Intelligence and Education, [3]
Education International.

Holmes, W., M. Bialik and C. Fadel (2019), Artificial Intelligence in Education: Promises and [97]
Implications for Teaching and Learning, Center for Curriculum Redesign.

Holmes, W. et al. (2022), Artificial intelligence and education - A critical view through the lens of [133]
human rights, democracy and the rule of law, Council of Europe.

Holmes, W. and K. Porayska-Pomsta (2022), The Ethics of Artificial Intelligence in Education, [96]
Routledge, New York, https://doi.org/10.4324/9780429329067.

Holmes, W. et al. (2021), “Ethics of AI in Education: Towards a Community-Wide Framework”, [98]
International Journal of Artificial Intelligence in Education, Vol. 32/3, pp. 504-526,
https://doi.org/10.1007/s40593-021-00239-1.

Holmes, W. and I. Tuomi (2022), “State of the art and practice in AI in education”, European [2]
Journal of Education, Vol. 57/4, pp. 542-570, https://doi.org/10.1111/ejed.12533.

Holstein, K., B. McLaren and V. Aleven (2018), “Student Learning Benefits of a Mixed-Reality [117]
Teacher Awareness Tool in AI-Enhanced Classrooms”, in Lecture Notes in Computer
Science, Artificial Intelligence in Education, Springer International Publishing, Cham,
https://doi.org/10.1007/978-3-319-93843-1_12.

Hopcan, S. et al. (2022), “Artificial intelligence in special education: a systematic review”, [48]
Interactive Learning Environments, Vol. 31/10, pp. 7335-7353,
https://doi.org/10.1080/10494820.2022.2067186.

Huang, L. (2023), “Ethics of Artificial Intelligence in Education: Student Privacy and Data [99]
Protection”, Science Insights Education Frontiers, Vol. 16/2, pp. 2577-2587,
https://doi.org/10.15354/sief.23.re202.

Huang, X. et al. (2016), “Intelligent tutoring systems work as a math gap reducer in 6th grade [34]
after-school program”, Learning and Individual Differences, Vol. 47, pp. 258-265,
https://doi.org/10.1016/j.lindif.2016.01.012.

Hughes, D. (2023), Chatbots in the classroom: Revolutionising Career Readiness with Cutting- [54]
Edge Technology, https://oecdedutoday.com/chatbots-in-the-classroom-revolutionising-
career-readiness-with-cutting-edge-technology/ (accessed on 29 March 2024).

Human Rights Watch (2022), “How Dare They Peek into My Private Life?”: Children’s Rights [100]
Violations by Governments That Endorsed Online Learning During the Covid-19 Pandemic,
Human Rights Watch,
https://www.hrw.org/sites/default/files/media_2022/07/HRW_20220711_Students%20Not%20
Products%20Report%20Final-IV-%20Inside%20Pages%20and%20Cover.pdf (accessed on
22 January 2024).

Johns Hopkins Center for Talented Youth (2023), New Project Explores Use of AI in the Gifted [32]
Classroom, https://cty.jhu.edu/who-we-are/news-events/articles/new-project-explores-use-ai-
gifted-classroom (accessed on 24 January 2024).

Jones, P. (2021), Refugees help power machine learning advances at Microsoft, Facebook, and [72]
Amazon, https://restofworld.org/2021/refugees-machine-learning-big-tech/ (accessed on
3 July 2024).

Joo-heon, K. (2023), S. Korea to adopt multilingual AI digital textbooks for multicultural families’ [112]
children, https://www.ajudaily.com/view/20230609165013073 (accessed on
22 January 2024).

Kalla, M. (ed.) (2023), “Bias in artificial intelligence algorithms and recommendations for [93]
mitigation”, PLOS Digital Health, Vol. 2/6, p. e0000278,
https://doi.org/10.1371/journal.pdig.0000278.

Keleş, A. et al. (2009), “ZOSMAT: Web-based intelligent tutoring system for teaching–learning [28]
process”, Expert Systems with Applications, Vol. 36/2, pp. 1229-1239,
https://doi.org/10.1016/j.eswa.2007.11.064.

Khosravi, H. et al. (2022), “Explainable Artificial Intelligence in education”, Computers and [26]
Education: Artificial Intelligence, Vol. 3, p. 100074,
https://doi.org/10.1016/j.caeai.2022.100074.

Kolchenko, V. (2018), “Can Modern AI replace teachers? Not so fast! Artificial Intelligence and [108]
Adaptive Learning: Personalized Education in the AI age”, HAPS Educator, Vol. 22/3,
pp. 249-252, https://doi.org/10.21692/haps.2018.032.

Laird, E., M. Dwyer and H. Grant-Chapman (2023), Off Task: EdTech Threats to Student Privacy [66]
and Equity in the Age of AI, Center for Democracy & Technology, https://cdt.org/wp-
content/uploads/2023/09/091923-CDT-Off-Task-web.pdf (accessed on 3 July 2024).

Lee, S. and J. Chung (2019), “The Machine Learning-Based Dropout Early Warning System for [135]
Improving the Performance of Dropout Prediction”, Applied Sciences, Vol. 9/15, p. 3093,
https://doi.org/10.3390/app9153093.

Lemaignan, S. et al. (2022), ““It’s Important to Think of Pepper as a Teaching Aid or Resource [106]
External to the Classroom”: A Social Robot in a School for Autistic Children”, International
Journal of Social Robotics, https://doi.org/10.1007/s12369-022-00928-4.

Lira, B. et al. (2023), “Using artificial intelligence to assess personal qualities in college [130]
admissions”, Science Advances, Vol. 9/41, https://doi.org/10.1126/sciadv.adg9405.

Marino, M. et al. (2023), “The Future of Artificial Intelligence in Special Education Technology”, [128]
Journal of Special Education Technology, Vol. 38/3, pp. 404-416,
https://doi.org/10.1177/01626434231165977.

Martin, A. et al. (2022), “Intelligent Support for All?”, Proceedings of the 53rd ACM Technical [88]
Symposium on Computer Science Education, https://doi.org/10.1145/3478431.3499418.

Mezzanotte, C. and C. Calvel (2023), “Indicators of inclusion in education: A framework for [136]
analysis”, OECD Education Working Papers, No. 300, OECD Publishing, Paris,
https://doi.org/10.1787/d94f3bd8-en.

Milanowski, A. (2017), “Lower Performance Evaluation Practice Ratings for Teachers of [143]
Disadvantaged Students”, AERA Open, Vol. 3/1, p. 233285841668555,
https://doi.org/10.1177/2332858416685550.

Mousavinasab, E. et al. (2018), “Intelligent tutoring systems: a systematic review of [29]
characteristics, applications, and evaluation methods”, Interactive Learning Environments,
Vol. 29/1, pp. 142-163, https://doi.org/10.1080/10494820.2018.1558257.

Munn, L. (2023), “The five tests: designing and evaluating AI according to indigenous Māori [82]
principles”, AI & SOCIETY, https://doi.org/10.1007/s00146-023-01636-x.

OECD (2024), OECD.AI Policy Observatory: AI courses in English by discipline, [126]
https://oecd.ai/en/data?selectedArea=ai-education&selectedVisualization=ai-courses-by-
discipline-in-time (accessed on 3 July 2024).

OECD (2023), Equity and Inclusion in Education: Finding Strength through Diversity, OECD [8]
Publishing, Paris, https://doi.org/10.1787/e9072e21-en.

OECD (2023), OECD Digital Education Outlook 2023: Towards an Effective Digital Education [37]
Ecosystem, OECD Publishing, Paris, https://doi.org/10.1787/c74f03de-en.

OECD (2023), “Opportunities, guidelines and guardrails for effective and equitable use of AI in [13]
education”, in OECD Digital Education Outlook 2023: Towards an Effective Digital Education
Ecosystem, OECD Publishing, Paris, https://doi.org/10.1787/2b39e98b-en.

OECD (2023), PISA 2022 Results (Volume I): The State of Learning and Equity in Education, [57]
PISA, OECD Publishing, Paris, https://doi.org/10.1787/53f23881-en.

OECD (2023), PISA 2022 Results (Volume II): Learning During – and From – Disruption, PISA, [58]
OECD Publishing, Paris, https://doi.org/10.1787/a97db61c-en.

OECD (2023), Recommendation of the Council on Artificial Intelligence, [4]
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on
15 January 2024).

OECD (2021), OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial [46]
Intelligence, Blockchain and Robots, OECD Publishing, Paris,
https://doi.org/10.1787/589b283f-en.

OECD (2020), PISA 2018 Results (Volume V): Effective Policies, Successful Schools, PISA, [35]
OECD Publishing, Paris, https://doi.org/10.1787/ca768d40-en.

OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, [1]
https://doi.org/10.1787/eedfee77-en.

OECD (2019), PISA 2018 Results (Volume II): Where All Students Can Succeed, PISA, OECD [62]
Publishing, Paris, https://doi.org/10.1787/b5fd1b8f-en.

OECD (2019), TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong [127]
Learners, TALIS, OECD Publishing, Paris, https://doi.org/10.1787/1d0bc92a-en.

OECD (2018), TALIS 2018 Database, https://www.oecd.org/education/talis/talis-2018-data.htm [129]
(accessed on 25 January 2024).

OECD (2017), Educational Opportunity for All: Overcoming Inequality throughout the Life [9]
Course, Educational Research and Innovation, OECD Publishing, Paris,
https://doi.org/10.1787/9789264287457-en.

OECD (2017), PISA 2015 Results (Volume III): Students’ Well-Being, PISA, OECD Publishing, [60]
Paris, https://doi.org/10.1787/9789264273856-en.

OECD (2016), “Are there differences in how advantaged and disadvantaged students use the [59]
Internet?”, PISA in Focus, No. 64, OECD Publishing, Paris,
https://doi.org/10.1787/5jlv8zq6hw43-en.

Okonkwo, C. and A. Ade-Ibijola (2021), “Chatbots applications in education: A systematic [52]
review”, Computers and Education: Artificial Intelligence, Vol. 2, p. 100033,
https://doi.org/10.1016/j.caeai.2021.100033.

OpenAI (n.d.), Pricing, https://openai.com/chatgpt/pricing (accessed on 5 June 2024). [124]

Panke, S. (2023), Meet Humanoid Robots NAO, FURHAT & PEPPER: An Interview with [104]
Humanoid Robots Expert Professor Ilona Buchem, https://aace.org/review/ilona-buchem/
(accessed on 22 January 2024).

Papanastasiou, G. et al. (2018), “Virtual and augmented reality effects on K-12, higher and [40]
tertiary education students’ twenty-first century skills”, Virtual Reality, Vol. 23/4, pp. 425-436,
https://doi.org/10.1007/s10055-018-0363-2.

Pedró, F. et al. (2019), Artificial intelligence in education: challenges and opportunities for [102]
sustainable development, https://unesdoc.unesco.org/ark:/48223/pf0000366994 (accessed
on 22 January 2024).

Pellas, N., A. Dengel and A. Christopoulos (2020), “A Scoping Review of Immersive Virtual [41]
Reality in STEM Education”, IEEE Transactions on Learning Technologies, Vol. 13/4,
pp. 748-761, https://doi.org/10.1109/tlt.2020.3019405.

Perrigo, B. (2023), Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make [73]
ChatGPT Less Toxic, https://time.com/6247678/openai-chatgpt-kenya-workers/ (accessed on
3 July 2024).

Plass, J. and S. Pawar (2020), “Toward a taxonomy of adaptivity for learning”, Journal of [144]
Research on Technology in Education, Vol. 52/3, pp. 275-300,
https://doi.org/10.1080/15391523.2020.1719943.

Pons, A. (2023), Generative AI in the classroom: From hype to reality?, [24]
https://one.oecd.org/document/EDU/EDPC(2023)11/en/pdf (accessed on 15 January 2024).

Porayska-Pomsta, K. et al. (2018), “Blending Human and Artificial Intelligence to Support Autistic [51]
Children’s Social Communication Skills”, ACM Transactions on Computer-Human Interaction,
Vol. 25/6, pp. 1-35, https://doi.org/10.1145/3271484.

Porayska-Pomsta, K. and G. Rajendran (2019), “Accountability in Human and Artificial [101]
Intelligence Decision-Making as the Basis for Diversity and Educational Inclusion”, in Artificial
Intelligence and Inclusive Education, Perspectives on Rethinking and Reforming Education,
Springer Singapore, Singapore, https://doi.org/10.1007/978-981-13-8161-4_3.

Pound, P. (2017), “How should mandatory sex education be taught?”, BMJ, p. j1768, [53]
https://doi.org/10.1136/bmj.j1768.

Radford, J. et al. (2021), “Detecting Dyslexia from Audio Records: An AI Approach”, Proceedings [119]
of the 14th International Joint Conference on Biomedical Engineering Systems and
Technologies, https://doi.org/10.5220/0010196000580066.

Reihana, K. et al. (2023), “Indigitization: Technology as a mode for conservation sustainability [45]
and knowledge transfer in indigenous New Zealand communities”, Biological Conservation,
Vol. 285, p. 110237, https://doi.org/10.1016/j.biocon.2023.110237.

Roach, J. (2018), AI technology helps students who are deaf learn, [49]
https://blogs.microsoft.com/ai/ai-powered-captioning/ (accessed on 18 January 2024).

Rutigliano, A. and N. Quarshie (2021), “Policy approaches and initiatives for the inclusion of [33]
gifted students in OECD countries”, OECD Education Working Papers, No. 262, OECD
Publishing, Paris, https://doi.org/10.1787/c3f9ed87-en.

Salas-Pilco, S., K. Xiao and J. Oshima (2022), “Artificial Intelligence and New Technologies in [114]
Inclusive Education for Minority Students: A Systematic Review”, Sustainability, Vol. 14/20,
p. 13572, https://doi.org/10.3390/su142013572.

Scher, L. and F. O’Reilly (2009), “Professional Development for K–12 Math and Science [121]
Teachers: What Do We Really Know?”, Journal of Research on Educational Effectiveness,
Vol. 2/3, pp. 209-249, https://doi.org/10.1080/19345740802641527.

Scherr, R. et al. (2023), “ChatGPT Interactive Medical Simulations for Early Clinical Education: [39]
Case Study”, JMIR Medical Education, Vol. 9, p. e49877, https://doi.org/10.2196/49877.

Schildkamp, K. (2019), “Data-based decision-making for school improvement: Research insights [137]
and gaps”, Educational Research, Vol. 61/3, pp. 257-273,
https://doi.org/10.1080/00131881.2019.1625716.

Selwyn, N. (2023), Resisting and reimagining Artificial Intelligence, Education International, [78]
https://www.ei-ie.org/en/item/27927:resisting-and-reimagining-artificial-intelligence (accessed
on 19 January 2024).

Shakespeare, T. (2004), “Social models of disability and other life strategies”, Scandinavian [76]
Journal of Disability Research, Vol. 6/1, pp. 8-21,
https://doi.org/10.1080/15017410409512636.

Sha, L. et al. (2021), “Assessing Algorithmic Fairness in Automatic Classifiers of Educational [85]
Forum Posts”, in Lecture Notes in Computer Science, Artificial Intelligence in Education,
Springer International Publishing, Cham, https://doi.org/10.1007/978-3-030-78292-4_31.

Shew, A. (2020), “Ableism, Technoableism, and Future AI”, IEEE Technology and Society [74]
Magazine, Vol. 39/1, pp. 40-85, https://doi.org/10.1109/mts.2020.2967492.

Smith, P. and L. Smith (2020), “Artificial intelligence and disability: too much promise, yet too [77]
little substance?”, AI and Ethics, Vol. 1/1, pp. 81-86, https://doi.org/10.1007/s43681-020-
00004-5.

Springer Link (2024), AI and Ethics, https://link.springer.com/journal/43681 (accessed on [142]
23 January 2024).

Steiss, J. et al. (2024), “Comparing the quality of human and ChatGPT feedback of students’ [115]
writing”, Learning and Instruction, Vol. 91, p. 101894,
https://doi.org/10.1016/j.learninstruc.2024.101894.

Suresh, H. and J. Guttag (2021), “A Framework for Understanding Sources of Harm throughout [81]
the Machine Learning Life Cycle”, Equity and Access in Algorithms, Mechanisms, and
Optimization, https://doi.org/10.1145/3465416.3483305.

Tay, L. et al. (2022), “A Conceptual Framework for Investigating and Mitigating Machine- [140]
Learning Measurement Bias (MLMB) in Psychological Assessment”, Advances in Methods
and Practices in Psychological Science, Vol. 5/1, p. 251524592110613,
https://doi.org/10.1177/25152459211061337.

Teng, Y., J. Zhang and T. Sun (2022), “Data‐driven decision‐making model based on artificial [139]
intelligence in higher education system of colleges and universities”, Expert Systems,
Vol. 40/4, https://doi.org/10.1111/exsy.12820.

The Open Innovation Team and Department for Education (2024), Generative AI in education: [111]
Educator and expert views,
https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_
education_-_Educator_and_expert_views_report.pdf (accessed on 8 April 2024).

U.S. Department of Education, Office of Educational Technology (2023), Artificial Intelligence [21]
and Future of Teaching and Learning: Insights and Recommendations, US Department of
Education, https://www2.ed.gov/documents/ai-report/ai-report.pdf (accessed on
15 January 2024).

UN (n.d.), Universal Declaration of Human Rights, https://www.un.org/en/about-us/universal- [64]
declaration-of-human-rights (accessed on 3 July 2024).

UNESCO (2023), AI Competency frameworks for students and teachers, [18]
https://www.unesco.org/en/digital-education/ai-future-learning/competency-frameworks
(accessed on 15 January 2024).

UNESCO (2023), Guidance for generative AI in education and research, UNESCO, [17]
https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 15 January 2024).

UNESCO (2022), K-12 AI curricula: a mapping of government-endorsed AI curricula, UNESCO, [12]
https://unesdoc.unesco.org/ark:/48223/pf0000380602 (accessed on 15 January 2024).

UNESCO (2022), Recommendation on the Ethics of Artificial Intelligence, [14]
https://unesdoc.unesco.org/ark:/48223/pf0000381137.

UNESCO (2021), AI and education: guidance for policy-makers, UNESCO, [16]
https://doi.org/10.54675/pcsp7350.

UNESCO (2019), Beijing Consensus on Artificial Intelligence and Education, [15]
https://unesdoc.unesco.org/ark:/48223/pf0000368303 (accessed on 15 January 2024).

UNESCO (2009), Defining an Inclusive Education Agenda: Reflections around the 48th session [10]
of the International Conference on Education, UNESCO,
https://unesdoc.unesco.org/ark:/48223/pf0000186807 (accessed on 25 March 2024).

UNESCO/EQUALS Skills Coalition (2019), I’d blush if I could: closing gender divides in digital [83]
skills through education, UNESCO, https://doi.org/10.54675/rapc9356.

UNICEF (2021), Policy guidance on AI for children, [19]
https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-
AI-children-2.0-2021.pdf (accessed on 15 January 2024).

United Robotics Group (2023), Online interview with a United Robotics Group representative on [123]
12 December 2023.

van Geel, M. et al. (2016), “Assessing the Effects of a School-Wide Data-Based Decision-Making [138]
Intervention on Student Achievement Growth in Primary Schools”, American Educational
Research Journal, Vol. 53/2, pp. 360-394, https://doi.org/10.3102/0002831216637346.

Varsik, S. (2022), “A snapshot of equity and inclusion in OECD education systems: Findings [7]
from the Strength through Diversity Policy Survey”, OECD Education Working Papers,
No. 284, OECD Publishing, Paris, https://doi.org/10.1787/801dd29b-en.

Vincent-Lancrin, S. and R. van der Vlies (2020), “Trustworthy artificial intelligence (AI) in [47]
education: Promises and challenges”, OECD Education Working Papers, No. 218, OECD
Publishing, Paris, https://doi.org/10.1787/a6c90fa9-en.

Wang, H. et al. (2023), “Examining the applications of intelligent tutoring systems in real [36]
educational contexts: A systematic literature review from the social experiment perspective”,
Education and Information Technologies, Vol. 28/7, pp. 9113-9148,
https://doi.org/10.1007/s10639-022-11555-x.

Warschauer, M. et al. (2023), “The affordances and contradictions of AI-generated text for [31]
writers of English as a second or foreign language”, Journal of Second Language Writing,
Vol. 62, p. 101071, https://doi.org/10.1016/j.jslw.2023.101071.

Waters, A. and R. Miikkulainen (2014), “GRADE: Machine‐Learning Support for Graduate [132]
Admissions”, AI Magazine, Vol. 35/1, pp. 64-75, https://doi.org/10.1609/aimag.v35i1.2504.

Woo, H. et al. (2021), “The use of social robots in classrooms: A review of field-based studies”, [105]
Educational Research Review, Vol. 33, p. 100388,
https://doi.org/10.1016/j.edurev.2021.100388.

Wu, J. et al. (2019), “Integrating spherical video-based virtual reality into elementary school [43]
students’ scientific inquiry instruction: effects on their problem-solving performance”,
Interactive Learning Environments, Vol. 29/3, pp. 496-509,
https://doi.org/10.1080/10494820.2019.1587469.

You-jin, L. (2023), Korea to adopt AI textbooks for core subjects starting in 2025, [113]
https://english.hani.co.kr/arti/english_edition/e_national/1081129.html (accessed on
22 January 2024).

Zhang, K. and A. Aslan (2021), “AI technologies for education: Recent research & future [65]
directions”, Computers and Education: Artificial Intelligence, Vol. 2, p. 100025,
https://doi.org/10.1016/j.caeai.2021.100025.
