
HELPDESK RESPONSE 183

Using AI to Automate the Literature Review Process in Education: A Topic Brief

Date: December 2024

Authors: Björn Haßler, Syed Mustafa Hassan, Christopher Klune, Hassan Mansour, Laila Friese

DOI: 10.53832/edtechhub.1003

@GlobalEdTechHub edtechhub.org
Creative Commons Attribution 4.0 International https://creativecommons.org/licenses/by/4.0/

About this document

Recommended citation: Haßler, B., Hassan, S. M., Klune, C., Mansour, H., & Friese, L. (2024). Using AI to Automate the Literature Review Process in Education: A Topic Brief (Helpdesk Response No. 183). EdTech Hub. https://doi.org/10.53832/edtechhub.1003. Available at https://docs.edtechhub.org/lib/BVD8JX7V. Available under Creative Commons Attribution 4.0 International.

Licence: Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/
This licence means you are free to share and adapt for any purpose, even commercially, as long as you give appropriate credit, provide a link to the licence, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. Please refer to the link for more details.

Creative Commons attributions: This document uses some content from Gerit Wagner, Roman Lukyanenko and Guy Paré (2022). Artificial intelligence and the conduct of literature reviews. Journal of Information Technology, 37(2), 209–226; © Association for Information Technology Trust 2021; DOI: 10.1177/02683962211048201; Creative Commons Attribution, https://creativecommons.org/licenses/by/4.0/ [reusing-open-access-and-sage-choice-content]. Please also note that this topic brief is set in the context of EdTech Hub’s prior work and, therefore, also includes several self-citations by the lead author, who also serves as technical director of EdTech Hub.

Reviewers: Arjun Upadhyay

About EdTech Hub


EdTech Hub is a global research partnership. Our goal is to empower people by giving
them the evidence they need to make decisions about technology in education. Our
evidence library is a repository of our latest research, findings and wider literature on
EdTech. As a global partnership, we seek to make our evidence available and accessible to
those who are looking for EdTech solutions worldwide.
EdTech Hub is supported by UKAid, Bill & Melinda Gates Foundation, World Bank, and
UNICEF. The views in this document do not necessarily reflect the views of these
organisations. To find out more about us, go to edtechhub.org. Our evidence library can be
found at docs.edtechhub.org/lib.
EdTech Hub Helpdesk
The Helpdesk is the Hub’s rapid response service, available to FCDO advisers and World
Bank staff in 70 low- and lower-middle-income countries (LMICs). It delivers just-in-time
services to support education technology planning and decision-making. We respond to
most requests in 1–15 business days. Given the rapid nature of requests, we aim to produce
comprehensive and evidence-based quality outputs, while acknowledging that our work is
by no means exhaustive. For more information, please visit edtechhub.org/helpdesk.


Contents
Glossary
Additional abbreviations
List of figures
1. Introduction
2. Overview of findings
2.1. AI tools for literature review
2.2. Literature inputs
2.3. Questions posed for this topic brief
3. AI and literature reviews
4. Approach for this topic brief
4.1. Methodology for this topic brief
4.2. Results
5. Integrated literature review tools
5.1. Fully integrated tools
5.2. Semi-integrated tools
5.3. Focus on literature search and discovery
5.4. Focus on literature screening and categorisation
5.5. Focus on summarisation and writing assistance
5.6. General Purpose Large Language Models
5.7. GPT Researcher
5.8. Other tools
6. Tool reviews
6.1. Problem formulation
6.2. Literature searches
6.3. Screening for inclusion
6.4. Quality assessment
6.5. Data extraction
6.6. Data analysis and interpretation
6.7. General observations
7. Outlook
7.1. Convene stakeholders
7.2. Undertake in-depth exploration of AI tools
7.3. Conclusion
Bibliography


Figures
Figure 2.1. Simplified literature review process
Figure 2.2. Availability of evidence for health research
Figure 2.3. Availability of evidence for education research
Figure 2.4. The lack of a single database leads to divergent literature reviews and meta-analyses
Figure 3.1. Separating technology use for education from technology use for education research
Figure 4.1. AI-based tools for the different steps of the literature review process in education research
Figure 5.1. ChatGPT 4o response: What research evidence is there about teacher allocation in low-income countries?
Figure 6.1. Citation and references for a publication in Web of Science — forward and backward snowballing
Figure 6.2. The same article viewed on the Journal of Computer Assisted Learning (JCAL) website
Figure 6.3. The same article viewed on Google Scholar
Figure 6.4. The same article viewed on Open Development & Education’s evidence library
Figure 6.5. Extract from EdTech Hub’s existing keyword inventory


Abbreviations and glossary


This document avoids the use of abbreviations and acronyms as much as
possible. Nevertheless, we have collated a short list of common acronyms
which readers might encounter in the general literature pertaining to the
topic at hand. The glossary was compiled using ChatGPT.

Glossary
AI (Artificial Intelligence): A branch of computer science that aims to create machines or systems capable of intelligent behaviour, simulating human cognitive processes.

AILR (AI-based Literature Reviews): AI-based Literature Reviews is a phrase coined by ⇡Wagner et al. (2022).

ANN (Artificial Neural Network): A computational model inspired by the structure and function of biological neural networks, used in machine learning to process information and make decisions.

API (Application Programming Interface): A set of rules and tools that allows different software applications to communicate with each other, enabling the exchange of data and functionality.

DL (Deep Learning): A subset of machine learning that involves neural networks with multiple layers (deep neural networks), allowing the model to learn complex representations of data.

Explainable AI (XAI): Explainable AI refers to artificial intelligence systems designed to make their decision-making processes understandable and transparent to humans. It involves methods and techniques that allow users to comprehend how and why an AI model arrives at specific conclusions or predictions, enhancing trust and accountability.

GPT (Generative Pre-trained Transformer): A type of artificial intelligence model that uses transformer architecture and is pre-trained on a large dataset to generate human-like text in a wide range of contexts.

Hallucination: In the context of AI, hallucination refers to instances where a model generates outputs that are not based on real or accurate information, often producing incorrect results.

Large Language Model (LLM): A Large Language Model refers to a type of artificial intelligence model that is designed to understand and generate human-like language based on the patterns it has learned from extensive amounts of textual data. These models are characterised by their large size, typically containing millions or even billions of parameters.

LDA (Latent Dirichlet Allocation): A statistical model used for topic modelling, which identifies topics within a collection of documents and assigns probability distributions to words in those topics.

ML (Machine Learning): A subset of artificial intelligence that focuses on developing algorithms and models that enable computers to learn patterns from data and make predictions or decisions without explicit programming.

NLG (Natural Language Generation): The process of generating natural language text or speech by a computer, often used in applications where human-like communication is required.

NLP (Natural Language Processing): A field of AI that focuses on the interaction between computers and human language, enabling machines to understand, interpret, and generate human-like text.

NLU (Natural Language Understanding): The ability of a machine to comprehend and interpret the meaning of human language, going beyond simple pattern recognition to understand context, semantics, and user intent.

RAG (Retrieval-Augmented Generation): RAG is a natural language processing (NLP) model that combines elements of both retrieval and generation in order to improve the quality and relevance of generated text.
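
To make the retrieve-then-generate pattern concrete, the following minimal sketch (in Python, using scikit-learn for retrieval) assembles a grounded prompt. The corpus, question, and function name are illustrative additions, not drawn from any specific tool; in a real system the final prompt would be sent to an LLM, and dense embeddings would typically replace TF-IDF retrieval.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Paper A: interactive radio instruction improved numeracy outcomes.",
        "Paper B: SMS-based teacher coaching increased lesson preparation.",
        "Paper C: device distribution alone showed no learning gains.",
    ]  # hypothetical abstracts standing in for a retrieval corpus

    def build_rag_prompt(question, docs, k=2):
        # Retrieval step: find the k documents most similar to the question.
        vec = TfidfVectorizer().fit(docs + [question])
        sims = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
        top = sims.argsort()[-k:][::-1]
        context = "\n".join(docs[i] for i in top)
        # Generation step: the grounded prompt is passed to a language model.
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    print(build_rag_prompt("What helps numeracy?", docs))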

Topic modelling: Topic modelling is a natural language processing (NLP) technique used to identify topics present in a collection of text documents. The primary goal is to discover hidden thematic structures within the text data, revealing patterns of co-occurring words that suggest the presence of specific topics or themes. One of the popular methods for topic modelling is Latent Dirichlet Allocation (LDA), though there are others like Non-Negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA).
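
As a minimal illustration of the technique, the sketch below fits a two-topic LDA model with scikit-learn; the tiny corpus and the choice of two topics are hypothetical, chosen only to show the mechanics.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "teacher allocation and teacher deployment in rural schools",
        "tablet programmes and literacy outcomes for early grades",
        "deploying teachers where schools face shortages",
    ]  # hypothetical corpus

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                     # document-term matrix
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # Print the highest-probability words for each discovered topic.
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"Topic {k}: {', '.join(top)}")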

Zotero: A free, open-source reference management tool that helps users collect, organise, cite, and share research.

Using AI to Automate the Literature Review Process in Education​ 6


EdTech Hub

Additional abbreviations
CSV​ Comma-separated value

EEF​ Education Endowment Foundation

FCDO​ Foreign, Commonwealth and Development Office

GUI​ Graphical User Interface

LMIC​ Low- and middle-income country

PICO​ Population, intervention, context, and outcome

PRISMA​ Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT​ Randomised controlled trial

RIS​ Research Information System

VAT​ Value-added tax


1. Introduction
In many research organisations, systematic literature reviews are still conducted manually. This topic brief explores the use of AI to automate the literature review process in the field of EdTech, in order to improve the speed and efficiency of creating literature reviews across the sector.

The topic brief is guided by the following questions, provided as part of the
corresponding helpdesk request to EdTech Hub:

■ What tools are available for organisations and projects (university-based research, research in intergovernmental organisations, donor-funded programmes, within the not-for-profit sector) to use to automate evidence reviews?

■ How appropriate are existing tools? Are they easy to use? Do they have licensing or cost barriers? How advanced are these AI tools?

■ What tools are other organisations using to present the best evidence quickly?

■ Is there value in EdTech Hub building a bespoke AI tool to help identify, review, and synthesise evidence for literature reviews on education in low- and middle-income countries (LMICs)?

■ What are the pros and cons of building a bespoke AI tool focusing on education in LMICs versus using an already existing commercial product? What are the cost implications of the two options?


2. Overview of findings
Several steps need to be undertaken to ensure a literature review is
successful, as outlined below. In many of these steps, existing AI-based
tools can make significant contributions. However, the fields of education
and EdTech specifically (and the social sciences more broadly) face
particular challenges around the availability of evidence.

To illustrate these challenges, Figure 2.1. shows a simplified literature review process. The green box indicates all published research in a specific field, with the yellow box indicating a database that covers part of that published research. This database is used by AI tools for literature reviews (the middle box), resulting in a literature review (or meta-analysis).

Figure 2.1. Simplified literature review process

2.1. AI tools for literature review


Many AI tools for literature reviews are currently available. They are at
different stages of maturity and usability, but many show great promise.

However, it is important to note that AI is more commonly used in disciplines other than education (see the discussion in Section 3 on AI and literature reviews). Nonetheless, despite some specific features of education research, there is no reason to doubt that AI tools are just as applicable to education and EdTech research as they are to other fields.

2.2. Literature inputs


There are significant differences between published research in
education / EdTech and other fields, such as health. In particular, there is a
stark contrast in how research outputs are organised in education
compared to health. In health research, some databases cover the vast
majority of health-related research; this is partly because of the rigorous
registration requirements for medical research (see Figure 2.2. below).


Figure 2.2. Availability of evidence for health research

However, there are no such stringent requirements for education research (e.g., for research registration) and no comprehensive databases. Moreover, grey literature, frequently not indexed at all, can often make significant contributions to education research. Figure 2.3. illustrates this, while Figure 2.4. demonstrates how this leads to divergent reviews.

Figure 2.3. Availability of evidence for education research

Figure 2.4. The lack of a single database leads to divergent literature reviews and
meta-analyses

2.3. Questions posed for this topic brief


As noted above, most current uses of AI in research are in disciplines other than education; for example, where the databases underlying easy-to-use AI tools can be determined, they turn out to have poor coverage of education or EdTech (e.g., the Semantic Scholar database). Most easy-to-use AI tools, while applicable in principle to the field of education or EdTech (see Section 2.1 on AI tools for literature review), do not operate on databases that comprehensively index education / EdTech research; therefore, in practice, these easy-to-use AI tools are not currently helpful to researchers in education / EdTech.

2.3.1. What tools to automate evidence reviews are available? Are they easy to use?
At the time of writing, no AI tools are available that are very easy to use or that would support a comprehensive literature review from start to finish. Section 5 on integrated literature review tools discusses tools that offer support at all stages of the process. These tools are ⇡EPPI-Reviewer (5.1.1.), ⇡ASReview (5.1.2.), ⇡DistillerSR (5.1.3.), and ⇡Colandr (5.1.4.).

It is possible to use a variety of AI tools which support different steps of the literature review process; these are known as semi-integrated tools. ⇡Iris.ai (5.2.1.), ⇡Lateral.io (5.2.2.), ⇡SciSpace (5.2.3.), and ⇡Scanlitt (5.2.4.) are examples of semi-integrated tools and are discussed in further detail in Section 5.2.

Figure 4.1. in Section 4.2. further outlines the potential for AI support in the
literature review process, identifying various web-based tools that could be
used at each step, and their cost. Examples of AI tools that could be used
for the literature review process, and mentioned in Figure 4.1., include, for
example, tools for problem formulation (⇡Elicit), literature search (⇡LitSonar,
⇡Elicit, ⇡ORKG Ask, and ⇡EPPI-Reviewer), screening for inclusion (⇡ASReview,
⇡Rayyan), quality assessment (⇡RevMan, ⇡RobotReviewer), data extraction
(⇡Elicit), and data analysis and interpretation (⇡RevMan, ⇡dmetar).

Web-based AI tools like ⇡Elicit are useful for examining research questions quickly and without needing specialist knowledge; however, such tools often operate on a very limited selection of literature relevant to education and EdTech. Based on prior experiments, the fraction of literature covered may be 10% or less.

Such simple-to-use tools must be compared with established tools, e.g., ⇡EPPI-Reviewer (which can also use the ⇡OpenAlex dataset) and ⇡ASReview. These established tools tend not to foreground their use of AI but nevertheless often incorporate powerful AI tools. Tools like ⇡EPPI-Reviewer are not as simple to use as some available web-based tools; however, unlike web-based tools, they offer a significantly more rigorous and transparent approach. While using ⇡EPPI-Reviewer involves a learning curve, the learning is appropriate to research-level use and aims, i.e., to generate best-evidence synthesis.

Many emerging web-based tools are aimed at a broader audience, perhaps including students and researchers examining topics casually. While some of these new tools may become major contenders for creating high-quality reviews, as things stand, tools like ⇡EPPI-Reviewer will create more credible outputs. That is not to say that you should not consult web-based tools like Google Scholar; however, it is advisable to supplement such tools with other literature databases for rigorous work.

In addition to integrated tools, other significant toolkits are readily available for researchers with extensive technological skills; such toolkits can be applied to education and EdTech research, but they are not easy to use.

2.3.2. What about licensing or cost barriers?


Open-source tools without subscription costs are available, but they have
limited capabilities. The most advanced open-source tool is ⇡ASReview.
Most commercial tools have a moderate associated cost, typically per
month, per user (e.g., GBP 15 / month / user). Figure 4.1. in Section 4.2.
identifies the cost of web-based tools associated with each step of the
literature review process.

2.3.3. How appropriate are the tools that are currently available? How advanced are they?
A variety of tools, with different degrees of appropriateness and levels of
advancement, are available. Currently, no tool can fully automate the
literature review process. However, many advanced and effective tools can
support specific steps within a literature review (see Section 5 on integrated
literature review tools). For example, ⇡EPPI-Reviewer has an established
track record and is widely used for some world-leading syntheses produced
across diverse research fields.

A challenge with web-based tools is the need to check the input literature
on which they operate. Semantic Scholar is one of the databases that has
been available for some time and is widely used. However, it does not index
education research publications extensively. Moreover, the precise AI
processes used by web-based tools are typically not open to inspection.
Such tools are only useful for gathering quick impressions. They would only
be one component of more rigorous work: both AI-based and
non-AI-based processes need to be transparent and ‘explainable’ (in the
sense of ‘explainable AI’).

The question ‘How appropriate are these existing tools?’ is answered more comprehensively in Section 5. We discuss integrated literature review tools, such as fully integrated tools (5.1.), semi-integrated tools (5.2.), those that focus on literature search and discovery (5.3.), literature screening and categorisation (5.4.), and summarisation and writing assistance (5.5.). In addition, general purpose Large Language Models (5.6.), GPT (Generative Pre-trained Transformer) Researcher (5.7.), and other tools (5.8.) are explored.

Regarding the question, ‘How advanced are these AI tools?’, Section 6 indicates how appropriate and effective various tools are in the areas of problem formulation (6.1.), literature searches (6.2.), screening for inclusion (6.3.), quality assessment (6.4.), data extraction (6.5.), and data analysis and interpretation (6.6.). See Figure 4.1. for an overview of different AI tools appropriate to the various steps of a literature review.

2.3.4. What tools are other organisations using to quickly and efficiently present a synthesis of the best evidence?
The answer to this question depends on the perspective. As noted in the
previous sections, there are currently no tools that are (1) very easy to use
(e.g., a web-based interface in the style of Google Scholar) or (2) that
operate on the right input data and which would, therefore, reliably
automate the literature review process fully. However, the ⇡EPPI-Reviewer
ecosystem and tools like it offer a platform that can significantly accelerate
evidence synthesis.

2.3.5. Is there value in building a bespoke AI tool for ‘education in LMICs’ to help identify, review, and synthesise evidence for literature reviews?
There are several different answers to this question.

1. Should EdTech Hub experiment with AI tools to help identify, review, and synthesise evidence for literature reviews? In our view, the answer to this is yes. The potential payoff is very high and offers a significant first-mover advantage at this time.

2. Would it be appropriate for EdTech Hub to independently build a bespoke AI tool for education in LMICs to help identify, review, and synthesise evidence for literature reviews? If this were to be a comprehensive, easy-to-use tool, the effort for this — if carried by EdTech Hub alone — is unlikely to be justified (or indeed maintainable) beyond the life of the Hub. It should also be noted that, given recent advances (⇡EPPI-Reviewer, ⇡OpenAlex, ⇡ASReview), the challenge would not necessarily be the development of new tools but rather the input data (i.e., the literature collection) curated for use by the AI tools.

3. Should EdTech Hub collaborate with others to explore building a bespoke AI tool for education in LMICs to help identify, review, and synthesise evidence for literature reviews? Several organisations are interested in organising and synthesising evidence. EdTech Hub could play an important role in stimulating the sector to advance the use of AI for education in LMICs (including bespoke tools). Such an effort would necessarily involve building comprehensive databases, and encouraging others to register their research (typically in Crossref).

2.3.6. What are the pros and cons of building a bespoke AI tool focusing on education in LMICs, versus using an existing commercial product? What are the cost implications of the two options?
As noted above, a significant issue is the absence of a comprehensive
database and existing commercial products that deliver on literature
reviews for education / EdTech in LMICs.

EdTech Hub has already done some work on using AI tools for literature
reviews (⇡Haßler et al., 2021k). Some of these approaches were reused in a
programme outside EdTech Hub, namely the England-focused
collaboration between Open Development & Education and the Education
Endowment Foundation (EEF) in a literature review focused on EdTech for
disadvantaged children (⇡Haßler et al., 2024).

Two activities would be conducive to exploring this question further and answering it more coherently, while also considering our prior work in this area:

1. Convene stakeholders: A moderate amount of funding could be utilised to explore AI collaboratively with a network of multiple stakeholders to coordinate and synthesise efforts, maximise outcomes, and determine how costs could be shared. (Perhaps 20% full-time equivalent (FTE) over one year, plus some moderate travel expenses.)

2. Undertake in-depth exploration of AI tools: A moderate amount of funding could enable an in-depth exploration, which would involve several trials of software-developer-level AI tools, to illustrate the potential impact and determine costs.

This exploration could include an extensive trialling of ⇡EPPI-Reviewer and ⇡ASReview, including some workshops for researchers from LMICs, to explore barriers to using these tools.


In addition, we propose conducting a coverage assessment of the ⇡OpenAlex dataset. ⇡OpenAlex already categorises publications according to Sustainable Development Goals (SDGs) (⇡UN, no date). This existing categorisation could be further extended and tailored to the needs of education / EdTech within LMICs, including uses in ‘living reviews’.
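
A coverage assessment of this kind can be approximated with OpenAlex’s public REST API, which resolves works by DOI. Below is a minimal sketch, assuming a sample of DOIs drawn from an existing evidence library; the DOI list and contact address are placeholders, and error handling is simplified.

    import requests

    def openalex_coverage(dois, mailto="research@example.org"):
        """Return the fraction of the given DOIs that OpenAlex has indexed."""
        found = 0
        for doi in dois:
            # OpenAlex resolves works by DOI; a 404 means the work is not indexed.
            url = f"https://api.openalex.org/works/https://doi.org/{doi}"
            r = requests.get(url, params={"mailto": mailto})  # polite-pool etiquette
            if r.status_code == 200:
                found += 1
        return found / len(dois)

    # e.g., sample 200 DOIs from an evidence library and report the coverage:
    # print(f"OpenAlex coverage: {openalex_coverage(sample_dois):.0%}")

Grey literature without DOIs would need a title-based search instead, which is precisely where coverage gaps are most likely.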

The activities mentioned above would allow informed, evidence-based pathways to be formed, working towards better use of AI tools to help identify, review, and synthesise evidence for literature reviews for education / EdTech in LMICs.

These activities are discussed in more detail in Section 7 as part of two overall recommendations (see Sections 7.1. and 7.2.).


3. AI and literature reviews


AI is beginning to transform many aspects of society, including the use of
AI in education, for example, with a focus on improving learning outcomes.
Figure 3.1. presents a visual representation of the use of technology for
education (and AI within this), as well as the use of technology in research
(and the use of AI within this). While the public interest in AI and the use of
AI in education has been very visible, the use of AI in education research
has been less so.

Figure 3.1. Separating technology use for education from technology use for
education research

However, AI is also impacting traditional research practices in many areas. The potential of AI to augment and partially automate research has sparked vivid debates in many scientific disciplines, including the health sciences (⇡Adams et al., 2013; ⇡Tsafnat et al., 2014), biology (⇡King et al., 2009), and management (⇡Johnson et al., 2019).

Literature reviews are no exception to this: literature reviews draw on large and rapidly growing volumes of documents, i.e., partially structured (meta)data. Perhaps the use of AI for literature reviews has been less visible because methodological research on literature reviews could be considered a niche area. However, in a field like educational technology research, one would expect technology use for research to be much more prevalent than it is currently.

In contrast, the natural science disciplines have utilised preprint servers for over 30 years (e.g., arXiv was founded in 1991), and these preprint servers are used very widely; such practices are far less common in education research. Specifically on the absence of AI techniques for literature reviews in education research, we note, for example, that the review article ‘Automation of systematic literature reviews: A systematic literature review’ (⇡van Dinter et al., 2021) provides no examples from the field of education; likewise, a review of 50 years of conceptual modelling by ⇡Storey et al. (2023) provides no reference to the fields of education or social sciences.

However, there is clearly significant potential for using AI in education and EdTech literature reviews. Similarly to ⇡Wagner et al. (2022), writing in the field of information sciences, we believe that advancing knowledge in this area is promising because:

■ rigorous standalone review projects require substantial efforts over a period of months;

■ the volume of reviews published in education journals has been rising steadily, supplemented by a large and quickly growing body of grey literature;

■ literature reviews involve tasks that fall somewhere on a spectrum between the mechanical and the creative.

Currently, the process of reviewing literature is mostly undertaken manually; while sample sizes are increasing — making more publications available — the increased sample sizes stretch resources and cognitive limits. The lack of comprehensive literature databases exacerbates the issue in education research, making processes even more onerous. Consequently, many review articles, problematically, do not have comprehensive coverage, often restricting their scope to a few top journals or search engines (⇡Haßler et al., 2021k; ⇡Xiao & Watson, 2017).

Although we are particularly interested in tools powered by ‘advanced AI’ (such as Large Language Models), we also consider more established uses of AI, such as topic modelling.


4. Approach for this topic brief


The literature review process involves both creative and mechanical tasks;
this creates viable opportunities for advanced AI-based tools to reduce the
level of effort needed by prospective authors for time-consuming and
repetitive tasks, leaving them free to dedicate more time to creative tasks
that require human interpretation, intuition, and expertise (⇡Tsafnat et al.,
2014).

⇡Saeidmehr et al. (2023) note that published research is seeing exponential growth, resulting in:

“a doubling of the scientific corpus for many fields every nine years, a
trend that reflects the steady increase in the number of researchers
and can be readily confirmed as having continued or even
accelerated.” (⇡Saeidmehr et al., 2023, p. 1)
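
To put that pace in concrete terms, a nine-year doubling time corresponds to compound growth of roughly 8% per year, since

    2^(1/9) = e^(ln 2 / 9) ≈ 1.080

i.e., the scientific corpus grows by about 8% each year.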

While new AI tools and platforms are emerging rapidly, ⇡Teijema et al. (2023) note the

“disparity between the rapid development of these methodologies and their rigorous evaluation.” (⇡Teijema et al., 2023, p. 3)

4.1. Methodology for this topic brief


Our review borrows from the highly relevant review by ⇡Wagner et al. (2022), who in turn collected evidence by surveying previous literature reviews of AI-based tools (e.g., ⇡Al-Zubidy et al., 2017; ⇡Harrison et al., 2020; ⇡Jonnalagadda et al., 2015; ⇡Kohl et al., 2018; ⇡Marshall & Wallace, 2019; ⇡Tsafnat et al., 2014; ⇡van Dinter et al., 2021) and online registries (i.e., the Systematic Review Toolbox; see http://systematicreviewtools.com, retrieved 29 August 2024). We complemented this evidence by additional searches (with Google Scholar), reviewing papers that cited ⇡Wagner et al. (2022), reviewing selected references, new review articles (policy analysis: ⇡Lemire et al., 2023; bibliometrics: ⇡Rowe et al., 2023; ⇡Sarin et al., 2023; ⇡Smith, 2023), as well as further online databases (⇡van de Schoot, 2023; ⇡Future Tools — Find The Exact AI Tool For Your Needs) and library guides (⇡Xiao, no date).

We also reviewed prior EdTech Hub outputs pertaining to evidence review and synthesis, specifically:


■ Methodology for literature reviews (⇡Haßler et al., 2021k, based on ⇡Haßler et al., 2019h).

■ Literature Reviews of Educational Technology Research in Low- and Middle-Income Countries: An audit of the field (⇡Haßler et al., 2019g, and associated blog post: ⇡Jordan, 2019).

■ A scoping review of technology in education in LMICs — descriptive statistics and sample search results (⇡Haßler et al., 2020v).

In addition to the literature research, we also corresponded with selected researchers in this field, who provided additional inputs.

The review of these sources surfaced several tools, including tools reviewed by ⇡Wagner et al. (2022) and new tools; these tools were examined and tested if relevant to the topic of this review.

4.2. Results
An overview of the new tools discovered and considered relevant to this topic brief is available in Figure 4.1. below.

Some tools support multiple steps of the review process (e.g., ⇡EPPI-Reviewer). These tools tend to focus on data, workflow, and collaboration management functionality without necessarily drawing on AI capabilities. This commentary focuses on tools supporting individual steps because they tend to be more amenable to code inspection and extension (i.e., published under open-source, non-commercial licences) and independent validation. These tools are described in Section 5.

Tools specific to individual stages of the literature review process are detailed in Section 6, which discusses the steps of the literature review process, outlining relevant AI-based tools that could be used at each step.


Figure 4.1. AI-based tools for the different steps of the literature review process in education research.

Step 1. Problem formulation
■ Programming libraries supporting thematic analyses based on Latent Dirichlet Allocation models (example paper: ⇡Antons & Breidbach, 2017; example from education research: ⇡Bhutoria, 2022). Cost: N/A.
■ Graphical User Interface (GUI) applications and programming libraries supporting scientometric analyses (⇡Swanson & Smalheiser, 1997). Cost: N/A.
■ Web-based tools: ⇡Elicit. Cost: free trial (5,000 credits), pay-as-you-go (USD 1 per 1,000 credits), and enterprise plan (custom pricing).
Potential for AI support: moderate, with AI potentially pointing researchers to promising areas and questions or verifying research gaps.

Step 2. Literature search
■ TheoryOn (⇡Li et al., 2020) enables ontology-based searches for constructs and construct relationships in behavioural theories. Cost: N/A.
■ ⇡litbaskets (⇡Boell & Wang, 2019) supports researchers in setting a manageable scope in terms of journals covered. Cost: the service is realised through Scopus, so users need access to Scopus to utilise its features.
■ ⇡LitSonar (⇡Sturm & Sunyaev, 2019) offers syntactic translation of search queries for different databases; it also provides (journal) coverage reports. Cost: currently provided only to members of cooperating institutions due to licensing restrictions.
■ ⇡Elicit. Cost: see above.
■ ⇡ORKG Ask. Cost: free to use / open source.
■ ⇡EPPI-Reviewer, web-based software designed to manage and analyse literature review data, including systematic reviews, meta-analyses, framework syntheses, and thematic syntheses; it is developed and maintained by the EPPI Centre at the UCL Institute of Education, University of London. Cost: user fee of GBP 10 per month, which gives access to the software and the ability to create and maintain an unlimited number of non-shareable reviews; shareable review fee of GBP 35 per month for each shareable review (all users in a shareable review must have active user accounts); site licences are available for organisations conducting multiple reviews with many users.
Potential for AI support: very high, since the most important search methods consist of steps that are repetitive and time-consuming, that is, amenable to automation.

Step 3. Screening for inclusion
■ ⇡ASReview (⇡van de Schoot et al., 2021) offers screening prioritisation. Cost: free and open-source software.
■ The automated detection of implicit theory (ADIT) approach (⇡Larsen et al., 2019), for researchers capable of designing and programming machine learning classifiers (research on the Technology Acceptance Model).
■ ⇡Rayyan. Cost: individual plans: free plan (USD 0, free forever), professional plan (USD 8.25 per month, billed annually), student plan (USD 4 per month, billed annually); teams plans: Pro team (USD 8.25 per user, per month, billed annually), Teams+ (USD 24.99 per user, per month, billed annually); enterprise plans: custom pricing.
Potential for AI support: high potential for semi-automated support in the first screen, which requires many repetitive decisions; moderate potential for the second screen, which requires considerable expert judgement (especially for borderline cases).

Step 4. Quality assessment (meta-analysis, qualitative systematic reviews)
■ Statistical software packages (e.g., ⇡RevMan). Cost for individuals: standard rate GBP 100.00 + VAT (EU & UK), academic GBP 85.00 + VAT (EU & UK), student GBP 65.00 + VAT (EU & UK); for organisations: custom pricing.
■ ⇡RobotReviewer (⇡Marshall et al., 2015) for experimental research. Cost: pricing information is not available; the source code is available on the website.
Potential for AI support: low to moderate potential for semi-automated quality assessment.

Step 5. Data extraction
■ Software for data extraction and qualitative content analysis (e.g., Nvivo and ATLAS.ti) offers AI-based functionality for qualitative coding, named entity recognition, and sentiment analysis.
■ WebPlotDigitizer and Graph2Data for extracting data from statistical plots.
■ ⇡Elicit.
Potential for AI support: moderate for reviews requiring a formal data extraction (descriptive reviews, scoping reviews, meta-analyses, and qualitative systematic reviews); high for objective and atomic data items (e.g., sample sizes); low for complex data which has ambiguities and lends itself to different interpretations (e.g., theoretical arguments and main conclusions).

Step 6. Data analysis and interpretation
■ Descriptive synthesis: tools for text mining (⇡Kobayashi et al., 2017), scientometric techniques and topic models (⇡Nakagawa et al., 2019; ⇡Schmiedel et al., 2019), and computational reviews aimed at stimulating conceptual contributions (⇡Antons et al., 2021).
■ Theory building: examples of inductive (computationally intensive) theory development (e.g., ⇡Berente et al., 2019; ⇡Lindberg, 2020; ⇡Nelson, 2020).
■ Theory testing: tools for meta-analyses (e.g., ⇡RevMan and ⇡dmetar).
Potential for AI support: very high for descriptive syntheses; moderate for (inductive) theory development and theory testing; low to non-existent for reviews adopting traditional and interpretive approaches.

5. Integrated literature review tools


We note that several ‘integrated’ tools offer larger workflows, in some cases covering the full literature review workflow. They offer multiple AI / tech tools across the literature review process, but no single tool can carry out a comprehensive review.

This section reviews integrated tools that cover multiple stages of the
literature review workflow. Many of these tools are web-based. Section 6
below discusses tools that are specifically relevant at different stages of a
literature review.

The present section makes the case that:

■ With some notable exceptions, many integrated web-based tools have unknown or poor coverage, which is problematic.

■ Such integrated web-based tools often do not state what precise workflows or algorithms are used, which is also problematic.

■ However, such tools often have very attractive, easy-to-use interfaces, which is helpful, and allows them to compete with more established tools.

Below, we build on Figure 4.1. by identifying AI tools that support education-related literature reviews by being fully integrated (5.1.) or semi-integrated (5.2.). We discuss ⇡EPPI-Reviewer (5.1.1.) and ⇡ASReview (5.1.2.) in more detail while exploring additional tools that could potentially be used to assist with literature reviews; specifically, we review ⇡DistillerSR (5.1.3.) and ⇡Colandr (5.1.4.), and the semi-integrated tools ⇡Iris.ai (5.2.1.), ⇡Lateral.io (5.2.2.), ⇡SciSpace (5.2.3.), and ⇡Scanlitt (5.2.4.), focusing on the extent of their functionality.

5.1. Fully integrated tools


This subsection presents fully integrated tools which offer support at all
stages of the literature review process.

5.1.1. EPPI-Reviewer
⇡EPPI-Reviewer is subscription-based, web-based software that assists with several types of literature reviews (meta-analyses, systematic reviews, narrative reviews, meta-ethnographies, and more). It is capable of managing and analysing both large- and small-scale data. ⇡EPPI-Reviewer can automate several processes in literature reviews, including deduplication, clustering, and screening, and it contains five different study-type classifiers (randomised controlled trials, systematic reviews, economic evaluations, Covid-19 categories, and Long Covid categories). It also integrates with ⇡RobotReviewer, a machine-learning system supporting evidence synthesis and classification. ⇡EPPI-Reviewer can import and manage references and store PDF files. It has a coding function that allows users to include or exclude different parameters for a review. Using a complementary tool, users can also generate evidence gap maps based on a literature review conducted in ⇡EPPI-Reviewer. The latest version (June 2020) is ⇡EPPI-Reviewer 6, although it uses the same data as Version 4.

Overall, ⇡EPPI-Reviewer works best as a tool for screening and extracting data for review. However, ⇡EPPI-Reviewer can also assist with literature searches, as it can be integrated with the ⇡OpenAlex database. On its own, ⇡EPPI-Reviewer cannot conduct an initial literature search; it functions best when identified studies are imported into the software for screening and extraction. The use of ⇡EPPI-Reviewer with a suitable automated literature search tool holds promise for improving the timeliness and efficiency of literature review workflows. The EPPI-Reviewer site lists AI-based automation tools (⇡EPPI Reviewer, no date).
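
The basic idea behind automated deduplication can be sketched in a few lines: normalise titles so that near-identical records collapse to the same key. This is a minimal, standard-library sketch assuming a CSV export with a ‘title’ column; production tools such as ⇡EPPI-Reviewer use considerably more sophisticated matching (e.g., combining title, year, and DOI with fuzzy similarity).

    import csv
    import re

    def normalise(title):
        """Lower-case and strip punctuation so near-identical titles match."""
        return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

    def deduplicate(path):
        seen, unique = set(), []
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):   # assumes a 'title' column
                key = normalise(row["title"])
                if key not in seen:
                    seen.add(key)
                    unique.append(row)
        return unique

    # records = deduplicate("search_results.csv")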

5.1.2. ASReview
⇡ASReview (⇡van de Schoot et al., 2021) offers a promising open-source
option with its range of machine learning classifiers (including naive Bayes,
support vector machines, logistic regression, and random forest classifiers).
It learns from initial inclusion decisions and leverages these insights to
present researchers with a prioritised list of papers (i.e., the titles and
abstracts), proceeding from those most likely to be included to those least
likely.

⇡ASReview is open source, so it could potentially connect with the OpenAlex database. ⇡ASReview uses active learning algorithms to reorder records based on their predicted relevance. This approach helps researchers find relevant records more quickly than traditional methods, potentially reducing screening time by up to 95% (⇡van de Schoot et al., 2021). Users provide initial input by labelling at least one relevant and one irrelevant record. The software’s AI model, Elas, then learns from these decisions to present the next most relevant record for screening. ⇡ASReview doesn’t generate results from specific databases; instead, it allows users to import their own datasets collected from various research databases, such as Web of Science and PubMed. The software supports datasets in RIS, CSV, and Excel formats. ⇡van de Schoot (2023) notes that ⇡ASReview, developed at Utrecht University, helps scholars and practitioners to get an overview of the most relevant records for their work as efficiently as possible while being transparent in the process. It offers multiple machine learning models, and features exploration and simulation modes, which is especially useful for comparing and designing algorithms. Furthermore, it is intended to be easily extensible, allowing third parties to add modules that enhance the pipeline with new models, data, and other extensions.
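
To illustrate the kind of active-learning loop described above, here is a minimal sketch using scikit-learn. It shows the technique (train on the labelled records, rank the rest by predicted relevance), not ASReview’s actual API; the variable names and seed labels are hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def prioritise(texts, labelled, labels):
        """Rank unlabelled records, most likely relevant first."""
        X = TfidfVectorizer(stop_words="english").fit_transform(texts)
        clf = MultinomialNB().fit(X[labelled], labels)  # labels: 1=include, 0=exclude
        rest = [i for i in range(len(texts)) if i not in set(labelled)]
        scores = clf.predict_proba(X[rest])[:, list(clf.classes_).index(1)]
        return [i for _, i in sorted(zip(scores, rest), reverse=True)]

    # Seed with at least one relevant and one irrelevant decision, then screen
    # the top-ranked record, add its label, and re-run: the ranking improves
    # with every decision.
    # queue = prioritise(abstracts, labelled=[3, 17], labels=[1, 0])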

5.1.3. DistillerSR
⇡DistillerSR is subscription-based, web-based systematic review software that uses AI and intelligent workflows to automate the management of every stage of a literature review: searching, screening, text retrieval, data extraction and appraisal, and reporting. All stages are configurable based on user needs.

⇡DistillerSR offers AI-driven duplicate detection and is integrated with data providers to enable automated literature searches and reference importing. AI is used to automate screening based on user preferences, and its developers claim it reduces screening time by up to 60%. AI is also used to identify conflicts and provide a quality check of screening.

The tool has existing content-provider integrations and ad-hoc document retrieval to assist with the collection and copyright management of full-text articles. Data extraction is supported by pre-built templates and configurable forms that support validations, calculations, and cleaning of complex datasets. The software includes a customisable reporting engine to generate and disseminate reports and updates and allows for integration with third-party reporting applications. It also provides an audit trail. In particular, the tool supports project management and provides real-time metrics to monitor teams and progress.

⇡DistillerSR is a comprehensive and customisable tool and a very promising application of AI for the literature review process. However, the current iteration seems heavily focused on supporting medical studies, and many third-party integrations have a similar focus. As such, it may be less efficient for literature reviews in other fields. It is also important to note that many of the ⇡DistillerSR functions described here are divided into different modules (CuratorCR, LitConnect, AI Classifiers, Application Programming Interface (API) Integration). The provider does not necessarily sell these modules together; their inclusion depends on the subscription plan.


5.1.4. Colandr
⇡Colandr (machine-learning assisted) is a free, web-based, open-access tool
designed for conducting evidence synthesis projects, including systematic
and scoping reviews. It supports various stages of the systematic review
process, including protocol development, citation deduplication, article
screening, data extraction and coding, and manuscript development. The
tool employs machine learning to facilitate evidence synthesis, optimising
the process of citation sorting by relevance and semi-automating the
classification of included documents.

Colandr can be used collaboratively with teams of any size, supporting


cooperative work and cross-checking between team members. It allows
users to upload PDFs and extract data from full texts using natural
language processing. Extracted data and screening decisions can be
exported in CSV format (see Colandr for Systematic Reviews [⇡Kahili-Heede,
no date]).

5.2. Semi-integrated tools


Several tools do not support the entire literature review process in the same
way as the above tools do (5.1.); however, they still offer a somewhat
integrated environment or can be integrated with various other software
environments, which is why we refer to these tools as ‘semi-integrated’.

5.2.1. Iris.ai
⇡Iris.ai is a forthcoming subscription service that provides an AI-driven
‘Researcher Workspace’ tool suite to assist with research and systematic
reviews. The platform holds value in using AI to support several steps in the
literature review process, including initial search, screening, data extraction,
and analysis.

The tool suite includes different modules that assist and help automate
content-based searches, context and data filtering, data extraction and
systematisation, and the analysis of document sets. It can also provide
automated summaries of included papers and allows users to distil insights
through a chat feature, enabling interactions between researchers and
data insights. It claims the Researcher Workspace can save up to 75% of
manual effort in the research process.

However, the tool currently appears to be focused on industry- and science-related research topics. It is unclear what integrations the platform supports and the extent to which it accesses education research. The website does not offer a standardised subscription model.

5.2.2. Lateral
⇡Lateral.io is a subscription service that focuses on using AI-powered tools
to assist with the organisation and process of literature search, screening,
and data extraction. It offers users a paper search integrated with different
third-party applications, such as Semantic Scholar, to search for relevant
literature. It also has several AI-powered tools, such as concept recognition, a smart PDF reader, and a search function to help automate literature screening and data extraction.

⇡Lateral.io is a promising tool that can enable the automation of the early
stages of a literature review to help streamline workflows. However, it is
unclear how robust the literature search function is for education research.

5.2.3. SciSpace
⇡SciSpace, formerly known as Typeset, is a platform designed to streamline
the research workflow. SciSpace facilitates the discovery, creation, and
publication of research papers. It offers tools for understanding academic
texts in simpler language and finding connected papers, authors, and
topics. It is an AI-powered tool that aids in comprehending and elaborating
academic texts. SciSpace is best suited for researchers, academic
professionals, and students involved in writing, collaborating, and
publishing research papers. SciSpace has a forever free plan with limited
feature access. SciSpace Premium is available for USD 12 per month, billed
annually, and custom pricing is available for teams and enterprises.

5.2.4. Scanlitt
⇡Scanlitt is a digital research assistant platform designed to streamline
literature review and knowledge acquisition for the scientific community
with the following core features:

■ ARTIREV: This feature of Scanlitt helps in identifying relevant scientific articles, clustering them for better understanding, and prioritising readings. It caters to different use cases, such as for academics, students, medical practitioners, and institutions, providing tailored solutions for each. ARTIREV’s process involves downloading, cleaning, analysing data, and interpreting the results. This process is presented through a dynamic interface with radial dendrograms for quick identification of article groups, accompanied by word clouds and additional information for each article.

■ DATAMAN: A bibliographic database specifically for management science. It indexes journals in the management field and is used for tracking publications, authors, institutions, or thematic searches using keywords.

Pricing and subscription options:

■ Offline and in-depth exploration of literature: Offers offline use, comprehensive analysis without article limits, compatibility with various databases, and availability for individual or landline subscriptions.

■ Online and in-depth exploration of a scientific object: This option is available online, supports multiple devices, provides complete analysis limited to 500 articles, and is compatible with various databases. It offers both individual and institutional subscriptions.

■ Access to relevant knowledge with a click: Scanlitt is compatible with multiple devices, providing simplified results limited to 120 articles, and is compatible with DATAMAN data via API.

■ Scanlitt offers a free trial of ARTIREV, allowing potential users to explore its functionalities before subscribing. However, during our exploration of the platform, we noted that support videos are in French.

5.3. Focus on literature search and discovery

5.3.1. Semantic Scholar


Semantic Scholar is a free academic search engine that uses machine
learning to provide brief summaries of literature. Users can apply key topics
or concepts to search from over 214 million papers to identify appropriate
papers to include in a literature review. It can provide short summaries of
papers to help identify potentially relevant papers; it also provides
organisational features such as an online library, which works with an
AI-powered research feed to enable literature recommendations. Other tools include citation analysis and entity recognition; Semantic Scholar also provides information on the open-access status of papers.

Semantic Reader is an emerging application that aids the reading of academic literature, using AI to support comprehension. At the time of writing, it is available only for arXiv papers on Semantic Scholar (and for most, though not all, of those). Features are growing, and it can currently provide citation cards that include summaries, tables of content, and online library integration.

Fundamentally, Semantic Scholar is a research search engine that uses AI-powered tools to help with literature searches and citations. It can be a valuable tool to integrate into the early-stage workflow of the literature review process. That said, it currently focuses on scientific papers, and the search function is not as robust for education research.
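
For researchers who want to script such searches, Semantic Scholar also exposes a public Graph API. A minimal sketch follows; the query string is illustrative, and heavier use requires requesting an API key.

    import requests

    def search_semantic_scholar(query, limit=20):
        """Search Semantic Scholar's Graph API for papers matching a phrase."""
        r = requests.get(
            "https://api.semanticscholar.org/graph/v1/paper/search",
            params={
                "query": query,
                "limit": limit,
                "fields": "title,year,abstract,externalIds",
            },
        )
        r.raise_for_status()
        return r.json().get("data", [])

    # for paper in search_semantic_scholar("teacher allocation low-income countries"):
    #     print(paper.get("year"), paper.get("title"))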

5.3.2. Research Rabbit


⇡Research Rabbit is a free, AI-powered tool that assists with searching,
organising, and curating literature. Its unique function provides
visualisations of ‘paper networks’ based on the topics or papers the
researcher inputs into the search engine. These visual networks can be
used to explore and / or curate literature. It also functions as a collaborative
workspace; multiple users can access and comment on different reviews
and visualisations. It can also provide personalised recommendations and
is integrated with Zotero for citation purposes.

5.3.3. Consensus
⇡Consensus is a search engine in its beta phase that uses language models
to identify and synthesise insights from academic research papers.
Consensus’ source material comes from the Semantic Scholar database.
The main purpose of ⇡Consensus is to provide a list of up to 20 of the most relevant papers related to a research question or phrase input into the engine. The language model then ranks the search results by relevance to the query.

⇡Consensus’ value lies somewhere between literature search and early data
extraction. While it cannot be used to search a high volume of papers, it
can help produce general insights from the most relevant literature on a
topic. Used in conjunction with other tools, it can improve the early
workflow of a literature review process. Its use of the Semantic Scholar
database might limit its applicability to education research.

5.4. Focus on literature screening and categorisation

5.4.1. Covidence
⇡Covidence is a tiered-subscription, web-based tool that assists with systematic reviews. It is aimed at supporting institutions and organisations, such as universities, government organisations, and research institutes. As such, ⇡Covidence enables small and large teams to collaborate on the review process. ⇡Covidence works with reference management software such as Zotero to import citations for review.
Covidence is better described as an assistive and user-friendly tool that aids in literature review screening, deduplication, and risk-of-bias assessment.

5.4.2. Abstrackr
⇡Abstrackr is developed and maintained by the Center for Evidence
Synthesis in Health at Brown University. It is a free, open-source, web-based
application aimed at optimising the citation screening step for systematic
reviews. The tool includes a web-based annotation feature that allows review
participants to screen citations for relevance collaboratively. It employs
machine learning to semi-automate the citation screening process, although
this functionality is still in development. The software allows citations to
be imported from sources such as RefMan or PubMed, provides single- and
double-screening of citations, and offers a decision reconciliation mode for
reviewing citations of unclear relevance.
⇡Abstrackr is best suited to researchers, academics, and professionals
involved in conducting systematic reviews, particularly in the biomedical
field. It is designed to help these users manage the growing volume of
biomedical literature and make systematic reviews less onerous. As a free
tool, it is accessible to a wide range of users, including those with budget
constraints.

5.4.3. RobotAnalyst
⇡RobotAnalyst (National Centre for Text Mining) is a web-based software
tool developed to assist in the literature screening phase of systematic
reviews. It combines text-mining and machine learning algorithms to
organise references by content and prioritise them based on a relevancy
classification model that is trained and updated throughout the process.
This tool is particularly useful for researchers and professionals engaged in
systematic reviews, helping them to manage and prioritise a large volume of
literature efficiently. According to ⇡van de Schoot (2023),

“RobotAnalyst was developed as part of the Supporting Evidence-based
Public Health Interventions using Text Mining project to support the
literature screening phase of systematic reviews.”

RobotAnalyst is free to use.


5.4.4. SWIFT-Active Screener


⇡SWIFT is an acronym for Sciome Workbench for Interactive
Computer-Facilitated Text-mining. SWIFT-Active Screener is a web-based,
collaborative software application specifically designed for systematic
reviews. It aims to reduce the time and effort required in the literature
screening phase of systematic reviews. It employs statistical and
computational methods to prioritise articles for inclusion in systematic
reviews. Moreover, it includes an algorithm to estimate recall while users
work, providing a statistical basis for deciding when to stop screening. The
application significantly reduces the screening burden compared to
traditional methods, achieving high recall rates with fewer articles
screened. For example, in tests on diverse systematic reviews, it resulted in
an average 54% reduction in screening burden while maintaining 95%
recall or higher. The tool is designed to be easy to use, with a simple yet
powerful graphical user interface, and offers rich project status updates.
SWIFT-Active Screener is best suited for researchers, academics, and
professionals conducting systematic reviews, particularly in areas like
government, industry, and non-profit research organisations. It is free for
the public to use.
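SWIFT-Active Screener’s exact estimator is not described here; the sketch below is a simple score-based heuristic that conveys the general idea of estimating recall during prioritised screening: treat the classifier’s probabilities for unscreened records as the expected number of relevant records still to be found. All numbers are illustrative.

def estimated_recall(n_relevant_found, unscreened_scores):
    """Rough recall estimate during prioritised screening.

    A simple score-based heuristic (not SWIFT's published algorithm):
    each unscreened item's model probability is treated as its expected
    contribution of relevant records remaining.
    """
    expected_remaining = sum(unscreened_scores)
    expected_total = n_relevant_found + expected_remaining
    return n_relevant_found / expected_total if expected_total else 1.0

# Example: 38 relevant records found so far; the classifier assigns
# mostly low probabilities to the items not yet screened.
scores = [0.9, 0.4, 0.2] + [0.01] * 500
print(f"Estimated recall: {estimated_recall(38, scores):.2%}")

A heuristic like this gives reviewers a statistical basis for deciding when to stop screening, which is the core idea behind the tool’s recall estimates.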

5.5. Focus on summarisation and writing assistance

5.5.1. SciPub+
⇡SciPub+ is a recent subscription-based tool featuring a collection of ten
AI assistants designed to support the whole workflow of academic writing.
The AI assistants guide individuals through key parts of the academic
writing process, of which literature reviews are one component. Like the
others, the literature review AI assistant uses a form that asks the
researcher key questions about their project, enabling the assistant to
generate a draft literature review.

⇡SciPub+ is valuable for automating the drafting of a written literature
review, although, as a writing-focused tool, it offers limited assistance
with the larger literature review process. It cannot assist with actual
literature searches, filtering, or data extraction. It may have value in the
later stages of a literature review, once a draft is ready to be written.

5.5.2. Paperdigest
⇡Paperdigest is an AI-powered tool designed to summarise academic articles,
providing a quick and efficient way for researchers, students, and science
communicators to grasp the core ideas of a paper. It uses advanced
algorithms to generate concise summaries of research papers, capturing the
key points and main subjects. Users can summarise a research paper quickly
by entering its DOI or PDF link on the PaperDigest website. The tool
highlights specific values, results, comparisons, and other crucial
information from the paper, facilitating a deeper understanding of the
research. Registered users can upload PDFs directly from their computers for
summarisation.

PaperDigest works primarily with open-access articles, meaning it summarises
freely available content that is not behind paywalls. The specific databases
it draws from are not explicitly stated, but they likely include major
academic databases and journals that offer open-access content. PaperDigest
is a free tool: there are no charges for using its basic features, including
the summarisation of articles via DOI or PDF links.

5.5.3. Scholarcy
⇡Scholarcy is an AI-powered tool designed to assist academic research by
quickly analysing and summarising research articles, reports, and book
chapters. It summarises entire papers, including references, and rewrites
statements in the third person for easy citation. It also highlights key
claims, statistics, terms, and abbreviations. The tool links to open-access
versions of each cited source, reducing the need for manual searching. It
also extracts figures and tables from papers, providing them in a format
suitable for further analysis.

⇡Scholarcy offers browser extensions for Chrome, Firefox, and Edge, and
integrates with the Scholarcy Library for storing and organising summary
cards. Scholarcy does not specify the research databases it uses to generate
results. However, it finds references by locating open-access PDFs from
sources like Google Scholar and arXiv and uses the Unpaywall API to assist
with this.

Scholarcy offers both free and paid-for plans. The free plan includes
browser extensions and flashcards, while the paid-for plans offer additional
features such as a personal library for summary flashcards and academic
institution licences. The personal library plan starts at USD 4.90 per
month, and the academic institution licence starts from around USD 8,000 per
year (see ⇡Viraj, no date, for a review of Scholarcy pricing and features).

5.5.4. Elicit
⇡Elicit is an AI research assistant designed to help researchers automate
time-consuming tasks such as summarising papers, extracting data, and
synthesising findings. Users can search for research papers using natural
language queries, get one-sentence abstracts, select relevant papers, and
extract details into organised tables. Elicit also identifies themes and
concepts across multiple papers, enhancing the literature review process.
With a database of 125 million academic papers, Elicit saves researchers
time and effort, making it easier to stay well informed and conduct
systematic reviews.

In our exploratory trialling of ⇡Elicit, researching an education topic on
disadvantaged children, elicit.org found only 5% of the 250 studies included
in the final review. While the features of elicit.org are compelling, this
lack of results is not surprising, as elicit.org is based on the Semantic
Scholar database, which has poor coverage of education journals.

5.5.5. ORKG ASK


⇡ORKG Ask is an advanced open-search system designed to help researchers,
academics, and enthusiasts find and extract valuable information from a vast
corpus of research articles. Similar to ⇡Elicit, users can simply ask a
question in natural language; ORKG Ask handles the rest by interpreting the
context of the query to find semantically similar research articles through
its semantic search feature. It conducts global searches across the entire
indexed corpus (76 million articles) for comprehensive results and allows
users to refine their search results with metadata filters. Additionally,
ORKG Ask can extract specific properties or information from related
research papers for detailed insights.

The system employs a pipeline that starts with query submission, followed by
a semantic search that uses an embedding model to encode queries and an
‘Approximate Nearest Neighbour’ algorithm to find semantically similar
articles. Once relevant articles have been identified, a Large Language
Model extracts the necessary information, and users can enhance their search
with metadata filters for more precise results. Technically, ORKG Ask uses
⇡Qdrant to store and retrieve vector representations of research articles
efficiently, the ⇡Nomic embedding model to encode search queries and
research articles into high-dimensional vectors for semantic search, and the
⇡Mistral Instruct 7B v0.2 LLM to extract and interpret relevant information
from the selected research articles. The research articles are sourced from
the ⇡CORE dataset, a comprehensive collection of open-access research
literature.

The service also offers a ‘my library’ feature, where users can upload their
own datasets.
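To illustrate the retrieval step of such a pipeline, the minimal sketch below encodes articles and a query as vectors and retrieves nearest neighbours. It is a toy stand-in, not ORKG Ask’s implementation: random vectors replace Nomic embeddings, and scikit-learn’s exact nearest-neighbour search replaces Qdrant’s approximate search.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy stand-ins for embeddings: random vectors take the place of an
# embedding model's output, and exact search takes the place of ANN.
rng = np.random.default_rng(0)
article_vectors = rng.normal(size=(1000, 768))   # one vector per article
query_vector = rng.normal(size=(1, 768))         # the encoded user question

index = NearestNeighbors(n_neighbors=5, metric="cosine")
index.fit(article_vectors)

distances, indices = index.kneighbors(query_vector)
print("Most similar articles:", indices[0])
# A production pipeline would now pass the retrieved articles, together
# with the question, to an LLM for information extraction.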


5.6. General Purpose Large Language Models


ChatGPT, developed by OpenAI, is a highly advanced Large Language Model that
has significantly impacted various fields, including research. In our view,
the responses of ChatGPT (4o / o1 / o1-mini) can be considered comparable to
final-year undergraduate / first-year graduate level. At these levels,
knowledge and insights are well documented in books. By contrast, rigorous
research synthesis requires access to journals.

For example, in our experience, ChatGPT can produce interesting research
designs in response to research questions; it can also answer questions
about short text pieces (see ⇡Haßler et al., 2024).

However, research synthesis as such is not really possible with ChatGPT
(see Figure 5.1. for a typical ChatGPT 4o response). While it is possible to
make questions more specific, the pattern of responses is similar: relevant
websites are found, and results are presented. While this is undoubtedly
useful, particularly for graduates or new researchers entering new areas of
research, it does not constitute rigorous synthesis. Meanwhile, so-called
‘custom GPT models’, tailored to specific research domains or datasets, can
enhance the relevance and accuracy of the information generated for
literature reviews.


Figure 5.1. ChatGPT 4o response: What research evidence is there about teacher
allocation in low-income countries?

Research on teacher allocation in low-income countries highlights several key
issues and strategies.

■ Challenges in Recruitment and Retention: Recruiting and retaining
teachers in hard-to-staff schools, particularly in rural and impoverished
areas, is a significant challenge. Studies show that financial incentives,
such as higher salaries, hardship allowances, and housing benefits, can
effectively attract teachers to these regions and reduce turnover.
However, non-financial incentives, such as professional development
opportunities and improved working conditions, also play a crucial role
(Teacher Task Force) (Open Book Publishers).

■ Equity in Teacher Distribution: Ensuring equitable distribution of
teachers is critical to addressing educational disparities. Many countries
use teacher allocation formulas based on student-to-teacher ratios, but
these often fail to consider the varied needs of different regions.
Innovative approaches, like using Geographic Information Systems (GIS)
for mapping and deploying teachers, have shown promise in improving
equity by more accurately matching teacher supply with demand across
different areas (Teacher Task Force) (IIEP Policy Toolbox).

■ Integrated Information Systems: The development of integrated teacher
information systems (e.g., T-EMIS) can significantly enhance the efficiency
and transparency of teacher deployment. These systems help manage
teacher data comprehensively, reducing political and social biases in the
allocation process and ensuring that deployment decisions are based on
clear, pre-set criteria such as seniority and merit (IIEP Policy Toolbox).

■ Impact of Teacher Quality: Research consistently shows that teacher
quality is a major determinant of student outcomes. Efforts to improve
teacher quality in low-income countries include providing ongoing
professional development, implementing rigorous teacher training
programs, and setting high standards for teacher certification. However,
maintaining and improving teacher quality remains a complex challenge,
particularly in contexts with limited resources (Open Book Publishers).

■ Policy Implications: Effective policies for teacher allocation in low-income
countries often involve a combination of strategies tailored to local
contexts. This includes financial and non-financial incentives, the use of
technology for efficient management, and continuous support for
teacher professional development. Policymakers are encouraged to
involve all stakeholders, including teachers and teacher unions, in the
decision-making process to ensure the sustainability and acceptance of
these policies (IIEP Policy Toolbox).

In summary, addressing the challenges of teacher allocation in low-income
countries requires a multifaceted approach that combines equitable resource
distribution, effective use of technology, and policies that support both
recruitment and retention of high-quality teachers.


5.7. GPT Researcher


As a recent article explored, Large Language Models can assist in writing
Wikipedia-like articles (⇡Shao et al., 2024). Inspired by this, the ⇡GPT
Researcher tool showcases how a team of AI agents can work together to
research a given topic, from planning and writing to publication. GPT
Researcher is an open-source autonomous agent designed for thorough
online research, producing detailed, factual, and unbiased reports with
customisable options. It offers improved speed and reliability through
parallelised agent work. Its mission is to provide accurate information by
leveraging AI, overcoming the limitations of current Large Language Models,
which often rely on outdated data, risk hallucinations, and produce short,
biased outputs. GPT Researcher addresses these issues by using a broader
range of sources to reach more objective research conclusions.

5.8. Other tools


In this subsection, we note other tools that we found but could not review
or decided not to review. ⇡Parsifal focuses specifically on literature
reviews in software engineering. Two tools had limited availability
(⇡srdb.pro; ⇡SESRA). ⇡Feynman AI has been advertised but appears not to have
entered production. We also note ⇡Grantable, which supports the writing of
grant applications.

Somewhat tangentially, ⇡Perplexity is a conversational search engine
designed to provide accurate and contextual answers to complex queries. It
can search the internet, including sources like Wolfram|Alpha, Wikipedia,
Reddit, YouTube, news articles, and academic papers. Users can save search
threads, share them, and interact with threads shared by others. Account
holders can curate the sources Perplexity AI uses for their searches,
ensuring relevance and accuracy. Powered by GPT-4, the Copilot feature helps
guide users’ search experiences by asking clarifying questions and refining
the search process. Perplexity also offers a Chrome extension, allowing
users to use the tool anywhere on the internet, including page- and
domain-specific answers and article summaries. Perplexity AI offers both
free and paid-for versions. The free version includes basic features, while
the premium plan, priced at USD 20 per month or USD 200 per year, provides
unlimited Copilot usage and access to advanced language models like GPT-4
(⇡Aayush, 2023).


6. Tool reviews
As noted above, the subsections below follow the steps of the literature
review process, outlining relevant AI-based tools specific to the various
stages of the process. For web-based, integrated tools, see Section 5. The
subsections correspond to entries in the tables in Figure 4.1.

6.1. Problem formulation


At present, there is only moderate potential for using AI to support
problem formulation or to verify research gaps. Emerging tools such as
⇡Elicit and ⇡Consensus have multiple functions and are also helpful for
quickly testing different research questions.

6.2. Literature searches


We agree with ⇡Wagner et al.’s (2022) assessment that the area of literature
searches has very high potential for using AI. We note that the AI approach
used in prior EdTech Hub work, based on Natural Language Processing, falls
into two domains: literature searches and screening (⇡Haßler et al., 2021k).

For comprehensive reviews, automated, cross-database searches should be
considered best research practice for systematic literature reviews in the
fields of education and EdTech. The need for comprehensive, cross-database
searches is motivated by the observation that “no database contains the
complete set of published materials” (⇡Xiao & Watson, 2017: p. 11); while
this assertion dates back to 2017, it still holds today. Our previous work
(⇡Haßler et al., 2020) suggests that, unlike health databases, education
publication databases only overlap by 30%–50%, which makes structured
approaches across multiple databases necessary, together with the
application of multiple search techniques (⇡Papaioannou et al., 2010;
⇡Templier & Paré, 2018). Our cross-database tools (⇡Haßler et al., 2020)
predate ⇡LitSonar but have similar components; our tools (⇡Haßler et al.,
2021k) also offer a software development kit, enabling large-scale
retrievals from commonly used portals (Scopus, Web of Science, ProQuest,
CrossRef, the Directory of Open Access Journals (DOAJ), etc.). A sketch of
such a scripted retrieval follows.
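As one building block of automated, cross-database searching, the sketch below retrieves records from Crossref’s public REST API. The query string is illustrative, and the other portals named above each require their own API wrappers and, in some cases, credentials.

import requests

def search_crossref(query, rows=20):
    """Retrieve works matching a query from Crossref's public REST API.

    The same wrapper pattern can be repeated for other portals
    (Scopus, DOAJ, etc.), so that results from several databases can
    flow into one deduplicated local repository.
    """
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query": query, "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]

for item in search_crossref("educational technology low-income countries"):
    title = item.get("title", ["(untitled)"])[0]
    print(item.get("DOI"), "-", title)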

Limited interoperability (accessibility via APIs) remains a major obstacle:
it breaks the data processing pipeline between databases and the local
repositories of research teams, introducing manual database queries and
duplicate checking as potential sources of error. Ultimately, automated
searches also further the goal of transparent reporting (information
science: ⇡Templier & Paré, 2018), as well as repeatability and
reproducibility (information science: ⇡Cram et al., 2020).

⇡Wagner et al. (2022) note that a prevalent challenge for literature reviews
in the social sciences is the lack of databases comprehensively curating
research published in the main outlets, including journals and conferences
(⇡Brocke et al., 2015). Within the domain of EdTech, EdTech Hub’s evidence
library is one such effort to comprehensively curate new research on
EdTech in low-income countries (⇡Haßler et al., 2024).

One use of AI in literature searches is deduplication, to manage the outputs
of automated searches. Automated searches surface large numbers of
publications that include many duplicates, and Natural Language Processing
can support their removal (a minimal sketch follows this paragraph). AI can
also support the process of backward and forward ‘snowballing’ (backward and
forward citation searching). Currently, few literature databases include
citation trees; commercially available examples include Web of Science,
⇡Scite, and Google Scholar. Figures 6.1., 6.2., and 6.3. below illustrate
citation searching (forward / backward snowballing) in these tools. We note
that the three tools provide different citation estimates (Web of Science:
170; ⇡Scite: 180; Google Scholar: 528). Web of Science (Figure 6.1.)
provides both references and citations (backward and forward snowballing),
while the other tools provide forward snowballing only. ⇡Scite (Figure 6.2.)
not only provides citations but uses AI to attempt a critical appraisal of
each citation: citations that ‘support’ the original claims, citations that
only ‘mention’ the original claims, and citations that ‘contrast’ with the
original claims. Notably, Google Scholar (Figure 6.3.) is the only tool that
allows for searching within citing papers.
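The sketch below illustrates the deduplication idea with fuzzy title matching using Python’s standard library. It is a minimal example: production pipelines typically also compare DOIs, years, and author lists, and use blocking strategies to avoid comparing every pair.

from difflib import SequenceMatcher

def normalise(title):
    # Lowercase and strip punctuation before comparing titles.
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ")

def find_duplicates(titles, threshold=0.9):
    """Flag title pairs whose string similarity exceeds a threshold."""
    pairs = []
    for i in range(len(titles)):
        for j in range(i + 1, len(titles)):
            ratio = SequenceMatcher(
                None, normalise(titles[i]), normalise(titles[j])
            ).ratio()
            if ratio >= threshold:
                pairs.append((i, j, ratio))
    return pairs

records = [
    "Tablet use in schools: a critical review of the evidence",
    "Tablet Use in Schools. A Critical Review of the Evidence.",
    "Teacher allocation in low-income countries",
]
print(find_duplicates(records))  # flags the first two records as duplicates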

Figure 6.1. Citation and references for a publication in Web of Science — forward
and backward snowballing.


Figure 6.2. The same article viewed on the Journal of Computer Assisted
Learning (JCAL) website, with the ⇡Scite plugin active — Scite provides forward
snowballing only but indicates supporting / mentioning / contrasting with a total
of 186 citing articles

Figure 6.3. The same article viewed on Google Scholar, indicating 528 citing
articles and illustrating the ability to search within citing articles


Figure 6.4. The same article viewed on Open Development & Education’s
evidence library together with citations and citing articles (forward / backward
snowballing; URL: https://docs.opendeved.net/lib/9IYKEUKJ).

Tools like the evidence libraries of EdTech Hub and Open Development &
Education also offer citation trees (Figure 6.4.; ⇡Haßler et al., 2024);
however, these are only available for a very limited number of publications,
though they are open data. One of the goals of such evidence libraries is to
show new research in the context of other publications (⇡Haßler et al.,
2024). AI tools can support the open generation of such citation trees,
including extracting references and consolidating and merging reference data
(a sketch using the open ⇡OpenAlex API follows). The discussion of the
figures above illustrates that no single tool is likely to satisfy all
research needs; instead, specific tools need to be chosen for specific
tasks.
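As an illustration of scripted snowballing against open citation data, the sketch below queries the ⇡OpenAlex API for one work’s references (backward) and citing works (forward). The work ID shown is a hypothetical placeholder, not a real record.

import requests

OPENALEX = "https://api.openalex.org"
work_id = "W2022167007"  # hypothetical OpenAlex ID for a single paper

# Backward snowballing: the work's own reference list.
work = requests.get(f"{OPENALEX}/works/{work_id}", timeout=30).json()
references = work.get("referenced_works", [])

# Forward snowballing: works that cite it, via the `cites` filter.
citing = requests.get(
    f"{OPENALEX}/works",
    params={"filter": f"cites:{work_id}", "per-page": 25},
    timeout=30,
).json()

print(f"{len(references)} references (backward)")
print(f"{citing['meta']['count']} citing works (forward)")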

Citation searching is important, as there is some evidence for its
effectiveness (⇡Jalali & Wohlin, 2012; ⇡Papaioannou et al., 2010). We also
note ⇡Connected Papers as a web-based tool that can show connections between
papers.

Regarding ⇡Scite, we note that this tool has been available for several
years, and we have used it regularly. It offers a freemium model with a free
web plugin and a plugin for Zotero; access to the main account requires a
subscription. In November 2023, Research Solutions announced the acquisition
of Scite (⇡Research Solutions, no date). With AI solutions emerging very
quickly, products will frequently change hands, and feature sets will
change. For example, Elsevier is adding AI to its Scopus literature search
tool (⇡Aguilera Cora et al., 2024; ⇡Elsevier, no date; ⇡Elsevier Products,
no date).

New tools are also emerging in the areas of documenting, analysing, and
justifying individual search strategies (cf. ⇡Templier & Paré, 2018), as well as
syntactic search query validation (⇡Russell-Rose & Shokraneh, 2019).
⇡Wagner et al. (2022) note that this could support researchers in designing
and improving different elements of search strategies, including analysis
and justification of the scope (publication outlets covered and the selection
of search terms in database searches). ⇡Sturm & Sunyaev’s (2019) paper
illustrates how journal coverage reports could enable substantially more
targeted and efficient literature searches.

6.3. Screening for inclusion


This step is typically divided into a first (and more inclusive) screening
based on titles and abstracts and a second (more restrictive) screening
based on full texts (⇡Templier & Paré, 2018). The PRISMA (Preferred
Reporting Items for Systematic Reviews and Meta-Analyses) approach is
staged in this way to make the process humanly feasible.

AI-based tool support for screening has been evolving over the years
(⇡Harrison et al., 2020), with promising recent progress in using AI to
screen articles: see ‘Human-AI collaboration to identify literature for
evidence synthesis’ (⇡Spillias et al., 2023) and ‘Breaking through
limitations: Enhanced systematic literature reviews with Large Language
Models’ (⇡Reason et al., 2023).

When considering potential AI support for this step, the reliability of
manual screening processes should not be overestimated, even when
specialists conduct the screening. Recent evidence from the health sciences
suggests a base rate of 10% disagreement between inclusion screens conducted
independently (⇡Wang et al., 2020). This indicates that it may even be
possible to augment and improve researchers’ screening activities by having
AI-based tools identify inconsistent and potentially erroneous screening
decisions; a minimal sketch of this idea follows.
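In the sketch below, a simple text classifier is fitted to already-screened records, and human decisions that diverge strongly from the model’s probability are flagged for a second look. The records and threshold are illustrative; production tools use far richer models and calibrated thresholds.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Screened records: title+abstract text with human include/exclude labels.
texts = [
    "Tablet use and learning outcomes in primary schools",
    "Protein folding dynamics in yeast",
    "Teacher coaching via SMS in rural Kenya",
    "Quantum error correction codes",
]
labels = [1, 0, 1, 0]  # 1 = include, 0 = exclude

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Flag screening decisions the model strongly disagrees with, as
# candidates for a second human look. On real corpora, such mismatches
# surface inconsistent or erroneous decisions.
probabilities = model.predict_proba(X)[:, 1]
for text, label, p in zip(texts, labels, probabilities):
    if abs(label - p) > 0.5:
        print(f"Check: '{text}' (human: {label}, model: {p:.2f})")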

6.3.1. Taxonomy
The second screening is dedicated to disentangling the remaining cases,
which can be particularly challenging since research in education (like
climate change mitigation research) is not standardised as strictly as in
other disciplines. In contrast to the health sciences and biology, for
instance, the lack of widely used taxonomies for education / EdTech
constructs (or, indeed, climate change mitigation) and the lack of a
standard vocabulary for keywords (contrasting with ‘Medical Subject
Headings’ / MeSH terms) can make it difficult to achieve the required
classification performance in the second screening (cf. ⇡O’Mara-Eves et al.,
2015). This challenge applies to humans and machines alike.

We note that EdTech Hub has already made some progress in developing a
multi-language keyword inventory that follows appropriate strategies for
classifying education research and extracting data (⇡Education Endowment
Foundation & Durham University, 2022; ⇡EPPI Centre, 2003). The inventory
allows studies to be organised and coded based on keywords relating to
publication status, geographic focus, curricular focus, population, etc.
(⇡Haßler et al., 2019p; ⇡Haßler et al., 2021k). Figure 6.5. illustrates an
example of this.

In the future, AI may allow moving away from a PRISMA-type, two-stage
approach towards more ‘spiral approaches’. ⇡Saeidmehr et al. (2023, p. 16)
note that

“lessons learned from training machine learning with title / abstract
screening do not necessarily transfer to machine learning that also
incorporates PDFs.”


6.3.2. Rayyan
⇡Rayyan is a web-based tool with three subscription tiers that assists with
automating literature reviews. It offers subscriptions for individuals and
teams. The free tier for individuals allows up to three active reviews with
an unlimited number of reviewers but has limited functionality compared to
the paid tiers. The team subscription plans have no free tier.

⇡Rayyan supports the import of citations from a range of reference software.
However, it is not integrated with any common reference software, and users
need to upload reference files manually from their local disks. ⇡Rayyan can
automate the detection of duplicate references, allowing users to resolve
duplications more efficiently. It allows multiple reviewers to collaborate
on one review and enables users to create labels to assist with screening
references. Once the user has manually categorised 50 articles, ⇡Rayyan’s AI
can assist with inclusion / exclusion decisions: it generates an inclusion
probability for undecided references, leaving the final decisions to the
reviewer. ⇡Rayyan also supports collaborative full-text reviews to further
assess the suitability of a study for a literature review, and it has its
own AI chatbot to assist users.

Overall, ⇡Rayyan is best described as an AI-supported, semi-automated tool
whose main functionality makes the screening stage of literature reviews
more user-friendly and efficient. Beyond this scope, ⇡Rayyan has limited
functionality.


Figure 6.5. Extract from EdTech Hub’s existing keyword inventory (⇡Haßler et al., 2019p)


6.4. Quality assessment


The quality assessment stage involves checking primary empirical studies for
methodological issues, such as sources of bias (⇡Higgins & Green, 2008;
⇡Kitchenham & Charters, 2007; ⇡Templier & Paré, 2018). This step is intended
to assess the degree to which the conclusions of reviews aimed at theory
testing may be affected by different types of bias (e.g., selection,
attrition, and reporting bias). At the time of writing, the potential for
AI-based tools to support these procedures is low to moderate: assessing
(methodological) quality is a challenging task that requires expert
judgement, making it difficult to achieve high inter-coder agreement
(⇡Hartling et al., 2009).

Following methodological guidelines for quality appraisal and risk-of-bias
assessment, researchers conducting meta-analyses and systematic literature
reviews can leverage traditional tools like ⇡RevMan or corresponding
packages for statistical software environments like R and SPSS (⇡Bax et al.,
2007). Further AI-based tools, such as RobotReviewer (⇡Marshall et al., 2015;
⇡RobotReviewer), can also be applicable to meta-analyses. While focusing on
risk-of-bias assessment of randomised controlled trials in the life
sciences, ⇡RobotReviewer is an excellent exemplar of explainable AI,
allowing researchers to interactively trace ratings in each domain of bias
to their origin in the full-text document.

6.5. Data extraction


Data extraction requires researchers to identify relevant qualitative and
quantitative data fragments and transfer them to a (semi-)structured coding
sheet (⇡Templier & Paré, 2018). It is more salient in descriptive reviews,
scoping reviews, and reviews aimed at theory testing than in more selective
and interpretive reviews, such as narrative and theory development reviews.

Tools used in this area, such as ⇡ATLAS.ti and ⇡NVivo, are implementing
Natural Language Processing and machine learning algorithms for tasks such
as automated qualitative coding, named entity recognition, and sentiment
analysis (AI in ATLAS.ti: ⇡ATLAS.ti, no date; AI in NVivo: ⇡Lumivero, 2023).
There are also specialised tools for extracting data from tables or
statistical plots, such as ⇡WebPlotDigitizer.

In 2015, in the health sciences, which have established relatively
consistent reporting practices, tools designed to extract study
characteristics like the PICO (population, intervention, comparison, and
outcome) elements were still in the early stages of development
(⇡Jonnalagadda et al., 2015). However, this is a significant focus for
emerging tools (such as ⇡Elicit), and rapid progress is possible.

In the domain of information science, ⇡Wagner et al. (2022) envision
enhancements of databases and complementary repositories to facilitate
AI-based literature reviews. There is a similar need in education research.
In particular, the question arises as to why it has not been possible to
construct databases with relevant extractions. Currently, key descriptive
information (such as PICO) is not reported in machine-readable formats, let
alone in a common metadata format for research features; therefore,
literature reviewers extract such information manually. Such extractions
would not fall under copyright protection and could be shared. Indeed,
across their portfolio of reviews and meta-analyses, the Education Endowment
Foundation requires that commissioned work aligns with and contributes to
their internal databases. See Section 7.1. for further discussion.
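As a sketch of how an LLM could produce such machine-readable extractions, the example below prompts a model to return PICO elements as JSON. It assumes the OpenAI Python client (v1 style); the model name and the exact field list are illustrative assumptions, and a production pipeline would validate both the returned JSON and the extractions themselves.

import json
from openai import OpenAI  # assumes the v1-style OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = "..."  # title and abstract of the study to be coded

# Ask the model to return PICO elements as a JSON object.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Extract the PICO elements (population, intervention, "
            "comparison, outcome) from this abstract as a JSON object "
            "with exactly those four keys and no other text:\n\n" + abstract
        ),
    }],
)

# Parsing assumes the model returned bare JSON; validate in practice.
pico = json.loads(response.choices[0].message.content)
print(pico)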

6.6. Data analysis and interpretation


The final step of the review process can take various forms, depending on
the type of review (⇡Templier & Paré, 2018). Different tools are available
depending on the main knowledge-building activities (⇡Schryen et al., 2020).
For descriptive syntheses, there is a range of established tools for text
mining (⇡Kobayashi et al., 2017), as well as tools for analysing and
visualising topics, theories, and research communities based on
scientometric techniques, computational techniques, or Latent Dirichlet
Allocation models (⇡Balducci & Marinova, 2018; ⇡Nakagawa et al., 2019;
⇡Thilakaratne et al., 2019). In assessing the potential for future AI-based
tools to support data analysis, this variety of forms needs to be
considered. In pre-theoretical reviews, AI-based tools offer capabilities to
generate descriptive insights, for example, based on topic modelling (⇡Kunc
et al., 2018; ⇡Mortenson & Vidgen, 2016; ⇡Schmiedel et al., 2019) and
ontological annotation (⇡Huettemann, 2023). A minimal topic-modelling sketch
follows.
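The sketch below uses scikit-learn’s Latent Dirichlet Allocation implementation to surface topics from a corpus; the four toy ‘abstracts’ are illustrative stand-ins for a real screened corpus.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# A handful of toy abstracts stands in for a screened corpus.
abstracts = [
    "mobile phones support teacher professional development in Ghana",
    "radio instruction improved numeracy during school closures",
    "teacher coaching by mobile phone raised literacy outcomes",
    "interactive radio lessons reached remote primary schools",
]

vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(abstracts)

# Fit a two-topic LDA model and print the top words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectoriser.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")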

6.7. General observations


This subsection includes general observations that apply to all AI tools
when used in LMIC contexts. While AI tools offer efficiency and breadth for
literature reviews, they face additional limitations when dealing with the
diverse and complex educational landscapes of LMICs, compared to what they
offer for research in high-income contexts.


■ Representation and accessibility of data: In LMICs, there is a notable
under-representation of education research in major digital databases,
which are the primary sources for AI-driven literature reviews. This leads
to an incomplete understanding of the education landscape in these regions.
The development of education systems using AI often lacks emphasis on
presentation methods or data mining, focusing instead on logical modelling,
which may not fully capture the complexity of education contexts in LMICs
(⇡Zhai et al., 2021).

■ Language and cultural barriers: AI tools are generally optimised for
English and may not accurately interpret or analyse research published in
the local languages or dialects of LMICs. This limitation can lead to
significant gaps in understanding and integrating cultural contexts and
nuances, which are vital for comprehensively reviewing education research in
these regions. For example, the use of AI tools like ChatGPT in higher
education has raised concerns about cultural bias in generated responses and
the need for more linguistically and culturally diverse training data. This
reflects the challenges AI faces in accurately representing and
understanding the nuances of different cultures, particularly in LMICs,
where cultural diversity is significant (⇡Atanasova, 2023).

■ Bias in AI algorithms: AI algorithms are prone to biases present in their
training data. Since AI development is predominantly concentrated in
high-income countries, there is a risk that these tools may not be attuned
to the specific educational challenges, methodologies, or priorities in
LMICs. This bias can skew the literature review towards perspectives and
contexts more commonly found in high-income countries. Furthermore, AI
language tools can pose a risk to scientific diversity and innovation. The
dominance of English in scientific publishing, aided by AI language tools,
may marginalise non-English research and researchers, which is particularly
relevant in LMICs where English is not the primary language (⇡Nakadai et
al., 2023).

■ Complexity of local educational issues: The unique and complex educational
challenges LMICs face may not be fully understood by AI tools. For instance,
integrating AI in education systems in LMICs often requires a deeper
understanding of hierarchical structures and local nuances, which AI may not
adequately address (⇡Zhai et al., 2021).

■ Ethical and privacy concerns: Ethical concerns arise in the use of AI for
analysing sensitive educational data in LMICs. Issues related to consent,
privacy, and data governance are particularly pressing in these contexts,
where standards and regulations may vary significantly from those in
high-income countries.

■ Dependency on technology and expertise: Over-reliance on AI for literature
reviews can lead to a lack of critical human engagement with the material,
which is crucial in education research, where contextual understanding is
vital. Additionally, the implementation of AI tools in LMICs is limited by
resource constraints, including the need for technical expertise and
infrastructure. For example, integrating AI tools in teaching English as a
Foreign Language (EFL) highlights the dependency on technology and expertise
(⇡Rebolledo Font de la Vall & Gonzalez Araya, 2023). AI-powered tools offer
personalised learning and real-time feedback but underscore the need for
technical know-how and infrastructure to implement and utilise these
technologies effectively. This dependency poses a significant challenge in
LMICs, where resources and expertise in AI may be limited (⇡Rebolledo Font
de la Vall & Gonzalez Araya, 2023).

■ Rapid evolution of the AI field: AI research is rapidly evolving, and
reviews may miss relevant new research published after the database search.
This is particularly relevant in LMICs, where ongoing research might not be
immediately available in major databases or might be communicated through
channels not typically monitored by AI tools (⇡Ciecierski-Holmes et al.,
2022).


7. Outlook
⇡Wagner et al. (2022) outline an agenda suggesting how information science
researchers can focus and coordinate their efforts to advance AI for
literature reviews. They note that nurturing this endeavour is a task for
the entire scholarly community, including a broad range of researchers,
methodologists, reviewers, journal editors, and authors of primary research
papers. We encourage readers to review the recommendations made by ⇡Wagner
et al. (2022).

We close by highlighting some areas that pertain closely to this topic brief,
with reference to initial recommendations made in Section 2.3.6. above.

As noted above, a significant issue is the absence of comprehensive
databases, and there are currently no commercial products that deliver
literature reviews for education / EdTech in LMICs. To make headway in this
area, we consider two activities.

7.1. Convene stakeholders


The purpose of this convening would be to make evidence available, index it
systematically, and explore AI collaboratively to maximise outcomes and
reduce costs. As noted above, a significant challenge is the collation of
relevant literature into appropriate databases. One could consider building
a network of stakeholders to coordinate and synthesise efforts. Such
stakeholders could include BE2, the Education Endowment Foundation,
Education Sub Saharan Africa (ESSA), Campbell, eBaseAfrica, J-PAL, What
Works Clearinghouse, Cochrane, 3ie, OpenAlex, Open Development & Education,
researchers from the Global South and selected universities from the Global
North, as well as community efforts (e.g., the evidence synthesis hackathon,
⇡Haddaway, no date). A few of these stakeholders are already working
together to determine more systematic ways of sharing evidence and shared
ways of extracting data from evidence to speed up literature reviews and
meta-analyses. Overall, many structured, low-cost approaches are readily
available to get more evidence into circulation (such as Crossref DOI
allocation).

7.2. Undertake in-depth exploration of AI tools


In this topic brief, we have highlighted several tools and approaches that
appear very promising for accelerating rigorous evidence synthesis. Given
the limitations of the majority of current, very easy-to-use, web-based
tools, it would appear beneficial to undertake an in-depth exploration. This
exploration could include extensive trialling of ⇡EPPI-Reviewer and
⇡ASReview, including workshops for researchers from LMICs to explore
barriers to using those tools.

It would also be interesting to undertake a coverage assessment of the
⇡OpenAlex dataset. This would help to understand the quality of the data (as
provided by various organisations), with a view to making recommendations
for improvement. This activity could extend into topic modelling within the
⇡OpenAlex dataset. ⇡OpenAlex also categorises publications according to the
Sustainable Development Goals (⇡UN, no date); this categorisation could be
extended and tailored to the needs of education / EdTech within LMICs,
perhaps producing a living review.

7.3. Conclusion
The two activities mentioned above would allow for an informed,
evidence-based pathway towards the better use of AI tools to help identify,
review, and synthesise evidence for literature reviews for education / EdTech
in LMICs.


Bibliography
⁅bibliography:start⁆

This bibliography is available digitally in our evidence library at
https://docs.edtechhub.org/lib/BVD8JX7V

The first part of the bibliography below lists tools referenced in this
brief in alphabetical order. This is followed by a list of works cited in
the brief in alphabetical order.

AI Tools

Abstrackr. Wallace, Byron. (2023). Bwallace/abstrackr-web [Python]. https://github.com/bwallace/abstrackr-web (Original work published 2010)

ASReview — Active learning for Systematic Reviews. (n.d.). ASReview. Retrieved January 20, 2024, from https://asreview.nl/. (details)

ATLAS.ti | The #1 Software for Qualitative Data Analysis. (n.d.). ATLAS.Ti. Retrieved January 20, 2024, from https://atlasti.com. (details)

colandr. (n.d.). Retrieved January 20, 2024, from https://www.colandrapp.com/signin. (details)

dmetar — Companion R Package for the Guide Doing Meta-Analysis in R. (n.d.). Retrieved January 20, 2024, from https://dmetar.protectlab.org/. (details)

Connected Papers | Find and explore academic papers. (n.d.). Retrieved January 19, 2024, from https://www.connectedpapers.com/. (details)

Consensus: AI Search Engine for Research. (n.d.). Consensus: AI Search Engine for Research. Retrieved January 19, 2024, from https://consensus.app/. (details)

Covidence — Better systematic review management. (n.d.). Retrieved January 20, 2024, from https://www.covidence.org/. (details)

DistillerSR | Systematic Review Software | Literature Review Software. (n.d.). DistillerSR. Retrieved January 20, 2024, from https://www.distillersr.com/products/distillersr-systematic-review-software. (details)

EPPI-Reviewer: systematic review software. (n.d.). Retrieved January 20, 2024, from https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2914. (details)

Elicit: Find scientific research papers. (n.d.). Retrieved January 19, 2024, from https://elicit.com/?workflow=table-of-papers. (details)

Future Tools — Find The Exact AI Tool For Your Needs. (n.d.). Retrieved January 22, 2024, from https://www.futuretools.io/. (details)

Grantable. (n.d.). Retrieved January 19, 2024, from https://grantable.co/. (details)

Iris.ai — Your Researcher Workspace – Leading AI for your research challenge. (n.d.). Iris.Ai — Your Researcher Workspace. Retrieved January 20, 2024, from https://iris.ai/. (details)

Lateral.io: Enhancing Literature Review Speed. (n.d.). Retrieved January 20, 2024, from https://eightify.app/summary/computer-science-and-technology/lateral-io-enhancing-literature-review-speed. (details)

litbaskets. (n.d.). https://litbaskets.io/. (details)

LitSonar. (n.d.). Retrieved January 19, 2024, from https://litsonar.com/. (details)

NVivo. (n.d.). Lumivero. Retrieved January 20, 2024, from https://lumivero.com/products/nvivo/. (details)

ORKG Ask | Find research you are actually looking for. (n.d.). Retrieved July 16, 2024, from https://ask.orkg.org/. (details)

paperdigest (AI-Powered Research Platform). (2023, December 22). Paper Digest. https://www.paperdigest.org. (details)

Parsifal — Perform Systematic Literature Reviews. (n.d.). Parsifal. Retrieved January 20, 2024, from https://parsif.al/. (details)

Perplexity. (n.d.). Retrieved January 19, 2024, from https://www.perplexity.ai/. (details)

Qdrant — Vector Database. (n.d.). Retrieved October 10, 2024, from https://qdrant.tech/. (details)

Rayyan — AI Powered Tool for Systematic Literature Reviews. (2021, November 8). https://www.rayyan.ai/. (details)

Research Rabbit. (n.d.). Retrieved January 19, 2024, from https://researchrabbitapp.com/. (details)

RevMan. (n.d.). Retrieved January 20, 2024, from https://training.cochrane.org/online-learning/core-software/revman. (details)

RobotAnalyst. (n.d.). [XHTML]. National Centre for Text Mining — NaCTEM. Retrieved January 20, 2024, from https://www.nactem.ac.uk/robotanalyst/. (details)

RobotReviewer. (n.d.). RobotReviewer. Retrieved January 20, 2024, from https://www.robotreviewer.net. (details)

SESRA. (n.d.). Retrieved January 20, 2024, from http://sesra.net/index/about. (details)

Scanlitt. (n.d.). Retrieved January 19, 2024, from https://www.scanlitt.com/. (details)

Scholarcy. (n.d.). Scholarcy (Online Summarizing Tool | Flashcard Generator & Summarizer). Scholarcy | The Long-Form Article Summariser. Retrieved January 19, 2024, from https://www.scholarcy.com/. (details)

SciPub+ | Revolutionize Your Academic Writing. (n.d.). Retrieved January 19, 2024, from https://scipubplus.com. (details)

Scite.ai (AI for Research). (n.d.). Scite.Ai. Retrieved January 19, 2024, from https://scite.ai. (details)

Semantic Scholar | AI-Powered Research Tool. (n.d.). Retrieved January 20, 2024, from https://www.semanticscholar.org/. (details)

srdb.pro. (n.d.). www.srdb.pro. (details)

SWIFT-Active Screener. (n.d.). Sciome. Retrieved January 20, 2024, from https://www.sciome.com/swift-activescreener/. (details)

Typeset (AI Chat for scientific PDFs | SciSpace). (n.d.). Retrieved January 19, 2024, from https://typeset.io. (details)

WebPlotDigitizer — Extract data from plots, images, and maps. (n.d.). Retrieved January 20, 2024, from https://automeris.io/WebPlotDigitizer/. (details)

References

ATLAS.ti. (n.d.). ATLAS.ti AI Lab | Accelerating Innovation for Data Analysis. Retrieved January 20, 2024, from https://atlasti.com/atlas-ti-ai-lab-accelerating-innovation-for-data-analysis. (details)

Aayush. (2023, October 29). Perplexity AI: Review, Advantages & Guide (2023). Elegant Themes Blog. https://www.elegantthemes.com/blog/business/perplexity-ai. (details)

Adams, C. E., Polzmacher, S., & Wolff, A. (2013). Systematic reviews: Work that needs to be done and not to be done. Journal of Evidence-Based Medicine, 6(4), 232–235. https://doi.org/10.1111/jebm.12072. (details)

Aguilera Cora, E., Lopezosa, C., & Codina, L. (2024). Scopus AI Beta: functional analysis and cases. http://repositori.upf.edu/handle/10230/58658. (details)

Al-Zubidy, A., Carver, J. C., Hale, D. P., & Hassler, E. E. (2017). Vision for SLR tooling infrastructure: Prioritizing value-added requirements. Information and Software Technology, 91, 72–81. https://doi.org/10.1016/j.infsof.2017.06.007. (details)

Antons, D., & Breidbach, C. (2017). Big data, big insights? Advancing service innovation and design with machine learning. Journal of Service Research, 21(1), 17–39. https://doi.org/10.1177/1094670517738373. (details)

Antons, D., Breidbach, C. F., & Joshi, A. M. (2021). Computational literature reviews: Method, algorithms, and roadmap. Organizational Research Methods, 1094428121991230. (details)

Atanasova, D. (2023, May 4). Bridge or Barrier – Does generative AI contribute to more culturally inclusive higher education and research? https://blogs.lse.ac.uk/impactofsocialsciences/2023/05/04/bridge-or-barrier-does-generative-ai-contribute-to-more-culturally-inclusive-higher-education-and-research/. (details)

Balducci, B., & Marinova, D. (2018). Unstructured data in marketing. Journal of the Academy of Marketing Science, 46(4), 557–590. https://doi.org/10.1007/s11747-018-0581-x. (details)

Bax, L., Yu, L.-M., & Ikeda, N. (2007). A systematic comparison of software dedicated to meta-analysis of causal studies. BMC Medical Research Methodology, 7(1), 1–9. https://doi.org/10.1186/1471-2288-7-40. (details)
Berente, N., Seidel, S., & Safadi, H. (2019). Research commentary: Data-driven computationally intensive theory development. Information Systems Research, 30(1), 50–64. https://doi.org/10.1287/isre.2018.0774. (details)

Bhutoria, A. (2022). Personalized education and artificial intelligence in United States, China, and India: A systematic review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence, 3, 100068. https://doi.org/10.1016/j.caeai.2022.100068. Available from https://www.sciencedirect.com/science/article/pii/S2666920X22000236. (details)

Boell, S., & Wang, B. (2019). www.litbaskets.io, an IT artifact supporting exploratory literature searches for Information Systems research. Proceedings of the Pacific Asia Conference on Information Systems (Eds KK Wei. (details)

Brocke, J., Simons, A., & Riemer, K. (2015). Standing on the shoulders of giants: Challenges and recommendations of literature search in information systems research. Communications of the Association for Information Systems, 37(9), 205–224. (details)

Ciecierski-Holmes, T., Singh, R., Axt, M., Brenner, S., & Barteit, S. (2022). Artificial intelligence for strengthening healthcare systems in low- and middle-income countries: A systematic scoping review. Npj Digital Medicine, 5(1), 1–13. https://doi.org/10.1038/s41746-022-00700-y. Available from https://www.nature.com/articles/s41746-022-00700-y. (details)

Cram, W. A., Templier, M., & Pare, G. (2020). (Re)considering the Concept of Literature Review Reproducibility. Journal of the Association for Information Systems, 21(5), 1103–1114. https://doi.org/10.17705/1jais.00630. (details)

EPPI Centre. (2003). Core Keywording Strategy. https://eppi.ioe.ac.uk/CMS/Portals/0/PDF%20reviews%20and%20summaries/EPPI_Keyword_strategy.pdf. (details)

EPPI Reviewer. (n.d.). Automation tools in EPPI-Reviewer. Retrieved January 20, 2024, from https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3772#openalex. (details)

Education Endowment Foundation, & Durham University. (2022). EEF Evidence Database Coding Guide — Main Data Extraction (No. Version 3). https://d2tic4wvo1iusb.cloudfront.net/production/documents/toolkit/MDE_CodingGuide_V3_March2022-1.pdf. (details)

Elovic, A. (2023). gpt-researcher (Version 0.5.4). https://github.com/assafelovic/gpt-researcher (Original work published 2023). (details)

Elsevier Products. (n.d.). Scopus AI: Trusted content. Powered by responsible AI. Www.Elsevier.Com. Retrieved January 19, 2024, from https://www.elsevier.com/products/scopus/scopus-ai. (details)

Elsevier. (n.d.). Launch of Scopus AI to Help Researchers Navigate the World of Research. Www.Elsevier.Com. Retrieved January 19, 2024, from https://www.elsevier.com/about/press-releases/launch-of-scopus-ai-to-help-researchers-navigate-the-world-of-research. (details)

Feynman AI. (n.d.). Feynman AI. Retrieved January 20, 2024, from https://www.feynman.ai/. (details)

Haßler, B., Adam, T., Allier-Gagneur, Z., Blower, T., Brugha, M., Damani, K., Hennessy, S., Martin, K., Megha-Bongnkar, G., Murphy, M., & Walker, H. (2021k). Methodology for literature reviews (Working Paper No. 10). EdTech Hub. https://doi.org/10.53832/edtechhub.0002. Available from https://docs.edtechhub.org/lib/2CKWI7RR. Available under Creative Commons Attribution 4.0 International. (details)

Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S., Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019g). Literature Reviews of Educational Technology Research in Low- and Middle-Income Countries: An audit of the field (Working Paper No. 2). EdTech Hub. https://doi.org/10.53832/edtechhub.0015. Available from http://docs.edtechhub.org/lib/NM6CPLE9. Available under Creative Commons Attribution 4.0 International. (details)

Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S., Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019p). Keyword inventory (version 1) (Working Paper — Research Instrument Nos. 08–1). EdTech Hub. https://doi.org/10.53832/edtechhub.0016. Available from https://docs.edtechhub.org/lib/LSEETV6K. Available under Creative Commons Attribution 4.0 International. (details)

Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S., Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019h). Methodology for literature reviews undertaken by the EdTech Hub (Working Paper No. 3). EdTech Hub. https://doi.org/10.5281/zenodo.3352101. Available from https://docs.edtechhub.org/lib/BMM3Z3CM. Available under Creative Commons Attribution 4.0 International. (details)

Haßler, B., Haseloff, G., Adam, T., Akoojee, S., Allier-Gagneur, Z., Ayika, S., Bahloul, K., Kigwilu, P. C., Costa, D. D., Damani, K., Gordon, R., Idris, A., Iseje, F., Jjuuko, R., Kagambèga, A., Khalayleh, A., Konayuma, G., Kunwufine, D., Langat, K., … Winkler, E. (2020a). Technical and Vocational Education and Training in Sub-Saharan Africa: A Systematic Review of the Research Landscape (Berufsbildung in SSA). VET Repository, Bundesinstitut für Berufsbildung, Bonn, Germany. (details)

Haßler, B., Major, L., & Hennessy, S. (2016). Tablet use in schools: A critical review of the evidence for learning outcomes. Journal of Computer Assisted Learning, 32(2), 139–156. https://doi.org/10.1111/jcal.12123. (details)

Haßler, B., Mansour, H., Friese, L., & Longley, S. (2024). Disseminating the Evidence and Outputs Generated by Your Programme: Three options for setting up an evidence library (Helpdesk Response No. 178). EdTech Hub. https://doi.org/10.53832/edtechhub.1001. Available from https://docs.edtechhub.org/lib/PWN42VDQ. Available under Creative Commons Attribution 4.0 International. (details)

Haßler, B., McBurnie, C., Walker, H., Klune, C., Huntington, B., & Bhutoria, A. (2024). Protocol for a systematic review with meta-analysis: Understanding quality characteristics of EdTech interventions and implementation for disadvantaged pupils (No. 1). Open Development & Education. https://doi.org/10.53832/opendeved.1077. Available from https://docs.opendeved.net/lib/2I2GT22T. (details)

Haßler, B., McIntyre, N., Mitchell, J., Martin, K., Nourie, K., & Damani, K. (2020v). A scoping review of technology in education in LMICs — descriptive statistics and sample search results (Internal Paper No. 6). EdTech Hub. https://doi.org/10.5281/zenodo.3631588. Available from https://docs.edtechhub.org/lib/CMRISZHV. (details)

Haddaway, M. W. & N. (n.d.). Developing collaborations and technology for evidence synthesis. Evidence Synthesis Hackathon. Retrieved January 19, 2024, from https://www.eshackathon.org/. (details)

Harrison, H., Griffin, S. J., Kuhn, I., & Usher-Smith, J. A. (2020). Software tools
to support title and abstract screening for systematic reviews in
healthcare: an evaluation. BMC Medical Research Methodology, 20(1),
7. https://doi.org/10.1186/s12874-020-0897-3. (details)

Hartling, L., Ospina, M., & Liang, Y. (2009). Risk of bias versus quality
assessment of randomised controlled trials: cross sectional study.
British Medical Journal, 339(1), 1–6. (details)

Higgins, J., & Green, S. (2008). Cochrane Handbook for Systematic Reviews
of Interventions. John Wiley & Sons, Ltd. (details)

Huettemann, S. (2023). Automated knowledge extraction from IS research articles combining sentence classification and ontological annotation. https://aisel.aisnet.org/wi2023/86/. (details)

Jalali, S., & Wohlin, C. (2012). Systematic literature studies: Database searches vs. backward snowballing. Proceedings of the ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, 29–38. https://doi.org/10.1145/2372251.2372257. (details)

Johnson, C. D., Bauer, B., & Niederman, F. (2019). The automation of management and business science. Academy of Management Perspectives, 35(2), 292–309. https://doi.org/10.5465/amp.2017.0159. (details)

Jonnalagadda, S. R., Goyal, P., & Huffman, M. D. (2015). Automating data extraction in systematic reviews: A systematic review. Systematic Reviews, 4(1), 78. https://doi.org/10.1186/s13643-015-0066-7. (details)

Jordan, K. (2019, December 18). Reviewing the research literature in educational technology for development: Balancing rigour and inclusivity. EdTech Hub. https://doi.org/10.5281/zenodo.3581041. Available from https://edtechhub.org/2019/12/18/reviewing_the_research_literature_in_educational_technology_for_development_balancing_rigour_and_inclusivity/. Available under Creative Commons Attribution 4.0 International. (details)

Kahili-Heede, M. (n.d.). JABSOM Library: Colandr for Systematic Reviews: Introducing Colandr. Retrieved January 22, 2024, from https://hslib.jabsom.hawaii.edu/colandr/home. (details)

King, R. D., Rowland, J., & Oliver, S. G. (2009). The automation of science.
Science, 324(5923), 85–89. https://doi.org/10.1126/science.1165620.
(details)

Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report. (details)

Kobayashi, V. B., Mol, S. T., & Berkers, H. A. (2017). Text mining in organizational research. Organizational Research Methods, 21(3), 733–765. https://doi.org/10.1177/1094428117722619. (details)

Kohl, C., McIntosh, E. J., Unger, S., Haddaway, N. R., Kecke, S., Schiemann, J.,
& Wilhelm, R. (2018). Online tools supporting the conduct and
reporting of systematic reviews and systematic maps: a case study on
CADIMA and review of existing tools. Environmental Evidence, 7(1), 8.
https://doi.org/10.1186/s13750-018-0115-5. (details)

Kunc, M., Mortenson, M. J., & Vidgen, R. (2018). A computational literature review of the field of System Dynamics from 1974 to 2017. Journal of Simulation, 12(2), 115–127. https://doi.org/10.1080/17477778.2018.1468950. (details)

Larsen, K., Hovorka, D., & Dennis, A. R. (2019). Understanding the elephant:
the discourse approach to boundary identification and corpus
construction for theory review articles. Journal of the Association for
Information Systems, 20(7), 887–928.
https://doi.org/10.17705/1jais.00556. (details)

Lemire, S., Peck, L. R., & Porowski, A. (2023). The evolution of systematic
evidence reviews: Past and future developments and their
implications for policy analysis. Politics & Policy, 51(3), 373–396.
https://doi.org/10.1111/polp.12532. (details)

Li, J., Larsen, K., & Abbasi, A. (2020). TheoryOn: a design framework and
system for unlocking behavioral knowledge through ontology
learning. MIS Quarterly, 44(4), 1733–1772.
https://doi.org/10.25300/MISQ/2020/15323. (details)


Lindberg, A. (2020). Developing theory through integrating human and machine pattern recognition. Journal of the Association for Information Systems, 21(1). https://doi.org/10.17705/1jais.00593. (details)

Lumivero. (2023, October 23). Revolutionizing Text Data Analysis with AI Autocoding with NVivo. Lumivero. https://lumivero.com/resources/blog/revolutionizing-text-data-analysis-with-ai-autocoding-with-nvivo/. (details)

Marshall, I. (2024). ijmarshall/robotreviewer. https://github.com/ijmarshall/robotreviewer (Original work published 2016). (details)

Marshall, I. J., & Wallace, B. C. (2019). Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8(1), 163. https://doi.org/10.1186/s13643-019-1074-9. (details)

Marshall, I. J., Kuiper, J., & Wallace, B. C. (2015). RobotReviewer: Evaluation of a system for automatically assessing bias in clinical trials. Journal of the American Medical Informatics Association, 23(1), 193–201. https://doi.org/10.1093/jamia/ocv044. (details)

Mortenson, M. J., & Vidgen, R. (2016). A computational literature review of the technology acceptance model. International Journal of Information Management, 36(6, Part B), 1248–1259. https://doi.org/10.1016/j.ijinfomgt.2016.07.007. Available from https://www.sciencedirect.com/science/article/pii/S0268401216300329. (details)

Nakadai, R., Nakawake, Y., & Shibasaki, S. (2023). AI language tools risk
scientific diversity and innovation. Nature Human Behaviour, 7(11),
1804–1805. https://doi.org/10.1038/s41562-023-01652-3. (details)

Nakagawa, S., Samarasinghe, G., Haddaway, N. R., Westgate, M. J., O’Dea, R. E., Noble, D. W., & Lagisz, M. (2019). Research weaving: Visualizing the future of research synthesis. Trends in Ecology & Evolution, 34(3), 224–238. https://doi.org/10.1016/j.tree.2018.11.007. Available from https://www.cell.com/trends/ecology-evolution/fulltext/S0169-5347(18)30278-7. (details)

Nelson, L. K. (2020). Computational grounded theory: A methodological framework. Sociological Methods & Research, 49(1), 3–42. https://doi.org/10.1177/0049124117729703. (details)


O’Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M., & Ananiadou, S. (2015).
Using text mining for study identification in systematic reviews: A
systematic review of current approaches. Systematic Reviews, 4(1), 5.
https://doi.org/10.1186/2046-4053-4-5. (details)

OpenAlex. (n.d.). OpenAlex: The open catalog to the global research system. Retrieved January 19, 2024, from https://openalex.org/. (details)

Papaioannou, D., Sutton, A., Carroll, C., Booth, A., & Wong, R. (2010).
Literature searching for social science systematic reviews:
Consideration of a range of search techniques. Health Information &
Libraries Journal, 27(2), 114–122.
https://doi.org/10.1111/j.1471-1842.2009.00863.x. (details)

Reason, T., Langham, J., Gimblett, A., Malcolm, B., & Klijn, S. (2023). Breaking
through limitations: Enhanced systematic literature reviews with large
language models. Population, 464, 25–0.
https://www.ispor.org/docs/default-source/euro2023/isporeurope23-rea
son--msr46poster30102023vfinal132992-pdf.pdf?sfvrsn=9cbf28b7_0.
(details)

Rebolledo Font de la Vall, R., & Gonzalez Araya, F. (2023). Exploring the
benefits and challenges of AI-language learning tools. International
Journal of Social Sciences and Humanities Invention, 10, 7569–7576.
https://doi.org/10.18535/ijsshi/v10i01.02. (details)

Research Solutions. (n.d.). Research Solutions Announces Acquisition of scite. Retrieved January 19, 2024, from https://www.researchsolutions.com/resources/press-releases/research-solutions-announces-acquisition-of-scite. (details)

Rowe, F., Kanita, N., & Walsh, I. (2023). The importance of theoretical
positioning and the relevance of using bibliometrics for literature
reviews. Journal of Decision Systems, 1–16.
https://doi.org/10.1080/12460125.2023.2217646. (details)

Russell-Rose, T., & Shokraneh, F. (2019). 2Dsearch: Facilitating reproducible and valid searching in evidence synthesis. BMJ Evidence-Based Medicine, 24(Suppl 1), 36. (details)

Saeidmehr, A., Steel, P., & Samavati, F. (2023). Systematic Review using a
Spiral approach with Machine Learning.
https://doi.org/10.21203/rs.3.rs-2497596/v1. (details)

Sarin, G., Kumar, P., & Mukund, M. (2023). Text classification using deep learning techniques: A bibliometric analysis and future research directions. Benchmarking: An International Journal. https://doi.org/10.1108/BIJ-07-2022-0454. (details)

Schmiedel, T., Müller, O., & Brocke, J. (2019). Topic modeling as a strategy of
inquiry in organizational research: A tutorial with an application
example on organizational culture. Organizational Research Methods,
22(4), 941–968. https://doi.org/10.1177/1094428118773858. (details)

Schryen, G., Wagner, G., Benlian, A., & Paré, G. (2020). A knowledge
development perspective on literature reviews: Validation of a new
typology in the IS field. Communications of the AIS, 46, 134–168.
https://ris.uni-paderborn.de/record/11946. (details)

Shao, Y., Jiang, Y., Kanell, T. A., Xu, P., Khattab, O., & Lam, M. S. (2024).
Assisting in Writing Wikipedia-like Articles From Scratch with Large
Language Models (No. arXiv:2402.14207). arXiv.
https://doi.org/10.48550/arXiv.2402.14207. (details)

Smith, L. C. (2023). Reviews and reviewing: Approaches to research synthesis. An annual review of information science and technology (ARIST) paper. Journal of the Association for Information Science and Technology, asi.24851. https://doi.org/10.1002/asi.24851. (details)

Spillias, S., Tuohy, P., Andreotta, M., Annand-Jones, R., Boschetti, F.,
Cvitanovic, C., Duggan, J., Fulton, E., Karcher, D., & Paris, C. (2023).
Human-AI collaboration to identify literature for evidence synthesis.
https://doi.org/10.21203/rs.3.rs-3099291/v1. (details)

Storey, V. C., Lukyanenko, R., & Castellanos, A. (2023). Conceptual modeling: Topics, themes, and technology trends. ACM Computing Surveys, 55(14s), 1–38. https://doi.org/10.1145/3589338. (details)

Sturm, B., & Sunyaev, A. (2019). Design principles for systematic search
systems: A holistic synthesis of a rigorous multi-cycle design science
research journey. Business & Information Systems Engineering, 61(1),
91–111. https://doi.org/10.1007/s12599-018-0569-6. (details)

Swanson, D., & Smalheiser, N. (1997). An interactive system for finding complementary literatures: A stimulus to scientific discovery. Artificial Intelligence, 91(2), 183–203. https://doi.org/10.1016/S0004-3702(97)00008-8. (details)

Teijema, J. J., de Bruin, J., Bagheri, A., & van de Schoot, R. (2023). Large-scale
simulation study of active learning models for systematic reviews.
https://doi.org/10.31234/osf.io/2w3rm. (details)


Templier, M., & Paré, G. (2018). Transparency in literature reviews: An assessment of reporting practices across review types and genres in top IS journals. European Journal of Information Systems, 27(5), 503–550. https://doi.org/10.1080/0960085X.2017.1398880. (details)

Thilakaratne, M., Falkner, K., & Atapattu, T. (2019). A systematic review on literature-based discovery: General overview, methodology, & statistical analysis. ACM Computing Surveys, 52(6), 129:1-129:34. https://doi.org/10.1145/3365756. (details)

Tsafnat, G., Glasziou, P., & Choong, M. K. (2014). Systematic review automation technologies. Systematic Reviews, 3, 1–15. https://doi.org/10.1186/2046-4053-3-74. (details)

United Nations. (n.d.). THE 17 GOALS | Sustainable Development. Retrieved October 7, 2024, from https://sdgs.un.org/goals. (details)

van Dinter, R., Tekinerdogan, B., & Catal, C. (2021). Automation of systematic literature reviews: A systematic literature review. Information and Software Technology, 136, 106589. https://doi.org/10.1016/j.infsof.2021.106589. (details)

van de Schoot, R. (2023). Rensvandeschoot/software-overview-machine-learning-for-screening-text. https://github.com/Rensvandeschoot/software-overview-machine-learning-for-screening-text (Original work published 2022). (details)

van de Schoot, R., de Bruin, J., Schram, R., Zahedi, P., de Boer, J., Weijdema,
F., Kramer, B., Huijts, M., Hoogerwerf, M., Ferdinands, G., Harkema, A.,
Willemsen, J., Ma, Y., Fang, Q., Hindriks, S., Tummers, L., & Oberski, D. L.
(2021). An open source machine learning framework for efficient and
transparent systematic reviews. Nature Machine Intelligence, 3(2),
125–133. https://doi.org/10.1038/s42256-020-00287-7. (details)

Viraj, M. (n.d.). Scholarcy Summarizer Review: Pricing, Features, & More. Retrieved January 22, 2024, from https://www.notta.ai/en/blog/scholarcy-summarizer. (details)


Wagner, G., Lukyanenko, R., & Paré, G. (2022). Artificial intelligence and the
conduct of literature reviews. Journal of Information Technology, 37(2),
209–226. https://doi.org/10.1177/02683962211048201. (details)

Wang, Z., Nayfeh, T., Tetzlaff, J., O’Blenis, P., & Murad, M. H. (2020). Error rates
of human reviewers during abstract screening in systematic reviews.
PLOS ONE, 15(1), e0227742. https://doi.org/10.1371/journal.pone.0227742.
(details)

Xiao, D. (n.d.). Research Guides: AI-Based Literature Review Tools: Home. Retrieved January 19, 2024, from https://tamu.libguides.com/c.php?g=1289555&p=9470549. (details)

Xiao, Y., & Watson, M. (2017). Guidance on conducting a systematic literature review. Journal of Planning Education and Research, 39. https://doi.org/10.1177/0739456x17723971. (details)

Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J.-B.,
Yuan, J., & Li, Y. (2021). A review of Artificial Intelligence (AI) in education
from 2010 to 2020. Complexity, 2021, e8812542.
https://doi.org/10.1155/2021/8812542. (details)
⁅bibliography:end⁆
