Using AI to Automate the Literature Review Process
DOI 10.53832/edtechhub.1003
@GlobalEdTechHub edtechhub.org
Creative Commons Attribution 4.0 International https://creativecommons.org/licenses/by/4.0/
EdTech Hub
Creative Commons Attribution: This document uses some content from Gerit Wagner, Roman Lukyanenko and Guy Paré (2022). Artificial intelligence and the conduct of literature reviews. Journal of Information Technology, Vol. 37(2), 209–226; © Association for Information Technology Trust 2021; DOI: 10.1177/02683962211048201; Creative Commons Attribution, https://creativecommons.org/licenses/by/4.0/ [reusing-open-access-and-sage-choice-content].
Please also note that this topic brief is set in the context of EdTech Hub’s
prior work, and, therefore, also includes several self-citations by the lead
author, who also serves as technical director of EdTech Hub.
Contents
List of figures
Glossary
Additional abbreviations
1. Introduction
2. Overview of findings
2.1. AI tools for literature review
2.2. Literature inputs
2.3. Questions posed for this topic brief
3. AI and literature reviews
4. Approach for this topic brief
4.1. Methodology for this topic brief
4.2. Results
5. Integrated literature review tools
5.1. Fully integrated tools
5.2. Semi-integrated tools
5.3. Focus on literature search and discovery
5.4. Focus on literature screening and categorisation
5.5. Focus on summarisation and writing assistance
5.6. General Purpose Large Language Models
5.7. GPT Researcher
5.8. Other tools
6. Tool reviews
6.1. Problem formulation
6.2. Literature searches
6.3. Screening for inclusion
6.4. Quality assessment
6.5. Data extraction
6.6. Data analysis and interpretation
6.7. General observations
7. Outlook
7.1. Convene stakeholders
7.2. Undertake in-depth exploration of AI tools
7.3. Conclusion
Bibliography
Figures
Figure 2.1. Simplified literature review process
Figure 4.1. AI-based tools for the different steps of the literature review process in education research
Figure 6.2. The same article viewed on the Journal of Computer Assisted Learning (JCAL) website
Figure 6.4. The same article viewed on Open Development & Education’s evidence library
Glossary
AI (Artificial Intelligence): A branch of computer science that aims to
create machines or systems capable of intelligent behaviour, simulating
human cognitive processes.
LLM (Large Language Model): An AI model trained on vast amounts of textual data. These models are characterised by their large size, typically containing millions or even billions of parameters.
Additional abbreviations
CSV Comma-separated values
1. Introduction
Systematic literature reviews are often conducted manually in many
research organisations. This topic brief explores the use of AI to automate
the literature review process in the field of EdTech in order to improve the
speed and efficiency of creating literature reviews across the sector.
The topic brief is guided by the following questions, provided as part of the
corresponding helpdesk request to EdTech Hub:
■ How appropriate are existing tools? Are they easy to use? Do they
have licensing or cost barriers? How advanced are these AI tools?
■ What are the pros and cons of building a bespoke AI tool focusing on education in LMICs versus using an existing commercial product? What are the cost implications of the two options?
2. Overview of findings
Several steps need to be undertaken to ensure a literature review is
successful, as outlined below. In many of these steps, existing AI-based
tools can make significant contributions. However, the fields of education
and EdTech specifically (and the social sciences more broadly) face
particular challenges around the availability of evidence.
Figure 2.4. The lack of a single database leads to divergent literature reviews and
meta-analyses
Figure 4.1. in Section 4.2. further outlines the potential for AI support in the
literature review process, identifying various web-based tools that could be
used at each step, and their cost. Examples of AI tools that could be used
for the literature review process, and mentioned in Figure 4.1., include, for
example, tools for problem formulation (⇡Elicit), literature search (⇡LitSonar,
⇡Elicit, ⇡ORKG Ask, and ⇡EPPI-Reviewer), screening for inclusion (⇡ASReview,
⇡Rayyan), quality assessment (⇡RevMan, ⇡RobotReviewer), data extraction
(⇡Elicit), and data analysis and interpretation (⇡RevMan, ⇡dmetar).
Web-based AI tools like ⇡Elicit are useful for examining research questions quickly and without needing specialist knowledge; however, such tools often operate on a very limited selection of the literature relevant to education and EdTech. Based on prior experiments, the fraction of literature covered may be 10% or less.
For more rigorous reviews, as things stand, tools like ⇡EPPI-Reviewer will create more credible outputs. That is not to say that you should not consult web-based tools like Google Scholar; however, it is advisable to supplement such tools with other literature databases for rigorous work.
A challenge with web-based tools is the need to check the input literature
on which they operate. Semantic Scholar is one of the databases that has
been available for some time and is widely used. However, it does not index
education research publications extensively. Moreover, the precise AI
processes used by web-based tools are typically not open to inspection.
Such tools are only useful for gathering quick impressions. They would only
be one component of more rigorous work: both AI-based and
non-AI-based processes need to be transparent and ‘explainable’ (in the
sense of ‘explainable AI’).
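To give a concrete sense of how such input coverage can be checked, the short sketch below queries the public Semantic Scholar Graph API for a single publication and reports whether it is indexed. This is a minimal illustration under stated assumptions: the endpoint and field names reflect the publicly documented API at the time of writing, and the DOI used (a JCAL article listed in the bibliography of this brief) is purely an example.

```python
import requests

# Minimal sketch: check whether a given DOI is indexed by Semantic Scholar
# before relying on tools built on top of that database.
# Assumption: the public Graph API v1 endpoint shown here remains available.
API = "https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}"

def is_indexed(doi: str) -> bool:
    """Return True if Semantic Scholar holds a record for the given DOI."""
    response = requests.get(API.format(doi=doi), params={"fields": "title,year"})
    if response.status_code == 404:  # not indexed
        return False
    response.raise_for_status()
    record = response.json()
    print(f"Indexed: {record.get('title')} ({record.get('year')})")
    return True

# Illustrative DOI only; substitute the key papers from your own search strategy.
is_indexed("10.1111/jcal.12123")
```

Running such a spot-check over a sample of known, relevant publications gives a rough estimate of how much of the field a given database actually covers.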
The question ‘How appropriate are these existing tools?’ is answered more comprehensively in Section 5. We discuss integrated literature review tools, such as fully integrated tools (5.1.), semi-integrated tools (5.2.), those that focus on literature search and discovery (5.3.), literature screening and categorisation (5.4.), and summarisation and writing assistance (5.5.). In addition, we consider general purpose Large Language Models (5.6.), GPT Researcher (5.7.), and other tools (5.8.).
EdTech Hub has already done some work on using AI tools for literature
reviews (⇡Haßler et al., 2021k). Some of these approaches were reused in a
programme outside EdTech Hub, namely the England-focused
collaboration between Open Development & Education and the Education
Endowment Foundation (EEF) in a literature review focused on EdTech for
disadvantaged children (⇡Haßler et al., 2024).
Figure 3.1. Separating technology use for education from technology use for
education research
“a doubling of the scientific corpus for many fields every nine years, a
trend that reflects the steady increase in the number of researchers
and can be readily confirmed as having continued or even
accelerated.” (⇡Saeidmehr et al., 2023, p. 1)
While new AI tools and platforms are emerging rapidly, ⇡Teijema et al.
(2023) note the
¹ See http://systematicreviewtools.com. Retrieved 29 August 2024.
The review of these sources surfaced several tools, both tools previously reviewed by ⇡Wagner et al. (2022) and newer tools; these were examined and tested where relevant to the topic of this review.
4.2. Results
An overview of the new tools discovered and considered relevant to this topic brief is available in Figure 4.1. below.
Figure 4.1. AI-based tools for the different steps of the literature review process in education research.
Step 2. Literature search
Tools:
■ TheoryOn (⇡Li et al., 2020) enables ontology-based searches for constructs and construct relationships in behavioural theories. Cost: N/A.
■ ⇡litbaskets (⇡Boell & Wang, 2019) supports researchers in setting a manageable scope in terms of journals covered. Cost: the service is realised through Scopus, suggesting that users need access to Scopus to utilise the features.
■ ⇡LitSonar (⇡Sturm & Sunyaev, 2019) offers syntactic translation of search queries for different databases; it also provides (journal) coverage reports. Cost: currently provided only to members of cooperating institutions due to licensing restrictions.
Potential for AI support:
■ Very high potential, since the most important search methods consist of steps that are repetitive and time-consuming, that is, amenable to automation.

Step 3. Screening for inclusion
Tools:
■ ⇡ASReview (⇡van de Schoot et al., 2021) offers screening prioritisation. Cost: free and open-source software.
■ The automated detection of implicit theory (ADIT) approach (⇡Larsen et al., 2019), for researchers capable of designing and programming machine learning classifiers (research on the Technology Acceptance Model).
■ ⇡Rayyan. Cost — individual plans: free plan (USD 0, free forever), professional plan (USD 8.25 per month, billed annually), student plan (USD 4 per month, billed annually); teams plans: Pro team (USD 8.25 per user, per month, billed annually), Teams+ (USD 24.99 per user, per month, billed annually); enterprise plans: custom pricing.
Potential for AI support:
■ High potential for semi-automated support in the first screen, which requires many repetitive decisions.
■ Moderate potential for the second screen, which requires considerable expert judgement (especially for borderline cases).

Step 5. Data extraction
Tools:
■ Software for data extraction and qualitative content analysis (e.g., NVivo and ATLAS.ti) offers AI-based functionality for qualitative coding, named entity recognition, and sentiment analysis.
■ WebPlotDigitizer and Graph2Data, for extracting data from statistical plots.
■ ⇡Elicit.
Potential for AI support:
■ Moderate potential for reviews requiring formal data extraction (descriptive reviews, scoping reviews, meta-analyses, and qualitative systematic reviews).
■ High for objective and atomic data items (e.g., sample sizes); low for complex data which has ambiguities and lends itself to different interpretations (e.g., theoretical arguments and main conclusions).

Step 6. Data analysis and interpretation
Tools:
■ Descriptive synthesis: tools for text-mining (⇡Kobayashi et al., 2017), scientometric techniques and topic models (⇡Nakagawa et al., 2019; ⇡Schmiedel et al., 2019), and computational reviews aimed at stimulating conceptual contributions (⇡Antons et al., 2021).
■ Theory building: examples of inductive (computationally intensive) theory development (e.g., ⇡Berente et al., 2019; ⇡Lindberg, 2020; ⇡Nelson, 2020).
■ Theory testing: tools for meta-analyses (e.g., ⇡RevMan and ⇡dmetar).
Potential for AI support:
■ Very high potential for descriptive syntheses.
■ Moderate potential for (inductive) theory development and theory testing.
■ Low, non-existent potential for reviews adopting traditional and interpretive approaches.
This section reviews integrated tools that cover multiple stages of the
literature review workflow. Many of these tools are web-based. Section 6
below discusses tools that are specifically relevant at different stages of a
literature review.
5.1.1. EPPI-Reviewer
⇡EPPI-Reviewer is subscription and web-based software that assists with
several types of literature reviews (meta-analyses, systematic reviews,
narrative reviews, meta-ethnographies, and more). It is capable of
managing and analysing both large- and small-scale data. ⇡EPPI-Reviewer
can automate several processes in literature reviews, including
5.1.2. ASReview
⇡ASReview (⇡van de Schoot et al., 2021) offers a promising open-source
option with its range of machine learning classifiers (including naive Bayes,
support vector machines, logistic regression, and random forest classifiers).
It learns from initial inclusion decisions and leverages these insights to
present researchers with a prioritised list of papers (i.e., the titles and
abstracts), proceeding from those most likely to be included to those least
likely.
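ASReview’s interface handles this active-learning loop for the user; the fragment below is only a hypothetical sketch of the underlying idea using standard Python libraries (scikit-learn), not ASReview’s own API. A naive Bayes classifier is trained on a few initial include / exclude decisions and then ranks the remaining, unscreened abstracts from most to least likely to be relevant; all titles and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Placeholder titles with initial screening decisions (1 = include, 0 = exclude).
# A real review would use full abstracts and far more records.
labelled = [
    ("Tablet use and learning outcomes in primary schools", 1),
    ("Soil chemistry of volcanic regions", 0),
    ("Teacher professional development with mobile phones", 1),
    ("Protein folding simulation benchmarks", 0),
]
unscreened = [
    "Radio instruction for out-of-school children",
    "Thermal properties of metal alloys",
    "EdTech interventions for disadvantaged pupils",
]

# Train a simple classifier on the initial decisions.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform([text for text, _ in labelled])
model = MultinomialNB().fit(X, [label for _, label in labelled])

# Rank the unscreened records from most to least likely to be included.
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
for score, title in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {title}")
```

In practice the model would be retrained after each batch of decisions, so that the ranking of the remaining records keeps improving as screening proceeds.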
5.1.3. DistillerSR
⇡DistillerSR is subscription, web-based systematic review software that uses
AI and intelligent workflows to automate the management of every stage
of a literature review: searching, screening, text retrieval, data extraction
and appraisal, and reporting. All stages are configurable based on user
needs.
5.1.4. Colandr
⇡Colandr (machine-learning assisted) is a free, web-based, open-access tool
designed for conducting evidence synthesis projects, including systematic
and scoping reviews. It supports various stages of the systematic review
process, including protocol development, citation deduplication, article
screening, data extraction and coding, and manuscript development. The
tool employs machine learning to facilitate evidence synthesis, optimising
the process of citation sorting by relevance and semi-automating the
classification of included documents.
5.2.1. Iris.ai
⇡Iris.ai is a forthcoming subscription service that provides an AI-driven
‘Researcher Workspace’ tool suite to assist with research and systematic
reviews. The platform holds value in using AI to support several steps in the
literature review process, including initial search, screening, data extraction,
and analysis.
The tool suite includes different modules that help automate content-based searches, context and data filtering, data extraction and systematisation, and the analysis of document sets. It can also provide automated summaries of included papers and allows users to distil insights through a chat feature, enabling interactions between researchers and data insights. Iris.ai claims that the Researcher Workspace can save up to 75% of the manual effort in the research process.
5.2.2. Lateral
⇡Lateral.io is a subscription service that focuses on using AI-powered tools
to assist with the organisation and process of literature search, screening,
and data extraction. It offers users a paper search integrated with different
third-party applications, such as Semantic Scholar, to search for relevant
literature. It also has several AI-powered tools, such as concept recognition,
a smart PDF reader, and a search function to help automate literature screening and data extraction.
⇡Lateral.io is a promising tool that can enable the automation of the early
stages of a literature review to help streamline workflows. However, it is
unclear how robust the literature search function is for education research.
5.2.3. SciSpace
⇡SciSpace, formerly known as Typeset, is an AI-powered platform designed to streamline the research workflow. SciSpace facilitates the discovery, creation, and publication of research papers, and offers tools for understanding and elaborating on academic texts in simpler language and for finding connected papers, authors, and topics. It is best suited for researchers, academic professionals, and students involved in writing, collaborating on, and publishing research papers. SciSpace has a forever-free plan with limited feature access. SciSpace Premium is available for USD 12 per month, billed annually, and custom pricing is available for teams and enterprises.
5.2.4. Scanlitt
⇡Scanlitt is a digital research assistant platform designed to streamline
literature review and knowledge acquisition for the scientific community
with the following core features:
5.3.3. Consensus
⇡Consensus is a search engine in its beta phase that uses language models
to identify and synthesise insights from academic research papers.
Consensus’ source material comes from the Semantic Scholar database.
The main purpose of ⇡Consensus is to provide a list of up to 20 of the most relevant papers related to a research question or phrase input into the engine. The language model then ranks the search results by relevance to the query.
⇡Consensus’ value lies somewhere between literature search and early data
extraction. While it cannot be used to search a high volume of papers, it
can help produce general insights from the most relevant literature on a
topic. Used in conjunction with other tools, it can improve the early
workflow of a literature review process. Its use of the Semantic Scholar
database might limit its applicability to education research.
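The relevance-ranking step that such tools perform can be approximated with open-source sentence embeddings. The sketch below assumes the sentence-transformers package and one of its openly distributed models; it scores a few placeholder abstracts against a research question by cosine similarity and illustrates the general technique only, not Consensus’ actual pipeline.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: the openly distributed 'all-MiniLM-L6-v2' model can be downloaded.
model = SentenceTransformer("all-MiniLM-L6-v2")

question = "Does one-to-one tablet provision improve learning outcomes?"
abstracts = [  # placeholder candidate abstracts
    "A critical review of tablet use in schools and evidence for learning outcomes.",
    "Measurement of cosmic ray flux at high-altitude observatories.",
    "Teacher allocation policies in low-income countries: a scoping review.",
]

# Embed the question and the candidates, then rank by cosine similarity.
query_embedding = model.encode(question, convert_to_tensor=True)
candidate_embeddings = model.encode(abstracts, convert_to_tensor=True)
similarities = util.cos_sim(query_embedding, candidate_embeddings)[0]

for score, abstract in sorted(zip(similarities.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```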
5.4.1. Covidence
⇡Covidence is a tiered subscription, web-based tool that assists with systematic reviews. It is aimed at supporting institutions and organisations, such as universities, government organisations, and research institutes. As such, ⇡Covidence enables both small and large teams to collaborate on systematic reviews.
5.4.2. Abstrackr
⇡Abstrackr is developed and maintained by the Center for Evidence
Synthesis in Health at Brown University. It is a free, open-source, web-based
application aimed at optimising the citation screening step for systematic
reviews. The tool includes a web-based annotation feature, allowing review participants to screen citations for relevance collaboratively. It employs machine learning technologies, still under development, to semi-automate the citation screening process. The software allows citations to be imported from sources like RefMan or PubMed. It provides functionality for single or double-screening citations and a decision reconciliation mode for reviewing citations with unclear relevance.
⇡Abstrackr is best suited for researchers, academics, and professionals
involved in conducting systematic reviews, particularly in the biomedical
field. It is designed to aid these users in managing the growing volume of
biomedical literature and make systematic reviews less onerous. Abstrackr is free, making it accessible to a wide range of users regardless of budget constraints.
5.4.3. RobotAnalyst
⇡RobotAnalyst (National Centre for Text Mining) is a web-based software
tool developed to assist in the literature screening phase of systematic
reviews. It combines text-mining and machine learning algorithms to
organise references by content and prioritise them based on a relevancy
classification model that is trained and updated throughout the process.
This tool is particularly useful for researchers and professionals engaged in
systematic reviews, helping them to manage and prioritise a large volume
of literature efficiently. According to ⇡van de Schoot (2023),
5.5.1. SciPub+
⇡SciPub+ is a recent subscription-based tool that features a collection of ten
AI assistants designed to support the whole workflow of academic writing.
The AI assistants guide individuals through key parts of the academic
writing process, of which literature reviews are one component. The
literature review AI assistant, like the others, makes use of a form that asks
the researcher important questions related to their project to enable the AI
assistant to generate a draft literature review.
5.5.2. Paperdigest
⇡Paperdigest is an AI-powered tool designed to summarise academic
articles, providing a quick and efficient way for researchers, students, and
5.5.3. Scholarcy
⇡Scholarcy is an AI-powered tool designed to assist academic research by
quickly analysing and summarising research articles, reports, and book
chapters. It summarises entire papers, including references, and rewrites
statements in the third person for easy citation. It also highlights key
claims, statistics, terms, and abbreviations. The tool links to open-access
versions of each cited source, reducing the need for manual searching. It
also extracts figures and tables from papers, providing them in a format
suitable for further analysis.
⇡Scholarcy offers browser extensions for Chrome, Firefox, and Edge, and
integrates with the Scholarcy Library for storing and organising summary
cards. Scholarcy does not specify the research databases it uses to generate
results. However, it finds references by locating open-access PDFs from
sources like Google Scholar and arXiv and uses the Unpaywall API to assist
with this.
Scholarcy offers both free and paid-for plans. The free plan includes
browser extensions and flashcards, while the paid-for plans offer additional
features like a personal library for summary flashcards and academic
institution licences. The personal library plan starts at USD 4.90 per month, and the academic institution licence starts from USD 8,000+ per year (see ⇡Viraj, no date, for a review of Scholarcy pricing and features).
5.5.4. Elicit
⇡Elicit is an AI research assistant designed to help researchers automate
time-consuming tasks such as summarising papers, extracting data, and
synthesising findings. Users can search for research papers using natural
language queries, get one-sentence abstracts, select relevant papers, and
extract details into organised tables. Elicit also identifies themes and
concepts across multiple papers, enhancing the literature review process.
With a database of 125 million academic papers, Elicit saves researchers
time and effort, making it easier to stay well-informed and conduct
systematic reviews.
Elicit also offers a ‘my library’ feature, where users can upload their own datasets.
Figure 5.1. ChatGPT 4o response: What research evidence is there about teacher
allocation in low-income countries?
6. Tool reviews
As noted above, the subsections below follow the steps of the literature
review process, outlining relevant AI-based tools specific to various stages
of the process. For web-based, integrated tools, see Section 5. The subsections correspond with entries in the table in Figure 4.1.
⇡Wagner et al. (2022) note that a prevalent challenge for literature reviews
in the social sciences is the lack of databases comprehensively curating
research published in the main outlets, including journals and conferences
(⇡Brocke et al., 2015). Within the domain of EdTech, EdTech Hub’s evidence
library is one such effort to comprehensively curate new research on
EdTech in low-income countries (⇡Haßler et al., 2024).
Figure 6.1. Citation and references for a publication in Web of Science — forward
and backward snowballing.
Figure 6.2. The same article viewed on the Journal of Computer Assisted Learning (JCAL) website, with the ⇡Scite plugin active — Scite provides forward snowballing only but classifies citations as supporting / mentioning / contrasting, with a total of 186 citing articles
Figure 6.3. The same article viewed on Google Scholar, indicating 528 citing
articles and illustrating the ability to search within citing articles
Figure 6.4. The same article viewed on Open Development & Education’s
evidence library together with citations and citing articles (forward / backward
snowballing; URL: https://docs.opendeved.net/lib/9IYKEUKJ).
Tools like the evidence libraries of the EdTech Hub and Open Development & Education also offer citation trees (Figure 6.4.; ⇡Haßler et al., 2024); however, these are currently only available for a very limited number of publications, although they are open data. One of the goals of such evidence libraries is to show new research in the context of other publications (⇡Haßler et al., 2024). AI tools can support the generation of such open citation trees, including extracting, consolidating, and merging reference data.
The discussion regarding the above figures illustrates that it is unlikely that
any single tool can satisfy all research needs; instead, specific tools need to
be chosen for the required tasks.
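As an illustration of how openly licensed metadata can feed such citation trees, the sketch below retrieves reference and citation counts for one work from the public OpenAlex API; the DOI-based works endpoint and the field names used are assumptions based on OpenAlex’s published schema, and the DOI is the illustrative JCAL article from the figures above. Backward snowballing uses the list of referenced works; forward snowballing can follow the cited-by listing that the same record points to.

```python
import requests

# Minimal sketch: gather the raw counts behind a citation tree from OpenAlex.
# Assumptions: the public DOI-based works endpoint and the fields
# 'referenced_works' (backward) and 'cited_by_count' (forward) are available.
BASE = "https://api.openalex.org/works/doi:{doi}"

def snowball_summary(doi: str) -> None:
    work = requests.get(BASE.format(doi=doi)).json()
    print(f"{work.get('display_name')} ({work.get('publication_year')})")
    # Backward snowballing: works that this article cites.
    print(f"  references: {len(work.get('referenced_works', []))}")
    # Forward snowballing: works citing this article (count only here).
    print(f"  cited by:   {work.get('cited_by_count')}")

snowball_summary("10.1111/jcal.12123")  # illustrative DOI from the figures above
```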
Regarding ⇡Scite, we note that this tool has been around for several years,
and we have used it regularly. It offers a freemium-based model with a free
web plugin and a plugin for Zotero; access to the main account requires a
subscription. In November 2023, Research Solutions announced the
acquisition of Scite (⇡Research Solutions, no date). With AI solutions
emerging very quickly, other companies and organisations will frequently
acquire products, and feature sets will change. For example, Elsevier is
adding AI to their Scopus literature search tool (⇡Aguilera Cora et al., 2024;
⇡Elsevier, no date; ⇡Elsevier Products, no date).
New tools are also emerging in the areas of documenting, analysing, and
justifying individual search strategies (cf. ⇡Templier & Paré, 2018), as well as
syntactic search query validation (⇡Russell-Rose & Shokraneh, 2019).
⇡Wagner et al. (2022) note that this could support researchers in designing
and improving different elements of search strategies, including analysis
and justification of the scope (publication outlets covered and the selection
of search terms in database searches). ⇡Sturm & Sunyaev’s (2019) paper
illustrates how journal coverage reports could enable substantially more
targeted and efficient literature searches.
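To illustrate the repetitive, rule-based nature of the task that such tools automate, the hypothetical helper below renders the same conceptual query in two database syntaxes: Scopus-style TITLE-ABS-KEY fields and PubMed-style [Title/Abstract] tags. The terms and templates are placeholders, not a validated search strategy.

```python
# Hypothetical illustration of syntactic query translation: the same
# conceptual query rendered in the field syntax of two databases.
TERMS = ['"educational technology"', '"low-income countries"']

def scopus_query(terms):
    # Scopus searches titles, abstracts and keywords via TITLE-ABS-KEY(...).
    return " AND ".join(f"TITLE-ABS-KEY({term})" for term in terms)

def pubmed_query(terms):
    # PubMed uses per-term field tags such as [Title/Abstract].
    return " AND ".join(f"{term}[Title/Abstract]" for term in terms)

print(scopus_query(TERMS))   # TITLE-ABS-KEY("educational technology") AND ...
print(pubmed_query(TERMS))   # "educational technology"[Title/Abstract] AND ...
```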
AI-based tool support for screening has been evolving over the years (⇡Harrison et al., 2020), with promising recent progress in using AI to screen titles and abstracts.
6.3.1. Taxonomy
The second screening is dedicated to disentangling the remaining cases, which can be particularly challenging since research in education (like climate change mitigation research) is not standardised as strictly as in other disciplines. In contrast to the health sciences and biology, for instance, the lack of widely used taxonomies for education / EdTech constructs (or, indeed, climate change mitigation) and the lack of a standard vocabulary for keywords (contrasting with ‘Medical Subject Headings’ / MeSH terms) can make it difficult to achieve the required classification performance in the second screening (cf. ⇡O’Mara-Eves et al., 2015). This challenge applies to humans and machines alike.
We note that EdTech Hub has already made some progress in developing a
multi-language keyword inventory that follows appropriate strategies for
classifying education research and data extraction (⇡Education Endowment
Foundation & Durham University, 2022; ⇡EPPI Centre, 2003). The inventory allows studies to be organised and coded based on keywords relating to publication status, geographic focus, curricular focus, population, and more (⇡Haßler et al., 2019p; ⇡Haßler et al., 2021k). Figure 6.5. illustrates an example of this.
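To make the idea concrete, the hypothetical fragment below shows how such an inventory might be applied programmatically: a dictionary maps coding categories to indicative keywords, and each abstract is tagged with the categories whose keywords it mentions. The categories and keywords are invented placeholders, not the actual contents of EdTech Hub’s inventory.

```python
# Hypothetical keyword inventory: coding categories mapped to indicative keywords.
INVENTORY = {
    "geographic focus: LMIC": ["low-income", "sub-saharan", "lmic"],
    "population: primary": ["primary school", "early grade"],
    "curricular focus: literacy": ["reading", "literacy"],
}

def tag(abstract: str) -> list:
    """Return the categories whose keywords appear in the abstract."""
    text = abstract.lower()
    return [
        category
        for category, keywords in INVENTORY.items()
        if any(keyword in text for keyword in keywords)
    ]

print(tag("An early grade reading intervention in low-income settings."))
# -> ['geographic focus: LMIC', 'population: primary', 'curricular focus: literacy']
```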
6.3.2. Rayyan
⇡Rayyan is a three-tiered subscription, web-based tool that assists with
automating literature reviews. It has subscriptions for individuals and
teams. The free subscription tier for individuals allows for up to three active
reviews with an unlimited number of reviewers but has limited
functionality compared to the other tiers. The team subscription plans have
no free tier.
Figure 6.5. Extract from EdTech Hub’s existing keyword inventory (⇡Haßler et al., 2019p)
Tools used in this area, such as ⇡ATLAS.ti and ⇡NVivo, are implementing
Natural Language Processing and machine learning algorithms for tasks
such as automated qualitative coding, named entity recognition and
sentiment analysis (AI in ATLAS.ti: ⇡ATLAS.ti, no date; AI in NVivo: ⇡Lumivero,
2023). There are also specialised tools for extracting data from tables or
statistical plots, such as ⇡WebPlotDigitizer.
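As a small illustration of the kind of functionality these packages wrap, the sketch below runs off-the-shelf named entity recognition over one sentence with the open-source spaCy library, assuming its small English model has been downloaded; it is not intended to reproduce the ATLAS.ti or NVivo pipelines, and the sentence is a made-up example.

```python
import spacy

# Assumption: the small English model has been installed, e.g. via
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp(
    "The Education Endowment Foundation funded a 2022 tablet study "
    "in England covering 3,000 primary school pupils."
)

# Print each detected entity with its label (organisations, dates, places, quantities).
for entity in doc.ents:
    print(f"{entity.text:35} {entity.label_}")
```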
7. Outlook
⇡Wagner et al. (2022) outline an agenda suggesting how information
science researchers can focus and coordinate their efforts in advancing AI
for literature review. They note that nurturing this endeavour is a task for
the entire scholarly community, including a broad range of researchers,
methodologists, reviewers, journal editors, and authors of primary research
papers. We recommend consulting the full set of recommendations in ⇡Wagner et al. (2022).
We close by highlighting some areas that pertain closely to this topic brief,
with reference to initial recommendations made in Section 2.3.6. above.
7.3. Conclusion
The two activities mentioned above would allow for an informed,
evidence-based pathway towards the better use of AI tools to help identify,
review, and synthesise evidence for literature reviews for education / EdTech
in LMICs.
Bibliography
⁅bibliography:start⁆
The first part of the bibliography below lists tools referenced in this brief in
alphabetical order. This is followed by a list of works cited in the brief in
alphabetical order.
AI Tools
Elicit: Find scientific research papers. (n.d.). Retrieved January 19, 2024, from
https://elicit.com/?workflow=table-of-papers. (details)
Future Tools — Find The Exact AI Tool For Your Needs. (n.d.). Retrieved
January 22, 2024, from https://www.futuretools.io/. (details)
ORKG Ask | Find research you are actually looking for. (n.d.). Retrieved July
16, 2024, from https://ask.orkg.org/. (details)
Scite.ai (AI for Research). (n.d.). Scite.Ai. Retrieved January 19, 2024, from
https://scite.ai. (details)
Typeset (AI Chat for scientific PDFs | SciSpace). (n.d.). Retrieved January 19,
2024, from https://typeset.io. (details)
References
ATLAS.ti. (n.d.). https://atlasti.com/atlas-ti-ai-lab-accelerating-innovation-for-data-analysis. (details)
Aayush. (2023, October 29). Perplexity AI: Review, Advantages & Guide
(2023). Elegant Themes Blog.
https://www.elegantthemes.com/blog/business/perplexity-ai. (details)
Adams, C. E., Polzmacher, S., & Wolff, A. (2013). Systematic reviews: Work
that needs to be done and not to be done. Journal of Evidence-Based
Medicine, 6(4), 232–235. https://doi.org/10.1111/jebm.12072. (details)
Aguilera Cora, E., Lopezosa, C., & Codina, L. (2024). Scopus AI Beta: Functional analysis and cases. http://repositori.upf.edu/handle/10230/58658. (details)
Al-Zubidy, A., Carver, J. C., Hale, D. P., & Hassler, E. E. (2017). Vision for SLR
tooling infrastructure: Prioritizing value-added requirements.
Information and Software Technology, 91, 72–81.
https://doi.org/10.1016/j.infsof.2017.06.007. (details)
Antons, D., & Breidbach, C. (2017). Big data, big insights? Advancing service
innovation and design with machine learning. Journal of Service
Research, 21(1), 17–39. https://doi.org/10.1177/1094670517738373. (details)
Bax, L., Yu, L.-M., & Ikeda, N. (2007). A systematic comparison of software
dedicated to meta-analysis of causal studies. BMC Medical Research
Methodology, 7(1), 1–9. https://doi.org/10.1186/1471-2288-7-40. (details)
Berente, N., Seidel, S., & Safadi, H. (2019). Research commentary: Data-driven
computationally intensive theory development. Information Systems
Research, 30(1), 50–64. https://doi.org/10.1287/isre.2018.0774. (details)
Brocke, J., Simons, A., & Riemer, K. (2015). Standing on the shoulders of
giants: Challenges and recommendations of literature search in
information systems research. Communications of the Association for
Information Systems, 37(9), 205–224. (details)
Ciecierski-Holmes, T., Singh, R., Axt, M., Brenner, S., & Barteit, S. (2022).
Artificial intelligence for strengthening healthcare systems in low- and
middle-income countries: A systematic scoping review. Npj Digital
Medicine, 5(1), 1–13. https://doi.org/10.1038/s41746-022-00700-y. Available
from https://www.nature.com/articles/s41746-022-00700-y. (details)
Cram, W. A., Templier, M., & Pare, G. (2020). (Re)considering the Concept of
Literature Review Reproducibility. Journal of the Association for
Information Systems, 21(5), 1103–1114.
https://doi.org/10.17705/1jais.00630. (details)
Education Endowment Foundation, & Durham University. (2022). https://d2tic4wvo1iusb.cloudfront.net/production/documents/toolkit/MDE_CodingGuide_V3_March2022-1.pdf. (details)
Feynman AI. (n.d.). Feynman AI. Retrieved January 20, 2024, from
https://www.feynman.ai/. (details)
Haßler, B., Adam, T., Allier-Gagneur, Z., Blower, T., Brugha, M., Damani, K.,
Hennessy, S., Martin, K., Megha-Bongnkar, G., Murphy, M., Walker, H., &
Walker, H. (2021k). Methodology for literature reviews (Working Paper
No. 10). EdTech Hub. https://doi.org/10.53832/edtechhub.0002. Available
from https://docs.edtechhub.org/lib/2CKWI7RR. Available under
Creative Commons Attribution 4.0 International. (details)
Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S.,
Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019g).
Literature Reviews of Educational Technology Research in Low- and
Middle-Income Countries: An audit of the field (Working Paper No. 2).
EdTech Hub. https://doi.org/10.53832/edtechhub.0015. Available from
http://docs.edtechhub.org/lib/NM6CPLE9. Available under Creative
Commons Attribution 4.0 International. (details)
Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S.,
Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019p).
Keyword inventory (version 1) (Working Paper — Research Instrument
Nos. 08–1). EdTech Hub. https://doi.org/10.53832/edtechhub.0016.
Available from https://docs.edtechhub.org/lib/LSEETV6K. Available
under Creative Commons Attribution 4.0 International. (details)
Haßler, B., Adam, T., Brugha, M., Damani, K., Allier-Gagneur, Z., Hennessy, S., Hollow, D., Jordan, K., Martin, K., Murphy, M., & Walker, H. (2019h). Methodology for literature reviews
undertaken by the EdTech Hub (Working Paper No. 3). EdTech Hub.
https://doi.org/10.5281/zenodo.3352101. Available from
https://docs.edtechhub.org/lib/BMM3Z3CM. Available under Creative
Commons Attribution 4.0 International. (details)
Haßler, B., Haseloff, G., Adam, T., Akoojee, S., Allier-Gagneur, Z., Ayika, S.,
Bahloul, K., Kigwilu, P. C., Costa, D. D., Damani, K., Gordon, R., Idris, A.,
Iseje, F., Jjuuko, R., Kagambèga, A., Khalayleh, A., Konayuma, G.,
Kunwufine, D., Langat, K., … Winkler, E. (2020a). Technical and
Vocational Education and Training in Sub-Saharan Africa: A
Systematic Review of the Research Landscape (Berufsbildung in SSA).
VET Repository, Bundesinstitut für Berufsbildung, Bonn, Germany.
(details)
Haßler, B., Major, L., & Hennessy, S. (2016). Tablet use in schools: A critical
review of the evidence for learning outcomes. Journal of Computer
Assisted Learning, 32(2), 139–156. https://doi.org/10.1111/jcal.12123. (details)
Haßler, B., Mansour, H., Friese, L., & Longley, S. (2024). Disseminating the
Evidence and Outputs Generated by Your Programme: Three options
for setting up an evidence library (Helpdesk Response No. 178). EdTech
Hub. https://doi.org/10.53832/edtechhub.1001. Available from
https://docs.edtechhub.org/lib/PWN42VDQ. Available under Creative
Commons Attribution 4.0 International. (details)
Haßler, B., McBurnie, C., Walker, H., Klune, C., Huntington, B., & Bhutoria, A.
(2024). Protocol for a Systematic Review with Meta-Analysis:
Understanding Quality Characteristics of Edtech Interventions and
Implementation for Disadvantaged Pupils (Understanding Quality
Characteristics of EdTech Interventions and Implementation for
Disadvantaged Pupils No. 1). Open Development & Education.
https://doi.org/10.53832/opendeved.1077. Available from
https://docs.opendeved.net/lib/2I2GT22T. (details)
Haßler, B., McIntyre, N., Mitchell, J., Martin, K., Nourie, K., & Damani, K. (2020v). A scoping review of technology in education in LMICs — descriptive statistics and sample search results
Harrison, H., Griffin, S. J., Kuhn, I., & Usher-Smith, J. A. (2020). Software tools
to support title and abstract screening for systematic reviews in
healthcare: an evaluation. BMC Medical Research Methodology, 20(1),
7. https://doi.org/10.1186/s12874-020-0897-3. (details)
Hartling, L., Ospina, M., & Liang, Y. (2009). Risk of bias versus quality
assessment of randomised controlled trials: cross sectional study.
British Medical Journal, 339(1), 1–6. (details)
Higgins, J., & Green, S. (2008). Cochrane Handbook for Systematic Reviews
of Interventions. John Wiley & Sons, Ltd. (details)
King, R. D., Rowland, J., & Oliver, S. G. (2009). The automation of science.
Science, 324(5923), 85–89. https://doi.org/10.1126/science.1165620.
(details)
Kohl, C., McIntosh, E. J., Unger, S., Haddaway, N. R., Kecke, S., Schiemann, J.,
& Wilhelm, R. (2018). Online tools supporting the conduct and
reporting of systematic reviews and systematic maps: a case study on
CADIMA and review of existing tools. Environmental Evidence, 7(1), 8.
https://doi.org/10.1186/s13750-018-0115-5. (details)
Larsen, K., Hovorka, D., & Dennis, A. R. (2019). Understanding the elephant:
the discourse approach to boundary identification and corpus
construction for theory review articles. Journal of the Association for
Information Systems, 20(7), 887–928.
https://doi.org/10.17705/1jais.00556. (details)
Lemire, S., Peck, L. R., & Porowski, A. (2023). The evolution of systematic
evidence reviews: Past and future developments and their
implications for policy analysis. Politics & Policy, 51(3), 373–396.
https://doi.org/10.1111/polp.12532. (details)
Li, J., Larsen, K., & Abbasi, A. (2020). TheoryOn: a design framework and
system for unlocking behavioral knowledge through ontology
learning. MIS Quarterly, 44(4), 1733–1772.
https://doi.org/10.25300/MISQ/2020/15323. (details)
Nakadai, R., Nakawake, Y., & Shibasaki, S. (2023). AI language tools risk
scientific diversity and innovation. Nature Human Behaviour, 7(11),
1804–1805. https://doi.org/10.1038/s41562-023-01652-3. (details)
O’Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M., & Ananiadou, S. (2015).
Using text mining for study identification in systematic reviews: A
systematic review of current approaches. Systematic Reviews, 4(1), 5.
https://doi.org/10.1186/2046-4053-4-5. (details)
Papaioannou, D., Sutton, A., Carroll, C., Booth, A., & Wong, R. (2010).
Literature searching for social science systematic reviews:
Consideration of a range of search techniques. Health Information &
Libraries Journal, 27(2), 114–122.
https://doi.org/10.1111/j.1471-1842.2009.00863.x. (details)
Reason, T., Langham, J., Gimblett, A., Malcolm, B., & Klijn, S. (2023). Breaking
through limitations: Enhanced systematic literature reviews with large
language models. Population, 464, 25–0.
https://www.ispor.org/docs/default-source/euro2023/isporeurope23-rea
son--msr46poster30102023vfinal132992-pdf.pdf?sfvrsn=9cbf28b7_0.
(details)
Rebolledo Font de la Vall, R., & Gonzalez Araya, F. (2023). Exploring the
benefits and challenges of AI-language learning tools. International
Journal of Social Sciences and Humanities Invention, 10, 7569–7576.
https://doi.org/10.18535/ijsshi/v10i01.02. (details)
Rowe, F., Kanita, N., & Walsh, I. (2023). The importance of theoretical
positioning and the relevance of using bibliometrics for literature
reviews. Journal of Decision Systems, 1–16.
https://doi.org/10.1080/12460125.2023.2217646. (details)
Saeidmehr, A., Steel, P., & Samavati, F. (2023). Systematic Review using a
Spiral approach with Machine Learning.
https://doi.org/10.21203/rs.3.rs-2497596/v1. (details)
Sarin, G., Kumar, P., & Mukund, M. (2023). Text classification using deep
learning techniques: A bibliometric analysis and future research
Schmiedel, T., Müller, O., & Brocke, J. (2019). Topic modeling as a strategy of
inquiry in organizational research: A tutorial with an application
example on organizational culture. Organizational Research Methods,
22(4), 941–968. https://doi.org/10.1177/1094428118773858. (details)
Schryen, G., Wagner, G., Benlian, A., & Paré, G. (2020). A knowledge
development perspective on literature reviews: Validation of a new
typology in the IS field. Communications of the AIS, 46, 134–168.
https://ris.uni-paderborn.de/record/11946. (details)
Shao, Y., Jiang, Y., Kanell, T. A., Xu, P., Khattab, O., & Lam, M. S. (2024).
Assisting in Writing Wikipedia-like Articles From Scratch with Large
Language Models (No. arXiv:2402.14207). arXiv.
https://doi.org/10.48550/arXiv.2402.14207. (details)
Spillias, S., Tuohy, P., Andreotta, M., Annand-Jones, R., Boschetti, F.,
Cvitanovic, C., Duggan, J., Fulton, E., Karcher, D., & Paris, C. (2023).
Human-AI collaboration to identify literature for evidence synthesis.
https://doi.org/10.21203/rs.3.rs-3099291/v1. (details)
Sturm, B., & Sunyaev, A. (2019). Design principles for systematic search
systems: A holistic synthesis of a rigorous multi-cycle design science
research journey. Business & Information Systems Engineering, 61(1),
91–111. https://doi.org/10.1007/s12599-018-0569-6. (details)
Teijema, J. J., de Bruin, J., Bagheri, A., & van de Schoot, R. (2023). Large-scale
simulation study of active learning models for systematic reviews.
https://doi.org/10.31234/osf.io/2w3rm. (details)
van Dinter, R., Tekinerdogan, B., & Catal, C. (2021). Automation of systematic
literature reviews: A systematic literature review. Information and
Software Technology, 136, 106589.
https://doi.org/10.1016/j.infsof.2021.106589. (details)
van de Schoot, R., de Bruin, J., Schram, R., Zahedi, P., de Boer, J., Weijdema,
F., Kramer, B., Huijts, M., Hoogerwerf, M., Ferdinands, G., Harkema, A.,
Willemsen, J., Ma, Y., Fang, Q., Hindriks, S., Tummers, L., & Oberski, D. L.
(2021). An open source machine learning framework for efficient and
transparent systematic reviews. Nature Machine Intelligence, 3(2),
125–133. https://doi.org/10.1038/s42256-020-00287-7. (details)
Wagner, G., Lukyanenko, R., & Paré, G. (2022). Artificial intelligence and the
conduct of literature reviews. Journal of Information Technology, 37(2),
209–226. https://doi.org/10.1177/02683962211048201. (details)
Wang, Z., Nayfeh, T., Tetzlaff, J., O’Blenis, P., & Murad, M. H. (2020). Error rates
of human reviewers during abstract screening in systematic reviews.
PLOS ONE, 15(1), e0227742. https://doi.org/10.1371/journal.pone.0227742.
(details)
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J.-B.,
Yuan, J., & Li, Y. (2021). A review of Artificial Intelligence (AI) in education
from 2010 to 2020. Complexity, 2021, e8812542.
https://doi.org/10.1155/2021/8812542. (details)
⁅bibliography:end⁆