Review
Generative AI and Higher Education: Trends, Challenges,
and Future Directions from a Systematic Literature Review
João Batista 1,2,*, Anabela Mesquita 3 and Gonçalo Carnaz 1,4
Abstract: (1) Background: The development of generative artificial intelligence (GAI) is transforming
higher education. This systematic literature review synthesizes recent empirical studies on the
use of GAI, focusing on its impact on teaching, learning, and institutional practices. (2) Methods:
Following PRISMA guidelines, a comprehensive search strategy was employed to locate scientific
articles on GAI in higher education published by Scopus and Web of Science between January 2023
and January 2024. (3) Results: The search identified 102 articles, with 37 meeting the inclusion criteria.
These studies were grouped into three themes: the application of GAI technologies, stakeholder
acceptance and perceptions, and specific use situations. (4) Discussion: Key findings include GAI’s
versatility and potential use, student acceptance, and educational enhancement. However, challenges
such as assessment practices, institutional strategies, and risks to academic integrity were also
noted. (5) Conclusions: The findings help identify potential directions for future research, including
assessment integrity and pedagogical strategies, ethical considerations and policy development, the
impact on teaching and learning processes, the perceptions of students and instructors, technological
advancements, and the preparation of future skills and workforce readiness. The study has certain
limitations, particularly the short time frame and the search criteria, which might have differed had
the review been conducted by different researchers.
Citation: Batista, J.; Mesquita, A.;
Carnaz, G. Generative AI and Higher
Keywords: generative artificial intelligence; higher education; systematic literature review; PRISMA;
Education: Trends, Challenges, and
Future Directions from a Systematic
ChatGPT; academic integrity; educational technology
Literature Review. Information 2024,
15, 676. https://doi.org/
10.3390/info15110676
1. Introduction
The growing dominance of generative artificial intelligence (GAI) has led to significant changes in higher education (HE), prompting extensive research into its consequences. This development signifies a profound transformation, with GAI’s capabilities being integrated into personalized learning experiences, enhancing faculty skills, and increasing student engagement through innovative tools and technological interfaces. Understanding this process is crucial for two main reasons: it impacts the dynamics of teaching within the educational environment and necessitates a reassessment of academic approaches to equip students with the necessary tools for a future where artificial intelligence (AI) is ubiquitous. Additionally, this evolution underscores the need to rethink and reinvent educational institutions, along with the core competencies that students must develop as they increasingly utilize these technologies.

AI and GAI, although sharing a common objective, cannot be understood as identical concepts. Marvin Minsky defined AI as “the science of getting machines to do things that would require intelligence if done by humans” (Minsky, 1985) [1]. This broad definition encompasses various fields that aim to mimic human behavior through technology or
methods. GAI, for instance, includes systems designed to generate content such as text,
images, videos, music, computer code, or combinations of different types of content [2].
These systems utilize machine learning techniques, a subset of AI, to train models on input
data, enabling them to perform specific tasks.
To grasp the significance of AI in HE, it is crucial to examine the growing academic
interest at the intersection of these two fields. In the past two years (2022–2023), there has
been a marked increase in scholarly focus on this convergence, as demonstrated by the rising
number of articles indexed in the Scopus and Web of Science (WoS) databases. This trend is
supported by systematic evaluations of AI’s use in formal higher education. For example,
studies by Bond et al. [3] and Crompton and Burke [4] provide a comprehensive analysis of
138 publications selected from a pool of 371 prospective studies conducted between 2016
and 2022. This increase highlights the expanding academic discussion, emphasizing the
analysis and prediction of individual behaviors, intelligent teaching systems, evaluation
processes, and flexible customization within the higher education context (op. cit.).
The importance of systematic literature reviews (SLRs) in this rapidly evolving disci-
pline cannot be overstated. SLRs enable the synthesis of extensive research into aggregated
knowledge, providing clear and practical conclusions. By employing well-established
procedures such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses
(PRISMA), researchers ensure the comprehensive inclusion of all relevant studies while
maintaining the integrity of the synthesis process. This method enhances the reliability and
reproducibility of findings, thereby establishing a solid foundation for future research and
the development of institutional policies [3,4].
While some studies explore the use of GAI in higher education, there are few articles
that provide a systematic and comprehensive literature review on this topic. Additionally,
existing reviews generally cover the period up to 2022. Given the significant advancements
in AI, particularly GAI, over the past two years, it is crucial to investigate how this technol-
ogy is shaping higher education and to identify the challenges faced by lecturers, students,
and organizations.
Therefore, the main objective of this research was to conduct a systematic review of
the empirical scientific literature on the use of GAI in HE published in the last two years.
Selected articles were analyzed based on the main problems addressed, the research ques-
tions and objectives pursued, the methodologies employed, and the main results obtained.
The Research Onion model, developed by Saunders, Lewis, and Thornhill [5], was used to
analyze methodologies. The review adhered to the PRISMA methodology [6], and articles
were identified and collected using the Scopus and WoS indexing databases.
The paper is structured as follows. Next, we detail the methods used in the selection
and revision of the papers, including the inclusion and exclusion criteria. Then, we provide
a brief description of the paper’s content, including the categories in which this content
can be grouped (topics and methodologies used). This is followed by a discussion of the
results and a proposal for a future research agenda. The paper ends with a conclusion and
the presentation of the limitations of this research.
2. Methods
The research utilized an SLR methodology, which involved a series of structured steps:
planning (defining the research questions), conducting (executing the literature search,
selecting studies, and synthesizing data), and reporting (writing the report). This process
adhered to the PRISMA guidelines as outlined by Page et al. [6].
During the planning phase, we formulated a research question (RQ) based on the
background provided in Section 1:
RQ: What are the main problems, research questions, objectives pursued, methodolo-
gies employed, and key findings obtained in studies on GAI in HE conducted between
2023 and 2024?
The subsequent step involved identifying the search strategy, study selection, and
data synthesis. The search strategy included the selection of search terms, the literature resources, and the overall search process. Deriving the research question aided in
defining the specific search terms. For the eligibility criteria—comprising the inclusion and
exclusion criteria for the review and the method of grouping studies for synthesis—we
opted to include only articles that describe scientific empirical research on the use of GAI
in higher education. In this context, we define empirical research as investigations in which
researchers collect data to provide rigorous and objective answers to research questions
and hypotheses. This approach intentionally excluded articles based solely on opinions,
theories, or speculative beliefs to ensure a foundation of concrete evidence. We decided to
use Scopus and WoS as our databases, with the search being conducted in January 2024.
The next step was to identify synonyms for the search strings.
The search restrictions considered in Scopus were as follows: title, abstract, and key-
words; period: since 1 January 2023; document type: article; source type: journal; language:
English; publication stage: final and article in press. The search equation used was as
follows: TITLE-ABS-KEY ((“higher education” OR “university” OR “college” OR “HE”
OR “HEI” OR “higher education institution”) AND (“generative artificial intelligence” OR
“generative ai” OR “GENAI” OR “gai”)) AND PUBYEAR > 2022 AND PUBYEAR < 2025
AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (SRCTYPE, “j”)) AND (LIMIT-TO
(PUBSTAGE, “final”) OR LIMIT-TO (PUBSTAGE, “aip”)) AND (LIMIT-TO (LANGUAGE,
“English”)). As a result, we gathered 91 articles, of which only 87 were available. The search
results were documented, and the articles were extracted for further analysis.
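For readers who wish to re-run the query programmatically rather than through the database web interface used here, the following minimal Python sketch shows how the same search string could be submitted to the Elsevier Scopus Search API. It is an illustration only, not the procedure followed in this study; the API key, the omission of the LIMIT-TO refinements (document type, source type, language, publication stage), and the handling of only a single result page are all simplifying assumptions.

```python
# Hedged sketch: one way to issue the Scopus query via the Scopus Search API.
# The study itself used the database interfaces; this is for illustration only.
import requests

SCOPUS_URL = "https://api.elsevier.com/content/search/scopus"
QUERY = (
    'TITLE-ABS-KEY(("higher education" OR "university" OR "college" OR "HE" OR "HEI" '
    'OR "higher education institution") AND ("generative artificial intelligence" '
    'OR "generative ai" OR "GENAI" OR "gai")) AND PUBYEAR > 2022 AND PUBYEAR < 2025'
)

def search_scopus(api_key: str, query: str = QUERY) -> list[dict]:
    """Return one page of raw search results as a list of entry dictionaries."""
    response = requests.get(
        SCOPUS_URL,
        params={"query": query, "apiKey": api_key, "httpAccept": "application/json"},
    )
    response.raise_for_status()
    return response.json()["search-results"]["entry"]
```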
The search restrictions in the WoS were as follows: search by topic, including title,
abstract, and keywords; period: since 1 January 2023; document type: article; language:
English; publication stage: published within the specified period. The search equation used
was as follows: TITLE-ABS-KEY ((“higher education” OR “university” OR “college” OR
“HE” OR “HEI” OR “higher education institution”) AND (“generative artificial intelligence”
OR “generative ai” OR “GENAI” OR “gai”)), with the previously outlined restrictions.
As a result, we collected 61 articles. Eight of these articles were unavailable. One article was excluded because, although its title was in English, the article itself was written in Portuguese. Thus, we considered a total of 52 articles. The search results were documented,
and the articles were extracted for further analysis.
The entire process was initially tested by the three researchers, with the final procedure implemented by one of them. All the articles were compiled into an Excel sheet, where duplicates were identified and removed. This resulted in a final list of 102 articles.
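As a minimal illustration of this deduplication step, the sketch below merges two hypothetical database exports and drops records whose normalized titles coincide. It is not the authors' actual workflow; the file names and the "Title" column are assumptions about how Scopus and WoS exports are structured.

```python
# Hedged sketch of merging the Scopus and WoS exports and removing duplicates.
# File names and the "Title" column are assumptions about the export format.
import pandas as pd

def merge_and_deduplicate(scopus_path: str, wos_path: str) -> pd.DataFrame:
    combined = pd.concat(
        [pd.read_excel(scopus_path), pd.read_excel(wos_path)], ignore_index=True
    )
    # Normalize titles so case or punctuation differences do not hide duplicates.
    title_key = combined["Title"].str.lower().str.replace(r"[^a-z0-9]", "", regex=True)
    return combined.loc[~title_key.duplicated(keep="first")]

# Example (hypothetical file names):
# articles = merge_and_deduplicate("scopus_export.xlsx", "wos_export.xlsx")
# print(len(articles))  # 102 articles remained after deduplication in this study
```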
The next step involved selecting the articles. The complete list was divided into three
groups, with each group assigned to a different researcher. Each researcher reviewed their
assigned articles, evaluating whether the keywords aligned with the search criteria and
whether each article included empirical research. This evaluation was based on the abstract
and, if necessary, the full article. For each article, the researcher provided one of three possible responses based on these items: “Yes” for articles the researcher was certain to include, “No” for those clearly to exclude, or “Yes/No” for cases where the researcher was unsure whether to include or exclude the article.
Articles marked as “Yes/No” were redistributed among the researchers for a second
review to ensure that any initial uncertainties were resolved. The first researcher encoun-
tered 7 “Yes/No” cases, of which 2 were ultimately marked as “Yes” and 5 as “No” after
the second review. Thus, out of the 30 articles initially assigned to researcher 1, 15 were
included (“Yes”) in the final set of articles for review. For the second researcher, 8 articles
were marked as “Yes/No”, of which 7 were marked as “No” following the second review,
and 1 article, which was not available in Scopus or WoS, was excluded. Therefore, out of the
31 articles initially assigned to this researcher, 7 were included (“Yes”) in the final set. Lastly,
for researcher 3, 16 articles were marked as “Yes/No”, all of which were considered “No”
after the second review. As a result, out of the 41 articles initially assigned to researcher 3,
15 were included (“Yes”) in the final set of articles for review.
At the end of this process, 37 articles were selected and 65 were excluded (see Table 1
and Figure 1). These 37 articles were ultimately marked as “Yes” and form the basis of this SLR
(see Table 2 for the complete list of references).
Figure 1. A PRISMA 2020 flow diagram illustrating the selection process of studies, using the template provided [6].
Table 1. Results of the article selection process.

                                              Articles   Yes   No   Yes/No   Not directly available from Scopus or WoS
Researcher 1                                  30         13    10   7
Yes/No cases: second opinion (researcher 3)              2     5
Table 2. The complete list of articles included in the review.

Articles
(Alexander, Savvidou, and Alexander, 2023) [7]
(Al-Zahrani, 2023) [8]
(Barrett and Pack, 2023) [9]
(Chan and Hu, 2023) [10]
(Chan and Lee, 2023) [11]
(Chan and Zhou, 2023) [12]
(Chan, 2023) [13]
(Chen, Zhuo, and Lin, 2023) [14]
(Chergarova, Tomeo, Provost, De la Peña, Ulloa, and Miranda, 2023) [15]
(Chiu, 2024) [16]
(Currie and Barry, 2023) [17]
(De Paoli, 2023) [18]
(Duong, Vu, and Ngo, 2023) [19]
(Elkhodr, Gide, Wu, and Darwish, 2023) [20]
(Escalante, Pack, and Barrett, 2023) [21]
(Essel, Vlachopoulos, Essuman, and Amankwa, 2024) [22]
(Farazouli, Cerratto-Pargman, Bolander-Laksov, and McGrath, 2023) [23]
(French, Levi, Maczo, Simonaityte, Triantafyllidis, and Varda, 2023) [24]
(Greiner, Peisl, Höpfl, and Beese, 2023) [25]
(Hammond, Lucas, Hassouna, and Brown, 2023) [26]
(Hassoulas, Powell, Roberts, Umla-Runge, Gray, and Coffey, 2023) [27]
(Jaboob, Hazaimeh, and Al-Ansi, 2024) [28]
(Kelly, Sullivan, and Strampel, 2023) [29]
(Laker and Sena, 2023) [30]
(Lopezosa, Codina, Pont-Sorribes, and Vállez, 2023) [31]
(Michel-Villarreal, Vilalta-Perdomo, Salinas-Navarro, Thierry-Aguilera, and Gerardou, 2023) [32]
(Nikolic et al., 2023) [33]
(Perkins, Roe, Postma, McGaughran, and Hickerson, 2024) [34]
(Popovici, 2023) [35]
(Rose, Massey, Marshall, and Cardon, 2023) [36]
(Shimizu et al., 2023) [37]
(Singh, 2023) [38]
(Strzelecki and ElArabawy, 2024) [39]
(Walczak and Cellary, 2023) [40]
(Watermeyer, Phipps, Lanclos, and Knight, 2023) [41]
(Yilmaz and Karaoglan Yilmaz, 2023) [42]
(Yilmaz, Yilmaz, and Ceylan, 2023) [43]
These articles were published in 25 different journals, with only 7 journals featuring
more than one article (Table 3).
Table 3. Journals featuring more than one of the selected articles.

Journal n
International Journal of Educational Technology in Higher Education 4
Computers and Education: Artificial Intelligence 3
International Journal of Human-Computer Interaction 3
Issues in Information Systems 3
Education Sciences 2
Journal of University Teaching and Learning Practice 2
Smart Learning Environments 2
The 37 articles were written by 119 different authors, of whom only 5 appear as authors
on more than one article (Table 4).
Table 4. Authors appearing on more than one of the selected articles.

Authors n
Chan, C.K.Y. 4
Barrett, A. 2
Pack, A. 2
Yilmaz, F.G.K. 2
Yilmaz, R. 2
Table 5 shows the geographical origins of the authors of the selected studies, based on
the affiliations provided in the articles. The USA (6 authors), Hong Kong (5 authors), the
UK (5 authors), and Australia (4 authors) stand out as the predominant countries of origin
for the authors. The remaining countries are each represented by only 1 or 2 authors. It
should be noted that some articles have authors from more than one country.
Table 5. The geographical origin of study authors based on their affiliations, considering that some
articles have authors from more than one country.
Country n
USA 6
Hong Kong 5
UK 5
Australia 4
Poland 2
Turkey 2
Vietnam 2
China 1
Cyprus 1
Egypt 1
Germany 1
Ghana 1
Ireland 1
Japan 1
Jordan 1
Mexico 1
Netherlands 1
New Zealand 1
Oman 1
Romania 1
Saudi Arabia 1
Singapore 1
South Africa 1
Spain 1
Sweden 1
Taiwan 1
Yemen 1
Figure 2 presents a word cloud generated from the abstracts of the 37 selected articles. The statistical analysis of the words in this word cloud shows that some words occurred with a significantly high frequency, such as “Student” (n = 106), “AI” (n = 102), and “Educator” (n = 91). Figure 2 and Table 6 present the words that occurred at least 25 times in the set of 37 abstracts.
Figure 2. A word cloud generated from the abstracts of the 37 selected articles, highlighting the most frequently occurring words, with words that occurred at least 25 times.
Table 6. Words that occur at least 25 times in the abstracts of the 37 selected articles.
Word n
Student 106
AI 102
Educator 91
Use 88
ChatGPT 81
Study 65
General 62
Tool 58
Learn 55
Higher 53
Research 49
Academic 40
Technology 36
Assess 35
GenAI 32
Integrity 29
Impact 29
Intelligent 28
University 28
Result 28
Artificial 27
GAI 27
Find 26
Potential 25
Model 25
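The counts in Table 6 can be reproduced with a short script. The sketch below is a minimal illustration rather than the authors' actual analysis: it assumes the 37 abstracts are available as plain strings and that word forms have already been normalized (for example, stemmed so that “learning” and “learner” both count towards “Learn”, as the table suggests).

```python
# Hedged sketch of the word-frequency count behind Figure 2 and Table 6.
# Assumes the abstracts are plain strings that have already been normalized.
import re
from collections import Counter

def word_frequencies(abstracts: list[str], min_count: int = 25) -> list[tuple[str, int]]:
    """Count word occurrences across all abstracts and keep the frequent ones."""
    counts = Counter()
    for text in abstracts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return [(word, n) for word, n in counts.most_common() if n >= min_count]

# Example (hypothetical input):
# abstracts = load_abstracts("selected_articles.xlsx")  # hypothetical helper
# for word, n in word_frequencies(abstracts):
#     print(f"{word}\t{n}")
```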
Each researcher independently conducted a grounded theory exercise using all the
previously gathered information, including the articles themselves, to identify potential
categories for each article. After this individual analysis, the three researchers combined
their findings, resulting in the categorization and distribution of articles presented in
Table 7. We established three main categories for the 37 selected studies: the use of GAI,
stakeholder acceptance and perceptions, and specific tasks and activities. While some
overlap exists—since stakeholders’ use of GAI often involves tasks like content analysis
and content generation—this structure was chosen to capture distinct aspects of GAI’s
impact on higher education. The aim of this process was to provide a comprehensive view
of the various dimensions of GAI applications and their interactions across different areas.
Multiple efforts were made to minimize the risk of bias. The procedures of this investi-
gation were thoroughly described and documented to ensure accurate reproducibility of
the study. The three researchers conducted the procedures, with certain steps performed
independently. The results were then compared and reassessed as needed.
Category A encompasses all studies that focus on the use of GAI technology as the
core of the research. This category contains papers describing research on ChatGPT (sub-
category A.1) and those addressing other technologies (sub-category A.2). Category B
covers papers that examine the acceptance and perception of GAI from the perspective
of different stakeholders, such as students, teachers, researchers, and higher education
institutions. Here, the emphasis is on people rather than technology. Category C consists
of studies that focus on specific tasks or activities, rather than on technology or people.
These tasks include assessment, writing, content analysis, content generation, academic
integrity, and feedback. It is important to note that GAI is a transversal aspect that unites
all this research. This means that, in some cases, although a paper focuses on a particular
stakeholder or activity, the technology factor is still present. However, the categorization
was based on the core focus of each paper, even though technology is a common factor
among them. Finally, a fourth category was added to encompass the methodology.
3. Results
The findings from the data synthesis are aimed at answering the research question
(RQ) and are based on 37 papers, categorized and subcategorized as shown in Table 7 (see
the previous section). These papers are divided into three main categories, as previously
mentioned. The results are presented by category in the following paragraphs.
performance in programming tasks and code review. The results indicated that “ChatGPT
as a student would receive an approximate score of 7 out of a maximum of 10. Nonetheless,
43% of the accurate solutions provided by ChatGPT are either inefficient or comprise of
code that is incomprehensible for the average student” (p. 2). These findings highlight both
the potential and limitations of utilizing ChatGPT in programming tasks and code review.
Elkhodr, Gide, Wu, and Darwish [20] examined the use of ChatGPT in another HE
context, specifically within ICT education. The study aimed to “examine the effectiveness
of ChatGPT as an assistive technology at both undergraduate (UG) and postgraduate (PG)
ICT levels” (p. 71), and three case studies were conducted with students. In each case
study, students were divided into two groups: one group was permitted to use ChatGPT,
while the other was not. Subsequently, the groups were interchanged so that each group
of students performed the same tasks with and without the assistance of ChatGPT, and
they were asked to reflect on their experiences (p. 72). The results indicated that students
responded positively to the use of ChatGPT, considering it to be a valuable resource that
they would like to continue using in the future.
Duong, Vu, and Ngo [19] describe a study in which a modified version of the Technology
Acceptance Model (TAM) was used “to explain how effort and performance expectancies
affect higher education students’ intentions and behaviors to use ChatGPT for learning, as
well as the moderation effect of knowledge sharing on their ChatGPT-adopted intentions
and behaviors” (p. 3). The results of the study show that student behavior is influenced
by both effort expectancy and performance expectancy, which is evident in their use of
ChatGPT for learning purposes (p. 13).
suggest that these aspects be considered in curriculum reform, advocating for an adaptive
educational approach.
Another set of articles [13,40,41] examined the impact of GAI on HE at a macro
scale, focusing on policy development, institutional strategies, and broader curricular
transformations. Walczak and Cellary [40] specifically explored “the advantages and
potential threats of using GAI in education and necessary changes in curricula” as well
as discussing “the need to foster digital literacy and the ethical use of AI” (p. 71). A
survey conducted among students revealed that the majority believed “students should be
encouraged and taught how to use AI” (p. 90). The article provides a thematic analysis of existing challenges and opportunities for HE institutions. The authors acknowledge the impact that the introduction of GAI has on the world of work, raising questions about the future nature of work and how to prepare students for this reality, and emphasizing that human performance is crucial to avoid “significant consequences of incorrect answers made by AI” (p. 92). Among its main conclusions and recommendations, the study highlights the ethical concerns in using GAI tools and the need to critically assess the content they produce.
Watermeyer, Phipps, Lanclos, and Knight [41] also raised concerns about the labor
market, specifically regarding academic labor. Their work examines how GAI tools are
transforming scholarly work, how these tools aim to alleviate the pressures inherent in the
academic environment, and the implications for the future of the academic profession. The
authors found that the uncritical use of GAI tools has significant consequences, making
academics “less inquisitive, less reflexive, and more narrow and shallow scholars” (p. 14).
This introduces new institutional challenges for the future of their academic endeavors.
Chan [13] focused on developing a framework for policies regarding the use of AI in
HE. A survey was conducted among students, teachers, and staff members which included
both quantitative and qualitative components. The results indicate that, according to the
respondents, there are several aspects arising from the use of AI technologies, such as Chat-
GPT. For example, the importance of integrating AI into the teaching and learning process is
recognized, although there is still little accumulated experience with this use. Additionally,
there is “strong agreement that institutions should have plans in place associated with AI
technologies” (p. 9). Furthermore, there is no particularly strong opinion about the future of
teachers, specifically regarding the possibility that “AI technologies would replace teachers”
(p. 9). These and other results justify the need for higher education institutions to develop
AI usage policies. The authors also highlight several “implications and suggestions” that
should be considered in these policies, including areas such as “training”, “ethical use and
risk management”, and “fostering a transparent AI environment”, among others (p. 12).
A third group of articles [10,16,28] addressed how GAI affects learning processes,
student engagement, and the overall educational experience from the students’ perspectives.
Chiu [16] focused on the students’ perspectives, as reflected in the research question: “From
the perspective of students, how do GAI transform learning outcomes, pedagogies and
assessment in higher education?” (p. 4). Based on data collected from students, the study
presents a wide range of results grouped into the three areas mentioned in the research
question: learning outcomes, pedagogies, and assessment. It also presents implications for
practices and policy development organized according to these three areas. Generally, the
study suggests the need for higher education to evolve to incorporate the changes arising
from AI development, offering a set of recommendations in this regard. It also shows that
“students are motivated by the prospect of future employment and desire to develop the
skills required for GAI-powered jobs” (p. 8).
The perspective of students is also explored in Jaboob, Hazaimeh, and Al-Ansi [28],
specifically through data collection from students in three Arab countries. The study aimed
“to investigate the effects of generative AI techniques and applications on students’ cogni-
tive achievement through student behavior” (p. 1). Various hypotheses were established
that relate GAI techniques and GAI applications to their impacts on student behavior
and students’ cognitive achievement. The results show that GAI techniques and GAI
applications positively impact student behavior and students’ cognitive achievement (p.
The following study [8] examined the impact of GAI tools on researchers and research related to higher education in Saudi Arabia. The results show that participants have positive attitudes and high awareness of GAI in research, recognizing the potential of these tools to transform academic research. However, the importance of adequate training, support, and guidance in the ethical use of GAI emerged as a significant concern, underlining the participants’ commitment to responsible research practices and the need to address the potential biases associated with using these tools.
(APT). The aim was to identify the appropriate and inappropriate ways that these discourses
are deployed. The competing discourses were conceptualized using the metaphorical
representation of the dichotomy between a sheep and a wolf. Additionally, the metaphor
of educators acting as shepherds was employed to illustrate how students may become
aware of the claims presented on the APT websites and develop critical language awareness
when exposed to such content. Educators can assist students in this regard by acquiring
an understanding of how these websites use language to persuade users to circumvent
learning activities.
The article by Kelly, Sullivan, and Strampel [29] provides a novel foundation for
enhancing our understanding of how these tools may affect students as they engage
in academic pursuits at the university level. The authors observed “that students had
relatively low knowledge, experience, and confidence with using GAI”. Additionally, the
rapid advent of these resources in late 2022 and early 2023 meant that many students were
initially unaware of their existence. The limited timeframe precluded academic teaching
staff from considering the emerging challenges and risks associated with GAI and how to
incorporate these tools into their teaching and learning practices. The findings indicated
that students’ self-assessed proficiency in utilizing GAI ethically increases with experience.
It is notable that students are more likely to learn about GAI through social media.
The study by Laker and Sena [30] provides a foundation for future research on the sig-
nificant impact that AI will have on HE in the coming years. The integration of GAI models,
such as ChatGPT, in higher education—particularly in the field of business analytics—offers
both potential advantages and inherent limitations. AI has the potential to significantly
enhance the learning experience of students by providing code generation and step-by-step
instructions for complex tasks. However, it also raises concerns about academic dishonesty,
impedes the development of foundational skills, and brings up ethical considerations. The
authors obtained insights into the accuracy of the generated content and the potential for
detecting its use by students. The study indicated that ChatGPT can offer accurate solutions
to certain types of assessments, including straightforward Python quizzes and introductory
linear programming problems. It also illustrates how instructors can identify instances
where students have used AI tools to assist with their learning, despite explicit instructions
not to do so.
Two studies covered, specifically, the topic of academic integrity/plagiarism. The
study by Perkins, Roe, Postma, McGaughran, and Hickerson [34] examined the effective-
ness of academic staff utilizing the Turnitin artificial intelligence (AI) detection tool to
identify AI-generated content in university assessments. Experimental submissions were
created using ChatGPT, employing prompting techniques to minimize the likelihood of
detection by AI content detectors. The results indicated that Turnitin’s AI detection tool has
the potential to support academic staff in detecting AI-generated content. However, the
relatively low detection accuracy among participants suggests a need for further training
and awareness. The findings demonstrate that the Turnitin AI detection tool is not particu-
larly robust against the use of these adversarial techniques, raising questions regarding the
ongoing development and effectiveness of AI detection software.
The aim of the article by Currie and Barry [17] was to analyze the growing challenge of
academic integrity in the context of AI algorithms, such as the GPT 3.5-powered ChatGPT
chatbot. This issue is particularly evident in nuclear medicine training, which has been
impacted by these new technologies. The chatbot “has emerged as an immediate threat to
academic and scientific writing” (p. 247). The authors concluded that there is a “limited
generative capability to assist student” (p. 253) and noted “limitations on depth of insight,
breadth of research, and currency of information” (p. 253). Similarly, the use of inadequate
written assessment tasks can potentially increase the risk of academic misconduct among
students. Although ChatGPT can generate examination answers in real-time, its perfor-
mance is constrained by the superficial nature of the evidence of learning produced by its
responses. These limitations, which reduce the risk of students benefiting from cheating,
also limit ChatGPT’s potential for improving learning and writing skills.
Scheme 2. Distribution of papers by research strategies.

The final layer analyzed was the data collection and analysis methods, which are crucial for understanding how empirical data are gathered. As shown in Scheme 3, the most used methods are surveys and questionnaires, followed by experimental methods and then interviews and focus groups.
• Surveys and questionnaires: according to the analyzed papers, 12 studies gathered quantitative data from broad participant groups. Examples include Elkhodr, Gide, Wu, and Darwish [20] and Perkins, Roe, Postma, McGaughran, and Hickerson [34].
• Experimental methods: central to 11 studies, these methods tested specific hypotheses under controlled conditions. Examples include Currie and Barry [17] and Al-Zahrani [8].
• Interviews and focus groups: seven studies collected qualitative data. Examples include Singh [38] and Farazouli, Cerratto-Pargman, Bolander-Laksov, and McGrath [23].

Scheme 3. Distribution of papers by research strategies.

The comprehensive classification of the 37 papers reveals trends in the methodological choices made by the researchers. Survey research predominates, being used in 12 studies, compared to experimental research (9), case studies (6), and qualitative approaches (7). This distribution underscores a preference for surveys, likely due to several advantages, such as the ability to generalize findings across a larger population. Additionally, surveys are well suited for exploratory research aimed at gauging perceptions, attitudes, and behaviors towards GAI technologies. Conversely, while experimental research offers the advantage of isolating variables to establish causal relationships, it was not employed as frequently as it could have been. This may be due to the logistical complexities and higher costs associated with conducting such studies.

The diversity in data collection and analysis methods used across these papers highlights the varying research priorities and objectives. While surveys and questionnaires dominate, ensuring broad coverage and ease of analysis, methods like interviews and focus groups are invaluable for their depth. These qualitative tools are essential for exploring nuances that surveys might overlook.

3.4.2. Thing Ethnography: Adapting Ethnographic Methods for Contemporary Challenges—Analysis of a Case

Qualitative research approaches are continually adapting to address emerging challenges and leverage novel technologies. Thing ethnography exemplifies this transformation by modifying conventional ethnographic techniques to accommodate contemporary limitations, such as restricted access and the need for rapid data gathering. A distinguishing feature of this methodology, especially in its latest implementations, is the integration of artificial intelligence tools, such as ChatGPT, into the ethnographic interview process. This innovative methodology allows researchers to incorporate AI as part of the ethnographic method, providing a distinct perspective on data collection in the digital technology era. Due to this novelty, we decided to analyze it in a deeper way.

Thing ethnography is a more efficient iteration of classic ethnography, aiming to collect cultural and social knowledge without requiring extensive on-site presence. This approach
4. Discussion
4.1. Discussion of Results
The integration of GAI in HE has been studied across various dimensions, revealing its
multifaceted impact on educational practices, stakeholders, and activities. This discussion
synthesizes findings from the 37 selected articles, grouped into three categories: focus on
the technology, focus on the stakeholders, and focus on the activities, as outlined in Table 7.
GAI tools were seen as transformative, with significant ethical considerations regarding
their use [31]. In programming education, GAI tools enhanced coding skills and crit-
ical thinking, though their impact on motivation varied depending on the challenge’s
complexity [42]. In medical education, the need for curriculum reform and professional
development to address ethical concerns and improve teaching and learning processes was
emphasized [37]. At the institutional level, GAI tools were recognized for their potential
to drive policy development, digital literacy, and ethical AI use, though concerns were
raised about their possible negative impact on academic labor, such as making scholars less
inquisitive and reflexive [40,41].
Students’ perspectives on GAI technology further enrich this narrative. Research
indicated that students were generally familiar with and positively inclined towards GAI
tools, although they expressed concerns about over-reliance and social interaction lim-
itations. These studies suggested that higher education must evolve to incorporate AI-
driven changes, focusing on preparing students for future employment in AI-powered
jobs [10,16,28].
This discussion reflects the multifaceted impact and interconnection of GAI in HE:
1. Versatility and potential: GAI tools like ChatGPT demonstrate significant potential
across various disciplines, enhancing student support, teaching efficiency, and re-
search productivity. They offer innovative learning experiences and assist in routine
educational tasks, thereby freeing up valuable time for educators to focus on complex
teaching and research activities (e.g., [24,32]).
2. Assessment challenges: the use of ChatGPT in educational settings raises concerns
about assessment integrity. Studies have shown that ChatGPT can generate passable
responses to assessment questions, prompting the need for reevaluating traditional
assessment strategies to maintain academic standards (e.g., [33]).
3. Broader impact: beyond specific applications like ChatGPT, GAI tools have broad
applicability and impact across different academic disciplines, including journalism,
programming, and medical education. These tools are recognized for their transfor-
mative potential, though ethical considerations and the need for curriculum reform
are essential (e.g., [31,37,42]).
institutional strategies are crucial for fostering a supportive environment for AI adoption
and addressing potential ethical concerns [11].
Regarding the stakeholders, the main conclusions can be summarized as follows:
1. Student acceptance: the acceptance of GAI tools among students is influenced by fac-
tors such as performance expectancy, effort expectancy, and social influence. Studies
indicate that the user-friendliness and multilingual capabilities of tools like Chat-
GPT enhance their acceptance. Effective promotion and support from educators and
administrators are crucial (e.g., [39,43]).
2. Instructor perceptions: instructors recognize the practical implications of integrating
GAI tools. Key determinants of impact include the overall quality and customization
of these tools. Continuous optimization, timely feedback, and responsible imple-
mentation are essential for maximizing benefits and addressing potential challenges
(e.g., [14,15]).
3. Institutional strategies: higher education institutions need to develop comprehensive
plans for AI usage, incorporating ethical guidelines and risk management strategies.
Institutional support is vital for fostering a positive environment for AI adoption and
addressing concerns about academic labor and ethical use (e.g., [13,41]).
Figure 3. A summary of the main findings: technology (versatility and potential, assessment challenges, broader impact); stakeholders (student acceptance, instructor perceptions, institutional strategies); activities (academic integrity, educational enhancement, feedback and assessment).
4.2. Contributions in Relation to Other Systematic Reviews

Although the primary study covers the period from 2023 to January 2024, an additional search of the databases was conducted before finalizing the analysis. This search identified three relevant studies which provided a basis for comparing our results with existing findings, highlighting areas of alignment and divergence [45–47].

While our study examined GAI in general and its use in higher education, Filippi and Motyl [47] offer a systematic review, including the presentation of inclusion and exclusion criteria and subsequent procedures, specifically focusing on the adoption of LLMs in engineering education. Their research provides insights into how LLMs can be helpful across fields such as mechanical, software, and chemical engineering, confirming a positive impact on student learning, like our findings. Moreover, Filippi and Motyl [47] emphasize that the best results occur when LLMs are used as a complementary tool to traditional learning methods. Students performed better when they did not rely solely on LLMs, supporting our own caution against over-reliance. Finally, regarding the impact on critical thinking, Filippi and Motyl [47] underscore the importance of integrating LLMs in a way that promotes, rather than diminishes, critical thinking skills, reflecting our concerns.

Baig and Yadegaridehkordi [45] cover the use of ChatGPT in higher education and its influence on educational processes, including its limitations and the need for continuous improvement. Like the present study, their research addresses the post-adoption stages, intention to use, and acceptance of these technologies. Furthermore, they stress the need for further research into ChatGPT’s diverse applications and benefits across various academic roles, including personalized learning experiences, instant feedback, efficient grading, supervision facilitation, and lesson planning, to name a few.
Figure 4. Summary of the research agenda.
5. Conclusions

The adoption of GAI is irreversible. Increasingly, students, teachers, and researchers consider this technology as a valuable support for their work, impacting all aspects of the teaching–learning process, including research. This systematic literature review of 37 articles, published between 2023 and 2024 on the use of GAI in HE, reveals some concerns.

GAI tools have demonstrated their potential to enhance student support, improve teaching efficiency, and facilitate research activities. They offer innovative and interactive learning experiences while aiding educators in managing routine tasks. However, these advancements necessitate the reevaluation of assessment strategies to maintain academic integrity and ensure the quality of education.

The acceptance and perceptions of GAI tools among students, instructors, and institutional leaders are critical for their successful implementation. Factors such as performance expectancy, effort expectancy, and social influence significantly shape attitudes toward these technologies. Ethical considerations, particularly concerning academic integrity and responsible use, must be addressed through comprehensive policies and guidelines.

Moreover, GAI tools can significantly enhance various educational activities, including assessment, writing, content analysis, and feedback. However, balancing AI assistance with traditional learning methods is crucial. Future research should focus on developing robust assessment methods, ethical guidelines, and effective pedagogical strategies to maximize the benefits of GAI while mitigating potential risks.

This study presents several limitations. Firstly, the articles reviewed only cover a period up until the end of January 2024. Therefore, this analysis needs to be updated with studies published after this date. Secondly, the search query used may be considered a limitation, as using different words might identify different studies. At the time the search was conducted, the keywords we used were those that seemed most appropriate, although other keywords could have been used to broaden the scope of the review and include other relevant studies, such as LLMs (large language models). Additionally, this SLR is limited to studies conducted in HE. Research conducted in other contexts or educational levels could provide different results and perspectives. Finally, this study employs a mixed approach, using a descriptive analysis in Section 3 and a critical meta-analysis in Section 4. This choice aimed to provide both a comprehensive overview and a synthesis of key findings, though it may reduce the level of individual critique for each article in favor of a broader comparative analysis.

For future work, it is important to note that the use of GAI in higher education is still in its early stages, presenting numerous opportunities for further analysis across topics such as pedagogy, assessment, ethics, technology, and the development of skills for future workforce competitiveness. While this study provides a broad synthesis of the selected papers, future research could benefit from a more focused analysis of how each study
Author Contributions: Conceptualization, J.B., A.M. and G.C.; methodology, J.B., A.M. and G.C.;
formal analysis, J.B., A.M. and G.C.; writing—original draft preparation, J.B., A.M. and G.C.; writing—
review and editing, J.B., A.M. and G.C. All authors have read and agreed to the published version of
the manuscript.
Funding: This work was financially supported by national funds through the FCT—Foundation for
Science and Technology, I.P., under the projects UIDB/05460/2020 and UIDP/05422/2020.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: No new data were created or analyzed in this study. Data sharing is
not applicable to this article.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Fjelland, R. Why general artificial intelligence will not be realized. Humanit. Soc. Sci. Commun. 2020, 7, 10. [CrossRef]
2. Farrelly, T.; Baker, N. Generative artificial intelligence: Implications and considerations for higher education practice. Educ. Sci.
2023, 13, 1109. [CrossRef]
3. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A meta-systematic
review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol.
High. Educ. 2024, 21, 4. [CrossRef]
4. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023,
20, 22. [CrossRef]
5. Saunders, M.; Lewis, P.; Thornhill, A. The Research Onion of Mark Saunders. In Research Methods for Business Students, 8th ed.;
Pearson: London, UK, 2019.
6. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.;
Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 372.
[CrossRef] [PubMed]
7. Alexander, K.; Savvidou, C.; Alexander, C. Who wrote this essay? Detecting AI-generated writing in second language education
in higher education. Teach. Engl. Technol. 2023, 23, 25–43. [CrossRef]
8. Al-Zahrani, A.M. The impact of generative AI tools on researchers and research: Implications for academia in higher education.
Innov. Educ. Teach. Int. 2023, 61, 1029–1043. [CrossRef]
9. Barrett, A.; Pack, A. Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the
writing process. Int. J. Educ. Technol. High. Educ. 2023, 20, 59. [CrossRef]
10. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ.
Technol. High. Educ. 2023, 20, 43. [CrossRef]
11. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT
in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [CrossRef]
12. Chan, C.K.Y.; Zhou, W. An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI.
Smart Learn. Environ. 2023, 10, 64. [CrossRef]
13. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High.
Educ. 2023, 20, 38. [CrossRef]
14. Chen, J.; Zhuo, Z.; Lin, J. Does ChatGPT play a double-edged sword role in the field of higher education? An in-depth exploration
of the factors affecting student performance. Sustainability 2023, 15, 16928. [CrossRef]
15. Chergarova, V.; Tomeo, M.; Provost, L.; De la Peña, G.; Ulloa, A.; Miranda, D. Case study: Exploring the role of current and
potential usage of generative artificial intelligence tools in higher education. Issues Inf. Syst. 2023, 24, 282–292. [CrossRef]
16. Chiu, T.K.F. Future research recommendations for transforming higher education with generative AI. Comput. Educ. Artif. Intell.
2024, 6, 100197. [CrossRef]
17. Currie, G.; Barry, K. ChatGPT in nuclear medicine education. J. Nucl. Med. Technol. 2023, 51, 247–254. [CrossRef]
18. De Paoli, S. Performing an inductive thematic analysis of semi-structured interviews with a large language model: An exploration
and provocation on the limits of the approach. Soc. Sci. Comput. Rev. 2023, 42, 997–1019. [CrossRef]
19. Duong, C.D.; Vu, T.N.; Ngo, T.V.N. Applying a modified technology acceptance model to explain higher education students’
usage of ChatGPT: A serial multiple mediation model with knowledge sharing as a moderator. Int. J. Manag. Educ. 2023,
21, 100883. [CrossRef]
20. Elkhodr, M.; Gide, E.; Wu, R.; Darwish, O. ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis.
STEM Educ. 2023, 3, 70–88. [CrossRef]
21. Escalante, J.; Pack, A.; Barrett, A. AI-generated feedback on writing: Insights into efficacy and ENL student preference. Int. J.
Educ. Technol. High. Educ. 2023, 20, 57. [CrossRef]
22. Essel, H.B.; Vlachopoulos, D.; Essuman, A.B.; Amankwa, J.O. ChatGPT effects on cognitive skills of undergraduate students:
Receiving instant responses from AI-based conversational large language models (LLMs). Comput. Educ. Artif. Intell. 2024,
6, 100198. [CrossRef]
23. Farazouli, A.; Cerratto-Pargman, T.; Bolander-Laksov, K.; McGrath, C. Hello GPT! Goodbye home examination? An exploratory study of AI chatbots’ impact on university teachers’ assessment practices. Assess. Eval. High. Educ. 2023, 49, 363–375. [CrossRef]
24. French, F.; Levi, D.; Maczo, C.; Simonaityte, A.; Triantafyllidis, S.; Varda, G. Creative use of OpenAI in education: Case studies
from game development. Multimodal Technol. Interact. 2023, 7, 81. [CrossRef]
25. Greiner, C.; Peisl, T.C.; Höpfl, F.; Beese, O. Acceptance of AI in semi-structured decision-making situations applying the four-sides
model of communication—An empirical analysis focused on higher education. Educ. Sci. 2023, 13, 865. [CrossRef]
26. Hammond, K.M.; Lucas, P.; Hassouna, A.; Brown, S. A wolf in sheep’s clothing? Critical discourse analysis of five online
automated paraphrasing sites. J. Univ. Teach. Learn. Pract. 2023, 20, 8. [CrossRef]
27. Hassoulas, A.; Powell, N.; Roberts, L.; Umla-Runge, K.; Gray, L.; Coffey, M.J. Investigating marker accuracy in differentiating
between university scripts written by students and those produced using ChatGPT. J. Appl. Learn. Teach. 2023, 6, 71–77. [CrossRef]
28. Jaboob, M.; Hazaimeh, M.; Al-Ansi, A.M. Integration of generative AI techniques and applications in student behavior and
cognitive achievement in Arab higher education. Int. J. Hum.-Comput. Interact. 2024, 24, 1–14. [CrossRef]
29. Kelly, A.; Sullivan, M.; Strampel, K. Generative artificial intelligence: University student awareness, experience, and confidence
in use across disciplines. J. Univ. Teach. Learn. Pract. 2023, 20, 12. [CrossRef]
30. Laker, L.F.; Sena, M. Accuracy and detection of student use of ChatGPT in business analytics courses. Issues Inf. Syst. 2023,
24, 153–163. [CrossRef]
31. Lopezosa, C.; Codina, L.; Pont-Sorribes, C.; Vállez, M. Use of generative artificial intelligence in the training of journalists:
Challenges, uses and training proposal. El Prof. de la Inf. 2023, 32, 1–12. [CrossRef]
32. Michel-Villarreal, R.; Vilalta-Perdomo, E.; Salinas-Navarro, D.E.; Thierry-Aguilera, R.; Gerardou, F.S. Challenges and opportunities
of generative AI for higher education as explained by ChatGPT. Educ. Sci. 2023, 13, 856. [CrossRef]
33. Nikolic, S.; Daniel, S.; Haque, R.; Belkina, M.; Hassan, G.M.; Grundy, S.; Lyden, S.; Neal, P.; Sandison, C. ChatGPT versus
engineering education assessment: A multidisciplinary and multi-institutional benchmarking and analysis of this generative
artificial intelligence tool to investigate assessment integrity. Eur. J. Eng. Educ. 2023, 48, 559–614. [CrossRef]
34. Perkins, M.; Roe, J.; Postma, D.; McGaughran, J.; Hickerson, D. Detection of GPT-4 generated text in higher education: Combining
academic judgement and software to identify generative AI tool misuse. J. Acad. Ethics 2024, 22, 89–113. [CrossRef]
35. Popovici, M.-D. ChatGPT in the classroom: Exploring its potential and limitations in a functional programming course. Int. J.
Hum.-Comput. Interact. 2023, 39, 1–12. [CrossRef]
36. Rose, K.; Massey, V.; Marshall, B.; Cardon, P. IS professors’ perspectives on AI-assisted programming. Issues Inf. Syst. 2023,
24, 178–190. [CrossRef]
37. Shimizu, I.; Kasai, H.; Shikino, K.; Araki, N.; Takahashi, Z.; Onodera, M.; Kimura, Y.; Tsukamoto, T.; Yamauchi, K.; Asahina, M.;
et al. Developing medical education curriculum reform strategies to address the impact of generative AI: Qualitative study. JMIR
Med. Educ. 2023, 9, e53466. [CrossRef]
38. Singh, M. Maintaining the integrity of the South African university: The impact of ChatGPT on plagiarism and scholarly writing.
S. Afr. J. High. Educ. 2023, 37, 203–220. [CrossRef]
39. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use
of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024,
55, 1209–1230. [CrossRef]
40. Walczak, K.; Cellary, W. Challenges for higher education in the era of widespread access to generative AI. Econ. Bus. Rev. 2023,
9, 71–100. [CrossRef]
41. Watermeyer, R.; Phipps, L.; Lanclos, D.; Knight, C. Generative AI and the automating of academia. Postdigital Sci. Educ. 2023,
6, 446–466. [CrossRef]
42. Yilmaz, R.; Karaoglan Yilmaz, F.G. The effect of generative artificial intelligence (AI)-based tool use on students’ computational
thinking skills, programming self-efficacy, and motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [CrossRef]
43. Yilmaz, F.G.K.; Yilmaz, R.; Ceylan, M. Generative artificial intelligence acceptance scale: A validity and reliability study. Int. J.
Hum.-Comput. Interact. 2023, 39, 1–13. [CrossRef]
44. Saunders, M.; Lewis, P.; Thornhill, A. Research Methods for Business Students, 6th ed.; Pearson: London, UK, 2007.
45. Baig, M.I.; Yadegaridehkordi, E. ChatGPT in higher education: A systematic literature review and research challenges. Int. J.
Educ. Res. 2024, 127, 102411. [CrossRef]
46. Castillo-Martínez, I.M.; Flores-Bueno, D.; Gómez-Puente, S.M.; Vite-León, V.O. AI in higher education: A systematic literature
review. Front. Educ. 2024, 9, 1391485. [CrossRef]
47. Filippi, S.; Motyl, B. Large language models (LLMs) in engineering education: A systematic review and suggestions for practical
adoption. Information 2024, 15, 345. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.