
CHAPTER II

REVIEW OF RELATED LITERATURE AND RELATED STUDIES

This chapter presents the review of related literature and related studies that have bearing on the present study and form part of its framework.

Review of Related Literature

Cognitive abilities such as critical thinking, decision-making, and analytical thinking are important elements in research, particularly in higher education (Soufi & See, 2019). Critical thinking involves constructing well-reasoned
arguments supported by evidence (McKinley, 2013). Dwyer (2023) defines
critical thinking abilities as a blend of cognitive abilities and critical
thinking dispositions, emphasizing skills such as truth-seeking, systematic
evaluation, inference, and self-regulation in problem-solving. Critical
thinking dispositions refer to the attitudes and qualities that facilitate
engagement in critical thinking activities (Facione & Facione, 1996). They
include the desire to be informed, the ability to consider multiple
perspectives, the identification of relationships, reflective thinking,
evidence-seeking, skepticism, respect for others’ views, and tolerance.
Liang (2023) highlights its importance in contemporary education,
underscoring its role in fostering various competencies, including drawing
conclusions, understanding contributing factors and effects, assessing
source credibility, and distinguishing facts from opinions.

Decision-making abilities are critical for processing and reasoning through complex information across diverse domains, including research and education, thereby nurturing proficient decision-making capabilities
(Duhaylungsod & Chavez, 2023). In exploring decision-making theories, a
distinction can be seen between descriptive and normative theories (Bell
et al., 1988). Descriptive theories focus on understanding actual decision-
making behaviors, including both rational and irrational elements, through
empirical research. Normative theories, on the other hand, advocate for
decisions that maximize expected utility, grounded in mathematical
models and ideal behavioral principles (Damnjanović & Janković, 2014).

Analytical thinking embodies the thorough exploration and critical analysis of data, which are vital for problem-solving and informed decision-making
(Pokkakillath & Suleri, 2023). These elements are crucial for enhancing
learning experiences, as they pertain to reasoning, planning, inquiry,
interpretation of findings, and the subsequent derivation of conclusions in
research and education (Ismail, 2023).

Incorporating AI dialogue systems in research and educational settings, particularly those utilizing generative modules like variational
autoencoders (VAEs), holds substantial potential for boosting creativity
and elevating the quality of work (Aydin & Karaarslan, 2023). VAEs
provide considerable support to writers, particularly in surmounting challenges
like writer’s block or navigating complex parts of their manuscripts. This is
achieved through the pioneering method of automated text generation, which
not only aids in content creation but also inspires innovative thinking and
problem-solving approaches (Eapen, 2023). AI dialogue systems, bolstered by interdisciplinary insights from psychology and neuroscience, are set to revolutionize the way students approach writing, decision-making, critical thinking, and analytical thinking in research and education (Carvalho et al., 2019;
Gao et al., 2022).

These advancements promise to enhance educational experiences by providing more interactive and personalized learning environments (Carvalho et al., 2019).
However, as AI systems grow more sophisticated and their role in automated
analysis expands, there is a risk that students may become overly reliant on
these technologies (Krullaars et al., 2023). This over-reliance could lead to a
range of issues, including diminished critical thinking (Iskender, 2023), analytical
thinking (Ferrajão, 2020), and decision-making abilities (Pokkakillath & Suleri, 2023), susceptibility to AI-generated errors or AI hallucinations (Hatem et al.,
2023), increased instances of plagiarism (De Angelis et al., 2023), and
challenges related to lack of transparency (Carvalho et al., 2019) and algorithmic
biases (Mbalaka, 2023). Moreover, habitual dependence on AI for decision-
making may reduce individuals’ motivation to engage in independent thinking
and analysis, potentially leading to a weakening of essential cognitive abilities
(Grinschgl & Neubauer, 2022) and automation bias (Gsenger & Strle, 2021).

Given that these AI systems can handle vast data and yield accurate forecasts,
there is a looming danger of humans becoming excessively reliant on AI when
making choices. This over-reliance might stifle creativity and innovative thinking
in both educators and learners, possibly degrading educational quality (Ahmad
et al., 2023). Krullaars et al. (2023) posit that an over-reliance on dialogue
systems hinders students from developing their critical thinking and problem-
solving abilities.

As AI dialogue systems offer pre-formulated answers, this practice can curtail students’ freedom to convey their unique thoughts and viewpoints (Krullaars et
al., 2023). Similarly, Ahmad et al. (2023) argue that one consequence of over-reliance on dialogue systems is a potential decline in users’ cognitive abilities. Furthermore, Gao et al. (2022) claim that students often rely too heavily on the source of information, leading to challenges in determining whether the content produced by an AI dialogue system was drawn from a credible source.
This scenario of AI hallucination involves the AI creating plausible but untrue
statements or assertions, which can mislead users and obscure the line between
fact and fiction. This phenomenon raises important questions about users’ ability
to critically evaluate and discern the accuracy of AI-generated content.
Moreover, Hatem et al. (2023) discuss the issue of AI systems referencing non-
existent sources, a form of confabulation that presents false information within a
seemingly credible framework. This misrepresentation can deceive users and
undermine the trustworthiness of the information provided by AI systems.
Athaluri et al. (2023) argue that the issue of AI confabulations can potentially
have adverse effects on decision-making and may lead to ethical and legal
dilemmas. The authors found that among 178 reference sources produced by AI
dialogue systems, 69 lacked a Digital Object Identifier (DOI), and 28 could not be
found through a Google search, nor did they possess an existing DOI.

The utilization of AI tools to compose a paper in any academic or professional context constitutes plagiarism (Gao et al., 2022). Studies show that the
increasing prevalence of journals lacking rigorous quality controls has led to
concerns about the potential surge of AI-generated articles in the scientific
community, with notable plagiarism detection tools failing to identify
infringements effectively (Dehouche, 2021; Francke & Bennett, 2019). De
Angelis et al. (2023) found that the rise of journals that neglect essential quality
controls, like verifying for plagiarism or ensuring ethical standards, might result
in a significant influx of AI-generated articles within the scientific realm. Such a
trend could gravely undermine the credibility of scientific studies and tarnish the
prestige of scholarly publications. Khalil and Er (2023) have reported that
popular plagiarism detection tools, such as Turnitin and iThenticate, show a
significant limitation in identifying plagiarism in essays that are based on
existing literature. Concerningly, their studies reveal that these tools could only
detect plagiarism in less than 15% of cases. This low detection rate raises
serious concerns about the potential misuse of these platforms by students for
academic purposes. It also highlights a critical issue regarding the lack of
transparency in the algorithms that drive these plagiarism detection procedures
(Ventayen, 2023).

The lack of transparency in algorithm-driven procedures in the realm of socio-legal contributions is often referred to as "black-boxing" (Carvalho et al., 2019). According to Carvalho et al. (2019), such transparency is pivotal for
enhancing understanding of AI dialogue systems, laying a solid foundation for
the challenges and advantages of algorithmic decisions, and ensuring that
decision-making processes are both accountable and fair. Additionally, it is
crucial to scrutinize how information is accessed online, especially on digital
platforms. Beck (2019) explored the intricate dimensions of media ethics,
identifying similarities to online communication ethics. The core of this discourse
revolves around practices valuing truth. This includes shedding light on the
decision-making behind content selection, validating the genuineness of content,
understanding authorship, and pinpointing deliberate misinformation, including
algorithmic biases.

Algorithmic biases refer to the unintended and systematic discrimination present in computer algorithms (Alrazaq et al., 2023). Often, these biases stem from
historical data sources upon which the software relies, potentially reflecting or
amplifying past prejudices. Remarkably, even when explicit sensitive attributes
are omitted from the input, a proficient machine-learning algorithm might still
act upon these attributes due to underlying correlations in the data (Kordzadeh
& Ghasemaghaei, 2022). Mbalaka (2023) found that DALL-E 2 struggled notably in generating detailed images of "An African Family" compared to more
generic "Family" images. In contrast, StarryAI outshined DALL-E 2 by producing
clearer facial features. However, it still lacked accuracy in depicting the cultural
nuances. Feine et al. (2020) argue that AI dialogue system designs frequently
incorporate gender-specific indicators. Many of these AI dialogue systems are
characterized, overtly or subtly, by a particular gender. Notably, many AI dialogue systems bear female names, present female-centric avatars, and are often referred to as female. The study found a prevailing
preference for female representations over male ones. This highlights an
inherent gender bias in AI dialogue system design practices.

With issues such as AI hallucinations, plagiarism, lack of transparency, and algorithmic biases, there arises a critical concern that over-reliance on AI
dialogue systems’ decision-making capabilities might potentially impede the
cultivation of critical thinking skills (Carobene et al., 2023; Hosseini et al., 2023).
Students can inadvertently become overly dependent on AI-generated
assistance, potentially detracting from their ability to make independent, well-
informed decisions (Buçinca et al., 2021). Yet, in the academic realm, particularly
among junior faculty members, there exists the perpetual challenge of balancing
research, publishing commitments, and teaching responsibilities (Holmes et al.,
2023). Institutions often require a specific quota of research articles to be
published annually for career advancement (Sharma, 2020). Additionally, the
ever-dreaded ‘writer’s block’ poses a formidable obstacle, affecting both novice
and experienced writers, including students and educators (Köbis & Mossink,
2021). AI-generated text emerges as a valuable resource to surmount these
hurdles, serving as an effective tool to overcome writer’s block and streamline
the publishing process (Washington, 2023).

There is still a noticeable gap in the current literature to explore the effects of
over-reliance on dialogue systems on abilities such as decision-making, critical
thinking, and analytical thinking in education and research (Ahmad et al., 2023).
To fill this knowledge gap, a systematic review is conducted with a specific emphasis on research examining the implications of over-reliance on AI dialogue systems. This study aims to investigate over-reliance on AI dialogue systems in educational and research contexts, with a particular focus on its impact on decision-making, critical thinking, and analytical thinking.

Review of Related Studies


AI systems are likely to affect the way learner–instructor interaction occurs in
online learning environments (Guilherme, 2019). If students and instructors have
strong concerns about the impact of AI systems on their interactions, then they
would not use such systems, in spite of perceived benefits (Felix, 2020). To the best of our knowledge, the impact of AI systems on learner–instructor interaction has received limited empirical study, and Misiejuk and Wasson (2017) have called for more work in this area.

There are a variety of AI systems that are expected to affect learner–instructor interaction in online learning. For example, Goel and Polepeddi (2016) developed
an AI teaching assistant named Jill Watson to augment the instructor’s
communication with students by autonomously responding to student
introductions, posting weekly announcements, and answering routine, frequently
asked questions. Perin and Lauterbach (2018) developed an AI scoring system
that allows faster communication of grades between students and the instructor.
Luckin (2017) showed AI systems that support both students and instructors by
providing constant feedback on how students learn and the progress they are
making towards their learning goals. Ross et al. (2018) developed online
adaptive quizzes to support students by providing learning contents tailored to
each student’s individual needs, which improved student motivation and
engagement. Heidicker et al. (2017) showed that virtual avatars allow several
physically separated users to collaborate in an immersive virtual environment by
increasing sense of presence. Aslan and her colleagues (2019) developed AI
facial analytics to improve instructors’ presence as a coach in technology-
mediated learning environments. When looking at these AI systems, in-depth
insight into how students and instructors perceive the AI’s impact is important
(Zawacki-Richter et al., 2019).

The recent introduction of commercial AI systems for online learning has demonstrated the complex impact of AI on learner–instructor interaction. For
instance, Proctorio (Proctorio Inc., USA), a system that aims to prevent cheating
by monitoring students and their computer screens during an exam, seems like a
fool-proof plan to monitor students in online learning, but students complain that it increases their test-taking anxiety (McArthur, 2020). The idea of
being recorded by Proctorio distracts students and creates an uncomfortable
test-taking atmosphere. In a similar vein, although Squirrel AI (Squirrel AI
Learning Inc., China) aims to provide adaptive learning by adjusting itself
automatically to the best method for an individual student, there is a risk that
this might restrict students’ creative learning (Beard, 2020). These environments
have one thing in common: Unlike educational technologies that merely mediate
interactions between instructors and students, AI systems have more autonomy in the way they interpret data, infer learning, and, at times, make instructional decisions.

In what follows, we describe Speed Dating with storyboards, an exploratory research method that allows participants to quickly experience different forms of
AI systems possible in the near future, to examine the impact of those systems
on learner–instructor interaction (“Materials and methods”). Findings offer new
insights on students’ and instructors’ boundaries, such as when AI systems are
perceived as “invasive” (“Findings”). Lastly, we discuss how our findings provide implications for future AI systems in online learning (“Discussion and conclusion”).
