ChatGPT in Education

1Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Europastraße 6, 72072 Tübingen, Germany. 2University of California, Irvine, USA. 3Leibniz-Institut für Wissensmedien, Tübingen, Germany. *email: tim.fuetterer@uni-tuebingen.de
The release and rapid diffusion of ChatGPT have caught the attention of educators worldwide. Some
educators are enthusiastic about its potential to support learning. Others are concerned about how
it might circumvent learning opportunities or contribute to misinformation. To better understand
reactions to ChatGPT concerning education, we analyzed Twitter data (16,830,997 tweets from
5,541,457 users). Based on topic modeling and sentiment analysis, we provide an overview of global
perceptions and reactions to ChatGPT regarding education. ChatGPT triggered a massive response
on Twitter, with education being the most tweeted content topic. Topics ranged from specific (e.g.,
cheating) to broad (e.g., opportunities), and they were discussed with mixed sentiment. We found indications that authority decisions may influence public opinion. We discuss how the average reaction on Twitter (e.g., using ChatGPT to cheat in exams) differs from the discussions in which education and teaching–learning researchers are likely to be most interested (e.g., ChatGPT as an intelligent learning partner).
This study provides insights into people's reactions when groundbreaking new technology is released, as well as implications for scientific and policy communication in rapidly changing circumstances.
Artificial intelligence (AI) has the potential to transform the field of education, and its applications are becoming
increasingly prevalent1. The massive diffusion and adoption of ChatGPT following its November 30, 2022 release
suggest that AI can rapidly change how we learn and communicate. The release of ChatGPT generated a great
deal of excitement and trepidation as to its possible effects on education2. Microsoft's announcement that it will make programs like ChatGPT available to all users through its Office programs3 hints at how broadly people may soon leverage AI in their written communication. As generative AI tools such as ChatGPT
become more integrated into education4, educators must address crucial questions about the future of teaching
and learning. Students will need to understand how AI works, its affordances and challenges, and how they can
harness its power without reproducing the biases inherent in its training data. Teachers will have to walk along-
side, learning as they go and reinforcing preexisting habits such as corroboration and interrogation of sources,
critical thinking, and ethical use of sources.
Notably, the spread and speed of innovation often depend on its usage by early adopters and their perception of the new technology5. Therefore, in this paper, we explore the global reception of ChatGPT in the first
two months after its release. We aim to understand how the global educational community viewed the potential
impact of ChatGPT on education and human learning. This may include topics ranging from the potential to personalize learning to the ethical implications of relying on AI for information and communication. Specifically, we
leverage social media data on Twitter to analyze the worldwide reception of ChatGPT, seeking insight into (a)
the most prevalent topics discussed regarding ChatGPT in education and (b) how users discussed these topics
over this initial implementation period.
Theoretical background
What is ChatGPT? ChatGPT (https://openai.com/blog/chatgpt) is the latest member of the Generative Pre-trained Transformer (GPT) family of language models, released by OpenAI (https://openai.com) on November 30, 2022. A language model is a statistical model that can predict the probability of a sequence of words. With
this capability, a language model can generate natural language in a human style. Like all statistical models, a
language model needs to be trained on many word sequences to calculate the probability of each sequence. The number of word sequences (i.e., the size of the training corpus) used to train a model determines how much experience a model can gain about the language and, more importantly, the knowledge incorporated in the language.
ChatGPT is a large language model trained with data from the Internet and many scanned books. Brown et al.6
reported using a corpus of 499 billion words to train the GPT-3 model, which was the base model for ChatGPT
at its first release. ChatGPT now draws on GPT-4, a much larger and more powerful model. The GPT models
are transformer models, allowing downstream fine-tuning for improved performance on more specific tasks,
such as conversations or document classification. The conversation fine-tuning that ChatGPT obtained on top
of GPT-3 aims at reducing untruthful, toxic, or unhelpful output that uncontrolled large language models may
produce7. The fine-tuning approach used for ChatGPT is called Reinforcement Learning from Human Feedback (RLHF). This method fine-tunes the original model with responses annotated by human raters as more or less appropriate. Details of the fine-tuning process are reported in Ouyang et al.7 (see also https://openai.com/blog/chatgpt).
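To make the idea of a language model concrete, the following minimal sketch scores a word sequence with the openly available GPT-2 model as a stand-in (ChatGPT's own weights are not public, so GPT-2 serves purely as an illustration). It sums the log-probabilities log P(word_t | word_1, ..., word_{t-1}) that the model assigns to each next token, which is exactly the sequence probability described above.

```python
# Minimal sketch: scoring a word sequence with an autoregressive language
# model. GPT-2 is used as an open stand-in for the GPT family; this is not
# the authors' code. Requires the `transformers` and `torch` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Sum of log P(token_t | tokens before t) over the whole sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits              # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]                        # each position predicts the next token
    token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

# A plausible sentence receives a higher (less negative) score than a garbled one.
print(sequence_log_prob("ChatGPT is a large language model."))
print(sequence_log_prob("Model language large a is ChatGPT."))
```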
Opportunities and risks of ChatGPT for education. ChatGPT and other large language models can
potentially have a large effect on teaching and learning in practice. This may include, for instance, the potential
to facilitate more personalized and adaptive learning8,9 and organize assessment and evaluation processes more
efficiently4,8,10. Also, Kasneci et al.2 emphasize the potential to compensate for educational disadvantages: through speech-to-text technologies or the automated generation of written text, impairments such as visual impairments or dyslexia can become less limiting in learning, contributing to inclusive education. Zhai21 looked at the Next
Generation Science Standards and tested how teachers could use ChatGPT to overcome key instructional chal-
lenges, such as providing feedback and learning guidance or recommending learning materials to students. For
educational purposes, the specific potential of ChatGPT lies in its interactive component, which enables effective learning mechanisms. For instance, feedback is a core feature of learner support that is
effective in learning11,12. ChatGPT can be understood as a learning partner or teaching assistant that gives feedback when learners provide good prompts13–15. Organizations such as Khan Academy are quickly trying
to exploit the power of ChatGPT as a learning partner by integrating the tool with prompts already built into
their platform (see www.khanacademy.org/khan-labs).
Such education opportunities contrast with AI's limitations and associated risks. One urgent limitation is that no source of truth was included during ChatGPT's reinforcement learning training (https://openai.com/blog/chatgpt). Thus, the risk that ChatGPT will produce texts with plausible-sounding but
incorrect information is high8–10,13–18. Educators and learners need professional knowledge and critical reflection skills to assess the generated responses adequately19 (see also 21st-century skills20). ChatGPT, as a learning partner, may not promote critical thinking per se21. However, use guided by educators can provide opportunities
for critical thinking (e.g., as students learn to refine prompts based on their understanding of the context or
genre). The question of how the opportunities of ChatGPT (and its successors) for education can and should best be exploited, and how its risks can best be avoided, is an important and interdisciplinary research issue that will unfold over the next few years.
Twitter data as a measure to gain insights into human reactions. Twitter is a microblogging platform where registered users can post brief messages called tweets (up to 280 Unicode characters for non-paying users). Tweets can be linked to specific persons (through mentions [@]) or specific topics (via hashtags [#]). Users can follow other users' tweets. Twitter provides access to real-time data that can capture public perceptions of innovations like ChatGPT (another example: CRISPR33), significant events (e.g., the Arab Spring, U.S. elections, COVID-1934,35), or reforms36. Twitter data has affordances as a research tool, as it provides scalable and immediate access to authentic public reactions37.
Aims and research questions. ChatGPT can potentially transform educational processes worldwide, but
whether and how it does so may depend on how educators take it up. In this study, we aim to gain insights into an
unvarnished and immediate global human reaction to the release of ChatGPT that goes beyond statements made
by individual stakeholders in education. Our study may help anticipate human reactions to future technology innovations relevant to the education sector (e.g., by incorporating measures that foster acceptance of new educational technologies, such as information on benefits or usage guidelines, directly when they are introduced). In addition, we examine which education-related topics were discussed by
users and which topics tend to be disregarded but should not be ignored in a critical discussion of ChatGPT in
educational contexts. We focused on the following three research questions (RQs) in the first two months after the ChatGPT release (November 30, 2022):

RQ1: How did the global Twitter community receive and react to ChatGPT?
RQ2: Which education-related topics did users discuss in connection with ChatGPT?
RQ3: With which sentiment did users discuss the most prevalent education-related topics?
Methods
Data collection and preparation. Using the Twitter API for Academic Research, we collected 16,830,997
tweets posted from November 30, 2022, to January 31, 2023. We chose this rollout period to get an initial reac-
tion from the public before many had spent much time thinking about or using ChatGPT. The data collection
procedure was as follows: first, we queried the Tweets that mentioned ChatGPT. Second, we identified and col-
lected all conversations with either a sparking tweet (i.e., tweets that start a discussion) or a reply mentioning
ChatGPT. A conversation is defined as the sparking tweet and all other tweets directly replying to the sparking
tweet. Notably, a conversation needs to include at least two different users. This led to 127,749 unique conversa-
tions. Notably, we found no mentions of ChatGPT on Twitter before November 30, 2022.
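The two-step collection can be illustrated with the Twitter API v2 full-archive search that the Academic Research track exposed. The endpoint and the conversation_id operator are part of the documented API; the bearer token and field choices below are placeholders, not the authors' exact configuration.

```python
# Hedged sketch of the two-step collection: (1) find tweets mentioning
# ChatGPT, (2) fetch each touched conversation (sparking tweet plus replies).
import requests

BEARER_TOKEN = "..."  # placeholder: Academic Research credentials
SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}
WINDOW = {"start_time": "2022-11-30T00:00:00Z", "end_time": "2023-01-31T23:59:59Z"}

def search_all(query: str, next_token: str = None) -> dict:
    params = {
        "query": query,
        "max_results": 500,  # maximum page size for the full-archive endpoint
        "tweet.fields": "conversation_id,author_id,created_at,lang",
        **WINDOW,
    }
    if next_token:
        params["next_token"] = next_token
    return requests.get(SEARCH_URL, headers=HEADERS, params=params).json()

# Step 1: tweets that mention ChatGPT.
page = search_all("chatgpt")

# Step 2: the full conversation around one such tweet.
conversation_id = page["data"][0]["conversation_id"]
thread = search_all(f"conversation_id:{conversation_id}")
```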
We anonymized the data to protect users’ privacy by replacing Twitter-assigned tweet, author, and conversa-
tion identifiers with random numeric identifiers. Email addresses and phone numbers were substituted with
placeholders. Similarly, we replaced usernames with anonymous user identifiers. In addition, we removed tweets
that were most likely generated by computer programs (i.e., bots) using unsupervised text-based and heuristics
approaches. First, among the 257,858 accounts that posted more than ten tweets during the observation period, we removed those whose screen name contained the word bot, ended with app, or included app followed by a non-letter symbol (self-declared bots). We set this tweet-count threshold based on the assumption that bots are prone to tweet, on average, significantly more than humans38. We also removed accounts from the dataset that posted more than 1000 tweets. Overall, 283 bots and their 80,389 tweets
were deleted based on the first rule. Second, we deleted repetitive tweets about unrelated topics (e.g., spam-like
product advertisements or cryptocurrency). The groups of tweets were found by clustering the document (tweet)
embeddings. This text-based approach is preferred over the available tools for bot detection, such as Botometer39,
because of the large dataset size and the heterogeneous nature of the data. In addition, modern bots are prone to behave in groups rather than individually; the text-based approach can capture such coordinated behavior40. This led
to a final sample size of 16,743,036 tweets, 5,537,942 users, and 125,151 conversations.
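A simplified version of the screen-name and volume heuristics might look as follows. The DataFrame layout, column names, and exact regular expressions are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the bot-filtering heuristics: flag accounts whose screen name
# suggests a self-declared bot, plus extreme high-volume accounts.
import re
import pandas as pd

# Assumed layout: one row per tweet, with author_id and screen_name columns.
def flag_bot_accounts(tweets: pd.DataFrame) -> set:
    counts = tweets.groupby("author_id").size()
    heavy_posters = set(counts[counts > 1000].index)  # hard volume cutoff

    # Screen name contains "bot", ends with "app", or has "app" + non-letter.
    name_pattern = re.compile(r"bot|app$|app[^a-zA-Z]", re.IGNORECASE)
    name_flagged = set(
        tweets.loc[tweets["screen_name"].str.contains(name_pattern), "author_id"]
    )
    return heavy_posters | name_flagged

# Usage: tweets = tweets[~tweets["author_id"].isin(flag_bot_accounts(tweets))]
```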
Analytical methods. Topic modeling. We applied a topic modeling procedure to gain insight into topics
users discussed after ChatGPT was released (RQ1 and RQ2). We selected only tweets in English, deleted empty
tweets and duplicates, and removed user mentions starting with “@”, hashtags, and links in all tweets. We deleted
the term ChatGPT and its derivatives to improve the model performance, as this term appeared in all tweets (due to the inclusion criteria of our data collection). Next, we used the BERTopic algorithm41 to retrieve clusters of
similar tweets from the dataset. The algorithm allows using document embeddings generated by state-of-the-art
language models. It outperforms conventional topic modeling techniques such as Latent Dirichlet Allocation
(LDA) and Non-Negative Matrix Factorization (NMF), as it accounts for semantic relationships among words
and provides better topic representations41,42.
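The preprocessing described above can be sketched with a few regular expressions. The exact rules (e.g., how derivatives of ChatGPT were matched) are assumptions for illustration.

```python
# Sketch of tweet preprocessing: strip links, mentions, hashtags, and the
# term ChatGPT with its derivatives, then collapse whitespace.
import re

def preprocess(tweet: str) -> str:
    tweet = re.sub(r"https?://\S+", "", tweet)               # links
    tweet = re.sub(r"[@#]\w+", "", tweet)                    # mentions, hashtags
    tweet = re.sub(r"chat\s?gpt\w*", "", tweet, flags=re.IGNORECASE)
    return " ".join(tweet.split())

print(preprocess("ChatGPT will change #education! See https://example.com @openai"))
# -> "will change ! See"
```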
Moreover, BERTopic was used successfully in recent studies on Twitter43,44. We used a Python implementa-
tion of the BERTopic algorithm (https://github.com/MaartenGr/BERTopic) with the minimum size of a cluster
set at 500 and 3 different language models: BERTweet (https://huggingface.co/docs/transformers/model_doc/
bertweet), twitter-roberta-base-sep2022 (https://huggingface.co/cardiffnlp/twitter-roberta-base-mar2022), and
all-MiniLM-L6-v2 (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). We examined the per-
formance of each embedding model on our dataset by reviewing 20 tweets from each topic. The last embedding model (all-MiniLM-L6-v2) showed the best performance regarding topic diversity and coherence. We ran the model on all non-
conversation tweets written in English, not including retweets (i.e., 520,566 sparking and other non-conversation
tweets). We did not include retweets as they decelerate clustering significantly while adding little value to the
output of topic modeling. Then, we extrapolated the results on retweets (526,780 full-text retweets) using super-
vised classification and manually grouped some of these topics into larger topical clusters.
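Under these settings, the core of the topic-modeling pipeline can be reproduced with a few lines. The model name and minimum cluster size follow the text; `docs` is a placeholder for the preprocessed tweets.

```python
# Sketch of the BERTopic setup: all-MiniLM-L6-v2 embeddings, minimum
# cluster size of 500, fit on preprocessed English tweets (`docs`).
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = ["..."]  # placeholder: preprocessed English non-conversation tweets

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
topic_model = BERTopic(
    embedding_model=embedding_model,
    min_topic_size=500,          # minimum size of a cluster, as in the study
    language="english",
)
topics, probabilities = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head(10))  # largest topics first
```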
Sentiment analysis. We performed sentiment analysis to gain insight into how users discussed ChatGPT after it
was released (RQ1 and RQ3). For this, we used all tweets in English, including conversations. The preprocessing
procedure was identical to the one applied at the topic modeling step. Next, we used the rule-based model
VADER to perform the sentiment analysis45, as VADER showed a high accuracy on Twitter data46 and outperformed other sentiment software such as LIWC when applied to Twitter data on education47. In addition, we excluded
outliers (i.e., tweets identified within the topic modeling procedure that are far from other topics) to achieve a
more accurate estimate of sentiment for education-related tweets.
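A sketch of the VADER scoring follows. The ±0.05 compound-score cutoffs are VADER's documented convention for mapping scores to labels; whether the study used exactly these cutoffs is an assumption.

```python
# Sketch of rule-based sentiment classification with VADER45.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_label(text: str) -> str:
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(sentiment_label("ChatGPT gives students great feedback!"))    # positive
print(sentiment_label("Students will just use ChatGPT to cheat."))  # negative
```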
We make our data preparation and analysis syntaxes freely available at the following link: https://github.com/twitter-tuebingen/ChatGPT_project.
Ethical approval. An ethics committee approved the study and the collection of the data. It confirmed that the procedures aligned with ethical standards for research with human subjects (date of approval: 09-02-2023, file number: AZ.: A2.5.4-275_bi).
Results
The global reception of ChatGPT (RQ1). To gain insights into the global reception on Twitter about
ChatGPT, we first looked at all 16,743,036 tweets (without identified bots and spam) of the first two months
after the release of ChatGPT that include the term ChatGPT and tweets in related conversations (for descriptive
statistics regarding the number of likes, retweets, replies, and quotes, see Table 1A,B). We found that the number of tweets per day increased from zero before November 30, 2022, to over 550,000 at the end of January (Fig. 1A,B). This number is impressive compared to the number of tweets related to other
prominent hashtags in the past. For instance, in their analyses of the social media response (i.e., Twitter) to the
Black Lives Matter debate, Ince et al.48 found that the hashtag #BlackLivesMatter (including different spellings)
was mentioned 660,000 times from the beginning of 2014 to November 2014. A more current comparison is the
number of tweets regarding vaccine manufacturers AstraZeneca/Oxford, Pfizer/BioNTech, and Moderna dur-
ing the COVID-19 pandemic. From December 1, 2020, to March 31, 2021, Marcec and Likic49 retrieved 701,891
tweets in English.

Table 1. Descriptive statistics of likes, retweets, replies, and quotes. (A) These statistics refer to all N = 16,743,036 tweets from November 30, 2022, to January 31, 2023. (B) These statistics refer to the N = 125,151 conversations from November 30, 2022, to January 31, 2023.

Figure 1. (A) Number of tweets per day dealing with ChatGPT. (B) Unique users and number of tweets per day dealing with ChatGPT. We used all 16,743,036 tweets.
Most of the tweets related to ChatGPT (72.7%) were in English (see Fig. A in the appendix), and 52.2%
came from the U.S. (Fig. 2). Next, we looked at the sentiment ratio of daily tweets (Fig. B, see also Fig. C in the
appendix). Almost all tweets were positive in the very first days after the release of ChatGPT. However, the sentiment distribution (i.e., the proportion of tweets classified as positive, neutral, or negative) then flattened, remaining relatively stable with small overall fluctuations throughout the first two months. Whereas tweets classified as positive dominated all analyzed days within the first two months after the release of ChatGPT, the daily shares of positive, neutral, and negative tweets converged to a 40–30–30 distribution over time. This distribution may suggest that users
increasingly discussed ChatGPT more deliberately and reflectively, considering not only its impressive capacity
but also the challenges that it poses. It is unsurprising for early technology adopters to be more positive than
those subsequently investigating the technology. The sentiment change may partially reflect a more diverse
universe of tweet authors.

Figure 2. Global distribution of tweets. The visualization is based on the 1% (i.e., 160,260) of the 16,743,036 tweets with known locations. For an interactive version of this figure, see https://public.flourish.studio/visualisation/13026492/.
Topics related to ChatGPT in education (RQ2). To gain insights into the topics discussed related to
ChatGPT, we first ranked all 128 topics users discussed in our sample (i.e., obtained from topic modeling) by the number of associated tweets. That 128 distinct topics emerged indicates that the discussion about ChatGPT on Twitter touched upon many subjects. Second, we manually grouped these topics into 39 larger topical clusters based
on semantic and cosine similarities. Education was the third most prevalent topical cluster (measured by the
number of tweets; Table 2) after discussions of general tweets about AI (the most prevalent topical cluster) and
tweets that contain examples of questions and prompts (the second most prevalent topical cluster).

Table 2. Topics overall, based on the 1,047,346 English non-conversation tweets used. Volume indicates the absolute number of tweets. LLM = large language model.
An overview of the ten most prevalent topics in education discussed on Twitter is given in Table 3. The most
prevalent topic in the education topical cluster consisted of statements regarding the opportunities, limitations,
and consequences of using ChatGPT in educational processes in general (Table 3). These statements comprised
22% of all conversations (see T1 in Table 3). For instance, the functions of ChatGPT for educational processes
were discussed (e.g., getting feedback), as were measures for a successful implementation of ChatGPT in educational processes (e.g., prerequisites for educators and learners, such as awareness of the opportunities and boundaries of ChatGPT and of ethical aspects). The second most prevalent topic in education con-
sisted of statements related to efficiency and cheating when students use ChatGPT to, for instance, do homework
like essays. These statements comprised 18% of all conversations (see T2 in Table 3).

Table 3. The most important topics in education, based on the 121,666 tweets used (i.e., English non-conversation tweets plus related English conversation tweets). Volume indicates the absolute number of conversations and was used as an indicator of a topic's importance. Anchor tweets are synthetic tweets that exemplify typical content of a topic.
Similarly, the role of ChatGPT in academia was discussed (the third most prevalent topic in education, cover-
ing 16% of all conversations; see T3 in Table 3). For instance, on the one hand, opportunities of using ChatGPT
(e.g., support in the standardized process of writing a research paper) and the limitations (e.g., no references
or made-up references, potential danger for academia if ChatGPT is not used reflectively) were addressed. The
fourth prevalent topic was banning ChatGPT in educational organizations (covering 10% of all conversations;
see T4 in Table 3). Although there were discussions worldwide about regulations such as bans in schools and
universities, the news that ChatGPT was banned from New York City public schools’ devices and networks, in
particular, dominated discussions on Twitter. In addition to these four dominant topics, which together cov-
ered 66% of the total education-related conversations, the topics T5 (capability of ChatGPT to pass exams), T6 (strategies for using ChatGPT for writing), T7 (other AI tools for education and developments in the future), T8 (the capability of ChatGPT to replicate students' papers), and T9 (costs of education) each covered between
7.5% and 3.6% of all education-related conversations. The topic of how educators could integrate ChatGPT into
teaching–learning processes (e.g., teachers in their lessons) was of minor importance (covering only 2% of all
conversations; see T10 in Table 3).
The pairwise cosine similarity between topic embeddings (Fig. 3) illustrates that the ten educational topics
are closely related. Closely linked are the topics T1 (i.e., opportunities, limitations, and consequences of the use
of ChatGPT), T2 (i.e., efficiency and cheating when students use ChatGPT to write [e.g., homework like essays]),
T3 (i.e., opportunities and limitations of ChatGPT in academia [e.g., writing research papers]) and T4 (i.e., ban
ChatGPT in educational organizations). This means that the words chosen within the topics have significant
commonalities. The close connection may also indicate that these four topics may have been discussed concur-
rently. For instance, the opportunities, limitations, and consequences of using ChatGPT were probably often
discussed with multiple perspectives in mind. Considering, at the same time, how students will use ChatGPT
to write essays, what challenges students and researchers face in writing papers, and what consequences should
be drawn for regulations regarding the use of ChatGPT by organizations such as schools and universities. In
contrast, the topics of how ChatGPT can pass current exams in various disciplines and educational costs had fewer
commonalities with the other topics.

Figure 3. Pairwise cosine similarity between topic embeddings (outliers and the ten education topics T1–T10).

          Outliers  T1    T2    T3    T4    T5    T6    T7    T8    T9    T10
Outliers  1.00      0.66  0.67  0.67  0.66  0.55  0.64  0.69  0.62  0.53  0.65
T1        0.66      1.00  0.98  0.94  0.93  0.75  0.88  0.82  0.83  0.77  0.84
T2        0.67      0.98  1.00  0.93  0.92  0.74  0.88  0.80  0.82  0.76  0.83
T3        0.67      0.94  0.93  1.00  0.89  0.74  0.87  0.81  0.84  0.79  0.84
T4        0.66      0.93  0.92  0.89  1.00  0.79  0.84  0.82  0.82  0.76  0.82
T5        0.55      0.75  0.74  0.74  0.79  1.00  0.67  0.70  0.70  0.60  0.65
T6        0.64      0.88  0.88  0.87  0.84  0.67  1.00  0.79  0.83  0.78  0.85
T7        0.69      0.82  0.80  0.81  0.82  0.70  0.79  1.00  0.79  0.72  0.78
T8        0.62      0.83  0.82  0.84  0.82  0.70  0.83  0.79  1.00  0.73  0.82
T9        0.53      0.77  0.76  0.79  0.76  0.60  0.78  0.72  0.73  1.00  0.82
T10       0.65      0.84  0.83  0.84  0.82  0.65  0.85  0.78  0.82  0.82  1.00

T1 = ChatGPT should be integrated into educational processes; T2 = ChatGPT used by students to write essays, efficiency and cheating (homework); T3 = ChatGPT in academia (research papers); T4 = ban ChatGPT in educational organizations or not?; T5 = ChatGPT passed/failed exams; T6 = how to use ChatGPT for writing (strategies); T7 = other AI tools for education and developments in the future; T8 = the possibility that ChatGPT will replicate students' papers; T9 = costs of education; T10 = how teachers could use ChatGPT.
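The similarities in Fig. 3 can be computed from one embedding vector per topic (e.g., the centroid of a topic's tweet embeddings). The sketch below shows the computation; the random input is a placeholder for the actual topic embeddings.

```python
# Sketch: pairwise cosine similarity between topic embeddings,
# as visualized in Fig. 3.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
topic_embeddings = rng.random((11, 384))  # placeholder: 11 topics x MiniLM dim

similarity_matrix = cosine_similarity(topic_embeddings)
print(np.round(similarity_matrix, 2))     # symmetric, 1.00 on the diagonal
```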
Sentiments of most prevalent educational topics (RQ3). To gain insights into how the most prev-
alent education topics related to ChatGPT were discussed, we examined the sentiments of all 86,934 tweets
related to education (Table 2) and the related 34,732 conversation tweets (i.e., 121,666 tweets in total; Fig. 4). On
average, the number of tweets with positive sentiment outweighed tweets with neutral and negative sentiment
throughout the first two months after ChatGPT’s release. Descriptively, positive tweets decreased, and negative
and neutral tweets increased over the two months. Before January 5 (see the vertical line in Fig. 4), the largest
share of tweets was positive every day after the release of ChatGPT. Only from January 5 onward did the shares of the three sentiments begin to alternate, such that the share of negative or neutral tweets was the highest on some days.

Figure 4. Sentiment in education per day. The vertical line is plotted on January 5, 2023, the first day on which the proportion of negative tweets was greater than the proportion of positive tweets.
Sentiments on each of the ten identified topics are shown in Fig. 5. The costs of education resulting from using
AI tools such as ChatGPT were discussed with the most positive sentiment (73.3%). This positive sentiment was
mainly because many people anticipated a reduction in the cost of education through AI technology such as
ChatGPT. The topic discussed second most positively was how educators could use ChatGPT (57.1% positive sentiment). The positive statements referred especially to the possibility of saving time for educators. For
example, the potential of ChatGPT to be used to create assignments for worksheets or exams was highlighted.
Additional topics were discussed with a diverse sentiment (e.g., banning ChatGPT in educational institutions
[44.1% negative, 25.3% neutral, 30.6% positive sentiment] or how to use ChatGPT for writing [32.6% negative,
30.2% neutral, 37.2% positive sentiment]). Whether ChatGPT will ever replicate student papers was discussed
most negatively (72.4% negative sentiment). These negative statements appear to reflect the view that ChatGPT
is only capable of producing schematic essays and is not as original or creative as human writers, thus incapable
of generating new ideas.
Discussion
We aimed to gain insights into an unvarnished and immediate global reaction to the release of ChatGPT, focusing
on the education-related topics users discussed and the sentiment of these discussions. Therefore, we analyzed
16,830,997 tweets posted in the first two months after the release of ChatGPT that included the word ChatGPT.
First, regarding the global reception on Twitter about ChatGPT (RQ1), we traced the rapid awareness of
ChatGPT shortly after its release. Whereas before November 30, 2022, not a single tweet worldwide contained
the word ChatGPT, the rise of ChatGPT after its release was almost unstoppable, with more than 100,000 tweets
per day at the beginning of December 2022 and over half a million by the end of January 2023. This massive
worldwide awareness is also confirmed by, for instance, the number of Google hits (707,000,000 when searching for ChatGPT on March 24, 2023) and the high number of users (an estimated 100 million monthly active users four months after launch18,50), indicating that ChatGPT is likely to have a lasting impact on personal and professional lives.
Second, education was the top content topic for the ChatGPT discussion beyond generic topics (e.g., how to
access ChatGPT). This is surprising because ChatGPT could drastically change professional practice in many
professions where creative text production is central (e.g., journalism, book authoring, marketing, business
reporting). Implications include that educational stakeholders (e.g., school/higher education administrators,
teachers/instructors, educational policy-makers) must develop guidelines for its use in their contexts.
Third, zooming in on education-related topics (RQ2), we found that both specific topics (e.g., students’ essay
writing, cheating, the capability of ChatGPT to pass exams) and broader issues (e.g., opportunities, limitations,
and consequences of the use of ChatGPT) were discussed. These topics may provide initial directions for edu-
cational stakeholders to develop ChatGPT usage guidelines.
Fourth, although the findings indicated that ChatGPT was generally discussed positively in the first two months
after ChatGPT’s release, the statements regarding education were more mixed. This aligns with previous research
on perceptions of technological innovations, which showed that users face most innovations with varying expec-
tations and emotions. Expectations and emotions are associated with attitudes ranging from absolute conviction
of the new technology’s usefulness and a positive attitude (“radical techno-optimists”) to complete rejection of
the new technology (“techno-pessimists”)51. Especially after January 5, when it was announced that ChatGPT
would be banned from New York City public schools’ devices and networks, the sentiment of the discussion
began to be more mixed. The mixed sentiment is not necessarily bad; many potentially positive and negative
effects of using ChatGPT in education must be carefully weighed. At the same time, it is important to consider
how rapid policy decisions, taken as a precaution without an opportunity to weigh evidence, can influence public debate on topics. This aspect is important because technologies are only used, and thus only unfold their potential (e.g.,
for educational processes) if users recognize their benefits22. The critical topic of how educators could integrate
ChatGPT into teaching–learning processes (e.g., teachers in their lessons) was addressed only in a few tweets.
This is interesting as the didactically meaningful integration of AI tools such as ChatGPT into teaching–learning
processes at schools and universities will certainly be a key question for education research, teacher education,
and daily practice. Moreover, as strong policy decisions such as complete bans also inform public opinion, they
might render it hard for scientific opinions on opportunities of new technologies such as ChatGPT to be heard.
For instance, some of the most critical educational and scientific challenges, like inequalities, heterogeneity, and
adaptivity that might be alleviated with AI tools, were not at the core of the public debate.
Finally, zooming into the sentiments of education-related topics (RQ3), we found that, for instance, the costs
of education and how educators could use ChatGPT for teaching were discussed positively (e.g., the possibility
of saving time). In contrast, whether ChatGPT can replicate student papers was discussed negatively. The
negative sentiment among tweets on the replication of students' papers by ChatGPT was fed, among others, by statements expressing that a technology like ChatGPT cannot replace humans in writing believable, human-like text (e.g., with the charm of human writing). However, research shows that people readily believe they can distinguish human text from AI text when they in fact cannot and are merely guessing, reflecting overconfidence (e.g.,52,53). As awareness (measured by tweet volume) grew, the range of sentiments increased. This
likely reflects a broader audience becoming aware of the technology beyond early adopters and the increased
time to consider the potential positive and negative consequences. It also may reflect an increased opportunity to
use the tool, resulting in both an awareness of its potential and an ability to learn firsthand about its weaknesses.
Limitations and future research. The results of this study must be interpreted considering at least the
following limitations. First, although we have thoroughly prepared the data for the analyses, problems arise due
to bots. Bots are computer programs that perform automated tasks, such as automatically retweeting tweets
with a specific hashtag. Bots are challenging to detect because their behavior constantly changes as technology
advances40. We used heuristics and text-based approaches to identify and remove tweets posted by bots from our
data but cannot guarantee that no tweets from bots remain in the data used. Second, because we limited our analysis to tweets in English, our sample skews geographically toward North America (around 57% of tweets). Thus, our findings may not
be generalizable to other regions. Third, insight into the sample used for this study is limited by the informa-
tion available through the Twitter API. For instance, we could not accurately determine whether politicians,
academics, entrepreneurs, or opportunists tweeted. However, we were able to provide some general insights into
the sample by analyzing, for instance, the sample’s experience with Twitter (operationalized by Twitter sign-up
date) or the users’ reach (operationalized by number of followers; see Figs. D and E and Table A in the appen-
dix). Fourth, we analyzed only the first two months of conversations on Twitter after the release of ChatGPT.
This means that subsequent discussions based on more experience with ChatGPT (educational stakeholders
had more time to figure out strengths and weaknesses and use ChatGPT for, e.g., teaching–learning scenarios)
or that included GPT-4 were not considered. However, it was precisely the approach of this study to capture the
unvarnished, rather unreflective reaction of humans to groundbreaking technological innovations. Moreover,
this study goes beyond previous findings on human reactions to ChatGPT, some of which only focused on the
first few days after the release of ChatGPT (e.g.31).
Our research approach was designed to address the overall international response of humans to the release
of ChatGPT, which encourages numerous potential future research directions. First, it would be interesting to better understand the interplay between science and public opinion. For instance, one could investigate whether Twitter users refer to science in their statements on ChatGPT, for example, by citing research papers in their tweets (cf. COVID-19 vaccinations, where public opinions on benefits and risks were driven by scientists' daily explanations of new preprint
research findings). Moreover, it would be interesting to gain insights into what scientists initially said about the
potentials and risks of GPT and especially ChatGPT, and if these opinions are reflected in public opinion. Second,
research needs to address the pedagogical qualities of human-AI interaction, which did not play a role in the
global response on Twitter in our data. While recent research aligns with this finding (e.g., examining institutional websites to see what references to ChatGPT refer to54), research is needed that examines how AI-powered programs like ChatGPT can be used in didactically meaningful and learning-effective ways (i.e., for high-quality teaching and learning). This may also include studies about best practices for using ChatGPT in teaching–learning
situations, such as teaching effectiveness (e.g., cognitive activation, feedback), cognitive load, offloading, and
adaptivity. Third, future research can address specific scenarios of using GPT in the classroom. This could include
studies examining how ChatGPT could lead to innovative forms of instruction, for instance, asking, "What can ChatGPT do that we would never have been able to do before (e.g., have it write five different essays on a topic and let it discuss with learners which one is the best and why)?". Also, whereas people on Twitter were
discussing using ChatGPT to cheat, studies should examine the distinction between learning and performance
(e.g., learning to write versus doing a homework writing assignment). With a performance mindset, one can
always cheat (e.g., use a calculator for homework). However, the main issue is not passing an exam or presenting an essay; it is building the knowledge needed to pass the exam or write the essay. These examples illustrate how
a global human response to technological innovation might differ from a more scientifically informed response. However, human responses can help scientists identify such blind spots in public discussions and better explore and communicate them in the future. Finally, this study only provides insight into the initial human reactions to ChatGPT (the first two months after its release). Therefore, future work is encouraged to gain insight
into the long-term effects of ChatGPT. This could include exploring subgroup effects for subjects such as art,
music, literature, STEM, and history, as learning about literature (e.g., writing styles or foreign languages) might
afford entirely different GPT scenarios than learning about STEM. Furthermore, we encourage researchers to conduct surveys and interview studies targeting various stakeholder groups to gain additional insights into their use of ChatGPT and its perceived challenges and affordances. Indeed, a steady stream of questionnaire-based
studies of varying quality has emerged quickly in the first months of 2023, offering valuable insights into how
certain user groups perceive and interact with ChatGPT. However, while providing meaningful and interesting
findings, these studies are limited to certain user groups with specific local contexts and certain aspects of their
user experiences—aspects identified and studied through their respective questionnaires. For example, typical
studies of this category analyzed the perceptions and expectations of 50 senior Computer Engineering students
in Abu Dhabi55, 2100 students from a Ghanaian university56, or 288 participants from a convenience sample
study at an Indian university57. Compared to these questionnaire studies on narrow and focused user groups, our
research approach is much broader and more globally oriented to address research questions that these smaller,
targeted questionnaire studies cannot adequately address. Specifically, we sought a comprehensive, worldwide
compilation of active user responses (not passive answers to predefined questions) to ChatGPT in its initial
weeks of existence. This was a specific time window in which user reactions might still have been spontaneous and uninfluenced by the repeated pro and con arguments about GPT that have since permeated media coverage.
Nevertheless, there are quite a few similarities between the positive and negative evaluations that Twitter users expressed in the initial phase of ChatGPT and those that different user groups later reported in questionnaire studies.
It will soon be interesting to analyze the trajectories of different user perceptions and expectations
regarding generative AI systems such as ChatGPT over the years. However, studying these trajectories requires
a sequence of snapshots at different points in time, as we provided in our study for the birth period of ChatGPT.
Conclusion
To conclude, ChatGPT is perhaps unique in how it exploded into the conversation and is only comparable to
digital tools such as the Internet and computers that have proven to be of similar transformative magnitude.
The availability of social media, particularly Twitter, allowed many people to learn about and discuss generative
AI quickly. ChatGPT was also particularly impressive in its capabilities, far more functional, accessible, and
conversational than other publicly available large language models. This ability for a worldwide conversation in
real-time about the exploration of a potentially transformative digital tool is illustrated by the over half a mil-
lion daily tweets we see only two months after ChatGPT’s release. This rapid awareness in a space with an active
academic user base allowed educators to participate in learning about and exploring the use of this new tool
(and, for many, over an academic break period when they perhaps had more time and capacity than usual). For
these reasons, it is comprehensible that education was the third most frequent topic of tweets during this period, following general tweets about AI and sample prompts (use cases). Twitter allows technologically savvy
educators to discuss new tools, share best practices, and try on policy positions among their peers. Twitter may
not be representative of typical teachers. However, those using social media are likely to be among early adopters
and thought leaders on using educational technology in their schools, so their Tweets may be a bellwether of
things to come. When investigating new tools such as ChatGPT, communicating with peers can open our eyes
to their challenges and opportunities. Hopefully, these conversations will continue into more differentiated and
educationally relevant discussions as we gain more experience with generative AI tools.
Data availability
The datasets generated and analyzed during the current study are not publicly available, as we use Twitter data that cannot be completely anonymized without significant effort (i.e., individuals are identifiable through the tweets), but they are available from the corresponding author on reasonable request.
References
1. UNESCO. Beijing Consensus on artificial intelligence and education. United Nations Educational, Scientific and Cultural Organiza-
tion (2019).
2. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier,
E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Kasneci, G. ChatGPT
for good? On opportunities and challenges of large language models for education. Learn. Indiv. Diff. 103, 102274. https://doi.org/
10.1016/j.lindif.2023.102274 (2023).
3. Warren, T. Microsoft to demo its new ChatGPT-like AI in Word, PowerPoint, and Outlook soon. The Verge. https://www.theverge.com/2023/2/10/23593980/microsoft-bing-chatgpt-ai-teams-outlook-integration (2023).
4. Rudolph, J., Tan, S., & Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn.
Teach. 6(1). https://doi.org/10.37074/jalt.2023.6.1.9 (2023).
5. Rogers, E. M. Diffusion of innovations (4th ed.). Free Press (2010).
6. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal,
S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Amodei, D. Language
models are few-shot learners. https://doi.org/10.48550/ARXIV.2005.14165 (2020).
7. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J.,
Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. Training language models
to follow instructions with human feedback. https://doi.org/10.48550/ARXIV.2203.02155 (2022).
8. Baidoo-Anu, D. & Owusu Ansah, L. Education in the era of generative artificial intelligence (AI): Understanding the potential
benefits of ChatGPT in promoting teaching and learning. SSRN Electron. J. https://doi.org/10.2139/ssrn.4337484 (2023).
9. Zhang, B. Preparing educators and students for ChatGPT and AI technology in higher education: Benefits, limitations, strategies, and
implications of ChatGPT & AI Technologies. https://doi.org/10.13140/RG.2.2.32105.98404 (2023).
10. Deng, J., & Lin, Y. The benefits and challenges of ChatGPT: An overview. Front. Comput. Intell. Syst. 2(2), 81–83. https://doi.org/
10.54097/fcis.v2i2.4465 (2023).
11. Hattie, J. & Timperley, H. The power of feedback. Rev. Educ. Res. 77(1), 81–112. https://doi.org/10.3102/003465430298487 (2007).
12. Wisniewski, B., Zierer, K., & Hattie, J. The power of feedback revisited: A meta-analysis of educational feedback research. Front.
Psychol. 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087 (2020).
13. Anders, B. A. Why ChatGPT is such a big deal for education. C2C Digital Mag. 1(18). https://scholarspace.jccc.edu/c2c_online/
vol1/iss18/4 (2023).
14. Lo, C. K. What Is the Impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 13(4), 410. https://doi.org/10.
3390/educsci13040410 (2023).
15. Sok, S. & Heng, K. ChatGPT for education and research: A review of benefits and risks. SSRN Electron. J. https://doi.org/10.2139/
ssrn.4378735 (2023).
16. Pavlik, J. V. Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and
media education. J. Mass Commun. Educ. 78(1), 84–93. https://doi.org/10.1177/10776958221149577 (2023).
17. Sallam, M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: Systematic
review on the future perspectives and potential limitations [Preprint]. Health Inf. https://doi.org/10.1101/2023.02.19.23286155
(2023).
18. Trust, T., Whalen, J. & Mouza, C. Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education. Contemp.
Issues Technol. Teach. Educ. 23(1), 1–13 (2023).
19. Kohnke, L., Moorhouse, B. L., & Zou, D. ChatGPT for language teaching and learning. RELC J. https://doi.org/10.1177/00336882231162868 (2023).
20. Fishman, B. J. Possible futures for online teacher professional development. In C. Dede, A. Eisenkraft, K. Frumin, & A. Hartley
(Eds.), Teacher learning in the digital age. Online professional development in STEM education (pp. 3–31). Harvard Education Press
(2016).
21. Zhai, X. ChatGPT for next generation science learning. SSRN Electron. J. https://doi.org/10.2139/ssrn.4331313 (2023).
22. Marangunić, N. & Granić, A. Technology acceptance model: A literature review from 1986 to 2013. Univ. Access Inf. Soc. 14(1),
81–95. https://doi.org/10.1007/s10209-014-0348-1 (2015).
23. Fishbein, M., & Ajzen, I. Belief, attitude, intention, and behavior: An introduction to theory and research (Addison-Wesley, 1975).
24. Fishbein, M. A theory of reasoned action: Some applications and implications. Nebr. Symp. Motiv. 27, 65–116 (1979).
25. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T (1991).
26. Ajzen, I. The theory of planned behavior: Frequently asked questions. Hum. Behav. Emerg. Technol. 2(4), 314–324. https://doi.org/
10.1002/hbe2.195 (2020).
27. Scherer, R., Siddiq, F. & Tondeur, J. The technology acceptance model (TAM): A meta-analytic structural equation modeling
approach to explaining teachers’ adoption of digital technology in education. Comput. Educ. 128, 13–35. https://doi.org/10.1016/j.
compedu.2018.09.009 (2019).
28. Valor, C., Antonetti, P., & Crisafulli, B. Emotions and consumers’ adoption of innovations: An integrative review and research
agenda. Technol. Forecast. Soc. Change 179, 121609. https://doi.org/10.1016/j.techfore.2022.121609 (2022).
29. Lerner, J. S., Li, Y., Valdesolo, P. & Kassam, K. S. Emotion and decision making. Annu. Rev. Psychol. 66(1), 799–823. https://doi.
org/10.1146/annurev-psych-010213-115043 (2015).
30. Bagozzi, R. P., Gopinath, M. & Nyer, P. U. The role of emotions in marketing. J. Acad. Mark. Sci. 27(2), 184–206. https://doi.org/
10.1177/0092070399272005 (1999).
31. Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. "I think this is the most disruptive technology": Exploring sentiments of ChatGPT early adopters using Twitter data. https://doi.org/10.48550/ARXIV.2212.05856 (2022).
32. Stokel-Walker, C. AI bot ChatGPT writes smart essays—Should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7 (2022).
33. Calabrese, C., Ding, J., Millam, B. & Barnett, G. A. The uproar over gene-edited babies: A semantic network analysis of CRISPR
on Twitter. Environ. Commun. 14(7), 954–970. https://doi.org/10.1080/17524032.2019.1699135 (2020).
34. Fütterer, T. et al. Was bewegt Lehrpersonen während der Schulschließungen? Eine Analyse der Kommunikation im Twitter-Lehrerzimmer über Chancen und Herausforderungen digitalen Unterrichts [What concerns teachers during school closures? An analysis of communication in the Twitter staffroom about opportunities and challenges of digital teaching]. Z. Erzieh. 24, 443–477. https://doi.org/10.1007/s11618-021-01013-8 (2021).
35. Mahdikhani, M. Predicting the popularity of tweets by analyzing public opinion and emotions in different stages of Covid-19
pandemic. Int. J. Inf. Manag. Data Insights 2(1), 100053. https://doi.org/10.1016/j.jjimei.2021.100053 (2022).
36. Rosenberg, J. M., Borchers, C., Dyer, E. B., Anderson, D. & Fischer, C. Understanding public sentiment about educational reforms:
The next generation science standards on Twitter. AERA Open 7, 233285842110242. https://doi.org/10.1177/23328584211024261
(2021).
37. Fischer, C. et al. Mining big data in education: Affordances and challenges. Rev. Res. Educ. 44(1), 130–160. https://doi.org/10.3102/0091732X20903304 (2020).
38. Howard, P. N. & Kollanyi, B. Bots, #Strongerin, and #Brexit: Computational propaganda during the UK-EU Referendum. SSRN
Electron. J. https://doi.org/10.2139/ssrn.2798311 (2016).
39. Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. BotOrNot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web (WWW '16 Companion), 273–274. https://doi.org/10.1145/2872518.2889302 (2016).
40. Cresci, S. A decade of social bot detection. Commun. ACM 63(10), 72–83. https://doi.org/10.1145/3409116 (2020).
41. Grootendorst, M. BERTopic: Leveraging BERT and c-TF-IDF to create easily interpretable topics. Zenodo. https://doi.org/10.5281/zenodo.4430182 (2020).
42. Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794. Available online at: https://arxiv.org/pdf/2203.05794.pdf (2022).
43. Anwar, A., Ilyas, H., Yaqub, U., & Zaman, S. Analyzing QAnon on Twitter in context of US elections 2020: Analysis of user mes-
sages and profiles using VADER and BERT topic modeling. DG.O2021: The 22nd Annual International Conference on Digital
Government Research, 82–88. https://doi.org/10.1145/3463677.3463718 (2021).
44. Egger, R. & Yu, J. A topic modeling comparison between LDA, NMF, Top2Vec, and BERTopic to demystify Twitter posts. Front.
Sociol. 7, 886498. https://doi.org/10.3389/fsoc.2022.886498 (2022).
45. Hutto, C. & Gilbert, E. VADER: A parsimonious rule-based model for sentiment analysis of social media text. Proc. Int. AAAI
Conf. Web Soc. Med. 8(1), 216–225. https://doi.org/10.1609/icwsm.v8i1.14550 (2014).
46. Elbagir, S., & Yang, J. Sentiment analysis on Twitter with Python’s natural language toolkit and VADER sentiment analyzer. IAENG
Trans. Eng. Sci. 63–80. https://doi.org/10.1142/9789811215094_0005 (2020).
47. Borchers, C., Rosenberg, J. M., Gibbons, B., Burchfield, M. A., & Fischer, C. To scale or not to scale: Comparing popular sentiment
analysis dictionaries on educational Twitter data. Fourteenth International Conference on Educational Data Mining (EDM 2021),
Paris (2021).
48. Ince, J., Rojas, F. & Davis, C. A. The social media response to Black Lives Matter: How Twitter users interact with Black Lives Matter
through hashtag use. Ethn. Racial Stud. 40(11), 1814–1830. https://doi.org/10.1080/01419870.2017.1334931 (2017).
49. Marcec, R. & Likic, R. Using Twitter for sentiment analysis towards AstraZeneca/Oxford, Pfizer/BioNTech and Moderna COVID-
19 vaccines. Postgrad. Med. J. 98(1161), 544–550. https://doi.org/10.1136/postgradmedj-2021-140685 (2022).
50. Hu, K. ChatGPT sets record for fastest-growing user base—Analyst note. https://www.reuters.com/technology/chatgpt-sets-record-
fastest-growing-user-base-analyst-note-2023-02-01/#:~:text=The%20report%2C%20citing%20data%20from,analysts%20wrote%
20in%20the%20note (2023).
51. Tate, T. P., Doroudi, S., Ritchie, D., Xu, Y., & Warschauer, M. Educational research and AI-generated writing: Confronting the
coming Tsunami [Preprint]. EdArXiv. https://doi.org/10.35542/osf.io/4mec3 (2023).
52. Gunser, V. E., Gottschling, S., Brucker, B., Richter, S., Çakir, D. C., & Gerjets, P. The pure poet: How good is the subjective cred-
ibility and stylistic quality of literary short texts written with an artificial intelligence tool as compared to texts written by human
authors? Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022), 60–61. https://doi.
org/10.18653/v1/2022.in2writing-1.8 (2022).
53. Köbis, N. & Mossink, L. D. Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate
AI-generated from human-written poetry. Comput. Hum. Behav. 114, 106553. https://doi.org/10.1016/j.chb.2020.106553 (2021).
54. Veletsianos, G., Kimmons, R., & Bondah, F. ChatGPT and higher education: Initial prevalence and areas of interest. EDUCAUSE
Review. https://er.educause.edu/articles/2023/3/chatgpt-and-higher-education-initial-prevalence-and-areas-of-interest (2023).
55. Shoufan, A. Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access 11, 38805–38818.
https://doi.org/10.1109/ACCESS.2023.3268224 (2023).
56. Bonsu, E. M. & Baffour-Koduah, D. From the consumers’ side: Determining students’ perception and intention to use ChatGPT
in Ghanaian higher education. J. Educ. Soc. Multicult. 4(1), 1–29. https://doi.org/10.2478/jesm-2023-0001 (2023).
57. Raman, R., Mandal, S., Das, P., Kaur, T., Jp, S., & Nedungadi, P. University students as early adopters of ChatGPT: Innovation diffu-
sion study [Preprint]. In Review. https://doi.org/10.21203/rs.3.rs-2734142/v1 (2023).
Acknowledgements
This research was supported by the Postdoctoral Academy of Education Sciences and Psychology of the Hector
Research Institute of Education Sciences and Psychology, Tübingen, funded by the Baden-Württemberg Ministry
of Science, Research, and the Arts. We would also like to thank Cheyenne Engeser for her support in the initial
literature review for this study.
Author contributions
T.F. wrote the main manuscript text. A.A. conducted the formal analyses. All authors conceptualized, reviewed, and edited the manuscript. The authors are responsible for the content of this publication.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Competing interests
The authors declare no competing interests.
Additional information
Supplementary Information The online version contains supplementary material available at https://doi.org/
10.1038/s41598-023-42227-6.
Correspondence and requests for materials should be addressed to T.F.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.