
Foresight:

Use and impact of Artificial Intelligence


in the scientific process


2023
Table of Contents
Context
Methodology
1. Present use of AI in the scientific process
2. Future developments and applications
3. Future opportunities and challenges
4. Future perspectives on generative AI
Key takeaways
References
Acknowledgments
Annex: ERC survey

Context
The European Research Council (ERC) is the premier European funding organisation for excellent
frontier research. Since its establishment in 2007, the ERC has played a pivotal role within the EU’s
funding programmes for research and innovation. With a commitment to nurturing excellence, the
ERC gives its grantees the freedom to develop ambitious research projects. These projects have
the potential to propel advancements at the frontiers of knowledge and set a clear and inspirational
target for frontier research across Europe.
The ERC funds a rich and diverse portfolio of projects spanning all fields of science and scholarship,
without any predefined academic or policy priorities. These projects can have an impact well
beyond science and provide frontier knowledge and innovation to help solve societal challenges
and also contribute insights to shape and inform key EU policy objectives.
This report highlights how ERC funded researchers are using artificial intelligence (AI) in their
scientific processes, and how they see its potential impact by 2030. It summarises the findings
of a foresight survey conducted among ERC grantees, which focused on their present use of
AI and their views on future developments by 2030, potential opportunities and risks, and the
future impact of generative AI in science, such as large language models (LLMs). Developed in
collaboration with DG Research & Innovation (R&I) and its unit Science Policy, Advice & Ethics/
Scientific Advice Mechanism (SAM), this report was prepared in the context of the upcoming
Scientific Opinion on the responsible uptake of AI in science (more info below). The aim is to
provide evidence that can inform the development and implementation of policies related to AI in
the realm of science.
The use of AI in scientific and scholarly practices remains a subject of ongoing academic and
policy debates at both European and international levels (Nature 2023, OECD 2023, Birhane et al.
2023, van Dis et al. 2023). AI’s deployment spans various disciplines and serves many purposes,
ranging from large-scale data processing, pattern detection and prediction generation, and
experiment design and control, to the writing and peer review of scientific papers and grant proposals. The actual
and potential effects and drawbacks of AI in these contexts are widely debated.
This topic has come to the foreground of a European Commission policy initiative focusing on
the impact of AI in research and innovation (R&I) (Arranz et al. 2023b). In terms of research that
can inform policy-making, the CORDIS Results Pack on the use of AI in science has showcased a
collection of EU-funded projects on the topic (including 8 ERC projects). Furthermore, an upcoming
Mapping Frontier Research (MFR) report on AI from ERCEA (scheduled for release in early 2024) will
bolster these efforts within the framework of its Feedback to Policy (F2P) activities, as requested
by the ERC Scientific Council.

Methodology
The term ‘AI’ is generally defined here as “machines or agents that are capable of observing their
environment, learning, and based on the knowledge and experience gained, taking intelligent action
or proposing decisions” (Annoni et al. 2018, p.19). It includes a variety of models and approaches
(as defined in the European AI Act):
· logic- and knowledge-based (e.g. inference and deductive engines, symbolic reasoning
and expert systems, etc.);
· statistical approaches, Bayesian estimation, search and optimization methods;
· and machine learning (e.g. supervised, unsupervised and reinforcement learning, deep
learning, etc.).
The survey also included specific questions on generative AI, e.g. large language models such as
ChatGPT. It is defined here as technologies that can “create new content—including text, image,
audio, and video—based on their training data and in response to prompts” (Lorenz et al. 2023).

This report is derived from a survey conducted among 1,034 ERC grantees out of a total of
1,046 ERC projects. The variation in numbers is attributed to cases where certain researchers
received more than one ERC grant. The projects were or are developing AI technology or systems,
or using AI in concrete applications, or studying its impact and effects, or a combination of these
elements. Spanning all ERC scientific domains (cf. ERC Panel structure, 2024 calls), namely
Physical Sciences and Engineering, Life Sciences, and Social Sciences and Humanities, plus Synergy
Grants, the survey was completed by 300 ERC grantees (a 29% response rate)
between 16 October and 12 November 2023. Notably, direct quotes with attribution were included
in the report only when respondents gave explicit consent in the survey form (adhering to the
corresponding Data Protection Notice).

Figure 1: Distribution per scientific domain: Physical Sciences & Engineering 50%, Social Sciences & Humanities 29%, Life Sciences 18%, Synergy 3%.

Figure 2: Top countries of the Host Institution: Germany 21, France 14, United Kingdom 12, Italy 11, Netherlands 9, Spain 7, Switzerland 6.
It is important to note that this portfolio of projects is not exhaustive and does not encompass all
ERC projects involved in the development, use or study of AI. More details about the methodology
used to build this portfolio will be included in the upcoming Mapping Frontier Research (MFR)
report on AI from ERCEA (scheduled for release in Q1 2024).
A comprehensive list of 14,829 ERC projects, as of 31 March 2023, was extracted from the
Commission’s internal CORDA database. This list spans all ERC scientific domains – Life Sciences
(LS), Physical Sciences and Engineering (PE), and Social Sciences and Humanities (SH) – covering
projects funded under the FP7, Horizon 2020, and Horizon Europe framework programmes, as well
as all ERC grant schemes (Starting, Consolidator, Advanced, Synergy and Proof of Concept).
To identify projects related to AI, a keyword search was performed on projects’ titles, abstracts and
keywords, resulting in approximately 1,453 projects. The definition of AI was primarily based on a
study of Australia’s National Science Agency CSIRO (Hajkowicz et al. 2022) and an OECD report
(OECD 2022).
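For illustration only, the following is a minimal sketch in Python of such a keyword screen over a project export. The file name, column names and keyword list are assumptions made for this example; the actual ERCEA exercise relied on a curated AI vocabulary (Hajkowicz et al. 2022; OECD 2022) and, as described below, on subsequent manual re-checks.

import csv

# Illustrative keyword list (assumed); the actual vocabulary derived from
# Hajkowicz et al. (2022) and OECD (2022) is not reproduced in this report.
AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "natural language processing", "reinforcement learning",
]

def is_ai_related(project: dict) -> bool:
    # Match any keyword against the concatenated title, abstract and keywords.
    text = " ".join(project.get(field, "") or ""
                    for field in ("title", "abstract", "keywords")).lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

# Hypothetical CSV export of the 14,829 projects drawn from CORDA.
with open("erc_projects.csv", newline="", encoding="utf-8") as f:
    projects = list(csv.DictReader(f))

candidates = [p for p in projects if is_ai_related(p)]
print(len(candidates), "candidate AI-related projects")  # ~1,453 before re-checks

A plain substring match like this over-matches (for example, "neural network" in a neuroscience abstract does not imply AI use), which is one reason the consolidated list described below was refined through expert screening and re-checks.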
Additional projects were identified through insights from ERCEA Scientific and Ethics Officers
and the internal classification exercise Mapping Frontier Research (MFR), which included policy
factsheets on ERC frontier research contribution to the European Green Deal, Europe fit for a digital
age, and EU4Health. A re-check through analysis of abstracts, reporting and results (CORDIS),
along with grant agreement information (CORTEX) and grant status as of 21 September 2023, resulted
in a consolidated list of 1,046 projects.

1. Present use of AI in the scientific process
In a recent survey, the percentage of scientists from around the world reporting extensive use of AI
in their research increased from 12% in 2020 to 16% in 2021 (Elsevier 2022). Bibliometric analysis
reveals a consistent increase in the share of research papers mentioning AI or machine-learning
terms across all fields over the past decade, reaching around 8% in total (Van Noorden and Perkel
2023). Another study indicates an average year-on-year growth of 26% in publications related
to AI within specific fields of research over the past 5 years, surpassing the 17% average for all
preceding years (Hajkowicz et al. 2023). Another bibliometric analysis states that global scientific
activity has grown by around 5% per year between 2004 and 2021, while in the same period, the
annual growth rate of AI-related publications has consistently remained at or above 15%, except
for the years between 2010 and 2012 (Arranz et al. 2023a).
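As a rough arithmetical illustration of what these compounded rates imply (a back-of-the-envelope sketch based on the figures just cited, not a result reported by the studies themselves): over the 17 years from 2004 to 2021,

\[ 1.15^{17} \approx 10.8 \qquad \text{versus} \qquad 1.05^{17} \approx 2.3, \]

i.e. sustained 15% annual growth multiplies AI-related output roughly elevenfold, while overall scientific output grows roughly 2.3-fold, an almost fivefold increase in the relative share of AI-related publications.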
The ERC survey took place in this context of demonstrated AI use in the scientific process, and it
specifically targeted ERC grantees already using or developing AI. When asked about their concrete use of AI
in scientific practices, the responses from ERC grantees illustrate the extensive and diverse applications
of AI in their scientific work. Many respondents also mentioned non-domain-specific uses of AI-based
tools (notably generative AI), that is, as support for text writing and editing, language translation, coding
and programming, generation of images for presentations, and literature retrieval, among others.

Life Sciences
In the Life Sciences (LS) domain (18% of total respondents), ERC researchers are using AI methods,
for instance, to understand individual differences in large cohorts, and to make predictions about
diagnosis or outcome of targeted therapies. AI tools are seen as essential support for analysing
genomic, epigenomic and transcriptomic datasets, and for comparing healthy and diseased
states, as well as different disease states. Furthermore, AI tools are used in this domain to analyse
large volumes of imaging data and to find complex patterns and/or to generate simulations and
models for clinical applications. In the field of neuroscience, AI can be critical for automatically
detecting neuronal synaptic connections or serving as computational models for human conscious
experiences. Moreover, it has become an essential tool in computational proteomics, leveraging
deep neural networks to decipher protein sequences and predict their properties.

Physical Sciences and Engineering


In the Physical Sciences and Engineering (PE) domain (50% of total respondents), the development
or use of AI ranges from fundamental or core AI in computer sciences to more applied research. It
is an essential tool in general for data analysis, and to advance simulations in physics, chemistry,
and biology. Cross-disciplinary applications include classifying hearing loss patterns based on a
diagnostic marker, running quantum-mechanical calculations for chemistry and materials design,
and modelling human-computer dialogue. In astronomy, AI and machine learning enhance the
control of instruments during observations of extrasolar planetary systems and help determine
types of massive stars photometrically. In engineering, neural networks are used for control and
anomaly/fault detection in complex systems, e.g. electricity grids, water distribution networks,
and autonomous vehicles. AI has also become a tool for analysing, classifying and forecasting
physical phenomena, e.g. weather patterns, air pollution, volcano deformation, and earthquakes.

Social Sciences and Humanities


In the Social Sciences and Humanities (SH) domain (29% of total respondents), neural networks and
natural language processing (NLP) tools are used for a wide range of applications, e.g. models for
handwritten text recognition and automatic speech recognition, or the automatic classification
of musical compositions. Large language models (LLMs) are leveraged for the analysis of historical
text datasets, from image segmentation and text mining up to conceptual and linguistic modelling.
Furthermore, AI is used to identify vocal biomarkers of stress in voice samples, to detect extreme
speech in online discussions, to identify hidden advertising online, or to classify media articles
related to finance and financial regulation (scandal, crisis, business as usual, etc.). AI is also a tool
for model-based data analysis for decoding and comparing mental representations in the brain or
predicting/simulating human learning and decision making.
Host institution
When asked about institutional support, half of respondents (51%) stated that their host institutions
provided support to researchers using AI in their scientific practices (while the other 49% stated no
support). When looking at replies from the scientific domains, 57% of respondents from LS and 52% from
PE signalled this support, compared with only 42% from SH.
The top four types of support were calls for interdisciplinary teams (55%), training on the use of AI (50%),
guidance on the ethics of AI (46%), and development of internal AI tools (41%).
Some respondents added that they had access to institutional computing resources or a high-performance
computing cluster (for instance GPU facilities, storage, cloud computing, etc.), including dedicated support from IT personnel.
In other cases, their organisations hosted or were part of an AI Centre of Excellence, or had in place dedicated
research groups or institutes and/or organisation-wide support structures. These structures offered access
to funding for teaching and training (also for students and young researchers), and connections to experts in
AI. Collaborative networks with local and multidisciplinary expertise on a broad range of aspects of AI were
also mentioned.

Figure 3: Type of support offered by the researchers’ organisation (% of respondents)
· Calls for interdisciplinary teams: 55%
· Training on the use of AI: 50%
· Guidance on the ethics of AI: 46%
· Development of internal AI tools: 41%
· Licenses to external AI tools: 25%
· I don’t know: 10%
· Other: 19%

2. Future developments and applications


ERC grantees were asked to envision the development of AI in the scientific process by 2030, both in
general and within their specific fields, together with key applications or advancements. The majority
said that AI will serve as a support tool, play a pivotal or essential role, and, in some cases, even accelerate,
revolutionise, or transform certain elements of the scientific process or of their own field.
The most common application was data analysis and processing, that is, AI will speed up the
analysis, quantification and visualisation of massive, complex datasets. It will rapidly and accurately
identify correlations or patterns that may not be retrieved by hypothesis-driven research. In some
fields, AI will be used to develop simulations or surrogate models for processes for which there is
incomplete information, for example detailed computational models of neural systems that can make
fine-grained predictions about cognition, behaviour and disease outcomes.

AI will blur boundaries between quantitative and qualitative research and make “mixed methods” (…) the normal procedure. It will not replace humans in qualitative interpretation, but it will provide for a back-and-forth between methodological approaches that still today is extremely labour intensive.
Eeva Luhtakallio, University of Helsinki, Finland
Many respondents noted that AI as a coding or programming aid, spurred by developments in natural
language processing, will be (and already is) an essential support to accelerate research. Also, by
suggesting parameters for trial setups, AI can help in designing experiments more quickly, towards “AI
guided workflows that will significantly reduce the number of experiments or computations to reach the
goals” (Matthias Scheffler, FHI-Max Planck Society, Germany).

Another potential development was cross-linking of data, identifying relevant methods, or
discovering related results from a different field. AI-based summarisation and consolidation of
knowledge could contribute to more interdisciplinary projects that require in-depth knowledge
in different fields and, at the same time, help to identify new or promising yet unexplored ideas or
research questions.

I think of AI first as an “exocortex”, amplifying our cognitive skills and in principle subserving our objectives, our goals. This is the first level of AI impact. Then we will see autonomous AI take over science to collaborate with humans. For this, we have to carefully provide them with objective functions (goals). Later, humans may be too primitive to be relevant to research, but they may still help asking questions, and leaving the details to AI. At some later stage, AIs will become a new species.
Giulio Ruffini, Neuroelectrics, Spain & US
Expectations regarding the use of AI for scientific discovery, however, varied among respondents.
A possible future scenario is a human scientist to “brainstorm on scientific ideas while discussing
with increasingly responsive and “science aware” AI companion (this could be crucial for more
isolated scientists in certain institutions and countries)” (Jean Barbier, ICTP, Italy).
Others highlighted a broader role for AI in generating new scientific hypotheses, emphasising its
potential for “being in the driver’s seat of the process” (Søren Hauberg, Technical University of
Denmark).
Only a few, however, mentioned a fully autonomous process. In this scenario, an AI system would
not only develop detailed plans for testing specific hypotheses (provided by a researcher), but also
take actions such as selecting the best resources in the lab or accessing additional data if needed.
For some respondents, a higher level of contribution by AI to the scientific process is still limited
by our understanding of what AI can actually do better and of the consequences of its use. This was
echoed by concerns over the current underdevelopment of uncertainty quantification, that is, the
assessment of the reliability of models and simulations, and also over the transparency of
AI systems. The need for human validation, or critical manual checking and interpretation, was also
underlined, lest AI lead to incorrect, biased or compromised data.

AI-based support tools can also be of help in clearly formulating project proposals and for brainstorming on managerial aspects such as potential risks, data and team management practices, writing concise summaries, and helping to report about published works.
Tias Guns, KU Leuven, Belgium

Instead of AI systems acting as autonomous agents, our survey results point more towards a
collaboration between human scientists and AI. In some instances, “there is an outside chance
that something out of the paradigm is turned out by an AI search”, in the sense of a “second
opinion” (Javier Jimenez, Universidad Politecnica Madrid, Spain). “Humans in the loop”, or AI as a
“research assistant”, “co-pilot” or “hybrid decision support system” assisting humans in performing
their tasks, were mentioned more frequently.
In particular, many respondents expected that, for tasks such as writing and editing, AI-based
tools will play a role in automating certain aspects, e.g. extracting and summarising information
from documents, making visualisations, and restructuring and adapting texts for different purposes
(papers, grant proposals, conference presentations, press releases, etc.), with the human scientist
“as a conductor of an orchestra of AI tools” (Martin Schultz, Research Centre Jülich, Germany).
Some respondents believed that at least parts of scientific writing, reviewing and grant applications
that are seen as “mechanical” or “non-creative” (that is, not requiring “original research thinking”)
could be automated to increase accuracy and speed.
Yet, others underlined the continued value of human researchers in the reviewing and analysis of
scientific reports, literature, and data. Some expressed their concern about the overall impact on
grant submission, publishing and reviewing.

3. Future opportunities and challenges
When asked to assess the likelihood of opportunities and benefits of AI in the scientific process
by 2030, respondents expressed strong convictions. 93% found it either ‘highly likely’ or ‘likely’
that the use of AI in science would require the implementation of ethical guidelines for AI. These
guidelines would address concerns such as privacy and data protection, algorithmic fairness,
and the prevention of potential misuse.
I imagine that by 2030, AI will revolutionise scientific progress by automating tedious work, detecting patterns, and providing creative prompts. However, human guidance will remain crucial in directing these AI capabilities.
Ricardo Henriques, Instituto Gulbenkian de Ciência, Portugal

Another prevalent perception, shared by 88% of respondents, was the belief that AI will accelerate
the scientific process, e.g. by shortening the time needed to find scientific literature, analyse data,
discover patterns, and design studies or simulations.

While maintaining a positive outlook, respondents exhibited more moderate optimism towards
other key opportunities. Notably, 81% found it ‘highly likely’ or ‘likely’ that AI-human collaboration
would become widespread in the scientific process. Concurrently, 79% expressed similar
sentiments about the faster development of prototypes and transfer of new technologies from the
laboratory to the market.
Additional opportunities, also seen with moderate optimism, included knowledge sharing and
interdisciplinary work within and across scientific fields (75%), greater accuracy of the scientific
process, e.g. checking for errors and discrepancies, cleaning and classifying data, presenting
conflicting views (74%), and the advancement of solutions to societal challenges, e.g. climate
change or antimicrobial resistance (73%).
There is, however, more scepticism regarding the deployment of AI in certain tasks by 2030. A
large majority (76% in total) perceived it as ‘unlikely’ or ‘highly unlikely’ for AI to autonomously
conduct the entire scientific process end-to-end, e.g. generate hypotheses, carry out
experiments, interpret results, and make extrapolations.
To a lesser degree, 54% of respondents expressed a similar opinion about AI-based scientific
publication and peer-review, e.g. AI writing and reviewing research papers, evaluating and
monitoring of grants, or scientific fact-checking. These results, together with the belief in a
widespread AI-human collaboration as noted above, imply a prevailing view that AI will for the
most part serve as an “assistant” or “support” to human scientists, rather than operating as an
autonomous agent in the scientific process.

Figure 4: Opportunities and benefits for the use of AI in science by 2030 (% of respondents; values in the order highly likely / likely / unlikely / highly unlikely / I don’t know; small segments were left unlabelled in the source chart)
· Accelerate scientific process: 51 / 37 / 9 (remaining segments unlabelled)
· AI conducting autonomously a scientific process: 7 / 12 / 29 / 47 / 6
· Foster high-risk and blue-sky thinking: 16 / 38 / 22 / 12 / 12
· Faster development of prototypes (lab to market): 33 / 46 / 7 / 13 (one segment unlabelled)
· Lead to AI-based publication and peer-review: 13 / 28 / 37 / 17 / 5
· Knowledge sharing and interdisciplinary work: 30 / 45 / 14 / 4 / 8
· Make the scientific process more accurate: 28 / 46 / 19 / 5 / 2
· Implementation of ethical guidelines for AI: 69 / 24 / 3 / 3 (one segment unlabelled)
· Advance solutions to societal challenges: 25 / 48 / 14 / 4 / 9
· Widespread AI-human collaboration: 37 / 44 / 9 / 3 / 8
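The headline figures quoted in the text can be recovered from the chart by summing the first two response categories. Below is a minimal sketch in Python using a few rows transcribed above; treating the unlabelled small segments as None is an assumption of this example, not part of the original survey analysis.

# Shares (%) per item in the order: highly likely, likely, unlikely,
# highly unlikely, I don't know; None marks segments unlabelled in the chart.
figure4 = {
    "Implementation of ethical guidelines for AI": (69, 24, 3, 3, None),
    "Accelerate scientific process": (51, 37, 9, None, None),
    "Widespread AI-human collaboration": (37, 44, 9, 3, 8),
}

for item, (highly_likely, likely, *_rest) in figure4.items():
    print(f"{item}: {highly_likely + likely}% 'highly likely' or 'likely'")
# Prints 93%, 88% and 81%, matching the percentages quoted in the text.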

When comparing replies from different domains, respondents from Physical Sciences and
Engineering (PE) were more sceptical about AI-based scientific publication and peer-review, with
only 34% finding it ‘highly likely’ or ‘likely’.
On the other hand, respondents from the PE domain were more optimistic about the faster
development of prototypes (84% ‘highly likely’ or ‘likely’). This sentiment was also shared by 82%
of Life Sciences (LS) respondents. Respondents from Social Sciences and Humanities (SH) were
less optimistic (69%) or unsure (23%).
Respondents from LS were more positive about AI fostering high-risk and blue-sky thinking (64%)
when compared with respondents in PE (51%) and SH (50%). They were also the most positive
about the implementation of ethical guidelines for AI (98%), although respondents from SH and PE
were also very positive (95% and 90% respectively).

Figure 5: Opportunities and benefits by domain (% ‘highly likely’ or ‘likely’)
· Lead to AI-based scientific publication and peer-review: PE 34, LS 47, SH 45
· Faster development of prototypes (lab to market): PE 84, LS 82, SH 69
· Foster high-risk and blue-sky thinking: PE 51, LS 64, SH 50
· Implementation of ethical guidelines for AI: PE 90, LS 98, SH 95

When it comes to the likelihood of challenges and risks associated with the use of AI in the
scientific process by 2030, the results reveal a combination of more moderate perspectives and
uncertainty among respondents.
The most worrying prospects were the lack of transparency and replicability (“black box”
problem) (35% found it ‘highly likely’ and 36% ‘likely’, 71% in total), and the widespread deployment
of AI systems that are intrusive, manipulative or discriminatory, e.g. social scoring, automated
behavioural profiling, mass surveillance (37% ‘highly likely’ and 42% ‘likely’, 79% in total).
Other concerns were also raised, closer to the internal workings of the scientific endeavour. These
included the risk of bias in data or models due to non-representative sampling, inaccurate labeling,
low-quality curation, among others (71% ‘highly likely’ or ‘likely’). Additionally, respondents
expressed concerns about inequalities between researchers and organisations regarding access
to training data, computing capacity, and maintenance costs of AI technologies (68%). Moreover,
there were apprehensions about the reliance on correlation-driven models over deterministic
approaches, with 66% expressing concern, and 14% indicating ‘don’t know’.
Another concern pointed out was the risk of misuse, e.g. biohazards or the development of lethal drugs
(64%, though with 16% ‘don’t know’), which partially confirms the emphasis on ethical aspects noted above.

The key advancements would be (…) understanding what the AI model is predicting. This would provide
in many cases mechanistic inkling that we are sadly missing when we use AI and develop ways to ensure
AI models to not hallucinate.
Marcelo Nollmann, CNRS, France

The concentration of AI resources and development in global players, tech companies, or
governments outside the EU was acknowledged as a concern by 63% (‘highly likely’ or ‘likely’,
with 17% responding ‘don’t know’). Additionally, the potential loss of creativity and diversity in AI
research due to the increasing dominance of the private sector or large tech companies was noted
by 61% of respondents. Though not ranking high on the list, more than half of the participants
(55%) expressed concerns about the potential lack of competencies in AI among researchers,
coupled with a shortage of AI specialists.
The least concerning aspect, as perceived by respondents, was the outsourcing of scientific
jobs to AI-based systems and its potential severe impact on researchers’ careers (59% found it
‘unlikely’ or ‘highly unlikely’). This aspect could be linked to the earlier-mentioned strong belief in
AI-human collaboration rather than AI operating as an autonomous agent.

Figure 6: Challenges and risks for the use of AI in science by 2030 (% of respondents; values in the order highly likely / likely / unlikely / highly unlikely / I don’t know)
· Increase risk of bias: 28 / 43 / 21 / 2 / 7
· Reliance on correlation over deterministic models: 24 / 42 / 17 / 3 / 14
· Lack of transparency and replicability (“black box”): 35 / 36 / 21 / 2 / 5
· Outsource scientific jobs to AI and impact on careers: 11 / 22 / 43 / 16 / 6
· Inequalities among researchers and organisations: 29 / 39 / 20 / 4 / 8
· Insufficient AI competencies and specialist shortage: 20 / 35 / 28 / 8 / 9
· Intrusive and discriminatory uses of AI: 37 / 42 / 11 / 2 / 8
· Low diversity due to dominance of companies: 24 / 37 / 26 / 4 / 10
· Concentration of AI in global players (outside EU): 20 / 43 / 18 / 3 / 17
· Enhance the risk of misuse: 27 / 37 / 18 / 2 / 16

When looking into differences between domains, respondents from SH tended to be more
concerned about inequalities between researchers and organisations (82% mentioned it as ‘highly
likely’ or ‘likely’), loss of creativity and diversity in AI research (72%), risk of misuse (72%), and to
a lesser extent lack of transparency and replicability (76%).

Figure 7: Challenges and risks by domain (% ‘highly likely’ or ‘likely’)
· Inequalities between researchers and organisations: PE 64, LS 56, SH 82
· Low diversity due to dominance of companies: PE 58, LS 45, SH 72
· Enhance the risk of misuse: PE 60, LS 60, SH 72
· Lack of transparency and replicability (“black box”): PE 70, LS 66, SH 76
· Concentration of AI in global players (outside EU): PE 65, LS 51, SH 62
In terms of concentration of AI resources and development outside the EU, respondents from SH
and PE were the most concerned (62% and 65% respectively, compared with 51% from LS).

4. Future perspectives on generative AI


When considering the opportunities and benefits of generative AI (e.g. large language models
such as ChatGPT) for the scientific process by 2030, researchers were asked to select up to three
key aspects. The most evident advantages lay in lower-level deployment, where generative AI
can efficiently handle repetitive or labour-intensive tasks, e.g. conducting literature reviews, writing
materials from presentations to papers, among others (85%), and improve access to documents in
different languages, while also reducing language barriers for non-native speakers (75%).
Another set of possibilities was also highlighted, though in lower numbers. These include the
promotion of productivity, for instance, researchers being able to write more analyses or papers
at a faster pace (38%). Additionally, there was mention of the tracking of abusive behaviour,
such as plagiarism (23%), and the potential for researchers to become science communicators,
disseminating scientific knowledge in a clear and easily accessible manner to a wide audience (19%).

While I do not have any doubt that AI will increase “productivity” in science in terms of the number of papers, I think this is the wrong way to measure progress. (...) There is already so much “salami-publishing” (publishing really small incremental steps with few new insights).
Marloes Eeftens, Swiss TPH, Switzerland

In terms of more radical future impacts, only 13% of the respondents believed that by 2030,
generative AI could pose research questions and offer suggestions for potential research directions.
This finding aligns with earlier-mentioned results
highlighting the (in)capacity of AI to autonomously conduct a scientific process from end to end.
Additionally, only 5% believed that generative AI could serve as science advisors to policymakers,
non-governmental organisations, and industry; this would involve synthesising scientific knowledge
and drafting briefings. Furthermore, a mere 3% indicated that AI could replace humans as research
assistants or participants by acting as autonomous agents, for example, in focus groups or simulations.
Figure 8: Opportunities and benefits of generative AI by 2030 (% of respondents)
· Take on repetitive or labour-intensive tasks: 85%
· Reduce language barriers: 75%
· Promote productivity in science: 38%
· Track abusive behaviour such as plagiarism: 23%
· Disseminate science to a wide audience: 19%
· Pose scientific hypotheses and new research directions: 13%
· Act as science advisors to policy-makers: 5%
· Replace research participants: 3%
· I don’t know: 3%
· Other: 2%

There were some differences in the replies according to domains, albeit to a limited extent. Researchers
from Social Sciences and Humanities (SH) were more inclined towards the opportunity for generative
AI to handle mostly repetitive or labour-intensive tasks and reduce language barriers. In the first case,
91% of SH researchers expressed this view compared to 83% in Physical Sciences and Engineering
(PE) and 81% in Life Sciences (LS). In the second case, 80% of SH researchers held this opinion, while
the corresponding figures were 74% for both PE and LS. However, SH researchers were more sceptical
about generative AI acting as science disseminators, with only 10% expressing this view compared
to 23% for both PE and LS. The idea of generative AI posing research questions and suggesting new
research directions was mentioned more often by respondents from LS (19%) compared to those from
PE (9%).
Overall, the survey results revealed a moderate level of concern regarding a specific set of challenges
and risks associated with generative AI, such as large language models like ChatGPT, in the scientific
process by 2030. 62% expressed concern that generative AI could spread false information or
inaccurate scientific knowledge. Additionally, 50% believed it could impact research integrity, potentially
encouraging plagiarism, authorial and source misrepresentation, or non-disclosure of the use of generative
AI. Furthermore, 46% noted that it might lead to overreliance and dependency on generative AI tools,
posing a threat to the development of researchers’ critical thinking and analytical skills.
Some respondents were also concerned about potential intellectual property rights issues, e.g. unlicensed
content in training data, potential copyright, patent and trademark infringement of AI creations, and ownership
of AI-generated works (37%), and increased dependency on commercial or private providers (35%).
The challenges considered least concerning included data access and open-source software (16%),
dilution of responsibility over the scientific process (16%), and the potential endangerment of peer-review
quality in journals, funding bodies, etc. (13%).

Figure 9: Challenges and risks of generative AI by 2030 (% of respondents)
· Spread false information or inaccurate knowledge: 62%
· Affect research integrity: 50%
· Lead to overreliance on AI: 46%
· Raise issues of intellectual property rights: 37%
· Dependency on commercial/private providers: 35%
· Require data access and open-source software: 16%
· Dilute responsibility over the scientific process: 16%
· Endanger peer-review quality assurance: 13%
· I don’t know: 1%
· Other: 1%

In terms of differences based on researchers’ domains, respondents from LS expressed more concern
about the spread of false information or inaccurate scientific knowledge (72%, compared to 56% from
SH). Increased dependency on commercial or private providers was more prominently noted by ERC
grantees from SH (45%, compared to 19% from LS). Grantees from PE tended to be more worried about
issues related to intellectual property rights (44%, compared with 28% from SH and 32% from LS).

Key takeaways
Within its broad scope of models and approaches, artificial intelligence (AI) is widely used across various
research fields and purposes, ranging from domain-specific tasks to more cross-cutting applications.
This widespread usage has been at least partially spurred by recent advances in generative AI, including
large language models. This report provides a snapshot of the current use of AI in the scientific process
from the viewpoint of ERC grantees. Additionally, it offers a future-oriented outlook on potential
developments, opportunities, and risks by 2030.
One notable future opportunity identified in the survey was the use of AI for data analysis and processing.
It could greatly speed up or help with specific aspects of the scientific process, such as literature
summarisation, pattern discovery, and experiment design.
Another clear direction highlighted was the need for ethical guidelines to govern AI, covering areas
like privacy and data protection, algorithmic fairness, and prevention of misuse. This was coupled with
concerns over lack of transparency and potential issues with intrusive, manipulative, or discriminatory
AI systems.
There was, however, more scepticism regarding the extent to which AI systems can contribute to
scientific discovery by 2030, especially in scenarios envisioning AI as a fully autonomous agent. Instead,
the prevailing view was that AI functions as a tool or support for human researchers, emphasising
collaboration rather than replacement or a threat to scientific careers.
Generative AI tools, especially large language models, received positive feedback for their current and near-
future usefulness, particularly in handling repetitive or labour-intensive tasks such as literature reviews, content
generation (from presentations to papers), and improving access to documents in different languages. Still,
concerns persist regarding the spread of false information or inaccurate scientific knowledge, as well as
threats to research integrity, notably in the forms of plagiarism and source misrepresentation.

References
· Annoni, A., et al. (2018) Artificial Intelligence: A European Perspective, Craglia, M. editor(s), EUR
29425 EN, Publications Office of the European Union, Luxembourg.
· Arranz, D., et al. (2023a) Trends in the use of AI in science, R&I Paper Series, Working Paper
2023/4.
· Arranz, D., et al. (2023b), The impact of AI on R&I, R&I Paper Series, Literature review, Quarterly
research and innovation literature review (Issue 2023/Q2).
· Birhane, A., et al. (2023), Science in the age of large language models, Nature Reviews Physics 5, 277–280.
· van Dis, E.A.M., et al. (2023), ChatGPT: five priorities for research, Nature, 614(7947): 224-226.
· Elsevier (2022) Research Futures Report 2.0.
· European Commission (2021), Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final.
· Hajkowicz, S. et al. (2022) Artificial intelligence for science: Adoption trends and future
development pathways, CSIRO, Brisbane, Australia.
· Hajkowicz, S. et al. (2023) Artificial intelligence adoption in the physical sciences, natural sciences,
life sciences, social sciences and the arts and humanities: A bibliometric analysis of research
publications from 1960-2021, Technology in Society, Volume 74.
· Lorenz, P. et al. (2023), “Initial policy considerations for generative artificial intelligence”, OECD
Artificial Intelligence Papers, No. 1, OECD Publishing, Paris.
· Nature (2023), AI will transform science — now researchers must tame it, Editorial, Nature 621, 658.
· OECD (2022), OECD Framework for the Classification of AI Systems. OECD Digital Economy
Papers, No. 323.
· OECD (2023), Artificial Intelligence in Science: Challenges, Opportunities and the Future of
Research, OECD Publishing, Paris.
· Van Noorden, R., Perkel, J.M. (2023), AI and science: what 1,600 researchers think, Nature, 621(7980): 672-675.

Acknowledgments
We would like to thank the members of the ERCEA Scientific Department AI informal group (Endika
Bengoetxea, Kristina Zakova, Anne-Sophie Paquez, Marietta Sionti, Valeria Croce, Nadia El Mjiyad
and André Vieira), led by Susana Nascimento and Jannik Sielmann, for their work on this report.
The feedback from the ERCEA Feedback to Policy (F2P) network was also much appreciated. We would
also like to thank our colleagues in DG R&I.002.Science Policy, Advice & Ethics/Scientific Advice
Mechanism (SAM) and E4.Industry 5.0 & AI in Science for the excellent collaboration.
Under the Horizon Europe programme, the European Commission has delegated a new task to the
ERC Executive Agency (ERCEA) to identify, analyse and communicate policy relevant research results to
Commission services. The ERCEA has developed a Feedback to Policy (F2P) framework to guide
these activities, adapted to the specificities of the ERC as a bottom-up funding programme.
This report is part of a series aiming to demonstrate the relevance of ERC-funded frontier science for
addressing acute societal, economic, and environmental challenges, and thus its contributions towards
key EU policy goals. This F2P series does not offer any policy recommendations.
More information: https://erc.europa.eu/projects-statistics/mapping-erc-frontier-research

Annex: ERC survey
Section A.
1. Please select the scientific domain of your last or ongoing ERC grant:
· Physical Sciences and Engineering
· Life Sciences
· Social Sciences and Humanities
· Synergy Grant

2. Please detail your specific field of research.


(up to 1000 characters)

3. In which country is the Host Institution of your last or ongoing ERC grant based?
(drop-down menu)

Section B. Present use of AI


4. Artificial Intelligence (AI) can be used as a tool in the scientific process. Either within or outside
your ERC grant, are you currently using AI in your research or as support to your scientific practices?
No (If no, go to Question 5)
Yes

If yes, please detail how you are using AI in your scientific practices. Please also mention the
importance of AI in such practices (e.g. a support tool, an essential tool, etc.)
(up to 1000 characters)

5. If you are not currently using AI in your research, please select below the main reasons why (up to
three).
· Not relevant for my current research.
· Insufficient funding.
· Lack of high quality and domain-specific data.
· Shortage of AI competencies.
· Lack of infrastructure and computing resources.
· Insufficient interoperability and sharing of data.
· Inadequate support from your organisation.
· I don’t know.
· Other (please specify)

6. Does your Host Institution provide support to researchers using AI in their scientific practices?
No (If no, go to Section C)
Yes (If yes, please list below the type of support offered by your organisation).
· Development of internal AI tools.
· Licenses to external AI tools.
· Training on the use of AI.
· Guidance on the ethics of AI.
· Calls for interdisciplinary teams.
· I don’t know.
· Other (please specify)

Section C. Future applications and opportunities
7. By 2030 how could the use of AI further develop the scientific process in general and in your specific
field? In your view, what will be the three key applications or advancements?
(up to 2000 characters)

8. Please assess the following statements on potential opportunities and benefits.


By 2030 the use of AI in the scientific process will:
(four-point scale: highly likely, likely, unlikely, highly unlikely)
· Accelerate the scientific process (e.g. by shortening literature reviews, discovering patterns,
designing studies, etc.).
· Lead to AI conducting autonomously a scientific process end-to-end.
· Lead to widespread AI-human collaboration in the scientific process.
· Drive high-risk and blue-sky thinking that leads to scientific breakthroughs.
· Make the scientific process more accurate (e.g. checking for errors and discrepancies, cleaning
and classifying data, etc.).
· Lead to AI-based scientific publication and peer-review (e.g. AI writing and reviewing papers,
evaluating grants, etc.).
· Require the implementation of ethical guidelines for AI (e.g. privacy and data protection,
algorithmic fairness, misuse, etc.).
· Allow for faster development of prototypes (from the laboratory to the market).
· Improve knowledge sharing and interdisciplinary work.
· Advance solutions to societal challenges (e.g. climate change, antimicrobial resistance, etc).
· Other (please specify).

Section D. Future challenges and risks


9. Please assess the following statements on potential challenges and risks.
By 2030 the use of AI in the scientific process will:
(four-point scale: highly likely, likely, unlikely, highly unlikely)
· Increase the risk of bias (e.g. due to non-representative sampling, low-quality data curation, etc.).
· Increase reliance on correlation-driven models over deterministic approaches.
· Lead to lack of transparency and replicability (“black box”).
· Outsource scientific jobs to AI and severely impact researchers’ careers.
· Reinforce inequalities among researchers and organisations (e.g. access to training data,
infrastructure, etc.).
· Lead to insufficient AI competencies and shortage of specialists.
· Lead to intrusive and discriminatory uses of AI (e.g. social scoring, automated profiling, mass
surveillance, etc.).
· Lead to low diversity of AI research due to dominance of private or large companies.
· Be impaired by the concentration of AI development in other global players (outside the EU).
· Enhance the risk of misuse (e.g. bio hazard, lethal drugs development, etc.).
· Other (please specify)

Section E. Future perspectives on generative AI
10. Please select below up to three opportunities and benefits of generative AI (e.g. large language
models such as ChatGPT) for the scientific process by 2030.

By 2030 generative AI will:


· Take on repetitive or labour-intensive tasks.
· Pose scientific hypotheses and new research directions.
· Promote productivity in science (e.g. write more papers and faster).
· Reduce language barriers (e.g. writing and accessing documents in different languages).
· Track abusive behaviour such as plagiarism.
· Replace research participants (e.g. in focus groups or simulations).
· Disseminate science to a wide audience.
· Act as science advisors to policymakers and other stakeholders.
· I don’t know.
· Other (please specify)

11. Please select below up to three challenges and risks of generative AI (e.g. large language models
such as ChatGPT) for the scientific process by 2030.
By 2030 generative AI will:
· Affect research integrity (e.g. plagiarism, source misrepresentation, etc.).
· Spread false information or inaccurate scientific knowledge.
· Raise issues of intellectual property rights (e.g. infringement, ownership of AI-generated work,
etc.).
· Lead to overreliance on AI (e.g. threatening researchers’ critical and analytical thinking).
· Dilute responsibility over the scientific process.
· Increase dependency on commercial or private providers.
· Require wider data access and open-source software.
· Endanger peer-review quality assurance (e.g. in journals, funding bodies, etc.)
· I don’t know.
· Other (please specify)

12. Would you like to make any final or additional comments?


(up to 1000 characters)

Neither the Agency nor any person acting on behalf of the Agency is responsible for the use that might be
made of the following information.

Luxembourg: Publications Office of the European Union, 2023

© European Research Council Executive Agency, 2023

ERCEA’s reuse policy is implemented by the Commission Decision of 12 December 2011 on the reuse
of Commission documents.

Unless otherwise indicated (e.g. in individual copyright notices), content owned by the ERCEA and/or the EU
in this document is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
This means that reuse is allowed, provided appropriate credit is given and changes are indicated.

For any use or reproduction of photos or other material that is not under the EU copyright, permission
must be sought directly from the copyright holders.

Pictures: © www.gettyimages.com

PDF: 978-92-9215-123-2 • doi: 10.2828/10694 • JZ-02-23-346-EN-N

Getting in touch with the EU

In person
All over the European Union there are hundreds of Europe Direct information centres. You can find the
address of the centre nearest you at: https://europa.eu/european-union/contact_en

On the phone or by email


Europe Direct is a service that answers your questions about the European Union. You can contact this
service:

– by freephone: 00 800 6 7 8 9 10 11 (certain operators may charge for these calls),

– at the following standard number: +32 22999696 or

– by email via: https://europa.eu/european-union/contact_en

Finding information about the EU

Online
Information about the European Union in all the official languages of the EU is available on the Europa
website at: https://europa.eu/european-union/index_en

EU publications
You can download or order free and priced EU publications from EU Bookshop at:
https://publications.europa.eu/bookshop. Multiple copies of free publications may be obtained by
contacting Europe Direct or your local information centre (see https://europa.eu/european-union/contact_en).

EU law and related documents


For access to legal information from the EU, including all EU law since 1952 in all the official language
versions, go to EUR-Lex at: http://eur-lex.europa.eu

Open data from the EU


The EU Open Data Portal (http://data.europa.eu/euodp/en) provides access to datasets from the EU.
Data can be downloaded and reused for free, both for commercial and non-commercial purposes.

Contact: ERC-Info@ec.europa.eu
https://erc.europa.eu/
