AI in Science
2023
Table of Contents
Context
Methodology
1. Present use of AI in the scientific process
2. Future developments and applications
3. Future opportunities and challenges
4. Future perspectives on generative AI
Key takeaways
References
Acknowledgments
Annex: ERC survey
Context
The European Research Council (ERC) is the premier European funding organisation for excellent
frontier research. Since its establishment in 2007, the ERC has played a pivotal role within the EU’s
funding programmes for research and innovation. With a commitment to nurturing excellence, the
ERC gives its grantees the freedom to develop ambitious research projects. These projects have
the potential to propel advancements at the frontiers of knowledge and set a clear and inspirational
target for frontier research across Europe.
The ERC funds a rich and diverse portfolio of projects spanning all fields of science and scholarship,
without any predefined academic or policy priorities. These projects can have an impact well
beyond science and provide frontier knowledge and innovation to help solve societal challenges
and also contribute insights to shape and inform key EU policy objectives.
This report highlights how ERC-funded researchers are using artificial intelligence (AI) in their
scientific processes, and how they see its potential impact by 2030. It summarises the findings
of a foresight survey conducted among ERC grantees, which focused on their present use of
AI and their views on future developments by 2030, potential opportunities and risks, and the
future impact of generative AI in science, such as large language models (LLMs). Developed in
collaboration with DG Research & Innovation (R&I) and its unit Science Policy, Advice & Ethics/
Scientific Advice Mechanism (SAM), this report was prepared in the context of the upcoming
Scientific Opinion on the responsible uptake of AI in science (more info below). The aim is to
provide evidence that can inform the development and implementation of policies related to AI in
the realm of science.
The use of AI in scientific and scholarly practices remains a subject of ongoing academic and
policy debates at both European and international levels (Nature 2023, OECD 2023, Birhane et al.
2023, van Dis et al. 2023). AI’s deployment spans various disciplines and serves many purposes,
ranging from large-scale data processing, pattern recognition and prediction, and experiment design
and control, to the writing and peer review of scientific papers and grant proposals. The actual
and potential effects and drawbacks of AI in these contexts are widely debated.
This topic has come to the foreground of a European Commission policy initiative focusing on
the impact of AI in research and innovation (R&I) (Arranz et al. 2023b). In terms of research that
can inform policy-making, a CORDIS Results Pack on the use of AI in science has showcased a
collection of EU-funded projects on the topic (including 8 ERC projects). Furthermore, an upcoming
Mapping Frontier Research (MFR) report on AI from ERCEA (scheduled for release in early 2024) will
bolster these efforts within the framework of its Feedback to Policy (F2P) activities, as requested
by the ERC Scientific Council.
Methodology
The term ‘AI’ is generally defined here as “machines or agents that are capable of observing their
environment, learning, and based on the knowledge and experience gained, taking intelligent action
or proposing decisions” (Annoni et al. 2018, p.19). It includes a variety of models and approaches
(as defined in the European AI Act):
· logic- and knowledge-based (e.g. inference and deductive engines, symbolic reasoning
and expert systems, etc.);
· statistical approaches, Bayesian estimation, search and optimisation methods;
· and machine learning (e.g. supervised, unsupervised and reinforcement learning, deep
learning, etc.).
The survey also included specific questions on generative AI, e.g. large language models such as
ChatGPT. Generative AI is defined here as technologies that can “create new content—including text,
image, audio, and video—based on their training data and in response to prompts” (Lorenz et al. 2023).
This report is derived from a survey conducted among 1,034 ERC grantees out of a total of
1,046 ERC projects. The variation in numbers is attributed to cases where certain researchers
received more than one ERC grant. The projects were or are developing AI technology or systems,
or using AI in concrete applications, or studying its impact and effects, or a combination of these
elements. Spanning all ERC scientific domains (cf. ERC Panel structure, 2024 calls) - Physical
Sciences and Engineering, Life Sciences, Social Sciences and Humanities, and Synergy Grants -
the survey was completed by 300 ERC grantees (representing a 29% response rate)
between 16 October and 12 November 2023. Notably, direct quotes with attribution were included
in the report only when respondents gave explicit consent in the survey form (adhering to the
corresponding Data Protection Notice).
Figure 1: Distribution per scientific domain. Social Sciences & Humanities 29%, Life Sciences 18%, Synergy 3%; the remainder (around 50%) corresponds to Physical Sciences and Engineering.
Figure 2: Top countries of the Host Institution. Germany 21, France 14, Netherlands 9, Switzerland 6; Spain and Italy also feature among the top countries.
It is important to note that this portfolio of projects is not exhaustive and does not encompass all
ERC projects involved in the development, use or study of AI. More details about the methodology
used to build this portfolio will be included in an upcoming Mapping Frontier Research (MFR) report
on AI from ERCEA (scheduled for release in Q1 2024).
A comprehensive list of 14,829 ERC projects, as of 31 March 2023, was extracted from the
Commission’s internal CORDA database. This list spans all ERC scientific domains – Life Sciences
(LS), Physical Sciences and Engineering (PE), and Social Sciences and Humanities (SH) – covering
projects funded under the FP7, Horizon 2020, and Horizon Europe framework programmes, as well
as all ERC grant schemes (Starting, Consolidator, Advanced, Synergy and Proof of Concept).
To identify projects related to AI, a keyword search was performed on projects’ titles, abstracts and
keywords, resulting in a preliminary list of 1,453 projects. The definition of AI was primarily based on a
study of Australia’s National Science Agency CSIRO (Hajkowicz et al. 2022) and an OECD report
(OECD 2022).
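To illustrate this screening step, the sketch below shows a minimal keyword filter in Python. It is an illustration only: the keyword list and record fields are simplified placeholders, not the actual CORDA schema or the full set of AI terms derived from the CSIRO and OECD definitions.

# Minimal sketch of the keyword-based screening described above.
# AI_KEYWORDS is an illustrative placeholder, not the actual term list used.
AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "natural language processing", "reinforcement learning",
]

def mentions_ai(project: dict) -> bool:
    """Return True if any AI keyword appears in the title, abstract or keywords."""
    text = " ".join([
        project.get("title", ""),
        project.get("abstract", ""),
        project.get("keywords", ""),
    ]).lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

def screen(projects: list[dict]) -> list[dict]:
    """Reduce the full extract (here, 14,829 projects) to AI-related candidates."""
    return [p for p in projects if mentions_ai(p)]

A filter of this kind produced the initial candidate list, which was then refined as described below.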
Additional projects were identified through insights from ERCEA Scientific and Ethics Officers
and the internal classification exercise Mapping Frontier Research (MFR), which included policy
factsheets on ERC frontier research contribution to the European Green Deal, Europe fit for a digital
age, and EU4Health. A re-check through analysis of abstracts, reporting and results (CORDIS),
along with grant agreement information (CORTEX), grant status as of 21 September 2023, resulted
in a consolidated list of 1,046 projects.
1. Present use of AI in the scientific process
In a recent survey, the percentage of scientists from around the world reporting extensive use of AI
in their research increased from 12% in 2020 to 16% in 2021 (Elsevier 2022). Bibliometric analysis
reveals a consistent increase in the share of research papers mentioning AI or machine-learning
terms across all fields over the past decade, reaching around 8% in total (Van Noorden and Perkel
2023). Another study indicates an average year-on-year growth of 26% in publications related
to AI within specific fields of research over the past 5 years, surpassing the 17% average for all
preceding years (Hajkowicz et al. 2023). Another bibliometric analysis states that global scientific
activity has grown by around 5% per year between 2004 and 2021, while in the same period, the
annual growth rate of AI-related publications has consistently remained at or above 15%, except
for the years between 2010 and 2012 (Arranz et al. 2023a).
The ERC survey took place in this context of demonstrated use of AI in the scientific process, specifically
targeting ERC grantees already using or developing AI. When asked about their concrete use of AI in
scientific practices, the responses from ERC grantees illustrate the extensive and diverse applications
of AI in their scientific work. Many respondents also mentioned non-domain-specific uses of AI-based
tools (notably generative AI), such as support for text writing and editing, language translation, coding
and programming, generation of images for presentations, and literature retrieval.
Life Sciences
In the Life Sciences (LS) domain (18% of total respondents), ERC researchers are using AI methods,
for instance, to understand individual differences in large cohorts, and to make predictions about
diagnosis or outcome of targeted therapies. AI tools are seen as an essential support to analyse
datasets of genomic, epigenomic and transcriptomic data, and compare healthy and disease
states, as well as different disease states with one another. Furthermore, AI tools are used in this domain to analyse
large volumes of imaging data and to find complex patterns and/or to generate simulations and
models for clinical applications. In the field of neuroscience, AI can be critical for automatically
detecting neuronal synaptic connections or serving as computational models for human conscious
experiences. Moreover, it has become an essential tool in computational proteomics, leveraging
deep neural networks to decipher protein sequences and predict their properties.
2. Future developments and applications
Another potential development was cross-linking of data, identifying relevant methods, or discovering
related results from a different field. AI-based summarisation and consolidation of knowledge could
contribute to more interdisciplinary projects that require in-depth knowledge in different fields and, at
the same time, help to identify new and promising yet unexplored ideas or research questions.

“I think of AI first as an ‘exocortex’, amplifying our cognitive skills and in principle subserving our
objectives, our goals. This is the first level of AI impact. Then we will see autonomous AI take over
science to collaborate with humans. For this, we have to carefully provide them with objective
functions (goals). Later, humans may be too primitive to be relevant to research, but they may still
help asking questions, and leaving the details to AI. At some later stage, AIs will become a new species.”
Giulio Ruffini, Neuroelectrics, Spain & US
Expectations regarding the use of AI for scientific discovery, however, varied among respondents.
A possible future scenario is a human scientist who can “brainstorm on scientific ideas while discussing
with increasingly responsive and ‘science aware’ AI companion (this could be crucial for more
isolated scientists in certain institutions and countries)” (Jean Barbier, ICTP, Italy).
Others highlighted a broader role for AI in generating new scientific hypotheses, emphasising its
potential to be “in the driver’s seat of the process” (Søren Hauberg, Technical University of Denmark).
Only a few, however, mentioned a fully autonomous process. In this scenario, an AI system would
not only develop detailed plans for testing specific hypotheses (provided by a researcher), but also
take actions such as selecting the best resources in the lab or accessing additional data if needed.
For some respondents, a higher level of contribution by AI to the scientific process is still limited
by our understanding of what AI can actually do better and of the consequences of its use. This was
echoed by concerns over the current underdevelopment of uncertainty quantification, that is, the
assessment of the reliability of models and simulations, as well as concerns over the transparency of
AI systems. The need for human validation, or critical manual checking and interpretation, was also
underlined, given the risk of otherwise producing incorrect, biased or compromised data.
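To make the notion of uncertainty quantification more concrete, one common minimal approach is to train an ensemble of models and treat the spread of their predictions as a reliability signal. The Python sketch below is a generic illustration of this idea, assuming scikit-learn-style models with a predict method; it is not a technique attributed to any surveyed project.

import numpy as np

def ensemble_predict(models, x):
    """Return the mean prediction and its spread (std) across an ensemble."""
    predictions = np.array([model.predict(x) for model in models])
    return predictions.mean(axis=0), predictions.std(axis=0)

def flag_unreliable(models, x, threshold=0.5):
    """Flag inputs where ensemble disagreement exceeds a chosen threshold."""
    mean, spread = ensemble_predict(models, x)
    return mean, spread > threshold

Large disagreement between ensemble members signals predictions that warrant exactly the human validation and critical checking that respondents called for.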
“AI-based support tools can also be of help in clearly formulating project proposals and for
brainstorming on managerial aspects such as potential risks, data and team management practices,
writing concise summaries, and helping to report about published works.”
Tias Guns, KU Leuven, Belgium

Instead of AI systems acting as autonomous agents, our survey results point more towards a
collaboration between human scientists and AI. In some instances, “there is an outside chance that
something out of the paradigm is turned out by an AI search”, in the sense of a “second opinion”
(Javier Jimenez, Universidad Politecnica Madrid, Spain). “Humans in the loop”, or AI as a “research
assistant”, “co-pilot” or “hybrid decision support system” assisting humans in performing their tasks,
were mentioned more frequently.
In particular, many respondents expected that, for tasks such as writing and editing, AI-based
tools will play a role in automating certain aspects, e.g. extracting and summarising information
from documents, making visualisations, and restructuring and adapting texts for different purposes
(papers, grant proposals, conference presentations, press releases, etc.) – the human scientist acting
“as a conductor of an orchestra of AI tools” (Martin Schultz, Research Centre Jülich, Germany).
Some respondents believed that at least parts of scientific writing, reviewing and grant applications
that are seen as “mechanical” or “non-creative” (that is, not requiring “original research thinking”)
could be automated to increase accuracy and speed.
Yet, others underlined the continued value of human researchers in the reviewing and analysis of
scientific reports, literature, and data. Some expressed their concern about the overall impact on
grant submission, publishing and reviewing.
3. Future opportunities and challenges
When asked to assess the likelihood of opportunities and benefits of AI in the scientific process
by 2030, respondents expressed strong convictions. 93% found it either ‘highly likely’ or ‘likely’
that the use of AI in science would require the implementation of ethical guidelines for AI. These
guidelines would address concerns such as privacy and data protection, algorithmic fairness,
and the prevention of potential misuse.
“I imagine that by 2030, AI will revolutionise scientific progress by automating tedious work,
detecting patterns, and providing creative prompts. However, human guidance will remain crucial in
directing these AI capabilities.”
Ricardo Henriques, Instituto Gulbenkian de Ciência, Portugal

Another prevalent perception, shared by 88% of respondents, was the belief that AI will accelerate
the scientific process, e.g. by shortening the time needed to find scientific literature, analyse data,
discover patterns, and design studies or simulations.
While maintaining a positive outlook,
respondents exhibited more moderate optimism towards other key opportunities. Notably, 81%
found it ‘highly likely’ or ‘likely’ that AI-human collaboration would become widespread
in the scientific process. Concurrently, 79% expressed similar sentiments about the faster
development of prototypes and transfer of new technologies from the laboratory to the market.
Additional opportunities, also seen with moderate optimism, included knowledge sharing and
interdisciplinary work within and across scientific fields (75%), greater accuracy in the scientific
process, e.g. checking for errors and discrepancies, cleaning and classifying data, presenting
conflicting views (74%), and the advancement of solutions to societal challenges, e.g. climate
change or antimicrobial resistance (73%).
There is, however, more scepticism regarding the deployment of AI in certain tasks by 2030. A
large majority (76% in total) perceived it as ‘unlikely’ or ‘highly unlikely’ for AI to autonomously
conduct the entire scientific process end-to-end, e.g. generate hypotheses, carry out
experiments, interpret results, and make extrapolations.
To a lesser degree, 54% of respondents expressed a similar opinion about AI-based scientific
publication and peer-review, e.g. AI writing and reviewing research papers, evaluating and
monitoring of grants, or scientific fact-checking. These results, together with the belief in a
widespread AI-human collaboration as noted above, imply a prevailing view that AI will for the
most part serve as an “assistant” or “support” to human scientists, rather than operating as an
autonomous agent in the scientific process.
When comparing replies from different domains, respondents from Physical Sciences and
Engineering (PE) were more sceptical about AI-based scientific publication and peer-review, with
only 34% finding it ‘highly likely’ or ‘likely’.
On the other hand, respondents from the PE domain were more optimistic about the faster
development of prototypes (84% ‘highly likely’ or ‘likely’). This sentiment was also shared by 82%
of Life Sciences (LS) respondents. Respondents from Social Sciences and Humanities (SH) were
less optimistic (69%) or unsure (23%).
Respondents from LS were more positive about AI fostering high-risk and blue-sky thinking (64%)
when compared with respondents in PE (51%) and SH (50%). They were also the most positive
about the implementation of ethical guidelines for AI (98%), although respondents from SH and PE
were also very positive (95% and 90% respectively).
Figure: Share of respondents finding each development ‘highly likely’ or ‘likely’, by scientific domain (%)
Lead to AI-based scientific publication and peer-review: PE 34, LS 47, SH 45
Faster development of prototypes (lab to market): PE 84, LS 82, SH 69
Foster high-risk and blue-sky thinking: PE 51, LS 64, SH 50
Implementation of ethical guidelines for AI: PE 90, LS 98, SH 95
When it comes to the likelihood of challenges and risks associated with the use of AI in the
scientific process by 2030, the results reveal a combination of more moderate perspectives and
uncertainty among respondents.
The most worrying prospects were the lack of transparency and replicability (“black box”
problem) (35% found it ‘highly likely’ and 36% ‘likely’, 71% in total), and the widespread deployment
of AI systems that are intrusive, manipulative or discriminatory, e.g. social scoring, automated
behavioural profiling, mass surveillance (37% ‘highly likely’ and 42% ‘likely’, 79% in total).
Other concerns were also raised, closer to the internal workings of the scientific endeavour. These
included the risk of bias in data or models due to non-representative sampling, inaccurate labelling,
or low-quality curation, among others (71% ‘highly likely’ or ‘likely’). Additionally, respondents
expressed concerns about inequalities between researchers and organisations regarding access
to training data, computing capacity, and maintenance costs of AI technologies (68%). Moreover,
there were apprehensions about the reliance on correlation-driven models over deterministic
approaches, with 66% expressing concern, and 14% indicating ‘don’t know’.
Another concern raised was the risk of misuse, e.g. biohazards or the development of lethal drugs
(64%, though 16% answered ‘don’t know’), which partially confirms the emphasis on ethical aspects noted above.
“The key advancements would be (…) understanding what the AI model is predicting. This would provide
in many cases mechanistic inkling that we are sadly missing when we use AI and develop ways to ensure
AI models to not hallucinate.”
Marcelo Nollmann, CNRS, France
The concentration of AI resources and development in global players, tech companies, or
governments outside the EU was acknowledged as a concern by 63% (‘highly likely’ or ‘likely’,
with 17% responding ‘don’t know’). Additionally, the potential loss of creativity and diversity in AI
research due to the increasing dominance of the private sector or large tech companies was noted
by 61% of respondents. Though not ranking high on the list, more than half of the participants
(55%) expressed concerns about the potential lack of competencies in AI among researchers,
coupled with a shortage of AI specialists.
The least concerning aspect, as perceived by respondents, was the outsourcing of scientific
jobs to AI-based systems and its potential severe impact on researchers’ careers (59% found it
‘unlikely’ or ‘highly unlikely’). This aspect could be linked to the earlier mentioned strong belief in
AI-human collaboration rather than AI operating as an autonomous agent.
When looking into differences between domains, respondents from SH tended to be more
concerned about inequalities between researchers and organisations (82% mentioned it as ‘highly
likely’ or ‘likely’), losses of creativity and diversity in AI research (72%), risk of misuse (72%), and to
a lesser extent lack of transparency and replicability (76%).
Figure: Share of respondents finding each risk ‘highly likely’ or ‘likely’, by scientific domain (%)
Low diversity due to dominance of companies: PE 58, LS 45, SH 72
Enhance the risk of misuse: PE 60, LS 60, SH 72
Lack of transparency and replicability (“black box”): PE 70, LS 66, SH 76
Concentration of AI in global players (outside EU): PE 65, LS 51, SH 62
In terms of concentration of AI resources and development outside the EU, respondents from SH
and PE were the most concerned (62% and 65% respectively, compared with 51% from LS).
4. Future perspectives on generative AI
There were some differences in the replies according to domains, albeit to a limited extent. Researchers
from Social Sciences and Humanities (SH) were more inclined towards the opportunity for generative
AI to handle mostly repetitive or labour-intensive tasks and reduce language barriers. In the first case,
91% of SH researchers expressed this view compared to 83% in Physical Sciences and Engineering
(PE) and 81% in Life Sciences (LS). In the second case, 80% of SH researchers held this opinion, while
the corresponding figures were 74% for both PE and LS. However, SH researchers were more sceptical
about generative AI acting as science disseminators, with only 10% expressing this view compared
to 23% for both PE and LS. The idea of generative AI posing research questions and suggesting new
research directions was mentioned more often by respondents from LS (19%) compared to those from
PE (9%).
Overall, the survey results revealed a moderate level of concern regarding a specific set of challenges
and risks associated with generative AI, such as large language models like ChatGPT, in the scientific
process by 2030. 62% expressed concern that generative AI could spread false information or
inaccurate scientific knowledge. Additionally, 50% believed it could impact research integrity, potentially
encouraging plagiarism, authorial and source misrepresentation, or non-disclosure of the use of generative
AI. Furthermore, 46% noted that it might lead to overreliance and dependency on generative AI tools,
posing a threat to the development of researchers’ critical thinking and analytical skills.
Some respondents were also concerned about potential intellectual property rights issues, e.g. unlicensed
content in training data, potential copyright, patent and trademark infringement of AI creations, and ownership
of AI-generated works (37%), and increased dependency on commercial or private providers (35%).
The challenges considered least concerning included data access and open-source software (16%),
dilution of responsibility over the scientific process (16%), and the potential endangerment of peer-review
quality in journals, funding bodies, etc. (13%).
In terms of differences based on researchers’ domains, respondents from LS expressed more concern
about the spread of false information or inaccurate scientific knowledge (72%, compared to 56% from
SH). Increased dependency on commercial or private providers was more prominently noted by ERC
grantees from SH (45%, compared to 19% from LS). Grantees from PE tended to be more worried about
issues related to intellectual property rights (44%, compared with 28% from SH and 32% from LS).
Key takeaways
Within its broad scope of models and approaches, artificial intelligence (AI) is widely used across various
research fields and purposes, ranging from domain-specific tasks to more cross-cutting applications.
This widespread usage has been at least partially spurred by recent advances in generative AI, including
large language models. This report provides a snapshot of the current use of AI in the scientific process
from the viewpoint of ERC grantees. Additionally, it offers a future-oriented outlook on potential
developments, opportunities, and risks by 2030.
One notable future opportunity identified in the survey was the use of AI for data analysis and processing.
It could greatly speed up or help with specific aspects of the scientific process, such as literature
summarisation, pattern discovery, and experiment design.
Another clear direction highlighted was the need for ethical guidelines to govern AI, covering areas
like privacy and data protection, algorithmic fairness, and prevention of misuse. This was coupled with
concerns over lack of transparency and potential issues with intrusive, manipulative, or discriminatory
AI systems.
There was, however, more scepticism regarding the extent to which AI systems can contribute to
scientific discovery by 2030, especially in scenarios envisioning AI as a fully autonomous agent. Instead,
the prevailing view was that AI functions as a tool or support for human researchers, emphasising
collaboration rather than replacement of, or a threat to, scientific careers.
Generative AI tools, especially large language models, received positive feedback for their current and near-
future usefulness, particularly in handling repetitive or labour-intensive tasks such as literature reviews, content
generation (from presentations to papers), and improving access to documents in different languages. Still,
concerns persist regarding the spread of false information or inaccurate scientific knowledge, as well as
threats to research integrity, notably in the forms of plagiarism and source misrepresentation.
References
· Annoni, A., et al. (2018) Artificial Intelligence: A European Perspective, Craglia, M. editor(s), EUR
29425 EN, Publications Office of the European Union, Luxembourg.
· Arranz, D., et al. (2023a) Trends in the use of AI in science, R&I Paper Series, Working Paper
2023/4.
· Arranz, D., et al. (2023b), The impact of AI on R&I, R&I Paper Series, Literature review, Quarterly
research and innovation literature review (Issue 2023/Q2).
· Birhane, A. et al. (2023) Science in the age of large language models, Nature Reviews Physics 5, 277–280.
· van Dis, E. et al. (2023) ChatGPT: five priorities for research, Nature 614(7947): 224–226.
· Elsevier (2022) Research Futures Report 2.0.
· European Commission (2021), Artificial Intelligence Act, COM(2021) 206.
· Hajkowicz, S. et al. (2022) Artificial intelligence for science: Adoption trends and future
development pathways, CSIRO, Brisbane, Australia.
· Hajkowicz, S. et al. (2023) Artificial intelligence adoption in the physical sciences, natural sciences,
life sciences, social sciences and the arts and humanities: A bibliometric analysis of research
publications from 1960-2021, Technology in Society, Volume 74.
· Lorenz, P. et al. (2023), “Initial policy considerations for generative artificial intelligence”, OECD
Artificial Intelligence Papers, No. 1, OECD Publishing, Paris.
· Nature (2023), AI will transform science — now researchers must tame it, Editorial, Nature 621, 658.
· OECD (2022), OECD Framework for the Classification of AI Systems. OECD Digital Economy
Papers, No. 323.
· OECD (2023), Artificial Intelligence in Science: Challenges, Opportunities and the Future of
Research, OECD Publishing, Paris.
· Van Noorden, R. and Perkel, J.M. (2023) AI and science: what 1,600 researchers think, Nature 621(7980): 672–675.
Acknowledgments
We would like to thank the members of the ERCEA Scientific Department AI informal group for their
work (Endika Bengoetxea, Kristina Zakova, Anne-Sophie Paquez, Marietta Sionti, Valeria Croce, Nadia El
Mjiyad and André Vieira), led by Susana Nascimento and Jannik Sielmann for this report. We are also
grateful for the feedback from the ERCEA Feedback to Policy (F2P) network. We would also like to thank
our colleagues in DG R&I.002.Science Policy, Advice & Ethics/Scientific Advice Mechanism (SAM) and
E4.Industry 5.0 & AI in Science for the excellent collaboration.
Under the Horizon Europe programme, the European Commission has delegated a new task to the
ERC Executive Agency (ERCEA) to identify, analyse and communicate policy relevant research results to
Commission services. The ERCEA has developed a Feedback to Policy (F2P) framework to guide these
activities, adapted to the specificities of the ERC as a bottom-up funding programme.
This report is part of a series aiming to demonstrate the relevance of ERC-funded frontier science for
addressing acute societal, economic, and environmental challenges, and thus its contribution towards
key EU policy goals. This F2P series does not offer any policy recommendations.
More information: https://erc.europa.eu/projects-statistics/mapping-erc-frontier-research
Annex: ERC survey
Section A.
1. Please select the scientific domain of your last or ongoing ERC grant:
· Physical Sciences and Engineering
· Life Sciences
· Social Sciences and Humanities
· Synergy Grant
3. In which country is the Host Institution of your last or ongoing ERC grant based?
(drop-down menu)
4. Are you currently using AI in your research?
If yes, please detail how you are using AI in your scientific practices. Please also mention the
importance of AI in such practices (e.g. a support tool, an essential tool, etc.)
(up to 1000 characters)
5. If you are not currently using AI in your research, please select below the main reasons why (up to
three).
· Not relevant for my current research.
· Insufficient funding.
· Lack of high quality and domain-specific data.
· Shortage of AI competencies.
· Lack of infrastructure and computing resources.
· Insufficient interoperability and sharing of data.
· Inadequate support from your organisation.
· I don’t know.
· Other (please specify)
6. Does your Host Institution provide support to researchers using AI in their scientific practices?
No (If no, go to Section C)
Yes (If yes, please list below the type of support offered by your organisation).
· Development of internal AI tools.
· Licenses to external AI tools.
· Training on the use of AI.
· Guidance on the ethics of AI.
· Calls for interdisciplinary teams.
· I don’t know.
· Other (please specify)
Section C. Future applications and opportunities
7. By 2030, how could the use of AI further develop the scientific process in general and in your specific
field? In your view, what will be the three key applications or advancements?
(up to 2000 characters)
Section E. Future perspectives on generative AI
10. Please select below up to three opportunities and benefits of generative AI (e.g. large language
models such as ChatGPT) for the scientific process by 2030.
11. Please select below up to three challenges and risks of generative AI (e.g. large language models
such as ChatGPT) for the scientific process by 2030.
By 2030 generative AI will:
· Affect research integrity (e.g. plagiarism, source misrepresentation, etc.).
· Spread false information or inaccurate scientific knowledge.
· Raise issues of intellectual property rights (e.g. infringement, ownership of AI-generated work,
etc.).
· Lead to overreliance on AI (e.g. threatening researchers’ critical and analytical thinking).
· Dilute responsibility over the scientific process.
· Increase dependency on commercial or private providers.
· Require wider data access and open-source software.
· Endanger peer-review quality assurance (e.g. in journals, funding bodies, etc.)
· I don’t know.
· Other (please specify)
Neither the Agency nor any person acting on behalf of the Agency is responsible for the use that might be
made of the following information.
ERCEA’s reuse policy is implemented by the Commission Decision of 12 December 2011 on the reuse of Commission
documents.
Unless otherwise indicated (e.g. in individual copyright notices), content owned by the ERCEA and/or EU
on this website is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
This means that reuse is allowed, provided appropriate credit is given and changes are indicated.
For any use or reproduction of photos or other material that is not under the EU copyright, permission
must be sought directly from the copyright holders.
Pictures: © www.gettyimages.com
In person
All over the European Union there are hundreds of Europe Direct information centres. You can find the
address of the centre nearest you at: https://europa.eu/european-union/contact_en
Online
Information about the European Union in all the official languages of the EU is available on the Europa
website at: https://europa.eu/european-union/index_en
EU publications
You can download or order free and priced EU publications from EU Bookshop at:
https://publications.europa.eu/bookshop. Multiple copies of free publications may be obtained by contacting Europe Direct
or your local information centre (see https://europa.eu/european-union/contact_en).
Contact: ERC-Info@ec.europa.eu
https://erc.europa.eu/