
Living Guidelines on the Responsible Use of Generative AI in Research


Generative AI offers tremendous opportunities for research and science. At the same time, it poses significant risks for the sector, including the potential to generate and disseminate disinformation at scale, as well as other unethical uses with negative consequences for our societies.

Therefore, the European Research Area Forum decided to develop guidelines on the use of generative AI in research, addressed to researchers, research organisations and funding bodies from both public and private research ecosystems.

The guidelines are based on four key principles already outlined in two pre-existing frameworks: the European Code of Conduct for Research Integrity and the guidelines on trustworthy AI developed by the High-Level Expert Group on AI. These principles are: 1) Reliability, 2) Honesty, 3) Respect, and 4) Accountability.

While the guidelines are not binding, individual researchers and institutions are strongly encouraged to integrate and adapt them to their respective research contexts. As common directions on the responsible use of generative AI, they serve as a supporting tool to promote responsible practice and safeguard research integrity.

More information:

Guidelines

Factsheet 

These guidelines will be updated regularly to take account of the rapid technological development in this area and to address future challenges as they arise. The European Commission invites the research community to be part of the collaborative process of keeping these guidelines up to date and to contribute views and ideas on how to improve future versions.

Participate and share your views in this feedback form.

2024 EC Factsheet - Responsible Use of Generative AI in Research (PDF, 214.96 KB, English)
2024 EC Guidelines - Responsible Use of Generative AI in Research (PDF, 567.15 KB, English)