European Commission updates guidance on generative AI use in research

New guidance encourages responsible integration of generative AI tools while protecting scientific integrity across research institutions.

The European Commission has revised its research guidelines to address the growing use of generative AI and strengthen principles of transparency and accountability.

The European Commission has updated the ERA Living Guidelines on the responsible use of generative AI in research, responding to the rapid uptake of AI tools across scientific work. The revised guidance aims to support researchers, research organisations and funding bodies in adopting generative AI while upholding core principles of research integrity.

The guidelines emphasise reliability, honesty, respect and accountability, including transparency about AI use, protection of privacy and confidential information, and responsibility for research outputs. They also stress that researchers remain ultimately responsible for scientific output and should verify AI-generated results.

New recommendations address risks linked to the use of generative AI by third parties, including its use in meetings, note-taking, summaries and document overviews, where confidential information, data protection or intellectual property rights may be affected. The guidelines encourage researchers and organisations to inform third parties about the use of such tools and the related risks.

A specific addition concerns the risk of ‘hidden prompts’, where instructions are covertly embedded in documents or inputs to influence generative AI tools. The guidelines call on research funding organisations to remain aware of such risks, to set rules prohibiting this kind of manipulation where relevant, and to introduce appropriate safeguards in the IT systems used to process information.

Developed through the European Research Area Forum, the guidelines are intended as a non-binding supporting tool for the research community. The Commission says they will be updated regularly and that users can continue to provide feedback as generative AI and the surrounding policy landscape evolve.

Why does it matter?

Generative AI is becoming part of everyday research workflows, from drafting and summarising to proposal preparation and document analysis. The updated guidelines show that research integrity risks now extend beyond individual misuse to organisational processes, third-party tools and hidden technical behaviours that may affect scientific judgement. Shared guidance across the European Research Area can help institutions adopt AI without weakening transparency, accountability or trust in research.
