Knowledge synthesis tool RASS presented by European Commission’s Joint Research Centre

The JRC says its Research ASSistant prototype is designed to support knowledge synthesis while allowing researchers to steer the system.


The European Commission’s Joint Research Centre (JRC) has presented a new AI tool designed to support faster literature reviews, as policymakers and researchers seek better ways to manage the growing volumes of scientific and online information. Called the Research Assistant, or RASS, the prototype is currently being used experimentally within the JRC.

The project responds to a familiar problem in research and policy work: synthesising large amounts of academic literature, news coverage, and web content quickly enough to support timely analysis. According to the publication, many existing AI research tools are built around strong automation, but this does not always align with how researchers actually work. Instead of removing the human researcher from the process, RASS is designed to keep users involved in steering queries, assessing outputs, and shaping the synthesis as it develops.

That human-in-the-loop model is central to the JRC’s argument. The publication links user involvement to trust, factuality, and accuracy, suggesting that AI-based knowledge synthesis is more credible when researchers can intervene rather than simply accept machine-generated results. In that sense, the report is not just presenting a new tool but also making a broader case for how AI should be integrated into evidence synthesis workflows.

The publication also identifies a wider methodological gap. While AI-powered tools for summarising and reviewing knowledge are developing quickly, the JRC says robust public validation frameworks for such systems are still lacking. To address that problem, the report sets out a dedicated evaluation model for AI-based knowledge synthesis tools. That framework operates across three levels (process, retrospective, and usability) and examines six dimensions: technical performance, content quality, domain relevance, methodological rigour, usability, and integration.

That gives the publication a significance beyond the tool itself. The more important contribution may be its attempt to define how AI systems used for research support should be judged, especially in environments where speed is valuable but reliability remains essential. Rather than treating literature-review automation as a purely technical challenge, the JRC is framing it as a question of evaluation, accountability, and trustworthiness.

The result is a more cautious and arguably more useful vision of AI in research. RASS is presented not as a replacement for expert judgement, but as a support system for faster and more manageable knowledge synthesis. That makes the story less about full automation and more about how public institutions may try to use AI in ways that remain testable, steerable, and methodologically defensible.
