European Commission allocates €63.2 million to support AI innovation in health and online safety

The European Commission has announced €63.2 million in funding to support AI innovation, focusing on health, online safety and broader technological development. The initiative aims to accelerate the deployment of AI solutions across key sectors.

According to the Commission, the funding will support projects that improve healthcare systems and strengthen protections in digital environments. It is part of ongoing efforts to expand AI capabilities and adoption.

The programme also seeks to encourage collaboration between research institutions, businesses and public bodies. This approach is intended to foster innovation while addressing societal challenges linked to AI use.

The Commission states that the investment will contribute to strengthening Europe’s digital capacity and advancing AI development across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Knowledge synthesis tool RASS presented by European Commission’s Joint Research Centre

The European Commission’s Joint Research Centre (JRC) has presented a new AI tool designed to support faster literature reviews, as policymakers and researchers seek better ways to manage the growing volumes of scientific and online information. Called the Research Assistant, or RASS, the prototype is currently being used experimentally within the JRC.

The project responds to a familiar problem in research and policy work: synthesising large amounts of academic literature, news coverage, and web content quickly enough to support timely analysis. According to the publication, many existing AI research tools are built around strong automation, but this does not always align with how researchers actually work. Instead of removing the human researcher from the process, RASS is designed to keep users involved in steering queries, assessing outputs, and shaping the synthesis as it develops.

That human-in-the-loop model is central to the JRC’s argument. The publication links user involvement to trust, factuality, and accuracy, suggesting that AI-based knowledge synthesis is more credible when researchers can intervene rather than accept machine-generated results. In that sense, the report is not just presenting a new tool but also making a broader case for integrating AI into evidence synthesis workflows.

The publication also identifies a wider methodological gap. While AI-powered tools for summarising and reviewing knowledge are developing quickly, the JRC says robust public validation frameworks for such systems are still lacking. To address that problem, the report sets out a dedicated evaluation model for AI-based knowledge synthesis tools. That framework operates across three levels (process, retrospective, and usability) and examines six dimensions: technical performance, content quality, domain relevance, methodological rigour, usability, and integration.

That gives the publication a significance beyond the tool itself. The more important contribution may be its attempt to define how AI systems used for research support should be judged, especially in environments where speed is valuable but reliability remains essential. Rather than treating literature-review automation as a purely technical challenge, the JRC is framing it as a question of evaluation, accountability, and trustworthiness.

The result is a more cautious and arguably more useful vision of AI in research. RASS is presented not as a replacement for expert judgement, but as a support system for faster and more manageable knowledge synthesis. That makes the story less about full automation and more about how public institutions may try to use AI in ways that remain testable, steerable, and methodologically defensible.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK’s advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India forms expert committee to support AI governance framework

India’s Ministry of Electronics and Information Technology has constituted a Technology and Policy Expert Committee to support the country’s AI governance architecture. The committee will advise the AI Governance and Economic Group (AIGEG) on policy design, regulatory measures, and international engagement.

The committee is chaired by the ministry’s Secretary and includes experts from academia, industry, and digital policy. Its mandate is to provide informed input grounded in technological developments, regulatory approaches, and global practices.

AIGEG will set strategic direction and coordinate policy across government. The expert committee will translate technical and policy issues into actionable insights for decision-making.

The framework aims to ensure a dynamic and adaptive approach to AI governance. It also seeks to align strategic, technical, and policy considerations with India’s social and economic context.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Frontier AI cybersecurity risks highlighted by the World Economic Forum

A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.

Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.

Reported capabilities include identifying unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams over long periods can now be accelerated significantly.

Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.

The main challenge is no longer finding weaknesses but managing them. AI can generate large volumes of vulnerabilities in a short time, while many organisations still rely on slower response cycles.

That gap increases exposure, especially for critical systems and infrastructure.

Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.

Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.

The World Economic Forum report also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK NCSC calls for stronger cyber readiness

The UK National Cyber Security Centre has warned that organisations must urgently prepare for severe cyber threats, describing them as a growing risk to operations and national resilience. The guidance calls for immediate action from leadership.

Cyber attacks are becoming more capable and disruptive, with new technologies such as AI increasing their speed and scale. These threats can lead to major operational, financial and security impacts.

The agency emphasises that resilience, rather than prevention alone, is critical. Organisations must be able to continue operating and recover during cyber attacks, with preparation and planning carried out in advance.

The Centre states that responsibility lies with organisational leaders, urging investment, coordination and early planning to ensure essential services can continue under pressure in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Report on Geneva 2027 AI Summit preparations available

A report outlining initial consultations for the Geneva 2027 AI Summit has been submitted to the Swiss government following a preparatory event held during GenAI Zürich 2026.

The report consolidates inputs from an invite-only roundtable held on 1 April 2026 and written submissions collected through an open call. It was prepared by ICT4Peace and GenAI Zürich to support Switzerland’s planning for the summit.

According to the organisers, the roundtable brought together participants from government, academia, industry, civil society, and international organisations. It was co-moderated by Daniel Stauffacher, founder of ICT4Peace, Ambassador Thomas Schneider, Vice-Director of the Swiss Federal Office of Communications (BAKOM), and Ambassador Markus Reubi, project lead for the Geneva 2027 AI Summit at the Swiss Federal Department of Foreign Affairs.

In addition to the roundtable, the report includes written contributions submitted through an online consultation process, with organisers noting that 55 submissions were received, including 52 with substantive responses.

The report presents a synthesis of themes and proposals related to the objectives and potential outcomes of the Geneva 2027 AI Summit. According to the organisers, the analysis is based on recurring themes and areas of convergence identified during the consultation process, rather than a statistically representative survey.

Discussions were conducted under the Chatham House Rule, and the report does not attribute comments to individual participants.

The findings were submitted to the Swiss government’s Platform Tripartite on 13 April 2026 to inform further preparations for the summit.

Switzerland is scheduled to host the next global AI Summit in 2027 in Geneva, following previous summits held in the United Kingdom, the Republic of Korea, France, and India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIFC unveils plan to build ‘AI-native’ financial centre in Dubai

Dubai International Financial Centre has announced plans to become what it describes as the world’s first ‘AI-native’ financial centre, embedding AI into regulation, business operations, and physical infrastructure rather than treating it as a stand-alone tool.

The initiative is being presented as a broader redesign of how a financial centre functions. Instead of limiting AI to back-office support or isolated digital services, DIFC says it wants AI to shape legal frameworks, compliance processes, client management, and the wider operation of the financial ecosystem.

The plan builds on DIFC’s longer-term AI strategy, launched in 2023 and already tied to changes in data governance and the centre’s wider innovation agenda.

According to DIFC, AI is already being used in areas such as compliance and client services, with further expansion planned across financial workflows, supervisory processes, and institutional decision-making.

DIFC also says the initiative will be supported by a broader ecosystem designed to attract investment, talent, and experimentation. That includes training programmes, venture support, accelerators, and the continued development of its AI-focused innovation infrastructure. The aim is not only to encourage firms to use AI, but to make Dubai a base for building and scaling AI-driven financial services.

The project also extends beyond software and regulation. DIFC says physical infrastructure will evolve alongside digital systems, with plans linked to smart buildings, robotics, autonomous mobility, and digital twins by the end of the decade.

That gives the announcement a broader urban and economic dimension, positioning AI as part of the district’s future design rather than simply a tool used by firms within it.

The broader significance of the move lies in how Dubai is trying to position itself in the global race to shape AI in finance. Rather than focusing only on innovation-friendly rhetoric, DIFC is presenting regulation, infrastructure, skills, and ecosystem-building as part of a single strategy.

If realised in practice, that could strengthen Dubai’s role as a hub for AI-driven financial services and as a testing ground for new governance models.

At the same time, the claim to be the world’s first ‘AI-native’ financial centre should be understood as DIFC’s own description of the project, rather than an independently established category.

The more solid story is that Dubai is trying to make AI part of the operating logic of a financial centre itself, using policy, infrastructure, and investment to support that ambition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Utropolis partnership strengthens AI-driven child online safety

The Internet Watch Foundation (IWF) has announced a new partnership with Utropolis, marking a step forward in efforts to strengthen online child protection. The collaboration brings together established detection tools and emerging AI-driven safeguarding technologies.

Utropolis specialises in cloud-based filtering systems designed to identify risks in real time, particularly in school environments.

By integrating IWF datasets, including verified lists of harmful content, the platform aims to improve prevention and detection capabilities while helping educators maintain safer digital spaces.

The initiative reflects a broader trend towards combining AI with established regulatory and safeguarding frameworks. As harmful material continues to spread online, organisations are increasingly focusing on scalable, automated solutions that can adapt to evolving threats.

The partnership also aligns with UK online safety standards in education, reinforcing compliance requirements and strengthening institutional responses.

As digital environments continue to expand, collaborations of this kind highlight the growing role of AI in supporting child protection strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!