EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS evidently wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. The post’s significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum analysis explores AI-driven future planning for organisations

A World Economic Forum article argues that organisations need to move beyond static reports and analytical forecasts to become more future-ready in an era marked by rapid technological and geopolitical change.

The article highlights FutureSlam, a foresight method that combines participatory scenario-building, AI-supported reflection and improvisational performance to help organisations experience possible futures rather than merely analyse them. The authors say many organisations already invest in foresight but struggle to translate insights into operational decisions, because those insights often remain confined to strategy teams and slide decks.

The approach integrates human imagination with AI-generated scenarios. Participants first develop scenarios themselves, before comparing them with future images generated by an AI system using the same trend material. The authors argue that this comparison can challenge assumptions, confirm parts of participants’ reasoning and introduce perspectives that human groups may avoid.

FutureSlam then uses improvised performance, including simulated news broadcasts and staged scenarios, to make possible futures more tangible. According to the article, the method is designed to make foresight more inclusive, structured and memorable by turning participants into co-creators rather than passive recipients of expert analysis.

The authors suggest that such approaches could help organisations adapt more effectively to technological, geopolitical and societal change by turning foresight into a shared organisational capability rather than a niche strategic exercise.

Why does it matter?

AI is increasingly being used not only to automate tasks, but also to support strategic thinking, scenario-building and organisational learning. The FutureSlam example points to a broader shift in how organisations may prepare for uncertainty: less focus on predicting precise outcomes, and more focus on building the capacity to test assumptions, imagine alternatives and adapt collectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Türkiye delegation to explore US cyber and AI technologies

The US Trade and Development Agency will host a delegation of cybersecurity and AI decision-makers from Türkiye as the country works to modernise cyber protection for critical infrastructure.

The 15-member delegation will visit Washington, DC, and Silicon Valley from 9 to 20 May to meet US companies, view demonstrations of cybersecurity technologies and discuss how advanced tools could help protect critical infrastructure from cyber threats.

The visit will also include meetings with US government officials on policy and regulatory approaches to AI and cybersecurity. Delegates are expected to visit the US National Institute of Standards and Technology to learn about its work on cybersecurity frameworks, AI risk management, standards development and applied research.

USTDA will also host a public business briefing in San Francisco on 19 May, where US companies can hear from the delegation about commercial opportunities and present cybersecurity solutions.

The agency said Türkiye is rapidly developing its digital ecosystem and has made cybersecurity for critical infrastructure a national priority. It said Türkiye is looking to AI and other advanced technologies to respond to increasingly sophisticated cyber threats, while describing the US private sector as a potential partner in cybersecurity, AI and data protection.

Why does it matter?

The visit shows how cybersecurity for critical infrastructure is increasingly being linked with AI, standards and cross-border technology partnerships. For Türkiye, the focus is on modernising protection against more sophisticated cyber threats. For the United States, the programme also reflects USTDA’s role in connecting US technology providers with infrastructure and digital security priorities in partner countries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven disinformation threatens public trust, Nobel economist warns

Research by Nobel Prize-winning economist Joseph Stiglitz and Columbia University’s Maxim Ventura-Bolet argues that AI could worsen the economics of misinformation by making low-quality and misleading content cheaper and easier to produce at scale.

According to an analysis in The Strategist, their economic modelling suggests that digital markets reward misleading and emotionally charged content because it attracts engagement, advertising revenue and data collection. The analysis argues that without regulation, markets are likely to produce more disinformation and less reliable information as AI lowers the cost of content production.

The article says social media platforms and AI systems have reshaped how people consume information. Instead of visiting original news sources, users increasingly rely on algorithm-driven feeds, search summaries and AI-generated overviews, reducing traffic and revenue for original publishers.

It also argues that AI systems can intensify the problem by producing large volumes of convincing but unreliable material quickly and cheaply. Since AI tools depend on online information for training and outputs, distorted or misleading data can feed back into the information ecosystem and further reduce quality.

The analysis links the issue to political polarisation, warning that audiences are more likely to engage with information that reinforces existing beliefs. That demand can further reward producers of misleading content while putting additional pressure on public-interest journalism.

Stiglitz and Ventura-Bolet argue that market forces alone will not correct the decline in information quality. The article says possible responses include stronger platform accountability for content amplification, obligations to address coordinated disinformation campaigns and intellectual property protections for news producers.

The analysis also points to Australia’s memorandum of understanding with Anthropic as a sign of engagement between government and AI companies, while stressing that voluntary cooperation is not a substitute for regulation.

Why does it matter?

The analysis highlights how AI and platform algorithms can affect the economic incentives behind public information, not only the speed at which false content spreads. If engagement-based systems continue to reward misleading material while weakening the revenue base for quality journalism, the risks extend beyond individual misinformation incidents to the overall reliability of the online information environment.

That matters for democratic debate, public trust and informed decision-making. It also raises regulatory questions about platform accountability, the use of news content by AI systems and whether voluntary agreements with technology companies are enough to protect the information ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in an age-assurance process, provide appeal mechanisms and minimise repeated verification. The guidance, which applies to online services in Canada, aims to balance child protection with privacy rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cyber capabilities raise risk of correlated financial system failures, IMF warns

AI is rapidly reshaping the global financial system’s cyber risk landscape, according to analysis from the International Monetary Fund. While AI improves defence, it also helps attackers find and exploit vulnerabilities more quickly, increasing the risk of systemic disruption.

Financial infrastructure is highly interconnected, relying on shared software, cloud services, and payment networks. IMF analysis suggests that AI-enabled cyberattacks could trigger correlated institutional failures, leading to funding stress, solvency risks, and disruptions to payments and market operations.

Recent developments in advanced AI models demonstrate how quickly offensive capabilities are evolving, with systems now able to identify weaknesses across widely used platforms.

At the same time, defensive AI tools are being deployed to detect threats and strengthen resilience, but their effectiveness depends on governance, oversight, and integration within financial institutions.

Authorities are now being urged to treat cyber risk as a core financial stability issue rather than a purely technical challenge. Stronger supervision, resilience standards, and international coordination are viewed as essential, particularly as cyber threats increasingly cross borders and exploit shared global infrastructure.

Why does it matter? 

Cyber risks related to AI are a macroeconomic threat that can affect liquidity, confidence, and core financial intermediation. At the same time, the same technology is essential for defence, meaning resilience now depends on how quickly supervision, governance, and international coordination can keep pace with rapidly scaling offensive capabilities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OSCE chairpersonship opens Geneva conference on AI and quantum risks

The Swiss OSCE Chairpersonship has opened a high-level conference in Geneva on how emerging technologies are affecting security, international governance, and co-operation across the OSCE region.

The two-day event, titled ‘Anticipating technologies – for a safe and humane future’, brings together about 200 participants from OSCE participating States and Partners for Co-operation, alongside representatives from international organisations, academia, the private sector, and civil society.

The conference focuses on the security implications of rapid technological change, including AI and quantum technologies. The discussions are intended to examine how anticipation, dialogue, and cooperation can help reduce misunderstandings, build trust, and strengthen security in a fast-changing technological environment.

Opening the conference, OSCE Chairperson-in-Office and Swiss Federal Councillor Ignazio Cassis said: ‘Technology will not wait for us. Geopolitics will not slow down. If we want to remain relevant, we must anticipate – not react. This is the responsibility we share across the OSCE region. The OSCE still offers something rare: a space where adversaries can speak, where differences can be managed, and where common ground can still be built.’

The organisation’s Secretary General, Feridun H. Sinirlioğlu, also stressed the need for dialogue as emerging technologies evolve faster than governance frameworks. He said: ‘Today, emerging technologies are evolving faster than the frameworks that govern them. This creates a widening gap between what technology can do and how we manage it. This gap must be addressed through dialogue – our most important stabilizing force in uncertain times – and this is where the OSCE has a vital role to play.’

The programme includes discussions on anticipating technological change and its geopolitical impact, water and energy security in the digital age, and the role of AI in early warning and conflict prevention. The conference also highlights Geneva’s role as a meeting point for science and diplomacy, including through institutions such as CERN, the Geneva Science and Diplomacy Anticipator, and the Open Quantum Institute.

The event forms part of the Chairpersonship’s priority to connect scientific and technological anticipation with policy action. It is the second of four international conferences Switzerland is hosting under its chairpersonship, ahead of the OSCE Ministerial Council meeting in Lugano in December.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swiss media groups launch responsible AI journalism framework

Swiss media organisations have adopted a national code of conduct for the responsible use of AI, aiming to strengthen transparency, copyright protection and public trust in journalism.

The initiative is backed by major Swiss publishing groups, private radio and television organisations, the Swiss Broadcasting Corporation and the national news agency Keystone-ATS. It is based on the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The code states that media companies and their employees remain responsible for all published editorial content, whether produced by journalists or with the support of AI systems. It also commits media organisations to train staff in AI use, respect copyright, follow data protection rules and take steps to prevent the spread of false information.

Swiss media groups also agreed to inform the public transparently about their use of AI, including through dedicated information pages, and to introduce binding marking obligations for AI-supported content. The framework is designed as a self-regulatory tool at a time when public concern over AI-generated content remains high.

To support implementation, the code provides for a two-tier reporting and control mechanism. Questions and complaints will first be handled by the relevant departments within media companies, while an independent AI ombudsperson will act as a second instance for serious or unresolved cases and publish an annual report.

Swiss President Guy Parmelin said AI could strengthen journalism if used responsibly and transparently, while warning that fake news threatens journalistic credibility and social cohesion. Legislative changes needed to implement the Council of Europe convention in Switzerland are expected by the end of 2026.

Why does it matter?

The Swiss code shows how media organisations are moving to set AI governance standards before legal obligations fully take shape. Its significance lies in linking AI-assisted journalism with editorial responsibility, transparency, copyright, data protection and complaint mechanisms, rather than treating AI labelling as the only issue. The model could influence how other media sectors balance innovation with public trust and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!