China outlines AI and energy integration plan

China's National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote the mutually reinforcing development of AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS wants Europe's AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe's AI approach will be implementation rather than rule-making alone.


World Economic Forum analysis explores AI-driven future planning for organisations

A World Economic Forum article argues that organisations need to move beyond static reports and analytical forecasts to become more future-ready in an era marked by rapid technological and geopolitical change.

The article highlights FutureSlam, a foresight method that combines participatory scenario-building, AI-supported reflection and improvisational performance to help organisations experience possible futures rather than analyse them. The authors say many organisations already invest in foresight, but struggle to translate insights into operational decisions because they often remain confined to strategy teams and slide decks.

The approach integrates human imagination with AI-generated scenarios. Participants first develop scenarios themselves, before comparing them with future images generated by an AI system using the same trend material. The authors argue that this comparison can challenge assumptions, confirm parts of participants’ reasoning and introduce perspectives that human groups may avoid.

FutureSlam then uses improvised performance, including simulated news broadcasts and staged scenarios, to make possible futures more tangible. According to the article, the method is designed to make foresight more inclusive, structured and memorable by turning participants into co-creators rather than passive recipients of expert analysis.

The authors suggest that such approaches could help organisations adapt more effectively to technological, geopolitical and societal change by turning foresight into a shared organisational capability rather than a niche strategic exercise.

Why does it matter?

AI is increasingly being used not only to automate tasks, but also to support strategic thinking, scenario-building and organisational learning. The FutureSlam example points to a broader shift in how organisations may prepare for uncertainty: less focus on predicting precise outcomes, and more focus on building the capacity to test assumptions, imagine alternatives and adapt collectively.


Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services in Canada.


California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.


OSCE chairpersonship opens Geneva conference on AI and quantum risks

The Swiss OSCE Chairpersonship has opened a high-level conference in Geneva on how emerging technologies are affecting security, international governance, and co-operation across the OSCE region.

The two-day event, titled ‘Anticipating technologies – for a safe and humane future’, brings together about 200 participants from OSCE participating States and Partners for Co-operation, alongside representatives from international organisations, academia, the private sector, and civil society.

The conference focuses on the security implications of rapid technological change, including AI and quantum technologies. The discussions are intended to examine how anticipation, dialogue, and cooperation can help reduce misunderstandings, build trust, and strengthen security in a fast-changing technological environment.

Opening the conference, OSCE Chairperson-in-Office and Swiss Federal Councillor Ignazio Cassis said: ‘Technology will not wait for us. Geopolitics will not slow down. If we want to remain relevant, we must anticipate – not react. This is the responsibility we share across the OSCE region. The OSCE still offers something rare: a space where adversaries can speak, where differences can be managed, and where common ground can still be built.’

The organisation’s Secretary General, Feridun H. Sinirlioğlu, also stressed the need for dialogue as emerging technologies evolve faster than governance frameworks. He said: ‘Today, emerging technologies are evolving faster than the frameworks that govern them. This creates a widening gap between what technology can do and how we manage it. This gap must be addressed through dialogue – our most important stabilizing force in uncertain times – and this is where the OSCE has a vital role to play.’

The programme includes discussions on anticipating technological change and its geopolitical impact, water and energy security in the digital age, and the role of AI in early warning and conflict prevention. The conference also highlights Geneva’s role as a meeting point for science and diplomacy, including through institutions such as CERN, the Geneva Science and Diplomacy Anticipator, and the Open Quantum Institute.

The event forms part of the Chairpersonship’s priority to connect scientific and technological anticipation with policy action. It is the second of four international conferences Switzerland is hosting under its chairpersonship, ahead of the OSCE Ministerial Council meeting in Lugano in December.


Swiss media groups launch responsible AI journalism framework

Swiss media organisations have adopted a national code of conduct for the responsible use of AI, aiming to strengthen transparency, copyright protection and public trust in journalism.

The initiative is backed by major Swiss publishing groups, private radio and television organisations, the Swiss Broadcasting Corporation and the national news agency Keystone-ATS. It is based on the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The code states that media companies and their employees remain responsible for all published editorial content, whether produced by journalists or with the support of AI systems. It also commits media organisations to train staff in AI use, respect copyright, follow data protection rules and take steps to prevent the spread of false information.

Swiss media groups also agreed to inform the public transparently about their use of AI, including through dedicated information pages, and to introduce binding marking obligations for AI-supported content. The framework is designed as a self-regulatory tool at a time when public concern over AI-generated content remains high.

To support implementation, the code provides for a two-tier reporting and control mechanism. The relevant departments within media companies will handle questions and complaints in the first instance, while an independent AI ombudsperson will act as a second instance for serious or unresolved cases and publish an annual report.

Swiss President Guy Parmelin said AI could strengthen journalism if used responsibly and transparently, while warning that fake news threatens journalistic credibility and social cohesion. Legislative changes needed to implement the Council of Europe convention in Switzerland are expected by the end of 2026.

Why does it matter?

The Swiss code shows how media organisations are moving to set AI governance standards before legal obligations fully take shape. Its significance lies in linking AI-assisted journalism with editorial responsibility, transparency, copyright, data protection and complaint mechanisms, rather than treating AI labelling as the only issue. The model could influence how other media sectors balance innovation with public trust and accountability.


OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they share the general reason that self-harm came up in a potentially concerning way and encourage the trusted contact to check in. The company said every notification undergoes human review, with the aim of completing each safety review in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.


India and France discuss expanding AI and space cooperation

India and France have discussed expanding cooperation in space, AI, applied mathematics and advanced technologies following a bilateral meeting between Indian Minister of State for Science and Technology Dr Jitendra Singh and French Minister for Higher Education, Research and Space Philippe Baptiste.

The talks reviewed the countries’ growing strategic partnership in science, technology and space, with the 2026 Indo-French Year of Innovation identified as an opportunity to deepen collaboration in emerging technology fields.

Both sides discussed stronger links between Indian and French research institutions, including initiatives related to AI, advanced materials and digital sciences. Space cooperation also featured prominently, building on long-standing collaboration between the Indian Space Research Organisation and France’s Centre National d’Études Spatiales through joint missions such as Megha-Tropiques and SARAL, and ongoing work on TRISHNA.

France also expressed interest in expanding cooperation on human spaceflight, microgravity experiments and ocean-related data-sharing initiatives.

Indian officials highlighted the expansion of the country’s space ecosystem following recent reforms, noting that nearly 400 space start-ups are now active in the sector. The discussions also covered opportunities linked to India’s Deep Ocean Mission and future engagement around the International Space Summit planned in Paris in September 2026.

Why does it matter?

The meeting reflects how AI, space, ocean data and advanced research are increasingly being treated as linked areas of strategic technology cooperation. For India and France, the agenda goes beyond scientific exchange: it connects national innovation ecosystems, space-sector reforms, research partnerships and the use of data-intensive technologies for climate, ocean and public-interest applications.


ATxSummit 2026 to focus on AI governance and digital growth in Asia

ATxSummit 2026 will take place in Singapore on 20 and 21 May 2026 as part of Asia Tech x Singapore. Organisers state that the event will convene more than 4,000 participants from over 50 countries, including policymakers, technology companies, researchers, and industry representatives.

The programme will focus on five themes related to AI deployment and governance. These include agentic systems in enterprise operations, AI applications for public-sector and national use, scientific research and embodied intelligence, workforce and organisational changes, and the implementation of AI governance approaches.

Participants include representatives from organisations such as the World Bank Group, NVIDIA, Google, Amazon, and OpenAI. The programme also includes academic and policy discussions involving AI research, security, and digital governance.

The summit will include technical workshops, government roundtables, and the Digital Frontier Forum, focused on AI, deep technology, and digital growth strategies. ATxEnterprise will also take place alongside the summit, with sessions addressing infrastructure investment, digital trust, cross-border connectivity, and responsible AI deployment.
