California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OSCE chairpersonship opens Geneva conference on AI and quantum risks

The Swiss OSCE Chairpersonship has opened a high-level conference in Geneva on how emerging technologies are affecting security, international governance, and co-operation across the OSCE region.

The two-day event, titled ‘Anticipating technologies – for a safe and humane future’, brings together about 200 participants from OSCE participating States and Partners for Co-operation, alongside representatives from international organisations, academia, the private sector, and civil society.

The conference focuses on the security implications of rapid technological change, including AI and quantum technologies. The discussions are intended to examine how anticipation, dialogue, and cooperation can help reduce misunderstandings, build trust, and strengthen security in a fast-changing technological environment.

Opening the conference, OSCE Chairman-in-Office and Swiss Federal Councillor Ignazio Cassis said: ‘Technology will not wait for us. Geopolitics will not slow down. If we want to remain relevant, we must anticipate – not react. This is the responsibility we share across the OSCE region. The OSCE still offers something rare: a space where adversaries can speak, where differences can be managed, and where common ground can still be built.’

The organisation’s Secretary General, Feridun H. Sinirlioğlu, also stressed the need for dialogue as emerging technologies evolve faster than governance frameworks. He said: ‘Today, emerging technologies are evolving faster than the frameworks that govern them. This creates a widening gap between what technology can do and how we manage it. This gap must be addressed through dialogue – our most important stabilizing force in uncertain times – and this is where the OSCE has a vital role to play.’

The programme includes discussions on anticipating technological change and its geopolitical impact, water and energy security in the digital age, and the role of AI in early warning and conflict prevention. The conference also highlights Geneva’s role as a meeting point for science and diplomacy, including through institutions such as CERN, the Geneva Science and Diplomacy Anticipator, and the Open Quantum Institute.

The event forms part of the Chairpersonship’s priority to connect scientific and technological anticipation with policy action. It is the second of four international conferences Switzerland is hosting under its chairpersonship, ahead of the OSCE Ministerial Council meeting in Lugano in December.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swiss media groups launch responsible AI journalism framework

Swiss media organisations have adopted a national code of conduct for the responsible use of AI, aiming to strengthen transparency, copyright protection and public trust in journalism.

The initiative is backed by major Swiss publishing groups, private radio and television organisations, the Swiss Broadcasting Corporation and the national news agency Keystone-ATS. It is based on the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The code states that media companies and their employees remain responsible for all published editorial content, whether produced by journalists or with the support of AI systems. It also commits media organisations to train staff in AI use, respect copyright, follow data protection rules and take steps to prevent the spread of false information.

Swiss media groups also agreed to inform the public transparently about their use of AI, including through dedicated information pages, and to introduce binding marking obligations for AI-supported content. The framework is designed as a self-regulatory tool at a time when public concern over AI-generated content remains high.

To support implementation, the code provides for a two-tier reporting and control mechanism. Questions and complaints will first be handled by the relevant departments within media companies, while an independent AI ombudsperson will act as a second instance for serious or unresolved cases and publish an annual report.

Swiss President Guy Parmelin said AI could strengthen journalism if used responsibly and transparently, while warning that fake news threatens journalistic credibility and social cohesion. Legislative changes needed to implement the Council of Europe convention in Switzerland are expected by the end of 2026.

Why does it matter?

The Swiss code shows how media organisations are moving to set AI governance standards before legal obligations fully take shape. Its significance lies in linking AI-assisted journalism with editorial responsibility, transparency, copyright, data protection and complaint mechanisms, rather than treating AI labelling as the only issue. The model could influence how other media sectors balance innovation with public trust and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they state only that self-harm came up in a potentially concerning way and encourage the trusted contact to check in. The company said every notification undergoes human review and that it aims to complete these reviews in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India and France discuss expanding AI and space cooperation

India and France have discussed expanding cooperation in space, AI, applied mathematics and advanced technologies following a bilateral meeting between Indian Minister of State for Science and Technology Dr Jitendra Singh and French Minister for Higher Education, Research and Space Philippe Baptiste.

The talks reviewed the countries’ growing strategic partnership in science, technology and space, with the 2026 Indo-French Year of Innovation identified as an opportunity to deepen collaboration in emerging technology fields.

Both sides discussed stronger links between Indian and French research institutions, including initiatives related to AI, advanced materials and digital sciences. Space cooperation also featured prominently, building on long-standing collaboration between the Indian Space Research Organisation and France’s Centre National d’Études Spatiales through joint missions such as Megha-Tropiques and SARAL, and ongoing work on TRISHNA.

France also expressed interest in expanding cooperation on human spaceflight, microgravity experiments and ocean-related data-sharing initiatives.

Indian officials highlighted the expansion of the country’s space ecosystem following recent reforms, noting that nearly 400 space start-ups are now active in the sector. The discussions also covered opportunities linked to India’s Deep Ocean Mission and future engagement around the International Space Summit planned in Paris in September 2026.

Why does it matter?

The meeting reflects how AI, space, ocean data and advanced research are increasingly being treated as linked areas of strategic technology cooperation. For India and France, the agenda goes beyond scientific exchange: it connects national innovation ecosystems, space-sector reforms, research partnerships and the use of data-intensive technologies for climate, ocean and public-interest applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ATxSummit 2026 to focus on AI governance and digital growth in Asia

ATxSummit 2026 will take place in Singapore on 20 and 21 May 2026 as part of Asia Tech x Singapore. Organisers state that the event will convene more than 4,000 participants from over 50 countries, including policymakers, technology companies, researchers, and industry representatives.

The programme will focus on five themes related to AI deployment and governance. These include agentic systems in enterprise operations, AI applications for public-sector and national use, scientific research and embodied intelligence, workforce and organisational changes, and the implementation of AI governance approaches.

Participants include representatives from organisations such as the World Bank Group, NVIDIA, Google, Amazon, and OpenAI. The programme also includes academic and policy discussions involving AI research, security, and digital governance.

The summit will include technical workshops, government roundtables, and the Digital Frontier Forum, focused on AI, deep technology, and digital growth strategies. ATxEnterprise will also take place alongside the summit, with sessions addressing infrastructure investment, digital trust, cross-border connectivity, and responsible AI deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICESCO and Morocco sign agreement on AI and digital capacity building

The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight.

The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco.

The memorandum provides for workshops, training programmes and joint seminars aimed at building capacity among public and private sector professionals in digital transformation, AI, strategic foresight and digital diplomacy. It also covers the exchange of expertise and open data, the preparation of reference materials, and research related to future skills and professions in ICESCO member states.

The agreement further includes cooperation with universities and research centres to support a knowledge ecosystem aligned with the requirements of the digital economy. It also refers to innovation laboratories and digital tools for the digitisation, indexing, research and analysis of cultural and scientific heritage materials.

Why does it matter?

The agreement places AI within a broader capacity-building agenda that includes public-sector skills, digital diplomacy, open data, foresight and heritage digitisation. Its policy relevance lies in how international organisations and national governments are using AI cooperation not only for technology adoption, but also for institutional readiness and future skills development across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Norway joins Pax Silica initiative to secure AI and semiconductor supply chains

The Pax Silica initiative, which focuses on secure AI, semiconductor, and critical raw materials supply chains, has expanded with the addition of Norway. The partnership aims to strengthen technological innovation while protecting sensitive technologies.

Norway joins a group of 14 participating countries, including the USA, Japan, the UK and India. Norwegian officials said participation could improve market access for domestic companies operating in advanced technological sectors and strengthen economic security cooperation with strategic partners.

Minister of Trade and Industry, Cecilie Myrseth, said the initiative aligns with Norway’s goal of expanding cooperation with leading countries in AI and emerging technologies. Norwegian ambassador to the USA, Anniken Huitfeldt, is expected to formally sign the agreement on behalf of the country.

The move also complements broader Norwegian and European efforts to secure access to critical technologies and supply chains. The government highlighted initiatives linked to the European Chips Act and the EU Critical Raw Materials Act as part of a wider strategy to strengthen technology resilience and industrial competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple may be preparing a major Siri AI shake-up in iOS 27

Apple is reportedly preparing a major expansion of Apple Intelligence that could allow users to choose which AI model powers Siri and other system features. According to recent reports, iOS 27, iPadOS 27, and macOS 27 may introduce a new ‘Extensions’ framework designed to integrate third-party AI systems directly into Apple’s software ecosystem.

The reported feature would allow applications such as Gemini and Claude to connect with Siri through their App Store apps. Users may be able to select different AI providers for different tasks, while Apple is also said to be testing separate Siri voices for responses generated by external models rather than Apple’s own systems.

The move would expand Apple’s broader AI partnership strategy rather than replace existing integrations. ChatGPT already supports selected Apple Intelligence functions, and earlier reporting suggested Google Gemini could eventually power parts of Siri itself. The new framework appears aimed at turning Apple devices into a wider AI platform that supports multiple large language models rather than a single assistant stack.

Apple is expected to present further details during its Worldwide Developers Conference on 8 June 2026. If the reported changes materialise, they could significantly reshape how users interact with AI assistants by giving them more control over which models handle tasks such as search, writing, and image generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Siri AI delays lead to $250 million Apple settlement

Apple has agreed to pay $250 million to settle a class action lawsuit alleging that it misled consumers about the readiness and availability of AI-powered Siri features promoted ahead of the iPhone 16 launch. Under the proposed agreement, eligible US customers who bought supported iPhone models between 10 June 2024 and 29 March 2025 may receive between $25 and $95 per device, depending on the number of claims. Apple denied wrongdoing and settled the case without admitting liability.

The complaint argued that consumers who purchased supported iPhone 15 and iPhone 16 models expected advanced Apple Intelligence features and a significantly upgraded Siri experience that were not available at the time of sale. Plaintiffs said Apple’s marketing created the impression that the new capabilities would arrive sooner and with broader functionality than users ultimately received.

The settlement comes shortly before Apple’s annual Worldwide Developers Conference, where the company is widely expected to present further updates to Siri and its wider AI strategy.

Why does it matter?

The case shows how AI product marketing is becoming a legal and regulatory risk, not just a branding issue. As technology companies use generative AI features to drive device sales and platform adoption, courts and consumers are paying closer attention to whether those capabilities are actually available when products reach the market. The Apple settlement suggests that overstating AI readiness can create liability even before regulators step in, making transparency around launch claims increasingly important across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!