New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google sets 2029 deadline for post-quantum cryptography migration

Google is leading a transition to post-quantum cryptography by 2029, aiming to secure digital systems against future quantum computing threats rather than relying on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US National Institute of Standards and Technology (NIST). Meanwhile, support has already been introduced in Google Chrome and Google's cloud platforms.
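Hybrid deployments like Chrome's pair a classical key exchange with a post-quantum one and derive the session key from both shared secrets, so an attacker must break both primitives. A minimal sketch of that combining step, assuming placeholder shared secrets and HKDF-SHA256 (RFC 5869) as the key-derivation function:

```python
# Hybrid key derivation sketch: concatenate the classical and post-quantum
# shared secrets and feed them through HKDF-SHA256 (RFC 5869). The inputs
# here are placeholders; real deployments take them from the two exchanges.
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"",
                length: int = 32) -> bytes:
    """HKDF extract-then-expand per RFC 5869, specialised to SHA-256."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Session key depends on BOTH secrets, so either primitive alone
    failing does not expose the key."""
    return hkdf_sha256(classical_ss + pq_ss, info=b"hybrid-key-sketch")
```

This is the property that defeats "store now, decrypt later": even if the classical half is broken by a future quantum machine, the post-quantum half still protects the derived key.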

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions, which may be reassigned between teams depending on their relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.
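The trial's restriction tiers amount to a simple rule set. A hedged sketch of that logic, with group names and the enforcement code invented for illustration (the article describes the tiers, not any implementation):

```python
# Illustrative model of the pilot's four groups: full block, a one-hour
# daily limit, a 9 pm - 7 am curfew, and an unrestricted control group.
# Group names and this enforcement logic are assumptions for illustration.
from datetime import time

def in_curfew(now: time, start: time = time(21, 0),
              end: time = time(7, 0)) -> bool:
    """True when `now` falls in an overnight window that wraps midnight."""
    return now >= start or now < end

def allowed(group: str, now: time, minutes_used_today: int = 0) -> bool:
    """Decide whether social media access is permitted for a given group."""
    if group == "full_block":
        return False
    if group == "one_hour_limit":
        return minutes_used_today < 60
    if group == "curfew":
        return not in_curfew(now)
    return True  # control group keeps existing usage patterns
```

The wrap-past-midnight check in `in_curfew` is the only non-obvious detail: 9 pm to 7 am cannot be tested with a single range comparison because the window crosses the day boundary.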

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens semiconductor strategy through Chips Act dialogue

Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.

Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.

The initiative forms part of the broader strategy led by the European Commission to reinforce technological sovereignty and competitiveness, rather than relying heavily on external suppliers.

The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.

The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.

Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.

Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.

By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New UK rules target foreign influence and crypto donations

The UK government has announced sweeping reforms to political donations, introducing a £100,000 annual cap on contributions from overseas electors. The move targets concerns that individuals living abroad could exert disproportionate financial influence on domestic politics.

Cryptocurrency donations have also been banned with immediate effect, reflecting fears over anonymity and the difficulty of tracing funds. Authorities warn that digital assets risk enabling untraceable political funding until stronger regulation is in place.

Both measures will apply retrospectively, requiring political parties and candidates to return any unlawful donations within 30 days once the legislation takes effect. Enforcement action may follow for non-compliance, signalling a stricter approach to financial oversight.
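As a sketch, the two reported rules reduce to a simple compliance check: crypto donations are refused outright, and overseas-elector contributions are tested against a cumulative £100,000 annual cap. Field names and return values below are hypothetical, not drawn from the legislation.

```python
# Illustrative check for the two reported rules: crypto donations banned,
# and a 100,000 GBP annual cap on overseas-elector contributions.
# Parameter names and the (allowed, reason) return shape are assumptions.
def check_donation(amount_gbp: int, donor_is_overseas_elector: bool,
                   is_crypto: bool, year_total_gbp: int,
                   cap_gbp: int = 100_000) -> tuple[bool, str]:
    if is_crypto:
        return False, "crypto donations are banned"
    if donor_is_overseas_elector and year_total_gbp + amount_gbp > cap_gbp:
        return False, "exceeds overseas-elector annual cap"
    return True, "ok"
```

Note the cap is cumulative per year, so each new donation must be checked against the donor's running total, not in isolation.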

The reforms stem from the Rycroft Review, which highlighted vulnerabilities in the UK's electoral system linked to foreign interference. Further changes, including stronger Electoral Commission powers and tighter donor checks, are expected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from harm online instead of treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services likely to be accessed by children and falling within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum readiness gains momentum according to OECD report

A new report from the OECD (Organisation for Economic Co-operation and Development) highlights how businesses are preparing for quantum computing, recognising it as a transformative technology instead of relying solely on conventional computing methods.

Quantum readiness is framed as a long-term capability-building effort in which firms gradually develop skills, infrastructure, and partnerships to explore commercial applications while navigating uncertainty.

Drawing on research, surveys, and interviews with public and private organisations across 10 countries, the OECD identifies both the practical steps companies take to build readiness and the barriers that slow adoption.

Early efforts focus on low-cost awareness and exploration, including attending workshops, training sessions, and industry events, allowing firms to familiarise themselves with emerging opportunities instead of waiting for fully mature systems.

Despite growing interest, companies face significant challenges. Technological immaturity complicates pilots and feasibility studies, while many firms lack a clear understanding of potential business applications.

Access to quantum resources, funding for research and development, and staff training are expensive, particularly for small- and medium-sized enterprises. Furthermore, there is a shortage of talent with both quantum computing expertise and domain-specific knowledge.

As a result, readiness tends to be concentrated among large, R&D-intensive firms, while smaller companies often recognise quantum computing’s potential but delay action.

Such uneven adoption risks creating a divide in the digital economy, with early adopters moving ahead and other firms falling behind instead of engaging proactively.

To address these challenges, the OECD notes that public and private support mechanisms are critical. Networking and collaboration platforms connect firms with researchers, technology providers, and industry peers, fostering knowledge exchange and collective experimentation.

Business advisory and technology extension services help companies assess capabilities, test solutions, and access specialised facilities.

Grants for research and development lower the costs of experimentation and encourage collaboration, while stakeholder consultations ensure that support measures remain aligned with business needs.

Many companies are also establishing internal quantum labs and innovation hubs to trial applications and build expertise in a controlled environment, combining research with practical exploration instead of relying solely on external guidance.

Looking ahead, the OECD recommends expanding education and skills pipelines, strengthening industry-academic partnerships, and designing policies that support broader participation in quantum adoption.

Hybrid approaches that integrate quantum computing with AI and high-performance computing may offer practical commercial entry points for early applications.

Policymakers are encouraged to balance near-term exploratory pilots with forward-looking support for software development, interoperability, and workforce growth, enabling firms to move from experimentation to deployment effectively.

By following OECD guidance, companies can enhance innovation, improve competitiveness, and ensure that readiness efforts span sectors and geographies rather than remain limited to a few early adopters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU privacy bodies back cybersecurity overhaul

The European Data Protection Board and the European Data Protection Supervisor have backed proposals to strengthen the EU cybersecurity law while safeguarding personal data. Their joint opinion addresses reforms to the Cybersecurity Act and updates to the NIS2 Directive.

Regulators support plans to reinforce the mandate of the European Union Agency for Cybersecurity and expand cybersecurity certification across digital supply chains. Clearer coordination between ENISA and privacy authorities is seen as essential for consistent oversight.

Advice also calls for limits on the processing of personal data and for prior consultation on technical rules affecting privacy. Certification schemes should align with the GDPR and help organisations demonstrate compliance.

Additional recommendations include broader cybersecurity skills training and a single EU entry point for personal data breach notifications. Proposed changes would also classify digital identity wallet providers as essential entities under the EU security rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!