OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they state in general terms that self-harm came up in a potentially concerning way and encourage the trusted contact to check in. The company said every notification undergoes human review, and that it aims to complete that review in under one hour.
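
OpenAI has not published implementation details, but the flow described above (opt-in nomination, time-limited invitation, automated detection, human review, then a limited notification) can be illustrated with a short sketch. Everything in it, from the state names to the expiry constant and the message text, is a hypothetical rendering of the announcement, not OpenAI's code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

# Hypothetical sketch of the Trusted Contact flow as described in the
# announcement. All names and values are illustrative assumptions.

INVITE_WINDOW = timedelta(days=7)  # invitations expire after one week

class ContactState(Enum):
    INVITED = auto()
    ACTIVE = auto()
    EXPIRED = auto()
    REMOVED = auto()

@dataclass
class TrustedContact:
    name: str
    invited_at: datetime
    state: ContactState = ContactState.INVITED

    def accept(self, now: datetime) -> None:
        # Acceptance activates the feature only within the one-week window.
        if now - self.invited_at > INVITE_WINDOW:
            self.state = ContactState.EXPIRED
        else:
            self.state = ContactState.ACTIVE

def maybe_notify(contact: TrustedContact, flagged_by_model: bool,
                 confirmed_by_reviewer: bool) -> str | None:
    """Return a limited notification, or None if nothing should be sent.

    A model flag alone never triggers a message: human review is the gate,
    and the notification carries no chat details or transcripts.
    """
    if contact.state is not ContactState.ACTIVE:
        return None
    if not (flagged_by_model and confirmed_by_reviewer):
        return None
    return ("Someone who named you as their trusted contact may need "
            "support: self-harm came up in a potentially concerning way. "
            "Please consider checking in with them.")
```

In this framing, the human reviewer acts as a hard gate on the model's flag, matching the company's description of mandatory human review before any notification is sent.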

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI found non-compliant in Canadian ChatGPT privacy probe

Canada’s federal and provincial privacy regulators have found that aspects of OpenAI’s collection, use, and disclosure of personal information through ChatGPT did not comply with applicable private-sector privacy laws, particularly in relation to model training on publicly accessible online data and user interactions.

The joint investigation was conducted by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, and the privacy commissioners of British Columbia and Alberta.

It examined OpenAI’s GPT-3.5 and GPT-4 models as used in ChatGPT, focusing on whether the company’s handling of personal information from public internet sources, licensed third-party datasets, and user interactions met legal requirements on appropriate purposes, consent, transparency, accuracy, access, retention, and accountability.

The regulators accepted that OpenAI’s overall purposes for developing and deploying ChatGPT were legitimate and appropriate. However, they found that the company’s initial collection of personal information from publicly accessible websites and licensed third-party sources for model training was overbroad and therefore inappropriate, given the scale, sensitivity, and potential inaccuracy of the data involved, as well as the limits of the mitigation measures in place at the time.

The Offices also found that OpenAI failed to obtain valid consent to collect and use personal information from public internet sources to train its models. They concluded that implied consent was not sufficient because the data could include sensitive personal information and because individuals would not reasonably have expected information about them posted online to be scraped and used for AI model training in this way.

On user interactions with ChatGPT, the regulators accepted that using some chat data for model improvement could serve OpenAI’s legitimate purposes. Still, they found that express consent should have been obtained.

They said OpenAI’s safeguards at the time were not strong enough to ensure that sensitive personal information would not be included in training data, and that many users would not reasonably have understood that their conversations could be used to train models or reviewed by human trainers.

The report also found that OpenAI should have obtained express consent for certain disclosures of personal information through ChatGPT outputs, especially where the information was sensitive or fell outside individuals’ reasonable expectations.

While OpenAI had introduced measures to reduce the risk of sensitive disclosures, the regulators said those measures covered a narrower set of information than the broader categories of personal information protected under the relevant privacy laws.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.5 ranks among strongest models in UK cyber evaluation

The UK AI Security Institute has published cyber evaluations of OpenAI’s GPT-5.5, finding that the model is among the strongest it has tested on cyber tasks and the second to complete one of its end-to-end multi-step cyber-attack simulations.

According to the institute, GPT-5.5’s results suggest that recent gains in cyber capability are not limited to a single model family. It says an earlier evaluation of Anthropic’s Claude Mythos Preview had already pointed to a step up over previous frontier systems, and GPT-5.5 appears to reinforce that broader trend across leading models.

The institute uses a suite of 95 narrow cyber tasks across four difficulty tiers to test capabilities such as reverse engineering, web exploitation, cryptography, vulnerability research, and exploitation. On expert-level tasks in its advanced suite, GPT-5.5 achieved an average pass rate of 71.4%, ahead of Mythos Preview at 68.6%, GPT-5.4 at 52.4%, and Opus 4.7 at 48.6%.
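
For context on the scoring, an average pass rate of this kind is usually the mean of per-task pass rates across repeated attempts. The short sketch below uses invented task names and outcomes purely to show the arithmetic; it is not the institute's evaluation harness.

```python
# Invented task outcomes, purely to illustrate pass-rate averaging;
# this is not the UK AI Security Institute's evaluation harness.
results = {
    "reverse_engineering_01": [True, False, True],  # pass/fail per attempt
    "web_exploitation_07":    [True, True, True],
    "crypto_12":              [False, False, True],
}

per_task = {task: sum(attempts) / len(attempts)
            for task, attempts in results.items()}
average_pass_rate = sum(per_task.values()) / len(per_task)
print(f"{average_pass_rate:.1%}")  # 66.7% for this toy data
```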

The UK AI Security Institute also tests models in cyber ranges designed to measure multi-step attack capability. In The Last Ones, a 32-step corporate network intrusion simulation modelled on an enterprise kill chain, GPT-5.5 completed the full attack chain in 2 of 10 attempts, becoming the second model to do so after Mythos Preview. In the Cooling Tower industrial control system simulation, GPT-5.5 did not complete the range, and no model has yet done so.

The institute stresses that these are controlled capability evaluations and do not necessarily reflect what is available to ordinary public users. It also notes that the current ranges do not yet include all the defensive conditions of real-world environments, such as active defenders, defensive tooling, or alert penalties.

Separately, the institute evaluated GPT-5.5’s cyber safeguards and OpenAI’s mitigations against malicious cyber use. It said expert red-teamers identified a universal jailbreak that elicited prohibited cyber content across all malicious cyber queries provided by OpenAI, including in multi-turn agentic settings. OpenAI later updated its safeguard stack, but the institute said a configuration issue prevented it from verifying the effectiveness of the final version.

The institute adds that if offensive cyber capability is emerging as a byproduct of broader gains in autonomy, reasoning, and coding, further increases in model cyber performance could follow quickly. At the same time, it notes that the same capabilities may also help defenders and points to related UK government work on cyber resilience, vulnerability management, and preparation for a possible ‘vulnerability patch wave’.

Why does it matter?

The significance of the evaluation is not only that GPT-5.5 performed strongly on cyber tasks, but that it adds to the evidence that offensive cyber capability may be improving across multiple frontier model families at roughly the same time. If those gains are being driven by broader advances in reasoning, coding, and agentic execution, then cyber risk may rise even when models are not explicitly optimised for offensive use. That makes evaluation, safeguards, and realistic testing environments increasingly important, especially as the same capabilities can also strengthen defensive work and shorten response times for cybersecurity teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces ChatGPT for Clinicians and HealthBench Professional

OpenAI has launched ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation, medical research, evidence review, and care consults. The company says the product is now available free to verified physicians, nurse practitioners, physician associates, and pharmacists in the United States.

According to OpenAI, ChatGPT for Clinicians includes trusted clinical search with cited answers, reusable skills for repeatable workflows, deep research across medical literature, optional HIPAA support through a Business Associate Agreement for eligible accounts, and the ability for eligible evidence review to count towards continuing medical education credits. OpenAI also says conversations in the product are not used to train models.

The launch builds on OpenAI’s earlier ChatGPT for Healthcare offering for organisations. OpenAI says clinicians across US health systems are already using that product for administrative work such as medical research and documentation, and describes the free clinician version as the next step in expanding access.

Alongside the launch, OpenAI has introduced HealthBench Professional, which it describes as an open benchmark for real-world clinician chat tasks across care consultation, writing, documentation, and medical research. The company says the benchmark is based on physician-authored conversations, multi-stage physician adjudication, and filtered examples selected for quality, representativeness, and difficulty.

OpenAI also says physician advisers reviewed more than 700,000 model responses in health scenarios, and that before release, clinicians tested 6,924 conversations across clinical care, documentation, and research.

According to the company, physicians rated 99.6% of those responses as safe and accurate, while GPT-5.4 in the ChatGPT for Clinicians workspace outperformed base GPT-5.4, other OpenAI and external models, and human physicians on HealthBench Professional. OpenAI adds that the tool is designed to support clinicians with information rather than replace their judgement or expertise.

The company says the free version is currently limited to verified US clinicians, with plans to expand access to additional countries and groups over time. OpenAI also says it will begin by working with the Better Evidence Network to pilot access for verified clinicians outside the United States, subject to local regulations, and has released a Health Blueprint with recommendations for responsible AI integration in US healthcare.

Why does it matter?

The launch of ChatGPT for Clinicians reflects a shift from general-purpose AI use in healthcare towards clinician-specific products tied to workflow, benchmarking, and compliance. It also shows that competition in medical AI is increasingly centred not only on model capability, but on safety evaluation, evidence retrieval, privacy controls, and integration into real clinical practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI privacy model sets new standard for AI-data protection

The US research and development company OpenAI has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information in text with high accuracy.

The model forms part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.
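
As a rough illustration of the behaviour described above (single-pass detection over unstructured text, per-category confidence thresholds, and redaction performed locally), the sketch below wraps a stand-in detector behind a threshold check. The detector is a toy regex so the example runs end to end; OpenAI has not published the Privacy Filter's programming interface here, so every name in the sketch is an assumption.

```python
import re
from dataclasses import dataclass

# Hypothetical wrapper illustrating threshold-based local redaction.
# detect_pii() is a toy stand-in for the model's single inference pass;
# none of these names come from a published OpenAI API.

@dataclass
class PiiSpan:
    start: int
    end: int
    category: str    # e.g. "EMAIL", "FINANCIAL", "CREDENTIAL"
    confidence: float

def detect_pii(text: str) -> list[PiiSpan]:
    # Stand-in detector: only finds e-mail addresses, with a fixed score.
    pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    return [PiiSpan(m.start(), m.end(), "EMAIL", 0.95)
            for m in pattern.finditer(text)]

def redact(text: str, thresholds: dict[str, float]) -> str:
    """Mask every span whose confidence clears its category threshold."""
    spans = [s for s in detect_pii(text)
             if s.confidence >= thresholds.get(s.category, 0.5)]
    # Replace from the end of the text so earlier offsets stay valid.
    for s in sorted(spans, key=lambda s: s.start, reverse=True):
        text = text[:s.start] + f"[{s.category}]" + text[s.end:]
    return text

print(redact("Contact jane.doe@example.com about the invoice.",
             thresholds={"EMAIL": 0.8}))
# -> Contact [EMAIL] about the invoice.
```

Raising a category's threshold trades recall for precision, which is the kind of per-category tuning the announcement describes.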

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a shift by OpenAI towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates life sciences research with a new specialised model

OpenAI has launched GPT-Rosalind, a purpose-built model designed to support complex workflows in biology, drug discovery and translational medicine.

The system focuses on improving reasoning across scientific domains, enabling researchers to process large volumes of data, literature and experimental inputs more efficiently.

The model is engineered to assist with early-stage discovery, where improvements can significantly influence downstream outcomes.

By supporting hypothesis generation, evidence synthesis and experimental design, GPT-Rosalind aims to streamline fragmented research processes that often slow scientific progress.

Integration with specialised tools and access to more than 50 scientific databases enable the new OpenAI model to operate across multi-step workflows.
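
The announcement does not describe GPT-Rosalind's tool interface, but the general shape of a multi-step, tool-using workflow can be sketched. The two tools and the fixed plan below are invented stand-ins for database connectors, not the model's actual integrations.

```python
from typing import Callable

# Invented stand-ins for connectors to scientific databases; a real agentic
# system would let the model choose tools and queries at each step.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_literature": lambda q: f"3 papers matching '{q}'",
    "lookup_protein":    lambda q: f"sequence record for '{q}'",
}

def run_workflow(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a fixed plan of (tool, query) steps and record each result."""
    transcript = []
    for tool_name, query in plan:
        result = TOOLS[tool_name](query)
        transcript.append(f"{tool_name}({query!r}) -> {result}")
    return transcript

# Example plan: gather evidence first, then retrieve a candidate target.
for line in run_workflow([("search_literature", "KRAS G12C inhibitors"),
                          ("lookup_protein", "KRAS")]):
    print(line)
```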

Why does it matter?

Early evaluations indicate stronger performance in areas such as protein analysis, genomics and chemical reasoning, alongside improved capability in selecting and using domain-specific tools.

Access is currently limited through a controlled deployment framework, ensuring use within governed research environments.

Partnerships with organisations including Amgen and Moderna reflect a broader effort to apply AI to real-world scientific challenges while maintaining safeguards and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company OpenAI has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.

The initiative also includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI industrial policy questions control over power, wealth and governance

Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.

A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.

At its core, the document is not regulation but an exercise in influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.

AI industrial policy and the next economic transformation

The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.

However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.

Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.

The return of industrial policy in the AI economy

A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.

A broader global shift in economic thinking is reflected in this approach. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.

Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.

In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.

Redistribution, taxation, and the question of AI wealth

The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.

As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.

Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.

However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.

Workforce disruption and the future of work

The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.

To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.

However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.

While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.

AI safety, governance, and systemic control

Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.

These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.

However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.

A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.

Strategic influence and the future of AI governance

The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks, including inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.

At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.

The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.

AI industrial policy will define economic power

AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.

The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.

Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy. 

Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

OpenAI launches child safety framework to address AI risks

A new framework has been introduced by OpenAI to address risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI presents policy proposals addressing AI’s economic and labour impacts

Policy proposals advanced by OpenAI outline a vision of economic restructuring in response to the growing influence of AI.

Framed within an emerging ‘intelligence age’, the approach reflects concerns that AI-driven productivity gains may concentrate wealth while undermining traditional labour-based economic models.

The proposals, therefore, attempt to reconcile market-led innovation with mechanisms aimed at broader distribution of economic benefits.

A central element involves shifting taxation away from labour towards capital, reflecting expectations that automation will reduce reliance on human work.

Instruments such as robot taxes and public wealth funds are presented as potential tools to redistribute gains generated by AI systems.

These proposals indicate a policy direction in which states may need to redefine fiscal structures to sustain social protection systems traditionally funded through employment-based taxation.

Labour market adaptation forms another key pillar, with suggestions including shorter working weeks, portable benefits, and increased corporate contributions to social welfare.

However, reliance on employer-linked mechanisms raises questions about coverage gaps, particularly for individuals displaced by automation. The proposals highlight ongoing tensions between corporate-led welfare models and the need for more comprehensive public safety nets.

Alongside economic measures, the framework addresses governance challenges linked to advanced AI systems, including systemic risks and misuse.

OpenAI’s proposals also recommend oversight bodies, risk containment strategies, and infrastructure expansion, reflecting an effort to balance innovation with control.

Treating AI as a utility further signals a shift towards recognising digital infrastructure as a public good, though implementation will depend on political consensus and regulatory capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!