OpenAI has launched GPT-Rosalind, a purpose-built model. It is designed to support complex workflows in biology, drug discovery and translational medicine.
A system that focuses on improving reasoning across scientific domains, enabling researchers to process large volumes of data, literature and experimental inputs more efficiently.
The model is engineered to assist with early-stage discovery, where improvements can significantly influence downstream outcomes.
By supporting hypothesis generation, evidence synthesis and experimental design, GPT-Rosalind aims to streamline fragmented research processes that often slow scientific progress.
Integration with specialised tools and access to more than 50 scientific databases enable the new OpenAI model to operate across multi-step workflows.
Access is currently limited through a controlled deployment framework, ensuring use within governed research environments.
Partnerships with organisations including Amgen and Moderna reflect a broader effort to apply AI to real-world scientific challenges while maintaining safeguards and oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US AI research and deployment company, OpenAI, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.
A programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.
Such an initiative by OpenAI includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.
By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.
A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.
Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.
The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.
It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.
A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.
At its core, the document is not a regulation but influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.
Image via freepik
AI industrial policy and the next economic transformation
The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.
However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.
Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.
Image via freepik
The return of industrial policy in the AI economy
A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.
A broader global shift in economic thinking is reflected in this approach. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.
Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.
In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.
Image via freepik
Redistribution, taxation, and the question of AI wealth
The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.
As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.
Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.
However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.
Image via freepik
Workforce disruption and the future of work
The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.
To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.
However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.
While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.
Image via freepik
AI safety, governance, and systemic control
Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.
These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.
However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.
A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.
Image via freepik
Strategic influence and the future of AI governance
The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks- inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.
At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.
The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.
Image via freepik
AI industrial policy will define economic power
AI is no longer solely a technological development- it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.
The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.
Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy.
Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new framework has been introduced by OpenAI to address risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.
An initiative that reflects growing concern over how emerging technologies can both enable and prevent harm.
The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.
These measures aim to enhance early detection and prevent misuse at scale.
It emphasises coordinated responses and stronger accountability mechanisms.
An approach that combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.
Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Policy proposals advanced by OpenAI outline a vision of economic restructuring in response to the growing influence of AI.
Framed within an emerging ‘intelligence age‘, the approach reflects concerns that AI-driven productivity gains may concentrate wealth while undermining traditional labour-based economic models.
The proposals, therefore, attempt to reconcile market-led innovation with mechanisms aimed at broader distribution of economic benefits.
A central element involves shifting taxation away from labour towards capital, reflecting expectations that automation will reduce reliance on human work.
Instruments such as robot taxes and public wealth funds are presented as potential tools to redistribute gains generated by AI systems.
Such proposals by OpenAI indicate a policy direction where states may need to redefine fiscal structures to sustain social protection systems traditionally funded through employment-based taxation.
Labour market adaptation forms another key pillar, with suggestions including shorter working weeks, portable benefits, and increased corporate contributions to social welfare.
However, reliance on employer-linked mechanisms raises questions about coverage gaps, particularly for individuals displaced by automation. The proposals highlight ongoing tensions between corporate-led welfare models and the need for more comprehensive public safety nets.
Alongside economic measures, the framework addresses governance challenges linked to advanced AI systems, including systemic risks and misuse.
OpenAI’s proposals also recommend that oversight bodies, risk containment strategies, and infrastructure expansion reflect an effort to balance innovation with control.
Treating AI as a utility further signals a shift towards recognising digital infrastructure as a public good, though implementation will depend on political consensus and regulatory capacity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Penguin Random House has filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, infringed copyright by imitating content from the ‘Coconut the Little Dragon’ series by German author Ingo Siegner. Filed in a Munich court, the complaint targets OpenAI’s European subsidiary, citing the chatbot’s creation of text, a book cover, and a promotional blurb as evidence of unauthorised ‘memorisation’ of Siegner’s work.
This issue highlights the challenge of distinguishing between algorithmic learning and direct copying, as AI models like OpenAI’s large language model (LLM) can retain extensive portions of their training data and reproduce them, raising legal and ethical dilemmas.
Penguin Random House insists that protecting human creativity is central to its mission. Carina Mathern, a representative, stressed the importance of safeguarding intellectual property, even as the company acknowledges the potential benefits of AI.
Bertelsmann, the parent company of Penguin Random House, had a prior agreement with OpenAI but did not allow access to its media archives, illustrating the complexities of AI collaboration while safeguarding proprietary content. OpenAI responded by stating that they are reviewing the allegations, reiterating their respect for creators and maintaining dialogue with publishers worldwide.
Why does it matter?
The resolution of this lawsuit could mark a pivotal moment in defining AI’s role in creative industries, shaping future regulations and enforcement strategies for AI-driven content creation and its impact on intellectual property rights globally.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.
OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.
A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.
OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.
Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.
OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.
Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.
User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.
OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.
The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.
The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.
Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.
The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.
Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions and may be reassigned depending on relevance.
The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A major expansion of its activities has been outlined by OpenAI Foundation, signalling a broader effort to ensure AI delivers tangible benefits while addressing emerging risks.
The organisation plans to invest at least $1 billion over the next year, forming part of a wider $25 billion commitment focused on disease research and AI resilience.
OpenAI Foundation frames such potential as central to its mission, while recognising that more capable systems introduce complex societal and safety challenges that require coordinated responses.
Initial programmes prioritise life sciences, including research into Alzheimer’s disease, expanded access to public health data, and accelerated progress on high-mortality conditions.
Parallel efforts examine the economic impact of automation, with engagement across policymakers, labour groups and businesses aimed at developing practical responses to labour market disruption.
A dedicated resilience strategy addresses risks linked to advanced AI systems, including safety standards, biosecurity concerns and the protection of children and young users.
Alongside community-focused funding, the OpenAI Foundation’s initiative reflects a dual objective: enabling innovation rather than leaving societies exposed to technological disruption.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!