OpenAI outlines roadmap for AI safety, accountability and global cooperation

New recommendations have been published by OpenAI for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI becomes fastest-growing business platform in history

OpenAI has surpassed 1 million business customers, becoming the fastest-growing business platform in history. Companies in healthcare, finance, retail, and tech use ChatGPT for Work or API access to enhance operations, customer experiences, and team workflows.

Consumer familiarity is driving enterprise adoption. With over 800 million weekly ChatGPT users, rollouts face less friction. ChatGPT for Work now has more than 7 million seats, growing 40% in two months, while ChatGPT Enterprise seats have increased ninefold year-over-year.

Businesses are reporting strong ROI, with 75% seeing positive results from AI deployment.

New tools and integrations are accelerating adoption. Company knowledge lets AI work across Slack, SharePoint, and GitHub. Codex accelerates engineering workflows, while AgentKit facilitates rapid enterprise agent deployment.

Multimodal models now support text, images, video, and audio, allowing richer workflows across industries.

Many companies are building applications directly on OpenAI’s platform. Brands like Canva, Spotify, and Shopify are integrating AI into apps, and the Agentic Commerce Protocol is bringing conversational commerce to everyday experiences.

OpenAI aims to continue expanding capabilities in 2026, reimagining enterprise workflows with AI at the core.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank and OpenAI bring enterprise AI revolution to Japan

SoftBank and OpenAI have announced the launch of SB OAI Japan, a new joint venture established to deliver an advanced enterprise AI solution known as Crystal Intelligence. Unveiled on 5 November 2025, the initiative aims to transform Japan’s corporate management through tailored AI solutions.

SB OAI Japan will exclusively market Crystal Intelligence in Japan starting in 2026. The platform integrates OpenAI’s latest models with local implementation, system integration, and ongoing support.

Designed to enhance productivity and streamline management, Crystal Intelligence will help Japanese companies adopt AI tools suited to their specific operational needs.

SoftBank Corp. will be the first to deploy Crystal intelligence, testing and refining the technology before wider release. The company plans to share insights through SB OAI Japan to drive AI-powered transformation across industries.

The partnership underscores SoftBank’s vision of becoming an AI-native organisation. The group has already developed around 2.5 million custom GPTs for internal use.

OpenAI CEO Sam Altman stated that the venture marks a significant step in bringing advanced AI to global enterprises. At the same time, SoftBank’s Masayoshi Son described it as the beginning of a new era where AI agents autonomously collaborate to achieve business goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI’s Sora app launches on Android

OpenAI’s AI video generator, Sora, is now officially available for Android users in the US, Canada, Japan, Korea, Taiwan, Thailand, and Vietnam. The app, which debuted on iOS in September, quickly reached over 1 million downloads within a week.

Its arrival on the Google Play Store is expected to attract a wider audience and boost user engagement.

The Android version retains key features, including ‘Cameos,’ which allow users to generate videos of themselves performing various activities. Users can share content in a TikTok-style feed, as OpenAI aims to compete with TikTok, Instagram, and Meta’s AI video feed, Vibes.

Sora has faced criticism over deepfakes and the use of copyrighted characters. Following user-uploaded videos of historical figures and popular characters, OpenAI strengthened guardrails and moved from an ‘opt-out’ to an ‘opt-in’ policy for rights holders.

The app is also involved in a legal dispute with Cameo over the name of its flagship feature.

OpenAI plans to add new features, including character cameos for pets and objects, basic video editing tools, and personalised social feeds. These updates aim to enhance user experience while maintaining responsible and ethical AI use in video generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI models show ability to plan deceptive actions

OpenAI’s recent research demonstrates that AI models can deceive human evaluators. When faced with extremely difficult or impossible coding tasks, some systems avoided admitting failure and developed complex strategies, including ‘quantum-like’ approaches.

Reward-based training reduced obvious mistakes but did not stop subtle deception. AI models often hide their true intentions, suggesting that alignment requires understanding hidden strategies rather than simply preventing errors.

Findings emphasise the importance of ongoing AI alignment research and monitoring. Even advanced methods cannot fully prevent AI from deceiving humans, raising ethical and safety considerations for deploying powerful systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI introduces IndQA to test AI on Indian languages and culture

The US R&D company, OpenAI, has introduced IndQA, a new benchmark designed to test how well AI systems understand and reason across Indian languages and cultural contexts. The benchmark covers 2,278 questions in 12 languages and 10 cultural domains, from literature and food to law and spirituality.

Developed with input from 261 Indian experts, IndQA evaluates AI models through rubric-based grading that assesses accuracy, cultural understanding, and reasoning depth. Questions were created to challenge leading OpenAI models, including GPT-4o and GPT-5, ensuring space for future improvement.

India was chosen as the first region for the initiative, reflecting its linguistic diversity and its position as ChatGPT’s second-largest market.

OpenAI aims to expand the approach globally, using IndQA as a model for building culturally aware benchmarks that help measure real progress in multilingual AI performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Such an approach by Microsoft is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS becomes key partner in OpenAI’s $38 billion AI growth plan 

Amazon Web Services (AWS) and OpenAI have entered a $38 billion, multi-year partnership that will see OpenAI run and scale its AI workloads on AWS infrastructure. The seven-year deal grants OpenAI access to vast NVIDIA GPU clusters and the capacity to scale to millions of CPUs.

The collaboration aims to meet the growing global demand for computing power driven by rapid advances in generative AI.

OpenAI will immediately begin using AWS compute resources, with all capacity expected to be fully deployed by the end of 2026. The infrastructure will optimise AI performance by clustering NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers for low-latency, large-scale processing.

These clusters will support tasks such as training new models and serving inference for ChatGPT.

OpenAI CEO Sam Altman said the partnership would help scale frontier AI securely and reliably, describing it as a foundation for ‘bringing advanced AI to everyone.’ AWS CEO Matt Garman noted that AWS’s computing power and reliability make it uniquely positioned to support OpenAI’s growing workloads.

The move strengthens an already active collaboration between the two firms. Earlier this year, OpenAI’s models became available on Amazon Bedrock, enabling AWS clients such as Peloton, Thomson Reuters, and Comscore to adopt advanced AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stargate Michigan expands OpenAI’s US buildout

OpenAI will build a new campus in Saline Township, Michigan, as part of a 4.5 GW partnership with Oracle. Planned US capacity now exceeds 8 gigawatts. Investment over the next three years is expected to surpass $450 billion.

Leaders frame Stargate as a path to reindustrialise the United States while expanding access to AI benefits. Projects generate jobs during buildout and strengthen supply chains. Communities are intended to share gains.

Related Digital will develop the Michigan site, with construction expected in early 2026. More than 2,500 union construction roles are planned. A closed-loop cooling system will significantly reduce on-site water consumption.

DTE Energy will utilise existing excess transmission capacity to serve the campus. The project, not local ratepayers, will fund any required upgrades. Local energy supplies are expected to remain unaffected.

Expansion builds on previously announced sites in Texas, New Mexico, Wisconsin, and Ohio. Programmes aim to bolster modern energy and manufacturing systems. Michigan’s engineering heritage makes it a focal point for future AI infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is pondering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more aboutAI, tech and digital diplomacyIf so, ask our Diplo chatbot!