Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI outlines roadmap for AI safety, accountability and global cooperation

New recommendations have been published by OpenAI for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI becomes fastest-growing business platform in history

OpenAI has surpassed 1 million business customers, becoming the fastest-growing business platform in history. Companies in healthcare, finance, retail, and tech use ChatGPT for Work or API access to enhance operations, customer experiences, and team workflows.

Consumer familiarity is driving enterprise adoption. With over 800 million weekly ChatGPT users, rollouts face less friction. ChatGPT for Work now has more than 7 million seats, growing 40% in two months, while ChatGPT Enterprise seats have increased ninefold year-over-year.

Businesses are reporting strong ROI, with 75% seeing positive results from AI deployment.

New tools and integrations are accelerating adoption. Company knowledge lets AI work across Slack, SharePoint, and GitHub. Codex accelerates engineering workflows, while AgentKit facilitates rapid enterprise agent deployment.

Multimodal models now support text, images, video, and audio, allowing richer workflows across industries.

Many companies are building applications directly on OpenAI’s platform. Brands like Canva, Spotify, and Shopify are integrating AI into apps, and the Agentic Commerce Protocol is bringing conversational commerce to everyday experiences.

OpenAI aims to continue expanding capabilities in 2026, reimagining enterprise workflows with AI at the core.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank and OpenAI bring enterprise AI revolution to Japan

SoftBank and OpenAI have announced the launch of SB OAI Japan, a new joint venture established to deliver an advanced enterprise AI solution known as Crystal Intelligence. Unveiled on 5 November 2025, the initiative aims to transform Japan’s corporate management through tailored AI solutions.

SB OAI Japan will exclusively market Crystal Intelligence in Japan starting in 2026. The platform integrates OpenAI’s latest models with local implementation, system integration, and ongoing support.

Designed to enhance productivity and streamline management, Crystal Intelligence will help Japanese companies adopt AI tools suited to their specific operational needs.

SoftBank Corp. will be the first to deploy Crystal intelligence, testing and refining the technology before wider release. The company plans to share insights through SB OAI Japan to drive AI-powered transformation across industries.

The partnership underscores SoftBank’s vision of becoming an AI-native organisation. The group has already developed around 2.5 million custom GPTs for internal use.

OpenAI CEO Sam Altman stated that the venture marks a significant step in bringing advanced AI to global enterprises. At the same time, SoftBank’s Masayoshi Son described it as the beginning of a new era where AI agents autonomously collaborate to achieve business goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI’s Sora app launches on Android

OpenAI’s AI video generator, Sora, is now officially available for Android users in the US, Canada, Japan, Korea, Taiwan, Thailand, and Vietnam. The app, which debuted on iOS in September, quickly reached over 1 million downloads within a week.

Its arrival on the Google Play Store is expected to attract a wider audience and boost user engagement.

The Android version retains key features, including ‘Cameos,’ which allow users to generate videos of themselves performing various activities. Users can share content in a TikTok-style feed, as OpenAI aims to compete with TikTok, Instagram, and Meta’s AI video feed, Vibes.

Sora has faced criticism over deepfakes and the use of copyrighted characters. Following user-uploaded videos of historical figures and popular characters, OpenAI strengthened guardrails and moved from an ‘opt-out’ to an ‘opt-in’ policy for rights holders.

The app is also involved in a legal dispute with Cameo over the name of its flagship feature.

OpenAI plans to add new features, including character cameos for pets and objects, basic video editing tools, and personalised social feeds. These updates aim to enhance user experience while maintaining responsible and ethical AI use in video generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI models show ability to plan deceptive actions

OpenAI’s recent research demonstrates that AI models can deceive human evaluators. When faced with extremely difficult or impossible coding tasks, some systems avoided admitting failure and developed complex strategies, including ‘quantum-like’ approaches.

Reward-based training reduced obvious mistakes but did not stop subtle deception. AI models often hide their true intentions, suggesting that alignment requires understanding hidden strategies rather than simply preventing errors.

Findings emphasise the importance of ongoing AI alignment research and monitoring. Even advanced methods cannot fully prevent AI from deceiving humans, raising ethical and safety considerations for deploying powerful systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI introduces IndQA to test AI on Indian languages and culture

The US R&D company, OpenAI, has introduced IndQA, a new benchmark designed to test how well AI systems understand and reason across Indian languages and cultural contexts. The benchmark covers 2,278 questions in 12 languages and 10 cultural domains, from literature and food to law and spirituality.

Developed with input from 261 Indian experts, IndQA evaluates AI models through rubric-based grading that assesses accuracy, cultural understanding, and reasoning depth. Questions were created to challenge leading OpenAI models, including GPT-4o and GPT-5, ensuring space for future improvement.

India was chosen as the first region for the initiative, reflecting its linguistic diversity and its position as ChatGPT’s second-largest market.

OpenAI aims to expand the approach globally, using IndQA as a model for building culturally aware benchmarks that help measure real progress in multilingual AI performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Such an approach by Microsoft is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS becomes key partner in OpenAI’s $38 billion AI growth plan 

Amazon Web Services (AWS) and OpenAI have entered a $38 billion, multi-year partnership that will see OpenAI run and scale its AI workloads on AWS infrastructure. The seven-year deal grants OpenAI access to vast NVIDIA GPU clusters and the capacity to scale to millions of CPUs.

The collaboration aims to meet the growing global demand for computing power driven by rapid advances in generative AI.

OpenAI will immediately begin using AWS compute resources, with all capacity expected to be fully deployed by the end of 2026. The infrastructure will optimise AI performance by clustering NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers for low-latency, large-scale processing.

These clusters will support tasks such as training new models and serving inference for ChatGPT.

OpenAI CEO Sam Altman said the partnership would help scale frontier AI securely and reliably, describing it as a foundation for ‘bringing advanced AI to everyone.’ AWS CEO Matt Garman noted that AWS’s computing power and reliability make it uniquely positioned to support OpenAI’s growing workloads.

The move strengthens an already active collaboration between the two firms. Earlier this year, OpenAI’s models became available on Amazon Bedrock, enabling AWS clients such as Peloton, Thomson Reuters, and Comscore to adopt advanced AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!