New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.1 makes ChatGPT smarter and more personal

Meta has announced the construction of its 30th data centre in Beaver Dam, Wisconsin. The $1 billion investment will support the company’s expanding AI infrastructure while benefiting the local community and the environment.

A facility, designed to support Meta’s most demanding AI workloads, that will run entirely on clean energy and create more than 100 permanent jobs alongside 1,000 construction roles.

The company will invest nearly $200 million in energy infrastructure and donate $15 million to Alliant Energy’s Hometown Care Energy Fund to assist families with home energy costs.

Meta will also launch community grants to fund schools and local organisations, strengthening technology education and digital skills while helping small businesses use AI tools more effectively.

Environmental responsibility remains central to the project. The data centre will use dry cooling, eliminating water demands during operation, and restore 100% of consumed water to local watersheds.

In partnership with Ducks Unlimited, Meta will revitalise 570 acres of wetlands and prairie, transforming degraded habitats into thriving ecosystems. The facility is expected to achieve LEED Gold Certification, reflecting Meta’s ongoing commitment to sustainability and community-focused innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows a £1.5bn settlement against Anthropic earlier this year for using pirated books to train its models and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI loses German copyright lawsuit over song lyrics reproduction

A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.

The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.

It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.

Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.

OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.

The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI outlines roadmap for AI safety, accountability and global cooperation

New recommendations have been published by OpenAI for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI becomes fastest-growing business platform in history

OpenAI has surpassed 1 million business customers, becoming the fastest-growing business platform in history. Companies in healthcare, finance, retail, and tech use ChatGPT for Work or API access to enhance operations, customer experiences, and team workflows.

Consumer familiarity is driving enterprise adoption. With over 800 million weekly ChatGPT users, rollouts face less friction. ChatGPT for Work now has more than 7 million seats, growing 40% in two months, while ChatGPT Enterprise seats have increased ninefold year-over-year.

Businesses are reporting strong ROI, with 75% seeing positive results from AI deployment.

New tools and integrations are accelerating adoption. Company knowledge lets AI work across Slack, SharePoint, and GitHub. Codex accelerates engineering workflows, while AgentKit facilitates rapid enterprise agent deployment.

Multimodal models now support text, images, video, and audio, allowing richer workflows across industries.

Many companies are building applications directly on OpenAI’s platform. Brands like Canva, Spotify, and Shopify are integrating AI into apps, and the Agentic Commerce Protocol is bringing conversational commerce to everyday experiences.

OpenAI aims to continue expanding capabilities in 2026, reimagining enterprise workflows with AI at the core.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank and OpenAI bring enterprise AI revolution to Japan

SoftBank and OpenAI have announced the launch of SB OAI Japan, a new joint venture established to deliver an advanced enterprise AI solution known as Crystal Intelligence. Unveiled on 5 November 2025, the initiative aims to transform Japan’s corporate management through tailored AI solutions.

SB OAI Japan will exclusively market Crystal Intelligence in Japan starting in 2026. The platform integrates OpenAI’s latest models with local implementation, system integration, and ongoing support.

Designed to enhance productivity and streamline management, Crystal Intelligence will help Japanese companies adopt AI tools suited to their specific operational needs.

SoftBank Corp. will be the first to deploy Crystal intelligence, testing and refining the technology before wider release. The company plans to share insights through SB OAI Japan to drive AI-powered transformation across industries.

The partnership underscores SoftBank’s vision of becoming an AI-native organisation. The group has already developed around 2.5 million custom GPTs for internal use.

OpenAI CEO Sam Altman stated that the venture marks a significant step in bringing advanced AI to global enterprises. At the same time, SoftBank’s Masayoshi Son described it as the beginning of a new era where AI agents autonomously collaborate to achieve business goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI’s Sora app launches on Android

OpenAI’s AI video generator, Sora, is now officially available for Android users in the US, Canada, Japan, Korea, Taiwan, Thailand, and Vietnam. The app, which debuted on iOS in September, quickly reached over 1 million downloads within a week.

Its arrival on the Google Play Store is expected to attract a wider audience and boost user engagement.

The Android version retains key features, including ‘Cameos,’ which allow users to generate videos of themselves performing various activities. Users can share content in a TikTok-style feed, as OpenAI aims to compete with TikTok, Instagram, and Meta’s AI video feed, Vibes.

Sora has faced criticism over deepfakes and the use of copyrighted characters. Following user-uploaded videos of historical figures and popular characters, OpenAI strengthened guardrails and moved from an ‘opt-out’ to an ‘opt-in’ policy for rights holders.

The app is also involved in a legal dispute with Cameo over the name of its flagship feature.

OpenAI plans to add new features, including character cameos for pets and objects, basic video editing tools, and personalised social feeds. These updates aim to enhance user experience while maintaining responsible and ethical AI use in video generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot