ChatGPT offers wellness checks for long chat sessions

OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.

Users can dismiss the prompt or continue chatting, a light-touch way to curb excessive screen time without being restrictive. The update also changes how ChatGPT handles high-stakes personal decisions.

ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.

OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.

The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI Foundation to fund global health and AI safety projects

OpenAI has finalised its recapitalisation, simplifying its structure while preserving its core mission. The new OpenAI Foundation controls OpenAI Group PBC and holds about $130 billion in equity, making it one of history’s best-funded philanthropies.

The Foundation will receive further ownership as OpenAI’s valuation grows, ensuring its financial resources expand alongside the company’s success. Its mission remains to ensure that artificial general intelligence benefits all of humanity.

The more the business prospers, the greater the Foundation’s capacity to fund global initiatives.

An initial $25 billion commitment will focus on two core areas: advancing healthcare breakthroughs and strengthening AI resilience. Funds will go toward open-source health datasets, medical research, and technical defences to make AI systems safer and more reliable.

The initiative builds on OpenAI’s existing People-First AI Fund and reflects recommendations from its Nonprofit Commission.

The recapitalisation follows nearly a year of discussions with the Attorneys General of California and Delaware, resulting in stronger governance and accountability. With this structure, OpenAI aims to advance science, promote global cooperation, and share AI benefits broadly.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at about 0.07 percent of weekly users and says safety prompts are triggered in those conversations. Critics argue that even small percentages translate into large numbers at ChatGPT's scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

OpenAI and Microsoft sign new $135 billion agreement to deepen AI partnership

Microsoft and OpenAI have signed a new agreement that marks the next phase of their long-standing partnership, deepening ties first formed in 2019.

The updated deal builds on years of collaboration in advancing responsible AI, positioning both organisations for long-term success while introducing new structural and operational changes.

Under the new arrangement, Microsoft supports OpenAI’s transition into a public benefit corporation (PBC) and recapitalisation. The technology giant now holds an investment valued at around $135 billion, representing about 27 percent of OpenAI Group PBC on an as-converted diluted basis.

Microsoft previously held a 32.5 percent stake in the for-profit entity; dilution from OpenAI's recent funding rounds accounts for the lower figure.

The partnership maintains Microsoft’s exclusive rights to OpenAI’s frontier models and Azure API until artificial general intelligence (AGI) is achieved, but also introduces several new terms. Once AGI is declared, an independent panel will verify it.

Microsoft’s intellectual property rights are extended through 2032, including models developed after AGI with safety conditions. OpenAI may now co-develop certain products with third parties, while retaining the option to serve non-API products on any cloud provider.

OpenAI will purchase an additional $250 billion worth of Azure services, although Microsoft will no longer hold first-refusal rights for compute supply. The new framework lets both organisations innovate separately, with Microsoft now permitted to pursue AGI alone or with other partners.

The updated agreement reflects a more flexible collaboration that balances independence, growth, and shared innovation.

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms has spread confusion. Viewers face a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Everyday people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, revealing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

OpenAI rolls out pet-centric AI video features and social tools in Sora

OpenAI has announced significant enhancements to its text-to-video app Sora. The update introduces new features including pet and object ‘cameos’ in AI-generated videos, expanded video editing tools, social sharing elements and a forthcoming Android version of the app.

Using the new pet cameo feature, users will be able to upload photos of their pets or objects and then incorporate them into animated video scenes generated by Sora. The objective is to deepen personalisation and creative expression by letting users centre their own non-human characters.

Sora is also gaining editing capabilities that simplify the creation process. Users can remix existing videos, apply stylistic changes, and use social features such as feeds where others' creations can be viewed and shared. The Android app, noted as 'coming soon', will extend Sora's reach beyond the initial iOS and web release.

The move reflects OpenAI’s strategy to transition Sora from an experimental novelty into a more fully featured social video product. By enabling user-owned content (pets/objects), expanding sharing functionality and broadening platform reach, Sora is positioned to compete in the generative video and social media landscape.

At the same time, the update raises questions around content use, copyright (especially when user-owned pets or objects are included), deepfake risks, and moderation. Given Sora’s prior scrutiny over synthetic media, the expansion into more personalised video may prompt further regulatory or ethical review.

Sky acquisition by OpenAI signals ChatGPT’s push into native workflows

OpenAI acquired Software Applications Incorporated, the maker of Sky, to accelerate the development of interfaces that understand context, adapt to intent, and act across apps. Sky’s macOS layer sees what’s on screen and executes tasks. Its team joins OpenAI to bake these capabilities into ChatGPT.

Sky turns the Mac into a cooperative workspace for writing, planning, coding, and daily tasks. It can control native apps, invoke workflows, and ground actions in on-screen context. That tight integration now becomes a core pillar of ChatGPT’s product roadmap.

OpenAI says the goal is capability plus usability: not just answers, but actions completed in your tools. VP Nick Turley framed it as moving from prompts to productivity. Expect ChatGPT features that feel ambient, proactive, and native on desktop.

Sky’s founders say large language models finally enable intuitive, customisable computing. CEO Ari Weinstein described Sky as a layer that ‘floats’ over your desktop, helping you think and create. OpenAI plans to bring that experience to hundreds of millions of users.

A disclosure notes that a fund associated with Sam Altman held a passive stake in Software Applications Incorporated. Nick Turley and Fidji Simo led the deal. OpenAI’s independent Transaction and Audit Committees reviewed and approved the acquisition.

South Korea moves to lead the AI era with OpenAI’s economic blueprint

Poised to become a global AI powerhouse, South Korea has the right foundations in place: advanced semiconductor production, robust digital infrastructure, and a highly skilled workforce.

OpenAI’s new Economic Blueprint for Korea sets out how the nation can turn those strengths into broad, inclusive growth through scaled and trusted AI adoption.

The blueprint builds on South Korea’s growing momentum in frontier technology.

Following OpenAI’s first Asia–Pacific country partnership, initiatives such as Stargate with Samsung and SK aim to expand advanced memory supply and explore next-generation AI data centres alongside the Ministry of Science and ICT.

A new OpenAI office in Seoul, along with collaboration with Seoul National University, further signals the country’s commitment to becoming an AI hub.

The strategy rests on two complementary paths: building sovereign AI capabilities in infrastructure, data governance, and GPU supply, while also deepening cooperation with frontier developers like OpenAI.

The aim is to enhance operational maturity and cost efficiency across key industries, including semiconductors, shipbuilding, healthcare, and education.

By combining domestic expertise with global partnerships, South Korea could boost productivity, improve welfare services, and foster regional growth beyond Seoul. With decisive action, the nation stands ready to transform from a fast adopter into a global standard-setter for safe, scalable AI systems.
