Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. The company frames Microsoft’s marketplace as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS becomes key partner in OpenAI’s $38 billion AI growth plan 

Amazon Web Services (AWS) and OpenAI have entered a $38 billion, multi-year partnership that will see OpenAI run and scale its AI workloads on AWS infrastructure. The seven-year deal grants OpenAI access to vast NVIDIA GPU clusters and the capacity to scale to millions of CPUs.

The collaboration aims to meet the growing global demand for computing power driven by rapid advances in generative AI.

OpenAI will immediately begin using AWS compute resources, with all capacity expected to be fully deployed by the end of 2026. The infrastructure will optimise AI performance by clustering NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers for low-latency, large-scale processing.

These clusters will support tasks such as training new models and serving inference for ChatGPT.

OpenAI CEO Sam Altman said the partnership would help scale frontier AI securely and reliably, describing it as a foundation for ‘bringing advanced AI to everyone.’ AWS CEO Matt Garman noted that AWS’s computing power and reliability make it uniquely positioned to support OpenAI’s growing workloads.

The move strengthens an already active collaboration between the two firms. Earlier this year, OpenAI’s models became available on Amazon Bedrock, enabling AWS clients such as Peloton, Thomson Reuters, and Comscore to adopt advanced AI tools.

Stargate Michigan expands OpenAI’s US buildout

OpenAI will build a new campus in Saline Township, Michigan, as part of a 4.5 GW partnership with Oracle. Planned US capacity now exceeds 8 GW. Investment over the next three years is expected to surpass $450 billion.

Leaders frame Stargate as a path to reindustrialise the United States while expanding access to AI benefits. Projects generate jobs during buildout and strengthen supply chains, with communities intended to share in the gains.

Related Digital will develop the Michigan site, with construction expected in early 2026. More than 2,500 union construction roles are planned. A closed-loop cooling system will significantly reduce on-site water consumption.

DTE Energy will utilise existing excess transmission capacity to serve the campus. The project, not local ratepayers, will fund any required upgrades. Local energy supplies are expected to remain unaffected.

Expansion builds on previously announced sites in Texas, New Mexico, Wisconsin, and Ohio. Programmes aim to bolster modern energy and manufacturing systems. Michigan’s engineering heritage makes it a focal point for future AI infrastructure.

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is weighing whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b sizes, these models allow developers to apply and revise policies during inference instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.
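The policy-as-prompt idea can be illustrated with a small sketch. Everything below (the prompt template, the verdict labels, how the request is assembled) is an assumption for illustration, not OpenAI’s official interface: the point is simply that the policy travels with each request, so revising moderation rules means editing text rather than retraining a classifier.

```python
def build_safeguard_request(policy: str, content: str) -> list[dict]:
    """Assemble a chat-style request asking a reasoning model to classify
    `content` against a developer-written `policy`.

    Hypothetical template: gpt-oss-safeguard's actual prompt format may differ.
    """
    system = (
        "You are a content-safety classifier. Apply ONLY the policy below.\n"
        "Reply with a verdict (ALLOW or FLAG) followed by your reasoning.\n\n"
        f"POLICY:\n{policy}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": content},
    ]

# Revising the policy is just a string edit -- no retraining step.
policy_v1 = "Flag any message that shares personal phone numbers."
messages = build_safeguard_request(policy_v1, "Call me on 555-0100!")
# `messages` would then be sent to a gpt-oss-safeguard deployment (e.g. via
# any OpenAI-compatible chat endpoint); the reply carries both the verdict
# and the model's reasoning, which is what makes enforcement auditable.
```

Because the model explains its verdict, a developer can inspect why a given message was flagged and tighten or loosen the policy text accordingly.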

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.

ChatGPT offers wellness checks for long chat sessions

OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’

Users can dismiss the prompt or continue, helping to curb excessive screen time without being restrictive. The update also changes how ChatGPT handles high-stakes personal decisions.

ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.

OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.

The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.

OpenAI Foundation to fund global health and AI safety projects

OpenAI has finalised its recapitalisation, simplifying its structure while preserving its core mission. The new OpenAI Foundation controls OpenAI Group PBC and holds about $130 billion in equity, making it one of history’s best-funded philanthropies.

The Foundation will receive further ownership as OpenAI’s valuation grows, ensuring its financial resources expand alongside the company’s success. Its mission remains to ensure that artificial general intelligence benefits all of humanity.

The more the business prospers, the greater the Foundation’s capacity to fund global initiatives.

An initial $25 billion commitment will focus on two core areas: advancing healthcare breakthroughs and strengthening AI resilience. Funds will go toward open-source health datasets, medical research, and technical defences to make AI systems safer and more reliable.

The initiative builds on OpenAI’s existing People-First AI Fund and reflects recommendations from its Nonprofit Commission.

The recapitalisation follows nearly a year of discussions with the Attorneys General of California and Delaware, resulting in stronger governance and accountability. With this structure, OpenAI aims to advance science, promote global cooperation, and share AI benefits broadly.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in such cases. Critics argue that even small percentages translate into large numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

OpenAI and Microsoft sign new $135 billion agreement to deepen AI partnership

Microsoft and OpenAI have signed a new agreement that marks the next phase of their long-standing partnership, deepening ties first formed in 2019.

The updated deal builds on years of collaboration in advancing responsible AI, positioning both organisations for long-term success while introducing new structural and operational changes.

Under the new arrangement, Microsoft supports OpenAI’s transition into a public benefit corporation (PBC) and recapitalisation. The technology giant now holds an investment valued at around $135 billion, representing about 27 percent of OpenAI Group PBC on an as-converted diluted basis.

Microsoft previously held a 32.5 percent stake in the for-profit entity; OpenAI’s recent funding rounds have diluted that position.

The partnership maintains Microsoft’s exclusive rights to OpenAI’s frontier models and Azure API until artificial general intelligence (AGI) is achieved, but also introduces several new terms. Once AGI is declared, an independent panel will verify it.

Microsoft’s intellectual property rights are extended through 2032, including models developed after AGI with safety conditions. OpenAI may now co-develop certain products with third parties, while retaining the option to serve non-API products on any cloud provider.

OpenAI will purchase an additional $250 billion worth of Azure services, although Microsoft will no longer hold first-refusal rights for compute supply. The new framework lets both organisations innovate on their own, with Microsoft free to pursue AGI alone or with other partners.

The updated agreement reflects a more flexible collaboration that balances independence, growth, and shared innovation.

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.
