OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.
The two companies will develop 4.5 gigawatts of computing capacity, a power draw comparable to that of millions of homes.
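As a rough, back-of-the-envelope check (assuming an average household power draw of about 1.2 kW, a figure not given in the reporting): 4.5 GW ÷ 1.2 kW per home ≈ 3.75 million homes, consistent with the ‘millions of homes’ comparison.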
Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.
The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.
The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.
Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.
It is a gamble that rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.
OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.
The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.
OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.
The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.
Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.
Bloomberg’s Mark Gurman now reports that Apple plans to introduce its AI-powered web search tool in spring 2026. The move would position it against OpenAI and Perplexity, while renewing pressure on Google.
The speculation comes after news that Google may integrate its Gemini AI into Apple devices. During an antitrust trial in April, Google CEO Sundar Pichai confirmed plans to roll out updates later this year.
According to Gurman, Apple and Google finalised an agreement for Apple to test a Google-developed AI model to boost its voice assistant. The partnership reflects Apple’s mixed strategy of dependence on, and rivalry with, Google.
Gurman, who has a strong record of accurate Apple forecasts, suggests the company hopes the move will narrow its competitive gap. Whether it can outpace Google, especially given Pixel’s strong AI features, remains an open question.
OpenAI CEO Sam Altman has sparked debate after admitting he increasingly struggles to distinguish between genuine online conversations and content generated by bots or AI models.
Altman described having ‘the strangest experience’ while reading posts about OpenAI’s Codex model, saying the comments instinctively felt fake even though he knew the underlying growth trend was real. He pointed to engagement-driven social media incentives, ‘LLM-speak’ and astroturfing as factors that make communities feel less genuine.
His comments follow an earlier admission that he had never taken the so-called dead internet theory seriously until noticing how many accounts on X appeared to be run by large language models. The theory claims bots and artificial content dominate online activity, though evidence of coordinated control is lacking.
Reactions were divided, with some users agreeing that online communities have become increasingly bot-like. Others argued the change reflects shifting dynamics in niche groups rather than fake accounts.
The US tech giant Microsoft is expanding its AI strategy by integrating Anthropic’s Claude models into Office 365, adding them to apps like Word, Excel and Outlook instead of relying solely on OpenAI.
Internal tests reportedly showed Anthropic’s systems outperforming OpenAI in specific reasoning and data-processing tasks, prompting Microsoft to adopt a hybrid approach while maintaining OpenAI as a frontier partner.
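As an illustration of what such a hybrid, multi-model setup can look like in practice, here is a brief sketch; the function names, task categories and routing table are hypothetical assumptions, not Microsoft’s actual integration.

```python
# Hypothetical sketch of per-task model routing (not Microsoft's actual code).
from typing import Callable

def call_openai(prompt: str) -> str:      # stand-in for an OpenAI API call
    return f"[openai] {prompt}"

def call_anthropic(prompt: str) -> str:   # stand-in for an Anthropic API call
    return f"[anthropic] {prompt}"

# Routes chosen per internal benchmark results (assumed, for illustration).
ROUTES: dict[str, Callable[[str], str]] = {
    "spreadsheet_analysis": call_anthropic,  # e.g. data-processing tasks
    "slide_generation": call_anthropic,
    "default": call_openai,                  # frontier partner as fallback
}

def route(task_type: str, prompt: str) -> str:
    handler = ROUTES.get(task_type, ROUTES["default"])
    return handler(prompt)

print(route("spreadsheet_analysis", "Summarise the Q3 figures"))
print(route("email_draft", "Write a polite follow-up"))
```

The design point is that each task type goes to whichever model tested best for it, while a default route preserves the existing partnership.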
The shift reflects growing strain between Microsoft and OpenAI, with disputes over intellectual property and cloud infrastructure as well as OpenAI’s plans for greater independence.
By diversifying suppliers, Microsoft reduces risks, lowers costs and positions itself to stay competitive while OpenAI prepares for a potential public offering and develops its own data centres.
Anthropic, backed by Amazon and Google, has built its reputation on safety-focused AI, appealing to Microsoft’s enterprise customers wary of regulatory pressures.
Analysts believe the move could accelerate innovation, spark a ‘multi-model era’ of AI integration, and pressure OpenAI to enhance its technology faster.
The decision comes amid Microsoft’s push to broaden its AI ecosystem, including its in-house MAI-1 model and partnerships with firms like DeepSeek.
Regulators are closely monitoring these developments, given Microsoft’s dominant role in AI investment and the potential antitrust implications of its expanding influence.
Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.
Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.
The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.
Underlying this debate is the wider reality that bots account for much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot has acknowledged that hundreds of millions of bots operate on the platform.
Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.
OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.
Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, such as specific dates or names, are particularly prone to error.
The study argues that while pretraining teaches models to predict the next word without true-or-false labels, the real problem lies in accuracy-based testing: evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.
Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
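To make that incentive concrete, here is a minimal sketch of the scoring argument; the scoring rule, numbers and function are illustrative assumptions, not taken from the OpenAI study.

```python
# Illustrative scoring sketch (hypothetical numbers, not from the study).
# Under plain accuracy, a model that guesses when unsure scores better in
# expectation than one that abstains, so guessing is incentivised.

def expected_score(p_correct: float, abstain: bool,
                   wrong_penalty: float = 0.0, abstain_credit: float = 0.0) -> float:
    """Expected score for a single question.

    p_correct      -- the model's chance of guessing correctly
    abstain        -- whether the model answers 'I don't know'
    wrong_penalty  -- points deducted for a confident wrong answer
    abstain_credit -- partial credit for admitting uncertainty
    """
    if abstain:
        return abstain_credit
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.25  # a low-frequency fact the model is unsure about

# Accuracy-only grading: guessing (0.25) beats abstaining (0.0).
print(expected_score(p, abstain=False))                      # 0.25
print(expected_score(p, abstain=True))                       # 0.0

# Penalise confident errors and give partial credit for honesty:
# abstaining (0.25) now beats guessing (0.25 - 0.75 = -0.5).
print(expected_score(p, abstain=False, wrong_penalty=1.0))   # -0.5
print(expected_score(p, abstain=True, abstain_credit=0.25))  # 0.25
```

Under the first rule the rational policy is always to guess; under the second, a model maximises its score by answering only when sufficiently confident, which is the behavioural change the researchers are after.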
OpenAI is supporting the production of Critterz, an AI-assisted animated film set for a global theatrical release in 2026. The project aims to show that AI can streamline filmmaking, cutting costs and production time.
Produced in partnership with Vertigo Films and Native Foreign, the film is being made in nine months, far faster than the three years typical for animated features.
The film, budgeted under $30 million, combines OpenAI’s GPT-5 and DALL·E with traditional voice acting and hand-drawn elements. Building on the acclaimed 2023 short, Critterz will debut at the Cannes Film Festival and expand on a story where humans and AI creatures share the same world.
Writers James Lamont and Jon Foster, known for Paddington in Peru, have been brought in to shape the screenplay.
While producers highlight AI’s creative potential, concerns remain about authenticity and job security in the industry. Some fear AI films could feel impersonal, while major studios continue to defend intellectual property.
Warner Bros., Disney and Universal are suing Midjourney over alleged copyright violations.
Despite the debate, OpenAI remains committed to its role in pushing generative storytelling. The company is also expanding its infrastructure, forecasting spending of $115 billion by 2029, with $8 billion planned for this year alone.
The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.
The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.
The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.
Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.
OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.
Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
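The Git analogy is apt: branching a chat amounts to copying the message history up to a chosen point and continuing independently. A minimal sketch of that idea follows, using a hypothetical data model rather than OpenAI’s actual implementation.

```python
# Minimal sketch of chat branching (hypothetical data model, not OpenAI's).
from dataclasses import dataclass, field

@dataclass
class Thread:
    messages: list = field(default_factory=list)

    def branch(self, from_index: int) -> "Thread":
        """Copy the conversation up to and including from_index,
        leaving the original thread untouched."""
        return Thread(messages=self.messages[: from_index + 1])

main = Thread(["user: draft an ad", "assistant: Here is a formal draft..."])
humorous = main.branch(from_index=1)            # fork after the first reply
humorous.messages.append("user: now make it funny")

print(len(main.messages))      # 2 -- the original chat is preserved
print(len(humorous.messages))  # 3 -- a new direction explored in parallel
```

Because the branch holds its own copy of the history, edits in one thread never leak into the other, which is the ‘explore without overwriting’ behaviour users had asked for.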
The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.
Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.
Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.
Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.
His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.
The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.
Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.