OpenAI moves to challenge Meta and China with open-weight models

OpenAI has launched its first open-weight AI models in over five years, released under the Apache 2.0 license. Developers can now download, adapt, and deploy the models commercially, marking a significant shift from the company’s previously closed approach.

The move comes amid pressure from China’s open-source AI sector and Western rivals such as Meta. The GPT-OSS models focus on reasoning and support complex tasks such as coding and mathematics.

GPT-OSS-120b targets high-performance setups, while GPT-OSS-20b can run on standard machines. Though not fully open source, the release provides transparency on weights and architecture; the training data remains undisclosed.

The approach has split expert opinion: some praise the openness, others question the limited disclosure. Regardless, it signals OpenAI’s strategic recalibration in response to market pressure.

Benchmark tests show the models excel in advanced reasoning. The o4-mini, a related model, has already surpassed its competitors in evaluations such as AIME 2024 and 2025. Analysts say these tools could reshape workflows across sectors, from coding to enterprise automation.

OpenAI’s timing aligns with rapid revenue growth and a $40 billion funding round. Analysts see this release as a calculated step in a maturing, competitive industry, where a balance of proprietary control and open access may define future leadership.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI targets $500 billion valuation ahead of potential IPO

OpenAI is in early discussions over a share sale that could value the company at around $500 billion, according to a source familiar with the talks.

The transaction would occur before a possible IPO and let current and former employees sell several billion dollars’ worth of shares.

The valuation marks a steep rise from the $300 billion figure attached to its most recent funding round earlier in the year. Backed by Microsoft, OpenAI has seen rapid growth in users and revenue, with ChatGPT attracting about 700 million weekly active users, up from 400 million in February.

Revenue doubled in the first seven months of the year, reaching an annualised run rate of $12 billion, and is on track for $20 billion by year-end.

The potential sale comes as competition for AI talent intensifies.

Meta has invested billions in Scale AI and lured its chief executive, Alexandr Wang, to head Meta’s superintelligence unit. At the same time, firms such as ByteDance and Databricks have used private share sales to update valuations and reward staff.

Thrive Capital and other existing OpenAI investors are discussing joining the deal.

OpenAI is also preparing a major corporate restructuring that could replace its capped-profit model and clear the way for an eventual public listing.

However, Chief Financial Officer Sarah Friar said any IPO would only happen when the company and the markets are ready.

OpenAI to improve ChatGPT’s ability to detect mental or emotional distress

People in search of emotional support during a mental health crisis have reportedly been using ChatGPT as their ‘therapist’. While this may seem like an accessible outlet, reports have shown that ChatGPT’s responses have amplified people’s delusions rather than helping them find coping mechanisms. In response, OpenAI stated that it plans to improve the chatbot’s ability to detect mental distress in the new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.

AI tools like Grok 4 may make developers obsolete, Musk suggests

Elon Musk has predicted a major shift in software development, claiming that AI is turning coding from a job into a recreational activity. The xAI CEO believes AI has removed much of the ‘drudgery’ from writing software.

Replying to OpenAI President Greg Brockman, Musk compared the future of coding to painting. He suggested that software creation will be more creative and expressive, no longer requiring professional expertise for functional outcomes.

Musk, a co-founder of OpenAI, left the organisation after a public dispute with the current CEO, Sam Altman. He later launched xAI, which now operates the Grok chatbot as a rival to ChatGPT, Gemini and Claude.

Generative AI firms are accelerating efforts in automated coding. OpenAI recently launched Codex, a cloud-based software engineering agent, while Microsoft released GitHub Spark to generate apps from natural language.

xAI’s latest offering, Grok 4, supports over 20 programming languages and integrates with code editors. It enables developers to write, debug, and understand code using commands.

AI is the next iPhone moment, says Apple CEO Tim Cook

Any remaining doubts about Apple’s commitment to AI have been addressed directly by its CEO, Tim Cook.

At an all-hands meeting on Apple’s Cupertino campus, Cook told employees that the AI revolution is as big as the internet, smartphones, cloud computing, and apps.

According to Bloomberg’s Power On newsletter, Cook clarified that Apple sees AI as an imperative. ‘Apple must do this,’ he said, describing the opportunity as ‘ours to grab’.

Although Apple unveiled its AI suite, Apple Intelligence, only in June, well after competitors, Cook remains optimistic about Apple’s ability to take the lead.

‘We’ve rarely been first,’ he told staff. ‘There was a PC before the Mac; a smartphone before the iPhone; many tablets before the iPad; an MP3 player before the iPod.’

Cook stressed that Apple had redefined these categories and suggested a similar future for AI, declaring, ‘This is how I feel about AI.’

Cook also outlined concrete steps the company is taking. Around 40% of the 12,000 hires made last year were allocated to research and development, with much of the focus on AI.

According to Bloomberg, Apple is also reportedly developing a new cloud-computing chip, code-named Baltra, designed to support AI features. In a recent interview with CNBC, Cook stated that Apple is open to acquisitions that could accelerate its progress in the AI sector.

Apple is not alone in its intense focus on AI. Rival firms are also increasing expectations and pressure. Google co-founder Sergey Brin, who has returned to work at the company, told employees that 60-hour in-office work weeks may be necessary to win the AI race.

Reports of burnout and extreme workloads are becoming more frequent across leading AI firms. Former OpenAI engineer Calvin French-Owen recently described the company’s high-pressure and secretive culture.

French-Owen noted that the environment had become so intense that leadership offered the entire staff a week off to recover, according to Wired.

AI has become the next major battleground in big tech, with companies ramping up investment and reshaping internal structures to secure dominance.

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback will be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say these lack the strength to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.
