GPT-4o set for retirement as OpenAI shifts focus to newer systems

OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.

Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT; developers will continue to have access to the models through OpenAI’s API.

GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.

Company data now suggests that only around 0.1% of daily users still actively use GPT-4o. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.

Despite this, criticism has resurfaced across social platforms, with users questioning usage metrics and highlighting that GPT-4o was no longer prominently accessible. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk fuses SpaceX and xAI in a $1.25T power play

SpaceX has acquired Elon Musk’s AI company xAI, bringing xAI’s Grok chatbot and the X social platform under the SpaceX umbrella in a deal that further consolidates Musk’s privately held businesses. Investor and media accounts of the transaction put the combined valuation at around $1.25 trillion, reflecting SpaceX’s scale in launch services and Starlink, alongside xAI’s rapid growth in the AI market.

The tie-up is pitched as a way to integrate AI development with SpaceX’s communications infrastructure and space hardware, including ambitions to push computing beyond Earth. The companies argue that the power and cooling demands of AI, if met mainly through terrestrial data centres, will strain electricity supply and local environments, and that space-based systems could become part of a longer-term answer.

The deal lands after a period of intense deal-making around xAI, which completed a $20 billion Series E in early January that valued the company at about $230 billion. Tesla has also disclosed plans to invest $2 billion in xAI, underscoring how capital-heavy the AI race has become and how closely Musk’s firms are being linked through financing and ownership.

At the same time, Grok and X have faced mounting scrutiny over AI-generated harms, including non-consensual sexualised deepfakes, prompting investigations and renewed pressure on safeguards and enforcement. That backdrop adds regulatory and reputational risk to a structure that now ties AI tooling to a mass-distribution platform and to a company with major government and national-security-adjacent business lines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
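
The report does not prescribe a technical format, but as a purely hypothetical sketch, a machine-readable label attached to an AI answer could carry fields along the following lines; every field name and value here is invented for illustration rather than drawn from the IPPR proposal.

```python
# Hypothetical structure for a source 'nutrition label' on an AI-generated answer.
# The fields are illustrative only; they are not part of the IPPR recommendation.
answer_label = {
    "generated_at": "2026-02-10T09:30:00Z",
    "sources": [
        {"publisher": "Example News", "url": "https://example.com/article", "licensed": True},
        {"publisher": "Example Gazette", "url": "https://example.org/story", "licensed": False},
    ],
    "share_of_text_attributed_to_cited_sources": 0.8,
}
```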

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops compact ultrasound system for frequent breast cancer screening

Massachusetts Institute of Technology researchers have developed a compact ultrasound system designed to make breast cancer screening more accessible and frequent, particularly for people at higher risk.

The portable device could be used in doctors’ offices or at home, helping detect tumours earlier than current screening schedules allow.

The system pairs a small ultrasound probe with a lightweight processing unit to deliver real-time 3D images via a laptop. Researchers say its portability and low power use could improve access in rural areas where traditional ultrasound machines are impractical.

Frequent monitoring is critical, as aggressive interval cancers can develop between routine mammograms and account for up to 30% of breast cancer cases.

By enabling regular ultrasound scans without specialised technicians or bulky equipment, the technology could increase rates of early detection, the stage at which survival outcomes are significantly higher.

Initial testing successfully produced clear, gap-free 3D images of breast tissue, and larger clinical trials are now underway at partner hospitals. The team is developing a smaller version that could connect to a smartphone and be integrated into a wearable device for home use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases, meaning users must prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, with a claimed 1.5 million users, was also largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
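
As a rough, generic illustration of that advice (not specific to Moltbook, whose internals are not public), a short audit script could flag likely keys or tokens left in an agent’s configuration or source files; the directory name and secret patterns below are invented and would need adapting to a real deployment.

```python
import re
from pathlib import Path

# Hypothetical patterns for common secret formats; a real audit would use the
# exact key formats issued by the services each agent is registered with.
SECRET_PATTERNS = {
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[A-Za-z0-9_\-.]{20,}"),
}

def audit_directory(root: str) -> list[tuple[str, str]]:
    """Scan files under `root` and return (file, pattern name) pairs for likely secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    # Each hit is a candidate for rotation: revoke the key with the issuing
    # service, generate a replacement, and move it out of source control.
    for file, kind in audit_directory("./agent-config"):
        print(f"Possible {kind} in {file} - rotate and remove")
```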

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.
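
As a minimal sketch of what such a narrow tool can look like, the request-classification case can be handled by a small, conventional model rather than a frontier system; the categories and training examples below are invented for illustration, with scikit-learn standing in for whatever lightweight model an organisation actually adopts.

```python
# A deliberately small request router: TF-IDF features plus logistic regression,
# trained on a handful of labelled examples. Categories and texts are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Please reset my account password",
    "I cannot log in to the portal",
    "Requesting leave for next Tuesday",
    "How many vacation days do I have left?",
    "Invoice 4821 has not been paid",
    "When will the supplier payment go out?",
]
train_labels = ["it_support", "it_support", "hr", "hr", "finance", "finance"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# Route a new request to the right team without calling a large general model.
print(model.predict(["I forgot my password again"])[0])  # likely: it_support
```

A model of this size runs locally in milliseconds and keeps organisational data in-house, which is precisely the trade-off the post argues most routine work calls for.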

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserving frontier AI for genuinely complex tasks, and moving beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education and rights central to UN AI strategy

UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.

UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.

Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.

Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.

Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!