Ethical limits of rapidly advancing AI debated at Doha forum

Doha Debates, an initiative of Qatar Foundation, hosted a town hall examining the ethical, political, and social implications of rapidly advancing AI. The discussion reflected growing concern that AI capabilities could outpace human control and existing governance frameworks.

Held at Multaqa in Education City, the forum gathered students, researchers, and international experts to assess readiness for rapid technological change. Speakers offered contrasting views, highlighting both opportunity and risk as AI systems grow more powerful.

Philosopher and transhumanist thinker Max More argued for continued innovation guided by reason and proportionate safeguards, warning against fear-driven stagnation.

By contrast, computer scientist Roman Yampolskiy questioned whether meaningful control over superintelligent systems is realistic, cautioning that widening intelligence gaps could undermine governance entirely.

Nabiha Syed, executive director of the Mozilla Foundation, focused on accountability and social impact. She urged broader public participation and transparency, particularly as AI deployment risks reinforcing existing inequalities across societies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

GPT-4o set for retirement as OpenAI shifts focus to newer systems

OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.

Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT, while developers will continue to access the models through OpenAI’s API.

GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.

Company data now suggests active use of GPT-4o has dropped to around 0.1% of daily users. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.

Despite this, criticism has resurfaced across social platforms, with users questioning usage metrics and highlighting that GPT-4o was no longer prominently accessible. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Church leaders question who should guide moral answers in the age of AI

AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.

Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.

Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.

Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.

Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk fuses SpaceX and xAI in a $1.25T power play

SpaceX has acquired Elon Musk’s AI company xAI, bringing xAI’s Grok chatbot and the X social platform under the SpaceX umbrella in a deal that further consolidates Musk’s privately held businesses. Investor and media accounts of the transaction put the combined valuation around $1.25 trillion, reflecting SpaceX’s scale in launch services and Starlink, alongside xAI’s rapid growth in the AI market.

The tie-up is pitched as a way to integrate AI development with SpaceX’s communications infrastructure and space hardware, including ambitions to push computing beyond Earth. The companies argue that the power and cooling demands of AI, if met mainly through terrestrial data centres, will strain electricity supply and local environments, and that space-based systems could become part of a longer-term answer.

The deal lands after a period of intense deal-making around xAI. xAI completed a $20 billion Series E in early January that valued the company at about $230 billion, and Tesla has disclosed plans to invest $2 billion in xAI, underscoring how capital-heavy the AI race has become and how closely Musk’s firms are being linked through financing and ownership.

At the same time, Grok and X have faced mounting scrutiny over AI-generated harms, including non-consensual sexualised deepfakes, prompting investigations and renewed pressure on safeguards and enforcement. That backdrop adds regulatory and reputational risk to a structure that now ties AI tooling to a mass-distribution platform and to a company with major government and national-security-adjacent business lines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research (IPPR) said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
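
The report does not prescribe a technical format, but as a purely hypothetical sketch, a machine-readable 'nutrition label' attached to an AI-generated answer might record the underlying sources and their licensing status along the following lines (all field names and values are invented for illustration):

```python
# Hypothetical sketch only: the IPPR report does not define a schema.
# Field names and values are invented for illustration.
ai_answer_label = {
    "generated_by": "example-model-v1",  # assumed model identifier
    "generated_at": "2026-02-10T09:30:00Z",
    "sources": [
        {"outlet": "The Guardian", "licensed": True},
        {"outlet": "Financial Times", "licensed": True},
    ],
    "unlicensed_material_used": False,
    "note": "Summary may omit context; follow source links for full reporting.",
}

# A reader-facing label could then be rendered from the same data.
for source in ai_answer_label["sources"]:
    status = "licensed" if source["licensed"] else "unlicensed"
    print(f"Source: {source['outlet']} ({status})")
```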

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.
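
As a minimal illustration of the kind of basic safeguard mentioned above, the sketch below shows time-based one-time passwords (TOTP), a common form of two-factor authentication, using the pyotp library; it is an assumed example for readers, not a recommendation drawn from the agency's report.

```python
# Minimal TOTP two-factor authentication sketch using the pyotp library
# (pip install pyotp). Illustrative only.
import pyotp

# In practice the secret is generated once per user and shown as a QR code
# for an authenticator app; here it is created on the fly for the example.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

# At login, the code entered by the user is verified against the shared secret.
user_code = totp.now()  # stand-in for the code a user would type in
print("Code accepted:", totp.verify(user_code))
```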

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT develops compact ultrasound system for frequent breast cancer screening

Massachusetts Institute of Technology researchers have developed a compact ultrasound system designed to make breast cancer screening more accessible and frequent, particularly for people at higher risk.

The portable device could be used in doctors’ offices or at home, helping detect tumours earlier than current screening schedules allow.

The system pairs a small ultrasound probe with a lightweight processing unit to deliver real-time 3D images via a laptop. Researchers say its portability and low power use could improve access in rural areas where traditional ultrasound machines are impractical.

Frequent monitoring is critical, as aggressive interval cancers can develop between routine mammograms and account for up to 30% of breast cancer cases.

By enabling regular ultrasound scans without specialised technicians or bulky equipment, the technology could increase rates of early detection, when survival outcomes are significantly better.

Initial testing successfully produced clear, gap-free 3D images of breast tissue, and larger clinical trials are now underway at partner hospitals. The team is developing a smaller version that could connect to a smartphone and be integrated into a wearable device for home use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Best moments from MoltBook archives

A new ‘Best of MoltBook’ post on Astral Codex Ten has renewed debate over how AI-assisted writing is being presented and understood. The collection highlights selected excerpts from MoltBook, a public notebook used to explore ideas with the help of AI tools.

MoltBook is framed as a space for experimentation rather than finished analysis, with short-form entries reflecting drafts, prompts and revisions. Human judgement remains central, with outputs curated, edited or discarded rather than treated as autonomous reasoning.

Some readers have questioned descriptions of the work as ‘agentic AI’, arguing the label exaggerates the technology’s role. The AI involved responds to instructions but does not act independently, plan goals or retain long-term memory.

The discussion reflects wider scepticism about inflated claims around AI capability. MoltBook is increasingly viewed as an example of AI as a productivity aid for thinking, rather than evidence of a new form of independent intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases, meaning users must prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform's rapid growth, with claims of 1.5 million users, was largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
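
What that auditing step looks like will vary by deployment, but a rough sketch along the following lines could flag API-key-like strings left in local configuration files so they can be rotated (the regex, file types, and paths are assumptions, not details from the incident reports):

```python
# Rough sketch: scan local configuration files for strings that look like
# API keys or tokens so they can be rotated. Pattern and file types are
# illustrative assumptions, not a recipe tied to Moltbook specifically.
import re
from pathlib import Path

KEY_PATTERN = re.compile(
    r"(?:api[_-]?key|token)\s*[:=]\s*['\"]?([A-Za-z0-9_\-]{20,})",
    re.IGNORECASE,
)
CONFIG_SUFFIXES = {".env", ".json", ".yaml", ".yml", ".toml", ".cfg"}

def find_suspect_keys(root: str = ".") -> list[tuple[str, str]]:
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in CONFIG_SUFFIXES and path.name != ".env":
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.finditer(text):
            hits.append((str(path), match.group(1)[:6] + "..."))  # log only a prefix
    return hits

if __name__ == "__main__":
    for file_name, key_prefix in find_suspect_keys():
        print(f"Possible exposed credential in {file_name}: {key_prefix}")
```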

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!