Son warns of vast AI leap as SoftBank outlines future risks

SoftBank chief Masayoshi Son told South Korean President Lee Jae Myung that advanced AI could surpass humans by an extreme margin. He suggested future systems may be 10,000 times more capable than people. The remarks came during a meeting in Seoul focused on national AI ambitions.

Son compared the potential intelligence gap to the difference between humans and goldfish. He said AI might relate to humans as humans relate to pets. Lee acknowledged the vision but admitted feeling uneasy about the scale of the described change.

Son argued that superintelligent systems would not threaten humans physically, noting they lack biological needs. He framed coexistence as the likely outcome. His comments followed renewed political interest in positioning South Korea as an AI leader.

The debate turned to cultural capability when Lee asked whether AI might win the Nobel Prize in Literature. Son said such an achievement was plausible. He pointed to fast-moving advances that continue to challenge expectations about machine creativity.

Researchers say artificial superintelligence remains theoretical, but early steps toward artificial general intelligence (AGI) may emerge within a decade. Many expect such systems to outperform humans across a wide set of tasks. Policy discussions in South Korea reflect growing urgency around AI governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including the Australian Cyber Security Centre (ACSC), has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.
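One widely cited safeguard is to keep AI advisory rather than autonomous. The sketch below is a minimal illustration in Python, with entirely hypothetical names and limits rather than anything drawn from the NSA guidance: AI-proposed control changes are checked against an engineered safety envelope, and anything outside it is escalated to a human operator instead of being applied.

```python
# Minimal sketch of a human-in-the-loop gate for AI-proposed OT control
# actions. All names and limits are hypothetical; real deployments rely
# on engineered safety systems, not application-level checks like this.

from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyEnvelope:
    """Hard operating limits set by engineers, independent of the AI."""
    min_value: float
    max_value: float
    max_step: float  # largest change permitted in a single adjustment


def gate_proposal(current: float, proposed: float, env: SafetyEnvelope) -> bool:
    """Accept an AI-proposed setpoint only if it stays inside the
    engineered envelope; anything else goes to a human operator."""
    within_limits = env.min_value <= proposed <= env.max_value
    small_enough = abs(proposed - current) <= env.max_step
    return within_limits and small_enough


# Example: an AI assistant suggests a large jump in a pump setpoint.
envelope = SafetyEnvelope(min_value=10.0, max_value=80.0, max_step=5.0)
if gate_proposal(current=50.0, proposed=72.0, env=envelope):
    print("Change applied automatically")
else:
    # This branch runs: the 22-unit jump exceeds max_step.
    print("Change rejected and escalated to a human operator")
```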

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LLM shortcomings highlighted by Gary Marcus during industry debate

Gary Marcus argued at Axios’ AI+ Summit that large language models (LLMs) offer utility but fall short of the transformative claims made by their developers. He cast them as groundwork at best for future artificial general intelligence, suggesting that meaningful capability shifts lie beyond today’s systems.

Marcus said alignment challenges stem from LLMs lacking robust world models and reliable constraints. He noted that models still hallucinate despite explicit instructions to avoid errors. He described current systems as an early rehearsal rather than a route to AGI.

Concerns raised included bias, misinformation, environmental impact and implications for education. Marcus also warned about the decline of online information quality as automated content spreads. He believes structural flaws make these issues persistent.

Industry momentum remains strong despite unresolved risks. Developers continue to push forward without clear explanations for model behaviour. Investment flows remain focused on the promise of AGI, despite timelines consistently shifting.

Strategic competition adds pressure, with the United States seeking to maintain an edge over China in advanced AI. Political signals reinforce the drive toward rapid development. Marcus argued that stronger frameworks are needed before systems scale further.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ESMA could gain direct supervision over crypto firms

The European Commission has proposed giving the European Securities and Markets Authority (ESMA) expanded powers to oversee crypto and broader financial markets, aiming to close the regulatory gap with the United States.

The plan would give ESMA direct supervision of crypto service providers, trading venues, and central counterparties, while boosting its role in asset management coordination. Approval from the European Parliament and the Council is still required.

Calls for stronger oversight have grown following concerns over lenient national regimes, including Malta’s crypto licensing system. France, Austria, and Italy have called for ESMA to directly oversee major crypto firms, with France threatening to block cross-border licence passporting.

Revisions to the Markets in Crypto-Assets Regulation (MiCA) are also under discussion, with proposals for stricter rules on offshore crypto activities, improved cybersecurity oversight, and tighter regulations for token offerings.

Experts warn that centralising ESMA supervision may slow innovation, especially for smaller crypto and fintech startups reliant on national regulators. ESMA would need significant resources for the expanded mandate, which could slow decision-making across the EU.

The proposal aims to boost EU capital market competitiveness and increase wealth for citizens. The market capitalisation of EU stock exchanges stands at just 73% of the bloc’s GDP, compared with roughly 270% in the US, highlighting the need for a more integrated regulatory framework.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU partners with EIB to support AI gigafactories

The European Commission and the European Investment Bank Group (EIB) have signed a memorandum of understanding to support the development of AI Gigafactories across the EU. The partnership aims to position Europe as a leading AI hub by accelerating financing and the construction of large-scale AI facilities.

The agreement establishes a framework to guide consortia responding to the Commission’s informal Call for Expression of Interest. EIB advisory support will help turn proposals into bankable projects for the 2026 AI Gigafactory call, with possible co-financing.

The initiative builds on InvestAI, announced in February 2025, which mobilises €20 billion to support up to five AI Gigafactories. These facilities will boost Europe’s computing infrastructure, reinforce technological sovereignty, and drive innovation across the continent.

By translating Europe’s AI ambitions into concrete, large-scale projects, the Commission and the EIB aim to position the EU as a global leader in next-generation AI, while fostering investment and industrial growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Resilience Act signals a major shift in EU product security

EU regulators are preparing to enforce the Cyber Resilience Act, setting core security requirements for digital products in the European market. The law spans software, hardware, and firmware, establishing shared expectations for secure development and maintenance.

The scope covers apps, embedded systems, and cloud-linked features. Risk classes run from default to critical, determining whether firms may self-assess or must undergo third-party checks. Any product placed on the EU market from December 2027 must comply with the regulation.

Obligations apply to manufacturers, importers, distributors, and developers. Duties include secure-by-design practices, documented risk analysis, disclosure procedures, and long-term support. Firms must notify ENISA within 24 hours of active exploitation and provide follow-up reports on a strict timeline.

Compliance requires technical files covering threat assessments, update plans, and software bills of materials (SBOMs). High-risk categories demand third-party evaluation, while lower-risk segments may rely on internal checks. Existing certifications help but cannot replace CRA-specific conformity work.
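To make the documentation duty concrete, the sketch below shows roughly what a minimal SBOM looks like, expressed here in the open CycloneDX JSON format and generated with Python. The product and component entries are hypothetical, and a real CRA technical file would carry far more detail, such as suppliers, hashes, and licences.

```python
# Minimal sketch of a machine-readable SBOM of the kind a CRA technical
# file would include, using the open CycloneDX JSON format. The product
# and component entries below are hypothetical examples.

import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        # The product this SBOM describes.
        "component": {
            "type": "application",
            "name": "example-firmware",
            "version": "2.4.1",
        }
    },
    "components": [
        # Third-party software shipped inside the product.
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
        {
            "type": "library",
            "name": "zlib",
            "version": "1.3.1",
            "purl": "pkg:generic/zlib@1.3.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```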

Non-compliance risks fines, market restrictions, and reputational damage. Organisations preparing early are urged to classify products, run gap assessments, build structured roadmaps, and align development cycles with CRA guidance. EU authorities plan to provide templates and support as firms transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FCA launches AI Live Testing for UK financial firms

The UK’s Financial Conduct Authority (FCA) has launched an AI Live Testing initiative to help firms safely deploy AI in financial markets. Major companies, including NatWest, Monzo, Santander, Scottish Widows, Gain Credit, Homeprotect, and Snorkl, are participating in the first cohort.

Firms receive tailored guidance from the FCA and its technical partner, Advai, to develop and assess AI applications responsibly.

The testing focuses on retail financial services, exploring uses such as debt resolution, financial advice, customer engagement, streamlined complaints handling, and support for smarter spending and saving decisions.

The project aims to answer key questions around evaluation frameworks, governance, live monitoring, and risk management to protect both consumers and markets.

Jessica Rusu, FCA chief data officer, said the initiative helps firms use AI safely while guiding the FCA on its impact in UK financial services. The project complements the FCA’s Supercharged Sandbox, which supports firms in earlier experimentation phases.

Applications for the second AI Live Testing cohort open in January 2026, with participating firms able to start testing in April. Insights from the initiative will inform FCA AI policy, supporting innovation while ensuring responsible deployment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces questions after ChatGPT surfaces app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools are especially sensitive, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!