Study questions reliability of AI medical guidance

AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.

Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and decide on next steps.

Chatbot users identified their condition about one-third of the time, and only 45 percent selected the correct course of action. Performance matched that of participants relying solely on search engines, despite the AI systems scoring highly on medical licensing benchmarks.

Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.

Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Crypto exchange scrambles after $40bn Bitcoin payout error

South Korea’s second-largest cryptocurrency exchange, Bithumb, is attempting to recover more than $40bn in Bitcoin after a promotional payout error credited customers with Bitcoin rather than Korean won.

The mistake occurred on 6 February during a ‘random box’ event, when prize values were entered in bitcoin rather than in Korean won. Intended rewards totalled 620,000 won for 695 users, yet 620,000 bitcoins were distributed.
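The scale of the error follows directly from the unit mix-up. A back-of-the-envelope check, assuming a bitcoin price of roughly $65,000 (the article does not state the price used, so this figure is an assumption), shows how 620,000 bitcoins reaches the $40bn range:

```python
# Back-of-the-envelope check of the payout error's scale.
# ASSUMED_BTC_PRICE_USD is a hypothetical figure, not from the article.
INTENDED_TOTAL_KRW = 620_000     # intended prize pool, in Korean won (a few hundred dollars)
CREDITED_BTC = 620_000           # bitcoins actually credited due to the unit mix-up
ASSUMED_BTC_PRICE_USD = 65_000   # assumed price per bitcoin

credited_value_usd = CREDITED_BTC * ASSUMED_BTC_PRICE_USD
print(f"${credited_value_usd / 1e9:.1f}bn")  # prints "$40.3bn"
```

The same number entered under the wrong unit thus inflated a prize pool worth a few hundred dollars into tens of billions.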

Only 249 customers opened their boxes, but the credited sums exceeded the exchange’s holdings.

Most balances were reversed through internal ledger corrections. About 13bn won ($9m) remains unrecovered after some users sold or withdrew funds before accounts were frozen. Authorities said 86 customers liquidated roughly 1,788 bitcoins within 35 minutes.

Regulators have opened a full investigation, and lawmakers have scheduled an emergency hearing. Legal uncertainty remains over liability, while the exchange confirmed no hacking was involved and pledged stronger internal controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pakistan pledges major investment in AI by 2030

Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem.

The government said AI education would expand to schools and universities, including in remote regions. Islamabad also plans 1,000 fully funded PhD scholarships in AI to strengthen research capacity.

Sharif said Pakistan would train one million non-IT professionals in AI skills by 2030, with agriculture, mining and industry identified as priority sectors for AI-driven productivity gains.

Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials in Islamabad said Indus AI Week marks an early step towards broader adoption of AI across Pakistan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity. Shadow AI increasingly appears inside approved platforms, browser extensions and developer tools, expanding the risks well beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining what data may safely be entered into prompts, and offering sanctioned alternatives. Logging, least-privilege access and approval steps become essential as AI agents act across workflows.

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Super Bowl 2026 ads embrace the power of AI

AI dominated the 2026 Super Bowl advertising landscape as brands relied on advanced models instead of traditional high-budget productions.

Many spots showcased AI as both the creative engine behind the visuals and the featured product, signalling a shift toward technology-centred storytelling during the most expensive broadcast event of the year.

Svedka pursued a provocative strategy by presenting a largely AI-generated commercial starring its robot pair, a choice that reignited arguments over whether generative tools could displace human creatives.

Anthropic went in a different direction by using humour to mock OpenAI’s plan to introduce advertisements to ChatGPT, a jab that led to a pointed response from Sam Altman and fuelled an online dispute.

Meta, Amazon and Google used their airtime to promote their latest consumer offerings, with Meta focusing on AI-assisted glasses for extreme activities and Amazon unveiling Alexa+, framed through a satirical performance by Chris Hemsworth about fears of malfunctioning assistants.

Google leaned toward practical design applications instead of spectacle, demonstrating its Nano Banana Pro system transforming bare rooms into personalised images.

Other companies emphasised service automation, from Ring’s AI tool for locating missing pets to Ramp, Rippling and Wix, which showcased platforms designed to ease administrative work and simplify creative tasks.

Hims & Hers adopted a more social approach by highlighting the unequal nature of healthcare access and promoting its AI-driven MedMatch feature.

The variety of tones across the adverts underscored how brands increasingly depend on AI to stand out, either through spectacle or through commentary on the technology’s expanding cultural power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek and ChatGPT services draw penalties in China

China’s market regulator has fined several companies for impersonating AI services such as DeepSeek and OpenAI’s ChatGPT, citing unfair competition and consumer fraud. The cases form part of a broader crackdown on deceptive practices in the country’s rapidly expanding AI sector.

The State Administration for Market Regulation penalised Shanghai Shangyun Internet Technology for running a fraudulent ChatGPT service on Tencent’s WeChat platform. Regulators said the service falsely presented itself as an official Chinese version of ChatGPT and charged users for AI conversations.

In a separate case, Hangzhou Boheng Culture Media was fined for operating an unauthorised website offering so-called ‘DeepSeek local deployment’. The site closely replicated DeepSeek’s branding and interface, misleading users into paying for imitation services.

Authorities said knock-off DeepSeek mini-programmes and websites surged in early 2025, involving trademark infringement, brand confusion, and false advertising. Regulators described the enforcement actions as a deterrent aimed at restoring order in the AI marketplace.

The regulator also disclosed penalties in other AI-related cases, including unauthorised access to proprietary algorithms and the use of AI calling software for scams. China is simultaneously updating antitrust rules to address emerging risks linked to algorithm-driven market manipulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens cyber defence after attack on Commission mobile systems

A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.

The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no compromise of mobile devices was detected.

The Commission also plans a full review of the incident to reinforce the resilience of its internal systems.

Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.

CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.

These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.

Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.

Together, they aim to ensure collective readiness against large-scale cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Czechia weighs under-15 social media ban as government debate intensifies

A ban on social media use for under-15s is being weighed in Czechia, with government officials suggesting the measure could be introduced before the end of the year.

Prime Minister Andrej Babiš has voiced strong support, arguing that experts point to potential harm linked to early social media exposure.

France recently enacted an under-15 restriction, and a growing number of European countries are exploring similar limits rather than relying solely on parental guidance.

The discussion is part of a broader debate about children’s digital habits, with Czech officials also considering a ban on mobile phones in schools. Slovakia has already adopted comparable rules, giving Czech ministers another model to study as they work on their own proposals.

Not all political voices agree on the direction of travel. Some warn that strict limits could undermine privacy rights or diminish online anonymity, while others argue that educational initiatives would be more effective than outright prohibition.

UNICEF has cautioned that removing access entirely may harm children who rely on online platforms, rather than traditional offline networks, for learning or social connection.

Implementing a nationwide age restriction poses practical and political challenges. The Czech government itself relies heavily on social media to reach citizens, complicating attempts to restrict access for younger users.

Age verification, fair oversight and consistent enforcement remain open questions as ministers continue consultations with experts and service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin cryptography safe as quantum threat remains distant

Quantum computing concerns around Bitcoin have resurfaced, yet analysis from CoinShares indicates the threat remains long-term. The report argues that quantum risk is an engineering challenge that gives Bitcoin ample time to adapt.

Bitcoin’s security relies on elliptic-curve cryptography. A sufficiently advanced quantum machine could, in theory, derive private keys using Shor’s algorithm, but such an attack would require millions of stable, error-corrected qubits and remains far beyond current capability.

Network exposure is also limited. Roughly 1.6 million BTC is held in legacy addresses with visible public keys, yet only about 10,200 BTC is realistically targetable. Modern address formats further reduce the feasibility of attacks.
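Putting the two figures quoted above side by side shows how small the realistic exposure is relative to the coins with visible public keys; this is a simple ratio of the reported numbers, not new analysis:

```python
# Share of exposed legacy coins the report deems realistically targetable,
# using the figures quoted in the CoinShares analysis above.
legacy_exposed_btc = 1_600_000   # BTC in legacy addresses with visible public keys
targetable_btc = 10_200          # BTC considered realistically targetable
share = targetable_btc / legacy_exposed_btc
print(f"{share:.2%}")  # prints "0.64%"
```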

Debate continues over post-quantum upgrades, with researchers warning that premature changes could introduce new vulnerabilities. Market impact, for now, is viewed as minimal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenClaw faces rising security pushback in South Korea

Major technology companies in South Korea are tightening restrictions on OpenClaw after rising concerns about security and data privacy.

Kakao, Naver and Karrot Market have moved to block the open-source agent within corporate networks, signalling a broader effort to prevent sensitive information from leaking into external systems.

Their decisions follow growing unease about autonomous tools interacting with confidential material outside controlled platforms.

OpenClaw serves as a self-hosted agent that performs actions on behalf of a large language model, acting as the hands of a system that can browse the web, edit files and run commands.

Its ability to run directly on local machines has driven rapid adoption, but it has also raised concerns that confidential data could be exposed or manipulated.
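The pattern described above, a model proposing actions that the host machine then executes, can be sketched in a few lines. This is a hypothetical illustration of the general agent pattern, not OpenClaw's actual code; the model call is mocked, and the function names are invented for the example:

```python
# Hypothetical sketch of a self-hosted agent loop (NOT OpenClaw's code).
# A model proposes a shell command; the agent runs it on the local machine.
import subprocess

def mock_model(task: str) -> str:
    """Stand-in for a large language model that maps a task to a command."""
    return {"list files": "ls"}.get(task, "echo unsupported")

def run_agent(task: str) -> str:
    command = mock_model(task)  # the model decides which action to take
    # The command executes directly on the host: this is the step that
    # raises data-exposure concerns, since it can read local files and
    # reach internal systems with the user's full privileges.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

print(run_agent("list files"))
```

The security worry is visible in the sketch: unlike a cloud chatbot that only returns text, a local agent turns model output into real commands on the user's machine.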

Industry figures argue that companies are acting preemptively to reduce regulatory and operational risks by ensuring that internal materials never feed external training processes.

China has urged organisations to strengthen protections after identifying cases of OpenClaw running with inadequate safeguards.

Security analysts in South Korea warn that the agent’s open-source design and local execution model make it vulnerable to misuse, especially when compared to cloud-based chatbots that operate in more restricted environments.

Wiz researchers recently uncovered flaws in agents linked to OpenClaw that exposed personal information.

Despite the warnings, OpenClaw continues to gain traction among users who value its ability to automate complex tasks, rather than rely on manual workflows.

Some people purchase separate devices solely to run the agent, while an active South Korean community on X has drawn more than 1,800 members who exchange advice and share mitigation strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!