Fake DeepSeek and ChatGPT services draw penalties in China

China’s market regulator has fined several companies for impersonating AI services such as DeepSeek and OpenAI’s ChatGPT, citing unfair competition and consumer fraud. The cases form part of a broader crackdown on deceptive practices in the country’s rapidly expanding AI sector.

The State Administration for Market Regulation penalised Shanghai Shangyun Internet Technology for running a fraudulent ChatGPT service on Tencent’s WeChat platform. Regulators said the service falsely presented itself as an official Chinese version of ChatGPT and charged users for AI conversations.

In a separate case, Hangzhou Boheng Culture Media was fined for operating an unauthorised website offering so-called ‘DeepSeek local deployment’. The site closely replicated DeepSeek’s branding and interface, misleading users into paying for imitation services.

Authorities said knock-off DeepSeek mini-programs and websites surged in early 2025, involving trademark infringement, brand confusion, and false advertising. Regulators described the enforcement actions as a deterrent aimed at restoring order in the AI marketplace.

The regulator also disclosed penalties in other AI-related cases, including unauthorised access to proprietary algorithms and the use of AI calling software for scams. China is simultaneously updating antitrust rules to address emerging risks linked to algorithm-driven market manipulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens cyber defence after attack on Commission mobile systems

A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.

The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no compromise of mobile devices was detected.

The Commission also plans a full review of the incident to reinforce the resilience of its internal systems.

Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.

CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.

These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.

Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.

Together, they aim to ensure collective readiness against large-scale cyber threats.


Czechia weighs under-15 social media ban as government debate intensifies

A ban on social media use for under-15s is being weighed in Czechia, with government officials suggesting the measure could be introduced before the end of the year.

Prime Minister Andrej Babiš has voiced strong support and argues that experts point to potential harm linked to early social media exposure.

France recently enacted an under-15 restriction, and a growing number of European countries are exploring similar limits rather than relying solely on parental guidance.

The discussion is part of a broader debate about children’s digital habits, with Czech officials also considering a ban on mobile phones in schools. Slovakia has already adopted comparable rules, giving Czech ministers another model to study as they work on their own proposals.

Not all political voices agree on the direction of travel. Some warn that strict limits could undermine privacy rights or diminish online anonymity, while others argue that educational initiatives would be more effective than outright prohibition.

UNICEF has cautioned that removing access entirely may harm children who rely on online platforms, rather than traditional offline networks, for learning and social connection.

Implementing a nationwide age restriction poses practical and political challenges. The government of Czechia heavily uses social media to reach citizens, complicating attempts to restrict access for younger users.

Age verification, fair oversight and consistent enforcement remain open questions as ministers continue consultations with experts and service providers.


New York moves toward data centre moratorium as energy fears grow

Lawmakers in New York have proposed a three-year moratorium on permits for new data centres amid pressure to address the strain that large AI facilities place on local communities.

The proposal mirrors similar moves in several other states and reflects rising concern that rapidly expanding infrastructure may raise electricity costs and worsen environmental conditions rather than support balanced development.

Politicians from both major parties have voiced unease about the growing power demand created by data-intensive services. Figures such as Bernie Sanders and Ron DeSantis have warned that unchecked development could drive household bills higher and burden communities.

More than 230 environmental organisations recently urged Congress to consider a national pause to prevent further disruption.

The New York bill, sponsored by Liz Krueger and Anna Kelles, aims to give regulators time to build strict rules before major construction continues. Krueger described the state as unprepared for the scale of facilities seeking entry, arguing that residents should not be left covering future costs.

Supporters say a temporary halt would provide time to design policies that protect consumers rather than encourage unrestrained corporate expansion.

Governor Kathy Hochul recently announced the Energize NY Development initiative, intended to modernise the grid connection process and ensure large energy users contribute fairly.

The scheme would require data centre operators to take on greater financial responsibility as New York reassesses its approach to extensive AI-driven infrastructure.


Lithuania selects Procivis for EU digital ID wallet sandbox

Procivis has been selected to build Lithuania’s European Digital Identity Wallet sandbox, advancing preparations for the EU digital identity rollout. The 12-month initiative will be delivered in partnership with the state Agency for Digital Solutions.

The project will establish a national test environment designed to simulate real-world digital identity scenarios. Built on Procivis One, the platform meets eIDAS 2.0 requirements and will validate the wallet infrastructure before EU deployment.

Testing will cover use cases for citizens, public institutions, and private-sector relying parties. Cross-border scenarios, including access to public and travel-related services, will also be explored to ensure interoperability across EU member states.

The sandbox will contribute to Lithuania’s readiness for the 2026 eIDAS 2.0 deadline while supporting broader participation in the EU Large Scale Pilot programmes focused on digital identity innovation.


Bitcoin cryptography safe as quantum threat remains distant

Quantum computing concerns around Bitcoin have resurfaced, yet analysis from CoinShares indicates the threat remains long-term. The report argues that quantum risk is an engineering challenge that gives Bitcoin ample time to adapt.

Bitcoin’s security relies on elliptic-curve cryptography. A sufficiently advanced quantum machine could, in theory, derive private keys using Shor’s algorithm, but doing so would require millions of stable, error-corrected qubits, far beyond current capability.
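To illustrate the key relationship at stake, here is a toy elliptic-curve sketch. It is purely illustrative: real Bitcoin keys live on secp256k1 over a 256-bit prime field, whereas this uses a standard 17-element textbook curve. Deriving the public key from the private scalar is easy; inverting that step classically is infeasible, and Shor's algorithm is the quantum route around it.

```python
# Toy elliptic-curve key derivation -- illustrative only, NOT Bitcoin's curve.
# Curve: y^2 = x^3 + 2x + 2 over F_17 (a common textbook example).

P, A = 17, 2                # field prime and curve coefficient a
G = (5, 1)                  # generator point on the curve

def point_add(p1, p2):
    """Add two curve points (None represents the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Compute k * point by double-and-add: easy forward, hard to invert."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

private_key = 7                            # secret scalar
public_key = scalar_mult(private_key, G)   # cheap to compute from the secret
print(public_key)                          # → (0, 6)
```

Recovering `private_key` from `public_key` is the elliptic-curve discrete logarithm problem; at real key sizes it is the step only a large fault-tolerant quantum computer could make tractable.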

Network exposure is also limited. Roughly 1.6 million BTC is held in legacy addresses with visible public keys, yet only about 10,200 BTC is realistically targetable. Modern address formats further reduce the feasibility of attacks.

Debate continues over post-quantum upgrades, with researchers warning that premature changes could introduce new vulnerabilities. Market impact, for now, is viewed as minimal.


OpenClaw faces rising security pushback in South Korea

Major technology companies in South Korea are tightening restrictions on OpenClaw after rising concerns about security and data privacy.

Kakao, Naver and Karrot Market have moved to block the open-source agent within corporate networks, signalling a broader effort to prevent sensitive information from leaking into external systems.

Their decisions follow growing unease about how autonomous tools may interact with confidential material rather than keeping it contained within controlled platforms.

OpenClaw serves as a self-hosted agent that performs actions on behalf of a large language model, acting as the hands of a system that can browse the web, edit files and run commands.

Its ability to run directly on local machines has driven rapid adoption, but it has also raised concerns that confidential data could be exposed or manipulated.
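The agent pattern described above can be sketched in a few lines. This is a hypothetical minimal loop, not OpenClaw's actual code or API: a model proposes an action, the host executes it on the local machine, and the result is fed back. The `run` step, executing arbitrary commands locally, is precisely the capability that worries corporate security teams.

```python
# Hypothetical minimal agent loop -- a sketch of the pattern, not OpenClaw.
import subprocess

def fake_model(history):
    """Stand-in for an LLM call: returns the next action as (tool, arg)."""
    if not history:
        return ("run", "echo hello from the agent")
    return ("done", None)          # stop after one command in this sketch

def agent_loop(max_steps=5):
    history = []
    for _ in range(max_steps):
        tool, arg = fake_model(history)
        if tool == "done":
            break
        if tool == "run":
            # Executes a shell command on the local machine with the user's
            # privileges -- the source of the data-leak concerns above.
            out = subprocess.run(arg, shell=True, capture_output=True, text=True)
            history.append((arg, out.stdout.strip()))
    return history

print(agent_loop())
```

A real agent would swap `fake_model` for a language-model API call and add more tools (file edits, web browsing), which is why enterprises confine such agents to sandboxed or blocked environments.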

Industry figures argue that companies are acting preemptively to reduce regulatory and operational risks by ensuring that internal materials never feed external training processes.

China has urged organisations to strengthen protections after identifying cases of OpenClaw running with inadequate safeguards.

Security analysts in South Korea warn that the agent’s open-source design and local execution model make it vulnerable to misuse, especially when compared to cloud-based chatbots that operate in more restricted environments.

Wiz researchers recently uncovered flaws in agents linked to OpenClaw that exposed personal information.

Despite the warnings, OpenClaw continues to gain traction among users who value its ability to automate complex tasks, rather than rely on manual workflows.

Some people purchase separate devices solely to run the agent, while an active South Korean community on X has drawn more than 1,800 members who exchange advice and share mitigation strategies.


Yuan-pegged stablecoins face new restrictions under China policy

Chinese regulators have tightened controls on digital assets by banning the unauthorised issuance of yuan-pegged stablecoins overseas. The move extends existing restrictions to tokenised financial products linked to China’s currency and reinforces state control over monetary instruments.

In a joint notice, the People’s Bank of China and seven other agencies said no domestic or foreign entity may issue renminbi-linked stablecoins without approval. Authorities warned that such tokens replicate core monetary functions and could undermine currency sovereignty.

The rules also cover blockchain-based representations of real-world assets, including tokenised bonds and equities. Overseas providers are prohibited from offering these services to users in China without regulatory permission.

Beijing reaffirmed that cryptocurrencies such as Bitcoin and Ether have no legal tender status. Facilitating payments or related services using such assets remains illegal under China’s financial laws.

The measures align with China’s broader strategy of restricting private digital currencies while advancing the state-backed digital yuan. Officials have recently expanded the e-CNY’s role by allowing interest payments to encourage wider adoption.


AI assistants drive a powerful shift in modern work

AI assistants have become a standard feature of modern working life, increasingly used across business, education, and government for writing, analysis, research, and learning tasks. Their widespread adoption reflects a broader shift in how digital tools support productivity and knowledge work.

As their use expands, AI literacy is emerging as a key professional competence. Understanding how to work effectively with AI assistants is becoming essential for workforce readiness, skills development, and long-term employability.

The growing reliance on AI assistants also raises important questions around responsibility and oversight. While these tools can significantly improve efficiency, they generate content rather than verified facts, making human judgment, accountability, and fact-checking indispensable.

Understanding how AI assistants function is therefore critical. Built on large language models, they predict language patterns rather than think or reason like humans. This technical reality underpins discussions on transparency, reliability, and appropriate use in professional contexts.
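The "predicting language patterns" point can be made concrete with a deliberately tiny example. The bigram model below simply returns the most frequent next word seen in its training text; real assistants use neural networks over vast corpora, but the underlying task, predicting a plausible next token rather than verifying a fact, is the same, which is why fact-checking remains a human responsibility.

```python
# Tiny bigram next-word predictor -- a toy stand-in for how language models
# learn statistical patterns of text rather than facts.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = ("the model predicts the next word "
          "the model generates text the model predicts patterns")
model = train_bigrams(corpus)
print(predict_next(model, "model"))   # → predicts
```

The prediction is whatever pattern dominated the training data, true or not, which is the essential caveat behind calls for human oversight.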

In parallel, AI assistants are moving from standalone chatbots into embedded features within workplace software, including documents, spreadsheets, and collaboration platforms. This shift strengthens their role as in-context work tools, while also increasing the need for clear organisational guidelines on their use.

The AI assistant ecosystem is also expanding globally, with platforms offering different approaches to privacy, integration, and governance. This diversity gives users more choice but complicates alignment across regulatory and organisational environments.


Agentic AI drives structural change in customer care

Customer care is undergoing structural change as agentic AI moves from experimental pilots to large-scale deployment. Advances in AI capabilities, combined with growing organisational readiness, are enabling companies to integrate AI systems directly into core customer service operations, particularly in call centres.

The increasing use of agentic AI is elevating customer care to a strategic management issue. Senior leadership, including CEOs, is paying closer attention to customer operations as a source of resilience, efficiency, and competitive differentiation, rather than viewing it solely as a support function.

At the same time, a growing divide is emerging between organisations that can scale AI effectively and those that remain at an early stage of adoption. AI leaders are investing in internal capabilities, governance structures, and workforce readiness, allowing them to deploy AI consistently across customer interactions.

Agentic AI is increasingly shaping end-to-end customer care models. Instead of being used for isolated automation tasks, AI systems are becoming the coordinating layer for customer service, managing interactions across channels and supporting more complex service processes.

Automation levels in customer care are rising rapidly. Some organisations are automating a majority of customer contacts, driven by improvements in natural language processing, decision-making, and integration with enterprise systems. This trend is changing how customer demand is managed at scale.

Human roles in customer care are evolving alongside automation. AI tools are being used to support agents in decision-making, reduce handling time, and improve service consistency. As a result, human agents are increasingly focused on cases requiring judgement, empathy, and contextual understanding.

Despite the rapid adoption of AI, customer satisfaction remains the primary objective. Efficiency gains, cost reduction, and revenue growth are important outcomes, but they are increasingly assessed based on their impact on customer experience and service quality.
