OpenClaw faces rising security pushback in South Korea

Major technology companies in South Korea are tightening restrictions on OpenClaw after rising concerns about security and data privacy.

Kakao, Naver and Karrot Market have moved to block the open-source agent within corporate networks, signalling a broader effort to prevent sensitive information from leaking into external systems.

Their decisions follow growing unease about how autonomous tools handle confidential material once it leaves controlled platforms.

OpenClaw serves as a self-hosted agent that performs actions on behalf of a large language model, acting as the hands of a system that can browse the web, edit files and run commands.

Its ability to run directly on local machines has driven rapid adoption, but it has also raised concerns that confidential data could be exposed or manipulated.

Industry figures argue that companies are acting preemptively to reduce regulatory and operational risks by ensuring that internal materials never feed external training processes.

China has urged organisations to strengthen protections after identifying cases of OpenClaw running with inadequate safeguards.

Security analysts in South Korea warn that the agent’s open-source design and local execution model make it vulnerable to misuse, especially when compared to cloud-based chatbots that operate in more restricted environments.

Wiz researchers recently uncovered flaws in agents linked to OpenClaw that exposed personal information.

Despite the warnings, OpenClaw continues to gain traction among users who value its ability to automate complex tasks, rather than rely on manual workflows.

Some people purchase separate devices solely to run the agent, while an active South Korean community on X has drawn more than 1,800 members who exchange advice and share mitigation strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart policing project halted by Greek data protection authority

Greece’s data protection authority has warned against activating an innovative policing system planned by the Hellenic Police. The ruling said biometric identity checks carried out on the street would breach data protection law in Greece.

The system would allow police patrols in Greece to use portable devices to scan fingerprints and facial images during spot checks. Regulators said Greek law lacks a clear legal basis for such biometric processing.

The authority said existing rules cited by the Hellenic Police only apply to suspects or detainees and do not cover modern biometric technologies. Greece, therefore, faces unlawful processing risks if the system enters full operation.

The smart policing project in Greece received around four million euros in EU funding and has drawn backlash in the past. Regulators said deployment must wait until new legislation explicitly authorises police to use biometrics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sainsbury’s ejects shopper after facial recognition misidentification

A data professional, Warren Rajah, was escorted out of a Sainsbury’s supermarket in south London after staff incorrectly believed he matched an offender flagged by Facewatch facial recognition technology.

Facewatch later confirmed that there were no alerts or records associated with him, and Sainsbury’s attributed the incident to human error rather than a software fault.

Rajah described the experience as humiliating and ‘Orwellian’, criticising the lack of explanation, absence of a transparent appeals process, and the requirement to submit personal identification to a third party to prove he was not flagged.

He expressed particular concern about the impact such incidents could have on vulnerable customers.

The case highlights broader debates around the deployment of facial recognition in retail, where companies cite reductions in theft and abuse, while civil liberties groups warn of misidentification, insufficient staff training and the normalisation of privatised biometric surveillance.

UK regulators have reiterated that retailers must assess misidentification risks and ensure robust safeguards when processing biometric data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Cyber Startup Programme unveiled as Infosecurity Europe boosts early innovation

Infosecurity Europe has launched a new Cyber Startup Programme to support early-stage cybersecurity innovation and strengthen ecosystem resilience. The initiative will debut at Infosecurity Europe 2026, offering founders and investors a dedicated experience focused on emerging technologies and growth.

The programme centres on a new Cyber Startups Zone, an exhibition area showcasing young companies and novel security solutions. Founders will gain industry visibility, along with tailored ticket access and curated networking.

Delivery will take place in partnership with UK Cyber Flywheel, featuring a dedicated founder- and investor-focused day on Tuesday 2 June. Sessions will cover scaling strategies, go-to-market planning, funding, and live pitching opportunities.

Infosecurity Europe will also introduce the Cyber Startup Award 2026, recognising early-stage firms with live products and growth potential. Finalists will pitch on stage, with winners receiving exhibition space, PR support, and a future-brand workshop.

Alongside the programme, the Cyber Innovation Zone, delivered with the UK Department for Science, Innovation and Technology, will spotlight innovative UK cybersecurity businesses and emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Social engineering breach exposes 1.4 million Betterment customer records

Betterment has confirmed a data breach affecting around 1.4 million customers after a January 2026 social engineering attack on a third-party platform. Attackers used the access to send fraudulent crypto scam messages posing as official promotions.

The breach occurred after an employee was tricked into sharing login credentials, giving attackers unauthorised access to internal messaging systems rather than core investment infrastructure. The fraudulent messages promised to multiply cryptocurrency deposits sent to external wallets.

Subsequent forensic analysis and breach monitoring services confirmed that more than 1.4 million unique records were exposed. Betterment said investment accounts and login credentials were not compromised during the incident.

Exposed information included names, email addresses, phone numbers, physical addresses, dates of birth, job titles, location data, and device metadata. Security experts warn that such datasets can enable targeted phishing, identity fraud, and follow-on social engineering campaigns.

Betterment revoked access the same day, notified customers, and launched an external investigation. The breach was formally added to public exposure databases in early February, highlighting the growing risk of human-focused attacks against financial platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI transforms finance systems

Organisations undergoing finance transformations are discovering that traditional system cutovers rarely go as planned. Hidden manual workarounds and undocumented processes often surface late, creating operational risks and delays during ERP migrations.

Agentic AI is emerging as a solution by deploying autonomous software agents that discover real workflows directly from system data. Scout agents analyse transaction logs to uncover hidden dependencies, allowing companies to build more accurate future systems based on actual operations.

Simulator agents stress-test new systems by continuously generating thousands of realistic transactions. When problems arise, agents analyse errors and automatically recommend fixes, turning testing into a continuous improvement process rather than a one-time checkpoint.

Sentinel agents monitor financial records in real time to detect discrepancies before they escalate into compliance risks. Leaders say the approach shifts focus from single go-live milestones to ongoing resilience, with teams increasingly managing intelligent systems instead of manual processes.
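The sentinel-style discrepancy check described above can be sketched as a simple ledger balance scan. The function, field names and tolerance below are illustrative assumptions for this sketch, not code from any vendor's agentic AI product.

```python
# Minimal sketch of a "sentinel"-style check: scan journal entries for
# debit/credit mismatches before they escalate into compliance issues.

def find_discrepancies(entries, tolerance=0.01):
    """Flag (entry_id, difference) for entries whose debits and credits
    differ by more than the given tolerance."""
    flagged = []
    for entry_id, debit, credit in entries:
        diff = debit - credit
        if abs(diff) > tolerance:
            flagged.append((entry_id, round(diff, 2)))
    return flagged

# Hypothetical ledger records: (entry_id, debit, credit)
ledger = [
    ("JE-001", 100.00, 100.00),   # balanced
    ("JE-002", 250.00, 245.00),   # 5.00 discrepancy
    ("JE-003", 75.50, 75.50),     # balanced
]

print(find_discrepancies(ledger))  # [('JE-002', 5.0)]
```

In a real deployment the scan would run continuously against transaction logs and feed alerts to reviewers, rather than being invoked once over a static list.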

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

User emails and phone numbers leaked in Substack security incident

Substack confirmed a data breach that exposed user email addresses and phone numbers. The company said passwords and financial information were not affected. The incident occurred in October and was later investigated.

Chief executive Chris Best told users the vulnerability was identified in February and has since been fixed, with an internal investigation now underway. The company has not disclosed the technical cause of the breach or why the intrusion went undetected for several months.

Substack also did not confirm how many users were affected or provide evidence showing whether the exposed data has been misused. Users were advised to remain cautious about unexpected emails and text messages following the incident.

The breach was first reported by TechCrunch, which said the company declined to provide further operational details. Questions remain around potential ransom demands or broader system access.

Substack reports more than 50 million active subscriptions, including 5 million paid users, and raised $100 million in Series C funding in 2025, led by BOND and The Chernin Group, with participation from Andreessen Horowitz and other investors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

TikTok accused of breaching EU digital safety rules

The European Commission has concluded that TikTok’s design breaches the Digital Services Act by encouraging compulsive use and failing to protect users, particularly children and teenagers.

Preliminary findings say the platform relies heavily on features such as infinite scroll, which automatically delivers new videos and makes disengagement difficult.

Regulators argue that such mechanisms place users into habitual patterns of repeated viewing rather than supporting conscious choice. EU officials found that safeguards introduced by TikTok do not adequately reduce the risks linked to excessive screen time.

Daily screen time limits were described as ineffective because alerts are easy to dismiss, even for younger users who receive automatic restrictions. Parental control tools were also criticised for requiring significant effort, technical knowledge and ongoing involvement from parents.

Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy, said addictive social media design can harm the development of young people. European law, she said, makes platforms responsible for the effects their services have on users.

Regulators concluded that compliance with the Digital Services Act would require TikTok to alter core elements of its product, including changes to infinite scroll, recommendation systems and screen break features.

TikTok rejected the findings, calling them inaccurate and saying the company would challenge the assessment. The platform argues that it already offers a range of tools, including sleep reminders and wellbeing features, to help users manage their time.

The investigation remains ongoing and no penalties have yet been imposed. A final decision could still result in enforcement measures, including fines of up to six per cent of TikTok’s global annual turnover.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Slovenia plans social media ban for children under 15

Slovenia is moving towards banning access to social media platforms for children under the age of 15, joining several other countries, as the government prepares draft legislation aimed at protecting minors online.

Deputy Prime Minister Matej Arčon said the Education Ministry initiated the proposal, which would be developed with input from professionals.

The planned law would apply to major social networks where user-generated content is shared, including TikTok, Snapchat and Instagram. Arčon said the initiative reflects growing international concern over the impact of social media on children’s mental health, privacy and exposure to addictive design features.

Slovenia’s move follows similar debates and proposals across Europe and beyond. Countries such as Italy, France, Spain, the UK, Greece and Austria have considered restrictions, while Australia has already introduced a nationwide minimum age for social media use.

Spain’s prime minister recently defended proposed limits, arguing that technology companies should not influence democratic decision-making.

Critics of such bans warn of potential unintended consequences. Telegram founder Pavel Durov has argued that age-based restrictions could lead to broader data collection and increased state control over online content.

Despite these concerns, Slovenia’s government appears determined to proceed, positioning the measure as part of a broader effort to strengthen child protection in the digital space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate intensified over the AI simplification package now moving through Parliament and the Council, rather than being settled in earlier negotiations.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name nudification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!