Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. LLMs could match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapidly advancing capabilities demand stronger safeguards to protect privacy online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality 2026–2030 Strategy noted that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have stressed ongoing cooperation with tech companies and upcoming guidelines to prioritise flagged content from independent organisations to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.


Data breaches push South Korea toward stricter corporate liability rules

South Korea’s government and ruling party are advancing a second revision of the Personal Information Protection Act to strengthen corporate liability for large-scale data breaches.

The proposed amendment would make it easier for victims of major data breaches to receive compensation and relief. By removing the requirement for victims to prove a company’s ‘intent or negligence’, the amendment would increase companies’ legal liability when user data is compromised, making it more likely that affected individuals can claim damages.

Momentum for stricter rules follows several high-profile incidents, including a recent Coupang data breach that may have exposed personal information linked to numerous user accounts. The case has intensified scrutiny of how firms handle and protect customer data.

Officials at South Korea’s Personal Information Protection Commission (PIPC) say victims often struggle to obtain evidence explaining how data breaches occur or how damages arise. The proposed reform would shift a greater evidentiary burden onto companies in disputes over losses.

The amendment would also introduce criminal penalties for anyone who knowingly obtains or distributes leaked personal data, closing a legal gap that currently applies only to employees who unlawfully disclose information. Authorities would gain powers to issue emergency protective orders to limit the spread of compromised data.


Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing its new Age-Restricted Material Codes, which require online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to $49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.


Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers exposed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once access to corporate single sign-on systems, which allow users to log in to multiple internal applications with a single set of credentials, is obtained, attackers can move across multiple internal platforms, often including services linked to major providers such as Microsoft and Google.
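The risk is easy to see in a toy model of single sign-on, where one identity provider vouches for a user to every connected service. Everything in this sketch (the class names, the token scheme, the service list) is invented for illustration; it shows the pattern, not any company's actual system:

```python
# Toy model of single sign-on (SSO): one identity provider issues a session
# token that every connected service trusts. All names are illustrative.
import secrets

class IdentityProvider:
    def __init__(self) -> None:
        self._sessions: set[str] = set()

    def log_in(self, username: str, password: str) -> str:
        # A real IdP would verify credentials (and ideally a second factor);
        # here any login succeeds, standing in for a successful phish.
        token = secrets.token_hex(16)
        self._sessions.add(token)
        return token

    def validate(self, token: str) -> bool:
        return token in self._sessions

class InternalService:
    def __init__(self, name: str, idp: IdentityProvider) -> None:
        self.name, self.idp = name, idp

    def access(self, token: str) -> bool:
        # Each service delegates authentication to the shared provider.
        return self.idp.validate(token)

idp = IdentityProvider()
services = [InternalService(n, idp) for n in ("mail", "docs", "hr-portal")]

# One phished credential yields a token valid for every connected service.
stolen_token = idp.log_in("victim", "phished-password")
print(all(s.access(stolen_token) for s in services))  # prints True
```

The takeaway is that a single successful phish against an SSO-protected account can fan out across mail, documents, and HR systems at once, which is why phishing-resistant second factors at the identity provider matter more than per-service defences.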

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.


EU Commission’s new guidance to push the Cyber Resilience Act

The EU Commission has opened a public consultation on draft guidance to help companies apply the EU’s Cyber Resilience Act (CRA), a regulation that sets baseline cybersecurity requirements for hardware and software ‘products with digital elements’ to reduce vulnerabilities and improve security throughout a product’s life cycle. The guidance is framed as practical help, especially for microenterprises and SMEs, and the consultation runs until 31 March 2026.

The CRA is designed to make ‘secure by design’ the default for connected products people use every day, from consumer devices to business software, while giving users clearer information about a product’s security properties. In timeline terms, the Act entered into force on 10 December 2024. The incident reporting duties start on 11 September 2026, and the main obligations apply from 11 December 2027, giving industry a runway but also a clear countdown.

What the Commission is trying to nail down now are the parts companies have found hardest to interpret: how the rules apply to remote data processing solutions (cloud-linked features), how they treat free and open-source software, what ‘support periods’ mean in practice (i.e. how long security upkeep is expected), and how the CRA fits alongside other EU laws. In other words, this is less about announcing new rules and more about reducing legal grey zones before enforcement ramps up.

The guidance push also lands amid a broader policy drive: on 20 January 2026, the Commission proposed a new EU cybersecurity package built around a revised Cybersecurity Act and targeted NIS2 amendments. The package aims to harden ICT supply chains, including a framework to jointly identify and mitigate risks across 18 critical sectors, and would enable mandatory ‘de-risking’ of EU mobile telecom networks away from high-risk third-country suppliers. It also proposes a revamped EU cybersecurity certification system with simpler procedures, setting a default 12-month timeline for developing certification schemes, cutting red tape for tens of thousands of firms, and strengthening ENISA’s role with early warnings, ransomware support, and a major budget boost.

Taken together, these moves show the EU shifting from strategy documents to operational detail: product security on one side (CRA) and ecosystem-level resilience on the other (supply chains, certification, incident reporting and supervision). For companies, that can be both reassuring and demanding: clearer guidance should reduce uncertainty, but the compliance reality may still be layered, especially for businesses spanning devices, software, cloud features, and cross-border operations. The Commission’s stakeholder feedback window is essentially a test of whether these rules can be made workable without diluting their bite.

Why does it matter?

Beyond technical risk, this is increasingly about sovereignty: who sets the rules for digital products, who can be trusted in supply chains, and how much dependency is acceptable in critical infrastructure. Digital governance expert Jovan Kurbalija argues that full-‘stack’ digital sovereignty (control over infrastructure, services, data, and AI knowledge) is concentrated in very few states, while most countries must balance openness with autonomy. The EU’s current wave of cybersecurity governance fits that pattern: it is an attempt to turn security standards, certification, and supply-chain choices into a practical form of strategic control, not just to prevent hacks, but to protect democratic institutions, economic competitiveness, and trust in the digital tools people rely on.


Calls grow to strengthen New Zealand privacy law

Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. The debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.

The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.

Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD 10,000.

Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.


New Coruna exploit kit targets iPhones running older iOS versions

The Google Threat Intelligence Group (GTIG) has identified a powerful exploit toolkit, Coruna, that targets Apple iPhones running iOS versions 13.0 to 17.2.1.

The toolkit contains five complete exploit chains and 23 exploits designed to compromise devices using previously unseen techniques and mitigation bypasses.

Parts of the exploit chain were first detected in early 2025, when a client of a commercial surveillance vendor used them. Later investigations revealed the same framework in highly targeted attacks against Ukrainian users linked to a suspected Russian espionage group.

Toward the end of the year, the toolkit resurfaced in large-scale campaigns linked to financially motivated actors operating from China.

Coruna relies on a sophisticated JavaScript framework that identifies iPhone models and their iOS versions before delivering the appropriate WebKit remote code execution exploit and additional bypass techniques.
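That fingerprint-then-deliver logic amounts to a version range check. The sketch below is a hypothetical illustration of the gating step, not Coruna’s actual code; the only detail taken from the report is the affected range, iOS 13.0 through 17.2.1:

```python
# Hypothetical sketch of version gating as reported for the Coruna kit:
# fingerprint the OS version, then decide whether an exploit applies.
# Function names and structure are invented for illustration.

def parse_version(version: str) -> tuple[int, ...]:
    """Convert a dotted version string such as '17.2.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Reported affected range for the toolkit: iOS 13.0 through 17.2.1.
AFFECTED_MIN = parse_version("13.0")
AFFECTED_MAX = parse_version("17.2.1")

def is_affected(ios_version: str) -> bool:
    """Return True if the fingerprinted iOS version falls inside the affected range."""
    return AFFECTED_MIN <= parse_version(ios_version) <= AFFECTED_MAX
```

Tuple comparison places a version like 17.3 above 17.2.1, which mirrors Google’s point that devices updated to the newest iOS release fall outside the kit’s reach.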

Several vulnerabilities exploited by the toolkit had previously been treated as zero-day flaws, highlighting the growing circulation of advanced cyber-attack tools among multiple threat actors.

Google warned that the payload can steal sensitive data, including financial and cryptocurrency wallet information, and allows attackers to deploy additional modules remotely.

The company has added related malicious domains to Safe Browsing and urged users to install the latest iOS updates, noting that the exploit kit does not affect the newest version of Apple’s operating system.


Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’


EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should fall under the Digital Services Act’s most stringent obligations, rather than remaining outside the bloc’s most demanding platform rules.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

User figures above the DSA’s threshold of 45 million monthly users in the EU could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.
