US releases national cyber strategy, prioritising offence and AI

President Donald Trump released his administration’s national cybersecurity strategy, outlining priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, emerging technology leadership, and workforce development. Trump also signed an executive order the same day, directing federal agencies to increase the prosecution of cybercrime and fraud.

The strategy document spans five pages of substantive text, with administration officials describing it as intentionally high-level. The White House stated that more detailed implementation guidance would follow.

The strategy’s six pillars include the following provisions:

Shaping adversary behaviour requires deploying US offensive and defensive cyber capabilities and incentivising private-sector disruption of adversary networks. It also states the administration will “counter the spread of the surveillance state and authoritarian technologies.”

Reforming regulation advocates reducing compliance requirements characterised as ‘costly checklists’ and addresses liability frameworks — a priority also present in the prior administration’s approach.

Modernising federal networks involves adopting post-quantum cryptography, AI, zero-trust architecture, and reducing procurement barriers for technology vendors.

Securing critical infrastructure emphasises supply chain resilience and preference for domestically produced technology, alongside a role for state, local, tribal, and territorial governments.

Sustaining technological superiority focuses primarily on AI, quantum cryptography, data centre security, and privacy protection.

Building cyber talent commits to removing barriers among industry, academia, government, and the military to develop a skilled cybersecurity workforce. This pillar follows a period in which the administration reduced the number of federal cyber positions.

The accompanying executive order directs the attorney general to prioritise cybercrime prosecution, tasks agencies with reviewing tools to counter international criminal organisations, and assigns the Department of Homeland Security expanded training responsibilities. The strategy itself references cybercrime once.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada warns about AI-generated scams targeting citizens online

Authorities in Canada have issued a warning about the growing use of AI in impersonation scams targeting citizens. Fraudsters increasingly deploy advanced tools capable of mimicking politicians, government officials and other public figures with convincing realism.

Deepfake videos, synthetic audio and AI-generated messages allow scammers to create convincing communications that appear to come from trusted authorities.

Such tactics are often used to persuade victims to send money, reveal personal information, install malicious software or engage with fraudulent investment offers.

Officials also warn about fake government websites created with AI-assisted tools that imitate official pages by copying national symbols and similar domain names. Suspicious websites often use unusual web addresses, extra characters, or unfamiliar domain endings to mislead visitors.
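The warning signs described above can be sketched as a simple heuristic. This is an illustrative toy check, not the authorities' actual detection logic; the trusted-domain list and the set of "familiar" domain endings are hypothetical examples.

```python
# Toy heuristic for spotting lookalike domains: flags unfamiliar domain
# endings and trusted names with extra characters inserted.
# TRUSTED and COMMON_TLDS are invented examples, not official lists.
TRUSTED = {"canada.ca", "antifraudcentre-centreantifraude.ca"}
COMMON_TLDS = {"ca", "com", "org", "net", "gov"}

def looks_suspicious(domain: str) -> bool:
    domain = domain.lower().strip()
    if domain in TRUSTED:
        return False
    tld = domain.rsplit(".", 1)[-1]
    if tld not in COMMON_TLDS:
        return True  # unfamiliar domain ending
    # a trusted name embedded in a longer domain, e.g. "canada-gov-support.ca"
    return any(t.split(".")[0] in domain for t in TRUSTED)

print(looks_suspicious("canada.ca"))            # the genuine domain
print(looks_suspicious("canada-refund.xyz"))    # unfamiliar ending
print(looks_suspicious("canada-gov-support.ca"))  # extra characters
```

A real checker would rely on curated allowlists and the Public Suffix List rather than substring matching, but the principle is the same: verify the exact address, not its resemblance to an official one.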

Authorities advise Canadians to verify unexpected messages through official channels rather than clicking links or responding immediately.

Suspected impersonation attempts should be reported to the Competition Bureau or the Canadian Anti-Fraud Centre.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch intelligence warns about phishing attacks on Signal and WhatsApp

A large-scale cyber campaign linked to state hackers is targeting accounts on the messaging platforms Signal and WhatsApp.

Intelligence services warn that phishing attacks aim to gain access to communications belonging to diplomats, military personnel and government officials.

The warning was issued by the Dutch intelligence agencies, the General Intelligence and Security Service (AIVD) and the Military Intelligence and Security Service (MIVD), which confirmed that several government employees in the Netherlands have already been targeted during the campaign.

Security officials believe the operation forms part of a broader intelligence effort focused on individuals considered valuable to foreign state interests.

Journalists and other public figures may also be potential targets as attackers attempt to monitor sensitive conversations or gather confidential information.

Authorities advise users to remain cautious when receiving unexpected messages or login requests on encrypted messaging platforms.

Phishing attempts designed to capture account credentials remain one of the most effective methods used in cyberespionage campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Blockchain and AI security central to US cyber framework

The US National Cyber Strategy emphasises support for emerging technologies, including blockchain, cryptocurrencies, AI, and post-quantum cryptography. The strategy highlights the importance of securing digital infrastructure while advancing technological leadership.

The strategy rests on six pillars, including modernising federal networks, protecting critical infrastructure, and advancing secure technology. Specific sections reference cryptocurrencies and blockchain, noting the need to safeguard digital systems from design to deployment.

Financial systems, data centres, and telecommunications networks are identified as key components of the broader cybersecurity framework. The strategy also stresses collaboration with private-sector technology companies and research institutions to foster innovation and strengthen protections.

AI plays a central role, with measures to secure AI data centres and deploy AI-driven tools for network defence. The plan avoids direct crypto rules but signals greater integration of blockchain and cryptography into national digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. LLMs could match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.
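The core idea behind matching accounts by their posts can be illustrated with a deliberately simple sketch: comparing word-frequency profiles with cosine similarity. This is a toy stand-in, not the study's method (the researchers used LLMs, which capture far richer signals), and the sample posts and account names are invented.

```python
import math
from collections import Counter

# Build a word-frequency profile from a list of posts.
def profile(posts):
    return Counter(w for p in posts for w in p.lower().split())

# Cosine similarity between two frequency profiles.
def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# An anonymous account's posts, and two known candidate accounts.
anon = profile(["honestly the quantum roadmap is overhyped",
                "honestly nobody reads the quantum appendix"])
candidates = {
    "alice": profile(["honestly the quantum timeline is overhyped"]),
    "bob": profile(["great weather today", "match starts at nine"]),
}

# Pick the candidate whose writing is stylistically closest.
best = max(candidates, key=lambda name: cosine(anon, candidates[name]))
print(best)  # prints "alice"
```

Even this crude approach links the accounts that share vocabulary, which is why the study argues that posting publicly under a pseudonym offers weaker anonymity than many users assume.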

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapidly improving capabilities demand stronger safeguards to protect privacy online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality 2026–2030 Strategy noted that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have stressed ongoing cooperation with tech companies and upcoming guidelines to prioritise flagged content from independent organisations to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breaches push South Korea toward stricter corporate liability rules

South Korea’s government and ruling party are advancing a second revision of the Personal Information Protection Act to strengthen corporate liability for large-scale data breaches.

The proposed amendment would make it easier for victims of major data breaches to receive compensation and relief. By removing the requirement for victims to prove a company’s ‘intent or negligence’, the amendment would increase companies’ legal liability when user data is compromised, making it more likely that affected individuals can claim damages.

Momentum for stricter rules follows several high-profile incidents, including a recent Coupang data breach that may have exposed personal information linked to numerous user accounts. The case has intensified scrutiny of how firms handle and protect customer data.

Officials at South Korea’s Personal Information Protection Commission (PIPC) say victims often struggle to obtain evidence explaining how data breaches occur or how damages arise. The proposed reform would shift a greater evidentiary burden onto companies in disputes over losses.

The amendment would also introduce criminal penalties for anyone who knowingly obtains or distributes leaked personal data, closing a legal gap that currently applies only to employees who unlawfully disclose information. Authorities would gain powers to issue emergency protective orders to limit the spread of compromised data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing new Age-Restricted Material Codes, which require online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to AUD 49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits fintech lender Figure, exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers accessed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers gain access to a corporate single sign-on system, which allows users to log in to multiple internal applications with a single set of credentials, they can move across multiple internal platforms, often including services linked to major providers such as Microsoft and Google.

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission’s new guidance to support the Cyber Resilience Act

The EU Commission has opened a public consultation on draft guidance to help companies apply the EU’s Cyber Resilience Act (CRA), a regulation that sets baseline cybersecurity requirements for hardware and software ‘products with digital elements’ to reduce vulnerabilities and improve security throughout a product’s life cycle. The guidance is framed as practical help, especially for microenterprises and SMEs, and the consultation runs until 31 March 2026.

The CRA is designed to make ‘secure by design’ the default for connected products people use every day, from consumer devices to business software, while giving users clearer information about a product’s security properties. In timeline terms, the Act entered into force on 10 December 2024. The incident reporting duties start on 11 September 2026, and the main obligations apply from 11 December 2027, giving industry a runway but also a clear countdown.

What the Commission is trying to nail down now are the parts companies have found hardest to interpret: how the rules apply to remote data processing solutions (cloud-linked features), how they treat free and open-source software, what ‘support periods’ mean in practice (i.e. how long security upkeep is expected), and how the CRA fits alongside other EU laws. In other words, this is less about announcing new rules and more about reducing legal grey zones before enforcement ramps up.

The guidance push also lands amid a broader policy drive, as on 20 January 2026, the Commission proposed a new EU cybersecurity package, built around a revised Cybersecurity Act and targeted NIS2 amendments. The package aims to harden ICT supply chains, including a framework to jointly identify and mitigate risks across 18 critical sectors, and would enable mandatory ‘de-risking’ of EU mobile telecom networks away from high-risk third-country suppliers. It also proposes a revamped EU cybersecurity certification system with simpler procedures, giving a default 12-month timeline to develop certification schemes, while cutting red tape for tens of thousands of firms and strengthening ENISA’s role, including early warnings, ransomware support, and a major budget boost.

Taken together, these moves show the EU shifting from strategy documents to operational detail: product security on one side (CRA) and ecosystem-level resilience on the other (supply chains, certification, incident reporting and supervision). For companies, that can be both reassuring and demanding: clearer guidance should reduce uncertainty, but the compliance reality may still be layered, especially for businesses spanning devices, software, cloud features, and cross-border operations. The Commission’s stakeholder feedback window is essentially a test of whether these rules can be made workable without diluting their bite.

Why does it matter?

Beyond technical risk, this is increasingly about sovereignty: who sets the rules for digital products, who can be trusted in supply chains, and how much dependency is acceptable in critical infrastructure. Digital governance expert Jovan Kurbalija argues that full ‘stack’ digital sovereignty, that is to say control over infrastructure, services, data, and AI knowledge, is concentrated in very few states, while most countries must balance openness with autonomy. The EU’s current wave of cybersecurity governance fits that pattern: it’s an attempt to turn security standards, certification, and supply-chain choices into a practical form of strategic control, not just to prevent hacks, but to protect democratic institutions, economic competitiveness, and trust in the digital tools people rely on.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!