Trump Executive Order revises US cyber policy and sanctions scope

US President Donald J. Trump signed a new Executive Order (EO) aimed at amending existing federal cybersecurity policies. The EO modifies selected provisions of previous executive orders signed by former Presidents Barack Obama and Joe Biden, introducing updates to sanctions policy, digital identity initiatives, and secure technology practices.

One of the main changes involves narrowing the scope of sanctions related to malicious cyber activity. The new EO limits the applicability of such sanctions to foreign individuals or entities involved in cyberattacks against US critical infrastructure. A White House fact sheet further clarifies that the sanctions do not apply to election-related activities, though that language appears in the fact sheet rather than in the EO text itself.

The order revokes provisions from the Biden-era EO that proposed expanding the use of federal digital identity documents, including mobile driver’s licences. According to the fact sheet, this revocation is based on concerns regarding implementation and potential for misuse. Some analysts have expressed concerns about the implications of this reversal for broader digital identity strategies.

In addition to these policy revisions, the EO outlines technical measures to strengthen cybersecurity capabilities across federal agencies. These include:

  • Developing new encryption standards to prepare for advances in quantum computing, with implementation targets set for 2030.
  • Directing the National Security Agency (NSA) and Office of Management and Budget (OMB) to issue updated federal encryption requirements.
  • Refocusing artificial intelligence (AI) and cybersecurity initiatives on identifying and mitigating vulnerabilities.
  • Assigning the National Institute of Standards and Technology (NIST) responsibility for updating and guiding secure software development practices. This includes the establishment of an industry consortium and a preliminary update to its secure software development framework.

The EO also includes provisions for improving vulnerability tracking and mitigation in AI systems, with coordination required among the Department of Defense, the Department of Homeland Security, and the Office of the Director of National Intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.
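The avoidable part of this leak is the plain-text storage: even a basic salted key-derivation step means a stolen database yields slow-to-crack hashes rather than ready-to-use credentials. The Python sketch below illustrates the general technique with the standard library’s scrypt function; it is a minimal illustration of the principle, not a description of any affected system, and the parameter choices are common example values.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted scrypt hash; store (salt, digest), never the password itself."""
    salt = os.urandom(16)  # unique per credential, so identical passwords hash differently
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

With this scheme, an attacker who copies the database still has to brute-force each credential against a deliberately expensive function, instead of simply reading passwords off the disk as happened here.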

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as a strategic asset and liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks, from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

Milei cleared of ethics breach over LIBRA token post

Argentina’s Anti-Corruption Office has concluded that President Javier Milei did not violate ethics laws when he published a now-deleted post promoting the LIBRA memecoin. The agency stated the February post was made in a personal capacity and did not constitute an official act.

The ruling clarified that Milei’s X account, where the post appeared, is personally managed and predates his political role. It added that the account identifies him as an economist rather than a public official, meaning the post is protected as a private expression under the constitution.

The investigation had been launched after LIBRA’s price soared and then crashed following Milei’s endorsement, which linked to the token’s contract and a promotional site. Investors reportedly lost millions, and allegations of insider trading surfaced.

Although the Anti-Corruption Office cleared him, a separate federal court investigation remains ongoing, with Milei and his sister’s assets temporarily frozen.

Despite the resolution, the scandal damaged public trust. Milei has maintained he acted in good faith, claiming the aim was to raise awareness of a private initiative to support small Argentine businesses through crypto.

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court Justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice, an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and IoT devices, primarily from China, the malware grants cybercriminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S acknowledged only weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyberattack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco Superior Court, accuses Anthropic of breaching contract terms, unjust enrichment, and interfering with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
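For context on the robots.txt claim: the file is an advisory convention, and a compliant crawler fetches it and checks each URL before requesting it; nothing in the file technically blocks access. The Python sketch below shows the standard-library check a well-behaved crawler performs. The bot name, rules, and URLs are illustrative, not taken from the filing.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules: forbid one hypothetical bot from /r/ paths.
rules = """
User-agent: ExampleBot
Disallow: /r/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler consults the parsed rules before every fetch.
print(parser.can_fetch("ExampleBot", "https://example.com/r/somesub"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/r/somesub"))    # True
```

Because compliance is purely voluntary on the crawler’s side, disputes like this one turn on contract terms and conduct rather than on any technical barrier the file creates.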

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 67% higher at $118.21, indicating investor support for the company’s aggressive stance on data protection.

Gmail accounts at risk as attacks rise

Google has urged Gmail users to upgrade their account security after revealing that over 60% of users have been targeted by cyberattacks. Despite the increasing threat, most people still rely on outdated protections like passwords and SMS-based two-factor authentication.

Google is now pushing users to adopt passkeys and social sign-ins to improve their defences. Passkeys offer phishing-resistant access and use biometric methods such as fingerprint or facial recognition tied to a user’s device, removing the need for traditional passwords.

Digitally savvy Gen Z users are more likely to adopt these new methods, but many still reuse passwords, leaving their accounts exposed to breaches and scams. Google emphasised that passwords are both insecure and inconvenient and called on users to switch to tools that offer stronger protection.

Microsoft, meanwhile, has gone even further by encouraging users to eliminate passwords entirely. Google’s long-term goal is to simplify sign-ins while increasing security across its platforms.

Europe gets new cybersecurity support from Microsoft

Microsoft has launched a free cybersecurity initiative for European governments aimed at countering increasingly sophisticated cyber threats powered by AI. Company President Brad Smith said Europe would benefit from tools already developed and deployed in the US.

The programme is designed to identify and disrupt AI-driven threats, including deepfakes and disinformation campaigns, which have previously been used to target elections and undermine public trust.

Smith acknowledged that AI is a double-edged sword, with malicious actors exploiting it for attacks while defenders increasingly use it to stay ahead. Microsoft continues to monitor how its AI products are used, blocking known cybercriminals and working to ensure AI serves as a stronger shield than weapon.
