UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre (NCSC) has warned that integrating AI into national infrastructure broadens the attack surface, increasing the risk of cyber attacks.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, the centre is urging decisive action to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can also strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely fully private to begin with, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Google warns against weak passwords amid £12bn scams

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cyber criminals stealing over £12 billion in the past year alone. Google is warning that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional; they are essential.
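
For readers curious what 2-Step Verification actually adds, the short Python sketch below generates the kind of time-based one-time code (TOTP, RFC 6238) that authenticator apps produce as a second factor. It is a minimal illustration using only the standard library; the base32 secret is a made-up example, not a real credential.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Compute the current time-based one-time code (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)  # 30-second window
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                               # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Made-up demo secret -- in practice this is shared once via a QR code
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and is derived from a secret stored on the user’s device, a stolen password alone is no longer enough to sign in.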

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US law is still catching up, new legislation such as the Take It Down Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, an omission that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to rethink how they assess evidence and protect both the accused and the accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that cover digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a command that truncates them to zero-byte shells. The filenames remain, but the data inside is destroyed, making recovery impossible.
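
To see why the data cannot come back, consider the minimal Python sketch below. It illustrates the truncation mechanism researchers describe, not Anubis’s actual code, and it only touches a throwaway temporary file it creates itself.

    import os, tempfile

    # Create a throwaway file standing in for a victim's document.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
        f.write(b"important data")
        path = f.name

    os.truncate(path, 0)              # contents destroyed; no decryptor can help
    print(os.path.getsize(path))      # 0 -- only a zero-byte shell remains
    os.remove(path)                   # clean up the demo file

Once a file has been truncated this way, the bytes are simply gone from disk, which is why backups rather than ransom payments are the only recovery path.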

Researchers have flagged the destructive feature as highly unusual for ransomware; wiper behaviour of this kind is typically seen in cyberespionage rather than in financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

EU strikes deal to streamline cross-border GDPR enforcement

The EU Council and European Parliament have reached a political agreement to strengthen cross-border enforcement of the General Data Protection Regulation (GDPR). The new regulation, once adopted, will simplify and speed up how national data protection authorities cooperate on cases involving data processing across EU borders.

The move seeks to better protect citizens’ rights and make enforcement more efficient. Key improvements include harmonising the criteria for assessing complaints, regardless of where in the EU they’re filed, and ensuring that both complainants and companies under investigation have the right to be heard throughout the process. The regulation also introduces deadlines to avoid drawn-out investigations: 15 months for complex cases (with a possible 12-month extension) and 12 months for simpler ones.

The agreement also creates an ‘early resolution’ option to settle straightforward complaints without triggering lengthy cross-border procedures. It adds a simplified cooperation track for less contentious cases and encourages authorities to share key case information early to build consensus more quickly among EU partners.

The deal now awaits formal approval from both institutions. Once passed, the new rules will enter into force, marking a significant evolution in how the GDPR is enforced across Europe.

Judge halts OPM data sharing with DOGE amid privacy concerns

A federal judge in New York has ordered the US Office of Personnel Management (OPM) to stop sharing sensitive personal data with agents of the Department of Government Efficiency (DOGE).

The preliminary injunction, issued on 6 June by Judge Denise Cote, cited a strong likelihood that OPM and DOGE violated both the Privacy Act of 1974 and the Administrative Procedure Act.

The lawsuit, led by the Electronic Frontier Foundation and several advocacy groups, alleges that OPM unlawfully disclosed information from one of the largest federal employee databases to DOGE, a controversial initiative reportedly linked to billionaire Elon Musk.

The database includes names, social security numbers, health and financial data, union affiliations, and background check records for millions of federal employees, applicants, and retirees.

Union representatives and privacy advocates called the ruling a significant win for data protection and government accountability. AFGE President Everett Kelley criticised the involvement of ‘Musk’s DOGE cronies’, arguing that unelected individuals should not have access to such sensitive material.

The legal action also seeks to delete any data handed over to DOGE. The case comes amid ongoing concerns about federal data security following OPM’s 2015 breach, which compromised information on more than 22 million people.

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a feedback loop forms in which models imitate content that is itself machine-made, leading to a steady drop in originality and usefulness. The worrying trend is referred to as ‘model collapse’.
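
The feedback loop can be caricatured in a few lines of Python. The toy ‘model’ below simply fits a normal distribution to its training data and then generates the next generation’s training data from that fit; it is an illustrative sketch of the collapse dynamic, not a claim about any real system. Run it and the estimated spread drifts away from the original human-made distribution as sampling error compounds, with rare values vanishing first.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=50)        # generation 0: 'human' data

    for gen in range(1, 11):
        mu, sigma = data.mean(), data.std()     # fit a toy model to current data
        data = rng.normal(mu, sigma, size=50)   # next model trains on model output
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")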

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’ — a rare kind of steel made before nuclear testing in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that the approach leads to more ‘unsafe’ outputs.
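
For readers unfamiliar with the technique, the Python sketch below shows retrieval-augmented generation in miniature. The hashing ‘embedding’ and the three-document corpus are toy stand-ins invented for illustration, not any vendor’s API; the structural point is that whatever the retriever pulls in becomes part of the model’s context.

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Toy embedding: hash each word into a fixed-size vector."""
        v = np.zeros(dim)
        for word in text.lower().split():
            v[hash(word) % dim] += 1.0
        return v / (np.linalg.norm(v) or 1.0)

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        """Return the k documents most similar to the query."""
        q = embed(query)
        return sorted(corpus, key=lambda d: -float(embed(d) @ q))[:k]

    corpus = [  # in a real system this is the live web, polluted or not
        "Anubis is a ransomware family with a file-wiping mode.",
        "Low-background steel predates 1945 nuclear testing.",
        "The GDPR is an EU data protection regulation.",
    ]
    question = "What is Anubis ransomware?"
    prompt = "Context:\n" + "\n".join(retrieve(question, corpus)) + "\n\nQ: " + question
    print(prompt)  # a real system would now send this prompt to an LLM

If the retrieved pages are themselves AI-generated and wrong, the model repeats the errors with confidence, which is exactly the amplification the researchers worry about.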

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Indonesia’s cyber push faces capacity challenges in the provinces

Indonesia is decentralising its approach to cybersecurity, launching eight regional Cyber Crime Directorates within provincial police forces in September 2024.

These directorates, located in areas including Jakarta, East Java, Bali, and Papua, aim to boost local responses to increasingly complex cyber threats—from data breaches and financial fraud to hacktivism and disinformation.

The move marks a shift from Jakarta-led cybersecurity efforts toward a more distributed model, aligning with Indonesia’s broader decentralisation goals. It reflects the state’s recognition that digital threats are not only national in scope, but deeply rooted in local contexts.

However, experts warn that regionalising cyber governance comes with significant challenges. Provincial police commands often lack specialised personnel, digital forensics capabilities, and adaptive institutional structures.

Many still rely on rotations from central agencies or basic training programmes, which are insufficient for dealing with fast-moving and technically advanced cyberattacks.

Moreover, the culture of rigid hierarchy and limited cross-agency collaboration may further hinder rapid response and innovation at the local level. Without reforms to increase flexibility, autonomy, and inter-agency cooperation, these new directorates risk becoming symbolic rather than operationally impactful.

The inclusion of provinces like Central Sulawesi and Papua also reveals a political dimension. These regions are historically security-sensitive, and the presence of cyber directorates could serve both policing and state surveillance functions, raising concerns over the balance between security and civil liberties.

To be effective, the initiative requires more than administrative expansion. It demands sustained investment in talent development, modern infrastructure, and trusted partnerships with local stakeholders—including the private sector and academia.

If these issues are not addressed, Indonesia’s push to regionalise cybersecurity may reinforce old hierarchies rather than build meaningful local capacity. Stronger, smarter institutions—not just new offices—will determine whether Indonesia can secure its digital future.

Graphite spyware used against European reporters, experts warn

A new surveillance scandal has emerged in Europe as forensic evidence confirms that the Israeli spyware firm Paragon used its Graphite tool to target journalists through zero-click attacks on iOS devices. The attacks, which require no user interaction, exposed sensitive communications and location data.

Citizen Lab, in findings also covered by Schneier on Security, identified the spyware on multiple journalists’ devices on 29 April 2025. The discovery marks the first confirmed use of Paragon’s spyware against members of the press, raising alarms over digital privacy and press freedom.

Backed by US investors, Paragon has operated outside of Israel under claims of aiding national security. But its spyware is now at the centre of a widening controversy, particularly in Italy, where the government recently ended its contract with the company after two journalists were targeted.

Experts warn that such attacks undermine the confidentiality crucial to journalism and could erode democratic safeguards. Even Apple’s secure devices proved vulnerable, according to Bleeping Computer, highlighting the advanced nature of Graphite.

The incident has sparked calls for tighter international regulation of spyware firms. Without oversight, critics argue, tools meant for fighting crime risk being used to silence dissent and target civil society.

The Paragon case underscores the urgent need for transparency, accountability, and stronger protections in an age of powerful, invisible surveillance tools.
