UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to shorten the time between vulnerability disclosure and exploitation significantly. The evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Google warns against weak passwords amid £12bn scams

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cyber criminals stealing over £12 billion in the past year alone. Google is warning that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional; they are essential.

China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, accounting for 30 percent of the global total.
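As a rough sanity check, the projected figures quoted above are internally consistent with the stated compound annual growth rates. A minimal sketch in Python (the helper name is illustrative):

```python
# Check that the quoted market projections match the stated growth rates.

def project(start, annual_rate, years):
    """Compound a starting value at `annual_rate` for `years` years."""
    return start * (1 + annual_rate) ** years

# Overall market: US$47bn (2024) growing 23% a year for four years.
overall_2028 = project(47, 0.23, 4)

# Humanoid segment: US$0.3bn (2025) growing 63% a year for five years.
humanoid_2030 = project(0.3, 0.63, 5)

print(round(overall_2028, 1))   # ~107.6, i.e. roughly US$108bn by 2028
print(round(humanoid_2030, 2))  # ~3.45, i.e. roughly US$3.4bn by 2030
```

Both results land within rounding distance of the US$108 billion and US$3.4 billion figures cited in the report.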

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality rather than simply replacing workers, and setting the stage for a brighter industrial future.

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation such as the Take It Down Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still make no mention of synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a command that overwrites them with zero-byte shells. Although the filenames remain, the data inside is lost forever, rendering recovery impossible.
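The mechanism is simple to illustrate. A minimal, harmless sketch in Python (operating only on a scratch file it creates itself) shows how truncating a file to zero bytes leaves the filename intact while the contents are gone:

```python
# Illustrative only: why a zero-byte overwrite defeats file recovery.
# The script creates and removes its own scratch file; it touches nothing else.
import os
import tempfile

# Create a scratch file with some content, mimicking a victim's renamed file.
with tempfile.NamedTemporaryFile("w", suffix=".anubis", delete=False) as f:
    f.write("important data")
    path = f.name

# Opening in write mode truncates the file: the directory entry (the filename)
# survives, but the data blocks are released back to the filesystem.
with open(path, "w"):
    pass

print(os.path.exists(path))   # True  — the file is still listed
print(os.path.getsize(path))  # 0     — but its contents are gone

os.remove(path)  # clean up the scratch file
```

Because the original blocks are discarded rather than encrypted, no decryption key can bring the data back; only offline backups can.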

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiping behaviour is typically seen in cyberespionage rather than financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

Diplo empowers Armenian civil society on digital issues

A new round of training sessions has been launched in Armenia to strengthen civil society’s understanding of digital governance. The initiative, which began on 12 June, brings together NGO representatives from both the regions and the capital to deepen their knowledge of crucial digital topics, including internet governance, AI, and digital rights.

The training program combines online and offline components, aiming to equip participants with the tools needed to actively shape the digital future of Armenia. By increasing the digital competence of civil society actors, the program aspires to promote broader democratic engagement and more informed contributions to policy discussions in the digital space.

The educational initiative is being carried out by Diplo as part of the ‘Digital Democracy for ALL’ measure by GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit), in close cooperation with several regional GIZ projects that focus on civil society and public administration reform in Eastern Partnership countries. The sessions have been praised for their depth and impact, with particular appreciation extended to Angela Saghatelyan for her leadership, and to Diplo’s experts Vladimir Radunovic, Katarina Bojovic, and Marília Maciel for their contributions.

Trump unveils gold smartphone and new 5G wireless service

US President Donald Trump and his sons have launched a mobile phone service called Trump Mobile 5G, alongside plans to release a gold-coloured smartphone branded with the Trump name.

The service is being offered through partnerships with all three major US mobile networks, though they are not named directly.

The monthly plan, known as the ’47 Plan’, costs $47.45, referencing Trump’s position as the 45th and 47th president. Customers can keep their current Android or iPhone devices, using either a physical SIM or an eSIM.

A new Trump-branded Android device, the T1, will launch in September. Priced at $499, it comes with Android 15, a 6.8-inch screen and biometric features like fingerprint scanning and AI facial recognition.

At a press event in New York, Donald Trump Jr. and Eric Trump introduced the initiative, saying it would combine high-quality service with an ‘America First’ approach.

They emphasised that the company is US-based, including its round-the-clock customer service, which promises real human support instead of automated systems.

While some critics may see the move as political branding, the Trump Organisation framed it as a business venture.

The company has already earned hundreds of millions from Trump-branded consumer goods. As with other mobile providers, the new service will fall under the regulatory oversight of the Federal Communications Commission, led by a Trump-appointed chair.

Scientists convert brain signals into words using AI

Australian scientists have developed an AI model that converts brainwaves into spoken words and sentences using a wearable EEG cap.

The system, created at the University of Technology Sydney, marks a significant step in communication technology and cognitive care.

The deep learning model, designed by Daniel Leong, Charles Zhou, and Chin-Teng Lin, currently works with a limited vocabulary but has achieved around 75% accuracy. Researchers aim to improve this to 90% by expanding training data and refining brainwave analysis.

Bioelectronics expert Mohit Shivdasani noted that AI now detects neural patterns previously hidden from human interpretation. Future uses include real-time thought-to-text interfaces or direct communication between people via brain signals.

The breakthrough opens new possibilities for patients with speech or movement impairments, pointing to future human-machine interaction that bypasses traditional input methods.

Smart machines, dark intentions: UN urges global action on AI threats

The United Nations has warned that terrorists could seize control of AI-powered vehicles to launch devastating attacks in public spaces. A new report outlines how extremists might exploit autonomous cars and drones to bypass traditional defences.

There are also fears that AI could be used for facial recognition targeting and for mass ‘swarm’ assaults using aerial devices. Experts suggest that key parts of modern infrastructure could be turned against the public if hacked.

Britain’s updated counter-terrorism strategy now reflects these growing concerns, including the risk of AI-generated propaganda and detailed attack planning. The UN has called for immediate global cooperation to limit how such technologies can be misused.

Security officials maintain that AI also offers valuable tools in the fight against extremism, enabling quicker intelligence processing and real-time threat identification. Nonetheless, authorities have been urged to prepare for worst-case scenarios involving AI-directed violence.
