Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

United brings facial recognition to Seattle airport

United Airlines has rolled out facial recognition at Seattle-Tacoma International Airport, allowing TSA PreCheck passengers to pass through security without ID or boarding passes. The service matches real-time images with government-provided ID photos during the check-in process.
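
At its core, a match of this kind is an embedding comparison: a face model turns each photo into a numeric vector, and two vectors that are close enough are treated as the same person. A minimal sketch in Python, using made-up toy embeddings and a hypothetical threshold rather than anything from United's or TSA's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.8  # hypothetical acceptance threshold, not a real system's value

def same_person(live_embedding, id_embedding):
    """Accept the match if the embeddings are sufficiently similar."""
    return cosine_similarity(live_embedding, id_embedding) >= THRESHOLD

# Toy 4-dimensional embeddings; real systems use vectors with hundreds of
# dimensions produced by a trained face-recognition model.
id_photo = [0.9, 0.1, 0.3, 0.4]
live_scan = [0.88, 0.12, 0.31, 0.42]
print(same_person(live_scan, id_photo))  # → True
```

Real deployments tune the threshold to trade off false accepts against false rejects, which is one reason accuracy and bias in these systems attract scrutiny.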

Seattle is the tenth US airport to adopt the system, following its launch at Chicago O’Hare in 2023. Alaska Airlines and Delta have also introduced similar services at Sea-Tac, signalling a broader shift toward biometric travel solutions.

The TSA’s Credential Authentication Technology was introduced at the airport in October and supports this touchless approach. Experts say facial recognition could soon be used throughout the airport journey, from bag drop to retail purchases.

TSA PreCheck access remains limited to US citizens, nationals, and permanent residents, with a five-year membership costing $78. As more airports adopt facial recognition, concerns about privacy and consent are likely to increase.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.
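
One defence that still helps against cloned portals is flagging lookalike domains before users reach them. A hedged sketch of the idea, using a simple edit-distance check against a hand-maintained list of known login portals (the domains and threshold here are illustrative, not Okta's guidance):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of genuine login portals.
LEGIT = ["login.okta.com", "login.microsoftonline.com"]

def suspicious(domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are near (but not identical to) a known portal."""
    return any(0 < levenshtein(domain, legit) <= max_dist for legit in LEGIT)

print(suspicious("login.0kta.com"))   # → True  (one character swapped)
print(suspicious("login.okta.com"))   # → False (exact match is legitimate)
```

Checks like this catch typosquats but not pixel-perfect clones served from unrelated domains, which is part of why Okta points organisations toward passwordless authentication instead.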

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI bots are taking your meetings for you

AI-powered note-takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattacks drain millions from hospitality sector

The booming hospitality sector handles sensitive guest information daily, from passports to payment details, making it a prime target for cybercriminals. Recent figures reveal the average cost of a data breach in hospitality rose to $3.86 million in 2024, with over 14,000 critical vulnerabilities detected in hotel networks worldwide.

Complex systems connecting guests, staff, vendors, and devices like smart locks multiply entry points for attackers. High staff turnover and frequent reliance on temporary workers add to the sector’s cybersecurity challenges.

New employees are often more susceptible to phishing and social engineering attacks, as demonstrated by costly breaches such as the 2023 MGM Resorts incident. Artificial intelligence helps boost defences but is no cure-all and must be paired with staff training and clear policies.

Recent attacks on major hotel brands have exposed millions of customer records, intensifying pressure on hospitality firms to meet privacy regulations like GDPR. Maintaining robust cybersecurity requires continuous updates to policies, vendor checks, and committed leadership support.

Hotels lagging in these areas risk severe financial and reputational damage in an increasingly hostile cyber landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese-linked hackers target French state in Ivanti exploit campaign

A sophisticated cyber campaign linked to Chinese threat actors has targeted French government, defence and media organisations by exploiting zero-day vulnerabilities in Ivanti’s server software, France’s national cyber agency has revealed.

The French National Agency for Information Systems Security (ANSSI) reported that attackers exploited flaws in an end-of-life version of Ivanti’s Cloud Services Appliance. Victims include public agencies, telecoms, finance firms and media outlets. ANSSI dubbed the threat ‘Houken.’

Hackers used tools developed by Chinese-speaking actors, operated during Chinese working hours and pursued both espionage and financial gain. In one case, they deployed a cryptominer—an unusual move for state-linked actors.

The campaign that targeted France relied on chaining Ivanti zero-days (CVE-2024-8190, CVE-2024-9380 and CVE-2024-8963) to deploy a novel rootkit. Attackers then used webshells, fileless backdoors, and anonymising services like NordVPN.

ANSSI noted similarities to activity by UNC5174, a Chinese initial access broker tracked by Mandiant. This actor, also known as ‘Uteus,’ reportedly works with the Ministry of State Security in China.

Evidence suggests that Houken not only sells access to compromised networks but also carries out direct data exfiltration. One victim included the foreign ministry of a South American country.

The Paris Prosecutor’s Office is investigating a possible botnet linked to Chinese state hackers, though it’s unclear if it’s connected to Houken.

ANSSI warns that both Houken and UNC5174 are still active and likely to continue exploiting exposed infrastructure worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AliExpress agrees to binding EU rules on data and transparency

AliExpress has agreed to legally binding commitments with the European Commission to comply with the Digital Services Act (DSA). These cover six key areas, including recommender systems, advertising transparency, and researcher data access.

The announcement on 18 June marks only the second time a major platform has formally committed to specific changes under the DSA, following TikTok.

The platform promised greater transparency in its recommendation algorithms, user opt-out from personalisation, and clearer information on product rankings. It also committed to allowing researchers access to publicly available platform data through APIs and customised requests.

However, the lack of clear definitions around terms such as ‘systemic risk’ and ‘public data’ may limit practical oversight.

AliExpress has also established an internal monitoring team to ensure implementation of these commitments. Yet experts argue that without measurable benchmarks and external verification, internal monitoring may not be enough to guarantee meaningful compliance or accountability.

The Commission, meanwhile, is continuing its investigation into the platform’s role in the distribution of illegal products.

These commitments reflect the EU’s broader enforcement strategy under the DSA, aiming to establish transparency and accountability across digital platforms. The agreement is a positive start but highlights the need for stronger oversight and clearer definitions for lasting impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BT launches cyber training as small businesses struggle with threats

Cyber attacks aren’t just a problem for big-name brands. Small and medium businesses are increasingly in the crosshairs, according to new research from BT and Be the Business.

Two in five SMEs have never provided cyber security training to their staff, despite a sharp increase in attacks. In the past year alone, 42% of small firms and 67% of medium-sized companies reported breaches.

Phishing remains the most common threat, affecting 85% of businesses. But more advanced tactics are spreading fast, including ransomware and ‘quishing’ scams — where fake QR codes are used to steal data.

Recovering from a breach is costly. Micro and small businesses spend nearly £8,000 on average to recover from their most serious incident. The figure excludes reputational damage and long-term disruption.

To help tackle the issue, BT has launched a new training programme with Be the Business. The course offers practical, low-cost cyber advice designed for companies without dedicated IT support.

The programme focuses on real-world threats, including AI-driven scams, and offers guidance on steps like password hygiene, two-factor authentication, and safe software practices.
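
As an illustration of what two-factor authentication involves under the hood, the time-based one-time passwords used by many authenticator apps can be generated with the standard library alone, following RFC 6238 (the secret below is the RFC's published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890", base32-encoded.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

The six-digit code changes every 30 seconds, so a phished password alone is not enough to log in; the attacker would also need the current code.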

Although 69% of SME leaders are now exploring AI tools to help defend their systems, 18% also list AI as one of their top cyber threats — a sign of both potential and risk.

Experts warn that basic precautions still matter most. With free and affordable training options now widely available, small firms have more tools than ever to improve their cyber defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which still circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3's advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU races to catch up in quantum tech amid cybersecurity fears

The European Union is ramping up efforts to lead in quantum computing, but cybersecurity experts warn that the technology could upend digital security as we know it.

In a new strategy published Wednesday, the European Commission admitted that Europe trails the United States and China in commercialising quantum technology, despite its strong academic presence. The bloc is now calling for more private investment to close the gap.

Quantum computing offers revolutionary potential, from drug discovery to defence applications. But its power poses a serious risk: it could break today’s internet encryption.

Current digital security relies on public key cryptography, which is built on mathematical problems, such as factoring very large numbers, that conventional computers cannot solve in any practical amount of time. But quantum machines could one day break these codes with ease, making sensitive data readable to malicious actors.

Experts fear a ‘store now, decrypt later’ scenario, where adversaries collect encrypted data now and crack it once quantum capabilities mature. That could expose government secrets and critical infrastructure.
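
A toy example makes the stakes concrete. Textbook RSA with tiny primes shows that anyone who can factor the public modulus recovers the private key, which is precisely what a sufficiently large quantum computer running Shor's algorithm threatens to do for real key sizes (the numbers here are deliberately trivial, not realistic parameters):

```python
# Toy RSA key generation with absurdly small primes.
p, q = 61, 53
n, e = p * q, 17                      # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent, normally secret

msg = 42
cipher = pow(msg, e, n)               # "store now": attacker records the ciphertext

# "Decrypt later": factor n by trial division. Trivial for n = 3233,
# classically infeasible for a 2048-bit modulus, but within reach of a
# large fault-tolerant quantum computer.
f = next(k for k in range(2, n) if n % k == 0)
recovered_d = pow(e, -1, (f - 1) * (n // f - 1))
print(pow(cipher, recovered_d, n))    # → 42, the original message
```

Post-quantum cryptography replaces the factoring-based step with problems believed hard even for quantum machines, which is what the transition roadmaps aim to roll out before such computers arrive.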

The EU is also concerned about losing control over homegrown tech companies to foreign investors. While Europe leads in quantum research output, it receives only 5% of global private funding; the US and China together attract over 90%.

European cybersecurity agencies published a roadmap for transitioning to post-quantum cryptography to address the threat. The aim is to secure critical infrastructure by 2030 — a deadline shared by the US, UK, and Australia.

IBM recently said it could release a workable quantum computer by 2029, highlighting the urgency of the challenge. Experts stress that replacing encryption is only part of the task. The broader transition will affect billions of systems, requiring enormous technical and logistical effort.

Governments are already reacting. Some EU states have imposed export restrictions on quantum tech, fearing their communications could be exposed. Despite the risks, European officials say the worst-case scenarios are not inevitable, but doing nothing is not an option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!