Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only for the spokesperson himself to be revealed as fake.

The band denies any connection to the individual, stating on Spotify that an account impersonating it on X is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music is not banned from the platform, he has expressed concern about AI mimicking real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.


United brings facial recognition to Seattle airport

United Airlines has rolled out facial recognition at Seattle-Tacoma International Airport, allowing TSA PreCheck passengers to pass through security without an ID or boarding pass. The system matches real-time images against government-provided ID photos during check-in.

Seattle is the tenth US airport to adopt the system, following its launch at Chicago O’Hare in 2023. Alaska Airlines and Delta have also introduced similar services at Sea-Tac, signalling a broader shift toward biometric travel solutions.

The TSA’s Credential Authentication Technology was introduced at the airport in October and supports this touchless approach. Experts say facial recognition could soon be used throughout the airport journey, from bag drop to retail purchases.

TSA PreCheck access remains limited to US citizens, nationals, and permanent residents, with a five-year membership costing $78. As more airports adopt facial recognition, concerns about privacy and consent are likely to increase.


Cyberattacks drain millions from hospitality sector

The booming hospitality sector handles sensitive guest information daily, from passports to payment details, making it a prime target for cybercriminals. Recent figures reveal the average cost of a data breach in hospitality rose to $3.86 million in 2024, with over 14,000 critical vulnerabilities detected in hotel networks worldwide.

Complex systems connecting guests, staff, vendors, and devices like smart locks multiply entry points for attackers. High staff turnover and frequent reliance on temporary workers add to the sector’s cybersecurity challenges.

New employees are often more susceptible to phishing and social engineering attacks, as demonstrated by costly breaches such as the 2023 MGM Resorts incident. Artificial intelligence can help boost defences, but it is not a cure-all and must be paired with staff training and clear policies.

Recent attacks on major hotel brands have exposed millions of customer records, intensifying pressure on hospitality firms to meet privacy regulations like GDPR. Maintaining robust cybersecurity requires continuous updates to policies, vendor checks, and committed leadership support.

Hotels lagging in these areas risk severe financial and reputational damage in an increasingly hostile cyber landscape.


AliExpress agrees to binding EU rules on data and transparency

AliExpress has agreed to legally binding commitments with the European Commission to comply with the Digital Services Act (DSA). These cover six key areas, including recommender systems, advertising transparency, and researcher data access.

The announcement on 18 June marks only the second time a major platform has formally committed to specific changes under the DSA, following TikTok.

The platform promised greater transparency in its recommendation algorithms, user opt-out from personalisation, and clearer information on product rankings. It also committed to allowing researchers access to publicly available platform data through APIs and customised requests.

However, the lack of clear definitions around terms such as ‘systemic risk’ and ‘public data’ may limit practical oversight.

AliExpress has also established an internal monitoring team to ensure implementation of these commitments. Yet experts argue that without measurable benchmarks and external verification, internal monitoring may not be enough to guarantee meaningful compliance or accountability.

The Commission, meanwhile, is continuing its investigation into the platform’s role in the distribution of illegal products.

These commitments reflect the EU’s broader enforcement strategy under the DSA, aiming to establish transparency and accountability across digital platforms. The agreement is a positive start but highlights the need for stronger oversight and clearer definitions for lasting impact.


TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed they came from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.


EU races to catch up in quantum tech amid cybersecurity fears

The European Union is ramping up efforts to lead in quantum computing, but cybersecurity experts warn that the technology could upend digital security as we know it.

In a new strategy published Wednesday, the European Commission admitted that Europe trails the United States and China in commercialising quantum technology, despite its strong academic presence. The bloc is now calling for more private investment to close the gap.

Quantum computing offers revolutionary potential, from drug discovery to defence applications. But its power poses a serious risk: it could break today’s internet encryption.

Current digital security relies on public key cryptography, which depends on mathematical problems that conventional computers cannot solve in any practical amount of time. Quantum machines, however, could one day break these codes with ease, making sensitive data readable to malicious actors.

Experts fear a ‘store now, decrypt later’ scenario, where adversaries collect encrypted data now and crack it once quantum capabilities mature. That could expose government secrets and critical infrastructure.

The EU is also concerned about losing control of homegrown tech companies to foreign investors. While Europe leads in quantum research output, it receives only about 5% of global private funding; the US and China together attract over 90%.

To address the threat, European cybersecurity agencies have published a roadmap for transitioning to post-quantum cryptography, aiming to secure critical infrastructure by 2030, a deadline shared by the US, UK, and Australia.

IBM recently said it could release a workable quantum computer by 2029, highlighting the urgency of the challenge. Experts stress that replacing encryption is only part of the task. The broader transition will affect billions of systems, requiring enormous technical and logistical effort.

Governments are already reacting. Some EU states have imposed export restrictions on quantum tech, fearing their communications could be exposed. Despite the risks, European officials say the worst-case scenarios are not inevitable, but doing nothing is not an option.


M&S eyes full online recovery by August after cyberattack

Marks & Spencer (M&S) expects its full online operations to be restored within four weeks, following a cyberattack that struck in April. Speaking at the retailer’s annual general meeting, CEO Stuart Machin said the company aims to resolve the majority of the incident’s impact by August.

The cyberattack, attributed to human error, forced M&S to suspend online sales and disrupted supply chain operations, including its Castle Donington distribution centre. The breach also compromised customer personal data and is expected to result in a £300 million hit to the company’s profit.

April marked the beginning of a multi-month recovery process, with M&S confirming by May that the breach involved a supply chain partner. By June, the financial and operational damage became clear, with limited online services restored and key features like click-and-collect still unavailable.

The e-commerce platform in Great Britain is now partially operational, but services such as next-day delivery remain offline. Machin stated that recovery is progressing steadily, with the goal of full functionality within weeks.

Julius Cerniauskas, CEO of web intelligence firm Oxylabs, highlighted the growing risks of social engineering in cyber incidents. He noted that while technical defences are improving, attackers continue to exploit human vulnerabilities to gain access.

Cerniauskas described the planned recovery timeline as a ‘solid achievement’ but warned that long-term reputational effects could persist. ‘It’s not a question of if you’ll be targeted – but when,’ he said, urging firms to bolster both human and technical resilience.

Executive pay may also be affected by the incident. According to the Evening Standard, chairman Archie Norman said incentive compensation would reflect any related performance shortfalls. Norman added that systems are gradually coming back online and progress is being made each week.


Qantas cyberattack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyberattacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.


Tinder trials face scans to verify profiles

Tinder is trialling a facial recognition feature to boost user security and crack down on fraudulent profiles. The pilot is currently underway in the US, after initial launches in Colombia and Canada.

New users are now required to take a short video selfie during sign-up, which will be matched against profile photos to confirm authenticity. The app also compares the scan with other accounts to catch duplicates and impersonations.

Verified users receive a profile badge, and Tinder stores a non-reversible, encrypted face map to aid in detecting duplicates. The company says all facial data is deleted when an account is removed.

The update follows a sharp rise in catfishing and romance scams, with over 64,000 cases reported in the US last year alone. Other measures introduced in recent years include photo verification, ID checks and location-sharing tools.
