Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using the AI tool Suno, only to reveal later that the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the X account impersonating them is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, although Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music isn’t banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LFR tech helps catch dangerous offenders, but Liberty urges legal safeguards

Live facial recognition (LFR) technology used by the Metropolitan Police has led to more than 1,000 arrests, including dangerous offenders wanted for serious crimes, such as rape, robbery and child protection breaches.

Among those arrested was David Cheneler, 73, a registered sex offender spotted by LFR cameras in Camberwell, south London. He was found with a young girl and later jailed for two years for breaching a sexual harm prevention order.

Another arrest included Adenola Akindutire, linked to a machete robbery in Hayes that left a man with life-changing injuries. Stopped during an LFR operation in Stratford, he was carrying a false passport and admitted to several violent offences.

LFR also helped identify Darren Dubarry, 50, who was wanted for theft. He was stopped with stolen designer goods after passing an LFR-equipped van in east London.

The Met says the technology has helped arrest over 100 people linked to serious violence against women and girls, including domestic abuse, stalking, and strangulation.

Lindsey Chiswick, who leads the Met’s LFR work, said the system is helping deliver justice more efficiently, calling it a ‘powerful tool’ that is removing dangerous offenders from the streets of London.

While police say biometric data is not retained for those not flagged, rights groups remain concerned. Liberty says nearly 1.9 million faces were scanned between January 2022 and March 2024, and is calling for new laws to govern police use of facial recognition.

Charlie Whelton of Liberty said the tech risks infringing rights and must be regulated. ‘We shouldn’t leave police forces to come up with frameworks on their own,’ he warned, urging Parliament to legislate before further deployment.

Ari Aster warns of AI’s creeping normality ahead of Eddington release

Ari Aster, the director behind Hereditary and Midsommar, is sounding the alarm on AI. In a recent Letterboxd interview promoting his upcoming A24 film Eddington, Aster described his growing unease with the technology.

He framed it as a quasi-religious force reshaping reality in ways that are already irreversible. ‘If you talk to these engineers… they talk about AI as a god,’ said Aster. ‘They’re very worshipful of this thing. Whatever space there was between our lived reality and this imaginal reality — that’s disappearing.’

Aster’s comments suggest concern not just about the technology, but about the mindset surrounding its development. Eddington, set during the COVID-19 pandemic, is a neo-Western dark comedy.
It stars Joaquin Phoenix and Pedro Pascal as a sheriff and a mayor locked in a bitter digital feud.

The film reflects Aster’s fears about the dehumanising impact of modern technology. He drew from the ideas of media theorist Marshall McLuhan, referencing his phrase: ‘Man is the sex organ of the machine world.’ Aster asked, ‘Is this technology an extension of us, are we extensions of this technology, or are we here to usher it into being?’

The implication is clear: AI may not simply assist humanity—it might define it. Aster’s films often explore existential dread and loss of control. His perspective on AI taps into similar fears, but in real life. ‘The most uncanny thing about it is that it’s less uncanny than I want it to be,’ he said.

‘I see AI-generated videos, and they look like life. The longer we live in them, the more normal they become.’ The normalisation of artificial content strikes at the core of Aster’s unease. It also mirrors recent tensions in Hollywood over AI’s role in creative industries.

In 2023, the WGA and SAG-AFTRA fought for protections against AI-generated scripts and likenesses. Their strike shut down the industry for months but won contract language limiting AI use.

The battles highlighted the same issue Aster warns of: losing artistic agency to machines. What happens, he seems to ask, when content becomes so seamless that it replaces real creativity?

‘Something huge is happening right now, and we have no say in it,’ he said. ‘I can’t believe we’re actually going to live through this and see what happens. Holy cow.’ Eddington is scheduled for release in the United States on 18 July 2025.

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

United brings facial recognition to Seattle airport

United Airlines has rolled out facial recognition at Seattle-Tacoma International Airport, allowing TSA PreCheck passengers to pass through security without ID or boarding passes. The service matches a real-time image of the traveller against their government-provided ID photo during check-in.

Seattle is the tenth US airport to adopt the system, following its launch at Chicago O’Hare in 2023. Alaska Airlines and Delta have also introduced similar services at Sea-Tac, signalling a broader shift toward biometric travel solutions.

The TSA’s Credential Authentication Technology was introduced at the airport in October and supports this touchless approach. Experts say facial recognition could soon be used throughout the airport journey, from bag drop to retail purchases.

TSA PreCheck access remains limited to US citizens, nationals, and permanent residents, with a five-year membership costing $78. As more airports adopt facial recognition, concerns about privacy and consent are likely to increase.

AI brings Babylon’s lost hymn back to life

A hymn to the ancient city of Babylon has been reconstructed after 2,100 years using AI to piece together 30 clay tablet fragments. Once lost after Alexander the Great’s conquest, the song praises the city’s grandeur, morals and daily life in exceptional poetic detail.

The hymn, sung to the god Marduk, depicts Babylon as a flourishing paradise filled with jewelled gates, verdant pastures and flowing rivers. AI tools helped researchers quickly assemble and translate the fragments, revealing a third of the original 250-line text.

The poem sheds rare light on Babylonian values, highlighting kindness to foreigners, the release of prisoners and the sanctity of orphans. It also gives a surprising glimpse into the role of women, including cloistered priestesses who acted as midwives.

Parts of the hymn were copied out by schoolchildren up to 1,400 years after it was composed, showing its cultural importance. Scholars now place it alongside the Epic of Gilgamesh as one of the most treasured literary works from ancient Mesopotamia.

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s realism and the difficulty of detecting coded prompts make its safeguards easier to bypass.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Artists explore meaning and memory at Antwerp Art Weekend

At Antwerp Art Weekend, two standout exhibitions by Eddie Peake and the Amsterdam-based collective Metahaven explored how meaning shifts or falls apart in an age shaped by AI, identity, and emotional complexity.

Metahaven’s film follows a character interacting with an AI assistant while exploring poetry by Eugene Ostashevsky. It contrasts AI’s predictive language models with the unpredictable nature of poetry, using visual metaphors to expose how AI mimics language without fully grasping it.

Meanwhile, Peake’s immersive installation at TICK TACK turned the Belgian gallery into a psychological labyrinth, combining architectural intrusion, raw paintings, and a haunting audio piece. His work considers the weight of identity, sexuality, and memory, moving from aggression to vulnerability.

Despite their differences, both projects provoke questions about how language, identity, and emotion are formed and fractured. Each invites viewers to reconsider the boundaries of expression in a world increasingly influenced by AI and abstraction.

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. Only after a user initiates a conversation can a bot send a single follow-up, and only within a 14-day window.

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on longer, more engaging chatbot interactions appears to be as strategic as it is social.
