Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both the band and its music may be AI-generated.

The mystery deepened after Rolling Stone reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only to reveal later that the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating it on X is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI-generated music isn’t banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests reportedly omitted ozone levels and were conducted in favourable wind conditions, with monitoring equipment placed too close to buildings.

Officials previously argued that the turbines were exempt from regulation because of their ‘mobile’ status, a claim the SELC rejected as legally flawed. Meanwhile, xAI has raised $10 billion in combined debt and equity, underscoring its rapid expansion even as regulatory scrutiny grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans new laws to tackle undersea cable sabotage

The UK government’s evolving defence and security policies aim to close legal gaps exposed by modern threats such as cyberattacks and sabotage of undersea cables. As set out in the recent Strategic Defence Review, ministers plan to introduce a new defence readiness bill to better protect critical subsea infrastructure and to prepare for hostile acts that fall outside traditional definitions of war.

The government is also considering revising the outdated Submarine Telegraph Act of 1885, whose penalties, last raised in 1982 to £1,000, are now recognised as inadequate. Instead of merely increasing fines, officials from the Ministry of Defence and the Department for Science, Innovation and Technology intend to draft comprehensive legislation that balances civil and military needs, clarifies how to prosecute sabotage, and updates the UK’s approach to national defence in the digital age.

These policy initiatives reflect growing concern about ‘grey zone’ threats—deliberate acts of sabotage or cyber aggression that stop short of open conflict yet pose serious national security risks. Recent suspected sabotage incidents, including damage to subsea cables connecting Sweden, Latvia, Finland, and Estonia, have highlighted how vulnerable undersea infrastructure remains.

Investigations have linked several of these operations to Russian and Chinese interests, emphasising the urgency of modernising UK law. By updating its legislative framework, the UK government aims to ensure it can respond effectively to attacks that blur the line between peace and conflict, safeguarding both national interests and critical international data flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU PREVAIL project opens Edge AI platform to users in June

The European Union’s PREVAIL project is preparing to open its Edge AI services to external users in June 2025.

Coordinated by Europe’s top research and technology organisations—CEA-Leti, Fraunhofer-Gesellschaft, imec, and VTT—the initiative offers a shared, multi-hub infrastructure designed to speed up the development and commercialisation of next-generation Edge AI technologies.

Through its platform, European designers will gain access to advanced chip prototyping capabilities and full design support using standard commercial tools.

PREVAIL combines commercial foundry processes with advanced technology modules developed in partner clean rooms. These include embedded non-volatile memories (eNVM), silicon photonics, and 3D integration technologies such as silicon interposers and packaging innovations.

Initial demonstrators, already in development with industry partners, will serve as test cases to ensure compatibility with a broad range of applications and future scalability.

From July 2025, a €20 million EU-funded call under the ‘Low Power Edge AI’ initiative will help selected customers co-finance their access to the platform. Whether supported by EU funds or independently financed, users will be able to design chips using one of four shared platforms.

The consortium has also set up a user interface team to manage technical support and provide access to Process Design Kits and Design Rule Manuals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s telecoms face a key choice between competition and investment

Canada is preparing to finalise a critical policy decision on internet affordability and competition. The core policy, reaffirmed by the Canadian Radio-television and Telecommunications Commission (CRTC), requires the country’s three major telecom providers, Bell, Telus, and Rogers, to grant smaller internet service providers (ISPs) wholesale access to their fibre optic networks.

The ruling aims to increase consumer choice and stimulate competition by allowing smaller players to use existing infrastructure rather than building their own. The policy also notably expands Telus’s ability to enter new markets, such as Ontario and Quebec, without additional infrastructure investment.

Following concerns raised by major telecom companies, the federal government has been asked to review and potentially overturn the decision. The CRTC warns that reversing the policy could undo competition gains and limit future ISP options.

Meanwhile, Telus, which stands to benefit from the expanded access, and other supporters argue that maintaining the ruling protects regulatory independence and creates the market certainty needed to encourage further investment. Bell and other opponents counter that the policy discourages investment and creates unfair competition, with Bell reporting significant cuts to planned infrastructure spending.

Smaller providers worry about losing market share as big players expand using shared networks. The decision will strongly influence Canada’s future internet competition and investment landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365, and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
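
For context, AUC measures how reliably a model ranks a genuinely high-risk patient above a low-risk one: 1.0 means perfect discrimination, while 0.5 is no better than chance. The minimal Python sketch below, using scikit-learn’s roc_auc_score on invented labels and risk scores (illustrative only, not MAARS data), shows how the metric is computed:

```python
from sklearn.metrics import roc_auc_score

# Invented labels: 1 = patient later suffered sudden cardiac death, 0 = did not.
y_true = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]

# Invented risk scores of the kind a model like MAARS might output (not real study data).
y_score = [0.10, 0.22, 0.35, 0.81, 0.70, 0.67, 0.05, 0.40, 0.90, 0.30]

# AUC is the probability that a randomly chosen positive case is scored
# higher than a randomly chosen negative case.
print(roc_auc_score(y_true, y_score))  # ~0.95 on this toy data
```

On that scale, MAARS’s reported 0.89 internal and 0.81 external scores indicate strong, though imperfect, separation of high-risk and low-risk patients.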

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned more than half of the accounts in question before MediaMatters published its findings, the harmful videos had already reached large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg’s ambition to accelerate progress on large language models, an effort reportedly spurred by disappointment with the performance of Meta’s recent Llama releases.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions, supporting LeCun’s research arm, FAIR, while investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly worth up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!