Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only for the magazine to reveal later that the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the X account claiming to speak for it is also fake.

AI detection tools have added to the confusion: rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music isn’t banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests reportedly omitted ozone measurements, were carried out in favourable wind conditions, and used equipment placed too close to buildings.

Officials had previously argued that the turbines were exempt from regulation because of their ‘mobile’ status, a claim the SELC rejected as legally flawed. Meanwhile, xAI has raised $10 billion, split between debt and equity, underlining its rapid expansion even as regulatory scrutiny grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Southern Water uses AI to cut sewer floods

AI deployed in the sewer network has helped prevent homes in West Sussex from flooding, Southern Water has confirmed. The system detected a fatberg in East Lavington before it could cause damage.

The AI monitors sewer flow patterns and distinguishes between regular use, rainfall and developing blockages. On 16 June, digital sensors flagged an anomaly—leading teams to clear the fatberg before wastewater could flood gardens or homes.
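
Southern Water has not described its detection logic in detail, but the behaviour reported here, separating a slow-building blockage from ordinary variation, can be illustrated with a rolling baseline. The snippet below is a minimal sketch under that assumption; the function name, window and threshold are invented for the example.

```python
# Minimal sketch of flow-anomaly flagging (illustrative assumptions
# throughout; Southern Water's actual model is not public). A forming
# blockage tends to show as a sustained rise in level that a short
# rain spike does not explain.
import numpy as np
import pandas as pd

def flag_blockage(levels: pd.Series, window: str = "7D",
                  rise: float = 0.2) -> pd.Series:
    """Flag readings more than `rise` (20%) above a rolling median
    baseline built from the sensor's own recent history."""
    baseline = levels.rolling(window).median()
    return levels > baseline * (1 + rise)

# Toy data: hourly readings that creep upward over the final two days.
idx = pd.date_range("2025-06-01", periods=24 * 21, freq="h")
levels = pd.Series(100.0, index=idx)
levels.iloc[-48:] = levels.iloc[-48:] + np.arange(48.0)

print(flag_blockage(levels).iloc[-3:])  # the final readings come back True
```

A production system would also need rainfall data to separate storm flow from genuine blockages, which is presumably what the utility’s models add.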

‘We’re spotting hundreds of potential blockages before it’s too late,’ said Daniel McElhinney, proactive operations control manager at Southern Water. AI has reduced internal flooding by 40% and external flooding by 15%, the utility said.

Around 32,000 sewer level monitors are in place, checking for unusual flow activity that could signal a blockage or leak. Blocked sewers remain the main cause of pollution incidents, according to the company.

‘Most customers don’t realise the average sewer is only the size of an orange,’ McElhinney added. Even a small amount of cooking fat, combined with unflushable items, can lead to fatbergs and serious disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.
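
The rollout described here is in the consumer Gemini app, but the Veo family is also reachable programmatically. The sketch below follows the long-running-operation pattern of Google’s google-genai Python SDK; the model identifier is an assumption and may differ from what Google currently exposes.

```python
# Sketch of text-to-video via Google's google-genai Python SDK.
# The model identifier below is an assumption and may differ; check
# Google's current documentation. Expects GEMINI_API_KEY in the environment.
import time
from google import genai

client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model name
    prompt="A paper boat drifting down a rain-soaked street at dusk",
)

# Video generation is a long-running job: poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the finished clip (up to eight seconds, per the rollout).
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("clip.mp4")
```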

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU PREVAIL project opens Edge AI platform to users in June

The European Union’s PREVAIL project is preparing to open its Edge AI services to external users in June 2025.

Coordinated by Europe’s top research and technology organisations—CEA-Leti, Fraunhofer-Gesellschaft, imec, and VTT—the initiative offers a shared, multi-hub infrastructure designed to speed up the development and commercialisation of next-generation Edge AI technologies.

Through its platform, European designers will gain access to advanced chip prototyping capabilities and full design support using standard commercial tools.

PREVAIL combines commercial foundry processes with advanced technology modules developed in partner clean rooms. These include embedded non-volatile memories (eNVM), silicon photonics, and 3D integration technologies such as silicon interposers and packaging innovations.

Initial demonstrators, already in development with industry partners, will serve as test cases to ensure compatibility with a broad range of applications and future scalability.

From July 2025, a €20 million EU-funded call under the ‘Low Power Edge AI’ initiative will help selected customers co-finance their access to the platform. Whether supported by EU funds or independently financed, users will be able to design chips using one of four shared platforms.

The consortium has also set up a user interface team to manage technical support and provide access to Process Design Kits and Design Rule Manuals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattacks drain millions from hospitality sector

The booming hospitality sector handles sensitive guest information daily, from passports to payment details, making it a prime target for cybercriminals. Recent figures reveal the average cost of a data breach in hospitality rose to $3.86 million in 2024, with over 14,000 critical vulnerabilities detected in hotel networks worldwide.

Complex systems connecting guests, staff, vendors, and devices like smart locks multiply entry points for attackers. High staff turnover and frequent reliance on temporary workers add to the sector’s cybersecurity challenges.

New employees are often more susceptible to phishing and social engineering attacks, as costly breaches such as the 2023 MGM Resorts incident have shown. Artificial intelligence can strengthen defences, but it is no cure-all and must be paired with staff training and clear policies.

Recent attacks on major hotel brands have exposed millions of customer records, intensifying pressure on hospitality firms to meet privacy regulations like GDPR. Maintaining robust cybersecurity requires continuous updates to policies, vendor checks, and committed leadership support.

Hotels lagging in these areas risk severe financial and reputational damage in an increasingly hostile cyber landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BT launches cyber training as small businesses struggle with threats

Cyber attacks aren’t just a problem for big-name brands. Small and medium businesses are increasingly in the crosshairs, according to new research from BT and Be the Business.

Two in five SMEs have never provided cyber security training to their staff, despite a sharp increase in attacks. In the past year alone, 42% of small firms and 67% of medium-sized companies reported breaches.

Phishing remains the most common threat, affecting 85% of businesses. But more advanced tactics are spreading fast, including ransomware and ‘quishing’ scams — where fake QR codes are used to steal data.

Recovering from a breach is costly. Micro and small businesses spend nearly £8,000 on average to recover from their most serious incident. The figure excludes reputational damage and long-term disruption.

To help tackle the issue, BT has launched a new training programme with Be the Business. The course offers practical, low-cost cyber advice designed for companies without dedicated IT support.

The programme focuses on real-world threats, including AI-driven scams, and offers guidance on steps like password hygiene, two-factor authentication, and safe software practices.

Although 69% of SME leaders are now exploring AI tools to help defend their systems, 18% also list AI as one of their top cyber threats — a sign of both potential and risk.

Experts warn that basic precautions still matter most. With free and affordable training options now widely available, small firms have more tools than ever to improve their cyber defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
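
For context on the metric: AUC is the probability that a model ranks a randomly chosen positive case above a randomly chosen negative one, so 0.5 is chance and 1.0 is a perfect ordering. A toy illustration with scikit-learn, using synthetic numbers rather than the study’s data:

```python
# Toy illustration of the AUC metric reported for MAARS (synthetic
# numbers, not the study's data). AUC is the probability that the
# model ranks a random positive case above a random negative one.
from sklearn.metrics import roc_auc_score

y_true  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]        # 1 = event occurred
y_score = [0.90, 0.80, 0.70, 0.35,              # predicted risk, positives
           0.50, 0.40, 0.30, 0.20, 0.20, 0.10]  # predicted risk, negatives
print(roc_auc_score(y_true, y_score))           # ~0.92 on this toy set
```

By that reading, MAARS’s external AUC of 0.81 means it correctly ordered roughly four in five at-risk/not-at-risk patient pairs.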

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous versions, raising concerns about whether its safeguards can block offensive material at the point of generation.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg’s ambition to accelerate progress in large language models, a push reportedly triggered by disappointment with the performance of Meta’s recent Llama releases.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!