Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only for the spokesperson himself to later be exposed as fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating them on X is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music isn’t banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI bots are taking your meetings for you

AI-powered note-takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AliExpress agrees to binding EU rules on data and transparency

AliExpress has agreed to legally binding commitments with the European Commission to comply with the Digital Services Act (DSA). These cover six key areas, including recommender systems, advertising transparency, and researcher data access.

The announcement on 18 June marks only the second time a major platform has formally committed to specific changes under the DSA, following TikTok.

The platform promised greater transparency in its recommendation algorithms, user opt-out from personalisation, and clearer information on product rankings. It also committed to allowing researchers access to publicly available platform data through APIs and customised requests.

However, the lack of clear definitions around terms such as ‘systemic risk’ and ‘public data’ may limit practical oversight.

AliExpress has also established an internal monitoring team to ensure implementation of these commitments. Yet experts argue that without measurable benchmarks and external verification, internal monitoring may not be enough to guarantee meaningful compliance or accountability.

The Commission, meanwhile, is continuing its investigation into the platform’s role in the distribution of illegal products.

These commitments reflect the EU’s broader enforcement strategy under the DSA, aiming to establish transparency and accountability across digital platforms. The agreement is a positive start but highlights the need for stronger oversight and clearer definitions for lasting impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to Media Matters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed they originated from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before Media Matters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. A bot can send only a single follow-up message, only after the user has initiated a conversation, and only within a 14-day window.

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.
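Taken together, the reported constraints amount to a simple eligibility check. The Python sketch below is a hypothetical illustration of such a gate; the function and field names are assumptions for illustration, not Meta’s actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical gate reflecting the reported rules: the user must have
# messaged first, at most one proactive follow-up is allowed, and it
# must fall within 14 days of the user's last message.
FOLLOW_UP_WINDOW = timedelta(days=14)

def may_send_follow_up(last_user_message_at: datetime | None,
                       follow_ups_sent: int,
                       now: datetime) -> bool:
    if last_user_message_at is None:   # the user never initiated a chat
        return False
    if follow_ups_sent >= 1:           # only one follow-up is permitted
        return False
    return now - last_user_message_at <= FOLLOW_UP_WINDOW
```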

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on longer and more engaging chatbot interactions appears to be as strategic as it is social.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

Community Notes, the user-driven fact-checking system expanded under Elon Musk, is meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.
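X has not published the pilot’s API, so the following Python sketch is purely hypothetical: the endpoint, payload fields, and authentication are assumptions made for illustration, not documented behaviour.

```python
import requests  # third-party HTTP client

# Assumed endpoint; X has not documented the pilot's API.
API_URL = "https://api.x.com/2/notes"

def submit_ai_note(post_id: str, note_text: str,
                   sources: list[str], token: str) -> dict:
    """Submit an AI-drafted Community Note into the normal vetting pipeline."""
    payload = {
        "post_id": post_id,       # the post needing context
        "text": note_text,        # the proposed note
        "sources": sources,       # links supporting the note
        "author_type": "ai",      # vetted exactly like human-written notes
    }
    resp = requests.post(API_URL,
                         headers={"Authorization": f"Bearer {token}"},
                         json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```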

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform allows publishers to set a price for AI crawlers to access their content instead of allowing unrestricted scraping or blocking. Website owners can decide to charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining more control over their material.
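Cloudflare has described Pay per Crawl as building on the long-dormant HTTP 402 ‘Payment Required’ status code. Below is a minimal Python sketch of how a paying crawler might handle the three outcomes; the header names are assumptions loosely based on Cloudflare’s public description, not a confirmed client API.

```python
import requests

MAX_PRICE = "0.01"  # the most this crawler will pay per fetch (assumed units)

def fetch(url: str) -> str | None:
    # Header names are assumptions; Pay per Crawl builds on HTTP 402.
    resp = requests.get(url, headers={"crawler-max-price": MAX_PRICE},
                        timeout=10)
    if resp.status_code == 200:
        return resp.text                      # free, or charged within budget
    if resp.status_code == 402:               # payment required: over budget
        asking = resp.headers.get("crawler-price")
        print(f"{url} costs {asking} per crawl; skipping")
        return None
    resp.raise_for_status()                   # e.g. 403 if the crawler is blocked
    return None
```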

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Condé Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.

As AI agents evolve to gather and deliver information directly, publishers who rely on site visits for revenue face a growing challenge.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qantas cyber attack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyber attacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Springer machine learning book faces fake citation scandal

A Springer Nature book on machine learning has come under scrutiny after researchers discovered that many of its citations were fabricated or erroneous.

A review of 18 citations in Mastering Machine Learning: From Basics to Advanced revealed that two-thirds either referenced nonexistent papers or misattributed authorship and publication sources.

Several academics whose names were included in the book confirmed they did not write the cited material, while others noted inaccuracies in where their actual work was supposedly published. One researcher was alerted by Google Scholar to multiple fake citations under his name.
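Fabricated references of this kind can be screened for programmatically. The Python sketch below queries Crossref’s public REST API (a real, free service); the crude title-and-author match is an illustrative heuristic, not the method the researchers themselves used.

```python
import requests

def citation_exists(title: str, author_surname: str) -> bool:
    """Check whether a cited title matches a real work indexed by Crossref."""
    resp = requests.get("https://api.crossref.org/works",
                        params={"query.bibliographic": title, "rows": 1},
                        timeout=10)
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False                     # nothing remotely similar is indexed
    top = items[0]
    indexed_title = (top.get("title") or [""])[0].lower()
    surnames = {a.get("family", "").lower() for a in top.get("author", [])}
    # Crude heuristic: the best match must carry the cited title and the
    # claimed author; invented or misattributed citations fail this check.
    return title.lower() in indexed_title and author_surname.lower() in surnames
```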

Govindakumar Madhavan, the author, has not confirmed whether AI tools were used in producing the content, though his book discusses ethical concerns around AI-generated text.

Springer Nature has acknowledged the issue and is investigating whether the book breached its AI use policies, which require authors to declare AI involvement beyond basic editing.

The incident has reignited concerns about publishers’ quality control, with critics pointing to the increasing misuse of large language models in academic texts. As AI tools become more advanced, ensuring the integrity of published research remains a growing challenge for both authors and editors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tinder trials face scans to verify profiles

Tinder is trialling a facial recognition feature to boost user security and crack down on fraudulent profiles. The pilot is currently underway in the US, after initial launches in Colombia and Canada.

New users are now required to take a short video selfie during sign-up, which will be matched against profile photos to confirm authenticity. The app also compares the scan with other accounts to catch duplicates and impersonations.

Verified users receive a profile badge, and Tinder stores a non-reversible encrypted face map to aid in detection. The company claims all facial data is deleted when accounts are removed.
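The duplicate detection Tinder describes resembles standard face-embedding comparison: a photo is reduced to a numeric vector from which the image cannot be reconstructed, and vectors are compared by similarity. The Python sketch below illustrates that general idea only; it is not Tinder’s system, and the threshold is arbitrary.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # arbitrary cut-off, for illustration only

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicates(new_scan: np.ndarray,
                    stored_maps: dict[str, np.ndarray]) -> list[str]:
    """Return account IDs whose stored 'face map' matches a new video selfie.

    Each face map is a fixed-length embedding from a face-recognition model;
    the original image cannot be recovered from it, which is what makes the
    stored template non-reversible.
    """
    return [acct for acct, emb in stored_maps.items()
            if cosine_similarity(new_scan, emb) >= SIMILARITY_THRESHOLD]
```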

The update follows a sharp rise in catfishing and romance scams, with over 64,000 cases reported in the US last year alone. Other measures introduced in recent years include photo verification, ID checks and location-sharing tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!