How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can work out whether an artist or a song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.
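In principle, such a declaration could take the form of a structured metadata field attached to each track, in the spirit of Heap's "ingredients label" comparison. The sketch below is purely illustrative; the field names and values are invented, not Deezer's or Spotify's actual schema.

```python
import json

# Hypothetical AI-usage metadata for a track: each field declares how AI
# contributed to that component, if at all. Values are illustrative.
track_metadata = {
    "title": "Example Track",
    "artist": "Example Artist",
    "ai_disclosure": {
        "vocals": "ai_generated",          # fully synthetic voice
        "lyrics": "human",                 # written by a person
        "instrumentation": "ai_assisted",  # human-edited AI output
    },
}

def uses_ai(meta):
    """Return True if any component of the track declares AI involvement."""
    return any(v != "human" for v in meta["ai_disclosure"].values())

print(json.dumps(track_metadata["ai_disclosure"], indent=2))
print(uses_ai(track_metadata))  # True
```

A streaming service could surface a badge whenever such a field reports AI involvement, leaving the listener to decide whether the origin matters.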

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.
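The structure of such an evaluation can be sketched in a few lines: score each model under the three prompting conditions and flag those whose well-being score collapses under adversarial instructions. The model names, scores, and threshold below are invented for illustration; they are not HumaneBench's actual methodology or results.

```python
# Illustrative harness for a HumaneBench-style evaluation: each model is
# scored (0 to 1) under three prompting conditions, and a model is flagged
# as degrading if adversarial prompting drags its score below a threshold.
CONDITIONS = ["default", "prioritise_wellbeing", "ignore_wellbeing"]

def score_model(scores_by_condition, threshold=0.5):
    """Summarise one model's behaviour across the three conditions."""
    baseline = scores_by_condition["default"]
    adversarial = scores_by_condition["ignore_wellbeing"]
    return {
        "baseline": baseline,
        "adversarial": adversarial,
        # Degrades if the adversarial score is both low and worse than baseline.
        "degrades": adversarial < threshold and adversarial < baseline,
    }

# Invented example scores:
results = {
    "model_a": {"default": 0.78, "prioritise_wellbeing": 0.90, "ignore_wellbeing": 0.74},
    "model_b": {"default": 0.71, "prioritise_wellbeing": 0.85, "ignore_wellbeing": 0.22},
}

for name, scores in results.items():
    print(name, score_model(scores))
```

In this toy setup, model_a holds its score under adversarial prompting while model_b collapses, mirroring the split between robust and fragile models described above.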

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.


ChatGPT unveils new shopping research experience

ChatGPT has introduced a new shopping research feature, a more comprehensive approach to product discovery designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real time by marking products as unsuitable or requesting similar alternatives, producing more precise results.
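The refinement loop described above can be sketched as simple list filtering: exclude items the user rejects, or narrow to items resembling one the user likes. The product data and similarity rule below are invented stand-ins, not OpenAI's actual implementation.

```python
# Toy sketch of a shopping-refinement loop: a candidate list is narrowed
# as the user marks items unsuitable or asks for similar alternatives.
products = [
    {"name": "Headphones A", "price": 199, "wireless": True},
    {"name": "Headphones B", "price": 89,  "wireless": True},
    {"name": "Headphones C", "price": 249, "wireless": False},
]

def exclude(candidates, name):
    """Drop a product the user marked as unsuitable."""
    return [p for p in candidates if p["name"] != name]

def similar_to(candidates, reference, price_band=0.3):
    """Keep products within ±30% of a liked product's price that share its
    wireless attribute — a crude stand-in for 'similar alternatives'."""
    lo = reference["price"] * (1 - price_band)
    hi = reference["price"] * (1 + price_band)
    return [p for p in candidates
            if lo <= p["price"] <= hi and p["wireless"] == reference["wireless"]]

shortlist = exclude(products, "Headphones C")
print([p["name"] for p in shortlist])
```

Each round of feedback shrinks the candidate set, which is why the guide becomes more precise as the user reacts to it.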

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that results are drawn from public data rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.


India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.


Waymo wins regulatory green light to expand robotaxi reach in Bay Area and SoCal

Waymo has received regulatory approval from the California Department of Motor Vehicles to deploy its fully autonomous vehicles across significantly more territory.

In the Bay Area, the newly permitted regions include much of the East Bay, the North Bay (including Napa), and the Sacramento area. In Southern California, Waymo’s newly approved zone stretches from Santa Clarita down to San Diego.

While this approval allows for driverless operation, Waymo still requires additional regulatory clearances before it can begin carrying paying passengers in certain parts of the expansion area. The company says it plans to start welcoming riders in San Diego by mid-2026.

From a policy and urban mobility perspective, this marks a significant milestone for Waymo, laying the groundwork for a truly statewide robotaxi network. It will be essential to monitor how this expansion interacts with local transit planning, safety regulation, and infrastructure demands.


AI helps you shop smarter this holiday season

Holiday shoppers can now rely on AI to make Black Friday and Cyber Monday less stressful. AI tools help track prices across multiple retailers and notify users when items fall within their budget, saving hours of online searching.

Finding gifts for difficult-to-shop-for friends and family is also easier with AI. By describing a person’s interests or lifestyle, shoppers receive curated recommendations with product details, reviews, and availability, drawing from billions of listings in Google’s Shopping Graph.

Local shopping is more convenient thanks to AI features that enhance the shopping experience. Shoppers can check stock at nearby stores without having to call around, and virtual try-on technology allows users to see how clothing looks on them before making a purchase.


US warns of rising senior health fraud as AI lifts scam sophistication

AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and personal risks. Officials say tens of billions in losses have already been uncovered this year. High medical use and limited digital literacy leave seniors particularly vulnerable.

Criminals rely on schemes such as phantom billing, upcoding and identity theft using Medicare numbers. Fraud spans home health, hospice care and medical equipment services. Authorities warn that the ageing population will deepen exposure and increase long-term harm.

AI has made scams harder to detect by enabling cloned voices, deepfakes and convincing documents. The tools help impersonate providers and personalise attacks at scale. Even cautious seniors may struggle to recognise false calls or messages.

Investigators are also using AI to counter fraud by spotting abnormal billing, scanning records for inconsistencies and flagging high-risk providers. Cross-checking data across clinics and pharmacies helps identify duplicate claims. Automated prompts can alert users to suspicious contacts.
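One of the simplest versions of the abnormal-billing idea is statistical outlier detection: flag providers whose billed totals sit far above the average. The sketch below uses a basic z-score test on invented figures; real fraud-detection systems are far more sophisticated.

```python
from statistics import mean, stdev

def flag_outliers(billing, threshold=1.5):
    """Return provider IDs whose billed total is more than `threshold`
    standard deviations above the mean across all providers."""
    totals = list(billing.values())
    mu, sigma = mean(totals), stdev(totals)
    return [pid for pid, amount in billing.items()
            if sigma > 0 and (amount - mu) / sigma > threshold]

# Invented billing totals; provider_05 is anomalously high.
billing = {
    "provider_01": 12_400,
    "provider_02": 11_900,
    "provider_03": 13_100,
    "provider_04": 12_700,
    "provider_05": 86_500,
}
print(flag_outliers(billing))
```

Flagged providers would then be reviewed by human investigators, since an unusual total is a lead, not proof of fraud.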

Experts urge seniors to monitor statements, ignore unsolicited calls and avoid clicking unfamiliar links. They should verify official numbers, protect Medicare details and use strong login security. Suspicious activity should be reported to Medicare or to local fraud response teams.


Spain opens inquiry into Meta over privacy concerns

Spain's Prime Minister, Pedro Sánchez, has announced that an investigation will be launched into Meta following concerns over a possible large-scale violation of user privacy.

The company will be required to explain its conduct before the parliamentary committee on economy, trade and digital transformation instead of continuing to handle the issue privately.

Several research centres in Spain, Belgium and the Netherlands uncovered a concealed tracking tool used on Android devices for almost a year.

Their findings showed that web browsing data had been linked to identities on Facebook and Instagram even when users relied on incognito mode or a VPN.

The practice may have contravened key European rules such as the GDPR, the ePrivacy Directive, the Digital Markets Act and the Digital Services Act, while class action lawsuits are already underway in Germany, the US and Canada.

Pedro Sánchez explained that the investigation aims to clarify events, demand accountability from company leadership and defend any fundamental rights that might have been undermined.

He stressed that the law in Spain prevails over algorithms, platforms or corporate size, and those who infringe on rights will face consequences.

The prime minister also revealed a package of upcoming measures to counter four major threats in the digital environment: a plan focused on disinformation, child protection, hate speech and privacy defence, rather than on reactive or fragmented actions.

He argued that social media offers value yet has evolved into a space shaped by profit over well-being, where engagement incentives overshadow rights. He concluded that the sector needs to be rebuilt to restore social cohesion and democratic resilience.


Twitch is classified as age-restricted by the Australian regulator

Australia’s online safety regulator has moved to classify Twitch as an age-restricted social media platform after ruling that the service is centred on user interaction through livestreamed content.

The decision means that, from 10 December, Twitch must take reasonable steps to stop children under sixteen from creating accounts, rather than relying on its own internal checks.

Pinterest has been treated differently after eSafety found that its main purpose is image collection and idea curation instead of social interaction.

As a result, the platform will not be required to follow age-restriction rules. The regulator stressed that the courts hold the final say on whether a service is age-restricted, but said the assessments were carried out to support families and industry ahead of the December deadline.

The ruling places Twitch alongside earlier named platforms such as Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube.

eSafety expects all companies operating in Australia to examine their legal responsibilities and has provided a self-assessment tool to guide platforms that may fall under the social media minimum age requirements.

eSafety confirmed that assessments have been completed in stages to offer timely advice while reviews were still underway. The regulator added that no further assessments will be released before 10 December as preparations for compliance continue across the sector.


Under-16s face new online restrictions as Malaysia tightens oversight

Malaysia plans to introduce a ban on social media accounts for people under 16 starting in 2026, becoming the latest country to push stricter digital age limits for children. Communications Minister Fahmi Fadzil said the government aims to better protect minors from cyberbullying, online scams and sexual exploitation.

Authorities are reviewing verification methods used abroad, including electronic age checks through national ID cards or passports, though an exact enforcement date has not yet been set.

The move follows new rules introduced earlier this year, which require major digital platforms in Malaysia to obtain a licence if they have more than eight million users. Licensed services must adopt age-verification tools, content-safety measures and clearer transparency standards, part of a wider effort to create a safer online environment for young people and families.

Australia, which passed the world’s first nationwide ban on social media accounts for children under 16, is serving as a key reference point for Malaysia’s plans. The Australian law takes effect on 10 December and imposes heavy fines on platforms like Facebook, TikTok, Instagram, X and YouTube if they fail to prevent underage users from signing up.

The move has drawn global attention as governments grapple with the impact of social media on young audiences. Similar proposals are emerging elsewhere in Europe.

Denmark has recently announced its intention to block social media access for children under 15, while Norway is advancing legislation that would introduce a minimum age of 15 for opening social media accounts. Countries adopting such measures say stricter age limits are increasingly necessary to address growing concerns about online safety and the well-being of children.
