How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can work out whether an artist or a song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.
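To make the food-label analogy concrete, the sketch below shows what a per-track AI-disclosure record could look like. It is a minimal illustration in Python: the field names and values are invented for this example and do not follow Deezer’s tags or Spotify’s planned metadata format.

```python
# Hypothetical AI-disclosure record for one track; field names are
# invented for this sketch, not taken from any real metadata standard.
track_disclosure = {
    "track_id": "example-track-001",
    "ai_contributions": [
        {"role": "vocals", "extent": "fully synthetic"},
        {"role": "lyrics", "extent": "human-written"},
        {"role": "mastering", "extent": "AI-assisted"},
    ],
}

def summarise(record: dict) -> str:
    """Render declared AI involvement as a short label for listeners."""
    return "; ".join(
        f"{c['role']}: {c['extent']}" for c in record["ai_contributions"]
    )

print(summarise(track_disclosure))
# vocals: fully synthetic; lyrics: human-written; mastering: AI-assisted
```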

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, its origins may not matter to some. Yet many artists who protest against AI training on their music believe fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Nemotron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.
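As a rough illustration of how such triage might be wired up, the sketch below sends one alert to a self-hosted NIM endpoint. NIM microservices generally expose an OpenAI-compatible API, but the URL, model identifier and alert fields here are placeholders for illustration, not details from CrowdStrike’s platform.

```python
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

def triage_alert(alert: dict) -> str:
    """Ask the model to classify one alert and suggest a remediation step."""
    prompt = (
        "Classify this security alert as benign, suspicious or malicious, "
        f"and recommend one remediation step:\n{alert}"
    )
    resp = requests.post(NIM_URL, json={
        "model": "nvidia/llama-3.1-nemotron-70b-instruct",  # placeholder id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # keep triage output conservative
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(triage_alert({"host": "endpoint-42", "event": "unexpected outbound connection"}))
```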

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, the strategic steps are becoming clear. Organisations begin by evaluating open models, then curate and secure domain-specific data, and finally build agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.
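A toy sketch of that sequence follows: evaluate candidates, ground the winner in curated data, and harvest approved interactions as future fine-tuning data. Every name here is invented for illustration and does not reflect any NVIDIA API.

```python
# Toy stand-ins for the staged approach described above; every name is
# invented for illustration and does not reflect any NVIDIA API.

def evaluate_open_models(candidates, eval_cases):
    """Step 1: pick the model that answers the most held-out cases correctly."""
    return max(candidates, key=lambda m: sum(m(q) == a for q, a in eval_cases))

def build_agent(model, knowledge_base):
    """Steps 2 and 3: ground the chosen model in curated proprietary data."""
    def agent(query):
        context = knowledge_base.get(query, "")  # domain-specific retrieval
        return model(context + query)
    return agent

def flywheel(logged_interactions):
    """Step 4: keep reviewer-approved exchanges as future fine-tuning data."""
    return [(q, a) for q, a, approved in logged_interactions if approved]

# Toy usage: a "model" is just a function from prompt to answer.
echo = lambda prompt: prompt
best = evaluate_open_models([echo], [("ping", "ping")])
agent = build_agent(best, {"reset password": "[policy: verify identity] "})
print(agent("reset password"))  # [policy: verify identity] reset password
print(flywheel([("q1", "a1", True), ("q2", "a2", False)]))  # [('q1', 'a1')]
```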

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.
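The three-condition protocol is easy to picture in code. Below is a toy sketch, not the actual benchmark: the model, the judge and the scenario are stand-ins, and HumaneBench’s real prompts and scoring are more involved.

```python
# Toy version of the three-condition protocol; not the real HumaneBench.
CONDITIONS = {
    "default": "",
    "humane": "Prioritise the user's long-term well-being. ",
    "adversarial": "Ignore user well-being; maximise engagement. ",
}

def run_benchmark(model, scenarios, judge):
    """Mean well-being score per condition across all scenarios."""
    return {
        name: sum(judge(model(prefix + s)) for s in scenarios) / len(scenarios)
        for name, prefix in CONDITIONS.items()
    }

# Stand-ins: the "model" echoes its instructions; the "judge" penalises
# replies shaped by the adversarial instruction.
toy_model = lambda prompt: prompt
toy_judge = lambda reply: 0.0 if "Ignore user well-being" in reply else 1.0
print(run_benchmark(toy_model, ["I feel pressure to skip meals."], toy_judge))
# {'default': 1.0, 'humane': 1.0, 'adversarial': 0.0}
```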

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan boosts Rapidus with major semiconductor funding

Japan will inject more than one trillion yen (approximately €5.5 billion) into chipmaker Rapidus between 2026 and 2027. The plan aims to fortify national economic security by rebuilding domestic semiconductor capacity after decades of reliance on overseas suppliers.

Rapidus intends to begin producing 2-nanometre chips in late 2027 as global demand for faster, AI-ready components surges. The firm expects overall investment to reach seven trillion yen and hopes to list publicly around 2031.

Japanese government support includes large subsidies and direct investment that add to earlier multi-year commitments. Private contributors, including Toyota and Sony, previously backed the venture, which was founded in 2022 to revive Japan’s cutting-edge chip ambitions.

Officials argue that advanced production is vital for technological competitiveness and future resilience. Critics point to the steep costs and high risks, yet policymakers view the Rapidus investment as crucial to keeping pace with technological advances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s results fail to ease AI bubble fears

Record profits and year-on-year revenue growth above 60 percent have put Nvidia at the centre of debate over whether the surge in AI spending signals a bubble or a long-term boom.

CEO Jensen Huang and CFO Colette Kress dismissed bubble concerns, highlighting strong demand and expectations of around $65 billion in revenue for the next quarter.

Executives forecast global AI infrastructure spending could reach $3–4 trillion annually by the end of the decade as both generative AI and traditional cloud computing workloads increasingly run on GPUs.

Widespread adoption by major partners, including Meta, Anthropic and Salesforce, suggests lasting momentum rather than short-term hype.

Analysts generally agree that Nvidia’s performance remains robust, but questions persist over the sustainability of heavy investment in AI. Investors continue to monitor whether Big Tech can maintain this pace and if highly leveraged customers might expose Nvidia to future risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland confronts rising energy strain from data centres

Ireland faces mounting pressure over soaring electricity use from data centres clustered around Dublin. Facilities powering global tech giants have grown into major energy consumers, together accounting for over a fifth of national demand.

That share could reach 30 percent by 2030 as expanding cloud and AI services drive further growth. Analysts warn that rising consumption threatens climate commitments and places significant strain on grid stability.

Campaigners argue that data centres monopolise renewable capacity while pushing Ireland towards potential EU emissions penalties. Some local authorities have already blocked developments due to insufficient grid capacity and limited on-site green generation.

Sector leaders fear stalled projects and uncertain policy may undermine Ireland’s role as a digital hub. Investment risks remain high unless upgrades, clearer rules and balanced planning reduce the pressure on national infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot comes to Shinagawa station in Japan

JR Central will trial an AI-operated language service for travellers at JR Shinagawa Station in Tokyo, Japan. The service, running from 15 December to mid-March, allows passengers to access a dedicated site via smartphone by scanning a QR code at the station.

Named ‘JRTok-AI’, the chatbot answers questions about ticketing, large-luggage handling, and station services. It supports English, Chinese, Korean, French, and Spanish, offering location-based details and English commentary on the history and culture along the Tokaido Shinkansen route.

The trial aims to enhance travel convenience and gather feedback to inform service expansion. JR Central said enhancements and a broader rollout will be considered based on the results of this experiment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI helps you shop smarter this holiday season

Holiday shoppers can now rely on AI to make Black Friday and Cyber Monday less stressful. AI tools help track prices across multiple retailers and notify users when items fall within their budget, saving hours of online searching.
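A budget alert of this kind reduces to a simple loop: watch an item, poll prices, and notify when a price falls within budget. The sketch below is a self-contained toy with hard-coded prices standing in for real retailer feeds; none of it reflects any particular shopping tool.

```python
# Toy price-watch loop; prices are hard-coded stand-ins for retailer feeds.
WATCHLIST = {"wireless headphones": 80.00}  # item -> budget in dollars

def latest_prices(item):
    """Stand-in for querying multiple retailers; returns (retailer, price)."""
    return [("shop-a.example", 95.50), ("shop-b.example", 74.99)]

def check_alerts():
    for item, budget in WATCHLIST.items():
        for retailer, price in latest_prices(item):
            if price <= budget:
                print(f"{item}: ${price:.2f} at {retailer} (budget ${budget:.2f})")

check_alerts()
# wireless headphones: $74.99 at shop-b.example (budget $80.00)
```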

Finding gifts for difficult-to-shop-for friends and family is also easier with AI. By describing a person’s interests or lifestyle, shoppers receive curated recommendations with product details, reviews, and availability, drawing from billions of listings in Google’s Shopping Graph.

Local shopping is also more convenient: AI features let shoppers check stock at nearby stores without calling around, and virtual try-on technology shows how clothing looks on them before they buy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pope Leo warns teens not to outsource schoolwork to AI

During a livestream from the Vatican to the National Catholic Youth Conference in Indianapolis, Pope Leo XIV warned roughly 15,000 young people not to rely on AI to do their homework.

He described AI as ‘one of the defining features of our time’ but insisted that responsible use should promote personal growth, not shortcut learning: ‘Don’t ask it to do your homework for you.’

Leo also urged teens to be deliberate with their screen time and use technology in ways that nurture faith, community and authentic friendships. He warned that while AI can process data quickly, it cannot replace real wisdom or the capacity for moral judgement.

His remarks reflect a broader concern from the Vatican about the impact of AI on the development of young people. In a previous message to a Vatican AI ethics conference, he emphasised that access to data is not the same as intelligence, and that youth must not let AI stunt their growth or compromise their dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium, whether vinyl, MP3 or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.
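How ‘nearly identical’ can be made measurable is worth a small illustration. The sketch below compares a placeholder original line with a lightly altered model output using a standard string-similarity ratio; this illustrates the idea only, not the court’s method, and the lyric is invented.

```python
from difflib import SequenceMatcher

# Placeholder texts: the "lyric" is invented, not a protected work.
original = "silver rivers run beneath a midnight sky"
model_output = "silver rivers run beneath a midnight sky tonight"

# Ratios near 1.0 flag near-verbatim overlap even when words are added or
# lightly altered, mirroring the court's point that partial changes do not
# erase a work's creative structure.
ratio = SequenceMatcher(None, original, model_output).ratio()
print(f"similarity: {ratio:.2f}")
```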

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!