Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs than on societal choices: how openness, competition, and security are balanced will determine whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has emerged after FAMM, the record label representing Jorja Smith, alleged that the viral dance track I Run by Haven used an unauthorised AI clone of the singer’s voice.

The BBC’s report describes how the song gained traction on TikTok before being removed from streaming platforms following copyright complaints.

The label said it wanted a share of royalties, arguing that both versions of the track (the original release and a re-recording with new vocals) infringed Smith’s rights and exploited the creative labour behind her catalogue.

FAMM said the issue was bigger than one artist, warning that fans had been misled and that unlabelled AI music risked becoming ‘the new normal’. Smith later shared the label’s statement, which characterised artists as ‘collateral damage’ in the race towards AI-driven production.

Producers behind I Run confirmed that AI was used to transform their own voices into a more soulful, feminine tone. Harrison Walker said he used Suno, generative software sometimes called the ‘ChatGPT for music’, to reshape his vocals, while fellow producer Waypoint admitted employing AI to achieve the final sound.

They maintained that the songwriting and production were fully human, and shared project files to support their claim.

The controversy highlights broader tensions surrounding AI in music. Suno has acknowledged training its system on copyrighted material under the US ‘fair use’ doctrine, while record labels continue to challenge such practices.

Although the original AI version of I Run was barred from chart eligibility, the re-recorded version reached the UK Top 40. At the same time, AI-generated acts such as Breaking Rust and hybrid AI-human projects like The Velvet Sundown have demonstrated the growing commercial appeal of synthetic vocals.

Musicians and industry figures are increasingly urging stronger safeguards. FAMM said AI-assisted tracks should be clearly labelled, and added it would distribute any royalties to Smith’s co-writers in proportion to how much of her catalogue they contributed to, arguing that if AI relied on her work, so should any compensation.

The debate continues as artists push back more publicly, including through symbolic protests such as last week’s vinyl release of silent tracks, which highlighted fears over weakened copyright protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek launches AI model achieving gold-level maths scores

Chinese AI company DeepSeek has unveiled Math-V2, the first open-source AI model to achieve gold-level performance at the International Mathematical Olympiad, a tier reached by only a small share of human participants.

The system, now available on GitHub and Hugging Face, allows developers to freely modify and deploy the model under a permissive licence.

Math-V2 also excelled in the 2024 Chinese Mathematical Olympiad, demonstrating advanced reasoning and problem-solving capabilities. Unlike many AI systems, it features a self-verification process that enables it to check solutions even for problems without known answers.

The launch comes as US AI leaders, such as Google DeepMind and OpenAI, have achieved similar milestones with their proprietary models.

Open access to Math-V2 could democratise advanced mathematical tools, potentially accelerating scientific research and development globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Warner Music partners with AI song generator Suno

A landmark agreement has been reached between Warner Music and AI music platform Suno, ending last year’s copyright lawsuit that accused the service of using artists’ work without permission.

Fans can now generate AI-created songs using the voices, names, and likenesses of Warner artists who opt in, offering a new way to engage with music.

The partnership will introduce new licensed AI models, with download limits and paid tiers designed to prevent a flood of AI tracks on streaming platforms.

Suno has also acquired the live-music discovery platform Songkick, expanding its digital footprint and strengthening connections between AI music and live events.

Music industry experts say the deal demonstrates how AI innovation can coexist with artists’ rights, as the UK government continues consultations on intellectual property for AI.

Creators and policymakers are advocating opt-in frameworks to ensure artists are fairly compensated when their works are used to train AI models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many are trained primarily on language patterns, resulting in inconsistent decisions even when they assess images produced by the very same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can tell whether a new artist or a song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. For some, if a track resonates emotionally, its origins may not matter; yet many artists who protest against AI training on their music believe fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium (vinyl, MP3, or AI output), the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation, because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!