Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has emerged after FAMM, the record label representing Jorja Smith, alleged that the viral dance track ‘I Run’ by Haven used an unauthorised AI clone of the singer’s voice.

The BBC’s report describes how the song gained traction on TikTok before being removed from streaming platforms following copyright complaints.

The label said it wanted a share of royalties, arguing that both versions of the track (the original release and a re-recording with new vocals) infringed Smith’s rights and exploited the creative labour behind her catalogue.

FAMM said the issue was bigger than one artist, warning that fans had been misled and that unlabelled AI music risked becoming ‘the new normal’. Smith later shared the label’s statement, which characterised artists as ‘collateral damage’ in the race towards AI-driven production.

Producers behind ‘I Run’ confirmed that AI was used to transform their own voices into a more soulful, feminine tone. Harrison Walker said he used Suno, generative software sometimes called the ‘ChatGPT for music’, to reshape his vocals, while fellow producer Waypoint admitted employing AI to achieve the final sound.

They maintain that the songwriting and production were fully human and shared project files to support their claim.

The controversy highlights broader tensions surrounding AI in music. Suno has acknowledged training its system on copyrighted material under the US ‘fair use’ doctrine, while record labels continue to challenge such practices.

Even as the AI version of ‘I Run’ was barred from chart eligibility, its revised version reached the UK Top 40. At the same time, AI-generated acts such as Breaking Rust and hybrid AI-human projects like Velvet Sundown have demonstrated the growing commercial appeal of synthetic vocals.

Musicians and industry figures are increasingly urging stronger safeguards. FAMM said AI-assisted tracks should be clearly labelled, and added it would distribute any royalties to Smith’s co-writers in proportion to their contributions to her catalogue, arguing that if AI relied on her work, so should any compensation.

The debate continues as artists push back more publicly, including through symbolic protests such as last week’s vinyl release of silent tracks, which highlighted fears over weakened copyright protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data centre power demand set to triple by 2035

Data centre electricity use is forecast to surge almost threefold by 2035. BloombergNEF reported that global facilities are expected to draw around 106 gigawatts of power by then.

Analysts linked the growth to larger sites and rising AI workloads, pushing utilisation rates higher. New projects are expanding rapidly, with many planned facilities exceeding 500 megawatts.

Major capacity is heading to states within the PJM grid, alongside significant additions in Texas. Regulators warned that grid operators must restrict connections when capacity risks emerge.

Industry monitors argued that soaring demand contributes to higher regional electricity prices. They urged clearer rules to ensure reliability as early-stage project numbers continue accelerating.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When AI use turns dangerous for diplomats

Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising documents, a trend Jovan Kurbalija describes as the rise of ‘Shadow AI.’ These platforms, often used through personal accounts or consumer apps, offer speed and convenience that overstretched diplomatic services struggle to match.

But the same ease of use that makes Shadow AI attractive also creates a direct clash with diplomacy’s long-standing foundations of discretion and controlled ambiguity.

Kurbalija warns that this quiet reliance on commercial AI platforms exposes sensitive information in ways diplomats may not fully grasp. Every prompt, whether drafting talking points, translating notes, or asking for negotiation strategies, reveals assumptions, priorities, and internal positions.

Over time, this builds a detailed picture of a country’s concerns and behaviour, stored on servers outside diplomatic control and potentially accessible through foreign legal systems. The risk is not only data leakage but also the erosion of diplomatic craft, as AI-generated text encourages generic language, inflates documents, and blurs the national nuances essential to negotiation.

The problem, Kurbalija argues, is rooted in a ‘two-speed’ system. Technology evolves rapidly, while institutions adapt slowly.

Diplomatic services can take years to develop secure, in-house tools, while commercial AI is instantly available on any phone or laptop. Yet the paradox is that safe, locally controlled AI, based on open-source models, is technically feasible and financially accessible. What slows progress is not technology, but how ministries manage and value knowledge, their core institutional asset.

Rather than relying on awareness campaigns or bans, which rarely change behaviour, Kurbalija calls for a structural shift: foreign ministries must build trustworthy, in-house AI ecosystems that keep all prompts, documents, and outputs within controlled government environments. That requires redesigning workflows, integrating AI into records management, and empowering the diplomats who have already experimented informally with these tools.

Only by moving AI from the shadows into a secure, well-governed framework, he argues, can diplomacy preserve its confidentiality, nuance, and institutional memory in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing a joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements in AI safety. They explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s Sea-Lion initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which placed particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNDP highlights rising inequality in the AI era

AI is developing at an unprecedented speed, but a growing number of countries lack the necessary infrastructure, digital skills, and governance systems to benefit from it. According to a new UNDP report, this imbalance is already creating economic and social strain, especially in states that are unprepared for rapid technological change.

The report warns that the risk is the emergence of a ‘Next Great Divergence,’ in which global inequalities deepen as advanced economies adopt AI while others fall further behind.

The study, titled ‘The Next Great Divergence: Why AI May Widen Inequality Between Countries,’ highlights Asia and the Pacific as the region where these trends are most visible. Home to some of the world’s fastest-growing economies as well as countries with limited digital capacity, the region faces a widening gap in digital readiness and institutional strength.

Without targeted investment and smarter governance, many nations may struggle to harness AI’s potential while becoming increasingly vulnerable to its disruptions.

To counter this trajectory, the UNDP report outlines practical strategies for governments to build resilient digital ecosystems, expand access to technology, and ensure that AI supports inclusive human development. These recommendations aim to help countries adopt AI in a manner that strengthens, rather than undermines, economic and social progress.

The publication is the result of a multinational effort involving researchers and institutions across Asia, Europe, and North America. Contributors include teams from the Massachusetts Institute of Technology, the London School of Economics and Political Science, the Max Planck Institute for Human Development, Tsinghua University, the University of Science and Technology of China, the Aapti Institute, and India’s Digital Future Lab, whose collective insights shaped the report’s findings and policy roadmap.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The study, conducted by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

The researchers argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and Synopsys shape a new era in engineering

The US tech giant NVIDIA has deepened its long-standing partnership with Synopsys through a multi-year strategy designed to redefine digital engineering across global industries.

The agreement includes a two-billion-dollar investment in Synopsys shares and a coordinated effort to bring accelerated computing into every stage of research and development.

The aim is to replace slow, fragmented workflows with highly efficient engineering supported by GPU power, agentic AI and advanced physics simulation.

Research teams across semiconductor design, aerospace, automotive and industrial manufacturing continue to face rising complexity and escalating development costs. NVIDIA and Synopsys plan to respond by unifying their strengths, rather than relying on traditional CPU-bound methods.

NVIDIA’s accelerated computing platforms will connect with Synopsys tools to enable faster design, broader simulation capability and more precise verification. The collaboration extends to autonomous engineering through AI agents built on Synopsys AgentEngineer and NVIDIA’s agentic AI stack.

Digital twins stand at the centre of the new strategy. Accurate virtual models, powered through Omniverse and Synopsys simulation environments, will allow engineers to test and validate products in virtual space before physical production.

Cloud-ready access will support companies of all sizes, rather than restricting advanced engineering to large enterprises with specialised infrastructure. Both firms intend to promote adoption through a shared go-to-market programme.

The partnership remains open and non-exclusive, ensuring continued cooperation with the broader semiconductor and electronic design ecosystem.

NVIDIA and Synopsys expect accelerated engineering to reshape innovation cycles, offering a route to faster product development and more reliable outcomes across every major technical sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cairo Forum examines MENA’s path in the AI era

The Second Cairo Forum brought together experts to assess how AI, global shifts, and economic pressures are shaping MENA. Speakers said the region faces a critical moment as new technologies accelerate. The discussion asked whether MENA will help shape AI or simply adopt it.

Participants highlighted global divides, warning that data misuse and concentrated control remain major risks. They argued that middle-income countries can collaborate to build shared standards. Several speakers urged innovation-friendly regulation supported by clear safety rules.

Officials from Egypt outlined national efforts to embed AI across health, agriculture, and justice. They described progress through applied projects and new governance structures. Limited data access and talent retention were identified as continuing obstacles.

Industry voices stressed that trust, transparency, and skills must underpin the use of AI. They emphasised co-creation that fits regional languages and contexts. Training and governance frameworks were seen as essential for responsible deployment.

Closing remarks warned that rapid advances demand urgent decisions. Speakers said safety investment lags behind development, and global competition is intensifying. They agreed that today’s choices will shape the region’s AI future.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Baidu emerges as China’s AI chip leader

Baidu is emerging as a key player in China’s AI chip market, with its Kunlunxin unit stepping in to fill the gap left by Nvidia following US export restrictions.

The company plans a five-year roadmap for AI chips, beginning with the M100 in 2026 and the M300 in 2027, while already using its chips to run ERNIE AI models.

Strong domestic demand and shortages of AI chips among Chinese tech giants, such as Alibaba and Tencent, have created an opportunity for Baidu.

The company sells chips to third parties and rents computing capacity via its cloud, presenting itself as a full-stack AI provider with integrated infrastructure, models, and applications.

Analysts predict explosive growth for Baidu’s AI chip business, with sales expected to increase sixfold to 8 billion yuan ($1.1 billion) by 2026. Industry experts highlight that the timely delivery of competitive Kunlun chip generations could make Baidu a strategic supplier to the rest of China’s AI ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Open-source tech shapes the future of global AI governance

As the world marks a decade since China introduced the idea of building a ‘community of shared future in cyberspace,’ the executive director of DiploFoundation and former UN official Jovan Kurbalija says the concept has never been more relevant. Speaking to the Global Times, he emphasised that global digital cooperation has historically been rooted in openness, from the early internet protocols to today’s rapid development of open-source AI.

That collaborative model, he argued, is shaping the next phase of digital evolution, where the line between physical and virtual space is rapidly disappearing.

Kurbalija noted that AI technologies sit at the crossroads of today’s geopolitical, economic, and social tensions. While they amplify opportunities, they also heighten risks, making cooperation essential.

He said that global governance must focus on expanding safe and inclusive technological use while managing the rising dangers associated with rapid innovation. Many UN initiatives, he added, are built on this very logic, widening the space for cooperation to prevent digital divisions from deepening.

At the heart of that challenge is inclusivity. Despite technological progress, one-third of humanity still lacks internet access.

Kurbalija emphasised that true inclusion requires far more than connectivity, and that it demands skills, market access, and participation in decision-making, from local communities to international institutions. Education and capacity development, he said, are fundamental to ensuring that youth, marginalised groups, and people with disabilities can benefit from AI rather than be left behind. Affordable open-source AI will be crucial in bridging this divide.

China’s growing role in the global AI ecosystem is central to these changes. According to Kurbalija, open-source models released by companies such as DeepSeek have rapidly reshaped an industry previously dominated by proprietary systems.

Their impact has been so significant that many countries are now placing open-source approaches at the core of their national AI strategies. He called this shift a ‘historic development’ with far-reaching consequences for transparency, accessibility, and the long-term governance of AI.

Looking ahead, Kurbalija believes China’s Global Governance Initiative could usher in a new phase of international cooperation. The key challenge, he said, will be grounding fast-moving AI innovation in deeper cultural and societal traditions.

He pointed to China’s own philosophical heritage, highlighted during last year’s Global Dialogue on AI, Philosophy and Governance, as an example of how ancient ideas can help guide future technological progress. As nations grapple with the uncertainty of AI transformation, he argued, it is these cultural roots and shared human values that may ultimately shape a more stable and cooperative digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!