Warner Music partners with AI song generator Suno

A landmark agreement has been reached between Warner Music and AI music platform Suno, ending last year’s copyright lawsuit that accused the service of using artists’ work without permission.

Fans can now generate AI-created songs using the voices, names, and likenesses of Warner artists who opt in, offering a new way to engage with music.

The partnership will introduce new licensed AI models, with download limits and paid tiers designed to prevent a flood of AI tracks on streaming platforms.

Suno has also acquired the live-music discovery platform Songkick, expanding its digital footprint and strengthening connections between AI music and live events.

Music industry experts say the deal demonstrates how AI innovation can coexist with artists’ rights, as the UK government continues consultations on intellectual property for AI.

Creators and policymakers are advocating opt-in frameworks to ensure artists are fairly compensated when their works are used to train AI models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many models are primarily trained on language patterns, resulting in inconsistent decisions even when dealing with images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether an artist or song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium, whether vinyl, MP3, or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Google launches Nano Banana Pro image model

Google has launched Nano Banana Pro, a new image generation and editing model built on Gemini 3 Pro. The upgrade expands Gemini’s visual capabilities inside the Gemini app, Google Ads, Google AI Studio, Vertex AI and Workspace tools.

Nano Banana Pro focuses on cleaner text rendering, richer world knowledge and tighter control over style and layout. Creators can produce infographics, diagrams and character-consistent scenes, and refine lighting, camera angle or composition with detailed prompts.

The AI model supports higher resolution visuals, localised text in multiple languages and more accurate handling of complex scripts. Google highlights uses in marketing materials, business presentations and professional design workflows, as partners such as Adobe integrate the model into Firefly and Photoshop.

Users can try Nano Banana Pro through Gemini with usage limits, while paying customers and enterprises gain extended access. Google embeds watermarking and C2PA-style metadata to help identify AI-generated images, foregrounding safety and transparency around synthetic content.

Creative industries seek rights protection amid AI surge

British novelists are raising concerns that AI could replace their work, with nearly half saying the technology could ‘entirely replace’ them. The MCTD survey of 332 authors found deep unease about the impact of generative tools trained on vast fiction datasets.

About 97% of novelists expressed intense negativity towards the idea of AI writing complete novels, while around 40% said their income from related work had already suffered. Many authors have reported that their work has been used to train large language models without their permission or payment.

While 80% agreed that AI offers societal benefits, authors called for better protections, including copyright reform and consent-based use of their work. MCTD Executive Director Prof. Gina Neff stressed that creative industries are not expendable in the AI race.

A UK government spokesperson said collaboration between the AI sector and creative industries is vital, with a focus on innovation and protection for creators. But writers say urgent action is needed to ensure their rights are upheld.

New AI co-pilot uses CAD software to generate 3D designs

MIT engineers have developed a novel AI system able to use CAD software in a human-like way, controlling the interface with clicks, drags and menu commands to build 3D models from 2D sketches.

The team created a dataset called VideoCAD, comprising more than 41,000 real CAD session videos that explicitly show how users build shapes step-by-step, including mouse movement, keyboard commands and UI interactions.

By learning from this data, the AI agent can translate high-level design intents, such as ‘draw a line’ or ‘extrude a shape’, into specific UI actions like clicking a tool, dragging over a sketch region and executing the command.
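The intent-to-action translation described above can be pictured with a toy sketch. This is purely illustrative: the names `UIAction` and `plan_actions`, and the specific action sequences, are hypothetical and do not come from the MIT VideoCAD codebase.

```python
from dataclasses import dataclass

@dataclass
class UIAction:
    kind: str          # e.g. "click", "drag", "key"
    target: str        # UI element or canvas region
    detail: str = ""   # extra parameters, e.g. drag endpoints

def plan_actions(intent: str) -> list[UIAction]:
    """Map a high-level design intent to low-level UI actions,
    mimicking how an agent might drive a CAD interface."""
    if intent == "draw a line":
        return [
            UIAction("click", "line_tool"),
            UIAction("drag", "sketch_canvas", "start -> end"),
        ]
    if intent == "extrude a shape":
        return [
            UIAction("click", "extrude_tool"),
            UIAction("drag", "selected_profile", "pull to depth"),
            UIAction("key", "enter"),  # confirm the operation
        ]
    raise ValueError(f"unknown intent: {intent}")

# A single intent expands into an ordered sequence of interface steps.
actions = plan_actions("extrude a shape")
print([a.kind for a in actions])
```

The real system learns this mapping from thousands of recorded sessions rather than hand-written rules, but the shape of the problem, intent in, ordered UI actions out, is the same.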

When given a 2D drawing, the AI generates a complete 3D model by replicating the sequence of UI interactions a human designer would use. The researchers tested this on a variety of objects, from simple brackets to more complex architectural shapes.

The long-term vision is to build an AI-enabled CAD co-pilot that not only automates repetitive modelling tasks but also works collaboratively with human designers to suggest next steps, speed up workflows or handle tedious operations.

The researchers argue this could significantly lower the barrier to entry for CAD use, making 3D design accessible to people without years of training.

From a digital economy and innovation policy perspective, this development is significant. It demonstrates how AI-driven UI agents are evolving, not just processing text or data, but also driving complex, creative software. That raises questions around intellectual property (who owns the design if the AI builds it?), productivity (will it replace or support designers?) and education (how will CAD teaching adapt?).

EU proposal sparks alarm over weakened privacy rules

The European Commission has released the Digital Omnibus, prompting strong criticism from privacy advocates. Campaigners argue the reforms would weaken long-standing data protection standards and introduce sweeping changes without proper consultation.

Noyb founder Max Schrems claims the plan favours large technology firms by creating loopholes around personal data and lowering user safeguards. Critics say the proposals emerge despite limited political support from EU governments, civil society groups and several parliamentary factions.

Industry, by contrast, has welcomed the Omnibus, having called for simplification for years. The changes should make business activities simpler for entities that process vast amounts of data.

The Commission is also accused of rushing the process under political pressure, with errors appearing in the draft (including in references to the GDPR), abandoning impact assessments and shifting priorities away from widely supported protections. View our analysis for a deep dive on the matter.
