DeepSeek launches AI model achieving gold-level maths scores

Chinese AI company DeepSeek has unveiled Math-V2, the first open-source AI model to achieve gold-level performance at the International Mathematical Olympiad.

The system, now available on GitHub and Hugging Face, allows developers to freely modify and deploy the model under a permissive licence.

Math-V2 also excelled in the 2024 Chinese Mathematical Olympiad, demonstrating advanced reasoning and problem-solving capabilities. Unlike many AI systems, it features a self-verification process that enables it to check solutions even for problems without known answers.

The launch comes after US AI leaders such as Google DeepMind and OpenAI achieved similar milestones with their proprietary models.

Open access to Math-V2 could democratise advanced mathematical tools, potentially accelerating scientific research and development globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek opens access to gold-level maths AI

Chinese AI firm DeepSeek has released the first open AI model capable of achieving gold-medal results at the International Mathematical Olympiad. Math-V2 is now freely available on Hugging Face and GitHub, allowing developers to repurpose it and run it locally.

Gold-level performance at the IMO is remarkably rare, with only a small share of human participants reaching the top tier. DeepSeek aims to make such advanced mathematical capabilities accessible to researchers and developers who previously lacked access to comparable systems.

The company said its model achieved gold-level scores in both this year’s Olympiad and the Chinese Mathematical Olympiad. The results relied on strong theorem-proving skills and a new ‘self-verification’ method for reasoning without known solutions.
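
For readers curious what such a method involves, the underlying generate-then-verify pattern is straightforward to sketch. The code below is a minimal illustration, not DeepSeek’s published pipeline: the prover and verifier calls are hypothetical stubs standing in for model calls, and the scoring threshold is invented.

```python
from typing import Optional, Tuple

def propose_proof(problem: str, feedback: Optional[str]) -> str:
    # Hypothetical stand-in for the prover model; a real system would
    # call an LLM here, conditioning on any critique from the verifier.
    return f"Draft proof for {problem!r} (feedback: {feedback})"

def score_proof(problem: str, draft: str) -> Tuple[float, str]:
    # Hypothetical stand-in for the verifier model. It judges the logical
    # soundness of the draft itself rather than comparing it to a reference
    # answer, which is what lets the loop run on unsolved problems.
    return 0.95, "steps check out"

def solve_with_self_verification(problem: str, max_rounds: int = 8,
                                 threshold: float = 0.9) -> Optional[str]:
    draft = propose_proof(problem, feedback=None)
    for _ in range(max_rounds):
        confidence, critique = score_proof(problem, draft)
        if confidence >= threshold:
            return draft                                   # verifier satisfied
        draft = propose_proof(problem, feedback=critique)  # revise and retry
    return None                                            # budget exhausted

print(solve_with_self_verification("sqrt(2) is irrational"))
```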

Observers said the open release could lower barriers to advanced maths AI, while US firms keep their Olympiad-level systems restricted. Supporters of open-source development welcomed the move as a significant step toward democratising advanced scientific tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Warner Music partners with AI song generator Suno

Warner Music and AI music platform Suno have reached a landmark agreement, ending last year’s copyright lawsuit that accused the service of using artists’ work without permission.

Fans can now generate AI-created songs using the voices, names, and likenesses of Warner artists who opt in, offering a new way to engage with music.

The partnership will introduce new licensed AI models, with download limits and paid tiers designed to prevent a flood of AI tracks on streaming platforms.

Suno has also acquired the live-music discovery platform Songkick, expanding its digital footprint and strengthening connections between AI music and live events.

Music industry experts say the deal demonstrates how AI innovation can coexist with artists’ rights, as the UK government continues consultations on intellectual property for AI.

Creators and policymakers are advocating opt-in frameworks to ensure artists are fairly compensated when their works are used to train AI models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models, which are trained primarily on language patterns, resulting in inconsistent decisions even for images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.
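
To make the offering concrete: agencies typically reach models such as Claude through Bedrock’s invoke API. The sketch below uses boto3 and assumes AWS credentials and Bedrock model access are already configured; the region and model ID are examples, not details from the announcement.

```python
import json
import boto3

# Create a Bedrock runtime client (the region is an example choice).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body in the Anthropic Messages format used on Bedrock.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user",
                  "content": "Summarise FedRAMP in one sentence."}],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)

# The response body is a stream; decode the JSON and print the text part.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```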

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can tell whether an artist or song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.
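
None of these cues is decisive on its own, but together they work like a weighted checklist. Purely as an illustration, such a tally might look like the toy sketch below; the signal names and weights are invented, not drawn from any real detector.

```python
# Toy tally of the red flags described above. Signals and weights are
# invented for illustration; this is not a real AI-music detector.
SIGNALS = {
    "no_live_performances": 2.0,
    "sparse_social_media_history": 1.5,
    "unusually_polished_promo_images": 1.0,
    "formulaic_melodies": 1.5,
    "overly_smooth_or_breathless_vocals": 1.5,
    "strictly_grammatical_lyrics": 1.0,
    "several_near_identical_albums_at_once": 2.5,
}

def suspicion_score(observed: set[str]) -> float:
    """Sum the weights of the red flags observed for an act."""
    return sum(weight for name, weight in SIGNALS.items() if name in observed)

observed = {"no_live_performances", "several_near_identical_albums_at_once"}
print(f"{suspicion_score(observed):.1f} out of {sum(SIGNALS.values()):.1f}")
```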

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.
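
Neither company’s schema is described in the report, so the following is a hypothetical illustration of what a per-track AI-contribution declaration could carry; every field name here is invented.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical per-track AI-disclosure record, illustrating the kind of
# metadata Deezer-style tags or Spotify's planned system might carry.
@dataclass
class AIDisclosure:
    track_id: str
    ai_vocals: bool           # were any vocals synthesised?
    ai_instrumentation: bool  # were any instrumental parts generated?
    ai_lyrics: bool           # were the lyrics machine-written?
    tools_used: list[str]     # generative tools credited by the artist

record = AIDisclosure(
    track_id="example-0001",
    ai_vocals=True,
    ai_instrumentation=False,
    ai_lyrics=False,
    tools_used=["ai.Mogen"],  # e.g. Heap's credited AI voice model
)
print(json.dumps(asdict(record), indent=2))
```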

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, its origins may not matter to some fans; yet many artists who protest against AI training on their music believe listeners deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium, whether vinyl, MP3 or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Nano Banana Pro image model

Google has launched Nano Banana Pro, a new image generation and editing model built on Gemini 3 Pro. The upgrade expands Gemini’s visual capabilities inside the Gemini app, Google Ads, Google AI Studio, Vertex AI and Workspace tools.

Nano Banana Pro focuses on cleaner text rendering, richer world knowledge and tighter control over style and layout. Creators can produce infographics, diagrams and character-consistent scenes, and refine lighting, camera angle or composition with detailed prompts.

The AI model supports higher-resolution visuals, localised text in multiple languages and more accurate handling of complex scripts. Google highlights uses in marketing materials, business presentations and professional design workflows, as partners such as Adobe integrate the model into Firefly and Photoshop.

Users can try Nano Banana Pro through Gemini with usage limits, while paying customers and enterprises gain extended access. Google embeds watermarking and C2PA-style metadata to help identify AI-generated images, foregrounding safety and transparency around synthetic content.
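
The article does not detail the manifest format, but a simplified sketch of what C2PA-style provenance metadata enables follows; real C2PA manifests are cryptographically signed binary structures, and the surrounding structure here is a simplified stand-in (only the action name c2pa.created and the trainedAlgorithmicMedia source type are drawn from the published C2PA and IPTC vocabularies).

```python
import json

def describe_provenance(manifest_json: str) -> str:
    """Report whether a simplified manifest claims AI generation."""
    manifest = json.loads(manifest_json)
    generator = manifest.get("claim_generator", "unknown tool")
    actions = {a.get("action") for a in manifest.get("actions", [])}
    if ("c2pa.created" in actions
            and manifest.get("digital_source_type") == "trainedAlgorithmicMedia"):
        return f"Declared AI-generated (created with {generator})."
    return f"No AI-generation claim found (generator: {generator})."

# A minimal, made-up manifest of the kind such a checker would consume.
sample = json.dumps({
    "claim_generator": "Nano Banana Pro",  # example value
    "digital_source_type": "trainedAlgorithmicMedia",
    "actions": [{"action": "c2pa.created"}],
})
print(describe_provenance(sample))
```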

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creative industries seek rights protection amid AI surge

British novelists are raising alarm over AI, with nearly half saying the technology could ‘entirely replace’ their work. A survey of 332 authors by the Minderoo Centre for Technology and Democracy (MCTD) found deep unease about the impact of generative tools trained on vast fiction datasets.

About 97% of novelists expressed intense negativity towards the idea of AI writing complete novels, while around 40% said their income from related work had already suffered. Many authors have reported that their work has been used to train large language models without their permission or payment.

While 80% agreed AI offers societal benefits, authors called for better protections, including copyright reform and consent-based use of their work. MCTD Executive Director Prof. Gina Neff stressed that creative industries are not expendable in the AI race.

A UK government spokesperson said collaboration between the AI sector and creative industries is vital, with a focus on innovation and protection for creators. But writers say urgent action is needed to ensure their rights are upheld.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!