Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report ‘Media Integrity and Authentication’ reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.
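
Of these techniques, digital fingerprinting is the simplest to illustrate: a cryptographic hash of a file’s bytes acts as a tamper-evident fingerprint, since any edit changes the digest. The sketch below is a minimal illustration only; real systems such as C2PA rely on signed provenance manifests and perceptual hashes rather than a bare SHA-256.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest serving as a simple digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"frame data of a media file"
tampered = b"frame data of a media File"  # a one-character edit

# Identical bytes yield identical fingerprints...
assert fingerprint(original) == fingerprint(original)
# ...while any alteration, however small, produces a completely different digest.
assert fingerprint(original) != fingerprint(tampered)
```

This catches bit-level tampering, but it cannot say *who* created a file or *how* it was edited; that is the gap provenance standards aim to fill.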

Microsoft has pioneered these technologies, cofounding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lyria 3 brings AI-generated music to Gemini

The Gemini app has introduced Lyria 3, the latest music-generation model from Google DeepMind, enabling users to create 30-second tracks from text prompts, images, or videos. The feature is rolling out in beta, marking a further expansion of creative tools within the platform.

Users can customise genre, tempo, and vocals, while the system generates lyrics automatically when needed. Tracks include AI-generated cover art and can be shared directly, aiming to provide a simple way to produce short, personalised soundtracks rather than full compositions.

Audio created in the app is embedded with SynthID watermarking to identify AI-generated content, alongside new verification tools that allow users to check whether files were produced using Google AI.

The model is designed to produce original material rather than replicate specific artists, supported by filters and reporting mechanisms.

Availability initially covers multiple major languages for users aged 18 and over, with higher usage limits offered to premium subscribers. Lyria 3 is also being integrated into YouTube creator tools to enhance Shorts soundtracks as the rollout expands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood groups challenge ByteDance over Seedance 2.0 copyright concerns

ByteDance is facing scrutiny from Hollywood organisations over its AI video generator Seedance 2.0. Industry groups allege the system uses actors’ likenesses and copyrighted material without permission.

The Motion Picture Association said the tool reflects large-scale unauthorised use of protected works. Chairman Charles Rivkin called on ByteDance to halt what he described as infringing activities that undermine creators’ rights and jobs.

SAG-AFTRA also criticised the platform, citing concerns over the use of members’ voices and images. Screenwriter Rhett Reese warned that rapid AI development could reshape opportunities for creative professionals.

ByteDance acknowledged the concerns and said it would strengthen safeguards to prevent misuse of intellectual property. The company reiterated its commitment to respecting copyright while addressing complaints.

The dispute underscores wider tensions between technological innovation and rights protection as generative AI tools expand. Legal experts say the outcome could influence how AI video systems operate within existing copyright frameworks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn launches agentic AI for in-house legal teams

LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.

Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.

The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.

Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Olympic ice dancers performing to AI-generated music spark controversy

The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, one Czech sibling duo used a hybrid soundtrack blending AC/DC with an AI-generated piece.

Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.

The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.

Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.

The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic drives strategic trademark dispute in India

US AI company Anthropic’s expansion into India has triggered a legal dispute with a Bengaluru-based software firm that claims it has used the name ‘Anthropic’ since 2017. The Indian company argues that the US AI firm’s market entry has caused customer confusion. It is seeking recognition of prior use and damages of ₹10 million.

A commercial court in Karnataka has issued notice and suit summons to Anthropic but declined to grant an interim injunction. Further hearings are scheduled. The local firm says it prefers coexistence but turned to litigation due to growing marketplace confusion.

The dispute comes as India becomes a key growth market for global AI companies. Anthropic recently announced local leadership and expanded operations in the country. India’s large digital economy and upcoming AI industry events reinforce its strategic importance.

The case also highlights broader challenges linked to the rapid global expansion of AI firms. Trademark protection, brand due diligence, and regulatory clarity are increasingly central to cross-border digital market entry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated ‘slop’ spreads on Spotify, raising platform integrity concerns

A TechRadar report highlights the growing presence of AI-generated music on Spotify, often produced in large quantities and designed to exploit platform algorithms or royalty systems.

These tracks, sometimes described as ‘AI slop’, are appearing in playlists and recommendations, raising concerns about quality control and fairness for human musicians.

The article outlines signs that a track may be AI-generated, including generic or repetitive artwork, minimal or inconsistent artist profiles, and unusually high volumes of releases in a short time. Some tracks also feature vague or formulaic titles and metadata, making them difficult to trace to real creators.
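
Those signals lend themselves to a simple red-flag count. The sketch below is purely illustrative: the field names and thresholds are assumptions for demonstration, not anything defined by Spotify or the TechRadar article.

```python
def suspicion_score(track: dict) -> int:
    """Count heuristic red flags for a track: generic artwork, a thin
    artist profile, an unusually high release volume, and formulaic titles.
    Field names and thresholds are illustrative assumptions."""
    flags = 0
    if track.get("generic_artwork"):
        flags += 1
    if track.get("artist_bio_words", 0) < 10:  # minimal artist profile
        flags += 1
    if track.get("releases_last_30_days", 0) > 20:  # unusually prolific
        flags += 1
    if track.get("formulaic_title"):
        flags += 1
    return flags

suspect = {
    "generic_artwork": True,
    "artist_bio_words": 3,
    "releases_last_30_days": 40,
    "formulaic_title": True,
}
print(suspicion_score(suspect))  # 4 of 4 flags raised
```

No single flag is conclusive; the article’s point is that several of these signals appearing together is what should prompt a closer look, and a report via the platform’s tools.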

Readers are encouraged to use Spotify’s reporting tools to flag suspicious or low-quality AI content.

The issue is part of a broader governance challenge for streaming platforms, which must balance open access to generative tools with the need to maintain content quality, transparency and fair compensation for artists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon expands AI film production tools as Hollywood trials new systems

US tech giant Amazon is preparing a new phase for its proprietary production tools as the company opens a closed beta that will give selected studios early access to its AI systems.

Developers created the technology inside Amazon MGM Studios to improve character consistency across scenes and speed up work in pre- and post-production instead of relying on fragmented processes.

The programme begins in March and is expected to deliver initial outcomes by May. Amazon is working with recognised industry figures such as Robert Stromberg, Kunal Nayyar and former Pixar animator Colin Brady to refine the methods.

The company is also drawing on Amazon Web Services and several external language model providers to strengthen performance.

Executives insist the aim is to assist creative teams rather than remove them from the process. The second season of the series ‘House of David’ already used more than 300 AI-generated shots, showing how the technology can support large-scale productions instead of replacing artistic decision-making.

Industry debate continues to intensify as studios explore new automation methods. Netflix also used generative tools for major scenes in ‘The Eternaut’.

Amazon has repeatedly cited AI progress when announcing staff reductions, which added further concern over the long-term effects on employment and creative roles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe Firefly unlocks powerful unlimited AI generation in 2026

Adobe has updated its Firefly platform to allow unlimited AI image and video generation for paid subscribers, removing the monthly credit limits that previously capped usage. The move marks a shift toward more flexible access to generative AI tools and is positioned as a way to support high-volume creative workflows.

The update reinforces Firefly’s role as an all-in-one creative AI studio. Users can generate images and videos using Adobe’s own Firefly models alongside third-party AI models, bringing multiple generation tools into a single platform.

Unlimited generation is available across the Firefly ecosystem, including the web interface, mobile apps, Firefly Boards, and the browser-based video editor. This expanded access supports collaboration and end-to-end content creation, from ideation to final editing.

The offer applies to Firefly Pro and Firefly Premium subscribers, including plans that previously operated under monthly credit limits. Users who sign up before March 16 will have access to unlimited image and video generation, with video output supported up to 2K resolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Electronic Arts expands AI push with Stability AI

Electronic Arts has entered a multi-year partnership with Stability AI to develop generative AI tools for game creation. The collaboration will support franchises such as The Sims, Battlefield and Madden NFL.

The company said the partnership centres on customised AI models that give developers more control over creative processes. Electronic Arts invested in Stability AI during its latest funding round in October.

Executives at Electronic Arts said concerns about job losses are understandable across the gaming industry. The company views AI as a way to enhance specific tasks and create new roles rather than replace staff.

Stability AI said similar technologies have historically increased demand for skilled workers. Electronic Arts added that active involvement in AI development helps the industry adapt rather than react to disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!