AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
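The article does not specify the technical form these implicit markings take; provenance standards such as C2PA similarly embed machine-readable metadata directly in media files. As an illustration only (the `AI-Generated` keyword and the chunk placement are assumptions for this sketch, not anything mandated by the regulation), here is a minimal Python example that embeds a label into a PNG as a standard `tEXt` metadata chunk:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Build a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def add_label(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt metadata chunk directly after the fixed-size IHDR chunk."""
    if png[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    ihdr_end = 8 + 4 + 4 + 13 + 4  # signature + IHDR length/type/data/CRC
    label = _chunk(b"tEXt",
                   keyword.encode("latin-1") + b"\x00" + text.encode("latin-1"))
    return png[:ihdr_end] + label + png[ihdr_end:]

# Build a minimal 1x1 greyscale PNG in memory and label it.
ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = _chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
iend = _chunk(b"IEND", b"")
image = PNG_SIG + ihdr + idat + iend

labelled = add_label(image, "AI-Generated", "true")
```

A plain metadata chunk like this survives copying but not re-encoding or screenshotting, which is why robust watermarking schemes also exist for the "implicit" half of such labelling requirements.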

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US appeals court reverses key findings in Sonos-Google patent case

The US Court of Appeals for the Federal Circuit (CAFC) has reversed in part and affirmed in part a district court decision in the ongoing legal battle between Sonos and Google over smart speaker technology. The court reversed the district court’s finding that Sonos’s ‘Zone Scene’ patents were unenforceable due to prosecution laches, a legal doctrine that can bar the enforcement of patents if the owner unreasonably delays in pursuing claims.

The district court had held that Sonos waited too long (13 years) to file specific claims following its 2006 provisional application, allegedly prejudicing Google, which had begun developing similar products by 2015.

However, the CAFC found that Google had failed to establish actual prejudice. It noted a lack of evidence that Google had meaningfully invested in the accused technology based on the assumption that Sonos had not already invented it. As a result, the court held that the lower court had abused its discretion in declaring the patents unenforceable.

The CAFC also reversed the district court’s invalidation of the Zone Scene patents for lack of written description, citing sufficient detail in Sonos’s 2019 patents. Google’s argument that the patents described only alternative embodiments was rejected, particularly as Google had presented no expert testimony to rebut Sonos’s claims.

Case background

In 2020, Sonos filed a lawsuit against Google in the US, accusing it of infringing key patents related to wireless multi-room speaker technology. Sonos claimed that after collaborating with Google years earlier, Google used its proprietary technology without permission in products like Google Home and Chromecast.

In 2022, the US International Trade Commission sided with Sonos, leading to a limited import ban on some Google products. In response, Google had to remove or change certain features, such as group volume control.

However, Google later challenged the validity of Sonos’s patents, and some were ruled invalid by a federal court. The legal battle has continued in various jurisdictions, reflecting broader conflicts over intellectual property rights and innovation in the tech world. Both companies have appealed different aspects of the rulings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI music takes ethical turn with Beatoven.ai’s Maestro launch

Beatoven.ai has launched Maestro, a generative AI model for instrumental music that will later expand to vocals and sound effects. The company claims it is the first fully licensed AI model, ensuring royalties for artists and rights holders.

Trained on licensed datasets from partners such as Rightsify and Symphonic Music, Maestro avoids scraping issues and guarantees attribution. Beatoven.ai, with two million users and 15 million tracks generated, says Maestro can be fine-tuned for new genres.

The platform also includes tools for catalogue owners, allowing labels and publishers to analyse music, generate metadata, and enhance back-catalogue discovery. CEO Mansoor Rahimat Khan said Maestro builds an ‘AI-powered music ecosystem’ designed to push creativity forward rather than mimic it.

Industry figures praised the approach. Ed Newton-Rex of Fairly Trained said Maestro proves AI can be ethical, while Musical AI’s Sean Power called it a fair licensing model. Beatoven.ai also plans to expand its API into gaming, film, and virtual production.

The launch highlights the wider debate over licensing versus scraping. Scraping often exploits copyrighted works without payment, while licensed datasets ensure royalties, higher-quality outputs, and long-term trust. Advocates argue that licensing offers a more sustainable and fairer path for GenAI music.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Publishers set to earn from Comet Plus, Perplexity’s new initiative

Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.

Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon’s Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.

The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.

The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Netflix limits AI use in productions with new rules

Netflix has issued detailed guidance for production companies on the approved use of generative AI. The guidelines allow AI tools for early ideation tasks such as moodboards or reference images, but stricter oversight applies beyond that stage.

The company outlined five guiding principles. These include ensuring generated content does not replicate copyrighted works, maintaining security of inputs, avoiding use of AI in final deliverables, and prohibiting storage or reuse of production data by AI tools.

Enterprises or vendors working on Netflix content must pass the platform’s AI compliance checks at every stage.

Netflix has already used AI to reduce VFX costs on projects like The Eternaut, but has moved to formalise boundaries around how and when the technology is applied.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company launched its first video model earlier this year and has grown rapidly, reportedly reaching $200 million in revenue by 2023.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nexon investigates AI-generated TikTok ads for The First Descendant

Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.

One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.

The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.

While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.

Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and promised that updates would be provided once the process is complete.

The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.

The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals the carbon footprints of small nations.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Apple and Musk’s xAI have not yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!