A licensed AI music platform emerges from UMG and Udio

UMG and Udio have struck an industry-first deal to license AI music, settle litigation, and launch a 2026 platform that blends creation, streaming, and sharing in a licensed environment. Training uses authorised catalogues, with fingerprinting, filtering, and revenue sharing for artists and songwriters.

Udio’s current app stays online during the transition inside a walled garden, with fingerprinting, filtering, and other controls added ahead of relaunch. Rights management sits at the core: licensed inputs, transparent outputs, and enforcement designed to deter impersonation and unlicensed derivatives.

Leaders frame the pact as a template for a healthier AI music economy that aligns rightsholders, developers, and fans. Udio calls it a way to champion artists while expanding fan creativity, and UMG casts it as part of its broader AI partnerships across platforms.

Commercial focus extends beyond headline licensing to business model design, subscriptions, and collaboration tools for creators. Expect guardrails around style guidance, attribution, and monetisation, plus pathways for official stems and remix packs so fan edits can be cleared and paid.

Governance will matter as usage scales, with audits of model inputs, takedown routes, and payout rules under scrutiny. Success will be judged on artist adoption, catalogue protection, and whether fans get safer ways to customise music without sacrificing rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Former Meta lobbyist’s appointment to Irish data watchdog triggers conflict-of-interest complaint

Rights group the Irish Council for Civil Liberties (ICCL) has asked the European Commission to review Ireland’s appointment of former Meta lobbyist Niamh Sweeney to the Data Protection Commission (DPC), alleging the process breaches EU rules on independent regulators. ICCL argues the law requires authorities to be ‘above any suspicion of partiality’.

Sweeney, appointed on 25 September, is now one of three commissioners. Her profile shows roles at Meta from 2015–2021, including leading WhatsApp public policy across Europe, Africa and the Middle East. Before that, she lobbied for Facebook in Ireland. ICCL also notes that the five-member panel that selected Sweeney included Leo Moore, a lawyer whose clients include major tech and social media firms and, it says, the only panellist with data-protection expertise.

The Commission said it is ‘not empowered to take action with respect to appointments’, indicating the complaint may fall outside its remit. This latest development comes amid growing scrutiny of the DPC. In a previous case on Meta’s behavioural advertising practices, the European Data Protection Board overturned the DPC’s decision not to impose a fine and ordered stricter enforcement measures against the tech giant.

The move is the latest in a series of challenges to the DPC’s independence. More than 40 civil society organisations asked the European Commission to investigate Ireland’s privacy regulator earlier this month.

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b sizes, these models allow developers to apply and revise policies during inference instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.
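The policy-as-prompt idea behind this workflow can be sketched as follows. The prompt wording, label set, and function names below are illustrative assumptions for a generic policy-conditioned classifier, not OpenAI’s documented interface for gpt-oss-safeguard.

```python
# Hypothetical sketch: a developer-written policy is embedded directly in the
# prompt, so it can be revised at inference time without retraining a classifier.

def build_classification_prompt(policy: str, content: str) -> str:
    """Combine a safety policy and the content to classify into one prompt."""
    return (
        "You are a content classifier. Apply the following policy verbatim.\n"
        f"POLICY:\n{policy}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Reply with a label (ALLOW or FLAG) followed by a short explanation "
        "of your reasoning."
    )

def parse_label(model_reply: str) -> str:
    """Extract the leading label from the model's reply.

    Ambiguous or empty replies fail closed to FLAG so that unparseable
    output is never silently allowed through.
    """
    words = model_reply.strip().split()
    first = words[0].upper().rstrip(":,.") if words else ""
    return first if first in {"ALLOW", "FLAG"} else "FLAG"

# Example: a policy the developer can rewrite at any time.
policy = "Flag any message that shares another person's phone number."
prompt = build_classification_prompt(policy, "Call me on 555-0100 any time!")
# The prompt would be sent to the model; its reply is then parsed:
print(parse_label("FLAG: the message discloses a phone number."))  # FLAG
```

Because the policy lives in the prompt rather than in training labels, tightening or loosening a rule is a text edit followed by re-running inference, which is the flexibility the article describes.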


US Internet Bill of Rights unveiled as response to global safety laws

A proposed US Internet Bill of Rights aims to protect digital freedoms as governments expand online censorship laws. The framework, developed by privacy advocates, calls for stronger guarantees of free expression, privacy, and access to information in the digital era.

Supporters argue that recent legislation such as the UK’s Online Safety Act, the EU’s Digital Services Act, and US proposals like KOSA and the STOP HATE Act have eroded civil liberties. They claim these measures empower governments and private firms to control online speech under the guise of safety.

The proposed US bill sets out rights including privacy in digital communications, platform transparency, protection against government surveillance, and fair access to the internet. It also calls for judicial oversight of censorship requests, open algorithms, and the protection of anonymous speech.

Advocates say the framework would enshrine digital freedoms through federal law or constitutional amendment, ensuring equal access and privacy worldwide. They argue that safeguarding free and open internet access is vital to preserve democracy and innovation.


Microsoft restores Azure services after global outage

US tech giant Microsoft has resolved a global outage affecting its Azure cloud services, which disrupted access to Office 365, Minecraft, and numerous other websites.

The company attributed the incident to a configuration change that triggered DNS issues, impacting businesses and consumers worldwide.

The outage affected high-profile services, including Heathrow Airport, NatWest, Starbucks, and New Zealand’s police and parliament websites.

Microsoft restored access after several hours, but the event highlighted the fragility of the internet due to the concentration of cloud services among a few major providers.

Experts noted that reliance on platforms such as Azure, Amazon Web Services, and Google Cloud creates systemic risks. Even minor configuration errors can ripple across thousands of interconnected systems, affecting payment processing, government operations, and online services.

Despite the disruption, Microsoft’s swift fix mitigated long-term impact. The company reiterated the importance of robust infrastructure and contingency planning as the global economy increasingly depends on cloud computing.


Character.ai restricts teen chat access on its platform

AI chatbot service Character.ai has announced that teenagers will no longer be able to chat with its AI characters from 25 November.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.


Alliance science pact lifts US–Korea cooperation on AI, quantum, 6G, and space

The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.

AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the stack, AI-ready datasets, safety standards, and enforcement of compute protection. Joint metrology and standards work links the US Center for AI Standards and Innovation with the AI Safety Institute of South Korea.

Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.

Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.

Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.


Wikipedia founder questions Musk’s Grokipedia accuracy

Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, arguing that large language models cannot reliably produce accurate wiki entries.

Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.

He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.

Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.

However, he stressed that substantial costs and potential errors prevent the site from entirely relying on generative AI, preferring careful testing before integrating new technologies.

Wales concluded that while AI may mislead the public with fake or plausible content, the Wikipedia community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.


China outlines plan to expand high-tech industries

China has pledged to expand its high-tech industries over the next decade. Officials said emerging sectors such as quantum computing, hydrogen energy, nuclear fusion, and brain-computer interfaces will receive major investment and policy backing.

Development chief Zheng Shanjie told reporters that the coming decade will redefine China’s technology landscape, describing it as a ‘new scale’ of innovation. The government views breakthroughs in science and AI as key to boosting economic resilience amid a slowing property market and demographic decline.

The plan underscores Beijing’s push to rival Washington in cutting-edge technology, with billions already channelled into state-led innovation programmes. Public opinion in Beijing appears supportive, with many citizens expressing optimism that China could lead the next technological revolution.

Economists warn, however, that sustained progress will require tackling structural issues, including low domestic consumption and reduced investor confidence. Analysts said Beijing’s long-term success will depend on whether it can balance rapid growth with stable governance and transparent regulation.


Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.
