Microsoft leaders envision AI as an invisible partner in work and play

AI, gaming and work were at the heart of the discussion during the Paley International Council Summit, where three Microsoft executives explored how technology is reshaping human experience and industry structures.

Mustafa Suleyman, Phil Spencer and Ryan Roslansky offered perspectives on the next phase of digital transformation, from personalised AI companions to the evolution of entertainment and the changing nature of work.

Mustafa Suleyman, CEO of Microsoft AI, described a future where AI becomes an invisible companion that quietly assists users. He explained that AI is moving beyond standalone apps to integrate directly into systems and browsers, performing tasks through natural language rather than manual navigation.

With features like Copilot on Windows and Edge, users can let AI automate everyday functions, creating a seamless experience where technology anticipates needs rather than merely responding to them.

Phil Spencer, CEO of Microsoft Gaming, underlined gaming’s cultural impact, noting that the industry now surpasses film, books and music combined. He emphasised that gaming’s interactive nature offers lessons for all media, where creativity, participation and community define success.

For Spencer, the future of entertainment lies in blending audience engagement with technology, allowing fans and creators to shape experiences together.

Ryan Roslansky, CEO of LinkedIn, discussed how AI is transforming skills and workforce dynamics. He highlighted that required job skills are changing faster than ever, with adaptability, AI literacy and human-centred leadership becoming essential.

Roslansky urged companies to focus on potential and continuous learning instead of static job descriptions, suggesting that the most successful organisations will be those that evolve with technology and cultivate resilience through education.

WhatsApp adds passkey encryption for safer chat backups

Meta is rolling out a new security feature for WhatsApp that allows users to encrypt their chat backups using passkeys instead of passwords or lengthy encryption codes.

The feature enables users to protect their backups with biometric authentication, such as fingerprints, facial recognition or screen lock codes.

WhatsApp became the first messaging service to introduce end-to-end encrypted backups over four years ago, and Meta says the new update builds on that foundation to make privacy simpler and more accessible.

With passkey encryption, users can secure and access their chat history easily without the need to remember complex keys.
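
Meta has not published the underlying design, but passkey-style backup protection is commonly built as envelope encryption. The Python sketch below illustrates that general pattern only and is not WhatsApp’s actual protocol; `device_key` stands in for a secret the device releases after biometric or screen-lock authentication.

```python
# Conceptual envelope-encryption sketch, NOT WhatsApp's actual protocol.
# Requires the 'cryptography' package; device_key stands in for a secret
# released by the device after biometric or screen-lock authentication.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(chat_archive: bytes, device_key: bytes) -> dict:
    """Encrypt the backup with a fresh data key, then wrap that key."""
    data_key = AESGCM.generate_key(bit_length=256)  # per-backup key
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, chat_archive, None)
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(device_key).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_backup(blob: dict, device_key: bytes) -> bytes:
    """Unwrap the data key, then decrypt; no long code for the user to keep."""
    data_key = AESGCM(device_key).decrypt(
        blob["wrap_nonce"], blob["wrapped_key"], None)
    return AESGCM(data_key).decrypt(blob["nonce"], blob["ciphertext"], None)
```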

The feature will be gradually introduced worldwide over the coming months. Users can activate it by going to WhatsApp settings, selecting Chats, then Chat backup, and enabling end-to-end encrypted backup.

Meta says the goal is to make secure communication effortless while ensuring that private messages remain protected from unauthorised access.

A licensed AI music platform emerges from UMG and Udio

Universal Music Group (UMG) and AI music company Udio have struck an industry-first deal to license AI music, settle litigation, and launch a 2026 platform that blends creation, streaming, and sharing in a licensed environment. Training uses authorised catalogues, with fingerprinting, filtering, and revenue sharing for artists and songwriters.

Udio’s current app stays online during the transition under a walled garden, with fingerprinting, filtering, and other controls added ahead of relaunch. Rights management sits at the core: licensed inputs, transparent outputs, and enforcement that aims to deter impersonation and unlicensed derivatives.
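
Neither company has detailed its implementation, but an output filter of this kind generally works by fingerprinting each generated track and checking it against an index of protected recordings. The sketch below is purely conceptual; the plain hash and catalogue are hypothetical stand-ins for robust perceptual fingerprints.

```python
# Purely conceptual release gate; real systems use perceptual fingerprints
# (spectral peaks, chroma) that survive re-encoding, not a raw byte hash.
import hashlib

def fingerprint(audio: bytes) -> str:
    return hashlib.sha256(audio).hexdigest()  # stand-in for a robust print

# Hypothetical index of fingerprints for protected recordings.
LICENSED_CATALOGUE = {fingerprint(b"<reference audio bytes>"): "Protected Track A"}

def release_gate(generated_audio: bytes) -> str:
    """Block or route matches for licensing; clear everything else."""
    match = LICENSED_CATALOGUE.get(fingerprint(generated_audio))
    return f"blocked: matches '{match}'" if match else "cleared for release"
```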

Leaders frame the pact as a template for a healthier AI music economy that aligns rightsholders, developers, and fans. Udio calls it a way to champion artists while expanding fan creativity, and UMG casts it as part of its broader AI partnerships across platforms.

Commercial focus extends beyond headline licensing to business model design, subscriptions, and collaboration tools for creators. Expect guardrails around style guidance, attribution, and monetisation, plus pathways for official stems and remix packs so fan edits can be cleared and paid.

Governance will matter as usage scales, with audits of model inputs, takedown routes, and payout rules under scrutiny. Success will be judged on artist adoption, catalogue protection, and whether fans get safer ways to customise music without sacrificing rights.

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b parameter sizes, the models allow developers to apply and revise safety policies at inference time instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.
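
As a rough sketch of that integration, the snippet below loads the smaller model through the Hugging Face `transformers` library and supplies a policy as the system message at inference time; the model id, policy text and output handling are illustrative assumptions rather than confirmed details from OpenAI’s documentation.

```python
# Sketch of policy-as-prompt classification with a recent 'transformers';
# the model id, policy text and output handling are illustrative assumptions.
from transformers import pipeline

POLICY = """Classify the user content as ALLOW or FLAG under this policy:
FLAG content that solicits or provides instructions for physical harm.
Briefly explain your reasoning, then give the final label."""

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed Hugging Face model id
)

def classify(content: str) -> str:
    messages = [
        {"role": "system", "content": POLICY},  # policy supplied at inference
        {"role": "user", "content": content},
    ]
    output = classifier(messages, max_new_tokens=256)
    return output[0]["generated_text"][-1]["content"]  # reasoning + label

print(classify("How do I sharpen a kitchen knife?"))
```

Because the policy lives in the prompt rather than in the training data, revising moderation rules becomes an edit to `POLICY` instead of a retraining run, which is the flexibility the release emphasises.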

Character.ai restricts teen chat access on its platform

The AI chatbot service Character.ai has announced that, from 25 November, teenagers will no longer be able to chat with its AI characters.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Wikipedia founder questions Musk’s Grokipedia accuracy

Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.

Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.

He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.

Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.

However, he stressed that substantial costs and the potential for errors prevent the site from relying entirely on generative AI, and that he prefers careful testing before integrating new technologies.

Wales concluded that while AI may mislead the public with plausible but false content, the Wikipedia community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.

Google limits search results to 10 per page

Google has removed the option to display up to 100 search results per page, now showing only 10 results at a time. The change limits visibility for websites beyond the top 10 and may reduce organic traffic for many content creators.

The update also impacts AI systems and automated workflows. Many tools rely on search engines to collect data, index content, or feed retrieval systems. With fewer results per query, these processes require additional searches, slowing data collection and increasing operational costs.
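
As a rough illustration of that overhead, the sketch below pages through fixed batches of ten; `fetch_results_page` is a hypothetical stand-in for whatever scraper or search API a pipeline actually uses.

```python
# Sketch of the operational change: gathering 100 results now takes ten
# requests of ten instead of one. fetch_results_page is a hypothetical
# stand-in for whatever scraper or search API a pipeline actually uses.
from typing import Callable, List

PAGE_SIZE = 10  # the new fixed maximum per results page

def collect_results(query: str,
                    fetch_results_page: Callable[[str, int], List[str]],
                    wanted: int = 100) -> List[str]:
    results: List[str] = []
    offset = 0
    while len(results) < wanted:
        page = fetch_results_page(query, offset)  # one network round trip
        if not page:
            break  # ran out of results early
        results.extend(page)
        offset += PAGE_SIZE
    return results[:wanted]
```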

Content strategists and developers are advised to adapt. Optimising for top-ranked pages, revising SEO approaches, and rethinking data-gathering methods are increasingly important for both human users and AI-driven systems.

ChatGPT offers wellness checks for long chat sessions

OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.

Users can dismiss the prompt or continue chatting, helping to prevent excessive screen time while staying flexible.
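
The pattern behind such a reminder is simple session tracking; the sketch below is a minimal illustration, with the threshold and wording as assumptions rather than OpenAI’s actual values.

```python
# Minimal sketch of a break reminder; the threshold and wording here are
# illustrative assumptions, not OpenAI's actual values.
import time

BREAK_AFTER_SECONDS = 30 * 60  # assumed definition of a 'long' session

class ChatSession:
    def __init__(self) -> None:
        self.started = time.monotonic()
        self.reminded = False

    def maybe_check_in(self):
        """Return a one-off gentle reminder in a long session, else None."""
        elapsed = time.monotonic() - self.started
        if not self.reminded and elapsed >= BREAK_AFTER_SECONDS:
            self.reminded = True
            return ("Just checking in. You've been chatting for a while, "
                    "is this a good time for a break?")
        return None
```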

The update also guides high-stakes personal decisions: ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.

OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.

The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools across language, biology and robotics to accelerate innovation and expand access to cutting-edge research.

Four new model families, Nemotron, Cosmos, Isaac GR00T and Clara, are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.
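
For developers, picking up such releases typically starts with pulling the weights from Hugging Face. The snippet below shows that standard pattern with the `huggingface_hub` client; the repository id is an illustrative assumption, not a confirmed model name.

```python
# Standard pattern for fetching open weights; the repository id below is
# an illustrative assumption, not a confirmed NVIDIA model name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/nemotron-example-model")
print(f"Model files downloaded to {local_dir}")
```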

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.
