WhatsApp adds passkey encryption for safer chat backups

Meta is rolling out a new security feature for WhatsApp that allows users to encrypt their chat backups using passkeys instead of passwords or lengthy encryption codes.

The feature enables users to protect their backups with biometric authentication, such as fingerprints, facial recognition, or screen lock codes.

WhatsApp became the first messaging service to introduce end-to-end encrypted backups over four years ago, and Meta says the new update builds on that foundation to make privacy simpler and more accessible.

With passkey encryption, users can secure and access their chat history easily without the need to remember complex keys.

The feature will be gradually introduced worldwide over the coming months. Users can activate it by going to WhatsApp settings, selecting Chats, then Chat backup, and enabling end-to-end encrypted backup.

Meta says the goal is to make secure communication effortless while ensuring that private messages remain protected from unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE and Google launch ‘AI for All’ national skills initiative

In a major public-private collaboration, the UAE’s Artificial Intelligence, Digital Economy, and Remote Work Applications Office and Google announced the ‘AI for All’ initiative, aimed at delivering AI skills training across the United Arab Emirates.

The initiative was announced on 29 October 2025 and will roll out through 2026.

The programme targets a broad audience, from students, teachers, university learners and government employees to small and medium-sized enterprises (SMEs), creatives and content-makers.

It will cover fundamentals of AI, practical use-cases, responsible and safe AI use, and prompt-engineering for generative models. Google is also providing university students and other participants access to its advanced Gemini models as part of the skilling effort.

This initiative reflects the UAE’s broader ambition to become a global hub for innovation and talent in the AI economy, as well as Google’s regional strategy under its ‘AI Opportunity Initiative’ for the Middle East & North Africa.

By combining training, awareness campaigns and access to AI tools, the collaboration seeks to ensure that AI’s benefits are accessible to all segments of society in the UAE.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trainium2 power surges as AWS’s Project Rainier enters service for Anthropic

Anthropic and AWS switched on Project Rainier, a vast Trainium2 cluster spanning multiple US sites to accelerate Claude’s evolution.

Project Rainier is now fully operational, less than a year after its announcement. AWS engineered an EC2 UltraCluster of Trainium2 UltraServers to deliver massive training capacity. Anthropic says it offers more than five times the compute used for prior Claude models.

UltraServers bind four Trainium2 servers with high-speed NeuronLinks so that 64 chips act as one. Tens of thousands of these servers are networked across buildings through Elastic Fabric Adapter. The design reduces latency within racks while preserving flexible scale across data centres.

Anthropic is already training and serving Claude on Rainier across the US and plans to exceed one million Trainium2 chips by year's end. More compute should raise model accuracy, speed up evaluations, and shorten iteration cycles for new frontier releases.

AWS controls the stack from chip to data centre for reliability and efficiency. Teams tune power delivery, cooling, and software orchestration. New sites add water-wise cooling, contributing to the company’s renewable energy and net-zero goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

A licensed AI music platform emerges from UMG and Udio

UMG and Udio have struck an industry-first deal to license AI music, settle litigation, and launch a 2026 platform that blends creation, streaming, and sharing in a licensed environment. Training uses authorised catalogues, with fingerprinting, filtering, and revenue sharing for artists and songwriters.

Udio’s current app stays online during the transition under a walled garden, with fingerprinting, filtering, and other controls added ahead of relaunch. Rights management sits at the core: licensed inputs, transparent outputs, and enforcement that aims to deter impersonation and unlicensed derivatives.

Leaders frame the pact as a template for a healthier AI music economy that aligns rightsholders, developers, and fans. Udio calls it a way to champion artists while expanding fan creativity, and UMG casts it as part of its broader AI partnerships across platforms.

Commercial focus extends beyond headline licensing to business model design, subscriptions, and collaboration tools for creators. Expect guardrails around style guidance, attribution, and monetisation, plus pathways for official stems and remix packs so fan edits can be cleared and paid.

Governance will matter as usage scales, with audits of model inputs, takedown routes, and payout rules under scrutiny. Success will be judged on artist adoption, catalogue protection, and whether fans get safer ways to customise music without sacrificing rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b sizes, these models allow developers to apply and revise policies during inference instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.
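To make the inference-time pattern concrete, here is a minimal illustrative sketch of how a developer-supplied policy might be paired with content for a policy-reasoning classifier. The policy text, labels, and the `build_classification_prompt` helper are hypothetical examples, not the model's actual interface; the key idea, per OpenAI's description, is that the policy travels with the request, so it can be revised without retraining a classifier.

```python
# Hypothetical sketch: pairing a developer-written policy with content at
# inference time. Names and labels here are illustrative, not the real API.

POLICY = """\
Label content as VIOLATES if it promotes the sale of restricted goods;
otherwise label it OK. Explain your reasoning briefly."""


def build_classification_prompt(policy: str, content: str) -> str:
    """Embed the policy and the content in one prompt, so the policy can
    be changed per request instead of being baked into training data."""
    return (
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        "Task: Decide whether the content violates the policy. "
        "Reply with a label and a short explanation."
    )


if __name__ == "__main__":
    prompt = build_classification_prompt(POLICY, "Brand-new gadgets, 50% off!")
    print(prompt)
```

The resulting prompt would then be sent to the downloaded model through whatever inference stack the developer runs; swapping in a stricter or looser policy is just a string change.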

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Automakers and freight partners join NVIDIA and Uber to accelerate level 4 deployments

NVIDIA and Uber are partnering on level 4-ready fleets using the DRIVE AGX Hyperion 10, aiming to scale a unified human-and-robot driver network from 2027. A joint AI data factory on NVIDIA Cosmos will curate training data, with the network expected to reach 100,000 vehicles over time.

DRIVE AGX Hyperion 10 is a reference compute and sensor stack for level 4 readiness across cars, vans, and trucks. Automakers can pair validated hardware with compatible autonomy software to speed safer, scalable, AI-defined mobility. Passenger and freight services gain faster paths from prototype to fleet.

Stellantis, Lucid, and Mercedes-Benz are preparing passenger platforms on Hyperion 10. Aurora, Volvo Autonomous Solutions, and Waabi are extending level 4 capability to long-haul trucking. Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve, and WeRide continue to build on NVIDIA DRIVE.

The production platform pairs dual DRIVE AGX Thor on Blackwell with DriveOS and a qualified multimodal sensor suite. Cameras, radar, lidar, and ultrasonics deliver 360-degree coverage. Modular design plus PCIe, Ethernet, confidential computing, and liquid cooling support upgrades and uptime.

NVIDIA is also launching Halos, a cloud-to-vehicle AI safety and certification system with an ANSI-accredited inspection lab and certification programme. A multimodal AV dataset and reasoning vision-language-action (VLA) models aim to improve urban driving, testing, and validation for deployments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grammarly becomes Superhuman with unified AI tools for work

Superhuman, formerly known as Grammarly, is bundling its writing tools, workspace platform, and email client with a new AI assistant suite. The company says the rebrand reflects a push to unify generative AI features that streamline workplace tasks and online communication for subscribers.

Grammarly acquired Coda and Superhuman Mail earlier this year and added Superhuman Go. The bundle arrives as a single plan. Go’s agents brainstorm, gather information, send emails, and schedule meetings to reduce app switching.

Superhuman Mail organises inboxes and drafts replies in your voice. Coda pulls data from other apps into documents, tables, and dashboards. An upcoming update lets Coda act on that data to automate plans and tasks.

CEO Shishir Mehrotra says the aim is ambient, integrated AI. Built on Grammarly’s infrastructure, the tools work in place without prompting or pasting. The bundle targets teams seeking consistent AI across writing, email, and knowledge work.

Analysts will watch brand overlap with the existing Superhuman email app and enterprise pricing. Success depends on trust, data controls, and measurable time savings versus point tools. Rollout specifics, including regions, will follow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Internet Bill of Rights unveiled as response to global safety laws

A proposed US Internet Bill of Rights aims to protect digital freedoms as governments expand online censorship laws. The framework, developed by privacy advocates, calls for stronger guarantees of free expression, privacy, and access to information in the digital era.

Supporters argue that recent legislation such as the UK’s Online Safety Act, the EU’s Digital Services Act, and US proposals like KOSA and the STOP HATE Act have eroded civil liberties. They claim these measures empower governments and private firms to control online speech under the guise of safety.

The proposed US bill sets out rights including privacy in digital communications, platform transparency, protection against government surveillance, and fair access to the internet. It also calls for judicial oversight of censorship requests, open algorithms, and the protection of anonymous speech.

Advocates say the framework would enshrine digital freedoms through federal law or constitutional amendment, ensuring equal access and privacy worldwide. They argue that safeguarding free and open internet access is vital to preserve democracy and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft restores Azure services after global outage

US tech giant Microsoft has resolved a global outage affecting its Azure cloud services, which disrupted access to Office 365, Minecraft, and numerous other websites.

The company attributed the incident to a configuration change that triggered DNS issues, impacting businesses and consumers worldwide.

The outage affected high-profile organisations, including Heathrow Airport, NatWest, Starbucks, and New Zealand's police and parliament websites.

Microsoft restored access after several hours, but the event highlighted the fragility of the internet due to the concentration of cloud services among a few major providers.

Experts noted that reliance on platforms such as Azure, Amazon Web Services, and Google Cloud creates systemic risks. Even minor configuration errors can ripple across thousands of interconnected systems, affecting payment processing, government operations, and online services.

Despite the disruption, Microsoft’s swift fix mitigated long-term impact. The company reiterated the importance of robust infrastructure and contingency planning as the global economy increasingly depends on cloud computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.ai restricts teen chat access on its platform

AI chatbot service Character.ai has announced that teenagers will no longer be able to chat with its AI characters from 25 November.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!