Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to those who were members, admins, or moderators at the time of the switch. The member list will also stay private. Once a group is converted, new posts will be visible to everyone, including people without Facebook accounts, helping discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Admins can also enable anonymous or nickname-based participation, giving users greater control over how they engage in public discussions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
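Verification is straightforward to script. The sketch below, in Python, compares the running kernel against the upstream stable releases publicly reported to carry the fix for CVE-2024-1086; the version thresholds here are assumptions to be confirmed against the NIST entry, and distro kernels that backport fixes without bumping the upstream version must be checked against vendor advisories instead.

```python
# Minimal sketch: compare the running kernel against patched stable releases
# for CVE-2024-1086. The fixed-version table below is illustrative -- confirm
# the exact backport status in your distro vendor's advisory, since enterprise
# kernels often backport fixes without changing the upstream version string.
import platform
import re

# Upstream stable series and the patch level that shipped the fix
# (assumed values; verify against the NVD entry for CVE-2024-1086).
FIXED = {5: {15: 149}, 6: {1: 76, 6: 15, 7: 3}}


def parse_kernel(release: str) -> tuple[int, int, int]:
    """Extract (major, minor, patch) from a string like '6.1.55-generic'."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not m:
        raise ValueError(f"unrecognised kernel release: {release!r}")
    major, minor, patch = (int(x) for x in m.groups())
    return major, minor, patch


def may_be_vulnerable(release: str) -> bool:
    major, minor, patch = parse_kernel(release)
    if major >= 7 or (major == 6 and minor >= 8):
        return False  # mainline 6.8+ includes the fix
    fixed_patch = FIXED.get(major, {}).get(minor)
    if fixed_patch is None:
        return True  # unknown series: treat as suspect and check the advisory
    return patch < fixed_patch


if __name__ == "__main__":
    release = platform.release()
    verdict = "may be vulnerable" if may_be_vulnerable(release) else "looks patched"
    print(f"kernel {release}: {verdict} (confirm against vendor advisories)")
```

On a patched host the script reports "looks patched"; any other outcome is the cue to apply the isolation and change-management steps described above.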

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity launches AI-powered patent search to make innovation intelligence accessible

US software company Perplexity has unveiled Perplexity Patents, described as the first AI-powered patent research agent, designed to democratise access to intellectual property intelligence. The new tool allows anyone to explore patents using natural language instead of complex keyword syntax.

Traditional patent research has long relied on rigid search systems that demand specialist knowledge and expensive software.

Perplexity Patents instead offers conversational interaction, enabling users to ask questions such as ‘Are there any patents on AI for language learning?’ or ‘Key quantum computing patents since 2024?’.

The system automatically identifies relevant patents, provides inline viewing, and maintains context across multiple questions.

Powered by Perplexity’s large-scale search infrastructure, the platform uses agentic reasoning to break down complex queries, perform multi-step searches, and return comprehensive results supported by extensive patent documentation.

Its semantic understanding also captures related concepts that traditional tools often miss, linking terms such as ‘fitness trackers’, ‘activity bands’, and ‘health monitoring wearables’.
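As an illustration of how embedding-based search links such phrasings, here is a minimal sketch using the open sentence-transformers library; this is not Perplexity's proprietary stack, only a demonstration of the general technique of placing related concepts close together in vector space.

```python
# Illustrative sketch of semantic matching: embeddings place related
# phrasings near each other even when they share no keywords.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

query = "fitness trackers"
candidates = [
    "activity bands",
    "health monitoring wearables",
    "industrial conveyor belts",  # unrelated control phrase
]

q_emb = model.encode(query, convert_to_tensor=True)
c_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity: related wearable concepts score high, the control low.
for text, score in zip(candidates, util.cos_sim(q_emb, c_embs)[0]):
    print(f"{score.item():.2f}  {text}")
```

The related wearable phrases score far higher than the unrelated control, which is exactly the behaviour that lets a semantic system surface patents a rigid keyword query would miss.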

Beyond patent databases, Perplexity Patents can also draw from academic papers, open-source code, and other publicly available data, revealing the entire landscape of technological innovation. The service launches today in beta, free for all users, with extra features for Pro and Max subscribers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp adds passkey encryption for safer chat backups

Meta is rolling out a new security feature for WhatsApp that allows users to encrypt their chat backups using passkeys instead of passwords or lengthy encryption codes.

The feature lets users protect their backups with biometric authentication such as fingerprints, facial recognition, or screen-lock codes.

WhatsApp became the first messaging service to introduce end-to-end encrypted backups over four years ago, and Meta says the new update builds on that foundation to make privacy simpler and more accessible.

With passkey encryption, users can secure and access their chat history easily without the need to remember complex keys.
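A common pattern behind such designs is envelope encryption: the backup is sealed with a random data key, and only that small key is wrapped by a credential the device releases after a passkey or biometric check. The Python sketch below illustrates the idea with the cryptography library; it is a conceptual model, not WhatsApp's actual implementation, which is proprietary.

```python
# Conceptual sketch of key wrapping ("envelope encryption"): the backup is
# encrypted with a random data key, so only that key needs protecting by the
# device-held credential. NOT WhatsApp's real implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

backup = b"chat history bytes..."

# 1. Encrypt the backup with a freshly generated 256-bit data key.
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, backup, None)

# 2. Wrap the data key with a key the device releases after a passkey check
#    (simulated here with random bytes; in practice it stays inside the
#    platform's secure hardware and never reaches the server).
device_key = AESGCM.generate_key(bit_length=256)  # stand-in for passkey-gated key
wrap_nonce = os.urandom(12)
wrapped_data_key = AESGCM(device_key).encrypt(wrap_nonce, data_key, None)

# Restoring reverses the steps: unlock device_key with the passkey,
# unwrap the data key, then decrypt the backup.
assert AESGCM(data_key).decrypt(nonce, ciphertext, None) == backup
```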

The feature will be gradually introduced worldwide over the coming months. Users can activate it by going to WhatsApp settings, selecting Chats, then Chat backup, and enabling end-to-end encrypted backup.

Meta says the goal is to make secure communication effortless while ensuring that private messages remain protected from unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trainium2 power surges as AWS’s Project Rainier enters service for Anthropic

Anthropic and AWS switched on Project Rainier, a vast Trainium2 cluster spanning multiple US sites to accelerate Claude’s evolution.

Project Rainier is now fully operational, less than a year after its announcement. AWS engineered an EC2 UltraCluster of Trainium2 UltraServers to deliver massive training capacity. Anthropic says it offers more than five times the compute used for prior Claude models.

UltraServers bind four Trainium2 servers with high-speed NeuronLinks so that 64 chips act as one. Tens of thousands of these servers are connected through Elastic Fabric Adapter across buildings. The design reduces latency within racks while preserving flexible scale across data centres.

Anthropic is already training and serving Claude on Rainier across the US and plans to exceed one million Trainium2 chips by year’s end. More compute should improve model accuracy, speed up evaluations, and shorten iteration cycles for new frontier releases.

AWS controls the stack from chip to data centre for reliability and efficiency. Teams tune power delivery, cooling, and software orchestration. New sites add water-wise cooling, contributing to the company’s renewable energy and net-zero goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven cybercrime rises across Asia

Cybersecurity experts met in Dubai for the World Economic Forum’s Annual Global Future Councils and Cybersecurity meetings. More than 500 participants, including 150 top cybersecurity leaders, discussed how emerging technologies such as AI are reshaping digital security.

UAE officials highlighted the importance of resilience, trust and secure infrastructure as fundamental to future prosperity. Sessions examined how geopolitical shifts and technological advances are changing the cyber landscape and stressed the need for coordinated global action.

AI-driven cybercrime is rising sharply in Japan, with criminals exploiting advanced technology to scale attacks and target data. Recent incidents include a cyber attack on Asahi Breweries, which temporarily halted production at its domestic factories.

Authorities are calling for stronger cross-border collaboration and improved cybersecurity measures, while Japan’s new Prime Minister, Sanae Takaichi, pledged to enhance cooperation on AI and cybersecurity with regional partners.

Significant global developments include the signing of the first UN cybercrime treaty by 65 nations in Viet Nam, establishing a framework for international cooperation, rapid-response networks and stronger legal protections.

High-profile cyber incidents in the UK, including attacks on Jaguar Land Rover and a nursery chain, have highlighted the growing economic and social costs of cybercrime. These events are prompting calls for businesses to prioritise cyber resilience.

Experts warn that technology is evolving faster than cyber defences, leaving small businesses and less developed regions highly vulnerable. Integrating AI, automation and proactive security strategies is seen as essential to protect organisations and ensure global digital stability.

Cyber resilience is increasingly recognised not just as an IT issue but as a strategic imperative for economic and national security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b sizes, these models allow developers to apply and revise policies during inference instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.
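In practice that means the policy travels with the request. The hedged sketch below shows the pattern with the Hugging Face transformers library; the model repository name and prompt layout are assumptions for illustration, and the exact prompt format the safeguard models expect should be taken from OpenAI’s release materials.

```python
# Hedged sketch of policy-conditioned classification with an open-weight
# reasoning model via Hugging Face transformers. The model ID and policy
# wording are assumptions for illustration only.
from transformers import pipeline

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed repository name
    device_map="auto",
)

# The policy is supplied at inference time, not baked in during training.
policy = (
    "Label the user content as VIOLATING or COMPLIANT. "
    "Content is VIOLATING if it offers to buy or sell firearms; "
    "discussion of firearms law is COMPLIANT. Explain your reasoning."
)
content = "Where can I read about this year's changes to hunting regulations?"

messages = [
    {"role": "system", "content": policy},  # developer-specified policy
    {"role": "user", "content": content},   # the item to classify
]

result = classifier(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # label plus reasoning
```

Because the policy is ordinary text passed in at inference time, revising moderation rules means editing that string and re-running, with no classifier retraining.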

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft restores Azure services after global outage

US tech giant Microsoft has resolved a global outage affecting its Azure cloud services, which disrupted access to Office 365, Minecraft, and numerous other websites.

The company attributed the incident to a configuration change that triggered DNS issues, impacting businesses and consumers worldwide.

The outage affected high-profile services, including Heathrow Airport, NatWest, Starbucks, and New Zealand’s police and parliament websites.

Microsoft restored access after several hours, but the event highlighted how fragile the internet becomes when cloud services are concentrated among a few major providers.

Experts noted that reliance on platforms such as Azure, Amazon Web Services, and Google Cloud creates systemic risks. Even minor configuration errors can ripple across thousands of interconnected systems, affecting payment processing, government operations, and online services.

Despite the disruption, Microsoft’s swift fix mitigated long-term impact. The company reiterated the importance of robust infrastructure and contingency planning as the global economy increasingly depends on cloud computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.ai restricts teen chat access on its platform

AI chatbot service Character.ai has announced that teenagers will no longer be able to chat with its AI characters from 25 November.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia founder questions Musk’s Grokipedia accuracy

Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.

Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.

He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.

Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.

However, he stressed that substantial costs and potential errors prevent the site from entirely relying on generative AI, preferring careful testing before integrating new technologies.

Wales concluded that while AI may mislead the public with plausible but fabricated content, the Wiki community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!