Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Group admins can also utilise anonymous or nickname-based participation options, providing users with greater control over their engagement in public discussions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study finds AI summaries can flatten understanding compared with reading sources

AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.

Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.

Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.

Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI summaries serve as support for critical reading, rather than as a substitute for it.

The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.


UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths. Over a third of teachers create AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten teachers worry about students' communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.


Mustafa Suleyman warns against building seemingly conscious AI

Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.

Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals that appear to exhibit agency. Capabilities are close, and the social effects will be real regardless of metaphysics.

People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show persuasive systems can nudge vulnerable users. Extending moral status amid such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.

Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.

A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.


Unexpected language emerges as best for AI prompting

A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.

The researchers tested leading AI models from OpenAI, Google (Gemini), Alibaba (Qwen), Meta (Llama), and DeepSeek by providing identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.

According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.

Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.


Perplexity launches AI-powered patent search to make innovation intelligence accessible

US software company Perplexity has unveiled Perplexity Patents, the first AI-powered patent research agent designed to democratise access to intellectual property intelligence. The new tool lets anyone explore patents using natural language instead of complex keyword syntax.

Traditional patent research has long relied on rigid search systems that demand specialist knowledge and expensive software.

Perplexity Patents instead offers conversational interaction, enabling users to ask questions such as ‘Are there any patents on AI for language learning?’ or ‘Key quantum computing patents since 2024?’.

The system automatically identifies relevant patents, provides inline viewing, and maintains context across multiple questions.

Powered by Perplexity’s large-scale search infrastructure, the platform uses agentic reasoning to break down complex queries, perform multi-step searches, and return comprehensive results supported by extensive patent documentation.

Its semantic understanding also captures related concepts that traditional tools often miss, linking terms such as ‘fitness trackers’, ‘activity bands’, and ‘health monitoring wearables’.

Beyond patent databases, Perplexity Patents can also draw from academic papers, open-source code, and other publicly available data, revealing the entire landscape of technological innovation. The service launches today in beta, free for all users, with extra features for Pro and Max subscribers.


Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident allegedly from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.


Japan’s KDDI partners with Google for AI-driven news service

Japan’s telecom leader KDDI is set to partner with Google to introduce an AI-powered news search service in spring 2026. The platform will use Google’s Gemini model to deliver articles from authorised Japanese media sources while preventing copyright violations.

The service will cite original publishers and exclude independent web scraping, addressing growing global concerns about the unauthorised use of journalism by generative AI systems. Around six domestic media companies, including digital outlets, are expected to join the initiative.

KDDI aims to strengthen user trust by offering reliable news through a transparent and copyright-safe AI interface. Details of how the articles will appear to users are still under review, according to sources familiar with the plan.

The move follows lawsuits filed in Tokyo by major Japanese newspapers, including Nikkei and Yomiuri, against US startup Perplexity AI over alleged copyright infringement. Industry experts say KDDI’s collaboration could become a model for responsible AI integration in news services.


UK traffic to Pornhub plunges after age-verification law

In response to the UK’s new age-verification law, Pornhub reports that visits from UK users have fallen by about 77%.

The drop follows legislation designed to block under-18s from accessing adult sites via mandatory age checks.

The company states that it began enforcing the verification system early in October, noting that many users are now turned away or fail the checks.

According to Pornhub, this explains the significant decrease in traffic from the UK. The platform emphasised that this is a reflection of compliance rather than an admission of harm.

Critics argue that the law creates risks of overblocking and privacy concerns, as users may turn to less regulated or unsafe alternatives. This case also underscores tensions between content regulation, digital rights and the efficacy of age-gating as a tool.


Reliance and Google expand Gemini AI access across India

Google has partnered with Reliance Intelligence to expand access to its Gemini AI across India.

Under the new collaboration, Jio Unlimited 5G users aged between 18 and 25 will receive the Google AI Pro plan free for 18 months, with nationwide eligibility to follow soon.

The partnership grants access to the Gemini 2.5 Pro model and includes increased limits for generating images and videos with the Nano Banana and Veo 3.1 tools.

Users in India will also benefit from expanded NotebookLM access for study and research, plus 2 TB of cloud storage shared across Google Photos, Gmail and Drive for data and WhatsApp backups.

According to Google, the offer represents a value of about ₹35,100 and can be activated via the MyJio app. The company said the initiative aims to make its most advanced AI tools available to a wider audience and support everyday productivity across India’s fast-growing digital ecosystem.
