Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Admins can also enable anonymous or nickname-based participation, giving users greater control over how they engage in public discussions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI gains traction at Mercedes-Benz with Celonis platform

Mercedes-Benz reported faster decisions and better on-time delivery at Celosphere 2025. Using Celonis within MO360, the carmaker unifies production and logistics data, extending visibility across every order, part, and process.

Order-to-delivery operations use AI copilots to forecast timelines, optimise sequencing, and cut delays. After-sales teams surface bottlenecks in service parts logistics and speed customer responses. Quality management utilises anomaly detection to identify deviations early, preventing them from impacting production output.

Executives say complete data transparency enables teams to act faster and with greater precision across production and supply chains. The approach helps anticipate change and react to market shifts. Hundreds of active users are expanding adoption as data-driven practices scale across the company.

Celonis positions process intelligence as the backbone that makes enterprise AI valuable. Integrated process data and business context create a live operational twin. The goal is moving from visibility to action, unlocking value through targeted fixes and intelligent automation.

Conference sessions highlighted broader momentum for process intelligence and AI in industry. Leaders discussed governance, standards, and measurable outcomes from digital platforms. Mercedes-Benz framed its results as proof that structured data and AI can lift performance at a global scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Monolith’s AI powers Nissan push to halve testing time

Nissan and Monolith have extended their strategic partnership for three years to apply AI across more vehicle programmes in Europe. The collaboration supports the Re:Nissan plan to compress development timelines and improve operational efficiency. Early outcomes are guiding a wider rollout.

Engineers at Nissan Technical Centre Europe will use Monolith to predict test results from decades of data and simulations. By reducing prototype builds and running targeted, high-value experiments, teams can focus more on design decisions while preserving accuracy and coverage.

A prior project on chassis bolt joints saw AI recommend optimal torque ranges and prioritise the next best tests for engineers. In controlled comparisons with the non-AI process, physical testing fell by 17 percent. Similar approaches are being prepared for future models beyond the LEAF.

Leaders say a broader deployment could halve testing time across European programmes if comparable gains are achieved. Governance includes rigorous validation before any changes reach production. Operational benefits include faster iteration cycles and less test waste.

Monolith’s toolkit includes next-test recommendation and anomaly detection to flag outliers before rework. Nissan frames the push as an innovation with sustainability benefits, cutting material use while maintaining quality across a complex supply chain. Partners will share results as adoption scales.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study finds AI summaries can flatten understanding compared with reading sources

AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.

Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.

Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.

Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI summaries work best as a support for critical reading, not a substitute for it.

The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths. Over a third of teachers design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are the most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten worry about effects on communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mustafa Suleyman warns against building seemingly conscious AI

Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.

Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals that appear to exhibit agency. Capabilities are close, and the social effects will be real regardless of metaphysics.

People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show how persuasive systems can nudge vulnerable users. Extending moral status amid such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.

Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.

A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft AI chief rules out machine consciousness, calling awareness purely biological

Microsoft’s AI head, Mustafa Suleyman, has dismissed the idea that AI could ever become conscious, arguing that consciousness is a property exclusive to biological beings.

Speaking at the AfroTech Conference in Houston, Suleyman said researchers should stop exploring the notion of sentient AI, calling it ‘the wrong question’.

He explained that while AI can simulate experience, it cannot feel pain or possess subjective awareness.

Suleyman compared AI’s output to a narrative illusion rather than genuine consciousness, aligning with the philosophical theory of biological naturalism, which ties awareness to living brain processes.

Suleyman has become one of the industry’s most outspoken critics of conscious AI research. His book ‘The Coming Wave’ and his recent essay ‘We must build AI for people; not to be a person’ warn against anthropomorphising machines.

He also confirmed that Microsoft will not develop erotic chatbots, a direction that has been embraced by competitors such as OpenAI and xAI.

He stressed that Microsoft’s AI systems are designed to serve humans, not mimic them. The company’s Copilot assistant now includes a ‘real talk’ mode that challenges users’ assumptions instead of offering flattery.

Suleyman said responsible development must avoid ‘unbridled accelerationism’, adding that fear and scepticism are essential for navigating AI’s rapid evolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brain-inspired networks boost AI performance and cut energy use

Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.

According to findings published in Neurocomputing, the structure reduces redundant connections and improves performance without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume less energy without losing performance.

Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.

An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.
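
For readers who want a concrete picture, here is a minimal Python sketch of the general idea only: each unit connects just to nearby units on a one-dimensional map, and weak connections are pruned as training proceeds. The layer sizes, neighbourhood radius, and pruning fraction are invented for illustration and are not taken from the Surrey team’s implementation.

```python
# Toy illustration of topographic sparse connectivity plus pruning.
# Not the paper's Topographical Sparse Mapping code; all parameters are assumptions.
import numpy as np

def topographic_mask(n_in: int, n_out: int, radius: int = 4) -> np.ndarray:
    """Allow connections only between units whose positions on a 1-D map are close."""
    in_pos = np.linspace(0, 1, n_in)
    out_pos = np.linspace(0, 1, n_out)
    dist = np.abs(out_pos[:, None] - in_pos[None, :])   # (n_out, n_in) distances
    return dist <= radius / max(n_in, n_out)

def prune_smallest(weights: np.ndarray, mask: np.ndarray, frac: float = 0.1) -> np.ndarray:
    """Crude stand-in for pruning during training: drop the weakest active connections."""
    active = np.abs(weights[mask])
    if active.size == 0:
        return mask
    cutoff = np.quantile(active, frac)
    return mask & (np.abs(weights) > cutoff)

rng = np.random.default_rng(0)
mask = topographic_mask(n_in=256, n_out=128)        # sparse, locally connected layer
weights = rng.normal(size=(128, 256)) * mask        # weights exist only where allowed
mask = prune_smallest(weights, mask)                # refine connections during training
print(f"active connections: {mask.sum()} of {mask.size}")
```

In a full training loop the pruning step would run repeatedly, so the network keeps only the connections it actually uses, which is the source of the claimed efficiency gains.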

Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.

The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Q3 funding in Europe rebounds with growth rounds leading

Europe raised €13.7bn across just over 1,300 rounds in Q3, the strongest quarter since Q2 2024. September alone brought €8.7bn. July and August reflected the familiar summer slowdown.

Growth equity provided €7bn, or 51.6% of the total, with two consecutive quarters surpassing 150 growth rounds. Data centres, AI agents, and GenAI led the activity, with more AI startups scaling with larger cheques.

Early-stage totals were the lowest in 12 months, yet they were ahead of Q3 last year. Lovable’s $200 million Series A at a $1.8 billion valuation stood out. Seven new unicorns included Nscale, Fuse Energy, Framer, IQM, Nothing, and Tide.

ASML led the quarter’s largest deal, investing €1.3bn in Mistral AI’s €1.7bn Series C. France tallied €2.7bn, heavily concentrated in Mistral, while the UK reached €4.49bn. Germany followed with just over €1.5bn, ahead of the Netherlands and Switzerland.

AI-native funding surpassed all verticals for the first time on record, reaching €3.9bn, with deeptech at €2.6bn. Agentic AI logged 129 rounds, sharply higher year-over-year, while data centres edged out agents for capital. Defence and dual-use technology attracted €2.1bn across 44 rounds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Unexpected language emerges as best for AI prompting

A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.

The researchers tested leading AI models, including OpenAI’s models, Google Gemini, Qwen, Llama, and DeepSeek, by giving them identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.
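
As a rough sketch of how such a comparison can be run, the snippet below scores a model on the same question set translated into each language and ranks languages by accuracy. The `ask_model` callable, the substring-match scoring, and the benchmark data are placeholders, not the study’s actual setup.

```python
# Hypothetical multilingual evaluation loop; not the Maryland/Microsoft benchmark code.
from typing import Callable

def evaluate_language(
    ask_model: Callable[[str], str],            # wrapper around some chat/completions API
    questions: list[tuple[str, str]],           # (prompt in this language, expected answer)
) -> float:
    """Fraction of prompts whose response contains the expected answer."""
    correct = sum(
        1 for prompt, expected in questions
        if expected.lower() in ask_model(prompt).lower()
    )
    return correct / len(questions)

def rank_languages(ask_model: Callable[[str], str],
                   benchmark: dict[str, list[tuple[str, str]]]) -> list[tuple[str, float]]:
    """benchmark maps a language name to its translated (prompt, answer) pairs."""
    scores = {lang: evaluate_language(ask_model, qs) for lang, qs in benchmark.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Averaging such per-language scores across several models is what yields figures like the 88 percent reported for Polish.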

According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.

Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!