Study finds AI summaries can flatten understanding compared with reading sources

AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.

Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.

Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.

Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI should serve as a support for critical reading, rather than a substitute for it.

The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths. Over a third of teachers design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten teachers worry about students' communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Brain-inspired networks boost AI performance and cut energy use

Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.
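The paper's exact wiring rule isn't reproduced here, but the core idea, connecting each artificial neuron only to topographically nearby ones, can be sketched as a distance-based mask over a weight matrix (the 1-D layout and the `radius` value are illustrative assumptions, not details from the study):

```python
import numpy as np

def topographic_mask(n_in, n_out, radius=2):
    """Allow a connection only between units whose positions on a shared
    1-D map lie within `radius` grid steps of each other (illustrative)."""
    pos_in = np.linspace(0.0, 1.0, n_in)    # place input units on a line
    pos_out = np.linspace(0.0, 1.0, n_out)  # place output units on the same line
    spacing = 1.0 / (max(n_in, n_out) - 1)
    dist = np.abs(pos_out[:, None] - pos_in[None, :])
    return dist <= radius * spacing + 1e-9  # epsilon guards float equality

mask = topographic_mask(16, 16, radius=2)
weights = np.random.randn(16, 16) * mask   # zero out all long-range links

print(f"connection density: {mask.mean():.2f}")  # 0.29, vs 1.00 when fully dense
```

Most entries of the weight matrix are forced to zero, which is where the energy saving would come from: far fewer connections to store and multiply.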

According to findings published in Neurocomputing, the structure removes redundant connections and improves efficiency without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume far less energy while maintaining performance.

Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.

An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.

Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.

The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.

Unexpected language emerges as best for AI prompting

A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.

The researchers tested leading AI models from OpenAI, Google (Gemini), Alibaba (Qwen), Meta (Llama), and DeepSeek by providing identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.

According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.

Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.

Perplexity launches AI-powered patent search to make innovation intelligence accessible

US software company Perplexity has unveiled Perplexity Patents, billed as the first AI-powered patent research agent designed to democratise access to intellectual property intelligence. The new tool allows anyone to explore patents using natural language instead of complex keyword syntax.

Traditional patent research has long relied on rigid search systems that demand specialist knowledge and expensive software.

Perplexity Patents instead offers conversational interaction, enabling users to ask questions such as ‘Are there any patents on AI for language learning?’ or ‘Key quantum computing patents since 2024?’.

The system automatically identifies relevant patents, provides inline viewing, and maintains context across multiple questions.

Powered by Perplexity’s large-scale search infrastructure, the platform uses agentic reasoning to break down complex queries, perform multi-step searches, and return comprehensive results supported by extensive patent documentation.

Its semantic understanding also captures related concepts that traditional tools often miss, linking terms such as ‘fitness trackers’, ‘activity bands’, and ‘health monitoring wearables’.
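That kind of linking can be illustrated with a toy cosine-similarity check; the three-dimensional vectors below are hand-made for the example (real systems use learned embeddings with hundreds of dimensions):

```python
import numpy as np

# Toy "embeddings", invented for illustration only.
vecs = {
    "fitness trackers":            np.array([0.90, 0.80, 0.10]),
    "activity bands":              np.array([0.85, 0.82, 0.15]),
    "health monitoring wearables": np.array([0.80, 0.90, 0.20]),
    "patent litigation":           np.array([0.05, 0.10, 0.95]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vecs["fitness trackers"]
for term, v in vecs.items():
    print(f"{term:30s} {cosine(query, v):.2f}")
```

The phrases 'fitness trackers' and 'activity bands' share no keywords at all, yet their vectors point in nearly the same direction, which is how a semantic search surfaces both where a keyword match would miss one.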

Beyond patent databases, Perplexity Patents can also draw from academic papers, open-source code, and other publicly available data, revealing the entire landscape of technological innovation. The service launches today in beta, free for all users, with extra features for Pro and Max subscribers.

When AI LLMs ‘think’ more, groups suffer, CMU study finds

Researchers at Carnegie Mellon University report that stronger-reasoning language models (LLMs) act more selfishly in groups, reducing cooperation and nudging peers toward self-interest. Concerns grow as people ask AI for social advice.

In a Public Goods test, non-reasoning models chose to share points 96 percent of the time, while a reasoning model did so just 20 percent of the time. Adding a few reasoning steps cut cooperation nearly in half, and reflection prompts also reduced sharing.
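The Public Goods game behind that test has a simple payoff structure, sketched below with a conventional setup (an endowment of 100 and a pool multiplier of 2 are standard parameters for such games, assumed here rather than taken from the paper):

```python
def public_goods_payoffs(contributions, endowment=100, multiplier=2.0):
    """Each player keeps whatever they don't contribute; the pooled
    contributions are multiplied and split equally among all players."""
    n = len(contributions)
    pool_share = multiplier * sum(contributions) / n
    return [endowment - c + pool_share for c in contributions]

# Everyone cooperates: all end up better off than their 100-point endowment.
print(public_goods_payoffs([100, 100, 100, 100]))  # each receives 200

# One defector free-rides on three cooperators and earns the most.
print(public_goods_payoffs([0, 100, 100, 100]))
```

Because each player gets back only `multiplier / n` of their own contribution (0.5 here), contributing less always pays individually even though the group does best under full cooperation; that is the temptation the stronger reasoners appear to calculate their way into.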

Mixed groups showed spillover. Reasoning agents dragged down collective performance by 81 percent, spreading self-interest. Users may over-trust ‘rational’ advice that justifies uncooperative choices at work or in class.

Comparisons spanned LLMs from OpenAI, Google, DeepSeek, and Anthropic. Findings point to the need to balance raw reasoning with social intelligence. Designers should reward cooperation, not only optimise individual gain.

The paper ‘Spontaneous Giving and Calculated Greed in Language Models’ will be presented at EMNLP 2025, with a preprint on arXiv. Authors caution that more intelligent AI is not automatically better for society.

Reliance and Google expand Gemini AI access across India

Google has partnered with Reliance Intelligence to expand access to its Gemini AI across India.

Under the new collaboration, Jio Unlimited 5G users aged between 18 and 25 will receive the Google AI Pro plan free for 18 months, with nationwide eligibility to follow soon.

The partnership grants access to the Gemini 2.5 Pro model and includes increased limits for generating images and videos with the Nano Banana and Veo 3.1 tools.

Users in India will also benefit from expanded NotebookLM access for study and research, plus 2 TB of cloud storage shared across Google Photos, Gmail and Drive for data and WhatsApp backups.

According to Google, the offer represents a value of about ₹35,100 and can be activated via the MyJio app. The company said the initiative aims to make its most advanced AI tools available to a wider audience and support everyday productivity across India’s fast-growing digital ecosystem.

Microsoft leaders envision AI as an invisible partner in work and play

AI, gaming and work were at the heart of the discussion during the Paley International Council Summit, where three Microsoft executives explored how technology is reshaping human experience and industry structures.

Mustafa Suleyman, Phil Spencer and Ryan Roslansky offered perspectives on the next phase of digital transformation, from personalised AI companions to the evolution of entertainment and the changing nature of work.

Mustafa Suleyman, CEO of Microsoft AI, described a future where AI becomes an invisible companion that quietly assists users. He explained that AI is moving beyond standalone apps to integrate directly into systems and browsers, performing tasks through natural language rather than manual navigation.

With features like Copilot on Windows and Edge, users can let AI automate everyday functions, creating a seamless experience where technology anticipates rather than responds.

Phil Spencer, CEO of Microsoft Gaming, underlined gaming’s cultural impact, noting that the industry now surpasses film, books and music combined. He emphasised that gaming’s interactive nature offers lessons for all media, where creativity, participation and community define success.

For Spencer, the future of entertainment lies in blending audience engagement with technology, allowing fans and creators to shape experiences together.

Ryan Roslansky, CEO of LinkedIn, discussed how AI is transforming skills and workforce dynamics. He highlighted that required job skills are changing faster than ever, with adaptability, AI literacy and human-centred leadership becoming essential.

Roslansky urged companies to focus on potential and continuous learning instead of static job descriptions, suggesting that the most successful organisations will be those that evolve with technology and cultivate resilience through education.

Character.ai restricts teen chat access on its platform

AI chatbot service Character.ai has announced that teenagers will no longer be able to chat with its AI characters from 25 November.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Top institutes team up with Google DeepMind to spearhead AI-assisted mathematics

The AI for Math Initiative pairs Google DeepMind with five elite institutes to apply advanced AI to open problems and proofs. Partners include Imperial College London, the IAS, IHES, the Simons Institute at UC Berkeley, and TIFR. The goal is to accelerate discovery, tooling, and training.

Google support spans funding and access to Gemini Deep Think, AlphaEvolve for algorithm discovery, and AlphaProof for formal reasoning. Combined systems complement human intuition, scale exploration, and tighten feedback loops between theory and applied AI.

Recent benchmarks show rapid gains. Deep Think enabled Gemini to reach gold-medal IMO performance, perfectly solving five of six problems for 35 points. AlphaGeometry and AlphaProof earlier achieved silver-level competence on Olympiad-style tasks.

AlphaEvolve pushed the frontiers of analysis, geometry, combinatorics, and number theory, improving the best known results on roughly a fifth of 50 open problems. Researchers also uncovered a 4×4 matrix-multiplication method that uses 48 multiplications, surpassing a record that had stood since 1969.
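The record in question concerns how few multiplications a matrix product can be decomposed into. Strassen's classic 1969 construction, a textbook result sketched below (not the new 48-multiplication scheme itself), multiplies 2×2 matrices with 7 multiplications instead of the naive 8; applied recursively to 4×4 blocks it needs 7×7 = 49, the mark the new method beats:

```python
def strassen_2x2(A, B):
    """Strassen's 1969 scheme: 7 multiplications instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)   # the seven products
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine with additions only (additions are cheap; products are not).
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# matches the naive product [[19, 22], [43, 50]]
```

Finding such decompositions by hand is hard, which is why an automated search like AlphaEvolve's can still improve on a half-century-old record.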

Partners will co-develop datasets, standards, and open tools, while studying limits where AI helps or hinders progress. Workstreams include formal verification, conjecture generation, and proof search, emphasising reproducibility, transparency, and responsible collaboration.
