AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.
Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.
Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.
Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI summaries work best as support for critical reading, rather than as a substitute for it.
The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.
Approaches are splitting along two paths. Over a third of teachers design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are most likely to adapt, yet strong majorities across all age groups report changes to their practice.
Perceived impacts remain mixed. Six in ten worry about students’ communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.
Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.
Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.
World Economic Forum President Borge Brende has warned that massive investments in AI and cryptocurrencies may be inflating financial bubbles. Speaking in Berlin, he noted that $500 billion has been invested in AI this year alone, a pace he said raises the risk of speculative excess.
Brende described frontier technologies as a ‘big paradigm shift’ that could drive global growth, with potential productivity gains of up to 10% over the next decade. He noted that breakthroughs in medicine, synthetic biology, space, and energy could transform economies, but stressed that the benefits must be widely shared.
Geopolitical uncertainty remains a significant concern, according to Brende. He pointed to rising tensions between the US and China, calling it a race for technological dominance that could shape global power.
He also urged multilateral cooperation to address global challenges, including pandemics, cybercrime, and investment uncertainty.
Despite the disorder in world politics, Brende highlighted the resilience of economies like those in the US, China, and India. He called for patient investment strategies and stronger international coordination to ensure that new technologies translate into sustainable prosperity.
Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. He warns that growing belief in chatbot consciousness risks fuelling campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.
Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals that appear to exhibit agency. Capabilities are close, and the social effects will be real regardless of metaphysics.
People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show how persuasive systems can nudge vulnerable users. Extending moral status under such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.
Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.
A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.
AI is playing an increasingly important role in personal finance, with more than 28 million UK adults having used AI over the past year.
Lloyds Banking Group’s latest Consumer Digital Index reveals that many individuals turn to platforms like ChatGPT for budgeting, savings planning, and financial education, with users reporting an average annual saving of £399 from AI insights.
Digital confidence strongly supports financial empowerment. Two-thirds of internet users report that online tools enhance their ability to manage money, while those with higher digital skills experience lower stress and greater control over their finances.
Regular engagement with AI and other digital tools enhances both knowledge and confidence in financial decision-making.
Trust remains a significant concern despite growing usage. Around 80% of users worry about inaccurate information or insufficient personalisation, underlining the need for reliable guidance.
Jas Singh, CEO of Consumer Relationships at Lloyds, highlights that banks must combine AI innovation with trusted expertise to help people make more intelligent choices and build long-term financial resilience.
Dubai-based telecom operator du has launched a cryptocurrency mining service designed to promote digital finance adoption across the UAE. The initiative, named Cloud Miner, enables residents to mine cryptocurrency via subscription, eliminating the need for personal hardware or maintenance.
The service, run from du Tech’s data centres, lets users rent computational power to mine Bitcoin and other digital assets. Participants can bid online from November 3 to 9 for 24-month contracts offering 250 terahashes per second.
Users will also gain access to a calculator to track monthly Bitcoin yields linked directly to their crypto wallets.
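At its core, such a yield calculator is proportional arithmetic: a contract’s expected share of block rewards equals its hashrate divided by the total network hashrate. A back-of-envelope sketch, where all figures are illustrative assumptions rather than du’s published numbers:

```python
# Back-of-envelope mining yield estimate. All parameters are illustrative
# assumptions, not du's actual calculator or figures.
def daily_btc_yield(my_ths: float, network_ehs: float,
                    block_reward: float = 3.125,   # BTC per block after the 2024 halving
                    blocks_per_day: int = 144) -> float:
    """Expected BTC/day = (your hashrate / network hashrate) * daily issuance."""
    my_hs = my_ths * 1e12            # TH/s -> H/s
    network_hs = network_ehs * 1e18  # EH/s -> H/s
    return (my_hs / network_hs) * blocks_per_day * block_reward

# A 250 TH/s contract against an assumed ~700 EH/s network:
print(f"{daily_btc_yield(250, 700):.6f} BTC/day")
```

Real calculators also account for pool fees, difficulty adjustments, and transaction-fee rewards, which this sketch ignores.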
According to Jasim Al Awadi, du’s Chief ICT Officer, Cloud Miner represents the company’s first step into digital asset services. He added that as adoption grows, du plans to explore adjacent sectors such as crypto exchanges and lending platforms.
The company also intends to increase the number of available contracts and hash rate in future phases.
The UAE continues to position itself as a leader in digital finance, introducing supportive regulations and encouraging blockchain innovation. Al Awadi emphasised that trusted, regulated entities like du play a key role in helping users confidently engage with the crypto ecosystem.
Perplexity has unveiled new privacy features for its AI-powered browser, Comet, designed to give users clearer control over their data. The updates include a new homepage widget called Privacy Snapshot, which allows people to review and adjust privacy settings in one place.
The widget provides a real-time view of how Comet protects users online and simplifies settings for ad blocking, tracker management and data access. Users can toggle permissions for the Comet Assistant directly from the homepage.
Comet’s updated AI Assistant settings now show precisely how data is used, including where it is stored locally or shared for processing. Sensitive information such as passwords and payment details remain securely stored on the user’s device.
Perplexity said the changes reinforce its ‘privacy by default’ approach, an important principle in EU data protection law, combining ad blocking, safe browsing and transparent data handling. The new features are available in the latest Comet update across desktop and mobile platforms.
CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.
Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.
Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
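A minimal sketch of the version check, with the caveat that the reference version below is a hypothetical placeholder to be replaced from your distro’s advisory, and that distro backports can patch kernels whose version strings look old, so a pure comparison is a prompt to investigate, not a verdict:

```python
import platform
import re

# Naive kernel-version check for CVE-2024-1086. The reference version is a
# hypothetical placeholder: take the authoritative fixed versions from your
# vendor advisory or the NIST catalogue. Distro backports may patch kernels
# with older-looking version strings, so treat a flag as a cue to check the
# advisory, not as proof of vulnerability.
FIXED = (6, 1, 76)  # hypothetical reference; verify per distribution

def parse_version(release: str) -> tuple:
    # Keep only the leading numeric part, e.g. "6.1.76-amd64" -> (6, 1, 76).
    m = re.match(r"(\d+(?:\.\d+)*)", release)
    return tuple(int(x) for x in m.group(1).split(".")) if m else ()

running = parse_version(platform.release())
if running and running < FIXED:
    print("Kernel may predate the CVE-2024-1086 fix; check your vendor advisory.")
else:
    print("Kernel at or above the reference version (still confirm backports).")
```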
Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.
Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.
Microsoft’s AI head, Mustafa Suleyman, has dismissed the idea that AI could ever become conscious, arguing that consciousness is a property exclusive to biological beings.
Speaking at the AfroTech Conference in Houston, Suleyman said researchers should stop exploring the notion of sentient AI, calling it ‘the wrong question’.
He explained that while AI can simulate experience, it cannot feel pain or possess subjective awareness.
Suleyman compared AI’s output to a narrative illusion rather than genuine consciousness, aligning with the philosophical theory of biological naturalism, which ties awareness to living brain processes.
Suleyman has become one of the industry’s most outspoken critics of conscious AI research. His book ‘The Coming Wave’ and his recent essay ‘We must build AI for people; not to be a person’ warn against anthropomorphising machines.
He also confirmed that Microsoft will not develop erotic chatbots, a direction that has been embraced by competitors such as OpenAI and xAI.
He stressed that Microsoft’s AI systems are designed to serve humans, not mimic them. The company’s Copilot assistant now includes a ‘real talk’ mode that challenges users’ assumptions instead of offering flattery.
Suleyman said responsible development must avoid ‘unbridled accelerationism’, adding that fear and scepticism are essential for navigating AI’s rapid evolution.
Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.
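The idea of connecting each neuron only to nearby ones can be illustrated with a distance-based mask that zeroes every long-range weight in a dense layer. A minimal sketch, assuming a hypothetical 1-D layout and radius rather than the paper’s actual construction:

```python
import numpy as np

# Sketch of a topographic sparsity mask: neurons sit on a 1-D line and each
# output neuron keeps connections only to inputs within a fixed radius.
# The layout and radius are illustrative, not the published method.
def topographic_mask(n_in: int, n_out: int, radius: float) -> np.ndarray:
    in_pos = np.arange(n_in)                   # input positions on the line
    out_pos = np.linspace(0, n_in - 1, n_out)  # spread outputs over the same line
    dist = np.abs(in_pos[None, :] - out_pos[:, None])  # (n_out, n_in) distances
    return (dist <= radius).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 32)).astype(np.float32)
mask = topographic_mask(n_in=32, n_out=8, radius=3)
w_sparse = w * mask                # long-range connections pruned to zero
print(f"retained {mask.mean():.0%} of connections")
```

The mask is fixed before training, so the layer never spends compute or memory on the pruned long-range weights.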
According to findings published in Neurocomputing, the structure reduces redundant connections and improves efficiency without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume far less energy while maintaining performance.
Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.
An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.
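Pruning during training can be sketched with generic iterative magnitude pruning, a stand-in for the idea rather than the Enhanced Topographical Sparse Mapping algorithm itself: after each training step, the weakest surviving connections are removed.

```python
import numpy as np

# Generic magnitude pruning, illustrating the idea of refining connections
# during training (not the paper's algorithm): among currently active
# weights, zero out the smallest-magnitude fraction after each step.
def prune_smallest(w: np.ndarray, mask: np.ndarray, frac: float) -> np.ndarray:
    active = np.flatnonzero(mask)                  # indices of surviving weights
    k = int(len(active) * frac)                    # how many to remove this round
    if k == 0:
        return mask
    order = np.argsort(np.abs(w.ravel()[active]))  # weakest first
    new_mask = mask.copy().ravel()
    new_mask[active[order[:k]]] = 0.0
    return new_mask.reshape(mask.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = np.ones_like(w)
mask = prune_smallest(w, mask, 0.25)  # drop the weakest quarter
print(f"{mask.mean():.0%} of weights survive")
```

Repeating the call across training rounds gradually sparsifies the network, loosely analogous to synaptic pruning in the brain.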
Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.
The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.