Cognitive offloading and the future of the mind in the AI age

AI reshapes work and learning

The rapid advancement of AI is giving rise to a range of new phenomena in contemporary societies.

The integration of AI-driven tools into a broad spectrum of professional tasks has proven beneficial in many respects, particularly in terms of alleviating the cognitive and physical burdens traditionally placed on human labour.

By automating routine processes and enhancing decision-making capabilities, AI has the potential to significantly improve efficiency and productivity across various sectors.

In response to these accelerating technological changes, a growing number of nations are prioritising the integration of AI technologies into their education systems to ensure students are prepared for future societal and workforce transformations.

China advances AI education for youth

China has released two landmark policy documents aimed at integrating AI education systematically into the national curriculum for primary and secondary schools.

The initiative not only reflects the country’s long-term strategic vision for educational transformation but also seeks to position China at the forefront of global AI literacy and talent development.


The two guidelines, formally titled the Guidelines for AI General Education in Primary and Secondary Schools and the Guidelines for the Use of Generative AI in Primary and Secondary Schools, represent a scientific and systemic approach to cultivating AI competencies among school-aged children.

Their release marks a milestone in the development of a tiered, progressive AI education system, with carefully delineated age-appropriate objectives and ethical safeguards for both students and educators.

The USA expands AI learning in schools

In April, the US government outlined a structured national policy to integrate AI literacy into every stage of the education system.

By creating a dedicated federal task force, the administration intends to coordinate efforts across departments to promote early and equitable access to AI education.

Instead of isolating AI instruction within specialised fields, the initiative seeks to embed AI concepts across all learning pathways—from primary education to lifelong learning.

The plan includes the creation of a nationwide AI challenge to inspire innovation among students and educators, showcasing how AI can address real-world problems.

The policy also prioritises training teachers to understand and use AI tools, instead of relying solely on traditional teaching methods. It supports professional development so educators can incorporate AI into their lessons and reduce administrative burdens.

The strategy encourages public-private partnerships, using industry expertise and existing federal resources to make AI teaching materials widely accessible.

European Commission supports safe AI use

As AI becomes more common in classrooms around the globe, educators must understand not only how to use it effectively but also how to apply it ethically.

Rather than introducing AI tools without guidance or reflection, the European Commission has provided ethical guidelines to help teachers use AI and data responsibly in education.


Published in 2022 and developed with input from educators and AI experts, the EU guidelines are intended primarily for primary and secondary teachers who have little or no prior experience with AI.

Instead of focusing on technical complexity, the guidelines aim to raise awareness about how AI can support teaching and learning, highlight the risks involved, and promote ethical decision-making.

The guidelines explain how AI can be used in schools, encourage safe and informed use by both teachers and students, and help educators consider the ethical foundations of any digital tools they adopt.

Rather than relying on unexamined technology, they support thoughtful implementation by offering practical questions and advice for adapting AI to various educational goals.

AI tools may undermine human thinking

However, technological augmentation is not without drawbacks. Concerns have been raised regarding the potential for job displacement, increased dependency on digital systems, and the gradual erosion of certain human skills.

As such, while AI offers promising opportunities for enhancing the modern workplace, it simultaneously introduces complex challenges that must be critically examined and responsibly addressed.

One significant challenge that must be addressed in the context of increasing reliance on AI is the phenomenon known as cognitive offloading. But what exactly does this term entail?

What happens when we offload thinking?

Cognitive offloading refers to the practice of using physical actions or external tools to modify the information processing demands of a task, with the aim of reducing the cognitive load on an individual.

In essence, it involves transferring certain mental functions—such as memory, calculation, or decision-making—to outside resources like digital devices, written notes, or structured frameworks.


While this strategy can enhance efficiency and performance, it also raises concerns about long-term cognitive development, dependency on technological aids, and the potential degradation of innate mental capacities.

How AI may be weakening critical thinking

A study published in the journal Societies, led by Dr Michael Gerlich, Head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, raises serious concerns about the cognitive consequences of AI augmentation in various aspects of life.

The study suggests that frequent use of AI tools may be weakening individuals’ capacity for critical thinking, a skill considered fundamental to independent reasoning, problem-solving, and informed decision-making.

More specifically, Dr Gerlich adopted a mixed-methods approach, combining quantitative survey data from 666 participants with qualitative interviews involving 50 individuals.

Participants were drawn from diverse age groups and educational backgrounds and were assessed on their frequency of AI tool use, their tendency to offload cognitive tasks, and their critical thinking performance.

The study employed both self-reported and performance-based measures of critical thinking, alongside statistical analyses and machine learning models, such as random forest regression, to identify key factors influencing cognitive performance.
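The random-forest step can be sketched roughly as follows. This is a minimal reconstruction on synthetic data, not the study's actual code; the variable names, effect sizes, and data-generating process are all our assumptions, chosen only to mirror the reported pattern.

```python
# Sketch of the kind of analysis described above (our reconstruction, not
# Gerlich's code): fit a random forest regressor on survey-style features
# and inspect which ones best predict critical-thinking scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 666  # matches the study's survey sample size

# Hypothetical predictors: AI-tool use frequency, cognitive-offloading
# tendency, age, and years of education (all synthetic).
ai_use = rng.uniform(0, 10, n)
offloading = 0.7 * ai_use + rng.normal(0, 1, n)
age = rng.uniform(17, 70, n)
education = rng.uniform(8, 20, n)

# Synthetic outcome mirroring the reported pattern: critical thinking
# falls with offloading and rises with education.
critical_thinking = 50 - 2.0 * offloading + 1.5 * education + rng.normal(0, 3, n)

X = np.column_stack([ai_use, offloading, age, education])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, critical_thinking)

# Feature importances indicate which predictors drive the outcome.
for name, imp in zip(["ai_use", "offloading", "age", "education"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

On data generated this way, the offloading and education features should dominate the importance ranking, which is the kind of signal the study reports.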

Younger users, who rely more on AI, think less critically

The findings revealed a strong negative correlation between frequent AI use and critical thinking abilities. Individuals who reported heavy reliance on AI tools—whether for quick answers, summarised explanations, or algorithmic recommendations—scored lower on assessments of critical thinking.

The effect was particularly pronounced among younger users aged 17 to 25, who reported the highest levels of cognitive offloading and showed the weakest performance in critical thinking tasks.

In contrast, older participants (aged 46 and above) demonstrated stronger critical thinking skills and were less inclined to delegate mental effort to AI.

Higher education strengthens critical thinking

The data also indicated that educational attainment served as a protective factor: those with higher education levels consistently exhibited more robust critical thinking abilities, regardless of their AI usage levels.

These findings suggest that formal education may equip individuals with better tools for critically engaging with digital information rather than uncritically accepting AI-generated responses.

While the study does not establish direct causation, the strength of the correlations and the consistency across quantitative and qualitative data suggest that AI usage may indeed be contributing to a gradual decline in cognitive independence.

However, in his study, Gerlich also notes the possibility of reverse causality—individuals with weaker critical thinking skills may be more inclined to rely on AI tools in the first place.

Offloading also reduces information retention

While cognitive offloading can enhance immediate task performance, it often comes at the cost of reduced long-term memory retention, as other studies show.

This trade-off is most prominently illustrated in experimental tasks such as the Pattern Copy Task, in which participants asked to reproduce a pattern typically choose to refer back to the original repeatedly rather than commit it to memory.

Even when such behaviours introduce additional time or effort (e.g., physically moving between stations), the majority of participants opt to offload, suggesting a strong preference for minimising cognitive strain.

These findings underscore the human tendency to prioritise efficiency over internalisation, especially under conditions of high cognitive demand.

The tendency to offload raises crucial questions about the cognitive and educational consequences of extended reliance on external aids. On the one hand, offloading can free up mental resources, allowing individuals to focus on higher-order problem-solving or multitasking.

On the other hand, it may foster a kind of cognitive dependency, weakening internal memory traces and diminishing opportunities for deep engagement with information.

Seen through this lens, cognitive offloading is not a failure of memory or attention but a reconfiguration of cognitive architecture, a process that may be adaptive rather than detrimental.

However, this perspective remains controversial, especially in light of findings that frequent offloading can impair retention, transfer of learning, and critical thinking, as Gerlich's study argues.

If students, for example, continually rely on digital devices to recall facts or solve problems, they may fail to develop the robust mental models necessary for flexible reasoning and conceptual understanding.

The mind may extend beyond the brain

This tension has also sparked debate among cognitive scientists and philosophers, particularly in light of the extended mind hypothesis.

Contrary to the traditional view that cognition is confined to the brain, the extended mind theory argues that cognitive processes often rely on, and are distributed across, tools, environments, and social structures.


As digital technologies become increasingly embedded in daily life, this hypothesis raises profound questions about human identity, cognition, and agency.

At the core of the extended mind thesis lies a deceptively simple question: Where does the mind stop, and the rest of the world begin?

Drawing an analogy to prosthetics—external objects that functionally become part of the body—Clark and Chalmers argue that cognitive tools such as notebooks, smartphones, and sketchpads can become integrated components of our mental system.

These tools do not merely support cognition; they constitute it when used in a seamless, functionally integrated manner. This conceptual shift has redefined thinking not as a brain-bound process but as a dynamic interaction between mind, body, and world.

Balancing AI and human intelligence

In conclusion, cognitive offloading represents a powerful mechanism of modern cognition, one that allows individuals to adapt to complex environments by distributing mental load.

However, its long-term effects on memory, learning, and problem-solving remain a subject of active investigation. Rather than treating offloading as inherently beneficial or harmful, future research and practice should seek to balance its use, leveraging its strengths while mitigating its costs.


Ultimately, we, as educators, policymakers, and technologists, must shape the future of learning and work while confronting a central tension: how to harness the benefits of AI without compromising the very faculties (critical thought, memory, and independent judgment) that define human intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eminem sues Meta over copyright violations

Eminem has filed a major lawsuit against Meta, accusing the tech giant of knowingly enabling widespread copyright infringement across its platforms. The rapper’s publishing company, Eight Mile Style, is seeking £80.6 million in damages, claiming 243 of his songs were used without authorisation.

The lawsuit argues that Meta, which owns Facebook, Instagram and WhatsApp, allowed tools such as Original Audio and Reels to encourage unauthorised reproduction and use of Eminem’s music.

The filing claims it occurred without proper licensing or attribution, significantly diminishing the value of his copyrights.

Eminem’s legal team contends that Meta profited from the infringement instead of ensuring his works were protected. If a settlement cannot be reached, the artist is demanding the maximum statutory damages — $150,000 per song — which would amount to over $109 million.
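A quick check of the arithmetic behind the damages claim. This is a sketch: the per-platform multiplier is our assumption and does not appear in the article, which only reports the per-song maximum and the total.

```python
# Arithmetic behind the statutory damages claim (our reconstruction).
songs = 243
per_song_max = 150_000  # maximum statutory damages per work

per_platform_total = songs * per_song_max
print(per_platform_total)  # 36450000 for a single count per song

# The reported 'over $109 million' matches counting each of Meta's three
# platforms (Facebook, Instagram, WhatsApp) separately -- an assumption
# on our part, not something stated in the filing.
three_platform_total = 3 * per_platform_total
print(three_platform_total)  # 109350000
```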

Meta has faced similar lawsuits before, including a high-profile case in 2022 brought by Epidemic Sound, which alleged the unauthorised use of thousands of its tracks. The latest claim adds to growing pressure on social media platforms to address copyright violations more effectively.


Cyber attacks and ransomware rise globally in early 2025

Cyber attacks have surged by 47% globally in the first quarter of 2025, with organisations facing an average of 1,925 attacks each week.
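As a rough sanity check on these figures, the 47% rise implies a weekly average of roughly 1,300 attacks a year earlier. This assumes the growth rate applies directly to the weekly average, which the report does not state explicitly.

```python
# Implied prior-year baseline from the reported figures (a sketch;
# assumes the 47% year-on-year rise applies to the weekly average).
current_weekly = 1925
growth = 0.47

implied_last_year = current_weekly / (1 + growth)
print(f"Implied weekly average a year earlier: about {implied_last_year:.0f} attacks")
```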

Check Point Software, a cybersecurity firm, warns that attackers are growing more sophisticated and persistent, targeting critical sectors like healthcare, finance, and technology with increasing intensity.

Ransomware activity alone has soared by 126% compared to last year. Attackers are no longer just encrypting files but now also threaten to leak sensitive data unless paid — a tactic known as dual extortion.

Instead of operating as large, centralised gangs, modern ransomware groups are smaller and more agile, often coordinating through dark web forums, making them harder to trace.

The report also notes that cybercriminals are using AI to automate phishing attacks and scan systems for vulnerabilities, allowing them to strike with greater accuracy. Emerging markets remain particularly vulnerable, as they often lack advanced cybersecurity infrastructure.

Check Point urges companies to act decisively by adopting proactive security measures, investing in threat detection and employee training, and implementing real-time monitoring. Waiting for an attack instead of preparing in advance could leave organisations dangerously exposed.


TikTok bans ‘SkinnyTok’ hashtag worldwide

TikTok has globally banned the hashtag ‘SkinnyTok’ after pressure from the French government, which accused the platform of promoting harmful eating habits among young users. The decision comes as part of the platform’s broader effort to improve user safety, particularly around content linked to unhealthy weight loss practices.

The move was hailed as a win by France’s Digital Minister, Clara Chappaz, who led the charge and called it a ‘first collective victory.’ She, along with other top French digital and data protection officials, travelled to Dublin to engage directly with TikTok’s Trust and Safety team. Notably, no representatives from the European Commission were present during these discussions, raising questions about the EU’s role and influence in enforcing digital regulations.

While the European Commission had already opened a broader investigation into TikTok over child protection issues in early 2024 under the Digital Services Act (DSA), it has yet to comment on the SkinnyTok case specifically. Despite this, the Commission says it is still coordinating with French authorities on matters related to DSA enforcement.

The episode has spotlighted national governments’ power in pushing for online safety reforms and the uncertain role of the EU institutions in urgent digital policy actions.


OpenAI turns ChatGPT into AI gateway

OpenAI plans to reinvent ChatGPT as an all-in-one ‘super assistant’ that knows its users and becomes their primary gateway to the internet.

Details emerged from a partly redacted internal strategy document shared during the US government’s antitrust case against Google.

Rather than limiting ChatGPT to existing apps and websites, OpenAI envisions a future where the assistant supports everyday life—from suggesting recipes at home to taking notes at work or guiding users while travelling.

The company says the AI should evolve into a reliable, emotionally intelligent helper capable of handling a variety of personal and professional tasks.

OpenAI also believes hardware will be key to this transformation. It recently acquired io, a start-up founded by former Apple designer Jony Ive, for $6.4 billion to develop AI-powered devices.

The company’s strategy outlines how upcoming models like o2 and o3, alongside tools like multimodality and generative user interfaces, could make ChatGPT capable of taking meaningful action instead of simply offering responses.

The document also reveals OpenAI’s intention to back a regulation requiring tech platforms to allow users to set ChatGPT as their default assistant. Confident in its fast growth, research lead, and independence from ads, the company aims to maintain its advantage through bold decisions, speed, and self-disruption.


WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

WhatsApp now removes the associated quoted message entirely, rather than leaving it in conversation threads, even in group or community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that quoted traces left behind conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.


DeepSeek claims R1 model matches OpenAI

Chinese AI start-up DeepSeek has announced a major update to its R1 reasoning model, claiming it now performs on par with leading systems from OpenAI and Google.

The R1-0528 version, released following the model’s initial launch in January, reportedly surpasses Alibaba’s Qwen3, which debuted only weeks earlier in April.

According to DeepSeek, the upgrade significantly enhances reasoning, coding, and creative writing while cutting hallucination rates by half.

These improvements stem largely from greater computational resources applied after the training phase, allowing the model to outperform domestic rivals in benchmark tests involving maths, logic, and programming.

Unlike many Western competitors, DeepSeek takes an open-source approach. The company recently shared eight GitHub projects detailing methods to optimise computing, communication, and storage efficiency during training.

Its transparency and resource-efficient design have attracted attention, especially since its smaller distilled model rivals Alibaba’s Qwen3-235B while being nearly 30 times lighter.

Major Chinese tech firms, including Tencent, Baidu and ByteDance, plan to integrate R1-0528 into their cloud services for enterprise clients. DeepSeek’s progress signals China’s continued push into globally competitive AI, driven by a young team determined to offer high performance with fewer resources.


NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing that it is more than 376 times the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
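The ratio NSO cites checks out against the figures reported above:

```python
# Verifying the punitive-to-compensatory ratio from the reported awards.
compensatory = 444_719
punitive = 167_250_000

ratio = punitive / compensatory
print(f"punitive/compensatory ratio: roughly {ratio:.0f}:1")
```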

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.


AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide underground map.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.


Gmail adds automatic AI summaries

Gmail on mobile now displays AI-generated summaries by default, marking a shift in how Google’s Gemini assistant operates within inboxes.

Instead of relying on users to request a summary, Gemini will now decide when it’s useful—typically for long email threads with multiple replies—and present a brief summary card at the top of the message.

These summaries update automatically as conversations evolve, aiming to save users from scrolling through lengthy discussions.

The feature is currently limited to mobile devices and available only to users with Google Workspace accounts, Gemini Education add-ons, or a Google One AI Premium subscription. For the moment, summaries are confined to emails written in English.

Google expects the rollout to take around two weeks, though it remains unclear when, or if, the tool will extend to standard Gmail accounts or desktop users.

Anyone wanting to opt out must disable Gmail’s smart features entirely—giving up tools like Smart Compose, Smart Reply, and package tracking in the process.

While some may welcome the convenience, others may feel uneasy about their emails being analysed by large language models, especially since this process could contribute to further training of Google’s AI systems.

The move reflects a wider trend across Google’s products, where AI is becoming central to everyday user experiences.

Additional user controls and privacy commitments

According to Google Workspace, users have some control over the summary cards. They can collapse a Gemini summary card, and it will remain collapsed for that specific email thread.

In the near future, Gmail will introduce enhancements, such as automatically collapsing future summary cards for users who consistently collapse them, until the user chooses to expand them again. For emails that don’t display automatic summaries, Gmail still offers manual options.

Users can tap the ‘summarise this email’ chip at the top of the message or use the Gemini side panel to trigger a summary manually. Google also reaffirms its commitment to data protection and user privacy. All AI features in Gmail adhere to its privacy principles, with more details available on the Privacy Hub.
