The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students draft essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Participants who became accustomed to using AI found it more challenging to write without it later on. This reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MAI-DxO: Microsoft’s new AI diagnoses complex medical cases with 85% accuracy

Microsoft has introduced a new AI-powered diagnostic tool capable of tackling complex medical cases that often baffle expert clinicians. Called MAI-DxO (Microsoft AI Diagnostic Orchestrator), the system has been developed by Microsoft’s AI health unit, founded by DeepMind co-founder Mustafa Suleyman.

When tested on complex real-world cases published in the New England Journal of Medicine, the AI tool correctly diagnosed 85.5% of them. For comparison, experienced doctors managed to solve only 20% of the same cases without external help.

The tool uses five virtual AI agents, each simulating a medical expert with unique roles, such as choosing tests or proposing hypotheses. The approach, dubbed the ‘chain of debate’, allows for step-by-step reasoning in arriving at diagnoses.

Microsoft trained MAI-DxO using 304 case studies and large language models from leading AI companies, including OpenAI, Google, Meta, and xAI. The AI panel mimics a real-world diagnostic team but delivers significantly faster and more accurate outcomes.

Despite the promising results, Microsoft acknowledges that more validation and regulatory clarity are needed before such tools can be used in clinical practice. The company is currently working with health organisations to test the system further.

The aim is not to replace doctors but to ease their workload by offering a reliable assistant for the most challenging cases. Microsoft says MAI-DxO could represent a significant step toward what it calls ‘medical superintelligence’.

Meta launches AI superintelligence lab to compete with rivals

Meta has launched a new division called Meta Superintelligence Labs to accelerate its AI ambitions and close the gap with rivals such as OpenAI and Google.

The lab will be led by Alexandr Wang, former CEO of Scale AI, following Meta’s $14.3 billion investment in the data-labelling company. Former GitHub CEO Nat Friedman and SSI co-founder Daniel Gross will also hold key roles in the initiative.

Mark Zuckerberg announced the new effort in an internal memo, stating that Meta is now focused on developing superintelligent AI systems capable of matching or even outperforming humans. He described this as the beginning of a new era and reaffirmed Meta’s commitment to leading the field.

The lab’s mission is to push AI to a point where it can solve complex tasks more effectively than current models.

To meet these goals, Meta has been aggressively recruiting AI researchers from top competitors. Reports suggest that OpenAI employees have been offered signing bonuses as high as $100 million to join Meta.

New hires include talent from Anthropic and Google, although Meta has reportedly avoided deeper recruitment from Anthropic due to concerns over culture fit.

Meta’s move comes in response to the lukewarm reception of its Llama 4 model and mounting pressure from more advanced AI products released by competitors.

The company hopes that by combining high-level leadership, fresh talent and massive investment, its new lab can deliver breakthrough results and reposition Meta as a serious contender in the race for AGI.

AI rock band’s Spotify rise fuels calls for transparency

A mysterious indie rock band called The Velvet Sundown has shot to popularity on Spotify, and it may be powered by AI. Their debut track, Dust on the Wind, has racked up over 380,000 plays since 20 June and helped attract more than 470,000 monthly listeners.

The song bears a resemblance to the 1977 Kansas hit Dust in the Wind, prompting suspicion from Reddit users. The band’s profile picture and Instagram photos appear AI-generated, while the band members listed — such as ‘Milo Rains’ and ‘Rio Del Mar’ — have no online trace.

Despite the clues, Spotify does not label the group as AI-generated. Their songs are appearing in curated playlists like Discover Weekly. Only Deezer, a French streaming service, has identified The Velvet Sundown as likely created by generative AI models like Suno or Udio.

Deezer began tagging AI music in June and now detects over 20,000 entirely artificial tracks each day. Another AI band, The Devil Inside, has also gained traction. Their song Bones in the River has over 1.6 million plays on Spotify, but lacks credited creators.

On Deezer, the same track is labelled as AI-generated and linked to Hungarian musician László Tamási — a rare human credit for bot-made music. While Deezer takes a transparent approach, Spotify, Apple Music, and Amazon Music have not announced detection systems or labelling plans.

Deezer CEO Alexis Lanternier said AI is ‘not inherently good or bad,’ but called for transparency to protect artist rights and user trust. Legal battles are already underway. US record labels have sued Suno and Udio for mass copyright infringement, though the companies argue it falls under fair use.

As AI-generated music continues to rise, platforms face increasing pressure to inform users and draw more precise lines between human and machine-made art.

OpenAI leadership battles talent exodus

OpenAI is scrambling to retain its top researchers after Meta launched a bold recruitment drive. Chief Research Officer Mark Chen likened the situation to a break-in at home and reassured staff that leadership is actively addressing the issue.

Meta has reportedly offered signing bonuses of up to $100 million to entice senior OpenAI staff. Chen and CEO Sam Altman have responded by reviewing compensation packages and exploring creative retention incentives, assuring fairness in the process.

The recruitment push comes as Meta intensifies efforts in AI, investing heavily in its superintelligence lab and targeting experts from OpenAI, Google DeepMind, and Scale AI.

OpenAI has encouraged staff to resist pressure to make quick decisions, especially during its scheduled recharge week, emphasising the importance of the broader mission over short-term gains.

Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity, rather than distort our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.

Gartner warns that more than 40 percent of agentic AI projects could be cancelled by 2027

More than 40% of agentic AI projects will likely be cancelled by the end of 2027 due to rising costs, limited business value, and poor risk control, according to research firm Gartner.

These cancellations are expected as many early-stage initiatives remain trapped in hype, often misapplied and far from ready for real-world deployment.

Gartner analyst Anushree Verma warned that most agentic AI efforts are still at the proof-of-concept stage. Instead of focusing on scalable production, many companies have been distracted by experimental use cases, underestimating the cost and complexity of full-scale implementation.

A recent poll by Gartner found that only 19% of organisations had made significant investments in agentic AI, while 31% were undecided or waiting.

Much of the current hype is fuelled by vendors engaging in ‘agent washing’ — marketing existing tools like chatbots or RPA under a new agentic label without offering true agentic capabilities.

Out of thousands of vendors, Gartner believes only around 130 offer legitimate agentic solutions. Verma noted that most agentic models today lack the intelligence to deliver strong returns or follow complex instructions independently.

Still, agentic AI holds long-term promise. Gartner expects 15% of daily workplace decisions to be handled autonomously by 2028, up from zero in 2024. Moreover, one-third of enterprise applications will include agentic capabilities by then.

However, to succeed, organisations must reimagine workflows from the ground up, focusing on enterprise-wide productivity instead of isolated task automation.

Meta hires top OpenAI researcher for AI superintelligence push

Meta has reportedly hired AI researcher Trapit Bansal, who previously worked closely with OpenAI co-founder Ilya Sutskever on reinforcement learning and co-created the o1 reasoning model.

Bansal joins Meta’s ambitious superintelligence team, which is focused on further pushing AI reasoning capabilities.

Former Scale AI CEO Alexandr Wang leads the new team, brought in after Meta invested $14.3 billion in the AI data labelling company.

Alongside Bansal, several other notable figures have recently joined, including three OpenAI researchers from Zurich, a former Google DeepMind expert, Jack Rae, and a senior machine learning lead from Sesame AI.

Meta CEO Mark Zuckerberg is accelerating AI recruitment by negotiating with prominent names like former GitHub CEO Nat Friedman and Safe Superintelligence co-founder Daniel Gross.

Despite these aggressive efforts, OpenAI CEO Sam Altman revealed that even $100 million joining bonuses have failed to lure key staff away from his firm.

Zuckerberg has also explored acquiring startups such as Sutskever’s Safe Superintelligence and Perplexity AI, further highlighting Meta’s urgency in catching up in the generative AI race.

BT report shows rise in cyber attacks on UK small firms

A BT report has found that 42% of small businesses in the UK suffered a cyberattack in the past year. The study also revealed that 67% of medium-sized firms were targeted, while many lacked basic security measures or staff training.

Phishing was named the most common threat, affecting 85% of businesses, while ransomware incidents have more than doubled. BT’s new training programme aims to help SMEs take practical steps to reduce risks, covering topics like AI threats, account takeovers and QR code scams.

Tris Morgan from BT highlighted that SMEs face serious risks from cyber attacks, which could threaten their survival. He stressed that security is a necessary foundation and can be achieved without vast resources.

The report follows wider warnings on AI-enabled cyber threats, with other studies showing that few firms feel prepared for these risks. BT’s training is part of its mission to help businesses grow confidently despite digital dangers.
