Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students with essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Typing

Participants who became accustomed to using AI found it more challenging to write without it later on. The reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Hacktivist attacks surge in Iran–Israel tensions

The Iran–Israel conflict has now expanded into cyberspace, with rival hacker groups launching waves of politically driven attacks.

Following Israel’s military operation against Iran, pro-Israeli hackers known as ‘Predatory Sparrow‘ struck Iran’s Sepah Bank, deleting data and causing significant service disruption.

A day later, the same group targeted Nobitex, Iran’s largest crypto exchange, stealing and destroying over $90 million in assets.

Cyber attacks intensified in the days before and after Israeli strikes. According to NSFOCUS, cyberattacks on Iran peaked three days before the military operation, suggesting pre-attack reconnaissance.

In retaliation, pro-Iranian hackers escalated attacks on Israel on 16 June, focusing on government systems, aerospace, and education.

While attacks on Iran have been fewer, Israeli systems have faced over 1,300 attacks in 2025 alone, with 37% of all global hacktivist activity aimed at Israel since the conflict began.

However, analysts note these attacks have been high in volume but limited in impact. Their malware tactics involve evading antivirus software, deleting data, and turning off recovery systems.

NSFOCUS warns that geopolitical tensions are turning hacktivist groups into informal cyber proxies. Though not formally state-backed, these loosely organised actors align closely with national interests.

As traditional defences lag, cybersecurity experts argue that national infrastructure must adopt more strategic, coordinated defence measures instead of fragmented responses, especially during crises and conflicts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kurbalija’s book on internet governance turns 20 with new life at IGF

At the Internet Governance Forum 2025 in Lillestrøm, Norway, Jovan Kurbalija launched the eighth edition of his seminal textbook ‘Introduction to Internet Governance’, marking a return to writing after a nine-year pause. Moderated by Sorina Teleanu of the Diplo, the session unpacked not just the content of the new edition but also the reasoning behind retaining its original title in an era buzzing with buzzwords like ‘AI governance’ and ‘digital governance.’

Kurbalija defended the choice, arguing that most so-called digital issues—from content regulation to cybersecurity—ultimately operate over internet infrastructure, making ‘Internet governance’ the most precise term available.

The updated edition reflects both continuity and adaptation. He introduced ‘Kaizen publishing,’ a new model that replaces the traditional static book cycle with a continuously updated digital platform. Driven by the fast pace of technological change and aided by AI tools trained on his own writing style, the new format ensures the book evolves in real-time with policy and technological developments.

Jovan book launch

The new edition is structured as a seven-floor pyramid tackling 50 key issues rooted in history and future internet governance trajectories. The book also traces digital policy’s deep historical roots.

Kurbalija highlighted how key global internet governance frameworks—such as ICANN, the WTO e-commerce moratorium, and UN cyber initiatives—emerged within months of each other in 1998, a pivotal moment he calls foundational to today’s landscape. He contrasted this historical consistency with recent transformations, identifying four key shifts since 2016: mass data migration to the cloud, COVID-19’s digital acceleration, the move from CPUs to GPUs, and the rise of AI.

Finally, the session tackled the evolving discourse around AI governance. Kurbalija emphasised the need to weigh long-term existential risks against more immediate challenges like educational disruption and concentrated knowledge power. He also critiqued the shift in global policy language—from knowledge-centric to data-driven frameworks—and warned that this transformation might obscure AI’s true nature as a knowledge-based phenomenon.

As geopolitics reasserts itself in digital governance debates, Kurbalija’s updated book aims to ground readers in the enduring principles shaping an increasingly complex landscape.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

AI addresses systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on today’s collective choices that must centre people, fairness, and global solidarity.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

EuroDIG outcomes shared at IGF 2025 session in Norway

At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, strongly focusing on safeguarding human rights in digital policy.

Sandra Hoferichter, who moderated the session in Norway, opened by noting the symbolic significance of EuroDIG’s return to Strasbourg—the city where the forum began in 2008. She emphasised EuroDIG’s unique tradition of issuing “messages” as policy input, which IGF and other regional dialogues later adopted.

Swiss Ambassador Thomas Schneider, President of the EuroDIG Support Association, presented the community’s consolidated contributions to the WSIS+20 review process. “The multistakeholder model isn’t optional—it’s essential,” he said, adding that Europe strongly supports making the Internet Governance Forum a permanent institution rather than one renewed every decade. He called for a transparent and inclusive WSIS+20 process, warning against decisions being shaped behind closed diplomatic doors.

YouthDIG representative Frances Douglas Thomson shared insights from the youth-led sessions at EuroDIG. She described strong debates on digital literacy, particularly around the role of generative AI in schools. ‘Some see AI as a helpful assistant; others fear it diminishes critical thinking,’ she said. Content moderation also sparked division, with some young participants calling for vigorous enforcement against harmful content and others raising concerns about censorship. Common ground emerged around the need for greater algorithmic transparency so users understand how content is curated.

Hans Seeuws, business operations manager at EURid, emphasised the need for infrastructure providers to be heard in policy spaces. He supported calls for concrete action on AI governance and digital rights, stressing the importance of translating dialogue into implementation.

Chetan Sharma from the Data Mission Foundation Trust India questioned the practical impact of governance forums in humanitarian crises. Frances highlighted several EuroDIG sessions that tackled using autonomous weapons, internet shutdowns, and misinformation during conflicts. ‘Dialogue across stakeholders can shift how we understand digital conflict. That’s meaningful change,’ she noted.

A representative from Geneva Macro Labs challenged the panel to explain how internet policy can be effective when many governments lack technical literacy. Schneider replied that civil society, business, and academia must step in when public institutions fall short. ‘Democracy is not self-sustaining—it requires daily effort. The price of neglect is high,’ he cautioned.

Janice Richardson, an expert at the Council of Europe, asked how to widen youth participation. Frances praised YouthDIG’s accessible, bottom-up format and called for increased funding to help young people from underrepresented regions join discussions. ‘The more youth feel heard, the more they stay engaged,’ she said.

As the session closed, Hoferichter reminded attendees of the over 400 applications received for YouthDIG this year. She urged donors to help cover the high travel costs, mainly from Eastern Europe and the Caucasus. ‘Supporting youth in internet governance isn’t charity—it’s a long-term investment in inclusive, global policy,’ she concluded.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI tools at work pose hidden dangers

AI tools are increasingly used in workplaces to enhance productivity but come with significant security risks. Workers may unknowingly breach privacy laws like GDPR or HIPAA by sharing sensitive data with AI platforms, risking legal penalties and job loss.

Experts warn of AI hallucinations where chatbots generate false information, highlighting the need for thorough human review. Bias in AI outputs, stemming from flawed training data or system prompts, can lead to discriminatory decisions and potential lawsuits.

Cyber threats like prompt injection and data poisoning can manipulate AI behaviour, while user error and IP infringement pose further challenges. As AI technology evolves, unknown risks remain a concern, making caution essential when integrating AI into business processes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech’s grip on information sparks urgent debate at IGF 2025 in Norway

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global leaders, tech executives, civil society figures, and academics converged for a high-level session to confront one of the digital age’s most pressing dilemmas: how to protect democratic discourse and human rights amid big tech’s tightening control over the global information space. The session, titled ‘Losing the Information Space?’, tackled the rising threat of disinformation, algorithmic opacity, and the erosion of public trust, all amplified by powerful AI technologies.

Norwegian Minister Lubna Jaffery sounded the alarm, referencing the annulled Romanian presidential election as a stark reminder of how influence operations and AI-driven disinformation campaigns can destabilise democracies. She warned that while platforms have democratised access to expression, they’ve also created fragmented echo chambers and supercharged the spread of propaganda.

Estonia’s Minister of Justice and Digital Affairs Liisa Ly Pakosta echoed the concern, describing how her country faces persistent information warfare—often backed by state actors—and announced Estonia’s rollout of AI-based education to equip youth with digital resilience. The debate revealed deep divides over how to achieve transparency and accountability in tech.

TikTok’s Lisa Hayes defended the company’s moderation efforts and partnerships with fact-checkers, advocating for what she called ‘meaningful transparency’ through accessible tools and reporting. But others, like Reporters Without Borders’ Thibaut Bruttin, demanded structural reform.

He argued platforms should be treated as public utilities, legally obliged to give visibility to trustworthy journalism, and rejected the idea that digital space should remain under the control of private interests. Despite conflicting views on the role of regulation versus collaboration, panellists agreed that the threat of disinformation is real and growing—and no single entity can tackle it alone.

The session closed with calls for stronger international legal frameworks, cross-sector cooperation, and bold action to defend truth, freedom of expression, and democratic integrity in an era where technology’s influence is pervasive and, if unchecked, potentially perilous.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

How ROAMX helps bridge the digital divide

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and stakeholders gathered to assess the progress of UNESCO’s ROAMX framework, a tool for evaluating digital development through the lenses of Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues such as gender equality and sustainability. Since its introduction in 2018, and with the rollout of new second-generation indicators in 2024, ROAMX has helped countries align their digital policies with global standards like the WSIS and Sustainable Development Goals.

Dr Tawfik Jelassi of UNESCO opened the session by highlighting the urgency of inclusive digital transformation, noting that 2.6 billion people remain offline, particularly in lower-income regions.

Brazil and Fiji were presented as case studies for the updated framework. Brazil, the first to implement the revised indicators, showcased improvements in digital public services, but also revealed enduring inequalities—particularly among Black women and rural communities—with limited meaningful connectivity and digital literacy.

Meanwhile, Fiji piloted a capacity-building workshop that exposed serious intergovernmental coordination gaps: despite extensive consultation, most ministries were unaware of their national digital strategy. These findings underscore the need for ongoing engagement across government and civil society to implement effective digital policies truly.

Speakers emphasised that ROAMX is more than just an assessment tool; it offers a full policy lifecycle framework that can inform planning, monitoring, and evaluation. Participants noted that the framework’s adaptability makes it suitable for integration into national and regional digital governance efforts, including Internet Governance Forums.

They also pointed out the acute lack of sex-disaggregated data, which severely hampers effective policy responses to gender-based digital divides, especially in regions like Africa, where women remain underrepresented in both access and leadership roles in tech.

The session concluded with a call for broader adoption of ROAMX as a strategic tool to guide inclusive digital transformation efforts worldwide. Its relevance was affirmed in the context of WSIS+20 and the Global Digital Compact, with panellists agreeing that meaningful, rights-based digital development must be data-driven, inclusive, and participatory to leave no one behind in the digital age.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

MIT study links AI chatbot use to reduced brain activity and learning

A new preprint study from MIT has revealed that using AI chatbots for writing tasks significantly reduces brain activity and impairs memory retention.

The research, led by Dr Nataliya Kosmyna at the MIT Media Lab, involved Boston-area students writing essays under three conditions: unaided, using a search engine, or assisted by OpenAI’s GPT-4o. Participants wore EEG headsets to monitor brain activity throughout.

Results indicated that those relying on AI exhibited the weakest neural connectivity, with up to 55% lower cognitive engagement than the unaided group. Those using search engines showed a moderate drop of up to 48%.

The researchers used Dynamic Directed Transfer Function (dDTF) to assess cognitive load and information flow across brain regions. They found that while the unaided group activated broad neural networks, AI users primarily engaged in procedural tasks with shallow encoding of information.

Participants using GPT-4o also performed worst in recall and perceived ownership of their written work. In follow-up sessions, students previously reliant on AI struggled more when the tool was removed, suggesting diminished internal processing skills.

Meanwhile, those who used their own cognitive skills earlier showed improved performance when later given AI support.

The findings suggest that early AI use in education may hinder deeper learning and critical thinking. Researchers recommend that students first engage in self-driven learning before incorporating AI tools to enhance understanding.

Dr Kosmyna emphasised that while the results are preliminary and not yet peer-reviewed, the study highlights the need for careful consideration of AI’s cognitive impact.

MIT’s team now plans to explore similar effects in coding tasks, studying how AI tools like code generators influence brain function and learning outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!