Cybersecurity vs freedom of expression: IGF 2025 panel calls for balanced, human-centred digital governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts from government, civil society, and the tech industry convened to discuss one of the thorniest challenges of the digital age: how to secure cyberspace without compromising freedom of expression and fundamental human rights. The session, moderated by terrorism survivor and activist Bjørn Ihler, revealed a shared urgency across sectors to move beyond binary thinking and craft nuanced, people-centred approaches to online safety.

Paul Ash, head of the Christchurch Call Foundation, warned against framing regulation and inaction as the only options, urging legislators to build human rights safeguards directly into cybersecurity laws. Echoing him, Mallory Knodel of the Global Encryption Coalition stressed the foundational role of end-to-end encryption, calling it a necessary boundary-setting tool in an era where digital surveillance and content manipulation pose systemic risks. She warned that weakening encryption compromises privacy and invites broader security threats.

Representing the tech industry, Meta’s Cagatay Pekyörür underscored the complexity of moderating content across jurisdictions with over 120 speech-restricting laws. He called for more precise legal definitions, robust procedural safeguards, and a shift toward ‘system-based’ regulatory frameworks that assess platforms’ processes rather than micromanage content.

Meanwhile, Romanian regulator and former MP Pavel Popescu detailed his country’s recent struggles with election-related disinformation and cybercrime, arguing that social media companies must shoulder more responsibility, particularly in responding swiftly to systemic threats like AI-driven scams and coordinated influence operations.

While perspectives diverged on enforcement and regulation, all participants agreed that lasting digital governance requires sustained multistakeholder collaboration grounded in transparency, technical expertise, and respect for human rights. As the digital landscape evolves rapidly under the influence of AI and new forms of online harm, this session underscored that no single entity or policy can succeed alone, and that the stakes for security and democracy have never been higher.

Civil society pushes for digital rights and justice in WSIS+20 review at IGF 2025

At a packed session during Day 0 of the Internet Governance Forum 2025 in Lillestrøm, Norway, civil society leaders gathered to strategise how the upcoming WSIS+20 review can deliver on the promise of digital rights and justice. Organised by the Global Digital Justice Forum and the Global Digital Rights Coalition for WSIS, the brainstorming session brought together voices from across the globe to assess the ‘elements paper’ recently issued by review co-facilitators from Albania and Kenya.

Anna Oosterlinck of ARTICLE 19 opened the session by noting significant gaps in the current draft, especially in its treatment of human rights and multistakeholder governance.

Ellie McDonald of Global Partners Digital, speaking on behalf of the Global Digital Rights Coalition, presented the group’s three strategic pillars: anchoring digital policy in international human rights law, reinforcing multistakeholder governance in line with the São Paulo Multistakeholder Guidelines, and strengthening WSIS institutions like the Internet Governance Forum. She warned that current policy language risks drifting away from established human rights commitments and fails to propose concrete steps for institutional resilience.

Nandini Chami of the Global Digital Justice Forum outlined their campaign’s broader structural agenda, including a call for an integrated human rights framework fit for the digital age, safeguarding the internet as a global commons, ensuring sustainable digital transitions, and creating a fair international digital economy that combats digital colonialism. She stressed the importance of expanding rights protections to include people affected by AI and data practices, even those not directly online.

Zach Lampell from the International Centre for Not-for-Profit Law closed the session with a stark reminder: those who control internet infrastructure hold immense power over how digital rights are exercised. He and others urged participants to provide feedback by 15 July through an open consultation process, emphasising the need for strong, unified civil society input. The organising coalitions committed to publishing a summary paper to advance advocacy ahead of the final WSIS+20 outcome document.

Global digital dialogue opens at IGF 2025 in Norway

The 2025 Internet Governance Forum (IGF) commenced in Lillestrøm, Norway, with a warm welcome from Chengetai Masango, Head of the UN IGF Secretariat. Marking the sixth year of its parliamentary track, the event gathered legislators from countries including Nepal, Lithuania, Spain, Zimbabwe, and Uruguay.

Masango highlighted the growing momentum of parliamentary engagement in global digital governance and emphasised Norway’s deep-rooted value of freedom of expression as a guiding principle for shaping responsible digital futures. In his remarks, Masango praised the unique role of parliamentarians in bridging local realities with global digital policy discussions, underlining the importance of balancing human rights with digital security.

He encouraged continued collaboration, learning, and building upon the IGF’s past efforts, primarily through local leadership and national implementation of ideas born from multistakeholder dialogue. Masango concluded by urging participants to engage in meaningful exchanges and form new partnerships, stressing that their contributions matter far beyond the forum itself.

Andy Richardson from the IGF Secretariat reiterated these themes, noting how parliamentary involvement underscores the urgency and weight of digital policy issues in the legislative realm. He drew attention to the critical intersection of AI and democracy, referencing recent resolutions and efforts to track parliamentary actions worldwide. With over 37 national reports on AI-related legislation already compiled, Richardson stressed the IGF’s ongoing commitment to staying updated and learning from legislators’ diverse experiences.

The opening session closed with an invitation to continue discussions in the day’s first panel, titled ‘Digital Deceit: The Societal Impact of Online Misinformation and Disinformation.’ Simultaneous translations were made available, highlighting the IGF’s inclusive and multilingual approach as it moved into a day of rich, cross-cultural policy conversations.

Parliamentarians at IGF 2025 call for action on information integrity

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to online erosion of public trust.

AI’s disruptive power took centre stage, with speakers citing alarming trends: deepfakes manipulated global election narratives in over a third of national polls in 2024 alone. Experts like Lindsay Gorman from the German Marshall Fund warned of a polluted digital ecosystem where fabricated video and audio now threaten core democratic processes.

UNESCO’s Marjorie Buchser expanded on the concern, noting that generative AI enables manipulation and redefines how people access information, often diverting users from traditional journalism toward context-stripped AI outputs. However, regulation alone was not touted as a panacea.

Instead, panellists promoted ‘democracy-affirming technologies’ that embed transparency, accountability, and human rights at their foundation. The conversation urged greater investment in open, diverse digital ecosystems, particularly those supporting low-resource languages and underrepresented cultures. At the same time, multiple voices called for more equitable research, warning that Western-centric data and governance models skew current efforts.

In the end, a recurring theme echoed across the room: tackling information manipulation is a collective endeavour that demands multistakeholder cooperation. From enforcing technical standards to amplifying independent journalism and bolstering AI literacy, participants called for governments, civil society, and the tech industry to build unified, future-proof solutions that protect democratic integrity while preserving the fundamental right to free expression.

Spyware accountability demands Global South leadership at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East.

Ana Gaitán of R3D Mexico revealed how Mexican military forces routinely deploy spyware to obstruct investigations into abuses like the Ayotzinapa case. Apar Gupta from India’s Internet Freedom Foundation warned of the enduring legacy of colonial-era surveillance laws enabling secret spyware use, while Mohamad Najem of Lebanon’s SMEX explained how post-Arab Spring authoritarianism has fuelled a booming domestic and export market for surveillance tools in the Gulf region. All three pointed to the urgent need for legal reform and international support, noting the failure of courts and institutions to provide effective remedies.

Representing regulatory efforts, Elizabeth Davies of the UK Foreign, Commonwealth and Development Office outlined the Pall Mall Process, a UK-France initiative to create international norms for commercial cyber intrusion tools. Former UN Special Rapporteur David Kaye emphasised that such frameworks must go beyond soft law, calling for export controls, domestic legal safeguards, and litigation to ensure enforcement.

Rima Amin of Meta added a private sector lens, highlighting Meta’s litigation against NSO Group and pledging to reinvest any damages into supporting surveillance victims. Despite emerging international efforts, the panel agreed that meaningful spyware accountability will remain elusive without centring Global South voices, expanding technical and legal capacity, and bridging the North-South knowledge gap.

With spyware abuse expanding faster than regulation, the call from Lillestrøm was clear: democratic protections and digital rights must not be a privilege of geography.

Elon Musk wants Grok AI to rewrite historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Cognitive scientist Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

TikTok ban delayed for the third time by Trump order

US President Donald Trump has announced a 90-day extension for TikTok’s Chinese parent company, ByteDance, to secure a US buyer, effectively postponing a nationwide ban on the popular video-sharing app. The move comes in the wake of a bipartisan law passed in 2024 requiring the platform to be sold to a non-Chinese entity due to national security concerns.

Trump is expected to formalise this decision with an executive order later this week, ensuring the platform remains available to its approximately 170 million American users. White House Press Secretary Karoline Leavitt emphasised that Trump is determined to keep TikTok operational, stating that the president ‘does not want TikTok to go dark.’

The latest extension follows a series of delays since Trump returned to office, including an initial 75-day grace period granted in January and an extension in April when no buyer had emerged. The situation remains unresolved despite optimism from Vice President JD Vance earlier this year that a deal would materialise in time.

President Trump has acknowledged that any sale would likely require Chinese government approval but expressed confidence in reaching a solution, citing a potentially cooperative stance from President Xi Jinping.

Interestingly, while Trump previously sought to ban TikTok during his first term, citing national security risks, he now appears to be more pragmatic. The president himself joined TikTok as a user just over a year ago, underscoring the app’s enduring popularity and the complex political and economic dynamics surrounding its future in the US.

Social media overtakes TV as main news source in the US

Social media and video platforms have officially overtaken traditional television and news websites as the primary way Americans consume news, according to new research from the Reuters Institute. Over half of respondents (54%) now turn to platforms like Facebook, YouTube, and X (formerly Twitter) for their news, surpassing TV (50%) and dedicated news websites or apps (48%).

The study highlights the growing dominance of personality-driven news, particularly through social video, with figures like podcaster Joe Rogan reaching nearly a quarter of the population weekly. That shift poses serious challenges for traditional media outlets as more users gravitate toward influencers and creators who present news in a casual or partisan style.

There is concern, however, about the accuracy of this new media landscape. Nearly half of global respondents identify online influencers as major sources of false or misleading information, on par with politicians.

At the same time, populist leaders are increasingly using sympathetic online hosts to bypass tough questions from mainstream journalists, often spreading unchecked narratives. The report also notes a rise in AI tools for news consumption, especially among Gen Z, though public trust in AI’s ability to deliver reliable news remains low.

Meanwhile, alternative platforms such as Threads and Mastodon have struggled to gain traction. Even as user habits change, one constant remains: people still value reliable news sources, even if they turn to them less often.

Cognitive offloading and the future of the mind in the AI age

AI reshapes work and learning

The rapid advancement of AI is bringing a range of new phenomena to light in contemporary societies.

Integrating AI-driven tools into a broad spectrum of professional tasks has proven beneficial in many respects, particularly in easing the cognitive and physical burdens traditionally placed on human labour.

By automating routine processes and enhancing decision-making capabilities, AI has the potential to significantly improve efficiency and productivity across various sectors.

In response to these accelerating technological changes, a growing number of nations are prioritising the integration of AI technologies into their education systems to ensure students are prepared for future societal and workforce transformations.

China advances AI education for youth

China has released two landmark policy documents aimed at integrating AI education systematically into the national curriculum for primary and secondary schools.

The initiative not only reflects the country’s long-term strategic vision for educational transformation but also seeks to position China at the forefront of global AI literacy and talent development.

The two guidelines, formally titled the Guidelines for AI General Education in Primary and Secondary Schools and the Guidelines for the Use of Generative AI in Primary and Secondary Schools, represent a scientific and systemic approach to cultivating AI competencies among school-aged children.

Their release marks a milestone in the development of a tiered, progressive AI education system, with carefully delineated age-appropriate objectives and ethical safeguards for both students and educators.

The USA expands AI learning in schools

In April, the US government outlined a structured national policy to integrate AI literacy into every stage of the education system.

By creating a dedicated federal task force, the administration intends to coordinate efforts across departments to promote early and equitable access to AI education.

Instead of isolating AI instruction within specialised fields, the initiative seeks to embed AI concepts across all learning pathways—from primary education to lifelong learning.

The plan includes the creation of a nationwide AI challenge to inspire innovation among students and educators, showcasing how AI can address real-world problems.

The policy also prioritises training teachers to understand and use AI tools, instead of relying solely on traditional teaching methods. It supports professional development so educators can incorporate AI into their lessons and reduce administrative burdens.

The strategy encourages public-private partnerships, using industry expertise and existing federal resources to make AI teaching materials widely accessible.

European Commission supports safe AI use

As AI becomes more common in classrooms around the globe, educators must understand not only how to use it effectively but also how to apply it ethically.

Rather than introducing AI tools without guidance or reflection, the European Commission has provided ethical guidelines to help teachers use AI and data responsibly in education.

Published in 2022 and developed with input from educators and AI experts, the EU guidelines are intended primarily for primary and secondary teachers who have little or no prior experience with AI.

Instead of focusing on technical complexity, the guidelines aim to raise awareness about how AI can support teaching and learning, highlight the risks involved, and promote ethical decision-making.

The guidelines explain how AI can be used in schools, encourage safe and informed use by both teachers and students, and help educators consider the ethical foundations of any digital tools they adopt.

Rather than relying on unexamined technology, they support thoughtful implementation by offering practical questions and advice for adapting AI to various educational goals.

AI tools may undermine human thinking

However, technological augmentation is not without drawbacks. Concerns have been raised regarding the potential for job displacement, increased dependency on digital systems, and the gradual erosion of certain human skills.

As such, while AI offers promising opportunities for enhancing the modern workplace, it simultaneously introduces complex challenges that must be critically examined and responsibly addressed.

One significant challenge that must be addressed in the context of increasing reliance on AI is the phenomenon known as cognitive offloading. But what exactly does this term entail?

What happens when we offload thinking?

Cognitive offloading refers to the practice of using physical actions or external tools to modify the information processing demands of a task, with the aim of reducing the cognitive load on an individual.

In essence, it involves transferring certain mental functions—such as memory, calculation, or decision-making—to outside resources like digital devices, written notes, or structured frameworks.

While this strategy can enhance efficiency and performance, it also raises concerns about long-term cognitive development, dependency on technological aids, and the potential degradation of innate mental capacities.

How AI may be weakening critical thinking

A study led by Dr Michael Gerlich, Head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, and published in the journal Societies raises serious concerns about the cognitive consequences of AI augmentation in various aspects of life.

The study suggests that frequent use of AI tools may be weakening individuals’ capacity for critical thinking, a skill considered fundamental to independent reasoning, problem-solving, and informed decision-making.

More specifically, Dr Gerlich adopted a mixed-methods approach, combining quantitative survey data from 666 participants with qualitative interviews involving 50 individuals.

Participants were drawn from diverse age groups and educational backgrounds and were assessed on their frequency of AI tool use, their tendency to offload cognitive tasks, and their critical thinking performance.

The study employed both self-reported and performance-based measures of critical thinking, alongside statistical analyses and machine learning models, such as random forest regression, to identify key factors influencing cognitive performance.
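The study’s dataset and analysis code are not public, but a minimal sketch can illustrate what a random forest regression of this kind does: it predicts an outcome (here, a critical thinking score) from candidate factors and ranks how strongly each factor contributes to the prediction. Everything below, from the file name to the column names, is a hypothetical placeholder rather than the study’s actual pipeline.

```python
# Illustrative sketch only: hypothetical file and column names,
# not the study's actual data or pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

features = [
    "ai_tool_use_frequency",       # how often participants use AI tools
    "cognitive_offloading_score",  # tendency to delegate mental tasks
    "age",
    "education_level",
]
target = "critical_thinking_score"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)

# Fit the forest and check generalisation on held-out data (R^2).
model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")

# Feature importances show which factors drive the model's predictions,
# which is the sense in which such models 'identify key factors'.
for name, importance in sorted(
    zip(features, model.feature_importances_), key=lambda p: -p[1]
):
    print(f"{name}: {importance:.3f}")
```

Importance rankings of this kind indicate predictive association rather than causation, which is consistent with the study’s own caveats discussed below.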

Younger users, who rely more on AI, think less critically

The findings revealed a strong negative correlation between frequent AI use and critical thinking abilities. Individuals who reported heavy reliance on AI tools—whether for quick answers, summarised explanations, or algorithmic recommendations—scored lower on assessments of critical thinking.

The effect was particularly pronounced among younger users aged 17 to 25, who reported the highest levels of cognitive offloading and showed the weakest performance in critical thinking tasks.

In contrast, older participants (aged 46 and above) demonstrated stronger critical thinking skills and were less inclined to delegate mental effort to AI.

Higher education strengthens critical thinking

The data also indicated that educational attainment served as a protective factor: those with higher education levels consistently exhibited more robust critical thinking abilities, regardless of their AI usage levels.

These findings suggest that formal education may equip individuals with better tools for critically engaging with digital information rather than uncritically accepting AI-generated responses.

While the study does not establish direct causation, the strength of the correlations and the consistency across quantitative and qualitative data suggest that AI usage may indeed be contributing to a gradual decline in cognitive independence.

However, in his study, Gerlich also notes the possibility of reverse causality—individuals with weaker critical thinking skills may be more inclined to rely on AI tools in the first place.

Offloading also reduces information retention

While cognitive offloading can enhance immediate task performance, it often comes at the cost of reduced long-term memory retention, as other studies show.

The trade-off has been most prominently illustrated in experiments such as the Pattern Copy Task, in which participants asked to reproduce a pattern typically choose to refer back to the original repeatedly rather than commit it to memory.

Even when such behaviours introduce additional time or effort (e.g., physically moving between stations), the majority of participants opt to offload, suggesting a strong preference for minimising cognitive strain.

These findings underscore the human tendency to prioritise efficiency over internalisation, especially under conditions of high cognitive demand.

The tendency to offload raises crucial questions about the cognitive and educational consequences of extended reliance on external aids. On the one hand, offloading can free up mental resources, allowing individuals to focus on higher-order problem-solving or multitasking.

On the other hand, it may foster a kind of cognitive dependency, weakening internal memory traces and diminishing opportunities for deep engagement with information.

Some researchers frame cognitive offloading not as a failure of memory or attention but as a reconfiguration of cognitive architecture, a process that may be adaptive rather than detrimental.

This perspective remains controversial, however, especially in light of findings that frequent offloading can impair retention, transfer of learning, and critical thinking, as Gerlich’s study argues.

If students, for example, continually rely on digital devices to recall facts or solve problems, they may fail to develop the robust mental models necessary for flexible reasoning and conceptual understanding.

The mind may extend beyond the brain

This tension has also sparked debate among cognitive scientists and philosophers, particularly in light of the extended mind hypothesis.

Contrary to the traditional view that cognition is confined to the brain, the extended mind theory argues that cognitive processes often rely on, and are distributed across, tools, environments, and social structures.

As digital technologies become increasingly embedded in daily life, this hypothesis raises profound questions about human identity, cognition, and agency.

At the core of the extended mind thesis lies a deceptively simple question: Where does the mind stop, and the rest of the world begin?

Drawing an analogy to prosthetics (external objects that functionally become part of the body), the philosophers Andy Clark and David Chalmers, who first proposed the extended mind thesis, argue that cognitive tools such as notebooks, smartphones, and sketchpads can become integrated components of our mental system.

These tools do not merely support cognition; they constitute it when used in a seamless, functionally integrated manner. This conceptual shift has redefined thinking not as a brain-bound process but as a dynamic interaction between mind, body, and world.

Balancing AI and human intelligence

In conclusion, cognitive offloading represents a powerful mechanism of modern cognition, one that allows individuals to adapt to complex environments by distributing mental load.

However, its long-term effects on memory, learning, and problem-solving remain a subject of active investigation. Rather than treating offloading as inherently beneficial or harmful, future research and practice should seek to balance its use, leveraging its strengths while mitigating its costs.

Ultimately, we, as educators, policymakers, and technologists, must shape the future of learning and work while confronting a central tension: how to harness the benefits of AI without compromising the faculties of critical thought, memory, and independent judgment that define human intelligence.

Diplo joins Brazil’s Internet Forum and celebrates CGI.br’s 30 years

Diplo actively participated in Brazil’s Internet Forum (FIB), held from 26 to 30 May and hosted by the Brazilian Internet Steering Committee (CGI.br). The event brought together key stakeholders from across sectors to discuss pressing issues in digital governance.

Representing Diplo, Marilia Maciel contributed to critical discussions on state roles and multistakeholder collaboration in managing cloud infrastructures and defending digital sovereignty. She also offered insights during the main session on setting principles for regulating digital platforms.

Maciel’s contributions were recognised with the ‘Destaques em Governança da Internet no Brasil’ (Highlights in Internet Governance in Brazil) award, one of the most respected acknowledgements of excellence in internet governance in the country. The award highlights individuals and organisations that have made significant advances in promoting inclusive and effective digital policy in Brazil.

The event also marked a major milestone for CGI.br: its 30th anniversary. Diplo joined in celebrating the committee’s three decades of leadership in internet governance. CGI.br’s pioneering approach to multistakeholder governance has served not only as a national model but also as a global inspiration for collaborative digital policy-making.
