Big Tech’s grip on information sparks urgent debate at IGF 2025 in Norway

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global leaders, tech executives, civil society figures, and academics converged for a high-level session to confront one of the digital age’s most pressing dilemmas: how to protect democratic discourse and human rights amid Big Tech’s tightening control over the global information space. The session, titled ‘Losing the Information Space?’, tackled the rising threat of disinformation, algorithmic opacity, and the erosion of public trust, all amplified by powerful AI technologies.

Norwegian Minister Lubna Jaffery sounded the alarm, referencing the annulled Romanian presidential election as a stark reminder of how influence operations and AI-driven disinformation campaigns can destabilise democracies. She warned that while platforms have democratised access to expression, they’ve also created fragmented echo chambers and supercharged the spread of propaganda.

Estonia’s Minister of Justice and Digital Affairs Liisa Ly Pakosta echoed the concern, describing how her country faces persistent information warfare—often backed by state actors—and announced Estonia’s rollout of AI-based education to equip youth with digital resilience. The debate revealed deep divides over how to achieve transparency and accountability in tech.

TikTok’s Lisa Hayes defended the company’s moderation efforts and partnerships with fact-checkers, advocating for what she called ‘meaningful transparency’ through accessible tools and reporting. But others, like Reporters Without Borders’ Thibaut Bruttin, demanded structural reform.

He argued platforms should be treated as public utilities, legally obliged to give visibility to trustworthy journalism, and rejected the idea that digital space should remain under the control of private interests. Despite conflicting views on the role of regulation versus collaboration, panellists agreed that the threat of disinformation is real and growing—and no single entity can tackle it alone.

The session closed with calls for stronger international legal frameworks, cross-sector cooperation, and bold action to defend truth, freedom of expression, and democratic integrity in an era where technology’s influence is pervasive and, if unchecked, potentially perilous.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

How ROAMX helps bridge the digital divide

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and stakeholders gathered to assess the progress of UNESCO’s ROAMX framework, a tool for evaluating digital development through the lenses of Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues such as gender equality and sustainability. Since its introduction in 2018, and with the rollout of new second-generation indicators in 2024, ROAMX has helped countries align their digital policies with global standards like the WSIS and Sustainable Development Goals.

Dr Tawfik Jelassi of UNESCO opened the session by highlighting the urgency of inclusive digital transformation, noting that 2.6 billion people remain offline, particularly in lower-income regions.

Brazil and Fiji were presented as case studies for the updated framework. Brazil, the first to implement the revised indicators, showcased improvements in digital public services, but also revealed enduring inequalities—particularly among Black women and rural communities—with limited meaningful connectivity and digital literacy.

Meanwhile, Fiji piloted a capacity-building workshop that exposed serious intergovernmental coordination gaps: despite extensive consultation, most ministries were unaware of their national digital strategy. These findings underscore the need for ongoing engagement across government and civil society to implement digital policies effectively.

Speakers emphasised that ROAMX is more than just an assessment tool; it offers a full policy lifecycle framework that can inform planning, monitoring, and evaluation. Participants noted that the framework’s adaptability makes it suitable for integration into national and regional digital governance efforts, including Internet Governance Forums.

They also pointed out the acute lack of sex-disaggregated data, which severely hampers effective policy responses to gender-based digital divides, especially in regions like Africa, where women remain underrepresented in both access and leadership roles in tech.

The session concluded with a call for broader adoption of ROAMX as a strategic tool to guide inclusive digital transformation efforts worldwide. Its relevance was affirmed in the context of WSIS+20 and the Global Digital Compact, with panellists agreeing that meaningful, rights-based digital development must be data-driven, inclusive, and participatory to leave no one behind in the digital age.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

MIT study links AI chatbot use to reduced brain activity and learning

A new preprint study from MIT suggests that using AI chatbots for writing tasks is associated with significantly reduced brain activity and weaker memory retention.

The research, led by Dr Nataliya Kosmyna at the MIT Media Lab, involved Boston-area students writing essays under three conditions: unaided, using a search engine, or assisted by OpenAI’s GPT-4o. Participants wore EEG headsets to monitor brain activity throughout.

Results indicated that those relying on AI exhibited the weakest neural connectivity, with up to 55% lower cognitive engagement than the unaided group. Those using search engines showed a moderate drop of up to 48%.

The researchers used Dynamic Directed Transfer Function (dDTF) to assess cognitive load and information flow across brain regions. They found that while the unaided group activated broad neural networks, AI users primarily engaged in procedural tasks with shallow encoding of information.
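For readers curious about what such a connectivity measure computes, below is a minimal, illustrative Python sketch of the closely related Directed Transfer Function (DTF), estimated from a multivariate autoregressive model of the EEG channels; the dDTF used in the study additionally weights this quantity by partial coherence, a step omitted here. This is not the MIT team’s pipeline, and the channel count, model order, and synthetic signals are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only (not the MIT study's code): directed EEG connectivity
# via the Directed Transfer Function (DTF). The dDTF reported in the study
# additionally weights DTF by partial coherence, which is omitted here.
import numpy as np

def fit_mvar(x, p):
    """Least-squares fit of an order-p multivariate autoregressive model.
    x: (n_channels, n_samples). Returns coefficients A of shape (p, n, n)."""
    n, T = x.shape
    Y = x[:, p:].T                                                  # targets, (T-p, n)
    Z = np.hstack([x[:, p - k:T - k].T for k in range(1, p + 1)])   # lagged predictors
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                       # (n*p, n)
    return B.T.reshape(n, p, n).transpose(1, 0, 2)                  # A[k-1] is the lag-k matrix

def dtf(A, freqs, fs):
    """DTF from MVAR coefficients. Returns (n_freqs, n, n); [f, i, j] = inflow j -> i."""
    p, n, _ = A.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(1, p + 1):
            Af -= A[k - 1] * np.exp(-2j * np.pi * f * k / fs)
        H = np.linalg.inv(Af)                                       # spectral transfer matrix
        mag = np.abs(H)
        out[fi] = mag / np.sqrt((mag ** 2).sum(axis=1, keepdims=True))  # row-normalise
    return out

# Synthetic two-channel "EEG" where channel 0 drives channel 1 at lag 1
rng = np.random.default_rng(0)
fs = 256
x = rng.standard_normal((2, 4096))
x[1, 1:] += 0.6 * x[0, :-1]
A = fit_mvar(x, p=5)
D = dtf(A, freqs=np.arange(1.0, 40.0), fs=fs)
print(D.mean(axis=0))   # entry [1, 0] (flow 0 -> 1) should clearly exceed [0, 1]
```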

Participants using GPT-4o also performed worst in recall and perceived ownership of their written work. In follow-up sessions, students previously reliant on AI struggled more when the tool was removed, suggesting diminished internal processing skills.

Meanwhile, those who used their own cognitive skills earlier showed improved performance when later given AI support.

The findings suggest that early AI use in education may hinder deeper learning and critical thinking. Researchers recommend that students first engage in self-driven learning before incorporating AI tools to enhance understanding.

Dr Kosmyna emphasised that while the results are preliminary and not yet peer-reviewed, the study highlights the need for careful consideration of AI’s cognitive impact.

MIT’s team now plans to explore similar effects in coding tasks, studying how AI tools like code generators influence brain function and learning outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MSU launches first robotics and AI degree programs in Minnesota

Minnesota State University is set to break new ground this fall by launching two pioneering academic programs in robotics and AI. The university will introduce the state’s only undergraduate robotics engineering degree and the first graduate-level AI program within the Minnesota State system.

With these offerings, MSU aims to meet the fast-growing industry demand for skilled professionals in these cutting-edge fields. The programs have already drawn significant interest, with 13 students applying for the AI master’s and more expected in both tracks.

MSU officials say the curriculum combines strong theoretical foundations with hands-on learning to prepare students for careers in sectors like agriculture, healthcare, finance, construction, and manufacturing. Students will engage in real-world projects, building and deploying AI and robotics solutions while exploring ethical and societal implications.

University leaders emphasise that these programs are tailored to the needs of Minnesota’s economy, including a high concentration of Fortune 500 companies and a growing base of smaller firms eager to adopt AI technologies. Robotics also enjoys strong interest at the high school level, and MSU hopes to offer local students an in-state option for further study, competing with institutions in neighbouring states.

Why does it matter?

According to faculty, graduates of these programs will be well-positioned in the job market. The university sees the initiative as part of its broader mission to deliver education aligned with emerging technological trends and societal needs, ensuring Minnesota’s workforce remains competitive in an increasingly automated and AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diplo empowers Armenian civil society on digital issues

A new round of training sessions has been launched in Armenia to strengthen civil society’s understanding of digital governance. The initiative, which began on 12 June, brings together NGO representatives from both the regions and the capital to deepen their knowledge of crucial digital topics, including internet governance, AI, and digital rights.

The training program combines online and offline components, aiming to equip participants with the tools needed to actively shape the digital future of Armenia. By increasing the digital competence of civil society actors, the program aspires to promote broader democratic engagement and more informed contributions to policy discussions in the digital space.

The educational initiative is being carried out by Diplo as part of the ‘Digital Democracy for ALL’ measure by GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit), in close cooperation with several regional GIZ projects that focus on civil society and public administration reform in Eastern Partnership countries. The sessions have been praised for their depth and impact, with particular appreciation extended to Angela Saghatelyan for her leadership, and to Diplo’s experts Vladimir Radunovic, Katarina Bojovic, and Marília Maciel for their contributions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Switzerland’s unique AI path: Blending innovation, governance, and local empowerment

In his recent blog post ‘Advancing Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and Communal subsidiarity,’ Jovan Kurbalija proposes a distinctive roadmap for Switzerland to navigate the rapidly evolving landscape of AI. Rather than mimicking the AI power plays of the United States or China, Kurbalija argues that Switzerland can lead by integrating three national strengths: Zurich’s thriving innovation ecosystem, Geneva’s global leadership in governance, and the country’s foundational principle of subsidiarity rooted in local decision-making.

Zurich, already a global tech hub, is positioned to drive cutting-edge development through its academic excellence and robust entrepreneurial culture. Institutions like ETH Zurich and the presence of major tech firms provide a fertile ground for collaborations that turn research into practical solutions.

With AI tools becoming increasingly accessible, Kurbalija emphasises that success now depends on how societies harness the interplay of human and machine intelligence—a field where Switzerland’s education and apprenticeship systems give it a competitive edge. Meanwhile, Geneva is called upon to spearhead balanced international governance and standard-setting for AI.

Kurbalija stresses that AI policy must go beyond abstract discussions and address real-world issues—health, education, the environment—by embedding AI tools in global institutions and negotiations. He notes that Geneva’s experience in multilateral diplomacy and technical standardisation offers a strong foundation for shaping ethical, inclusive AI frameworks.

The third pillar—subsidiarity—empowers Swiss cantons and communities to develop AI that reflects local values and needs. By supporting grassroots innovation through mini-grants, reimagining libraries as AI learning hubs, and embedding AI literacy from primary school to professional training, Switzerland can build an AI model that is democratic and inclusive.

Why does it matter?

Kurbalija’s call to action is clear: with its tools, talent, and traditions aligned, Switzerland must act now to chart a future where AI serves society, not the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating crisis leaves teachers in despair

Teachers across the US are growing alarmed by widespread student use of AI for assignments, calling it a crisis that undermines education itself. Some professors report that students now rely on AI for everything from note-taking to essay writing, leaving educators questioning the future of learning.

The fear of false accusations is rising among honest students, with some recording their screens to prove their work is genuine. Detection tools often misfire, further complicating efforts to distinguish real effort from AI assistance.

While some argue for banning tech and returning to traditional classroom methods, others suggest rethinking US education entirely. Rather than fighting AI, some believe it offers a chance to re-engage students by giving them meaningful work they want to do.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support, rather than replace, teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. However, screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots such as Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages indicating that exam-relevant features had been temporarily suspended to ensure fairness.

China’s ‘gaokao’ highlights the country’s balanced approach to AI: AI education is promoted from a young age, with compulsory instruction arriving in Beijing schools this autumn, while authorities firmly assert that the technology is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!