A unified call for a stronger digital future at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global stakeholders converged to shape the future of digital governance by aligning the Internet Governance Forum (IGF) with the World Summit on the Information Society (WSIS) Plus 20 review and the Global Digital Compact (GDC) follow-up. Moderated by Yoichi Iida, former Vice Minister at Japan’s Ministry of Internal Affairs and Communications, the session featured high-level representatives from governments, international organisations, the business sector, and youth networks, all calling for a stronger, more inclusive, better-resourced IGF.

William Lee, WSIS Plus 20 Policy Lead for the Australian Government, emphasised the need for sustainable funding, tighter integration between global and national IGF processes, and the creation of ‘communities of practice.’ Philipp Schulte from Germany’s Ministry of Education, Digital Transformation and Government Modernisation echoed these goals, adding proposals such as appointing an IGF director and establishing an informal multistakeholder sounding board.

The European Union’s unified stance also prioritised long-term mandate renewal and structural support for inclusive participation. Speaking online, Gitanjali Sah, Strategy and Policy Coordinator at the International Telecommunication Union (ITU), argued that WSIS frameworks already offer the tools to implement GDC goals, while stressing the urgency of addressing global connectivity gaps.

Maarit Palovirta, Deputy Director General at Connect Europe, represented the business sector, lauding the IGF as an accessible forum for private sector engagement and advocating for continuity and simplicity in governance processes. Representing over 40 youth IGFs globally, Murillo Salvador emphasised youth inclusion, digital literacy, online well-being, and co-ownership in policymaking as core pillars for future success.

Across all groups, there was strong agreement on the urgency of bridging digital divides, supporting grassroots voices, and building a resilient, inclusive, and forward-looking IGF. The shared sentiment was clear: to ensure digital governance reflects the needs of all, the IGF must evolve boldly, inclusively, and collaboratively.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Cybersecurity vs freedom of expression: IGF 2025 panel calls for balanced, human-centred digital governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts from government, civil society, and the tech industry convened to discuss one of the thorniest challenges of the digital age: how to secure cyberspace without compromising freedom of expression and fundamental human rights. The session, moderated by terrorism survivor and activist Bjørn Ihler, revealed a shared urgency across sectors to move beyond binary thinking and craft nuanced, people-centred approaches to online safety.

Paul Ash, head of the Christchurch Call Foundation, warned against framing regulation and inaction as the only options, urging legislators to build human rights safeguards directly into cybersecurity laws. Echoing him, Mallory Knodel of the Global Encryption Coalition stressed the foundational role of end-to-end encryption, calling it a necessary boundary-setting tool in an era where digital surveillance and content manipulation pose systemic risks. She warned that weakening encryption compromises privacy and invites broader security threats.

Representing the tech industry, Meta’s Cagatay Pekyrour underscored the complexity of moderating content across jurisdictions with over 120 speech-restricting laws. He called for more precise legal definitions, robust procedural safeguards, and a shift toward ‘system-based’ regulatory frameworks that assess platforms’ processes rather than micromanage content.

Meanwhile, Romanian regulator and former MP Pavel Popescu detailed his country’s recent struggles with election-related disinformation and cybercrime, arguing that social media companies must shoulder more responsibility, particularly in responding swiftly to systemic threats like AI-driven scams and coordinated influence operations.

While perspectives diverged on enforcement and regulation, all participants agreed that lasting digital governance requires sustained multistakeholder collaboration grounded in transparency, technical expertise, and respect for human rights. As the digital landscape evolves rapidly under the influence of AI and new forms of online harm, this session underscored that no single entity or policy can succeed alone, and that the stakes for security and democracy have never been higher.

Global digital dialogue opens at IGF 2025 in Norway

The 2025 Internet Governance Forum (IGF) commenced in Lillestrøm, Norway, with a warm welcome from Chengetai Masango, Head of the UN IGF Secretariat. Marking the sixth year of its parliamentary track, the event gathered legislators from across the globe, including nations such as Nepal, Lithuania, Spain, Zimbabwe, and Uruguay.

Masango highlighted the growing momentum of parliamentary engagement in global digital governance and emphasised Norway’s deep-rooted value of freedom of expression as a guiding principle for shaping responsible digital futures. In his remarks, Masango praised the unique role of parliamentarians in bridging local realities with global digital policy discussions, underlining the importance of balancing human rights with digital security.

He encouraged continued collaboration, learning, and building upon the IGF’s past efforts, primarily through local leadership and national implementation of ideas born from multistakeholder dialogue. Masango concluded by urging participants to engage in meaningful exchanges and form new partnerships, stressing that their contributions matter far beyond the forum itself.

Andy Richardson from the IGF Secretariat reiterated these themes, noting how parliamentary involvement underscores the urgency and weight of digital policy issues in the legislative realm. He drew attention to the critical intersection of AI and democracy, referencing recent resolutions and efforts to track parliamentary actions worldwide. With over 37 national reports on AI-related legislation already compiled, Richardson stressed the IGF’s ongoing commitment to staying updated and learning from legislators’ diverse experiences.

The opening session closed with an invitation to continue discussions in the day’s first panel, titled ‘Digital Deceit: The Societal Impact of Online Misinformation and Disinformation.’ Simultaneous translations were made available, highlighting the IGF’s inclusive and multilingual approach as it moved into a day of rich, cross-cultural policy conversations.

Parliamentarians at IGF 2025 call for action on information integrity

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to online erosion of public trust.

AI’s disruptive power took centre stage, with speakers citing alarming trends, such as deepfakes that manipulated election narratives in over a third of national polls in 2024 alone. Experts like Lindsay Gorman from the German Marshall Fund warned of a polluted digital ecosystem where fabricated video and audio now threaten core democratic processes.

UNESCO’s Marjorie Buchser expanded the concern, noting that generative AI enables manipulation and redefines how people access information, often diverting users from traditional journalism toward context-stripped AI outputs. However, regulation alone was not touted as a panacea.

Instead, panellists promoted ‘democracy-affirming technologies’ that embed transparency, accountability, and human rights at their foundation. The conversation urged greater investment in open, diverse digital ecosystems, particularly those supporting low-resource languages and underrepresented cultures. At the same time, multiple voices called for more equitable research, warning that Western-centric data and governance models skew current efforts.

In the end, a recurring theme echoed across the room: tackling information manipulation is a collective endeavour that demands multistakeholder cooperation. From enforcing technical standards to amplifying independent journalism and bolstering AI literacy, participants called for governments, civil society, and the tech industry to build unified, future-proof solutions that protect democratic integrity while preserving the fundamental right to free expression.

Spyware accountability demands Global South leadership at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East.

Ana Gaitán of R3D Mexico revealed how Mexican military forces routinely deploy spyware to obstruct investigations into abuses like the Ayotzinapa case. Apar Gupta from India’s Internet Freedom Foundation warned of the enduring legacy of colonial surveillance laws enabling secret spyware use, while Mohamad Najem of Lebanon’s SMEX explained how post-Arab Spring authoritarianism has fuelled a booming domestic and export market for surveillance tools in the Gulf region. All three pointed to the urgent need for legal reform and international support, noting the failure of courts and institutions to provide effective remedies.

Representing regulatory efforts, Elizabeth Davies of the UK Foreign, Commonwealth and Development Office outlined the Pall Mall Process, a UK-France initiative to create international norms for commercial cyber intrusion tools. Former UN Special Rapporteur David Kaye emphasised that such frameworks must go beyond soft law, calling for export controls, domestic legal safeguards, and litigation to ensure enforcement.

Rima Amin of Meta added a private sector lens, highlighting Meta’s litigation against NSO Group and pledging to reinvest any damages into supporting surveillance victims. Despite emerging international efforts, the panel agreed that meaningful spyware accountability will remain elusive without centring Global South voices, expanding technical and legal capacity, and bridging the North-South knowledge gap.

With spyware abuse expanding faster than regulation, the call from Lillestrøm was clear: democratic protections and digital rights must not be a privilege of geography.

WGIG reunion sparks calls for reform at IGF 2025 in Norway

At the Internet Governance Forum (IGF) 2025 in Lillestrøm, Norway, a reunion of the original Working Group on Internet Governance (WGIG) marked a significant moment of reflection and reckoning for global digital governance. Commemorating the 20th anniversary of WGIG’s formation, the session brought together pioneers of the multistakeholder model that reshaped internet policy discussions during the World Summit on the Information Society (WSIS).

Moderated by Markus Kummer and organised by William J. Drake, the panel featured original WGIG members, including Ayesha Hassan, Raul Echeberria, Wolfgang Kleinwächter, Avri Doria, Juan Fernandez, and Jovan Kurbalija, with remote contributions from Alejandro Pisanty, Carlos Afonso, Vittorio Bertola, Baher Esmat, and others. While celebrating their achievements, speakers did not shy away from blunt assessments of the IGF’s present state and future direction.

Speakers universally praised WGIG’s groundbreaking work in legitimising multistakeholderism within the UN system. The group’s broad, inclusive definition of internet governance—encompassing technical infrastructure and social and economic policies—was credited for transforming how global internet issues are addressed.

Participants emphasised the group’s unique working methodology, which prioritised transparency, pluralism, and consensus-building without erasing legitimate disagreements. Many argued that these practices remain instructive amid today’s fragmented digital governance landscape.

However, as the conversation shifted from legacy to present-day performance, participants voiced deep concerns about the IGF’s limitations. Despite successes in capacity-building and agenda-setting, the forum was criticised for its failure to tackle controversial issues like surveillance, monopolies, and platform accountability.

Jovan Kurbalija, Executive Director of Diplo

Speakers such as Vittorio Bertola and Avri Doria lamented its increasingly top-down character, while Nandini Chami and Anriette Esterhuysen questioned the IGF’s relevance and inclusiveness in the face of growing power imbalances. Some, including Bertrand de la Chapelle and Jovan Kurbalija, proposed bold reforms, including establishing a new working group to address the interlinked challenges of AI, data governance, and digital justice.

The session closed on a forward-looking note, urging the IGF community to recapture WGIG’s original spirit of collaborative innovation. As emerging technologies raise the stakes for global cooperation, participants agreed that internet governance must evolve—not only to reflect new realities but to stay true to the inclusive, democratic ideals that defined its founding two decades ago.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

LinkedIn has seen explosive growth in AI-related job demand and skills despite the hesitation around AI-assisted writing. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees AI prompts, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.
