Global digital dialogue opens at IGF 2025 in Norway

The 2025 Internet Governance Forum (IGF) commenced in Lillestrøm, Norway, with a warm welcome from Chengetai Masango, Head of the UN IGF Secretariat. Marking the sixth year of its parliamentary track, the event gathered legislators from nations across the globe, including Nepal, Lithuania, Spain, Zimbabwe, and Uruguay.

Masango highlighted the growing momentum of parliamentary engagement in global digital governance and emphasised Norway’s deep-rooted value of freedom of expression as a guiding principle for shaping responsible digital futures. In his remarks, Masango praised the unique role of parliamentarians in bridging local realities with global digital policy discussions, underlining the importance of balancing human rights with digital security.

He encouraged continued collaboration, learning, and building upon the IGF’s past efforts, primarily through local leadership and national implementation of ideas born from multistakeholder dialogue. Masango concluded by urging participants to engage in meaningful exchanges and form new partnerships, stressing that their contributions matter far beyond the forum itself.

Andy Richardson from the IGF Secretariat reiterated these themes, noting how parliamentary involvement underscores the urgency and weight of digital policy issues in the legislative realm. He drew attention to the critical intersection of AI and democracy, referencing recent resolutions and efforts to track parliamentary actions worldwide. With over 37 national reports on AI-related legislation already compiled, Richardson stressed the IGF’s ongoing commitment to staying updated and learning from legislators’ diverse experiences.

The opening session closed with an invitation to continue discussions in the day’s first panel, titled ‘Digital Deceit: The Societal Impact of Online Misinformation and Disinformation.’ Simultaneous translations were made available, highlighting the IGF’s inclusive and multilingual approach as it moved into a day of rich, cross-cultural policy conversations.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Parliamentarians at IGF 2025 call for action on information integrity

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to the online erosion of public trust.

AI’s disruptive power took centre stage, with speakers citing alarming trends—deepfakes manipulated global election narratives in over a third of national polls in 2024 alone. Experts like Lindsay Gorman from the German Marshall Fund warned of a polluted digital ecosystem where fabricated video and audio now threaten core democratic processes.

UNESCO’s Marjorie Buchser broadened the concern, noting that generative AI not only enables manipulation but also redefines how people access information, often diverting users from traditional journalism toward context-stripped AI outputs. However, regulation alone was not held up as a panacea.

Instead, panellists promoted ‘democracy-affirming technologies’ that embed transparency, accountability, and human rights at their foundation. The conversation urged greater investment in open, diverse digital ecosystems, particularly those supporting low-resource languages and underrepresented cultures. At the same time, multiple voices called for more equitable research, warning that Western-centric data and governance models skew current efforts.

In the end, a recurring theme echoed across the room: tackling information manipulation is a collective endeavour that demands multistakeholder cooperation. From enforcing technical standards to amplifying independent journalism and bolstering AI literacy, participants called for governments, civil society, and the tech industry to build unified, future-proof solutions that protect democratic integrity while preserving the fundamental right to free expression.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Spyware accountability demands Global South leadership at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East.

Ana Gaitán of R3D Mexico revealed how Mexican military forces routinely deploy spyware to obstruct investigations into abuses like the Ayotzinapa case. Apar Gupta from India’s Internet Freedom Foundation warned of the enduring legacy of colonial surveillance laws enabling secret spyware use, while Mohamad Najem of Lebanon’s SMEX explained how post-Arab Spring authoritarianism has fuelled a booming domestic and export market for surveillance tools in the Gulf region. All three pointed to the urgent need for legal reform and international support, noting the failure of courts and institutions to provide effective remedies.

Representing regulatory efforts, Elizabeth Davies of the UK Foreign, Commonwealth and Development Office outlined the Pall Mall Process, a UK-France initiative to create international norms for commercial cyber intrusion tools. Former UN Special Rapporteur David Kaye emphasised that such frameworks must go beyond soft law, calling for export controls, domestic legal safeguards, and litigation to ensure enforcement.

Rima Amin of Meta added a private sector lens, highlighting Meta’s litigation against NSO Group and pledging to reinvest any damages into supporting surveillance victims. Despite emerging international efforts, the panel agreed that meaningful spyware accountability will remain elusive without centring Global South voices, expanding technical and legal capacity, and bridging the North-South knowledge gap.

With spyware abuse expanding faster than regulation, the call from Lillestrøm was clear: democratic protections and digital rights must not be a privilege of geography.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

WGIG reunion sparks calls for reform at IGF 2025 in Norway

At the Internet Governance Forum (IGF) 2025 in Lillestrøm, Norway, a reunion of the original Working Group on Internet Governance (WGIG) marked a significant moment of reflection and reckoning for global digital governance. Commemorating the 20th anniversary of WGIG’s formation, the session brought together pioneers of the multistakeholder model that reshaped internet policy discussions during the World Summit on the Information Society (WSIS).

Moderated by Markus Kummer and organised by William J. Drake, the panel featured original WGIG members, including Ayesha Hassan, Raul Echeberria, Wolfgang Kleinwächter, Avri Doria, Juan Fernandez, and Jovan Kurbalija, with remote contributions from Alejandro Pisanty, Carlos Afonso, Vittorio Bertola, Baher Esmat, and others. While celebrating their achievements, speakers did not shy away from blunt assessments of the IGF’s present state and future direction.

Speakers universally praised WGIG’s groundbreaking work in legitimising multistakeholderism within the UN system. The group’s broad, inclusive definition of internet governance—encompassing technical infrastructure as well as social and economic policies—was credited with transforming how global internet issues are addressed.

Participants emphasised the group’s unique working methodology, which prioritised transparency, pluralism, and consensus-building without erasing legitimate disagreements. Many argued that these practices remain instructive amid today’s fragmented digital governance landscape.

However, as the conversation shifted from legacy to present-day performance, participants voiced deep concerns about the IGF’s limitations. Despite successes in capacity-building and agenda-setting, the forum was criticised for its failure to tackle controversial issues like surveillance, monopolies, and platform accountability.

Jovan Kurbalija, Executive Director of Diplo

Speakers such as Vittorio Bertola and Avri Doria lamented its increasingly top-down character, while Nandini Chami and Anriette Esterhuysen raised questions about the IGF’s relevance and inclusiveness in the face of growing power imbalances. Some, including Bertrand de la Chapelle and Jovan Kurbalija, proposed bold reforms, such as establishing a new working group to address the interlinked challenges of AI, data governance, and digital justice.

The session closed on a forward-looking note, urging the IGF community to recapture WGIG’s original spirit of collaborative innovation. As emerging technologies raise the stakes for global cooperation, participants agreed that internet governance must evolve—not only to reflect new realities but to stay true to the inclusive, democratic ideals that defined its founding two decades ago.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in AI-related job demand and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full résumés, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there is no clear indication during setup that chats will be made public unless users manually change the setting.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France appeals porn site ruling based on EU legal grounds

The French government is challenging a recent decision by the Administrative Court of Paris that temporarily halted the enforcement of mandatory age verification on pornographic websites based in the EU. The court found France’s current approach potentially inconsistent with EU law—specifically the 2002 E-Commerce Directive—which upholds the ‘country-of-origin’ principle.

That rule limits an EU country’s authority to regulate online services hosted in another member state unless it follows a formal process involving both the host country and the European Commission. At the heart of the dispute is whether France correctly followed the required legal steps.

While French authorities say they notified the host countries of porn companies like Hammy Media (Xhamster) and Aylo (owner of Pornhub and others) and waited the mandated three months, legal experts argue that notifying the Commission is also essential. So far, there is no confirmation that this additional step was taken, which may weaken France’s legal standing.

Digital Minister Clara Chappaz reaffirmed the government’s commitment to enforcing age checks, calling it a ‘priority’ in a public statement. The ministry insists its rules align with the EU’s Audiovisual Media Services Directive.

However, the court’s ruling highlights broader tensions between France’s national digital regulations and overarching EU law. Similar legal challenges have already forced France to adjust parts of its digital, influencer, and cloud regulation frameworks in the past two years.

The appeal could have significant implications for age restrictions on adult content and for how France asserts digital sovereignty within the EU. If the court upholds the suspension, other digital regulations based on national initiatives may also be vulnerable to legal scrutiny under EU principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups such as TraderTraitor and CryptoCore that specialise in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and Golang-based GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!