Spyware accountability demands Global South leadership at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East.

Ana Gaitán of R3D Mexico revealed how Mexican military forces routinely deploy spyware to obstruct investigations into abuses like the Ayotzinapa case. Apar Gupta from India’s Internet Freedom Foundation warned of the enduring legacy of colonial surveillance laws enabling secret spyware use, while Mohamad Najem of Lebanon’s SMEX explained how post-Arab Spring authoritarianism has fuelled a booming domestic and export market for surveillance tools in the Gulf region. All three pointed to the urgent need for legal reform and international support, noting the failure of courts and institutions to provide effective remedies.

Representing regulatory efforts, Elizabeth Davies of the UK Foreign, Commonwealth and Development Office outlined the Pall Mall Process, a UK-France initiative to create international norms for commercial cyber intrusion tools. Former UN Special Rapporteur David Kaye emphasised that such frameworks must go beyond soft law, calling for export controls, domestic legal safeguards, and litigation to ensure enforcement.

Rima Amin of Meta added a private sector lens, highlighting Meta’s litigation against NSO Group and pledging to reinvest any damages into supporting surveillance victims. Despite emerging international efforts, the panel agreed that meaningful spyware accountability will remain elusive without centring Global South voices, expanding technical and legal capacity, and bridging the North-South knowledge gap.

With spyware abuse expanding faster than regulation, the call from Lillestrøm was clear: democratic protections and digital rights must not be a privilege of geography.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

WGIG reunion sparks calls for reform at IGF 2025 in Norway

At the Internet Governance Forum (IGF) 2025 in Lillestrøm, Norway, a reunion of the original Working Group on Internet Governance (WGIG) marked a significant moment of reflection and reckoning for global digital governance. Commemorating the 20th anniversary of WGIG’s formation, the session brought together pioneers of the multistakeholder model that reshaped internet policy discussions during the World Summit on the Information Society (WSIS).

Moderated by Markus Kummer and organised by William J. Drake, the panel featured original WGIG members, including Ayesha Hassan, Raul Echeberria, Wolfgang Kleinwächter, Avri Doria, Juan Fernandez, and Jovan Kurbalija, with remote contributions from Alejandro Pisanty, Carlos Afonso, Vittorio Bertola, Baher Esmat, and others. While celebrating their achievements, speakers did not shy away from blunt assessments of the IGF’s present state and future direction.

Speakers universally praised WGIG’s groundbreaking work in legitimising multistakeholderism within the UN system. The group’s broad, inclusive definition of internet governance, encompassing technical infrastructure as well as social and economic policy, was credited with transforming how global internet issues are addressed.

Participants emphasised the group’s unique working methodology, prioritising transparency, pluralism, and consensus-building without erasing legitimate disagreements. Many argued that these practices remain instructive amid today’s fragmented digital governance landscape.

However, as the conversation shifted from legacy to present-day performance, participants voiced deep concerns about the IGF’s limitations. Despite successes in capacity-building and agenda-setting, the forum was criticised for its failure to tackle controversial issues like surveillance, monopolies, and platform accountability.

Jovan Kurbalija, Executive Director of Diplo

Speakers such as Vittorio Bertola and Avri Doria lamented its increasingly top-down character, while Nandini Chami and Anriette Esterhuysen raised questions about the IGF’s relevance and inclusiveness in the face of growing power imbalances. Some, including Bertrand de la Chapelle and Jovan Kurbalija, proposed bold reforms, such as establishing a new working group to address the interlinked challenges of AI, data governance, and digital justice.

The session closed on a forward-looking note, urging the IGF community to recapture WGIG’s original spirit of collaborative innovation. As emerging technologies raise the stakes for global cooperation, participants agreed that internet governance must evolve—not only to reflect new realities but to stay true to the inclusive, democratic ideals that defined its founding two decades ago.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in AI-related job demand and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats, even those involving health, legal or financial data. Many users have accidentally shared full résumés, private conversations and medical queries without realising that these posts are visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France appeals porn site ruling based on EU legal grounds

The French government is challenging a recent decision by the Administrative Court of Paris that temporarily halted the enforcement of mandatory age verification on pornographic websites based in the EU. The court found France’s current approach potentially inconsistent with EU law, specifically the 2000 E-Commerce Directive, which upholds the ‘country-of-origin’ principle.

That rule limits an EU country’s authority to regulate online services hosted in another member state unless it follows a formal process involving both the host country and the European Commission. At the heart of the dispute is whether France correctly followed the required legal steps.

While French authorities say they notified the host countries of porn companies like Hammy Media (xHamster) and Aylo (owner of Pornhub and others) and waited the mandated three months, legal experts argue that notifying the Commission is also essential. So far, there is no confirmation that this additional step was taken, which may weaken France’s legal standing.

Digital Minister Clara Chappaz reaffirmed the government’s commitment to enforcing age checks, calling it a ‘priority’ in a public statement. The ministry insists its rules align with the EU’s Audiovisual Media Services Directive.

However, the court’s ruling highlights broader tensions between France’s national digital regulations and overarching EU law. Similar legal challenges have already forced France to adjust parts of its digital, influencer, and cloud regulation frameworks in the past two years.

The appeal could have significant implications for age restrictions on adult content and for how France asserts digital sovereignty within the EU. If the court upholds the suspension, other digital regulations based on national initiatives may also be vulnerable to legal scrutiny under EU principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and the Go-based GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated photo falsely claims to show a downed Israeli jet

Following Iranian state media claims that its forces shot down two Israeli fighter jets, an image circulated online falsely purporting to show the wreckage of an F-35.

The photo, which shows a large jet crash-landing in a desert, quickly spread across platforms like Threads and South Korean forums, including Aagag and Ruliweb. An Israeli official dismissed the shootdown claim as ‘fake news’.

The image’s caption in Korean read: ‘The F-35 shot down by Iran. Much bigger than I thought.’ However, a detailed AFP analysis found the photo contained several hallmarks of AI generation.

People near the aircraft appear as large as buses, and one vehicle seems to merge with the road. Such visual anomalies are common in synthetic images.

In addition to size distortions, the aircraft’s markings did not match those used on actual Israeli F-35s. Lockheed Martin specifications confirm the F-35 is just under 16 metres long, unlike the oversized version shown in the image.

Furthermore, the wing insignia in the image differed from the Israeli Air Force’s authentic emblem.

Amid escalating tensions between Iran and Israel, such misinformation continues to spread rapidly. Although AI-generated content is becoming more sophisticated, inconsistencies in scale, symbols, and composition remain key indicators of digital fabrication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France 24 partners with Mediagenix to streamline on-demand programming

Mediagenix has entered into a collaboration with French international broadcaster France 24, operated by France Médias Monde, to support the channel’s content scheduling modernisation programme.

As part of the upgrade, France 24 will adopt Mediagenix’s AI-powered, cloud-based scheduling solution to manage content across its on-demand platforms. The system promises improved operational flexibility, enabling rapid adjustments to programming in response to major events and shifting editorial priorities.

Pamela David, Engineering Manager for TV and Systems Integration at France Médias Monde, said: ‘This partnership with Mediagenix is a critical part of equipping our France 24 channels with the best scheduling and content management solutions.’

‘The system gives our staff the ultimate flexibility to adjust schedules as major events happen and react to changing news priorities.’

Françoise Semin, Chief Commercial Officer at Mediagenix, added: ‘France Médias Monde is a truly global broadcaster. We are delighted to support France 24’s evolving scheduling needs with our award-winning solution.’

Training for France 24 staff will be provided by Lapins Bleus Formation, based in Paris, ahead of the system’s planned rollout next year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!