How ROAMX helps bridge the digital divide

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and stakeholders gathered to assess the progress of UNESCO’s ROAMX framework, a tool for evaluating digital development through the lenses of Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues such as gender equality and sustainability. Since its introduction in 2018, and with the rollout of new second-generation indicators in 2024, ROAMX has helped countries align their digital policies with global frameworks such as the WSIS action lines and the Sustainable Development Goals.

Dr Tawfik Jelassi of UNESCO opened the session by highlighting the urgency of inclusive digital transformation, noting that 2.6 billion people remain offline, particularly in lower-income regions.

Brazil and Fiji were presented as case studies for the updated framework. Brazil, the first to implement the revised indicators, showcased improvements in digital public services, but also revealed enduring inequalities—particularly among Black women and rural communities—with limited meaningful connectivity and digital literacy.

Meanwhile, Fiji piloted a capacity-building workshop that exposed serious intergovernmental coordination gaps: despite extensive consultation, most ministries were unaware of their national digital strategy. These findings underscore the need for ongoing engagement across government and civil society to truly implement effective digital policies.

Speakers emphasised that ROAMX is more than just an assessment tool; it offers a full policy lifecycle framework that can inform planning, monitoring, and evaluation. Participants noted that the framework’s adaptability makes it suitable for integration into national and regional digital governance efforts, including Internet Governance Forums.

They also pointed out the acute lack of sex-disaggregated data, which severely hampers effective policy responses to gender-based digital divides, especially in regions like Africa, where women remain underrepresented in both access and leadership roles in tech.

The session concluded with a call for broader adoption of ROAMX as a strategic tool to guide inclusive digital transformation efforts worldwide. Its relevance was affirmed in the context of WSIS+20 and the Global Digital Compact, with panellists agreeing that meaningful, rights-based digital development must be data-driven, inclusive, and participatory to leave no one behind in the digital age.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Spyware accountability demands Global South leadership at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a powerful roundtable titled ‘Spyware Accountability in the Global South’ brought together experts, activists, and policymakers to confront the growing threat of surveillance technologies in the world’s most vulnerable regions. Moderated by Nighat Dad of Pakistan’s Digital Rights Foundation, the session featured diverse perspectives from Mexico, India, Lebanon, the UK, and the private sector, each underscoring how spyware like Pegasus has been weaponised to target journalists, human rights defenders, and civil society actors across Latin America, South Asia, and the Middle East.

Ana Gaitán of R3D Mexico revealed how Mexican military forces routinely deploy spyware to obstruct investigations into abuses like the Ayotzinapa case. Apar Gupta from India’s Internet Freedom Foundation warned of the enduring legacy of colonial surveillance laws enabling secret spyware use. At the same time, Mohamad Najem of Lebanon’s SMEX explained how post-Arab Spring authoritarianism has fueled a booming domestic and export market for surveillance tools in the Gulf region. All three pointed to the urgent need for legal reform and international support, noting the failure of courts and institutions to provide effective remedies.

Representing regulatory efforts, Elizabeth Davies of the UK Foreign, Commonwealth and Development Office outlined the Pall Mall Process, a UK-France initiative to create international norms for commercial cyber intrusion tools. Former UN Special Rapporteur David Kaye emphasised that such frameworks must go beyond soft law, calling for export controls, domestic legal safeguards, and litigation to ensure enforcement.

Rima Amin of Meta added a private sector lens, highlighting Meta’s litigation against NSO Group and pledging to reinvest any damages into supporting surveillance victims. Despite emerging international efforts, the panel agreed that meaningful spyware accountability will remain elusive without centring Global South voices, expanding technical and legal capacity, and bridging the North-South knowledge gap.

With spyware abuse expanding faster than regulation, the call from Lillestrøm was clear: democratic protections and digital rights must not be a privilege of geography.


FC Barcelona documents leaked in ransomware breach

A recent cyberattack on French insurer SMABTP’s Spanish subsidiary, Asefa, has led to the leak of over 200GB of sensitive data, including documents related to FC Barcelona.

The ransomware group Qilin has claimed responsibility for the breach, highlighting the growing threat posed by such actors. With high-profile victims now in the spotlight, the reputational damage could be substantial for Asefa and its clients.

The incident comes amid growing concern among UK small and medium-sized enterprises (SMEs) about cyber threats. According to GlobalData’s UK SME Insurance Survey 2025, more than a quarter of SMEs have been influenced by media reports of cyberattacks when purchasing cyber insurance.

Meanwhile, nearly one in five cited a competitor’s victimisation as a motivating factor.

Over 300 organisations have fallen victim to Qilin in the past year alone, reflecting the broader rise of AI-enabled cybercrime.

AI allows cybercriminals to refine their methods, making attacks more effective and challenging to detect. As a result, companies are increasingly recognising the importance of robust cybersecurity measures.

With threats escalating, there is an urgent call for insurers to offer more tailored cyber coverage and proactive services. The breach involving FC Barcelona is a stark reminder that no organisation is immune and that better risk assessment and resilience planning are now business essentials.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies implement cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.


Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.


Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.


LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in AI-related job demand and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’


WhatsApp ad rollout in EU slower than global pace amid privacy scrutiny

Meta is gradually rolling out advertising features on WhatsApp globally, starting with the Updates tab, where users follow channels and may see sponsored content.

Although the global rollout remains on track, the Irish Data Protection Commission has indicated that a full rollout across the EU will not occur before 2026. The delay reflects ongoing regulatory scrutiny, particularly over privacy compliance.

Concerns have emerged regarding how user data from Meta platforms like Facebook, Instagram, and Messenger might be used to target ads on WhatsApp.

Privacy group NOYB had previously voiced criticism about such cross-platform data use. However, Meta clarified that these concerns are not directly applicable to the current WhatsApp ad model.

According to Meta, integrating WhatsApp with the Meta Account Center—which allows cross-app ad personalisation—is optional and off by default.

If users do not link their WhatsApp accounts, only limited data sourced from WhatsApp (such as city, language, followed channels, and ad interactions) will be used for ad targeting in the Updates tab.

Meta maintains that this approach aligns with EU privacy rules. Nonetheless, regulators are expected to carefully assess Meta’s implementation, especially in light of recent judgments against the company’s ‘pay or consent’ model under the Digital Markets Act.

Meta recently reduced the cost of its ad-free subscriptions in the EU, signalling a willingness to adapt—but the company continues to prioritise personalised advertising globally as part of its long-term strategy.


DeepSeek under fire for alleged military ties and export control evasion

The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors.

The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI.

DeepSeek, based in Hangzhou, China, gained international attention earlier this year after claiming its AI models rivalled those of leading United States firms like OpenAI—yet at a fraction of the cost.

However, US officials now say that the firm has shared data with Chinese surveillance networks and provided direct technological support to the People’s Liberation Army (PLA). According to the official, DeepSeek has appeared in over 150 procurement records linked to China’s defence sector.

The company is also suspected of transmitting data from foreign users, including Americans, through backend infrastructure connected to China Mobile, a state-run telecom operator. DeepSeek has not responded publicly to questions about these privacy or security issues.

The official further alleges that DeepSeek has been trying to access Nvidia’s restricted H100 AI chips by creating shell companies in Southeast Asia and using foreign data centres to run AI models on US-origin hardware remotely.

While Nvidia maintains it complies with export restrictions and has not knowingly supplied chips to sanctioned parties, DeepSeek is said to have secured several H100 chips despite the ban.

US officials have yet to place DeepSeek on a trade blacklist, though the company is under scrutiny. Meanwhile, Singapore has already charged three men with fraud in an investigation into the suspected illegal movement of Nvidia chips to DeepSeek.

Questions have also been raised over the credibility of DeepSeek’s technological claims. Experts argue that the reported $5.58 million spent on training its flagship models is unrealistically low, especially given the compute scale typically required to match OpenAI or Meta’s performance.

DeepSeek has remained silent amid the mounting scrutiny. Still, with the US-China tech race intensifying, the firm could soon find itself at the centre of new trade sanctions and geopolitical fallout.


Onnuri Church probes hack after broadcast hijacked by North Korean flag

A North Korean flag briefly appeared during a live-streamed worship service from one of Seoul’s largest Presbyterian churches, prompting an urgent investigation into what church officials are calling a cyberattack.

The incident occurred Wednesday morning during an early service at Onnuri Church’s Seobinggo campus in Yongsan, South Korea.

While Pastor Park Jong-gil was delivering his sermon, the broadcast suddenly cut to a full-screen image of the flag of North Korea, accompanied by unidentified background music. His audio was muted during the disruption, which lasted around 20 seconds.

The unexpected clip appeared on the church’s official YouTube channel and was quickly captured by viewers, who began sharing it across online platforms and communities.

On Thursday, Onnuri Church issued a public apology on its website and confirmed it was treating the event as a deliberate cyber intrusion.

‘An unplanned video was transmitted during the livestream of our early morning worship on 18 June. We believe this resulted from a hacking incident,’ the statement read. ‘An internal investigation is underway, and we are taking immediate measures to identify the source and prevent future breaches.’

A church official told Yonhap News Agency that the incident had been reported to the relevant authorities, and no demands or threats had been received regarding the breach. The investigation continues as the church works with authorities to determine the origin and intent of the attack.
