How can technical standards bridge or broaden the digital divide?

At the Internet Governance Forum 2025 in Lillestrøm, Norway, the Freedom Online Coalition convened a diverse panel to explore how technical standards shape global connectivity and inclusion. The session, moderated by Laura O’Brien, Senior International Counsel at Access Now, highlighted how open and interoperable standards can empower underserved communities.

Divine Agbeti, Director General of the Cybersecurity Authority of Ghana, shared how mobile money systems helped bring over 80% of Ghana’s adult population into the digital financial fold—an example of how shared standards translate into real-world impact, especially across Africa. However, the conversation quickly turned to the systemic barriers that exclude many from the standard-setting process itself.

A member of ICANN’s At-Large Advisory Committee emphasised barriers such as high membership fees, lack of transparency, English-only proceedings, and complex technical jargon.

Stephanie Borg Psaila, Director of Digital Policy at Diplo, presented detailed research mapping these hurdles across bodies like ITU, ICANN, and IETF, and called for reforms such as multilingual interpretation, hybrid meeting formats, and adjusted membership models to enable broader civil society participation.

Security and infrastructure governance also featured prominently. Rose Payne, Policy and Advocacy Lead at Global Partners Digital, spotlighted the role of technical standards in safeguarding subsea cables—which carry 95–99% of transnational data—but also pointed to the limitations of technical solutions when facing geopolitical threats.

She underscored the urgency of updating international legal frameworks like UNCLOS and fostering cooperation between governments, the private sector, and civil society. Alex Walden, Global Head of Human Rights at Google, also reaffirmed the private sector’s role in investing in global connectivity while advocating for human rights-based frameworks and inclusive multistakeholder participation.

While the session closed on a constructive note, tensions emerged during the Q&A. Technical community members like Colin Perkins (University of Glasgow) and Harold, a technologist and civil society advocate, challenged the panel’s framing, arguing that distinctions between technical and civil society actors are often artificial and counterproductive.

Panellists responded diplomatically, acknowledging the need for more nuanced engagement and mutual understanding. Despite differing views, the forum concluded with shared commitments: dismantling barriers to participation, building cross-sectoral capacity, and grounding technical governance in international human rights from the outset.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

DeepSeek struggles to launch R2 amid NVIDIA chip shortage

The launch of DeepSeek’s next-generation AI model, R2, is expected to face delays due to a shortage of NVIDIA H20 chips in China.

These chips, designed specifically for the Chinese market following US export restrictions, are essential for running DeepSeek’s highly optimised models.

The ban on H20 shipments in April has triggered widespread concern among cloud providers about the scalability of R2, especially if it outperforms existing open-source models.

CEO Liang Wenfeng has reportedly held back the model’s release, expressing dissatisfaction with its current performance.

Engineers continue refining R2, but the lack of compatible hardware poses a deeper challenge. DeepSeek’s reliance on NVIDIA architecture makes switching to Chinese chips inefficient, as the models are tightly built for NVIDIA’s software and hardware ecosystem.

Some Chinese firms have worked around the restrictions by flying engineers to Malaysia, where NVIDIA chips remain available in local data centres.

After training their models abroad, the teams return to China with the finished systems. Others rely on gaming GPUs such as the RTX 5090, which are easier to obtain through grey markets despite the restrictions.

While Chinese tech giants ordered 1.2 million H20 chips earlier in 2025 to meet demand sparked by R1’s success, inventory is still unlikely to support a full R2 rollout.

Companies outside China, which do not face the same export hurdles, may be able to deploy R2 more easily.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Command and Coast Guard establish task force for port cyber defence

US Cyber Command has joined forces with the Coast Guard in a major military exercise designed to simulate cyberattacks on key port infrastructure.

Known as Cyber Guard, the training scenario marked a significant evolution in defensive readiness, integrating for the first time with Pacific Sentry—an Indo-Pacific Command exercise simulating conflict over Taiwan.

The joint effort included the formation of Task Force Port, a temporary unit tasked with coordinating defence of coastal infrastructure.

The drill reflected real-world concerns over the vulnerability of US ports in times of geopolitical tension, and brought together multiple combatant commands under a unified operational framework.

Rear Admiral Dennis Velez described the move as part of a broader shift from isolated training to integrated joint force operations.

Cyber Guard also marked the activation of the Department of Defense Cyber Defense Command (DCDC), previously known as Joint Force Headquarters–DOD Information Network.

The unit worked closely with the Coast Guard, signalling the increasing importance of cyber coordination across military branches when protecting critical infrastructure.

Port security has featured in past exercises but was previously handled as a separate scenario. Its inclusion within the core structure of Cyber Guard suggests a strategic realignment, ensuring cyber defence is embedded in wider contingency planning for future conflicts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hawaiian Airlines confirms flights are safe despite cyberattack

Hawaiian Airlines has reported a cyberattack that affected parts of its IT infrastructure, though the carrier confirmed all flights remain unaffected and are operating as scheduled.

Now part of the Alaska Air Group, the airline stated it is actively working with authorities and cybersecurity experts to investigate and resolve the incident.

In a statement, the airline stressed that the safety and security of passengers and staff remain its highest priority. It has taken steps to protect its systems, restoring affected services while continuing full operations. No disruption to passenger travel has been reported.

The exact nature of the attack has not been disclosed, and no group has claimed responsibility so far. The Federal Aviation Administration (FAA) confirmed it is monitoring the situation closely and remains in contact with the airline. It added that there has been no impact on flight safety.

Cyberattacks in aviation are becoming increasingly common due to the sector’s heavy reliance on complex digital systems. Earlier incidents this year included cyberattacks on WestJet and Japan Airlines, which caused operational disruptions but did not compromise passenger data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BT report shows rise in cyber attacks on UK small firms

A BT report has found that 42% of small businesses in the UK suffered a cyberattack in the past year. The study also revealed that 67% of medium-sized firms were targeted, while many lacked basic security measures or staff training.

Phishing was named the most common threat, hitting 85% of businesses in the UK, and ransomware incidents have more than doubled. BT’s new training programme aims to help SMEs take practical steps to reduce risks, covering topics like AI threats, account takeovers and QR code scams.

Tris Morgan from BT highlighted that SMEs face serious risks from cyber attacks, which could threaten their survival. He stressed that security is a necessary foundation and can be achieved without vast resources.

The report follows wider warnings on AI-enabled cyber threats, with other studies showing that few firms feel prepared for these risks. BT’s training is part of its mission to help businesses grow confidently despite digital dangers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS patient death linked to cyber attack delays

A patient has died after delays caused by a major cyberattack on NHS services, King’s College Hospital NHS Foundation Trust has confirmed. The attack, targeting pathology services, resulted in a long wait for blood test results that contributed to the patient’s death.

The June 2024 ransomware attack on Synnovis, a provider of blood test services, also delayed 1,100 cancer treatments and postponed more than 1,000 operations. The Russian group Qilin is believed to have been behind the attack that impacted multiple hospital trusts across London.

Healthcare providers struggled to deliver essential services, resorting to using universal O-type blood, which triggered a national shortage. Sensitive data stolen during the attack was later published online, adding to the crisis.

Cybersecurity experts warned that the NHS remains vulnerable because of its dependence on a vast network of suppliers. The incident highlights the human cost of cyber attacks, with calls for stronger protections across critical healthcare systems in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI rather than the most powerful models. He highlighted agriculture, healthcare, and education as sectors where local AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of KALA in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised key questions about harmonisation, youth fears over job loss and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Irish businesses face cybersecurity reality check

Most Irish businesses believe they are well protected from cyberattacks, yet many neglect essential defences. Research from Gallagher shows most firms do not update software regularly or back up data as needed.

The survey of 300 companies found that almost two-thirds of Irish firms feel very secure, with another 28% feeling quite safe. Despite this, nearly six in ten fail to apply software updates, leaving systems vulnerable to attack.

Cybersecurity training is provided by just four in ten Irish organisations, even though it is one of the most effective safeguards. Gallagher warns that overconfidence may lead to complacency, putting businesses at risk of disruption and financial loss.

Laura Vickers of Gallagher stressed the importance of basic measures like updates and data backups to prevent serious breaches. With four in ten Irish companies suffering attacks in the past five years, firms are urged to match confidence with action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Internet Governance Forum marks 20 years of reshaping global digital policy

The 2025 Internet Governance Forum (IGF), held in Norway, offered a deep and wide-ranging reflection on the IGF’s 20-year journey in shaping digital governance and its prospects for the future.

Bringing together voices from governments, civil society, the technical community, business, and academia, the session celebrated the IGF’s unique role in institutionalising a multistakeholder approach to internet policymaking, particularly through inclusive and non-binding dialogue.

Moderated by Avri Doria, who has been with the IGF since its inception, the session focused on how the forum has influenced individuals, governments, and institutions across the globe. Doria described the IGF as a critical learning platform and a ‘home for evolving objectives’ that has helped connect people with vastly different viewpoints over the decades.

Professor Bitange Ndemo, Ambassador of Kenya to the European Union, reflected on his early scepticism, admitting that stakeholder consultation initially felt ‘painful’ for policymakers unfamiliar with collaborative approaches.

Over time, however, it proved ‘much, much easier’ for implementation and policy acceptance. ‘Thank God it went the IGF way,’ he said, emphasising how early IGF discussions guided Kenya and much of Africa in building digital infrastructure from the ground up.

Hans Petter Holen, Managing Director of RIPE NCC, underlined the importance of the IGF as a space where ‘technical realities meet policy aspirations’. He called for a permanent IGF mandate, stressing that uncertainty over its future limits its ability to shape digital governance effectively.

Renata Mielli, Chair of the Internet Steering Committee of Brazil (CGI.br), spoke about how IGF-inspired dialogue was key to shaping Brazil’s Internet Civil Rights Framework and Data Protection Law. ‘We are not talking about an event or a body, but an ecosystem,’ she said, advocating for the IGF to become the focal point for implementing the UN Global Digital Compact.

Funke Opeke, founder of MainOne in Nigeria, credited the IGF with helping drive West Africa’s digital transformation. ‘When we launched our submarine cable in 2010, penetration was close to 10%. Now it’s near 50%,’ she noted, urging continued support for inclusion and access in the Global South.

Qusai Al Shatti, from the Arab IGF, highlighted how the forum helped embed multistakeholder dialogue into governance across the Arab world, calling the IGF ‘the most successful outcome of WSIS’.

From the civil society perspective, Chat Garcia Ramilo of the Association for Progressive Communications (APC) described the IGF as a platform ‘to listen deeply, to speak, and, more importantly, to act’. She stressed the forum’s role in amplifying marginalised voices and pushing human rights and gender issues to the forefront of global internet policy.

Luca Belli of FGV Law School in Brazil echoed the need for better visibility of the IGF’s successes. Despite running four dynamic coalitions, he expressed frustration that many contributions go unnoticed. ‘We’re not good at celebrating success,’ he remarked.

Isabelle Lois, Vice Chair of the UN Commission on Science and Technology for Development (CSTD), emphasised the need to ‘connect the IGF to the wider WSIS architecture’ and ensure its outcomes influence broader UN digital frameworks.

Other voices joined online and from the floor, including Dr Robinson Sibbe of Digital Footprints Nigeria, who praised the IGF for contextualising cybersecurity challenges, and Emily Taylor, a UK researcher, who noted that the IGF had helped lay the groundwork for key initiatives like the IANA transition and the proliferation of internet exchange points across Africa.

Youth participants like Jasmine Maffei from Hong Kong and Piu from Myanmar stressed the IGF’s openness and accessibility. They called for their voices to be formally recognised within the multistakeholder model.

Veteran internet governance leader Markus Kummer reminded the room that the IGF’s ability to build trust and foster dialogue across divides enabled global cooperation during crucial events like the IANA transition.

Despite the celebratory tone, speakers repeatedly stressed three urgent needs: a permanent IGF mandate, stronger integration with global digital governance efforts such as the WSIS and Global Digital Compact, and broader inclusion of youth and underrepresented regions.

As the forum entered its third decade, many speakers agreed that the IGF’s legacy lies not in its meetings or declarations but in the relationships, trust, and governance culture it has helped create. The message from Norway was clear: in a fragmented and rapidly changing digital world, the IGF is more vital than ever—and its future must be secured.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.

Leanda Barrington-Leach, Executive Director of the 5Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly, while Grahn described a successful cross-sector campaign in Sweden to help teens avoid criminal exploitation.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.