NHS patient death linked to cyber attack delays

A patient has died after delays caused by a major cyberattack on NHS services, King’s College Hospital NHS Foundation Trust has confirmed. The attack, targeting pathology services, resulted in a long wait for blood test results that contributed to the patient’s death.

The June 2024 ransomware attack on Synnovis, a provider of blood test services, also delayed 1,100 cancer treatments and postponed more than 1,000 operations. The Russian group Qilin is believed to have been behind the attack that impacted multiple hospital trusts across London.

Healthcare providers struggled to deliver essential services, resorting to using universal O-type blood, which triggered a national shortage. Sensitive data stolen during the attack was later published online, adding to the crisis.

Cybersecurity experts warned that the NHS remains vulnerable because of its dependence on a vast network of suppliers. The incident highlights the human cost of cyber attacks, with calls for stronger protections across critical healthcare systems in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI rather than the most powerful models. He highlighted agriculture, healthcare, and education as sectors where local AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of KALA in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised key questions about harmonisation, youth fears over job loss, and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Irish businesses face cybersecurity reality check

Most Irish businesses believe they are well protected from cyberattacks, yet many neglect essential defences. Research from Gallagher shows most firms do not update software regularly or back up data as needed.

The survey of 300 companies found almost two-thirds of Irish firms feel very secure, with another 28 percent feeling quite safe. Despite this, nearly six in ten fail to apply software updates, leaving systems vulnerable to attacks.

Cybersecurity training is provided by just four in ten Irish organisations, even though it is one of the most effective safeguards. Gallagher warns that overconfidence may lead to complacency, putting businesses at risk of disruption and financial loss.

Laura Vickers of Gallagher stressed the importance of basic measures like updates and data backups to prevent serious breaches. With four in ten Irish companies suffering attacks in the past five years, firms are urged to match confidence with action.

Internet Governance Forum marks 20 years of reshaping global digital policy

The 2025 Internet Governance Forum (IGF), held in Norway, offered a deep and wide-ranging reflection on the IGF’s 20-year journey in shaping digital governance and its prospects for the future.

Bringing together voices from governments, civil society, the technical community, business, and academia, the session celebrated the IGF’s unique role in institutionalising a multistakeholder approach to internet policymaking, particularly through inclusive and non-binding dialogue.

Moderated by Avri Doria, who has been with the IGF since its inception, the session focused on how the forum has influenced individuals, governments, and institutions across the globe. Doria described the IGF as a critical learning platform and a ‘home for evolving objectives’ that has helped connect people with vastly different viewpoints over the decades.

Professor Bitange Ndemo, Ambassador of Kenya to the European Union, reflected on his early scepticism, admitting that stakeholder consultation initially felt ‘painful’ for policymakers unfamiliar with collaborative approaches.

Over time, however, it proved ‘much, much easier’ for implementation and policy acceptance. ‘Thank God it went the IGF way,’ he said, emphasising how early IGF discussions guided Kenya and much of Africa in building digital infrastructure from the ground up.

Hans Petter Holen, Managing Director of RIPE NCC, underlined the importance of the IGF as a space where ‘technical realities meet policy aspirations’. He called for a permanent IGF mandate, stressing that uncertainty over its future limits its ability to shape digital governance effectively.

Renata Mielli, Chair of the Internet Steering Committee of Brazil (CGI.br), spoke about how IGF-inspired dialogue was key to shaping Brazil’s Internet Civil Rights Framework and Data Protection Law. ‘We are not talking about an event or a body, but an ecosystem,’ she said, advocating for the IGF to become the focal point for implementing the UN Global Digital Compact.

Funke Opeke, founder of MainOne in Nigeria, credited the IGF with helping drive West Africa’s digital transformation. ‘When we launched our submarine cable in 2010, penetration was close to 10%. Now it’s near 50%,’ she noted, urging continued support for inclusion and access in the Global South.

Qusai Al Shatti, from the Arab IGF, highlighted how the forum helped embed multistakeholder dialogue into governance across the Arab world, calling the IGF ‘the most successful outcome of WSIS’.

From the civil society perspective, Chat Garcia Ramilo of the Association for Progressive Communications (APC) described the IGF as a platform ‘to listen deeply, to speak, and, more importantly, to act’. She stressed the forum’s role in amplifying marginalised voices and pushing human rights and gender issues to the forefront of global internet policy.

Luca Belli of FGV Law School in Brazil echoed the need for better visibility of the IGF’s successes. Despite running four dynamic coalitions, he expressed frustration that many contributions go unnoticed. ‘We’re not good at celebrating success,’ he remarked.

Isabelle Lois, Vice Chair of the UN Commission on Science and Technology for Development (CSTD), emphasised the need to ‘connect the IGF to the wider WSIS architecture’ and ensure its outcomes influence broader UN digital frameworks.

Other voices joined online and from the floor, including Dr Robinson Sibbe of Digital Footprints Nigeria, who praised the IGF for contextualising cybersecurity challenges, and Emily Taylor, a UK researcher, who noted that the IGF had helped lay the groundwork for key initiatives like the IANA transition and the proliferation of internet exchange points across Africa.

Youth participants like Jasmine Maffei from Hong Kong and Piu from Myanmar stressed the IGF’s openness and accessibility. They called for their voices to be formally recognised within the multistakeholder model.

Veteran internet governance leader Markus Kummer reminded the room that the IGF’s ability to build trust and foster dialogue across divides enabled global cooperation during crucial events like the IANA transition.

Despite the celebratory tone, speakers repeatedly stressed three urgent needs: a permanent IGF mandate, stronger integration with global digital governance efforts such as the WSIS and Global Digital Compact, and broader inclusion of youth and underrepresented regions.

As the forum entered its third decade, many speakers agreed that the IGF’s legacy lies not in its meetings or declarations but in the relationships, trust, and governance culture it has helped create. The message from Norway was clear: in a fragmented and rapidly changing digital world, the IGF is more vital than ever, and its future must be secured.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.

Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly, while Grahn described a successful cross-sector campaign in Sweden to help teens avoid criminal exploitation.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

IGF and WSIS platforms must be strengthened, not replaced, say leaders

At the Internet Governance Forum 2025 in Lillestrøm, Norway, stakeholders gathered to assess the International Telecommunication Union’s (ITU) role in the WSIS Plus 20 review process.

The session, moderated by Cynthia Lesufi of South Africa, invited input on the achievements and future direction of the World Summit on the Information Society (WSIS), now marking its 20th year.

Speakers from Brazil, Australia, Korea, Germany, Japan, Cuba, South Africa, Ghana, Nigeria, and Bangladesh offered their national and regional insights.

There was strong consensus on maintaining and strengthening existing platforms like the Internet Governance Forum (IGF) and WSIS Forum, rather than creating new mechanisms that might burden developing countries.

Renata Santoyo, representing Brazil’s telecommunications regulator ANATEL, affirmed ITU’s coordinating role: ‘The WSIS architecture remains valuable, and ITU has been instrumental in supporting its action lines.’

Australia’s William Lee echoed this, commending ITU’s work on integrating WSIS with the SDGs and the Global Digital Compact, and noted: ‘The digital divide is now less about access and more about usability.’

Mina Seonmin Jun, Korea’s vice chair of the ITU Council Working Group, stressed the continued inequality in her region: ‘One third of Asia-Pacific remains offline. WSIS must go beyond infrastructure and focus on equity.’

Swantje Jager Lindemann from Germany backed extending the IGF mandate without renegotiation, saying: ‘The mandate is broad enough. What we need is better support and sustainable funding.’

Japan’s Yoichi Iida, former vice minister and now special advisor, also warned against reopening existing mandates, instead calling for a stronger IGF secretariat. ‘We must focus on inclusivity, not duplicating structures,’ he said.

ITU’s Gitanjali Sah outlined its leadership on WSIS action lines, noting the organisation’s collaboration with over 50 UN bodies. ‘2.6 billion people are still offline. Connectivity must be meaningful and inclusive,’ she said, highlighting ITU’s technical support on cybersecurity, capacity building, and standards.

Cuba’s representative stressed that the WSIS outcome documents remain fully valid and should be reaffirmed rather than rewritten. ‘Creating new mechanisms risks excluding countries with limited resources,’ they argued.

Local voices called for grassroots inclusion. Louvo Gray from the South African IGF asked, ‘How do we ensure marginalised voices from the Global South are truly heard?’ Ghana’s Kweku Enchi proposed tapping retired language teachers to bridge digital and generational divides.

Abdul Karim from Nigeria raised concerns about public access to the review documents. Sah confirmed that most contributions are published on the ITU website unless requested otherwise.

The UNDP representative reiterated UN-wide support for an inclusive WSIS review, while Mohamed Abdulla Konu of Bangladesh IGF pressed for developing countries’ voices to be meaningfully reflected.

Speakers agreed that the WSIS Plus 20 review is a key opportunity to refocus digital governance on inclusion, equity, and sustainability. The ITU will submit the compiled inputs to the UN General Assembly in December, while South Africa will include the session’s outcomes in its high-level report.

Fake video claims Nigeria is sending troops to Israel

A video circulating on TikTok falsely claims that Nigeria has announced the deployment of troops to Israel. Since 17 June, the video has been shared more than 6,100 times and presents a fabricated news segment constructed from artificial intelligence-generated visuals and outdated footage.

No official Nigerian authority has made any such announcement regarding military involvement in the ongoing Middle East crisis.

The video, attributed to a fictitious media outlet called ‘TBC News’, combines visuals of soldiers and aircraft with simulated newsroom graphics. However, no broadcaster by that name exists, and the logo and branding do not correspond to any known or legitimate media source.

Upon closer inspection, several anomalies suggest the use of generative AI. The news presenter’s appearance subtly shifts throughout the segment — with clothing changes, facial inconsistencies, and robotic voiceovers indicating non-authentic production.

Similarly, the footage of military activity lacks credible visual markers. For example, a purported official briefing displays a coat of arms inconsistent with Nigeria’s national emblems, and the standard flags and insignia typically present at such events are absent.

While two brief aircraft clips appear authentic — originally filmed during a May airshow in Lagos — the remainder seems digitally altered or artificially generated.

In reality, Nigerian officials have voiced strong public criticism of Israel’s recent military actions in Iran and have not indicated any intent to provide military support to Israel.

The video in question, therefore, significantly distorts Nigeria’s diplomatic position and risks exacerbating tensions during an already sensitive period in international affairs.

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa, marked by the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’, co-organised by UNICRI and ALT Advisory.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Microsoft family safety blocks Google Chrome on Windows 11

Windows 11 users have reported that Google Chrome crashes and fails to reopen when Microsoft family safety parental controls are active.

The issue appears to be linked to Chrome’s recent update, version 137.0.7151.68, and does not affect users of Microsoft Edge under the same settings.

Google acknowledged the problem and provided a workaround involving changes to family safety settings, such as unblocking Chrome or adjusting content filters.

Microsoft has not issued a formal statement, but its family safety FAQ confirms that non-Edge browsers are blocked from web filtering.

Users are encouraged to update Google Chrome to version 138.0.7204.50 to address other security concerns recently disclosed by Google.

The update aims to patch vulnerabilities that could let attackers bypass security policies and run malicious code.

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, though it lost some points on transparency.

ChatGPT followed in second place, earning praise for its clear privacy policies and the tools it offers users to limit data use, despite concerns about its handling of training data. Grok, xAI’s chatbot, took third position, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.
