Tackling internet fragmentation: A global challenge at IGF 2024

At the Internet Governance Forum (IGF) 2024 in Riyadh, the main session ‘Policy Network on Internet Fragmentation’ delved into implementing Article 29C of the Global Digital Compact (GDC), which seeks to prevent internet fragmentation. A diverse panel comprising government officials, technical experts, and civil society representatives highlighted the multifaceted nature of this issue and proposed actionable strategies to address it.

The scope of internet fragmentation

Panellists underscored that internet fragmentation manifests at the technical, governance, and user experience levels. While the global network of more than 70,000 autonomous systems remains technically unified, fragmentation is evident in user experiences. Anriette Esterhuysen from the Association for Progressive Communications pointed out, ‘How you view the internet as fragmented or not depends on whose internet you think it is.’ She stressed that billions face access and content restrictions, fragmenting their digital experience.

Gbenga Sesan of Paradigm Initiative echoed this concern, noting that fragmentation undermines the goal of universal connectivity by 2030. The tension between a seamless technical infrastructure and fractured user realities loomed large in the discussion.

Operationalising the GDC commitment

Alisa Heaver from the Dutch Ministry of Economic Affairs and Climate Policy highlighted the critical role of Article 29C as a blueprint for preventing fragmentation. She called for a measurable framework to track progress by the GDC’s 2027 review, emphasising that research on the economic impacts of fragmentation must be prioritised. ‘We need to start measuring internet fragmentation now more than ever,’ Heaver urged.

Strategies for collaboration and progress

Multistakeholder cooperation emerged as a cornerstone for addressing fragmentation. Wim Degezelle, a consultant with the IGF Secretariat, presented the Policy Network on Internet Fragmentation (PNIF) framework, while Amitabh Singhal of ICANN highlighted the IGF’s unique position in bridging technical and policy divides. Singhal also pointed to the potential renewal of the IGF’s mandate as pivotal in continuing these essential discussions.

The session emphasised inclusivity in technical standard-setting processes, with Sesan advocating for civil society’s role and audience members calling for stronger private sector engagement. Sheetal Kumar, co-facilitator of the session, stressed the importance of leveraging national and regional IGFs to foster localised dialogues on fragmentation.

Next steps and future outlook

The panel identified key actions, including developing measurable frameworks, conducting economic research, and utilising national and regional IGFs to sustain discussions. The upcoming IGF in 2025 was flagged as a milestone for assessing progress. Despite the issue’s complexity, the panellists were united in their commitment to fostering a more inclusive and seamless internet.

As Esterhuysen aptly summarised, addressing internet fragmentation requires a concerted effort to view the digital landscape through diverse lenses. This session reaffirmed that preventing fragmentation is not just a technical challenge but a deeply human one, demanding collaboration, research, and sustained dialogue.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Human rights concerns over UN Cybercrime Treaty raised at IGF 2024

A panel discussion at the Internet Governance Forum (IGF) raised serious concerns over the UN Cybercrime Treaty and its potential to undermine human rights. Experts from organisations such as Human Rights Watch and the Electronic Frontier Foundation criticised the treaty’s broad scope and lack of clear safeguards for individual freedoms. They warned that the treaty’s vague language, particularly around what constitutes a ‘serious crime’, could empower authoritarian regimes to exploit its provisions for surveillance and the repression of dissent.

Speakers such as Joey Shea from Human Rights Watch and Lina al-Hathloul, a Saudi human rights defender, pointed out the risks posed by the treaty’s expansive investigative powers, which extend beyond cybercrimes to any crimes defined by domestic law. Such flexibility could oblige countries to assist in prosecuting acts that are not crimes within their own borders. They also highlighted the treaty’s weak privacy protections, which could jeopardise encryption standards and further harm cybersecurity researchers.

Deborah Brown from Human Rights Watch and Veridiana Alimonti of the Electronic Frontier Foundation shared examples from Saudi Arabia and Latin America, where existing cybercrime and anti-terrorism laws have already been used to target journalists and activists. The panellists expressed concern that the treaty could exacerbate these abuses globally, especially for cybersecurity professionals and civil society.

Fionnuala Ni Aolain, a former UN Special Rapporteur on counterterrorism and human rights, emphasised that the treaty’s provisions could lead to criminalising the vital work of cybersecurity researchers. She joined other experts in urging policymakers and industry leaders to resist ratification in its current form. They called for upcoming protocol negotiations to address these human rights gaps and for greater involvement of civil society voices to prevent the treaty from becoming a tool for transnational repression.

IGF 2024 addresses cybercrime laws in Africa and the Middle East

Discussions at the IGF 2024 in Riyadh shed light on growing challenges to freedom of expression in Africa and the Middle East. Experts from diverse organisations highlighted how restrictive cybercrime legislation and content regulation have been used to silence dissent, marginalise communities, and undermine democracy. Examples from Tunisia and Nigeria revealed how critics and activists often face criminalisation under these laws, fostering fear and self-censorship.

Panellists included Annelies Riezebos from the Dutch Ministry of Foreign Affairs, Jacqueline Rowe of the University of Edinburgh, Adeboye Adegoke from Paradigm Initiative, and Aymen Zaghdoudi of AccessNow. They discussed the negative effects of vague cybercrime regulations and overly broad restrictions on online speech, which frequently suppress political discourse. Maria Paz Canales from Global Partners Digital added that content governance frameworks need urgent reform to balance addressing online harms with protecting fundamental rights.

The speakers emphasised that authoritarian values are being enforced through legislation that criminalises disinformation and imposes ambiguous rules on online platforms. These measures, they argued, contribute to a deteriorating climate for free expression across the region. They also pointed out the need for online platforms to adopt responsible content moderation practices while resisting pressures to conform to repressive local laws.

Panellists proposed several strategies to counter these trends, including engaging with parliamentarians, building capacity among legal professionals, and ensuring civil society’s involvement during the early stages of policy development. The importance of international collaboration was underlined, with the UN Cybercrime Treaty cited as a key opportunity for collective advocacy against repressive measures.

Participants also stressed the urgency of increased representation of Global South organisations in global policy discussions. Flexible funding for civil society initiatives was described as essential for supporting grassroots efforts to defend digital rights. Such funding would enable local groups to challenge restrictive laws effectively and amplify their voices in international debates.

The event concluded with a call for multi-stakeholder approaches to internet governance. Collaborative efforts involving governments, civil society, and online platforms were deemed critical to safeguarding freedom of expression. The discussions underscored the pressing need to balance addressing legitimate online harms with protecting democratic values and the voices of vulnerable communities.


Netflix fined for failing to inform customers about data usage

The Dutch Data Protection Authority (DPA) has imposed a €4.75 million ($4.98 million) fine on Netflix for not adequately informing its customers about how their personal data was being used between 2018 and 2020. The fine follows a detailed investigation that began in 2019, which revealed that Netflix’s privacy statement was insufficiently clear regarding the company’s data practices. Specifically, the DPA found that the streaming giant did not provide customers with enough information on how their data was being processed or used.

The investigation also uncovered that when customers sought to understand which personal data Netflix was collecting, they did not receive clear answers. This lack of transparency was deemed a violation of the General Data Protection Regulation (GDPR), which sets strict requirements on companies to protect user privacy and ensure clear communication about data usage.

In response to the findings, Netflix has since updated its privacy statement and improved how it informs customers about its data collection practices. Despite these changes, the company has objected to the fine, though it did not provide a comment when approached by the press.

This fine highlights the increasing scrutiny on companies to comply with GDPR and underscores the importance of clear, transparent data handling practices, especially for tech giants like Netflix that handle vast amounts of personal information.

TP-Link faces US ban amid cybersecurity concerns, WSJ reports

US authorities are weighing a potential ban on TP-Link Technology Co., a Chinese router manufacturer, over national security concerns, following reports linking its home internet routers to cyberattacks. According to the Wall Street Journal, the US government is investigating whether TP-Link routers could be used in cyber operations targeting the US, citing concerns raised by lawmakers and intelligence agencies.

In August, two US lawmakers urged the Biden administration to examine TP-Link and its affiliates for possible links to cyberattacks, highlighting fears that the company’s routers could be exploited in future cyber operations. The Commerce, Defence, and Justice departments have launched separate investigations into the company, with reports indicating that a ban on the sale of TP-Link routers in the US could come as early as next year. As part of the investigations, the Commerce Department has reportedly subpoenaed the company.

TP-Link has been under scrutiny since the US Cybersecurity and Infrastructure Security Agency (CISA) flagged vulnerabilities in the company’s routers that could allow remote code execution. This comes amid heightened concerns that Chinese-made routers could be used by Beijing to infiltrate and spy on American networks. The US government, along with its allies and Microsoft, has also uncovered a Chinese government-linked hacking campaign, Volt Typhoon, which targeted critical US infrastructure by taking control of private routers.

The Commerce, Defence, and Justice departments, as well as TP-Link, did not immediately respond to requests for comment.

Meta data breach leads to huge EU fine

Meta has been fined €251 million by the European Union’s privacy regulator over a 2018 security breach that affected 29 million users worldwide. The breach involved the ‘View As’ feature, which cyber attackers exploited to access sensitive personal data such as names, contact details, and even information about users’ children.

The Irish Data Protection Commission, Meta’s lead EU regulator, highlighted the severity of the violation, which exposed users to potential misuse of their private information. Meta resolved the issue shortly after its discovery and notified affected users and authorities. Of the 29 million accounts compromised, approximately 3 million belonged to users in the EU and European Economic Area.

This latest fine brings Meta’s total penalties under the EU’s General Data Protection Regulation to nearly €3 billion. A Meta spokesperson stated that the company plans to appeal the decision and emphasised the measures it has implemented to strengthen user data protection. This case underscores the ongoing regulatory scrutiny faced by major technology firms in Europe.

Experts at IGF 2024 address challenges of online information governance

The IGF 2024 panel explored the challenges and opportunities in creating healthier online information spaces. Experts from civil society, governments, and media highlighted concerns about big tech’s influence, misinformation, and the financial struggles of journalism in the digital age. Discussions centred on multi-stakeholder approaches, regulatory frameworks, and innovative solutions to address these issues.

Speakers including Nighat Dad and Martin Samaan criticised the power imbalance created by major platforms acting as gatekeepers to information. Concerns about insufficient language-specific content moderation and misinformation affecting non-English speakers were raised, with Aws Al-Saadi showcasing Tech4Peace, an Iraqi app tackling misinformation. Julia Haas called for stronger AI governance and transparency to protect vulnerable users while enhancing content curation systems.

The financial sustainability of journalism took centre stage, with Elena Perotti highlighting the decline in advertising revenue for traditional publishers. Isabelle Lois presented Switzerland’s regulatory initiatives, which focus on transparency, user rights, and media literacy, as potential solutions. Industry collaborations to redirect advertising revenue to professional media were also proposed to sustain quality journalism.

Collaboration emerged as a key theme, with Claire Harring and other speakers emphasising partnerships among governments, media organisations, and tech companies. Initiatives like Meta’s Oversight Board and global dialogues on AI governance were cited as steps toward creating balanced and equitable digital spaces. The session concluded with a call to action for greater engagement in global governance to address the interconnected challenges of the digital information ecosystem.

Protecting critical infrastructure in a fragile cyberspace

‘Securing Critical Infrastructure in Cyber: Who and How?’ was the title of one of the main panels at IGF 2024 in Riyadh, where participants discussed the complexities of identifying, securing, and cooperating to protect critical systems from cyber threats. The session, part of the Geneva Dialogue project, also examined how international cyber norms can be implemented in practice.

The dialogue highlighted the elusive nature of defining critical infrastructure, as interpretations vary widely across nations. ‘Understanding critical infrastructure begins with impact analysis, but what happens if these systems fail?’ noted Nicolas Grunder from ABB, underscoring the need for clarity. Regional interdependencies further complicate matters, as cascading failures in energy, transportation, or cloud services can cripple interconnected sectors, a scenario brought to life through a fictional cyberattack simulation on a cloud provider.

Baseline cybersecurity measures emerged as a priority, focusing on asset inventories, supply chain security, and resilience planning. Kazuo Noguchi of Hitachi America emphasised the mantra of ‘backup, backup, backup’, advocating for distributed systems across regions to mitigate single points of failure. Practical measures like incident response plans, vulnerability management, and operator awareness training were cited as essential components of any security framework.

The role of international cyber norms and confidence-building measures (CBMs) sparked debate. While voluntary, norms such as avoiding attacks on critical infrastructure during peacetime provide a foundation for responsible state behaviour. Yet, as Kaleem Usmani of CERT Mauritius pointed out, ‘Norms reduce risks and foster cooperation, but accountability remains a challenge.’ Regional collaboration, such as harmonised security certifications, was proposed as a pragmatic solution to bridge gaps in global standards.

Amid growing geopolitical complexities, participants called for greater transparency and cooperation. Bushra AlBlooshi from the Dubai Electronic Security Center showcased Dubai’s approach, where interdependencies between sectors like power and transportation are mapped to preempt disruptions. However, securing systems reliant on foreign service providers adds another layer of vulnerability, prompting calls for international agreements to establish untouchable ‘red lines’ for critical infrastructure in peace and war.

Parliamentary panel at IGF discusses ICTs and AI in counterterrorism efforts

At the 2024 Internet Governance Forum (IGF) in Riyadh, a panel of experts explored how parliaments can harness information and communication technologies (ICTs) and AI to combat terrorism while safeguarding human rights. The session, titled ‘Parliamentary Approaches to ICT and UN SC Resolution 1373,’ emphasised the dual nature of these technologies—as tools for both law enforcement and malicious actors—and highlighted the pivotal role of international collaboration.

Legislation and oversight in a digital era

David Alamos, Chief of the UNOCT programme on Parliamentary Engagement, set the stage by underscoring the responsibility of parliaments to translate international frameworks like UN Security Council Resolution 1373 into national laws. ‘Parliamentarians must allocate budgets and exercise oversight to ensure counterterrorism efforts are both effective and ethical,’ Alamos stated.

Akvile Giniotiene of the UN Office of Counterterrorism echoed this sentiment, emphasising the need for robust legal frameworks to empower law enforcement in leveraging new technologies responsibly.

Opportunities and risks in emerging technologies

Panellists examined the dual role of ICTs and AI in counterterrorism. Abdelouahab Yagoubi, a member of Algeria’s National Assembly, highlighted AI’s potential to enhance threat detection and predictive analysis.

At the same time, Jennifer Bramlette from the UN Counterterrorism Committee stressed the importance of digital literacy in fortifying societal resilience. Kamil Aydin and Emanuele Loperfido of the OSCE Parliamentary Assembly, meanwhile, cautioned against the misuse of these technologies, pointing to risks such as deepfakes and cybercrime-as-a-service that enable terrorist propaganda and disinformation campaigns.

The case for collaboration

The session spotlighted the critical need for international cooperation and public-private partnerships to address the cross-border nature of terrorist threats. Giniotiene called for enhanced coordination mechanisms among nations, while Yagoubi praised the Parliamentary Assembly of the Mediterranean for fostering knowledge-sharing on AI’s implications.

‘No single entity can tackle this alone,’ Alamos remarked, advocating for UN-led capacity-building initiatives to support member states.

Balancing security with civil liberties

A recurring theme was the necessity of balancing counterterrorism measures with the protection of human rights. Loperfido warned against the overreach of security measures, noting that ethical considerations must guide the development and deployment of AI in law enforcement.

An audience query on the potential misuse of the term ‘terrorism’ further underscored the importance of safeguarding civil liberties within legislative frameworks.

Looking ahead

The panel concluded with actionable recommendations, including updating the UN Parliamentary Handbook on Resolution 1373, investing in digital literacy, and ensuring parliamentarians are well-versed in emerging technologies.

‘Adapting to the rapid pace of technological advancement while maintaining a steadfast commitment to the rule of law is paramount,’ Alamos said, encapsulating the session’s ethos. The discussion underscored the indispensable role of parliaments in shaping a global counterterrorism strategy that is both effective and equitable.

TikTok appeals to Supreme Court to block looming US ban

TikTok and its parent company, ByteDance, have asked the Supreme Court to halt a US law that would force ByteDance to sell TikTok by 19 January or face a nationwide ban. The companies argue that the law violates the First Amendment, as it targets one of the most widely used social media platforms in the United States, which currently has 170 million American users. A group of TikTok users also submitted a similar request to prevent the shutdown.

The law, passed by Congress in April, reflects concerns over national security. The Justice Department claims TikTok poses a threat due to its access to vast user data and potential for content manipulation by a Chinese-owned company. A lower court in December upheld the law, rejecting TikTok’s argument that it infringes on free speech rights. TikTok maintains that users should be free to decide for themselves whether to use the app and that shutting it down for even a month could cause massive losses in users and advertisers.

With the ban set to take effect the day before President-elect Donald Trump’s inauguration, TikTok has urged the Supreme Court to decide by 6 January. Trump, who once supported banning TikTok, has since reversed his position and expressed willingness to reconsider. The case highlights rising trade tensions between the US and China and could set a precedent for other foreign-owned apps operating in America.