US Supreme Court to hear TikTok’s bid to block ban

The US Supreme Court has agreed to review a case involving TikTok and its Chinese parent company, ByteDance, in a challenge against a law requiring the app’s sale or a ban in the US by 19 January. The court will hear arguments on 10 January but has not yet decided on TikTok’s request to block the law, which it claims violates free speech rights under the First Amendment. TikTok, used by 170 million Americans, argues the law would harm its operations and user base, while US officials cite national security concerns over data access and content manipulation.

The Justice Department has labelled TikTok a significant security risk due to its Chinese ownership, while TikTok denies posing any threat and accuses lawmakers of speculation. The law, passed in April and signed by President Biden, would ban the app unless ByteDance divests its ownership. The company warns that even a temporary shutdown could damage its US market share, advertising revenue, and ability to recruit creators and staff.

The case also reflects heightened tensions between the US and China over technology and trade policies. TikTok’s fate could set a precedent for the treatment of other foreign-owned apps, raising questions about free speech and digital commerce. The Supreme Court’s decision may have far-reaching implications for the platform’s future and US-China relations.

Apple criticises Meta’s requests for access to iPhone tools

Apple has accused Meta of making excessive interoperability requests that could compromise user privacy and security, intensifying the rivalry between the two tech giants. Under the European Union’s Digital Markets Act (DMA), Apple must allow competitors access to its services or face significant fines. Apple claims Meta’s 15 requests — more than any other company — could expose sensitive data like messages, emails, and passwords.

Meta, which seeks integration for products like its Quest VR headsets and smart glasses, dismissed Apple’s privacy concerns as a cover for anticompetitive practices. Apple cited Meta’s past privacy violations in Europe as a reason for caution.

Meanwhile, the European Commission has outlined measures to ensure Apple complies with the DMA, including clear timelines and feedback mechanisms for developers. A final decision on Apple’s compliance with the law is expected in March 2025.

Tackling internet fragmentation: A global challenge at IGF 2024

At the Internet Governance Forum (IGF) 2024 in Riyadh, the main session ‘Policy Network on Internet Fragmentation’ delved into implementing Article 29C of the Global Digital Compact (GDC), which seeks to prevent internet fragmentation. A diverse panel comprising government officials, technical experts, and civil society representatives highlighted the multifaceted nature of this issue and proposed actionable strategies to address it.

The scope of internet fragmentation

Panellists underscored that internet fragmentation manifests on technical, governance, and user experience levels. While the global network of over 70,000 autonomous systems remains technically unified, fragmentation is evident in user experiences. Anriette Esterhuysen from the Association for Progressive Communications pointed out, ‘How you view the internet as fragmented or not depends on whose internet you think it is.’ She stressed that billions face access and content restrictions, fragmenting their digital experience.

Gbenga Sesan of Paradigm Initiative echoed this concern, noting that fragmentation undermines the goal of universal connectivity by 2030. The tension between a seamless technical infrastructure and fractured user realities loomed large in the discussion.

Operationalising the GDC commitment

Alisa Heaver from the Dutch Ministry of Economic Affairs and Climate Policy highlighted the critical role of Article 29C as a blueprint for preventing fragmentation. She called for a measurable framework to track progress by the GDC’s 2027 review, emphasising that research on the economic impacts of fragmentation must be prioritised. ‘We need to start measuring internet fragmentation now more than ever,’ Heaver urged.

Strategies for collaboration and progress

Multistakeholder cooperation emerged as a cornerstone for addressing fragmentation. Wim Degezelle, a consultant with the IGF Secretariat, presented the Policy Network on Internet Fragmentation (PNIF) framework, while Amitabh Singhal of ICANN highlighted the IGF’s unique position in bridging technical and policy divides. Singhal also pointed to the potential renewal of the IGF’s mandate as pivotal in continuing these essential discussions.

The session emphasised inclusivity in technical standard-setting processes, with Sesan advocating for civil society’s role and audience members calling for stronger private sector engagement. Sheetal Kumar, co-facilitator of the session, stressed the importance of leveraging national and regional IGFs to foster localised dialogues on fragmentation.

Next steps and future outlook

The panel identified key actions, including developing measurable frameworks, conducting economic research, and utilising national and regional IGFs to sustain discussions. The upcoming IGF in 2025 was flagged as a milestone for assessing progress. Despite the issue’s complexity, the panellists were united in their commitment to fostering a more inclusive and seamless internet.

As Esterhuysen aptly summarised, addressing internet fragmentation requires a concerted effort to view the digital landscape through diverse lenses. This session reaffirmed that preventing fragmentation is not just a technical challenge but a deeply human one, demanding collaboration, research, and sustained dialogue.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Experts at IGF 2024 address the dual role of AI in elections, emphasising empowerment and challenges

At IGF 2024, panellists explored AI’s role in elections, its potential for both empowerment and disruption, and the challenges it poses to democratic processes. Moderator Tapani Tarvainen led the discussion with contributions from Ayobangira Safari Nshuti, Roxana Radu, Babu Ram Aryal, and other experts.

Speakers noted that AI had been primarily used for self-promotion in campaigns, helping smaller candidates compete with limited resources. Roxana Radu highlighted AI’s positive role in voter outreach in India but warned of risks such as disinformation and public opinion manipulation. Ayobangira Safari Nshuti pointed to algorithmic biases and transparency issues in platforms as critical concerns, emphasising a recent case in Romania where AI-enabled manipulation caused election disruption.

Accountability of social media platforms became a focal point. Platforms increasingly rely on AI for content moderation, but their effectiveness in languages with limited online presence remains inadequate. Babu Ram Aryal stressed the need for stronger oversight, particularly in multilingual nations, while Dennis Redeker underscored the challenges of balancing regulation with free speech.

Panellists called for holistic solutions to safeguard democracy. Suggestions included enhancing platform transparency, implementing robust digital literacy programmes, and addressing social factors like poverty that exacerbate misinformation. Nana, an AI ethics specialist, advocated for proactive governance to adapt electoral institutions to technological realities.

The session concluded with a recognition that AI’s role in elections will continue to evolve. Panellists urged collaborative efforts between governments, civil society, and technology companies to ensure election integrity and maintain public trust in democratic systems.

Meta data breach leads to huge EU fine

Meta has been fined €251 million by the European Union’s privacy regulator over a 2018 security breach that affected 29 million users worldwide. The breach involved the ‘View As’ feature, which cyber attackers exploited to access sensitive personal data such as names, contact details, and even information about users’ children.

The Irish Data Protection Commission, Meta’s lead EU regulator, highlighted the severity of the violation, which exposed users to potential misuse of their private information. Meta resolved the issue shortly after its discovery and notified affected users and authorities. Of the 29 million accounts compromised, approximately 3 million belonged to users in the EU and European Economic Area.

This latest fine brings Meta’s total penalties under the EU’s General Data Protection Regulation to nearly €3 billion. A Meta spokesperson stated that the company plans to appeal the decision and emphasised the measures it has implemented to strengthen user data protection. This case underscores the ongoing regulatory scrutiny faced by major technology firms in Europe.

Experts at IGF 2024 address challenges of online information governance

The IGF 2024 panel explored the challenges and opportunities in creating healthier online information spaces. Experts from civil society, governments, and media highlighted concerns about big tech’s influence, misinformation, and the financial struggles of journalism in the digital age. Discussions centred on multistakeholder approaches, regulatory frameworks, and innovative solutions to address these issues.

Speakers including Nighat Dad and Martin Samaan criticised the power imbalance created by major platforms acting as gatekeepers to information. Concerns about insufficient language-specific content moderation and misinformation affecting non-English speakers were raised, with Aws Al-Saadi showcasing Tech4Peace, an Iraqi app tackling misinformation. Julia Haas called for stronger AI governance and transparency to protect vulnerable users while enhancing content curation systems.

The financial sustainability of journalism took centre stage, with Elena Perotti highlighting the decline in advertising revenue for traditional publishers. Isabelle Lois presented Switzerland’s regulatory initiatives, which focus on transparency, user rights, and media literacy, as potential solutions. Industry collaborations to redirect advertising revenue to professional media were also proposed to sustain quality journalism.

Collaboration emerged as a key theme, with Claire Harring and other speakers emphasising partnerships among governments, media organisations, and tech companies. Initiatives like Meta’s Oversight Board and global dialogues on AI governance were cited as steps toward creating balanced and equitable digital spaces. The session concluded with a call to action for greater engagement in global governance to address the interconnected challenges of the digital information ecosystem.

Election integrity in the digital age: insights from IGF 2024

Election integrity and disinformation were the focus of the session ‘Internet governance and elections: maximising the potential for trust and addressing risks’ at the Internet Governance Forum (IGF) 2024 on Wednesday. Experts from across sectors convened to discuss how to safeguard election integrity amid digital challenges. With more than 65 elections taking place globally this year, the so-called ‘super election year,’ the risks of voters being misled have never been higher. From misinformation to AI deepfakes, the conversation underscored the escalating threats and the need for collaborative, multistakeholder solutions.

The growing threat of disinformation

Tawfik Jelassi from UNESCO emphasised the exponential rise of disinformation, framing it as a key global risk. ‘Without facts, there is no trust, and without trust, democracy falters,’ he cautioned, adding that misinformation spreads ten times faster than verified content, exacerbating distrust in elections. Panellists, including William Bird of Media Monitoring Africa and Lina Viltrakiene of the Lithuanian government, described how malicious actors manipulate digital platforms to mislead voters, with deepfakes and coordinated inauthentic behaviour becoming increasingly pervasive.

Digital inequality and global disparities

Elizabeth Orembo of ICT Africa highlighted the stark challenges faced by the Global South, where digital divides and unequal media access leave populations more vulnerable to misinformation. Unregulated influencers and podcasters wield significant power in Africa, often spreading unchecked narratives. ‘We cannot apply blanket policies from tech companies without addressing regional contexts,’ Orembo noted, pointing to the need for tailored approaches that account for infrastructural and cultural disparities.

AI, social media, and platform accountability

Meta’s Sezen Yesil shed light on the company’s efforts to combat election-related threats, including stricter measures against fake accounts, improved transparency for political ads, and collaboration with fact-checkers. While AI-driven disinformation remains a concern, Yesil observed that the anticipated impact of generative AI in the 2024 elections was modest. Nonetheless, panellists called for stronger accountability measures for tech companies, with Viltrakiene advocating for legal frameworks like the EU’s Digital Services Act to counter digital harms effectively.

A multistakeholder solution

The session highlighted the importance of multistakeholder collaboration, a frequent theme across discussions. Rosemary Sinclair of Australia’s auDA emphasised that safeguarding democracy is a ‘global team sport,’ requiring contributions from governments, civil society, academia, and the technical community. ‘The IGF is the ideal space for fostering such cooperation,’ she added, urging closer coordination between national and global IGF platforms.

Participants agreed that the fight for election integrity must extend beyond election cycles. Digital platforms, governments, and civil society must sustain efforts to build trust, address digital inequities, and create frameworks that protect democracy in the digital age. The IGF’s role as a forum for global dialogue and action was affirmed, with calls to strengthen its influence in shaping governance solutions for the future.

Election coalitions against misinformation

In our digital age, where misinformation threatens the integrity of elections worldwide, a session at the IGF 2024 in Riyadh titled ‘Combating Misinformation with Election Coalitions’ strongly advocated for a collaborative approach to this issue. Panellists from diverse backgrounds, including Google, fact-checking organisations, and journalism, underscored the significance of election coalitions in safeguarding democratic processes. Mevan Babakar from Google introduced the ‘Elections Playbook,’ a public policy guide for forming effective coalitions, highlighting the necessity of trust, neutrality, and collaboration across varied stakeholders.

The session explored successful models like Brazil’s Comprova, which unites media outlets to fact-check election-related claims, and Facts First PH in the Philippines, promoting a ‘mesh’ approach where fact-checked information circulates through community influencers. Daniel Bramatti, an investigative journalist from Brazil, emphasised the importance of fact-checking as a response to misinformation, not a suppression of free speech. ‘Fact-checking is the free speech response to misinformation,’ he stated, advocating for context determination over censorship.

Challenges discussed included maintaining coalition momentum post-election, navigating government pressures, and dealing with the advent of AI-generated content. Alex Walden, Global Head of Human Rights for Google, addressed the delicate balance of engaging with governments while maintaining neutrality. ‘We have to be mindful of the role that we have in engaging neutrally,’ she noted, stressing the importance of clear, consistent policies for content moderation.

The conversation also touched on engaging younger, non-voting demographics in fact-checking initiatives, with David Ajikobi from Africa Check highlighting media literacy programmes in Nigeria. The panellists agreed on the need for a multistakeholder approach, advocating for frameworks that focus on specific harms rather than the broad term ‘misinformation,’ as suggested by Peter Cunliffe-Jones’s work at the University of Westminster.

The session concluded with clear advice: for anyone looking to start or join an election coalition, prioritise relationship-building and choose coordinators with neutrality and independence. The call to action was for continued collaboration, innovation, and adaptation to local contexts to combat the evolving landscape of misinformation, ensuring that these coalitions survive and thrive beyond election cycles.

International experts converge at IGF 2024 to promote digital solidarity in global governance

A panel of international experts at the IGF 2024 gathered to discuss the growing importance of digital solidarity in global digital governance. Jennifer Bachus of the US State Department introduced the concept as a framework for fostering international cooperation centred on human rights and multistakeholder engagement. Nashilongo Gervasius, a public interest technology expert from Namibia, highlighted the need to close digital divides and promote inclusivity in global digital policymaking.

The discussion focused on balancing digital sovereignty with the need for international collaboration. Jason Pielemeier, Executive Director of the Global Network Initiative, stressed the critical role of data privacy and cybersecurity in advancing global digital rights. Robert Opp, Chief Digital Officer at the United Nations Development Programme, emphasised the importance of capacity building and enhancing digital infrastructure, particularly in developing nations.

Key global mechanisms like the Internet Governance Forum (IGF) and the World Summit on the Information Society (WSIS) processes featured prominently in the dialogue. Panellists, including Susan Mwape from Zambia, underscored the need to strengthen these platforms while ensuring they remain inclusive and respectful of human rights. The upcoming WSIS+20 review was recognised as an opportunity to revitalise international cooperation in the digital realm.

Challenges such as internet shutdowns, mass surveillance, and the misuse of cybercrime legislation were debated. Mwape voiced concerns about the potential for international forums to lose credibility if hosted by countries with poor human rights records. Audience member Barbara from Nepal called for greater accountability in digital governance practices, while Hala Rasheed from the Alnahda Society echoed the urgency of addressing inequalities in digital policy implementation.

Russian civil society representative Alexander Savnin brought attention to the impact of sanctions on international technical cooperation in cybersecurity. He argued for a more balanced approach that would allow global stakeholders to address shared security challenges effectively. Panellists agreed that fostering trust among diverse actors remains a critical hurdle to achieving digital solidarity.

The session concluded with a commitment to fostering continuous dialogue and collaboration. Panellists expressed hope that inclusive and rights-based approaches could transform digital solidarity into tangible solutions, helping to address the pressing challenges of the digital age.

Parliamentary panel at IGF discusses ICTs and AI in counterterrorism efforts

At the 2024 Internet Governance Forum (IGF) in Riyadh, a panel of experts explored how parliaments can harness information and communication technologies (ICTs) and AI to combat terrorism while safeguarding human rights. The session, titled ‘Parliamentary Approaches to ICT and UN SC Resolution 1373,’ emphasised the dual nature of these technologies—as tools for both law enforcement and malicious actors—and highlighted the pivotal role of international collaboration.

Legislation and oversight in a digital era

David Alamos, Chief of the UNOCT programme on Parliamentary Engagement, set the stage by underscoring the responsibility of parliaments to translate international frameworks like UN Security Council Resolution 1373 into national laws. ‘Parliamentarians must allocate budgets and exercise oversight to ensure counterterrorism efforts are both effective and ethical,’ Alamos stated.

Akvile Giniotiene of the UN Office of Counterterrorism echoed this sentiment, emphasising the need for robust legal frameworks to empower law enforcement in leveraging new technologies responsibly.

Opportunities and risks in emerging technologies

Panellists examined the dual role of ICTs and AI in counterterrorism. Abdelouahab Yagoubi, a member of Algeria’s National Assembly, highlighted AI’s potential to enhance threat detection and predictive analysis.

Jennifer Bramlette from the UN Counterterrorism Committee, meanwhile, stressed the importance of digital literacy in fortifying societal resilience. By contrast, Kamil Aydin and Emanuele Loperfido of the OSCE Parliamentary Assembly cautioned against the misuse of these technologies, pointing to risks such as deepfakes and cybercrime-as-a-service, which enable terrorist propaganda and disinformation campaigns.

The case for collaboration

The session spotlighted the critical need for international cooperation and public-private partnerships to address the cross-border nature of terrorist threats. Giniotiene called for enhanced coordination mechanisms among nations, while Yagoubi praised the Parliamentary Assembly of the Mediterranean for fostering knowledge-sharing on AI’s implications.

‘No single entity can tackle this alone,’ Alamos remarked, advocating for UN-led capacity-building initiatives to support member states.

Balancing security with civil liberties

A recurring theme was the necessity of balancing counterterrorism measures with the protection of human rights. Loperfido warned against the overreach of security measures, noting that ethical considerations must guide the development and deployment of AI in law enforcement.

An audience query on the potential misuse of the term ‘terrorism’ further underscored the importance of safeguarding civil liberties within legislative frameworks.

Looking ahead

The panel concluded with actionable recommendations, including updating the UN Parliamentary Handbook on Resolution 1373, investing in digital literacy, and ensuring parliamentarians are well-versed in emerging technologies.

‘Adapting to the rapid pace of technological advancement while maintaining a steadfast commitment to the rule of law is paramount,’ Alamos said, encapsulating the session’s ethos. The discussion underscored the indispensable role of parliaments in shaping a global counterterrorism strategy that is both effective and equitable.