Experts at IGF 2024 address challenges of online information governance

The IGF 2024 panel explored the challenges and opportunities in creating healthier online information spaces. Experts from civil society, governments, and media highlighted concerns about big tech’s influence, misinformation, and the financial struggles of journalism in the digital age. Discussions centred on multi-stakeholder approaches, regulatory frameworks, and innovative solutions to address these issues.

Speakers including Nighat Dad and Martin Samaan criticised the power imbalance created by major platforms acting as gatekeepers to information. Concerns about insufficient language-specific content moderation and misinformation affecting non-English speakers were raised, with Aws Al-Saadi showcasing Tech4Peace, an Iraqi app tackling misinformation. Julia Haas called for stronger AI governance and transparency to protect vulnerable users while enhancing content curation systems.

The financial sustainability of journalism took centre stage, with Elena Perotti highlighting the decline in advertising revenue for traditional publishers. Isabelle Lois presented Switzerland’s regulatory initiatives, which focus on transparency, user rights, and media literacy, as potential solutions. Industry collaborations to redirect advertising revenue to professional media were also proposed to sustain quality journalism.

Collaboration emerged as a key theme, with Claire Harring and other speakers emphasising partnerships among governments, media organisations, and tech companies. Initiatives like Meta’s Oversight Board and global dialogues on AI governance were cited as steps toward creating balanced and equitable digital spaces. The session concluded with a call to action for greater engagement in global governance to address the interconnected challenges of the digital information ecosystem.

Inclusive AI governance: Perspectives from the Global South

At the 2024 Internet Governance Forum (IGF) in Riyadh, the Data and AI Governance coalition convened a panel to explore the challenges and opportunities of AI governance from the perspective of the Global South. The discussion delved into AI’s impacts on human rights, democracy, and economic development, emphasising the need for inclusive and region-specific frameworks.

Towards inclusive frameworks

Ahmad Bhinder, representing the Digital Cooperation Organization, stressed the importance of regional AI strategies. He highlighted the development of a self-assessment tool for AI readiness, designed to guide member states in governance and capacity development.

Similarly, Melody Musoni, Policy Officer at ECDPM, pointed to the African Union’s continental strategy as a promising example of unified AI governance. Elise Racine, a doctoral candidate at the University of Oxford, proposed reparative algorithmic impact assessments, underscoring the need to address historical inequities and providing a blueprint for more equitable AI systems.

Ethics, rights, and regional challenges

The ethical dimensions of AI took centre stage, with Bianca Kremer, a member of the board of CGI.br and a professor at FGV Law School Rio, highlighting algorithmic bias in Brazil, where ‘90.5% of those arrested through facial recognition technologies are black and brown.’ This stark statistic underscored the urgent need to mitigate AI-driven discrimination.

Guangyu Qiao Franco from Radboud University emphasised the underrepresentation of Global South nations in AI arms control discussions, advocating for an inclusive approach to global AI governance.

Labour, economy, and sustainability

The panel explored AI’s economic and environmental ramifications. Avantika Tewari, PhD candidate at the Center for Comparative Politics and Political Theory at Jawaharlal Nehru University in New Delhi, discussed the exploitation of digital labour in AI development, urging fair compensation for workers in the Global South.

Rachel Leach raised concerns about the environmental costs of AI technologies, including embodied carbon, and criticised the lack of sustainability measures in current AI development paradigms.

Regional and global collaboration

Speakers highlighted the necessity of cross-border cooperation. Sizwe Snail ka Mtuze and Rocco Saverino, PhD candidate at the Free University of Brussels, examined region-specific approaches in Africa and Latin America, stressing the importance of tailored frameworks.

Observations on Brazil by Luca Belli, Professor at FGV Law School and Director of the Center for Technology and Society, revealed gaps between AI regulation and implementation, emphasising the need for pragmatic, context-sensitive policies.

Actionable pathways forward

The discussion concluded with several actionable recommendations: fostering inclusive AI governance frameworks, implementing reparative assessments, addressing environmental and labour impacts, and prioritising digital literacy and regional collaboration.

‘Inclusive governance is not just a moral imperative but a practical necessity,’ Bhinder remarked, encapsulating the panel’s call to action. The session underscored the critical need for global cooperation to ensure AI serves humanity equitably.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Election coalitions against misinformation

In our digital age where misinformation threatens the integrity of elections worldwide, a session at the IGF 2024 in Riyadh titled ‘Combating Misinformation with Election Coalitions’ strongly advocated for a collaborative approach to this issue. Panellists from diverse backgrounds, including Google, fact-checking organisations, and journalism, underscored the significance of election coalitions in safeguarding democratic processes. Mevan Babakar from Google introduced the ‘Elections Playbook,’ a public policy guide for forming effective coalitions, highlighting the necessity of trust, neutrality, and collaboration across varied stakeholders.

The session explored successful models like Brazil’s Comprova, which unites media outlets to fact-check election-related claims, and Facts First PH in the Philippines, promoting a ‘mesh’ approach where fact-checked information circulates through community influencers. Daniel Bramatti, an investigative journalist from Brazil, emphasised the importance of fact-checking as a response to misinformation, not a suppression of free speech. ‘Fact-checking is the free speech response to misinformation,’ he stated, advocating for context determination over censorship.

Challenges discussed included maintaining coalition momentum post-election, navigating government pressures, and dealing with the advent of AI-generated content. Alex Walden, Global Head of Human Rights for Google, addressed the delicate balance of engaging with governments while maintaining neutrality. ‘We have to be mindful of the role that we have in engaging neutrally,’ she noted, stressing the importance of clear, consistent policies for content moderation.

The conversation also touched on engaging younger, non-voting demographics in fact-checking initiatives, with David Ajikobi from Africa Check highlighting media literacy programmes in Nigeria. The panellists agreed on the need for a multistakeholder approach, advocating for frameworks that focus on specific harms rather than the broad term ‘misinformation,’ as suggested by Peter Cunliffe-Jones’s work at the University of Westminster.

The session concluded with clear advice: for anyone looking to start or join an election coalition, prioritise relationship-building and choose coordinators with neutrality and independence. The call to action was for continued collaboration, innovation, and adaptation to local contexts to combat the evolving landscape of misinformation, ensuring that these coalitions survive and thrive beyond election cycles.

Parliamentary panel at IGF discusses ICTs and AI in counterterrorism efforts

At the 2024 Internet Governance Forum (IGF) in Riyadh, a panel of experts explored how parliaments can harness information and communication technologies (ICTs) and AI to combat terrorism while safeguarding human rights. The session, titled ‘Parliamentary Approaches to ICT and UN SC Resolution 1373,’ emphasised the dual nature of these technologies—as tools for both law enforcement and malicious actors—and highlighted the pivotal role of international collaboration.

Legislation and oversight in a digital era

David Alamos, Chief of the UNOCT programme on Parliamentary Engagement, set the stage by underscoring the responsibility of parliaments to translate international frameworks like UN Security Council Resolution 1373 into national laws. ‘Parliamentarians must allocate budgets and exercise oversight to ensure counterterrorism efforts are both effective and ethical,’ Alamos stated.

Akvile Giniotiene of the UN Office of Counter-Terrorism echoed this sentiment, emphasising the need for robust legal frameworks to empower law enforcement in leveraging new technologies responsibly.

Opportunities and risks in emerging technologies

Panellists examined the dual role of ICTs and AI in counterterrorism. Abdelouahab Yagoubi, a member of Algeria’s National Assembly, highlighted AI’s potential to enhance threat detection and predictive analysis.

Jennifer Bramlette from the UN Counter-Terrorism Committee stressed the importance of digital literacy in fortifying societal resilience, while Kamil Aydin and Emanuele Loperfido of the OSCE Parliamentary Assembly cautioned against the misuse of these technologies, pointing to risks such as deepfakes and cybercrime-as-a-service that enable terrorist propaganda and disinformation campaigns.

The case for collaboration

The session spotlighted the critical need for international cooperation and public-private partnerships to address the cross-border nature of terrorist threats. Giniotiene called for enhanced coordination mechanisms among nations, while Yagoubi praised the Parliamentary Assembly of the Mediterranean for fostering knowledge-sharing on AI’s implications.

‘No single entity can tackle this alone,’ Alamos remarked, advocating for UN-led capacity-building initiatives to support member states.

Balancing security with civil liberties

A recurring theme was the necessity of balancing counterterrorism measures with the protection of human rights. Loperfido warned against the overreach of security measures, noting that ethical considerations must guide the development and deployment of AI in law enforcement.

An audience query on the potential misuse of the term ‘terrorism’ further underscored the importance of safeguarding civil liberties within legislative frameworks.

Looking ahead

The panel concluded with actionable recommendations, including updating the UN Parliamentary Handbook on Resolution 1373, investing in digital literacy, and ensuring parliamentarians are well-versed in emerging technologies.

‘Adapting to the rapid pace of technological advancement while maintaining a steadfast commitment to the rule of law is paramount,’ Alamos said, encapsulating the session’s ethos. The discussion underscored the indispensable role of parliaments in shaping a global counterterrorism strategy that is both effective and equitable.

NeurIPS conference showcases AI’s rapid growth

The NeurIPS conference, AI’s premier annual gathering, drew over 16,000 computer scientists to British Columbia last week, highlighting the field’s rapid growth and transformation. Once an intimate meeting of academic outliers, the event has evolved into a showcase for technological breakthroughs and corporate ambitions, featuring major players like Alphabet, Meta, and Microsoft.

Industry luminaries like Ilya Sutskever and Fei-Fei Li discussed AI’s evolving challenges. Sutskever emphasised AI’s unpredictability as it learns to reason, while Li called for expanding beyond 2D internet data to develop ‘spatial intelligence’. The conference, delayed a day to avoid clashing with a Taylor Swift concert, underscored AI’s growing mainstream prominence.

Venture capitalists, sponsors, and tech giants flooded the event, reflecting AI’s lucrative appeal. The number of research papers accepted has surged tenfold in a decade, and discussions focused on tackling the costs and limitations of scaling AI models. Notable attendees included Meta’s Yann LeCun and Google DeepMind’s Jeff Dean, who advocated for ‘modular’ and ‘tangly’ AI architectures.

In a symbolic moment of AI’s widening reach, 10-year-old Harini Shravan became the youngest ever to have a paper accepted, illustrating how the field now embraces new generations and diverse ideas.

Meta enhances Ray-Ban smart glasses with AI video and translation

Meta Platforms has introduced significant upgrades to its Ray-Ban Meta smart glasses, adding AI video capabilities and real-time language translation. The updates, announced during Meta’s Connect conference in September, are now available through the v11 software rollout for Early Access Program members.

The new AI video feature lets the smart glasses process visuals and answer user queries in real time. Additionally, the glasses can now translate speech between English and Spanish, French, or Italian, providing translations via open-ear speakers or as text on a connected phone.

Meta also integrated the Shazam music identification app into the glasses, enhancing their functionality for users in the US and Canada. Earlier AI upgrades, such as setting reminders and scanning QR codes via voice commands, continue to expand the glasses’ utility.

Enhancing parliamentary skills for a thriving digital future

As digital transformation accelerates, parliaments across the globe are challenged to keep pace with emerging technologies like AI and data governance. On the second day of IGF 2024 in Riyadh, an influential panel discussed how parliamentary capacity development is essential to shaping inclusive, balanced digital policies without stifling innovation.

The session ‘Building parliamentary capacity to effectively shape the digital realm,’ moderated by Rima Al-Yahya of Saudi Arabia’s Shura Council, brought together representatives from international organisations and tech giants, including ICANN, Google, GIZ, and UNESCO. Their message was that parliamentarians need targeted training and collaboration to effectively navigate AI regulation, data sovereignty, and the digital economy.

The debate on AI regulation reflected a global dilemma: how to regulate AI responsibly without halting progress. UNESCO’s Cedric Wachholz outlined flexible approaches, including risk-based frameworks and ethical principles, as set out in its Recommendation on the Ethics of AI. Google’s Olga Skorokhodova reinforced this, saying that as AI develops it is becoming ‘too important not to regulate well’, and advocated for multistakeholder collaboration and local capacity development.

Beckwith Burr, ICANN board member, stressed that while internet governance requires global coordination, legislative decisions are inherently national. ‘Parliamentarians must understand how the internet works to avoid laws that unintentionally break it,’ she cautioned, adding that ICANN offers robust capacity-building programmes to bridge knowledge gaps.

With a similar stance, Franz von Weizsäcker of GIZ highlighted Africa’s efforts to harmonise digital policies across 55 countries under the African Union’s Data Policy Framework. He noted that concerns about ‘data colonialism’, where local data benefits global corporations, must be tackled through innovative policies that protect data without hindering cross-border data flows.

Parliamentarians from Kenya, Egypt, and Gambia emphasised the need for widespread digital literacy among legislators, as poorly informed laws risk impeding innovation. ‘Over 95% of us do not understand the technical sector,’ said Kenyan Senator Catherine Muma, urging investments to empower lawmakers across sectors such as health, finance, and education to legislate for an AI-driven future.

As Rima Al-Yahya fittingly summarised, ‘Equipping lawmakers with tools and knowledge is pivotal to ensuring digital policies promote innovation, security, and accountability for all.’


Diplo Foundation explores AI’s ethical and philosophical challenges at IGF 2024

At the 2024 Internet Governance Forum (IGF) in Riyadh, a session featuring experts from the Diplo Foundation addressed AI’s deep philosophical and ethical implications. The discussion moved beyond surface-level concerns about bias and ethics, focusing instead on more profound questions about human identity and agency in a world increasingly shaped by AI.

Jovan Kurbalija, Director of the Diplo Foundation, emphasised the need to critically examine AI’s impact on human knowledge and identity. He introduced the idea of a ‘right to be humanly imperfect,’ advocating for preserving human flaws and agency in an AI-dominated world.

That concept was echoed by other speakers, who expressed concern that the pursuit of AI-driven optimisation could erode essential human qualities. Sorina Teleanu, Diplo Foundation’s Director of Knowledge, raised important questions about the tendency to anthropomorphise AI, warning against attributing human traits to machines and urging a broader consideration of non-human forms of intelligence.

The panel also delved into the philosophical dimensions of AI, with Teleanu pointing out the lack of privacy protections surrounding brain data processing and the potential risks of attributing personhood to advanced AI. The discussion of Artificial General Intelligence (AGI) brought up the provocative idea that if AI becomes indistinguishable from humans, it could potentially deserve human rights, challenging our traditional notions of consciousness and personhood.

Addressing AI governance, Kurbalija focused on practical, immediate issues, such as AI’s impact on education, employment, and daily life, rather than speculative long-term concerns. He called for a decentralised approach to AI development that preserves diverse knowledge sources and prevents the centralisation of power by large tech companies. Henri-Jean Pollet from ISPA Belgium added to the conversation by advocating for open-source models and data licensing to ensure AI reliability and prevent inaccuracies in AI-generated content.

The conversation also explored the evolving dynamics of human-AI interaction. Teleanu highlighted the potential changes in human communication as AI-generated text becomes more prevalent, while Mohammad Abdul Haque Anu, Secretary-General of the Bangladesh Internet Governance Forum, stressed the need for AI ethics education, particularly in developing countries. Kurbalija shared a revealing anecdote about AI-generated speeches at conferences, illustrating how AI could influence professional communication in the future.

As the session concluded, Kurbalija highlighted the Diplo Foundation’s approach to AI development, focusing on tools that support diplomats and policymakers by enhancing human knowledge without replacing human decision-making. The discussion wrapped up with a demonstration of these AI tools in action, emphasising their potential to augment human capabilities in specialised fields. The speakers left the audience with an invitation for continued philosophical exploration of AI’s role in shaping humanity’s future.


IGF 2024 panellists highlight infrastructure, literacy, and fair digital access

The 2024 Internet Governance Forum (IGF) brought together global stakeholders to discuss the implementation of the Global Digital Compact (GDC), aiming to address digital inequalities and foster cross-sector partnerships. The session spotlighted key challenges such as funding gaps, cultural adaptation of digital initiatives, and sustainability concerns in infrastructure development.

Isabel De Sola from the Office of the Tech Envoy emphasised stakeholder collaboration and revealed plans for an upcoming GDC implementation roadmap. Roy Eriksson, Finland’s Ambassador for Global Gateway, shared successes from AI strategy projects in African nations, illustrating how capacity-building partnerships can close technology gaps. Kevin Hernandez of the Universal Postal Union presented the Connect.Post programme, which aims to connect global post offices to digital networks by 2030.

Discussions also underscored energy efficiency and sustainability in digital infrastructure. Nandipha Ntshalbu highlighted the need to balance technological growth with environmental considerations. Data governance and cybersecurity frameworks were identified as critical, with Shamsher Mavin Chowdhury stressing the importance of inclusive frameworks to protect the interests of developing countries.

Innovative projects demonstrated local impact, such as Damilare Oydele’s Library Tracker for African libraries and Patricia Ainembabazi’s efforts promoting regional knowledge-sharing platforms. However, Alisa Heaver of the Dutch Ministry of Economic Affairs raised concerns about aligning GDC objectives with existing frameworks to avoid redundancy.

The IGF session concluded with a unified call for continued collaboration. Despite challenges, there was optimism that effective partnerships and targeted initiatives can ensure secure, inclusive, and sustainable digital progress worldwide.

Balancing innovation and oversight: AI’s future requires shared governance

At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas in the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panellists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’
