Revitalising trust with AI: Boosting governance and public services

AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The discussion at the Internet Governance Forum (IGF) in Riyadh, moderated by Brandon Soloski of Meridian International, focused on using AI to streamline services like passport processing and tax systems, while also addressing privacy and data sovereignty concerns. Open-source AI was highlighted as a critical tool for democratising access and fostering innovation, particularly in developing nations.

Global regulatory frameworks were a central theme, with panellists underscoring the need for harmonisation to avoid fragmentation and ensure seamless interoperability across borders. Lucia Russo, economist and policy analyst at the OECD, discussed regulatory approaches such as the EU AI Act, which aims to create a comprehensive legal framework for AI. Brandon Soloski and Sarim Aziz from Meta pointed to the benefits of principle-based frameworks in other regions, which provide flexibility while maintaining oversight. Pellerin Matis, Vice President of Global Government Affairs at Oracle, emphasised the importance of public-private partnerships, which allow governments to leverage private sector expertise and startup innovation for effective AI implementation.

The panellists explored how AI can enhance public services, highlighting its role in healthcare, agriculture, and public safety. Examples included AI-driven tools that improve patient care and streamline food production. However, challenges like data protection, trust in AI systems, and the balance between innovation and regulation were also discussed. Anil Pura, an audience member from Nepal, contributed valuable perspectives on the need for education and transparency to foster public trust.

Transparency and education were recognised as fundamental for building trust in AI adoption. Panellists agreed that ensuring citizens understand how AI technologies work and how their data is protected is essential for encouraging adoption. They called for governments to work closely with civil society and academia to create awareness and promote responsible AI use.

The discussion concluded with a call to strengthen collaborations between governments, private companies, and startups. Brandon Soloski highlighted how partnerships could drive responsible AI innovation, while Pellerin Matis stressed the importance of ethical and regulatory considerations to guide development. The session ended on an optimistic note, with panellists agreeing on AI’s immense potential to improve government efficiency and enhance public trust.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

IGF 2024 panellists highlight infrastructure, literacy, and fair digital access

The 2024 Internet Governance Forum (IGF) brought together global stakeholders to discuss the implementation of the Global Digital Compact (GDC), aiming to address digital inequalities and foster cross-sector partnerships. The session spotlighted key challenges such as funding gaps, cultural adaptation of digital initiatives, and sustainability concerns in infrastructure development.

Isabel De Sola from the Office of the Tech Envoy emphasised stakeholder collaboration and revealed plans for an upcoming GDC implementation roadmap. Roy Eriksson, Finland’s Ambassador for Global Gateway, shared successes from AI strategy projects in African nations, illustrating how capacity-building partnerships can close technology gaps. Kevin Hernandez of the Universal Postal Union presented the Connect.Post programme, which aims to connect global post offices to digital networks by 2030.

Discussions also underscored energy efficiency and sustainability in digital infrastructure. Nandipha Ntshalbu highlighted the need to balance technological growth with environmental considerations. Data governance and cybersecurity frameworks were identified as critical, with Shamsher Mavin Chowdhury stressing the importance of inclusive frameworks to protect the interests of developing countries.

Innovative projects demonstrated local impact, such as Damilare Oydele’s Library Tracker for African libraries and Patricia Ainembabazi’s efforts promoting regional knowledge-sharing platforms. However, Alisa Heaver of the Dutch Ministry of Economic Affairs raised concerns about aligning GDC objectives with existing frameworks to avoid redundancy.

The IGF session concluded with a unified call for continued collaboration. Despite challenges, there was optimism that effective partnerships and targeted initiatives can ensure secure, inclusive, and sustainable digital progress worldwide.

Balancing innovation and oversight: AI’s future requires shared governance

On day two of IGF 2024 in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas in the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panellists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Reasoning AI to be unpredictable, says OpenAI co-founder

At the NeurIPS conference in Vancouver, Ilya Sutskever, co-founder of OpenAI, predicted that artificial intelligence will become increasingly unpredictable as its reasoning abilities grow. Speaking to thousands of attendees, Sutskever explained that while advancements in AI have relied on scaling data and computing power, this approach is nearing its limits because the supply of training data, drawn largely from the internet, is finite.

To overcome these challenges, Sutskever suggested that AI could begin generating its own data or evaluating multiple responses to improve accuracy. He envisions a future where superintelligent machines, capable of reasoning like humans, become a reality. However, this reasoning power could lead to unexpected outcomes, as seen with AlphaGo’s famous ‘Move 37’ against Go champion Lee Sedol in 2016 or the unpredictable strategies of advanced chess engines.
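To make the ‘evaluating multiple responses’ idea concrete, the sketch below shows a generic best-of-N selection loop: sample several candidate answers and keep the one a scoring function prefers. This is an illustrative Python example only, not OpenAI’s method; generate_candidate and score_candidate are hypothetical stand-ins for a language model and a verifier.

```python
import random

def generate_candidate(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for sampling one response from a language model."""
    random.seed(seed)
    return f"candidate {random.randint(0, 9)} for: {prompt}"

def score_candidate(prompt: str, answer: str) -> float:
    """Hypothetical verifier that rates how well an answer addresses the prompt."""
    # A real system would use a learned reward or verifier model here.
    return random.random()

def best_of_n(prompt: str, n: int = 5) -> str:
    """Sample n candidate responses and return the one the scorer rates highest."""
    candidates = [generate_candidate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=lambda answer: score_candidate(prompt, answer))

if __name__ == "__main__":
    print(best_of_n("Summarise the main argument of the talk.", n=5))
```

In practice, the scorer would be a learned reward or verifier model, and its quality largely determines whether sampling more candidates actually improves accuracy.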

Sutskever emphasised that AI’s evolution will make it ‘radically different’ from what we know today, with deeper understanding and potential self-awareness. Yet, he warned that this reasoning could complicate predictability, as AI analyses millions of options to solve complex problems. This shift, he stated, marks the beginning of a new chapter in AI.

Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach

Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from government, industry, civil society, and youth organisations. Discussions explored the current state of AI development, highlighting challenges such as bias, security risks, and the environmental impact of AI technologies. The need for global frameworks to govern AI responsibly was a central theme, with participants emphasising collaboration across regions and sectors.

Speakers stressed the importance of balancing innovation with regulation to ensure ethical and inclusive AI development. The discussion highlighted inequalities between developed and developing regions, with particular attention to Africa’s challenges in infrastructure and skills. Thelma Quaye, representing Smart Africa, noted the continent’s lack of data centres and trained professionals, which hinders its participation in the global AI landscape.

Data privacy, ownership, and localisation emerged as critical governance issues. Open-source AI was presented as a potential solution to foster innovation and flexibility, particularly for emerging economies. Audrey Plonk of the OECD stressed the need for inclusive frameworks that address regional disparities while promoting cultural and linguistic diversity in AI technologies.

Youth perspectives featured prominently, with contributions from Leydon Shantseko of Zambia Youth IGN and Levi, a youth representative. They highlighted the role of young people in shaping AI’s future and called for governance structures that include younger voices. Panellists agreed on the necessity of involving diverse stakeholders in decision-making processes to ensure equitable AI policies.

Speakers also examined the role of tax incentives and enforcement mechanisms in supporting compliance with AI regulations. Melinda, a policy expert from Meta, underscored the importance of transparency and voluntary reporting frameworks to guide effective policy decisions. Andy Beaudoin of France echoed these sentiments, stressing the need for partnerships between public and private sectors.

The forum concluded with a call for harmonised efforts to create a unified, inclusive approach to AI governance. Yoichi Iida, who moderated the session, emphasised the urgency of addressing governance gaps while remaining optimistic about AI’s potential to drive global progress. Participants agreed that collaboration is key to ensuring AI benefits all regions equitably and responsibly.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Responsible AI development highlighted at IGF 2024

At the Internet Governance Forum (IGF) 2024 in Riyadh, Saudi Arabia, experts from across the globe gathered to tackle the complexities of transparency and explainability in AI. Moderated by Latifa Al Abdulkarim, the panel delved into these crucial concepts, highlighting their role in fostering trust and accountability in AI systems.

Doreen Bogdan-Martin of the International Telecommunication Union (ITU) distinguished transparency, which concerns how AI systems are designed and deployed, from explainability, which concerns justifying the decisions those systems make. Amal El Fallah Seghrouchni, executive president of the International Center of Artificial Intelligence of Morocco, added, ‘Transparency is about how a system meets expectations, while explainability is more technical—it justifies decisions made by the system.’

National and international initiatives showcased diverse approaches to ethical AI governance. President of the Saudi Data & AI Authority (SDAIA), Abdulah Bin Sharaf Alghamdi, outlined the nation’s progress in creating AI ethics frameworks and global partnerships. Gong Ke, from China’s Institute for Next-Generation AI, emphasised strategies to ensure responsible AI growth, while El Fallah Seghrouchni detailed Morocco’s efforts to address linguistic diversity challenges.

On the global stage, Doreen Bogdan-Martin emphasised the ITU’s collaboration on AI standards and sustainability initiatives, while UN representative Li Junhua spotlighted AI’s transformative potential for real-time policymaking, disaster response, and addressing inequality.

The discussion also tackled challenges in achieving transparency and explainability. Complexity in AI models, data privacy issues, and gaps in regulation were recurring themes. ‘Regulations need to adapt to the rapid evolution of AI,’ El Fallah Seghrouchni stressed.

Additionally, linguistic diversity and talent shortages in developing regions were identified as critical hurdles. Yet participants remained optimistic about AI’s potential to accelerate progress on the Sustainable Development Goals (SDGs), with Bogdan-Martin noting, ‘AI could boost progress on SDGs by 70%,’ citing examples like AI glasses empowering a young girl in India and innovations in West Africa doubling agricultural yields.

Concluding the session, panellists called for global collaboration, capacity-building, and the development of frugal, inclusive, and trustworthy AI systems. Bogdan-Martin emphasised the need for standardised frameworks to ensure ethical practices, while El Fallah Seghrouchni challenged the reliance on large datasets, advocating for quality over quantity.

Why does it matter?

The forum underscored the importance of ongoing dialogue and international cooperation in shaping a human-centric AI future that balances innovation with ethical accountability.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

AI technology aims to cut hospital visits for COPD patients

A pioneering NHS trust in Hull and East Yorkshire is harnessing AI to enhance its chronic respiratory illness care. The Lenus COPD support system, introduced in March 2023, has already reduced hospital visits by 40% and aims for even greater improvements with the integration of AI.

The app enables patients to monitor their symptoms through regular self-assessments and offers direct messaging to NHS clinics. AI will soon analyse the collected data to identify patterns and potential triggers, enabling earlier interventions to prevent hospitalisation.
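As a purely illustrative sketch (not a description of how the Lenus system is actually implemented), the Python snippet below shows the kind of trend rule that could flag a worsening pattern in patient-reported symptom scores for earlier clinical review; the seven-day window, threshold, and scoring scale are assumptions.

```python
from statistics import mean

def flag_deterioration(daily_scores: list[float], window: int = 7, threshold: float = 1.5) -> bool:
    """Flag a patient if the average symptom score over the most recent window
    is markedly worse than over the preceding window (higher score = worse symptoms)."""
    if len(daily_scores) < 2 * window:
        return False  # not enough history yet
    recent = mean(daily_scores[-window:])
    baseline = mean(daily_scores[-2 * window:-window])
    return recent - baseline >= threshold

# Example: a fortnight of self-assessment scores with a worsening final week
scores = [2, 2, 3, 2, 2, 3, 2, 3, 4, 4, 5, 5, 4, 5]
print(flag_deterioration(scores))  # True -> prompts the clinic to intervene early
```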

Professor Mike Crooks, who leads the service, emphasised the proactive nature of the system. The AI-driven insights allow clinics to deliver timely care, helping patients stabilise their health before conditions worsen.

Patients like Ruth, diagnosed with COPD at just 14, report transformative results. Frequent hospital visits have become a rarity, and the app has provided her with a reliable lifeline for clinical support.

Google unveils Gemini 2.0 and futuristic AI applications

Google has launched the second generation of its AI model, Gemini, along with innovative applications like real-time AI-powered eyeglasses and a universal assistant, Project Astra. CEO Sundar Pichai called it the dawn of a ‘new agentic era,’ where virtual assistants can autonomously perform complex tasks under user supervision.

Gemini 2.0 now powers features such as AI Overviews in Google Search and includes advancements in image and audio processing. Google also revealed tools like Project Mariner for automating web tasks and Jules, an AI tool for software coding.

The company’s focus on embedding AI in widely used products like Search, YouTube, and Android is seen as a strategy to outpace competitors like OpenAI. Its Project Astra prototype can process multilingual conversations and integrate data from Maps and Lens. Testing of AI-enabled eyeglasses marks Google’s return to wearable tech, challenging rivals like Meta in the augmented reality space.

Canada considers $15 billion incentive to boost AI data centres

Canada’s federal government is exploring a proposal to offer up to $15 billion in incentives to encourage domestic pension funds to invest in AI data centres powered by clean energy. The initiative, reportedly discussed in private consultations, is part of Ottawa’s broader economic strategy to meet rising global demand for artificial intelligence infrastructure.

The growing adoption of AI tools, such as ChatGPT, has accelerated the need for advanced data centres, creating unprecedented demand for energy. While clean energy solutions are preferred, slow deployment has led many countries to rely on fossil fuels like natural gas and coal to bridge the gap.

Globally, the rush to develop AI infrastructure has exposed critical challenges in planning and power availability. Canada’s proposed incentives aim to position the country as a leader in green-powered AI development while addressing both energy sustainability and economic opportunities.

Global stakeholders chart the course for digital governance at the IGF in Riyadh

Global digital governance was the main topic in a key discussion led by moderator Timea Suto, gathering experts to tackle challenges in AI, data management, and internet governance. At the Internet Governance Forum (IGF) in Riyadh, Saudi Arabia, speakers emphasised balancing innovation with regulatory consistency while highlighting the need for inclusive frameworks that address societal biases and underrepresented voices.

Thomas Schneider of Ofcom Switzerland underscored the Council of Europe’s AI convention as a promising standard for global interoperability. Meta’s Flavia Alves advocated for open-source AI to drive global collaboration and safer products. Meanwhile, Yoichi Iida from Japan’s Ministry of Communications outlined the G7 Hiroshima AI code as an international step forward, while concerns about dataset biases were raised from the audience.

Data governance discussions focused on privacy and trust in cross-border flows. Maarit Palovirta of Connect Europe called for harmonised regulations to protect privacy while fostering innovation. Yoichi Iida highlighted OECD initiatives on trusted data sharing, with Amr Hashem of the GSMA stressing the need to develop infrastructure alongside governance, particularly in underserved regions.

The future of internet governance also featured prominently, with Irina Soeffky from Germany’s Digital Ministry reinforcing the multi-stakeholder model amid calls to update WSIS structures. Audience member Bertrand de La Chapelle proposed reforming the Internet Governance Forum to reflect current challenges. Jacques Beglinger of EuroDIG stressed the importance of grassroots inclusion, while Desiree Milosevic-Evans highlighted gender representation gaps in governance.

Canada’s Larisa Galadza framed the coming year as critical for advancing the Global Digital Compact, with priorities on AI governance under Canada’s G7 presidency. Maria Fernanda Garza of the International Chamber of Commerce (ICC) called for alignment in governance while maintaining flexibility for local needs amid ongoing multilateral challenges.

Speakers concluded that collaboration, inclusivity, and clear mandates are key to shaping effective digital governance. As technological change accelerates, the dialogue reinforces the need for adaptable, action-oriented strategies to ensure equity and innovation globally.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.