Global South’s role in AI governance explored at IGF 2024

The inclusion of the Global South, particularly the MENA region, in AI governance emerged as a key focus during a panel discussion at the Internet Governance Forum (IGF) 2024. Experts examined persistent challenges, such as the technology gap, regulatory uncertainty, and limited local infrastructure, that hinder the region’s participation in the global AI ecosystem.

Nibal Idlebi from UN ESCWA emphasised that the lack of computational resources and access to local data significantly impedes AI development. Jill Nelson of the IEEE Standards Association stressed the need to improve AI literacy and foster talent pipelines, particularly in decision-making roles. Ethical considerations also featured prominently, with Jasmin Alduri highlighting concerns about the exploitation of click workers involved in AI data labelling.

Roeske Martin from Google MENA called for clearer regulations to boost private sector involvement, arguing that regulatory uncertainty holds back investment and innovation. He proposed accelerator programmes to support local AI startups, including those led by women. Panellists also urged better Arabic language integration in AI tools to increase accessibility and adoption across the MENA region.

Amid the challenges, opportunities for growth were identified, including leveraging synthetic data generation and creating public data-sharing initiatives. Collaboration between governments, industry, and civil society was deemed crucial to developing AI frameworks that address local needs while adhering to global standards.

The panel concluded with cautious optimism, underscoring the MENA region’s potential to become an AI innovation hub. With targeted investments in capacity building and infrastructure, the Global South can play a greater role in shaping the future of AI governance.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Reasoning AI to be unpredictable, says OpenAI co-founder

At the NeurIPS conference in Vancouver, Ilya Sutskever, co-founder of OpenAI, predicted that artificial intelligence will become increasingly unpredictable as its reasoning abilities grow. Speaking to thousands of attendees, Sutskever explained that while advances in AI have relied on scaling data and computing power, this approach is nearing its limits because the available training data, drawn largely from the internet, is finite.

To overcome these challenges, Sutskever suggested that AI could begin generating its own data or evaluating multiple responses to improve accuracy. He envisions a future where superintelligent machines, capable of reasoning like humans, become a reality. However, this reasoning power could lead to unexpected outcomes, as seen with AlphaGo’s famous move in its 2016 Go match against Lee Sedol, or the unpredictable strategies of advanced chess engines.

Sutskever emphasised that AI’s evolution will make it ‘radically different’ from what we know today, with deeper understanding and potential self-awareness. Yet, he warned that this reasoning could complicate predictability, as AI analyses millions of options to solve complex problems. This shift, he stated, marks the beginning of a new chapter in AI.

Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach

Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from government, industry, civil society, and youth organisations. Discussions explored the current state of AI development, highlighting challenges such as bias, security risks, and the environmental impact of AI technologies. The need for global frameworks to govern AI responsibly was a central theme, with participants emphasising collaboration across regions and sectors.

Speakers stressed the importance of balancing innovation with regulation to ensure ethical and inclusive AI development. The discussion highlighted inequalities between developed and developing regions, with particular attention to Africa’s challenges in infrastructure and skills. Thelma Quaye, representing Smart Africa, noted the continent’s lack of data centres and trained professionals, which hinders its participation in the global AI landscape.

Data privacy, ownership, and localisation emerged as critical governance issues. Open-source AI was presented as a potential solution to foster innovation and flexibility, particularly for emerging economies. Audrey Plonk of the OECD stressed the need for inclusive frameworks that address regional disparities while promoting cultural and linguistic diversity in AI technologies.

Youth perspectives featured prominently, with contributions from Leydon Shantseko of Zambia Youth IGN and Levi, a youth representative. They highlighted the role of young people in shaping AI’s future and called for governance structures that include younger voices. Panellists agreed on the necessity of involving diverse stakeholders in decision-making processes to ensure equitable AI policies.

Speakers also examined the role of tax incentives and enforcement mechanisms in supporting compliance with AI regulations. Melinda, a policy expert from Meta, underscored the importance of transparency and voluntary reporting frameworks to guide effective policy decisions. Andy Beaudoin of France echoed these sentiments, stressing the need for partnerships between public and private sectors.

The forum concluded with a call for harmonised efforts to create a unified, inclusive approach to AI governance. Yoichi Iida, who moderated the session, emphasised the urgency of addressing governance gaps while remaining optimistic about AI’s potential to drive global progress. Participants agreed that collaboration is key to ensuring AI benefits all regions equitably and responsibly.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Responsible AI development highlighted at IGF 2024

At the Internet Governance Forum (IGF) 2024 in Riyadh, Saudi Arabia, experts from across the globe gathered to tackle the complexities of transparency and explainability in AI. Moderated by Latifa Al Abdulkarim, the panel delved into these crucial concepts, highlighting their role in fostering trust and accountability in AI systems.

Doreen Bogdan-Martin of the International Telecommunication Union (ITU) distinguished between transparency, which concerns how AI systems are designed and deployed, and explainability, which pertains to justifying the decisions those systems make. Amal El Fallah Seghrouchni, executive president of the International Center of Artificial Intelligence of Morocco, added, ‘Transparency is about how a system meets expectations, while explainability is more technical—it justifies decisions made by the system.’

National and international initiatives showcased diverse approaches to ethical AI governance. President of the Saudi Data & AI Authority (SDAIA), Abdulah Bin Sharaf Alghamdi, outlined the nation’s progress in creating AI ethics frameworks and global partnerships. Gong Ke, from China’s Institute for Next-Generation AI, emphasised strategies to ensure responsible AI growth, while El Fallah Seghrouchni detailed Morocco’s efforts to address linguistic diversity challenges.

On the global stage, Doreen Bogdan-Martin emphasised ITU’s collaboration on AI standards and sustainability initiatives, while UN representative Li Junhua spotlighted AI’s transformative potential for real-time policymaking, disaster response, and addressing inequality.

The discussion also tackled challenges in achieving transparency and explainability. Complexity in AI models, data privacy issues, and gaps in regulation were recurring themes. ‘Regulations need to adapt to the rapid evolution of AI,’ El Fallah Seghrouchni stressed.

Additionally, linguistic diversity and talent shortages in developing regions were identified as critical hurdles. Yet participants remained optimistic about AI’s potential to accelerate the Sustainable Development Goals (SDGs), with Bogdan-Martin noting, ‘AI could boost progress on SDGs by 70%,’ and citing examples such as AI glasses empowering a young girl in India and innovations in West Africa doubling agricultural yields.

Concluding the session, panellists called for global collaboration, capacity building, and the development of frugal, inclusive, and trustworthy AI systems. Bogdan-Martin emphasised the need for standardised frameworks to ensure ethical practices, while El Fallah Seghrouchni challenged the reliance on large datasets, advocating for quality over quantity.

Why does it matter?

The forum underscored the importance of ongoing dialogue and international cooperation in shaping a human-centric AI future that balances innovation with ethical accountability.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Global connectivity takes centre stage at the IGF 2024 in Riyadh

The first day of the Internet Governance Forum (IGF) 2024 in Riyadh opened with one of its key sessions, titled ‘Global Access, Global Progress: Managing the Challenges of Global Digital Adoption’, which brought together prominent panellists from government, the private sector, and civil society to address one of the world’s most pressing issues: bridging the digital divide. Moderated by Timea Suto, Global Digital Policy Lead at the International Chamber of Commerce, the session explored the need for universal internet connectivity, its life-changing impact, and the challenges of ensuring meaningful participation in the digital age.

Gbenga Sesan, Executive Director at Paradigm Initiative, highlighted the transformative power of connectivity with inspiring stories. ‘Connectivity is not just a privilege; it can mean life or death,’ he emphasised, sharing the success of individuals in underserved communities who leveraged digital access to escape poverty and obtain vital healthcare. Thelma Quaye of Smart Africa echoed his sentiment, stressing that affordability remains a significant barrier, particularly in Africa, where only 40% of people are connected despite wide mobile coverage. ‘Governments must invest in infrastructure to reach the last mile,’ she urged, citing the need for public-private partnerships and relevant content that empowers users economically.

The discussion expanded to community-driven solutions, with Sally Wentworth, President of the Internet Society, showcasing the successes of locally managed networks. She highlighted a project in Tanzania that trained thousands in digital skills, demonstrating the potential of bottom-up connectivity.

Japan’s Vice Minister, Dr Takuo Imagawa, shared the country’s achievements in near-universal broadband coverage, pointing to the combination of government subsidies and pro-competition policies as a scalable model. Emerging technologies like AI were discussed as necessary tools to reduce the digital divide, but speakers cautioned that they must remain inclusive and address societal needs.

On the economic front, Shivnath Thukral, VP for Public Policy at Meta India, highlighted open-source AI technologies as solutions for education, agriculture, and linguistic inclusion. ‘AI can bridge both the connectivity and knowledge gaps, delivering localised, affordable solutions at scale,’ he said. Meanwhile, Tami Bhaumik of Roblox underscored the importance of digital literacy and safety, particularly for young users. ‘Technology is powerful, but education is key to ensuring people use it responsibly,’ she noted, advocating for collaboration between governments, tech companies, and educators.

Why does it matter?

The panellists made clear that global digital adoption requires cooperation across sectors, inclusive policymaking, and a focus on empowering local communities. As stakeholders debated solutions, one message stood out: connectivity alone is not enough. For the digital world to deliver real progress, investments in skills, affordability, and digital literacy must go hand in hand with technological innovation. That is why the IGF remains a vital platform to unite diverse perspectives and drive actionable solutions to bridge the digital divide.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

AI technology aims to cut hospital visits for COPD patients

A pioneering NHS trust in Hull and East Yorkshire is harnessing AI to enhance its chronic respiratory illness care. The Lenus COPD support system, introduced in March 2023, has already reduced hospital visits by 40% and aims for even greater improvements with the integration of AI.

The app enables patients to monitor their symptoms through regular self-assessments and offers direct messaging to NHS clinics. AI will soon analyse the collected data to identify patterns and potential triggers, enabling earlier interventions to prevent hospitalisation.

Professor Mike Crooks, who leads the service, emphasised the proactive nature of the system. The AI-driven insights allow clinics to deliver timely care, helping patients stabilise their health before conditions worsen.

Patients like Ruth, diagnosed with COPD at just 14, report transformative results. Frequent hospital visits have become a rarity, and the app has provided her with a reliable lifeline for clinical support.

xAI launches new Grok-2 chatbot on X

Elon Musk’s AI startup, xAI, revealed on Saturday that the latest version of its Grok-2 chatbot will be available for free to all users of the social media platform X. The new version of Grok-2 is part of xAI’s continued efforts to integrate AI technology into the platform, providing users with more advanced and efficient tools for interaction.

While the chatbot will be free for everyone, Premium and Premium+ users will benefit from higher usage limits and will be the first to experience new features as they are rolled out. This tiered approach ensures that paying users receive an enhanced experience, with priority access to future updates and capabilities.

xAI has been quietly testing the new Grok-2 model for several weeks, fine-tuning its performance and features in preparation for the public release. The improved version is expected to offer better capabilities and user interactions, marking a significant step forward in AI development for social media platforms.

Google unveils Gemini 2.0 and futuristic AI applications

Google has launched the second generation of its AI model, Gemini, along with innovative applications like real-time AI-powered eyeglasses and a universal assistant, Project Astra. CEO Sundar Pichai called it the dawn of a ‘new agentic era,’ where virtual assistants can autonomously perform complex tasks under user supervision.

Gemini 2.0 now powers features such as AI Overviews in Google Search and includes advancements in image and audio processing. Google also revealed tools like Project Mariner for automating web tasks and Jules, an AI tool for software coding.

The company’s focus on embedding AI in widely used products like Search, YouTube, and Android is seen as a strategy to outpace competitors like OpenAI. Its Project Astra prototype can process multilingual conversations and integrate data from Maps and Lens. Testing on AI-enabled eyeglasses marks Google’s return to wearable tech, challenging rivals like Meta in the augmented reality space.

Apple to replace Broadcom chips with in-house designs for Wi-Fi and Bluetooth

Apple plans to introduce its own chips for Bluetooth and Wi-Fi in devices starting in 2025, phasing out components currently supplied by Broadcom. The custom chip, code-named Proxima, has been in development for years and will debut in iPhones and smart home devices. Taiwan Semiconductor Manufacturing Company will handle production.

The shift aligns with Apple’s broader strategy to reduce reliance on third-party suppliers. Alongside Proxima, Apple is also developing cellular modem chips to replace Qualcomm components, with plans to integrate both systems in the future.

In parallel, Apple is working on a server chip, internally called Baltra, to support AI processing. This move highlights the company’s efforts to reduce dependence on Nvidia’s costly processors, which remain in high demand for AI applications.

Canada considers $15 billion incentive to boost AI data centres

Canada’s federal government is exploring a proposal to offer up to $15 billion in incentives to encourage domestic pension funds to invest in AI data centres powered by clean energy. The initiative, reportedly discussed in private consultations, is part of Ottawa’s broader economic strategy to meet rising global demand for artificial intelligence infrastructure.

The growing adoption of AI tools, such as ChatGPT, has accelerated the need for advanced data centres, creating unprecedented demand for energy. While clean energy solutions are preferred, slow deployment has led many countries to rely on fossil fuels like natural gas and coal to bridge the gap.

Globally, the rush to develop AI infrastructure has exposed critical challenges in planning and power availability. Canada’s proposed incentives aim to position the country as a leader in green-powered AI development while addressing both energy sustainability and economic opportunities.