Parliamentarians urged to bridge the global digital divide

At the ‘IGF Parliamentary Track – Session 1’ session in Riyadh, parliamentarians, diplomats, and digital experts gathered to address persistent gaps in global digital governance. The session spotlighted two critical UN-led initiatives: the World Summit on the Information Society (WSIS) and the Global Digital Compact (GDC), underscoring their complementary roles in bridging the digital divide and addressing emerging digital challenges like AI and data governance.

Ambassador Muhammadou M.O. Kah, Chair of the Commission on Science and Technology for Development, stressed the urgency of digital inclusion. ‘Digital technologies are transforming our world at a remarkable pace, but we must confront the persistent divide,’ he said, recalling that twenty years after WSIS first set out a vision for an inclusive digital society, one-third of the world’s population remains unconnected, with inequalities deepening between urban and rural areas, genders, and socioeconomic groups.

The Global Digital Compact, introduced as a ‘refresh’ of WSIS priorities, emerged as a key focus of the discussion. From the UN Tech Envoy’s Office, Isabel de Sola presented the GDC’s five pillars: affordable internet access, tackling misinformation, data governance, fostering inclusive digital economies, and ensuring safe AI implementation. De Sola emphasised, ‘We need a holistic approach. Data governance, AI, and connectivity are deeply interconnected and must work in tandem to serve society fairly.’

Sorina Teleanu, the session’s moderator and Head of Knowledge at Diplo, highlighted the need for urgent action: ‘We have the Global Digital Compact, but what’s next? It’s about implementation—how we take global commitments and turn them into real, practical solutions at national and local levels.’ She urged parliamentarians to exercise their oversight role and push for meaningful progress.

The session exposed a growing disconnect between governments and parliaments on digital policy. Several parliamentarians voiced concerns about exclusion from international processes that shape national legislation and budgets. ‘We cannot act effectively if we are not included or informed,’ a delegate from South Africa noted, calling for better integration of lawmakers into global frameworks like the GDC and WSIS.

To close these gaps, speakers proposed practical solutions, including capacity-building programmes, toolkits for mapping GDC priorities locally, and stronger regional parliamentary networks. ‘Parliamentarians are closest to the people,’ Ambassador Kah reminded attendees. ‘They play a crucial role in translating global commitments into meaningful local action.’

The discussion ended with a renewed call for collaboration: greater inclusion of lawmakers, better alignment of international frameworks with local needs, and stronger efforts to bridge the digital divide. As the world approaches WSIS’ 20-year review in 2025, the path forward requires a unified, inclusive effort to ensure digital advancements reach all corners of society.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Reimagining WSIS: a digital future for all

As the World Summit on the Information Society (WSIS) nears its 20-year milestone, global leaders gathered at IGF 2024 in Riyadh to reflect on achievements and lay the ground for a unified, inclusive digital future. Amid optimism over growing connectivity—from 1 billion internet users in 2005 to 5.5 billion today—discussions underlined the pressing need to address unresolved challenges, from digital divides to the ethical governance of emerging technologies like AI.

A key takeaway was WSIS’s enduring success in fostering multistakeholder collaboration. Stefan Schnorr, Germany’s State Secretary, lauded the summit for championing inclusivity: ‘WSIS pioneered a framework where governments, private sectors, and civil society shaped digital cooperation together.’ The Internet Governance Forum (IGF), a cornerstone WSIS achievement, was celebrated for evolving into a platform addressing critical issues, including AI, misinformation, and connectivity gaps.

However, speakers stressed that progress remains uneven. Nthati Moorosi, Lesotho’s Minister of Information, told a sobering story of disparity: ‘We still have students sitting under trees to learn. Connecting schools is still a long journey for us.’ Connectivity gaps are compounded by affordability barriers, skills deficits, and weak energy infrastructure, particularly in the Global South. Addressing such divides is therefore central to aligning WSIS priorities with the recently adopted Global Digital Compact (GDC).

Environmental sustainability emerged as another key theme, with concerns raised about the digital sector’s carbon footprint and e-waste. Robert Opp from UNDP emphasised that balancing digital innovation with ecological responsibility must guide the next phase: ‘Data centres alone emit as much carbon as entire nations.’

The discussions also spotlighted inclusivity, particularly in AI and data governance. Many speakers, including Angel González Sanz (UNCTAD), warned of a widening imbalance: ‘118 countries are excluded from AI governance discussions, risking further global inequality.’ A call for greater representation of the Global South resonated across the panel, underscoring the need for equitable participation in shaping global digital frameworks.

Looking ahead, WSIS+20 will focus on adapting governance frameworks to address emerging technologies while keeping human rights, trust, and digital inclusion at the forefront. Doreen Bogdan-Martin, ITU Secretary-General, offered a rallying appeal: ‘The digital future isn’t written yet, but we can write it together with inclusivity, security, and sustainability at its core.’

As the WSIS process evolves, stakeholders emphasise learning from the past while crafting forward-looking strategies, with the IGF remaining a central platform for dialogue and a catalyst for solutions to ensure no one is left behind in the digital era.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Cambodian ministry and APLE team up to enhance online safety education through a new initiative

The Cambodian Ministry of Education, Youth and Sport (MoEYS) and Action Pour Les Enfants (APLE) have signed a three-year Memorandum of Understanding (MoU) to implement the ‘Promoting Internet Safety in Education’ project. That initiative promotes child online safety by integrating lessons on recognising and reporting online threats, such as grooming and coercion, into school curricula.

The project also aims to strengthen the capacities of educational institutions, including ministry departments and schools, while providing tailored resources and training for teachers to deliver online safety content. It involves collaboration with key stakeholders, including school administrations, teachers, education officials, parents, community members, and children, to foster a safer digital environment.

However, challenges such as limited resources and low awareness among parents and children pose significant barriers to implementation. The initial phase focuses on seven provinces, with plans for further expansion based on the project’s success.

Why does it matter?

APLE’s strong commitment to combating online sexual abuse and human trafficking reflects the urgency of addressing these critical issues in today’s digital society. The initiative aligns with national education strategies and ensures sustainability by equipping educators and students with the tools to navigate the internet safely.

Additionally, the project includes a comprehensive evaluation after three years to assess its impact and inform potential expansion to other provinces. That effort underscores the importance of empowering communities to prevent and report online exploitation effectively, creating a lasting effect on child safety.

Global connectivity takes centre stage at the IGF 2024 in Riyadh

The first day of the Internet Governance Forum (IGF) 2024 in Riyadh opened with one of the key sessions, titled ‘Global Access, Global Progress: Managing the Challenges of Global Digital Adoption’, bringing together prominent panellists from government, the private sector, and civil society to address one of the world’s most pressing issues—bridging the digital divide. Moderated by Timea Suto, Global Digital Policy Lead at the International Chamber of Commerce, the session explored the need for universal internet connectivity, its life-changing impact, and the challenges of ensuring meaningful participation in the digital age.

Gbenga Sesan, Executive Director at Paradigm Initiative, highlighted the transformative power of connectivity with inspiring stories. ‘Connectivity is not just a privilege; it can mean life or death,’ he emphasised, sharing the success of individuals in underserved communities who leveraged digital access to escape poverty and access vital healthcare. Thelma Quaye of Smart Africa echoed his sentiment, stressing that affordability remains a significant barrier, particularly in Africa, where only 40% of the population is connected despite wide mobile coverage. ‘Governments must invest in infrastructure to reach the last mile,’ she urged, citing the need for public-private partnerships and relevant content that empowers users economically.

The discussion expanded to community-driven solutions, with Sally Wentworth, President of the Internet Society, showcasing the successes of locally managed networks. She highlighted a project in Tanzania that trained thousands in digital skills, demonstrating the potential of bottom-up connectivity.

Japan’s Vice Minister, Dr Takuo Imagawa, shared Japan’s achievements in near-universal broadband coverage, pointing to the combination of government subsidies and competitive policies as a scalable model. Emerging technologies like AI were discussed as necessary tools to reduce the digital divide, but speakers cautioned that they must remain inclusive and address societal needs.

On the economic front, Shivnath Thukral, VP for Public Policy at Meta India, highlighted open-source AI technologies as solutions for education, agriculture, and linguistic inclusion. ‘AI can bridge both the connectivity and knowledge gaps, delivering localised, affordable solutions at scale,’ he said. Meanwhile, Tami Bhaumik of Roblox underscored the importance of digital literacy and safety, particularly for young users. ‘Technology is powerful, but education is key to ensuring people use it responsibly,’ she noted, advocating for collaboration between governments, tech companies, and educators.

Why does it matter?

The panellists made clear that global digital adoption requires cooperation across sectors, inclusive policymaking, and a focus on empowering local communities. As stakeholders debated solutions, one message emerged clearly: connectivity alone is not enough. For the digital world to deliver real progress, investments in skills, affordability, and digital literacy must go hand in hand with technological innovation. That’s why the IGF remains a vital platform to unite diverse perspectives and drive actionable solutions to bridge the digital divide.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Texas launches investigation into tech platforms over child safety

Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.

The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.

Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.

Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.

Australian Federal Police leverage AI for investigations

The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With investigations involving an average of 40 terabytes of data, AI has become essential in sifting through information from sources like seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, AFP’s manager for technology strategy, emphasised the need for AI given the overwhelming scale of data, stating that AI is crucial to help manage cases, including reviewing massive amounts of video footage and emails.

The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity from old mobile phones. One such dataset is a staggering 10 petabytes, while individual phones can hold up to 1 terabyte of data. Lamont pointed out that AI plays a crucial role in making these files easier for officers to process, which would otherwise be an impossible task for human investigators alone. The AFP is also developing AI systems to detect deepfake images and protect officers from graphic content by summarising or modifying such material before it’s viewed.

While the AFP has faced criticism over its use of AI, particularly for using Clearview AI for facial recognition, Lamont acknowledged the need for continuous ethical oversight. The AFP has implemented a responsible technology committee to ensure AI use remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.

Google and Meta under European scrutiny over teen ad partnership

European regulators are investigating a previously undisclosed advertising partnership between Google and Meta that targeted teenagers on YouTube and Instagram, the Financial Times reports. The now-cancelled initiative, aimed at promoting Instagram to users aged 13 to 17, allegedly bypassed Google’s policies restricting ad personalisation for minors.

The partnership, initially launched in the US with plans for global expansion, has drawn the attention of the European Commission, which has requested extensive internal records from Google, including emails and presentations, to evaluate potential violations. Google, defending its practices, stated that its safeguards for minors remain industry-leading and emphasised recent internal training to reinforce policy compliance.

This inquiry comes amid heightened concerns about the impact of social media on young users. Earlier this year, Meta introduced enhanced privacy features for teenagers on Instagram, reflecting the growing demand for stricter online protections for minors. Neither Meta nor the European Commission has commented on the investigation so far.

OpenAI expands AI tools with text-to-video feature

OpenAI has rolled out its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos up to 20 seconds long in various resolutions and aspect ratios.

The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.

The company emphasised safeguards against misuse, such as blocking harmful content like child exploitation and deepfake abuse. It also plans to gradually expand features, including uploads featuring people, as it enhances protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.

FTC targets data brokers over privacy concerns

Data brokers Mobilewalla and Gravy Analytics have agreed to stop using sensitive location data following a settlement with the US Federal Trade Commission (FTC). The agreement addresses concerns about tracking individuals’ religious beliefs, political leanings, and pregnancy status through mobile device data.

The settlement represents the first ban on collecting location data obtained through online advertising auctions. The FTC accused the companies of unfair practices, stating that Mobilewalla gathered information from ad auction platforms without consumers’ consent. Such platforms allow advertisers to bid on specific audiences but have inadvertently exposed consumers to privacy risks.

Gravy Analytics, owned by Unacast, sold location data to government contractors, prompting constitutional concerns from FTC commissioners. Mobilewalla disputed the allegations but stated the agreement allows it to continue offering insights while respecting privacy. Both companies committed to halting sensitive data usage and introducing opt-out options for consumers.

FTC Chair Lina Khan highlighted the broader risks of targeted advertising, warning that Americans’ sensitive data is at risk of misuse. The settlement is part of the Biden administration’s effort to regulate data brokers and strengthen privacy protections, as outlined by proposed rules from the US Consumer Financial Protection Bureau.

Australia pushes for new rules on AI in search engines

Australia’s competition watchdog has called for a review of efforts to ensure more choice for internet users, citing Google’s dominance in the search engine market and the failure of its competitors to capitalise on the rise of AI. A report by the Australian Competition and Consumer Commission (ACCC) highlighted concerns about the growing influence of Big Tech, particularly Google and Microsoft, as they integrate generative AI into their search services. This raises questions about the accuracy and reliability of AI-generated search results.

While the use of AI in search engines is still in its early stages, the ACCC warns that large tech companies’ financial strength and market presence give them a significant advantage. The commission expressed concern that AI-driven search could fuel misinformation, as consumers may find AI-generated responses more useful even when they are less accurate. In response, Australia is pushing for new regulations, including laws to prevent anti-competitive behaviour and improve consumer choice.

The Australian government has already introduced several measures targeting tech giants, such as requiring social media platforms to pay for news content and restricting access for children under 16. A proposed new law could impose hefty fines on companies that suppress competition. The ACCC has called for service-specific codes to address data advantages and ensure consumers have more freedom to switch between services. The inquiry is expected to close by March next year.