WSIS+20: Inclusive ICT policies urged to close global digital divide

At the WSIS+20 High-Level Event in Geneva, Dr Hakikur Rahman and Dr Ranojit Kumar Dutta presented a sobering picture of global digital inequality, revealing that more than 2.6 billion people remain offline. Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure, and a lack of digital literacy continue to block access, especially for marginalised communities.

The speakers proposed a structured three-pillar framework of inclusion, ethics, and sustainability to ensure that no one is left behind in the digital age.

The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programmes, citing India’s Digital India and Kenya’s Community Networks as examples of success. On ethics, they called for policies grounded in human rights, data privacy, and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.

The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency programme.

Dr Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.

Encouraging trends, such as rising female enrolment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low awareness of data protection and a dire e-waste recycling rate of only 3%.

The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Building digital resilience in an age of crisis

At the WSIS+20 High-Level Event in Geneva, the session ‘Information Society in Times of Risk’ spotlighted how societies can harness digital tools to weather crises more effectively. Experts and researchers from across the globe shared innovations and case studies that emphasised collaboration, inclusiveness, and preparedness.

Chairs Horst Kremers and Professor Ke Gong opened the discussion by reinforcing the UN’s all-of-society principle, which advocates cooperation among governments, civil society, tech companies, and academia in facing disaster risks.

The Singapore team unveiled their pioneering DRIVE framework—Digital Resilience Indicators for Veritable Empowerment—redefining resilience not as a personal skill set but as a dynamic process shaped by individuals’ environments, from family to national policies. They argued that digital resilience must include social dimensions such as citizenship, support networks, and systemic access, making it a collective responsibility in the digital era.

Turkish researchers analysed over 54,000 social media images shared after the 2023 earthquakes, showing how visual content can fuel digital solidarity and real-time coordination. However, they also revealed how the breakdown of communication infrastructure in the immediate aftermath severely hampered response efforts, underscoring the urgent need for robust and redundant networks.

Meanwhile, Chinese tech giant Tencent demonstrated how integrated platforms—such as WeChat and AI-powered tools—transform disaster response, enabling donations, rescues, and community support on a massive scale. Yet, presenters cautioned that while AI holds promise, its current role in real-time crisis management remains limited.

The session closed with calls for pro-social platform designs to combat polarisation and disinformation, and a shared commitment to building inclusive, digitally resilient societies that leave no one behind.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Parliamentarians step up as key players in shaping the digital future

At the 2025 WSIS+20 High-Level Event in Geneva, lawmakers from Egypt, Uruguay, Tanzania, and Thailand united to call for a transformative shift in how parliaments approach digital governance. Hosted by ITU and the IPU, the session emphasised that legislators are no longer passive observers but essential drivers of digital policy.

While digital innovation presents opportunities for growth and inclusion, it also brings serious challenges, chief among them the digital divide, online harms, and the risks posed by AI.

Speakers underscored a shared urgency to ensure digital policies are people-centred and grounded in human rights. Egypt’s Amira Saber spotlighted her country’s leap toward AI regulation and its rapid expansion of connectivity, but also expressed concerns over online censorship and inequality.

Uruguay’s Rodrigo Goñi warned that traditional, reactive policymaking won’t suffice in the fast-paced digital age, proposing a new paradigm of ‘political intelligence.’ Thailand’s Senator Nophadol In-na praised national digital progress but warned of growing gaps between urban and rural communities. Meanwhile, Tanzania’s Neema Lugangira pushed for more capacity-building, especially for female lawmakers, and direct dialogue between legislators and big tech companies.

Across the board, there was strong consensus – parliamentarians must be empowered with digital literacy and AI tools to legislate effectively. Both ITU and IPU committed to ramping up support through training, partnerships, and initiatives like the AI Skills Coalition. They also pledged to help parliaments engage directly with tech leaders and tackle issues such as online abuse, misinformation, and accessibility, particularly in the Global South.

The discussion ended with cautious optimism. While challenges are formidable, the collaborative spirit and concrete proposals laid out in Geneva point toward a digital future where democratic values and inclusivity remain central. As the December WSIS+20 review approaches, these commitments could start a new era in global digital governance, led not by technocrats alone but by informed, engaged, and forward-thinking parliamentarians.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Report shows China outpacing the US and EU in AI research

AI is increasingly viewed as a strategic asset rather than merely a technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a major global research database, China now accounts for over 40% of worldwide citation attention in AI-related studies. Beyond academic output, the report also points to China’s dominance in AI-related patents.

In some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

Within that overall growth, China’s influence has expanded steadily since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI, alongside energy or military power, as a matter of national security. AI is no longer treated as a neutral technology; there is growing awareness that a lack of AI capability could carry serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU finalises AI code as 2025 compliance deadline approaches

The European Commission has released its finalised Code of Practice for general-purpose AI models, laying the groundwork for implementing the landmark AI Act. The new Code sets out transparency, copyright, and safety rules that developers must follow ahead of the Act’s compliance deadlines.

Approved in March 2024 and effective from August 2024, the AI Act introduces the EU’s first binding rules for AI. It bans applications deemed to pose unacceptable risk, such as real-time biometric surveillance, predictive policing, and emotion recognition in schools or workplaces.

Stricter obligations will apply to general-purpose models from August 2025, including mandatory documentation of training data, provided this does not violate intellectual property or trade secrets.

The Code of Practice, developed by experts with input from over 1,000 stakeholders, aims to guide AI providers through the AI Act’s requirements. It mandates model documentation, lawful content sourcing, risk management protocols, and a point of contact for copyright complaints.
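To make the documentation duty more concrete, the sketch below shows how a provider might structure a per-model compliance record covering the items listed above. It is a minimal, hypothetical illustration based only on the obligations named in this article; the field names and values are assumptions, not the Code of Practice’s actual template.

```python
# Hypothetical sketch of a per-model compliance record, based only on the
# obligations named in the article (not the Code of Practice's real template).
from dataclasses import dataclass, field


@dataclass
class GPAIModelRecord:
    model_name: str
    training_data_summary: str          # documentation of training data sources
    lawful_content_sourcing: bool       # confirmation content was lawfully sourced
    risk_management_protocol: str       # reference to the provider's risk procedures
    copyright_contact: str              # point of contact for copyright complaints
    known_limitations: list[str] = field(default_factory=list)


record = GPAIModelRecord(
    model_name="example-gpai-1",
    training_data_summary="Public web text and licensed corpora (summary only).",
    lawful_content_sourcing=True,
    risk_management_protocol="internal-risk-policy-v2",
    copyright_contact="copyright@example.com",
    known_limitations=["Limited coverage of low-resource languages"],
)
print(record)
```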

However, industry voices, including the CCIA, have criticised the Code, saying it disproportionately burdens AI developers.

Member States and the European Commission will assess the effectiveness of the Code in the coming months. From August 2026, enforcement will begin for existing models, while new ones will be subject to the rules a year earlier.

The Commission says these steps are vital to ensure GPAI models are safe, transparent, and rights-respecting across the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot relies on Musk’s views instead of staying neutral

Grok, the AI chatbot owned by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.

Rather than relying solely on a balanced range of sources, Grok has been seen citing Musk’s opinions when responding to topics like Israel and Palestine, abortion, and US immigration.

Evidence gathered from a screen recording by data scientist Jeremy Howard shows Grok actively ‘considering Elon Musk’s views’ in its reasoning process. Of the 64 citations Grok provided about Israel and Palestine, 54 (roughly 84%) were linked to Musk.

Others confirmed similar results when asking about abortion and immigration laws, suggesting a pattern.

While the behaviour might seem deliberate, some experts believe it emerges naturally rather than through intentional programming. Programmer Simon Willison noted that Grok’s system prompt tells it to avoid media bias and to search for opinions from all sides.

Yet Grok may still prioritise Musk’s stance because it ‘knows’ who its owner is, especially when addressing controversial matters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

Such material is no longer crude or glitch-filled; it now appears so lifelike that, under UK law, it must be treated like authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and Salesforce use AI to cut costs and reshape workforce

Microsoft is reporting substantial productivity improvements across its operations, thanks to the growing integration of AI tools in daily workflows.

Judson Althoff, the company’s chief commercial officer, stated during a recent presentation that AI contributed to savings of over $500 million in Microsoft’s call centres last year alone.

The technology has reportedly improved employee and customer satisfaction while supporting operations in sales, customer service, and software engineering. Microsoft is also now using AI to handle interactions with smaller clients, streamlining engagement without significantly expanding headcount.

The developments follow Microsoft’s decision to lay off over 9,000 employees last week, marking the third round of cuts in 2025 and bringing the year’s total to around 15,000.

Although it remains unclear whether automation directly drove the job cuts, CEO Satya Nadella has previously stated that AI now generates 20 to 30 percent of the code in Microsoft repositories.

Similar shifts are occurring at Salesforce, where CEO Marc Benioff has openly acknowledged AI’s growing role in company operations and resource planning.

During a recent analyst call, Robin Washington, Salesforce’s CFO and COO, confirmed that hiring has slowed and that 500 customer service roles have been reassigned internally.

The adjustment is expected to result in cost savings of $50 million, as the company focuses on optimising operations through digital transformation. Benioff also disclosed that AI performs between 30 and 50 percent of work previously handled by staff, contributing to workforce realignment.

Companies across the tech sector are rapidly adopting AI to improve efficiency, even as the broader implications for employment and labour markets continue to emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI glasses deliver real-time theatre subtitles

An innovative trial at Amsterdam’s Holland Festival saw Dutch theatre company Het Nationale Theater, in partnership with XRAI and Audinate, unveil smart glasses that project real-time subtitles in 223 languages via a Dante audio network and AI software.

Attendees of The Seasons experienced dynamic transcription and translation streamed directly to XREAL AR glasses. Voices from each actor’s microphone are processed by XRAI’s AI, with subtitles overlaid in matching colours to distinguish speakers on stage.
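Conceptually, the pipeline described here (per-actor microphone feed, speech recognition, translation, colour-coded overlay) can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the transcribe and translate stubs stand in for the proprietary XRAI and Dante components, and the speaker-to-colour mapping is an assumption drawn from the description above, not the production system.

```python
# Hypothetical sketch of a speaker-coloured live-subtitle pipeline.
# The transcribe/translate stubs are placeholders, not real XRAI or Dante APIs.
from dataclasses import dataclass

# Assumed mapping of speakers to caption colours (illustrative only).
PALETTE = {"actor_1": "cyan", "actor_2": "yellow", "actor_3": "magenta"}


@dataclass
class Caption:
    speaker: str
    text: str
    colour: str


def transcribe(audio_chunk: bytes) -> str:
    # Placeholder: a real system would run speech-to-text on the mic feed.
    return audio_chunk.decode("utf-8", errors="ignore")


def translate(text: str, target_lang: str) -> str:
    # Placeholder: a real system would call a machine-translation model here.
    return f"[{target_lang}] {text}"


def caption_stream(feeds, target_lang="en"):
    """Yield colour-coded captions from (speaker_id, audio_chunk) pairs."""
    for speaker, chunk in feeds:
        text = translate(transcribe(chunk), target_lang)
        yield Caption(speaker, text, PALETTE.get(speaker, "white"))


if __name__ == "__main__":
    demo_feeds = [("actor_1", b"De winter is gekomen"),
                  ("actor_2", b"En de lente volgt")]
    for c in caption_stream(demo_feeds):
        print(f"{c.speaker} ({c.colour}): {c.text}")
```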

Aiming to enhance the theatre’s accessibility, the system supports non-Dutch speakers and those with hearing loss. Testing continues this summer, with full implementation expected from autumn.

LiveText, as the system is known, replaces dated back-of-house captioning: subtitles now appear in real time at actor-appropriate visual depth, with complex languages and writing systems handled automatically.

Proponents believe the glasses mark a breakthrough for inclusion, with potential uses at international conferences, music festivals and other live events worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber defence effort returns to US ports post-pandemic

The US Cybersecurity and Infrastructure Security Agency (CISA) has resumed its seaport cybersecurity exercise programme. Initially paused due to the pandemic and other delays, the initiative is now returning to ports such as Savannah, Charleston, Wilmington and potentially Tampa.

These proof-of-concept tabletop exercises are intended to help ports prepare for cyber threats by developing a flexible, replicable framework. Each port functions uniquely, yet common infrastructure and shared vulnerabilities make standardised preparation critical for effective crisis response.

CISA warns that threats targeting ports have grown more severe, with nation states exploiting AI-powered techniques. Some US ports, including Houston, have already fended off cyberattacks, and Chinese-made systems dominate critical logistics, raising national security concerns.

Private ownership of most port infrastructure demands strong public-private partnerships to maintain cybersecurity. CISA aims to offer a shared model that ports across the country can adapt to improve cooperation, resilience, and threat awareness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!