WGIG reunion sparks calls for reform at IGF 2025 in Norway

At the Internet Governance Forum (IGF) 2025 in Lillestrøm, Norway, a reunion of the original Working Group on Internet Governance (WGIG) marked a significant moment of reflection and reckoning for global digital governance. Commemorating the 20th anniversary of WGIG’s formation, the session brought together pioneers of the multistakeholder model that reshaped internet policy discussions during the World Summit on the Information Society (WSIS).

Moderated by Markus Kummer and organised by William J. Drake, the panel featured original WGIG members, including Ayesha Hassan, Raul Echeberria, Wolfgang Kleinwächter, Avri Doria, Juan Fernandez, and Jovan Kurbalija, with remote contributions from Alejandro Pisanty, Carlos Afonso, Vittorio Bertola, Baher Esmat, and others. While celebrating their achievements, speakers did not shy away from blunt assessments of the IGF’s present state and future direction.

Speakers universally praised WGIG’s groundbreaking work in legitimising multistakeholderism within the UN system. The group’s broad, inclusive definition of internet governance—encompassing technical infrastructure and social and economic policies—was credited with transforming how global internet issues are addressed.

Participants emphasised the group’s unique working methodology, which prioritised transparency, pluralism, and consensus-building without erasing legitimate disagreements. Many argued that these practices remain instructive amid today’s fragmented digital governance landscape.

However, as the conversation shifted from legacy to present-day performance, participants voiced deep concerns about the IGF’s limitations. Despite successes in capacity-building and agenda-setting, the forum was criticised for its failure to tackle controversial issues like surveillance, monopolies, and platform accountability.

Jovan Kurbalija, Executive Director of Diplo

Speakers such as Vittorio Bertola and Avri Doria lamented its increasingly top-down character. At the same time, Nandini Chami and Anriette Esterhuysen raised questions about the IGF’s relevance and inclusiveness in the face of growing power imbalances. Some, including Bertrand de la Chapelle and Jovan Kurbalija, proposed bold reforms, including establishing a new working group to address the interlinked challenges of AI, data governance, and digital justice.

The session closed on a forward-looking note, urging the IGF community to recapture WGIG’s original spirit of collaborative innovation. As emerging technologies raise the stakes for global cooperation, participants agreed that internet governance must evolve—not only to reflect new realities but to stay true to the inclusive, democratic ideals that defined its founding two decades ago.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Tailored AI agents improve work output—at a social cost

AI agents can significantly improve workplace productivity when tailored to individual personality types, according to new research from the Massachusetts Institute of Technology (MIT). However, the study also found that increased efficiency may come at the expense of human social interaction.

Led by Professor Sinan Aral and postdoctoral associate Harang Ju from MIT Sloan School of Management, the research revealed that human workers collaborating with AI agents completed tasks 60% more efficiently—a gain accompanied, however, by a 23% reduction in social messages between team members.

The findings come amid a surge in the adoption of AI agents. A recent PwC survey found that 79% of senior executives had implemented AI agents in their organisations, with 66% reporting productivity gains. Agents are used in roles ranging from customer support to executive assistance and data analysis.

Aral and Ju developed a platform called Pairit (formerly MindMeld) to examine how AI affects team dynamics. In one of their experiments, over 2,000 participants were randomly assigned to human-only teams or teams mixed with AI agents. The groups were tasked with creating advertisements for a think tank.

Teams that included AI agents produced more content and higher-quality ad copy, but their human members communicated less, especially regarding emotional and rapport-building messages.

The study also highlighted the importance of matching AI traits to human personalities. For example, conscientious humans worked more effectively with AI agents high in openness, whereas extroverted humans underperformed when paired with highly conscientious AI counterparts.

‘AI traits can complement human personalities to enhance collaboration,’ the researchers noted. However, they stressed that the same AI assistant may not suit everyone.

The insight underpins the launch of their new venture, Pairium AI, which aims to develop agentic AI that adapts to individual work styles. The company promotes its mission as ‘personalising the Agentic Age.’

Ju emphasised the importance of compatibility: ‘You don’t work the same way with all colleagues—AI should adapt in the same way.’

Devanshu Mehrotra, an analyst at Gartner, described the research as groundbreaking. ‘This opens the door to a much deeper conversation about the hyper-customisation of AI in the workplace.’

Looking ahead, Aral and Ju plan to explore how personalised AI can assist in negotiations, customer support, creative writing and coding tasks. Their findings suggest fitting AI to the user may become as critical as managing human team dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Heat action plans in India struggle to match rising urban temperatures

On 11 June, the India Meteorological Department (IMD) issued a red alert for Delhi as temperatures exceeded 45°C, with real-feel levels reaching 54°C.

Despite warnings, many outdoor workers in the informal sector continued working, highlighting challenges in protecting vulnerable populations during heatwaves.

The primary tool in India for managing extreme heat, the Heat Action Plan (HAP), is developed annually by city and state governments. While some regions, such as Ahmedabad and Tamil Nadu, have reported improved outcomes, most HAPs face challenges with implementation, funding, coordination, and data availability.

A 2023 study found that 95% of HAPs lacked detailed mapping of high-risk areas and vulnerable groups. Experts and non-governmental organisations recommend incorporating Geographic Information Systems (GIS) and remote sensing to improve targeting.

A study by the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru found up to 9°C variation in land-surface temperatures within a two-square-kilometre ward, driven by differences in building types and green cover.
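The within-ward analysis the ATREE study describes can be sketched in a few lines. The snippet below is purely illustrative (synthetic readings and hypothetical ward IDs, not ATREE’s actual data or pipeline): given per-cell land-surface-temperature readings tagged with a ward, it computes the temperature spread inside each ward to flag intra-ward hotspots.

```python
# Illustrative sketch (not ATREE's actual pipeline): given per-cell
# land-surface-temperature (LST) readings tagged with a ward ID,
# compute the temperature spread within each ward to surface the kind
# of intra-ward variation the Bengaluru study describes.

def ward_temperature_spread(cells):
    """cells: list of (ward_id, temp_celsius).
    Returns {ward_id: (min, max, spread)}."""
    by_ward = {}
    for ward_id, temp in cells:
        by_ward.setdefault(ward_id, []).append(temp)
    return {
        ward_id: (min(temps), max(temps), max(temps) - min(temps))
        for ward_id, temps in by_ward.items()
    }

# Synthetic readings: a dense built-up patch and a greener patch
# inside the same ward can differ by several degrees.
readings = [
    ("ward_12", 41.8),  # asphalt, no shade
    ("ward_12", 39.5),  # mixed built-up
    ("ward_12", 33.1),  # park edge
    ("ward_07", 35.2),
    ("ward_07", 34.0),
]

for ward, (t_min, t_max, spread) in ward_temperature_spread(readings).items():
    print(f"{ward}: {t_min:.1f}-{t_max:.1f} °C (spread {spread:.1f} °C)")
```

A real pipeline would derive the per-cell temperatures from satellite LST rasters and ward boundary polygons, but the aggregation step is essentially this.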

Delhi’s 2025 HAP introduced ward-level land surface temperature maps to identify high-risk areas. However, experts note that many datasets are adapted from agricultural monitoring tools and may not offer the spatial resolution needed for urban planning.

To address this, organisations such as SEEDS and Chintan are using AI models like Sunny Lives to assess indoor heat exposure in low-income settlements. The models estimate indoor temperatures and wet-bulb heat stress using data on roof materials and construction types, offering building-level insights.
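The wet-bulb quantity behind these heat-stress assessments can be approximated from standard weather data. The sketch below uses a well-known empirical formula (Stull, 2011)—not the proprietary Sunny Lives model—to estimate wet-bulb temperature from dry-bulb temperature and relative humidity:

```python
import math

# Hedged sketch: the Stull (2011) empirical approximation of wet-bulb
# temperature, NOT the Sunny Lives model. Wet-bulb temperature combines
# heat and humidity into the single quantity that determines whether
# the body can still cool itself by sweating.

def wet_bulb_stull(temp_c, rh_pct):
    """Stull (2011) approximation; reasonable for RH roughly 5-99%."""
    return (
        temp_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
        + math.atan(temp_c + rh_pct)
        - math.atan(rh_pct - 1.676331)
        + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
        - 4.686035
    )

# Delhi red-alert conditions from the article: 45 °C dry-bulb.
# Even at a moderate 40% humidity, the wet-bulb value approaches the
# range where sustained outdoor labour becomes dangerous.
print(f"{wet_bulb_stull(45.0, 40.0):.1f} °C wet-bulb")
```

At 45 °C and 40% relative humidity this lands above 32 °C wet-bulb, which helps explain why outdoor informal-sector work during such alerts is so hazardous.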

Researchers argue that future HAPs should operate at the ward level and be supported by local heat vulnerability indexes, allowing for tailored interventions such as adjusted work hours, targeted hydration stations, and heat shelters.

Some announced measures—such as deploying water coolers and establishing day shelters—remain pending. Power outages in some areas also reduce the effectiveness of heat relief efforts.

Only eight Indian states officially classify heatwaves as disasters, limiting access to dedicated funding and emergency response mandates. Heatwaves are not recognised under national disaster legislation, which affects formal policy prioritisation.

Experts emphasise that building long-term heat resilience requires integrating HAPs with broader policy areas such as energy, water, public health, and employment. Several national programmes could support these efforts, but local implementation often suffers from limited awareness of available resources.

As climate risks grow, timely, data-driven, and locally tailored heat response strategies will be key to reducing health and economic impacts.


Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.
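A configuration audit of the kind recommended above can be as simple as diffing live settings against a hardened baseline and reporting drift. The sketch below is a minimal illustration; the setting names are hypothetical examples, not any specific product’s schema:

```python
# Minimal illustration of a configuration audit: compare live settings
# against a hardened baseline and report any drift. The setting names
# are hypothetical examples, not a real product's configuration schema.

BASELINE = {
    "password_auth_enabled": False,   # keys/MFA only
    "public_bucket_access": False,    # no world-readable storage
    "tls_min_version": "1.2",
    "auto_patching": True,
}

def audit(live_config):
    """Return a list of (setting, expected, actual) drift findings."""
    findings = []
    for setting, expected in BASELINE.items():
        actual = live_config.get(setting)
        if actual != expected:
            findings.append((setting, expected, actual))
    return findings

live = {
    "password_auth_enabled": True,    # drifted
    "public_bucket_access": False,
    "tls_min_version": "1.0",         # drifted
    "auto_patching": True,
}

for setting, expected, actual in audit(live):
    print(f"DRIFT: {setting}: expected {expected!r}, found {actual!r}")
```

Production tools express the same idea at scale—declarative baselines, scheduled scans, and alerting on drift—but the core check is this comparison.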

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies run cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.


OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid these tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals like Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.


IGF 2025 opens in Norway with focus on inclusive digital governance

Norway will host the 20th annual Internet Governance Forum (IGF) from 23 to 27 June 2025 in a hybrid format, with the main venue set at Nova Spektrum in Lillestrøm, just outside Oslo.

This milestone event marks two decades of the UN-backed forum that brings together diverse stakeholders to discuss how the internet should be governed for the benefit of all.

The overarching theme, Building Governance Together, strongly emphasises inclusivity, democratic values, and sustainable digital cooperation.

With participation expected from governments, the private sector, civil society, academia, and international organisations, IGF 2025 will continue to promote multistakeholder dialogue on critical topics, including digital trust, cybersecurity, AI, and internet access.

A key feature will be the IGF Village, where companies and organisations will showcase technologies and products aligned with global internet development and governance.

Norway’s Minister of Digitalisation and Public Governance, Karianne Oldernes Tung, underlined the significance of this gathering in light of current geopolitical tensions and the forthcoming WSIS+20 review later in 2025.

Reaffirming Norway’s support for the renewal of the IGF mandate at the UN General Assembly, Minister Tung called for unity and collaborative action to uphold an open, secure, and inclusive internet. The forum aims to assess progress and help shape the next era of digital policy.


DeepSeek under fire for alleged military ties and export control evasion

The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors.

The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI.

DeepSeek, based in Hangzhou, China, gained international attention earlier this year after claiming its AI models rivalled those of leading United States firms like OpenAI—yet at a fraction of the cost.

However, US officials now say that the firm has shared data with Chinese surveillance networks and provided direct technological support to the People’s Liberation Army (PLA). According to the official, DeepSeek has appeared in over 150 procurement records linked to China’s defence sector.

The company is also suspected of transmitting data from foreign users, including Americans, through backend infrastructure connected to China Mobile, a state-run telecom operator. DeepSeek has not responded publicly to questions about these privacy or security issues.

The official further alleges that DeepSeek has been trying to access Nvidia’s restricted H100 AI chips by creating shell companies in Southeast Asia and using foreign data centres to run AI models on US-origin hardware remotely.

While Nvidia maintains it complies with export restrictions and has not knowingly supplied chips to sanctioned parties, DeepSeek is said to have secured several H100 chips despite the ban.

US officials have yet to place DeepSeek on a trade blacklist, though the company is under scrutiny. Meanwhile, Singapore has already charged three men with fraud in an investigation into the suspected illegal movement of Nvidia chips to DeepSeek.

Questions have also been raised over the credibility of DeepSeek’s technological claims. Experts argue that the reported $5.58 million spent on training their flagship models is unrealistically low, especially given the compute scale typically required to match OpenAI or Meta’s performance.

DeepSeek has remained silent amid the mounting scrutiny. Still, with the US-China tech race intensifying, the firm could soon find itself at the centre of new trade sanctions and geopolitical fallout.


Researchers gain control of Tesla charger through firmware downgrade

Tesla’s popular Wall Connector home EV charger was compromised at the January 2025 Pwn2Own Automotive competition, revealing how attackers could gain full control via the charging cable.

The Tesla Wall Connector Gen 3, a widely deployed residential AC charger delivering up to 22 kW, was exploited through a novel attack that used the physical charging connector as the main entry point.

The vulnerability allowed researchers to execute arbitrary code, potentially giving access to private networks in homes, hotels, or businesses.

Researchers from Synacktiv discovered that Tesla vehicles can update the Wall Connector’s firmware via the charging cable using a proprietary, undocumented protocol.

By simulating a Tesla car and exploiting Single-Wire CAN (SWCAN) communications over the Control Pilot line, the team downgraded the firmware to an older version with exposed debug features.

Using a custom USB-CAN adapter and a Raspberry Pi to emulate vehicle behaviour, they accessed the device’s setup Wi-Fi credentials and triggered a buffer overflow in the debug shell, ultimately gaining remote code execution.

The demonstration ended with a visual cue — the charger’s LED blinking — but the broader implication is access to internal networks and potential lateral movement across connected systems.

Tesla has since addressed the vulnerability by introducing anti-downgrade measures in newer firmware versions. The Pwn2Own event remains instrumental in exposing critical flaws in automotive and EV infrastructure, pushing manufacturers toward stronger security.


Amazon CEO warns staff to embrace AI or face job losses

Amazon CEO Andy Jassy has warned staff that they must embrace AI or risk losing their jobs.

In a memo shared publicly, Jassy said generative AI and intelligent agents are already transforming workflows at Amazon, and this shift will inevitably reduce the number of corporate roles in the coming years.

According to Jassy, AI will allow Amazon to operate more efficiently by automating specific roles and reallocating talent to new areas. He acknowledged that it’s difficult to predict the exact outcome but clarified that the corporate workforce will shrink as AI adoption expands across the company.

Those hoping to remain at Amazon will need to upskill quickly. Jassy stressed the need for employees to stay curious and proficient with AI tools to boost their productivity and remain valuable in an increasingly automated environment.

Amazon is not alone in the trend.

BT Group is restructuring to eliminate tens of thousands of roles, while other corporate leaders, including those at LVMH and ManpowerGroup, have echoed concerns that AI’s most significant disruption may be within human resources.

Executives now see AI not merely as a technological shift but as a workforce transformation demanding retraining and a redefinition of roles.


SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to create a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-generation automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, whether this futuristic park becomes a reality, or merely another of Son’s high-stakes dreams, remains to be seen.
