There was a time when machines that think like humans existed only in science fiction. But artificial general intelligence (AGI) now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.
Unlike today’s narrow AI systems, AGI would be able to learn, reason, and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.
Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.
Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.
Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.
AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.
Still, the rise of AGI raises difficult questions.
How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.
Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.
The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the Internet Governance Forum 2025 in Lillestrøm, Norway, Jovan Kurbalija launched the eighth edition of his seminal textbook ‘Introduction to Internet Governance’, marking a return to writing after a nine-year pause. Moderated by Sorina Teleanu of Diplo, the session unpacked not just the content of the new edition but also the reasoning behind retaining its original title in an era crowded with buzzwords like ‘AI governance’ and ‘digital governance’.
Kurbalija defended the choice, arguing that most so-called digital issues—from content regulation to cybersecurity—ultimately operate over internet infrastructure, making ‘Internet governance’ the most precise term available.
The updated edition reflects both continuity and adaptation. He introduced ‘Kaizen publishing’, a new model that replaces the traditional static book cycle with a continuously updated digital platform. Driven by the fast pace of technological change and aided by AI tools trained on his own writing style, the new format ensures the book evolves in real time with policy and technological developments.
The new edition is structured as a seven-floor pyramid tackling 50 key issues, grounded in digital policy’s deep historical roots as well as future internet governance trajectories.
Kurbalija highlighted how key global internet governance frameworks—such as ICANN, the WTO e-commerce moratorium, and UN cyber initiatives—emerged within months of each other in 1998, a pivotal moment he calls foundational to today’s landscape. He contrasted this historical consistency with recent transformations, identifying four key shifts since 2016: mass data migration to the cloud, COVID-19’s digital acceleration, the move from CPUs to GPUs, and the rise of AI.
Finally, the session tackled the evolving discourse around AI governance. Kurbalija emphasised the need to weigh long-term existential risks against more immediate challenges like educational disruption and concentrated knowledge power. He also critiqued the shift in global policy language—from knowledge-centric to data-driven frameworks—and warned that this transformation might obscure AI’s true nature as a knowledge-based phenomenon.
As geopolitics reasserts itself in digital governance debates, Kurbalija’s updated book aims to ground readers in the enduring principles shaping an increasingly complex landscape.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of artificial intelligence governance. The discussion, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela of UNESCO, featured a rich exchange between government officials, private sector leaders, civil society voices, and multilateral organisations.
The session highlighted how AI governance is becoming a crowded yet fragmented space, shaped by overlapping frameworks such as the OECD AI Principles, the EU AI Act, UNESCO’s recommendations on AI ethics, and various national and regional strategies. While these efforts reflect progress, they also pose challenges in terms of coordination, coherence, and inclusivity.
Melinda Claybaugh, Director of Privacy Policy at Meta, noted the abundance of governance initiatives but warned of disagreements over how AI risks should be measured. ‘We’re at an inflection point,’ she said, calling for more balanced conversations that include not just safety concerns but also the benefits and opportunities AI brings. She argued for transparency in risk assessments and suggested that existing regulatory structures could be adapted to new technologies rather than replaced.
In response, Jhalak Kakkar, Executive Director at India’s Centre for Communication Governance, urged caution against what she termed a ‘false dichotomy’ between innovation and regulation. ‘We need to start building governance from the beginning, not after harms appear,’ she stressed, calling for socio-technical impact assessments and meaningful civil society participation. Kakkar advocated for multi-stakeholder governance that moves beyond formality to real influence.
Mlindi Mashologu, Deputy Director-General at South Africa’s Ministry of Communications and Digital Technology, highlighted the importance of context-aware regulation. ‘There is no one-size-fits-all when it comes to AI,’ he said. Mashologu outlined South Africa’s efforts through its G20 presidency to reduce AI-driven inequality via a new policy toolkit, stressing human rights, data justice, and environmental sustainability as core principles. He also called for capacity-building to enable the Global South to shape its own AI future.
Jovan Kurbalija, Executive Director of the Diplo Foundation, brought a philosophical lens to the discussion, questioning the dominance of ‘data’ in governance frameworks. ‘AI is fundamentally about knowledge, not just data,’ he argued. Kurbalija warned against the monopolisation of human knowledge and advocated for stronger safeguards to ensure fair attribution and decentralisation.
The need for transparency, explainability, and inclusive governance remained central themes. Participants explored whether traditional laws—on privacy, competition, and intellectual property—are sufficient or whether new instruments are needed to address AI’s novel challenges.
Audience members added urgency to the discussion. Anna from Mexican digital rights group R3D raised concerns about AI’s environmental toll and extractive infrastructure practices in the Global South. Pilar Rodriguez, youth coordinator for the IGF in Spain, questioned how AI governance could avoid fragmentation while still respecting regional sovereignty.
The session concluded with a call for common-sense, human-centric AI governance. ‘Let’s demystify AI—but still enjoy its magic,’ said Kurbalija, reflecting the spirit of hopeful realism that permeated the discussion. Panelists agreed that while many AI risks remain unclear, global collaboration rooted in human rights, transparency, and local empowerment offers the most promising path forward.
Their work yielded four starkly different future scenarios, ranging from intensified geopolitical rivalry and internet fragmentation to overregulation and a transformative turn toward treating the internet as a public good. A central takeaway was the resurgence of state power as a dominant force shaping digital futures.
According to Pohler, geopolitical dynamics—especially the actions of the US, China, Russia, and the EU—emerged as the primary drivers across nearly all scenarios. That marked a shift from previous foresight efforts that had emphasised civil society or corporate actors.
The panellists underscored that today’s real-world developments are already outpacing the scenarios’ predictions, with multistakeholder models appearing increasingly hollow or overly institutionalised. While the scenarios themselves might not predict the exact future, the process of creating them was widely praised.
Panellists described the interviews and collaborative exercises as intellectually enriching and essential for thinking beyond conventional governance paradigms. Yet, they also acknowledged practical concerns: the abstract nature of such exercises, the lack of direct implementation, and the need to involve government actors more directly to bridge analysis and policy action.
Looking ahead, participants called for bolder and more inclusive approaches to internet governance. They urged forums like the IGF to embrace participatory methods—such as scenario games—and to address complex issues without requiring full consensus.
The session concluded with a sense of urgency: the internet we want may still be possible, but only if we confront uncomfortable realities and make space for more courageous, creative policymaking.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.
China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.
Echoing her call, speakers highlighted that AI’s rapid evolution requires national regulations and coordinated global governance, ideally under the auspices of the UN.
Speakers, such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.
Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.
Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.
Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.
No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?
Tech and wellness: The rise of AI-driven yoga tools
The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.
Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion-tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.
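The real-time pose correction described above can be illustrated with a toy sketch. The keypoint coordinates, target angle, and tolerance below are hypothetical, and a real system would obtain keypoints from a pose-estimation model rather than hard-coded values; this minimal example shows only the geometry step: turning three 2-D keypoints into a joint angle and a correction hint.

```python
import math

def joint_angle(a, b, c):
    # Angle at point b (in degrees) formed by the segments b->a and b->c.
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return deg if deg <= 180 else 360 - deg

def pose_feedback(angle, target=180.0, tolerance=15.0):
    # Flag a correction when the measured angle drifts outside the target band.
    if abs(angle - target) <= tolerance:
        return "good alignment"
    return "straighten the joint" if angle < target else "relax the joint"

# Hypothetical shoulder-elbow-wrist keypoints for a nearly straight arm.
angle = joint_angle((0.0, 0.0), (1.0, 0.0), (2.0, 0.05))
print(round(angle, 1), pose_feedback(angle))  # → 177.1 good alignment
```

A production app would run such checks per video frame against pose-specific target angles, which is how these tools approximate an instructor’s ‘straighten your arm’ cue.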
While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.
Beyond the mat: Virtual reality and immersive yoga
The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.
These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.
Can AI be a guru? Empathy, authority, and the limits of automation
One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?
AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?
Community, loneliness, and digital yoga tribes
Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?
Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.
Digital policy and the politics of platformised spirituality
Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.
Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.
The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.
Toward inclusive and ethical design in wellness tech
As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?
Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.
Toward a mindful tech future
As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.
For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.
At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, strongly focusing on safeguarding human rights in digital policy.
Sandra Hoferichter, who moderated the session in Norway, opened by noting the symbolic significance of EuroDIG’s return to Strasbourg—the city where the forum began in 2008. She emphasised EuroDIG’s unique tradition of issuing ‘messages’ as policy input, which IGF and other regional dialogues later adopted.
Swiss Ambassador Thomas Schneider, President of the EuroDIG Support Association, presented the community’s consolidated contributions to the WSIS+20 review process. ‘The multistakeholder model isn’t optional—it’s essential,’ he said, adding that Europe strongly supports making the Internet Governance Forum a permanent institution rather than one renewed every decade. He called for a transparent and inclusive WSIS+20 process, warning against decisions being shaped behind closed diplomatic doors.
YouthDIG representative Frances Douglas Thomson shared insights from the youth-led sessions at EuroDIG. She described strong debates on digital literacy, particularly around the role of generative AI in schools. ‘Some see AI as a helpful assistant; others fear it diminishes critical thinking,’ she said. Content moderation also sparked division, with some young participants calling for vigorous enforcement against harmful content and others raising concerns about censorship. Common ground emerged around the need for greater algorithmic transparency so users understand how content is curated.
Hans Seeuws, business operations manager at EURid, emphasised the need for infrastructure providers to be heard in policy spaces. He supported calls for concrete action on AI governance and digital rights, stressing the importance of translating dialogue into implementation.
Chetan Sharma from the Data Mission Foundation Trust India questioned the practical impact of governance forums in humanitarian crises. Frances highlighted several EuroDIG sessions that tackled using autonomous weapons, internet shutdowns, and misinformation during conflicts. ‘Dialogue across stakeholders can shift how we understand digital conflict. That’s meaningful change,’ she noted.
A representative from Geneva Macro Labs challenged the panel to explain how internet policy can be effective when many governments lack technical literacy. Schneider replied that civil society, business, and academia must step in when public institutions fall short. ‘Democracy is not self-sustaining—it requires daily effort. The price of neglect is high,’ he cautioned.
Janice Richardson, an expert at the Council of Europe, asked how to widen youth participation. Frances praised YouthDIG’s accessible, bottom-up format and called for increased funding to help young people from underrepresented regions join discussions. ‘The more youth feel heard, the more they stay engaged,’ she said.
As the session closed, Hoferichter reminded attendees of the over 400 applications received for YouthDIG this year. She urged donors to help cover the high travel costs faced by participants, mainly those from Eastern Europe and the Caucasus. ‘Supporting youth in internet governance isn’t charity—it’s a long-term investment in inclusive, global policy,’ she concluded.
At the 2025 Internet Governance Forum in Lillestrøm, Norway, a parliamentary session titled ‘Click with Care: Protecting Vulnerable Groups Online’ gathered lawmakers, regulators, and digital rights experts from around the world to confront the urgent issue of online harm targeting marginalised communities. Speakers from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya shared insights on how current laws often fall short, especially in the Global South where women, children, and LGBTQ+ groups face disproportionate digital threats.
Research presented showed alarming trends—one in three African women experiences online abuse, often with no support or recourse, and platforms’ moderation systems are frequently inadequate, slow, or biased in favour of users from the Global North.
The session exposed critical gaps in enforcement and accountability, particularly regarding large platforms like Meta and Google, which frequently resist compliance with national regulations. Malaysian Deputy Minister Teo Nie Ching and others emphasised that individual countries struggle to hold tech giants accountable, leading to calls for stronger regional blocs and international cooperation.
Meanwhile, Philippine lawmaker Raoul Manuel highlighted legislative progress, including extraterritorial jurisdiction for child exploitation and expanded definitions of online violence, though enforcement remains patchy. In Pakistan, Nighat Dad raised the alarm over AI-generated deepfakes and the burden placed on victims to monitor and report their own abuse.
Panellists also stressed that simply taking down harmful content isn’t enough. They called for systemic platform reform, including greater algorithm transparency, meaningful reporting tools, and design changes that prevent harm before it occurs.
Behavioural economist Sandra Maximiano introduced the concept of ‘nudging’ safer user behaviour through design interventions that account for human cognitive biases—approaches that could complement legal strategies by embedding protection into the architecture of online spaces.
Why does it matter?
A powerful takeaway from the session was the consensus that online safety must be treated as both a technological and human challenge. Participants agreed that coordinated global responses, inclusive policymaking, and engagement with community structures are essential to making the internet a safer place—particularly for those who need protection the most.
The Internet Governance Forum (IGF) 2025 opened in Lillestrøm, Norway, marking its 20th anniversary and coinciding with the World Summit on the Information Society Plus 20 (WSIS+20) review.
UN Secretary-General António Guterres, in a video message, underscored that digital cooperation has shifted from aspiration to necessity. He highlighted global challenges such as the digital divide, online hate speech, and concentrated tech power, calling for immediate action to ensure a more equitable digital future.
Norwegian leaders, including Prime Minister Jonas Gahr Støre and Digitisation Minister Karianne Tung, reaffirmed their country’s commitment to democratic digital governance and human rights, echoing broader forum themes of openness, transparency, and multilateral cooperation. They emphasised the importance of protecting the internet as a public good in an era marked by fragmentation, misinformation, and increasing geopolitical tension.
The ceremony brought together diverse voices—from small island states and the EU to civil society and the private sector. Mauritius’ President Dharambeer Gokhool advocated for a citizen-centred digital transformation, while European Commission Vice President Henna Virkkunen introduced a new EU international digital strategy rooted in human rights and sustainability.
Actor and digital rights activist Joseph Gordon-Levitt cautioned against unregulated AI development, arguing for governance frameworks that protect human agency and economic fairness.
Why does it matter?
Echoing across speeches was a shared call to action: to strengthen the multistakeholder model of internet governance, bridge the still-massive digital divide, and develop ethical, inclusive digital policies. As stakeholders prepare to delve into deeper dialogues during the forum, the opening ceremony made clear that the next chapter of digital governance must be collaborative, human-centred, and urgently enacted.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, a dynamic discussion unfolded on how small states and startups can influence the global AI landscape. The session, hosted by Norway, challenged the notion that only tech giants can shape AI’s future. Instead, it presented a compelling vision of innovation rooted in agility, trust, contextual expertise, and collaborative governance.
Norway’s Digitalisation Minister, Karianne Tung, outlined her country’s ambition to become the world’s most digitalised nation by 2030, citing initiatives like the Olivia supercomputer and open-access language models tailored to Norwegian society. Startups such as Cognite showcased how domain-specific data—particularly in energy and industry—can give smaller players a strategic edge.
Meanwhile, Professor Ole-Christopher Granmo introduced the Tsetlin Machine, an energy-efficient, transparent alternative to traditional deep learning, aligning AI development with environmental sustainability and ethical responsibility. Globally, voices like Rwanda’s Esther Kunda and Brookings Fellow Chinasa T. Okolo emphasised the power of contextual innovation, data sovereignty, and peer collaboration.
They argued that small nations can excel not by replicating the paths of AI superpowers, but by building inclusive, locally relevant models and regulatory frameworks. Big tech representatives from Microsoft and Meta echoed the importance of open infrastructure, sovereign cloud services, and responsible partnerships, stressing that the future of AI must be co-created across sectors and scales.
The session concluded on a hopeful note: small players need not merely adapt to AI’s trajectory—they can actively shape it. By leveraging unique national strengths, fostering multistakeholder collaboration, and prioritising inclusive, ethical, and sustainable design, small nations and startups are positioned to become strategic leaders in the AI era.