There was a time when machines that think like humans existed only in science fiction. But artificial general intelligence (AGI) now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.
Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.
Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.
Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought, while companies race to develop systems that can not only learn but learn how to learn.
Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.
AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.
Still, the rise of AGI raises difficult questions.
How can societies ensure safety, fairness, and control over systems that could become more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.
Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.
The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.
The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.
Le Chat emerged as the top performer thanks to its limited data collection and clear privacy practices, although it lost some points on transparency.
ChatGPT followed in second place, earning praise for providing clear privacy policies and offering users tools to limit data use despite concerns about handling training data. Grok, xAI’s chatbot, took the third position, though its privacy policy was harder to read.
At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.
Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.
Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.
The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.
AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.
AI addresses systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.
Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.
He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.
Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.
The session made clear that the future of work in the AI era depends on today’s collective choices that must centre people, fairness, and global solidarity.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.
Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.
The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.
When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.
Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.
Security experts warn that remote workers and companies have to be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.
Verifying suspicious meeting invites through another channel, such as a direct phone call, is a simple but vital way to prevent damage.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new Trojan dubbed SparkKitty is stealing sensitive data from mobile phones, potentially giving hackers access to cryptocurrency wallets.
Cybersecurity firm Kaspersky says the malware hides in fake crypto apps, gambling platforms, and TikTok clones, spread through deceptive installs.
Once installed, SparkKitty accesses photo galleries and uploads images to a remote server, likely searching for screenshots of wallet seed phrases. Though mainly active in China and Southeast Asia, experts warn it could spread globally.
SparkKitty appears linked to the SparkCat spyware campaign, which also targeted seed phrase images.
The malware is found on iOS and Android platforms, joining other crypto-focused threats like Noodlophile and LummaC2.
TRM Labs recently reported that nearly 70% of last year’s $2.2 billion in stolen crypto came from infrastructure attacks involving seed phrase theft.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law.
Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.
Ambassador Ernst Noorman of the Netherlands warned that human rights and security must be viewed as interconnected, stressing that unregulated AI use can destabilise societies rather than protect them. His remarks echoed the Netherlands’ own hard lessons from biased welfare algorithms.
Other panellists, including Germany’s Cyber Ambassador Maria Adebahr, underlined how AI is being weaponised for transnational repression and emphasised Germany’s commitment by doubling funding for the FOC. Ghana’s cybersecurity chief, Divine Salese Agbeti, added that AI misuse is not exclusive to governments—citizens, too, have exploited the technology for manipulation and deception.
From the private sector, Microsoft’s Dr Erika Moret showcased the company’s multi-layered approach to embedding human rights in AI, from ethical design and impact assessments to rejecting high-risk applications like facial recognition in authoritarian contexts. She stressed the company’s alignment with UN guiding principles and the need for transparency, fairness, and inclusivity.
The discussion also highlighted binding global frameworks like the EU AI Act and the Council of Europe’s Framework Convention, calling for their widespread adoption as vital tools in managing AI’s global impact. The session concluded with a shared call to action: governments must use regulatory tools and procurement power to enforce human rights standards in AI, while the private sector and civil society must push for accountability and inclusion.
The FOC’s statement remains open for new endorsements, standing as a foundational text in the ongoing effort to align the future of AI with the fundamental rights of all people.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.
China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.
Echoing her call, speakers highlighted that AI’s rapid evolution requires national regulations and coordinated global governance, ideally under the auspices of the UN.
Speakers, such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.
Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.
Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.
No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?
Tech and wellness: The rise of AI-driven yoga tools
The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.
Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.
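As a rough illustration of how such pose feedback might work under the hood, the minimal sketch below computes a single joint angle from hypothetical 2D keypoints (the kind a pose-estimation model typically returns) and compares it against an assumed target. The joint names, coordinates, target angle and tolerance are placeholders for illustration, not the method of any particular app.

```python
import math

# Hypothetical 2D keypoints (x, y) as a pose-estimation model might return them.
# Joint names, coordinates and the target below are illustrative placeholders.
keypoints = {
    "hip": (0.50, 0.60),
    "knee": (0.52, 0.80),
    "ankle": (0.51, 0.98),
}

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Compare the measured knee angle with an assumed target for a given pose.
measured = joint_angle(keypoints["hip"], keypoints["knee"], keypoints["ankle"])
target, tolerance = 175.0, 8.0  # e.g. a nearly straight leg
if abs(measured - target) > tolerance:
    print(f"Knee angle {measured:.0f} deg - try straightening the leg a little.")
else:
    print(f"Knee angle {measured:.0f} deg - looks good.")
```

A real system would track many joints across video frames and weigh feedback against the user's range of motion, but the core comparison of measured against target angles follows this pattern.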
While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.
Beyond the mat: Virtual reality and immersive yoga
The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.
These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.
Can AI be a guru? Empathy, authority, and the limits of automation
One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?
AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?
Community, loneliness, and digital yoga tribes
Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?
Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.
Digital policy and the politics of platformised spirituality
Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.
Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.
The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.
Toward inclusive and ethical design in wellness tech
As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?
Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.
Toward a mindful tech future
As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.
For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly content generated by AI that can lead to real-world consequences such as harassment, mental health issues, and even suicide.
Pakistan’s Anusha Rahman Ahmad Khan delivered a powerful appeal, pointing to cultural insensitivity and profit-driven resistance by platforms that often ignore urgent content removal requests. Representatives from Argentina, Nepal, Bulgaria, and South Africa echoed the need for effective legal frameworks that uphold safety and fundamental rights.
Argentina’s Franco Metaza, Member of Parliament of Mercosur, cited disturbing content that promotes eating disorders among young girls and detailed the tangible danger of disinformation, including an assassination attempt linked to online hate. Nepal’s MP Yogesh Bhattarai advocated for regulation without authoritarian control, underscoring the importance of constitutional safeguards for speech.
Tsvetelina Penkova, a Member of the European Parliament from Bulgaria, outlined the EU’s multifaceted digital laws, such as the Digital Services Act and the GDPR, which aim to protect users while grappling with implementation challenges across 27 diverse member states.
Youth engagement and digital literacy emerged as key themes, with several speakers emphasising that involving young people in policymaking leads to better, more inclusive policies. Panellists also stressed that education is essential for equipping users with the tools to navigate online spaces safely and critically.
Calls for multistakeholder cooperation rang throughout the session, with consensus on the need for collaboration between governments, tech companies, civil society, and international organisations. A thought-provoking proposal from a Congolese parliamentarian suggested that digital rights be recognised as a new, fourth generation of human rights—akin to civil, economic, and environmental rights already codified in international frameworks.
Other attendees welcomed the idea and agreed that without such recognition, the enforcement of digital protections would remain fragmented. The session concluded on a collaborative and urgent note, with calls for shared responsibility, joint strategies, and stronger international frameworks to create a safer, more just digital future.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.
Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.
The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
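One of these recommendations, verifying third-party datasets before ingestion and recording their lineage, can be illustrated with a minimal sketch. The file paths, expected checksum and source label below are placeholders for illustration; they are not taken from the agencies' guidance.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: in practice the expected hash would come from the
# dataset publisher or an internal data catalogue.
DATASET_PATH = "third_party_dataset.csv"
EXPECTED_SHA256 = "<published checksum goes here>"

actual = sha256_of_file(DATASET_PATH)
if actual != EXPECTED_SHA256:
    raise ValueError("Dataset hash mismatch: do not ingest; review the data supply chain.")

# Append a simple lineage entry so later audits can trace where the data came from.
lineage_record = {
    "dataset": DATASET_PATH,
    "sha256": actual,
    "source": "vendor-x",  # illustrative label
    "ingested_at": datetime.now(timezone.utc).isoformat(),
}
with open("data_lineage.jsonl", "a") as log:
    log.write(json.dumps(lineage_record) + "\n")
```

Checksums alone do not stop poisoning by a compromised publisher, which is why the guidance pairs dataset verification with ongoing behavioural auditing of deployed models.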
The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.
With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!