At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.
China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, noting that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.
Echoing her call, speakers highlighted that AI’s rapid evolution requires national regulations and coordinated global governance, ideally under the auspices of the UN.
Speakers, such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.
Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.
Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.
No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?
Tech and wellness: The rise of AI-driven yoga tools
The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.
Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion-tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.
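To illustrate the idea, a motion-tracking tool might reduce a detected pose to joint angles and compare them against a target range. The sketch below is hypothetical and simplified (the keypoints, target angle, and feedback strings are invented for illustration, not taken from any particular app):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def feedback(angle, target=180.0, tolerance=15.0):
    """Toy correction rule: flag the pose when the joint angle drifts too far."""
    return "hold" if abs(angle - target) <= tolerance else "straighten the leg"
```

A real system would infer the keypoints with a pose-estimation model and smooth the angles over time; this only shows the geometric comparison step.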
While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.
Beyond the mat: Virtual reality and immersive yoga
The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.
These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.
Can AI be a guru? Empathy, authority, and the limits of automation
One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?
AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?
Community, loneliness, and digital yoga tribes
Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?
Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.
Digital policy and the politics of platformised spirituality
Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.
Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.
The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.
Toward inclusive and ethical design in wellness tech
As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?
Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.
Toward a mindful tech future
As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.
For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly content generated by AI that can lead to real-world consequences such as harassment, mental health issues, and even suicide.
Pakistan’s Anusha Rahman Ahmad Khan delivered a powerful appeal, pointing to cultural insensitivity and profit-driven resistance by platforms that often ignore urgent content removal requests. Representatives from Argentina, Nepal, Bulgaria, and South Africa echoed the need for effective legal frameworks that uphold safety and fundamental rights.
Argentina’s Franco Metaza, Member of Parliament of Mercosur, cited disturbing content that promotes eating disorders among young girls and detailed the tangible danger of disinformation, including an assassination attempt linked to online hate. Nepal’s MP Yogesh Bhattarai advocated for regulation without authoritarian control, underscoring the importance of constitutional safeguards for speech.
Tsvetelina Penkova, a Member of the European Parliament from Bulgaria, outlined the EU’s multifaceted digital laws, such as the Digital Services Act and the GDPR, which aim to protect users while grappling with implementation challenges across 27 diverse member states.
Youth engagement and digital literacy emerged as key themes, with several speakers emphasising that involving young people in policymaking leads to better, more inclusive policies. Panellists also stressed that education is essential for equipping users with the tools to navigate online spaces safely and critically.
Calls for multistakeholder cooperation rang throughout the session, with consensus on the need for collaboration between governments, tech companies, civil society, and international organisations. A thought-provoking proposal from a Congolese parliamentarian suggested that digital rights be recognised as a new, fourth generation of human rights—akin to civil, economic, and environmental rights already codified in international frameworks.
Other attendees welcomed the idea and agreed that without such recognition, the enforcement of digital protections would remain fragmented. The session concluded on a collaborative and urgent note, with calls for shared responsibility, joint strategies, and stronger international frameworks to create a safer, more just digital future.
A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.
Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.
The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
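As a rough illustration of dataset verification and lineage tracking, an organisation might record a cryptographic digest for each vetted dataset and re-check it before every training run. The manifest, file names, and digests below are hypothetical; this is a minimal sketch under those assumptions, not the agencies’ prescribed implementation:

```python
import hashlib
from pathlib import Path

# Hypothetical manifest mapping dataset files to their expected SHA-256 digests,
# recorded when the data was first vetted (a simple form of lineage tracking).
MANIFEST = {
    "train.csv": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large datasets do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path) -> bool:
    """Accept a file only if its digest matches the recorded manifest entry."""
    expected = MANIFEST.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Any file whose digest has drifted from the manifest, or that has no manifest entry at all, is rejected, which is one concrete way to catch tampering between collection and training.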
The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.
With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.
AI tools are increasingly used in workplaces to enhance productivity but come with significant security risks. Workers may unknowingly breach privacy laws like GDPR or HIPAA by sharing sensitive data with AI platforms, risking legal penalties and job loss.
Experts warn of AI hallucinations, in which chatbots generate false yet plausible-sounding information, highlighting the need for thorough human review. Bias in AI outputs, stemming from flawed training data or system prompts, can lead to discriminatory decisions and potential lawsuits.
Cyber threats like prompt injection and data poisoning can manipulate AI behaviour, while user error and IP infringement pose further challenges. As AI technology evolves, unknown risks remain a concern, making caution essential when integrating AI into business processes.
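Prompt injection can be illustrated with a deliberately naive screen for known injection phrasings. The patterns below are invented examples, and real defences go far beyond keyword matching, so treat this only as a sketch of the idea:

```python
import re

# Illustrative (and deliberately simplistic) phrasings associated with prompt
# injection attempts; production defences are far more sophisticated than this.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"reveal your (hidden )?instructions",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches any of the known injection phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A screen like this is trivially evaded by rephrasing, which is precisely why the underlying risk is hard to eliminate: the attack payload is ordinary natural language.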
Canadian and US authorities have attributed a cyberattack on a Canadian telecommunications provider to state-sponsored actors allegedly linked to China. The attack exploited a critical vulnerability that had been patched 16 months earlier.
According to a statement issued on Monday by Canada’s Communications Security Establishment (CSE), the breach is attributed to a threat group known as Salt Typhoon, believed to be operating on behalf of the Chinese government.
‘The Cyber Centre is aware of malicious cyber activities currently targeting Canadian telecommunications companies,’ the CSE stated, adding that Salt Typhoon was ‘almost certainly’ responsible. The US FBI released a similar advisory.
Salt Typhoon is one of several threat actors associated with the People’s Republic of China (PRC), with a history of conducting cyber operations against telecommunications and infrastructure targets globally.
In late 2023, security researchers disclosed that over 10,000 Cisco devices had been compromised by exploiting CVE-2023-20198—a vulnerability rated 10/10 in severity.
The exploit targeted Cisco devices running IOS XE software with HTTP or HTTPS services enabled. Although Cisco released a patch in October 2023, the vulnerability remained unaddressed in some systems.
In mid-February 2025, three network devices operated by an unnamed Canadian telecom company were compromised, with attackers retrieving configuration files and modifying at least one to create a GRE tunnel—allowing network traffic to be captured.
Cisco has also linked Salt Typhoon to a broader campaign using multiple patched vulnerabilities, including CVE-2018-0171, CVE-2023-20273, and CVE-2024-20399.
The Cyber Centre noted that the compromise could allow unauthorised access to internal network data or serve as a foothold to breach additional targets. Officials also stated that some activity may have been limited to reconnaissance.
While neither agency commented on why the affected devices had not been updated, the prolonged delay in patching such a high-severity flaw highlights ongoing challenges in maintaining basic cyber hygiene.
The authorities in Canada warned that similar espionage operations are likely to continue targeting the telecom sector and associated clients over the next two years.
Meta Platforms’ messaging service WhatsApp has been banned from all devices used by the US House of Representatives, according to an internal memo distributed to staff on Monday.
The memo, issued by the Office of the Chief Administrative Officer, stated that the Office of Cybersecurity had classified WhatsApp as a high-risk application.
The assessment cited concerns about the platform’s data protection practices, lack of transparency regarding user data handling, absence of stored data encryption, and associated security risks.
Staff were advised to use alternative messaging platforms deemed more secure, including Microsoft Teams, Amazon’s Wickr, Signal, and Apple’s iMessage and FaceTime.
Meta responded to the decision, stating it ‘strongly disagreed’ with the assessment and maintained that WhatsApp offers stronger security measures than some of the recommended alternatives.
The US House of Representatives has previously restricted other applications due to security concerns. In 2022, it prohibited the use of TikTok on official devices.
McLaren Health Care in Michigan has begun notifying over 743,000 individuals that their personal and health data may have been compromised in a ransomware attack in August 2024.
The health system confirmed that unauthorised access to its systems began on 17 July 2024 and continued until 3 August 2024, affecting McLaren Health Care and its Karmanos Cancer Centers.
A forensic investigation concluded on 5 May 2025 revealed that files containing names, Social Security numbers, driver’s licence details, medical information, and insurance data were accessed.
Notification letters began going out on 20 June 2025, and recipients are being offered 12 months of complimentary credit monitoring and identity theft protection.
Although the incident has not been officially attributed to a specific ransomware group, industry reports have previously linked the attack to the Inc. Ransom group. However, McLaren Health Care has not confirmed this, and the group has not publicly listed McLaren on its leak site.
This is also McLaren’s second ransomware incident within a year: a previous attack by the ALPHV/BlackCat group compromised the data of more than 2.1 million individuals.
Following the August 2024 attack, McLaren Health Care restored its IT systems ahead of schedule and resumed normal operations, including reopening emergency departments and rescheduling postponed appointments and surgeries.
However, data collected manually during the outage is still being integrated into the electronic health record (EHR) system, a process expected to take several weeks.
McLaren Health Care has stated that it continues to investigate the full scope of the breach and will issue further notifications if additional data exposures are identified. The organisation works with external cybersecurity experts to strengthen its systems and prevent future incidents.
The attack caused disruptions across all 13 hospitals in the McLaren system and affiliated cancer centres, surgery centres, and clinics. While systems have been restored, McLaren has encouraged patients to remain prepared by bringing essential documents and information to appointments.
The health system expressed appreciation for its staff’s efforts and patients’ patience during the response and recovery efforts.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, global leaders, tech executives, civil society figures, and academics converged for a high-level session to confront one of the digital age’s most pressing dilemmas: how to protect democratic discourse and human rights amid big tech’s tightening control over the global information space. The session, titled ‘Losing the Information Space?’, tackled the rising threat of disinformation, algorithmic opacity, and the erosion of public trust, all amplified by powerful AI technologies.
Norwegian Minister Lubna Jaffery sounded the alarm, referencing the annulled Romanian presidential election as a stark reminder of how influence operations and AI-driven disinformation campaigns can destabilise democracies. She warned that while platforms have democratised access to expression, they’ve also created fragmented echo chambers and supercharged the spread of propaganda.
Estonia’s Minister of Justice and Digital Affairs Liisa Ly Pakosta echoed the concern, describing how her country faces persistent information warfare—often backed by state actors—and announced Estonia’s rollout of AI-based education to equip youth with digital resilience. The debate revealed deep divides over how to achieve transparency and accountability in tech.
TikTok’s Lisa Hayes defended the company’s moderation efforts and partnerships with fact-checkers, advocating for what she called ‘meaningful transparency’ through accessible tools and reporting. But others, like Reporters Without Borders’ Thibaut Bruttin, demanded structural reform.
He argued platforms should be treated as public utilities, legally obliged to give visibility to trustworthy journalism, and rejected the idea that digital space should remain under the control of private interests. Despite conflicting views on the role of regulation versus collaboration, panellists agreed that the threat of disinformation is real and growing—and no single entity can tackle it alone.
The session closed with calls for stronger international legal frameworks, cross-sector cooperation, and bold action to defend truth, freedom of expression, and democratic integrity in an era where technology’s influence is pervasive and, if unchecked, potentially perilous.
Cloudflare has blocked what it describes as the largest distributed denial-of-service (DDoS) attack ever recorded after nearly 38 terabytes of data were unleashed in just 45 seconds.
The onslaught generated a peak traffic rate of 7.3 terabits per second and targeted nearly 22,000 destination ports on a single IP address managed by an undisclosed hosting provider.
Instead of relying on a mix of tactics, the attackers primarily used UDP packet floods, which accounted for almost all of the attack traffic. A small fraction employed outdated diagnostic tools and methods such as reflection and amplification to intensify the network overload.
These techniques exploit services that automatically answer small requests: by spoofing the victim’s address as the source, attackers cause many third-party systems to direct their much larger replies at the target, multiplying the traffic at scale.
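The effect is often summarised as an amplification factor: the ratio of reply size to request size. The figures below are purely illustrative, not measurements from this attack:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of reply size to request size: how much each spoofed byte is multiplied."""
    return response_bytes / request_bytes

# Illustrative numbers only: a 60-byte spoofed request that draws a 3,000-byte
# reply multiplies the attacker's effective bandwidth 50-fold at the victim.
print(amplification_factor(60, 3000))  # 50.0
```

The higher the factor, the less bandwidth an attacker needs to generate a given flood, which is why high-amplification services are prized in reflection attacks.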
Originating from 161 countries, the attack saw nearly half its traffic come from IPs in Brazil and Vietnam, with the remainder traced to Taiwan, China, Indonesia, and the US.
Despite appearing globally orchestrated, most traffic came from compromised devices—often everyday items infected with malware and turned into bots without their owners’ knowledge.
To manage the unprecedented data surge, Cloudflare used a decentralised approach. Traffic was rerouted to data centres close to its origin, while advanced detection systems identified and blocked harmful packets without disturbing legitimate data flows.
The incident highlights the scale of modern cyberattacks and the growing sophistication of defences needed to stop them.