
13 – 20 February 2026
HIGHLIGHT OF THE WEEK
AI governance moves South
This week, India is hosting the India AI Impact Summit 2026 at Bharat Mandapam in New Delhi, under the auspices of the Ministry of Electronics and Information Technology (MeitY). The event brings together leaders from governments, industry, civil society, and international organisations to advance discussions on the future, ethics, and governance of AI.
A new framework: MANAV vision for ethical AI. At the heart of the summit was Prime Minister Narendra Modi’s unveiling of the ‘MANAV Vision’, a human-centred approach to AI governance. Framed as a series of principles aimed at placing people at the centre of AI development and deployment, MANAV stands for:
- Moral and ethical systems — ensuring AI is guided by ethical norms
- Accountable governance — transparent rules and oversight mechanisms
- National sovereignty — rights over data and digital assets
- Accessible and inclusive AI — avoiding monopolies and broadening participation
- Valid and legitimate systems — lawful and verifiable technologies.
Modi described this framework as essential to preventing future disparities in AI’s impact and ensuring technology serves humanity’s welfare. He also emphasised that AI should be a medium for inclusion and empowerment, particularly for the Global South, rather than a tool that concentrates power among a few actors.

The impact. The summit drew over 100 countries, more than 20 heads of state, hundreds of ministers, and thousands of attendees from the tech, policy, and research sectors. The UN Secretary-General also attended (along with representatives of several other UN entities), calling for ‘policy that is as smart as the technology it seeks to guide’. CEOs from major technology firms, including OpenAI, Google, Microsoft, and Anthropic, participated in discussions, signalling broad engagement across private and public sectors.
Industry leaders used the occasion to emphasise open standards, content authenticity, and broad access to AI tools.
Why does it matter? This is the first time a global AI summit of this scale has taken place in the Global South, broadening AI governance debates beyond frontier safety and competition to include access to infrastructure, linguistic diversity, workforce impacts, and development priorities. Involving countries that are primarily AI adopters also lends future rules greater legitimacy and makes them more implementable.
Follow along! Diplo and GIP are providing AI-enhanced reporting from the Summit, using DiploAI to capture key discussions, outcomes, and trends. Readers can explore the full set of reports and insights on the dedicated summit web page.
The future. Switzerland’s government has announced that the next global AI summit will take place in Geneva in 2027. Bernard Maissen, Swiss State Secretary and Director General of the Federal Office of Communications, said the country views itself as a bridge between the Global North and the Global South — a role it aims to reinforce through the summit.
Guy Parmelin, President of the Swiss Confederation, added that hosting the gathering would further consolidate Switzerland’s standing in digital policy while reaffirming its longstanding commitment to a rules-based international order.
IN OTHER NEWS LAST WEEK
Governments take action against harmful digital practices worldwide
Children’s exposure to social media and digital platforms continues to draw unprecedented scrutiny.
The UK government announced a set of accelerated measures to strengthen protections for children using the internet. In the coming weeks, the government will introduce reforms to close loopholes in existing online safety laws and expand regulatory oversight to AI-powered services and chatbot technologies.
A children’s digital wellbeing consultation will launch next month to gather input from parents and young people. The government intends to act on its findings within months, introducing targeted legal powers that can be updated rapidly as technology evolves. Potential measures include setting a minimum age limit for social media, restricting harmful features such as infinite scrolling, and examining protections against children sending or receiving explicit images. The consultation will also explore restrictions on children’s use of AI chatbots and limits on VPN use where it undermines safety protections.
In Spain, Prime Minister Pedro Sánchez has ordered prosecutors to investigate X, Meta, and TikTok over the alleged circulation of AI-generated child sexual abuse material (CSAM). The probe follows reports that platform systems may have enabled the creation and spread of sexually explicit deepfake imagery involving minors. Spanish authorities are examining whether companies failed to prevent the distribution of such content and whether AI tools embedded in or linked to the platforms contributed to the harm.
At the same time, the UK is introducing legislation requiring tech companies to remove non-consensual intimate images within 48 hours of being reported. Under the updated Crime and Policing Bill, firms that fail to comply risk fines of up to 10% of global revenue or potential service restrictions, with enforcement overseen by Ofcom.
In a Los Angeles court, Meta CEO Mark Zuckerberg has taken the stand in a landmark trial against Instagram and YouTube that could set the tone for thousands of similar cases. The plaintiff, who began using these platforms as a child, claims that features designed to maximise engagement contributed to long-term mental health harm. Zuckerberg insisted that Instagram prohibits users under 13 and that enforcing age limits is challenging, as many minors lie about their birth dates. He highlighted ongoing efforts to reduce screen time and improve safety features. Still, internal documents presented in court suggested that early teen engagement had been a strategic priority. The case is being closely watched as a potential blueprint for platform accountability regarding addictive features.
Meanwhile, Brussels is investigating whether Shein’s design elements, such as gamified engagement and opaque recommendation algorithms, undermine consumer safety and transparency obligations under the DSA. Investigators will examine whether Shein has failed to prevent the sale of illegal products — including items that may constitute child sexual abuse material. Shein’s risk-mitigation systems, product removal processes, and compliance with requirements to offer non-profiling recommendation options will be evaluated, with potential fines of up to 6% of global turnover for confirmed breaches.
Data privacy and compliance remain key concerns globally. Nigeria’s Data Protection Commission (NDPC) has launched an inquiry into the Chinese e-commerce giant Temu over suspected violations of Nigeria’s data protection law. Authorities are probing the company’s data-handling practices, specifically alleged non-transparent processing, intrusive surveillance mechanisms, cross-border transfers, and possible failure to limit data collection. Temu has pledged cooperation as regulators warn that non-compliance could trigger legal penalties and set precedents for data governance in Africa’s largest digital market.
Ireland’s Data Protection Commission (DPC) has initiated a large-scale GDPR investigation into X’s AI chatbot Grok, after reports that its generative AI capabilities have been used to produce harmful, non-consensual and sexualised content involving personal data. This probe, triggered by widespread controversy over Grok’s image outputs, unfolds alongside evidence that the chatbot has been gaining market share in the USA even as global regulators scrutinise its compliance with fundamental data protection standards.
Why it matters. As platforms push AI and algorithmic features to capture attention, policymakers are seeking tools to safeguard vulnerable users and set precedents for global tech governance. The outcomes of ongoing investigations and trials could reshape how platforms design features, manage data, and protect minors for years to come.
‘Freedom Is Coming’: Inside the US plan to let the world bypass local internet laws
The US Department of State is reportedly preparing to launch ‘freedom.gov,’ an online portal designed to help users worldwide, including in Europe and elsewhere, circumvent local content restrictions and access blocked material, including content their governments classify as hate speech or terrorist propaganda.
According to sources familiar with the plan, the website may include a built-in virtual private network (VPN) function that would make user traffic appear to originate from the USA. Sources say user activity on the site will not be tracked.
The big picture. A State Department spokesperson noted that ‘Digital freedom is a priority for the State Department, and that includes the proliferation of privacy and censorship-circumvention technologies like VPNs.’
The transatlantic tension. The move could significantly escalate transatlantic tensions over content governance. While a State Department spokesperson stated that the USA has no ‘censorship circumvention program specific to Europe’, the European Union’s approach to content policy differs fundamentally from the American tradition: the USA protects virtually all forms of expression under the First Amendment, whereas European regulations, particularly the Digital Services Act (DSA), require large online platforms to quickly remove content classified as illegal hate speech, terrorist material, or harmful disinformation.
What’s at stake? Nothing less than how online content is regulated, and who gets to define the limits of expression in an interconnected digital space.
Questions unanswered. It is unclear what advantages a government-backed portal would offer over existing commercial VPN services. Critics question whether the US government should be in the business of providing circumvention tools, and what legal protections would apply to users of the service. The project could put Washington in an unusual position, appearing to encourage citizens of other countries to violate local laws.
Gabon suspends social media
Gabon has imposed an indefinite suspension of social media platforms, citing the spread of false information, cyberbullying and the unauthorised disclosure of personal data.
Gabon’s media regulator, the High Authority for Communication (HAC), stated that existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.
What authorities framed as necessary, critics described as a disproportionate restriction on freedom of expression and access to information.
Why does it matter? The measure underscores a broader trend in which governments resort to connectivity disruptions during periods of instability, raising questions about proportionality, transparency, and compliance with international human rights standards.
Trusted Tech Alliance launched
At the Munich Security Conference, a coalition of major technology companies announced the creation of the Trusted Tech Alliance (TTA) and introduced a set of principles to define what constitutes ‘trusted’ digital infrastructure.
The alliance brings together firms spanning cloud computing, AI, telecommunications, and enterprise software. Members are committed to five core principles: transparent corporate governance, secure development and independent assessment, supply chain oversight, ecosystem openness, and adherence to the rule of law and data protection standards.
In context. The launch comes amid escalating debates over digital sovereignty. The initiative positions itself as a response to rising geopolitical fragmentation and growing scrutiny over the security of critical digital systems.
Scepticism remains. Analysts caution that while the alliance sets principles, it currently lacks strong enforcement or independent verification mechanisms, meaning compliance is largely voluntary. Critics also highlight that US-based companies dominate membership, raising questions about whether the initiative genuinely addresses European concerns over strategic autonomy.
LAST WEEK IN GENEVA

The UN Institute for Disarmament Research (UNIDIR), in partnership with the Organisation internationale de la Francophonie (OIF), held an event to explore the phenomenon of hybrid threats. Experts recommended strengthening multilateral governance with harmonised norms for space and online platforms, building societal resilience through information and enforcement, and protecting critical infrastructure via cybersecurity and operational safeguards. Supporting less-equipped states with technical, regulatory, and risk-management tools is essential, alongside strategic signalling to make the consequences of unacceptable actions clear. Across all these measures, the guiding principle is integration—space, information, and cyber domains must be managed together to maintain global stability and resilience.
The World Intellectual Property Organization (WIPO) launched the 2026 edition of its World Intellectual Property Report, entitled ‘Technology on the Move’, on 17 February (Tuesday) in Geneva and online. The report analyses how technologies spread globally and the implications for economic development. It reveals a dramatic acceleration in global technology diffusion: older technologies like the telegraph and automobile took decades to diffuse, whereas contemporary digital innovations, such as generative AI, reach users worldwide within days thanks to mature global digital infrastructure. Adoption gaps between advanced and developing economies have narrowed for recent technologies, and usage intensity differences are diminishing, especially for digital technologies. However, significant disparities remain, notably in Africa, where infrastructure and access gaps persist. Innovation leadership remains concentrated in a handful of economies, including the USA, Western Europe, Japan and China. Successful diffusion depends on four key factors—technology characteristics, information flow, absorptive capacity, and public policy and IP frameworks. The report stresses that deliberate policy and investment are essential to translate rapid diffusion into inclusive economic development and growth.
LOOKING AHEAD

The 61st regular session of the United Nations Human Rights Council (HRC61) is scheduled to open on 23 February 2026 at the Palais des Nations in Geneva, Switzerland. This session provides a key platform for the international community to discuss, promote, and protect human rights worldwide. Major agenda items include discussions on the promotion and protection of civil, political, economic, social, and cultural rights, and the review of specific human rights situations that require the Council’s attention.
Masters of Digital 2026 will take place on 26 February 2026 as a hybrid event in Brussels and online. Under the theme ‘Redesigning Europe’s Digital Power’, the conference will examine AI-driven competitiveness, digital security in a fragmented geopolitical landscape, and regulatory simplification. Day 1 centres on AI leadership, industrial strategy, investment, and the Future Unicorn Award. Day 2 addresses digital infrastructure, health, energy, cybersecurity, and transatlantic cooperation. Participation is open online, with limited in-person access by application.
READING CORNER
Can AI save endangered languages? From Google’s Woolaroo to Maori data sovereignty, explore how technology, policy, and diplomacy intersect to protect linguistic heritage.
‘This week, in the conference rooms of the AI Impact Summit in New Delhi, a large elephant will be lurking. It’s an elephant in the defining mantra of the modern AI era: The more GPU computing power we put in, the better AI we will have. But is this mantra actually true? This article aims to challenge that core assumption, arguing that, at best, it is naive and, at worst, dangerous for the modern economy and the future of our society.’
