
6 March – 13 March 2026
HIGHLIGHT OF THE WEEK
Measuring hate’s footprint
Spain has unveiled HODIO, a new digital tool designed to systematically measure and expose the prevalence of hate speech across social media platforms. The tool, short for Huella del Odio y Polarización (Footprint of Hatred and Polarisation), was announced by Prime Minister Pedro Sánchez at the inaugural Forum Against Hate in Madrid. It will be managed by the Spanish Observatory on Racism and Xenophobia (OBERAXE).
How will it work? HODIO will combine quantitative analysis, AI tools, and expert review to assess the scale and patterns of hate speech on major platforms. Every six months, the observatory will publish a report ranking platforms according to users’ exposure to hostile messages, allowing comparisons across services.
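To make the ranking idea concrete, here is a purely illustrative Python sketch. HODIO’s actual methodology, data sources, and thresholds are not public, so the classifier scores, the 0.8 cut-off, and the exposure metric below are all assumptions for illustration.

```python
# Purely illustrative sketch: HODIO's real methodology is not public.
# The scores, the 0.8 cut-off, and the metric are assumptions.

# Hypothetical sample: (platform, ai_model_score, flagged_by_expert)
posts = [
    ("PlatformA", 0.91, True),
    ("PlatformA", 0.12, False),
    ("PlatformB", 0.78, True),
    ("PlatformB", 0.65, False),
    ("PlatformC", 0.05, False),
]

def exposure_score(platform: str) -> float:
    """Share of a platform's sampled posts counted as hateful.

    A post counts if the AI model's score crosses a threshold or a human
    reviewer flagged it, mirroring the announced mix of quantitative
    analysis, AI tools, and expert review.
    """
    sample = [p for p in posts if p[0] == platform]
    hateful = [p for p in sample if p[1] >= 0.8 or p[2]]
    return len(hateful) / len(sample)

# Report-style ranking: platforms ordered by users' exposure.
for platform in sorted({p[0] for p in posts}, key=exposure_score, reverse=True):
    print(f"{platform}: exposure {exposure_score(platform):.0%}")
```

The real system would of course work over far larger samples and richer signals, but the principle, a comparable per-platform exposure figure published every six months, is the same.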
The goal. By measuring the ‘footprint’ of hate, the government hopes to create stronger evidence for policymaking and increase pressure on platforms to take action against harmful content.

Why does it matter? The tool reflects a growing push by governments to gather data on online harms and better understand how hate speech spreads across platforms. For instance, France is investigating influencer-driven hate and platform policies, Brazil is using AI to monitor anti-LGBTQ online content, and Indonesia has created a real-time dashboard revealing stark differences in platform toxicity during elections.
But: a slippery slope. Critics have raised concerns about the transparency of HODIO and how authorities will define and classify hate speech, warning that poorly defined criteria could infringe on freedom of expression.
IN OTHER NEWS LAST WEEK
This week in AI governance
The USA. The US government is facing two lawsuits from AI firm Anthropic after the Pentagon designated the company a supply-chain risk, effectively barring its technology from defence contracts. Anthropic argues the move is unlawful and politically motivated, claiming the government is retaliating against the company for refusing to allow its AI models to be used for domestic surveillance or fully autonomous weapons. The lawsuits, filed in courts in California and Washington, D.C., challenge the rare use of national-security supply-chain rules against a US technology company.
The legal dispute has drawn support from across the tech sector, with companies including Microsoft, Google, Amazon, and OpenAI backing Anthropic’s legal challenge through amicus filings. Industry leaders warn that the government’s designation could set a precedent that destabilises the US AI ecosystem and disrupts suppliers working with both government and private-sector AI systems.
Why does it matter? Ultimately, the dispute exposes a deeper structural tension. Advanced AI systems are increasingly central to military planning, logistics, intelligence analysis, and battlefield decision-making. At the same time, leading AI firms have articulated ethical boundaries around surveillance, lethal autonomy, and dual-use risks. The confrontation between Anthropic and the Pentagon crystallises the question of who determines those boundaries when national security and corporate governance collide.
The EU. The European Commission has released a second draft of its Code of Practice on marking and labelling AI-generated content, part of efforts to help companies comply with transparency requirements under Article 50 of the EU Artificial Intelligence Act. Section 1 of the code focuses on providers of generative AI systems and proposes a multi-layered approach to marking AI-generated content, including digitally signed metadata, imperceptible watermarking, and optional fingerprinting or logging. Providers are also expected to make detection tools available so users and authorities can verify whether content was generated or manipulated by AI. Section 2 addresses deployers of AI systems, requiring clear disclosure when deepfakes or AI-generated text intended to inform the public have been artificially generated or manipulated, using visible and accessible labels.
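As an illustration of the first of those layers, digitally signed metadata, here is a minimal Python sketch. The manifest fields, key handling, and workflow are assumptions for illustration; the draft code does not prescribe a specific format, and real deployments would more likely rely on a provenance standard such as C2PA. The sketch uses Ed25519 signatures from the widely available cryptography package.

```python
# Minimal sketch of a "digitally signed metadata" layer. The manifest
# fields and workflow are illustrative, not the Commission's specification.
# Requires the third-party 'cryptography' package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

provider_key = Ed25519PrivateKey.generate()  # provider's signing key

# Hypothetical provenance manifest attached to a generated asset.
manifest = json.dumps({
    "generator": "example-model-v1",   # assumed field names
    "ai_generated": True,
    "created": "2026-03-10T12:00:00Z",
}, sort_keys=True).encode()

signature = provider_key.sign(manifest)

# A detection tool (user- or authority-facing) verifies the claim.
public_key = provider_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("Manifest authentic: content declared AI-generated.")
except InvalidSignature:
    print("Manifest missing or tampered with.")
```

Signed metadata of this kind is easy to verify but also easy to strip, which is why the draft pairs it with imperceptible watermarking and optional fingerprinting or logging.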
Meta reopens WhatsApp to third-party AI chatbots amid EU pressure
Meta has announced that third-party AI chatbots will once again be allowed, for a fee, to operate through WhatsApp in Europe, reversing earlier restrictions that limited rival chatbot services’ access to the platform.
The decision follows pressure from the European Commission, which had warned it could impose interim competition measures. Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months. The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.
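In practice, distribution via the WhatsApp Business API means a chatbot provider receives user messages through a webhook and replies through Meta’s messages endpoint. A minimal sketch of the reply side follows; the endpoint shape follows Meta’s published Cloud API, but the credentials, recipient number, and generate_reply helper are placeholders.

```python
# Sketch of a third-party chatbot replying over the WhatsApp Business
# (Cloud) API. Credentials and generate_reply() are placeholders.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"        # placeholder credential
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"  # placeholder business number ID

def generate_reply(user_message: str) -> str:
    """Stand-in for the third-party provider's own model call."""
    return f"You said: {user_message}"

def send_text(recipient: str, body: str) -> None:
    """Send a plain-text message to a WhatsApp user."""
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "messaging_product": "whatsapp",
            "to": recipient,
            "type": "text",
            "text": {"body": body},
        },
        timeout=10,
    )
    resp.raise_for_status()

send_text("15551234567", generate_reply("Hello"))  # illustrative number
```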
What’s next? EU competition chief Teresa Ribera said that the Commission would examine Meta’s proposal and decide whether further intervention is necessary.
UK rejects social media ban, opting for flexible measures
A proposed social media ban for under-16s has been rejected by UK MPs, with 307 voting against and 173 in favour. Instead, the parliamentarians supported an alternative plan put forward by Education Minister Olivia Bailey to give ministers flexible powers, enforceable after a consultation on online safety concludes.
Under this plan, Technology Secretary Liz Kendall could ‘restrict or ban children of certain ages from accessing social media services and chatbots’. She could also limit children’s VPN use, restrict access to addictive features, and change the age of digital consent in the UK.
At the same time, UK online safety regulators Ofcom and the Information Commissioner’s Office (ICO) have called on social media firms to better protect children online. Ofcom issued an open letter urging tech firms to keep underage children off their platforms, emphasising the importance of robust age verification systems. The guidance is backed by the ICO, which highlighted the need to protect children’s personal data and strengthen compliance with existing regulations.
Looking ahead. The fate of Bailey’s plan depends on the online safety consultation, which closes on 26 May. A joint statement by Ofcom and the ICO, expected in March 2026, will clarify how online safety and data protection intersect in the context of age assurance, offering updated guidance to platforms on their responsibilities.
Why does it matter? In an era when an increasing number of countries are considering social media bans for minors, the UK’s recent decision stands out for explicitly rejecting a total ban. Its approach could set an influential example, potentially leading some governments that had been weighing bans to rethink their plans.
Cybersecurity: On the offensive
President Donald Trump released his administration’s national cybersecurity strategy, outlining priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, emerging technology leadership (including in AI), and workforce development.
Trump also signed an executive order the same day, directing the attorney general to prioritise cybercrime prosecution, tasking agencies with reviewing tools to counter international criminal organisations, and assigning the Department of Homeland Security expanded training responsibilities.
The strategy document spans five pages of substantive text, with administration officials describing it as intentionally high-level. The White House stated that more detailed implementation guidance would follow.
In the pipeline. A full analysis of the strategy by Diplo’s cybersecurity policy team. Stay tuned!
On the heels of the strategy comes the latest cyber episode in the USA-Israel-Iran conflict: pro-Iranian hacker group Handala has claimed responsibility for a cyberattack on US medical device giant Stryker, describing it as retaliation for a missile strike on an elementary school in Iran. Stryker confirmed the attack in a statement, noting that order processing, manufacturing, and shipping were disrupted but that connected products were not affected.
Meanwhile, European intelligence agencies are warning of a growing cyber-espionage campaign targeting accounts on encrypted messaging platforms such as Signal and WhatsApp. Authorities in the Netherlands reported that hackers—believed to be linked to Russia—have launched large-scale phishing operations aimed at diplomats, military personnel, government officials, and journalists. Instead of breaking the apps’ encryption, attackers trick users into sharing verification codes or linking devices, allowing them to take over accounts and access sensitive conversations.
Portugal’s intelligence service has issued a similar alert, describing a global campaign by foreign state-backed actors seeking access to the messaging accounts of officials and others with privileged information. Once inside an account, attackers can read chats, access shared files, and use the compromised profile to target additional victims through further phishing attempts.
The big picture. The past week confirms that cyberspace has become a domain of unrelenting offensive action. With the Global Mechanism on ICT security set to begin its work at the end of March, the question is how these very incidents will inform its discussions.
LOOKING AHEAD

The Global Fraud Summit 2026, convened by the UNODC in cooperation with INTERPOL, will take place on 16–17 March at the Vienna International Centre, Austria. This ministerial-level meeting brings together government officials, law enforcement, international organisations, the private sector, civil society, and academics to tackle fraud as a transnational threat. Participants will explore emerging trends, digital and cross-border challenges, prevention strategies, enforcement measures, and international information-sharing mechanisms.
WIPO will launch the AI Infrastructure Interchange (AIII) on 17 March in Geneva and online. The programme includes keynote remarks, panel discussions, and presentations addressing the role of technical collaboration between creators, rightsholders, and technology companies. Participants will also discuss the objectives of the AIII initiative and the establishment of a Technical Exchange Network intended to support ongoing expert dialogue on practical challenges and opportunities. Registration for the event is open.
Diplo is organising a webinar, ‘Technology Innovations for Creative Diplomacy’, on 18 March to examine the impact of technological innovation on diplomatic practice. The webinar will showcase creative applications of digital tools in public diplomacy and international engagement, and share practical experiences from technology-driven initiatives. Registration for the event is open.
READING CORNER
In the global rush to regulate AI, a consensus seems to have formed: that the real drama of AI lies in the future – in existential risks, deepfakes, or algorithmic bias. But for persons with disabilities, the crisis of AI is already here, and it is older than the technology itself. It is the story of being told a system is efficient and modern, only to find the door locked from the inside, writes Muhammad Shabbir.
Can AI ethics truly be universal? Or are we quietly building a global moral monoculture into our machines? Emmanuel Elolo Agbenonwossi explores how ideas like Ubuntu challenge dominant AI governance narratives and why pluralism may matter more than universality.
The management of a digital afterlife involves more than just closing accounts. It concerns the long-term security and ethical handling of personal data. Slobodan Kovrlija examines the growing need for international standards for digital legacy and what this means for the future of digital rights.



