
3 – 9 April 2026
HIGHLIGHT OF THE WEEK
AI meets cybersecurity as Project Glasswing takes flight
This week, a veritable who’s who of tech—Amazon, Apple, Google, Microsoft, NVIDIA, and a dozen other giants—joined the Anthropic-led cybersecurity project Glasswing.
The launch partners will use Anthropic’s unreleased Claude Mythos Preview as part of their defensive security work. The company claims the tool can already identify software vulnerabilities at a level surpassing that of most human experts.
The premise is straightforward and difficult to dispute: If AI systems can find and exploit vulnerabilities at scale, then those same capabilities should be deployed defensively, before less scrupulous actors gain access. Anthropic frames this as a narrow window of opportunity. Mythos Preview, it argues, has already uncovered thousands of high-severity vulnerabilities across major operating systems and browsers—an assertion that, if accurate, signals a step-change in the automation of software exploitation.

Yet the announcement also raises questions that go beyond the promises.
There is the question of verification. Claims that a model can ‘surpass all but the most skilled humans’ at vulnerability discovery are inherently difficult to evaluate externally, particularly when the system itself is not publicly available.
There is also a systemic coordination issue. If AI accelerates the rate at which vulnerabilities are found, discovery may outpace remediation, making patching capacity the bottleneck.
The model remains unreleased, accessible only to a curated group of partners and selected infrastructure maintainers. This controlled access concentrates a powerful capability in the hands of a small set of actors. Smaller vendors, public institutions, and under-resourced open-source projects may benefit indirectly from disclosed fixes, but they are unlikely to operate on equal footing.
It is also worth noting that all core partners in Project Glasswing—from Amazon Web Services and Google to Microsoft, Apple, and Cisco—are headquartered in the United States. That matters, because access to the most sensitive capability—the model itself—appears tightly governed and selectively distributed. Even if non-US entities participate, they are unlikely to do so on equal terms. It reflects where frontier AI development and much of the global cybersecurity industry are currently anchored, but it also reinforces the geopolitical framing that increasingly surrounds these technologies.
That said, it would be misleading to see this as purely exclusionary. If the initiative results in patched vulnerabilities, improved open-source security, and shared findings, its effects will be globally distributed—whether or not governance is.
IN OTHER NEWS LAST WEEK
This week in AI governance
South Korea–France. South Korea and France are deepening cooperation through a new strategic AI and technology partnership, aimed at strengthening joint research, industrial collaboration and standard-setting across emerging technologies. The initiative reflects a broader effort to align capabilities in semiconductors, data infrastructure and advanced computing, while positioning both countries more competitively in the global AI landscape.
The USA. A federal appeals court in Washington, D.C. has declined to block the Pentagon’s national-security blacklisting of Anthropic, allowing the designation to remain in force while litigation continues. The ruling contrasts with a separate decision by a California judge who had earlier blocked part of the government’s action, highlighting a growing judicial split over the unprecedented move.
OpenAI has released a policy document entitled ‘Industrial Policy for the Intelligence Age: Ideas to Keep People First’. The document argues that while superintelligence promises extraordinary benefits, it also carries serious risks: job displacement, misuse by bad actors, loss of human control, and concentration of power and wealth. The proposals are organized into two sections. First, building an open economy: giving workers a voice in AI deployment, treating AI access as a fundamental right, creating a ‘Public Wealth Fund’ to give citizens direct stakes in AI growth, converting efficiency gains into shorter workweeks, and building adaptive safety nets that trigger automatically when disruption occurs. Second, building a resilient society: developing containment playbooks for dangerous AI, creating verifiable trust stacks for content, strengthening independent auditing of frontier models, mandating incident reporting, and building international information-sharing networks.
The EU. If you want to let European lawmakers know what you think of the implementation of the bloc’s AI Act, there is still a bit of time. The feedback period on the draft Implementing Regulation related to the oversight of general-purpose AI models under Regulation (EU) 2024/1689 (the AI Act) will remain open until tonight, 9 April (midnight).
US Supreme Court narrows ISP liability, sharpens focus on intent with AI implications
A unanimous US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement.
Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement.
The decision could have implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms.
The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors. If a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.
US agencies warn of cyber intrusions into critical infrastructure systems
A joint cybersecurity advisory issued by the Federal Bureau of Investigation, Cybersecurity and Infrastructure Security Agency, National Security Agency, and several sector-specific partners warns US organisations of an ongoing campaign by actors targeting industrial control systems across US critical infrastructure.
The activity focuses on internet-exposed operational technology (OT), particularly programmable logic controllers (PLCs), which are widely used to automate industrial processes in sectors such as energy, water and wastewater systems, and government services.
According to the advisory, the attackers exploit PLCs by leveraging their direct exposure to the internet: they gain initial access by scanning for internet-facing PLCs and connecting through commonly used industrial communication ports. Once access is established, the actors interact with device project files and manipulate data displayed on human-machine interfaces (HMIs) and supervisory control and data acquisition (SCADA) systems. This enables them to disrupt industrial processes in real time. In several confirmed cases, such intrusions have resulted in operational disruption and financial loss, underscoring the tangible, physical-world impact of these cyber operations.
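The initial-access vector described above, probing internet-facing devices on well-known industrial ports, is also how defenders can audit their own exposure. The following is a minimal, illustrative Python sketch of such a check; the `exposed_ot_ports` helper and the port list are assumptions for demonstration, not tools or a list taken from the advisory.

```python
import socket

# Illustrative subset of well-known industrial-protocol ports
# (standard IANA assignments), not a list from the advisory.
COMMON_OT_PORTS = {
    502: "Modbus TCP",
    102: "Siemens S7 (ISO-TSAP)",
    44818: "EtherNet/IP",
}

def exposed_ot_ports(host, ports=None, timeout=1.0):
    """Return the subset of `ports` on which `host` accepts a TCP connection."""
    if ports is None:
        ports = COMMON_OT_PORTS
    open_ports = {}
    for port, name in ports.items():
        try:
            # A completed TCP handshake means the port is reachable.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports[port] = name
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Such a probe should only ever be run against infrastructure you own or are authorised to test; the advisory describes attackers automating essentially this same trivial scan at internet scale.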
The campaign appears to be part of a broader escalation in Iranian-linked cyber activity, likely tied to geopolitical tensions involving the USA and its allies. The advisory links the activity to previously identified advanced persistent threat (APT) groups associated with Iran’s Islamic Revolutionary Guard Corps (IRGC).
Greece sets ‘digital age of majority,’ moving to ban under-15s from social media
Greece is moving to tighten restrictions on minors’ use of social media, with legislation expected later this year that would introduce a ban for children under 15. The measure is set to take effect on 1 January 2027 and is intended as a broader framework reshaping how platforms operate, not merely a standalone ban.
Platforms would be required to implement robust age verification mechanisms, including the re-verification of existing accounts, with oversight provided by national regulators such as the Hellenic Telecommunications and Post Commission (EETT).
The measure applies to social networking services where users create profiles, publish content, and interact publicly, while excluding private communication services.
The big picture. The proposal reflects an emerging policy pattern across Europe, where governments are increasingly willing to intervene more directly in platform access for minors. Athens is also seeking to elevate the issue at the European level. ‘Our goal is to push the European Union in this direction as well,’ Prime Minister Kyriakos Mitsotakis noted in a video about the measure posted on TikTok.
Brazil launches first national centre for assistive technology
Brazil has inaugurated its first Center for Access, Research and Innovation in Assistive Technology (Capta) at the Benjamin Constant Institute in Rio de Janeiro. Run by the Ministry of Science, Technology and Innovation (MCTI) under the National Plan for the Rights of People with Disabilities, the centre aims to foster the development, experimentation, and dissemination of assistive technologies that enhance autonomy, inclusion, and quality of life for people with disabilities.
The launch marks the first of several planned centres nationwide to expand access to these technologies.
Yes, but. The long-term impact will depend on sustained investment and the ability to scale these centres nationwide.
UPCOMING EVENTS
WTO deadlock, AI boom: Unpacking MC14 and looking ahead
Diplo, the Digital Trade and Data Governance Hub, and the Geneva Internet Platform will co-organise a webinar on 14 April (next Tuesday) that unpacks digital trade developments from the 14th WTO Ministerial Conference (MC14) in Yaoundé and looks ahead to their implications for the rapidly expanding AI economy. As digital trade rules take shape through multiple channels, understanding the intersection between trade policy and AI governance becomes increasingly urgent. The speakers will explore:
- What to expect at the next General Council meeting in May and beyond
- The main outcomes and sticking points from MC14
- What the lapse of the e-commerce moratorium means — and what it does not mean
- How the plurilateral JSI e-commerce agreement may shape digital trade going forward
- The specific implications for AI development, including data flows, tariffs on digital services, and regulatory coherence
Registration for the event is open.
READING CORNER
The European Union is progressing into the implementation phase of its Artificial Intelligence Act, with emerging obligations for providers of general-purpose AI models. Guidance from the European Commission and the AI Office outlines compliance expectations as the EU operationalises its risk-based AI governance framework.

