
Weekly #255 From content to design: Juries signal new era of accountability for tech giants


20 – 27 March 2026


HIGHLIGHT OF THE WEEK

From content to design: Juries signal new era of accountability for tech giants

Two recent US jury verdicts are beginning to redraw the boundaries of responsibility for social media platforms, with implications that extend well beyond the individual cases. 

In New Mexico, a jury ordered Meta to pay $375 million after finding it misled users about the safety of its platforms for children. The case, brought by Attorney General Raul Torrez, centred on evidence that services such as Facebook and Instagram exposed minors to sexually explicit content and potential predators. Jurors were presented with internal research and testimony from former employees, including whistle-blower Arturo Béjar, suggesting the company was aware of these risks but failed to adequately warn the public or mitigate harm. Meta has rejected the verdict and plans to appeal.

Simultaneously, a Los Angeles jury reached a related conclusion in a different context. It found Meta and YouTube—owned by Google—negligent in the design and operation of their platforms in a case focused on social media addiction. The lawsuit, brought by a young woman identified as K.G.M., argued that compulsive use of these platforms during her teenage years contributed to depression, anxiety, and body dysmorphia. The jury agreed, awarding $6 million in damages and assigning 70% of the liability to Meta and 30% to Google. Both companies have said they will appeal, maintaining that mental health outcomes cannot be attributed to a single platform.

Why does it matter? The financial penalties in these cases are small for companies of this scale. The broader significance of the verdicts lies elsewhere. 


Historically, platforms have relied on legal protections—most notably Section 230 of the US Communications Decency Act—to shield themselves from liability for user-generated content. These rulings, however, begin to test a different theory: that liability can arise not just from what users post, but from how platforms structure, recommend, and amplify content.

This distinction matters because it targets the core of the modern social media business model. Platforms like Meta and Google are built around maximising user engagement—time spent, interactions, and content consumption—which in turn drives advertising revenue. To achieve this, they rely on recommendation systems, frictionless interfaces, and behavioural design features such as autoplay, infinite scroll, and push notifications. These are not incidental elements; they are foundational to how platforms retain users and monetise attention.

The emerging legal argument is that some of these design choices may actively contribute to harm, particularly for minors. In the New Mexico case, the focus was on exposure to harmful and exploitative content. In Los Angeles, the emphasis was on compulsive use and its mental health effects. But both cases converge on a similar point: that platform architecture itself—not just isolated content failures—can create foreseeable risks.

If this reasoning gains traction in courts, it introduces a new kind of pressure on technology companies. The issue is not the size of any single fine, but the cumulative effect of thousands of similar lawsuits, rising compliance costs, and the possibility of precedent-setting rulings that reshape acceptable design practices. Engagement-maximising systems, long treated as a competitive advantage, could become a source of legal vulnerability.

That creates a structural tension. Reducing harmful outcomes may require dialling back precisely those features that make platforms so effective at capturing attention. Even modest declines, when applied across billions of users, can translate into significant revenue impacts.

The path forward? Companies are unlikely to abandon their core models outright. A more probable response is adaptation. This could include re-optimising algorithms toward safer forms of engagement, segmenting products by age with stricter defaults for minors, and investing in more robust safety and audit mechanisms. There may also be a gradual shift toward alternative revenue streams—such as subscriptions, creator monetisation, or commerce integrations—to reduce reliance on pure attention-based advertising.

Legal strategy will also play a role. Both Meta and Google are appealing these verdicts, and future rulings will determine how far courts are willing to go in attributing harm to design choices. Companies are likely to strengthen disclosures, expand parental controls, and document internal risk assessments to demonstrate due diligence. Such measures may not eliminate liability, but they can shape how responsibility is interpreted.

Ultimately, the key question is whether these cases represent isolated outcomes or the beginning of a broader legal shift. 

IN OTHER NEWS LAST WEEK

This week in AI governance

USA. The US government has unveiled a National AI Policy Framework outlining a comprehensive strategy for AI across federal agencies. The policy sets priorities for responsible AI development, data governance, workforce training and international collaboration, while emphasising ethical safeguards, public‑interest outcomes and national security. The framework also calls for accelerated investment in AI research and deployment, alongside coordinated oversight mechanisms to ensure transparency and accountability in federal AI systems.

Netherlands, France. A Dutch court has ordered xAI and its Grok chatbot not to create or distribute non‑consensual sexual images. The judgement requires Grok’s operators to implement technical measures to block prompts or outputs capable of producing non‑consensual intimate imagery. The decision was framed as a necessary enforcement of personal rights and dignity in the digital age, setting a potentially influential precedent for European courts grappling with AI‑generated harm.

Meanwhile, the Paris prosecutor’s office said that the controversy surrounding sexually explicit deepfakes generated by Grok may have been deliberately amplified, allegedly to inflate the value of X and xAI ahead of June 2026, when the new entity created by the merger of SpaceX and xAI is expected to be listed on the stock market.

The UK. The Secretary of State for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade. The secretary urged tech companies to implement Ofcom’s guidance ‘A Safer Life Online for Women and Girls’, which outlines steps such as conducting risk assessments focused on women and girls, running pre-launch abusability evaluations of new features, setting strong default privacy settings, demonetising content that promotes abuse, limiting the visibility of misogynistic content in search and recommendation feeds, and implementing rate limits to curb coordinated harassment. The guidance should be implemented by the end of 2026 at the latest.

Australia. The eSafety Commissioner found that AI companion chatbots, including Character.AI, Nomi, Chai and Chub AI, are failing to protect children from harmful content, with weak safeguards against sexually explicit material and child sexual exploitation. Most platforms relied on self-declared age verification, lacked meaningful monitoring of AI inputs and outputs, and did not consistently provide links to crisis or mental health support. Commissioner Julie Inman Grant warned that as children increasingly use AI companions for emotional support, the absence of robust safety measures on self-harm, suicide and unlawful content poses serious risks, with non-compliance subject to civil penalties under Australia’s Age-Restricted Material Codes.

Russia. The Russian government is proposing rules that could ban or restrict foreign AI tools such as ChatGPT, Claude and Gemini if they fail to store Russian user data domestically and comply with Moscow’s regulatory requirements. The proposals, from the Ministry for Digital Development, aim to extend Russia’s push for a sovereign internet, protecting citizens from ‘covert manipulation’ and enforcing ‘traditional Russian spiritual and moral values.’ Under the draft rules, cross-border AI systems that transmit user data abroad would face restrictions, whereas foreign models that can operate entirely within Russian infrastructure, such as Qwen or DeepSeek, would remain permissible.


Operationalising restrictions on children’s use of social media 

Restrictions on children’s use of social media are rapidly moving from political debate into policy design.

In Ecuador, the issue is framed in terms of security. A proposed ban on under-15s is linked to concerns that platforms are being used by criminal groups to contact and recruit minors. This shifts the rationale away from well-being and toward crime prevention, positioning social media restrictions as part of a broader security response.

The UK is not yet legislating; it is testing. A government-backed pilot is trialling different forms of restriction—full bans, time limits, and curfews—for six weeks. Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls. The UK’s approach reflects a lack of evidence on effectiveness, despite growing political pressure to act.

At the same time, UK regulators are addressing a core constraint: how to implement age checks without undermining privacy. The Information Commissioner’s Office and Ofcom have issued joint guidance clarifying how age assurance should comply with both the Online Safety Act and data protection law. The key signal is not new obligations, but integration—age verification systems must be designed to deliver safety and privacy simultaneously, rather than treating them as competing requirements.

This implementation challenge is where Brazil is also moving decisively. The National Data Protection Authority (ANPD) has published preliminary guidelines for implementing the Digital ECA. The guidance requires platforms such as social media, gaming, and adult content services to move beyond self-declared age checks and implement more robust verification systems, with penalties for non-compliance reaching up to 50 million reais or 10% of local revenue. Final rules are expected in August 2026, following public consultation.


EU disinformation code signatories publish first reports under DSA

Signatories to the EU Code of Conduct on Disinformation have published new transparency reports describing the measures they say they are taking to reduce the spread of disinformation online. 

Dedicated sections in the reports cover responses to ongoing crises, notably the conflict in Ukraine, as well as measures intended to safeguard the integrity of elections. Data on the implementation of disinformation-related measures is also included, alongside developments in signatories’ policies, tools, and partnerships under the Digital Services Act framework.

The reports are available through the Code’s Transparency Centre and come from a broad group of signatories, including online platforms such as Google, Meta, Microsoft, and TikTok, as well as fact-checkers, research organisations, civil society bodies, and representatives of the advertising industry. 

Why does it matter? These are the first reports submitted since the Code was recognised as a code of conduct under the Digital Services Act in February 2025. The Code now plays a more formal role than under its earlier voluntary setup: by placing it inside the Digital Services Act framework, the Commission and the Board are using voluntary commitments, transparency reporting, and auditing as part of a co-regulatory approach to systemic online risks.


Interim trade deal plausible at the WTO

Signatories to the E-Commerce Agreement, negotiated under the WTO Joint Statement Initiative (JSI), are planning to implement the deal on an interim basis despite continued opposition. At least 70 of the 72 countries that endorsed the agreement are expected to sign a declaration to that effect at the WTO Ministerial Conference (MC14) in Yaoundé, which kicked off yesterday.

The move comes as JSI members seek to advance the agreement despite the lack of consensus among the full WTO membership for its incorporation into the Organization’s Annex 4, a step that would require the support of all WTO members. The interim arrangement would take the form of a legally binding treaty among the signatories, expiring upon formal integration into the WTO framework.

The E-Commerce Agreement, finalised in July 2024, includes provisions on trade facilitation (e-signatures, paperless trade, single window), personal data protection, and a commitment to refrain from imposing customs duties on electronic transmissions. The latter clause would ensure the continuation of duty-free e-commerce among signatories regardless of the outcome of the broader WTO moratorium on customs duties on electronic transmissions.



LOOKING AHEAD

The organisational session of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs will be held on 30-31 March (Monday and Tuesday) in New York, USA. This session marks the start of the substantive work of the Global Mechanism, a new single-track, permanent forum on ICT security under UN auspices. Diplo and the GIP will provide reporting and expert insights from the session—bookmark our dedicated page on the Digital Watch Observatory to stay informed.

Also on Monday and Tuesday, ITU will hold a two-day workshop on ‘Trustable and Interoperable Digital Identities for Human and Agentic AI’ in Geneva. It will bring together stakeholders from governments, industry, academia, and standards bodies to examine technical approaches related to trust frameworks, trust management, security, and interoperability; and to investigate actionable recommendations and consolidated insights to advance standardisation work in the field. The event is open to ITU member states, sector members, associates, academic institutions, and other interested participants at no cost, but registration is required.

The Inter-Parliamentary Union will host a webinar on ‘Building AI Literacy in Parliaments’ on Wednesday, 1 April 2026, to explore how parliaments can develop training and resources to support AI literacy among members, parliamentary staff, and IT teams. The webinar will highlight the IPU Guidelines for AI in parliaments, emphasising that AI literacy should reach all roles within parliaments.


READING CORNER
WTO Ministerial

Digital trade is growing faster than traditional trade, but governance is struggling to keep up. At the WTO, key decisions could redefine global economic power. Carolina von der Weid, James Görgen and Marilia Maciel debate what’s at stake and who decides.

 
Open weight AI and small countries

The AI race isn’t just for the tech giants. Open-weight AI gives smaller countries a valuable new opportunity. With access to flexible and powerful models, these nations can protect their digital independence and adapt technology for their own languages and cultures, argues Slobodan Kovrlija.