Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.
The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.
Several US-based companies adjusted the framing of risks related to hate speech, misinformation and diversity, reflecting political shifts in the United States while maintaining formal alignment with EU law.
Meta softened its enforcement language, reclassified hate speech under broader categories and reduced the visibility of its civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.
Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.
LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.
X largely retained its prior approach, although its report continues to cite cooperation with governments and civil society, a stance that contrasts with the platform’s public positioning.
TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.
European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.
As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.
