Main Topic 4: Transatlantic rift on Freedom of Expression

13 May 2025 14:00h - 15:30h


Session at a glance

Summary

This EuroDIG session examined the growing tensions between European and American approaches to freedom of expression and platform regulation, particularly in light of the Trump administration’s policies and the EU’s Digital Services Act (DSA) enforcement. The discussion featured experts in European law, American law, and content moderation who explored how geopolitical conflicts are increasingly entangling internet governance issues with broader trade and defense disputes.


Berin Szóka argued that the Trump administration is using “jawboning” – regulatory pressure and threats – to force platforms to abandon content moderation, while simultaneously characterizing European regulations as censorship. He warned that Europe lacks the strategic autonomy to resist American pressure, especially when NATO support and trade relationships are at stake. Szóka emphasized that the DSA’s ambiguous language around “systemic risks” makes it vulnerable to misinterpretation and political weaponization.


Judit Bayer defended the European approach, explaining that it stems from a historical understanding of the state’s duty to protect citizens from powerful private actors. She argued that the DSA creates a system of checks and balances through transparency requirements and multi-stakeholder participation, rather than enabling political censorship. Bayer noted that platforms often remove content based on their own terms of service rather than legal requirements.


Nitsan Yasur provided civil society perspective, sharing data from crisis situations showing that platforms struggle with content moderation at scale, particularly for context-dependent harms like disinformation and hate speech. The discussion also addressed the limitations of community-based solutions like Twitter’s Community Notes, which participants noted cannot achieve consensus on divisive issues like election integrity.


The session concluded with calls for Europe to maintain its regulatory standards while seeking alliances with other democracies, emphasizing the need for effective implementation rather than regulatory retreat in the face of American pressure.


Key points

## Major Discussion Points:


– **Transatlantic tensions over freedom of expression**: The discussion centered on the growing rift between US and European approaches to regulating online content, particularly how the Trump administration views European regulations like the Digital Services Act (DSA) as censorship while Europe sees them as necessary protections for democratic discourse.


– **Platform accountability and content moderation challenges**: Speakers debated the effectiveness of different content moderation approaches, including the shift from fact-checking to community notes, the role of trusted flaggers, and how platforms handle harmful content during crises (with real-world data from the Israel-Gaza conflict showing significant delays and inconsistencies).


– **Legal and regulatory frameworks**: Extensive discussion of how the DSA’s risk mitigation requirements work in practice, the ambiguity in certain provisions, and whether European regulations need to be strengthened or clarified to withstand political pressure from the US administration.


– **Geopolitical implications and “jawboning”**: Analysis of how the US administration uses political and economic pressure (tariffs, trade threats) to influence European tech policy enforcement, and whether Europe has sufficient “strategic autonomy” to resist such pressure.


– **Civil society’s role and global perspectives**: Emphasis on the importance of civil society organizations as intermediaries between platforms, governments, and users, plus recognition that regulatory decisions made by the US and EU affect the entire world, including regions with less market power.


## Overall Purpose:


The discussion aimed to examine the deepening divide between American and European approaches to online content regulation, analyze the practical challenges of implementing the Digital Services Act under US political pressure, and explore potential paths forward for Europe to maintain its regulatory principles while managing transatlantic tensions.


## Overall Tone:


The discussion began with a serious, analytical tone as speakers presented their expertise and data. It became increasingly urgent and concerned as participants discussed the real-world implications of US political pressure on European tech regulation. The tone grew more collaborative and solution-oriented during the audience participation phase, with participants offering specific suggestions and corrections. Throughout, there was an underlying tension between idealistic regulatory goals and pragmatic political realities, with speakers expressing both determination to uphold European values and worry about Europe’s ability to resist US pressure effectively.


Speakers

**Speakers from the provided list:**


– **Cristina Herrera** – Session moderator for the fourth main session of EuroDIG on “transatlantic rift on freedom of expression”


– **Berin Szóka** – Runs a think tank called Tech Freedom, American lawyer cross-trained in European law, German citizen, expert in platform liability and freedom of expression


– **Judit Bayer** – Co-author of Freedom of Expression and Media Law, Associate Professor at University of Budapest for Economy, expert in digital European regulation, affiliated with University of Münster


– **Nitsan Yasur** – Disinformation and digital investigation lead at Israeli Internet Association, expert in digital safety and fighting disinformation


– **Karine Caunes** – Executive Director of the Center for AI and Digital Humanisms, expert in AI governance and information manipulation research


– **Brahim Baalla** – [Role/expertise not specified in transcript]

– **Torsten Krause** – Political science and child rights researcher at Digital Opportunities Foundation based in Berlin, Germany


– **Moderator** – Yrjö Länsipuro from Finland, program committee member responsible for drafting session messages


– **Speaker** – [Initial speaker introducing the session – specific identity not provided]

– **Audience** – Multiple audience members who made statements and asked questions


**Additional speakers:**


– **João Pedro Martins** – Remote moderator explaining Zoom session rules


– **Marie Bonner** – From Agence France-Presse and European Fact-Checking Standards Network


– **Daniel** – From Youthdig organization


– **Jorge Cancio** – From the Swiss government


– **David Crouch** – From Internet Society


– **Tim van der Belt** – Dutch Authority for Digital Infrastructure


– **Julie Posetti** – Academic and journalist based in the UK


– **Olivier Cabana-Blanc** – ISOC UK England


Full session report

# EuroDIG Session Report: Transatlantic Rift on Freedom of Expression


## Executive Summary


This EuroDIG session examined tensions between European and American approaches to freedom of expression and platform regulation, featuring a new interactive format with 30 minutes of high-level statements, 45 minutes of pre-registered interventions, and 15 minutes for consensus-building on key messages. The discussion included three multidisciplinary experts: Berin Szóka (Tech Freedom think tank, American living in Europe), Judit Bayer (co-author of Freedom of Expression and Media Law, University of Budapest), and Nitsan Yasur (disinformation lead, Israeli Internet Association).


The session revealed disagreements about regulatory strategy and content moderation approaches, while uncovering consensus on core problems facing democratic discourse online. Speakers agreed that current platform content moderation systems are inadequate during crises, and that Europe needs to maintain its regulatory approach despite external pressure.


## Key Speaker Presentations


### Berin Szóka: American Perspective on Regulatory Pressure


Szóka, who runs the Tech Freedom think tank and holds both American and German citizenship, argued that the Trump administration is using “jawboning” – regulatory pressure and threats – to force platforms to abandon content moderation practices. He warned that platforms are withdrawing from voluntary commitments like the Code of Practice on Disinformation, with X (formerly Twitter) having already left and others potentially following.


Szóka expressed concern about ambiguous language in the Digital Services Act, particularly Article 35’s systemic risk provisions, arguing this creates vulnerability to political weaponization. He suggested that until Europe develops greater strategic autonomy, American political pressure may effectively constrain European regulatory enforcement.


### Judit Bayer: European Regulatory Philosophy


Bayer provided historical context for European governance approaches, explaining that European systems are rooted in people’s sovereignty where the state receives its mandate from citizens and has a duty to protect them against powerful private actors. She argued this creates different expectations about the state’s role compared to American approaches focused on protecting individual rights against state interference.


Defending the DSA’s design, Bayer emphasized that it creates multi-actor participation systems with checks and balances rather than enabling arbitrary censorship. She argued the DSA focuses on procedural guarantees and transparency requirements rather than dictating specific content decisions.


### Nitsan Yasur: Crisis Response Data


Yasur presented empirical evidence from the Israel-Gaza conflict showing significant platform moderation failures. Her organization’s research revealed platforms took an average of more than five days to respond to harmful content reports, with a 70% non-response rate for disinformation on Facebook. The data showed platforms handle graphic content relatively well using automated tools but struggle with hate speech, incitement, and disinformation requiring human moderation and cultural context understanding.


## Audience Interventions and Research Findings


### Karine Caunes: German Election Research


Caunes presented research analyzing over 500,000 tweets about German political parties, revealing astroturfing activities on X where the AfD (Alternative for Germany) gained significant visibility through foreign accounts and bots. She strongly opposed suggestions to revise the DSA, arguing that reopening the legislation would be counterproductive and that Europe has necessary enforcement tools if there’s political will to use them.


Caunes suggested targeted interventions like suspending recommender systems during electoral periods rather than shutting down entire platforms, which could address information manipulation while being harder to characterize as censorship.


### Brahim Baalla: Immigration Policy Connections


Baalla raised concerns about U.S. immigration policy regarding anti-Semitic activity on social media, noting that the U.S. memorandum from February creates potential conflicts between American domestic policy and European regulatory approaches.


### Civil Society and Community Moderation


Multiple interventions addressed community-driven content moderation. Szóka provided critical analysis of Twitter’s Community Notes system, arguing that while it works for factual corrections on non-divisive issues, it fails by design for contested topics. He illustrated this with election integrity: “If the question is who won the last election, you will never get a Community Note on that issue because certain parts of the community deny that the last election was legitimate.”


Torsten Krause highlighted concerns about protecting children as vulnerable users, noting that one-third of global internet users are minors requiring special protection under the Child Rights Convention.


## Areas of Consensus


Despite disagreements on implementation, speakers found consensus on several issues:


### Platform Inadequacy During Crises


All speakers agreed that current platform content moderation systems are fundamentally inadequate, particularly during crises. Yasur’s empirical evidence supported theoretical concerns about the impossibility of moderating content at required speed and scale.


### Civil Society’s Role


Strong agreement emerged on civil society organizations serving as essential intermediaries between platforms, governments, and users while maintaining independence and accountability.


### European Regulatory Persistence


Speakers consistently agreed that Europe should maintain its regulatory approach and defend its principles rather than retreating under external pressure.


## Key Disagreements


### DSA Implementation


Fundamental disagreement emerged between those believing the DSA needs revision to address ambiguities (Szóka) and those arguing that reopening legislation would be counterproductive (Caunes, Bayer).


### Community Moderation Effectiveness


While some advocated for expanding community-driven fact-checking models, experts expressed skepticism about their effectiveness for divisive issues, reflecting broader questions about democratic participation versus institutional expertise.


### Enforcement Capacity


Speakers disagreed about European capacity to enforce regulations against external pressure, with varying levels of optimism about European strategic autonomy.


## Final Messages and Consensus Building


The session concluded with a consensus-building process around key messages. Two draft messages were presented, with participants refining language through discussion. Specific objections were raised to words like “obvious,” “simplified,” and preferences between “American” versus “US” terminology.


The rough consensus process highlighted the challenge of finding common ground even among participants who agreed on fundamental principles, reflecting broader difficulties in transatlantic digital policy coordination.


## Technical and Regulatory Details


The discussion addressed specific DSA provisions including Articles 34, 35, and 52, distinguishing between Code of Practice and Code of Conduct approaches. Participants examined the difference between trusted flagger systems and broader risk mitigation approaches under the DSA framework.


The concept of “jawboning” was introduced as a framework for understanding how informal pressure can undermine formal legal frameworks, with implications beyond current US-EU tensions.


## Conclusion


This EuroDIG session demonstrated both the depth of transatlantic tensions on digital regulation and the possibility of finding common ground on fundamental challenges. While speakers disagreed on specific solutions, they shared concerns about platform inadequacy, the importance of democratic discourse protection, and the need for sophisticated regulatory approaches.


The session’s interactive format successfully facilitated substantive discussion across different perspectives, though the consensus-building process revealed the ongoing challenges in developing shared approaches to digital governance in an era of geopolitical tension.


The discussion concluded with an invitation to continue conversations at the social evening at Le Tigre, emphasizing the importance of informal dialogue in building understanding across different regulatory traditions and national perspectives.


Session transcript

Speaker: But before we start the next session, I would like to invite you to join us this evening for our social evening. We’re going this evening to Le Tigre. We’re meeting at 6.30 there and the first drink is on us. So I hope that you come and join us. You can pick up your voucher at the door. But before that, we are going to head into main session four, the transatlantic rift on freedom of expression. And I would like to invite the remote moderator, João Pedro Martins, to explain the session rules. Thank you very much. Hi, everyone. By now, you should be tired of hearing me with the Zoom session rules, but I’ll go once again. For those joining for this session through Zoom and online, please raise your hand when you want to take the floor. And as soon as the moderator opens for interventions, I will flag your presence and enable your microphone. For those who are joining Zoom also from the hemicycle, please, when you do so, enter the Zoom meeting muted and with your speakers disabled. And now I give the floor to the moderator of the session.


Cristina Herrera: Welcome to the fourth main session of EuroDIG, transatlantic rift on freedom of expression. As you have probably noticed from previous sessions, this year, EuroDIG has changed the format of the session to make it more interactive. As such, the session will be divided in three parts, 30 minutes for high-level statements, followed by 45 minutes of statements of people that have pre-registered to give their statements. At this point, we will allow other interventions and questions, and 15 or 10 minutes in the end to agree on quick messages. The purpose of this format is to encourage audience participation and to enrich the conversation with inputs from diverse stakeholders. This is your session, and we encourage you to think of questions and remarks you would like to make. I am joined today by three multidisciplinary experts in European law, American law, content moderation, and Internet governance. Berin, Nitsan, and Judit, who is joining us remotely. I will let them introduce themselves when they make their statements. First, to set the scene. For a long time, Europe and the United States have had different approaches to how they interpret freedom of expression and its limitations, as does the rest of the world. However, in the past few years and even months, we have seen these tensions intensify, especially in online environments. Next slide, please. The U.S. President signed a memorandum in February where he promised to defend American companies from what he perceives as overseas extortion. This includes considerations of tariffs to respond to fines and digital service taxes. Trump also ordered agencies to cease any contracts with companies that facilitate censorship. On this side of the Atlantic, the EU has started enforcing the Digital Services Act, or DSA, requiring companies to have safeguards in place to remove content that is illegal based on national and international law.
We have started to see examples of investigations against American companies, including the release of preliminary findings against X for breaching the DSA. With geopolitical tensions intensifying, questions arise regarding how the U.S. might pressure the EU as far as digital policies are concerned, as well as how the EU in general, and the European Commission in particular, will respond. In this session, we will delve more deeply into the roots of the different approaches, and we will try to find a way forward. Now we’re going to start the 30 minutes of high-level statements. Berin, I will start with you. If you can tell us more about how the Trump administration is influencing the understanding of platform liability and freedom of expression in the United States, and what legal and rhetorical tools is the U.S. administration using to address what they perceive as European censorship?


Berin Szóka: Thank you. I’m Berin Szóka. I run a think tank called Tech Freedom. I have been based in the U.S., but now live in Europe. So in February, U.S. Vice President J.D. Vance accused Europe of retreating from some of its most fundamental values. Last year, he suggested that America’s commitment to defend its NATO allies would depend on whether they, quote, share American values, especially about some very basic things like free speech. But it is Trump and his administration who have betrayed American values. The First Amendment to the U.S. Constitution says Congress shall make no law abridging the freedom of speech or of the press. Yet the Trump administration is now trying to shut down broadcasters and suing newspapers and pollsters. Viktor Orbán must be very proud. Trump claims to be protecting free speech. But what his administration really means is that private media must carry lies about who won the 2020 presidential election, conspiracy theories about vaccines, and the most hateful, toxic speech imaginable. Yes, the First Amendment means freedom for the thought we hate. In America, neo-Nazis do have a constitutional right to march in public, but they’ve never had the right to force private media to carry their venom. President Ronald Reagan once summarized what Republicans used to think. The framers of our First Amendment aimed, he said, to promote vigorous public debate and a diversity of viewpoints in the public forum as a whole, but not in any particular medium, let alone in any particular journalistic outlet. In other words, the First Amendment protects the marketplace of ideas against manipulation by the state. But it doesn’t require that marketplace to be a Hobbesian war of all against all. The Constitution is not, as Justice Robert Jackson said in 1949, a suicide pact. There have always been gatekeepers making editorial judgments about truth and decency.
These judgments, too, are a vital form of free speech, perhaps the one most under attack by Trump. Nearly 30 years ago, the Supreme Court said the First Amendment fully protects the Internet. Last year, it reiterated that website operators have the same constitutional right to make editorial judgments as newspaper publishers. But in 2021, when tech companies exercised that right and banned Trump for inciting the January 6th insurrection, he started to make stopping big tech censorship central to MAGA politics. And this is now the top priority of Trump’s tech regulators. There’s a word for what the Trump administration is doing. Jawboning. Jawboning means using pressure, browbeating, and regulatory extortion to achieve results that regulators don’t have the legal authority to require directly. and it’s working. To appease Trump’s rage, major tech companies have abandoned fact-checking. Meta now allows denigration of immigrants, women, and sexual minorities, for example, the kind of absurd claims that Trump and Vance made last year about Haitian immigrants supposedly eating dogs and cats. Such claims resulted in bomb threats. This is exactly the kind of violence that could explode in the U.S. at any time. But with tech companies retreating on content moderation, MAGA needs a new villain, so it’s fixated on Europe and on the United Kingdom. J.D. Vance, in his speech last year, offered a litany of examples of restrictions on speech. Many of these, if not all of them, actually probably violate Article 10 of the European Convention on Human Rights. However legitimate their aim, they are hard to justify as proportionate. Should it really be a crime to pray silently within 50 meters of an abortion clinic? Well, it is in Bournemouth, England. Vance could have argued that Europe hasn’t lived up to its own values, that the European Court of Human Rights here in Strasbourg and the European Court of Justice in Luxembourg should do more to protect Europeans’ fundamental rights. 
The Strasbourg Court, in particular, must decide cases much faster. Both courts should give less deference on speech restrictions and apply more skepticism to laws that are not content-neutral and viewpoint-neutral. The US government, if it were serious about free speech, could file briefs here with the Strasbourg Court to defend free speech. But of course, that isn’t really the point. This isn’t really about legal doctrine. Trump and Vance are just using the term free speech as a rhetorical weapon. Vance accused Europe’s so-called old entrenched interests of, he said, hiding behind ugly Soviet-era words like misinformation and disinformation to censor those with an alternative viewpoint. He made very clear who he was talking about, the kinds of voices that were excluded from the Munich Security Forum for parroting Kremlin propaganda. So how should Europe respond to these threats? Well, consider Romania. Its constitutional court may have been right to annul last year’s elections. Campaign laws should be enforced. But look what happened. The far right nearly doubled its share of the vote in the election redo. Regulating speech or its impacts on elections may actually fuel populist rage. J.D. Vance could have invoked many such examples, but he picked one. He picked Thierry Breton, who was commissioner responsible for the Digital Services Act. And in 2023, Breton threatened to shut down social media during unrest for failing to remove hateful content. Vance didn’t mention the 67 civil society groups, nearly all European, who condemned Breton’s comments, warning that they could, quote, reinforce the weaponization of internet shutdowns and legitimize arbitrary blocking of online platforms by governments around the world. Such principled defense of free speech is what Europe needs more of, its European values at their best. 
Last year, Breton threatened Elon Musk with action under the Digital Services Act merely for hosting a conversation with candidate Trump, because Trump might incite violence, hate, or racism. But this time, only a handful of civil society groups spoke out, including my own. Failing to defend freedom of speech, even when it’s Donald Trump and Elon Musk speaking, isn’t just hypocritical. It proves J.D. Vance right, and it costs Europe our most precious asset, our moral authority. There’s a legal problem here as well. The Digital Services Act is ambiguous enough that Breton thought he could wield the law against content he didn’t like. Article 35 requires the largest platforms to mitigate systemic risks that are only loosely defined, risks to civic discourse and electoral processes. These are essentially the same concerns that Trump himself has invoked in trying to force social media to carry his lies about election fraud. Professor Martin Husovec argues that Article 35 doesn’t give regulators the power to dictate content-specific rules because the DSA doesn’t say so expressly, and Article 52 of the European Charter of Fundamental Rights requires that limitations on rights must be provided for by law. He’s probably right that the European Court of Justice would say so, eventually, but even he concedes that the answer is, quote, far from clear. In the new global culture war, this isn’t good enough. As President Reagan said, if you’re explaining, you’re losing. The DSA, the AI Act, and Europe’s other platform laws may be new, but they are based on assumptions of a slower, better era, where it was good enough for the courts to work out such questions eventually. But in an era when policy is made as much by tweet as by legislation, what matters is threats and political pressure, jawboning.
Internet platforms, writes Professor Derek Bambauer, face structural incentives to knuckle under government jawboning over content, which makes them unusually vulnerable to government pressures, both formal and informal. So increasingly, when it comes to online content, to paraphrase what Andy Warhol once said about art, law is what you can get away with. Thierry Breton may not have gotten away with very much. He soon quit in a huff before he could be fired, but he gave MAGA all the ammunition that it needed to characterize the DSA, however unfairly, as a censorship regime and the commission as the new ministry of truth, and he may have set a dangerous precedent. So Europe should rethink its tech regulations by asking two questions. First, how can we guard against the law being mischaracterized? And second, how can we avoid it being abused? Breton might have proved more fool than villain, but consider how the DSA might be weaponized by a commissioner in the future sympathetic to Elon Musk. So I’ve been speaking to you as an American lawyer only recently cross-trained in European law, but I’m also a German citizen, and I see my own future here in Europe and the future of freedom in general, depending on the European Union. I can’t say that I’m optimistic. Europe lacks three things. The first is realism. The United States hasn’t just been an unreliable ally on tech and many other issues. The United States is increasingly an adversary to Europe and to liberal democracies around the world. Last week, J.D. Vance struck a more conciliatory note when he told the Munich Leaders Forum in Washington that the U.S. and Europe were on the same civilizational team, as he put it, but he also conspicuously avoided talking about speech. Maya Angelou wouldn’t be fooled. The poet said, when someone shows you who they are, believe them the first time.


Berin Szóka: Europeans aren’t cynical enough when it comes to Trump. When you read that his administration is talking about breaking up big tech, don’t kid yourselves. This is just more jawboning, another way of asserting control. When you read that bipartisan legislation would protect kids online, don’t assume that Democrats have done enough to include safeguards against abuse by Trump. Trust me, our Congress is much too broken for such competent drafting. And if you see concepts popping up in U.S. law that resemble the DSA, like requiring non-arbitrary or non-discriminatory content moderation, understand that those concepts are being weaponized by MAGA to break content moderation. Moreover, don’t assume that European regulators will resist American jawboning if American support for NATO and Ukraine are at stake. And they are. Or that tariffs are at stake. And they are. Europe may have fine principles, but it lacks strategic autonomy. As Stalin supposedly quipped, how many divisions does the Pope have? Until Europe can stand up for itself militarily, the Trump administration may effectively have veto power over the enforcement of the DSA, of other European tech laws, and even of member state laws regarding free speech, and it will use that power to protect Elon Musk and his allies.
So finally, until Europe can produce tech services that Europeans want to use, the commission will always play a weak hand. It will have to try to graft European values onto American creations. We cannot simply regulate our way out of Europe’s failure to innovate. Europe, in short, has much to change. Unless it does, it may find, after a decade of the Brussels effect, that the next decade will be that of the Trump effect, and Trump will reshape the internet into a far darker place than even today’s deepest pessimists fear. Thank you.

Cristina Herrera: Thank you, Berin. Over to you, Judit, online. How does the European approach differ from the American, and does the role of the European Commission as the enforcement body of the DSA for very large online platforms increase the risk of American influence, and how should Europe respond?


Judit Bayer: Thank you very much. I’m very much honored to be here. I’m a co-author of Freedom of Expression and Media Law, and have expertise in digital European regulation. I’m an associate professor of the University of Budapest for Economy, but I’ve spent my last five years in Germany researching this field, affiliated with the University of Münster, the Institute for Information, Telecommunication, and Media, and the Center for Advanced Internet Studies. May I ask for a bit of feedback if I am audible well? Yes, you are. You can continue. Thank you. So, there is a European ideal for governance which is rooted in the principle of people’s sovereignty, where the state receives its mandate from the people, and it’s the state’s duty to stand up for the interests of the people. against other private representatives of power. The state’s obligation to protect human rights extends to the protection from other fellow human beings or from companies. Perhaps this roots in feudalism which didn’t exist in the U.S. The time when the king provided freedom to the cities, freedom from the landlords, this was the cradle of urban liberalism and of citizenship rights. This historical perspective might explain why the state is obliged not only to refrain from interfering with citizens’ rights but also to protect citizens’ rights against another private actor, especially against another great power. That’s why Europe has a stronger protection of laborers’ rights, of working mothers, a better health care regulation, food safety and so forth. And this pattern is reflected in the EU regulation of big tech. For the same reason, the European interpretation of freedom of expression has a more systemic view than the American. Freedom of expression is seen more as a political right which enables people to participate in the democratic decision-making process. 
When this goal is threatened by an actor, through speech or suppression of speech, then the state might intervene in extreme cases to protect the systemic function of freedom of expression. In legal language, this means that most EU member states, and the EU itself, have a positive obligation to secure the framework conditions for a plural and free information system. And technically, platforms are not speakers themselves. They aggregate and reorganize information and disinformation. They promote some speech and suppress other speech without explanation. They exploit vulnerabilities through behavioral targeting, and they contribute to polarization with their opaque algorithms. Contrary to how they try to position themselves, they are not speakers, nor are they neutral mediators of speech; they govern who will access what kind of speech. And this opinion power cannot remain unregulated in the European logic. But still, the DSA doesn’t order platforms to remove certain content. Not more than the DMCA, the Digital Millennium Copyright Act of the United States, does, or not more than the E-commerce Directive has been doing since 2000. What’s more, it protects users’ rights against platform privileges by introducing procedural guarantees that platforms must provide, such as giving an explanation when they remove content, being transparent about their terms of service, keeping to their terms of service, making a dispute resolution mechanism available, and so forth. So, in fact, Facebook is often more restrictive in removing content than EU law requires: in most cases of hate speech removal, according to the record, they relied on their own terms of service and not on any law. Their speech standards are significantly stricter than European standards.
What they actually object to is that they should provide information to those speakers whose content they removed, respond to counter-notices, and provide information about their removal practices in the monitoring procedure. Additionally, several European nations have passed laws, often criminal, to prohibit disinformation, at least under circumstances such as elections, and the DSA also aimed to reduce such fragmentation in the European market. The obligation to reduce systemic risk leaves a huge range of discretion for platform companies to provide a fair and safe environment. It demands that they prevent their platform from being misused and manipulated. Why do I think that the DSA is so important? The DSA is not a tool for political censorship. The DSA, and related laws regulating the digital communication environment, use a technique similar to risk hedging. It’s a co-regulation, largely built on the cooperation of the tech companies, but also of auditors, civil society actors who can act as trusted flaggers, national regulatory authorities, researchers, fact-checkers; I probably couldn’t even give an exhaustive list. This multi-actor participation creates a complex web of collaboration. Yes, each of them is a potential point of failure, but overall, this risk is distributed. It has created a system based on mutual distrust, a certain system of checks and balances. The actors mutually supervise and control each other, and the biggest part of the regulation is simply the requirement of transparency. Therefore, I sometimes like to say, ironically, that the DSA is a big research project, because the transparency requirements are so dominant. This huge amount of information is processed by experts, auditors, and monitoring organizations, not by political bodies.
In sum, while the sword may be in the hands of the Commission, the Commission is not in a position to take an arbitrary decision, because it’s part of a large system. And sanctions can be imposed only for the violation of the law; in all cases of risk management obligations, never for individual pieces of speech, nor even for inefficient self-regulation. It’s mainly only a reckless disregard of the risk management obligation that would establish grounds for a fine. And whether the fact that the DSA is enforced by the European Commission for very large online platforms and search engines increases the risk of American influence? I think quite the contrary. First of all, the EU currently doesn’t have a central independent regulatory authority with an executive power comparable to that of the Commission, which is the executive body of the EU. The national regulatory authorities are unlikely to exert sufficient pressure to enforce the law individually, as shown by the example of the data protection authorities. And there is a Digital Services Board, which combines all national regulatory authorities in the digital sphere and which is an advisory body to the Commission. There is also an AI Board, similarly providing representation of all member states. So if you are raising the issue that the EU needs a central, supranational, EU-level independent supervisory body for platform and AI regulation, that sounds to me like a great idea, but the EU currently has a more centralized structure, and that also has its advantages. Meanwhile, the big tech companies are becoming political by allying with the US president and leveraging his political power to resist European regulation. It is a fact that these companies possess and exercise a certain form of functional sovereignty, similarly to lords in the feudal age.
This is sometimes called digital feudalism, sometimes political capitalism, but the fact is that state power is de facto shared with these companies. And just as media used to be called the fourth branch of power, the power to form opinions lies today much more with these companies. But this power encompasses even more than that, because they possess enormous dynamic databases and technological power, which can be converted into industrial and military power. And to the further point which was raised here: yes, the EU relied too much on the US in past decades and didn’t develop strategic autonomy. The wake-up call is rather harsh and requires urgent action. But what does the EU have, what are its assets? It still has prime markets, which are desirable for the big tech companies and also for other companies across the globe, for example, Chinese ones. It can make new alliances with the states of the global majority. And it still has regulatory competence and high potential and expertise on regulation, which may still be an export product, with minor amendments, perhaps. Change is necessary in several respects within the EU. However, there is no reason for the EU to retreat from its regulatory standards. There is also no reason for Big Tech to panic, however, because the DSA is built on the tools of dialogue, cooperation, and transparency. In my view, this effort should be maintained. The power that Big Tech holds, the data, the technology, and the opinion power, could be lethal to any society if it’s combined with a populistic, extremist, authoritarian, or merely reckless political power. Let’s assume for a minute that the U.S. had a president who has no moral considerations in achieving his power ambitions.
In cooperation with Facebook and Twitter, which government in the world could it not overthrow? Where could it not incite a coup, a civil war, or even a genocide, as happened in Myanmar? And some African countries might line up. So regulation is, in my view, absolutely necessary, and the EU must carry on this project in the interest of the public within the EU and beyond. Thank you.


Cristina Herrera : Thank you, Judit. And now for our last high-level statement. Nitsan, can you tell us, from a civil society perspective, how the EU and the U.S. regulatory models influence the well-being and safety of online users?


Nitsan Yasur: Hello everyone, and thank you for the opportunity to be here and share some of our data and field experience. My name is Nitsan Yasur. I’m a disinformation and digital investigation lead at the Israeli Internet Association, an independent non-profit civil society organization. We operate internet infrastructure and domain registry services and focus on digital safety, fighting disinformation, bridging the digital divide, research, and internet policy. As the other speakers mentioned, we are here today to discuss the tension between two regulatory worlds: the European approach and the American one. From our perspective, both approaches fundamentally impact how platforms moderate content daily. I’m speaking from an outside perspective, from a non-EU and non-US-based organization, but one that’s directly affected by both EU and US approaches. And I’m not a lawyer, so I’ll try to answer this question by walking you through our civil society experience and real-world data from the ground, from a more accountability-oriented point of view, showing you what this tension looks like when it’s translated into practice. On the liability aspects, I would also recommend the ISOC policy framework for internet intermediaries and content, which offers a liability-focused perspective and examples. At ISOC-IL, we run an internet safety hotline that is recognized as a trusted flagger by all major platforms, not just social networks, but also URL shorteners, hosting providers, dating apps, and even adult content sites. We don’t see ourselves as a help desk for the platforms. We see the trusted flagger role as representing the public interest in front of the platforms, not the other way around. When we report harmful content, we can deal with multiple layers at once: hosting, link shortening, DNS, communication, pieces of content, and more. Each layer brings its own set of challenges.
That’s why we believe it’s more effective to focus on specific intermediary functions than on one broad category. The hotline receives reports from the public, helps them navigate the platforms’ reporting systems, and escalates violations that fall clearly under a platform’s community standards or terms of service. But more than that, we are often the first and only human point of contact for users who face serious harm online and get no response from the platform or from the state. This gives us a unique point of view. We see where users are hurting, recognize emerging online harm trends, and can see how well platforms actually respond, moderate, and handle harmful content. The war that broke out on October 7th shocked the region to its core, and it was designed to create viral media impact in addition to the real-world harm, broadcast in real time through social media platforms. The result was a massive flood of harmful content online, including graphic violence, terror content, incitement, disinformation, and more. Platforms were quick to issue statements like, we removed hundreds of thousands of posts and we opened war rooms, etc. During that time, usage of social media spiked by 35% and the reports to our hotline more than doubled. Trusted flagger experience from other conflicts, such as Russia-Ukraine and crises in parts of Africa, has shown consistent patterns: platforms simply lack the capacity to moderate content at the speed and scale demanded during times of crisis. I would like to share some of the findings from the research we conducted on the recent Israel-Gaza war, which I believe can help inform and shape the broader discussion around these challenges. We analyzed how the platforms handled the reports we submitted during the first months of the war, after carefully filtering and categorizing them according to each platform’s relevant policies. Here are the key findings.
On average, it took the platforms more than five days to respond to our reports, and we could see differences between the platforms’ results. We also noticed a clear trend: we got no response during weekends; Saturday and Sunday were just silence. Next, we looked at the nature and quality of the responses we received. The platforms were generally quick to remove graphic content, terrorism-related material, or sexual abuse: harmful content that can be efficiently handled by automated tools. Hate speech, incitement, and disinformation, however, required more skilled human moderation and an understanding of language, context, and local culture, and were handled much less consistently and properly. When we looked at disinformation alone, excluding cases that also involved graphic content and incitement, we found that Facebook had the worst performance: they failed to respond to more than 70% of the disinformation reports we submitted. From ISOC-IL’s direct experience, the day-to-day reality of content moderation requires approaches that recognize the complex, multifunctional nature of platforms and harms. Our data shows that platforms cannot be treated as a single, uniform entity. Each one has its own vulnerabilities and its own way of responding to harm. This is important to remember when designing solutions and adapting policies: one size can’t fit all. Disinformation has emerged as a major public concern, something people increasingly recognize as harm and report to our hotline. While it’s likely here to stay, it’s far from the only form of harm; disinformation is just one among many threats users face. And when we talk about platforms’ accountability for user safety and privacy, the conversation must extend far beyond that one issue and protect the public and the user holistically. Looking at the broader power triangle between platforms, government, and users, we see different models in the U.S.
and Europe when it comes to defining the relationship between the state and the platforms. Most regulatory approaches still overemphasize platform-controlled moderation, which our data shows is failing during crises. I believe there is real value in involving citizens and users, shaping knowledge, and holding platforms accountable. However, I remain deeply concerned about the vulnerabilities of a public-only approach. We see in our experience that the platforms still haven’t solved the old problems of inauthentic manipulation, so what protects community-driven tools from being exploited in the same way? I want to suggest that the way to include the public in the loop is to give civil society, such as us, a meaningful role in this dynamic. The public’s trust in civil society, its ability to remain independent from government, to be critical of platforms while also advocating for user protection and safety, makes it uniquely positioned to help bridge the gap. Civil society can already support platforms in understanding local contexts, needs, and sensitivities. It’s a critical balancing force in the evolving landscape, and it should be engaged through a multi-stakeholder approach. Finally, while today’s conversations center on the US-EU axis, we must not forget that a large part of the world falls outside those spheres: places and regions considered by the platforms as small markets, with less widely spoken languages and different governance systems. The discussions we hold here have a global effect. The US and EU must ensure that the rest of the world is not left neglected in the digital shadows. Thank you very much, and I’m looking forward to this rich discussion now.


Cristina Herrera : Thank you. Thank you very much. It was very interesting to see how public…


Karine Caunes: Thank you for having this open discussion. My name is Karine Caunes and I’m the Executive Director of the Center for AI and Digital Humanism, which aims to ensure a humanistic governance of AI in Europe and beyond, and we have participated in many negotiations at the Council of Europe, EU, Organization of American States, and UNESCO level. I would like to address this topic by relying on one of the studies we did regarding information manipulation on social media, more precisely the one on the German elections and astroturfing on X. On X, or Twitter, we analyzed all tweets and retweets mentioning one of the main German political parties in the first part of January, in order to avoid any lawsuits, and we analyzed more than 500,000 tweets. What we saw is that, through the support of foreign accounts, the AfD party gained an overwhelming visibility advantage in Germany compared to the other German political parties, and this was supported by the use of bots and the creation of fake accounts. Bots do not have freedom of expression, and through them and the increased visibility they get thanks to X’s recommender system, it is the freedom of expression, thought, and information of German citizens which is under threat, and so is democracy. What we have observed is a continuous artificial amplification of certain content and a reverse censorship of all other content. So yes, we actually agree with the U.S. that freedom of expression should be safeguarded, and it is exactly what the EU DSA allows the EU and its member states to do through its risk management system. As for the EU AI Act, the recommender system playing such a part may constitute a prohibited practice under that Act, and under the DSA we can also request social media to suspend, not the whole platform, but the recommender system, during the electoral period.
So we have all the tools which would allow us to fight against information manipulation while preserving freedom of expression, and this is the reason why we actually agree with the U.S. that we should preserve freedom of expression; if there are risks, they come from the bots that we find on social media. And for those interested, we also did studies regarding information manipulation in Romania and regarding TikTok. Thank you, I’m sorry, I have a very limited amount of time. If there are questions, I’m very happy to give further information. So if the U.S. respects fundamental values and democratic values, I think there is absolutely no rift.


Cristina Herrera : Thank you. Here, Simona? No, okay. Georgie? Do we have anyone more on the line? Brahim?


Brahim Baalla: Good morning, everyone. Thank you for this opportunity to intervene in this very interesting panel. My statement is the following. According to a note released on 9 April 2025 by the U.S. Department of Homeland Security, quote: Today U.S. Citizenship and Immigration Services, USCIS, will begin considering aliens’ anti-Semitic activity on social media and physical harassment of Jewish individuals as grounds for denying an immigration benefit request. This will immediately affect aliens applying for lawful permanent resident status, students, and aliens affiliated with educational institutions linked to anti-Semitic activity, unquote. Given that every act of anti-Semitism and racism must be condemned in the clearest and strongest way possible, and I applaud the involvement of civil society in content moderation highlighted by the first interventions, there might be issues related to the consequences of these new policies for the rights and legal conditions of the thousands of European students who choose every year to pursue their studies in the U.S. Many of these students make that choice, in fact, partly because of the academic freedom and freedom of expression which have represented an important part of the history of the country. In fact, whilst the aim of such a policy is understandable and widely shared, issues might arise with respect to what might be interpreted as a violation of its terms, based on how the policy is written. The given definition, in fact, is not specific enough to be predictable for the thousands of European citizens potentially affected by this new policy. It would then be appropriate for the diplomatic bodies of national governments and the European institutions to request further clarification on the matter.


Cristina Herrera : Thank you. Thank you. Just in time. Is anyone else that had registered before in the room that hasn’t spoken yet? Oh, 97. Thanks.


Torsten Krause: Hello, I’m Torsten Krause. I’m working as a political science and child rights researcher at the Digital Opportunities Foundation, based in Berlin, Germany. I would like to draw your attention to one-third of Internet users globally, who are minors: children, recognized as a vulnerable group with special human rights laid down in the Convention on the Rights of the Child and specified, with regard to the digital environment, in General Comment No. 25. When the UN established General Comment 25, it ran a consultation process in which around a thousand children from all over the world were involved, and one finding was that there was a strong need for, interest in, and trust in trustworthy content. My question concerns the shift in content moderation from fact-checking to community notes: whether we can keep this trustworthy content, trusted flaggers, and other fact-checking resources with the DSA, or whether, as maybe Judit Bayer would assume, the EU is not strong enough to keep these resources in the services for the European Union. And I would also like to ask Nitsan Yasur, with regard to your comment that community involvement is both an opportunity and a risk: is it a wrong assumption by me that community notes are a worthless solution with regard to fact-checking, or the other way around? How do community notes have to work to be a good solution in content moderation? Thanks.


Audience: Thank you. Do you want to start? Is it working? No, 82. Okay, now it’s working. The question was in which way community notes can work. As I mentioned, community notes, like other places on platforms, can be manipulated by, let’s say, fake users or coordinated inauthentic behavior. We still have this problem, and I can’t see why community notes would not have the same problems we still have, you know, in other sections of the platforms. What I tried to suggest is to involve the community not as general individuals but in groups, let’s say trusted flaggers and other entities which are from the community, for the community, but still have some kind of prestige or ability to be accountable and responsible in that sense. Thank you. Then perhaps for the first part of the question, if we want to go to Judit online, or if you would like to answer: whether the EU can act to keep trusted flaggers and other mechanisms.


Judit Bayer: I’m happy to answer. Am I on? Yes, you are. Yeah, thank you. So, first of all, I didn’t want to make that assumption; I just said that the national regulatory authorities individually don’t have that power, but I think that the EU’s centralized structure, with the Commission and the help of the board, has the potential. And importantly, this risk mitigation system means that it’s up to the platforms to decide how they mitigate the risk. They have to explain and show evidence that they have reduced or eliminated, I don’t know, harmful material for minors, or hate speech, or whatever it is. And if it’s community notes, then it can be community notes. So it’s open to discussion, and I think this discussion between the monitoring bodies, the Commission, perhaps auditors, and the platforms will continue, to see how effective the community notes are. I think the idea is good. I’ve seen scholarly descriptions of how this might work. I don’t know if this is how it currently works on Twitter or Facebook. So it has to be elaborated, obviously. It comes down to the practical solution and the evidence that shows what works and how well it works. Thank you.


Audience: Does anyone else want to make a statement or ask a question to the panelists? One, four, six. Yes, thank you. Marie Bonner, I’m from Agence France-Presse and also from the European Fact-Checking Standards Network. I had a question, maybe more for Judit Bayer, about DSA implementation. We have been participating in the conversations on the Code of Practice on Disinformation and followed the transformation of the Code of Practice into a Code of Conduct within the DSA. Does that change anything about how the platforms have to explain what they do in terms of risk mitigation for disinformation, for example? What’s the role of the Code of Conduct within the legislation?


Judit Bayer: The Code of Conduct can serve as a guideline from which the platforms can voluntarily pick which measures they want to take and commit to; basically, they can put together their own self-regulation and make a commitment that they are going to comply with it. Then, in the monitoring and auditing procedure, what will be examined is whether they fulfill their own commitment to that set of measures from the Code of Conduct and how well they comply with those measures. So a problem emerges if they don’t take enough measures, if they don’t commit, like Twitter, which didn’t commit at all to the Code of Conduct or the Code of Practice, or if the commitments are insufficient; then it becomes difficult to argue, on either side, whether they fulfilled the risk mitigation obligations. But if they have sufficient commitments and they can show that they fulfilled those commitments, then, basically, it’s an effort-based obligation, not a result-based one. Well, a little bit of both, because it also has to be effective. But compliance with the Code of Conduct is a sign, a probability, that the platform has done all it could, a best effort, to comply with the risk mitigation obligation. I hope I answered the question.


Cristina Herrera : Thank you, Anne. Very well. This one? This one?


Berin Szóka: Yeah, I mean, look, this is the whole ballgame, right? If American companies, especially under pressure from the Trump administration, back out of their commitment to follow those guidelines, which is exactly what’s happened, it puts the Commission in a really difficult position, right? As I said, the DSA is written to be content-neutral. It does not include any specific authority to change what is lawful, right? It simply describes the process by which unlawful content gets removed and the process by which the terms of service of the platforms, especially the very large platforms, are written and enforced. And the critical provision there, when it comes to actually enforcing the risk mitigation provisions, is that when the Commission brings an enforcement action, before it can actually issue any findings or conclusions, it has to suggest what the platform did wrong and what it should be doing, right? So consider the situation the Commission finds itself in right now with respect to X. The Commission brought an enforcement action that covered multiple failures by X to comply with the Digital Services Act. Some of those were very easy, like selling blue checkmarks, right? That’s a very simple case. The Commission has acted already on some of those counts. It has not acted on the harder ones, specifically fact-checking. So the Commission could say that Community Notes, the way it’s designed, can’t be an adequate risk mitigation measure for certain classes of systemic risk, because by definition the way Community Notes works requires consensus across the community, and you will, by definition, never get consensus about the most important categories of risk, like lies about elections. The Commission could say that, but what exactly is it that the Commission expects X to do in that circumstance? It cannot require fact-checking as such, right?
What the risk mitigation provisions require is, first of all, that you assess the risk under Article 34, and then, under Article 35, that you define some measure that you, the platform, are proposing to mitigate that risk. And it doesn’t have to be fact-checking. It could be anything, right? The DSA, in that sense, is intended to be technology-neutral, but it essentially assumes that these platforms are operating in good faith and will make some effort to propose some mechanism. Maybe it’s slowing the spread of content or architectural changes. And, you know, if the companies won’t do that, the Commission may find itself in a position where it just doesn’t know what to do. So it’s kind of stuck, and meanwhile the Commission is facing political pressure not to enforce the Act at all. The result, and this is why jawboning can work, may be that we just never see any action on that aspect of the enforcement action. And if the Commission won’t take any action, and the companies won’t sign on to codes of conduct, then the DSA is a dead letter on that issue. I mean, that’s the point. That’s why jawboning can work here. When I say that the Commission may lack the strategic autonomy to actually enforce the DSA, this is exactly what I’m talking about. There is no easy remedy for that problem.


Cristina Herrera : Thank you. Any reactions from the audience? Karine, I’m unmuting you, so you’ll have the floor.


Karine Caunes: Yes, thank you. Just very quickly, taking all the points together. So indeed, with regard to community notes and the bias that we have seen, I think it’s very important that the DSA can address this. You can look at the reports from Viginum, the French agency: they reported bias and issues precisely around information manipulation. Obviously, we have the system of trusted flaggers under the DSA; the problem is that you would have, say, 50 reports, and this is not enough. That’s the reason why the risk mitigation system, Articles 34 and 35 of the DSA, is very interesting, because it means that we can make reports based on millions of tweets, on millions of TikTok contents. And this is basically what we at DigiHumanism are doing. And there is a difference here. If you go through trusted flaggers, the competent authority is a national authority. If you go through risk mitigation, the European Commission is directly responsible. And it’s not just up to social media. This is really based on the evidence we can bring to the Commission to prove that there were systemic risks to political discourse, to freedom of expression, to freedom of thought, and so on. So if we have hard evidence, the Commission might be able to act. However, currently, I believe that they are waiting. Why? Because they have a trade war going on and there is a 90-day suspension of the tariffs, and they are waiting to see what will happen with the U.S. But I do believe that ultimately the DSA will be applied, and maybe we will first go to enforcement with regard to TikTok, since the U.S. is pressuring us to do so, but then we will come back to U.S. social media, don’t worry. We’re all working on it.


Berin Szóka: There is a report on measures that are being used to, quote, coerce American companies to moderate content; it has been presented to the White House. They haven’t taken action on it yet. In other words, whatever is happening right now on trade is separate from what the administration will do on that particular point. They will continue to use free speech and so-called censorship as a justification for tariffs as a tool to coerce the European Union. The Union is not powerless. It does have some mechanisms that it could use. We haven’t talked about this yet, but you may all be aware that the anti-coercion instrument was drafted not with the U.S. in mind, but with other, more traditionally authoritarian governments in mind. And it could be used, for example, to suspend the enforcement of intellectual property for Elon Musk and his companies in Europe. So that’s where we’re heading: that kind of pressure being brought to bear on Europe, and Europe trying to respond with measures like that.


Cristina Herrera: Thank you. Very interesting remarks on the geopolitics at play. Does anyone from the audience have any remarks?


Audience: I’m Daniel. I’m from YOUthDIG, and I believe that the community-based fact-checking model, like Community Notes on X, should be expanded across all major social media platforms. This approach gives power back to the people, limits the risk of government or corporate bias in labeling political narratives as misinformation, and strengthens democratic accountability. One possible path forward is a public, European-developed API for fact-checking, interoperable across platforms, enabling transparent, community-driven moderation. It would be a tool that protects both sovereignty and freedom of expression. This is not regulating through restriction; it’s regulating through innovation. Thank you. Do we have any reactions to that? No. I just want to remind you that the promise of social media was to promote democracy and to give voice to all. But then people in power, such as states, governments, and political actors, used those platforms to manipulate them and to pretend that a huge number of people supported a candidate or an issue. And I still don’t trust the platforms to do this by themselves yet. So that is my position on Community Notes.


Cristina Herrera: Thank you. Great. I’ll just say briefly: the question isn’t whether Community Notes are good or bad. Community Notes is a great idea for a lot of things.


Berin Szóka: The point is that, by design, it doesn’t work for the things that matter most. If the question is who won the last election, you will never get a Community Note on that issue, because certain parts of the community, in the United States in particular, but this will happen in other countries, deny that the last election was legitimate. So Community Notes on its own can’t work for certain classes of systemic risks. And this is where the approach of the DSA, I think, is exactly right: in general, we have to ask what the risks are, how we mitigate them, and what the tools are. Systemic risks might be mitigated through Community Notes for certain kinds of things. But for other things where society is deeply divided, and it’s not just elections, it might be vaccines, for example, you have to have some other way of dealing with that problem, or you will have the delegitimization of elections; you will have, as we had in the United States, an attempted coup, right? This is going to happen in other countries, and there has to be some other way of dealing with those problems. And it’s not going to come from the bottom up.


Judit Bayer: It has to come from editorial intervention. Can we get just a reaction from a speaker first, and then we go for the follow-up? Judit, please unmute yourself. Thank you very much. Maybe to react first to the last words of Berin: you said editorial intervention, absolutely. I would just like to emphasize that the DSA is not the only regulatory tool of the European Union. In fact, the EU’s digital regulatory package includes several other laws, the DMA, the Political Advertising Regulation, now the European Media Freedom Act, and some others, which regulate the information environment. And when we talk about a healthy information environment, I think one of the major tasks is to reinforce the position of quality media, whether online or through whatever transmission method is chosen, to push social media to the place that it deserves, and to emphasize that facts can be learned from quality media, not from social media. But you’re absolutely right regarding the COVID note. And back to enforcement, I wanted to react that I agree the European Commission will probably balance the political risk against the risk that the platforms may pose for European society; that is also a political risk. And I think ultimately the Commission could try to block these platforms, to block access to them within the European Union. Geo-blocking is a possibility offered by the DSA. And I think


Cristina Herrera: this is an ultima ratio, which we must also keep in sight if all dialogue breaks down between the US and Europe. Thank you. Yeah. Yes, go ahead. 62. Yeah, thank you. Regarding community notes, I wanted to say it’s not only a matter of


Audience: not dealing with divided societies or issues. There is also another risk of manipulation, as we have already seen in recent years on Wikipedia and elsewhere: community notes can become a new field for manipulation, for intervention, for political manipulation, or even for coordinated inauthentic behavior trying to coordinate this kind of information. It can add another layer of risk for information integrity and, at the same time, allow platforms to bypass regulation and not be accountable for their role in safeguarding the safety and rights of users. So I think we cannot look at these new mechanisms of community notes as a perfect solution, because we already know there are many new risks that have to be balanced somehow with the responsibility and accountability of the platforms themselves. Yes, thank you. Do you have any remarks?


Cristina Herrera: Sorry, 2.15. Thank you.


Audience: I speak in my capacity as a member of the advisory board of EDMO, so we are dealing with these issues every day, as you can imagine. I have two considerations that I want to bring to your attention. The first is that I don’t think there will be a solution in the next months, or the next years, to this huge divide between Europe and the US. So I think that what we have to do immediately is start to look for alliances in the rest of the world. There is a risk for democracy, and so all democracies are concerned, and on this basis we can build alliances with countries; we especially need to try to find alliances with the Group of 77, because we need to ask the countries that are committed to democracy to work on this together with Europe. And I think that we have lost time, because we have not considered this a priority in the past, and that was a big mistake. The second thing is that there is always an opportunity in European legislation: the 2027 deadline for the personal interface for news foreseen in the EMFA, the European Media Freedom Act. This is something that we need to consider seriously, because it could be a long-term solution. A common API for fact-checking was mentioned before, but this is even better: if you have an interface that allows you to access only news that is guaranteed and comes from sources that you know are trustworthy, then part of the problem can be solved. And we have two years ahead, which is useful time we can employ to arrive at the right conclusion. Thank you. Very interesting idea. I’m gonna go back to you but we’re gonna go


Karine Caunes: first online. Karine, I just ask you to unmute. Yes, thank you. I just wanted to react to what Judit Bayer said, and to EDMO as well. With regard to the shutdown of social media, actually the first country which did it was the US, shutting down TikTok for less than a day. We did something similar: elections were also annulled in Romania, and what happened? We got a negative reaction. If we were to shut down social media in Europe, it would be used again for a new wave of information manipulation, and this is a problem that all member states have in mind. So this is why, with regard to information manipulation in the context of the German elections, for example, but we are suggesting it also with regard to the pending elections in Romania, in Portugal, and in Poland, because they are going on here in May, we are requesting the suspension of the recommender system. The suspension of the recommender system is essential because this is how astroturfing works, and it is also through the recommender system that illegal content, discriminatory content, revisionist content, anti-Semitic and anti-immigrant content is displayed in the For You feed of users, even if they didn’t ask for it. So the suspension of the recommender system would be a middle ground, and I’m not sure how the US could say that it would be an attack on freedom of expression, so this could be a good tool. And as for alliances in the world with other countries which respect democratic values, yes, but we also have China, and with China we can find a middle ground, I mean a common position, with regard to the labelling of AI-generated content, because we know that fake content is used to manipulate opinions, whatever the topic. They have issued rules, and we have issued rules; we have to implement these rules in Europe, and we can check with China what we can do together to set some kind of common standards while respecting our own values.
Thank you. We’re going to have one last statement from 383.


Cristina Herrera: Well, manipulation and coups existed long before the internet, and they will continue with or


Audience: without it. Your country knows more than anyone else about it, and whether it’s Trump or any other American president, Republican or Democrat, the reality of the power games stays the same. Community Notes isn’t perfect, but it’s the most transparent tool we have at the moment on our social media platforms, and this is the foundation we can build on through innovation. When I talk about a European fact-checking API, the goal isn’t to give governments the power to decide what is true; it gives users the power: open, decentralized, democratic tools that let people verify information collectively, without being dictated to by states


Berin Szóka: or corporations. Many people don’t want to know what’s true; they don’t care about fact-checking. Those people are the problem. So if you design a system that is intended to fix the problem, and the people who are the problem don’t want to use it, you haven’t fixed anything. Right, so with that we


Cristina Herrera: finished the section of statements, and now we’re going to go to the messages drafted by the program committee. You can see them on the screen. And remember, we’re going to ask for consensus; if you disagree with anything that is said here, let us know, and there will also be a link to follow up after the session. Go ahead, thank you.


Moderator: My name is Yrjö Länsipuro, from Finland, and I’ve been trying to craft a couple of messages from this discussion. It was a really good discussion, and very detailed on some questions. It would be very nice to write a 10-page report of the discussion; however, when you try to write a couple of messages, you have to work as the editor that I have been, and you have to make it short. So basically, since the basic question in this part of EuroDIG was what Europe should do, I have concentrated on two things here: describing the situation, and then what Europe should do. The first message: tensions between tech giants and European regulations are nothing new, but they are now getting increasingly entangled with transatlantic political conflicts, with Internet issues risking becoming pawns in disputes over trade and defence policies. This has exacerbated, at least at the rhetorical level, the divergence between European and American interpretations of freedom of expression. There have been attempts to label European regulation against harmful content or election interference as censorship. So this paints the picture of where we are. The second message, which is what we should do: as to how Europe should reply to the pressures, there was a consensus that retreating was no option. While continuing the transatlantic dialogue and trying to correct obvious American misunderstandings about the nature of the DSA, the DMA, and other regulations, Europe should make clear that it will defend its basic principles. On the other hand, the European regulatory instruments should continue to be refined, simplified, and made smarter.


Cristina Herrera: Thank you. Are there any notes about what we’re seeing? Does anyone disagree? Go ahead.


Berin Szóka: Yeah, I think that the problem is not that the European regulatory instruments are not simple enough, but that they have not been designed with this kind of conflict in mind. I think if we knew that Trump would be president again, the Digital Services Act would have been written more carefully. And I think that’s what needs to happen now. And the specific recommendation should be to rethink both the DSA, the entire package of regulation, with the current context in mind. And that should include the provisions that are most ambiguous, like, for example, what is a risk to civic discourse or a risk to electoral processes? Those terms aren’t defined. What exactly are the requirements of risk mitigation? And most importantly, there should be something in the text of the law that says that it cannot be used to deal with specific kinds of content. I don’t think it’s good enough to simply say that that’s implicit in European fundamental rights law. I mean, for example, we already have clear, explicit prohibitions on European law being interpreted to require monitoring of user communication, right? That probably was implicit in the law, in fundamental rights law, but we put it into legislation explicitly. The same sort of thing should happen in the DSA as it’s revised.


Cristina Herrera: Any reactions? From the audience? Karine, I’m asking, do you want to unmute? Go ahead. Yes.


Karine Caunes: As we saw with community notes, we hardly ever all agree. I would disagree with the last position. I think that to reopen the DSA, the AI Act, and the GDPR would be a big mistake. It would only serve to water down their content, and that is not the aim. I think we have the tools; the question is, do we have the political will to enforce them? Let’s see what happens in the future. Yes. Right there. Thank you. I hope you can hear me okay.


Audience: Jorge Cancio from the Swiss government. Just wanted to make a comment. I don’t have a specific wording proposal, but I have the impression that, to a certain extent, we are conflating, at the European level, European Union legislation with European legislation. There is a lot of confusion, and I think we have to be careful about the difference. Of course, I think we share a tradition, and we share approaches, but not necessarily the details, the specific instruments themselves, which not all have adopted. For instance, here we are in the Council of Europe, 46 countries, and there are many countries that don’t exactly follow the same line as the European Union instruments, Switzerland included. So I would ask the drafters to use slightly different wording, perhaps with an “e.g.” or a “for instance”, or things like that. And on the other side of the Atlantic, I also have some difficulty, coming from a global and international background, when we talk about “America”, because America is Mexico, is Canada; America is many things. I think we should say “US-American” or something in that direction. Thank you. Great, thank you. We got here. Thank you. 58. Hi, hello. I’m David Crouch from the Internet Society. I don’t think we in Europe should be regulating or changing regulations depending on who’s sitting in the White House; we should make future-proof regulation. What I would like to see is that any decisions going forward should preserve the open nature of the Internet. There are some recommendations on intermediary liability that should be taken into account, and that’s it. Thank you. There was someone here. Go ahead. 327. Thank you. Tim van der Belt of the Dutch Authority for Digital Infrastructure. I would like to address the political side of the enforcement of regulation, because the regulator, the supervisory authority, ought to be independent.
But when you say that enforcement is a political issue, an authority cannot be independent. So I would like to remove the political will from the enforcement part and rather look at the system, the design of the regulation, rather than the political will to enforce, or whether an authority is allowed to enforce. For an authority, only the public interest is the main issue, not the political part; for that, the state or the ministries are the appointed institutions. Thank you. Thank you. 250. Yeah. One remark on the text. Of course, I agree with Jorge that we are talking about the US, not the Americas. But in the last phrase of the second point, we say the regulatory instruments continue to be refined. I think that before they are refined, we need to implement. The political decision on the whole package elaborated over the last five years by the European Commission, regulating the digital world, is now arriving at the implementation phase. If the implementation is not there, this will discredit the whole process. We know that, under European legislation, we need years before infringement procedures arrive at their final point. This is the case now: the first procedures against violations of the GDPR are arriving at the moment of enforcement. We have the same for the first DSA provisions, et cetera, et cetera, and violations of the code of practice. So we need to implement. If we don’t implement, the whole architecture that we have built during these five years will have no future, no credibility in the rest of the world. So this is the only thing that we need to do now. Thank you.


Cristina Herrera: Thank you. Sorry, an answer. 25. Yeah, is that on? Yeah. Thank you very much for all those remarks. The drafting continues with the whole program committee now, until the 25th, I believe.


Moderator: So can I say: we have taken those remarks into account, and we will try to reflect them in the text. Can we ask for a rough consensus on this? On the basic things. Maybe we’re going to take the last remarks from 1.52 and then we can take the vote. Thanks. My name is Julie Posetti. I’m an academic and a journalist based in the UK.


Audience: I very much agree with the Swiss colleague’s comments, because unfortunately the UK is no longer part of the EU, and so we need to reflect those different standards and norms, which should be aligned. And I just want to reiterate, I can’t remember who said this, the need to be more creative and networked in our response to these challenges, because although most Americans apparently couldn’t see Trump 2.0 coming, those of us who work with large data sets analysing disinformation and hate speech online could see this surge and did predict it. So it is unfortunate that the various pieces of legislation were not crafted to predict a tilt towards authoritarianism in the land that previously marked itself out as a genuine bastion of freedom of expression. But one of the most disturbing things I have heard here in the past couple of days came from a sideline conversation, where a representative of a state apparatus suggested that we’ve already lost the battle to regulate, so we should just give up because there is no political will. The second point that person made was that we shouldn’t be regulating AI because we don’t know yet what damage it can do, and we have to wait for the damage to be done before we regulate to prevent harm. And I don’t know what my ask is here, other than to reinforce that the function of the Council of Europe could in fact be to bring together not just European nations, but to think, as our African colleagues requested in a plenary session yesterday, about acting in a way that takes account of the networked effects of the unregulated, largely US-based platforms which now form part of what we refer to as the broligarchy, with the political power of the Trump administration reinforcing their dominance.
And to do so in a way that perhaps brings together like-minded democracies, as someone suggested, in North America, such as Canada, and in Oceania, such as Australia and New Zealand, and many others. We could consider South Africa and a whole range of other countries that are intent on trying to regulate in an effective way to defend democracy, human rights, and the rule of law, none of which we can save without a concerted accountability mechanism, which includes regulation. Thank you. Thank you very much for all your comments. To


Moderator: add to the request for rough consensus: we ask whether there are strong objections to the messages displayed here on the screen. If there is rough consensus, that is, if there are no strong objections, then the organizing team will take the comments and questions that you’ve made during this session and make the changes that you’ve proposed.


Berin Szóka: You had a strong, do you have a strong objection? We’ve talked a lot about regulation, but I think what you just said reminded me that, at the end of the day, the rule of law is not primarily about legislation or regulation; it’s about courts. In particular, the Council of Europe, the entire system of the European Convention on Human Rights, assumes that you have an effective court to deal with claims. And the way in which J.D. Vance is most correct in his criticism of, in particular, law in the UK is... I’m so sorry to interrupt, but may I ask, what specifically is the objection? My objection is that we need not only to talk about regulation, but also to say something about the importance of effective judicial supervision to ensure fundamental rights, because that is not happening in the Council of Europe system.


Cristina Herrera: Thank you very much. That’s been well noted for the transcript, and that’s something that the organizing team will take into consideration when drafting the further messages. Are there any other strong objections? 483 has a strong objection.


Audience: Yes, hello, Olivier Crépin-Leblond, ISOC UK England. I just have an objection to a word in the second part: “while continuing the transatlantic dialogue and trying to correct obvious American misunderstandings”. I would propose striking out “obvious”, because I think it’s a bit condescending to say that it’s an obvious misunderstanding. Thank you very much for your comment; this has been well noted. Any other strong objections? Nadia, from online: we have an objection to the word “simplified”, and Judit would like to raise a strong objection. There is a strong objection to “simplified”; it’s been well noted. Thank you very much for your comment. Any other? It’s less American misunderstandings than it is mischaracterizations by the Trump administration. Thank you very much, we’ll make a note of that as well. If there are no further objections, then I hand the moderation back to the moderator. Thank you.


Moderator: Thank you all very much for attending this session. I think it was very interactive, as EuroDIG wanted, and thank you all for participating. And of course, thank you very much to the moderator for leading us through this very interesting session. We can see how many people are passionate about this topic, and how many different views and ideas there are about it. Hopefully you will choose not to end this conversation here, but to join us for the social evening tonight and continue these conversations, where we can meet each other socially and passionately carry on. We’ll be meeting hopefully this evening at the Tigre. It’s not going to be a grand party, but at least the first drink is on us. So I hope that you will come and join us. Otherwise, we will see you here tomorrow for main topic five, which is going to be on the age verification dilemma: balancing child protection and digital access rights. Have a wonderful end of your day and a good evening. Thank you very much.



Berin Szóka

Speech speed

159 words per minute

Speech length

2911 words

Speech time

1097 seconds

Trump administration betrays American values by trying to shut down media while claiming to protect free speech

Explanation

Szóka argues that while Trump claims to protect free speech, his administration is actually trying to shut down broadcasters and sue newspapers and pollsters. He contends that what Trump really means is forcing private media to carry lies about the 2020 election, conspiracy theories, and hateful speech.


Evidence

Trump administration suing newspapers and pollsters, trying to shut down broadcasters; tech companies abandoning fact-checking to appease Trump; Meta now allowing denigration of immigrants, women, and sexual minorities


Major discussion point

Transatlantic Differences in Freedom of Expression Approaches


DSA is ambiguous enough to be weaponized, with Article 35 systemic risk provisions being loosely defined

Explanation

Szóka warns that the Digital Services Act’s ambiguous language, particularly Article 35’s loosely defined systemic risks like ‘civic discourse and electoral processes,’ could be misused by regulators. He argues this ambiguity allows for potential abuse similar to Trump’s own attempts to control social media content.


Evidence

Thierry Breton’s threats against Elon Musk and social media platforms; Article 35 requirements for platforms to mitigate systemic risks that are only loosely defined; Professor Martin Husovec’s analysis that the answer on regulatory power is ‘far from clear’


Major discussion point

Digital Services Act (DSA) Implementation and Enforcement


Disagreed with

– Karine Caunes

Disagreed on

Whether to revise the Digital Services Act (DSA) to address ambiguities


Trump administration uses jawboning – regulatory pressure and extortion – to force tech companies to abandon content moderation

Explanation

Szóka explains that jawboning involves using pressure, browbeating, and regulatory extortion to achieve results that regulators don’t have legal authority to require directly. He argues this tactic is working, as major tech companies have abandoned fact-checking to appease Trump’s rage.


Evidence

Major tech companies abandoning fact-checking; Meta allowing denigration of immigrants, women, and sexual minorities; claims about Haitian immigrants eating dogs and cats resulting in bomb threats


Major discussion point

US Political Pressure and Jawboning Tactics


US threatens tariffs and trade measures against European digital policies perceived as censorship

Explanation

Szóka notes that the US President signed a memorandum promising to defend American companies from what he perceives as overseas extortion, including considerations of tariffs to respond to fines and digital service taxes. This represents a direct threat to European regulatory autonomy.


Evidence

US President’s February memorandum promising to defend American companies; considerations of tariffs to respond to fines and digital service taxes; Trump ordering agencies to cease contracts with companies that facilitate censorship


Major discussion point

US Political Pressure and Jawboning Tactics


Disagreed with

– Moderator
– Audience

Disagreed on

Approach to US-EU tensions and dialogue


European Commission enforcement may lack strategic autonomy due to US pressure on NATO, Ukraine, and trade issues

Explanation

Szóka argues that Europe lacks strategic autonomy and that American support for NATO and Ukraine, as well as tariff threats, may effectively give the Trump administration veto power over DSA enforcement. Until Europe can stand up for itself militarily, it may be unable to resist American jawboning.


Evidence

US pressure on NATO and Ukraine support; tariff threats; Europe’s dependence on American military support; lack of European tech services that Europeans want to use


Major discussion point

Digital Services Act (DSA) Implementation and Enforcement


Disagreed with

– Judit Bayer

Disagreed on

European Commission’s enforcement capacity and independence


Community notes fail for divisive issues like election legitimacy where consensus is impossible

Explanation

Szóka explains that while community notes are a great idea for many things, they don’t work for the most important issues by design. For deeply divisive topics like election legitimacy or vaccines, society will never reach consensus, making community notes ineffective for addressing systemic risks.


Evidence

Example of election legitimacy disputes where certain parts of the US community deny the last election was legitimate; vaccines as another example where society is deeply divided; need for editorial intervention rather than bottom-up solutions


Major discussion point

Platform Content Moderation Challenges


Agreed with

– Nitsan Yasur

Agreed on

Platforms lack capacity for effective content moderation during crises


Disagreed with

– Nitsan Yasur
– Audience

Disagreed on

Effectiveness of community notes for content moderation


Europe needs strategic autonomy, realism about US as adversary, and domestic tech innovation capacity

Explanation

Szóka argues that Europe lacks three critical things: realism about the US being an unreliable ally and increasingly an adversary, strategic autonomy to resist American pressure, and the ability to produce tech services that Europeans actually want to use. Without these, Europe will play a weak hand in tech regulation.


Evidence

US as increasingly unreliable ally on tech and other issues; Europe’s inability to stand up militarily; failure to produce competitive tech services; Commission having to graft European values onto American creations


Major discussion point

European Response Strategies


MAGA politics characterizes DSA as censorship regime despite its content-neutral design

Explanation

Szóka explains that MAGA politics uses free speech as a rhetorical weapon, characterizing the DSA unfairly as a censorship regime and the Commission as a ministry of truth. This mischaracterization is used to justify pressure against European regulation, even though the DSA is designed to be content-neutral.


Evidence

J.D. Vance’s accusations about Europe retreating from fundamental values; characterization of Commission as ‘new ministry of truth’; Thierry Breton’s actions giving MAGA ammunition for these characterizations


Major discussion point

US Political Pressure and Jawboning Tactics



Judit Bayer

Speech speed

139 words per minute

Speech length

2247 words

Speech time

964 seconds

European approach sees freedom of expression as a political right enabling democratic participation, with state obligation to protect against private power

Explanation

Bayer explains that the European interpretation of freedom of expression has a more systemic view than the American approach, seeing it as a political right that enables people to participate in democratic decision-making. When this goal is threatened by any actor, the state might intervene to protect the systemic function of freedom of expression.


Evidence

EU member states and EU having positive obligation to secure framework for plural and free information system; platforms aggregating and reorganizing information without explanation; platforms exploiting vulnerabilities through behavioral targeting


Major discussion point

Transatlantic Differences in Freedom of Expression Approaches


European governance rooted in people’s sovereignty where state protects citizens from other private powers, unlike US approach

Explanation

Bayer argues that European governance is rooted in the principle of people’s sovereignty, where the state receives its mandate from the people and has a duty to protect citizens’ interests against other private representatives of power. This historical perspective, rooted in feudalism, explains why Europe has stronger protections in various areas.


Evidence

Historical context of feudalism where kings provided freedom to cities from landlords; stronger protection of laborers’ rights, working mothers, better health care regulation, food safety; state obligation to protect human rights from fellow human beings or companies


Major discussion point

Transatlantic Differences in Freedom of Expression Approaches


DSA doesn’t order content removal but protects users’ rights through procedural guarantees and transparency requirements

Explanation

Bayer clarifies that the DSA doesn’t order platforms to remove certain content any more than existing laws like the US DMCA or EU E-commerce Directive. Instead, it protects users’ rights by introducing procedural guarantees such as explanations for content removal, transparency in terms of service, and dispute resolution mechanisms.


Evidence

Comparison to DMCA and E-commerce Directive; Facebook often more restrictive in removing content than required by EU law; procedural guarantees including explanation requirements, transparency with terms of service, dispute resolution mechanisms


Major discussion point

Digital Services Act (DSA) Implementation and Enforcement


DSA creates multi-actor participation system with checks and balances, not arbitrary political censorship tool

Explanation

Bayer describes the DSA as creating a complex web of collaboration involving tech companies, auditors, civil society actors, national regulatory authorities, researchers, and fact-checkers. This multi-actor participation creates a system of mutual distrust and checks and balances, with transparency requirements being dominant.


Evidence

Cooperation between tech companies, auditors, civil society actors as trusted flaggers, national regulatory authorities, researchers, fact-checkers; system based on mutual distrust with actors supervising each other; transparency requirements being dominant, making DSA ‘a big research project’


Major discussion point

Digital Services Act (DSA) Implementation and Enforcement


Agreed with

– Nitsan Yasur

Agreed on

Civil society plays crucial intermediary role in content governance


Disagreed with

– Berin Szóka

Disagreed on

European Commission’s enforcement capacity and independence


Platform power combined with authoritarian political power poses existential threat to any society

Explanation

Bayer warns that the power held by Big Tech – including data, technology, and opinion power – could be lethal to any society if combined with populist, extremist, authoritarian, or reckless political power. She uses examples of platform-enabled violence to illustrate this danger.


Evidence

Examples of Myanmar and African countries where platform power contributed to coups, civil wars, or genocide; Big Tech’s possession of enormous dynamic databases and technological power convertible to industrial and military power


Major discussion point

Global Democratic Implications


European regulatory approach should be export product for global democratic protection

Explanation

Bayer argues that the EU still has regulatory competence and considerable regulatory expertise, which could become an export product with minor amendments. She emphasizes that regulation is absolutely necessary for the public interest within the EU and beyond.


Evidence

EU’s prime markets still desirable for big tech companies; potential for new alliances with states of the global majority; regulatory expertise as potential export product


Major discussion point

Global Democratic Implications


Agreed with

– Audience

Agreed on

Need for global democratic alliances beyond US-EU axis


N

Nitsan Yasur

Speech speed

142 words per minute

Speech length

1214 words

Speech time

511 seconds

US and EU regulatory models fundamentally impact how platforms moderate content daily, with different accountability frameworks

Explanation

Yasur explains that both European and American approaches fundamentally impact how platforms moderate content on a daily basis. Speaking from a non-EU, non-US perspective, she emphasizes the real-world implications of this regulatory tension for organizations directly affected by both approaches.


Evidence

Israeli Internet Association’s experience as trusted flagger by all major platforms; direct impact on civil society organizations operating between both regulatory frameworks


Major discussion point

Transatlantic Differences in Freedom of Expression Approaches


Platforms lack capacity to moderate content at speed and scale during crises, with average 5+ day response times

Explanation

Yasur presents data showing that platforms simply cannot moderate content effectively during times of crisis. Her organization’s research during the Israel-Gaza conflict revealed significant delays and inconsistent responses, with platforms lacking human capacity for complex moderation decisions.


Evidence

October 7th war analysis showing social media usage spiked 35%, reports to hotline doubled; average 5+ day response time from platforms; no responses during weekends; trusted flagger experience from Russia-Ukraine and African conflicts showing consistent patterns


Major discussion point

Platform Content Moderation Challenges


Agreed with

– Berin Szóka

Agreed on

Platforms lack capacity for effective content moderation during crises


Platforms handle graphic content well with automated tools but struggle with hate speech requiring human moderation

Explanation

Yasur’s research found that platforms were generally quick to remove graphic content, terrorism-related materials, or sexual abuse that can be efficiently handled by automated tools. However, hate speech, incitement, and disinformation required more skilled human moderation and understanding of language, context, and local culture, and were much less consistently handled.


Evidence

Analysis of platform responses during Israel-Gaza conflict; Facebook had worst performance with 70% failure rate on disinformation reports; clear distinction between automated content removal success and human moderation failures


Major discussion point

Platform Content Moderation Challenges


Civil society should play meaningful role as trusted intermediary between platforms, government, and users

Explanation

Yasur argues that civil society organizations are uniquely positioned to bridge the gap between platforms, governments, and users due to public trust, independence from government, and ability to be critical of platforms while advocating for user protection. She suggests this multi-stakeholder approach is essential for effective content moderation.


Evidence

Israeli Internet Association’s role as trusted flagger representing public interest; civil society’s ability to support platforms in understanding local contexts, needs, and sensitivities; independence from government while being critical of platforms


Major discussion point

European Response Strategies


Agreed with

– Judit Bayer

Agreed on

Civil society plays crucial intermediary role in content governance


Disagreed with

– Berin Szóka
– Audience

Disagreed on

Effectiveness of community notes for content moderation


K

Karine Caunes

Speech speed

138 words per minute

Speech length

1181 words

Speech time

510 seconds

Information manipulation through bots and fake accounts threatens German democratic processes and freedom of expression

Explanation

Caunes presents research showing that, through foreign accounts, bots, and fake accounts, the AfD party gained an overwhelming visibility advantage in Germany compared to other political parties. She argues that bots do not have freedom of expression, and that their artificial amplification threatens German citizens’ freedom of expression and democracy.


Evidence

Analysis of over 500,000 tweets mentioning German political parties; AfD’s overwhelming visibility advantage through foreign account support; use of bots and fake accounts for artificial amplification; continuous artificial amplification creating reverse censorship


Major discussion point

Platform Content Moderation Challenges


Agreed with

– Judit Bayer
– Moderator

Agreed on

Europe should not retreat from regulatory standards despite US pressure


B

Brahim Baalla

Speech speed

132 words per minute

Speech length

262 words

Speech time

118 seconds

New US immigration policies consider social media activity as grounds for denying benefits, affecting European students

Explanation

Baalla highlights a new US Department of Homeland Security policy that considers aliens’ anti-Semitic activity on social media and physical harassment of Jewish individuals as grounds for denying immigration benefits. While supporting the condemnation of anti-Semitism, he raises concerns about the policy’s impact on European students and the lack of specific definitions.


Evidence

US Department of Homeland Security note from April 9, 2025; policy affecting aliens applying for lawful permanent resident status and students; thousands of European students choosing to study in US for academic freedom; lack of predictable definitions in the policy


Major discussion point

US Political Pressure and Jawboning Tactics


T

Torsten Krause

Speech speed

108 words per minute

Speech length

242 words

Speech time

133 seconds

One-third of global internet users are minors requiring special protection under Child Rights Convention

Explanation

Krause emphasizes that children represent one-third of global internet users and are recognized as a vulnerable group with special human rights under the Child Rights Convention. He references UN General Comment 25, which involved consultation with around 1,000 children worldwide and found a strong need for trustworthy content and trusted flaggers.


Evidence

UN General Comment 25 consultation process involving around 1,000 children from all over the world; Child Rights Convention and General Comment 25 specifications for digital environment; children’s expressed need for trustworthy content


Major discussion point

Child Protection and Vulnerable Groups


M

Moderator

Speech speed

120 words per minute

Speech length

579 words

Speech time

287 seconds

Europe should defend basic principles while refining regulatory instruments, not retreat from standards

Explanation

The moderator summarized the session consensus that retreating was not an option for Europe. While continuing transatlantic dialogue and correcting American misunderstandings about European regulations, Europe should make clear it will defend its basic principles and continue refining its regulatory instruments.


Evidence

Session consensus from discussion; need to continue transatlantic dialogue; correction of misunderstandings about DSA, DMA and other regulations


Major discussion point

European Response Strategies


Agreed with

– Judit Bayer
– Karine Caunes

Agreed on

Europe should not retreat from regulatory standards despite US pressure


Disagreed with

– Berin Szóka
– Audience

Disagreed on

Approach to US-EU tensions and dialogue


A

Audience

Speech speed

130 words per minute

Speech length

2259 words

Speech time

1038 seconds

Need for alliances with democratic countries beyond US-EU axis, including Global South nations

Explanation

An audience member from EDMO’s advisory board argued that there won’t be a solution to the US-EU divide in the coming months or years, so Europe should immediately start looking for alliances with other democracies. They emphasized the need for alliances with the Group of 77 and other countries committed to democracy.


Evidence

Risk to democracy affecting all democracies; need for alliances with Group of 77; acknowledgment that not prioritizing this in the past was a big mistake


Major discussion point

Global Democratic Implications


Agreed with

– Judit Bayer

Agreed on

Need for global democratic alliances beyond US-EU axis


Disagreed with

– Berin Szóka
– Moderator

Disagreed on

Approach to US-EU tensions and dialogue


C

Cristina Herrera

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

US-EU tensions on freedom of expression have intensified in online environments with geopolitical implications

Explanation

Herrera sets the scene by explaining that while Europe and the United States have long had different approaches to freedom of expression, these tensions have intensified in recent years, especially in online environments. She notes that geopolitical tensions are raising questions about how the US might pressure the EU on digital policies and how the EU will respond.


Evidence

US President’s February memorandum promising to defend American companies from overseas extortion; EU enforcement of Digital Services Act requiring content removal safeguards; preliminary findings against American companies for breaching DSA


Major discussion point

Transatlantic Differences in Freedom of Expression Approaches


EuroDIG has changed format to encourage more interactive audience participation and diverse stakeholder input

Explanation

Herrera explains that EuroDIG has modified its session format this year to make it more interactive, dividing sessions into high-level statements, pre-registered statements, and audience interventions. The purpose is to encourage audience participation and enrich conversations with inputs from diverse stakeholders.


Evidence

New session format with 30 minutes for high-level statements, 45 minutes for pre-registered statements, and time for audience interventions and questions


Major discussion point

European Response Strategies


Session aims to find way forward despite deepening transatlantic rift on freedom of expression

Explanation

Herrera frames the session’s purpose as delving more deeply into the roots of different approaches between the US and EU on freedom of expression. The goal is to try to find a way forward despite the intensifying tensions and geopolitical pressures.


Evidence

Session structure designed to explore roots of different approaches; focus on finding solutions amid geopolitical tensions


Major discussion point

European Response Strategies


Multidisciplinary expertise needed to address complex freedom of expression challenges

Explanation

Herrera introduces the panel as consisting of three multidisciplinary experts in European law, American law, content moderation, and Internet governance. This highlights the need for diverse expertise to tackle the complex challenges of transatlantic differences in freedom of expression approaches.


Evidence

Panel includes experts in European law, American law, content moderation, and Internet governance, including Berin, Nitsan, and Judit (joining remotely)


Major discussion point

European Response Strategies


S

Speaker

Speech speed

161 words per minute

Speech length

196 words

Speech time

72 seconds

EuroDIG social evening provides networking opportunity to continue policy discussions

Explanation

The speaker promotes the social evening at Le Tigre as an opportunity for participants to continue conversations from the sessions in a more informal setting. This reflects the multi-stakeholder approach to internet governance, where informal networking complements formal sessions.


Evidence

Meeting at Le Tigre at 6:30 with first drink provided; vouchers available at the door; encouragement to continue passionate conversations from the session


Major discussion point

European Response Strategies


Agreements

Agreement points

Platforms lack capacity for effective content moderation during crises

Speakers

– Berin Szóka
– Nitsan Yasur

Arguments

Community notes fail for divisive issues like election legitimacy where consensus is impossible


Platforms lack capacity to moderate content at speed and scale during crises, with average 5+ day response times


Summary

Both speakers agree that current platform content moderation systems are fundamentally inadequate, particularly during times of crisis or for highly divisive content where automated systems fail and human moderation cannot keep pace with demand.


Topics

Human rights | Cybersecurity


Civil society plays crucial intermediary role in content governance

Speakers

– Judit Bayer
– Nitsan Yasur

Arguments

DSA creates multi-actor participation system with checks and balances, not arbitrary political censorship tool


Civil society should play meaningful role as trusted intermediary between platforms, government, and users


Summary

Both speakers emphasize the importance of civil society organizations as trusted intermediaries that can bridge gaps between platforms, governments, and users while maintaining independence and accountability.


Topics

Human rights | Legal and regulatory


Europe should not retreat from regulatory standards despite US pressure

Speakers

– Judit Bayer
– Moderator
– Karine Caunes

Arguments

European regulatory approach should be export product for global democratic protection


Europe should defend basic principles while refining regulatory instruments, not retreat from standards


Information manipulation through bots and fake accounts threatens German democratic processes and freedom of expression


Summary

There is strong consensus that Europe should maintain its regulatory approach and defend its principles rather than backing down in response to US political pressure, with the DSA and related regulations seen as necessary tools for democratic protection.


Topics

Legal and regulatory | Human rights


Need for global democratic alliances beyond US-EU axis

Speakers

– Judit Bayer
– Audience

Arguments

European regulatory approach should be export product for global democratic protection


Need for alliances with democratic countries beyond US-EU axis, including Global South nations


Summary

Both speakers recognize the need for Europe to build broader international coalitions with democratic countries, particularly in the Global South, rather than relying solely on transatlantic cooperation.


Topics

Legal and regulatory | Development


Similar viewpoints

While they have different perspectives on the DSA’s design, both speakers acknowledge the importance of preventing arbitrary enforcement and ensuring proper checks and balances in digital regulation, though Szóka is more concerned about potential abuse while Bayer emphasizes the built-in safeguards.

Speakers

– Berin Szóka
– Judit Bayer

Arguments

DSA is ambiguous enough to be weaponized, with Article 35 systemic risk provisions being loosely defined


DSA creates multi-actor participation system with checks and balances, not arbitrary political censorship tool


Topics

Legal and regulatory | Human rights


All three speakers recognize that political manipulation and pressure tactics are being used to undermine content moderation systems, whether through direct government pressure or through coordinated inauthentic behavior that exploits platform vulnerabilities.

Speakers

– Berin Szóka
– Nitsan Yasur
– Karine Caunes

Arguments

Trump administration uses jawboning – regulatory pressure and extortion – to force tech companies to abandon content moderation


Platforms handle graphic content well with automated tools but struggle with hate speech requiring human moderation


Information manipulation through bots and fake accounts threatens German democratic processes and freedom of expression


Topics

Human rights | Cybersecurity


Both speakers frame the transatlantic differences as fundamental philosophical disagreements about the role of the state in protecting democratic discourse, with European approaches emphasizing systemic protection of democratic participation versus American emphasis on individual rights against state interference.

Speakers

– Judit Bayer
– Cristina Herrera

Arguments

European approach sees freedom of expression as a political right enabling democratic participation, with state obligation to protect against private power


US-EU tensions on freedom of expression have intensified in online environments with geopolitical implications


Topics

Human rights | Legal and regulatory


Unexpected consensus

Criticism of community notes as insufficient solution

Speakers

– Berin Szóka
– Nitsan Yasur
– Audience

Arguments

Community notes fail for divisive issues like election legitimacy where consensus is impossible


Civil society should play meaningful role as trusted intermediary between platforms, government, and users


Need for alliances with democratic countries beyond US-EU axis, including Global South nations


Explanation

Despite coming from different perspectives (American legal expert, Israeli civil society, and European audience), there was unexpected consensus that community-driven content moderation like community notes, while valuable, cannot address the most critical challenges of democratic discourse protection, particularly for divisive issues where societal consensus is impossible.


Topics

Human rights | Sociocultural


Recognition of US as increasingly unreliable partner

Speakers

– Berin Szóka
– Judit Bayer
– Audience

Arguments

Europe needs strategic autonomy, realism about US as adversary, and domestic tech innovation capacity


Platform power combined with authoritarian political power poses existential threat to any society


Need for alliances with democratic countries beyond US-EU axis, including Global South nations


Explanation

Surprisingly, even the American legal expert Berin Szóka joined European speakers in acknowledging that the US has become an unreliable ally and that Europe needs to develop strategic autonomy and seek alternative partnerships. This represents a significant shift from traditional transatlantic cooperation assumptions.


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The discussion revealed strong consensus on several key points: platforms are inadequate for content moderation during crises, civil society plays a crucial intermediary role, Europe should maintain its regulatory standards, and there’s a need for broader democratic alliances beyond the US-EU relationship. Speakers also agreed on the fundamental inadequacy of community-driven solutions for the most divisive content.


Consensus level

High level of consensus on core issues despite different backgrounds and perspectives. The implications are significant – there’s broad agreement that the current transatlantic approach to digital governance is failing and that Europe needs to develop more autonomous and globally inclusive strategies for protecting democratic discourse online. This consensus suggests a potential paradigm shift away from US-centric approaches toward more multilateral, civil society-inclusive governance models.


Differences

Different viewpoints

Whether to revise the Digital Services Act (DSA) to address ambiguities

Speakers

– Berin Szóka
– Karine Caunes

Arguments

DSA is ambiguous enough to be weaponized, with Article 35 systemic risk provisions being loosely defined


Reopening the DSA, the AI Act, and the GDPR would be a big mistake that would only water down their content; the tools already exist, and the real question is whether there is the political will to enforce them


Summary

Szóka argues the DSA needs revision due to ambiguous language that could be weaponized, while Caunes believes reopening the DSA would be a mistake and that the current tools are sufficient if there’s political will to enforce them.


Topics

Legal and regulatory | Human rights


Effectiveness of community notes for content moderation

Speakers

– Berin Szóka
– Nitsan Yasur
– Audience

Arguments

Community notes fail for divisive issues like election legitimacy where consensus is impossible


Civil society should play meaningful role as trusted intermediary between platforms, government, and users


Community-based fact-checking model, like community notes on X, should be expanded across all major social media platforms


Summary

Szóka argues community notes fail for the most important divisive issues, Yasur advocates for civil society intermediaries over pure community-driven solutions, while some audience members support expanding community notes across platforms.


Topics

Human rights | Sociocultural


European Commission’s enforcement capacity and independence

Speakers

– Berin Szóka
– Judit Bayer

Arguments

European Commission enforcement may lack strategic autonomy due to US pressure on NATO, Ukraine, and trade issues


DSA creates multi-actor participation system with checks and balances, not arbitrary political censorship tool


Summary

Szóka argues the Commission lacks strategic autonomy and may be subject to US pressure, while Bayer contends the DSA creates a system of checks and balances that prevents arbitrary political decisions.


Topics

Legal and regulatory | Human rights


Approach to US-EU tensions and dialogue

Speakers

– Berin Szóka
– Moderator
– Audience

Arguments

US threatens tariffs and trade measures against European digital policies perceived as censorship


Europe should defend basic principles while refining regulatory instruments, not retreat from standards


Need for alliances with democratic countries beyond US-EU axis, including Global South nations


Summary

Szóka emphasizes the severity of US threats and Europe’s vulnerability, the Moderator advocates for continuing dialogue while defending principles, and audience members suggest looking beyond the US-EU axis for democratic alliances.


Topics

Legal and regulatory | Development


Unexpected differences

Level of optimism about European regulatory capacity

Speakers

– Berin Szóka
– Judit Bayer

Arguments

Europe needs strategic autonomy, realism about US as adversary, and domestic tech innovation capacity


European regulatory approach should be export product for global democratic protection


Explanation

Despite both being experts on European platform regulation, Szóka (an American living in Europe) is deeply pessimistic about Europe’s capacity to resist US pressure, while Bayer (a European academic) maintains confidence in European regulatory frameworks and their global applicability. This represents an unexpected divide between American and European perspectives on European capabilities.


Topics

Legal and regulatory | Economic


Role of civil society versus community-driven solutions

Speakers

– Nitsan Yasur
– Audience

Arguments

Civil society should play meaningful role as trusted intermediary between platforms, government, and users


Community-based fact-checking model, like community notes on X, should be expanded across all major social media platforms


Explanation

Unexpectedly, there was disagreement between civil society representatives themselves – Yasur (representing established civil society organization) advocated for institutional civil society roles, while audience members from youth organizations favored more direct community participation. This reveals tensions within civil society about democratization of content moderation.


Topics

Human rights | Sociocultural


Overall assessment

Summary

The discussion revealed fundamental disagreements about regulatory strategy (revision vs. enforcement), content moderation approaches (institutional vs. community-driven), and European capacity to resist US pressure. While speakers agreed on core problems (inadequate content moderation, threats to democracy, need to protect freedom of expression), they diverged significantly on solutions.


Disagreement level

Moderate to high disagreement on implementation strategies despite consensus on core problems. The disagreements have significant implications as they reflect deeper philosophical divides about regulatory approaches, democratic participation, and transatlantic power dynamics that could affect the future of internet governance and digital rights protection.




Takeaways

Key takeaways

The transatlantic rift on freedom of expression is intensifying, with US-EU tensions becoming entangled with broader political conflicts over trade and defense policies


The Trump administration is using ‘jawboning’ tactics – regulatory pressure and threats of tariffs – to coerce European regulators and tech companies to abandon content moderation practices


The Digital Services Act (DSA) faces enforcement challenges due to ambiguous language around systemic risks and potential US political pressure that may undermine European strategic autonomy


Platform content moderation fails during crises, with companies unable to handle the speed and scale required, particularly for context-dependent content like hate speech and disinformation


Community notes and similar bottom-up moderation approaches cannot address divisive issues where societal consensus is impossible, such as election legitimacy or vaccine misinformation


Europe needs to build strategic autonomy through domestic tech innovation, military independence, and alliances with other democratic nations beyond the US


Civil society organizations should play a crucial intermediary role between platforms, governments, and users, particularly as trusted flaggers and in risk assessment


The regulatory approach should focus on transparency, procedural guarantees, and multi-stakeholder participation rather than direct content control


Resolutions and action items

Continue transatlantic dialogue while defending European regulatory principles and correcting mischaracterizations of the DSA


Refine and implement existing European regulatory instruments (DSA, DMA, AI Act) rather than retreating from standards


Develop alliances with democratic countries globally, including the Global South, Canada, Australia, and New Zealand


Maintain enforcement of DSA risk mitigation systems despite US pressure


Consider suspension of recommender systems during electoral periods as a middle-ground approach to combat information manipulation


Explore development of interoperable European fact-checking APIs and community-driven moderation tools


Strengthen quality media and reduce reliance on social media for factual information


Program committee to incorporate session feedback into final EuroDIG messages by the deadline


Unresolved issues

How to effectively enforce DSA provisions when US companies withdraw from voluntary commitments like codes of conduct


Whether the European Commission has sufficient political will and strategic autonomy to enforce regulations against US pressure


How to address the fundamental ambiguity in DSA language around ‘systemic risks’ to civic discourse and electoral processes


What specific mechanisms can replace fact-checking when platforms abandon these practices


How to prevent manipulation of community-driven moderation systems by coordinated inauthentic behavior


Whether to revise existing regulations (DSA, AI Act) or work within current frameworks


How to balance child protection needs with broader content moderation challenges


What role geo-blocking and platform shutdowns should play as ultima ratio enforcement tools


Suggested compromises

Suspend recommender systems during electoral periods rather than shutting down entire platforms


Combine community-based moderation with trusted civil society organizations rather than relying solely on individual users


Focus on transparency requirements and procedural guarantees rather than direct content mandates


Develop technology-neutral risk mitigation approaches that allow platforms flexibility in implementation methods


Create European alternatives and APIs for fact-checking while maintaining platform choice in moderation approaches


Strengthen judicial oversight and court systems as complement to regulatory approaches


Build coalitions with like-minded democracies while maintaining dialogue with the US


Implement existing regulations fully before considering major revisions


Thought provoking comments

Trump claims to be protecting free speech. But what his administration really means is that private media must carry lies about who won the 2020 presidential election, conspiracy theories about vaccines and the most hateful, toxic speech imaginable… There’s a word for what the Trump administration is doing. Jawboning. Jawboning means using pressure, browbeating, and regulatory extortion to achieve results that regulators don’t have the legal authority to require directly.

Speaker

Berin Szóka


Reason

This comment reframes the entire debate by introducing the concept of ‘jawboning’ – a sophisticated form of regulatory pressure that operates outside formal legal channels. It challenges the surface-level narrative about free speech protection and reveals the underlying power dynamics at play.


Impact

This fundamentally shifted the discussion from a simple US vs EU regulatory comparison to a deeper analysis of how informal pressure and political coercion can undermine formal legal frameworks. The concept of jawboning became a recurring theme throughout the session, with multiple speakers referencing it when discussing enforcement challenges.


The DSA is ambiguous enough that Breton thought he could wield the law against content he didn’t like… In the new global culture war, this isn’t good enough. As President Reagan said, if you’re explaining, you’re losing.

Speaker

Berin Szóka


Reason

This insight highlights a critical vulnerability in European regulation – that ambiguous language can be weaponized by both sides. The Reagan quote crystallizes how political perception often matters more than legal precision in today’s environment.


Impact

This comment prompted deeper examination of the DSA’s structural weaknesses and led to discussions about whether European regulations need to be rewritten with current geopolitical tensions in mind. It also sparked debate about the balance between regulatory flexibility and legal clarity.


There is a European ideal for governance which is rooted in the principle of people’s sovereignty, where the state receives its mandate from the people, and it’s the state’s duty to stand up for the interests of the people against other private representatives of power… Perhaps this roots in feudalism which didn’t exist in the U.S.

Speaker

Judit Bayer


Reason

This comment provides crucial historical context that explains the fundamental philosophical differences between European and American approaches to regulation. By tracing the roots back to feudalism and the role of states in protecting citizens from powerful private actors, it offers a deeper understanding of why the two systems diverge.


Impact

This historical framing helped participants understand that the transatlantic divide isn’t just about current politics but reflects centuries-old differences in governance philosophy. It elevated the discussion from tactical regulatory differences to fundamental questions about the role of the state in protecting citizens from private power.


Platforms simply lack the capacity to moderate content at the speed and scale demanded during times of crisis… On average, it took the platforms more than five days to respond to our reports… hate speech, incitement, and disinformation, however, required more skilled human moderation and understanding of language, context, and local culture, and were much less consistently and properly handled.

Speaker

Nitsan Yasur


Reason

This comment grounds the theoretical debate in concrete, empirical evidence from crisis situations. It reveals the practical limitations of content moderation systems and highlights the gap between policy aspirations and operational reality.


Impact

This shifted the conversation from abstract regulatory principles to practical implementation challenges. It influenced subsequent discussions about community notes and alternative moderation approaches, with speakers referencing the need for solutions that can actually work at scale during crises.


Until Europe can stand up for itself militarily, the Trump administration may effectively have veto power over the enforcement of the DSA and the enforcement of other European tech laws… Europe may have fine principles, but it lacks strategic autonomy.

Speaker

Berin Szóka


Reason

This comment connects digital regulation to broader geopolitical realities, revealing how military dependence can undermine regulatory sovereignty. It’s a stark assessment that challenges European assumptions about their regulatory independence.


Impact

This comment fundamentally reframed the discussion by connecting digital policy to defense and trade issues. It led to conversations about the need for European technological independence and influenced the final messages about regulations becoming ‘pawns in disputes on trade and defence policies.’


Community Notes is a great idea for a lot of things. The point is that it, by design, again, it doesn’t work for the things that matter most. If the question is who won the last election, you will never get a Community Note on that issue because certain parts of the community… deny that the last election was legitimate.

Speaker

Berin Szóka


Reason

This insight cuts through the community notes debate by identifying its fundamental structural limitation – it cannot work for the most divisive and important issues where consensus is impossible by definition.


Impact

This comment effectively ended the idealistic discussion about community-driven solutions and forced participants to confront the reality that some problems require editorial intervention rather than democratic consensus. It influenced the final discussions about the need for multiple approaches to content moderation.


Overall assessment

These key comments transformed what could have been a surface-level comparison of regulatory approaches into a sophisticated analysis of power, politics, and practical implementation challenges. The introduction of concepts like ‘jawboning’ and ‘strategic autonomy’ elevated the discussion beyond legal technicalities to examine the underlying power dynamics that determine whether regulations can actually be enforced. The historical context provided by Bayer helped participants understand that current tensions reflect deeper philosophical differences, while Yasur’s empirical evidence grounded theoretical debates in operational reality. Together, these insights created a more nuanced understanding of the challenges facing European digital regulation in an era of geopolitical tension and technological dependence.


Follow-up questions

How can the DSA guard against being mischaracterized and avoid being abused by future commissioners?

Speaker

Berin Szóka


Explanation

This is critical for preventing the weaponization of the DSA by commissioners who might be sympathetic to actors like Elon Musk, and for maintaining the law’s credibility in the global culture war context.


What specific legal safeguards should be added to the DSA to prevent content-specific enforcement?

Speaker

Berin Szóka


Explanation

The ambiguity in terms like ‘risks to civic discourse’ and ‘electoral processes’ needs to be addressed with explicit prohibitions, similar to existing prohibitions on monitoring user communications.


How can Europe develop strategic autonomy, particularly in military terms, to resist American jawboning on tech regulation enforcement?

Speaker

Berin Szóka


Explanation

Until Europe can stand up for itself militarily, the Trump administration may effectively have veto power over DSA enforcement and other European tech laws.


How can Europe produce tech services that Europeans want to use to strengthen its regulatory position?

Speaker

Berin Szóka


Explanation

Without European tech alternatives, the Commission will always play a weak hand, having to graft European values onto American creations.


Should the EU create a central, supranational independent supervisory body for platform and AI regulation?

Speaker

Judit Bayer


Explanation

This would address concerns about centralized enforcement while potentially reducing American influence compared to the current Commission-led approach.


How can community-driven content moderation tools be protected from the same manipulation problems that affect other platform functions?

Speaker

Nitsan Yasur


Explanation

If platforms haven’t solved inauthentic manipulation in general, community notes and similar tools remain vulnerable to the same coordinated inauthentic behavior.


What evidence and methodology should be used to prove systemic risks under Articles 34-35 of the DSA?

Speaker

Karine Caunes


Explanation

Research based on millions of tweets and content pieces could provide the hard evidence needed for the Commission to act on risk mitigation, but clear standards are needed.


How should the U.S. policy on considering anti-Semitic social media activity for immigration decisions affect European students?

Speaker

Brahim Baalla


Explanation

The policy’s vague definition could unpredictably affect thousands of European students, requiring diplomatic clarification to protect their rights and academic freedom.


How can trusted flaggers and fact-checking resources be maintained under the DSA if community notes become the dominant approach?

Speaker

Torsten Krause


Explanation

This is particularly important for protecting children as vulnerable users who need reliable content moderation and fact-checking services.


What is the role and effectiveness of the Code of Conduct within the DSA framework for platform risk mitigation?

Speaker

Marie Bonner


Explanation

Understanding how the transformation from Code of Practice to Code of Conduct changes platform obligations for disinformation risk mitigation is crucial for implementation.


How can Europe build alliances with democratic countries outside the US-EU axis to address global platform governance?

Speaker

Audience member (EDMO advisory board)


Explanation

With no immediate solution to the US-EU divide, Europe needs to work with other democracies, particularly in the Global South, to maintain democratic governance of digital platforms.


How should the European Media Freedom Act’s 2027 deadline for a personal interface for news be implemented as a long-term solution?

Speaker

Audience member (EDMO advisory board)


Explanation

This could provide access to guaranteed trustworthy news sources, potentially solving part of the disinformation problem if properly implemented.


Should recommender systems be suspended during electoral periods as a middle-ground approach to combating information manipulation?

Speaker

Karine Caunes


Explanation

This could address astroturfing and illegal content promotion without full platform shutdown, while being harder for the US to characterize as censorship.


How can a European fact-checking API be developed to enable transparent, community-driven moderation across platforms?

Speaker

Daniel (Youthdig)


Explanation

This would give users power to verify information collectively without state or corporate control, representing regulation through innovation rather than restriction.


How can effective judicial supervision be strengthened to ensure fundamental rights protection in the Council of Europe system?

Speaker

Berin Szóka


Explanation

The rule of law depends primarily on courts rather than just legislation, and the European Court of Human Rights needs to decide cases faster and apply more skepticism to speech restrictions.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.