Workshop 7: Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion?

13 May 2025 12:30h - 13:30h

Session at a glance

Summary

This EuroDIG session examined the complex relationship between generative AI and freedom of expression under Article 10 of the European Convention on Human Rights. The discussion was moderated by Giulia Lucchese from the Council of Europe, who noted that an expert committee is currently developing guidance on this topic, with a public consultation planned for summer 2025.


Andrin Eichin, chair of the expert committee, outlined key structural implications of generative AI on freedom of expression. While these systems enhance access to information and lower barriers to creative expression, they also tend to standardize outputs and reduce linguistic diversity, potentially diminishing minority voices. He highlighted concerns about content integrity, attribution problems, and the emergence of “hallucinations” where AI generates false information. Eichin also discussed AI’s persuasive capabilities and its role as a new informational gatekeeper, creating what he termed an “audience of one” where individuals receive hyper-personalized content that fragments public discourse.


Alexandra Borchardt presented findings from her research on newsrooms adopting generative AI, revealing a fundamental contradiction: journalism focuses on facts while AI calculates probabilities. Despite this tension, many news organizations are embracing AI for tasks like transcription, translation, and content personalization. However, newsrooms are proceeding cautiously due to concerns about accuracy and maintaining audience trust, which forms the foundation of their business models.


David Caswell argued for taking advanced AI forms seriously, citing rapid progress toward artificial general intelligence (AGI). He warned of society bifurcating into super-empowered individuals who leverage AI effectively and those who become distracted or disempowered by it. Caswell also highlighted risks of AI becoming a “complex system” that operates beyond human understanding, similar to current financial systems.


Julie Posetti emphasized the interconnection between privacy rights and freedom of expression in the AI context. She discussed how AI can be weaponized for disinformation campaigns and highlighted gender-specific risks, including the use of deepfakes to silence women journalists and political figures. The session concluded with participants agreeing that while AI presents opportunities for enhanced expression and journalism, it requires careful governance to protect human dignity and democratic values.


Key points

## Major Discussion Points:


– **Structural implications of generative AI on freedom of expression**: Including enhanced access and improved expression capabilities, but also risks of standardization that could diminish unique voices and minority languages, plus integrity issues around content attribution and the rise of “hallucination” in AI-generated information.


– **Impact on journalism and news media**: The contradiction between journalism’s focus on facts versus AI’s probabilistic content generation, opportunities for newsrooms to enhance efficiency and reach audiences, but significant challenges around accuracy, trust, business model sustainability, and maintaining human connections with audiences.


– **Risks of AI concentration and societal bifurcation**: Concerns about market concentration among AI companies creating new gatekeepers, the potential for society to split between “super-empowered” individuals who leverage AI effectively and those who become distracted or disempowered by it, and the emergence of AI as “persuasion machines” capable of unprecedented influence.


– **Surveillance and control implications**: The integration of AI with surveillance technologies (like AI glasses with facial recognition), the risk of creating new forms of social control and perception manipulation, and the intersection between privacy rights and freedom of expression in an AI-dominated landscape.


– **Gender and diversity concerns**: How generative AI can be weaponized through deepfakes and technology-based violence against women, particularly targeting female journalists and political figures, and the broader implications for silencing marginalized voices and reinforcing existing inequalities.


## Overall Purpose:


This was a EuroDIG session examining the relationship between generative AI and freedom of expression under Article 10 of the European Convention on Human Rights. The discussion aimed to explore both the opportunities and risks that generative AI presents for freedom of expression, with particular focus on implications for journalism, democratic discourse, and human rights protection. The session also served to inform ongoing work by the Council of Europe’s expert committee developing guidance on this topic.


## Overall Tone:


The discussion maintained a serious, analytical tone throughout, with speakers presenting both opportunities and significant concerns about generative AI’s impact. While there were moments of cautious optimism about AI’s potential benefits for journalism and creative expression, the overall tone became increasingly concerned and urgent as speakers addressed risks like surveillance, manipulation, and societal fragmentation. The tone was notably alarmed when discussing issues like the “cult-like” behavior of AI industry leaders and the potential for AI to create new forms of social control, ending with calls for collective action and human rights-centered governance approaches.


Speakers

**Speakers:**


– **Giulia Lucchese** – In-person moderator; works at the Council of Europe in the Freedom of Expression and CDMSI Division


– **Online moderator (João)** – Remote moderator for the online session


– **Andrin Eichin** – Senior Policy Advisor on Online Platforms, Algorithms, and Digital Policy at the Swiss Federal Office of Communications (OFCOM), Switzerland; Chair of the Expert Committee MSI-AI tasked with drafting the Guidance Note on the implications of Generative AI on Freedom of Expression


– **Alexandra Borchardt** – Senior journalist, leadership professor, media consultant and senior research associate at the Reuters Institute for the Study of Journalism at the University of Oxford; author of the EBU News Report 2025 “Leading Newsrooms in the Age of Generative AI”


– **David Caswell** – Product developer, consultant and researcher of computational and automated forms of journalism; member of the MSI-AI expert committee drafting the guidance note


– **Julie Posetti** – Feminist journalist, author and researcher; Global Director of Research at the International Centre for Journalists and Professor of Journalism at City, University of London


– **Audience** – Multiple audience members asking questions during the Q&A session


– **Desara Dushi** – EuroDIG Programme Committee member (Vrije Universiteit Brussel) who drafted the session’s workshop messages




Full session report

# EuroDIG Session: Generative AI and Freedom of Expression Under Article 10 ECHR


## Executive Summary


This EuroDIG session examined the relationship between generative artificial intelligence and freedom of expression under Article 10 of the European Convention on Human Rights. Moderated by Giulia Lucchese from the Council of Europe’s Freedom of Expression and CDMSI Division, with online moderation by João, the discussion brought together technical experts, journalists, researchers, and policymakers to explore both opportunities and risks that generative AI presents for democratic discourse and human rights protection.


The session directly informed ongoing work by the Council of Europe’s Committee of Experts on the Implications of Generative AI on Freedom of Expression (MSI-AI), which is developing guidance on this topic, with a public consultation planned for summer 2025 and a final guidance note expected by the end of the year. The discussion revealed both consensus on key risks and significant disagreements about timelines and appropriate responses to AI development.


## Key Participants and Their Perspectives


### Andrin Eichin: MSI-AI Committee Framework


Andrin Eichin, Senior Policy Advisor on Online Platforms at the Swiss Federal Office of Communications and Chair of the Expert Committee MSI-AI, presented the committee’s analytical framework for understanding generative AI’s impact on freedom of expression. He outlined both opportunities and risks identified in their ongoing work.


On opportunities, Eichin highlighted how AI enhances access to information and lowers barriers to creative expression through intuitive interfaces that reduce requirements for language skills and technical expertise. These systems can democratize content creation by making sophisticated tools available to broader populations.


However, he emphasized significant risks, including standardization and reduced diversity. Because these systems are statistical machines that reflect dominant patterns in their training data, they tend to reinforce existing biases while diminishing minority voices. He provided specific examples, including the Italian “brain rot” social media trend, gender bias in AI-generated images of professionals, and Google Gemini confidently explaining idioms that do not exist.


Eichin introduced the concept of the “audience of one” – where individuals interact with AI systems separately and receive hyper-personalized content that no one else receives. This could erode shared public discourse and increase societal fragmentation. He also noted concerns about new economic gatekeepers in the AI space creating market concentration risks.


Despite these concerns, Eichin described himself as “less pessimistic and doomy” than some other experts about how quickly the most advanced forms of AI will arrive, suggesting a more measured approach to governance timelines.


### Alexandra Borchardt: Journalism’s AI Contradiction


Alexandra Borchardt, Senior Research Associate at the Reuters Institute for the Study of Journalism at Oxford University and author of the EBU News Report 2025, presented her research on AI adoption in newsrooms. She highlighted a fundamental tension: “Journalism is about facts and generative AI calculates probabilities.”


Despite this contradiction, Borchardt’s research showed news organizations are adopting AI for various applications. She provided specific examples including RTS’s story angle generator, Swedish Radio’s news query system, and Bayerischer Rundfunk’s regional news updates. The technology offers opportunities for transcription, translation, and content personalization through “liquid formats” that adapt content for different platforms.


However, newsrooms face challenges maintaining audience trust, which Borchardt identified as journalism’s core business model. She emphasized that accountability will become a rare and valuable commodity, quoting the challenge: “try to hold an algorithm accountable.” News organizations must focus on meaning-making and quality journalism that provides unique value beyond what AI can generate.


### David Caswell: Urgent AI Development Timeline


David Caswell, a product developer, consultant and researcher of computational journalism serving on the MSI-AI experts committee, presented the most urgent perspective on AI development. He argued for taking seriously expert predictions that artificial general intelligence (AGI) – defined as “AI that’s as smart as the smartest individual human in any digital domain” – could emerge within “two, three years.”


Caswell warned of potential societal bifurcation between “super-empowered” individuals who effectively leverage AI tools and those who become distracted by them. For some, AI represents “having your own personal newsroom,” while for others, “it’s an escape. It’s a distraction. It’s a way out of reality.”


He highlighted AI’s capabilities as “persuasion machines” with demonstrated effectiveness 3-6 times greater than human baseline performance in influencing beliefs and behaviors. Despite these concerns, Caswell explicitly stated his intention to be “excited and optimistic” about AI’s potential for dramatically increasing societal awareness and problem-solving capabilities.


Caswell also noted the difficulty of opting out of AI systems as they become integral to economic participation, comparing future AI avoidance to choosing to live like “Amish or Mennonite communities.”


### Julie Posetti: Human Rights and Gender Perspectives


Julie Posetti, Global Director of Research at the International Centre for Journalists and Professor of Journalism at City, University of London, emphasized the intersection between privacy rights and freedom of expression in AI governance. She highlighted how AI can be weaponized for disinformation campaigns and drew attention to gender-specific risks, including deepfakes used to silence women journalists and political figures.


Posetti critiqued characterizations of AI industry leaders as neutral experts, referencing a video clip about Silicon Valley leaders operating like “a cult” where “these men see themselves as prophets.” She argued for separating “independent expert perspectives from those who stand to massively profit from the technology they’re propagating.”


She warned against “platform capture” similar to Web 2.0 experiences and advocated for AI governance approaches that embed human rights considerations from the outset, protecting both “data and dignity.”


## Q&A Session Highlights


The session included audience questions addressing several key concerns:


**Technical Implementation**: Questions arose about how news organizations can practically implement AI while maintaining editorial standards and human oversight.


**Economic Sustainability**: Participants asked about the long-term viability of journalism business models in an AI-dominated landscape, particularly regarding potential paywall circumvention.


**Regulatory Approaches**: Discussion focused on how governance frameworks can keep pace with rapid technological development while balancing innovation with rights protection.


**Social Cohesion**: Concerns were raised about AI’s potential to further fragment public discourse and erode shared information foundations necessary for democratic societies.


## Areas of Consensus


### Information Integrity Challenges


All speakers agreed that AI poses significant risks to information integrity through hallucination, attribution problems, and potential for disinformation campaigns. They recognized that AI systems create new forms of gatekeeping and control over information access.


### Trust and Accountability


Speakers emphasized that trust and accountability represent critical values for journalism and democratic discourse in the AI age, with these human-centered values potentially becoming competitive advantages over AI-generated content.


### Need for Serious Policy Attention


Despite different perspectives on specific approaches, speakers agreed that AI development requires coordinated responses from policymakers and civil society rather than isolated efforts.


## Key Disagreements


### Timeline and Urgency


The most significant disagreement concerned AGI development timelines. Caswell presented an urgent 2-3 year timeline requiring immediate attention, while Eichin suggested longer timescales and described himself as less pessimistic about immediate urgency.


### Industry Characterization


Speakers disagreed about AI industry leaders’ credibility and motivations. Caswell characterized figures like Sam Altman as experts whose predictions should be taken seriously, while Posetti argued these individuals represent vested commercial interests rather than neutral expertise.


### Discourse Framing


Caswell emphasized being “excited and optimistic” about AI possibilities, while Posetti argued for more direct articulation of concerns and risks, representing different approaches to public AI discourse.


## Workshop Conclusions


The session concluded with workshop messages drafted by Desara Dushi and refined through participant feedback. Key conclusions included:


**Balanced Approach**: Recognition that AI presents both significant opportunities for democratizing expression and serious risks requiring governance attention.


**Human-Centered Values**: Emphasis on maintaining human agency, accountability, and meaning-making functions in AI-enhanced information systems.


**Adaptive Governance**: Need for regulatory frameworks that can evolve with technological development while protecting fundamental rights.


**Multi-Stakeholder Engagement**: Importance of involving diverse perspectives, including gender and diversity considerations, in AI governance processes.


## Implications for MSI-AI Committee Work


The discussion provided valuable input for the Committee of Experts’ ongoing guidance development. The strong consensus on information integrity risks suggests guidance should prioritize measures addressing hallucination, attribution problems, and AI-enabled disinformation. The emphasis on trust and accountability indicates frameworks should maintain space for human agency in information systems.


However, disagreements on timelines and industry characterization highlight challenges in creating guidance that accommodates different perspectives on urgency and appropriate responses. The committee must balance immediate concerns with longer-term considerations while avoiding both premature restrictions on beneficial innovation and inadequate protection against identified risks.


## Next Steps


The insights from this session will inform the MSI-AI committee’s continued work, with a public consultation planned for summer 2025 and final guidance expected by the end of the year. The discussion emphasized that governance frameworks must be adaptive and responsive while maintaining core commitments to human rights, democratic values, and protection of vulnerable populations from AI-enabled harms.


The session reinforced that the future of freedom of expression in the AI age will depend on choices made by policymakers, industry leaders, and civil society about how these technologies are developed, deployed, and governed, with implications extending far beyond technical considerations to fundamental questions of human agency and democratic participation.


Session transcript

Giulia Lucchese: Afternoon, everyone. Thank you very much for joining the EuroDIG session dedicated to Generative AI and Freedom of Expression, Mutual Reinforcement or Forced Exclusion. My name is Giulia Lucchese. I work at the Council of Europe in the Freedom of Expression and CDMSI Division. And I will be your in-person moderator for the next hour. I immediately pass the floor to the EuroDIG Secretariat to walk us through the rules applying to this session. Thank you. Welcome.


Online moderator: I am João. I’ll be your remote moderator for the online. And I’ll be reading the rules. Session rules. Please enter with your full name. To ask a question, raise hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation, and do not share links to the Zoom meetings, not even with your colleagues.


Giulia Lucchese: Thank you very much, João. Easy rules. We can keep them in mind. Now, with the session, we are looking into the potentials and risks inherent to the use of Generative AI when this affects, somehow, freedom of expression understood under Article 10 of the European Convention on Human Rights. We should consider the profound impact that this has on freedom of expression today, or could have in the near future. Please note that on this topic, the Council of Europe is already working. Indeed, at this very moment, we are elaborating a guidance note on the implications of Generative AI on freedom of expression. An expert committee is dedicated to this task, which is the MSI-AI. And if everything goes well, we will have a guidance note by the end of this year. Now, let me introduce our outstanding panel. We have Andrin Eichin, Alexandra Borchardt, David Caswell, and Julie Posetti. I’m absolutely honored to have you here. Thank you very much for accepting the invitation. The first speaker is Andrin Eichin. He’s Senior Policy Advisor on Online Platforms, Algorithms, and Digital Policy at the Swiss Federal Office of Communications (OFCOM), Switzerland. Andrin is also the Chair of the Expert Committee tasked to draft the Guidance Note, the MSI-AI. Andrin, could you please help us to set the scene, understand what the challenges are, what we are dealing with, and why we should care? Thank you.


Andrin Eichin: Thank you very much, Giulia. Hi, everybody. As Giulia said, I have the honor to currently serve as the Chair of the MSI-AI, the Committee of Experts on the Implications of Generative AI on Freedom of Expression. Here you can see our expert committee. We have been tasked to develop guidelines on the implications of generative AI on freedom of expression by the end of 2025. So it's too shortly. You cannot see the whole slide, so maybe I'm not sure whether we can remove the panel on the side. Okay. So I will try to share with you some of the implications that we are currently considering. I hope this will set the scene for the discussion that we are having afterwards. Let me stress that what I present today is only just a glimpse of the work that we're doing. Unfortunately, we don't have the time to go into all of it, but I want to highlight, for those of you that are interested, that we aim to have a public consultation on the document in summer of this year. So stay tuned for that. Now let me dive into some of the structural implications that we are looking at. The first implication we look at is with regard to enhanced access, better understanding and improved expression. You all know these interfaces by now, and I could have added many others. They are easy and intuitive to use. Many generative AI systems improve access to information and make interaction with text, audio and video easier, and maybe as easy as never before. They allow us to better access and receive information, and they lower or even remove barriers to language, technical and artistic skill, and sometimes even for people with disabilities. But they also have other abilities, and this is maybe a bit lighter, and some of you might know this. This is the latest social media trend called Italian brain rot. I don't want to get into the cultural value of this. We can discuss this afterwards during the coffee break. But the point is this new social media trend is entirely made by generative AI, and it shows that these systems also facilitate creative expression, including art, parody, satire, or just silly little characters that make us laugh on social media. The second implication that we are looking at is touching on diversity and the standardization of expression. Generative AI systems are statistical and probabilistic machines, as you know, and as such, they tend to standardize outputs and reflect dominant patterns in training data. And studies already show today that this can reduce linguistic and content diversity. And of course, with regard to freedom of expression, this has a potential to diminish unique voices, including minority languages and underrepresented communities. And of course, there is also a risk of reinforcing existing inequalities and stereotypes. I'm sure we have all heard about the impact of data biases, and I guess you will have seen this or a variation of this picture already. In this example from DALL-E, the prompt for the upper picture was to depict somebody who practices medicine or runs a restaurant or business, and DALL-E suggested only men. When asked to generate images of someone who works as a nurse in domestic care or as a home assistant, it suggests women. And of course we see various elements and variations of this with other characteristics as well. Next, perhaps the most talked about implication: integrity and the attribution of human expression. It is widely known that AI tends to hallucinate, that is, make up facts or fill in elements it does not have.
And you again know various different examples of this. This is a very recent example where Google Gemini, in its new AI Overview on top of Google Search, comes up with explanations for entirely random and made-up idioms and sayings, like here, where it tries to explain what “never wash a rabbit in a cabbage” means, or what “the bicycle eats first” means. Of course, this is very funny, but these are top-of-the-page explanations on Google. Here they are in a very benign and certainly not harmful context. But how does this affect other information that we rely on to be factual? Besides just hallucination, we also see that there is a problem of source attribution and therefore dissociation from authorship. We don't know anymore who creates content, if it was human, and if we can trust its integrity. And this of course makes the system prone to be used to deceive, impersonate or manipulate. They allow to mimic individuals, including through deepfakes and voice cloning, like last year with Keir Starmer's voice cloning ahead of the UK elections. Or like in the Doppelganger case, to spoof legitimate news sources and spread disinformation by abusing a media brand to imply trustworthiness. The next structural implication we're looking at is agency and opinion formation. Various new studies show that generative AI systems can engage in very effective persuasion through hyper-personalization and ongoing user interaction. They can really influence beliefs and opinions of human beings by using psychological tricks. And of course, this is highly relevant in the context of opinion formation. And I think, David, you will mention this later on in a bit more detail. The next implication is media and information pluralism and the impact AI has on information pluralism. While AI can enhance media efficiency, it also introduces a new economic and informational gatekeeper. Here is a ChatGPT search from yesterday that I made when I asked for a summary of current news across Europe. We see a couple of relevant and interesting themes here. For example, with regard to the selection and prioritization of content, number three on the list was the power outage from now already two weeks ago. Clearly important. Is it the most relevant thing that happened yesterday? Probably not. We also tend to have something that is positive. We start to have transparency and traceability. ChatGPT provides us with sources, but it's currently not clear which sources are selected, why and on what basis I see them, and whether this is just based on my news consumption or if other readers would see a similar source selection. And this is exactly the point that creates an entirely new challenge that we are dealing with, what we call in our guidance note the audience of one. This stands for an information environment where everyone interacts with generative AI systems and AI-powered information separately and receives hyper-personalized and unique content which will not be received by anyone else. And this in turn potentially erodes shared public discourse, increases fragmentation and can lead to even more polarization. Because of time I will only say very little about the last implication, market dynamics. We know that in some areas of the generative AI market, especially when we look at the foundation layer and the models, the market tends to be highly concentrated. And of course a highly concentrated market with single individual players that have a lot of power raises concerns about market dominance and freedom of expression.
I’ll stop here for the time being and I’m sure we’ll have more time to discuss these elements in more detail. Thanks.


Giulia Lucchese: Thank you. Thank you very much, Andrin. Precious introduction. Thank you for sticking to the time and also for stressing the opportunity to engage in the public consultation on the guidance note. This is a very interesting opportunity for the audience at large, so please keep an eye on the freedom of expression website of the Council of Europe, because the guidance note will normally be made available during the summer for comments to be received from whoever has a keen interest in the area. Now the next speaker is Alexandra Borchardt. She's a senior journalist, leadership professor, media consultant and senior research associate at the Reuters Institute for the Study of Journalism at the University of Oxford. Alexandra was also very recently the author of the EBU News Report 2025, “Leading Newsrooms in the Age of Generative AI”. Alexandra, you interviewed over 20 newsroom leaders and other top researchers in the field. Would you like to share with us your findings and add further reflections? Thank you.


Alexandra Borchardt: Yeah, thank you so much, Giulia. And thanks everyone for being in the audience. We have an almost full room here, and thanks also to everyone who joins remotely. Yeah, Leading Newsrooms in the Age of Generative AI is already the second EBU News Report on AI. The first was Trusted Journalism in the Age of Generative AI. And this is also a public service: these reports can be downloaded freely by everyone, without registering. And it's a qualitative piece of work. And I'm so glad Andrin set the scene and also alerted you to the risks. And that gives me an opportunity to also show you something about the opportunities. But first of all, I wanted to start with a provocation here: the two contradict each other, if you really put it clearly. Journalism is about facts and generative AI calculates probabilities. In fact, I learned this as an expert in the expert committee on quality journalism here. And accuracy is the very core of journalism. It's really at the core of the definition. Nevertheless, there are lots of opportunities that newsrooms see. And you might be surprised to see, after the elaborations before, that so many in the media industry are actually excited about AI. Because it actually helps them with all kinds of things. It helps them with news gathering, for example, in data journalism, doing verification, document analysis, helping them to augment images, brainstorming for ideas. There's lots of stuff there. It helps them with news production: transcribing, translating, helping with titling, subtitling and particularly liquid formats. This is a key word here: switching easily among different formats or between formats, converting text to video, vice versa, audio. So everyone gets what they like, the audience of one that was just referred to. And then, in the end, news distribution. You can personalize news. You can address different audiences by different needs, also, for example, by their location and their preferences, all kinds of things that really help. And this is Ezra Eeman, director of strategy and innovation of the public broadcaster of the Netherlands, one of them. And he says: with Generative AI, we can fulfill our public service mission better. It will enhance interactivity, accessibility, creativity. It helps us to bring more of our content to our audiences. And there are actually some examples, and there are nine use cases in this report. And actually we had 15 in the previous report. And I just touched on three of them to give you a clear example. For example, some internal thing that RTS in Switzerland developed, the story angle generator. This is for like day two after a breaking news situation, when newsrooms might run out of steam a little bit and lack ideas about what to do next. And this angle generator gives them an idea like, oh, maybe you can produce some entertaining stuff or some explanatory journalism out of this. So it really helps them to be more creative with one news piece. Also, we will see a lot more chat formats. And this is from Swedish Radio. They, together with the EBU, developed this news query thing where you can actually interact with news. And then last but not least, and I'm German and based in Munich, so you will see the regional update that Bayerischer Rundfunk developed, where you can put your postal code in and then sort of draw a line around what kind of region you want your news from. And then it will create automated podcasts for you to listen to. So you're always up to date on what's in your region.
Nevertheless, when I was commissioned to do the second report, I was expecting actually that much more would have happened. But no, while the tech industry is really forging ahead at speed, the media companies are much slower. They are taking a much more intentional approach, and that for a good reason, because the trust of their audiences is at stake and therefore their business models, because the major business model of journalism is audience trust. If you lose trust, you lose a business model. In fact, audiences are really quite tolerant about how newsrooms use AI. They find it totally okay if they use it for things like brainstorming and image recognition, or automating layouts, like these print layouts no one wants to put effort into any longer, but they are absolutely skeptical when it comes to stuff like generating a virtual presenter or visualizing the past. This is what studies reveal. Nevertheless, these audience perceptions are strongly influenced by people's own experience with using AI, so they are most likely to shift their attitudes about what is acceptable and what is not. And this is Jiri Kivimäki from the Finnish broadcaster Yle, and he said, we started labeling these AI summaries and our users actually said, hey, come on guys, we don't care what you use it for, just do your job. We trust you that you do the right thing. So they got really angry, he said, which is really interesting. And I will confront you with three big questions that the report actually revealed and that can be discussed and that newsrooms and the media industry will discuss. The first big question is about accuracy. I already mentioned that, the accuracy problem, how to solve it. And there was BBC research that came out in March this year that actually showed that when AI assistants took news content and served people with news from it, there was an accuracy problem in every second piece of news. And that is a problem the media has to face, because accuracy is at the very core of the definition of journalism. And Peter Archer, the Director of Generative AI at the BBC, says we need a constructive conversation in the industry; the tech industry and the media industry need to team up. And we need to be part of this, because the tech companies too can only be interested in having that problem solved. Big question number two, and I'm particularly fond of that one: will AI make people creative or will it make us lazy? And my response to that would be, well, if we want to be creative, or if people want to be creative, AI can make people more creative. But if you just want to offload work, just press a button, not think about something, it can also make you lazy. This is Professor Pattie Maes from the MIT Media Lab, and I really appreciate her input to this report. And she said, actually, this is not a given. We can actually tease people a little bit so that they are creative. It is possible to build AI systems that challenge the user a little bit. And we don't have to simplify everything for everybody. And I find that quite important. And the third big question is: will there be money in it? And that's a big question for newsrooms. Will their business model survive? Because the visibility of journalism is threatened, and we will learn more about that. And also the huge dependence on these tech companies. And Professor Charlie Beckett, he's the director of the Journalism AI Program at the London School of Economics.
He said, yeah, but if you are entirely reliant on AI, what happens if, you know, the tech companies put up the price fivefold or suddenly change what the stuff can do? So we are in the hands of tech companies, and it is really important to be aware of these dependencies. And the big question really then is, as I just mentioned, how to keep journalism visible. Because as content has become a commodity and is being produced at scale, it will be more important than ever to invest in the journalism and in direct human connections with audiences, to really establish the legitimacy of journalism in this age of content abundance. And there's Laura Ellis, also from the BBC, who said something that I found very smart: if we just automate everything, because it's so easy to automate, will we then lose connections to our audiences even further? Will we still have someone in our newsrooms who speaks with that voice of the audience? So that is really something that we should consider. So to finish up with this, what do news organizations need to do? And I'm not going into what regulators need to do, but just plainly news organizations. Mostly, investing in quality journalism is key, really, to secure their survival and maintain their legitimacy as the providers of trusted news and information. Building direct audience connections, really knowing who they serve and actually getting those email addresses and connections so that you can actually reach your audiences, because the platforms will otherwise be determining and controlling all your access to audiences. Then also making things easy in the newsroom, so that people in the newsroom actually adopt these AI tools and use the right tools to begin with, but don't make it too easy. Really don't let people stop thinking about it. And then the human qualities of it all. Be accessible, approachable, and accountable, and be human. This will be a decisive quality for news organizations. And let me conclude with a quote by Anne Lagercrantz, who's the director general of Swedish Television, and she puts it very clearly: journalism has to move up in the value chain. In other words, journalism has to get a lot better, because the copy-and-paste journalism that we are still confronted with these days doesn't serve us well any longer. And she said also something very important: journalistic institutions, media institutions need to be accountable, because accountability will be a rare commodity. She said in our talk, in our interview, try to hold an algorithm accountable. Maybe try to hold a platform company accountable. But we are there. People can walk up to our front steps and hold us accountable. And that is really important. And she also reminds us that journalists will need to shift from just being content creators and curators to meaning makers, because we need journalism to make meaning of this complex world and an overabundance of choices. Thank you.


Giulia Lucchese: Thank you very much, Alexandra, this was very insightful. Notwithstanding the clear contradiction, I was at least pleased to learn about the opportunities for news outlets, but also the creative use made of generative AI. Thank you also for stressing the concepts of accuracy, trust, but also accountability. Now, without further ado, I invite our next speaker to intervene. David Caswell is a product developer, consultant and researcher of computational and automated forms of journalism, and is also a member of the MSI-AI, the expert committee drafting the guidance note we mentioned before. David, please, would you provide us with your perspective on upcoming challenges? And I hope you do have solutions for them.


David Caswell: Yes, solutions. That's the big question. I'll just go through where I see kind of the state of the future, I guess, and then maybe a couple of solutions or prospective solutions at the end. So what I'm going to do in these seven minutes is to just try to persuade you as to why you should take the more exotic forms of AI that you kind of hear talked about, AGI, superintelligence, seriously, and then kind of connect that with some of the risks, and maybe a few opportunities, in journalism and in expression, human expression and information more broadly. And so to take, you know, these forms of AI seriously, you know, one reason to do that is to look at the trend lines from the last half decade. And on every trend line, you can look at the benchmarks, the scaling laws, the reasoning abilities. Essentially, we have maxed out the benchmarks, we've got to 100% and can't go any further. There's a real problem right now in AI about how to measure how smart these things are, because the benchmarks are saturated. And things are just getting started. We've got literally more than a trillion US dollars in soft commitments for AI infrastructure that have been announced in the last year, 18 months. And some of that is not going to happen and all the rest of it. But it's a vast, vast amount of money, right? It's money on the scale of the, you know, the moonshot that the US did in the 60s. And the effects of that investment haven't begun to show up yet. So another reason we should take AI seriously is because the experts are taking it seriously. So Sam Altman at OpenAI does not think he's going to be smarter than GPT-5. Dario Amodei, the CEO of Anthropic, another big model maker, likens what's coming to a country of geniuses in a data center. So say a country of 5 million people, each of them an Albert Einstein, in a data center in San Antonio, Texas. That's the kind of thing to imagine here. And you see this again and again and again. These people do have biases, but only in the same way that climate scientists have biases and vaccine experts have biases. We listen to those experts, and we maybe should listen to these experts a little bit too. Maybe not completely, but a little bit. We do have independent studies of this by very, very qualified and principled people. There's one I highly recommend, the AI 2027 report. But the interesting thing, both in the experts and in these independent analyses, is that even the critics of this concept of AGI and superintelligence, even they accept that dramatic things are gonna happen. So even the critics, even the people who are downplaying what's going on, are still painting a pretty dramatic picture. Another reason that we should take AI seriously is because of consumer adoption. So if we look at the use of AI, this is from work that was done by the US Federal Reserve back in September. At the moment, for example, about a quarter of the US working population uses generative AI at work once a week or more. If you look at it on the content side, if you look at the entire amount of text content produced in the US, in major areas, significant portions of that are already generative-AI generated. So for example, about a quarter of all press releases, corporate press releases, are AI generated. So this stuff is showing up very, very rapidly, already in double digits in use, in weekly use, and in content. Another reason to take this stuff seriously is to just play with the tools.
Like honestly, everybody here: sign up for the tools, sit down, play with the most advanced models, really exercise them, learn what these reasoning models can do, learn what tools like deep research can do, or agents like Manus. These are kind of the leading edge of where AI is, but they're completely accessible. You don't need technical skills. You don't need special access. You just need a little bit of curiosity. And if you play with those tools and really exercise them on a subject that you know well, you will be pretty convinced that big things are coming. So I would suggest that engaging with the tools and judging for yourself is a good reason to take it seriously. And then we should look at sort of the progress over the last half decade on the largest possible benchmarks, benchmarks on the largest scale. So for most of my life, the big sort of golden ideal of AI was the Turing test, passing the Turing test. Well, we passed that in about 2019, and we didn't even notice it. So that's gone. The next sort of milestone, large, large benchmark here is AI that's as smart as the sort of the regular average median modal human in a vast array of tasks, the most digitally accessible tasks. That's kind of gone. If you've played with these tools at all recently, you'll see that they can draw better, they can write better, they can reason probably better, they can do most things better than the average or median human. Another possible benchmark is AI that's as smart as the smartest individual human in any digital domain. And this is what is my personal definition of AGI. It's what a lot of people think of as AGI. We are not quite there. That's a dashed line. But we are almost there. If you really get involved with some of these reasoning models on a subject that you know well, you will see that, you know, pick your topic, you will see that we are making significant, the models are making significant progress in that direction. So there's a reasonable case we're going to get to that point within a couple of years, two, three years. And then lastly, there's this other category: human beings are smart not just because we're individually smart, we're smart because as a society of 8 billion people, we can do amazing things. And this idea that we could have models or machines that are smarter than all of us collectively, sometimes called superintelligence, that's taken very, very seriously by some very serious people in this world, not just people at the model companies, but startups, investors, governments, and so on. A little further out, but pretty significant. So there are risks, obviously, with all of this. One risk, and this was something spoken about earlier, is this risk, this significant risk, of the bifurcation of societies into super-empowered and disempowered people. So if you look at all the possibilities in media that generative AI can bring, for some people, it is like having your own personal newsroom. It's like having your own army of academics and researchers and analysts. It's like having your own personal central intelligence agency. It super-empowers what you can do. For others, it's an escape. It's a distraction. It's a way out of reality. It's a way to avoid dealing with things you need to deal with. And the thing here is that these are feedback loops. The more empowered you are, the more empowered you become, and the more distracted and confused and escape-focused you are, the more it goes that way.
And so you end up with some parts of society having a dramatic gain in the agency that they have, and some losing agency. So that's a risk. That's a very real risk. That's already happening to some degree. Here's another risk: news as a complex system. So here's a kind of a series of events in a newsroom, say your average newsroom. Step one, AI shows up. You say, right, we can use this to make our jobs as journalists easier. That's great. Then you say, well, we can actually use it to do whole jobs that we don't wanna do. These are jobs that we don't like or that we have trouble filling. We'll just get AI to do those jobs. Well, that's all right. Then you're in this situation where you have AI and it's doing most jobs. So you can go home. You can have a three-day week, or you can come in at 11 and go home at three, because the AI is doing most of the jobs. And that sounds kind of nice, right? And then you get to this point where: what exactly is the AI doing? You know, I haven't been checking in for a few weeks, and what is it doing? And then you're at this point where you don't know where your information is coming from. The whole ecosystem works as it works now. Your phone has got alerts and you've got news on webpages and you're talking to ChatGPT about news and all the rest of it, but you don't know where that's coming from. The situation is that it's got so complex that it's a complex system. And this idea of big chunks of our society being a complex system, our financial system went that way. Very few people understand how the financial system works, even though we all depend on it. And there are researchers, many researchers right now, who study the financial system as a complex system. Here's another reason to take AI seriously in terms of risk, which is this idea of persuasion machines. And we got a little early glimpse of that recently from this study by a team at the University of Zurich. And what they basically did was they put a set of AI agents on Reddit, on a subreddit called Change My View. And Change My View is a subreddit where you kind of put a point of view and then, if somebody changes your mind, you award them with a little bonus point. And so they were able to use that setup to do this very, very high-scale test. And there were ethical issues around the study, so it's kind of a little obscured. But in the paper that they would have published had it passed the ethics guidelines, they found that these models could achieve persuasion rates between three and six times higher than the human baseline. So the idea of machines that are hyper-persuasive for political or for commercial purposes: not a far-fetched idea at all. And so just in legacy news media, finally: ChatGPT shows up in late
The change that’s coming is dramatic, change that’s here is dramatic, the change that’s coming is dramatic and there’s an open question here about whether legacy news media can take advantage of those opportunities. There’s other opportunities as well right, if you look at how informed societies are at the moment, why would we consider that to be an end state? You know if you take a scale here from medieval ignorance on one end, say a peasant in a village in 1425, to god-like super intelligence on the other end, we have come a long way along that scale using technology like the printing press, the invention of journalism, radio, broadcast, television, the internet, social networks. What might we be able to do in terms of informing society once we diffuse all of these AI tools we have at the moment into our ecosystem? What would we do with AGI? What might we do with super intelligence? So there really are opportunities here to dramatically increase the level of awareness that people have about their environment. I’ll just leave it there. Thank you, thank you.


Giulia Lucchese: Thank you very much, David, for addressing these exotic forms of AI, AGI, and their relation to human expression. It seems like we are running late on a lot of the challenges you listed, but you were also so kind as to conclude your presentation with opportunities, at the end at least. Last but not least, I pass the floor to Julie Posetti. Julie is a feminist journalist, author, researcher and professor. She is the Global Director of Research at the International Centre for Journalists and Professor of Journalism at City, University of London. Julie, I know you would like to offer your perspective on the issue by starting with a video.


Julie Posetti: I think we’re having trouble with the audio. We’re having trouble with the audio, is that right? We’re having trouble with the audio. You also want to live forever. If you think about AI and you think about God, what is God? God is this thing that is all-knowing, it’s all-seeing, it’s all-powerful, it transcends time, it’s immortal. If you talk to a lot of these guys, the very senior ones who are building the GI, artificial general intelligence, creating something that has all human knowledge put into it, that surpasses any single human in its understanding of the world and the universe, and that is everywhere connected to every device in every city and every home that’s watching you and thinking about you. And if we turn it on and let it start to influence society, that it’s very subtly making decisions about you, where you can kind of feel it a little bit, but you can’t see it or touch it. And then imagine you have a bunch of men who also want to live forever, defeat death, become immortal. And in order to do that, they have to find a way to connect themselves to this creation. These men see themselves as prophets. Brian Johnson, the guy that we had dinner with, literally said, and this is in the podcast, we’ve got it wrong. God didn’t create us, we’re going to create God, and then we’re going to merge with him. And all the weird things that these guys say and do, if you start to understand that there’s aspects of this that are like a cult, a fundamentalist cult, or a new religious movement, a lot of their actions start to make a lot more sense. And if you actually start to interpret these statements, not as just some passing flippant comment, but that there’s a pattern to it, I think that we’re dealing with a cult in Silicon Valley. OK, apologies for the issues with the sound and the video sync. That was a clip from a panel discussion at the International Centre for Journalists, sorry, the International Journalism Festival in Perugia last month. For those of you who have forgotten, Christopher Wiley is not just a commentator on AI, he was in fact the Cambridge Analytica whistleblower. He is the one who revealed the data scandal that saw millions of Facebook users’ data breached and compromised. And you’ll remember that the Cambridge Analytica scandal involved an early kind of iteration of AI tools that were designed to micro-target with macro-influencing in the context of political campaigns. So several people have said that his comments sound alarmist, but he also pointed out that we need to stop being so polite, that we need to actually articulate the concerns and the risks associated not just with the technology but with the business models behind the technology that are designed to further enrich billionaires who are actually those that stand to profit most from the mainstreaming of AI. And ultimately, as David has pointed out, the objective is superintelligence or AGI and then superintelligence. So it might sound alarmist but the facts are alarming and I think they’re particularly alarming and they should be particularly alarming for people and states and intergovernmental organisations that are invested in securing and reinforcing human rights in the age of AI. So as I said, Chris exposed the Cambridge Analytica scandal and when he talks about this desire for Omniscience and Omnipresence Among the AI Tycoons. I think it’s important to highlight the rights, the links between the rights to privacy and the rights to freedom of expression and freedom of thought. 
And he does that in a podcast, an investigative podcast that he was speaking about there, which was published by Coda Story, which is a global-facing investigative journalism outlet that emphasizes the identification of prescient trends, particularly with regard to disinformation and narrative capture. And that podcast is called Captured: The Secrets Behind Silicon Valley's AI Takeover. And I've used that example partly because Coda Story is one of the research subjects for a global study that I currently lead called Disarming Disinformation. And it's looking at the way news organizations are confronting and responding to and trying to counter disinformation, particularly in the context of the challenges and opportunities that AI presents. So I think it's important, as I said, to consider the right to privacy in combination with the right to freedom of expression, and therefore to think about AI in all its integrated forms, and the responses to it, holistically. So before I turn specifically to generative AI and freedom of expression, I also want to highlight the need to consider the implications of the AI of things. So in particular, the application of AI glasses, which do pose a significant risk to freedom of expression of the kind that relies on the right to privacy, such as investigative journalism that's dependent on confidential sources, such as Christopher Wylie. So he was initially the whistleblower who was a confidential source to start with, before he identified himself for The Guardian and The New York Times, Channel 4 and others, for the Cambridge Analytica reporting. And it's noteworthy that Mark Zuckerberg recently invited Meta's users, nearly a billion of them, to download a new AI app that will network and integrate all of their data, including from Meta's new or upgraded AI glasses, which include facial recognition. And that prompted John McLean to write in The Hill, a newspaper coming out of DC, that Mark Zuckerberg is building a new surveillance state. So again, I think we need to consider surveillance in the context of freedom of expression. And he wrote: these glasses are not just watching the world. They're interpreting. They're filtering and rewriting it with the full force of Meta's algorithms behind the lens. They'll not only collect data, but also send it back to Meta's servers to be processed, monetized, and repurposed. Facial recognition, behavioural prediction, sentiment analysis: they'll all happen in real time. And the implications are staggering. It's not just about surveillance. It's about the control of perception. That's a very important consideration when it comes to the function of independent journalism in democratic contexts, but also freedom of expression more broadly, and particularly issues around election integrity, for example, connected directly to information integrity. And coming back to generative AI specifically, here is an example from Australia, which we started to see replicated in the very recent Australian elections. So the ABC's, the Australian Broadcasting Corporation's, chief technology reporter, working with the fact-checking team, and in some ways using AI technologies to analyze large data sets through natural language processing, for example, identified the function of Russian disinformation in attempting to pollute chats. So as a way of polluting information, she referred to it as working the same way as food poisoning works. So inserting disinformation into large language models by flooding the zone with literally fake news.
One of these artificial news websites that they identified was called Pravda Australia, one of an iterative series of such titles. Its content is largely derived from Telegram chats full of Russian disinformation, and that disinformation is being surfaced in response to queries in the major chatbots people are using. So this is something that I think needs to be really carefully considered with regard to accuracy and verification, which are real challenges with ChatGPT or any other tool that you’re using to query large language models. The second point that I want to make is about the ability, therefore, to influence the outputs not just with the disinformation of foreign state actors or of a political persuasion, but also with hate speech and general disinformation connected to health, for example. If the objective is to radicalize certain citizens, or societies as a whole, and to roll back rights, then this is another weapon that the agents of such pursuits have available to them. And we heard an example yesterday from Neema Lugangira, who chairs the African Parliamentary Network on Internet Governance, of her experience of seeing generative AI used on X, Grok on X, to generate deepfakes, effectively. Her point was that generative AI can be used to reinforce sexist stereotypes, but also to generate misogynistic, hyper-sexualized images. And knowing what we know about deepfakes in the context of deepfake porn, we’re seeing this used against journalists and against political actors, as that example showed. So if we’re to look at opportunities, I think we also need to be aware of the tactics of those actors. They tend to be networked, they’re very creative, and they’re transnational, cross-border. So the challenge for us, those of us trying to reinforce human rights, the rule of law and democracy, is to act in similarly networked and creative ways. And I’ll leave it there. Thank you.


Giulia Lucchese: Thank you very much, Julie. I’m fascinated and surely alarmed right now. Thank you for starting with that tough, thought-provoking question to the audience. This comes at a very good moment, because now we open the floor for questions. Please, both online and in person, do not hesitate. I would only ask you to keep your questions focused, so that we can give voice to diverse participants. Who is willing to break the ice? Yes, would you start? Thank you.


Audience: All right. First of all, thanks to all of you for the very interesting insights. The point I want to raise refers to the first part of the presentation, and I would like your opinion on how you think it is going to develop in the future. We have academic research showing that when a group of humans uses AI to write an essay or a newspaper article, the output at the collective level becomes more similar; collectively, we get more homogeneous output. On the other hand, we always argue that AI is going to enable a more personalized experience, that the way we consume content will become more individual. To me, that seems somewhat contradictory, and I wanted to get your opinion on that point. Thanks.


Giulia Lucchese: Thank you. Should we collect a couple of questions and then, okay, thank you.


Audience: First of all, thank you very much for all the interventions. They were very inspiring and interesting. I have just a quick and simple question. We are still witnessing that, in many fields, AI does not make key decisions in relation to the production of content. Will we someday witness an AI, perhaps applied to press activity, telling us that we won’t get access to a certain piece of information or news because of a decision made exclusively by the system? Thank you very much.


Giulia Lucchese: Thank you. Yes, please.


Audience: Thank you to all the speakers. I have a question regarding something that was touched upon, I believe, in the first presentation: the fact that the use of, or dependency on, AI systems also makes journalism, and information in general, dependent on the prices set by these corporations. I was wondering how you see the quality of information diminishing in relation to the possibility of more paywalls being introduced, so that access to accurate and verified information also becomes a socio-economic issue. Thank you.


Giulia Lucchese: Thank you. I’ll take the last question. No, there’s not a last question. Oh, yes, there is. Please. Thanks.


Audience: Sorry, just quickly. It’s probably for David Caswell, if I may. You outlined those two clear directions in which you see society going, and I’ve thought about this before: the idea that you get massive intellectual polarization between individuals who just get distracted by social media, sucked into the algorithm, seeing ever simpler things and not engaging in intellectual acts of curiosity or writing, and those who really understand and comprehend the system. But could you not also argue that there’s a third category of people who just say: I don’t necessarily want to know? The financial system is different, because you have to be in the system of finance. But couldn’t you say there’s a third group who simply says: I don’t want to be in the system at all? Not being sucked in or, as it were, brainwashed, but completely escaping, getting out of the matrix. Do you think that it’s possible to achieve that?


Giulia Lucchese: Thank you. As I’m mindful of the timing, I would like to propose that, starting with Andrin, each of you reply to at least the one question you like the most, and then we move on. We may have to skip the final round, so if you have any final remark, please condense it into a two-minute intervention now. Thank you.


Andrin Eichin: I will answer just three of them, but very briefly, because I think they’re very good. The first question was about standardized output and how it interacts with individual expression. This is a really good question, and something we were considering in the expert committee as well. I think it really depends on what kind of tasks we’re looking at. There will be a lot of standardized tasks, writing emails, summarizing reports, where we will see standardized expression; this will probably increase, and it will also create a problem with regard to the data sets that we’re using. And then there is creative expression, where the way we interact with generative AI systems can expand our abilities, as we’re seeing in a creative way with memes, but also in journalism. So there will be areas where we get standardized expression, and areas where generative AI will expand our expression, both at the level of the text, with regard to words, and in relation to ideas. On the second question, whether AI will make key decisions with respect to content: I believe so, definitely. Maybe I’m less pessimistic and doomy than David may be; I think the timescale will be a bit longer. If we look at how productive the interaction with AI systems actually is today, it’s still quite low, and a lot of it is for entertainment purposes, but this will change. And this leads me to question four. Although you addressed it to David, I’ll try to jump in. Maybe this third, escaping form exists, but I don’t think it will last long. Again, we’re speaking about the future, not two or three years from now, but at some point generative AI systems will be such a part of our economy, so important for being productive and participating in the economy, that there will be almost no option to opt out. If you don’t use a smartphone today, it is very difficult to participate in society. And there are people who don’t use the financial markets today, but their lives look very different.


Alexandra Borchardt: Is it a contradiction? Will it make people more creative and reflective? Well, this really depends on the system’s design, and also, of course, on what you want to do with it, and that is where the socioeconomic differentiation might happen, as it already has with social media. With social media it’s also the case that if you wanted to get a lot of information, take the pandemic, for example, you could directly interact with scientists to find out everything. But if you didn’t have the basic knowledge, if you didn’t have the access, if you didn’t know which scientists were really good at this, maybe you didn’t get any information at all, or you only got the basic information from public service media. So it really depends on what people are going to do, but it also depends on the system’s design. And what Professor Pattie Maes said was that you can really challenge people a little bit just by asking one more question, not just pressing a button and getting the output. I don’t know if you’ve experienced that: produce a report for me; shall I format it for you and send it away? You can just do things without ever engaging your brain. But if there’s just one question back, like, do you think this really makes sense, or ask me something in return, that is the kind of system design that can help to engage people a lot more. I hope that makes you happy. Then I guess I’m the person for the paywall question, because I spent almost 30 years in the media industry, and I’m really worried about business models. The paywall situation we have already, and have had for some time: quality media, to survive, set paywalls to make people pay for news, which makes a lot of sense. You can’t really go to the bakery and help yourself; you need to pay. So the idea of many news organizations is: this is quality information, and if it’s worth something to you, then you pay. But AI can undermine paywalls and actually give you the output. So I have no idea whether the paywall is going to be the thing of the future. As generative search emerges and you just ask questions of an AI, you will get responses, and sometimes those responses contain material that is actually behind paywalls. So news organizations will need to do a lot more to engage people, to show them that they can create real value in their lives, and to really make them pay. And public service media will probably also become a lot more important and necessary in that context. So the future of the business model is really something that worries me. And the third question I would like to comment on: will these systems make decisions? Yes, of course. With agentic AI emerging, you set some goal, and then these agents, that’s why they’re called agents, independently make decisions on your behalf. But what these agents probably won’t do is, for example, investigative research, because they have no incentive to do so.
So this is most likely where journalism really needs to intensify its efforts: becoming more investigative, holding power to account, and going after the things that AI won’t do. But we might be surprised by what AI will do in the future, so I don’t think I can give you the final answers here. But David might. He’s the super expert here.


David Caswell: Well, sorry, I first want to clarify that I did not intend to be pessimistic and gloomy. I had been going for excited and optimistic, but obviously failed. I’ll just quickly go through my brief responses to some of the questions. The question about the balance between the narrowing of the distribution of expression on the one hand, and all of these opportunities to be more expressive, articulate, creative and artistic on the other, is a very real question, and the honest answer is: we don’t know. But I think it brings up very clearly the fact that this is a dynamical system. Certain things will change that move freedom of expression, or the makeup of the information ecosystem, or our relationship with information, in certain directions, and other factors will move them in other directions. That is the uncertainty we’re in right now: all of these things are changing at once, and we don’t really know what the net effect will be. In terms of AI making decisions, that happened long ago. If you get your news from social media, from Facebook, from Google News, AI is figuring out what you’re going to see. It’s already happening inside news organizations with generative AI, in terms of story selection, angles, and all the rest of it. And even in terms of agents: about two months ago, a company in the UK called Newsquest hired their first, what was the title for their newsroom? AI agent orchestrator. They hired a journalist whose job it is to manage a team of agents to make journalistic decisions and do journalistic things. So I think we have passed that milestone. AI is already making fairly profound editorial decisions. Not broadly, and I think a lot of newsrooms that are touching on the edge of this don’t want to talk about it, but the trend is pretty clear. On the cost question, I’m not sure I got it right, but I think it might have been reacting to that slide with Charlie Beckett’s quote about what happens if the model companies increase the cost of these models by five times. I’m not sure that’s going to happen. One of the surprises of the last year or two is that these models might be much more like electricity, much more like a utility, than some kind of special thing like a social network. Social networks, because of network effects, had a winner-take-all kind of dynamic. These models might not have that. It might be that anybody can build one of these and get to some level of intelligence, just as anybody can build a power station and generate electricity. It’s expensive, but anyone can do it. So I’m not that worried about the underlying cost. The bifurcation one, that was a very good question. And absolutely, I agree with Andrin that it’s going to be hard to opt out of this. The example I would use is not so much opting out of smartphones. It’s worse than that. It’s more like being Amish or Old Order Mennonite, where you’re basically picking a point in time and sticking with it. The Anabaptist communities are not nothing; there’s a large population in Canada, the US and South America. But it is that kind of a scale, I think.
That bifurcation, and the analysis behind it, came from a very comprehensive scenario-planning exercise that I led last year, called the AI in Journalism Futures report; you can find the PDF online. There were five scenarios. One was the bifurcation, and there was a whole other scenario around that opt-out option. The report consolidated the points of view of about 1,000 people, and one of the key findings was that most people who thought about this assumed some portion of the population would opt out.


Julie Posetti: Thanks, David. I think everybody’s questions have been answered, so I’ll just make a couple of remarks reflecting on what’s been said, picking up on Alexandra’s and David’s presentations in particular, and on that Charlie Beckett quote; he has written a lot about the future of journalism. It does concern me that we have not spent much time during this discussion addressing questions around regulation and embedding human rights in these processes. Technology-led approaches to journalistic activity during Web 2.0 led to what I termed platform capture, and we haven’t necessarily learned the lessons from that period, when news organizations and individual journalists became trapped within the platform walls. I realize this is different technology, but we failed to be appropriately critical, I think, and we failed to look at the risks in a way that enabled protections for business models and ensured an editorially led approach to engaging with technology. So I and others have warned about the risk of repeating that platform capture through a ready embrace of AI, without appropriate levels of critical engagement with not just the technology but also the characters behind it. And I would slightly disagree with David characterizing Sam Altman as an expert and comparing him to climate scientists, for example. Climate scientists are independent experts, and we have independent experts in this field as well; we need to separate expert perspectives from those who stand to massively profit from the technology they’re propagating. I think it’s important to highlight that. I also didn’t speak enough about the gender implications, or the implications for diversity more broadly. But particularly in the current geopolitical climate, where diversity is verboten in some contexts and has been weaponized, I think we need to reinforce the humanity in these discussions. And that goes to Alexandra’s point, which I’ve heard multiple times from journalists internationally trying to figure out the unique selling proposition of professional independent journalism, or of producers of public interest information more broadly: it is sense making, meaning making, interpretation. And sometimes that does involve considered prediction, based on facts, which helps societies prepare for risks. So I will leave it there, apart from quoting a Kenyan woman politician who spoke yesterday, who said that we need AI governance that protects not just data, but dignity. I think that’s a good place to end. Thank you.


Giulia Lucchese: Thank you very much, Julie. Thank you to all our panellists. I will give the floor to Desara Dushi, IGF Secretary, EuroDIG Secretary, my apologies, for the conclusions to be agreed by the participants.


Desara Dushi, Vrije: Hello, everyone. I’m going to share the screen with the messages that I tried to draft during the workshop, and I’m going to read them one by one. I’m from the EuroDIG Programme Committee, and we need to draft three messages for each workshop. The first message that I tried to identify is that generative AI has the potential to diminish unique voices, including minority languages. It poses integrity issues: problems with identifying whether content is created by humans or by technology. It also has the power of persuasion, including via the disinformation that it enables, and it influences market dynamics. The second message is that journalism and generative AI contradict each other: the former is about facts, while the latter generates content irrelevant of facts. There is a risk of standardised expressions as well. However, generative AI also offers opportunities for journalism, helping to bring more content to audiences. Questions still remain, though, such as accuracy and the impact on humans. The risk of using AI in journalism is losing control over news production and quality, which might also impact the future of the business model. One of the main issues will be keeping journalism visible and keeping the connection with the audience. And the last message would be: we should take AI seriously, be aware of what it can and cannot do, and of its rapid development and impact in the near future, which creates a lot of uncertainty in terms of dynamics and impact on freedom of expression. There is a risk of omnipresence as well. AI, including generative AI, has implications not only for freedom of expression but also for privacy, for example through surveillance, which leads to control of perception. We need to act on a networked and collective level. Now, I would ask everyone if there are any major objections to these messages. You do not need to worry about the formatting, the language and the editing, because the organizing team will take care of that afterwards. But do you see any major objections regarding what was said during the session?


Alexandra Borchardt: On the second one: that was meant as a provocation. It generates content not irrelevant of facts; it basically calculates probabilities. So it could be true or it could not be true, and that makes it so difficult to figure out, because generative AI produces something that sounds convincing. It’s really optimizing for credibility, but not for facts. So maybe that should be toned down a little bit. Because obviously, most of the stuff that generative AI…


David Caswell: It’s like food that’s 95% edible.


Julie Posetti: Well, maybe 75%. Just one thing: there’s nothing wrong with how you’ve represented what I said, but it would be good to get the gender element in, which I think is very important. So: the ways in which generative AI can be used to facilitate technology-based violence against women, for example. Deepfakes were one example, used against women political actors and women journalists, which is about silencing, about chilling freedom of expression. So I think that would be an important point to add.



Andrin Eichin

Speech speed: 153 words per minute
Speech length: 1620 words
Speech time: 634 seconds

Enhanced access and improved expression through intuitive interfaces that lower barriers to language, technical skills, and accessibility

Explanation

Generative AI systems provide easy and intuitive interfaces that make interaction with text, audio and video easier than ever before. They allow better access to information and remove barriers related to language, technical and artistic skills, and sometimes even barriers faced by people with disabilities.


Evidence

Examples of various AI interfaces and the Italian brain rot social media trend entirely made by generative AI, showing how these systems facilitate creative expression including art, parody, and satire


Major discussion point

AI democratization of content creation


Topics

Human rights | Development | Sociocultural


Risk of standardization and reduced diversity due to AI’s statistical nature reflecting dominant patterns in training data

Explanation

Generative AI systems are statistical and probabilistic machines that tend to standardize outputs and reflect dominant patterns in training data. Studies show this can reduce linguistic and content diversity, potentially diminishing unique voices including minority languages and underrepresented communities.


Evidence

DALL-E example showing gender bias where prompts for doctors/business owners generated only men, while nurses/domestic care workers generated only women


Major discussion point

AI bias and representation


Topics

Human rights | Sociocultural


Disagreed with

– David Caswell

Disagreed on

Timeline and urgency of AGI/superintelligence development


Integrity challenges including hallucination, lack of source attribution, and potential for deception through deepfakes and voice cloning

Explanation

AI systems tend to hallucinate by making up facts or filling in elements they don’t have, creating problems with source attribution and dissociation from authorship. This makes systems prone to being used for deception, impersonation, or manipulation.


Evidence

Google Gemini creating explanations for entirely made-up idioms like ‘never wash a rabbit in a cabbage’; Keir Starmer voice cloning before UK elections; Doppelganger case spoofing legitimate news sources


Major discussion point

Information integrity and authenticity


Topics

Human rights | Legal and regulatory


Agreed with

– Alexandra Borchardt
– Julie Posetti

Agreed on

AI poses significant risks to information integrity and authenticity


Agency concerns through AI’s persuasive capabilities via hyper-personalization and psychological manipulation

Explanation

Various studies show that generative AI systems can engage in very effective persuasion through hyper-personalization and ongoing user interaction. They can influence beliefs and opinions of human beings by using psychological tricks, which is highly relevant in the context of opinion formation.


Major discussion point

AI influence on human decision-making


Topics

Human rights | Sociocultural


Information pluralism threats from new economic gatekeepers creating ‘audience of one’ scenarios that fragment public discourse

Explanation

AI introduces new economic and informational gatekeepers that create an ‘audience of one’ – an information environment where everyone receives hyper-personalized and unique content. This potentially erodes shared public discourse, increases fragmentation and can lead to more polarization.


Evidence

ChatGPT search example showing unclear source selection and prioritization, such as listing a two-week-old power outage as current news


Major discussion point

Information ecosystem fragmentation


Topics

Human rights | Sociocultural


Agreed with

– David Caswell
– Julie Posetti

Agreed on

AI creates new forms of gatekeeping and control over information access


Market concentration risks in foundation AI models raising concerns about dominance over freedom of expression

Explanation

In some areas of the generative AI market, especially at the foundation layer and models, the market tends to be highly concentrated. A highly concentrated market with individual players having significant power raises concerns about market dominance and freedom of expression.


Major discussion point

Market concentration in AI


Topics

Economic | Legal and regulatory



Alexandra Borchardt

Speech speed: 164 words per minute
Speech length: 2477 words
Speech time: 904 seconds

Fundamental contradiction between journalism’s focus on facts and AI’s probability-based content generation

Explanation

Journalism is fundamentally about facts and accuracy is at the very core of journalism’s definition, while generative AI calculates probabilities. This creates a fundamental tension between the precision required in journalism and the probabilistic nature of AI systems.


Evidence

BBC research from March showing accuracy problems in every second piece of news when AI assistants used news content


Major discussion point

Journalism-AI compatibility


Topics

Human rights | Sociocultural


Agreed with

– Andrin Eichin
– Julie Posetti

Agreed on

AI poses significant risks to information integrity and authenticity


Significant opportunities for news gathering, production efficiency, and audience personalization through liquid formats

Explanation

AI helps newsrooms with various tasks including news gathering through data journalism and verification, news production through transcribing and translating, and news distribution through personalization. The concept of ‘liquid formats’ allows easy switching between different content formats to serve diverse audience preferences.


Evidence

Examples from RTS Switzerland’s story angle generator, Swedish radio’s news query chat format, and Bayerische Rundfunk’s regional update with postal code-based automated podcasts


Major discussion point

AI applications in journalism


Topics

Sociocultural | Economic


Trust as the core business model for journalism, requiring careful AI adoption to maintain audience confidence

Explanation

The major business model of journalism is audience trust – if trust is lost, the business model is lost. Media companies are taking a more intentional and slower approach to AI adoption compared to the tech industry because their audiences’ trust is at stake.


Evidence

Audience studies showing tolerance for AI use in brainstorming and automation but skepticism toward virtual presenters; Finnish broadcaster Yle users getting angry about AI labeling, saying they trust the organization to do the right thing


Major discussion point

Trust in AI-assisted journalism


Topics

Human rights | Economic


Agreed with

– Julie Posetti

Agreed on

Trust and accountability are critical for journalism in the AI age


Need for journalism to move up the value chain, focusing on meaning-making rather than content creation

Explanation

Journalism needs to evolve from being content creators and curators to meaning makers, helping audiences make sense of a complex world with an overabundance of choices. Copy-and-paste journalism no longer serves society well in the AI age.


Evidence

Quote from Anna Lagerkrantz, director general of Swedish television, emphasizing that journalism must get better and move up in the value chain


Major discussion point

Evolution of journalistic roles


Topics

Sociocultural | Economic


Risk of losing human connections with audiences through over-automation

Explanation

There’s a concern that if newsrooms automate everything because it’s easy to do so, they may lose connections to their audiences even further. The question arises whether newsrooms will still have someone who speaks with the voice of the audience.


Evidence

Laura Ellis from BBC questioning whether automation might disconnect newsrooms from audience voices


Major discussion point

Human element in journalism


Topics

Sociocultural | Human rights


Importance of maintaining accountability as a rare commodity in the AI age

Explanation

Accountability will become a rare commodity in the AI age, and journalism institutions need to emphasize this quality. Unlike algorithms or platform companies that are difficult to hold accountable, news organizations remain accessible and accountable to their communities.


Evidence

Anna Lagerkrantz noting that people can walk up to news organizations’ front steps and hold them accountable, unlike trying to hold an algorithm or platform company accountable


Major discussion point

Accountability in media


Topics

Human rights | Legal and regulatory


Agreed with

– Julie Posetti

Agreed on

Trust and accountability are critical for journalism in the AI age



David Caswell

Speech speed: 173 words per minute
Speech length: 2844 words
Speech time: 984 seconds

Serious consideration needed for AGI and superintelligence based on expert predictions and current trend lines

Explanation

Current AI development shows maxed-out benchmarks, massive investment commitments, and expert predictions suggesting dramatic changes ahead. Even critics of AGI concepts still paint dramatic pictures of what’s coming, indicating the need to take these developments seriously.


Evidence

Trillion-dollar AI infrastructure commitments; Sam Altman not expecting to be smarter than GPT-5; Dario Amodei comparing future AI to ‘a country of geniuses in a data center’; AI 2027 report; consumer adoption showing 25% of US workers using AI weekly


Major discussion point

Future AI capabilities


Topics

Economic | Sociocultural


Agreed with

– Julie Posetti
– Desara Dushi

Agreed on

AI development requires serious consideration and collective response


Disagreed with

– Julie Posetti

Disagreed on

Characterization of AI industry leaders as experts


Risk of societal bifurcation between super-empowered and disempowered populations through AI access

Explanation

AI creates feedback loops where some people become super-empowered with personal newsrooms and intelligence agencies, while others use it as escape and distraction. This leads to increasing divergence between those gaining agency and those losing it.


Evidence

Description of AI as providing some with personal newsrooms, armies of researchers, and intelligence agencies, while offering others escape and distraction from reality


Major discussion point

Digital divide and AI access


Topics

Development | Human rights | Sociocultural


Complex systems risk where news production becomes too automated to understand or control

Explanation

A progression from AI helping with jobs to doing whole jobs to complete automation creates a complex system where people no longer understand where information comes from. This mirrors how the financial system became too complex for most people to understand despite everyone depending on it.


Evidence

Comparison to financial system complexity; scenario of newsroom progression from AI assistance to full automation to loss of understanding of information sources


Major discussion point

Loss of human oversight


Topics

Sociocultural | Legal and regulatory


Agreed with

– Andrin Eichin
– Julie Posetti

Agreed on

AI creates new forms of gatekeeping and control over information access


AI as persuasion machines with demonstrated capability to influence human beliefs at scale

Explanation

Research shows AI systems can achieve persuasion rates significantly higher than human baselines, creating concerns about hyper-persuasive machines used for political or commercial purposes. This represents a significant risk to independent thought and decision-making.


Evidence

University of Zurich study on Reddit’s Change My View showing AI agents achieved persuasion rates 3-6 times higher than human baseline


Major discussion point

AI manipulation capabilities


Topics

Human rights | Sociocultural


Opportunities for dramatically increased societal awareness through advanced AI tools

Explanation

AI presents opportunities to move society further along the scale from medieval ignorance toward greater intelligence and awareness. The question is what might be achieved in terms of informing society with current AI tools and future AGI or superintelligence.


Evidence

Historical progression from medieval peasant ignorance through printing press, journalism, radio, television, internet, and social networks toward greater societal awareness


Major discussion point

AI potential for societal benefit


Topics

Development | Sociocultural


Disagreed with

– Julie Posetti

Disagreed on

Optimism vs. alarm about AI development


Difficulty of opting out of AI systems as they become integral to economic participation

Explanation

While some people might try to opt out of AI systems, it will become increasingly difficult as these systems become essential for economic productivity and societal participation. The comparison is made to smartphone adoption, but suggests AI integration will be even more pervasive.


Evidence

Comparison to smartphone necessity for societal participation; analogy to Amish/Mennonite communities choosing specific technological points to maintain


Major discussion point

AI ubiquity and choice


Topics

Economic | Sociocultural



Julie Posetti

Speech speed: 129 words per minute
Speech length: 2026 words
Speech time: 938 seconds

Need to consider privacy rights alongside freedom of expression in AI governance frameworks

Explanation

The rights to privacy and freedom of expression are interconnected, particularly in contexts like investigative journalism that depends on confidential sources. AI governance must consider these rights holistically rather than in isolation.


Evidence

Reference to Christopher Wylie as Cambridge Analytica whistleblower who was initially a confidential source; connection between surveillance and freedom of expression


Major discussion point

Interconnected rights framework


Topics

Human rights | Legal and regulatory


Surveillance state risks through AI-enabled devices like smart glasses with facial recognition

Explanation

AI-enabled devices like Meta’s AI glasses with facial recognition create surveillance infrastructure that threatens freedom of expression by monitoring, interpreting, and controlling perception. This represents a shift from simple data collection to active interpretation and behavioral prediction.


Evidence

Mark Zuckerberg’s invitation for nearly a billion Meta users to download AI app integrating data including AI glasses; John McLean’s analysis of Meta building a ‘surveillance state’ with real-time facial recognition and behavioral prediction


Major discussion point

AI surveillance infrastructure


Topics

Human rights | Cybersecurity


Agreed with

– Andrin Eichin
– David Caswell

Agreed on

AI creates new forms of gatekeeping and control over information access


Disagreed with

– David Caswell

Disagreed on

Optimism vs. alarm about AI development


Disinformation threats through AI-generated content polluting information ecosystems

Explanation

Foreign state actors and other malicious entities are using AI to flood information systems with disinformation, working like ‘food poisoning’ by inserting false information into large language models. This pollutes the information environment and affects the accuracy of AI-generated responses.


Evidence

Australian ABC’s identification of Russian disinformation through fake news websites like ‘Pravda Australia’ derived from Telegram chats, surfacing in major chatbot queries


Major discussion point

Information pollution through AI


Topics

Human rights | Cybersecurity


Agreed with

– Andrin Eichin
– Alexandra Borchardt

Agreed on

AI poses significant risks to information integrity and authenticity


Gender-based violence facilitated by AI through deepfakes targeting women journalists and politicians

Explanation

Generative AI is being used to create deepfakes and hyper-sexualized images targeting women, reinforcing sexist stereotypes and creating new forms of technology-based violence. This has a chilling effect on freedom of expression, particularly for women in public roles.


Evidence

Example from Neema Lugangira about AI-generated misogynistic and hyper-sexualized images on X; reference to deepfake porn targeting journalists and political actors


Major discussion point

Gender-based AI abuse


Topics

Human rights | Gender rights online


Importance of protecting dignity alongside data in AI governance approaches

Explanation

AI governance frameworks should focus not just on data protection but on protecting human dignity. This broader approach recognizes that AI impacts go beyond privacy to fundamental questions of human worth and treatment.


Evidence

Quote from Kenyan woman politician emphasizing need for ‘AI governance that protects not just data, but dignity’


Major discussion point

Human dignity in AI governance


Topics

Human rights | Legal and regulatory


Risk of platform capture similar to Web 2.0 without critical engagement with AI business models

Explanation

There’s a risk of repeating the mistakes of Web 2.0 where news organizations became trapped within platform walls. Without appropriate critical engagement with AI business models and the characters behind them, journalism risks similar capture by AI platforms.


Evidence

Reference to previous work on ‘platform capture’ during Web 2.0; distinction between independent experts and those who profit from AI technology like Sam Altman


Major discussion point

Platform dependency risks


Topics

Economic | Legal and regulatory


Disagreed with

– David Caswell

Disagreed on

Characterization of AI industry leaders as experts



Giulia Lucchese

Speech speed: 138 words per minute
Speech length: 952 words
Speech time: 412 seconds

Development of guidance notes on generative AI implications for freedom of expression through expert committees

Explanation

The Council of Europe is actively working on guidance notes addressing the implications of generative AI on freedom of expression through dedicated expert committees. This represents a structured approach to understanding and addressing AI’s impact on fundamental rights.


Evidence

Reference to MSI-AI expert committee tasked with drafting guidance notes by end of 2025; mention of upcoming public consultation process during summer


Major discussion point

Institutional AI governance


Topics

Legal and regulatory | Human rights


Need for public consultation processes to engage stakeholders in AI governance discussions

Explanation

Public consultation processes are essential for developing comprehensive AI governance frameworks that consider diverse perspectives and expertise. These processes allow broader stakeholder engagement beyond expert committees.


Evidence

Announcement of public consultation on guidance note to be available during summer for comments from anyone with keen interest in the area


Major discussion point

Participatory governance


Topics

Legal and regulatory | Human rights



Online moderator

Speech speed: 168 words per minute
Speech length: 71 words
Speech time: 25 seconds

Importance of session rules and structured dialogue for meaningful AI policy discussions

Explanation

Structured dialogue with clear rules and moderation is essential for productive discussions about complex AI policy issues. This includes both technical requirements and behavioral guidelines to ensure effective participation.


Evidence

Detailed session rules including full name entry, hand-raising protocols, video activation requirements, and link-sharing restrictions


Major discussion point

Structured policy dialogue


Topics

Legal and regulatory



Desara Dushi

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

AI’s potential to diminish unique voices while creating integrity and persuasion challenges

Explanation

Generative AI poses multiple risks including diminishing unique voices and minority languages, creating integrity issues around human vs. AI content identification, and enabling powerful persuasion including through disinformation. These factors combine to create significant challenges for freedom of expression.


Evidence

Synthesis of workshop discussions covering voice diversity, content integrity, and persuasion risks


Major discussion point

Comprehensive AI risks


Topics

Human rights | Sociocultural


Journalism-AI contradiction requiring balance between opportunities and quality control

Explanation

The fundamental contradiction between fact-based journalism and probability-based AI creates challenges around standardized expression and quality control. However, AI also offers opportunities for bringing more content to audiences, requiring careful balance in adoption.


Evidence

Workshop synthesis highlighting the journalism-AI tension and need for maintaining quality while leveraging opportunities


Major discussion point

Journalism-AI integration challenges


Topics

Human rights | Sociocultural


Need for serious consideration of AI’s rapid development and networked collective responses

Explanation

The rapid development of AI creates significant uncertainty about its dynamics and impact on freedom of expression. This requires taking AI seriously, understanding its capabilities and limitations, and responding through networked collective action rather than isolated efforts.


Evidence

Workshop synthesis emphasizing the need for awareness, collective action, and serious engagement with AI development trends


Major discussion point

Collective AI governance response


Topics

Legal and regulatory | Human rights


Agreed with

– David Caswell
– Julie Posetti

Agreed on

AI development requires serious consideration and collective response



Audience

Speech speed: 138 words per minute
Speech length: 517 words
Speech time: 223 seconds

Contradiction between AI’s homogenizing effect on content creation and personalized consumption experiences

Explanation

Academic research shows that when groups use AI to write content, the collective output becomes more similar and homogeneous. However, AI is simultaneously promoted as creating more personalized and individual content consumption experiences, creating an apparent contradiction.


Evidence

Academic research showing collective AI-generated content becomes more similar across groups


Major discussion point

AI standardization vs personalization paradox


Topics

Sociocultural | Human rights


Concern about AI making exclusive content access decisions without human oversight

Explanation

There is a question about whether AI systems will eventually make autonomous decisions about information access, potentially denying users access to certain news or information based solely on algorithmic determinations. This raises concerns about AI gatekeeping of information without human intervention.


Major discussion point

AI autonomous content gatekeeping


Topics

Human rights | Legal and regulatory


Economic barriers to quality information access through AI-driven paywalls and pricing

Explanation

The dependency on AI systems controlled by corporations creates concerns about access to accurate information becoming a socio-economic issue. As AI companies set prices and implement paywalls, quality verified information may become accessible only to those who can afford it.


Major discussion point

Economic inequality in information access


Topics

Economic | Human rights | Development


Possibility of complete opt-out from AI systems as a third societal category

Explanation

Beyond the bifurcation between super-empowered and distracted users, there might be a third category of people who choose to completely escape AI systems rather than being either empowered or manipulated by them. This represents a conscious choice to exit the AI ecosystem entirely, similar to opting out of the matrix.


Major discussion point

AI system opt-out possibilities


Topics

Sociocultural | Human rights



Desara Dushi, Vrije

Speech speed: 149 words per minute
Speech length: 357 words
Speech time: 143 seconds

Need for collective agreement on AI governance messages through structured consultation

Explanation

Effective AI governance requires structured processes to develop consensus messages that capture key concerns and opportunities. This involves synthesizing complex discussions into clear, actionable points while allowing for stakeholder input and refinement.


Evidence

Process of drafting three key messages from workshop discussions and seeking participant feedback on major objections


Major discussion point

Consensus building in AI governance


Topics

Legal and regulatory | Human rights


Importance of including gender perspectives in AI governance frameworks

Explanation

AI governance discussions must explicitly address gender implications, particularly how AI technologies can facilitate violence against women through deepfakes and other forms of technology-based harassment. This represents a critical human rights dimension that requires specific attention in policy frameworks.


Evidence

Recognition of the need to include gender elements in governance messages, specifically addressing technology-based violence against women


Major discussion point

Gender inclusion in AI policy


Topics

Human rights | Gender rights online


Agreements

Agreement points

AI poses significant risks to information integrity and authenticity

Speakers

– Andrin Eichin
– Alexandra Borchardt
– Julie Posetti

Arguments

Integrity challenges including hallucination, lack of source attribution, and potential for deception through deepfakes and voice cloning


Fundamental contradiction between journalism’s focus on facts and AI’s probability-based content generation


Disinformation threats through AI-generated content polluting information ecosystems


Summary

All three speakers agree that AI systems create fundamental challenges to information integrity through hallucination, probabilistic content generation that may not align with facts, and the potential for malicious actors to pollute information ecosystems with disinformation.


Topics

Human rights | Cybersecurity


AI creates new forms of gatekeeping and control over information access

Speakers

– Andrin Eichin
– David Caswell
– Julie Posetti

Arguments

Information pluralism threats from new economic gatekeepers creating ‘audience of one’ scenarios that fragment public discourse


Complex systems risk where news production becomes too automated to understand or control


Surveillance state risks through AI-enabled devices like smart glasses with facial recognition


Summary

Speakers agree that AI introduces new forms of information control, whether through economic gatekeepers, automated systems beyond human understanding, or surveillance infrastructure that controls perception and access to information.


Topics

Human rights | Legal and regulatory


Trust and accountability are critical for journalism in the AI age

Speakers

– Alexandra Borchardt
– Julie Posetti

Arguments

Trust as the core business model for journalism, requiring careful AI adoption to maintain audience confidence


Importance of maintaining accountability as a rare commodity in the AI age


Summary

Both speakers emphasize that trust forms the foundation of journalism’s business model and social function, and that accountability becomes even more valuable in an age where algorithms and platforms are difficult to hold accountable.


Topics

Human rights | Economic


AI development requires serious consideration and collective response

Speakers

– David Caswell
– Julie Posetti
– Desara Dushi

Arguments

Serious consideration needed for AGI and superintelligence based on expert predictions and current trend lines


Need for serious consideration of AI’s rapid development and networked collective responses




Summary

Speakers agree that the rapid pace of AI development and its potential impacts require serious attention from policymakers and society, with coordinated collective responses rather than isolated efforts.


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both speakers recognize that AI systems create divisions in society – Eichin focuses on how AI reduces diversity and reinforces dominant patterns, while Caswell emphasizes how this leads to a bifurcation between those who are empowered and those who are marginalized by AI access.

Speakers

– Andrin Eichin
– David Caswell

Arguments

Risk of standardization and reduced diversity due to AI’s statistical nature reflecting dominant patterns in training data


Risk of societal bifurcation between super-empowered and disempowered populations through AI access


Topics

Human rights | Sociocultural


Both speakers are concerned about the loss of human agency and choice as AI becomes more pervasive – Borchardt worries about journalism losing human connections, while Caswell notes that opting out of AI systems will become increasingly difficult as they become essential for economic participation.

Speakers

– Alexandra Borchardt
– David Caswell

Arguments

Risk of losing human connections with audiences through over-automation


Difficulty of opting out of AI systems as they become integral to economic participation


Topics

Sociocultural | Economic


Both speakers emphasize the critical importance of addressing gender-specific harms from AI technologies, particularly how AI can be weaponized against women in public roles, and the need to explicitly include gender considerations in governance frameworks.

Speakers

– Julie Posetti
– Desara Dushi, Vrije

Arguments

Gender-based violence facilitated by AI through deepfakes targeting women journalists and politicians


Importance of including gender perspectives in AI governance frameworks


Topics

Human rights | Gender rights online


Unexpected consensus

AI offers significant opportunities alongside risks

Speakers

– Andrin Eichin
– Alexandra Borchardt
– David Caswell

Arguments

Enhanced access and improved expression through intuitive interfaces that lower barriers to language, technical skills, and accessibility


Significant opportunities for news gathering, production efficiency, and audience personalization through liquid formats


Opportunities for dramatically increased societal awareness through advanced AI tools


Explanation

Despite the serious risks discussed, there was unexpected consensus that AI offers genuine opportunities for democratizing content creation, improving journalism efficiency, and potentially enhancing societal awareness. This balanced perspective was surprising given the focus on risks in AI governance discussions.


Topics

Development | Sociocultural


AI systems are already making editorial and content decisions

Speakers

– David Caswell
– Alexandra Borchardt

Arguments

Complex systems risk where news production becomes too automated to understand or control


Risk of losing human connections with audiences through over-automation


Explanation

There was unexpected agreement that AI is already making significant editorial decisions in newsrooms and content platforms, not just as a future possibility but as a current reality. This consensus suggests the debate has moved beyond whether AI should make such decisions to how to manage the fact that it already does.


Topics

Sociocultural | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated strong consensus on the fundamental risks AI poses to information integrity, the need for serious policy attention, and the importance of maintaining human values like trust and accountability. There was also surprising agreement on AI’s current capabilities and opportunities alongside the risks.


Consensus level

High level of consensus on core issues with nuanced differences in emphasis. The agreement spans technical experts, journalists, researchers, and policymakers, suggesting broad recognition of AI’s transformative impact on freedom of expression. This consensus provides a strong foundation for developing comprehensive governance frameworks that balance innovation with rights protection.


Differences

Different viewpoints

Timeline and urgency of AGI/superintelligence development

Speakers

– David Caswell
– Andrin Eichin

Arguments

Serious consideration needed for AGI and superintelligence based on expert predictions and current trend lines


Risk of standardization and reduced diversity due to AI’s statistical nature reflecting dominant patterns in training data


Summary

David Caswell presents an urgent timeline for AGI development (2-3 years) and emphasizes immediate serious consideration, while Andrin Eichin suggests a longer timescale and is ‘less pessimistic and doomy’ about the timeline, indicating disagreement on how quickly these changes will occur.


Topics

Economic | Sociocultural


Characterization of AI industry leaders as experts

Speakers

– David Caswell
– Julie Posetti

Arguments

Serious consideration needed for AGI and superintelligence based on expert predictions and current trend lines


Risk of platform capture similar to Web 2.0 without critical engagement with AI business models


Summary

David characterizes Sam Altman and other AI company leaders as experts comparable to climate scientists, while Julie explicitly disagrees, arguing we need to separate independent expert perspectives from those who stand to massively profit from the technology they’re propagating.


Topics

Economic | Legal and regulatory


Optimism vs. alarm about AI development

Speakers

– David Caswell
– Julie Posetti

Arguments

Opportunities for dramatically increased societal awareness through advanced AI tools


Surveillance state risks through AI-enabled devices like smart glasses with facial recognition


Summary

David explicitly states he intended to be ‘excited and optimistic’ about AI possibilities for societal benefit, while Julie emphasizes alarming risks and the need to stop being ‘so polite’ about articulating concerns, representing fundamentally different approaches to AI discourse.


Topics

Human rights | Sociocultural


Unexpected differences

Correction of AI content generation characterization

Speakers

– Alexandra Borchardt
– Desara Dushi

Arguments

Fundamental contradiction between journalism’s focus on facts and AI’s probability-based content generation


Journalism-AI contradiction requiring balance between opportunities and quality control


Explanation

Alexandra corrected the workshop summary’s characterization that AI ‘generates content irrelevant of facts,’ clarifying that AI ‘calculates probabilities’ and ‘optimizes for credibility, but not for facts.’ This unexpected disagreement reveals the importance of precise language in describing AI capabilities and limitations.


Topics

Human rights | Sociocultural


Gender inclusion in AI governance frameworks

Speakers

– Julie Posetti
– Desara Dushi

Arguments

Gender-based violence facilitated by AI through deepfakes targeting women journalists and politicians


Need for collective agreement on AI governance messages through structured consultation


Explanation

Julie pointed out that gender elements were missing from the workshop summary messages, despite her presentation on technology-based violence against women. This unexpected disagreement highlights how gender perspectives can be inadvertently excluded from AI governance discussions even when explicitly raised.


Topics

Human rights | Gender rights online


Overall assessment

Summary

The discussion revealed moderate disagreements primarily around timeline urgency, characterization of AI industry actors, and overall framing (optimistic vs. cautionary). Most speakers agreed on core risks and opportunities but differed on severity, timeline, and appropriate responses.


Disagreement level

Moderate disagreement with significant implications – the differences in timeline assessment, trust in industry leaders, and framing approach could lead to substantially different policy responses and governance strategies. The disagreements suggest a need for more nuanced approaches that balance innovation opportunities with rights protection.


Partial agreements

Similar viewpoints

Both speakers recognize that AI systems create divisions in society – Eichin focuses on how AI reduces diversity and reinforces dominant patterns, while Caswell emphasizes how this leads to a bifurcation between those who are empowered and those who are marginalized by AI access.

Speakers

– Andrin Eichin
– David Caswell

Arguments

Risk of standardization and reduced diversity due to AI’s statistical nature reflecting dominant patterns in training data


Risk of societal bifurcation between super-empowered and disempowered populations through AI access


Topics

Human rights | Sociocultural


Both speakers are concerned about the loss of human agency and choice as AI becomes more pervasive – Borchardt worries about journalism losing human connections, while Caswell notes that opting out of AI systems will become increasingly difficult as they become essential for economic participation.

Speakers

– Alexandra Borchardt
– David Caswell

Arguments

Risk of losing human connections with audiences through over-automation


Difficulty of opting out of AI systems as they become integral to economic participation


Topics

Sociocultural | Economic


Both speakers emphasize the critical importance of addressing gender-specific harms from AI technologies, particularly how AI can be weaponized against women in public roles, and the need to explicitly include gender considerations in governance frameworks.

Speakers

– Julie Posetti
– Desara Dushi

Arguments

Gender-based violence facilitated by AI through deepfakes targeting women journalists and politicians


Importance of including gender perspectives in AI governance frameworks


Topics

Human rights | Gender rights online


Takeaways

Key takeaways

Generative AI presents a fundamental contradiction with journalism – journalism focuses on facts while AI generates content based on probabilities, creating accuracy challenges


AI offers significant opportunities for journalism including enhanced news gathering, production efficiency, and audience personalization, but risks losing human connections and editorial control


There is a serious risk of societal bifurcation between super-empowered users who leverage AI effectively and disempowered users who become distracted or manipulated by it


AI systems pose threats to information pluralism through the creation of ‘audience of one’ scenarios that fragment public discourse and erode shared understanding


The technology enables new forms of persuasion and manipulation at unprecedented scale, with persuasive capabilities demonstrated to be three to six times more effective than the human baseline


Gender-based violence and targeting of women journalists and politicians through AI-generated deepfakes represents a significant threat to freedom of expression


Market concentration in AI foundation models creates new gatekeeping risks that could come to dominate freedom of expression


Trust remains the core business model for journalism, requiring careful AI adoption strategies to maintain audience confidence and accountability


AI governance must protect both data and dignity, considering privacy rights alongside freedom of expression in a holistic approach


The rapid development toward AGI and superintelligence requires serious consideration from policymakers and human rights organizations


Resolutions and action items

Council of Europe expert committee (MSI-AI) to complete guidance note on generative AI implications for freedom of expression by end of 2025


Public consultation on the guidance note to be conducted during summer 2025 for stakeholder input


Participants encouraged to engage with advanced AI tools directly to better understand capabilities and risks


News organizations need to invest in quality journalism and direct audience connections to maintain relevance


Journalism must move up the value chain to focus on meaning-making rather than content creation


Need for networked and collective responses to address transnational AI challenges


Unresolved issues

How to balance AI’s standardization effects with opportunities for enhanced creative expression


Whether AI systems will make autonomous editorial decisions and how to maintain human oversight


The sustainability of journalism business models in the face of AI-enabled content generation and potential paywall circumvention


How to prevent the ‘audience of one’ phenomenon from further fragmenting public discourse


Whether it will be possible to opt out of AI systems as they become integral to economic participation


How to address the concentration of power among AI companies and prevent platform capture


The timeline and implications of achieving AGI and superintelligence


How to ensure AI system design encourages critical thinking rather than passive consumption


Effective regulatory approaches that can keep pace with rapid technological development


Suggested compromises

AI system design that encourages engagement by challenging users with follow-up questions rather than providing immediate outputs


Transparency measures including source attribution in AI-generated content while acknowledging current limitations


Gradual adoption of AI tools in newsrooms with maintained human oversight and editorial control


Public service media playing an increased role in providing trusted information as commercial models face challenges


Treating AI models more like utilities (similar to electricity) rather than proprietary platforms to reduce concentration risks


Balancing AI efficiency gains with preservation of human connections between journalists and audiences


Developing AI governance frameworks that address both opportunities and risks without stifling innovation


Thought provoking comments

The audience of one. This stands for an information environment where everyone interacts with generative AI systems and AI-powered information separately and receives hyper-personalized and unique content which will not be received by anyone else. And this in turn potentially erodes shared public discourse, increases fragmentation and can lead to even more polarization.

Speaker

Andrin Eichin


Reason

This concept brilliantly captures a fundamental paradox of AI personalization – while it promises to serve individual needs better, it threatens the shared information foundation that democratic discourse requires. It reframes the personalization debate from individual benefit to societal risk.


Impact

This concept became a recurring theme throughout the discussion, with other speakers referencing personalization risks and the challenge of maintaining shared public discourse. It shifted the conversation from technical capabilities to democratic implications.


Journalism is about facts and generative AI calculates probabilities. In fact, I learned this as an expert in the expert committee on quality journalism here. And accuracy is the very core of journalism. It’s really at the core of the definition.

Speaker

Alexandra Borchardt


Reason

This stark contradiction cuts to the heart of the tension between AI and journalism. It’s not just about technical limitations but about fundamentally incompatible approaches to truth – one seeks facts, the other optimizes for plausibility.


Impact

This framing became central to understanding why AI adoption in journalism is proceeding cautiously. It provided a conceptual framework that other speakers built upon when discussing accuracy, trust, and the future of news production.


There’s a reasonable case we’re going to get to that point within a couple of years, two, three years… AI that’s as smart as the smartest individual human in any digital domain. And this is my personal definition of AGI.

Speaker

David Caswell


Reason

This timeline prediction for AGI is both specific and alarming, challenging participants to consider imminent rather than distant future scenarios. It forces consideration of how quickly current regulatory and ethical frameworks might become obsolete.


Impact

This shifted the discussion from current AI limitations to urgent preparation for superintelligence. It elevated the stakes of the conversation and influenced other speakers to address more dramatic transformation scenarios rather than incremental change.


The bifurcation of societies into super-empowered and disempowered people… For some people, it is like having your own personal newsroom… For others, it’s an escape. It’s a distraction. It’s a way out of reality… these are feedback loops.

Speaker

David Caswell


Reason

This insight reveals how AI could exacerbate existing inequalities in unprecedented ways, creating not just economic but cognitive and informational stratification. The feedback loop concept shows how these divides could become self-reinforcing and permanent.


Impact

This concept prompted deeper discussion about social implications and influenced the final messages about collective action. It moved the conversation beyond technical considerations to fundamental questions about social cohesion and equity.


I think that we’re dealing with a cult in Silicon Valley… These men see themselves as prophets… God didn’t create us, we’re going to create God, and then we’re going to merge with him.

Speaker

Julie Posetti (quoting Christopher Wylie)


Reason

This provocative framing recontextualizes AI development from technological progress to ideological movement, suggesting that the motivations behind AI development may be more concerning than the technology itself. It challenges the audience to consider the human actors and their belief systems.


Impact

This dramatically shifted the tone and perspective of the discussion, introducing questions about the motivations and worldviews of AI developers. It prompted more critical examination of who controls AI development and their ultimate objectives.


We need AI governance that protects not just data, but dignity.

Speaker

Julie Posetti (quoting a Kenyan woman politician)


Reason

This succinct formulation captures the human rights dimension that technical discussions often miss. It reframes AI governance from technical data protection to fundamental human dignity, emphasizing the human cost of getting this wrong.


Impact

This provided a powerful concluding framework that synthesized the discussion’s themes around human rights, gender implications, and the need for human-centered approaches to AI governance.


Journalism has to move up in the value chain… journalism, journalist institutions, media institutions need to be accountable because accountability will be a rare commodity… try to hold an algorithm accountable.

Speaker

Alexandra Borchardt (quoting Anne Lagercrantz)


Reason

This insight identifies accountability as journalism’s unique competitive advantage in an AI-dominated information landscape. It suggests that human accountability becomes more valuable precisely because algorithmic accountability is nearly impossible.


Impact

This comment influenced the discussion about journalism’s future role and survival strategies, emphasizing human qualities that AI cannot replicate rather than competing on efficiency or speed.


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from a technical examination of AI capabilities to a profound exploration of societal transformation. The conversation evolved through several phases: Eichin’s ‘audience of one’ concept established the democratic stakes; Borchardt’s facts vs. probabilities contradiction grounded the journalism-specific challenges; Caswell’s AGI timeline and bifurcation scenarios raised the urgency and scale of potential changes; and Posetti’s interventions about Silicon Valley ideology and dignity-centered governance provided a critical perspective on power dynamics and human rights. Together, these comments transformed what could have been a routine technology discussion into an urgent examination of democracy, human agency, and social cohesion in the age of AI. The discussion’s trajectory moved from identifying current challenges to confronting existential questions about the future of human expression and democratic discourse.


Follow-up questions

How to solve the accuracy problem when AI assistants serve news content, given that BBC research showed accuracy problems in every second piece of news served by AI

Speaker

Alexandra Borchardt


Explanation

This is critical for journalism’s core mission of providing factual information, and requires collaboration between tech and media industries


Will AI make people creative or will it make them lazy, and how can AI systems be designed to challenge users rather than just simplify everything

Speaker

Alexandra Borchardt (referencing Professor Pattie Maes from the MIT Media Lab)


Explanation

This affects the future of human creativity and intellectual engagement with AI tools


Will there be money in journalism’s AI-enhanced business model, given the visibility threats and dependence on tech companies

Speaker

Alexandra Borchardt (referencing Professor Charlie Beckett)


Explanation

This determines the economic sustainability of independent journalism in the AI era


How to measure AI intelligence when current benchmarks are saturated and maxed out at 100%

Speaker

David Caswell


Explanation

There’s a real problem in AI development about how to assess the capabilities of increasingly sophisticated systems


Whether legacy news media can take advantage of AI opportunities given the dramatic pace of change

Speaker

David Caswell


Explanation

This is described as ‘kind of a race’ that will determine the survival of traditional journalism


How the contradiction between standardized AI output and personalized AI experience will resolve

Speaker

Audience member


Explanation

Academic research shows collective homogenization while AI promises individual personalization


Whether AI will make exclusive decisions about access to information or news without human oversight

Speaker

Audience member


Explanation

This concerns the potential for AI to become an autonomous gatekeeper of information


How quality information access will be affected by AI-related paywalls and socio-economic barriers

Speaker

Audience member


Explanation

This addresses equity in access to verified information as AI changes media economics


Whether there’s a third category of people who can completely opt out of AI systems rather than being either empowered or distracted by them

Speaker

Audience member


Explanation

This challenges the binary view of AI’s societal impact and explores alternatives to participation


How to embed human rights and appropriate regulation in AI development processes

Speaker

Julie Posetti


Explanation

Critical for ensuring AI development serves human dignity and democratic values rather than just profit


How to address gender implications and technology-based violence against women facilitated by generative AI

Speaker

Julie Posetti


Explanation

Important for understanding how AI can be weaponized to silence women journalists and political actors through deepfakes and other means


How to prevent ‘platform capture’ in the AI era, learning from lessons of Web 2.0 where news organizations became trapped within platform walls

Speaker

Julie Posetti


Explanation

Essential for maintaining editorial independence and avoiding over-dependence on AI platforms


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.