Workshop 7: Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion?
13 May 2025 12:30h - 13:30h
Session at a glance
Summary
This discussion focused on the implications of generative AI for freedom of expression and journalism. Experts highlighted both opportunities and risks associated with AI technologies. They noted that generative AI can enhance access to information and facilitate creative expression, but also risks standardizing outputs and diminishing unique voices, including minority languages. There are concerns about AI’s impact on information integrity, as it can generate convincing but potentially inaccurate content. The panelists discussed how AI is already influencing editorial decisions in newsrooms and may further reshape journalism practices.
The experts emphasized the rapid development of AI capabilities, with some predicting the emergence of artificial general intelligence (AGI) in the near future. This raises questions about AI’s potential to surpass human intelligence and its implications for society. The discussion touched on the risk of creating a bifurcated society, with some individuals super-empowered by AI while others become disempowered or opt out entirely.
Panelists stressed the need for critical engagement with AI technologies, considering not just their potential benefits but also risks to privacy, diversity, and human rights. They highlighted the importance of maintaining human elements in journalism, such as sense-making and investigative reporting. The discussion concluded with calls for AI governance that protects both data and human dignity, and for networked, collective action to address the challenges posed by AI to freedom of expression and democratic discourse.
Key points
Major discussion points:
– The potential impacts of generative AI on freedom of expression, including risks of diminishing unique voices, integrity issues, and persuasive power
– Opportunities and challenges for journalism in using generative AI, including accuracy concerns and maintaining audience connections
– The rapid development of AI and need to take it seriously, considering implications for privacy and control of perception
– Gender implications of generative AI, including potential for technology-facilitated violence against women
– Questions around regulation and embedding human rights protections in AI development
Overall purpose:
The discussion aimed to explore the implications of generative AI for freedom of expression and journalism, considering both potential benefits and risks.
Tone:
The tone was largely serious and cautionary, with speakers emphasizing the need to critically examine AI’s impacts. There were some notes of optimism about potential opportunities, but overall the tone conveyed a sense of urgency about addressing challenges. The tone became more explicitly concerned toward the end when discussing regulation and human rights protections.
Speakers
– David Caswell: Product developer, consultant and researcher of computational and automated forms of journalism; Member of the MSI-AI expert committee
– Giulia Lucchese: Works at the Council of Europe at the Freedom of Expression and CDMSI Division; In-person moderator
– Alexandra Borchardt: Senior journalist, leadership professor, media consultant and senior research associate at the Reuters Institute for the Study of Journalism at the University of Oxford; Author of the EBU News Report 2025 “Leading Newsrooms in the Age of Generative AI”
– Julie Posetti: Feminist journalist, author, researcher, professor; Global Director of Research at the International Centre for Journalists; Professor of Journalism at City, University of London
– Andrin Eichin: Senior Policy Advisor on Online Platforms, Algorithms, and Digital Policy at the Swiss Federal Office of Communications; Chair of the MSI-AI expert committee
Additional speakers:
– Online moderator (João): Remote moderator
– Audience members: Asked questions during Q&A session
Full session report
Expanded Summary: Implications of Generative AI for Freedom of Expression and Journalism
This EuroDIG session, featuring experts in journalism, technology, and policy, explored the profound implications of generative AI for freedom of expression and journalism. The panelists examined both opportunities and risks associated with AI technologies, highlighting the urgent need for critical engagement and governance frameworks to address emerging challenges.
Impact on Freedom of Expression and Information Diversity
Andrin Eichin, Chair of the Expert Committee tasked to draft the Guidance Note on the implications of Generative AI on freedom of expression, raised concerns about AI’s potential to diminish unique voices and minority languages. As statistical and probabilistic machines, AI systems tend to standardize outputs and reflect dominant patterns in training data, potentially reducing linguistic and content diversity. This standardization poses a significant threat to freedom of expression, particularly for underrepresented groups.
The discussion also touched on the risks to information integrity and attribution. David Caswell emphasized the persuasive power of AI-generated content and its potential to influence beliefs and opinions, raising concerns about the spread of disinformation and manipulation of public discourse.
Challenges and Opportunities for Journalism
Alexandra Borchardt highlighted the tension between AI-generated content, based on probabilities, and factual journalism. While AI offers opportunities to enhance news gathering and distribution, it also risks undermining core journalistic principles. Borchardt stressed the importance of maintaining human connection with audiences and the need for journalism to “move up in the value chain” by focusing on higher-value activities that AI cannot easily replicate.
Borchardt referenced the EBU News Report “Leading Newsrooms in the Age of Generative AI,” which explores various AI applications in journalism, such as story angle generators and personalized news updates. David Caswell noted that AI is already making editorial decisions in some newsrooms, citing NewsQuest’s hiring of an AI agent orchestrator as an example.
The discussion revealed a tension between AI’s potential to dramatically increase societal awareness and information access, as noted by Caswell, and the risk of losing control over news production and quality, as cautioned by Borchardt.
Future Implications of Advanced AI
Caswell urged taking the potential of Artificial General Intelligence (AGI) and superintelligence seriously, citing expert opinions from AI company leaders. He painted a dramatic picture of AI’s future impact, even when considering the perspectives of critics and skeptics. Caswell warned of the risk of societal bifurcation, with some individuals becoming super-empowered by AI while others are disempowered or opt out entirely, a scenario explored in the AI in Journalism Futures Report he led.
Julie Posetti cautioned against equating AI company leaders with independent experts like climate scientists. She emphasized the need to separate expert perspectives from those who stand to profit from AI technologies, highlighting the importance of critical engagement with AI development.
Privacy, Human Rights, and Gender Implications
Posetti raised significant concerns about AI’s implications for privacy and human rights, highlighting the potential for AI-enabled surveillance and technology-facilitated violence against women. She provided examples of deepfakes being used against women political actors and journalists to silence and chill freedom of expression. Posetti argued for AI governance that protects not just data, but human dignity.
Eichin noted the pervasive nature of AI technologies in modern society, comparing the difficulty of opting out of AI systems to the current necessity of using smartphones to participate in society. This raises questions about individual autonomy and the right to privacy in an AI-driven world.
Critical Engagement and Governance
A key theme that emerged was the need for critical engagement with AI development and implementation. Posetti stressed the importance of embedding human rights in AI processes and called for reinforcing humanity in these discussions, especially in the current geopolitical climate where diversity is sometimes weaponized.
The panelists agreed on the need for networked, collective action to address the challenges posed by AI to freedom of expression and democratic discourse. They called for AI governance frameworks that protect both data and human dignity, recognizing the complex interplay between technological advancement and fundamental human rights.
Conclusion and Future Directions
The discussion concluded with a recognition of the ongoing work by the Council of Europe to develop guidance on the implications of generative AI for freedom of expression, to be completed by the end of 2025. A public consultation on the guidance document is planned for summer 2025, highlighting the importance of multi-stakeholder engagement in shaping AI governance.
Several unresolved issues were identified, including how to balance standardization of expression with opportunities for enhanced creativity, maintain journalism’s visibility and connection with audiences, and ensure appropriate critical engagement with AI technology and business models.
In summary, this EuroDIG session underscored the complex and multifaceted nature of AI’s impact on freedom of expression and journalism. While recognizing the potential benefits of AI technologies, the panelists emphasized the urgent need for critical engagement, ethical governance, and the protection of human rights in the face of rapid technological advancement. The session concluded with the drafting of key messages from the discussion, following EuroDIG’s established process for capturing and disseminating insights from these important dialogues.
Session transcript
Giulia Lucchese: Afternoon, everyone. Thank you very much for joining the EuroDIG session dedicated to Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion? My name is Giulia Lucchese. I work at the Council of Europe in the Freedom of Expression and CDMSI Division, and I will be your in-person moderator for the next hour. I immediately pass the floor to the EuroDIG Secretariat to walk us through the rules applying to this session. Thank you. Welcome.
Online moderator: I am João. I’ll be your remote moderator for the online participants, and I’ll be reading the session rules. Please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on your video, state your name and affiliation, and do not share links to the Zoom meeting, not even with your colleagues.
Giulia Lucchese: Thank you very much, João. Easy rules. We can keep them in mind. Now, with this session, we are looking into the potentials and risks inherent in the use of generative AI when it affects, somehow, freedom of expression as understood under Article 10 of the European Convention on Human Rights. We should consider the profound impact this has on freedom of expression today, or could have in the near future. Please note that the Council of Europe is already working on this topic. Indeed, at this very moment, we are elaborating a guidance note on the implications of generative AI on freedom of expression. An expert committee, the MSI-AI, is dedicated to this task, and if everything goes well, we will have a guidance note by the end of this year. Now, let me introduce our outstanding panel. We have Andrin Eichin, Alexandra Borchardt, David Caswell, and Julie Posetti. I’m absolutely honored to have you here. Thank you very much for accepting the invitation. The first speaker is Andrin Eichin. He’s Senior Policy Advisor on Online Platforms, Algorithms, and Digital Policy at the Swiss Federal Office of Communications, OFCOM Switzerland. Andrin is also the Chair of the expert committee tasked to draft the guidance note, the MSI-AI. Andrin, could you please help us set the scene and understand what the challenges are, what we are dealing with, and why we should care? Thank you.
Andrin Eichin: Thank you very much, Giulia. Hi, everybody. As Giulia said, I currently have the honor of serving as the Chair of the MSI-AI, the Committee of Experts on the Implications of Generative AI on Freedom of Expression. Here you can see our expert committee. We have been tasked to develop guidelines on the implications of generative AI on freedom of expression by the end of 2025, so time is short. You cannot see the whole slide, so maybe, I’m not sure whether we can remove the panel on the side. Okay. So I will try to share with you some of the implications that we are currently considering. I hope this will set the scene for the discussion that we are having afterwards. Let me stress that what I present today is just a glimpse of the work that we’re doing. Unfortunately, we don’t have the time to go into all of it, but for those of you who are interested, I want to highlight that we aim to hold a public consultation on the document in summer of this year. So stay tuned for that. Now let me dive into some of the structural implications that we are looking at. The first implication we look at is with regard to enhanced access, better understanding and improved expression. You all know these interfaces by now, and I could have added many others. They are easy and intuitive to use. Many generative AI systems improve access to information and make interaction with text, audio and video easier, maybe easier than ever before. They allow us to better access and receive information, and they lower or even remove barriers of language and of technical and artistic skill, sometimes even for people with disabilities. But they also have other abilities, and this is maybe a bit lighter, and some of you might know this. This is the latest social media trend, called Italian brain rot. I don’t want to get into the cultural value of this; we can discuss it afterwards during the coffee break. But the point is that this new social media trend is entirely made by generative AI, and it shows that these systems also facilitate creative expression, including art, parody, satire, or just silly little characters that make us laugh on social media. The second implication that we are looking at touches on diversity and the standardization of expression. Generative AI systems are statistical and probabilistic machines, as you know, and as such, they tend to standardize outputs and reflect dominant patterns in training data. Studies already show today that this can reduce linguistic and content diversity. And of course, with regard to freedom of expression, this has the potential to diminish unique voices, including minority languages and underrepresented communities. There is also a risk of reinforcing existing inequalities and stereotypes. I’m sure we have all heard about the impact of data biases, and I guess you will have seen this picture, or a variation of it, already. In this example from DALL-E, the prompt for the upper picture was to depict somebody who practices medicine or runs a restaurant or business, and DALL-E suggested only men. When asked to generate images of someone who works as a nurse, in domestic care or as a home assistant, it suggests women. And we see variations of this with other characteristics as well. Next, perhaps the most talked-about implication: integrity and the attribution of human expression. It is widely known that AI tends to hallucinate, that is, to make up facts or fill in elements it does not have.
And you again know various different examples of this. This is a very recent example where Google Gemini, in its new AI overview on top of Google search, comes up with explanations for entirely random and made-up idioms and sayings, like here, where it tries to explain what “never wash a rabbit in a cabbage” means, or what “the bicycle eats first” means. Of course, this is very funny, but these are top-of-the-page explanations on Google. Here they are in a very benign and certainly not harmful context, but how does this affect other information that we rely on to be factual? Besides hallucination, we also see that there is a problem of source attribution and therefore dissociation from authorship. We don’t know anymore who creates content, whether it was human, and whether we can trust its integrity. This of course makes the systems prone to be used to deceive, impersonate or manipulate. They make it possible to mimic individuals, including through deepfakes and voice cloning, like last year with the cloning of Keir Starmer’s voice ahead of the UK elections, or, as in the Doppelganger case, to spoof legitimate news sources and spread disinformation by abusing a media brand to imply trustworthiness. The next structural implication we’re looking at is agency and opinion formation. Various new studies show that generative AI systems can engage in very effective persuasion through hyper-personalization and ongoing user interaction. They can really influence the beliefs and opinions of human beings by using psychological tricks. And of course, this is highly relevant in the context of opinion formation; I think, David, you will mention this later on in a bit more detail. The next implication is media and information pluralism and the impact AI has on information pluralism. While AI can enhance media efficiency, it also introduces a new economic and informational gatekeeper. Here is a ChatGPT search I made yesterday, when I asked for a summary of current news across Europe. We see a couple of relevant and interesting themes here. For example, with regard to the selection and prioritization of content: number three on the list was the power outage from, by now, two weeks ago. Clearly important. Is it the most relevant thing that happened yesterday? Probably not. There is also something positive: we start to have transparency and traceability. ChatGPT provides us with sources, but it’s currently not clear which sources are selected, why and on what basis I see them, and whether this is just based on my news consumption or if other readers would see a similar source selection. And this is exactly the point that creates an entirely new challenge that we are dealing with, what we call in our guidance note the “audience of one”. This stands for an information environment where everyone interacts with generative AI systems and AI-powered information separately and receives hyper-personalized and unique content which will not be received by anyone else. This in turn potentially erodes shared public discourse, increases fragmentation and can lead to even more polarization. Because of time, I will say only very little about the last implication, market dynamics. We know that in some areas of the generative AI market, especially when we look at the foundation layer and the models, the market tends to be highly concentrated. And of course, a highly concentrated market with single individual players that have a lot of power raises concerns about market dominance and freedom of expression.
I’ll stop here for the time being and I’m sure we’ll have more time to discuss these elements in more detail. Thanks.
Giulia Lucchese: Thank you. Thank you very much, Andrin. Precious introduction. Thank you for sticking to the time and also for stressing the opportunity to engage in the public consultation on the guidance note. This is a very interesting opportunity for the audience at large, so please keep an eye on the freedom of expression website of the Council of Europe, because the guidance note will normally be made available during the summer for comments from whoever has a keen interest in the area. Now the next speaker is Alexandra Borchardt. She’s a senior journalist, leadership professor, media consultant and senior research associate at the Reuters Institute for the Study of Journalism at the University of Oxford. Alexandra was also very recently the author of the EBU News Report 2025, Leading Newsrooms in the Age of Generative AI. Alexandra, you interviewed over 20 newsroom leaders and other top researchers in the field. Would you like to share with us your findings and add further reflections? Thank you.
Alexandra Borchardt: Yeah, thank you so much, Giulia. And thanks everyone for being in the audience; we have an almost full room here, and thanks also to everyone who joins remotely. Yeah, Leading Newsrooms in the Age of Generative AI is already the second EBU News Report on AI. The first was Trusted Journalism in the Age of Generative AI. And this is also a public service: these reports can be downloaded freely by everyone, without registering. And it’s a qualitative piece of work. I’m so glad Andrin set the scene and also alerted you to the risks, because that gives me an opportunity to show you some of the opportunities. But first of all, I wanted to start with a provocation here: journalism and generative AI contradict each other, if you really put it clearly. Journalism is about facts, and generative AI calculates probabilities. In fact, I learned this as an expert in the expert committee on quality journalism here: accuracy is the very core of journalism. It’s really at the core of the definition. Nevertheless, there are lots of opportunities that newsrooms see, and you might be surprised to hear, after the elaborations before, that so many in the media industry are actually excited about AI, because it helps them with all kinds of things. It helps them with news gathering, for example in data journalism, doing verification, document analysis, helping them to augment images, brainstorming for ideas. There’s lots of stuff there. It helps them with news production: transcribing, translating, helping with titling, subtitling and particularly liquid formats. This is a key word here: switching easily between formats, converting text to video or audio and vice versa. So everyone gets what they like, the audience of one that was just referred to. And then, in the end, news distribution. You can personalize news. You can address different audiences by different needs, for example by their location and their preferences, all kinds of things that really help. And this is Ezra Eman, director of strategy and innovation at the public broadcaster of the Netherlands, one of them. He says: with generative AI, we can fulfill our public service mission better. It will enhance interactivity, accessibility, creativity. It helps us to bring more of our content to our audiences. And there are actually some examples: there are nine use cases in this report, and we had 15 in the previous report. I’ll just touch on three of them to give you a clear example. For example, an internal tool that RTS in Switzerland developed: the story angle generator. This is for, like, day two after a breaking news situation, when newsrooms might run out of steam a little bit and lack ideas about what to do next. And this angle generator gives them an idea, like: oh, maybe you can produce some entertaining stuff or some explanatory journalism out of this. So it really helps them to be more creative with one news piece. Also, we will see a lot more chat formats. This is from Swedish Radio, which together with the EBU developed this news query tool where you can actually interact with news. And then, last but not least, and I’m German and based in Munich, you will see the regional update that Bayerischer Rundfunk developed, where you can put in your postal code and then sort of draw a line around the region you want your news from, and it will create automated podcasts for you to listen to. So you’re always up to date on what’s in your region.
Nevertheless, when I was commissioned to do the second report, I was actually expecting that much more would have happened. But no: while the tech industry is really forging ahead at speed, the media companies are much slower. They are taking a much more intentional approach, and for good reason, because the trust of their audiences is at stake, and therefore their business models, because the major business model of journalism is audience trust. If you lose trust, you lose your business model. In fact, audiences are really quite tolerant about how newsrooms use AI. They find it totally okay if they use it for things like brainstorming, image recognition or automating layouts, like those print layouts no one wants to put effort into any longer. But they are absolutely skeptical when it comes to things like generating a virtual presenter or visualizing the past. This is what studies reveal. Nevertheless, these audience perceptions are strongly influenced by people’s own experience with using AI, so attitudes about what is acceptable and what is not are likely to shift. And this is Jyri Kivimäki from the Finnish broadcaster Yle, and he said: we started labeling these AI summaries, and our users actually said, hey, come on guys, we don’t care what you use it for, just do your job. We trust you to do the right thing. So they got really angry, he said, which is really interesting. And I will confront you with three big questions that the report revealed and that newsrooms and the media industry will discuss. The first big question is about accuracy: I already mentioned the accuracy problem, and how to solve it. There was BBC research that came out in March this year which showed that when AI assistants took news content and served people news from it, there was an accuracy problem in every second piece of news. And that is a problem the media has to face, because accuracy is at the very definition of journalism. Peter Archer, the Director of Generative AI at the BBC, says we need a constructive conversation: the tech industry and the media industry need to team up, and we need to be part of this, because the tech companies too can only be interested in having that problem solved. Big question number two, and I’m particularly fond of this one: will AI make people creative, or will it make us lazy? And my response to that would be: well, if people want to be creative, AI can make them more creative. But if you just want to offload work, press a button and not think about something, it can also make you lazy. This is Professor Pattie Maes from the MIT Media Lab, and I really appreciate her input to this report. She said: actually, this is not a given. We can tease people a little bit so that they are creative. It is possible to build AI systems that challenge the user a little bit, and we don’t have to simplify everything for everybody. And I find that quite important. The third big question is: will there be money in it? That’s a big question for newsrooms: will their business model survive? Because the visibility of journalism is threatened, and we will learn more about that, and there is also the huge dependence on these tech companies. And Professor Charlie Beckett is the director of the Journalism AI programme at the London School of Economics.
He said: yeah, but if you are entirely reliant on AI, what happens if, you know, the tech companies put up the price fivefold or suddenly change what the stuff can do? So we are in the hands of the tech companies, and it is really important to be aware of these dependencies. And the big question really then is, as I just mentioned, how to keep journalism visible. Because as content has become a commodity and is being produced at scale, it will be more important than ever to invest in journalism and in direct human connections with audiences, to really establish the legitimacy of journalism in this age of content abundance. And there’s Laura Ellis, also from the BBC, who said something that I found very smart: if we just automate everything, because it’s so easy to automate, will we then lose connections to our audiences even further? Will we still have someone in our newsrooms who speaks with that voice of the audience? So that is really something that we should consider. So, to finish up with this: what do news organizations need to do? And I’m not going into what regulators need to do, but just plainly news organizations. Most of all, investing in quality journalism is key to securing their survival and maintaining their legitimacy as the providers of trusted news and information. Building direct audience connections: really knowing who they serve, and actually getting those email addresses and connections so that you can actually reach your audiences, because otherwise the platforms will determine and control all your access to audiences. Then also making things easy in the newsroom, so that people in the newsroom actually adopt these AI tools and use the right tools to begin with. But don’t make it too easy; really don’t let people stop thinking about it. And then the human qualities of it all: be accessible, approachable and accountable, and be human. This will be a decisive quality for news organizations. And let me conclude with a quote by Anna Lagercrantz, the director general of Swedish Television, who says very clearly: journalism has to move up in the value chain. In other words, journalism has to get a lot better, because the copy-and-paste journalism that we are still confronted with these days doesn’t serve us well any longer. And she said something else very important: journalistic institutions, media institutions, need to be accountable, because accountability will be a rare commodity. She said in our interview: try to hold an algorithm accountable. Or try to hold a platform company accountable. But we are there; people can walk up to our front steps and hold us accountable. And that is really important. And she also reminds us that journalists will need to shift from just being content creators and curators to meaning makers, because we need journalism to make meaning of this complex world and an overabundance of choices. Thank you.
Giulia Lucchese: Thank you very much, Alexandra, this was very insightful. Notwithstanding the clear contradiction, I was at least pleased to learn about the opportunities for news outlets, but also the creative use made of generative AI. Thank you also for stressing the concepts of accuracy, trust, but also accountability. Now, without further ado, I invite our next speaker to intervene. David Caswell is a product developer, consultant and researcher of computational and automated forms of journalism, and is also a member of the MSI-AI, the expert committee drafting the guidance note we mentioned before. David, please, would you provide us with your perspective on upcoming challenges? And I hope you do have solutions for them.
David Caswell: Yes, solutions. That’s the big question. I’ll just go through where I see kind of the state of the future, I guess, and then maybe a couple of solutions, or prospective solutions, at the end. So what I’m going to do in these seven minutes is just try to persuade you why you should take the more exotic forms of AI that you hear talked about, AGI, superintelligence, seriously, and then connect that with some of the risks, and maybe a few opportunities, in journalism and in expression, human expression and information more broadly. So, to take these forms of AI seriously: one reason to do that is to look at the trend lines from the last half decade. On every trend line, you can look at the benchmarks, the scaling laws, the reasoning abilities. Essentially, we have maxed out the benchmarks; we’ve got to 100% and can’t go any further. There’s a real problem right now in AI about how to measure how smart these things are, because the benchmarks are saturated. And things are just getting started. We’ve got literally more than a trillion US dollars in soft commitments for AI infrastructure announced in the last year or 18 months. Some of that is not going to happen, and all the rest of it, but it’s a vast, vast amount of money, right? It’s money on the scale of, you know, the moonshot that the US did in the 60s. And the effects of that investment haven’t begun to show up yet. Another reason we should take AI seriously is because the experts are taking it seriously. Sam Altman at OpenAI does not think he’s going to be smarter than GPT-5. Dario Amodei, the CEO of Anthropic, another big model maker, likens what’s coming to a country of geniuses in a data center. So say a country of 5 million people, each of them an Albert Einstein, in a data center in San Antonio, Texas. That’s the kind of thing to imagine here. And you see this again and again and again. These people do have biases, but only in the same way that climate scientists have biases and vaccine experts have biases. We listen to those experts, and maybe we should listen to these experts a little bit too. Maybe not completely, but a little bit. We do have independent studies of this by very, very qualified and principled people; there’s one I highly recommend, the AI 2027 report. But the interesting thing, both among the experts and in these independent analyses, is that even the critics of this concept of AGI and superintelligence accept that dramatic things are going to happen. So even the people who are downplaying what’s going on are still painting a pretty dramatic picture. Another reason that we should take AI seriously is consumer adoption. If we look at the use of AI, this is from work that was done by the US Federal Reserve back in September: at the moment, about a quarter of the US working population uses generative AI at work once a week or more. If you look at it on the content side, at the entire amount of text content produced in the US, significant portions of that in major areas are already generative-AI generated. For example, about a quarter of all corporate press releases are AI generated. So this stuff is showing up very, very rapidly, already in double digits in weekly use and in content. Another reason to take this stuff seriously is just to play with the tools.
Like, honestly, everybody here: sign up for the tools, sit down, play with the most advanced models, really exercise them, learn what these reasoning models can do, learn what tools like deep research can do, or agents like Manus. These are kind of the leading edge of where AI is, but they’re completely accessible. You don’t need technical skills. You don’t need special access. You just need a little bit of curiosity. And if you play with those tools and really exercise them on a subject that you know well, you will be pretty convinced that big things are coming. So I would suggest that engaging with the tools and judging for yourself is a good reason to take it seriously. And then we should look at the progress over the last half decade on the largest possible benchmarks, benchmarks on the largest scale. For most of my life, the big golden ideal of AI was the Turing test, passing the Turing test. Well, we passed that in about 2019, and we didn’t even notice it. So that’s gone. The next milestone, a large, large benchmark here, is AI that’s as smart as the regular, average, median, modal human in a vast array of tasks, the most digitally accessible tasks. That’s kind of gone. If you’ve played with these tools at all recently, you’ll see that they can draw better, they can write better, they can reason probably better, they can do most things better than the average or median human. Another possible benchmark is AI that’s as smart as the smartest individual human in any digital domain. This is my personal definition of AGI; it’s what a lot of people think of as AGI. We are not quite there, that’s a dashed line, but we are almost there. If you really get involved with some of these reasoning models on a subject that you know well, pick your topic, you will see that the models are making significant progress in that direction. So there’s a reasonable case we’re going to get to that point within a couple of years, two or three years. And then, lastly, there’s this other category: human beings are smart not just because we’re individually smart; we’re smart because, as a society of 8 billion people, we can do amazing things. And this idea that we could have models or machines that are smarter than all of us collectively, sometimes called superintelligence, is taken very, very seriously by some very serious people in this world, not just people at the model companies, but startups, investors, governments, and so on. A little further out, but pretty significant. So there are risks, obviously, with all of this. One risk, and this was something spoken about earlier, is this significant risk of the bifurcation of societies into super-empowered and disempowered people. If you look at all the possibilities in media that generative AI can bring: for some people, it is like having your own personal newsroom. It’s like having your own army of academics and researchers and analysts. It’s like having your own personal central intelligence agency. It super-empowers what you can do. For others, it’s an escape. It’s a distraction. It’s a way out of reality. It’s a way to avoid dealing with things you need to deal with. And the thing here is that these are feedback loops. The more empowered you are, the more empowered you become, and the more distracted and confused and escape-focused you are, the more it goes that way.
And so you end up with some parts of society having a dramatic gain in the agency that they have, and some losing agency. So that’s a risk. That’s a very real risk, and it’s already happening to some degree. Here’s another risk: news as a complex system. Here’s a kind of series of events in a newsroom, say your average newsroom. Step one, AI shows up. You say: right, we can use this to make our jobs as journalists easier. That’s great. Then you say: well, we can actually use it to do whole jobs that we don’t want to do. These are jobs that we don’t like or that we have trouble filling; we’ll just get AI to do those jobs. Well, that’s all right. Then you’re in this situation where you have AI and it’s doing most jobs. So you can go home. You can have a three-day week, or you can come in at 11 and go home at three, because the AI is doing most of the jobs. And that sounds kind of nice, right? And then you get to this point where: what exactly is the AI doing? You know, I haven’t been checking in for a few weeks, and what is it doing? And then you’re at the point where you don’t know where your information is coming from. The whole ecosystem works as it works now: your phone has got alerts, you’ve got news on webpages, you’re talking to ChatGPT about news and all the rest of it, but you don’t know where that’s coming from. The situation has got so complex that it’s a complex system. And this idea of big chunks of our society being a complex system: our financial system went that way. Very few people understand how the financial system works, even though we all depend on it, and there are many researchers right now who study the financial system as a complex system. Here’s another reason to take AI seriously in terms of risk, which is this idea of persuasion machines. We got an early glimpse of that recently in this study from a team at the University of Zurich. What they basically did was put a set of AI agents on Reddit, on a subreddit called Change My View. Change My View is a subreddit where you put forward a point of view, and then if somebody changes your mind, you award them a little bonus point. And so they were able to use that setup to do this very, very high-scale test. There were ethical issues around the study, so it’s kind of a little obscured, but in the paper that they would have published had it passed the ethics guidelines, they found that these models could achieve persuasion rates between three and six times higher than the human baseline. So the idea of machines that are hyper-persuasive for political or for commercial purposes is not a far-fetched idea at all. And finally, just in legacy news media: ChatGPT shows up in late 2022, and people in the newsroom start building guidelines and providing access and doing prompt training and all that kind of stuff. You get into 2023, and newsrooms are starting to do things like summaries; they’re starting to automate tasks. You get into 2024, and the more advanced newsrooms are building little chatbots that can chat with their archive, or semantic search where you can get better search. A lot of them are building toolkits where you can automate a lot of tasks in a newsroom; that’s quite common. And at the moment, I think this year a lot of newsrooms are using AI to do news gathering, to do news gathering at scale. So there are opportunities here for legacy news media, but it’s kind of a race, really, at some level.
The change that’s here is dramatic, the change that’s coming is dramatic, and there’s an open question about whether legacy news media can take advantage of those opportunities. There are other opportunities as well, right? If you look at how informed societies are at the moment, why would we consider that to be an end state? If you take a scale here from medieval ignorance on one end, say a peasant in a village in 1425, to god-like superintelligence on the other end, we have come a long way along that scale using technology: the printing press, the invention of journalism, radio, broadcast television, the internet, social networks. What might we be able to do in terms of informing society once we diffuse all of these AI tools we have at the moment into our ecosystem? What would we do with AGI? What might we do with superintelligence? So there really are opportunities here to dramatically increase the level of awareness that people have about their environment. I’ll just leave it there. Thank you, thank you.
Giulia Lucchese: Thank you very much, David, for addressing these exotic forms of AI, AGI, and their relation to human expression. It seems like we are running late on a lot of the challenges you listed, but you were also so kind as to conclude your presentation with opportunities, at the end at least. Last but not least, I pass the floor to Julie Posetti. Julie is a feminist journalist, author and researcher, Global Director of Research at the International Centre for Journalists and Professor of Journalism at City, University of London. Julie, I know you would like to offer your perspective on the issue by starting with a video.
Julie Posetti: Yes, I do plan to do that, and it segues directly from David’s conclusion, which was with reference to godlike omniscience. If we can play the video, please. I think we’re having trouble with the audio, is that right? (Video plays.) You also want to live forever. If you think about AI and you think about God, what is God? God is this thing that is all-knowing, it’s all-seeing, it’s all-powerful, it transcends time, it’s immortal. If you talk to a lot of these guys, the very senior ones who are building AGI, artificial general intelligence, they are creating something that has all human knowledge put into it, that surpasses any single human in its understanding of the world and the universe, and that is everywhere, connected to every device in every city and every home, that’s watching you and thinking about you. And if we turn it on and let it start to influence society, it’s very subtly making decisions about you, where you can kind of feel it a little bit, but you can’t see it or touch it. And then imagine you have a bunch of men who also want to live forever, defeat death, become immortal. And in order to do that, they have to find a way to connect themselves to this creation. These men see themselves as prophets. Bryan Johnson, the guy that we had dinner with, literally said, and this is in the podcast: we’ve got it wrong. God didn’t create us; we’re going to create God, and then we’re going to merge with him. And all the weird things that these guys say and do, if you start to understand that there are aspects of this that are like a cult, a fundamentalist cult, or a new religious movement, a lot of their actions start to make a lot more sense. And if you actually start to interpret these statements not as just some passing flippant comment, but as a pattern, I think that we’re dealing with a cult in Silicon Valley. OK, apologies for the issues with the sound and the video sync. That was a clip from a panel discussion at the International Centre for Journalists, sorry, the International Journalism Festival in Perugia last month. For those of you who have forgotten, Christopher Wylie is not just a commentator on AI; he was in fact the Cambridge Analytica whistleblower. He is the one who revealed the data scandal that saw millions of Facebook users’ data breached and compromised. And you’ll remember that the Cambridge Analytica scandal involved an early iteration of AI tools that were designed to micro-target with macro-influencing in the context of political campaigns. Several people have said that his comments sound alarmist, but he also pointed out that we need to stop being so polite, that we need to actually articulate the concerns and the risks associated not just with the technology but with the business models behind the technology, which are designed to further enrich billionaires, those who actually stand to profit most from the mainstreaming of AI. And ultimately, as David has pointed out, the objective is AGI, and then superintelligence. So it might sound alarmist, but the facts are alarming, and they should be particularly alarming for people and states and intergovernmental organisations that are invested in securing and reinforcing human rights in the age of AI. So, as I said, Chris exposed the Cambridge Analytica scandal, and when he talks about this desire for omniscience and omnipresence among the AI tycoons, I think it’s important to highlight the links between the rights to privacy, freedom of expression and freedom of thought.
And he does that in an investigative podcast, the one he was speaking about there, published by Coda Story, a global-facing investigative journalism outlet that emphasizes the identification of prescient trends, particularly with regard to disinformation and narrative capture. That podcast is called Captured, the secrets behind Silicon Valley’s AI takeover. And I’ve used that example partly because Coda Story is one of the research subjects for a global study that I currently lead called Disarming Disinformation, which is looking at the way news organizations are confronting, responding to and trying to counter disinformation, particularly in the context of the challenges and opportunities that AI presents. So I think it’s important, as I said, to consider the right to privacy in combination with the right to freedom of expression, and therefore to think about AI, in all its integrated forms, and the responses to it holistically. Before I turn specifically to generative AI and freedom of expression, I also want to highlight the need to consider the implications of the AI of things, in particular the application of AI glasses, which pose a significant risk to the kind of freedom of expression that relies on the right to privacy, such as investigative journalism that depends on confidential sources, such as Christopher Wylie. He was initially a confidential source, before he identified himself, for the Cambridge Analytica reporting by The Guardian, The New York Times, Channel 4 and others. And it’s noteworthy that Mark Zuckerberg recently invited Meta’s users, nearly a billion of them, to download a new AI app that will network and integrate all of their data, including Meta’s new or upgraded AI glasses, which include facial recognition. That prompted John McLean to write in The Hill, a newspaper coming out of DC, that Mark Zuckerberg is building a new surveillance state. And he wrote: these glasses are not just watching the world, they’re interpreting it, filtering it and rewriting it with the full force of Meta’s algorithms behind the lens. They’ll not only collect data, but also send it back to Meta’s servers to be processed, monetized, and repurposed. Facial recognition, behavioural prediction, sentiment analysis, they’ll all happen in real time. And the implications are staggering. It’s not just about surveillance; it’s about the control of perception. That’s a very important consideration when it comes to the function of independent journalism in democratic contexts, but also freedom of expression more broadly, and particularly issues around election integrity, for example, connected directly to information integrity. And coming back to generative AI specifically, an example from Australia, which we saw starting to be replicated in the very recent Australian elections. The ABC’s, the Australian Broadcasting Corporation’s, chief technology reporter, working with the fact-checking team, and in some ways using AI technologies to analyze large data sets through natural language processing, identified the function of Russian disinformation in attempting to pollute chatbots. As a way of polluting information, she described it as working the same way food poisoning works: inserting disinformation into large language models by flooding the zone with literally fake news.
So these artificial news websites: one that they identified was called Pravda Australia, an iterative title. It is largely derived from Telegram chats full of Russian disinformation, and that disinformation is being surfaced in the context of queries in the major chatbots that are being used. So this is something that I think needs to be very carefully considered with regard to accuracy and verification, which are real challenges with ChatGPT or any other tool that you’re using to query large language models. And the second point that I want to make is about the ability, therefore, to influence the outputs not just with disinformation from foreign state actors of a political persuasion, but also with hate speech and general disinformation connected to health, for example. If the objective is to radicalize certain citizens or societies as a whole, and to roll back rights, then this is another weapon that the agents of such pursuits have available to them. And we heard an example yesterday from Neema Lugangira, who chairs the African Parliamentary Network on Internet Governance, of her experience of seeing generative AI used on X, so Grok on X, to generate deepfakes, effectively. And her point was that generative AI can be used to really reinforce sexist stereotypes, but also to generate misogynistic, hyper-sexualized images. And we know about deepfakes in the context of deepfake porn; we’re seeing this used against journalists, we’re seeing this used against political actors, as that example showed. So I think, if we’re to look at opportunities, we need to be aware of the tactics of those actors. They tend to be networked, they’re very creative, and they’re transnational, cross-border. So the challenge for us, those of us trying to reinforce human rights, the rule of law and democracy, is to act in similarly networked and creative ways. And I’ll leave it there. Thank you.
Giulia Lucchese: Thank you very much, Julie. I’m fascinated and surely alarmed right now. Thank you for starting with this tough, provocative contribution to the audience. This comes at a very good moment, because now we open the floor for questions. Please, both online and in person, do not hesitate. Yet I would ask you to keep your questions focused, so that we can give voice to diverse participants. Please, who is willing to break the ice? Yes, would you start? Thank you.
Audience: All right. So first of all, thanks for the very interesting insights. The point I want to raise refers to the first part of the presentation, and the point I want to get your opinion on, and how you think it’s going to develop in the future, is this: we have academic research showing that when a group of humans uses AI to write an essay or a newspaper article, the output on a collective level will be more similar. So on a collective level we have more homogeneous output. On the other hand, we always argue that AI is going to help with a more personalized experience; how we consume content will become more individual. For me, that seems kind of contradictory, and I wanted to get your opinion on that point. Thanks.
Giulia Lucchese: Thank you. Should we collect a couple of questions and then, okay, thank you.
Audience: First of all, thank you very much for all the interventions. They were very inspiring and interesting. I would have just a quick and simple question. We are still witnessing that in many fields AI does not make key decisions in relation to the production of content. Will we someday witness an AI, maybe applied to press activity or related fields, tell us that we won’t get access to a certain piece of information or news because of a decision that was made by AI, an exclusive decision of the system? Thank you very much.
Giulia Lucchese: Thank you. Yes, please.
Audience: Thank you to all the speakers. I have a question regarding something that was touched upon, I believe, in the first presentation: the fact that the use of, or dependency on, AI systems also makes journalism, and information in general, dependent on the prices that are set by these corporations. And I was wondering how you see the quality of information diminishing in relation to the possibility of more paywalls being introduced, with access to accurate and verified information also becoming a socio-economic issue. Thank you.
Giulia Lucchese: Thank you. I’ll take the last question. No, there’s not a last question. Oh, yes, there is. Please. Thanks.
Audience: Sorry, just quickly. It’s probably for David Caswell, if I may. You outlined those two clear distinctions of where you see society going. And I’ve thought about this before, this idea that you have massive polarization: there are individuals who just get distracted by social media, sucked in by the algorithm and seeing simpler things, not engaging in intellectual acts of curiosity or writing, and then you have those who really understand and comprehend the system. So you have massive intellectual polarization. But could you not also argue that there’s a third category of people who just say: I don’t necessarily want to know? The financial system is different, right, because you have to be in the system of finance. But couldn’t you say there’s a third group who just says: I don’t want to be in the system? So not being sucked in, not brainwashed exactly, but just completely escaping, getting out of the matrix, in a way. Do you think that it’s possible to achieve that?
Giulia Lucchese: Thank you. As I’m mindful of the timing, I would like to propose that, starting with Andrin, you each provide a reply to at least one of the questions, whichever you like the most, and then we move on. And maybe we avoid the final round, so if you have any final remark, I would ask you to condense it into a two-minute intervention now. Thank you.
Andrin Eichin: I will answer just three of them, but very briefly, because I think they’re very good. The first question was on standardized output and how it interacts with individual expression. I think this is a really good question; it’s something we were considering as well in the expert committee. I think it really depends on what kind of tasks we’re looking at. There will be a lot of standardized tasks, writing emails, summarizing reports, where we will see standardized expression, and this will probably increase and will also create a problem with regard to the data sets that we’re using. And then we have creative expression, where the way we interact with generative AI systems also increases our abilities, as we’re seeing in a creative way with memes, but also in journalism. So there will be elements where we have standardized expression, and there will be elements where generative AI expands our expression as well. And you can even go down to the level of the text itself, with regard to words and in relation to ideas. Maybe on question two, on whether AI will make key decisions with respect to content: I believe so, definitely. Maybe I’m less pessimistic and doomy than David may be; I think the timescale will be a bit longer. If we look at how productive the interaction with AI systems actually is today, it’s still quite low. A lot of it is for entertainment purposes, but this will change. And this leads me to question four. Although you addressed it to David, I’ll try to jump in. On whether you’d be able to escape: maybe there is this third group, but I don’t think it will last long. Again, we’re speaking still in the future, not two or three years, but at one point generative AI systems will be such a part of our economy, they will be so important for you to be productive and to participate in the economy, that there will be almost no option to opt out. If you don’t use a smartphone today, it is very difficult to participate in society. And there are people that don’t use the financial markets today, but they are very few.
Alexandra Borchardt: Is it a contradiction? Will it make people more creative or more reflective? Well, this really depends on the system’s design, and of course on what you want to do with it, and that is where the socioeconomic differentiation might happen, as it already has with social media. With social media, too, if you wanted a lot of information, take the pandemic for example, you could interact directly with scientists to find out everything. But if you didn’t have that first bit of knowledge, if you didn’t have the access, if you didn’t know which scientists were really good at this, maybe you didn’t get any information at all, or only the basic information from public service media. So it really depends on what people are going to do with it, but it also depends on the system’s design. And what Professor Pattie Maes said was that you can challenge people a little just by asking one more question, not just pressing a button and taking the output. I don’t know if you’ve experienced that: oh, produce a report for me; shall I format it for you and send it away? You can get things done without ever engaging your brain. But if there’s just one question back, like, do you think this really makes sense, or ask me something in return, that engages people a lot more. That is the kind of system design that can really help. I hope that makes you happy. Then I guess I’m the person for the paywall question, because I spent almost 30 years in the media industry and I’m really worried about business models. The paywall problem we have already, and have had for some time: quality media, to survive, set paywalls to make people pay for news, which makes a lot of sense. You can’t go to the bakery and help yourself; you need to pay. So the idea of many news organizations is: this is quality information, and if it’s worth something to you, then you pay. But AI can undermine paywalls and give you the output anyway. So I have no idea whether the paywall is going to be the thing of the future. If you look at emerging generative search and you just ask the AI questions, you will get responses, and sometimes those responses contain material that is actually behind paywalls. So news organizations will need to do a lot more to engage people, to show them that they create real value in their lives, to really make them pay. And public service media will probably become a lot more important and necessary in that context. So the future of the business model is really something that worries me. And the third one I’d like to comment on: will these systems make decisions? Yes, of course, with agentic AI emerging. Agentic AI means you set some goal and these agents then, that’s why they’re called agents, more or less independently make decisions on your behalf. But what these agents probably won’t do is investigative research, for example, because they have no incentive to do so.
So this is most likely where journalism really needs to intensify its efforts: becoming more investigative, holding power to account, and going for the things that AI won’t do. But we might be surprised by what AI will do in the future, so I don’t think I can give you the final answers here. But David might. He’s the super expert here.
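[Editorial note: Alexandra’s point about system design, that one reflective back-question can keep users engaged instead of letting them pass outputs along unread, can be made concrete with a minimal sketch. Nothing like this was shown at the session; the generate function below is a hypothetical stand-in for any model call, and the only thing the sketch illustrates is the design pattern itself: ask the user to name a claim worth double-checking before releasing a draft.]

from typing import Optional

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any generative model API."""
    return f"[draft generated for: {prompt!r}]"

def assisted_draft(prompt: str) -> Optional[str]:
    """Return a draft only after one deliberate back-question."""
    draft = generate(prompt)
    print(draft)
    # Friction by design: one reflective question instead of a
    # one-click "format it and send it away" handover.
    answer = input("Before you use this, which claim in the draft "
                   "would you want to double-check yourself? ")
    if not answer.strip():
        print("No answer given; draft withheld until reviewed.")
        return None
    return draft

if __name__ == "__main__":
    assisted_draft("Summarize the quarterly report")

[The design choice is the single extra exchange: it costs seconds, but it forces exactly the engagement Alexandra describes.]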
David Caswell: Well, sorry, I first want to clarify that I did not intend to be pessimistic and gloomy. I was going for excited and optimistic, but obviously failed. I’ll just quickly go through my brief responses to some of the questions. The question about the balance between the narrowing of the distribution of expression on the one hand, and all of these opportunities to be more expressive, articulate, creative, artistic on the other, that’s a very real question, and the honest answer is: I don’t know. But I think it brings out very clearly that this is a dynamical system. Certain things will change that move freedom of expression, or the makeup of the information ecosystem, or our relationship with information, in certain directions, and other factors will move them in other directions. That is the uncertainty we’re in right now: all of these things are changing at once, and what the net of that ends up being, we don’t really know. In terms of AI making decisions, that happened long ago. If you get your news from social media, from Facebook, from Google News, AI is figuring out what you’re going to see. It’s already happening inside news organizations with generative AI, in story selection, angles, and all the rest of it. And even in terms of agents, there was an interesting development about two months ago. A company in the UK called Newsquest hired their first, what was the title for their newsroom? AI agent orchestrator. They hired a journalist whose job it is to manage a team of agents to make journalistic decisions and do journalistic things. So I think we have passed that milestone; AI is already making fairly profound editorial decisions. Not broadly, and I think a lot of newsrooms that are touching on the edge of this don’t want to talk about it, but the trend is pretty clear. On the cost question, I’m not sure I got it right, but I think it was reacting to the slide with Charlie Beckett’s quote about what happens if the model companies increase the cost of these models five times. I’m not sure that’s going to happen. One of the surprises of the last year or two is that these models might be much more like electricity, much more like a utility, than some special thing like a social network. Social networks, because of network effects, had a winner-take-all kind of dynamic. These models might not have that. It might be that anybody can build one and get to some level of intelligence, just like anybody can build a power station and generate electricity: expensive, but doable. So I’m not that worried about the underlying cost. The bifurcation one, that was a very good question. And absolutely, I agree with Andrin that it’s going to be hard to opt out of this. The example I would use is not so much opting out of smartphones; it’s worse than that. It’s more like being Amish or Old Order Mennonite, where you’re basically picking a point in time and sticking with it. The Anabaptist communities have a large population in Canada, the US and South America, so it’s not nothing. But it is that kind of scale, I think.
That bifurcation and its analysis came from a very comprehensive scenario-planning exercise that I led last year, called the AI in Journalism Futures report; you can find the PDF online. There were five scenarios. One was the bifurcation, but there was a whole other scenario around that opt-out option. The report consolidated the points of view of about 1,000 people, and one of the key findings was that most people who thought about this assumed some portion of the population would opt out.
Julie Posetti: Thanks, David. I think everybody’s questions have been answered, so I’ll just make a couple of remarks reflecting on what’s been said, picking up on Alexandra’s and David’s presentations in particular, and on that Charlie Beckett quote. It does concern me that we have not spent much time during this discussion addressing questions around regulation and embedding human rights in these processes. As someone who has written a lot about the future of journalism and about technology-led approaches to journalistic activity, which during Web 2.0 led to what I termed platform capture, I worry that we haven’t necessarily learned the lessons from that period, when news organizations and individual journalists became trapped within the platform walls. I realize this is different technology, but we failed to be appropriately critical, I think, and we failed to look at the risks in a way that protected business models and ensured an editorially led approach to engaging with technology. So I and others have warned about the risk of repeating that Web 2.0 platform capture through a ready embrace of AI without appropriate levels of critical engagement, not just with the technology, but with the characters behind it. And I would slightly disagree with David characterizing Sam Altman as an expert and comparing him to climate scientists, for example. Climate scientists are independent experts, and we have independent experts in this field too. We need to separate expert perspectives from those who stand to massively profit from the technology they’re propagating, and I think it’s important to highlight that. I also didn’t speak enough about the gender implications, or the implications for diversity more broadly. Particularly in the current geopolitical climate, where diversity is verboten in some contexts and has been weaponized, we need to reinforce the humanity in these discussions. And that goes to Alexandra’s point, which I’ve heard multiple times from journalists internationally trying to figure out the unique selling proposition of professional independent journalism, or of producers of public interest information more broadly: it is sense-making, meaning-making, interpretation. And sometimes that does involve considered prediction, based on facts, which helps societies prepare for risks. So I will leave it there, apart from quoting a Kenyan woman politician who spoke yesterday, who said that we need AI governance that protects not just data, but dignity. I think that’s a good place to end it. Thank you.
Giulia Lucchese: Thank you very much, Julie. Thank you to all our panellists. I will give the floor to Desara Dushi, IGF Secretary, EuroDIG Secretary, my apologies, for the conclusions to be agreed by the participants.
Desara Dushi, Vrije: Hello, everyone. I’m going to share the screen with the messages I tried to draft during the workshop, and I’m going to read them one by one. I’m from the EuroDIG Programme Committee, and we need to draft three messages for each workshop. The first message I tried to identify is that generative AI has the potential to diminish unique voices, including minority languages. It poses integrity issues, problems with identifying whether content is created by humans or technology. It also has the power of persuasion, including via the disinformation it enables, and it influences market dynamics. The second message is that journalism and generative AI contradict each other: the former is about facts, while the latter generates content irrespective of facts. There is a risk of standardised expression as well. However, generative AI also offers opportunities for journalism, helping to bring more content to audiences. Questions still remain, though, such as accuracy and the impact on humans. The risk of using AI in journalism is losing control over news production and quality, which might also affect the future of the business model. One of the main issues will be keeping journalism visible and keeping the connection with the audience. And the last message would be: we should take AI seriously, be aware of what it can and cannot do, and of the impact of its rapid development in the near future, which creates a lot of uncertainty in terms of dynamics and impact on freedom of expression. There is a risk of omnipresence as well. AI, including generative AI, has implications not only for freedom of expression but also for privacy, for example through surveillance, which in terms of freedom of expression leads to control of perception. We need to act on a networked and collective level. Now, I would ask everyone if there are any major objections to these messages. You do not need to worry about the formatting, the language and editing, because the organizing team will take care of that afterwards. But do you see any major objections regarding what was said during the session?
Alexandra Borchardt: On the second one: that was meant as a provocation. It generates content not irrespective of facts; rather, it basically calculates probabilities. So it could be true or it could not be true, and that makes it so difficult to figure out, because generative AI produces something that sounds convincing. It is really optimizing for credibility, not for facts. So maybe that should be toned down a little bit. Because obviously, most of the stuff that generative AI…
David Caswell: It’s like food that’s 95% edible.
Julie Posetti: Well, maybe 75%. Just one thing. There’s nothing wrong with how you’ve represented what I said, but it would be good to get the gender element in, which I think is very important: the ways in which generative AI can be used to facilitate technology-based violence against women. Deepfakes were one example, used against women political actors and women journalists, which is about silencing, about chilling freedom of expression. So I think that would be an important point to add.
Andrin Eichin
Speech speed: 153 words per minute
Speech length: 1620 words
Speech time: 634 seconds

Difficulty of opting out as AI becomes ubiquitous
Explanation: Andrin argues that as AI becomes increasingly integrated into the economy and society, it will become very difficult for individuals to opt out of using AI technologies. This is similar to how difficult it is to participate in society without using a smartphone today.
Major discussion point: Future Implications of Advanced AI
David Caswell
Speech speed: 173 words per minute
Speech length: 2844 words
Speech time: 984 seconds

Power of persuasion and influence on beliefs/opinions
Explanation: Generative AI systems can engage in effective persuasion through hyper-personalization and ongoing user interaction. They can influence the beliefs and opinions of humans using psychological tricks.
Evidence: A study from the University of Zurich showed AI agents achieving persuasion rates 3-6 times higher than the human baseline on Reddit.
Major discussion point: Impact of Generative AI on Freedom of Expression
Agreed with: Andrin Eichin, Alexandra Borchardt, Julie Posetti
Agreed on: Generative AI has significant implications for freedom of expression

Need to take potential of AGI and superintelligence seriously
Explanation: David argues that we should take the potential of AGI and superintelligence seriously due to rapid progress in AI capabilities. He cites expert opinions and consumer adoption trends as reasons to consider these possibilities.
Evidence: Examples of AI passing benchmarks, expert opinions from AI company leaders, and consumer adoption statistics were provided.
Major discussion point: Future Implications of Advanced AI
Disagreed with: Julie Posetti
Disagreed on: Potential of AGI and superintelligence

Risk of bifurcation of society into super-empowered and disempowered
Explanation: David suggests that AI could lead to a division in society between those who are super-empowered by AI tools and those who are disempowered or distracted by them. This could create feedback loops that amplify these differences.
Major discussion point: Future Implications of Advanced AI

Potential for dramatically increased societal awareness and information
Explanation: David argues that AI technologies could significantly increase the level of awareness and information available to society. He suggests this could be a continuation of the trend of increasing information access through technological advancements.
Evidence: Historical examples of technological advancements in information dissemination were mentioned.
Major discussion point: Future Implications of Advanced AI
Disagreed with: Alexandra Borchardt
Disagreed on: Relationship between AI-generated content and journalism
Alexandra Borchardt
Speech speed: 164 words per minute
Speech length: 2477 words
Speech time: 904 seconds

Contradiction between AI-generated content and factual journalism
Explanation: Alexandra points out that journalism is about facts, while generative AI calculates probabilities. This creates a fundamental contradiction, as AI optimizes for credibility rather than factual accuracy.
Major discussion point: Opportunities and Challenges for Journalism
Agreed with: Andrin Eichin, David Caswell, Julie Posetti
Agreed on: Generative AI has significant implications for freedom of expression
Disagreed with: David Caswell
Disagreed on: Relationship between AI-generated content and journalism
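[Editorial note: the probabilistic point can be stated compactly. A generative language model samples each next token from a learned conditional distribution, and its training rewards plausibility under the corpus, not truth. The following is standard maximum-likelihood notation, not anything presented at the session:]

w_t \sim p_\theta\left(w_t \mid w_1, \dots, w_{t-1}\right),
\qquad
\hat{\theta} = \arg\max_\theta \sum_t \log p_\theta\left(w_t \mid w_{<t}\right)

[Both expressions reward continuations that are likely given the training data; factual accuracy appears nowhere in the objective, which is the formal core of “optimizing for credibility, but not for facts”.]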
Opportunities to enhance news gathering and distribution
Explanation: Alexandra highlights that AI can help with various aspects of journalism, including news gathering, data journalism, verification, and news distribution. It can also assist in personalizing news and addressing different audience needs.
Evidence: Examples of AI use in newsrooms for tasks like transcribing, translating, and creating personalized content were provided.
Major discussion point: Opportunities and Challenges for Journalism

Risk of losing control over news production and quality
Explanation: Alexandra expresses concern about the potential loss of control over news production and quality as AI becomes more prevalent in journalism. This could affect the core business model of journalism, which relies on audience trust.
Major discussion point: Opportunities and Challenges for Journalism

Importance of maintaining human connection with audience
Explanation: Alexandra emphasizes the need for journalism to maintain direct connections with audiences in the age of AI. This involves knowing who they serve and securing direct communication channels.
Evidence: Quote from Laura Ellis of the BBC about the importance of maintaining a human voice in newsrooms.
Major discussion point: Opportunities and Challenges for Journalism

Value of human sense-making and interpretation in journalism
Explanation: Alexandra argues that in the age of AI and content abundance, journalism needs to focus on making meaning of complex information. This human-driven interpretation and sense-making will be crucial for journalism’s value proposition.
Evidence: Quote from Anna Lagerkrantz about journalism needing to move up the value chain and become meaning makers.
Major discussion point: Critical Engagement with AI Development
Agreed with: Julie Posetti
Agreed on: Need for critical engagement with AI development
Julie Posetti
Speech speed: 129 words per minute
Speech length: 2026 words
Speech time: 938 seconds

Implications for privacy through AI-enabled surveillance
Explanation: Julie highlights the privacy concerns associated with AI-enabled surveillance technologies. She emphasizes the interconnection between privacy rights and freedom of expression.
Evidence: Example of Meta’s new AI glasses with facial recognition capabilities.
Major discussion point: Privacy and Human Rights Concerns

Risk of control over perception and information integrity
Explanation: Julie warns about the potential for AI to be used to control perception and compromise information integrity. This includes the use of AI for generating and spreading disinformation.
Evidence: Example of Russian disinformation being inserted into large language models and surfacing in chatbot responses.
Major discussion point: Privacy and Human Rights Concerns
Agreed with: Andrin Eichin, David Caswell, Alexandra Borchardt
Agreed on: Generative AI has significant implications for freedom of expression

Need for AI governance protecting dignity as well as data
Explanation: Julie emphasizes the importance of AI governance that goes beyond data protection to also safeguard human dignity. This broader approach to governance is crucial in the current geopolitical climate.
Evidence: Quote from a Kenyan woman politician about the need for AI governance that protects dignity.
Major discussion point: Privacy and Human Rights Concerns

Potential for technology-facilitated violence against women
Explanation: Julie points out the gender-specific risks of AI, particularly how it can be used to facilitate technology-based violence against women. This includes the creation of deepfakes to target women political actors and journalists.
Evidence: Example of deepfakes being used against women political actors and journalists.
Major discussion point: Privacy and Human Rights Concerns

Importance of embedding human rights in AI processes
Explanation: Julie stresses the need to embed human rights considerations in AI development and governance processes. This is crucial to ensure that AI technologies respect and protect fundamental rights.
Major discussion point: Critical Engagement with AI Development
Agreed with: Alexandra Borchardt
Agreed on: Need for critical engagement with AI development

Need to separate expert perspectives from profit motives
Explanation: Julie argues for the importance of distinguishing between independent expert opinions and the perspectives of those who stand to profit from AI technologies. This is crucial for a balanced and critical engagement with AI development.
Major discussion point: Critical Engagement with AI Development
Agreed with: Alexandra Borchardt
Agreed on: Need for critical engagement with AI development
Disagreed with: David Caswell
Disagreed on: Potential of AGI and superintelligence

Importance of networked, collective action on AI governance
Explanation: Julie emphasizes the need for a networked and collective approach to AI governance, in response to the networked and transnational nature of actors using AI for potentially harmful purposes.
Major discussion point: Critical Engagement with AI Development
Giulia Lucchese
Speech speed: 138 words per minute
Speech length: 952 words
Speech time: 412 seconds

Council of Europe is working on guidance for generative AI and freedom of expression
Explanation: Giulia mentions that the Council of Europe is currently elaborating a guidance note on the implications of generative AI for freedom of expression. An expert committee, the MSI-AI, is dedicated to this task, with the goal of having a guidance note by the end of the year.
Evidence: The MSI-AI expert committee is working on the guidance note.
Major discussion point: Regulatory Efforts on AI and Freedom of Expression
Desara Dushi, Vrije
Speech speed: 149 words per minute
Speech length: 357 words
Speech time: 143 seconds

Generative AI has both potential benefits and risks for freedom of expression
Explanation: Desara summarizes the discussion points, noting that generative AI can diminish unique voices and minority languages, pose integrity issues, and influence market dynamics. However, it also offers opportunities for journalism to bring more content to audiences.
Major discussion point: Impact of Generative AI on Freedom of Expression

Uncertainty about the future impact of AI on freedom of expression
Explanation: Desara highlights that the rapid development of AI creates uncertainty about its future dynamics and impact on freedom of expression. There are concerns about AI’s omnipresence and its implications for privacy and control of perception.
Major discussion point: Future Implications of Advanced AI
Online moderator
Speech speed: 168 words per minute
Speech length: 71 words
Speech time: 25 seconds

Rules for participation in the session
Explanation: The online moderator outlined rules for participating in the session. These included entering with full names, using the Zoom hand-raise function to ask questions, and not sharing links to the Zoom meeting.
Evidence: Specific rules mentioned included turning on video when speaking and stating name and affiliation.
Major discussion point: Session Logistics
Audience
Speech speed: 138 words per minute
Speech length: 517 words
Speech time: 223 seconds

Contradiction between personalized AI experiences and homogeneous collective output
Explanation: An audience member pointed out a seeming contradiction between AI creating more personalized experiences for individuals, while academic research shows AI leading to more homogeneous collective output. They asked for the speakers’ opinions on how this might develop in the future.
Evidence: Referenced academic research showing groups using AI to write essays or articles produce more similar outputs collectively.
Major discussion point: Impact of Generative AI on Expression

Potential for AI to make key content decisions
Explanation: An audience member asked if we might someday see AI making exclusive decisions about access to certain pieces of information or news in press activities. This raises questions about AI’s role in content curation and information access.
Major discussion point: Future of AI in Journalism

Concern about quality of information and socioeconomic access
Explanation: An audience member expressed concern about the quality of information potentially diminishing due to dependency on AI systems and their pricing. They questioned whether this could lead to more paywalls, making access to accurate and verified information a socioeconomic issue.
Major discussion point: Economic Implications of AI in Journalism

Possibility of opting out of AI-driven systems
Explanation: An audience member questioned whether there could be a third category of people who choose to opt out of AI-driven systems entirely, rather than being super-empowered or disempowered by them. They asked whether it is possible to achieve this ‘escape’ from the system.
Major discussion point: Societal Response to AI Advancement
Agreements

Agreement points

Generative AI has significant implications for freedom of expression
Speakers: Andrin Eichin, David Caswell, Alexandra Borchardt, Julie Posetti
Arguments:
– Generative AI has the potential to diminish unique voices, including minority languages. It poses integrity issues, problems with identifying whether content is created by humans or technology. It also has the power of persuasion, including via the disinformation it enables, and it influences market dynamics.
– Power of persuasion and influence on beliefs/opinions
– Contradiction between AI-generated content and factual journalism
– Risk of control over perception and information integrity
Summary: All speakers agreed that generative AI has profound implications for freedom of expression, including potential risks to information integrity, persuasion capabilities, and impacts on diverse voices.

Need for critical engagement with AI development
Speakers: Alexandra Borchardt, Julie Posetti
Arguments:
– Value of human sense-making and interpretation in journalism
– Importance of embedding human rights in AI processes
– Need to separate expert perspectives from profit motives
Summary: Both speakers emphasized the importance of critically engaging with AI development, maintaining human involvement in journalism, and considering human rights implications.

Similar viewpoints

Both speakers highlighted concerns about the potential for AI to create or exacerbate societal divisions and power imbalances, whether through empowerment disparities or privacy infringements.
Speakers: David Caswell, Julie Posetti
Arguments:
– Risk of bifurcation of society into super-empowered and disempowered
– Implications for privacy through AI-enabled surveillance

Both speakers emphasized the importance of maintaining human-centric approaches in the face of AI advancements, whether in journalism or governance.
Speakers: Alexandra Borchardt, Julie Posetti
Arguments:
– Importance of maintaining human connection with audience
– Need for AI governance protecting dignity as well as data

Unexpected consensus

Potential benefits of AI for journalism and information dissemination
Speakers: Alexandra Borchardt, David Caswell
Arguments:
– Opportunities to enhance news gathering and distribution
– Potential for dramatically increased societal awareness and information
Explanation: Despite overall caution about AI’s impacts, both speakers acknowledged significant potential benefits for journalism and information dissemination, which was somewhat unexpected given the general tone of concern in the discussion.

Overall assessment
Summary: The main areas of agreement centered on the significant implications of generative AI for freedom of expression, the need for critical engagement with AI development, and the importance of maintaining human-centric approaches in journalism and governance.
Consensus level: There was a moderate level of consensus among the speakers, particularly on the potential risks and challenges posed by generative AI. However, there were also nuanced differences in perspective on the opportunities and on ways to address these challenges. This level of consensus suggests a shared recognition of the importance and complexity of the issues surrounding generative AI and freedom of expression, while also highlighting the need for continued dialogue and diverse approaches in addressing these challenges.
Differences

Different viewpoints

Potential of AGI and superintelligence
Speakers: David Caswell, Julie Posetti
Arguments:
– Need to take potential of AGI and superintelligence seriously
– Need to separate expert perspectives from profit motives
Summary: David Caswell argues for taking the potential of AGI and superintelligence seriously, citing expert opinions from AI company leaders. Julie Posetti, however, cautions against equating AI company leaders with independent experts, emphasizing the need to distinguish between expert perspectives and those who stand to profit from AI technologies.

Relationship between AI-generated content and journalism
Speakers: Alexandra Borchardt, David Caswell
Arguments:
– Contradiction between AI-generated content and factual journalism
– Potential for dramatically increased societal awareness and information
Summary: Alexandra Borchardt highlights the fundamental contradiction between AI-generated content, which is based on probabilities, and factual journalism. David Caswell, on the other hand, sees potential for AI to dramatically increase societal awareness and information access.

Unexpected differences

Optimism about AI’s potential
Speakers: David Caswell, Julie Posetti
Arguments:
– Potential for dramatically increased societal awareness and information
– Importance of critical engagement with AI development
Explanation: Despite both being experts in the field, David Caswell expresses more optimism about AI’s potential benefits, while Julie Posetti emphasizes a more cautious, critical approach. This difference in perspective was unexpected given their shared expertise.

Overall assessment
Summary: The main areas of disagreement centered around the potential of AGI and superintelligence, the relationship between AI-generated content and journalism, and the approach to addressing AI’s societal impacts.
Disagreement level: The level of disagreement among the speakers was moderate. While there were clear differences in perspective, particularly between David Caswell and Julie Posetti, there were also areas of partial agreement. These disagreements reflect the complex and multifaceted nature of AI’s impact on freedom of expression and journalism, and suggest a need for continued dialogue and diverse perspectives in shaping AI governance and its application in journalism.
Takeaways

Key takeaways
– Generative AI has significant implications for freedom of expression, including the potential to diminish unique voices and minority languages
– AI poses risks to information integrity and the attribution of human expression
– AI offers both opportunities and challenges for journalism, potentially enhancing news gathering but risking loss of control over production
– Advanced AI like AGI could dramatically increase societal awareness but also risks bifurcating society
– AI development raises important privacy and human rights concerns that need to be addressed
– Critical engagement with AI development is needed, including embedding human rights and separating expert views from profit motives

Resolutions and action items
– The Council of Europe is developing guidance on the implications of generative AI for freedom of expression, to be completed by the end of 2025
– A public consultation on the guidance document is planned for summer 2025

Unresolved issues
– How to balance standardization of expression with opportunities for enhanced creativity
– How to maintain journalism’s visibility and connection with audiences as AI becomes more prevalent
– How to govern AI in a way that protects both data and human dignity
– How to address the potential for AI to facilitate technology-based violence against women
– How to ensure appropriate critical engagement with AI technology and business models

Suggested compromises
– Designing AI systems to challenge users and encourage engagement rather than passive consumption
– Focusing on human sense-making and interpretation as journalism’s unique value proposition in the age of AI
Thought provoking comments

“Generative AI systems are statistical and probabilistic machines, as you know, and as such, they tend to standardize outputs and reflect dominant patterns in training data. And studies already show today that it can reduce linguistic and content diversity.”
Speaker: Andrin Eichin
Reason: This comment highlights a key concern about how AI may reduce diversity of expression, which is crucial for freedom of speech.
Impact: It shifted the discussion to focus more on the potential negative impacts of AI on diversity and representation.
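[Editorial note: Andrin’s description of generative systems as statistical machines that reflect dominant patterns can be illustrated with a toy example. The sketch below was not part of the session and the word distribution is invented for the purpose; it shows how sampling from a fixed next-word distribution at lower temperature concentrates output on the most common choice, one simple mechanism behind reduced linguistic diversity.]

import math
import random
from collections import Counter

def sample(dist, temperature):
    """Draw one word after rescaling log-probabilities by 1/temperature."""
    logits = {w: math.log(p) / temperature for w, p in dist.items()}
    z = sum(math.exp(v) for v in logits.values())
    r, acc = random.random(), 0.0
    for w, v in logits.items():
        acc += math.exp(v) / z
        if r <= acc:
            return w
    return w  # guard against floating-point rounding at the boundary

# Invented next-word distribution standing in for dominant training data.
dist = {"delighted": 0.50, "pleased": 0.30, "chuffed": 0.15, "stoked": 0.05}

for t in (1.0, 0.3):
    draws = Counter(sample(dist, t) for _ in range(1000))
    print(f"temperature={t}: {draws.most_common()}")

[At temperature 1.0 the rarer words still appear regularly; at 0.3 the dominant word crowds out the rest, the standardizing effect Andrin describes, in miniature.]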
“While AI can enhance media efficiency, it also introduces a new economic and informational gatekeeper.”
Speaker: Andrin Eichin
Reason: This insight raises important questions about AI’s role as a new power broker in media and information dissemination.
Impact: It prompted further discussion on the implications of AI for media pluralism and the concentration of power in information ecosystems.

“Journalism has to move up in the value chain. In other words, journalism has to get a lot better, because the copy-and-paste journalism that we are still confronted with these days doesn’t serve us well any longer.”
Speaker: Alexandra Borchardt (quoting Anna Lagerkrantz)
Reason: This comment provides a provocative perspective on how journalism needs to evolve in response to AI.
Impact: It sparked discussion about the future role of human journalists and the need to focus on higher-value activities that AI cannot easily replicate.

“Even the critics of this concept of AGI and superintelligence, even they accept that dramatic things are gonna happen. So even the critics, even the people who are downplaying what’s going on, are still painting a pretty dramatic picture.”
Speaker: David Caswell
Reason: This observation underscores the significance of AI’s potential impacts, even among skeptics.
Impact: It heightened the sense of urgency in the discussion and encouraged participants to take AI’s potential seriously.

“News as a complex system. Here’s a series of events in your average newsroom. Step one, AI shows up. You say, right, we can use this to make our jobs as journalists easier. That’s great. Then you say, well, we can actually use it to do whole jobs that we don’t want to do, jobs that we don’t like or that we have trouble filling; we’ll just get AI to do those jobs. Then you’re in this situation where AI is doing most jobs, so you can go home, you can have a three-day week, or you can come in at 11 and go home at three, because the AI is doing most of the jobs. And that sounds kind of nice, right? And then you get to the point where: what exactly is the AI doing? I haven’t been checking in for a few weeks, and what is it doing? And then you’re at the point where you don’t know where your information is coming from.”
Speaker: David Caswell
Reason: This scenario vividly illustrates the potential for gradual loss of control and understanding in newsrooms as AI is increasingly adopted.
Impact: It prompted deeper reflection on the long-term consequences of AI adoption in journalism and raised concerns about maintaining human oversight and understanding.

“We need AI governance that protects not just data, but dignity.”
Speaker: Julie Posetti (quoting a Kenyan woman politician)
Reason: This succinct statement encapsulates a crucial ethical consideration often overlooked in technical discussions of AI.
Impact: It broadened the conversation to include human rights and ethical considerations, emphasizing the need for a holistic approach to AI governance.

Overall assessment
These key comments shaped the discussion by highlighting the complex interplay between AI and freedom of expression, emphasizing both opportunities and risks. They prompted a more nuanced examination of AI’s potential impacts on diversity, media ecosystems, and journalistic practices. The comments also elevated the conversation to consider broader ethical and societal implications, stressing the need for careful governance and human-centered approaches to AI development and deployment in the context of freedom of expression.
Follow-up questions

How will the balance between standardized expression and increased individual creativity through AI evolve?
Speaker: Audience member
Explanation: This question addresses the apparent contradiction between AI’s tendency to standardize outputs and its potential to enhance personal creativity, which is crucial for understanding AI’s impact on freedom of expression.

Will AI systems eventually make exclusive decisions about access to certain pieces of information or news?
Speaker: Audience member
Explanation: This question explores the potential future role of AI in gatekeeping information, which has significant implications for freedom of expression and access to information.

How will the quality of information be affected by potential increases in paywalls and the socio-economic implications of access to accurate information?
Speaker: Audience member
Explanation: This question addresses concerns about the future of journalism business models and equitable access to quality information in the age of AI.

Is it possible for a significant portion of society to opt out of AI systems entirely?
Speaker: Audience member
Explanation: This question explores the feasibility of avoiding AI integration in daily life, which is important for understanding the societal impact of AI and potential digital divides.

How can we ensure appropriate regulation and the embedding of human rights in AI processes?
Speaker: Julie Posetti
Explanation: This area for further research highlights the need for critical engagement with AI technology and its developers to protect human rights and dignity.

What are the specific gender implications of generative AI, particularly in relation to technology-based violence against women?
Speaker: Julie Posetti
Explanation: This area for further research emphasizes the importance of understanding how AI can be used to facilitate gender-based violence and silence women, impacting freedom of expression.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.