Day 0 Event #12 Tackling Misinformation with Information Literacy
Session at a Glance
Summary
This discussion focused on tackling misinformation and promoting information literacy in the digital age. The speakers, including representatives from Google and other tech policy experts, explored various strategies and challenges in addressing online misinformation.
The conversation began by highlighting how misinformation is a persistent and complex problem that has evolved, especially since the 2016 US election. Platforms have implemented various approaches, including labeling, fact-checking partnerships, and reducing the reach of potentially false content. However, these methods face challenges, as misinformation is often not entirely false and can be difficult to definitively label.
The speakers emphasized the importance of information literacy skills for users. Google has developed tools like “About this result” to help users evaluate sources and claims more easily. The discussion also touched on the impact of AI-generated content, noting that while it presents new challenges, many core information literacy principles still apply.
A key point was the need for a holistic, multi-stakeholder approach to combat misinformation. This includes efforts from platforms, governments, civil society, and users themselves. The speakers stressed that user preferences and behaviors play a significant role in exposure to misinformation, highlighting the importance of individual media consumption habits.
The discussion also covered specific issues like gender-based online violence and the effectiveness of information literacy efforts. The speakers acknowledged that while progress has been made, misinformation remains a complex issue requiring ongoing research, collaboration, and adaptation of strategies.
In conclusion, the speakers emphasized the need for continued experimentation, cross-sector collaboration, and a focus on empowering users with information literacy skills to navigate the evolving digital information landscape.
Keypoints
Major discussion points:
– Challenges of addressing misinformation, including its “sticky” nature and the difficulty of labeling content
– Google’s approach to information literacy, including tools like “About this result” and watermarking for AI-generated images
– The role of user preferences and behavior in encountering misinformation
– The need for a multi-stakeholder approach to information literacy that includes users, not just governments and tech companies
– Evolving challenges with AI-generated content and deepfakes
Overall purpose:
The goal of this discussion was to explore approaches to tackling misinformation through information literacy, focusing on strategies used by tech platforms and the challenges involved.
Tone:
The tone was informative and collaborative throughout. The speakers shared insights from their professional experiences in a constructive manner, acknowledging the complexity of the issues. There was an emphasis on the need for continued experimentation and multi-stakeholder cooperation to address misinformation challenges.
Speakers
– Jim Prendergast: Moderator, with the Galway Strategy Group
– Sarah Al-Husseini: Head of Government Affairs and Public Policy for Saudi Arabia at Google
– Katie Harbath: Founder and CEO of Anchor Change
– Zoe Darmé: Director of Trust Strategy at Google
– Audience: Various audience members who asked questions
Additional speakers:
– Lina: From Search for Common Ground and the Council on Tech and Social Cohesion
– Ian: From the Brazilian Association of Internet Service Providers
Full session report
Tackling Misinformation and Promoting Information Literacy in the Digital Age
This discussion, opened by moderator Jim Prendergast of the Galway Strategy Group and facilitated by Sarah Al-Husseini, Head of Government Affairs and Public Policy for Saudi Arabia at Google, brought together experts from Google and the wider tech policy community to explore strategies for addressing online misinformation and enhancing information literacy. The panel included Katie Harbath, Founder and CEO of Anchor Change, and Zoe Darmé, Director of Trust Strategy at Google.
Evolving Challenges of Misinformation
The session featured an interactive quiz that underlined how difficult it is to identify misinformation. Zoe Darmé shared research from Australia suggesting that people’s accuracy in spotting manipulated or misleading images is only slightly better than a coin toss, emphasising the need for more sophisticated approaches to combating misinformation.
Darmé aptly quoted Kate Klonick, stating, “You can’t bring logic to a feelings fight,” underscoring the emotional aspect of misinformation consumption. The discussion highlighted the persistent and complex nature of misinformation, which has evolved significantly since the 2016 US election.
Platform Strategies and Their Limitations
The speakers discussed various approaches implemented by platforms to address misinformation, including labelling, fact-checking partnerships, and reducing the reach of potentially false content. They also mentioned pre-bunking as a strategy used by platforms. However, they acknowledged the limitations of these methods, particularly given that misinformation is often not entirely false and can be difficult to definitively label.
Katie Harbath pointed out that people interpret labels and warnings differently, which can sometimes lead to unintended consequences. She also highlighted the importance of understanding the different modes people are in when consuming content online, which affects how they interact with information.
Google’s Approach to Information Literacy
Zoe Darmé detailed Google’s efforts to promote information literacy, focusing on tools like “About this result” that aim to help users evaluate sources and claims more easily. She also mentioned a new Google feature that shows whether search results are personalized or not, empowering users with more context about their search experience.
The discussion touched on Google’s SynthID watermarking technology for AI-generated images. Darmé explained that the watermark is embedded in the pixels rather than in metadata, so it is designed to survive screenshots and simple edits, but she acknowledged that no technical solution is foolproof and that watermarking cannot cover content generated by tools outside Google’s control.
The Role of User Preferences and Behaviour
A key insight from the discussion was the significant role that user preferences and behaviours play in exposure to misinformation. Darmé revealed findings from a recent study on search engines showing that users often encounter misinformation when they are explicitly searching for unreliable sources, challenging assumptions about algorithmic responsibility and highlighting the need for approaches that address individual media consumption habits.
Emerging Challenges: AI-Generated Content and Deepfakes
The speakers acknowledged the evolving landscape of misinformation, particularly with the rise of AI-generated content and deepfakes. While these technologies present new challenges, the panel emphasised that many core information literacy principles still apply. They stressed the need for ongoing research and adaptation of strategies to address these emerging issues.
Darmé also discussed Google’s measures to combat involuntary synthetic pornographic imagery, including efforts to remove such content and provide support for victims.
Multi-stakeholder Approach and Collaboration
A recurring theme throughout the discussion was the necessity of a holistic, multi-stakeholder approach to combat misinformation. The speakers emphasised that effective solutions require efforts from platforms, governments, civil society organisations, and users themselves. They highlighted the importance of cross-industry collaboration, especially as AI technology continues to evolve.
However, both Harbath and Darmé addressed the challenges of implementing multi-stakeholder approaches to information literacy, including issues of coordination, resource allocation, and measuring effectiveness.
Addressing Specific Concerns
The discussion also touched on specific issues such as health and election misinformation, with Harbath noting that platforms have developed more targeted policies in these areas because authoritative reference points exist. In her closing remarks, Al-Husseini emphasised the importance of including youth populations in information literacy efforts.
Unresolved Issues and Future Directions
While the discussion provided valuable insights into current strategies and challenges, several unresolved issues emerged. These included questions about how to effectively address user preferences for unreliable sources, balancing platform intervention with concerns about censorship and free expression, and measuring the long-term effectiveness of information literacy efforts.
Conclusion
The discussion concluded with a call for ongoing collaboration and experimentation in addressing the challenges of misinformation. The speakers emphasised the importance of balancing technological solutions with user education and critical thinking skills. They acknowledged the complexity of the issue but expressed optimism that multi-stakeholder approaches and continued innovation could lead to more effective strategies for promoting information literacy and combating misinformation in the digital age.
Session Transcript
Jim Prendergast: Thank you for coming. This is the session, Tackling Misinformation with Information Literacy. My name is Jim Prendergast with the Galway Strategy Group. I’m going to help kick it off and I’m also going to help monitor the online activity. Fair warning, we want this to be a highly interactive session. You will be taking a quiz at some point. No pressure, it’s pretty easy and there are no wrong answers. We really want people to learn from this session, walk away with some new ideas, some new thinking, and by all means, we welcome your questions and interaction. I’m going to now kick it off to Sarah Al-Husseini, who’s the Head of Government Affairs and Public Policy for Saudi Arabia for Google.
Sarah Al-Husseini: Great. Thank you, Jim, and thank you everybody for joining us this afternoon. I know it’s day zero and it’s the end of the day, so those of you who have made it this far, thank you for being here. As Jim mentioned, I’m Sarah Al-Husseini, I lead Government Affairs and Public Policy for Google in Saudi and Egypt. I think this session is hugely timely with everything that’s happening on a global scale and the amount of information that is present online. I hope that you will be very engaged with our session today. We have a wonderful speaker lineup. With that being said, we’ll start with a presentation by Katie Harbeth, Founder and CEO of Anchor Change on the challenges of addressing misinformation. We’ll go over to Zoe Darma, who is the Director of Trust Strategy at Google, who will present on Google’s approach to information literacy, and then we’ll go into a Q&A session. I’ll take the prerogative of being the moderator and the first few questions, and then hand it over to those of you in the audience who would like to engage as well. With that, I will hand it over to Katie. Katie, thank you so much for joining us, Zoe. Thank you as well. Happy to have you here.
Katie Harbeth: Yeah. Thank you so much for having me, and I’m sorry I can’t be there in person. But I wanted to start today by just sharing a little bit of the history of companies working around misinformation and some of the things that they have tried and how they approach this problem. Just to sort of ground ourselves, the current iteration of working on misinformation really started after the 2016 election. Misinformation has been around for quite some time in various forms and companies have been working with it and trying to combat it for quite some time. But after the 2016 election in the United States, there was a lot of stories about initially Macedonian teenagers spreading fake news to make money. And it wasn’t until later in 2017 that we also started to realize and find the Russian Internet Research Agency ads that were on, I worked at Facebook, so not only on the Facebook platform, but many other platforms. And this is what really spurred a lot of the companies, I can speak on behalf of the work at Facebook in particular, to start adding labels and working with fact checkers around trying to combat some of this. And misinformation is not just stuff that is around elections and hot button issues. Some of the earlier stuff too is things like a celebrity died in your hometown or those type of clickbaity headlines that the companies were trying to fight. The initial focus was also very much on foreign activity, so foreign adversaries trying to influence different elections around the world. The other important thing to remember about a lot of this is that much of this work is very much focused sometimes on the behavior of these actors, not necessarily the content as well. So this means that pretending to be somebody that they’re not on platforms that require people to use their real names, if they’re coordinating with other types of accounts that they’ve created to try to amplify things. So as we’re talking about this, it’s not just what they’re saying and if it’s false, but it’s also how they might be trying to amplify it in inauthentic ways. You can go to the next slide. There we go. So I think a couple of other things to remember too as we think about this. Misinformation is sticky and fast, which means that it can spread very, very quickly. and it’s something that very much sticks with people and it’s very hard and can be very hard to change their minds. Things can also, we find that most of the times, things are not completely false or completely true. There’s usually a kernel of truth in this with a lot of misinformation around it, which makes it a lot trickier for trying to figure out what to do because you can’t just fully label it false or true. You also have things like satire, parody, hyperbole that exist in many places that are perfectly legal types of speech and understanding the intention of the poster and what they mean for it to be and doing that at scale is something that is incredibly tricky for many companies in which to do. And overall, these platforms very much do not wanna be the arbiters of truth. They do not wanna be the ones that are making decisions of whether or not something is true or false or what the facts are because they have seen and they have been accused of the risk of censorship, whether that’s true or just perceived, but that has become a huge political problem for them, particularly in the United States, but also around the world. 
And also trying to, and sometimes defining subcategories of misinfo and dealing with these specifically can be a better way for platforms to develop and enforce policy. So rather than having a blanket one, you might prioritize it because health misinformation, for instance, you may have more facts and authoritative information that you can refer to on that. The same thing with elections, where, when and how to vote is something that election authorities have that is easier to point to than something that might be more amorphous or there’s disagreeing opinions about what is happening. And so sometimes you’ll see companies start to parse this out on the types of content that they’re seeing and the topic of it in order. or to try to better to figure out how to combat this and mitigate the risks that appear to them. Sorry, I haven’t had enough coffee. The risks that might happen to them. Jim, if you can go to the next slide. So a couple of strategies that we’ve seen companies do in addition to just, so most companies do not take down fake information unless again, it’s about some very, very specific topics, health and elections are two that I can think of. But other strategies that we have seen companies take, one is pre-bunking. So giving people a warning of the types of information that they might see, the types of stuff that could potentially be false, or even directing them to authoritative information on these sensitive topics. So saying that, a lot of you may have seen during COVID, platforms will put a label about where you could get more information about COVID during election season and might be going to authoritative information there. A lot of them, as I mentioned earlier, work with fact-checkers all around the world. Many work through the Pointer Fact-Checking Consortium in order to, so what that means is that the platforms aren’t making the decision, but they’re working with these fact-checkers, they’re giving them a dashboard, and those fact-checkers can decide what stories it is they wanna fact-check. They can write their piece, and then if they determine that it is false or partially false, a label will be applied to that. And then that will take the person to that fact-check. The other thing it will do too is that it reduces the reach of this content, so less people can see it, but it doesn’t fully remove it. And then as I’ve been mentioning too, there’s the labeling component of this. And so people can see whether or not what these fact-checkers are saying while they’re consuming this different types of content. And Jim, if you can go to the next slide. And a couple of notes about labels. So a lot of this work continues to be something that people are, there’s some trial and error and experimentation to it. Because as platforms. been implementing it. I know it’s really easy to just be like, just put a label on it. That’ll help people understand that it’s not fully true or what more context. But unfortunately, how people interpret that, as we’ve been seeing with research, is a lot more murkier. So for some people, when it says alter content, does that mean it was made with AI, edited with AI? Was it used to Photoshop? There’s a lot of different ways of manipulating or altering content. And not all are bad. Many people use editing software in perfectly legitimate ways. And so how do you help distinguish for a user between that versus stuff that might have been nefariously edited? We find that people have many interpretations of what labels mean. 
And so they may not even have, the platform might not even have the proper label or enough information to even label it. The other thing too is that when some content is labeled as false, if content is unlabeled, sometimes they will infer that that content is true, even though that may not be the case. It may be that a fact checker hasn’t gotten to it. And so what sort of things are we training users to think about in ways that were unintended? And so platforms are very much trying to experiment with different ways of helping to build up, and I know Zoe is going to go into this, the information literacy of them, because it is not as simple as just putting a label on it, because how people interpret that is very different across generations, across cultures, across many different factors. If you want to go to the next slide. The one other thing I wanted to mention, and this is something, this is a study that Google’s Jigsaw division, along with GEMIC, did earlier this year looking at how Gen Z in particular, but I do think you can pull this out more broadly, but how Gen Z approaches when they go online and how they think about information. And what this study found was that there are seven different modes that people are in when they go online. And they plotted this on this sort of axis, where on the far right, you have heavy content. This is stuff that is news, politics, weighs heavily on your mind, versus on the far left, it’s more lighthearted content. Think cats on Zumba, stuff like that. On the vertical axis, you have on the bottom, things that have social consequences, affect others. People think that they need to do something. At the top, it only affects them, so it’s not necessarily something they feel like they have to act on. So what they found is that most people are in that upper left quadrant, which is the time pass and lifestyle aspiration modes. This is where they’re just hoping to, they’re kind of at the end of the day, they’re trying to kind of emotionally equilibrium, they’re just trying to kind of, you know, zone out a little bit and relax. And when they’re in these modes, they don’t care if stuff is true or not. However, what they found is that as they were absorbing it over time, they did start to believe some of the things that they were reading and consuming. And what they also found with this is that people do still want that heavier stuff, that heavier news and information, but they want to be intentional about it. They want to know when they’re going to get it, and when they go in to get that information, they want to get it quickly, they want a summary, and then they want to get out of it. And so something about this as we continue to have this conversation over the coming years is going to be, how can we reach people where they’re at? And we also have to recognize that their feelings play a huge role in trying to combat misinformation. And as a common friend of Zoe and mine’s and many as Kate Klonick has said in a recent paper, you can’t bring logic to a feelings fight. And this is something that we’re very much trying to think through and figure out when it comes to combating misinformation. Because again, logically, we think just label it, just tell them. And what we have found is that is not actually how the human psyche works. So I can’t remember if I’ve got one more slide or if we’re going over to Zoe.
Sarah Al-Husseini: I think that’s the final slide for you, Katie. And thank you so much for that insight. And I think there are tons of approaches and strategies that can be used to safeguard users, both proactive and then reactive. And we’ll get into that. information literacy with Zoe in a second. Just for everybody in the room, if you haven’t had a chance to get a headset, they’re on the table over here. We also have the captioning behind us, so please feel free. Great to see that a few more people have joined the room. Wonderful. And so with that, Zoe, I’ll hand over for you for Google’s approach to information literacy. Great. Thanks so much, Sarah. And thanks, everybody. Jim did mention that we were going
Zoe Darma: to start with a quiz and that there are no wrong answers, but there actually are. There actually are right and wrong answers for this next quiz, so I just want you to, there are three simple questions, and I want you to basically keep track for yourself. So the first question here, and folks on the chat are free to put their answers in the chat, which one of these has not been created with AI? Is it the photo on the left or the photo on the right, A or B, which has not been created with AI? Not everybody has microphones in the audience, so maybe we’ll take a show of hands. Yeah, you can just also just keep track for yourself, because we’ll reveal at the end. And we’re getting some answers in chat and Zoom, so thank you. Great. Now the next one is, I see Jim struggling with the clicker. Now, which photo is more or less as it is described? Is it the photo on the left from WWF, the claim here seems to be about deforestation, or is it the photo on the right, which also seems to be somewhat climate related with a warship in the Danube finally being revealed because of low water levels? Which one is more or less as it is described? Great. Next one. Now, which one of these is a real product? Is it cheeseburger Oreo or is it spicy chicken wing flavor Oreo? Hopefully neither. Yeah, that was my answer Sarah hopefully neither. Okay, Jim, we can advance. So, the house on the left is a real place in Poland. The, the post about the sunken ship is is unaltered I would say and accurate. The post on the left actually from the WWF is a cropped photo. And it’s the same photo taken on the same day not from 2009 and 2019. And unfortunately I hate to say it but spicy chicken wings Oreo were was unfortunately a real product, I think it was a marketing stunt, even still, I’d be so scared to eat it. And a person, a reviewer said it was the worst part of the experience was the greasy split it left behind in my mouth, that still haunts me. So I’d love a show of hands and maybe in the, in the chat to see how many people got all three correct. Any, anybody. I’m not seeing too many hands. And don’t feel bad about yourself because next slide. We are actually pretty bad at identifying this info when presented in this way. And a group of researchers from Australia found that our accuracy is a little bit better than a coin toss. to very easily always identify what’s wrong in an image, or to identify whether an image is misleading or not. We’re also not able to identify very effectively what it is about that image that’s wrong. What’s the salient part of the image? So this group of researchers actually tracked people’s eye movements to see if they were focusing on the part of the image that had been altered. And we’re just not trained visual photo authenticity experts. And so if it’s hard for us to do this, even in a setting like this one, it’s hard to think about what Katie mentioned when folks are just in time pass mode, they’re not going always to be doing a great job at this. Next slide, please. I think also in this day and age, when there’s a lot of synthetic or generated content, we’re getting caught up perhaps in the wrong question as well. So also, as Katie mentioned, a lot of people just want us to label things, label things misinfo or not misinfo, label things generated or not generated. But is this generated does not always mean the same thing as is this trustworthy? It really depends on the context. So on the left here, you see a photo that is a real, a quote unquote real photo of trash in Hyde Park. 
And the claim is about that this trash was left by climate protesters. But actually this is a very common tactic and technique for misinfo. It’s just a real image taken out of context. This was actually a marijuana celebration for 420 day. And that makes a lot more sense that there would be a lot of trash left over. Now this photo on the right is just something that I created. I said, create me. and art of Hyde Park, London with trash. And so, it really depends not only how something was created, but how it’s being used and in the context it’s being used with the caption and label and everything like that. Next slide, please. So, we’ll still need your plain old vanilla information literacy tools. These will still need to evolve given that there is more generated content. There is more synthetic content out there. Certainly, our tools need to evolve. But there’s not going to be a magical technical silver bullet for generated content, just like there’s not a magic silver bullet for mis- and disinformation overall. And so, the way that we’re thinking about these things at Google is inferred provenance or inferred context over here. This is like your classic information literacy techniques, training users to think about when did the image or claim first appear? Where did it come from? Who’s behind it? And what’s the claim they’re making? And what do other sources say about that same claim? And then, tools on the right, which are assertive provenance tools. These can either be user visible or not. They’re explicit disclosures for AI-generated content like watermarking, fingerprinting, markup metadata, and labels. Next slide, please, Jim. Thank you. So, we have set this out in a new white paper. You can scan to read it here. Or if you give your email to Sarah, I can connect with you, and we’ll make sure that we send you a copy. But this white paper kind of sets out how we’re thinking about both inferred context and assertive provenance, and what these two things, how they both play a role in meeting the current moment. around generated content, trustworthiness, and misinformation. Next slide, please. Now, what Katie talked about are a bunch of tools that are happening across many different platforms. I’m going to focus on some of the tools and features that we brought in directly to Google Search. So first, we have about this result. Next to any blue link or web result on Google Search, there are a set of three dots. And if you click into those three dots, you can get this tool, which is designed to encourage easier information literacy practices like lateral reading, or basically doing more research on a given topic. And so, this will tell you what the source says about itself, what other people says about a source, and what other people are saying about the same topic that you search for. So let’s say there was a piece of misinfo about the King of Bahrain having a robot bodyguard. So when you click on the three dots next to that, you’ll see not only information about the source, but also web results about that topic. Spoiler alert, the King of Bahrain did not have a robot bodyguard, just in case you were wondering. Next slide, please. This is just another layer into about this result. It brings all of this information into one page, and it helps users carry out the CORE and SIFT methods. SIFT stands for Stop, Investigate the Source, Find Other Sources, and Trace the Claim. That’s really hard to expect people to do when they’re in time pass mode, so we wanted to put a tool directly into search. 
just to make this as easy as possible for folks. Because one of the criticisms of inferred provenance or inferred context is it puts a lot of responsibility onto the user. When we’re thinking about all those other modes, let’s say where it might be more important for users, let’s say making a big decision, like a financial decision, for example. We want to make sure that users have the tools that they need when they really feel motivated to go that extra mile. We’ve also built a similar feature into image results. You can also, in the image viewer, click on a result, you’ll see three dots. This is like a supercharged reverse image search directly in the image viewer. It will tell you an image’s history when we, Google, have first indexed an image. Because sometimes an old image, again, will be taken out of context and go viral for a completely different reason. It’ll also show you an image’s metadata. That brings us, next slide, to assertive provenance. For Google’s consumer AI products like Gemini, for example, that power Gemini or Vertex AI in Cloud, we are providing a watermark, a durable watermark for content. Oftentimes, that’s not user visible. In about this image, you can see here if that Synth ID watermark is present, we’ll provide this generated with Google AI directly into the image viewer so that you can see it was produced by one of our image generation products. Now, the reason that it’s hard to just do this as Google for every image out there in the universe, is for the reason that Katie mentioned earlier. Let’s take the example of Russian-backed Macedonian teens. They’re probably not using tools that are using watermarking. If they’re running a derivative of an open model, for example, there’s no way to force those other providers to watermark their content. There’s no motivation for the content creator in that example to use a watermark or a label. Until we have, and we’re never going to have 100 percent accurate AI detectors that are able to suck out all of the information on the Internet, send it through an AI detector and spit out a label or a watermark that’s accurate 100 percent of the time. So really, we need a holistic approach that involves inferred and assertive provenance and a whole of a society solution. Next slide, please. The last thing I’ll say is that there is a lot of talk about the role of recommendations and algorithms and how they’re designed, and whether that is what is creating or promoting or giving more reach to this misinformation that is sticky and fast, as Katie mentioned. But a recent study, at least looking at search, and this is looking at Bing actually, shows that there is consistent evidence that user preferences play an important role in both exposure to and engagement with unreliable sites from search. So what does this mean? Searchers are coming across misinformation, when there is high user intent for them to find it. That means they are searching explicitly for unreliable sources. And so it’s not Taylor Swift that’s bringing misinfo about Taylor Swift, but it’s when you’re searching for Taylor Swift plus the site that you like to go to. Now, that site may not be reliable. That might not be a reliable news site. But that is really when folks are most likely to encounter misinformation in the wild on a search engine. And so really, we have to focus on individual choices with these so-called navigational queries, because that’s what’s driving engagement. And it really has to do with what users are actively seeking out. 
And that’s a bit of an uncomfortable conversation, because it goes to a question of, like, how do you get users on a more nutritional or healthy media diet, rather than, like, how do we just label something or how do we just backtack something? And that’s a much harder problem to solve. So I’ll stop there and turn back to Sarah.
Sarah Al-Husseini: Thank you very much, Zoe. And it’s great to see the Google tools that are helping empower people to make the best decisions and really find the best information for their decision-making and consumption. So thank you. With that, I’m going to turn over to the Q&A portion of the session. Oh, maybe Jim first. I have one online. Oh, great. Fantastic.
Zoe Darma: Wonderful. I see the question. So I’ll read this question out. Does Google’s watermarking for AI-generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself? And, yes, that slide that y’all were on just now with the chart might be a helpful one to go to, but essentially the answer is no. It doesn’t rely on metadata. It does produce metadata that shows that the image has been created with a Google generative AI product. It is tamper resistant, I will say. So not tamper proof, like nothing is 100% tamper proof. However, SynthID is tamper resistant. So it is hard to remove from that image, and doing something like clearing the image’s metadata is not going to remove the watermark. Now, this is a little bit different from other types of provenance solutions in the past. Some other types of metadata are easy to edit using very common image editing software. So IPTC metadata, for example, you can edit, and it was not designed to be a provenance tool the way that we’re thinking about it now, but there are ongoing conversations happening both with C2PA and IPTC about how durable that metadata should be. Where we have metadata from SynthID or from IPTC, for example, we are including it in the image viewer the way that I just showed you in about this image. Thank you for the question.
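For readers who want to see why metadata-only provenance is so fragile, a minimal sketch using Pillow is shown below. It is a generic illustration rather than Google’s, IPTC’s, or C2PA’s actual tooling, and the file names are placeholders: ordinary embedded metadata survives only until someone re-saves (or screenshots) the pixels without it, which is exactly the weakness a pixel-level watermark such as SynthID is meant to avoid.

```python
from PIL import Image

# Open an image that carries provenance hints in its metadata
# ("photo.jpg" is a placeholder path for this illustration).
img = Image.open("photo.jpg")

print(dict(img.getexif()))   # EXIF tags, e.g. camera model or editing software
print(img.info)              # other embedded info Pillow exposes (exif blob, comments, ICC profile)

# "Clearing" that metadata takes a couple of lines: re-save only the pixel data.
# A screenshot has the same effect: the pixels survive, the metadata does not.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_no_metadata.jpg")

print(dict(Image.open("photo_no_metadata.jpg").getexif()))  # now empty
```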
Sarah Al-Husseini: Thanks so much. And maybe Katie, back over to you. So misinformation and disinformation, you mentioned are very sticky problems with big impacts on society. So there’s a lot of external pressure for platforms to do something about these issues, but also a lot of concern about platforms overreaching, potential freedom of expression issues, and so on. Can you talk about how platforms think about these challenges of misinformation and what kind of tools and approaches they have to address them?
Katie Harbeth: Yeah, absolutely. And I think this is why you’ve seen a variety of approaches. for platforms to try. I think one thing in particular is a question about people’s right to say something versus the right for that to be amplified. And so you oftentimes are seeing platforms where they’re again, not taking it down but they are adding the labeling, they are trying to reduce the reach that also has brought criticism on them as well. And sort of a question around the principles of that. I think this pre-bunking is really important as well in trying to just give people other information and context that they can see and reach and be able to understand when they’re consuming this content. You’re starting to see new approaches from places like Blue Sky, which are more decentralized, where they are instead, not only them labeling content but anybody can label content. And then the user can decide what sort of content they do or do not want to see in their feed. And so it is much more putting the power back in the hands of the users versus the platform themselves making some of those decisions. I think a lot of this will continue to evolve and change too as AI continues to play a big role in how we summarize what sort of information that we get. And people are also thinking about that and what types of information are pulled into it. But this is sort of an ever evolving thing as the pressure on them from, as you mentioned, people who are saying that they should do more but others who are saying they’re taking down too much around this. And at the moment you are seeing more platforms taking less of a strong approach around, again, the leave it up or take it down and instead trying to find some of these other ways. Another one I should just mention is X slash Twitter. This is before Musk took over, but it still exists. They have community notes. And so they have larger number of people that can help to add a community note to something to give it more context, to say if something might be true or partially true. And they have some really great mechanisms in place to make sure that that cannot be gained as part of. of that. But I think we’ll continue to see a lot of experimentation on this as they try to balance that, you know, the freedom of expression versus also the safety of people who are using these
Sarah Al-Husseini: platforms. Fantastic. Thank you for that, Katie. That was really helpful. And maybe shifting gears a little bit because I see a few questions coming up on on AI in the chat. Maybe Zoe will tee up. Can you talk about how the proliferation of AI generated content changes the conversation around information literacy? And then we’ll go in and take a viral question in the chat.
Zoe Darma: Yeah, I don’t I think it’s like evolutional, not revolutional. I mean, it’s, it’s more in terms of the volume of content that people are seeing. But edited content is the an age old problem. So the the very first photographic hoax was like of a ghost. And it was using like a daguerreotype, for example. And so as long as images have created, there has been this issue of what do we do about if it’s been edited, if it’s been altered. And, and that’s why I’m a strong believer that our, our, our information literacy muscle really needs to grow as a society because whether something is generated or not, doesn’t necessarily always change the question, is this trustworthy or not? And that’s the key question that we have to remind people. Is this trustworthy generation, whether it’s generated or not, it’s like one element. So and it really depends on the context. And so I think our information, what needs to change is we need to ask, yes, is this generated or not, but still ask all of those other questions that we’ve always been asking ourselves when we’re encountering content that could be potentially suspect. Hope that answers your question, Sarah. Yeah, great. Thank you. And I know Zoe,
Sarah Al-Husseini: you answered directly in the chat, but maybe for those in the room, does Google’s watermarking for AI generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself?
Zoe Darma: Yeah, so it provides metadata, but that metadata cannot be stripped easily. I say easily because nothing is 100% tamper-proof, but SynthID is very tamper-resistant, so it’s not editable the way that other metadata is. And I think that’s a critical piece of what we as Google are doing. Again, neither Google, nor OpenAI, nor Anthropic, nor Meta, none of us control all of the generation tools that are out there. And it’s very difficult to make folks who are using a derivative of open source, maybe smaller models, for example, models that are being run by other companies. There’s not a way for us to force other companies to watermark. And so this is where it becomes really difficult, because we’ll never have 100% coverage on the open web, even if the biggest players are all in C2PA and all responsibly watermarking and or labeling where appropriate. There’s always going to be some proportion of content that’s generated out there on the open web that does not include a watermark, for example.
Sarah Al-Husseini: Fantastic. Thank you, Zoe and Iberal for those questions and answers. And maybe to a question
Audience: in the room. Yeah, thanks so much. This is Lina from Search for Common Ground and the Council on Tech and Social Cohesion. So it’s powerful. You said it again, Zoe, that you can’t actually force others to do certain things, which then in some ways pokes holes in your valiant efforts. So there is a growing evidence about really harmful tech facilitated gender-based violence. And I’m just curious, you know, are we seeing attention on this growing because we do hear that there’s specific things you’ve put in place for health and elections. And a lot of that’s because of the excellent work of the two of you, right? So what would it take for us to also begin to think differently about the harms around tech, TFGBV? Do we need to rally other companies so that there is that standardization of the watermarking of that kind of harmful content? Just where do you think the conversation’s at right now?
Zoe Darma: Thanks. That’s a fantastic question. And image-based sexual abuse, I’ll just say, in terms of like, quote, unquote, deepfake pornography, that’s not what we call it internally. We call it involuntary synthetic pornographic imagery, is a great example of a problem that Google didn’t necessarily create, right? We are not allowing our models to be used for image generation of deepfake pornography or ISPI. However, it is an issue that we’re deeply grappling with, especially on the product that I work on, often like Google Search, because a lot of that material is out there now on the open web. So what we’ve done, and I can only speak for ourselves, is we’ve taken an approach that relies both on technical and kind of multi-stakeholder and industry solutions. So one of the things that we’ve done is we have implemented new ranking protections and ranking solutions so that we can better recognize that type of content, and not necessarily always like recognizing, is this AI generated or not? There are other signals that we can use as well. So for example, if the page itself is advertising it as like deepfake, celebrity pornography, for example, we can detect that and we’re applying ranking treatments so that that’s not ranking highly or well. We also have long had a content policy to remove that type of imagery as well. The other thing that we’re doing is we’re providing more automated tools to victim survivors. When you report even just regular non-synthetic but just regular NCEI to us, there are a couple things we do on the back end. One is if that image is found to be violative, we not only remove the image, we also dedupe using hashing technology. Now, hashing can be evaded with some alterations to the image itself. We also give an option for reporting users to check a box to say that they want explicit images, the best of our ability, removed for queries about them. If the query that I’m reporting, for example, is Zoe Darmay leaked nudes, I can check a box saying, I also want explicit imagery filtered for results about my name. Zoe Darmay leaked nudes, Zoe Darmay, et cetera. That’s another way we’re addressing the problem through automation that doesn’t necessarily rely on finding all of the generated imagery or not, but attacks the problem through another dimension. Those are a couple of the ways. I’ll throw in the chat our most recent blog post on involuntary synthetic pornographic imagery and all of the ranking protections that we’re applying to demote such content and also raise up authoritative content on queries like, for example, deepfake pornography where we’re really trying to return authoritative trusted sources about that. a particular topic and an issue rather than problematic sites that are promoting such content.
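The hash-based de-duplication Darmé describes is easiest to picture with a toy perceptual hash. The sketch below uses a simple “average hash” and placeholder file names; production systems use far more robust matching (and Google’s specific approach is not public), but the same idea applies: near-duplicates land a small Hamming distance apart, while heavier alterations can push an image far enough away to evade the match.

```python
import numpy as np
from PIL import Image, ImageFilter

def average_hash(img: Image.Image, size: int = 8) -> np.ndarray:
    """Toy perceptual hash: shrink to size x size grayscale, threshold at the mean.

    A simplified stand-in for production matching systems, used here only to
    illustrate the concept of near-duplicate detection.
    """
    pixels = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    return (pixels > pixels.mean()).flatten()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits; a small distance suggests the same underlying image."""
    return int((h1 != h2).sum())

# Placeholder file name for this illustration.
original = Image.open("reported_image.jpg")
lightly_edited = original.filter(ImageFilter.GaussianBlur(1))                  # mild alteration
heavily_edited = original.crop((0, 0, original.width // 2, original.height))   # aggressive alteration

h0 = average_hash(original)
print(hamming(h0, average_hash(lightly_edited)))   # typically a few bits: still treated as a match
print(hamming(h0, average_hash(heavily_edited)))   # typically many bits: can evade the match
```

This is also why, as Darmé notes, hash matching works best alongside other signals, such as how a page describes its own content.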
Sarah Al-Husseini: Fantastic. Thank you. And I think we have another question in the room before bouncing back to online.
Audience: Hello. Hi. Okay. Katie and Zoe, thank you for the presentations and wonderful answers as well. My name is Ian. I’m from the Brazilian Association of Internet Service Providers. I have more of a doubt than a question. Do we have any studies already showing the effectiveness of the literacy on actually identifying and combating misinformation? I mean, does it have an actual impact or how much can we measure it already?
Katie Harbeth: I think. Yeah, I was gonna say but isn’t there a jigsaw one so we, I feel like you all have done quite a bit of research on this but I think there’s been some very preliminary stuff on pre bunking, where I first saw it really be effective and folks starting to realize its effectiveness was particularly when Russia invaded Ukraine, and sort of that pre bunking ahead of that actually, that actually happening but I’ll toss it to Zoe I don’t want to take Google’s thunder for some of the great research that they’ve done on this too.
Zoe Darma: Oh, and it wasn’t my research either so big shout out to our colleague Beth Goldberg who’s not here but has led a lot of our pre bunking work. Yeah, we can try to dig that up and throw it in the chat. In terms of, so Katie covered pre bunking, I’ll just cover kind of information literacy so there is a lot of evidence that the SIFT and CORE methods work. And so we searched for evidence based practices that we could make easier in the product itself. So, first, the first way I’ll answer your question is, yes, these are evidence-based practices. SIFT, for example, was developed by Mike Caulfield, who is a misinformation researcher who was most recently with the University of Washington, and he’s since moved on, so I don’t know his affiliation right now. CORE was by Sam Weinberg, another misinformation and information literacy researcher. When we’ve done user research on, about this result, for example, we’ve actually seen folk theories decrease. So, I’ll say that the user research that we’ve done, I’ll caveat this by saying small sample size, more research needs to be done, but internal indications for us are that consistent use of about this result, for example, reduced folk theories in terms of how somebody was being shown a certain result. So, for us, that was a really positive indicator that they had a better understanding of not only the results they were seeing, but how those results were chosen for them. And what I’ll say is that a lot of people think, for example, that they have a lot of folk theories about why we’re showing certain results, you know, are we snooping in your Gmail and then giving you results based on things that you’re emailing, blah, blah, blah, you know, all those types of folk theories. And when you actually just say, it really has to do with the keywords that you’re putting in the search box, people understand that that’s why they’re seeing those types of results. A lot of folks think that the results are so relevant to them, they must be, we must know something about them, when oftentimes we’re just using what’s in the, what the user puts in the box. People are unfortunately not as unique as they think that they are. And so, we know a lot about what people want when they’re searching. for, gosh, Katie and I were talking about beach umbrellas yesterday. So people searching for best beach umbrellas, we know a lot about them, and are serving great relevant results based on that, and people think, oh, this is exactly what I need, it must have to do with something about me. The other thing I’ll just say, which is a new feature that you can check for yourself, is we’re rolling out a new feature at the footer of the search results page that will say these results are personalized, these results are not personalized, and if they are personalized, you can try without personalization. I would encourage everybody to check that out, because a lot of the search results pages you’ll find are not personalized. A great many of them are not. And the ones that are right now are on things like what to watch on Netflix, for example. And so then you can see it just goes right at the bottom of your page. You can even click out a
Sarah Al-Husseini: personalization to see how those results would change, and you can check that you’re not in, like, an echo chamber filter bubble. Thank you for the great question and answers from both our panelists. I’m being told we have five more minutes left, and so maybe one quick one from the room in follow-up, and I think this one is probably aimed at Zoe. What if a screenshot is taken then would there be any way of tracking it in the context of watermarks? Can they be
Zoe Darma: removed easily? Yeah, that’s a great question, and for SynthID, I’m sorry, SynthID, I hate to do this weird Google thing where we’re 200,000 people and it’s a different product area that created it. And for screenshots and SynthID, I don’t know the answer directly, but I will certainly follow up and put it in the chat while Sarah’s wrapping up. But I’ll say generally, yes, taking a screenshot is one way to strip metadata off an image. for example. So that was like the classic example of evasion for certain other kind of metadata techniques that we talked about. There are other ways to evade. We talked about evasion of hashing, for example, which can be done by adding a watermark or slightly modifying the image in some ways. There are always ways to get around technical tools with really motivated actors who want to do that. And so we have made it as difficult as possible to strip that metadata. But that’s why we’re saying in a presentation like this, we cannot always rely on 100% technical solutions. We have to think about these other ecosystem solutions as well. And that’s why I come to these presentations and always talk also about inferred provenance and inferred context. However, I will say that we’ve made it tamper resistant. So again, you can’t go into photo editing software and remove it, for example, and things like that. But I’ll get you an answer. It’s a good one. It’s a good question about screenshotting in particular.
Sarah Al-Husseini: Fantastic. Thank you so much, Zoe. Any other questions from the room? And if not, maybe I’ll wrap up with one quick question. Yep. Fantastic. And Katie, I’d love to start with you. So just given the platform of IGF, we’re all here this week in Riyadh for the event. What are some of the challenges of implementing a multi-stakeholder approach to information literacy? How can these challenges be overcome, especially at a platform like IGF?
Katie Harbeth: Yeah, I think this work is absolutely multi-stakeholder and needs to be done from multiple different approaches. It’s not just enough to ask the platforms in which to do this. And I think Taiwan is frankly a really great example of how you see in a multi-stakeholder approach to mis- and disinformation in their country. I think one of the biggest challenges. that I’ve seen that I continue to want to work on is trying to help those that have not been inside of a company sort of understand the scale and the operational challenges to some of the solutions and better thinking and brainstorming about how we might do that. And then on the platform side, helping them to understand sort of the approaches that civil society and others that they are finding when they’re trying to combat all of this in their countries and in their regions around all of this. So I think continued cross collaboration is really important. And then the other thing too, is that this does need to continue to be experimental because if there were a silver bullet to this, we would have all figured this out a long time ago, but this is a really hard and tricky problem. And I think having open dialogue and conversations like this will continue to be important, particularly again, as we go into this new era of AI, which is very much going to change how we generate and consume information, that now is really the time to be thinking about how we shape what those models and things are gonna look like for the next, at least five to 10 years.
Sarah Al-Husseini: Fantastic, thank you so much, Katie and Zoe.
Zoe Darma: The same question. Yeah, I just, before I answer your question, I actually just wanted to go back to the watermarking question because I found the answers through the best product ever, Google search, just a quick search. So SynthID uses two neural networks. One takes the original image, produces another image almost identical to it, but embeds a pattern that is invisible to the human eye. That’s the watermark. The second neural network can spot that pattern and tell users whether it detects that watermark and suspects the image has one or finds that it doesn’t have a watermark. So SynthID is designed in a way that means that watermark can still be. detected, even if the image is screenshotted or edited in another way, like rotating or resizing it. So I was looking that up. Can you repeat your your final wrap up question to me, Sarah? Sorry, of course, and thank you for taking the time. I think we’re a little biased towards Google search being the best, which I absolutely love. We drink the Kool-Aid for sure. And the question is, what are some of the challenges in implementing a multi stakeholder approach to information literacy? And how can the challenges be overcome? Yeah, I think one of the biggest challenges is just the idea that it takes a lot of time and it’s too much to expect of users. So I really think what the multi stakeholder approaches, even though they’re called multi stakeholder, they’re often really focused on governments, what governments can do, and what tech companies can do. And I think one of the things that you’ve heard us consistently throughout this really fascinating talk, and thank you so much, Sarah, for a great job facilitating it, is the third leg of this stool is what users contribute, what users do, what users seek out, and what users are consuming, and how they’re consuming it. And so I think that’s the biggest challenge, like one of the studies I mentioned earlier, really focused on how much users expressed preferences play into the fact that they are finding or not finding this information, like our users actively seeking out unreliable sources. That’s a hard problem to solve. And there’s a reason that multi stakeholder approaches really want to focus on governments or technology companies, they’re at the table, we’re the ones who are doing the talking. But we really are missing a huge piece of the puzzle if we’re not talking about user expressed preferences, what they want to find what they’re seeking out, and then how they’re consuming that, and how we can get them to be stronger. and reliable consumers and creators in the information ecosystem. And that’s a tough, that’s a tall order.
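To make the contrast concrete, the toy sketch below adds an imperceptible pseudorandom pattern to an image’s pixel values and detects it by correlation. It is emphatically not SynthID, whose encoder and decoder are trained neural networks robust to far heavier editing, but it illustrates the point Darmé makes: a signal carried in the pixels survives a screenshot, whereas a label carried only in file metadata does not.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "photo": any grayscale pixel array would do for this illustration.
H, W = 256, 256
photo = rng.integers(60, 200, size=(H, W)).astype(float)

# Secret key -> imperceptible +/-1 pattern mixed into the pixel values.
pattern = rng.choice([-1.0, 1.0], size=(H, W))
STRENGTH = 2.0                                    # about two grey levels: invisible to the eye
marked = np.clip(photo + STRENGTH * pattern, 0, 255).round()

def detect(image: np.ndarray, key_pattern: np.ndarray) -> float:
    """Correlate the image with the secret pattern; values near 0 mean 'no watermark'."""
    centred = image - image.mean()
    return float((centred * key_pattern).mean())

# A screenshot copies the pixels but none of the original file's metadata.
screenshot = marked.astype(np.uint8).astype(float)

print(round(detect(photo, pattern), 2))       # close to 0.0 -> unmarked
print(round(detect(marked, pattern), 2))      # close to STRENGTH -> watermark present
print(round(detect(screenshot, pattern), 2))  # still close to STRENGTH after "screenshotting"
```

A real detector also has to cope with resizing, cropping and recompression, which is where the trained networks Darmé describes come in; the simple correlation trick here would not survive those edits.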
Sarah Al-Husseini: Thank you for that Zoe. I think we’re at time. So maybe just to wrap up a big, huge thank you to our panelists, to Jim Zena, our clicker, stepping in with the internet connection. And just to say, I think it’s from the conversation today, very apparent that we need a holistic approach to the shared responsibility of information literacy and protecting our users and education with regards to our stakeholders and government users, especially youth, because somebody who works in one of the largest youth populations in the world, this is something that sometimes gets overlooked, but is really important. Start them young and bringing in civil society to the conversation is always really important. So maybe I’ll hand back over to Jim.
Jim Prendergast: Yeah, no, thanks everybody. Especially thanks for the great questions, both in person and online. That’s what we really look forward to is the interaction with everybody instead of talking amongst ourselves. So appreciate it. Everybody have a good evening and we’ll see you back here tomorrow. Can the speaker stay online for two seconds for a quick photo? Great. Thanks everyone. Thank you.
Katie Harbath
Speech speed
185 words per minute
Speech length
2677 words
Speech time
867 seconds
Misinformation is sticky and spreads quickly
Explanation
Katie Harbeth explains that misinformation can spread rapidly across platforms and is difficult to counteract once it has taken hold. This stickiness makes it challenging for platforms to effectively combat false information.
Evidence
No specific evidence provided in the transcript.
Major Discussion Point
Challenges of Misinformation
Agreed with
Zoe Darma
Agreed on
Misinformation is a complex and challenging problem
People interpret labels and warnings differently
Explanation
Harbeth points out that users may have varying interpretations of content labels and warnings. This variability makes it difficult for platforms to effectively communicate the reliability or potential issues with certain content.
Evidence
Example given of how an ‘altered content’ label could be interpreted as meaning AI-generated, Photoshopped, or manipulated in some other way.
Major Discussion Point
Challenges of Misinformation
Differed with
Zoe Darma
Differed on
Effectiveness of labeling and warnings
Users’ emotional state affects how they consume information
Explanation
Harbeth discusses how users’ emotional states and intentions when using platforms influence their information consumption. She notes that people in different modes (e.g., relaxation vs. intentional information seeking) interact with content differently.
Evidence
Reference to a study by Google’s Jigsaw division and GEMIC on how Gen Z approaches online information.
Major Discussion Point
Challenges of Misinformation
Agreed with
Zoe Darma
Agreed on
User behavior and preferences play a significant role
Platforms struggle to balance free expression and content moderation
Explanation
Harbeth highlights the challenge platforms face in moderating content while respecting freedom of expression. She notes that platforms are often criticized for both over-censorship and under-moderation.
Major Discussion Point
Challenges of Misinformation
Platforms use fact-checking, labeling, and reducing reach of suspect content
Explanation
Harbeth outlines various strategies platforms employ to combat misinformation. These include partnering with fact-checkers, applying labels to questionable content, and reducing the visibility of potentially false information.
Evidence
Mention of platforms working with fact-checkers through the Poynter Fact-Checking Consortium (a toy sketch of the demote-and-label approach follows this entry).
Major Discussion Point
Approaches to Combating Misinformation
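As a rough illustration of the “reduce reach” part of this approach, the hypothetical sketch below demotes and labels posts that fact-checkers have rated false rather than removing them. Real platform ranking systems are far more complex and not public; every name and multiplier here is invented.

```python
# Hypothetical demote-and-label sketch, not any platform's real ranking system.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    post_id: str
    base_score: float                          # engagement-based ranking score
    fact_check_verdict: Optional[str] = None   # e.g. "false", "partly_false", or None
    labels: list = field(default_factory=list)

DEMOTION = {"false": 0.2, "partly_false": 0.5}   # invented multipliers

def rank(posts):
    """Demote and label fact-checked posts, then sort the feed by score."""
    for p in posts:
        if p.fact_check_verdict in DEMOTION:
            p.base_score *= DEMOTION[p.fact_check_verdict]
            p.labels.append(f"fact-checked: {p.fact_check_verdict}")
    return sorted(posts, key=lambda p: p.base_score, reverse=True)

feed = rank([
    Post("a", base_score=0.9, fact_check_verdict="false"),
    Post("b", base_score=0.6),
    Post("c", base_score=0.5, fact_check_verdict="partly_false"),
])
for p in feed:
    print(p.post_id, round(p.base_score, 2), p.labels)   # "b" now outranks the flagged posts
```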
Pre-bunking can be an effective strategy
Explanation
Harbeth suggests that pre-bunking, or providing users with warnings about potential misinformation before they encounter it, can be an effective approach. This strategy aims to prepare users to critically evaluate information they may come across.
Evidence
Reference to pre-bunking being effective during Russia’s invasion of Ukraine.
Major Discussion Point
Approaches to Combating Misinformation
Cross-industry collaboration is needed as AI evolves
Explanation
Harbeth emphasizes the importance of collaboration across industries to address the challenges posed by evolving AI technologies. She suggests that as AI changes how information is generated and consumed, stakeholders need to work together to shape future models and approaches.
Major Discussion Point
Evolving Landscape of Information and AI
Agreed with
Zoe Darma
Agreed on
Multi-stakeholder approaches are necessary
Zoe Darma
Speech speed
142 words per minute
Speech length
4387 words
Speech time
1850 seconds
Accuracy in identifying misinformation is only slightly better than chance
Explanation
Darma cites research showing that people’s ability to identify misinformation, particularly in images, is not much better than random guessing. This highlights the difficulty individuals face in distinguishing between authentic and manipulated content.
Evidence
Reference to a study by Australian researchers finding that accuracy in identifying misinformation is only slightly better than a coin toss (a rough numerical illustration follows this entry).
Major Discussion Point
Challenges of Misinformation
Agreed with
Katie Harbeth
Agreed on
Misinformation is a complex and challenging problem
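To make “only slightly better than a coin toss” concrete, the short calculation below uses hypothetical figures (the study’s exact numbers are not given in the session): even 58 correct judgements out of 100 images is hard to distinguish from pure guessing.

```python
# Hypothetical figures, for illustration only: how likely is a pure guesser
# (a fair coin) to score at least this well on 100 true/false image judgements?
from math import comb

correct, trials = 58, 100
p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
print(f"observed accuracy: {correct / trials:.0%}")
print(f"chance of >= {correct}/100 by guessing alone: {p_value:.3f}")  # about 0.067
```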
User preferences play a role in exposure to unreliable sources
Explanation
Darma discusses how users’ own search behaviors and preferences contribute to their exposure to unreliable information. She notes that people often actively seek out specific unreliable sources, rather than stumbling upon them randomly.
Evidence
Reference to a study on Bing search results showing that user preferences play an important role in exposure to and engagement with unreliable sites.
Major Discussion Point
Challenges of Misinformation
Agreed with
Katie Harbeth
Agreed on
User behavior and preferences play a significant role
Google implements tools like “About this result” to encourage information literacy
Explanation
Darma explains Google’s approach to promoting information literacy through features like “About this result”. This tool provides users with context about search results, encouraging critical evaluation of sources.
Evidence
Detailed description of the “About this result” feature in Google Search, including its functionality and purpose (an illustrative sketch of a source-context lookup follows this entry).
Major Discussion Point
Approaches to Combating Misinformation
Differed with
Katie Harbeth
Differed on
Effectiveness of labeling and warnings
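“About this result” surfaces context such as a short, neutral description of the source, often drawn from Wikipedia. The sketch below is not Google’s implementation; it only illustrates the same kind of source-context lookup using Wikipedia’s public page-summary endpoint, and it assumes the third-party requests package is installed.

```python
# Illustrative source-context lookup, not Google's "About this result" code.
import requests

def source_context(source_name: str) -> str:
    """Fetch a short description of a source from Wikipedia, if one exists."""
    title = source_name.replace(" ", "_")
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "info-literacy-demo/0.1"}, timeout=10)
    if resp.status_code != 200:
        return "No description found; treat the source with extra care."
    return resp.json().get("extract", "No description available.")

print(source_context("Reuters"))  # prints a short encyclopedic description of the outlet
```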
Watermarking and metadata for AI-generated content can help, but are not foolproof
Explanation
Darma discusses Google’s use of watermarking and metadata for AI-generated content as a means of identification. However, she notes that these methods are not perfect solutions, as they can potentially be circumvented or may not be universally adopted.
Evidence
Description of Google’s SynthID watermarking technology and its resistance to tampering (a sketch of a metadata-based provenance check follows this entry).
Major Discussion Point
Approaches to Combating Misinformation
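Alongside watermarking, provenance metadata can declare that an image is AI-generated, for example via the IPTC “digital source type” value trainedAlgorithmicMedia. The hypothetical sketch below (field names simplified) captures the caveat noted above: absent metadata proves nothing, because metadata is easily stripped when an image is re-saved or screenshotted.

```python
# Hypothetical, simplified provenance check; real provenance standards
# (IPTC, C2PA) use richer, signed structures than a flat dictionary.
TRAINED_ALGORITHMIC_MEDIA = "trainedAlgorithmicMedia"  # IPTC term for generative-AI media

def classify_provenance(metadata: dict) -> str:
    source_type = metadata.get("DigitalSourceType")
    if source_type == TRAINED_ALGORITHMIC_MEDIA:
        return "declared AI-generated"
    if source_type is not None:
        return f"declared: {source_type}"
    # Missing metadata is NOT evidence of authenticity: it may simply be stripped.
    return "unknown provenance"

print(classify_provenance({"DigitalSourceType": TRAINED_ALGORITHMIC_MEDIA}))  # declared AI-generated
print(classify_provenance({}))                                                # unknown provenance
```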
Multi-stakeholder approaches are needed, including user education
Explanation
Darma emphasizes the importance of involving multiple stakeholders in addressing misinformation, including users themselves. She argues that user education and improving information literacy are crucial components of any comprehensive strategy.
Major Discussion Point
Approaches to Combating Misinformation
Agreed with
Katie Harbeth
Agreed on
Multi-stakeholder approaches are necessary
Information literacy skills remain crucial even with new AI tools
Explanation
Darma stresses that traditional information literacy skills are still important, even as new AI tools emerge. She argues that these skills need to evolve to address the challenges posed by AI-generated content, but remain fundamental to navigating the information landscape.
Evidence
Reference to SIFT and CORE methods as evidence-based practices for information literacy.
Major Discussion Point
Approaches to Combating Misinformation
AI-generated content adds new challenges to information literacy
Explanation
Darma discusses how the proliferation of AI-generated content creates additional complexities for information literacy. She notes that distinguishing between human-created and AI-generated content is becoming increasingly difficult.
Major Discussion Point
Evolving Landscape of Information and AI
Distinguishing between generated and trustworthy content is complex
Explanation
Darma explains that the trustworthiness of content is not solely determined by whether it is AI-generated or not. She argues that context and use of the content are crucial factors in assessing its reliability.
Evidence
Example of a real photo taken out of context versus an AI-generated image used appropriately.
Major Discussion Point
Evolving Landscape of Information and AI
Platforms are developing new tools to address AI-generated misinformation
Explanation
Darma outlines Google’s efforts to develop tools for identifying and managing AI-generated content. She discusses both technical solutions like watermarking and user-facing features to promote critical evaluation of information.
Evidence
Description of Google’s SynthID watermarking technology and the “About this result” feature.
Major Discussion Point
Evolving Landscape of Information and AI
Google is implementing measures to combat involuntary synthetic pornographic imagery
Explanation
Darma discusses Google’s approach to addressing the issue of involuntary synthetic pornographic imagery (ISPI). She outlines various technical and policy measures implemented to detect, demote, and remove such content from search results.
Evidence
Mention of ranking protections, automated tools for victim-survivors, and content removal policies.
Major Discussion Point
Addressing Specific Misinformation Concerns
Sarah Al-Husseini
Speech speed
193 words per minute
Speech length
932 words
Speech time
289 seconds
Platforms are developing targeted approaches for health and election misinformation
Explanation
Al-Husseini notes that platforms are creating specific strategies to combat misinformation in critical areas such as health and elections. This suggests a recognition of the particular importance and potential impact of misinformation in these domains.
Major Discussion Point
Addressing Specific Misinformation Concerns
Audience
Speech speed
137 words per minute
Speech length
233 words
Speech time
101 seconds
Tech-facilitated gender-based violence is an emerging concern
Explanation
An audience member raises the issue of technology-facilitated gender-based violence as a growing problem. This highlights the need for platforms to address specific forms of harmful content and behavior beyond general misinformation.
Major Discussion Point
Addressing Specific Misinformation Concerns
Agreements
Agreement Points
Misinformation is a complex and challenging problem
Katie Harbeth
Zoe Darma
Misinformation is sticky and spreads quickly
Accuracy in identifying misinformation is only slightly better than chance
Both speakers emphasize the difficulty in combating misinformation due to its rapid spread and the challenges users face in identifying it accurately.
Multi-stakeholder approaches are necessary
Katie Harbeth
Zoe Darma
Cross-industry collaboration is needed as AI evolves
Multi-stakeholder approaches are needed, including user education
Both speakers stress the importance of collaboration across industries and involving multiple stakeholders, including users, in addressing misinformation.
User behavior and preferences play a significant role
Katie Harbeth
Zoe Darma
Users’ emotional state affects how they consume information
User preferences play a role in exposure to unreliable sources
Both speakers highlight the importance of user behavior and preferences in the spread and consumption of misinformation.
Similar Viewpoints
Both speakers discuss the implementation of various tools and strategies by platforms to combat misinformation and promote information literacy.
Katie Harbeth
Zoe Darma
Platforms use fact-checking, labeling, and reducing reach of suspect content
Google implements tools like “About this result” to encourage information literacy
Both speakers acknowledge the complexity of content evaluation and the challenges in effectively communicating content reliability to users.
Katie Harbeth
Zoe Darma
People interpret labels and warnings differently
Distinguishing between generated and trustworthy content is complex
Unexpected Consensus
Importance of traditional information literacy skills
Katie Harbeth
Zoe Darma
Pre-bunking can be an effective strategy
Information literacy skills remain crucial even with new AI tools
Despite discussing advanced technological solutions, both speakers unexpectedly emphasize the continued importance of traditional information literacy skills and strategies like pre-bunking.
Overall Assessment
Summary
The speakers largely agree on the complexity of misinformation, the need for multi-stakeholder approaches, the importance of user behavior and education, and the ongoing relevance of traditional information literacy skills alongside new technological solutions.
Consensus level
High level of consensus on the main challenges and general approaches to combating misinformation. This agreement suggests a shared understanding of the problem and potential solutions, which could facilitate more coordinated efforts across platforms and stakeholders in addressing misinformation.
Differences
Different Viewpoints
Effectiveness of labeling and warnings
Katie Harbeth
Zoe Darma
People interpret labels and warnings differently
Google implements tools like “About this result” to encourage information literacy
While Harbeth emphasizes the challenges of user interpretation of labels, Darma presents Google’s approach as a potential solution, highlighting a difference in perspective on the effectiveness of such tools.
Unexpected Differences
Overall Assessment
Summary
The main areas of disagreement revolve around the effectiveness of specific strategies to combat misinformation, such as labeling and user education.
Difference level
The level of disagreement among the speakers is relatively low. They largely agree on the challenges posed by misinformation and the need for multi-faceted approaches. The differences are primarily in emphasis and specific strategies, rather than fundamental disagreements. This suggests a general consensus on the importance of addressing misinformation, which could facilitate collaborative efforts in developing comprehensive solutions.
Partial Agreements
Both speakers agree on the need for technical solutions to combat misinformation, but differ in their emphasis on specific approaches. Harbeth focuses on fact-checking and labeling, while Darma highlights watermarking and metadata for AI-generated content.
Katie Harbeth
Zoe Darma
Platforms use fact-checking, labeling, and reducing reach of suspect content
Watermarking and metadata for AI-generated content can help, but are not foolproof
Both speakers agree on the need for collaborative approaches, but Darma places more emphasis on user education as a crucial component.
Katie Harbeth
Zoe Darma
Cross-industry collaboration is needed as AI evolves
Multi-stakeholder approaches are needed, including user education
Takeaways
Key Takeaways
Misinformation is a complex, evolving challenge that requires multi-stakeholder approaches
Technical solutions alone are insufficient; user education and information literacy remain crucial
Platforms are developing new tools and strategies to combat misinformation, including AI-generated content
User preferences and behaviors play a significant role in exposure to misinformation
Balancing free expression with content moderation remains an ongoing challenge for platforms
Resolutions and Action Items
Google to continue developing and improving tools like ‘About this result’ and SynthID watermarking
Platforms to explore pre-bunking as an effective strategy against misinformation
Stakeholders to focus on user education and improving information literacy skills
Unresolved Issues
How to effectively address user preferences for unreliable sources
Balancing platform intervention with concerns about censorship and free expression
Addressing misinformation from sources not using responsible AI practices or watermarking
Measuring long-term effectiveness of information literacy efforts
Suggested Compromises
Platforms focusing on reducing reach and amplification of misinformation rather than outright removal
Implementing user-choice features like community notes or decentralized labeling systems
Balancing automated detection with human review and fact-checking partnerships
Thought Provoking Comments
Misinformation is sticky and fast, which means that it can spread very, very quickly and it’s something that very much sticks with people and it can be very hard to change their minds.
speaker
Katie Harbeth
reason
This comment succinctly captures a key challenge in combating misinformation – its rapid spread and persistence in people’s minds.
impact
It set the stage for discussing the complexities of addressing misinformation and why simple solutions like labeling may not be sufficient.
You can’t bring logic to a feelings fight.
speaker
Katie Harbeth (quoting Kate Klonick)
reason
This pithy statement encapsulates a crucial insight about the emotional nature of misinformation and why purely factual approaches often fail.
impact
It shifted the conversation towards considering the psychological aspects of misinformation and how to address them.
We are actually pretty bad at identifying this info when presented in this way. And a group of researchers from Australia found that our accuracy is a little bit better than a coin toss.
speaker
Zoe Darma
reason
This comment, backed by research, challenges assumptions about people’s ability to identify misinformation and manipulated content.
impact
It highlighted the need for better tools and education to help people identify misinformation, leading to discussion of Google’s approaches.
“Is this generated?” does not always mean the same thing as “is this trustworthy?” It really depends on the context.
speaker
Zoe Darma
reason
This insight shifts the focus from simply identifying AI-generated content to evaluating its trustworthiness in context.
impact
It broadened the discussion beyond technical solutions to include the importance of context and critical thinking skills.
Searchers are coming across misinformation, when there is high user intent for them to find it. That means they are searching explicitly for unreliable sources.
speaker
Zoe Darma
reason
This comment reveals a counterintuitive finding about how users encounter misinformation, challenging assumptions about algorithmic responsibility.
impact
It introduced the idea that user behavior and preferences play a significant role in misinformation exposure, shifting the conversation towards individual responsibility and media literacy.
Overall Assessment
These key comments shaped the discussion by highlighting the complex, multifaceted nature of the misinformation problem. They moved the conversation beyond simplistic technical solutions to consider psychological factors, user behavior, and the importance of context. The discussion evolved from focusing solely on platform responsibilities to emphasizing the need for a holistic approach involving user education, critical thinking skills, and understanding the limitations of both human perception and technical solutions in identifying and combating misinformation.
Follow-up Questions
How effective are information literacy efforts in actually identifying and combating misinformation?
speaker
Ian from the Brazilian Association of Internet Service Providers
explanation
Understanding the measurable impact of information literacy initiatives is crucial for evaluating and improving strategies to combat misinformation.
How can SynthID watermarking be affected by screenshots?
speaker
Audience member (unnamed)
explanation
This explores potential limitations of watermarking technology, which is important for understanding its effectiveness in identifying AI-generated content.
What would it take to begin thinking differently about the harms around tech-facilitated gender-based violence?
speaker
Lina from Search for Common Ground and the Council on Tech and Social Cohesion
explanation
This highlights the need to address specific types of harmful content and suggests exploring standardization of watermarking across companies for such content.
How can we better incorporate user preferences and behaviors into multi-stakeholder approaches to information literacy?
speaker
Zoe Darma
explanation
This area of research emphasizes the importance of understanding and addressing user behavior in consuming and seeking out information, which is often overlooked in current approaches.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online