Deepfakes for good or bad?

8 Jul 2025 15:45h - 16:35h

Session at a glance

Summary

This panel discussion at the AI for Good Summit examined the challenges and opportunities presented by AI’s impact on media, particularly focusing on deepfakes and synthetic content creation. The session featured Michaela Ternasky-Holland, an Emmy-winning immersive director, and Sam Gregory, executive director of human rights organization Witness, with a video contribution from Dan Neely of content authentication company Vamilio.


Sam Gregory highlighted the complex landscape of deepfakes through Witness’s Deepfakes Rapid Response Force, which helps journalists and human rights defenders verify contentious media. He presented cases showing three scenarios: content that appears fake but is real, content that cannot be definitively verified, and real content dismissed as AI-generated. Gregory emphasized that non-consensual sexual deepfakes represent the most widespread current threat, while political disinformation using deepfakes is steadily increasing. He stressed that detection tools are not keeping pace with creation capabilities, particularly for non-English content and underrepresented populations.


Michaela Ternasky-Holland discussed her creative work using AI tools like Sora, emphasizing the distinction between closed-source platforms (which retain user data and IP) and open-source alternatives. She warned creators about participating in alpha/beta programs where their innovations may be incorporated into commercial products without compensation. Despite these concerns, she views AI as an empowering tool that enables new forms of storytelling and social impact work, describing it as “more of an intern than a god.”


Both panelists agreed that solutions require a multi-faceted approach including provenance technologies, better detection systems, regulatory frameworks, and clearer red lines for unacceptable uses. They expressed cautious optimism that with proper safeguards, people will be able to trust online content in five years, though this will require technical systems, education, and institutional support rather than naive trust.


Keypoints

## Major Discussion Points:


– **The dual nature of AI’s impact on media**: The discussion explored how AI simultaneously creates opportunities for creative democratization while posing significant threats to information integrity, with deepfakes becoming increasingly sophisticated and harder to detect.


– **Real-world challenges in detecting and combating deepfakes**: Sam Gregory presented concrete examples from the Deepfakes Rapid Response Force, showing the complexity of determining what’s real versus synthetic, including cases where politicians dismiss real recordings as AI-generated and situations where authenticity cannot be determined.


– **Creative industry implications and IP concerns**: Michaela discussed how generative AI tools like Sora are reshaping creative work, highlighting the distinction between open-source and closed-source platforms, and raising concerns about tech companies appropriating artists’ work and creative styles without compensation.


– **Technical solutions and policy gaps**: The conversation addressed the need for better provenance technologies, watermarking systems, and detection tools, while identifying critical gaps in global regulation and the need for a “pipeline of responsibility” from AI model makers to end users.


– **Future outlook and practical safeguards**: Both panelists discussed what trust in digital media might look like in five years, emphasizing the need for technical systems, education, institutional support, and clearer language around synthetic versus human-verified content.


## Overall Purpose:


The discussion aimed to examine the challenges AI poses to media integrity and information ecosystems, and to explore how society can embrace AI’s creative potential while safeguarding against its harmful applications, particularly deepfakes and misinformation.


## Overall Tone:


The tone was thoughtful and pragmatic throughout, balancing concern with cautious optimism. The panelists acknowledged serious threats while maintaining that solutions are achievable through proper technical infrastructure, regulation, and education. The conversation remained constructive and solution-oriented, with both speakers offering concrete recommendations rather than dwelling on doom-and-gloom scenarios.


Speakers

– **Greg Williams**: Moderator/host of the session on AI’s impact on media and the information ecosystem


– **Sam Gregory**: Executive Director of Witness, a global human rights organization that helps people use video and technology to protect and defend human rights. Works on AI detection and provenance technologies, and runs the Deepfakes Rapid Response Force


– **Michaela Ternasky Holland**: Emmy award-winning and Peabody-nominated immersive director whose work fuses compelling storytelling with emerging technologies. Works with augmented reality, virtual reality, and generative AI to create films and installations; has experience in journalism and with the United Nations


– **Dan Neely**: CEO and founder of Vamilio, a company focused on content provenance and AI-powered media authentication. Serial entrepreneur with over 20 years of experience in AI, named a Time 100 recipient as one of the most influential voices in AI (participated via video as he couldn’t join in person)


Additional speakers:


None identified beyond the provided speaker names list.


Full session report

# AI’s Impact on Media and Information Ecosystem: Panel Discussion Report


## Introduction


This panel discussion examined the challenges and opportunities presented by artificial intelligence’s impact on media and information ecosystems. Moderated by Greg Williams, the session featured three experts: Sam Gregory, Executive Director of Witness, a global human rights organisation that helps people use video and technology to protect and defend human rights; Michaela Ternasky-Holland, an Emmy award-winning and Peabody nominated immersive director whose work fuses compelling storytelling with emerging technologies; and Dan Neely, CEO and founder of Vamilio, who participated via video contribution.


## Current State of Deepfake Technology and Detection Challenges


### The Complexity of Real vs. Fake Content


Sam Gregory opened by discussing his organisation’s Deepfakes Rapid Response Force, which assists journalists and human rights defenders in verifying contentious media. He illustrated the complexity through three case studies that demonstrate the nuanced nature of synthetic media verification:


“So we had one audio case in which a politician called a voter gullible and encouraged lying to them. And it turned out that it was proved real. We had a second case that was a walkie-talkie conversation where a military leader called for bombing civilians, and we were unable to prove whether it was real or false. And we had a third case where a politician claimed that recordings released publicly were made with AI and dismissed them, and they were in fact real.”


Gregory also referenced the rapid improvement in AI video quality, using the well-known Will Smith eating spaghetti benchmark to show how output has progressed from obviously fake clips in 2023 to far more convincing results with current models. He also described the Pikachu protest example: real footage of a person in a Pikachu costume at a protest was followed by AI-generated images and then AI-generated video derived from it, illustrating how layers of real and synthetic content blur the line between reality and fiction.


### Primary Current Threats


Gregory emphasized that while political disinformation receives significant attention, the most immediate widespread threat comes from elsewhere: “The most clear widespread threat we see right now are deepfake, non-consensual sexual images being shared of women, right? And it’s pervasive, and that is the first place we need to start because it’s so widespread.”


Regarding political impacts, he noted that while deepfakes in political contexts are increasing, they haven’t yet reached a level that destabilizes elections.


### Detection Technology Limitations


A critical concern raised by Gregory was the inadequacy of current detection capabilities. He highlighted that detection tools “aren’t built for those who need them most” and noted particular gaps for non-English content and people “not represented in the data sets.” He emphasized that the realism of AI-generated content is rapidly improving while detection capabilities lag behind.


## Creative Industry Perspectives and Practical Applications


### Michaela’s Creative Projects and AI Integration


Michaela Ternasky-Holland provided detailed examples of her work integrating emerging technology and AI with creative storytelling. She described several projects:


– **A VR installation on facial prosthetics**: Viewers wear custom headsets fitted with a facial prosthetic while experiencing the story of a woman who lost her face to a gunshot


– **Morning Light**: An installation using emerging technology to read visitors’ tea leaves and deliver an astrological reading


– **Kappa**: A deepfake project imagining people of the indigenous Philippine islands as they might have looked outside of colonial contact, connecting Filipino Americans with the lives of their ancestors


– **The Great Debate**: An interactive installation, still in progress, in which audiences create political candidates powered by large language models (Gemini, Claude, and ChatGPT) and pose difficult questions to them


She explained how AI tools function in her creative process, describing them as more of an intern than a god: they require significant human guidance and iteration, and creative ideas often have to pivot around what the tools cannot do.


### Platform Exploitation Concerns


Ternasky-Holland raised significant concerns about how major technology companies exploit creators through closed-source platforms. She shared her personal experience with OpenAI’s Sora platform:


“The reality is when you start playing with closed source materials… anything that you are kind of giving to that mechanism, whether that’s your voice as a writer for ChatGPT, or if it’s your image as a concept artist to then generate video, the reality is all of that is getting kind of plugged and played back into their systems.”


More specifically, she revealed: “Even my own work that I did with Sora is now implemented as a single button of presets inside the public-facing product, and at this point, I don’t see any of those residuals from that button that I helped create unknowingly.”


She warned creators about participating in alpha and beta programs, arguing that such creator programs effectively use artists as unpaid think tanks for product development: “they are using you as a think tank, they are using you as a brain trust.”


### Technical Recommendations for Creators


Ternasky-Holland provided specific technical guidance, recommending open-source alternatives like ComfyUI and Hugging Face over closed-source platforms for better data protection, though acknowledging they require more technical expertise.
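
To make the closed-versus-open-source distinction concrete, here is a minimal, illustrative sketch of generating an image entirely on a creator’s own machine with the open-source Hugging Face diffusers library. The model checkpoint, prompt, and file names are assumptions chosen for illustration, not tools discussed in the session; the point is simply that nothing is sent back to a platform operator.

```python
# Illustrative only: local image generation with the open-source diffusers library.
# Because the model runs locally, prompts and outputs stay on the creator's machine.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; any locally downloaded diffusion model works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a local GPU; use .to("cpu") otherwise (much slower)

image = pipe("a papercraft-style animation still of a mountain village").images[0]
image.save("village.png")
```

The trade-off she describes is visible even in this small sketch: the creator must manage model downloads, GPU memory, and dependencies themselves, which closed-source platforms hide behind a hosted, paid interface.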


## Technology Solutions and Verification Systems


### TraceID Technology


Dan Neely presented his company’s TraceID technology as a solution for identifying AI-generated content while protecting individual identity rights. He mentioned partnerships with companies including Pocket Watch, Sony Pictures, and Sony Music, focusing on content authentication and AI-powered media verification.


### Provenance and Detection Approaches


Gregory outlined the need for provenance technology that can show the “recipe” of AI and human contribution in content creation, though he acknowledged this is technically challenging and must work across platforms while protecting privacy rights.
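
As a rough illustration of the “recipe” idea (and not a description of C2PA or any specific standard the speakers may have had in mind), the sketch below records the human and AI “ingredients” of a media file and binds them to a cryptographic fingerprint of the content. Function and field names are invented for the example.

```python
import hashlib

def make_manifest(media_path: str, ingredients: list[dict]) -> dict:
    """Build a minimal provenance 'recipe' for a media file.

    `ingredients` lists the human and AI contributions, e.g.
    {"tool": "camera", "action": "captured"} or
    {"tool": "video-model", "action": "generated background"}.
    """
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "content_sha256": digest,    # fingerprint binding the recipe to this exact file
        "ingredients": ingredients,  # the AI/human mix described in the session
    }

def matches(media_path: str, manifest: dict) -> bool:
    """True if the file is byte-identical to the one the manifest describes."""
    with open(media_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == manifest["content_sha256"]

# Example: record that a clip was captured on camera and then AI-upscaled.
# manifest = make_manifest("clip.mp4", [
#     {"tool": "camera", "action": "captured"},
#     {"tool": "ai-upscaler", "action": "enhanced resolution"},
# ])
```

Real provenance systems go much further: they sign the manifest, embed it so it survives re-encoding and platform transfers, and avoid attaching personal identity to it, which is exactly the interoperability and privacy challenge Gregory describes.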


A notable disagreement emerged regarding detection methods. While Gregory envisioned AI agents helping users flag suspicious content, Ternasky-Holland cautioned against using AI to detect AI, warning that a single platform update can break such systems within months. She instead advocated mathematical and blockchain-based detection technologies as more stable foundations.
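
As a toy illustration of what “mathematical” rather than AI-based verification can mean (a generic hash chain, not a description of any product mentioned in the session), each new provenance record below, for instance a manifest like the one sketched earlier, commits to the hash of the previous record. Tampering with any earlier entry is then detectable by recomputation alone, without any machine learning model.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(chain: list[dict], manifest: dict) -> list[dict]:
    """Append a provenance manifest to a simple tamper-evident hash chain."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "manifest": manifest,
        "prev_hash": prev_hash,  # commits this record to the whole history before it
    }
    record_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "record_hash": record_hash})
    return chain

def chain_is_intact(chain: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered record breaks the check."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("timestamp", "manifest", "prev_hash")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or recomputed != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True
```

Unlike an AI classifier, this check does not degrade when generation models update; its stability comes from the hash function itself, which is the property Ternasky-Holland points to.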


The discussion also touched on heat maps and other verification systems that could help users understand content authenticity.


## Policy and Regulatory Considerations


### Responsibility Framework


Gregory articulated the need for a “pipeline of responsibility” extending from AI model makers to end users throughout the entire ecosystem, rather than placing the burden solely on individual users to identify synthetic content.


### Regulatory Gaps


The discussion identified significant gaps in current regulatory frameworks. Gregory called for “clear red lines” for unacceptable uses such as non-consensual sexual imagery. Ternasky-Holland advocated for licensing systems that would compensate creators whose work is used in AI training data.


Williams highlighted the disparity between regulatory approaches, noting that lawmakers, particularly in the United States, have been less effective at regulating large technology companies compared to European efforts.


## Future Outlook and Evolving Concepts of Truth


### Reframing Authenticity


Ternasky-Holland proposed evolving the language around content authenticity: “I think it first starts with the idea of how we talk about these things… we used to say landline now we say mobile now we say cell phone. So my hope is that also our language around truth continues to expand. So human verified content or human verified assets… versus synthetic media.”


### Five-Year Outlook


When asked whether we will be able to trust what we see online in five years, both panelists expressed cautious optimism. Gregory emphasized that trust online will require technical systems, education, and institutional support rather than naive acceptance.


Gregory provided important perspective on the scope of the problem: “99% of the time, it’s going to be telling us that we don’t need to care that something is made with AI, because it’s fun, it’s everyday communication, it’s entertainment, it’s not malicious or deceptive.” This reframed the debate by suggesting that most AI-generated content is benign, with the real challenge being distinguishing between harmful and harmless uses.


## Key Recommendations


The discussion generated several practical recommendations:


**For Creators:**


– Be cautious of alpha and beta programs from technology companies


– Consider open-source alternatives to closed-source AI platforms


– Experiment with AI tools to understand their limitations


**For Policy Makers:**


– Develop licensing systems that compensate creators for training data use


– Establish clear regulatory frameworks with defined red lines


– Create comprehensive provenance system requirements


**For Technology Development:**


– Prioritize mathematical and blockchain-based detection over AI-based systems


– Develop accessible detection tools for global populations


– Implement comprehensive provenance tracking systems


## Conclusion


This panel discussion moved beyond simple technical concerns about deepfake detection to explore the multifaceted challenges posed by synthetic media. The conversation highlighted the immediate need to address non-consensual sexual imagery as the most pressing current threat, while also examining the creative potential and economic implications of AI tools.


The speakers demonstrated that while significant challenges exist—from inadequate detection tools to creator exploitation—there are pathways forward through technical innovation, regulatory frameworks, and evolving conceptual approaches to authenticity. The discussion emphasized that solutions must be systemic rather than placing responsibility solely on individual users, and that the rules governing these technologies are being determined now, giving current stakeholders the opportunity to influence outcomes.


The overall message was one of cautious optimism: with proper technical infrastructure, regulatory frameworks, and evolved approaches to understanding media authenticity, society can navigate the challenges of synthetic media while preserving beneficial uses of AI technology.


Session transcript

Greg Williams: Good afternoon, everyone. How’s everyone doing? So we have heard a lot today about the opportunities and the upsides of AI, all of which has enormous merit, may I add. But in this session, we’re going to examine some of the challenges. So how AI is reshaping media, degrading the information ecosystem, and blurring the boundaries between reality and fabrication. So deepfake content is proliferating at a staggering rate. And as it’s becoming more convincing, the challenge of maintaining trust in digital media and digital content is intensifying. But at the same time, AI tools are unlocking creative possibilities, they’re democratising content creation. So this afternoon, we’re going to explore how we can embrace AI’s potential in the creative industries, while also safeguarding the integrity of our media landscape. So we’ve got two amazing panellists here, we were supposed to have a third, he unfortunately can’t be with us today. So let me introduce our two panellists. So Michaela first, Michaela Ternasky-Holland is an Emmy award winning and Peabody nominated immersive director whose work fuses compelling storytelling with emerging technologies. And Sam Gregory is the executive director of Witness, a global human rights organisation that helps people use video and technology to protect and defend human rights. We were supposed to have Dan Neely with us, he’s the CEO and founder of Vamilio, a company focused on content provenance and AI powered media authentication. He couldn’t join us, we will be showing a short video about his work straight afterwards. So welcome, guys, great to have you with us. The two of you were able to join us. So let’s offer maybe the audience a little bit of context about your work. Michaela, do you want to maybe sort of like walk us through what you do, please?


Michaela Ternasky Holland: Sure. I have a slide that is coming up. And maybe since I don’t want to like turn my head, I’m going to stand up and kind of come over to the side so I can present these slides. So this is a little bit about me. I know I already had a wonderful, beautiful intro. Some other things I’ve been doing specific to emerging technology is I’ve been playing a lot with augmented reality, virtual reality, generative AI to create films, but also installations. I have also created museum exhibits, I’ve done research around immersive technology, and I’ve also worked in journalism as well as with the United Nations in previous projects. So we’re going to talk a little today about how I utilize deep fake technology and generative AI technology with the installations I create, and these are some of the things that I’m thinking about when I create installations. So going backwards, I created a VR project about a woman whose face was unfortunately taken from a gunshot, and so she utilized a facial prosthetic, and in order to tell the story, we created original VR headsets that allowed you to wear your own facial prosthetic, and here are a few people who were in the installation wearing these facial prosthetics. And the reason I’m bringing this up today is because when we’re talking about deep fake technology, we’re talking about the ability for digital technology to expand our understanding of what humanness is, but there’s also ways that physical assets can do this as well. So when I start creating in generative AI, I created a project called Morning Light, where we used emerging technology to be able to read your tea leaf and give you an astrological reading about things happening inside your teacup. I also created a project called Kappa, which utilized deep fake technology, where we showcased people in the indigenous Filipino islands, what they might have looked like outside of colonial contact, and this was this idea of how you create connection between not just your personal life as a Filipino American, but the past lives of your ancestors. We also utilized social media to portray the fact that these were not real people, so that you could see real people from social media in collaboration with the deep fake technology. We also showed Filipino Americans throughout American history in spaces that we knew they existed but do not have documentation. And finally, I’m here to talk a little bit also about the Great Debate, where we’re really playtesting now this idea of what deep fake technology can do to connect people. So the Great Debate is an interactive installation that utilizes large language models. We have people playtesting it and demoing it all over the world, but basically, you are able to create a political candidate based on each of the large language models that you see here, Gemini, Claude, and ChatGPT. You can create a name for them, you can create a political meaning for them, and then it generates a backstory of these political candidates based on those large language models. And the reason we’re doing this is because right now we want to be able to talk about what it’s like to run for presidency in the United States specifically, but we also want to give the audience opportunities to ask these LLMs very difficult questions and see how they would react not just to the question, but also to each other. And so here we’re asking the LLM about AI governance, but right now all you hear is a voice. 
What would happen if we were able to actually give these audience members deep fakes of these candidates? What if we utilized actors instead of deep fake technology? These are the questions we’re asking ourselves when we’re creating this installation, which is still a work in progress. Thank you.


Greg Williams: Thank you, Michaela. Sam, over to you.


Sam Gregory: I’ll also stand up. It’s an honor to be here on this session. I’m Sam Gregory. I’m the Executive Director of the Human Rights and Technology Network WITNESS. I also have some slides that should pop up any moment now. Fingers crossed. Okay, let me continue. There we go. Great. Thank you. And we’re a human rights network that works with journalists and human rights defenders around the world. And of course the people we work with put a heavy premium on how do we trust the information they create, and they are canaries in the coal mine for the problems that the rest of our societies face. And seven years ago we launched an initiative called Prepare Don’t Panic, which focused on how do you prepare early and base how we build an AI ecosystem on the needs and demands of the most vulnerable and on human rights values. And we’ve done everything from training people to spot AI in elections, to working on the technologies and the infrastructure for authenticity, provenance, and detection and the regulation around them. And I’m going to talk about one thing that we’ve been working on over the last 18 months to illustrate the landscape of deepfakes in the world today. We run something called the Deepfakes Rapid Response Force. It’s a mechanism in which local journalists, human rights defenders, and fact-checkers can share contentious deepfakes or material that might be real but is being claimed to be a deepfake to media forensics experts to get feedback. I want to talk first about three audio cases we had, and they illustrate the complexities of the real world. So we had one audio case in which a politician called a voter gullible and encouraged lying to them. And it turned out that it was proved real. We had a second case that was a walkie-talkie conversation where a military leader called for bombing civilians, and we were unable to prove whether it was real or false. And we had a third case where a politician claimed that recordings released publicly were made with AI and dismissed them, and they were in fact real. And so there you see the panoply of cases that we see in the real world, cases where it really is faked, cases where reality is dismissed as faked, and cases where you can’t prove it. And one thing I want to point out here is it is harder to prove it in most global scenarios where you’re not working with English, where you’re working with real-world formats, and you’re dealing with people who are not represented in the data sets that we use to train them. It is a messy world. So again, falsified with AI, impossible to determine real, but claimed to be AI. And I used audio examples there, but the reality is we have a huge range right now. We have everything from image to video, that’s the top left. We have traditional deep fakes, where someone fakes the words in someone’s mouth. We have what is happening right now with a tool like Google Veo, you see that in the bottom left, where you can completely fake audio and video synced. And you have faked images where people are confused by the detector results where they try them out first. That’s the bottom right case, where we found people dropping an image that they thought was AI into a public detector. It told them that was AI, but it confused them. The reality was in fact that it was poorly falsified and not what they thought it was. And the other thing I want to point out just as we head into the conversation here is the realism keeps increasing. 
And I’m not going to show a human rights example here, but the Will Smith eating spaghetti benchmark that you may be familiar with. And this shows how AI video has improved from 2023 to 2024 to 2025 to the most current iterations with that Google model I just mentioned. And the final thing I’ll say, and this is to say how confusing the real world is when you are trying to prove what is real or false. Here’s an example of a Pikachu running through a protest. This is real footage. Someone then made these AI generated images of Pikachu in a protest and Pikachu plus the Joker and Batman in a protest. Those are AI generated images. Then someone made an image of Pikachu running from police based off the. and Dan’s company, Vamilio, to make the AI-generated video. So just think about all the layers you’re dealing with here, and think about the questions that raises about how we prove what is real and false, and how do we deal with that in the most critical situations. Thank you.


Greg Williams: Thank you, Sam. So we’re now going to have a video from Dan’s company, Vamilio. So he works with creators, brands and platforms to ensure that digital content can be trusted in the age of generative media. If you could play the video, please.


Dan Neely: Hi, I’m Dan Neely, CEO of Vamilio and serial entrepreneur with over 20 years of experience in AI. I was honoured to be named a Time 100 recipient as one of the most influential voices in AI. I’m sorry I can’t be with you in person there today. At Vamilio, we’re committed to ensuring that AI aligns with and respects artists’ rights. We believe humanity should thrive in the era of AI, and that’s why we’re building essential guardrails for the generative internet. One of those guardrails is TraceID. We’ve tested the power of TraceID through partnerships with Pocket Watch, Sony Pictures and Sony Music. Our platform tracked millions of AI outputs from superhero characters to Spider-sonas. Most notably, we partnered with Sony Music to create the first ethical, authorised remix application with a major label. TraceID tracks thousands of hours of generative music and proved we could identify these derivatives anywhere online. Today, we’re working closely with music labels to continue to identify artists’ work with a high level of accuracy. In our industry tests, we used datasets containing masters, genAI tracks and AI-manipulated tracks. This means we can identify with precision and accuracy when AI music appears across the web and is created by AI platforms. I wanted to take a moment to share something important with a community that deeply values the responsible use of AI. At Vamilio, we’ve been thinking a lot about how to empower individuals like yourselves as generative AI becomes more widespread. That’s why we’re making our premium AI protection tool, TraceID, available to individuals around the world at no cost. With the growing concerns around deepfake scams, AI-generated misinformation and the misuse of personal likeness, we believe access to tools that help protect your identity shouldn’t be limited. TraceID helps people detect and remove unauthorized content and take back control over how your name, image and likeness are used in AI systems. The AI for Good Summit is the perfect place to start this next chapter, and we’re proud to be part of a global conversation about how AI can serve everyone, not just a few. We see this as a meaningful first step forward, making sure everyone has the tools to navigate this new era with confidence. Use the QR code here to get yourself protected.


Greg Williams: So, guys, let’s get into the conversation. So, let’s get into the… I don’t think my mic’s on. If you guys can hear me, then hopefully everyone can hear me now. Great. So, Sam, let me start with you, if that’s okay. Incredible kind of presentation and very shocking in some ways. Just kind of thinking about those images that you showed us, what do you think the biggest threat is around deepfakes today? Is it political disinformation, identity misuse, the erosion of fact-based decision-making or something else?


Sam Gregory: I think it’s all of the above, and it also depends on who’s being impacted. So, the most clear widespread threat we see right now are deepfake, non-consensual sexual images being shared of women, right? And it’s pervasive, and that is the first place we need to start because it’s so widespread. What we saw in deepfakes used for political disinformation, and we ran this deepfakes rapid response force throughout the sort of gangbusters election year last year and continue to do it now, is that you’re seeing a steady upward increase in the usage, and you’re seeing that the tools are not keeping pace to detect them or to show when AI is being used in everyday content. And so, although it’s not a nightmare scenario yet, we’re not seeing deepfakes destabilizing elections in the political context, what you’re seeing is a growing disparity between the ability to create synthetic content that looks like reality, the ability to weaponize it, versus the ability to prove when something is real and to show when something is synthesized. And that’s not to say that when something is synthesized, it’s inherently wrong, right? It is to say that the solutions that we need, and we focus on two within our work, one is the detection scenario that I described in some of those examples. At the moment, those tools are neither available nor built for the people who need them most. And then the other set of solutions that are around what’s called provenance, which is ways to show essentially the recipe of how AI and human mix in a piece of media are not widespread enough. So, we need to have either of the solutions that are going to help us fight back against a political disinformation crisis in a few years.


Greg Williams: Just a follow-up question on that provenance issue. If synthetic media clearly is just massively increasing, why is it hard for provenance technologies, watermarking, whatever it might be, to keep pace with that? Why can’t we do that?


Sam Gregory: Yeah. So, provenance technology at its heart is this idea that we show the recipe of the mix of AI and human in a piece of content. And a lot of the approaches that are being promoted, and that includes leadership work from the ITU here in its AMAS initiative, is to embed metadata, to link it to watermarks, to link it to a fingerprint so that you can durably retain the information as it moves through the internet. And that’s hard, right, to find something that can remain with content across the internet and do that interoperably across mainstream and commercial and open source platforms. And the other thing to remember is, can you also do that in a way that protects privacy, that doesn’t create another way to surveil people, and doesn’t, for example, arbitrarily start mixing in things like identity to the fact that you used an AI tool. So getting it right is conceptually hard, and getting it right is technically hard. But it’s a really important thing to do because, as you saw, the detection side is really hard, and it requires expertise, and it is fallible. So we need provenance solutions that are widespread, that are available to people, and that work for the world we live in. And 99% of the time, it’s going to be telling us that we don’t need to care that something is made with AI, because it’s fun, it’s everyday communication, it’s entertainment, it’s not malicious or deceptive. And that’s where something like provenance helps, because it enables you to sort between the stuff you need to worry about and the stuff that is just our everyday world in which AI is integrated into everything.


Greg Williams: Okay, I want to come back to you on that 90% of content, we don’t need to care. But Michaela, I’d love to come to you next. So Sam’s talked about the degradation of fact-based information. There’s also concerns that AI is going to decimate the creative industries. And I’d like to get your viewpoint as someone who’s worked with Sora. How do you see generative AI tools like Sora shifting the boundaries of authorship and creative control in the arts and the creative industries?


Michaela Ternasky Holland: Yeah, it’s a great question. I think specifically with not just Sora, but all of these other tools that are out there, there’s a huge difference between open source and closed source. And the reality is when you start playing with closed source materials, basically, or closed source platforms, the ones you have to pay for, the ones that are being given to you for very cheap, what seems very cheap right now, but I’m sure is going to increase over time by the Googles, the Metas, the OpenAIs, that’s all closed source technology. So anything that you are kind of giving to that mechanism, whether that’s your voice as a writer for ChatGPT, or if it’s your image as a concept artist to then generate video, the reality is all of that is getting kind of plugged and played back into their systems of closed sourceness. Versus open source, which is more like ComfyUI or Hugging Face, these are all systems that run on the idea that products don’t need to be quote-unquote paid for because there’s no reason to have people pay for things that should be easily accessible because it’s internet and it’s code, democratization of the internet, the wide open internet. So if you are a creative going into this, you need to ask yourself, am I utilizing closed source because it’s easier and simple and the context of what I’m using it for makes sense? Or am I going to try and really protect my data and my IP and go more of the open source route? And the reality is, too, just to say it, there’s a lot of these closed source platforms that have creator programs, they have director programs, they have artist programs, and be very careful because it might seem like you’re getting amazing access to alpha, beta tools that no one else has access to yet, but the reality is they are using you as a think tank, they are using you as a brain trust. Even my own work that I did with Sora is now implemented as a single button of presets inside the public-facing product, and at this point, I don’t see any of those residuals from that button that I helped create unknowingly. So there are some very real things about being a creative and having these tech companies start to encroach on creative tools. That being said, on the other side of the context, they’re very empowering, they’re very eye-opening. I think the first thing you should do is just start playing with them so you can start to see where the boundaries are because a lot of the content you’re going to see on LinkedIn and on the marketing side of these tools is going to be the bigger, better, best thing, and that’s actually not going to show you the hundreds and thousands of hours that people like me and my teams have spent knocking our head against the wall because it just can’t get very simple things right. And so one thing I often say, too, is that it’s more of an intern than a god. It’s more of this mechanism that you really have to work with and you really have to say, okay, you can’t do this, so we actually have to pivot our creative ideas, we have to pivot our storytelling ideas to do it more like a human being and the AI. This is often happening behind the scenes of creative work. It does allow us to do things that at a certain level of budget that we wouldn’t have been able to do before, we are able to do. Some of the stories can be for good. Some of the stories can be very impactful. Some of the stories can go on to have a really incredible resonance throughout our communities, even when they are made with synthetic media. 
Of course, the dangers on the other side are just like the dangers of the digital world. It can build a house. It can also do some very violent things. Those are some of the things that we need to be thinking about when we are utilising these tools in the creative industry.


Greg Williams: I am fascinated by the fact that you are okay with the platform taking your IP without any compensation. Is it in your mind just because the tool gives you the ability to have social impact, so there is a benefit there that outweighs the IP theft? I guess what I am asking you, is that is it a good thing that you are okay with the platform taking your IP without any compensation? Another question is that, what is the business model? How can you continue to do your work, which is in social good space, if very large, let’s face it, some of the largest, most powerful companies in the world are stealing your content?


Michaela Ternasky Holland: Yes, so the context of that button was that I was an Alpha Programme artist. At the time, the product had not been released to the public yet. I was willingly utilising the product to create, you know, socially impactful stories, knowing that the content was being taken from a digital platform, and I was okay with that. But in terms of how they went about it, it was basically getting me on a Zoom call and being like, oh, my gosh, did you notice our new button? It’s inspired by one of your art styles. And my art style isn’t IP. I mean, the art style is paper craft animation. So it’s a very well-known animation style. But they saw how successful it was in reaching audiences as an art style, and they decided to embed it into the actual button. So the context there isn’t quite my direct IP. And the reason I stand here to talk about it is so that I can warn other creatives that this is what can happen if you join those kind of alpha-beta programmes. That being said, on the other side of the coin, the grey areas of my IP and the grey areas of what I have are very much still in the process of the courts as well. Because the reality is, the courts could come to the end consideration that anything synthetic doesn’t have any ability to IP as ownership. And that’s something I do willingly, knowingly. So even if my source of income comes from the clients or my source of income comes from the grants, it doesn’t necessarily come from the licensing of my IP, which is slightly different than maybe other people’s process and other people’s business models.


Greg Williams: OK. So, as Marc Benioff said earlier, the law will catch up with these guys. But we’ll see. I mean, with that note, back to you, Sam. Clearly, you know, we’ve seen complete failure of lawmakers in the US, less so in Europe, but lawmakers in the US, to actually grapple with large technology companies around social media. I don’t know how many congressional hearings we’ve had, but I think we’re up to like nine or something at the moment with absolutely no legislation whatsoever. From your perspective, just listening a little bit to what Michaela is saying, and, you know, as someone working in media, I’m constantly thinking about our ability to sort of continue to function if we’re having our content sort of scraped constantly by these companies. What gaps exist in global policy or global regulation around deepfakes that you feel like urgently need addressing? What can people in this room do in terms of putting pressure on lawmakers?


Sam Gregory: Yeah. And the social media analogy is so real to the people we work with. They felt like they were completely left out of the decision making on that. And it drives why we work on AI. I think there’s four things that are missing that need to be put in place. One is we need a pipeline of responsibility for how we think about AI that runs through the whole ecosystem all the way from the model makers to the deployers, to the tools, to the end users. We can’t blame you or I for not spotting the Pope in the puffer jacket or spotting the fake as you saw from the realism. It’s just going to keep increasing. And that pipeline of responsibility needs to be linked to a robust system of provenance. So we know when AI is being used in our content and communication and can make decisions about whether we think something is being used maliciously or deceptively. And that needs to be regulated to be there but in a way that protects human rights and the rights of people to free expression and privacy. The third thing is we do need this access to detection because people are going to evade the safeguards. So we need to resource it. We need to resource it for the world that exists the real world and for everyone not just a small minority of people in the global north and in the global north news industry frankly. And then the final thing is we need to be really clear on our red lines where things are not acceptable. Right. And a good example of that are those non-consensual sexual images generated with AI that are now permeating everything from schools to attacks on the lives of politicians. So it’s those four things that if we can get those in place we can start to think about a future for communication in which we can trust the stuff we want to trust. We can play with AI when we want to do that. But we also prevent the harms that are so clearly manageable and avoidable it should be illegal. How optimistic are you that we can have this very quickly deployed globally within the next few years. I think if we can deploy an imperfect system in the next few years we can reduce the harms that are possible and we can start to realize some of the potential because I think there is potential for creativity and storytelling and even for news media in much more AI being used. But only if we set these safeguards in place.


Greg Williams: OK final question for both of you. So Sam I’ll come to you first if that’s OK. So let’s look five years ahead. Will we be able to trust what we see online or not. Sam your thoughts.


Sam Gregory: I think we will be able to trust what we see online with the help of systems we put in place to help us do that. That is, systems that are technical like provenance, education systems that help people understand AI, and institutions like the news media that help them do that. And if we also potentially use AI to help us sort that, right? I don’t expect everyone to look at the metadata of every file, but I do hope I’ll have an AI agent in five years that will say you shouldn’t care about a hundred pieces of information, but this one looks suspicious and I want to tell you why. So I think if we can put those in place, I don’t think people should naively just trust what they see. But the last thing in the world I want as a human rights expert who depends on visual evidence is people being sceptical of everything, because that is corrosive and it’ll be exploited by people in power.


Greg Williams: Absolutely. Michaela, over to you. Five years from now, will we be able to trust what we see online?


Michaela Ternasky Holland: Well, I think it first starts with the idea of how we talk about these things. You know, we think about truth, we think about fiction, we think about nonfiction, and the reality is I think with synthetic media all of that is going to kind of get put in a blender and then exported for us. So we used to say landline, I was talking about this backstage, we used to say landline, now we say mobile, now we say cell phone. So my hope is that also our language around truth continues to expand. So human verified content or human verified assets, assets that are being verified as something a human captured and there’s been no trace of tampering to it, versus synthetic media, and if we’re even able to get heat maps around how that is. And I do think, you know, similar to the arms race of nuclear weapons, we create technology that can do a lot of bad in the world, we can also create amazing technology that can help us disarm those weapons, and I think it’s the same thing in the sense of deep fake technology. We are rapidly creating these technologies that can do a lot of harm for the world. We can also rapidly create technologies that can start to combat that and disarm those technologies’ ability to create disinformation, to create the sense of hopelessness or the ability to not trust anything you see online. And I really think that takes this idea of having equity in this space and how the money is moving in this space. So going back just a little bit into the business model, I do think we should be licensing all the training data. All of the organizations, the creative artists that get pulled into an answer or get pulled into an export need to have a little bit of money going back into their pockets as soon as that’s being used, because that is your IP. Even if your IP is never fully protected from generative AI, if your IP is being used in the training data, in the export process, in the kind of answer that you receive, then you should see that money going back to you. And if we can create a better system of how the money flows, we can then create a better system of creating technologies that help combat these things. And some of the things that I’m being warned against even in this space is don’t use generative AI technology to fight generative AI technology. Try and use other systems and methodologies and technology. So if you’re out there and you’re like, oh great, I have a really good detector, look into the back end. Are they using AI to detect AI? It might not be the most fail-safe system for the next two months, because there could be one update that completely breaks the system. Try to find different types of technologies, like mathematical technologies or blockchain technologies, things that are not necessarily being, I guess, transitioned through the software and hardware platform updates that we’re seeing happening so rapidly.


Greg Williams: Thank you both for such a thoughtful conversation. What I really enjoyed about this conversation is that both of you have come up with some very, very clear, pragmatic, concrete ways that we can move forward. And just to sort of sign off, I’d say that everyone in this room should maybe remember that everyone in this room has agency. The rules of how we use this technology are being determined at the moment. So please do participate and please do ensure that you are following these conversations that Sam and Michaela are having. Thank you so much.



Sam Gregory

Speech speed

190 words per minute

Speech length

2007 words

Speech time

630 seconds

Non-consensual sexual images are the most widespread current threat from deepfakes

Explanation

Gregory identifies deepfake non-consensual sexual images being shared of women as the most clear and widespread threat currently facing society. He emphasizes this is pervasive and should be the first priority to address.


Major discussion point

Current State and Challenges of Deepfake Technology


Topics

Human rights | Cybersecurity | Sociocultural


Political disinformation through deepfakes is steadily increasing but not yet destabilizing elections

Explanation

Through running the Deepfakes Rapid Response Force during election year, Gregory observed a steady upward increase in deepfake usage for political purposes. However, he notes that while concerning, deepfakes are not yet destabilizing elections in a nightmare scenario.


Evidence

Examples from the Deepfakes Rapid Response Force including cases of politicians calling voters gullible, military leaders calling for bombing civilians, and politicians dismissing real recordings as AI-generated


Major discussion point

Current State and Challenges of Deepfake Technology


Topics

Human rights | Cybersecurity | Legal and regulatory


Detection tools are not keeping pace with creation capabilities and aren’t built for those who need them most

Explanation

Gregory argues there’s a growing disparity between the ability to create convincing synthetic content and the ability to detect it or prove authenticity. The existing detection tools are neither available nor designed for the people who need them most – journalists, human rights defenders, and fact-checkers.


Evidence

Examples from the Deepfakes Rapid Response Force showing cases where content was real but claimed to be AI, impossible to determine, or actually falsified


Major discussion point

Current State and Challenges of Deepfake Technology


Topics

Human rights | Cybersecurity | Infrastructure


Agreed with

– Michaela Ternasky Holland

Agreed on

Current detection tools are inadequate and not accessible to those who need them most


Real-world deepfake detection is harder in non-English contexts and with underrepresented populations

Explanation

Gregory points out that detection becomes significantly more challenging when working outside English-language contexts, with real-world formats, and when dealing with people not represented in the datasets used to train detection systems. This creates a messy, complex landscape for verification.


Evidence

Audio cases from different global contexts showing varying levels of detection difficulty


Major discussion point

Current State and Challenges of Deepfake Technology


Topics

Human rights | Sociocultural | Infrastructure


The realism of AI-generated content is rapidly improving, making detection increasingly difficult

Explanation

Gregory demonstrates how AI-generated content quality has dramatically improved over short time periods, making it increasingly difficult for people to distinguish between real and synthetic content. This trend is accelerating and making detection more challenging.


Evidence

The Will Smith eating spaghetti benchmark showing improvement from 2023 to 2025, and the Pikachu protest example showing layers of real and AI-generated content


Major discussion point

Current State and Challenges of Deepfake Technology


Topics

Infrastructure | Cybersecurity | Sociocultural


Provenance technology showing the “recipe” of AI and human contribution is essential but technically challenging

Explanation

Gregory explains that provenance technology, which shows how AI and human elements mix in content creation, is crucial for establishing trust. However, implementing this technology across platforms while maintaining durability and interoperability is technically complex and challenging.


Evidence

Discussion of metadata embedding, watermarking, and fingerprinting challenges across mainstream, commercial, and open source platforms


Major discussion point

Technology Solutions and Infrastructure Needs


Topics

Infrastructure | Legal and regulatory | Human rights


Watermarking and metadata solutions must work across platforms while protecting privacy

Explanation

Gregory emphasizes that effective provenance solutions must be able to retain information as content moves across the internet while simultaneously protecting user privacy and avoiding creating new surveillance mechanisms. This balance is both conceptually and technically difficult to achieve.


Evidence

Reference to ITU’s AMAS initiative and the need for interoperable solutions that don’t arbitrarily mix identity with AI tool usage


Major discussion point

Technology Solutions and Infrastructure Needs


Topics

Human rights | Infrastructure | Legal and regulatory


A pipeline of responsibility is needed from model makers to end users throughout the AI ecosystem

Explanation

Gregory argues for a comprehensive system of responsibility that extends through the entire AI ecosystem, from those who create the models to those who deploy tools to end users. He emphasizes that individual users cannot be blamed for failing to spot sophisticated fakes.


Evidence

Reference to the Pope in puffer jacket example and increasing realism making individual detection impossible


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Michaela Ternasky Holland

Agreed on

Individual users cannot be solely responsible for identifying synthetic content


Clear red lines must be established for unacceptable uses like non-consensual sexual imagery

Explanation

Gregory calls for establishing clear boundaries around what uses of AI-generated content are completely unacceptable and should be illegal. He specifically highlights non-consensual sexual images as an example of content that is clearly harmful and should be prohibited.


Evidence

Examples of non-consensual sexual images permeating schools and being used in attacks on politicians’ lives


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Human rights | Legal and regulatory | Cybersecurity


Robust provenance systems need regulation while protecting free expression and privacy rights

Explanation

Gregory advocates for regulated provenance systems that can help establish content authenticity while simultaneously protecting fundamental human rights including free expression and privacy. This requires careful balance in implementation.


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Human rights | Legal and regulatory | Infrastructure


Trust online will require technical systems, education, and institutional support rather than naive acceptance

Explanation

Gregory envisions a future where online trust is maintained through a combination of technical provenance systems, educational programs to help people understand AI, and institutional support from organizations like news media. He emphasizes this is preferable to naive trust or complete skepticism.


Major discussion point

Future Outlook and Trust in Digital Media


Topics

Human rights | Sociocultural | Infrastructure


AI agents may help users sort trustworthy from suspicious content in the future

Explanation

Gregory suggests that future AI agents could assist users by automatically sorting through large amounts of content and flagging only suspicious items that require human attention. This would help manage the overwhelming volume of content while focusing human scrutiny where it’s most needed.


Evidence

Example of an AI agent that would say ‘you shouldn’t care about a hundred pieces of information but this one looks suspicious’


Major discussion point

Future Outlook and Trust in Digital Media


Topics

Infrastructure | Human rights | Cybersecurity


Disagreed with

– Michaela Ternasky Holland

Disagreed on

Approach to AI detection systems



Michaela Ternasky Holland

Speech speed

214 words per minute

Speech length

2312 words

Speech time

645 seconds

Closed-source platforms exploit creators by incorporating their work into systems without compensation

Explanation

Ternasky Holland warns that closed-source platforms from companies like Google, Meta, and OpenAI incorporate creators’ contributions into their systems without providing compensation. She uses her own experience where her work was turned into a preset button in Sora without receiving residuals.


Evidence

Personal experience with Sora where her paper craft animation style was incorporated as a preset button without compensation after participating in their alpha program


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Intellectual property rights | Economic | Human rights


Disagreed with

– Greg Williams

Disagreed on

Compensation expectations for creator contributions to AI systems


Open-source tools offer more data protection but require more technical expertise than closed-source alternatives

Explanation

Ternasky Holland explains that open-source tools like ComfyUI and Hugging Face provide better data protection since they don’t feed user inputs back into proprietary systems. However, these tools require more technical knowledge and effort compared to the convenience of closed-source platforms.


Evidence

Comparison between closed-source platforms (Google, Meta, OpenAI) and open-source alternatives (ComfyUI, Hugging Face)


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Human rights | Infrastructure | Economic


AI tools function more like interns than gods, requiring significant human guidance and iteration

Explanation

Ternasky Holland emphasizes that AI tools are not magical solutions but rather require extensive human oversight and iteration. She describes spending hundreds of hours working through limitations and having to pivot creative ideas based on what the AI can and cannot accomplish.


Evidence

Personal experience spending ‘hundreds and thousands of hours’ working through AI limitations and having to pivot creative and storytelling ideas


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Economic | Sociocultural | Infrastructure


Agreed with

– Sam Gregory

Agreed on

Individual users cannot be solely responsible for identifying synthetic content


Creator programs by tech companies use artists as unpaid think tanks for product development

Explanation

Ternasky Holland warns that tech companies’ creator and director programs, while appearing to offer exclusive access to new tools, actually exploit artists as unpaid research and development resources. Companies use these programs to gather insights that inform their product development.


Evidence

Personal experience with alpha/beta programs where her work was incorporated into product features without her knowledge or compensation


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Economic | Intellectual property rights | Human rights


AI enables creative work at budget levels previously impossible, allowing for impactful storytelling

Explanation

Ternasky Holland acknowledges that despite the challenges, AI tools do enable creators to produce work at budget levels that would have been impossible before. This democratization can lead to impactful storytelling that resonates with communities, even when using synthetic media.


Evidence

Examples of her own projects like Morning Light (tea leaf reading), Kappa (pre-colonial Filipino representation), and The Great Debate (political candidate simulation)


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Economic | Sociocultural | Development


Detection systems using AI to fight AI may be unreliable due to rapid platform updates

Explanation

Ternasky Holland warns against detection systems that use AI to detect AI-generated content, as these systems can be broken by software and hardware platform updates. She suggests these systems may not be reliable for more than a few months due to rapid technological changes.


Major discussion point

Technology Solutions and Infrastructure Needs


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Agreed with

– Sam Gregory

Agreed on

Current detection tools are inadequate and not accessible to those who need them most


Disagreed with

– Sam Gregory

Disagreed on

Approach to AI detection systems


Mathematical and blockchain-based detection technologies may be more stable than AI-based solutions

Explanation

Ternasky Holland recommends seeking detection technologies based on mathematical or blockchain approaches rather than AI-based systems. She argues these alternatives are less susceptible to the rapid updates and changes that affect AI-based detection systems.


Major discussion point

Technology Solutions and Infrastructure Needs


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Agreed with

– Sam Gregory
– Dan Neely

Agreed on

Technology solutions must be developed to combat deepfake harms
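
One way to read the "mathematical or blockchain" suggestion is content fingerprinting registered in a tamper-evident log at capture or publication time, so later copies can be checked without relying on an AI classifier. The sketch below is a toy illustration under that assumption: the in-memory ledger, field names, and chaining scheme are invented for the example, and exact SHA-256 matching would fail on recompressed copies, where real systems would need perceptual hashing.

```python
import hashlib
from datetime import datetime, timezone

# Toy append-only registry standing in for a public ledger or blockchain.
# Chaining each entry to the previous one makes retroactive edits visible.
ledger: list[dict] = []


def register(media_bytes: bytes, source: str) -> dict:
    """Record a content fingerprint at capture or publication time."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        (entry["content_sha256"] + entry["source"]
         + entry["timestamp"] + prev_hash).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry


def is_registered(media_bytes: bytes) -> bool:
    """Check whether a file matches a previously registered fingerprint."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return any(entry["content_sha256"] == digest for entry in ledger)


if __name__ == "__main__":
    original = b"...original broadcast footage..."
    register(original, source="newsroom-A")
    print(is_registered(original))                 # True: matches the registered original
    print(is_registered(original + b"deepfake"))   # False: altered copy has no registration
```

Unlike an AI-versus-AI detector, this kind of check does not degrade when generation models are updated, which is the stability argument Ternasky Holland makes.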


Training data licensing should provide compensation to creators whose work is used in AI systems

Explanation

Ternasky Holland argues that all training data should be licensed and that creators whose work contributes to AI outputs should receive compensation. She believes this is essential for creating equity in the AI space and ensuring fair distribution of economic benefits.


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Intellectual property rights | Economic | Legal and regulatory


Agreed with

– Greg Williams

Agreed on

Compensation and fair economic distribution are needed in AI development


Current legal frameworks are inadequate as courts may rule synthetic content has no IP ownership rights

Explanation

Ternasky Holland points out that current legal frameworks are insufficient, with courts potentially ruling that synthetic content cannot have intellectual property ownership. This uncertainty affects how creators can protect and monetize their work in the AI era.


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Legal and regulatory | Intellectual property rights | Economic


Language around truth and verification needs to evolve to include concepts like “human verified content”

Explanation

Ternasky Holland suggests that our language around truth, fiction, and nonfiction will need to evolve as synthetic media becomes prevalent. She proposes concepts like ‘human verified content’ to distinguish between different types of media authenticity.


Evidence

Analogy of how language evolved from ‘landline’ to ‘mobile’ to ‘cell phone’


Major discussion point

Future Outlook and Trust in Digital Media


Topics

Sociocultural | Human rights | Legal and regulatory


Technology to combat deepfakes can develop as rapidly as the harmful applications themselves

Explanation

Ternasky Holland draws an analogy to nuclear weapons, suggesting that just as we developed technologies to disarm dangerous weapons, we can rapidly develop technologies to combat the harmful uses of deepfake technology. She expresses optimism about our ability to create defensive technologies.


Evidence

Nuclear weapons analogy – creating technology to disarm dangerous weapons


Major discussion point

Future Outlook and Trust in Digital Media


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Equity in funding and development is crucial for creating effective counter-technologies

Explanation

Ternasky Holland emphasizes that creating better systems to combat deepfake harms requires more equitable distribution of funding and resources in the AI space. She connects this to the need for fair compensation systems for creators whose work contributes to AI development.


Major discussion point

Future Outlook and Trust in Digital Media


Topics

Economic | Development | Human rights


D

Dan Neely

Speech speed

167 words per minute

Speech length

386 words

Speech time

138 seconds

TraceID technology can identify AI-generated content and protect individual identity rights

Explanation

Neely presents TraceID as a solution that can detect AI-generated content with high accuracy and help individuals protect their identity from unauthorized use in AI systems. The technology is being made available at no cost to help people navigate deepfake threats and maintain control over their digital likeness.


Evidence

Partnerships with Pocket Watch, Sony Pictures, and Sony Music tracking millions of AI outputs; industry tests using datasets with masters, genAI tracks, and AI-manipulated tracks; first ethical authorized remix application with Sony Music


Major discussion point

Technology Solutions and Infrastructure Needs


Topics

Human rights | Cybersecurity | Intellectual property rights


Agreed with

– Sam Gregory
– Michaela Ternasky Holland

Agreed on

Technology solutions must be developed to combat deepfake harms


G

Greg Williams

Speech speed

159 words per minute

Speech length

994 words

Speech time

373 seconds

Everyone has agency in determining how AI technology will be used and regulated

Explanation

Williams emphasizes that the rules governing AI technology use are currently being established, and individuals in positions of influence have the power to participate in shaping these outcomes. He encourages active participation in ongoing conversations about AI governance rather than passive acceptance of how technology develops.


Evidence

Direct appeal to audience members to participate and follow conversations about AI regulation and policy


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Human rights | Legal and regulatory | Sociocultural


US lawmakers have failed to effectively regulate large technology companies compared to European efforts

Explanation

Williams points out the stark contrast between US and European approaches to tech regulation, noting that despite numerous congressional hearings with tech executives, the US has produced no meaningful legislation. He suggests this pattern of regulatory failure may continue with AI technology.


Evidence

Reference to approximately nine congressional hearings with no resulting legislation, contrasted with more effective European regulatory approaches


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Legal and regulatory | Economic | Human rights


Content creators face existential business model threats from AI companies scraping their work

Explanation

Williams raises concerns about the sustainability of creative industries when large technology companies continuously scrape and use content without compensation. He questions how media professionals and content creators can continue to function economically under these conditions.


Evidence

Reference to his own experience in media and the constant scraping of content by technology companies


Major discussion point

Creative Industry Impact and Authorship Concerns


Topics

Economic | Intellectual property rights | Human rights


Agreed with

– Michaela Ternasky Holland

Agreed on

Compensation and fair economic distribution are needed in AI development


Disagreed with

– Michaela Ternasky Holland

Disagreed on

Compensation expectations for creator contributions to AI systems


Legal frameworks will eventually catch up to regulate AI technology companies

Explanation

Williams references Marc Benioff’s earlier statement suggesting that while current legal protections may be inadequate, the legal system will eventually develop appropriate regulations for AI technology. However, he expresses some skepticism about the timeline and effectiveness of this process.


Evidence

Reference to Marc Benioff’s earlier statement at the same event


Major discussion point

Policy and Regulatory Framework Requirements


Topics

Legal and regulatory | Economic | Human rights


Agreements

Agreement points

Current detection tools are inadequate and not accessible to those who need them most

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

Detection tools are not keeping pace with creation capabilities and aren’t built for those who need them most


Detection systems using AI to fight AI may be unreliable due to rapid platform updates


Summary

Both speakers agree that existing detection technologies are insufficient, with Gregory emphasizing they’re not built for journalists and human rights defenders, while Ternasky Holland warns that AI-based detection systems are inherently unstable due to rapid technological changes.


Topics

Infrastructure | Cybersecurity | Human rights


Technology solutions must be developed to combat deepfake harms

Speakers

– Sam Gregory
– Michaela Ternasky Holland
– Dan Neely

Arguments

Provenance technology showing the ‘recipe’ of AI and human contribution is essential but technically challenging


Mathematical and blockchain-based detection technologies may be more stable than AI-based solutions


TraceID technology can identify AI-generated content and protect individual identity rights


Summary

All speakers advocate for technological solutions to address deepfake threats, though they propose different approaches – Gregory focuses on provenance systems, Ternasky Holland suggests mathematical/blockchain alternatives, and Neely presents TraceID as a detection solution.


Topics

Infrastructure | Cybersecurity | Human rights


Individual users cannot be solely responsible for identifying synthetic content

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

A pipeline of responsibility is needed from model makers to end users throughout the AI ecosystem


AI tools function more like interns than gods, requiring significant human guidance and iteration


Summary

Both speakers agree that the burden of identifying or managing AI-generated content cannot rest solely on individual users, with Gregory calling for systemic responsibility and Ternasky Holland emphasizing the complexity of working with AI tools.


Topics

Legal and regulatory | Human rights | Infrastructure


Compensation and fair economic distribution are needed in AI development

Speakers

– Michaela Ternasky Holland
– Greg Williams

Arguments

Training data licensing should provide compensation to creators whose work is used in AI systems


Content creators face existential business model threats from AI companies scraping their work


Summary

Both speakers express concern about the economic exploitation of creators by AI companies, with Ternasky Holland advocating for licensing compensation and Williams highlighting the threat to sustainable creative business models.


Topics

Economic | Intellectual property rights | Human rights


Similar viewpoints

Both speakers acknowledge the rapid advancement of AI technology but maintain optimism that defensive technologies can keep pace with harmful applications, though they emphasize different aspects of this technological arms race.

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

The realism of AI-generated content is rapidly improving, making detection increasingly difficult


Technology to combat deepfakes can develop as rapidly as the harmful applications themselves


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Both speakers believe that maintaining trust in digital media will require fundamental changes in how we approach and conceptualize authenticity, moving beyond simple trust/distrust to more nuanced verification systems.

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

Trust online will require technical systems, education, and institutional support rather than naive acceptance


Language around truth and verification needs to evolve to include concepts like ‘human verified content’


Topics

Sociocultural | Human rights | Infrastructure


Both speakers express concern about the adequacy of current regulatory frameworks, with Gregory calling for balanced regulation that protects rights and Williams noting the failure of US lawmakers to effectively regulate tech companies.

Speakers

– Sam Gregory
– Greg Williams

Arguments

Robust provenance systems need regulation while protecting free expression and privacy rights


US lawmakers have failed to effectively regulate large technology companies compared to European efforts


Topics

Legal and regulatory | Human rights | Economic


Unexpected consensus

Acceptance of AI tool limitations and the need for human-AI collaboration

Speakers

– Michaela Ternasky Holland
– Sam Gregory

Arguments

AI tools function more like interns than gods, requiring significant human guidance and iteration


AI agents may help users sort trustworthy from suspicious content in the future


Explanation

Despite their different professional backgrounds (creative arts vs. human rights), both speakers converge on a pragmatic view of AI as a collaborative tool rather than a replacement for human judgment. This consensus is unexpected given the often polarized discourse around AI capabilities.


Topics

Infrastructure | Human rights | Sociocultural


Optimism about technological solutions despite acknowledging severe current problems

Speakers

– Sam Gregory
– Michaela Ternasky Holland
– Dan Neely

Arguments

Trust online will require technical systems, education, and institutional support rather than naive acceptance


Technology to combat deepfakes can develop as rapidly as the harmful applications themselves


TraceID technology can identify AI-generated content and protect individual identity rights


Explanation

All speakers maintain technological optimism despite presenting serious concerns about current deepfake threats. This consensus on the potential for technological solutions is unexpected given the severity of the problems they describe, suggesting a shared belief in human agency to address these challenges.


Topics

Infrastructure | Cybersecurity | Human rights


Overall assessment

Summary

The speakers demonstrate strong consensus on the inadequacy of current systems, the need for technological solutions, the importance of systemic responsibility rather than individual burden, and the necessity of fair economic compensation for creators. They also share optimism about developing effective counter-technologies despite acknowledging serious current threats.


Consensus level

High level of consensus with complementary rather than conflicting perspectives. The speakers approach the deepfake challenge from different professional angles but converge on similar solutions and principles. This strong agreement suggests a mature understanding of the problem space and indicates potential for collaborative policy and technical solutions. The consensus implies that stakeholders across creative, human rights, and technology sectors can work together on comprehensive approaches to deepfake governance.


Differences

Different viewpoints

Approach to AI detection systems

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

AI agents may help users sort trustworthy from suspicious content in the future


Detection systems using AI to fight AI may be unreliable due to rapid platform updates


Summary

Gregory envisions AI agents helping users identify suspicious content in the future, while Ternasky Holland warns against using AI-based detection systems because they can be broken by rapid platform updates and recommends mathematical or blockchain-based alternatives instead.


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Compensation expectations for creator contributions to AI systems

Speakers

– Michaela Ternasky Holland
– Greg Williams

Arguments

Closed-source platforms exploit creators by incorporating their work into systems without compensation


Content creators face existential business model threats from AI companies scraping their work


Summary

While both acknowledge the problem of AI companies using creator content without compensation, Ternasky Holland appears more accepting of this situation when working on social impact projects, whereas Williams expresses stronger concern about the existential threat this poses to creative industries’ business models.


Topics

Economic | Intellectual property rights | Human rights


Unexpected differences

Optimism about technological solutions to deepfake problems

Speakers

– Sam Gregory
– Michaela Ternasky Holland

Arguments

Detection tools are not keeping pace with creation capabilities and aren’t built for those who need them most


Technology to combat deepfakes can develop as rapidly as the harmful applications themselves


Explanation

Despite both being experts in AI and media, they have notably different levels of optimism about technological solutions. Gregory emphasizes the growing gap between creation and detection capabilities, while Ternasky Holland is more optimistic about our ability to rapidly develop counter-technologies, using a nuclear weapons analogy.


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Overall assessment

Summary

The speakers showed remarkable consensus on identifying problems (deepfake threats, creator exploitation, need for better systems) but differed primarily on implementation strategies and timelines for solutions.


Disagreement level

Low to moderate disagreement level. The speakers largely agreed on fundamental issues and goals but differed on technical approaches and optimism levels. This suggests a healthy debate about methods rather than fundamental philosophical differences, which could lead to complementary rather than competing solutions in addressing deepfake challenges.




Takeaways

Key takeaways

Non-consensual sexual deepfake images represent the most widespread and immediate threat requiring urgent attention


A four-pillar framework is needed: pipeline of responsibility from AI creators to users, robust provenance systems, accessible detection tools, and clear red lines for unacceptable uses


Creative professionals face exploitation through closed-source platforms that incorporate their work without compensation, while open-source alternatives offer better data protection


Detection capabilities are failing to keep pace with creation tools, particularly for non-English content and underrepresented populations


Trust in digital media will require technical systems, education, and institutional support rather than naive acceptance of content


Language and frameworks around truth verification need to evolve to include concepts like ‘human verified content’ versus synthetic media


Provenance technology showing the ‘recipe’ of AI and human contribution is essential but technically challenging to implement across platforms


The rules governing AI technology use are being determined now, giving current stakeholders agency to influence outcomes


Resolutions and action items

Creators should be cautious of alpha/beta programs from tech companies that may exploit their work as unpaid think tanks


People should start experimenting with AI tools to understand their limitations rather than relying on marketing materials


Stakeholders should push for licensing systems that compensate creators whose work is used in AI training data


Detection technology should avoid using AI to fight AI due to instability from rapid platform updates


Mathematical and blockchain-based detection technologies should be prioritized over AI-based solutions for stability


Everyone in the room should participate in conversations about AI governance as the rules are being determined now


Unresolved issues

How to achieve global deployment of comprehensive AI safeguards within the next few years


Whether courts will rule that synthetic content has no IP ownership rights, affecting creator compensation


How to make detection tools accessible and effective for global populations, particularly non-English speakers


How to balance AI innovation with creator rights and fair compensation systems


How to implement provenance technology across platforms while protecting privacy and free expression


How to prevent the ‘arms race’ dynamic between increasingly sophisticated creation and detection technologies


How to address the failure of lawmakers, particularly in the US, to effectively regulate large technology companies


Suggested compromises

Accept imperfect but functional safeguard systems in the near term to reduce harms while working toward better solutions


Use AI agents to help users sort content rather than expecting manual verification of everything


Allow synthetic media for creative and beneficial purposes while establishing clear red lines for harmful uses


Balance closed-source convenience with open-source data protection based on specific use cases and risk tolerance


Develop equity-based funding models in which money flowing back to creators can enable better counter-technologies


Thought provoking comments

So we had one audio case in which a politician called a voter gullible and encouraged lying to them. And it turned out that it was proved real. We had a second case that was a walkie-talkie conversation where a military leader called for bombing civilians, and we were unable to prove whether it was real or false. And we had a third case where a politician claimed that recordings released publicly were made with AI and dismissed them, and they were in fact real.

Speaker

Sam Gregory


Reason

This comment is deeply insightful because it reveals the complex reality of deepfake detection beyond simple ‘real vs fake’ scenarios. It introduces three critical categories: content that is real but shocking, content that cannot be verified either way, and real content being dismissed as fake. This reframes the entire deepfake problem from a technical detection issue to a more nuanced epistemological challenge about truth and verification.


Impact

This comment fundamentally shifted the discussion from theoretical concerns about deepfakes to concrete, real-world complexities. It established that the problem isn’t just about detecting fake content, but about navigating a world where the line between real and synthetic creates multiple layers of uncertainty. This set up the entire framework for discussing provenance and detection challenges that followed.


The reality is when you start playing with closed source materials… anything that you are kind of giving to that mechanism, whether that’s your voice as a writer for ChatGPT, or if it’s your image as a concept artist to then generate video, the reality is all of that is getting kind of plugged and played back into their systems… Even my own work that I did with Sora is now implemented as a single button of presets inside the public-facing product, and at this point, I don’t see any of those residuals from that button that I helped create unknowingly.

Speaker

Michaela Ternasky Holland


Reason

This comment is particularly thought-provoking because it exposes the hidden economics of AI development where creators unknowingly become unpaid contributors to corporate AI systems. Her personal experience with Sora reveals how tech companies extract value from creative work under the guise of ‘collaboration’ or ‘early access,’ turning artists into unwitting data sources for commercial products.


Impact

This comment introduced a crucial economic dimension to the discussion that hadn’t been explicitly addressed. It moved the conversation beyond technical and ethical concerns to examine the power dynamics and exploitation inherent in current AI development models. This led directly to Greg Williams’ follow-up questions about IP theft and sustainable business models for creators, fundamentally expanding the scope of the discussion.


99% of the time, it’s going to be telling us that we don’t need to care that something is made with AI, because it’s fun, it’s everyday communication, it’s entertainment, it’s not malicious or deceptive. And that’s where something like provenance helps, because it enables you to sort between the stuff you need to worry about and the stuff that is just our everyday world in which AI is integrated into everything.

Speaker

Sam Gregory


Reason

This comment is insightful because it reframes the entire AI authenticity debate by suggesting that most AI-generated content is benign and that the real challenge is developing systems to distinguish between harmful and harmless uses. This perspective challenges the common narrative that all synthetic media is inherently problematic and instead advocates for nuanced, context-aware approaches to AI governance.


Impact

This comment provided a crucial counterbalance to the more alarmist aspects of the deepfake discussion. It helped establish a more pragmatic framework for thinking about AI regulation and detection, suggesting that the goal shouldn’t be to eliminate AI-generated content but to develop systems that can identify when such content is being used maliciously. This perspective influenced the later discussion about practical solutions and regulatory approaches.


I think it first starts with the idea of how we talk about these things… we used to say landline now we say mobile now we say cell phone. So my hope is that also our language around truth continues to expand. So human verified content or human verified assets… versus synthetic media and if we’re even able to get heat maps around how that is

Speaker

Michaela Ternasky Holland


Reason

This comment is thought-provoking because it suggests that our current binary thinking about ‘real’ versus ‘fake’ content is inadequate for the AI age. By proposing new linguistic frameworks like ‘human verified content’ and suggesting that truth itself needs to be reconceptualized, she challenges fundamental assumptions about authenticity and proposes a more nuanced taxonomy for understanding media in the synthetic age.


Impact

This comment provided a philosophical foundation for reimagining how society might adapt to widespread synthetic media. Rather than fighting against the technology, it suggested evolving our conceptual frameworks and language to accommodate new realities. This perspective helped conclude the discussion on a more optimistic note, suggesting that adaptation and evolution of our understanding, rather than resistance, might be the path forward.


The most clear widespread threat we see right now are deepfake, non-consensual sexual images being shared of women, right? And it’s pervasive, and that is the first place we need to start because it’s so widespread.

Speaker

Sam Gregory


Reason

This comment is crucial because it grounds the abstract discussion of deepfakes in immediate, concrete harm affecting real people. By identifying non-consensual sexual imagery as the most pressing current threat, it shifts focus from hypothetical future political manipulation to present-day gender-based violence, providing moral urgency and clear priorities for action.


Impact

This comment immediately humanized the deepfake discussion and established clear ethical priorities. It moved the conversation away from theoretical concerns about political disinformation toward addressing immediate harm to vulnerable populations. This helped establish a framework for thinking about AI governance that prioritizes protecting the most vulnerable, which influenced the subsequent discussion about regulatory approaches and red lines.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond simple technical concerns about deepfake detection to a more sophisticated understanding of the multifaceted challenges posed by synthetic media. Sam Gregory’s real-world examples established the complex epistemological landscape we’re navigating, while Michaela’s insights about creator exploitation and the need for new conceptual frameworks provided both economic and philosophical dimensions. Together, these comments created a discussion that was simultaneously grounded in practical experience and forward-thinking about systemic solutions. The conversation evolved from identifying problems to proposing concrete pathways forward, including technical solutions (provenance systems), regulatory approaches (pipeline of responsibility), economic reforms (fair compensation for training data), and conceptual evolution (new language for truth and authenticity). The overall effect was to transform what could have been an alarmist discussion about AI threats into a nuanced exploration of how society can adapt and create safeguards while still embracing beneficial uses of the technology.


Follow-up questions

What would happen if we were able to actually give these audience members deep fakes of these candidates? What if we utilized actors instead of deep fake technology?

Speaker

Michaela Ternasky Holland


Explanation

These are key questions she’s exploring in her ongoing ‘Great Debate’ installation project to understand the impact of different representation methods on audience engagement with AI-generated political candidates


How do we prove what is real and false, and how do we deal with that in the most critical situations?

Speaker

Sam Gregory


Explanation

This fundamental question emerges from the complex layers of real and AI-generated content, as illustrated by his Pikachu protest example, and is crucial for human rights and journalism contexts


Why is it hard for provenance technologies, watermarking, whatever it might be, to keep pace with synthetic media proliferation?

Speaker

Greg Williams


Explanation

This follow-up question addresses a critical gap in current technology solutions and the technical/practical challenges of implementing content authentication at scale


How can you continue to do your work, which is in social good space, if very large companies are taking your content without compensation?

Speaker

Greg Williams


Explanation

This question explores the sustainability of creative work and social impact projects when major tech companies use creators’ work to improve their platforms without compensation


What gaps exist in global policy or global regulation around deepfakes that urgently need addressing? What can people in this room do to put pressure on lawmakers?

Speaker

Greg Williams


Explanation

This addresses the urgent need for regulatory frameworks and actionable steps for advocacy, given the current policy gaps in addressing AI-generated content


How optimistic are you that we can have this very quickly deployed globally within the next few years?

Speaker

Greg Williams


Explanation

This follow-up question seeks to understand the realistic timeline for implementing the four-point framework Sam outlined for AI responsibility and safeguards


Will we be able to trust what we see online in five years?

Speaker

Greg Williams


Explanation

This forward-looking question is crucial for understanding the trajectory of digital trust and the effectiveness of proposed solutions


How should we expand our language around truth to accommodate synthetic media?

Speaker

Michaela Ternasky Holland


Explanation

She suggests the need to develop new terminology like ‘human verified content’ versus ‘synthetic media’ to better navigate the blended reality of AI-generated content


How can we create better systems for licensing training data and ensuring creators receive compensation when their IP is used?

Speaker

Michaela Ternasky Holland


Explanation

This addresses the fundamental business model and equity issues in AI development, suggesting that better financial flows could lead to better safeguard technologies


What non-AI technologies can be used to detect AI-generated content more reliably?

Speaker

Michaela Ternasky Holland


Explanation

She warns against using AI to detect AI and suggests exploring mathematical or blockchain technologies that won’t be affected by rapid AI platform updates


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.