Lightning Talk #91: Inclusion of the Global Majority in C2PA Technology
23 Jun 2025 13:00h - 13:30h
Session at a glance
Summary
This discussion focused on C2PA (Coalition for Content Provenance and Authenticity), an open standard for content authenticity and provenance, presented by BBC Media Action and BBC R&D representatives along with international media partners. Muge Ozkaptan from BBC Media Action introduced the session, explaining how their organization supports media outlets in 30 countries with digital transformation and AI adoption, particularly through their “Pursuit of Truth” initiative supporting 30,000 media professionals and 1,000 media outlets in fragile environments.
Charlie Halford from BBC R&D explained that C2PA addresses the growing problem of disinformation by attaching cryptographic signatures to content, similar to website security certificates, allowing users to verify the origin and authenticity of media. He demonstrated how fake BBC content has been created simply by adding BBC logos and graphics to misleading information, highlighting the need for content verification technology. The BBC’s research showed that when audiences were provided with C2PA transparency data about content origins, they demonstrated significantly higher trust levels, particularly among users who weren’t regular BBC website visitors.
International perspectives came from media partners facing real-world challenges with disinformation. Khalifa Said Rashid from Tanzania’s Chanzo digital outlet described problems with brand impersonation and out-of-context video content being recycled during crisis situations. Kyrylo Lesin from Ukraine’s Suspilne public service media explained how they face aggressive disinformation campaigns, particularly since Russia’s invasion, and view C2PA as crucial for helping audiences distinguish trustworthy content from other sources.
The discussion concluded with recognition that broader adoption requires platform support, improved media literacy, and continued development of security procedures and AI content labeling capabilities.
Key points
**Major Discussion Points:**
– **C2PA Technology Overview and Implementation**: Charlie Halford explained C2PA (Coalition for Content Provenance and Authenticity) as an open standard that uses cryptographic signatures to verify content authenticity and origin. The BBC has been piloting this technology, attaching verification data to content to help audiences distinguish genuine news from disinformation.
– **Global Disinformation Challenges**: Multiple speakers highlighted how media organizations worldwide face brand impersonation and content manipulation. Examples included fake BBC-branded content and recycled videos taken out of context during crises, particularly affecting outlets in Tanzania and Ukraine during wartime.
– **Media Literacy and User Trust Research**: The BBC conducted studies showing that when audiences were provided with C2PA provenance data, they demonstrated significantly higher trust levels in the content, especially among users who weren’t already familiar with the BBC brand.
– **Platform Adoption and AI Content Labeling**: Discussion covered how social media platforms like TikTok are beginning to integrate C2PA standards, particularly for detecting and labeling AI-generated content from tools made by companies such as OpenAI, though broader adoption across platforms remains limited.
– **Barriers to Global Implementation**: Key challenges identified include the need for device-level integration, security procedures for private key management, platform cooperation, and extensive media literacy education to help users understand and utilize provenance information effectively.
**Overall Purpose:**
The discussion aimed to present C2PA as a promising solution for combating disinformation and building content authenticity, while gathering insights from international media partners about practical implementation challenges and needs in diverse global contexts.
**Overall Tone:**
The tone was consistently optimistic and collaborative throughout. Speakers maintained an educational and forward-looking approach, acknowledging current limitations while expressing confidence in the technology’s potential. The discussion emphasized partnership and collective action rather than dwelling on problems, with participants sharing practical experiences and research findings in a constructive manner.
Speakers
– **Muge Ozkaptan** – Senior Product and AI Lead at BBC Media Action, supports country offices and media organizations for digital transformation and AI adoption with focus on responsible and ethical approaches
– **Charlie Halford** – Principal Research Engineer at BBC R&D, works on C2PA technology implementation and content authenticity solutions
– **Khalifa Said Rashid** – Editor-in-Chief of the Chanzo, a digital media platform in Tanzania focusing on public interest journalism, public accountability and investigation
– **Kyrylo Lesin** – Senior Product Manager at Suspilne (public service media from Ukraine), works on digital transformation and journalism delivery
– **Audience** – Participant asking questions during the Q&A session
**Additional speakers:**
– **Amy Mitchell** – From Center for News Technology Innovation, researcher focusing on public service aspects of news technology
Full session report
# Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges
## Introduction and Context
This discussion centred on the Coalition for Content Provenance and Authenticity (C2PA), an open standard for content authenticity and provenance, presented by representatives from BBC Media Action and BBC R&D alongside international media partners. The session brought together diverse perspectives from global media organisations to examine how technological solutions can address the growing challenges of disinformation and content manipulation.
Muge Ozkaptan from BBC Media Action opened the session by establishing the organisation’s global reach and mission. BBC Media Action operates in 30 countries with content in 50 languages, focusing particularly on supporting media organisations in fragile environments. Ozkaptan emphasised the importance of bringing diverse voices into technology discussions, noting that “when we talk about technology generally, we talk about specifications and applications, but it’s important to bring in those diverse voices and understand their actual needs, how they work, what kind of challenges they are facing in their day-to-day life and work, and how innovation solutions like C2PA can fit in that area.”
## Technical Overview of C2PA Technology
Charlie Halford from BBC R&D provided a comprehensive explanation of C2PA technology and its implementation. As Halford explained, “C2PA itself is a standard, a technical standard. And what it does is it describes how you attach a signature, a cryptographic signature, the same kind that you might use on a website to give it that green lock.” The technology addresses the growing problem of disinformation by attaching verification data to content, allowing users to confirm the origin and authenticity of media they encounter.
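The binding Halford describes, a hash that ties a provenance claim to the media bytes, with a signature over the claim, can be illustrated with a short sketch. This is not the C2PA format itself: real C2PA embeds a COSE signature backed by an X.509 certificate inside the media file, whereas here an HMAC over a JSON claim stands in for that signature, and the function names are invented for illustration.

```python
import hashlib
import hmac
import json

def sign_content(media_bytes: bytes, publisher: str, key: bytes) -> dict:
    """Build a simplified provenance claim bound to the media by a hash.

    HMAC is only a stand-in here for the certificate-backed signature
    real C2PA uses; the claim structure is likewise simplified.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()  # binds claim to content
    claim = {"publisher": publisher, "content_hash": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_content(media_bytes: bytes, claim: dict, key: bytes) -> bool:
    """Check the signature AND that the media still matches the stored hash."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and hashlib.sha256(media_bytes).hexdigest() == claim["content_hash"])
```

Any edit to the media bytes changes the hash, so verification fails even if the claim itself is untouched; that property is what lets a C2PA-style manifest reveal tampering.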
Halford demonstrated the practical challenges facing media organisations by showing examples of fake BBC content. He explained that sophisticated artificial intelligence isn’t always necessary for effective disinformation: “These aren’t pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo. They found the BBC font. They know what the BBC’s graphics look like. The footage underneath them isn’t fake. They’ve just changed the message.” This observation highlighted how simple brand impersonation can be highly effective in misleading audiences who trust established media brands.
The BBC has conducted research into C2PA implementation, working with partners including IPTC for publisher certificates. The technology currently works with existing software and tools, including various cameras and content creation applications. Halford also explained the concept of “redaction” within C2PA systems, which allows for the removal of sensitive information like location and time data that could endanger subjects or photographers while maintaining content authenticity verification.
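The redaction idea can be sketched as an operation on a manifest’s assertions. This is a simplification under assumed names: in the actual C2PA specification, redaction removes an assertion’s data while keeping a reference to it in the claim so existing signatures can still be validated; the plain dictionary below illustrates only the “remove but record” behaviour.

```python
def redact_assertion(manifest: dict, label: str) -> dict:
    """Return a copy of the manifest with one sensitive assertion removed.

    The removed label is recorded under "redacted" so consumers can see
    that information was deliberately withheld rather than silently lost.
    """
    assertions = dict(manifest.get("assertions", {}))
    redacted = list(manifest.get("redacted", []))
    if label in assertions:
        del assertions[label]   # drop the sensitive data, e.g. GPS coordinates
        redacted.append(label)  # keep a visible trace of the redaction
    return {**manifest, "assertions": assertions, "redacted": redacted}
```

For a photo whose location could endanger the photographer or subject, a publisher would redact the location and capture-time assertions while leaving the rest of the provenance record intact and verifiable.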
## Global Perspectives on Disinformation Challenges
### Tanzania: Brand Impersonation and Crisis Communication
Khalifa Said Rashid, Editor-in-Chief of the Chanzo, a digital media platform in Tanzania focusing on public interest journalism and accountability, provided crucial insights via recorded audio about the challenges facing media organisations in developing countries. The Chanzo struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content regularly.
Rashid explained the particular vulnerability that brand trust creates: “It has been very difficult for us to deal with situations like that, because many people trust our brand, and when they see content online with our logos and brand colours, it can be very difficult for the average reader to tell whether it’s real or not.” This perspective illustrated how established media brands become targets for manipulation precisely because of the trust they have built with their audiences.
### Ukraine: Wartime Disinformation and Hybrid Warfare
Kyrylo Lesin, Senior Product Manager at Suspilne, Ukraine’s public service media, brought a unique perspective shaped by operating under wartime conditions. Suspilne, established eight years ago and recognised by independent watchdog organisations for delivering trustworthy journalism, faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since Russia’s invasion.
Lesin highlighted how disinformation campaigns affect content distribution systems: “For example, as with Google Discover, all of these products operate, to some extent, as black boxes, and there is really a lack of signals and parameters they can embrace to arrange the content with the most value for the end user.” This observation introduced an important dimension to the C2PA discussion: the potential for authenticity signals to influence algorithmic content distribution, helping platforms prioritise trustworthy content over manipulated material.
## Research Findings and Platform Implementation
The BBC’s research into user response to C2PA technology has involved multiple studies with different methodologies. Charlie Halford presented findings showing that users respond positively to additional transparency data, with research indicating that around 80% of users found extra data more useful, even without recognising the C2PA brand specifically. This finding was particularly significant because it suggested that the mere presence of verification information builds trust, regardless of technical literacy or brand recognition.
A separate study conducted by the University of Bergen expanded on the BBC’s research, providing additional validation of user interest in content authenticity features. However, as Amy Mitchell from the Center for News Technology Innovation pointed out, important questions remain about the distinction between user interest in authenticity features versus actual behavioural change in content consumption patterns.
Regarding platform adoption, Halford reported mixed progress. Social media platforms show a positive but limited response, primarily adopting C2PA for AI-generated content labelling rather than general content verification. TikTok, for example, has begun integrating C2PA standards, particularly for detecting and labelling AI-generated content from tools made by OpenAI and Microsoft, though broader adoption across platforms remains limited.
Major technology companies including Adobe, Microsoft, Google, Meta, and OpenAI are part of the C2PA coalition, and the technology works with existing software and cameras currently available. However, broader implementation faces several challenges, including the need for device-level integration, security procedures for private key management, platform cooperation, and extensive media literacy education.
## Implementation Challenges and Media Literacy
Despite the consensus on C2PA’s value, speakers identified several significant barriers to global implementation. Media literacy education emerged as a crucial requirement, with Charlie Halford noting that “you can’t just put this information in front of people and expect them to understand it, so we have to use our products, we have to use our journalism, to explain to people what this means.”
The discussion revealed that while users respond positively to transparency data, C2PA as a brand lacks public recognition. This creates a challenge for implementation, as the technology’s effectiveness depends partly on user understanding and trust in the verification system itself.
Technical implementation challenges include the need for broader device and tool integration to make C2PA automatic rather than requiring special procedures. Media organisations also need to develop robust security procedures for managing the private keys required for C2PA implementation, ensuring the integrity and trustworthiness of the system.
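The verified-publisher list described earlier (the IPTC-style register of organisations whose identity has been confirmed) can be sketched as a lookup that gates signature checking. All names here are illustrative, and a shared-secret HMAC stands in for the real X.509 certificate chain, so this shows the control flow only, not actual C2PA trust evaluation.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for an IPTC-style verified-publisher list.
# Real C2PA distributes certificates, not shared secret keys.
TRUSTED_PUBLISHERS = {"BBC": b"bbc-demo-key", "AFP": b"afp-demo-key"}

def is_from_verified_publisher(claim: dict) -> bool:
    """Accept a claim only if its named publisher is on the trust list
    and the signature verifies under that publisher's key.

    Sketch only: real verification walks a certificate chain back to a
    trust anchor rather than checking an HMAC.
    """
    key = TRUSTED_PUBLISHERS.get(claim.get("publisher"))
    if key is None:
        return False  # unknown publisher: identity has not been verified
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim.get("signature", ""))
```

The design point is the pairing of the two checks: an impersonator can copy a publisher’s name into a claim, but without that publisher’s private key the signature will not verify, which is why key custody procedures matter so much to participating newsrooms.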
## Audience Engagement and Future Development
The session included interactive elements, with audience participation facilitated through Slido Q&A system and QR codes for real-time questions. Participants raised important questions about regulatory integration, asking about plans for integrating C2PA into existing regulations for information integrity such as the EU Digital Services Act or UK Online Safety Act.
The discussion concluded with several concrete action items and future development plans. BBC Media Action committed to continuing support for global media organisations through workshops and conversations to include diverse voices in C2PA development. A pilot implementation is planned between BBC and Suspilne to integrate C2PA into end-to-end web publishing processes, providing a practical test case for the technology in a challenging operational environment.
## Ongoing Challenges and Considerations
Several significant issues remain unresolved and require continued attention. Limited social media platform adoption beyond AI content labelling represents a major challenge, as platforms show mixed response to general content verification features. The lack of public recognition of the C2PA brand itself requires significant media literacy education efforts to achieve meaningful adoption.
The challenge of scaling adoption across diverse media environments with varying technical capabilities remains substantial. Implementation needs to account for different levels of technical infrastructure and resources available to media organisations in different regions.
There are also ongoing questions about achieving broader device and tool adoption so that C2PA becomes built into cameras and content creation tools by default, making the technology seamless rather than requiring special technical knowledge from users. Additionally, the need for better AI-generated content detection and improved reliability of AI labelling in C2PA was acknowledged, as current AI detection methods are not completely reliable.
## Conclusion
The discussion demonstrated strong consensus among diverse stakeholders about the value of C2PA technology for addressing global challenges in content authenticity and disinformation. The perspectives from media organisations operating in different contexts—from the BBC’s established presence to the Chanzo’s work in Tanzania to Suspilne’s wartime operations—illustrated both universal challenges and context-specific needs.
The conversation successfully balanced technical capabilities with practical implementation concerns, emphasising that successful C2PA adoption requires not just technical standards but also media literacy education, platform cooperation, and understanding of diverse global media environments. The planned pilot implementations and continued research efforts indicate positive momentum towards broader adoption of content authenticity standards in the global media landscape.
Session transcript
Muge Ozkaptan: Let me see who is here. Hello everyone, I’m Muge Ozkaptan, Senior Product and AI Lead at BBC Media Action. I support our country offices and media organisations in their digital transformation and AI adoption, especially from a responsible and ethical point of view. We also support innovation solutions including C2PA, making sure they are scalable, practical and impactful. I’d like to introduce my colleagues from the BBC: Charlie Halford, who is Principal Research Engineer at BBC R&D, and we have Kyrylo Lesin, who is a Senior Product Manager, and we also have Khalifa Said Rashid, who is Editor-in-Chief of the Chanzo; the Chanzo is a digital outlet in Tanzania. He couldn’t come here today, but he’s attending through a recorded audio, and we will hear from him about his thoughts on C2PA. I’d like to talk about BBC Media Action a little bit, and then BBC Media Action’s approach to C2PA, and then I will hand over to Charlie, who will talk about what C2PA is in detail and in action, and how the BBC is using it. Then we will hear from our media partners, Suspilne and the Chanzo, about their reflections and needs around C2PA. We will have some Q&A at the end, but if you’d like to join online, we have Slido: you can see the QR code on the screen, and you can also type 3710912 for your questions, or you can ask them directly here. BBC Media Action is the BBC’s international charity. We work in 30 countries and co-create content in 50 languages. We are fully funded by donors and our supporters. We are on the front line of global challenges like disinformation and the violation of public trust. We support media organisations and media professionals to enhance their abilities and make them stronger and more resilient in fragile environments.
And we believe that C2PA is a very crucial, very important development in open standards, and we are really interested in being part of these global conversations from now on. C2PA is an open standard for content authenticity and provenance, and it offers one promising approach to help audiences verify where content comes from and how or where it has been altered. We believe in including voices from the global majority to make these standards more applicable and relevant to the global audience. When we talk about technology generally, we talk about specifications and applications, but it’s important to bring in those diverse voices and understand their actual needs, how they work, what kind of challenges they are facing in their day-to-day life and work, and how innovation solutions like C2PA can fit in that area. So this is where we are focusing. At BBC Media Action, we launched an initiative called Pursuit of Truth. We are supporting a cohort of 30,000 media professionals and 1,000 vital media outlets, especially those that work in and serve audiences in fragile environments. As part of this commitment, we want to provide tools, technology and innovation solutions for them to gather the facts, deal with external pressures and give a platform to diverse voices. C2PA sits perfectly in that branch, alongside other open standards in that field. We also draw on world-class expertise and innovation to advance the ethical use of AI and content verification in media around the world. And we are making a big commitment to supporting research to understand how disinformation spreads and how to respond to it more effectively. So I want to hand over to Charlie. Charlie, what is C2PA in action and how is the BBC using it?
Charlie Halford: Thank you very much. Hello, everybody. Yes, I’m Charlie, as Muge has let you all know by now. So yeah, I’m just going to take you through what C2PA is, how we’re using it at the BBC, and how we’d love to see C2PA adopted, I guess, around the world and across the media ecosystem. And some of the challenges that we see in that area and how maybe we can all work together to make it work. So let’s first start with part of our problem. So these are three examples of disinformation that have had the BBC logo attached to them. These aren’t pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo. They found the BBC font. They know what the BBC’s graphics look like. The footage underneath them isn’t fake. They’ve just changed the message. They’ve put something on there that the BBC hasn’t published. And so this kind of problem is the one that we really wanted to address with C2PA. So when the origin of content is really hidden, maybe on a social media platform, all content can look the same. You might think all content is equally trustworthy. So in this world of disinformation, as a group of media organizations and other people across the world, how can our commitment to accuracy make a difference? So that was one of the problems that we tried to think about when we were looking at creating a new technology in BBC Research and Development. So what are some of the others? Answering the question “is the media I see genuine or authentic?” is the general thing we’re trying to solve. So we know that there’s no real way to securely understand what or who created a piece of content. The existing metadata that we have, which can be really useful, is very easily faked. Anybody can add to it. Anybody can manipulate it. The media itself, as we just saw, is very easily manipulated, and there’s no guarantee that it’s original.
And there’s no clear way, if you wanted to see it, to understand the journey of that content. What’s happened to it when it’s left the camera? Who’s changed it? Who’s added what? Who’s modified what? And so C2PA was really created to address some of those problems. These are some of the people that were involved, and it’s really grown to be a pretty huge coalition at this point. So you can see along the top there, I’ve included some of the tech companies and product organizations in there. So Adobe are a huge driver in this, but so are Microsoft, Google, Meta, OpenAI, Sony, all part of the C2PA board. And then I’ve added some of the news organizations that are getting involved, because this problem is one that we’re really trying to solve. So WDR, BBC, CBC, Radio Canada, AFP, France TV. Many people are involved in this process. There’s many more on that list that I’ve not added. So before I get into this in detail, I just wanted to understand: how many people in our audience here are sort of technical, have a technical background? Yeah, got a few people there. So C2PA itself is a standard, a technical standard. And what it does is it describes how you attach a signature, a cryptographic signature, the same kind that you might use on a website to give it that green lock. How you attach that cryptographic signature to a piece of content. So it goes inside a file and binds to the image or the video or the audio to let you understand where that piece of content has come from. So we use a hash to link it to the video and the audio, and we then use a signature across that. So when we started working on this, we wanted to ask our users if this kind of thing makes sense, makes an actual difference. And so we’ve run a few studies. There’s recently been one run, I think, by the University of Bergen that’s expanded on this.
But when we did it, we asked people; we gave them two sets of content, the same kind of content, where one had no provenance on it and the other had this C2PA data on it, and we asked, what’s your level of trust in each of these pieces of content? And the significant result there was that when we added extra transparency data, when we told people where this stuff had come from, they were more inclined to trust that content. And the important thing for other media organizations here is that it was the people who didn’t already use our website that were the most affected by that. And so we then ran a trial. So this is a piece of content that came into our verification team, BBC Verify. They then did manual verification checks. And what we wanted to do is take the output of their verification checks and add it into the content, so attach it to the content, and then we showed that to our audiences. We did that with about five pieces of content as our trial. It gives you much more detail when you click that blue button to expand it. And we did the same thing. We asked people if they would find it more trustworthy, and I think about 80% of the people said they found that extra data more useful, and it added more trust to the story. And then what we’ve also been doing is working with an organization called the IPTC to establish a way for publishers to get a certificate that proves who they are, so that people can’t impersonate you. So the BBC or AFP, in this example, gets a certificate from GlobalSign. They send it to the IPTC, and then we add that to a list, not of trusted organizations, but of verified organizations, organizations whose identity has been verified. So if you wanted to know how you can use it now, all of these pieces of software, and in some cases cameras, are available right now to make use of it.
So if any of these are in use by you today, I’d encourage you to go and check them out. More are being developed. And with that, who are we handing to first?
Muge Ozkaptan: Yeah. We’ve been working closely with the BBC since last year, and we have included diverse voices from our partners in the workshops and the global conversations. We want to show you one of these talks, in which the Chanzo’s Editor-in-Chief Khalifa Said Rashid shares his challenges in Tanzania around mis- and disinformation and how C2PA is relevant to his work.
Khalifa Said Rashid: Hello. My name is Khalifa Said Rashid. I am the Editor-in-Chief of the Chanzo. The Chanzo is a digital media platform based here in Dar es Salaam, Tanzania, focusing on public interest journalism, public accountability and investigation. The major problems that we face here in Tanzania with regard to misinformation and disinformation include, but are not limited to, impersonation of brands, a phenomenon that has affected many media outlets here in Tanzania, including the Chanzo, where we have been forced on numerous occasions to come out publicly and deny content that has been shared on social media platforms impersonating our brand. And it has been very difficult for us to deal with situations like that, because many people trust our brand, and when they see content online with our logos and brand colours, it can be very difficult for the average reader to tell whether it’s real or not. Another type of disinformation or misinformation we have seen, especially during times of crisis, is old video taken out of context resurfacing on social media, purported to be about the events that are happening during that week or day. And we have been battling with these problems: you have multiple media outlets in Tanzania which have produced a lot of content, maybe two or three years ago, but it is not dated. And if, for example, it relates to demonstrations or protests, and there is a protest on that day, this video resurfaces on social media, purporting to be happening on that day.
And so in this context, we are very optimistic that a technology like C2PA offers huge potential for us as editors and journalists to counter misinformation and disinformation, because it allows the user to tell if the content is real or not. The technology allows media outlets, in partnership with platforms like Twitter, and now Facebook and others, to sign their content, which allows users to tell that this is really from the Chanzo, or really from that particular media outlet, and not an impersonation. And of course, we are also happy to work with BBC Media Action, which is helping us better understand this technology and apply it in our day-to-day operations. Thank you and goodbye.
Muge Ozkaptan: Well, thank you, Khalifa. I want to turn to you, Kyrylo. Can you talk about Suspilne a little bit? Who is Suspilne and what do you do? What are your challenges? And what do you think about C2PA in your day-to-day work? Why is it relevant to you?
Kyrylo Lesin: Yeah, thank you, Muge. Hello, everyone. So I represent Suspilne. This is the public service media from Ukraine. It was established as an independent media institution eight years ago, and since that time, specifically five years ago, we started an intense digital transformation. There have been a lot of outputs of this process, organisational and content-wise, and BBC Media Action and other partners invest a lot of their time and resources to support us on this journey. Specifically, five years ago our flagship digital news platform was launched. It is called Suspilne.media, and you can access it through any browser. Using this platform and our other digital channels, we deliver high-quality journalism. What does that mean? We recognise our main mission as empowering citizens’ decision-making by providing them with high-quality journalism. And our output is recognised by independent watchdog organisations as one of the most trustworthy journalistic products, meaning that we totally adhere to journalistic standards. Operating in the Ukrainian media context and the global context, we encounter really aggressive competition for the attention of our audiences. Specifically, now we are countering hybrid warfare and disinformation that has intensified severely since the full-scale Russian invasion of Ukraine. It has also affected the operational conditions, both for audiences and for media. For example, only this night, Russia launched more than 370 drones and missiles in total. Another trait of the media sphere that has us tackling these challenges is the rise of AI-powered systems and algorithmic newsfeeds. For example, as with Google Discover, all of these products operate, to some extent, as black boxes, and there is really a lack of signals and parameters they can embrace to arrange the content with the most value for the end user.
And talking about C2PA technology and the pilot we would like to run with the BBC, the process now being led by Charlie is to get this technology incorporated into the end-to-end process of web publishing, so we can provide our audience with at least one additional means to draw a distinctive line between trustworthy content and other kinds of content. So the value is huge. And it might sound boring when it comes to C2PA, recognised as just some standard, but in general, for us, we recognise it as an innovation vibe: some piece of code can just dramatically change the way content appears on the screens of our end users, and they can end up with a change in their behaviour, recognising high-quality content and, you know, putting their preference on that compared to some other resources. So yeah, this is the value.
Muge Ozkaptan: Thank you so much. And Charlie, I want to ask you: where do you see this technology going next, and how can we actually achieve broader adoption, especially by the global majority?
Charlie Halford: Okay. Hopefully, where we will see the technology going next is an expansion in where it’s being used. So we’re really hoping that we’re going to see more media organizations using it. We’re really hoping to be able to put a pilot in place with Suspilne; that would be fantastic. There are lots of other people that are involved. We’d also love to see more support from platforms, social media platforms. I think most media organizations get a lot of their traffic, a lot of their audiences, via social media platforms: that might be TikTok, it might be YouTube, it might be Facebook, lots of them. And then there are also a few other considerations, I guess, to help us get that broader adoption. So there’s a concept called redaction in C2PA. That’s the idea that you want to show people as much information as possible, right, to help them make a decision about whether this is trustworthy, but sometimes that information can hurt people. Maybe it could hurt the subject of the photo; maybe it could hurt the person taking it. So location, and date and time: having the ability to remove those things where somebody might be put at risk is really important. That’s redaction, and we need to see that implemented. Then there’s device and tool adoption: we need to get to a place where, for any organization or any person taking a picture with their camera, it’s just built in; they don’t need to do anything special. I think that’s starting to happen, but we need that to roll out more. Also, if you’re going to be part of this as a media organization, you need to be able to look after your private key. That’s the thing that’s going to be really important to you, so developing security procedures. We’ve talked about platforms and pilots, so I think really it’s about finding the right use case: what’s the best thing that helps you out? Maybe it’s showing users on your platform more detail about your content. Maybe it’s telling people on social media that
this is really from you from you another one is considering how content comes into your organization if lots of people send you images maybe images and videos from users being able to detect maybe whether they are genuine is really important so do that at that point and then media literacy is probably one of the the biggest ones on this list helping your users understand what all of this means you can’t just put this information in front of people and expect them to understand it so we have to use our products we have to use our journalism to explain to me people what what this means and thank you very much and how c2p works for AI generated content okay so on AI generated content a few organizations now so open AI and Microsoft are actually putting AI labels into their content and they’re using c2p a and so if you click through to the next slide I think if some social media companies are now using that so if you click through again this is an example of tick-tock and where they detect and a c2p a AI label they actually let users know and we’re hoping this will become more broadly adopted as we go forward if you click yet so that’s just the AI label and then I think this next one just a little video so if you click play on that so this is a prototype that we’re working on and so here I’ve added an AI image and I’m just in the background I’m inspecting the c2p a label and because it was produced by one of those tools you can see that that’s been AI generated they’re not bulletproof at the moment we still need to use other techniques to detect whether something’s AI generated but this is a good first start at that
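The core mechanic behind what Charlie describes, a signature cryptographically bound to the content much like a website's security certificate, can be sketched in miniature. The toy below is illustrative only and all names in it are made up: real C2PA manifests use X.509 certificate chains and COSE signatures embedded in the media file, not the shared-secret HMAC used here as a stand-in, but the tamper-evidence idea is the same.

```python
# Toy illustration of C2PA-style tamper evidence, NOT the real format.
# A "manifest" carries provenance claims plus a signature over them;
# a verifier recomputes everything and rejects any mismatch.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-private-key"  # stand-in for a real signing key

def make_manifest(media: bytes, publisher: str) -> dict:
    """Attach provenance claims plus a signature over those claims."""
    claims = {
        "publisher": publisher,
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify(media: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media hash still matches."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claims["content_sha256"] == hashlib.sha256(media).hexdigest()
    )

photo = b"...image bytes..."
manifest = make_manifest(photo, "BBC News")
print(verify(photo, manifest))            # True: content untouched
print(verify(b"edited bytes", manifest))  # False: content was altered
```

Editing the media, or rewriting a claim such as the publisher name, invalidates the signature, which is why an impersonator pasting a BBC logo onto a video cannot also forge a valid BBC manifest.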
Muge Ozkaptan: Thank you very much. We have some time for questions, so please fire away; we'd like to hear your thoughts, any experiences or challenges you're facing, and whether C2PA is important to you. I'm also checking Slido in case anything comes up online. Just a second, I think there is a question from over there.
Audience: This is great, and I'm a big supporter of the C2PA, but I wanted to ask a couple of questions on the public side of it, in terms of the public response and recognition. I'm Amy Mitchell with the Center for News, Technology & Innovation, and we look a lot at how to think about public service in these kinds of things. There can be value in internal signals that aren't meant for public-facing purposes, but where you're looking to really have the public benefit and understand the integrity: in your research and tests, have you seen people recognize the content label and respond positively in their choices, or is it at this point more about interest in having something like that as a signal? I'd be curious, Charlie.
Charlie Halford: Yeah, sure, thank you. That's a really good question. In the research I showed, we presented people with content without any extra data and then with the extra data, including a C2PA logo. We didn't get any comments back about recognition, so as a brand I don't think the C2PA has much public recognition yet; in terms of media literacy, that's a job for us to do. But giving people the extra data was a big trust indicator. It had a direct impact not just on interest but on how trustworthy people found the content itself.
Muge Ozkaptan: Thank you. We have a question on Slido: is there any plan to integrate C2PA into existing regulations for information integrity, such as the EU's Digital Services Act or the UK's Online Safety Act?
Charlie Halford: It has been looked at by many different organizations and regulatory bodies around the world. I'm not sure it has been named directly in any of them, but quite a lot of regulation is starting to come out that talks about the need to label things, particularly from an AI perspective, so you're seeing a lot more AI-labeling requirements. Whether we would ever push to get C2PA as a technology embedded into legislation, I'm not sure. It might be useful for getting some movement, but if the standard changes at a later date we'd probably want some flexibility. The idea of provenance, the idea of labeling, though, I think would be really great.
Muge Ozkaptan: I think we have time for one more question from the room, if anyone wants to ask. I just want to ask a last question about the social media platforms: how are they adopting this so far, and what has the response been?
Charlie Halford: I think the response has been mixed but positive. Most of the adoption we see is social media platforms using C2PA to understand whether something has been labeled as AI; they're most interested in that situation. We've seen less adoption for labeling things as coming from the BBC or from Suspilne, or for showing users more detail about media. But we're hopeful that the more content we publish, the more social media organizations will start to adopt, and really it's for us to request that of those platforms, I think.
Muge Ozkaptan: Well, thank you so much. We are at the end of the session now, but if you're interested in investing in these standards, or if you have questions or ideas to share, we're right here; come and join the conversation. Thank you very much.
Charlie Halford
Speech speed
155 words per minute
Speech length
2132 words
Speech time
823 seconds
C2PA is an open standard that uses cryptographic signatures to verify content authenticity and provenance, addressing problems where media can be easily manipulated or impersonated
Explanation
C2PA describes how to attach cryptographic signatures (similar to website security certificates) to content files, binding them to images, videos, or audio to show their origin. This addresses the problem that existing metadata is easily faked and that there is no secure way to establish what or who created a piece of content.
Evidence
Examples of BBC logo being used in fake content with manipulated messages; demonstration that anyone with video editing software can impersonate media brands; involvement of major tech companies like Adobe, Microsoft, Google, Meta, OpenAI, Sony as part of C2PA board
Major discussion point
Content authenticity and verification technology
Topics
Digital standards | Content policy | Liability of intermediaries
Agreed with
– Khalifa Said Rashid
– Kyrylo Lesin
Agreed on
C2PA technology offers valuable solutions for content authenticity verification
BBC trials showed that adding C2PA provenance data to content increases user trust, with about 80% of users finding the extra data more useful and trustworthy, particularly users who don't regularly visit BBC's website
Explanation
BBC ran studies comparing user trust levels between content with and without C2PA provenance data, finding significant increases in trust when transparency data was added. They conducted trials with BBC Verify team’s verification checks attached to content, which users found more useful and trustworthy.
Evidence
User study results showing 80% of people found extra provenance data more useful and trustworthy; specific mention that people who didn’t use BBC website were most affected by the additional transparency
Major discussion point
User trust and content verification effectiveness
Topics
Content policy | Consumer protection | Digital identities
C2PA works with existing software and cameras that are available now, with major tech companies like Adobe, Microsoft, Google, Meta, and OpenAI being part of the coalition
Explanation
The technology is currently implementable through existing tools and devices, with widespread industry support from major technology companies. The coalition has grown significantly and includes both tech companies and news organizations working together on the standard.
Evidence
List of software and cameras currently supporting C2PA; mention of tech companies (Adobe, Microsoft, Google, Meta, OpenAI, Sony) and news organizations (WDR, BBC, CBC, Radio Canada, AFP, France TV) as coalition members
Major discussion point
Technology adoption and industry collaboration
Topics
Digital standards | Digital business models | Convergence and OTT
BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content
Explanation
The BBC regularly encounters disinformation that uses their visual branding, logos, and fonts to create fake content that appears legitimate. This creates confusion for audiences who cannot easily distinguish between authentic BBC content and impersonated content on social media platforms.
Evidence
Three specific examples of disinformation with BBC logos attached; explanation that these weren’t AI-generated but created with basic video editing tools using BBC branding elements
Major discussion point
Brand impersonation and media authenticity challenges
Topics
Content policy | Intellectual property rights | Liability of intermediaries
Agreed with
– Khalifa Said Rashid
Agreed on
Brand impersonation is a major challenge for media organizations
Broader adoption requires expansion to more media organizations, increased platform support, device integration, and better security procedures for managing private keys
Explanation
For C2PA to be effective globally, it needs wider implementation across media organizations, better support from social media platforms, built-in camera/device integration, and robust security procedures. The technology also needs concepts like redaction to protect sensitive information while maintaining transparency.
Evidence
Mention of redaction concept for protecting location/date/time data that could hurt subjects; need for device integration so cameras automatically include C2PA without special procedures; importance of private key security management
Major discussion point
Technology scaling and implementation challenges
Topics
Digital standards | Network security | Privacy and data protection
Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet
Explanation
While users respond positively to additional transparency data, they don’t yet recognize the C2PA brand or understand what the information means. Media organizations need to use their platforms and journalism to educate users about content provenance and verification.
Evidence
Research showing users didn’t recognize C2PA logo but responded positively to extra transparency data; acknowledgment that C2PA as a brand has little public recognition
Major discussion point
Public education and technology literacy
Topics
Online education | Content policy | Multilingualism
Social media platforms show mixed but positive response, primarily adopting C2PA for AI-generated content labeling rather than general content verification
Explanation
Social media platforms are beginning to implement C2PA technology, but mainly focus on detecting and labeling AI-generated content rather than broader content verification. There’s less adoption for showing detailed media provenance or publisher verification.
Evidence
Examples of TikTok detecting C2PA AI labels and notifying users; mention of OpenAI and Microsoft putting AI labels into content using C2PA; demonstration of prototype detecting AI-generated images
Major discussion point
Platform adoption and AI content labeling
Topics
Content policy | Digital standards | Liability of intermediaries
Disagreed with
– Kyrylo Lesin
Disagreed on
Platform adoption priorities and effectiveness
Khalifa Said Rashid
Speech speed
130 words per minute
Speech length
387 words
Speech time
178 seconds
The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content
Explanation
The Chanzo faces two main disinformation challenges: impersonation using their logos and brand colors, and old video content being taken out of context and reshared during current events. These problems are particularly difficult because readers trust their brand, making fake content appear credible.
Evidence
Specific examples of having to publicly deny impersonated content; description of old protest/demonstration videos resurfacing during current events without proper dating; explanation of how trusted brand recognition makes fake content more believable
Major discussion point
Regional media challenges with disinformation
Topics
Content policy | Intellectual property rights | Freedom of the press
Agreed with
– Charlie Halford
– Kyrylo Lesin
Agreed on
C2PA technology offers valuable solutions for content authenticity verification
Kyrylo Lesin
Speech speed
123 words per minute
Speech length
441 words
Speech time
214 seconds
Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion
Explanation
Suspilne operates in an environment of hybrid warfare where disinformation is used as a weapon, with conditions severely affected since Russia’s full-scale invasion. They face aggressive competition for audience attention while trying to maintain journalistic standards and provide trustworthy content.
Evidence
Mention of 370+ drones and missiles launched in a single night; description of hybrid warfare and disinformation intensifying since Russian invasion; recognition by independent watchdog organizations as trustworthy journalism
Major discussion point
Media operations during wartime and hybrid warfare
Topics
Cyberconflict and warfare | Content policy | Freedom of the press
Agreed with
– Charlie Halford
– Khalifa Said Rashid
Agreed on
C2PA technology offers valuable solutions for content authenticity verification
Disagreed with
– Charlie Halford
Disagreed on
Platform adoption priorities and effectiveness
Suspilne is Ukraine’s public service media established eight years ago, focusing on digital transformation and delivering trustworthy journalism recognized by independent watchdog organizations
Explanation
Suspilne was established as an independent media institution that has undergone significant digital transformation, launching their flagship digital platform Suspilne.media. Their mission is to empower citizens’ decision-making through high-quality journalism that adheres to professional standards.
Evidence
Specific timeline of 8 years since establishment and 5 years of digital transformation; launch of Suspilne.media platform; recognition by independent watchdog organizations as trustworthy journalism; support from BBC Media Action and other partners
Major discussion point
Public service media digital transformation
Topics
Digital business models | Online education | Freedom of the press
Agreed with
– Muge Ozkaptan
Agreed on
Media organizations need support for digital transformation and capacity building
Muge Ozkaptan
Speech speed
131 words per minute
Speech length
952 words
Speech time
435 seconds
BBC Media Action works in 30 countries with content in 50 languages, focusing on supporting media organizations in fragile environments through their ‘Pursuit of Truth’ initiative
Explanation
BBC Media Action is BBC’s international charity that operates globally to support media organizations and professionals in challenging environments. They focus on enhancing capabilities and resilience of media organizations facing disinformation and threats to public trust.
Evidence
Specific numbers: 30 countries, 50 languages, fully funded by donors; description of working on frontline of global challenges like disinformation; focus on fragile environments and global majority voices
Major discussion point
International media development and support
Topics
Capacity development | Cultural diversity | Freedom of the press
The organization supports 30,000 media professionals and 1,000 media outlets, providing tools and technology to deal with external pressures and gather facts
Explanation
Through the Pursuit of Truth initiative, BBC Media Action provides comprehensive support including tools, technology, and innovation solutions to help media professionals work effectively under pressure. They aim to advance ethical AI use and content verification while supporting research on disinformation.
Evidence
Specific numbers: 30,000 media professionals and 1,000 media outlets; mention of providing tools, technology, and innovation solutions; commitment to supporting research on how disinformation spreads
Major discussion point
Media capacity building and technology support
Topics
Capacity development | Digital access | Online education
Agreed with
– Kyrylo Lesin
Agreed on
Media organizations need support for digital transformation and capacity building
Audience
Speech speed
160 words per minute
Speech length
161 words
Speech time
60 seconds
Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically
Explanation
Questions were raised about public recognition and response to C2PA technology, specifically whether users recognize the content labels and respond positively in their choices or if it’s more about general interest in integrity signals. The focus is on understanding the public benefit and service aspect of the technology.
Evidence
Reference to research by Center for News Technology Innovation; distinction between internal signals versus public-facing purposes; question about recognition of signatures and positive response in choice behavior
Major discussion point
Public awareness and response to content verification technology
Topics
Consumer protection | Online education | Content policy
Agreements
Agreement points
Brand impersonation is a major challenge for media organizations
Speakers
– Charlie Halford
– Khalifa Said Rashid
Arguments
BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content
The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content
Summary
Both BBC and The Chanzo face significant challenges with their brands being impersonated through fake content using their logos and visual branding, creating confusion for audiences who trust these brands
Topics
Content policy | Intellectual property rights | Liability of intermediaries
C2PA technology offers valuable solutions for content authenticity verification
Speakers
– Charlie Halford
– Khalifa Said Rashid
– Kyrylo Lesin
Arguments
C2PA is an open standard that uses cryptographic signatures to verify content authenticity and provenance, addressing problems where media can be easily manipulated or impersonated
The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content
Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion
Summary
All speakers recognize C2PA as a promising technology solution that can help media organizations verify content authenticity and combat disinformation challenges they face in their respective contexts
Topics
Digital standards | Content policy | Cyberconflict and warfare
Media organizations need support for digital transformation and capacity building
Speakers
– Muge Ozkaptan
– Kyrylo Lesin
Arguments
The organization supports 30,000 media professionals and 1,000 media outlets, providing tools and technology to deal with external pressures and gather facts
Suspilne is Ukraine’s public service media established eight years ago, focusing on digital transformation and delivering trustworthy journalism recognized by independent watchdog organizations
Summary
Both speakers emphasize the importance of supporting media organizations through digital transformation initiatives, providing tools and technology to enhance their capabilities
Topics
Capacity development | Digital business models | Online education
Similar viewpoints
Both speakers emphasize the critical importance of education and capacity building – Charlie focuses on media literacy for users to understand C2PA technology, while Muge focuses on supporting media organizations globally with tools and knowledge
Speakers
– Charlie Halford
– Muge Ozkaptan
Arguments
Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet
BBC Media Action works in 30 countries with content in 50 languages, focusing on supporting media organizations in fragile environments through their ‘Pursuit of Truth’ initiative
Topics
Online education | Content policy | Capacity development
Both media organizations operate in challenging environments where they face sophisticated disinformation campaigns that threaten their credibility and require active countermeasures
Speakers
– Khalifa Said Rashid
– Kyrylo Lesin
Arguments
The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content
Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion
Topics
Content policy | Freedom of the press | Cyberconflict and warfare
Unexpected consensus
User trust increases with transparency even without brand recognition
Speakers
– Charlie Halford
– Audience
Arguments
BBC trials showed that adding C2PA provenance data to content increases user trust, with about 80% of users finding the extra data more useful and trustworthy, particularly users who don't regularly visit BBC's website
Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically
Explanation
It’s somewhat unexpected that users would respond so positively to technical transparency data (C2PA provenance information) even when they don’t recognize or understand the specific technology brand. This suggests that the mere presence of additional verification information builds trust, regardless of technical literacy
Topics
Consumer protection | Online education | Content policy
Global media challenges are remarkably similar across different contexts
Speakers
– Charlie Halford
– Khalifa Said Rashid
– Kyrylo Lesin
Arguments
BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content
The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content
Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion
Explanation
Despite operating in vastly different contexts (UK public broadcaster, Tanzanian digital outlet, Ukrainian public media during wartime), all three organizations face remarkably similar challenges with brand impersonation and content manipulation, suggesting these are universal problems in the digital media landscape
Topics
Content policy | Freedom of the press | Digital standards
Overall assessment
Summary
There is strong consensus among all speakers that content authenticity and brand impersonation are critical challenges facing media organizations globally, and that C2PA technology offers a promising solution. All speakers agree on the need for capacity building, education, and technological solutions to combat disinformation.
Consensus level
High level of consensus with no significant disagreements identified. The implications are positive for C2PA adoption, as there appears to be unified support from diverse stakeholders (technology developers, international media development organizations, and media outlets from different regions). This consensus suggests strong potential for collaborative implementation and scaling of the technology across different contexts and regions.
Differences
Different viewpoints
Platform adoption priorities and effectiveness
Speakers
– Charlie Halford
– Kyrylo Lesin
Arguments
Social media platforms show mixed but positive response, primarily adopting C2PA for AI-generated content labeling rather than general content verification
Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion
Summary
Charlie presents a measured view of platform adoption focusing on AI content labeling, while Kyrylo emphasizes the urgent need for broader content verification tools due to wartime disinformation challenges. Their perspectives differ on the adequacy of current platform responses.
Topics
Content policy | Liability of intermediaries | Cyberconflict and warfare
Unexpected differences
Public readiness versus technology deployment
Speakers
– Charlie Halford
– Audience
Arguments
Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet
Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically
Explanation
While both acknowledge positive user response to transparency data, there’s an unexpected tension between Charlie’s emphasis on the need for extensive media literacy education and the audience member’s research suggesting users already respond positively without brand recognition. This reveals disagreement about whether public education should precede or accompany technology deployment.
Topics
Online education | Consumer protection | Content policy
Overall assessment
Summary
The discussion shows minimal direct disagreement, with most differences stemming from varying operational contexts rather than fundamental philosophical disputes about C2PA technology
Disagreement level
Low level of disagreement with high consensus on C2PA’s value. The main tensions relate to implementation priorities, urgency levels, and sequencing of education versus deployment. This suggests strong foundational agreement that should facilitate collaborative implementation, though coordination may be needed to address different regional and operational priorities.
Takeaways
Key takeaways
C2PA is a promising open standard for content authenticity that uses cryptographic signatures to verify media provenance and combat disinformation
Research demonstrates that C2PA significantly increases user trust in content, with 80% of users finding provenance data more useful and trustworthy
Media organizations globally face similar challenges with brand impersonation and content manipulation, from BBC’s logo misuse to Tanzania’s Chanzo dealing with fake branded content
The technology is currently available and being implemented by major tech companies, but broader adoption requires coordinated effort across platforms, devices, and organizations
Media literacy education is crucial for public understanding and adoption, as users respond positively to transparency data even without recognizing the C2PA brand
C2PA shows particular promise for AI-generated content labeling, with platforms like TikTok already implementing detection and labeling systems
Resolutions and action items
BBC Media Action will continue supporting global media organizations through workshops and conversations to include diverse voices in C2PA development
A pilot implementation is planned between BBC and Suspilne to integrate C2PA into end-to-end web publishing processes
Media organizations need to develop security procedures for managing private keys required for C2PA implementation
Continued research and user studies are needed to understand public response and optimize implementation strategies
Unresolved issues
Limited social media platform adoption beyond AI content labeling – platforms show mixed response to general content verification features
Lack of public recognition of the C2PA brand itself, requiring significant media literacy education efforts
Need for broader device and tool integration to make C2PA automatic rather than requiring special procedures
Implementation of redaction capabilities to protect sensitive information while maintaining transparency
Uncertainty about regulatory integration with existing information integrity laws like EU DSA or UK Online Safety Act
Challenge of scaling adoption across the global majority and diverse media environments with varying technical capabilities
Suggested compromises
Flexible approach to regulatory integration that allows for standard changes while promoting provenance labeling requirements
Gradual implementation starting with specific use cases (like AI content labeling) before expanding to general content verification
Balancing transparency with safety through redaction capabilities that can hide sensitive location, date, or personal information when needed
Thought provoking comments
When we talk about technology generally, we talk about specifications and applications, but it's important to bring in those diverse voices and understand their actual needs: how they work, what kind of challenges they face in their day-to-day life and work, and how innovation solutions like C2PA can fit in that area.
Speaker
Muge Ozkaptan
Reason
This comment is insightful because it highlights a critical gap in technology development – the tendency to focus on technical specifications without adequately considering the real-world needs of diverse global users. It challenges the typical tech-centric approach and emphasizes the importance of inclusive design.
Impact
This comment set the foundational framework for the entire discussion, establishing that the session would prioritize voices from the global majority rather than just technical implementation. It directly led to featuring perspectives from Tanzania and Ukraine, demonstrating practical challenges in different contexts.
These aren't pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo, they found the BBC font, they know what the BBC's graphics look like, and they've put them out. The footage underneath isn't fake; they've just changed the message.
Speaker
Charlie Halford
Reason
This observation is thought-provoking because it reframes the disinformation problem beyond AI-generated content to include simple brand impersonation. It demonstrates that sophisticated AI isn’t always necessary for effective disinformation, making the problem more accessible and widespread.
Impact
This comment shifted the discussion from focusing solely on AI-generated content to broader authenticity challenges. It provided concrete context that resonated with the media partners’ experiences, particularly Khalifa’s later description of brand impersonation issues in Tanzania.
And it has been very difficult for us to deal with situations like that, because many people trust our brand, and when they see content online with our logos and brand colors, it can be very difficult for the average reader to tell whether it’s real or not.
Speaker
Khalifa Said Rashid
Reason
This comment is particularly insightful because it illustrates how brand trust, typically an asset, becomes a vulnerability in the disinformation landscape. It shows the real-world impact on media organizations in developing countries where resources for combating impersonation may be limited.
Impact
This comment provided crucial validation for the C2PA initiative by demonstrating actual harm experienced by media organizations. It moved the discussion from theoretical benefits to concrete use cases, strengthening the argument for C2PA adoption.
For example, products like Google Discover operate, to some extent, as black boxes, and there is a real lack of signals and parameters they can use to surface the content with the most value for the end user.
Speaker
Kyrylo Lesin
Reason
This comment is thought-provoking because it identifies a systemic problem with algorithmic content distribution – the lack of quality signals that algorithms can use to prioritize trustworthy content. It suggests that C2PA could serve as a quality signal in algorithmic systems.
Impact
This comment expanded the discussion beyond direct user verification to consider how C2PA could influence content distribution algorithms. It introduced a new dimension of impact – not just helping users identify trustworthy content, but potentially helping platforms prioritize it.
You can’t just put this information in front of people and expect them to understand it, so we have to use our products, we have to use our journalism, to explain to people what this means.
Speaker
Charlie Halford
Reason
This comment is insightful because it acknowledges that technical solutions alone are insufficient – they require accompanying education and communication strategies. It recognizes the responsibility of media organizations to bridge the gap between technical capability and user understanding.
Impact
This comment introduced the critical element of media literacy as essential for C2PA success. It shifted the conversation from technical implementation to user education, highlighting that adoption requires both technological and educational components.
In the space where you’re looking to really have the public benefit and understand the integrity, in your research and tests, have you seen them recognize the sign, the print, the content label that’s on there, and respond positively in choice, or is it more at this point about interest in it?
Speaker
Amy Mitchell (Audience)
Reason
This question is thought-provoking because it challenges the distinction between user interest in authenticity features versus actual behavioral change. It probes whether C2PA creates measurable impact on user decision-making or remains at the level of expressed preference.
Impact
This question prompted important clarification about the current state of C2PA recognition and effectiveness. It revealed that while users respond positively to additional transparency data, C2PA as a brand lacks public recognition, highlighting the need for better communication strategies.
Overall assessment
These key comments collectively shaped the discussion by establishing a human-centered rather than technology-centered approach to C2PA adoption. The conversation evolved from technical specifications to real-world applications, then to implementation challenges, and finally to the critical importance of user education and platform adoption. The diverse perspectives from different global contexts (UK, Tanzania, Ukraine) demonstrated both universal challenges (brand impersonation, content authenticity) and context-specific needs (operating under warfare conditions, resource constraints). The discussion successfully balanced technical capabilities with practical implementation concerns, ultimately emphasizing that successful C2PA adoption requires not just technical standards but also media literacy, platform cooperation, and understanding of diverse global media environments.
Follow-up questions
How can we achieve broader adoption of C2PA, especially by the global majority?
Speaker
Muge Ozkaptan
Explanation
This addresses the need to expand C2PA implementation beyond current adopters to include more diverse global voices and organizations, particularly those in fragile environments and developing countries
How do we get more support from social media platforms for C2PA implementation?
Speaker
Charlie Halford
Explanation
Platform adoption is crucial since most media organizations get significant traffic through social media, and broader platform support would increase the technology’s effectiveness
How can we improve public recognition and understanding of C2PA branding and signaling?
Speaker
Amy Mitchell (audience member)
Explanation
Research showed that while extra data increased trust, there was no recognition of C2PA as a brand, indicating a need for better public awareness and media literacy efforts
What are the plans for integrating C2PA into existing regulations for information integrity such as EU DSA or UK Online Safety Act?
Speaker
Online participant (via Slido)
Explanation
Understanding regulatory integration could help accelerate adoption and provide legal framework support for the technology
How can we implement redaction capabilities in C2PA to protect people who might be at risk?
Speaker
Charlie Halford
Explanation
This addresses the need to balance transparency with safety, allowing removal of sensitive information like location and time data that could endanger subjects or photographers
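The redaction mechanism discussed here can be illustrated with a small sketch. In the C2PA model, redaction removes a specific assertion (for example, EXIF location data) from a manifest while recording that a redaction took place, so verifiers can see that information was deliberately removed rather than tampered with. The fragment below is an illustrative simplification in plain Python, not the real C2PA SDK; the function and field names are hypothetical.

```python
# Illustrative sketch of C2PA-style redaction (hypothetical names, not the real SDK).
# A manifest holds assertions; redacting one removes its data but records its label,
# so a verifier can see that information was deliberately removed, not tampered with.

def redact_assertion(manifest: dict, label: str) -> dict:
    """Return a copy of the manifest with the named assertion removed
    and its label recorded in a 'redacted' list."""
    return {
        "assertions": [a for a in manifest["assertions"] if a["label"] != label],
        "redacted": manifest.get("redacted", []) + [label],
    }

manifest = {
    "assertions": [
        {"label": "c2pa.actions", "data": {"actions": [{"action": "c2pa.created"}]}},
        {"label": "stds.exif", "data": {"GPSLatitude": "50.45", "GPSLongitude": "30.52"}},
    ],
}

safe = redact_assertion(manifest, "stds.exif")
print([a["label"] for a in safe["assertions"]])  # location assertion removed
print(safe["redacted"])                          # but the redaction is visible
```

The key design point matching the discussion: the sensitive data (location, date, identity) disappears, but the fact of redaction stays visible, preserving transparency while protecting people at risk.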
How can we develop better security procedures for media organizations to manage their private keys?
Speaker
Charlie Halford
Explanation
Private key management is critical for maintaining the integrity and trustworthiness of the C2PA system for media organizations
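The role of the signing key can be sketched in a few lines. C2PA itself uses asymmetric signatures with X.509 certificate chains and COSE signing, but the underlying stakes are simple: whoever holds the key can produce signatures that verify as authentic, so a leaked key lets an attacker forge "trusted" content. The stand-in below uses a stdlib symmetric HMAC purely for illustration, with a hypothetical key value; it is not how C2PA signs content.

```python
import hashlib
import hmac

# Illustration only: real C2PA uses asymmetric (X.509/COSE) signatures, not HMAC.
# The point is the same: whoever holds the key can produce signatures that
# verify as "authentic", which is why private key management is critical.

PRIVATE_KEY = b"newsroom-signing-key"  # hypothetical secret; must never leak

def sign(content: bytes, key: bytes) -> str:
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign(content, key), signature)

original = b"verified newsroom footage"
sig = sign(original, PRIVATE_KEY)

print(verify(original, sig, PRIVATE_KEY))             # True: content intact
print(verify(b"tampered footage", sig, PRIVATE_KEY))  # False: edit detected
print(verify(original, sig, b"attacker-key"))         # False: wrong key
```

Any change to the content, or any attempt to sign with a different key, breaks verification, which is exactly why compromised keys would undermine the whole trust chain for a media organization.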
How can we achieve device and tool adoption so C2PA is built into cameras and content creation tools by default?
Speaker
Charlie Halford
Explanation
Seamless integration into content creation workflows is essential for widespread adoption without requiring special technical knowledge from users
How can we better detect AI-generated content and improve the reliability of AI labeling in C2PA?
Speaker
Charlie Halford
Explanation
Current AI detection methods are not bulletproof, and improving these capabilities is crucial as AI-generated content becomes more sophisticated
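C2PA's approach to AI labeling is declarative rather than detective: the creating tool asserts how content was made, instead of a detector guessing after the fact. The specification does this with the IPTC digital source type vocabulary (e.g. trainedAlgorithmicMedia for generative-AI output) inside a c2pa.actions assertion. A minimal sketch of such an assertion as plain data, assuming the spec's vocabulary, with a hypothetical tool name:

```python
# Minimal sketch of a C2PA-style actions assertion declaring AI-generated content.
# Field names follow the C2PA spec's use of the IPTC digital source type vocabulary;
# treat this as an illustrative fragment, not real SDK output.

AI_GENERATED = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

ai_label_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                "action": "c2pa.created",
                "digitalSourceType": AI_GENERATED,
                "softwareAgent": "ExampleImageGenerator 1.0",  # hypothetical tool
            }
        ]
    },
}

# A verifier could surface this as an "AI-generated" badge:
is_ai = any(
    a.get("digitalSourceType") == AI_GENERATED
    for a in ai_label_assertion["data"]["actions"]
)
print(is_ai)
```

The limitation raised in the session follows directly: this label is only as reliable as the tool that writes it, so content created outside C2PA-aware tools carries no such declaration, and detection remains an open problem.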
How can we develop effective media literacy programs to help users understand what C2PA information means?
Speaker
Charlie Halford
Explanation
Simply providing technical information isn’t enough; users need education to understand and effectively use provenance data for decision-making
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.