Lightning Talk #118 Building Resilience How We Fight Disinformation

27 Jun 2025 11:50h - 12:20h

Session at a glance

Summary

This discussion focused on combating misinformation and disinformation through collaborative fact-checking and verification efforts, featuring presentations from Norway’s Faktisk organization and the Philippines’ Rappler newsroom. Olav Ostrem from Faktisk explained how their organization was founded in 2017 as a collaborative effort between six major Norwegian media companies in response to the rise of fake news, Trump’s election, and Russian aggression against Ukraine. With only 15 employees, Faktisk operates two divisions: technical fact-checking and verification, and media literacy education, demonstrating how collaboration with media partners amplifies their reach and impact.


Morten Langfeldt Dahlback discussed the technological evolution of their fact-checking tools, moving from simple transcription services in 2017 to sophisticated AI-powered solutions including object recognition algorithms and facial expression analysis. He highlighted how the shift from text-based to audiovisual content on platforms like TikTok and Instagram required new verification methods, exemplified by their analysis of Norwegian flags in a National Day parade that debunked claims about foreign flag prevalence.


Silje Forsund detailed their verification work during the Ukraine conflict, explaining how competing Norwegian media outlets collaborated at a shared verification desk to authenticate images and videos flooding social media. She provided examples of their work, including exposing a staged video falsely claiming to show a Norwegian soldier’s death and using satellite imagery to document filtering camps and mass graves in conflict zones.


The discussion concluded with presentations from Rappler about building networks of truth-tellers and developing AI tools trained on verified content, emphasizing that combating disinformation requires global, collaborative efforts combining data analysis, community engagement, and technological innovation.


Key points

## Major Discussion Points:


– **Collaborative fact-checking models**: Faktisk’s unique structure as Norway’s fact-checking organization, founded and owned by six major media companies, demonstrating how competing news organizations can work together to combat misinformation more effectively than individual efforts.


– **Evolution of misinformation tactics and verification methods**: The shift from text-based political fact-checking in 2017 to sophisticated audiovisual content verification, requiring advanced technical tools like AI object detection, facial recognition, and satellite imagery analysis to counter increasingly complex disinformation campaigns.


– **Real-time verification during crises**: The establishment of collaborative verification desks during major events like the Ukraine invasion, where journalists from competing organizations worked together to verify images and videos flooding social media, sharing skills and resources across the industry.


– **Technology-enhanced verification tools**: Development and deployment of AI-powered solutions including object recognition algorithms (YOLO), facial expression analysis, and data forensics mapping to track how disinformation spreads across networks and platforms at unprecedented speed and scale.


– **Community-based truth networks**: Rappler’s approach to building resilient information ecosystems through multisectoral coalitions like Facts First PH, combining data mapping, citizen engagement, and safe digital spaces to create networks of truth-tellers that can counter disinformation at the community level.


## Overall Purpose:


The discussion aimed to share strategies and collaborative approaches for combating misinformation and disinformation in the digital age, showcasing how news organizations, technology partners, and communities can work together to verify content, educate citizens, and build resilient information ecosystems.


## Overall Tone:


The tone was professional and solution-oriented throughout, with speakers presenting their work as urgent but manageable challenges. The presenters maintained an optimistic outlook despite acknowledging the serious threats posed by disinformation, emphasizing collaboration and innovation as key to success. The tone remained consistently focused on practical solutions and shared learning, with audience questions reflecting genuine interest in the ethical and technical aspects of the work presented.


Speakers

– **Olav Ostrem**: News editor of Faktisk, Norway’s only fact-checking organisation


– **Morten Langfeldt Dahlback**: Head of technology at Faktisk


– **Silje Forsund**: Head of verification at Faktisk (also mentioned as head of strategy and innovation)


– **Speaker**: Unnamed in the transcript; self-identifies as Carla from Rappler


– **Audience**: Various audience members asking questions, including Surabhi from RNW Media (a media development organization based in the Netherlands)


Additional speakers:


– **Carla**: Representative from Rappler, a newsroom in the Philippines


Full session report

# Comprehensive Report: Collaborative Approaches to Combating Misinformation and Disinformation


## Executive Summary


This discussion brought together practitioners from Norway’s Faktisk fact-checking organisation and the Philippines’ Rappler newsroom to examine collaborative strategies for combating misinformation and disinformation. The conversation focused on the evolution from text-based fact-checking to audiovisual content verification, emphasizing international cooperation, technological tools, and community engagement approaches.


## Key Participants and Their Roles


The discussion featured Olav Ostrem, news editor of Faktisk, Norway’s dedicated fact-checking organisation; Morten Langfeldt Dahlback, head of technology at Faktisk, who demonstrated verification tools and methodologies; and Silje Forsund, who was introduced as head of strategy and innovation but identified herself as head of verification at Faktisk. Carla, a representative of the Philippine newsroom Rappler, presented the Philippine perspective on building truth-telling networks. Audience participation included Surabhi from RNW Media, who raised questions about AI implementation in journalism.


## The Collaborative Foundation: Faktisk’s Model


### Origins and Structure


Faktisk was founded in 2017 during what Olav described as “the tornado called Fake News.” The organisation’s distinguishing feature is its collaborative structure—it was founded and is owned by six major Norwegian media companies who recognised that working together against disinformation was more effective than competing individually.


As Olav explained: “We are only 15 people, so we need a little help from our friends.” This collaborative philosophy extends internationally, with Faktisk working with Nordic colleagues including Danish TjekDet, Swedish Källkritikbyrån, and Finnish Faktabaari through the Nordic fact-checking hub.


### Operational Framework


Despite having only 15 employees, Faktisk operates with two main functions: fact-checking and verification, and media literacy education. The organisation’s effectiveness is amplified through partnerships with media ecosystem partners who provide support, financing, and distribution channels.


## Technological Evolution in Verification


### From Text to Audiovisual Content


Morten detailed the technological evolution since Faktisk’s founding. Initially focused on transcription services for text-based political fact-checking, the organisation has adapted as misinformation evolved to sophisticated audiovisual content on platforms like TikTok and Instagram.


The organisation now employs advanced verification tools, including object detection algorithms. A notable example involved running an object detector over several hours of footage from a Norwegian National Day parade, counting roughly 80,000 flag instances to test claims about the prevalence of foreign flags among Norwegian ones; the analysis found no support for the claims.
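
The session did not show implementation details, but the general pattern is simple: run a detector over every frame and tally detections per class. Below is a minimal sketch using the open-source ultralytics YOLO package, assuming hypothetical custom-trained flag weights (`flags.pt`); the stock COCO models have no flag class, so the weights, file names, and threshold here are illustrative only.

```python
# Sketch: counting flag detections in parade footage with a YOLO model.
# "flags.pt" is a hypothetical custom-trained weights file with classes
# such as "norwegian_flag" and "other_flag".
from collections import Counter

from ultralytics import YOLO

model = YOLO("flags.pt")  # hypothetical flag-detection weights
counts = Counter()

# stream=True yields one result per frame without loading the whole video
for result in model("parade_footage.mp4", stream=True):
    for box in result.boxes:
        if float(box.conf) >= 0.5:  # ignore low-confidence detections
            counts[model.names[int(box.cls)]] += 1

# Note: these are per-frame detection instances, not unique flags --
# the same physical flag is counted once per frame it appears in,
# which matches the "flag instances" caveat in the talk.
print(counts)
```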


### AI Applications and Ethical Boundaries


The technological toolkit includes facial expression recognition technology for analyzing social media content patterns. However, Morten emphasized ethical limitations: these tools should only be used on public figures with significant impact, not private citizens, and personal information should be disaggregated from analyzed content.
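
The talk names its emotion-analysis tool only as “FVR” and describes it as open source; whether that is the same library as the widely used open-source `fer` Python package is an assumption, but the workflow would look broadly like this sketch (input file name is invented):

```python
# Sketch: estimating the dominant emotion across video frames with the
# open-source `fer` package (pip install fer opencv-python).
import cv2
from fer import FER

detector = FER(mtcnn=True)  # MTCNN face detector for better accuracy

cap = cv2.VideoCapture("tiktok_clip.mp4")  # hypothetical input file
emotions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # top_emotion returns e.g. ("happy", 0.92), or (None, None) if no face
    emotion, score = detector.top_emotion(frame)
    if emotion is not None:
        emotions.append(emotion)
cap.release()

if emotions:
    # Most frequent emotion across all frames with a detected face
    print(max(set(emotions), key=emotions.count))
```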


Frame-by-frame video analysis has become crucial for identifying inconsistencies, editing cuts, and staging in propaganda content, allowing verification teams to identify subtle manipulations that could alter meaning or context.
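
How the team’s frame-by-frame workflow is implemented was not shown, and much of it (sun and shadow analysis) is manual. One small automatable piece, flagging hard cuts via frame-to-frame histogram changes, could be sketched with OpenCV as follows; the 0.6 threshold and file name are illustrative assumptions:

```python
# Sketch: flagging abrupt frame-to-frame changes that can indicate
# editing cuts, using a simple histogram-difference heuristic.
import cv2

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input
prev_hist = None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        # Correlation near 1.0 means similar frames; a sharp drop
        # suggests a hard cut worth inspecting manually.
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < 0.6:
            print(f"possible cut at frame {frame_idx}")
    prev_hist = hist
    frame_idx += 1
cap.release()
```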


## Crisis Verification: The Ukraine Conflict


### Collaborative Verification Response


The Ukraine invasion marked a significant shift in verification journalism. As Olav noted, the conflict brought “a flood of images and videos, and the big medias, they didn’t know what videos and images that occurred on social media that were to be trusted.”


This crisis prompted collaborative verification desks where journalists from competing Norwegian media organisations worked together in real-time to authenticate social media content, representing a departure from traditional competitive journalism models.


### Practical Verification Examples


Silje provided examples of their verification work, including exposing a staged video that falsely claimed to show the death of a Norwegian soldier. Through frame-by-frame analysis, the team identified inconsistencies and editing cuts that revealed the video’s fabricated nature.


The team also utilized satellite imagery and open-source intelligence to document conflict-related activities. Their training programs have equipped about 60 Norwegian journalists, as well as journalists around the world, in verification methods, which is particularly crucial for journalists working in conflict zones or in exile.
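
The speakers did not describe their satellite tooling. As a rough illustration of the comparison step only, a before/after change highlight over two already co-registered images of the same area could look like the sketch below; file names and thresholds are invented, and real workflows also need georeferencing and radiometric correction.

```python
# Sketch: highlighting changes between two aligned satellite images
# ("before" and "after" of the same area, same size).
import cv2

before = cv2.imread("area_2022_03.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("area_2022_06.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(before, after)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Keep only changed regions large enough to be structures, not noise
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 200:  # arbitrary minimum area in pixels
        x, y, w, h = cv2.boundingRect(c)
        print(f"changed region at x={x}, y={y}, size={w}x{h}")
```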


## Global Perspectives: The Philippine Model


### Network-Based Approaches


The Rappler representative introduced the Facts First PH coalition, which includes over 140 units working together across multiple sectors. This model demonstrates multi-sectoral approaches to combating disinformation through collective action extending beyond traditional media organisations.


The Philippine approach emphasizes understanding disinformation as orchestrated campaigns rather than random falsehoods: “You can’t fight what you can’t see… disinformation doesn’t just appear randomly, it is very much orchestrated.”
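
NERV’s data model and algorithms are not public; the sketch below only illustrates the general idea the speaker describes, treating shares as a directed graph and ranking accounts by centrality to find where a fact-check would have the most reach. The edge list and account names are invented.

```python
# Sketch: mapping how a claim travels between accounts as a directed
# graph, then ranking accounts by influence to prioritise outreach.
import networkx as nx

# (source account, account that amplified the claim)
shares = [
    ("seed_page", "bot_1"), ("seed_page", "bot_2"),
    ("bot_1", "community_group_a"), ("bot_2", "community_group_a"),
    ("community_group_a", "local_influencer"),
    ("local_influencer", "news_aggregator"),
]

g = nx.DiGraph(shares)

# PageRank highlights accounts that sit at the centre of the spread;
# these are the clusters where placing facts is most likely to matter.
for account, score in sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```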


### Community Engagement Strategies


Rappler’s approach focuses on building networks through training programs and community roadshows. The organisation has developed AI tools trained on verified content to provide real-time fact-checking capabilities, creating digital spaces where communities can access reliable information.
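
The session does not describe Rai’s architecture. The sketch below illustrates only the grounding step such a tool needs, retrieving the most relevant vetted articles for a user question before any answer is produced, using plain TF-IDF retrieval as a stand-in; the corpus and question are invented.

```python
# Sketch: retrieving vetted articles relevant to a user question,
# the grounding step of a fact-based assistant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vetted_articles = [  # stand-in corpus of verified reporting
    "Fact check: the viral flood video is from 2018, not this week.",
    "No, the health department did not ban this vaccine.",
    "The photo of the rally was digitally altered to inflate the crowd.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(vetted_articles)

def retrieve(question: str, k: int = 2):
    """Return the k vetted articles most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return [vetted_articles[i] for i in ranked[:k]]

print(retrieve("Is the flood video circulating online real?"))
```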


The strategy recognizes that effective responses require understanding how disinformation spreads: “If lies spread a certain way, then we’re able to combat that by knowing where do they start, how are our audiences, our communities, responding to it, and what formats will they actually understand.”


## Technological Challenges and Ethical Considerations


### The Speed Challenge


A significant challenge identified was the mismatch between disinformation spread speed and verification time requirements. Silje identified this as their main operational challenge—stories spread extremely fast while verification work is inherently time-consuming.


This challenge has been exacerbated by AI technology, which has lowered barriers to creating and distributing disinformation at unprecedented scale and speed.


### Ethical Implementation of AI Tools


Surabhi from RNW Media raised questions about reconciling the use of tools like facial recognition with ethical journalism standards. The speakers demonstrated consensus on ethical boundaries, with Morten emphasizing practical guidelines such as limiting facial recognition use to public figures and ensuring personal information protection.


The conversation addressed compliance with regulations such as GDPR while maintaining editorial exemptions necessary for journalistic work, recognizing the tension between utilizing analytical tools and protecting privacy rights.
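
The speakers described disaggregating commenter names from comment text before analysis. A minimal sketch of one common way to do that, replacing usernames with salted one-way hashes so per-author patterns remain analyzable without storing identities, follows; the salt handling and sample data are illustrative only.

```python
# Sketch: separating commenter identity from comment text before
# analysis. A salted hash replaces the username, so patterns can
# still be studied per author without keeping who the author is.
import hashlib
import os

SALT = os.urandom(16)  # kept out of the analysis dataset

def pseudonymise(username: str) -> str:
    """One-way, salted identifier that cannot be reversed to a name."""
    return hashlib.sha256(SALT + username.encode("utf-8")).hexdigest()[:12]

raw_comments = [("anna_hansen", "This video is clearly fake."),
                ("ola_nordmann", "Source? I saw it on Telegram.")]

# The analysis dataset keeps the text and a pseudonym, never the name.
analysis_rows = [(pseudonymise(user), text) for user, text in raw_comments]
print(analysis_rows)
```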


## Unresolved Challenges


### Hidden Disinformation Networks


An audience member highlighted the difficulty of tracking disinformation that spreads through private channels, targeted advertising, and ephemeral websites that disappear quickly. This represents a fundamental limitation of current fact-checking approaches, which primarily focus on publicly available content.


### Scaling Verification Efforts


The discussion highlighted the ongoing challenge of scaling verification efforts to match the volume and speed of AI-generated disinformation. While technological tools are becoming more sophisticated, verification still requires human judgment and time, while false information can be generated and distributed automatically.


## Areas of Consensus


Throughout the discussion, speakers demonstrated agreement on fundamental principles: collaboration is essential for effective fact-checking, technology offers both solutions and challenges, and education and training are crucial for building verification capabilities.


This consensus extended to ethical considerations, with speakers showing agreement on limiting the use of powerful AI tools when they might compromise privacy or ethical standards. All speakers emphasized the need for international cooperation, recognizing disinformation as a global challenge requiring coordinated responses.


## Conclusion


The discussion revealed both significant progress in combating disinformation and substantial remaining challenges. Speakers demonstrated that effective responses require collaboration across traditional competitive boundaries, sophisticated technological tools deployed within ethical frameworks, and community engagement strategies.


The conversation highlighted fundamental structural challenges including the speed mismatch between verification and disinformation spread, the difficulty of monitoring private disinformation networks, and the need to balance analytical tools with privacy considerations.


The strong consensus on core principles suggests the global fact-checking community has developed shared professional standards that could support more effective collaborative responses to what all speakers recognized as a global challenge requiring coordinated international action.


Session transcript

Olav Ostrem: Yes, it’s a privilege to have the opportunity to be giving this presentation to all of you. At my side, I have my colleague Morten, and I have Silje on the far right here. She’s head of strategy and innovation. You will hear them diving into how to counter mis- and disinformation. My name is Olav, I’m the news editor of Faktisk, which is Norway’s only fact-checking organisation. We are now 15 employees and we are part of two divisions. One is the technical side, fact-checks and verification. The other part is the media literacy department Tenk, or “think” in English, which makes educational material out of the same sort of material we are using for the fact-checks. We’re going to address, as I said, disinformation, but first I have to take you into where we’re from, where Faktisk is from, and how collaboration is a vital part of our history. We were founded in 2017, just after the tornado called Fake News. It was after the start of the Russian aggression against Ukraine. It was after the first Trump election. There was a great need for finding a way to counter all of this misinformation being spread online. In Norway, the solution was that the big media companies got together and found a way to do this together, making an independent organisation called Faktisk and starting it up. As you can see, we were founded by and are still owned by the six big companies in Norway: Schibsted and Aller, the two broadcasters, NRK and TV2, and two big media companies behind us that own a lot of regional and local papers. So where are we now? I think the core mission that we’ve always been about is to curate knowledge, share know-how and stay relevant. And how do we do it? We are fact-checking and we are doing verifications, and we are also looking into the incidents when AI becomes a digital threat. With that, we do fact-checking and verification, publish it online, we have social media, we have a TikTok account and a newsletter, and at our best, we do this quickly. We can make an article, a fact-check verification, and at the same time, our media literacy division makes this into educational material that can be very useful for the teacher the next day in the classroom. This is when we are at our best. Then we do the debunking and pre-bunking all together. So how do we do this with very limited resources?
We are only 15 people, so we need a little help from our friends. And with the friends, we have, I mean, like the big media ecosystem in Norway. And as I said, we are owned by the six big companies, so that means that not all of the media industry, but a large part of it, are behind us and giving us support and backing. In addition to financing, we share ideas and we share know-how, and sometimes we visit them, they visit us, we can work together. And in this way, we can be stronger together. And at the same time, they handle most of our distribution, because it’s possible for them to republish our articles, and in that way, we get much more visibility. We get a big audience through our owners and through the rest of the Norwegian media. So this is how collaboration is so important, and we join our forces. Then a small organization like ours can still strengthen the methods and skills in this very important issue of our time. Just a few words on the development during these years. At the start, it was almost only fact-checking we did, and it was a lot of claims from the politicians. We’re still doing that. And at that time, like eight years ago, there was a lot of viral misinformation we also looked into. I mean, there were a lot of strange and funny webpages that had to be looked at. There was a big change in 2020 because of the pandemic. Then almost all of our resources went into that, fact-checking all the claims related to the debates on vaccines and what the authorities were doing. Afterwards, we had the Russian full-scale invasion of Ukraine, and after that again, an energy crisis, and this occupied almost all of our strength. This was what we were doing. But the big change, I think, in the way we prioritize our journalism would be the invasion of Ukraine, because what we saw was a flood of images and videos, and the big medias, they didn’t know what videos and images that occurred on social media that were to be trusted. So we had to find a method and a way to verify all these images and videos. And the way we did it, or the Norwegian media did it, we sat together at our own verification desk, which was facilitated and administered by Faktisk, and it was staffed by journalists from all through Norway. So journalists that otherwise were from competing newspapers, they got together, found new methods, found new skills, and at the same time, they answered to this assignment on providing verified images and videos for the TV stations and the papers. And the third thing we made possible then was to give education to all the journalists in how to develop this field, and they could bring that back to the organization they belonged to otherwise. So a lot of things happening, and Silje will later on tell you a bit more about that part. At the end, I’d just like to say that, of course, we have a lot of collaboration in Norway, but we also have collaboration outside, across the borders, and we work tightly together with our Nordic colleagues, the Danish TjekDet, the Swedish Källkritikbyrån, and the Finnish Faktabaari. And all of those, and we in Faktisk, also take part in a Nordic hub of fact-checking organisations, together with academic institutions and a tech company, which is called NORDIS, where new ideas and methods are being developed. And we’re also part of an international network, IFCN, which is like 180 fact-checking organizations, and the European fact-checking standards network.
So in that sense, we share the ideas, and we share the methods, and we share the tools. That is my bottom line. We’re better together.


Morten Langfeldt Dahlback: Thanks. So I’m going to talk a bit about how we actually get better by being together with other partners. I’m Morten, I’m the head of technology at Faktisk. So as Olav mentioned, back in 2017, we mostly fact-checked claims from politicians. And at that time, our tech suite looked like this. So this is our original toolbox. It’s transcriptions from public broadcasting with some entity recognition, so we could see what people were talking about when they were on the radio or on TV shows, and pick up on the claims that they made, so we could fact-check them. We also looked a bit into polling, so we had our own poll aggregator service. I’m not going to talk that much about that now. But times have changed quite significantly since 2017, and the ability of bad actors to create misinformation has become much more technologically sophisticated, and we’ve also seen that people consume much more audiovisual content online rather than text-based content. So back in 2017, we mainly worried about Facebook, which is a text-based platform, but now we have TikTok, we have Instagram, we have YouTube, we have all these platforms where people mostly share video, and sometimes long-form video too. So we need to update our toolbox, and we’ve done that in collaboration with our academic partners and our tech partners. So I’m going to give you a couple of brief examples, and then Silje will maybe tell you something more about how this actually works in practice. So the first example is this. As you can see, these are Norwegian flags with boxes around them. The boxes come from an object recognition algorithm called YOLO. It means “you only look once”. This is a test case, so we tried to see how many non-Norwegian flags there were in the National Day Parade in Oslo. Every year, there is a debate in Norway about how many foreign flags there are in this parade, and the assumption seems to be that there is definitely some significant amount of them. So we used this object detection technology to actually look at the flags and try to verify if there were any foreign ones, and the answer was a very clear no. This was a myth. We had to run several hours of parade footage through this algorithm to actually get to this answer, something that would have been completely impossible just a few years ago. I think we counted around 80,000 flags, or flag instances, to be very technical about it, because the same flag can appear multiple times in the same video. We’ve also gone through TikTok videos. This one, I’m not sure if you can see. You cannot see. Here we used facial expression recognition, because we wanted to see what the mood is like on Norwegian TikTok. So the box here actually looks at someone’s face, and it tries to estimate what emotion their facial expression signifies. So here, I’m not sure if it detects anything, but it was actually quite instructive, because we discovered that, one, people don’t really share a lot of substantial content on TikTok, and two, most people are happy. So that was pretty much contrary to our expectations. Here’s the algorithm, if you want to try it. It’s called FVR. It’s open source, so you can use it. I’m going to skip the next one because I can see that we are running a bit out of time.
I’m also going to advertise that we have a freedom of information platform that journalists can use to get access to public records, or at least records that should be public, by submitting FOIA requests to all sorts of municipalities, state organizations, and so on. I think there are at least 30 million documents in there, so it’s quite large. But that’s sort of the overarching technology stuff. I think it’s much more interesting for you to hear about how it’s used for verification work. So, Silje.


Silje Forsund: Thank you. My name is Silje. I’m the head of verification at Faktisk. And as Olav was saying earlier, this is a project organized under Faktisk that we started in the spring of 2022. As you see from this image, this was posted by the former prime minister of Sweden, Carl Bildt. He shared this image on his Twitter account on the 26th of February 2022, two days after Russia’s full-scale invasion of Ukraine. And he wrote, there are photos that will be with us for a long time. And he was indeed right about that. When this image was verified, it turned out to be six years old. So it did not show two children and soldiers up against Russian soldiers in the full-scale invasion. And this exemplifies what the media, and society as well, were up against, because traditional reporting tools were not enough anymore. And social media was overflowing with front-line footage, some of it authentic and some of it manipulated. With the restricted access on the ground for journalists, newsrooms relied increasingly on user-generated content for their news coverage. The newsrooms lacked the skills and the methods to verify content, and it became very urgent for everyone to be able to separate facts from fabrications. So competing national media in Norway, we got together and we established a method and a work process on how to try to separate facts from fabrications. And three years later, verification is now a big part of our journalism, and we’ve trained a lot of Norwegian journalists. About 60 Norwegian journalists have been part of our newsroom and got training in it. And we’ve also trained journalists all over the world, many of them living in exile. So we’ve been training journalists from Gaza, Yemen, Syria, Libya, Afghanistan, many corners of the world. So let’s give an example of how we work when we verify content, mainly videos and images. This video that you see screenshots of on the screen surfaced on Russian Telegram channels, and it was claiming to show a Norwegian soldier being killed in Ukraine; he was wearing a Norwegian uniform. The footage seemed authentic, and it seemed to have been shot with a GoPro camera, showing a Russian soldier throwing a grenade into a bunker where the Norwegian soldier was hit. The video was spread with allegations of NATO’s involvement in the war in Ukraine, and dead Norwegian soldiers fed the Russian narrative of NATO’s involvement in the war. But something was clearly off, and verifying the video and analyzing it frame by frame, we could map how sun and shadows revealed inconsistencies and editing cuts, and we could document that the video was staged, and that it did not in fact capture a Russian grenade killing a Norwegian soldier at all. It was clearly propaganda and disinformation. So this is an example of the work we do. This other example is from the early part of the war in Ukraine. At the beginning, there had been rumors that a filtering camp had been established outside Mariupol, the Russian-controlled Ukrainian city. And we wanted to look into it and see if we could find some kind of evidence to support these rumors and stories that we heard.
So we examined satellite images from the area, and at the start of the war, we could not see any of the tents that were said to have been put up, but after a few months, comparing the satellite images, we could clearly see that about 20 blue tents had been put up. So in this way, we use satellite images and open source intelligence to turn rumors into documented facts. This other example is from Syria, where we have also used satellite images to document signs of mass graves in several places after the fall of the regime of Bashar al-Assad in December last year. These satellite images documented how large trenches appeared around the time that thousands of Syrian people had been reported missing. So through satellite data, we could monitor the grave expansion, compare timelines, and add facts to the reports from civilians on the ground. At Faktisk, we of course work to debunk misinformation and manipulated content from conflict areas around the world. And during the India-Pakistan conflict this spring, several videos emerged online claiming to show Pakistan shooting down Indian fighter jets. Our analysis showed that the clips were manipulated and that they were from a military-themed video game. We tracked the source material, compared visuals frame by frame, and identified that none of the footage had any real-world connection. This case shows how visual content, whether intentional or not, can be misused and provoke an escalation between nuclear powers. Increasingly, our work is about exposing manipulation. We investigate content created by AI and video game simulations, as you saw, and also real images, such as this, connected with false claims. This example is from a famous image just a few weeks back, where the French president was accused of cocaine use after simply clearing away a napkin before a photo shoot on a train to Kyiv. Stories such as this spread extremely fast, and verifying content can be time-consuming. This is our main dilemma and challenge at the moment, and this is why we’re focusing on developing technical tools, such as the one Morten was telling you about with object detection and the Norwegian flags. These technical tools and methods are necessary for us to keep improving and developing in order to make verification faster and even more accurate. This is the main challenge that we’re up against at the moment, and it needs to be a global and joint force to tackle this, I believe.


Speaker: I will let Carla take over now. Thank you, Silje. I’m Carla. I’m from Rappler, a newsroom in the Philippines. As we saw from everything that’s been presented, disinformation is real and it’s happening here and now. Our goal really in tackling this is: what do we do now, and how do we continue the battle for truth and start it at a scale that matters to us all. First off, what is the cost of disinformation? It’s been reported that disinformation is actually costing us globally $78 billion every single year because of market manipulation, reputation risk, cybercrime, and everything that disinformation touches. It is designed to divide communities, distort reality, and ultimately destroy trust. The social cost of this is really immeasurable. The fact that we no longer have a shared reality is what we’re living through. Of course, the new entrant, a big entrant in accelerating everything, including disinformation, is AI. Now, the barrier to entry is very low: low cost, high scale in terms of production, producing and distributing at a pace that we’ve never seen before. This is what we’re battling with in Rappler. As mentioned, we’re a Philippine independent newsroom headed and led by the courageous Maria Ressa, who’s right there. We’ve seen firsthand how disinformation has been weaponized to attack journalists, manipulate propaganda, and definitely destroy democracies. Ultimately, our question was: how do we make sure that we tackle disinformation, or build resilience, in a world where lies move faster than the truth? We’ve seen that this is not limited to just one tool, one newsroom, or even one solution, but it is meant to be a collective system that brings together the best of data, the most effective communities, and the right and impactful technology. Let’s start with data. It really is taking a look at how we fight back with it, because ultimately, you can’t fight what you can’t see. We were able to build out a data forensics company called NERV, where we expose different networks. Ultimately, how does disinformation travel across platforms, across networks, pages, bots, accounts? Because disinformation doesn’t just appear randomly, it is very much orchestrated. What you’re seeing here in this slide is actually our visual map of how one disinformation campaign or crisis flows through the information ecosystem or platforms, whether it’s Facebook, TikTok, social media. What you’re seeing on the left is how a piece of information, as it connects with various communities online, splinters into niche communities. You would then be connected or be attracted to certain narratives, and then that’s what you tend to spread. Our ability to then map out where the facts lie, where the lies start, and how these lies spread allows us to connect to the networks who would have the most influence to stop that same spread. Being able to visualize this in a map is very important, because ultimately, the goal is to provide that kind of information and facts to a set of communities that are ready to act. Through MovePH and Rappler, we’ve been able to build out an effective network of civic engagers through various trainings and roadshows. Here’s a quick sample of what that looks like, where we bring together multilateral, multisectoral groups, from youth leaders to local government units, making sure that disinformation is a common enemy that we all need to focus on and fix. And a big part of the problem, a contributing factor, is technology.
If the internet or technology is being used for criminal ends, there must be cooperation from platforms and telcos. I really appreciate these kinds of forums, because this will enable communities to be cyber safe. So let’s focus on educating people on how to deal with the internet. If you really want to keep the discourse alive, you also need to keep a conversation going with your local government leaders. So all this engagement has then allowed us to really collectively grow a movement within the Philippines. It’s called Facts First PH. This was started in 2022. And what makes it different and really powerful is that it is multisectoral, and everybody gets involved. So it’s over 140 units, and it is very difficult to bring people together, right? But the fact that we are battling a common concern or crisis is what brings us together. So it’s not just about banding together, but making sure there’s a system where everybody gets to report, verify, check, and stop the spread of a piece of fake news or a piece of disinformation, and that’s all then collected. So why is this important? Because in this coalition, we are then able to piece together a network of truth tellers that allows us to match where the lies spread with where the facts should be placed. So here you’re seeing a network map of various Facts First partner communities, and how that then ripples out into their respective networks as well. So if lies spread a certain way, then we’re able to combat that by knowing where do they start, how are our audiences, our communities, responding to it, and what formats will they actually understand. So this way, we’re not just saying: here’s a piece of news, this is fake, this is real, but doing it in a language that they understand, in formats that they would actually bother to watch. So it’s knowing our communities, knowing what matters to them, and ultimately knowing how to engage with them better. And as we know, systems, broken systems at that, cannot be fixed overnight, so we built our own. Because the goal is to be able to reclaim the narrative from those that are spreading disinformation. So what you’re seeing here is Rappler Communities, powered by the Matrix Protocol. It is a safe space where everyone can actually jump in, join in, and have real connections directly with our journalists and editors, those who are also guardians of facts and truth. And how it works is, beyond just your typical news feed, you can actually select which communities you want to engage with online. Chat real time, always on, and then even get to chat with our AI, called Rai, which is trained on vetted articles and vetted facts from Rappler, through content and data. And this is how, if you’re visiting Rappler, you can connect with Rai, ask it questions, and verify for yourself, if there’s any disinformation or lie that you come across, and it’ll provide you a response that is connected to factual hard data. So it’s fact-based, it’s designed for civic engagement, it’s designed for you to be able to also spread the truth yourself to your respective communities. So ultimately, for us to really address the issue of disinformation, it’s a long, hard road ahead, but we’ve seen that what has worked for us in Rappler is really seeing how we can build out a network of truth: by understanding what the data shows, equipping citizens and communities to act with information that’s readily available, and also providing safe spaces for good, not just for profit.
So it’s been a long journey, but we’re here to continue to battle for truth. Thank you.


Olav Ostrem: Thank you. So, questions? I’m not sure if there is a… Is there a mic? Here’s a mic. Oh, it seems to be working now, yeah.


Audience: Great. Yeah, thanks for the presentations, really interesting work. I’m Surabhi from RNW Media, we are a media development organization based in the Netherlands. I was just wondering about the tools that you mentioned, for instance, the facial recognition tool and some of the other AI tools that you’re using. How are you reconciling that with the ethical implications of using these tools in your work? Have there been discussions within the organization about the ethical, responsible implications of these tools? And I’m just interested in knowing how are you navigating some of those discussions, and if you have any practical insights on that.


Morten Langfeldt Dahlback: Oh, definitely. So we did have those discussions when we had this TikTok project, and one of the most important things for us was selecting only accounts that had a big public impact already. So we didn’t want to store facial expression information about normal private citizens. We selected only the most important influencers, who have their faces plastered everywhere anyway. We also published, I think, almost none of the material anywhere else. As journalists, of course, we have the luxury of having an exemption from the GDPR, which means that we can store personal information as long as it’s for editorial purposes. But we’ve tried to be as careful as possible, both with people’s faces and with comments. That’s something else that we’ve stored and analyzed. And we’ve always tried to disaggregate the names of the people making the comments from the comment itself, because comments can really expose people’s names, for example. So yeah, it’s usually on a per-project basis. We would have other considerations if we were looking into more, let’s say, random activity from non-public figures, for example.


Audience: Thank you very much for this presentation. I assume that the data that you showed about disinformation sources comes from publicly observable websites, visible to anyone. Now, I’m wondering if you also have a project to deal with the more hidden forms of disinformation, especially the kind that works through personal targeting and delivery via ephemeral websites. These websites will not be visible to anybody who would know what to do about them, and you will not be able to observe what they are. Of course, there could be ways to collaborate with the networks to enable people to report what they see, but that would take a bigger project. So I wonder if there’s anything going on there.


Speaker: Yeah. So all of our work is based on publicly available data. And what we’ve observed is that they really do work in patterns. So while we’re able to map out various accounts and networks, it’s important to note that every account plays a role. And an account will then have its own respective private connections. And so if you’re able to identify what is happening in the public space, what messages are building up in certain clusters, and understand who the main influencers or leaders or accounts in specific clusters would be, we can then connect that to potential private behavior as well. Because whatever you deal with publicly also has an influence on how you spread it privately. So we take a look at patterns, we take a look at how the behavior is, as well as, essentially, the playbook of how messages spread. Then we are able to track that within private spaces as well.


Olav Ostrem: It seems that we have run out of time, so I think we’ll have to leave it there. Thank you for your attention. Thank you, everyone. Thank you.



Olav Ostrem

Speech speed

162 words per minute

Speech length

1348 words

Speech time

498 seconds

Faktisk was founded in 2017 by six major Norwegian media companies as a collaborative response to misinformation

Explanation

Faktisk was established after major events like the ‘Fake News tornado’, Russian aggression against Ukraine, and Trump’s first election, when there was a great need to counter misinformation spreading online. The solution in Norway was for big media companies to collaborate and create an independent fact-checking organization.


Evidence

Founded by and owned by six big Norwegian media companies including Schibsted and Aller, broadcasters NRK and TV2, and two big media companies that own regional and local papers


Major discussion point

Collaborative Fact-Checking and Organizational Structure


Topics

Sociocultural | Human rights


Small organizations with limited resources need collaboration with media ecosystem partners for support, financing, and distribution

Explanation

With only 15 employees, Faktisk requires help from the broader Norwegian media ecosystem. The collaboration provides not just financing but also idea sharing, know-how exchange, and crucially, distribution through republishing of articles.


Evidence

Faktisk has 15 employees and gets support from six big media companies that provide financing, share ideas and know-how, and enable much greater visibility through republishing articles


Major discussion point

Collaborative Fact-Checking and Organizational Structure


Topics

Sociocultural | Development


Agreed with

– Speaker

Agreed on

Collaboration is essential for effective fact-checking and combating disinformation


Collaboration extends internationally through Nordic fact-checking networks and global organizations like IFCN

Explanation

Faktisk works closely with Nordic colleagues and participates in international networks to share ideas, methods, and tools. This includes both regional Nordic cooperation and global fact-checking networks.


Evidence

Works with Danish TjekDet, Swedish Källkritikbyrån, and Finnish Faktabaari, participates in a Nordic hub with academic institutions and a tech company, and is part of IFCN (180 fact-checking organizations) and the European fact-checking standards network


Major discussion point

Collaborative Fact-Checking and Organizational Structure


Topics

Sociocultural | Development


Agreed with

– Speaker

Agreed on

Collaboration is essential for effective fact-checking and combating disinformation


Educational material can be created simultaneously with fact-checks for immediate classroom use

Explanation

Faktisk’s media literacy division can transform fact-checking material into educational content that teachers can use in classrooms the next day. This simultaneous approach combines debunking and pre-bunking efforts effectively.


Evidence

Media literacy division creates educational material from the same content used for fact-checks, enabling teachers to use it in classrooms immediately after publication


Major discussion point

Community Engagement and Education Strategies


Topics

Sociocultural | Development


Agreed with

– Silje Forsund
– Speaker

Agreed on

Training and education are crucial for building verification capabilities



Morten Langfeldt Dahlback

Speech speed

191 words per minute

Speech length

855 words

Speech time

267 seconds

Object detection algorithms like YOLO can verify claims by analyzing large volumes of visual content, such as counting flags in parades

Explanation

The YOLO (You Only Look Once) algorithm can automatically detect and count objects in video footage, enabling fact-checkers to verify claims that would be impossible to check manually. This technology allows for systematic analysis of large amounts of visual content.


Evidence

Used YOLO algorithm to count approximately 80,000 flag instances in several hours of Norwegian National Day Parade footage to verify claims about foreign flags, finding the claims to be a myth


Major discussion point

Technological Tools and Methods for Verification


Topics

Sociocultural | Infrastructure


Facial expression recognition technology can analyze mood and content patterns on social media platforms like TikTok

Explanation

Facial expression recognition algorithms can estimate emotions from facial expressions in social media content, providing insights into the mood and nature of content on platforms. This technology revealed unexpected patterns about content and user emotions.


Evidence

Used FVR (open source facial expression recognition) on TikTok videos, discovering that people don’t share substantial content and most people appear happy, contrary to expectations


Major discussion point

Technological Tools and Methods for Verification


Topics

Sociocultural | Human rights


Misinformation has evolved from simple text-based claims to sophisticated audiovisual content across multiple platforms

Explanation

Since 2017, the landscape has shifted from primarily text-based misinformation on platforms like Facebook to sophisticated audiovisual content on TikTok, Instagram, and YouTube. Bad actors have become more technologically sophisticated in creating misinformation.


Evidence

Original toolbox in 2017 focused on transcriptions and text-based content from politicians, but now must handle video content across multiple platforms including TikTok, Instagram, and YouTube


Major discussion point

Evolution of Misinformation Challenges


Topics

Sociocultural | Cybersecurity


Agreed with

– Speaker

Agreed on

Technology and AI present both opportunities and challenges in the disinformation landscape


Facial recognition tools should only be used on public figures with significant impact, not private citizens

Explanation

When using facial recognition technology, ethical considerations require limiting analysis to public figures who already have significant public exposure. This approach respects privacy while still enabling important research and verification work.


Evidence

Selected only the most important influencers who already have their faces widely public, avoiding storage of facial expression information about normal private citizens


Major discussion point

Ethical Considerations in Fact-Checking Technology


Topics

Human rights | Sociocultural


Personal information should be disaggregated from analyzed content to protect individual privacy

Explanation

When analyzing user-generated content like comments, it’s important to separate personal identifiers from the content itself to protect individual privacy. This allows for content analysis while maintaining ethical standards.


Evidence

Disaggregated names of people making comments from the comment content itself to protect privacy, while still being able to analyze comment patterns


Major discussion point

Ethical Considerations in Fact-Checking Technology


Topics

Human rights | Legal and regulatory



Silje Forsund

Speech speed

120 words per minute

Speech length

1012 words

Speech time

505 seconds

Frame-by-frame video analysis can reveal inconsistencies, editing cuts, and staging in propaganda content

Explanation

Detailed analysis of video content frame by frame can expose manipulated or staged content by revealing inconsistencies in lighting, shadows, and editing. This method is crucial for identifying sophisticated propaganda and disinformation.


Evidence

Analyzed Russian Telegram video claiming to show Norwegian soldier killed in Ukraine, revealing through sun and shadow analysis that it was staged propaganda, not authentic footage


Major discussion point

Technological Tools and Methods for Verification


Topics

Sociocultural | Cybersecurity


Satellite imagery and open source intelligence can document evidence of conflict-related activities like filtering camps and mass graves

Explanation

Satellite imagery can provide objective evidence of activities in conflict zones by showing changes over time, such as the appearance of structures or excavations. This method can turn rumors into documented facts through visual evidence.


Evidence

Used satellite images to document 20 blue tents appearing near Mariupol for filtering camps, and documented mass graves in Syria by tracking large trenches that appeared when thousands were reported missing


Major discussion point

Technological Tools and Methods for Verification


Topics

Cybersecurity | Human rights


User-generated content from conflict zones requires new verification methods due to restricted journalist access

Explanation

Traditional reporting tools are insufficient when newsrooms rely increasingly on user-generated content from conflict areas where journalists cannot access directly. New verification skills and methods become essential for separating authentic from manipulated content.


Evidence

During Ukraine invasion, social media overflowed with front-line footage, some authentic and some manipulated, while journalists had restricted ground access, requiring newsrooms to develop new verification capabilities


Major discussion point

Evolution of Misinformation Challenges


Topics

Sociocultural | Cybersecurity


Training programs have equipped about 60 Norwegian journalists and journalists worldwide in verification methods

Explanation

Systematic training programs can build verification capabilities across the journalism community, extending beyond national borders to support journalists in exile and conflict areas. This capacity building approach multiplies the impact of verification expertise.


Evidence

Trained about 60 Norwegian journalists through verification newsroom participation, and trained journalists from Gaza, Yemen, Syria, Libya, Afghanistan and other regions worldwide


Major discussion point

Community Engagement and Education Strategies


Topics

Development | Sociocultural


Agreed with

– Olav Ostrem
– Speaker

Agreed on

Training and education are crucial for building verification capabilities



Speaker

Speech speed

141 words per minute

Speech length

1220 words

Speech time

518 seconds

Multi-sectoral coalitions like Facts First PH with over 140 units can effectively combat disinformation through collective action

Explanation

Large-scale coalitions bringing together diverse sectors can create powerful networks for fighting disinformation by combining different expertise and reach. The multi-sectoral approach enables comprehensive coverage and response to misinformation campaigns.


Evidence

Facts First PH coalition started in 2022 with over 140 units from various sectors, creating a system where everybody can report, verify, check, and spread information about fake news and disinformation


Major discussion point

Collaborative Fact-Checking and Organizational Structure


Topics

Sociocultural | Development


Agreed with

– Olav Ostrem

Agreed on

Collaboration is essential for effective fact-checking and combating disinformation


AI has lowered barriers to creating and distributing disinformation at unprecedented scale and speed

Explanation

Artificial intelligence has made it easier and cheaper for bad actors to create and distribute disinformation at a scale and pace never seen before. This technological advancement represents a significant escalation in the disinformation threat landscape.


Evidence

AI enables low cost, high scale production and distribution of disinformation at a pace never seen before, with very low barriers to entry


Major discussion point

Evolution of Misinformation Challenges


Topics

Cybersecurity | Sociocultural


Agreed with

– Morten Langfeldt Dahlback

Agreed on

Technology and AI present both opportunities and challenges in the disinformation landscape


Disinformation costs the global economy $78 billion annually and destroys shared reality and trust

Explanation

Disinformation has measurable economic impacts through market manipulation, reputation damage, and cybercrime, while also causing immeasurable social costs by eliminating shared reality and trust in society. The problem affects both economic and social foundations of communities.


Evidence

Global cost of $78 billion annually from market manipulation, reputation risk, and cybercrime; designed to divide communities, distort reality, and destroy trust, leading to loss of shared reality


Major discussion point

Evolution of Misinformation Challenges


Topics

Economic | Sociocultural


Building networks of civic engagers through training and roadshows creates effective community-based responses

Explanation

Training programs and community engagement initiatives can build networks of informed citizens who can actively combat disinformation in their communities. This approach creates grassroots resistance to misinformation campaigns.


Evidence

Built networks through training and roadshows bringing together multilateral, multisectoral groups including youth leaders and local government units, focusing on disinformation as a common enemy


Major discussion point

Community Engagement and Education Strategies


Topics

Development | Sociocultural


Agreed with

– Olav Ostrem
– Silje Forsund

Agreed on

Training and education are crucial for building verification capabilities


Safe digital spaces powered by verified content and AI trained on factual data can provide real-time fact-checking

Explanation

Creating secure digital platforms where users can access verified information and interact with AI systems trained on factual content provides an alternative to misinformation-prone social media. These spaces enable real-time verification and community engagement around truth.


Evidence

Rappler Communities powered by Matrix Protocol provides safe space with AI called Rai trained on vetted Rappler articles and data, enabling real-time chat with journalists and fact-checking


Major discussion point

Community Engagement and Education Strategies


Topics

Infrastructure | Sociocultural



Audience

Speech speed

148 words per minute

Speech length

397 words

Speech time

160 seconds

Questions about responsible use of AI tools in journalism require ongoing ethical discussions

Explanation

The use of AI tools like facial recognition in journalism raises important ethical questions that need to be addressed through organizational discussions and practical guidelines. Media organizations must navigate the balance between technological capabilities and ethical responsibilities.


Evidence

Questions raised about ethical implications of facial recognition tools and other AI technologies used in fact-checking work, asking about organizational discussions and practical insights


Major discussion point

Ethical Considerations in Fact-Checking Technology


Topics

Human rights | Legal and regulatory


Challenges exist in tracking ephemeral and privately targeted disinformation beyond publicly observable content

Explanation

While public disinformation can be tracked and analyzed, there are significant challenges in addressing more hidden forms of disinformation that use personal targeting and ephemeral websites. These forms of disinformation require different approaches and potentially larger collaborative projects to address effectively.


Evidence

Questions about dealing with hidden disinformation through personal targeting and ephemeral websites that are not visible to general observers, noting this would require bigger collaborative projects


Major discussion point

Ethical Considerations in Fact-Checking Technology


Topics

Cybersecurity | Legal and regulatory


Agreements

Agreement points

Collaboration is essential for effective fact-checking and combating disinformation

Speakers

– Olav Ostrem
– Speaker

Arguments

Small organizations with limited resources need collaboration with media ecosystem partners for support, financing, and distribution


Collaboration extends internationally through Nordic fact-checking networks and global organizations like IFCN


Multi-sectoral coalitions like Facts First PH with over 140 units can effectively combat disinformation through collective action


Summary

Both speakers emphasize that fighting disinformation requires collaborative approaches, whether through media partnerships, international networks, or multi-sectoral coalitions. They agree that no single organization can effectively combat disinformation alone.


Topics

Sociocultural | Development


Technology and AI present both opportunities and challenges in the disinformation landscape

Speakers

– Morten Langfeldt Dahlback
– Speaker

Arguments

Misinformation has evolved from simple text-based claims to sophisticated audiovisual content across multiple platforms


AI has lowered barriers to creating and distributing disinformation at unprecedented scale and speed


Summary

Both speakers acknowledge that technological advancement, particularly AI, has fundamentally changed the disinformation landscape by making it easier to create and distribute false content while also providing new tools for verification.


Topics

Cybersecurity | Sociocultural


Training and education are crucial for building verification capabilities

Speakers

– Olav Ostrem
– Silje Forsund
– Speaker

Arguments

Educational material can be created simultaneously with fact-checks for immediate classroom use


Training programs have equipped about 60 Norwegian journalists and journalists worldwide in verification methods


Building networks of civic engagers through training and roadshows creates effective community-based responses


Summary

All three speakers agree that education and training programs are fundamental to building capacity for fighting disinformation, whether for journalists, educators, or civic communities.


Topics

Development | Sociocultural


Similar viewpoints

Both speakers demonstrate how advanced technological tools can be used for verification work, from automated analysis to detailed manual examination of visual content.

Speakers

– Morten Langfeldt Dahlback
– Silje Forsund

Arguments

Object detection algorithms like YOLO can verify claims by analyzing large volumes of visual content, such as counting flags in parades (a brief code sketch follows this entry)


Frame-by-frame video analysis can reveal inconsistencies, editing cuts, and staging in propaganda content


Satellite imagery and open source intelligence can document evidence of conflict-related activities like filtering camps and mass graves


Topics

Sociocultural | Cybersecurity
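
The flag-counting example above can be illustrated with a short sketch using the open-source ultralytics YOLO library. The weights file flag_detector.pt and its class labels are hypothetical: Faktisk’s actual model and training data were not detailed in the session.

```python
# Hedged sketch: count flag detections in parade footage with a YOLO model.
# "flag_detector.pt" is a hypothetical fine-tuned weights file with flag
# classes (e.g. "norwegian_flag", "other_flag"); COCO's stock classes do
# not include flags, so a custom-trained model is assumed.
from collections import Counter

from ultralytics import YOLO  # pip install ultralytics

model = YOLO("flag_detector.pt")
counts: Counter = Counter()

# stream=True yields per-frame results without loading the whole video.
for result in model("parade.mp4", stream=True):
    for box in result.boxes:
        counts[result.names[int(box.cls)]] += 1

total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} detections ({n / total:.1%})")
```

Note that raw per-frame counts tally the same physical flag many times; a real analysis would add object tracking or sample well-separated frames before drawing conclusions about proportions.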


Both acknowledge the importance of ethical considerations when using AI and facial recognition technologies in journalism, emphasizing the need for privacy protection and responsible implementation.

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Facial recognition tools should only be used on public figures with significant impact, not private citizens


Personal information should be disaggregated from analyzed content to protect individual privacy


Questions about responsible use of AI tools in journalism require ongoing ethical discussions


Topics

Human rights | Legal and regulatory


Unexpected consensus

Ethical use of AI surveillance technologies in journalism

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Facial recognition tools should only be used on public figures with significant impact, not private citizens


Personal information should be disaggregated from analyzed content to protect individual privacy


Questions about responsible use of AI tools in journalism require ongoing ethical discussions


Explanation

It’s unexpected to see such strong consensus on limiting the use of powerful AI tools, especially when these tools could potentially enhance fact-checking capabilities. The speakers prioritize ethical considerations over technological possibilities, showing restraint in tool deployment.


Topics

Human rights | Legal and regulatory


Global scope of disinformation requiring international cooperation

Speakers

– Olav Ostrem
– Silje Forsund
– Speaker

Arguments

Collaboration extends internationally through Nordic fact-checking networks and global organizations like IFCN


Training programs have equipped about 60 Norwegian journalists and journalists worldwide in verification methods


Multi-sectoral coalitions like Facts First PH with over 140 units can effectively combat disinformation through collective action


Explanation

Despite representing different organizations from different countries (Norway and Philippines), all speakers converge on the need for international cooperation and knowledge sharing, suggesting a mature understanding that disinformation is a global challenge requiring coordinated responses.


Topics

Development | Sociocultural


Overall assessment

Summary

The speakers demonstrate strong consensus on key principles: collaboration is essential, technology offers both solutions and challenges, education builds capacity, and ethical considerations must guide tool deployment. They agree on the global nature of disinformation and the need for coordinated international responses.


Consensus level

High level of consensus with significant implications for the field. The agreement suggests a maturing discipline with shared professional standards, ethical frameworks, and recognition of collective action needs. This consensus could facilitate better international cooperation, standardized training programs, and more effective collaborative responses to disinformation campaigns.


Differences

Different viewpoints

Unexpected differences

Overall assessment

Summary

The discussion showed remarkable consensus among speakers with no direct disagreements identified. The main areas of variation were in approach and emphasis rather than fundamental disagreement.


Disagreement level

Very low disagreement level. The speakers presented complementary perspectives on fact-checking and combating disinformation, with differences mainly in focus areas (the Norwegian collaborative model, Philippine community engagement, and technical tools) rather than conflicting viewpoints. The audience questions revealed some tension around the ethical implementation of AI tools and the scope of verification challenges, but these were more about refining approaches than fundamental disagreements. This high level of consensus suggests strong professional alignment in the fact-checking community, though it may also indicate that the more contentious aspects of the field were not deeply explored in this particular forum.


Partial agreements


Takeaways

Key takeaways

Collaborative approaches are essential for effective fact-checking, as demonstrated by Faktisk’s founding by six major Norwegian media companies and international partnerships


Technology tools like object detection algorithms, facial expression recognition, and satellite imagery analysis are crucial for modern verification work, especially for audiovisual content


Misinformation has evolved significantly since 2017, moving from simple text-based claims to sophisticated AI-generated content that spreads faster and at greater scale


Community engagement and education are vital components of combating disinformation, requiring multi-sectoral coalitions and training programs for both journalists and citizens


The economic and social costs of disinformation are substantial, with $78 billion in annual global costs and the destruction of shared reality and trust


Speed is a critical challenge in verification work – lies spread faster than fact-checkers can debunk them, necessitating faster technical tools and methods


Building safe digital spaces with AI trained on verified content can provide real-time fact-checking capabilities for communities


Resolutions and action items

Continue developing technical tools to make verification faster and more accurate


Expand training programs for journalists worldwide, particularly those in conflict zones and exile


Maintain and grow multi-sectoral coalitions like Facts First PH to create networks of truth-tellers


Develop AI-powered tools trained on vetted content to provide real-time fact-checking assistance


Continue international collaboration through Nordic networks and global organizations like IFCN


Unresolved issues

How to effectively track and combat ephemeral and privately targeted disinformation that isn’t publicly observable


Balancing the use of AI and facial recognition tools with ethical considerations and privacy protection


Scaling verification efforts to match the speed and volume of AI-generated disinformation


Addressing the fundamental challenge that verification is time-consuming while false information spreads extremely fast


Determining best practices for responsible use of surveillance and recognition technologies in journalism


Suggested compromises

Using facial recognition technology only on public figures with significant impact rather than private citizens


Disaggregating personal information from analyzed content to protect individual privacy while maintaining editorial exemptions under GDPR


Focusing on publicly observable patterns to infer private disinformation behavior rather than directly accessing private communications


Balancing the need for fast verification with thorough ethical considerations on a per-project basis


Thought provoking comments

We’re better together… So how do we do this with very limited resources? We are only 15 people, so we need a little help from our friends.

Speaker

Olav Ostrem


Reason

This comment crystallizes a fundamental insight about combating disinformation – that it cannot be effectively addressed by isolated organizations but requires collaborative ecosystems. It challenges the traditional competitive model of journalism and proposes cooperation as a survival strategy.


Impact

This comment established the central theme of collaboration that ran throughout the entire presentation. It shifted the discussion from individual organizational capabilities to systemic approaches, setting up the framework for all subsequent examples of cross-border cooperation, shared verification desks, and multi-stakeholder coalitions.


The big change, I think, in the way we prioritize our journalism would be like the invasion in Ukraine because what we saw was a flood of images and videos, and the big medias, they didn’t know what videos and images that occurred on social media that were to be trusted.

Speaker

Olav Ostrem


Reason

This observation identifies a pivotal moment where traditional journalism had to fundamentally adapt its methods. It highlights how geopolitical events can accelerate technological and methodological evolution in media, forcing newsrooms to develop entirely new skill sets.


Impact

This comment marked a transition in the presentation from discussing general fact-checking to specialized verification techniques. It introduced the concept of real-time verification under crisis conditions and led directly to the technical demonstrations and case studies that followed.


You can’t fight what you can’t see… disinformation doesn’t just appear randomly, it is very much orchestrated.

Speaker

Carla (Rappler)


Reason

This insight reframes disinformation from random falsehoods to systematic, strategic operations. It introduces the concept of disinformation as warfare that requires intelligence-gathering approaches rather than just fact-checking responses.


Impact

This comment shifted the discussion toward a more sophisticated understanding of disinformation as organized campaigns. It introduced the need for network analysis and data forensics, leading to the presentation of visual mapping tools and the concept of fighting networks with networks (a minimal graph sketch follows).
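
As a rough illustration of what “fighting networks with networks” can mean in practice, the sketch below builds a share graph and ranks the most amplified accounts with PageRank. The data and account names are invented; Rappler’s actual forensics tooling was not detailed in the session.

```python
# Toy sketch: map who amplifies whom, then surface likely campaign hubs.
import networkx as nx

# (account_that_shared, account_it_amplified) -- invented sample data
shares = [
    ("acct_a", "seed_page"), ("acct_b", "seed_page"),
    ("acct_c", "seed_page"), ("acct_b", "acct_a"),
    ("acct_d", "other_page"),
]

graph = nx.DiGraph()
graph.add_edges_from(shares)

# PageRank highlights accounts whose content is amplified most heavily,
# a first clue to where an orchestrated campaign originates.
for account, score in sorted(nx.pagerank(graph).items(),
                             key=lambda kv: kv[1], reverse=True):
    print(f"{account}: {score:.3f}")
```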


If lies spread a certain way, then we’re able to combat that by knowing where do they start, how are our audiences, our communities, responding to it, and what formats will they actually understand.

Speaker

Carla (Rappler)


Reason

This comment reveals a strategic insight about matching counter-narratives to the specific communication patterns and preferences of target communities. It moves beyond simply debunking to understanding audience psychology and communication effectiveness.


Impact

This observation elevated the discussion from technical verification methods to strategic communication theory. It introduced the concept of community-specific responses and led to the presentation of the Facts First PH coalition model, showing how understanding audience behavior can inform counter-disinformation strategies.


How are you reconciling that with the ethical implications of using these tools in your work? Have there been discussions within the organization about the ethical, responsible implications of these tools?

Speaker

Surabhi (Audience)


Reason

This question introduced a critical tension in the discussion – the ethical implications of using AI and surveillance technologies to combat disinformation. It challenged the presenters to consider whether the means justify the ends.


Impact

This question forced a shift from celebrating technological capabilities to examining their ethical boundaries. It introduced complexity to the narrative and prompted a discussion about GDPR compliance, consent, and the responsibilities that come with powerful verification tools. It grounded the technical discussion in real-world ethical considerations.


Overall assessment

These key comments transformed what could have been a straightforward presentation about fact-checking tools into a nuanced discussion about the systemic nature of disinformation warfare. The progression moved from individual organizational challenges to collaborative solutions, then to sophisticated network-based approaches, and finally to ethical considerations. The comments collectively established that combating disinformation requires not just better technology, but fundamental changes in how media organizations work together, understand their adversaries, and engage with communities, all while maintaining ethical standards. The discussion evolved from reactive fact-checking to proactive, strategic, and ethically conscious counter-disinformation work.


Follow-up questions

How are you reconciling the use of AI tools like facial recognition with ethical implications in your work?

Speaker

Surabhi from RNW Media


Explanation

This addresses the important ethical considerations around using AI surveillance and analysis tools in journalism, particularly regarding privacy and responsible use of personal data


Do you have projects to deal with hidden forms of disinformation, especially those delivered through personal targeting and ephemeral websites that are not publicly observable?

Speaker

Audience member (unnamed)


Explanation

This highlights a significant gap in current fact-checking capabilities – the inability to monitor and counter disinformation that spreads through private channels, targeted advertising, or temporary websites that disappear quickly


How to make verification faster while maintaining accuracy in the face of rapidly spreading disinformation

Speaker

Silje Forsund


Explanation

This was identified as the main challenge they face – stories spread extremely fast while verification is time-consuming, creating a fundamental mismatch in response times


How to scale truth-telling networks to match the scale and speed of disinformation spread

Speaker

Carla from Rappler


Explanation

This addresses the need to build systematic approaches that can compete with the orchestrated nature of disinformation campaigns across multiple platforms and communities


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.