Lightning Talk #7 Privacy Redefined: Equitable Access in the AI Age

26 Jun 2025 14:00h - 14:30h


Session at a glance

Summary

This discussion was a lightning talk session at the Internet Governance Forum 2025 in Lillestrom, presented by researchers from the Safer Internet Lab, a collaborative initiative between the Center for Strategic and International Studies in Jakarta and Google Indonesia. Beltsazar Krisetya and Patricia Larasgita presented their multi-year research on information ecosystem challenges across the Asia-Pacific region, focusing particularly on disinformation, electoral integrity, and online safety issues.


The researchers outlined four distinct studies they conducted. Their first research involved surveying 200-300 stakeholders across Indonesia, including government officials, tech platforms, and civil society members, revealing significant disagreements about what constitutes misinformation and disinformation. They found that while civil society and fact-checkers were perceived as most transparent and effective, collaborations between stakeholders remained largely ad hoc and unsustainable due to differing priorities and lack of institutional mechanisms.


Their second study examined generative AI’s impact on elections across ten Asia-Pacific countries, finding patchy and inadequate regulatory responses despite civil society being at the forefront of combating AI-based electoral disinformation. The third research focused on online scams as a form of financial disinformation, discovering that frequent scam encounters paradoxically reduce general internet distrust, potentially harming the region’s growing digital economy. Their final study on Foreign Information Manipulation and Interference (FIMI) showed increasing trends of foreign influence operations in both digital and traditional media, with spikes corresponding to geopolitical tensions.


During the Q&A session, participants discussed the researchers’ engagement with Indonesian electoral bodies and ICT ministries, challenges in addressing romance scams on platforms, and Indonesia’s experience managing generative AI threats during their 2024 election. The presenters emphasized that regulatory frameworks often lag behind technological developments, requiring updated approaches to match evolving information manipulation tactics.


Key points

**Major Discussion Points:**


– **Multi-stakeholder research on information ecosystem challenges in Asia-Pacific**: The Safer Internet Lab presented their collaborative research initiative between CSIS Jakarta and Google Indonesia, focusing on disinformation, electoral interference, online scams, and AI-generated content across multiple countries in the region.


– **Lack of consensus on defining misinformation and disinformation**: Research findings revealed significant disagreement among stakeholders (government, tech platforms, civil society) about what constitutes misinformation, leading to inconsistent content moderation approaches and policy responses.


– **Impact of generative AI on electoral processes**: Discussion of how AI technologies, particularly deepfakes, are being used in election manipulation across Asia-Pacific countries, with regulatory frameworks struggling to keep pace with technological advancement.


– **Online scams as financial disinformation**: Examination of romance scams, business fraud, and other online deception tactics, highlighting that victims span all demographics and that overconfidence in detecting scams is a common vulnerability.


– **Foreign Information Manipulation and Interference (FIMI)**: Analysis of increasing foreign influence operations in Asia-Pacific digital and traditional media, particularly during geopolitical flashpoints like the Russia-Ukraine war.


**Overall Purpose:**


The discussion aimed to present research findings on internet safety challenges across Asia-Pacific, share policy recommendations for addressing information ecosystem threats, and promote multi-stakeholder collaboration to combat disinformation, AI misuse, and online fraud in the region.


**Overall Tone:**


The discussion maintained a professional, academic tone throughout, with presenters delivering research findings in an informative manner. The tone remained collaborative and constructive during the Q&A session, with speakers providing thoughtful responses to questions about electoral integrity, platform responsiveness, and policy implementation. There was an underlying sense of urgency about the evolving nature of digital threats, but the overall atmosphere was solution-oriented and focused on knowledge sharing.


Speakers

– **Beltsazar Krisetya**: From Safer Internet Lab, research collaborative based in Jakarta, Indonesia. Works on disinformation research, electoral disinformation, and information ecosystem issues in the Asia-Pacific region.


– **Patricia Larasgita**: From Safer Internet Lab, colleague of Beltsazar Krisetya. Focuses on multi-stakeholder collaboration and research on Internet safety in Asia-Pacific, including foreign information manipulation and interference, deepfakes in online fraud and scams, and democratic impact of generative AI.


– **Vicky Bowman**: Chair of Global Network Initiative, has worked extensively in Myanmar. Has experience with online romance scams and platform responsiveness research.


– **Audience**: Multiple audience members participated, including a journalist from Indonesia and an individual who was part of ASEAN parliamentarians’ effort to monitor elections in Jakarta in 2024.


**Additional speakers:**


None identified beyond those in the speakers list above.


Full session report

# Discussion Report: Internet Safety and Information Ecosystem Challenges in Asia-Pacific


## Executive Summary


This lightning talk session at the Internet Governance Forum 2025 in Lillestrom featured researchers from the Safer Internet Lab, a collaborative initiative between CSIS Jakarta and Google Indonesia. Beltsazar Krisetya and Patricia Larasgita presented findings from their multi-year research programme examining information ecosystem challenges across the Asia-Pacific region. This marked their third consecutive year presenting at IGF, following presentations in Kyoto (2023) and Riyadh (2024).


## Background and Context


Patricia Larasgita opened by explaining that the Safer Internet Lab is a multi-stakeholder partnership focused on disinformation research, electoral disinformation, and information ecosystem issues throughout the Asia-Pacific region. She noted their engagement with international networks including the Network of Global Centers, Association of Internet Researchers, and the International Panel on the Information Environment.


Beltsazar Krisetya clarified that their presentation would cover “four different and distinct research” studies conducted by their team, each addressing specific aspects of internet safety and information integrity in the region.


## Key Research Findings


### Multi-Stakeholder Perceptions Study


The first study surveyed 200 to 300 stakeholders across Indonesia, including government officials, technology platforms, and civil society members. Krisetya highlighted a fundamental challenge: “There is hardly any consensus about any types of content that are determined as misinformation and disinformation. And that brings to the consequences of the way we treat the content, because not every content can be solved with a blunt instrument of takedown, for example.”


The research found that civil society organisations and fact-checkers were perceived as the most trusted and effective actors, while technology platforms and government bodies were seen as requiring improvements in both performance and transparency. A key finding was that “multi-stakeholder collaborations remain ad hoc and temporal due to different priorities and absence of institutional mechanisms.”


### Generative AI Impact on Electoral Processes


The second study examined generative AI’s influence on elections across 10 Asia-Pacific countries. Krisetya found that civil society organisations are “at the forefront of combating AI-based electoral disinformation but limited by funding and weak state partnerships.”


The research revealed “patchy and uneven legal responses across countries to address AI usage in elections.” Reflecting on Indonesia’s 2024 election, Krisetya noted: “I believe that the way we handled the 2024 election was still using the 2023 playbook… as the threat changes, so does the playbook.”


### Online Scams Research


The third study focused on online scams, revealing unexpected findings about user behaviour. Krisetya presented a counterintuitive discovery: “The more people encounter scams, at least once a month, the less likely it is for them to distrust the internet in general. And this can bring dire consequences to a region that are otherwise blooming with the digital economy.”


The research found that overconfidence affects all demographics, with younger people and foreign-born populations showing particular vulnerability. The study also revealed that stakeholders focus primarily on financial losses while neglecting psychological and physical impacts of romance scams.


### Foreign Information Manipulation and Interference (FIMI)


Patricia Larasgita presented the final study examining foreign influence operations across digital and traditional media platforms, noting clear correlations with geopolitical tensions and events such as the Russia-Ukraine conflict.


## Discussion and Questions


### Platform Accountability


Vicky Bowman from the Global Network Initiative raised questions about platform responsiveness to fraudulent profiles used in romance scams, noting: “It seems to be a bit of a grey area as to whether or not these kind of things are actually illegal, even though they’re obviously setting themselves up for defrauding the people who are going to fall for them.”


### Electoral Integrity


Questions from an Indonesian journalist and an ASEAN parliamentary election monitor focused on generative AI’s impact on democratic processes and Indonesia’s experience managing deepfakes during its 2024 elections. Krisetya emphasised that current approaches were inadequate for addressing rapidly evolving AI-based threats.


### Government Engagement


Regarding their work with Indonesian government bodies, Krisetya explained their dual approach: direct engagement with electoral supervisory bodies during elections, and longer-term collaboration with ICT ministries for regulatory development.


### Media Literacy Approaches


Krisetya cautioned against over-reliance on media literacy programmes, stating: “There is a thin line between educating users and shifting the responsibility to users to make their own judgment whether this information is factual or not… it needs to be superseded with other interventions that can stop misinformation right at the point of infection and not only at the point of the recipient.”


## Key Challenges Identified


The discussion highlighted several persistent challenges:


– Lack of consensus among stakeholders on definitions of misinformation and disinformation


– Inadequate and inconsistent regulatory frameworks across the region


– Temporal nature of multi-stakeholder collaborations due to different priorities


– Regulatory responses that lag behind technological developments


– Legal grey areas regarding online fraud prevention


## Follow-up Actions


The researchers committed to making their findings available through their website (saferinternetlab.org) and providing specific country reports to interested stakeholders. They mentioned having printed materials available at their booth in the AGA Village and provided a QR code for easy access to their website.


## Conclusion


The session demonstrated the complexity of internet safety challenges in the Asia-Pacific region, revealing significant gaps between stakeholder understanding and practical implementation of solutions. The researchers emphasised the need for upstream interventions rather than relying solely on user education, and highlighted the importance of developing adaptive approaches that can keep pace with technological change.


The Safer Internet Lab’s multi-country research approach provides valuable insights for policymakers and practitioners working to address information ecosystem challenges, while also revealing the substantial coordination and regulatory work that remains to be done across the region.


Session transcript

Beltsazar Krisetya: Thank you for attending our lightning talk session. And thank you to the audience that are also joining us online. Welcome again to the third day, I suppose, to Internet Governance Forum 2025 in Lillestrom. So we are from the Safer Internet Lab. We will be telling you more about what our initiative actually is. My name is Beltsazar from Safer Internet Lab and this is my colleague, Patricia, also from Safer Internet Lab. So as the name may have slightly suggested, we are a research collaborative, we are a research initiative based in Jakarta, Indonesia. We are humbled to have been given the opportunity to present our work, starting by introducing Safer Internet Lab in Internet Governance Forum 2023 in Kyoto, followed by the 2024 edition in Riyadh, and then now 2025 in Lillestrom. So we’re grateful for the continuation so that we can present the progress of our research and engagement in the Asia-Pacific region. So we began the journey rather new, it was probably two and a half or three years ago. It is a research collaborative between our home institution, which is the Center for Strategic and International Studies in Jakarta and Google Indonesia, and we aim to identify policy gaps and points of intervention within the upstream part of disinformation campaigns. What we mean by this is, during the year when this research initiative was established, the region was warming up. Many of the countries within the region were preparing for the year to come, which is 2024, which is the year of great election, as we all may know. Indonesia is one of the countries. We had an election in 2024, and so we began the research on disinformation, particularly electoral disinformation. We researched how the political buzzers or political cyber troopers coalesce and then form a sub-industry, and then how they move and create the so-called influence operation during the elections, and whether they will create a similar campaign in the 2024 election. 
We conducted a survey nationwide to investigate how the susceptibility of voters, of users, to misinformation, whether they are still believing to, for example, misinformation that was circulated 10 years ago. So we did research on the production side of disinformation, on the recipient side of disinformation, but we also conducted research on the platform side of disinformation, so how content moderation evolves, as well as what are the avenues for government and civil society to influence the way content is decided to be treated in social media ecosystem in Indonesia. So following the spirit of Internet Governance Forum, we employed multi-stakeholderism, and we employed a multidisciplinary approach to capture the magnitude and gravity of disinformation. But we are currently also branching out to other issues related to the information ecosystem as the issues that we are going to talk about more in detail later on. My colleague here will tell more about what we have done in the previous year.


Patricia Larasgita: Okay, as often discussed in the global forums, ensuring a safer Internet calls for joint action across different sectors, across borders. So we are also trying to turn multi-stakeholder partnership into action. As part of our effort to strengthen the regional and international collaboration, we joined several networks, such as the Network of Global Centers and also Association of Internet Researchers and the International Panel on the Information Environment. We also engaged with a wide range of stakeholders across Asia-Pacific, from government institutions, civil society to academia, to exchange knowledge, insights, and also perspective on information landscape. Alongside our effort on the multi-stakeholder collaboration, research is also what we have done. Conducting research is a key part of understanding the information landscape. And this year, we are focusing on the Internet safety in Asia-Pacific, particularly on the foreign information manipulation and interference, deepfakes in online fraud and scams, and also the democratic impact of generative AI. So what did we find?


Beltsazar Krisetya: Thank you. So before I’m given the opportunity to present a tiny bit of the findings that we did throughout the years, a bit of disclaimer first. Whatever it is that I’m going to convey in the next few minutes, it’s a part of the four different and distinct research. So if you want to learn more about it, we’re always available in our booth just right around the corner. You will find the Safer Internet Lab booth, in which we’ve already printed some of the research for you. And if you find that there are countries that are relevant to your current area of work, we’ll be happy to provide you with that report as well, as we are also continuously branching out to all countries and policy areas within Asia-Pacific. So first, this is a research that is conducted by a colleague of ours back in Jakarta, back in the CSIS. We conducted an expert survey to complement the public survey. So we asked a series of questions to stakeholders in Indonesia, whether it is from tech platforms, civil society, government actors related to the information ecosystem, across all levels of seniority, across various backgrounds, and across various levels of decision-making. We set up a few questions to try to, as best as possible, map the stakeholder perception to the way we govern the information ecosystem. One of the findings of probably 60 to 70 questions that we asked, and we had a rather good turnout, I would say. We had 200 to 300 stakeholders participating, from the Ministry of ICT to big tech platform and to civil society. So we are happy with the turnout of the survey. But one of the findings is that, no matter how defined the academic consensus is about disinformation and misinformation, the way our stakeholders perceive what disinformation and misinformation entails are still varied. 
The screen here might be too small for you to see the details, but we asked and we gave the stakeholders some examples, whether they think that pseudoscience is a form of misinformation, whether they think that satire and parody are misinformation, whether they think that propaganda is misinformation. But the takeaway from this, the data that we crunched, is that there is hardly any consensus about any types of content that are determined as misinformation and disinformation. And that brings to the consequences of the way we treat the content, because not every content can be solved with a blunt instrument of takedown, for example. There are contents that require nuances, and yet the way stakeholders perceive whether this content is a disinformation are still varied. We are also asking stakeholders of different backgrounds whether they think that they’re doing a good job, and whether they think that the other stakeholders are doing a good job. This is one of the backbone of multi-stakeholder collaboration that we try to investigate further, whether the government thinks that society is doing a good job in handling misinformation. but the civil society are thinking that tech platforms are doing a good job in encountering this information. We put up two metrics, perception on performance and transparency, and as it turns out, unsurprisingly, civil society and fact-checkers are the ones that are deemed by stakeholders to be having a good transparency and good performance, followed by international organizations and mass media, and then electoral management bodies, the ICT ministry, and the tech platform are perceived, they could have done more, they would do well to be more transparent and poor performance, and we convey this to them. Lastly, on this part of the research, we also ask that despite the blooming programmatic partnership between stakeholders, we’ve seen many media and fact-checking partnerships going on. 
We’ve seen many initiatives, our initiative is one of them, but can that collaboration continue? Can that collaboration be sustainable? Or what kind of challenges does the collaboration face? Most of the stakeholders would say that at the very essential, government, tech platform, and civil society are having different priorities. And then there’s also a difference in vision and priorities amongst stakeholders, as well as the absence of institutional mechanism that can safeguard the continuation of such collaboration. While we are happy that the collaborations are blooming, sadly, at least in the case of Indonesia, most partnerships remain to be ad hoc and temporal, if you will. So that’s one research. The second research is, we conducted interviews with, I think, 10 countries from the Philippines, Australia, Taiwan, South Korea, Indonesia, Malaysia, whether they have experienced the extent of generative AI to election, and whether the regulation, the electoral regulation, as well as the ICT regulation, are well prepared to overcome the incoming impact of generative AI to election. And what we found is, obviously civil society are at the forefront of generative AI-based disinformation in election, but they are, again, often limited by the lack of stable and continuous funding, short electoral cycle, and weak state partnership. And this table is just to illustrate the variety of AI usage in election that we are telling in more details in each of the country report that hopefully is still available in our booth, as well as the patchy legal responses by the government of state jurisdiction, and the capacity and the focus of the civil society operating within that jurisdiction. So that’s research number two. Research number three, that uneven and patchy landscape is also visible when it comes to online scam, because the way we think about it, online scam is also a form of financial disinformation. 
So we also examine whether each of the countries being examined has an adequate infrastructure, innovation, and data governance, and we see that there is also a patchy terrain in that as well. But what is not patchy is that the more people encounter scams, at least once a month, the less likely it is for them to distrust the internet in general. And this can bring dire consequences to a region that are otherwise blooming with the digital economy as a new catalyst of the economy. And everyone can be a scam victim, regardless of their demographics. The younger people in Singapore, for example, tend to be overconfident with their ability to detect scams, but then again, 30% of the victims in Singapore are aged 29 and below. So it’s not always the digitally illiterate ones that can fall prey to online scams. Foreign-born population in South Korea are vulnerable targets for online scams due to language barrier. So again, we see whether it is an electoral issue or a financial and fraud issue, we see a patchy regulatory response in all countries being represented. And lastly, this is a research on FIMI, which is on foreign information manipulation and influence, a topic of research that is otherwise more present in areas of conflict zones. It’s widely discussed in Eastern Europe, for example, and we try to carry on the discussion in the Asia-Pacific given the increasing geopolitical tension and geopolitical competition in Asia-Pacific. And we see that this blue line is the amount of foreign information manipulation and influence in the digital media, and the orange line is the influence on traditional media. We see a spike in 2022 during the start of Russia-Ukraine war, but we see an overall increasing trend of information manipulation and influence across both channels. 
Not only that, we also see the rise of non-state and unspecified actors that are also participating in influencing the public opinion through various tactics such as, you name it, astroturfing or flooding or just good old disinformation. The spike, the lows and the highs of this graph correspond to the geopolitical flashpoint, if you will, to geopolitical episodes, and we believe that it is also something that needs to be accounted for by policymakers in Asia-Pacific. So that is a slight bit of our research. Where do we go from here?


Patricia Larasgita: All of our research findings and op-eds are available on our website, saferinternetlab.org, or you can simply scan the QR code on the screen. You can also visit our booth at the AGA Village to pick up the printed copies of our publications, to discuss about the issues and connect directly with our team. And we now open the floor for any questions or comments you might have. Thank you. Yes? Okay, the lady in the back.


Audience: Hi, I’m a journalist from Indonesia. I want to run it back to you and also just help me paint the picture. So you guys did research around the election last year, yes? Do you guys run your findings with the government directly, or is there any just with the stakeholder mentioned in the publications in the screen? Just let me know. Thank you.


Beltsazar Krisetya: Thank you for your question. So our primary stakeholders of engagement during the election season are obviously the electoral supervisory body and the electoral commission. So the one that runs the election and the one that supervises the election. Because those two are the direct mandate carrier on how campaigns are being operated, and how violations of campaign, whether through online media and through traditional media, is being investigated. So the advisory that we engage are primarily with the electoral actors, although we were also aware that given the limited regulatory agency of said electoral body, we are also engaging with the ICT ministry, because they’re the ones that are more capable in implementing a more binding regulation related to online ecosystem. So we also engage with the ICT ministry from the previous administration, because now we have a new administration with a newly elected president. So I believe that the engagement, I would say, is twofold back during the election time. One is… in the implementation side, in which we engage more with the electoral bodies, and one is more on the long-term side in the ICT ministry. Thank you.


Vicky Bowman: Hi, I’m Vicky, I’m the Chair of Global Network Initiative, but I’ve worked a lot in Myanmar. The other month I was looking at online romance scams on Facebook, and just sort of randomly seeing whether I could get Facebook to take down what were obviously fraudulent profiles. I noticed there were a lot of a particular, for example, Malaysian influencer, and I just wondered what your studies on online romance scams in the area have shown, both in terms of the responsiveness of platforms, but also in terms of the regulatory framework around this, because it seems to be a bit of a grey area as to whether or not these kind of things are actually illegal, even though they’re obviously setting themselves up for defrauding the people who are going to fall for them.


Beltsazar Krisetya: Thank you, Vicky, and thank you for coming. Indeed, I think we used the term love scam, which I think romance scam is more appropriate. So, we did research and I think interviewed some people who fall prey to romance, to love scam in Philippines, I think. But then we framed the love scam activities as part of the tactics. So, there’s this love scam, but there’s also a business scam, there’s also other methods of scam. I’m not sure if in our book we’ve already printed the quantified version of that research, but we will definitely get back to you on the landscape of the number of people that fall prey, how many people that fall prey to scam actually fall prey to love scam, and the amount of money that they lost from that scam. I would say that the key commonalities between all these methods of scam being examined is overconfidence, I think. Because not only in love scam, but in business scam, a company treasurer was scammed to transfer a huge amount of money through deep fake Zoom by the crime actors. And also in love scam, they are confident to transfer the money because they’ve met the person for several cases. And we did also try to engage with the stakeholders that are responsible for love scam, but across many markets being examined, unfortunately, we did not see a differentiation. Not at all, but we don’t see much differentiation between the types of methods and the types of loss that are entailing each of the scams. So the bottom line here, I think stakeholders are still valuing the financial loss of the scam more than the psychological loss of love scam, as well as the physical loss of love scam, and I think there is definitely a future policy pathway to be advocated in the region. So thank you.


Audience: Thank you very much for the presentation. I enjoyed it very much. Perhaps I would like to ask more about the follow-up actions from the results of your research. It seems like there are some concerns, a risk that generative AI may have influenced the results of our elections. And it’s sad to say that the influence is quite dire because that determines the quality of our democracy. And in that sense, what would be the Indonesian experience to be shared with us in terms of following up from that situation? The way the electoral commissions or the civil society make sure that this kind of practices, the generative AI in terms of deep fakes and disinformation, misinformation can be tackled. Because I remember clearly in 2024, I was in Jakarta, part of the ASEAN parliamentarians’ effort to monitor the election. And in general, it was quite okay, but then when you speak to certain actors, they have grave concerns of how the election was being conducted. I just want to know about the experience of Indonesia in that sense. Thank you so much.


Beltsazar Krisetya: Thank you for the question. Before the answer, I would like to offer some comments. Across many of the areas we examined, there is, I wouldn’t say overconfidence, but a big trust in media and information literacy programs, which is good in itself. But if policy makers focus only on media and information literacy, there is a thin line between educating users and shifting the responsibility onto users to judge for themselves whether a piece of information is factual or not. Regulatory output that focuses mainly on literacy programs needs to be supplemented with other interventions that can stop misinformation right at the point of infection, not only at the point of the recipient. So that’s the first comment. As for how Indonesia handled the previous election, especially with the ongoing use and misuse of generative AI: the government operates under regulation that legitimizes its actions, and that regulation came first; a year later, generative AI was booming. So basically what I’m saying is, I believe we handled the 2024 election using the 2023 playbook, because the playbook was written back then, and the pace of regulation cannot match the pace of use and misuse of generative AI. The same goes for civil society, which tries to go into places where the government or tech platforms cannot. If I’m allowed to criticize fairly, civil society is also still using the 2020 playbook, in which misinformation and disinformation were the primary tactics of the previous election cycle. But in this particular election cycle, I’m not sure misinformation was the primary tactic for public opinion manipulation.
So I believe the message here is: as the threat changes, so does the playbook. Thank you.


Patricia Larasgita: Okay, I think that wraps up our session. Thank you once again for your time and the discussion. We look forward to continuing the conversation at our booth. Thank you. Thank you.



Beltsazar Krisetya

Speech speed

126 words per minute

Speech length

2661 words

Speech time

1263 seconds

Multi-stakeholder and multidisciplinary approach to capture disinformation magnitude across production, recipient, and platform sides

Explanation

Safer Internet Lab employs a comprehensive research approach that examines disinformation from multiple angles – how it’s produced, how recipients respond to it, and how platforms handle it. This approach follows the spirit of the Internet Governance Forum’s multi-stakeholderism to capture the full magnitude and gravity of disinformation.


Evidence

Research conducted on political buzzers/cyber troopers forming sub-industries, nationwide survey on voter susceptibility to misinformation including content from 10 years ago, and research on content moderation evolution and government/civil society influence on social media platforms


Major discussion point

Research Methodology and Approach of Safer Internet Lab


Topics

Sociocultural | Legal and regulatory


Focus on upstream policy gaps and intervention points in disinformation campaigns, particularly electoral disinformation

Explanation

The research initiative specifically targets identifying policy gaps and intervention opportunities at the early stages of disinformation campaigns. This focus was particularly relevant given that 2024 was a major election year across the Asia-Pacific region, including Indonesia.


Evidence

Research on how political buzzers coalesce to form sub-industries and create influence operations during elections, with specific focus on whether similar campaigns from previous elections would be repeated in 2024


Major discussion point

Research Methodology and Approach of Safer Internet Lab


Topics

Sociocultural | Legal and regulatory


Lack of consensus among stakeholders on what constitutes misinformation and disinformation, despite academic definitions

Explanation

Despite clear academic consensus on definitions of misinformation and disinformation, stakeholders across different sectors have varied perceptions of what content actually constitutes these categories. This lack of consensus affects how content is treated and moderated.


Evidence

Expert survey of 200-300 stakeholders from Ministry of ICT, big tech platforms, and civil society asking whether pseudoscience, satire/parody, and propaganda constitute misinformation, showing no consensus on any content types


Major discussion point

Stakeholder Perceptions and Collaboration Challenges


Topics

Sociocultural | Legal and regulatory


Civil society and fact-checkers perceived as most transparent and effective, while tech platforms and government bodies seen as needing improvement

Explanation

Survey results show stakeholders rate civil society and fact-checkers highest for transparency and performance in handling misinformation. Tech platforms, ICT ministry, and electoral management bodies are perceived as needing significant improvement in both areas.


Evidence

Survey results showing civil society and fact-checkers rated highest, followed by international organizations and mass media, while electoral management bodies, ICT ministry, and tech platforms rated as needing more transparency and better performance


Major discussion point

Stakeholder Perceptions and Collaboration Challenges


Topics

Sociocultural | Legal and regulatory


Multi-stakeholder collaborations remain ad hoc and temporary due to differing priorities and the absence of institutional mechanisms

Explanation

Despite the growth of partnerships between media, fact-checkers, and other stakeholders, these collaborations face sustainability challenges. The main barriers are differing priorities among government, tech platforms, and civil society, plus lack of institutional mechanisms to ensure continuity.


Evidence

Survey findings showing stakeholders identify different priorities between government, tech platforms, and civil society as main challenge, along with differences in vision and absence of institutional mechanisms for sustainable collaboration


Major discussion point

Stakeholder Perceptions and Collaboration Challenges


Topics

Legal and regulatory | Sociocultural


Agreed with

– Patricia Larasgita
– Vicky Bowman

Agreed on

Multi-stakeholder collaboration is essential but faces significant challenges


Civil society at forefront of combating AI-based electoral disinformation but limited by funding and weak state partnerships

Explanation

Research across 10 countries found that civil society organizations are leading efforts to address generative AI threats to elections, but they face significant constraints. These limitations include unstable funding, short electoral cycles, and insufficient partnership with government entities.


Evidence

Interviews conducted across Philippines, Australia, Taiwan, South Korea, Indonesia, Malaysia examining generative AI impact on elections and regulatory preparedness


Major discussion point

Generative AI Impact on Elections


Topics

Sociocultural | Legal and regulatory


Patchy and uneven legal responses across countries to address AI usage in elections

Explanation

The research revealed inconsistent and inadequate legal frameworks across different jurisdictions for handling generative AI in electoral contexts. This creates an uneven landscape where some countries are better prepared than others to address AI-related electoral threats.


Evidence

Country reports showing variety of AI usage in elections and patchy legal responses by governments, with varying capacity and focus of civil society across different jurisdictions


Major discussion point

Generative AI Impact on Elections


Topics

Legal and regulatory | Sociocultural


Agreed with

– Vicky Bowman
– Audience

Agreed on

Current regulatory frameworks are inadequate and patchy across the region


Indonesian election handled with outdated regulatory playbook that couldn’t match pace of generative AI development

Explanation

Indonesia’s 2024 election was managed using regulatory frameworks developed in 2023, before the widespread adoption of generative AI. This mismatch between regulatory preparation and technological advancement meant authorities were unprepared for AI-related electoral challenges.


Evidence

Explanation that government regulations legitimizing actions were created in 2023, but generative AI became prominent later, creating a gap between regulatory readiness and actual technological threats during the 2024 election


Major discussion point

Generative AI Impact on Elections


Topics

Legal and regulatory | Sociocultural


Agreed with

– Vicky Bowman
– Audience

Agreed on

Current regulatory frameworks are inadequate and patchy across the region


Frequent scam encounters paradoxically reduce general internet distrust, with dire consequences for the region’s growing digital economy

Explanation

Research found that people who encounter online scams frequently (at least once a month) paradoxically become less distrustful of the internet in general. This desensitization can have dire consequences for the Asia-Pacific region’s growing digital economy, which depends on users accurately judging the safety of online platforms.


Evidence

Research showing that frequent scam encounters correlate with reduced general internet distrust, with implications for a region where the digital economy serves as a new catalyst of the economy


Major discussion point

Online Scams and Financial Disinformation


Topics

Economic | Cybersecurity


Agreed with

– Vicky Bowman

Agreed on

Online scams and fraud represent serious threats requiring better understanding and response


Overconfidence affects all demographics – younger people and foreign-born populations particularly vulnerable

Explanation

The research challenges assumptions about who falls victim to online scams, showing that digital literacy doesn’t guarantee protection. Younger people often overestimate their ability to detect scams, while language barriers make foreign-born populations especially vulnerable.


Evidence

Singapore data showing younger people tend to be overconfident in scam detection abilities, yet 30% of victims are aged 29 and below; South Korea data showing foreign-born population vulnerable due to language barriers


Major discussion point

Online Scams and Financial Disinformation


Topics

Economic | Cybersecurity


Agreed with

– Vicky Bowman

Agreed on

Online scams and fraud represent serious threats requiring better understanding and response


Stakeholders focus more on financial losses than psychological and physical impacts of romance scams

Explanation

Current policy approaches prioritize the monetary damage from scams while undervaluing the psychological trauma and physical consequences that victims of romance scams experience. This narrow focus limits the effectiveness of anti-scam policies and support systems.


Evidence

Observation that across examined markets, stakeholders don’t differentiate much between scam types and methods, valuing financial loss over the psychological and physical impacts of romance scams


Major discussion point

Online Scams and Financial Disinformation


Topics

Economic | Human rights


Agreed with

– Vicky Bowman

Agreed on

Online scams and fraud represent serious threats requiring better understanding and response


Disagreed with

– Vicky Bowman

Disagreed on

Effectiveness of current regulatory approaches to romance scams


Direct engagement with electoral supervisory bodies and commissions as primary stakeholders during election season

Explanation

During Indonesia’s election period, Safer Internet Lab focused their engagement efforts on the electoral supervisory body and electoral commission as these are the institutions with direct mandates for overseeing campaign operations and investigating violations in both online and traditional media.


Evidence

Explanation that these two bodies are the direct mandate carriers for campaign operations and violation investigations across online and traditional media platforms


Major discussion point

Government Engagement and Implementation


Topics

Legal and regulatory | Sociocultural


Engagement with ICT ministry for long-term regulatory solutions given their broader online ecosystem authority

Explanation

Beyond electoral bodies, the research team also engaged with the ICT ministry because of their greater regulatory capacity for implementing binding regulations related to the online ecosystem. This engagement focused on long-term solutions rather than immediate election-period responses.


Evidence

Recognition that electoral bodies have limited regulatory agency, while the ICT ministry has more capability for implementing binding online regulations, leading to engagement with the previous administration’s ICT ministry


Major discussion point

Government Engagement and Implementation


Topics

Legal and regulatory | Infrastructure


Need to balance media literacy programs with interventions that stop misinformation at source rather than shifting responsibility to users

Explanation

While media literacy programs are valuable, there’s a risk of over-relying on them and inadvertently shifting the burden of identifying misinformation entirely to users. Effective policy requires interventions that address misinformation at its point of origin, not just at the recipient level.


Evidence

Observation that policy makers focusing only on media literacy programs risk shifting responsibility to users to make their own judgments, which needs to be supplemented with interventions at the point of infection


Major discussion point

Policy Recommendations and Future Directions


Topics

Sociocultural | Legal and regulatory


Importance of updating response strategies as threats evolve beyond traditional misinformation tactics

Explanation

Both government and civil society responses need to evolve as the nature of information manipulation changes. The 2024 election cycle may have involved different primary tactics than previous elections, requiring updated approaches rather than relying on outdated playbooks.


Evidence

Criticism that civil society still uses 2020 playbook when misinformation may not be the primary tactic in the current election cycle, emphasizing that as threats change, so must the playbook


Major discussion point

Policy Recommendations and Future Directions


Topics

Sociocultural | Legal and regulatory



Patricia Larasgita

Speech speed

115 words per minute

Speech length

270 words

Speech time

140 seconds

Regional collaboration through networks and engagement with government, civil society, and academia across Asia-Pacific

Explanation

Safer Internet Lab actively participates in multiple international networks and engages with diverse stakeholders across the Asia-Pacific region to strengthen collaboration and knowledge exchange. This approach recognizes that ensuring internet safety requires coordinated action across sectors and borders.


Evidence

Membership in Network of Global Centers, Association of Internet Researchers, and International Panel on the Information Environment; engagement with government institutions, civil society, and academia across Asia-Pacific


Major discussion point

Research Methodology and Approach of Safer Internet Lab


Topics

Sociocultural | Legal and regulatory


Agreed with

– Beltsazar Krisetya
– Vicky Bowman

Agreed on

Multi-stakeholder collaboration is essential but faces significant challenges


Four distinct research areas: stakeholder perceptions, generative AI in elections, online scams, and foreign information manipulation

Explanation

The lab’s current research focuses on key areas affecting internet safety in Asia-Pacific, including foreign information manipulation and interference, deepfakes in online fraud and scams, and the democratic impact of generative AI. This comprehensive approach addresses multiple dimensions of information security threats.


Evidence

Specific mention of research focus on foreign information manipulation and interference, deepfakes in online fraud and scams, and democratic impact of generative AI


Major discussion point

Research Methodology and Approach of Safer Internet Lab


Topics

Sociocultural | Cybersecurity | Legal and regulatory



Vicky Bowman

Speech speed

175 words per minute

Speech length

136 words

Speech time

46 seconds

Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams

Explanation

Based on personal experience trying to get Facebook to remove obviously fraudulent profiles used in romance scams, there are concerns about how responsive platforms are to these reports. The regulatory framework around these activities appears to be unclear, creating challenges for effective enforcement.


Evidence

Personal experience attempting to get Facebook to take down fraudulent profiles, observation of Malaysian influencer profiles being used in romance scams


Major discussion point

Platform Responsiveness and Regulatory Frameworks


Topics

Legal and regulatory | Economic


Agreed with

– Beltsazar Krisetya

Agreed on

Online scams and fraud represent serious threats requiring better understanding and response


Disagreed with

– Beltsazar Krisetya

Disagreed on

Effectiveness of current regulatory approaches to romance scams


Concerns about whether romance scam activities are adequately addressed as illegal under current frameworks

Explanation

There appears to be a regulatory grey area regarding romance scam activities, where it’s unclear whether setting up fraudulent profiles with intent to defraud constitutes illegal activity under current legal frameworks. This ambiguity may hinder effective prosecution and prevention efforts.


Evidence

Observation that romance scam setup activities seem to be in a grey area regarding their legal status, even though they obviously set up situations for defrauding victims


Major discussion point

Platform Responsiveness and Regulatory Frameworks


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Beltsazar Krisetya
– Audience

Agreed on

Current regulatory frameworks are inadequate and patchy across the region



Audience

Speech speed

100 words per minute

Speech length

248 words

Speech time

148 seconds

Concerns about generative AI’s dire influence on election results and democratic quality

Explanation

There are significant concerns that generative AI technologies, particularly deepfakes and AI-generated misinformation, may have influenced election outcomes in ways that undermine democratic processes. The impact is considered particularly serious because it directly affects the quality and integrity of democratic governance.


Evidence

Reference to being in Jakarta as part of ASEAN parliamentarians’ effort to monitor the 2024 election, noting that while the election seemed okay generally, certain actors had grave concerns about how it was conducted


Major discussion point

Electoral Integrity and Democratic Quality


Topics

Sociocultural | Legal and regulatory


Agreed with

– Beltsazar Krisetya
– Vicky Bowman

Agreed on

Current regulatory frameworks are inadequate and patchy across the region


Questions about Indonesian experience in tackling deepfakes and misinformation during 2024 elections

Explanation

There is interest in understanding how Indonesia’s electoral commissions and civil society organizations specifically addressed the challenges of deepfakes and misinformation during the 2024 election cycle. This seeks to identify best practices and lessons learned from Indonesia’s experience.


Evidence

Specific mention of monitoring the Indonesian election as part of ASEAN parliamentarians’ effort and hearing concerns from various actors about election conduct


Major discussion point

Electoral Integrity and Democratic Quality


Topics

Sociocultural | Legal and regulatory


Inquiry about whether research findings were shared directly with government stakeholders

Explanation

A journalist asked for clarification about whether the Safer Internet Lab’s research findings, particularly those related to the 2024 election, were directly communicated to government officials and other stakeholders mentioned in their publications.


Evidence

Direct question from Indonesian journalist asking about engagement with government and stakeholders mentioned in publications


Major discussion point

Government Engagement and Implementation


Topics

Legal and regulatory | Sociocultural


Agreements

Agreement points

Multi-stakeholder collaboration is essential but faces significant challenges

Speakers

– Beltsazar Krisetya
– Patricia Larasgita
– Vicky Bowman

Arguments

Multi-stakeholder collaborations remain ad hoc and temporary due to differing priorities and the absence of institutional mechanisms


Regional collaboration through networks and engagement with government, civil society, and academia across Asia-Pacific


Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams


Summary

All speakers acknowledge that while multi-stakeholder collaboration is necessary for addressing internet safety issues, there are substantial barriers including different priorities among stakeholders, lack of institutional mechanisms, and unclear regulatory frameworks that hinder effective cooperation.


Topics

Legal and regulatory | Sociocultural


Current regulatory frameworks are inadequate and patchy across the region

Speakers

– Beltsazar Krisetya
– Vicky Bowman
– Audience

Arguments

Patchy and uneven legal responses across countries to address AI usage in elections


Indonesian election handled with outdated regulatory playbook that couldn’t match pace of generative AI development


Concerns about whether romance scam activities are adequately addressed as illegal under current frameworks


Concerns about generative AI’s dire influence on election results and democratic quality


Summary

There is consensus that existing regulatory frameworks across the Asia-Pacific region are insufficient, inconsistent, and unable to keep pace with technological developments, particularly regarding AI and online fraud.


Topics

Legal and regulatory | Sociocultural


Online scams and fraud represent serious threats requiring better understanding and response

Speakers

– Beltsazar Krisetya
– Vicky Bowman

Arguments

Frequent scam encounters paradoxically reduce general internet distrust, with dire consequences for the region’s growing digital economy


Overconfidence affects all demographics – younger people and foreign-born populations particularly vulnerable


Stakeholders focus more on financial losses than psychological and physical impacts of romance scams


Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams


Summary

Both speakers recognize that online scams pose significant threats not only financially but also to trust in digital systems, affecting diverse demographics and requiring more comprehensive policy responses that address psychological impacts and platform accountability.


Topics

Economic | Cybersecurity | Legal and regulatory


Similar viewpoints

Both speakers from Safer Internet Lab emphasize the importance of comprehensive, multi-stakeholder approaches to research and policy development, recognizing that internet safety issues require collaboration across sectors and borders.

Speakers

– Beltsazar Krisetya
– Patricia Larasgita

Arguments

Multi-stakeholder and multidisciplinary approach to capture disinformation magnitude across production, recipient, and platform sides


Regional collaboration through networks and engagement with government, civil society, and academia across Asia-Pacific


Topics

Sociocultural | Legal and regulatory


Both acknowledge serious concerns about the impact of generative AI and deepfakes on electoral integrity, recognizing that current responses are inadequate and that civil society faces significant constraints in addressing these challenges.

Speakers

– Beltsazar Krisetya
– Audience

Arguments

Civil society at forefront of combating AI-based electoral disinformation but limited by funding and weak state partnerships


Indonesian election handled with outdated regulatory playbook that couldn’t match pace of generative AI development


Concerns about generative AI’s dire influence on election results and democratic quality


Questions about Indonesian experience in tackling deepfakes and misinformation during 2024 elections


Topics

Sociocultural | Legal and regulatory


Unexpected consensus

Overconfidence as a universal vulnerability factor in online scams

Speakers

– Beltsazar Krisetya
– Vicky Bowman

Arguments

Overconfidence affects all demographics – younger people and foreign-born populations particularly vulnerable


Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams


Explanation

There was unexpected consensus that overconfidence, rather than lack of digital literacy, is a key factor making people vulnerable to online scams across all demographics. This challenges common assumptions about who falls victim to online fraud and suggests that traditional education-based approaches may be insufficient.


Topics

Economic | Cybersecurity


Stakeholder perception gaps despite academic consensus on definitions

Speakers

– Beltsazar Krisetya
– Vicky Bowman

Arguments

Lack of consensus among stakeholders on what constitutes misinformation and disinformation, despite academic definitions


Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams


Explanation

Both speakers highlighted that despite clear academic definitions and frameworks, practical implementation faces significant challenges due to varied stakeholder perceptions and unclear regulatory boundaries. This suggests a fundamental disconnect between theoretical understanding and practical application.


Topics

Legal and regulatory | Sociocultural


Overall assessment

Summary

The discussion revealed strong consensus on several critical issues: the necessity but difficulty of multi-stakeholder collaboration, the inadequacy of current regulatory frameworks across the Asia-Pacific region, and the serious threats posed by online scams and AI-generated content to democratic processes and digital trust. There was also agreement on the need for more comprehensive approaches that address both technical and human factors in internet safety.


Consensus level

High level of consensus on problem identification and challenges, but limited discussion of specific solutions. The implications suggest that while stakeholders agree on the severity and nature of internet safety challenges in the Asia-Pacific region, there is still significant work needed to develop effective, coordinated responses that can keep pace with technological developments and address the complex interplay of technical, regulatory, and social factors.


Differences

Different viewpoints

Effectiveness of current regulatory approaches to romance scams

Speakers

– Beltsazar Krisetya
– Vicky Bowman

Arguments

Stakeholders focus more on financial losses than psychological and physical impacts of romance scams


Questions about platform responsiveness to fraudulent profiles and unclear regulatory frameworks for romance scams


Summary

Vicky Bowman highlighted concerns about platform responsiveness and regulatory clarity for romance scams, while Beltsazar acknowledged that stakeholders don’t adequately differentiate between scam types and undervalue psychological impacts. Their perspectives differ on where the primary problems lie – platform enforcement versus stakeholder prioritization.


Topics

Legal and regulatory | Economic | Cybersecurity


Unexpected differences

Scope of regulatory responsibility for romance scams

Speakers

– Beltsazar Krisetya
– Vicky Bowman

Arguments

Stakeholders focus more on financial losses than psychological and physical impacts of romance scams


Concerns about whether romance scam activities are adequately addressed as illegal under current frameworks


Explanation

This disagreement was unexpected because both speakers were addressing the same problem of romance scams, but approached it from different angles. Vicky focused on the legal ambiguity and platform enforcement issues, while Beltsazar emphasized the need for stakeholders to recognize non-financial impacts. Their different professional backgrounds (platform governance vs. research) led to different problem framings.


Topics

Legal and regulatory | Economic | Human rights


Overall assessment

Summary

The discussion showed minimal direct disagreement, with most differences arising from varying perspectives on implementation approaches rather than fundamental disagreements on goals. The main areas of difference were around regulatory effectiveness for romance scams and the adequacy of current responses to AI threats in elections.


Disagreement level

Low to moderate disagreement level. The speakers generally aligned on identifying problems but differed on solutions and priorities. This suggests a constructive environment where stakeholders recognize similar challenges but bring different expertise and perspectives to addressing them. The implications are positive for collaborative policy development, as the disagreements appear to be complementary rather than conflicting.




Takeaways

Key takeaways

There is a significant lack of consensus among stakeholders (government, tech platforms, civil society) on what constitutes misinformation and disinformation, despite clear academic definitions


Multi-stakeholder collaborations in the information ecosystem remain largely ad hoc and temporary due to differing priorities and the absence of institutional mechanisms for sustainability


Civil society and fact-checkers are perceived as most effective and transparent actors, while tech platforms and government bodies are seen as needing improvement in performance and transparency


Current regulatory frameworks across Asia-Pacific countries are patchy and inadequate for addressing emerging threats like generative AI in elections and online scams


The pace of regulatory development cannot match the rapid evolution of threats – Indonesia’s 2024 election was handled with a 2023 regulatory playbook while generative AI was rapidly advancing


Online scams affect all demographics regardless of digital literacy levels, with overconfidence being a key vulnerability factor


Frequent exposure to scams paradoxically reduces general distrust of the internet, a desensitization that could threaten the growth of digital economies in the region


There is a need to balance media literacy programs with upstream interventions that address misinformation at its source rather than shifting all responsibility to users


Foreign information manipulation and interference is increasing in Asia-Pacific, corresponding with geopolitical tensions and flashpoints


Resolutions and action items

Safer Internet Lab will continue making research findings available through their website (saferinternetlab.org) and booth for stakeholder access


The lab will provide specific country reports to interested parties working in relevant areas


Follow-up engagement promised with Vicky Bowman regarding quantified data on romance scam research and financial/psychological impact analysis


Continued engagement with electoral bodies and ICT ministries for both implementation and long-term regulatory solutions


Unresolved issues

How to create sustainable institutional mechanisms for multi-stakeholder collaboration beyond ad hoc partnerships


How to achieve consensus among stakeholders on definitions of misinformation and disinformation for effective policy implementation


How to develop regulatory frameworks that can keep pace with rapidly evolving technological threats like generative AI


How to address the grey areas in legal frameworks regarding romance scams and similar fraudulent activities


How to balance platform responsiveness with regulatory oversight for emerging scam tactics


How to differentiate policy responses for different types of scams beyond just focusing on financial losses


How to update civil society and government response strategies beyond traditional misinformation playbooks


How to address the psychological and physical impacts of romance scams, not just financial losses


Suggested compromises

Balancing media literacy education with upstream intervention strategies rather than relying solely on user education


Engaging with both electoral bodies for immediate implementation needs and ICT ministries for long-term regulatory solutions


Focusing on both financial and non-financial impacts of scams to create more comprehensive policy responses


Thought provoking comments

There is hardly any consensus about any types of content that are determined as misinformation and disinformation. And that brings to the consequences of the way we treat the content, because not every content can be solved with a blunt instrument of takedown, for example. There are contents that require nuances, and yet the way stakeholders perceive whether this content is a disinformation are still varied.

Speaker

Beltsazar Krisetya


Reason

This comment challenges the fundamental assumption that there’s clarity around what constitutes misinformation. It reveals a critical gap between academic consensus and stakeholder understanding, highlighting how definitional ambiguity undermines effective content moderation strategies.


Impact

This insight reframes the entire discussion from focusing on solutions to first addressing the foundational problem of definitional consensus. It introduces the concept that content moderation requires nuanced approaches rather than blanket policies, setting up the framework for understanding why multi-stakeholder collaboration faces challenges.


The more people encounter scams, at least once a month, the less likely it is for them to distrust the internet in general. And this can bring dire consequences to a region that are otherwise blooming with the digital economy as a new catalyst of the economy.

Speaker

Beltsazar Krisetya


Reason

This counterintuitive finding challenges conventional wisdom about how exposure to online threats affects user behavior. It reveals a paradoxical relationship where familiarity with scams breeds complacency rather than caution, with significant economic implications for the Asia-Pacific region.


Impact

This observation shifts the conversation from viewing scams as isolated security issues to understanding them as systemic threats to digital economic development. It introduces the concept that repeated exposure can desensitize users, requiring different intervention strategies than traditional awareness campaigns.


There is a thin line between educating users and shifting the responsibility to users to make their own judgment whether this information is factual or not… it needs to be superseded with other interventions that can stop misinformation right at the point of infection and not only at the point of the recipient.

Speaker

Beltsazar Krisetya


Reason

This comment critically examines the limitations of media literacy as a primary solution, arguing that over-reliance on user education essentially shifts responsibility away from platforms and regulators. It advocates for upstream interventions rather than downstream solutions.


Impact

This insight redirected the discussion toward systemic solutions and challenged the audience member’s implicit assumption about media literacy being sufficient. It introduced the concept of ‘point of infection’ versus ‘point of recipient’ interventions, providing a new framework for thinking about misinformation countermeasures.


I believe that the way we handled the 2024 election was still using the 2023 playbook… as the threat changes, so does the playbook.

Speaker

Beltsazar Krisetya


Reason

This metaphor effectively captures the mismatch between rapidly evolving technological threats and slower-adapting institutional responses. It highlights how both government and civil society can become trapped in outdated approaches when facing new challenges like generative AI.


Impact

This comment provided a concrete framework for understanding regulatory lag and institutional inertia. It shifted the discussion from criticizing specific responses to understanding the structural challenges of keeping pace with technological change, offering a more nuanced view of why election integrity measures may fall short.


It seems to be a bit of a grey area as to whether or not these kind of things are actually illegal, even though they’re obviously setting themselves up for defrauding the people who are going to fall for them.

Speaker

Vicky Bowman


Reason

This comment identifies a critical gap between the obvious harm of romance scams and the unclear legal framework for addressing them. It highlights the challenge of regulating preparatory or setup activities that aren’t technically fraudulent until money is transferred.


Impact

This observation introduced a new dimension to the scam discussion, moving beyond detection and response to prevention and legal frameworks. It prompted deeper consideration of how regulatory frameworks struggle with emerging forms of online harm that don’t fit traditional legal categories.


Overall assessment

These key comments fundamentally shaped the discussion by challenging assumptions and introducing systemic perspectives. Rather than focusing on surface-level solutions, they revealed underlying structural problems: definitional ambiguity in misinformation, the paradox of scam familiarity, the limitations of user-focused interventions, regulatory lag, and legal grey areas. The comments collectively shifted the conversation from tactical responses to strategic thinking about information ecosystem governance. They demonstrated how research findings can complicate seemingly straightforward policy approaches, forcing stakeholders to grapple with the complexity and interconnectedness of digital safety challenges in the Asia-Pacific region. The discussion evolved from a presentation of findings to a more nuanced exploration of why traditional approaches may be insufficient for emerging threats.


Follow-up questions

What is the quantified breakdown of different types of scams, particularly romance/love scams versus other methods, and how much money people lost from each type?

Speaker

Vicky Bowman


Explanation

This information was requested but not available in the current research findings, indicating a need for more detailed quantitative analysis of scam types and financial impacts


What is the responsiveness of platforms to taking down fraudulent profiles used in romance scams?

Speaker

Vicky Bowman


Explanation

This addresses the effectiveness of platform content moderation for scam prevention, which is crucial for understanding how well current systems work


What is the regulatory framework around romance scams, and are these activities actually illegal in different jurisdictions?

Speaker

Vicky Bowman


Explanation

This highlights a grey area in current legal frameworks that needs clarification for effective policy responses


How can stakeholders better address the psychological and physical losses from love scams, not just financial losses?

Speaker

Beltsazar Krisetya


Explanation

Current policy responses focus primarily on financial damage while neglecting other significant impacts of romance scams


How can regulatory frameworks be updated to match the pace of generative AI development and misuse?

Speaker

Beltsazar Krisetya


Explanation

There’s a significant gap between the speed of technological advancement and regulatory adaptation that needs to be addressed


What new playbooks and tactics should be developed for civil society and government to address evolving threats beyond traditional misinformation?

Speaker

Beltsazar Krisetya


Explanation

Current approaches are based on outdated threat models and need updating to address new forms of information manipulation


How can sustainable institutional mechanisms be created to support multi-stakeholder collaboration beyond ad hoc partnerships?

Speaker

Beltsazar Krisetya


Explanation

Research findings showed that most collaborations remain temporary and lack institutional support for continuity


How can interventions be developed to stop misinformation at the point of infection rather than just focusing on recipient education?

Speaker

Beltsazar Krisetya


Explanation

There’s an over-reliance on media literacy programs which may shift responsibility to users rather than addressing root causes


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.