Day 0 Event #75 Addressing Information Manipulation in Southeast Asia

15 Dec 2024 13:30h - 15:00h

Session at a Glance

Summary

This discussion focused on foreign information manipulation and interference (FIMI) in Southeast Asian countries. Experts from Indonesia, Australia, the Philippines, and Vietnam shared insights on the information landscape and challenges in their respective countries. They highlighted how disinformation, both domestic and foreign, impacts public opinion and political processes.

The speakers noted that while disinformation is widely recognized as a problem, FIMI is not consistently perceived as a threat across Southeast Asian nations. They discussed various approaches to combating disinformation, including government regulations, platform accountability, and digital literacy campaigns. However, they also acknowledged the difficulty of balancing effective governance with preserving democratic freedoms and free speech.

The discussion revealed that the sources and nature of disinformation vary across countries, with some facing more domestic issues while others contend with foreign interference. The rise of generative AI and deepfakes was identified as an emerging challenge, particularly in election contexts. The speakers emphasized the need for multi-stakeholder approaches involving governments, civil society, and tech platforms to address these complex issues.

Questions from the audience prompted discussions on the real-world impacts of disinformation, the role of social media platforms, and the challenges of determining who should be the arbiter of truth. The speakers agreed on the importance of regional cooperation and inter-regional dialogue to tackle FIMI effectively. They also highlighted the need for context-specific solutions and the challenges of implementing uniform approaches across diverse political systems in Southeast Asia.

Key points

Major discussion points:

– The information landscape and challenges with disinformation and foreign interference in Southeast Asian countries such as Indonesia, the Philippines, and Vietnam

– Government and civil society responses to combat disinformation and foreign information manipulation

– The role of social media platforms and need for better content moderation

– Balancing regulation of disinformation with freedom of expression

– The need for regional cooperation and multi-stakeholder approaches

Overall purpose:

The goal of this discussion was to examine the issue of foreign information manipulation and interference (FIMI) in Southeast Asia, share case studies from different countries, and explore potential solutions and best practices for addressing this challenge.

Tone:

The overall tone was academic and analytical, with speakers presenting research findings and policy perspectives in a neutral, factual manner. There was a sense of concern about the impacts of disinformation, but the tone remained measured and solution-oriented throughout. The Q&A portion allowed for some more pointed questions and debate, but the tone remained largely collegial and constructive.

Speakers

– BELTSAZAR KRISETYA: Researcher from CSIS Indonesia, moderator

– PIETER ALEXANDER PANDIE: Researcher at the Safer Internet Lab and Department of International Relations at CSIS Indonesia

– FITRI BINTANG TIMUR (FITRIANI): Senior Analyst at the Australian Strategic Policy Institute (ASPI)

– MARIA ELIZE H. MENDOZA: Assistant Professor, Department of Political Science, University of the Philippines Diliman

– BICH TRAN: Postdoctoral Fellow, Lee Kuan Yew School of Public Policy, National University of Singapore

Additional speakers:

– Koichiro: Cybersecurity expert from Japan

– Luisa: Advisor for the German-Brazilian Digital Dialogue Initiative

– Nidhi: Audience member

– Eliza: From Vietnam, working in Germany

– Fawaz: From Center for Communication and Governance, New Delhi

Full session report

Foreign Information Manipulation and Interference in Southeast Asia: Challenges and Responses

This discussion brought together experts from Indonesia, Australia, the Philippines, and Vietnam to examine the issue of foreign information manipulation and interference (FIMI) in Southeast Asia. The speakers shared insights on the information landscape and challenges in their respective countries, highlighting how disinformation, both domestic and foreign, impacts public opinion and political processes.

Research Initiatives and Information Landscape

Beltsazar Krisetya introduced the Safer Internet Lab (SAIL) research program, which focuses on studying online harms, platform governance, and digital rights. SAIL collaborates with various stakeholders, including civil society organizations, to address these issues.

Pieter Alexander Pandie presented findings from an Indonesian case study, drawing on a database of FIMI instances in Southeast Asia from 2019 to 2024. He noted the increasing use of AI-generated disinformation in elections, including a deepfake video of a former Indonesian president.
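The session does not spell out how SAIL's database is structured, but the categorisation Pandie describes (three influence channels, open-source records, optional attribution, coverage from 2019 to 2024) can be made concrete with a small sketch. The following Python is purely illustrative; every field and category name is an assumption drawn from the talk, not SAIL's actual schema (requires Python 3.10+ for the `str | None` annotation).

```python
# Illustrative only: the session does not disclose SAIL's actual schema.
# Field names and categories below are assumptions based on the talk's
# description (three influence channels, open-source records, 2019-2024).
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    TRADITIONAL = "traditional_media"  # paid placements, co-opted journalists
    DIGITAL = "digital_media"          # troll/bot networks, coordinated inauthentic behavior
    OFFLINE = "offline"                # diplomatic influence, economic investment

@dataclass
class Incident:
    year: int
    country: str                   # targeted Southeast Asian state
    channel: Channel
    attributed_actor: str | None   # None when no public attribution exists
    source_url: str                # open-source report the record is drawn from

def incidents_per_year(records: list[Incident]) -> Counter:
    """Count recorded incidents by year, e.g. to compare 2019-2021 vs 2022-2024."""
    return Counter(r.year for r in records)

def foreign_attributed(records: list[Incident]) -> list[Incident]:
    """Keep only records with a publicly attributed foreign actor."""
    return [r for r in records if r.attributed_actor is not None]
```

On records like these, the "tale of two halves" finding described later in the session would amount to comparing `incidents_per_year` counts for 2019-2021 against 2022-2024, while the rarity of attribution shows up as `foreign_attributed` returning few records.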

Bich Tran outlined Vietnam’s information landscape, consisting of three main components: domestic media, foreign media with Vietnamese language service, and social media. She highlighted Vietnam’s concerns about China’s disinformation campaigns regarding South China Sea disputes.

Maria Elize H. Mendoza described the Philippines’ information ecosystem as saturated with “independent” media practitioners spreading disinformation, mentioning AI-generated audio of the current Philippine president as an example.

Government and Societal Responses

The speakers discussed various approaches to combating disinformation, including government regulations, platform accountability, and digital literacy campaigns. However, they also acknowledged the difficulties in balancing effective governance with preserving democratic freedoms and free speech.

There were notable differences in how governments approach the issue. Bich Tran mentioned that Vietnam created Task Force 47 to counter “wrong views” on the internet, taking a more active and restrictive approach. In contrast, Maria Elize H. Mendoza stated that the Philippine government has failed to effectively address electoral disinformation, leading to civil society taking on more responsibility.

Fitri Bintang Timur (Fitriani) shared information about the ASEAN task force on countering fake news and its guidelines, highlighting regional efforts to address the issue.

The speakers agreed on the need for a multi-stakeholder approach involving government, civil society, and tech platforms to address disinformation effectively. They also emphasised the importance of regional cooperation and intelligence sharing, particularly given the disparities in cybersecurity capabilities among Southeast Asian nations.

Challenges in Combating Disinformation

Several key challenges were identified in the fight against disinformation:

1. Defining and attributing foreign information manipulation and interference consistently, with a need for context-specific definitions for Southeast Asia or the Asia-Pacific region

2. Balancing political stability concerns with freedom of expression

3. Addressing the lack of digital literacy, which exacerbates susceptibility to disinformation

4. Combating confirmation bias, which makes people susceptible to believing disinformation

5. Dealing with the rise of generative AI and deepfakes, particularly in election contexts

6. Potential misuse of anti-fake news laws to infringe on freedom of speech

The speakers agreed that technical solutions alone are insufficient to combat disinformation. They highlighted the need to consider sociological factors and implement a more holistic approach.

Recommendations and Future Directions

The discussion yielded several recommendations for addressing disinformation:

1. Develop a Southeast Asian or Asia-Pacific specific definition for FIMI

2. Strengthen regional cooperation and intelligence sharing on disinformation issues

3. Incorporate digital literacy education at all levels of schooling

4. Engage in multi-stakeholder and inter-regional cooperation to research disinformation and its real-world impacts

5. Implement voluntary codes for tech platforms while maintaining government’s ability to intervene if needed

6. Balance effective governance of information ecosystems with protections for democratic freedoms and civil liberties

7. Encourage tech platforms to address confirmation bias through algorithm transparency (a minimal illustrative sketch of platform-side labelling and ranking follows below)

The speakers emphasised the need for context-specific solutions, acknowledging that a one-size-fits-all approach to combating disinformation in Southeast Asia may not be effective.
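Recommendations 5 and 7 touch on platform-side mechanics. As a minimal sketch of what source labelling and credibility-first ranking could look like, here is a hypothetical example; the trusted-domain allowlist and the upstream AI-content flag are invented for illustration and do not reflect any real platform's implementation.

```python
# Hypothetical sketch of source labelling and credibility-first ranking.
# TRUSTED_DOMAINS and the ai_generated flag are invented examples; a real
# platform would rely on provenance metadata and classifiers instead.
from dataclasses import dataclass

TRUSTED_DOMAINS = {"who.int", "comelec.gov.ph", "kominfo.go.id"}  # example allowlist

@dataclass
class Post:
    url: str
    domain: str
    ai_generated: bool  # assumed output of an upstream detector
    engagement: int

def label(post: Post) -> list[str]:
    """Attach user-facing labels of the kind discussed in the session."""
    labels = []
    if post.ai_generated:
        labels.append("AI-generated content")
    if post.domain in TRUSTED_DOMAINS:
        labels.append("Trusted source")
    return labels

def rank(posts: list[Post]) -> list[Post]:
    """Surface trusted domains first; within each group, sort by engagement."""
    return sorted(posts, key=lambda p: (p.domain not in TRUSTED_DOMAINS, -p.engagement))
```

The point of the sketch is that labelling and ranking are cheap once a trust signal exists; the hard, contested part, as the panel notes, is deciding who maintains that signal, that is, who gets to be the arbiter of truth.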

Unresolved Issues and Future Considerations

Several issues remained unresolved and warrant further discussion:

1. How to effectively regulate tech platforms without infringing on freedom of speech

2. Who should be the arbiter of truth in determining what constitutes disinformation

3. How to address the broader sociological problem of confirmation bias and incentives for spreading disinformation

4. How to improve the effectiveness of digital literacy campaigns, especially for those who haven’t formed their opinions yet

5. How multiple countries can work together to more effectively demand action from tech platforms in addressing disinformation

6. Determining the best platform for addressing FIMI in the Asia-Pacific region, as raised by Koichiro from Japan

In conclusion, the discussion highlighted the complex and multifaceted nature of foreign information manipulation and interference in Southeast Asia. While there was consensus on the need for collaborative, multi-stakeholder approaches, the speakers also acknowledged the challenges in implementing uniform solutions across diverse political systems and information landscapes. As the threat of disinformation continues to evolve, particularly with the rise of AI-generated content, ongoing regional cooperation and adaptive strategies will be crucial in addressing this pressing issue.

Session Transcript

BELTSAZAR KRISETYA: to Alexander Pandie, my colleague, researcher from CSIS Indonesia as well, and also Dr Bich Tran, postdoctoral fellow, Lee Kuan Yew School of Public Policy, National University of Singapore. And also joining us online is Maria Mendoza, Assistant Professor, Department of Political Science, University of the Philippines, as well as Dr Fitriani, Senior Analyst at the Australian Strategic Policy Institute, or ASPI. Okay, before we begin our session, kindly allow me to provide a little bit of context about who we are as the organisers, and also why we picked this topic to be presented among the other research projects that we are conducting in Southeast Asia and the Pacific in general. So, the Safer Internet Lab is a research program that was co-constructed, if you will, or co-conceived by CSIS, our home institution, in partnership with Google, Google Indonesia back then, followed by Google Asia Pacific later on. It is a research hub that convenes researchers and also practitioners working on the information ecosystem. So, in the first year, we tried to capture the whole supply chain, if you will, of disinformation. We conducted a kind of anthropological research on disinformation actors; we tried to cover how buzzers or cyber troopers and bots conducted influence operation campaigns in Indonesia. We also conducted user-centered research, surveying public susceptibility to disinformation, to promote the balance between digital literacy and political literacy. And we also conducted platform-facing research, because we wanted to explore further which co-governance models are acceptable, yield better responses and mitigations, and can bring government actors, tech platforms, as well as civil society into one forum and one institution. We've been doing this for the second year in a row now, concurrent with the 2024 general elections that were conducted in Indonesia, and so we collaborated a lot with information actors as well as electoral actors in Indonesia. We also shaped the dialogue with international communities, joining as speakers and participating in the UNESCO forum, UN forums, and also several diplomatic embassies. We've also hosted an academic conference on disinformation in Indonesia, as well as publishing our reports; you can find the printed versions at our booth just outside of this room. We've set up the booth for the entire IGF 2024, so feel free to drop in anytime. And for this year, 2024 going forward, we will be focusing on three research streams. The first is the impact of deepfakes on online fraud: how generative AI would worsen the topography of online scams in Southeast Asia. We also take a closer look into the impact of disinformation on democratic resilience: what is the net sum for democracy after a series of electoral tsunamis, if you will, in 2024, and where does information resilience play a part in this. And lastly, the one that we are going to present on this occasion is on information manipulation and interference. We are also a part of the Global Network of Internet and Society Research Centers, or Global Network of Centers, in which institutions such as Harvard University, the Oxford Internet Institute, the CIS at Stanford, and probably 100 to 200 other institutions focused on internet and society convene in an academic discussion globally.
So that's a short presentation on SAIL, but we will delve further into one topic that is probably of growing interest across the region, which is information manipulation. We have an Indonesian case study, a Vietnamese case study, a Philippines case study, and some perspective from Australia. Without further ado, I will give Pieter probably 10 to 15 minutes to present the case on foreign information manipulation and interference, and whether parallels can be drawn between what is happening in this part of the world, which is Southeast Asia, and instances that are also happening elsewhere. So please, Pieter, the time is yours.

PIETER ALEXANDER PANDIE: Thank you very much, Beltz. Again, thank you everyone for attending the session. My name is Pieter Pandie, a researcher at the Safer Internet Lab and also a researcher at the Department of International Relations at CSIS Indonesia. So as Beltz has very well introduced, the Safer Internet Lab has three research streams in this second year of the research lab, and I will be focusing mostly on foreign information manipulation and interference and instances of it occurring in Southeast Asia. I'll also be covering a little bit about the information landscape in Indonesia specifically, and how that correlates with foreign-based versus domestically sourced disinformation. So as part of the research stream for FIMI in SAIL this year, we've tried to create a database that records FIMI instances in Southeast Asia from 2019 to 2024. What we've done is, from open sources, build a database of cases where information operations, whether traditional, digital, or offline, have been conducted in Southeast Asia from 2019 to 2024. And for the dataset, we've used those three categories. For traditional media influence, examples include when an influence actor places advertisements, or hires, pays, or influences a journalist or opinion leader to share their side of the story in the media, and so on and so forth. Digital media influence covers cases such as coordinated inauthentic behavior and the creation of troll and bot networks to share narratives on digital media, and offline influence includes diplomatic influence, economic investment, and so on and so forth. But for this part of the research, we'll be focusing mostly on the digital aspect. So as part of the ongoing research, what we've found so far in our dataset is that while disinformation has been discussed openly by countries in Southeast Asia, FIMI has not been discussed that much across Southeast Asian states. We'll delve into the reasons why later, but what we found is that countries in Southeast Asia tend to focus on disinformation as the topic, not FIMI. What they've tried to address through policy is disinformation that occurs domestically, without much discussion of FIMI more broadly. Our early findings show that the dataset tells a tale of two halves between 2019 and 2024. From 2019 to 2021, cases of FIMI were not that high. Most of the disinformation cases that occurred in Southeast Asian states, where attributed, were domestically sourced, created by local actors or sometimes government actors. But from 2022 to 2024, we found an increase in reported FIMI cases and also a greater diversity of threat actors operating in Southeast Asia's information landscape. So the correlation we've drawn from these findings is that the increase in FIMI and influence operations in Southeast Asia is concurrent with rising geopolitical tensions between great powers and a rising number of international conflicts: the Russia-Ukraine conflict, the ongoing conflict in the Middle East, and so on and so forth.
These have in fact increased the number of influence and FIMI operations in Southeast Asia, whereas from 2019 to 2021 the focus was still mostly domestic. In addressing disinformation, as I've covered before, most countries still use national approaches to legislation, rarely attribution, so very few countries, if any, attribute whether the sources of disinformation are foreign or domestic, and even more rarely retaliation; I don't think we have found a case of that so far. As part of our dataset, we've recorded cases from 10 different countries in Southeast Asia, drawing on lessons from Taiwan and Australia as well. And we found it quite difficult to find cases, because our team is quite small and mostly English-speaking, so most of our sources were English-language media and newspapers, and we found that was a great limitation in how we identified cases, particularly in countries where the information space is much smaller and much less exposed to English-language media. In countries such as Cambodia and Laos, we found it quite difficult to identify cases of foreign-based disinformation. Number one, because attribution of a foreign actor as part of a disinformation operation rarely occurred; and number two, if it were to occur, it would most likely be in the local language, the language would be localized, whereas in countries where the information landscape and social media users were much more exposed to international media, it was a lot easier to detect cases of FIMI operations.

PIETER ALEXANDER PANDIE: And moving forward, we also identified a few foreign influence actors. From reported cases, these include actors such as China, Russia, Iran, and also some non-state actors that were unattributed, whether they were supported by a state actor or not. One of the examples we found was, in fact, the United States engaging in some information operations in Southeast Asia. So to wrap up the dataset: the sources of disinformation, and the information landscape more broadly in Southeast Asia, are very different and very contextual across Southeast Asian states, especially during election periods and so on. There are also very different threat perceptions, particularly relating to FIMI. While disinformation is considered a challenge, and likely so for many, many states even outside Southeast Asia, not all governments consider FIMI a current threat. Some are quite comfortable leaving certain cases of FIMI to fester because they are not deemed a big threat to the existing political regime, or they are not creating the social disturbances that other sources of domestic disinformation might. With the different cyber capabilities across Southeast Asian states, there is also difficulty in addressing these issues or even attributing the source of disinformation. In ASEAN, for example, while there are cybersecurity cooperation agreements and so on, these are still mostly led or hosted by countries such as Singapore or Malaysia, which have higher, I would say, cyber capabilities compared to other Southeast Asian states that are still building those capabilities. So not everyone is on the same page, either threat-perception-wise or capabilities-wise. Moving on specifically to Indonesia: we just held presidential elections in 2024. And while the data is still very fresh, because the election occurred in February of this year, we found that most of the disinformation cases were still domestically sourced, either by non-state actors paid by government actors or by certain political actors, but still very, very domestically based. As part of that, we found differences in how disinformation was created compared to previous elections. Compared to the 2016 or 2019 presidential and regional elections, the game in 2024 was a lot different. In elections prior to 2020, most of the disinformation created was text-based and image-based and distributed on platforms suited to text and images, platforms such as Instagram, Twitter, and Facebook, or on messaging apps like WhatsApp. Whereas in 2024, we saw a greater proliferation of disinformation incidents that involved generative AI, in either visual or audio form. Three examples I've noted down here: the first is video-based, a deepfake of our former president, who has passed away, stating support for one of the political candidates. So that was a deepfake that was made; he was making a speech saying that you should support this certain candidate. Two other examples posted on TikTok were audio-based. One of them was an argument that supposedly occurred between a certain political candidate and the head of the party that supported him, which was very convincing for a lot of people.
And the third one was one of the presidential candidates giving a speech in fluent Arabic when he did not in fact speak fluent Arabic. So these are three different ways in which generative AI has affected how disinformation proliferates in Indonesia. And one thing we found is that our election bodies trying to deal with these disinformation cases are still playing from the playbook of 2019 and previous elections. They were not adequately prepared for how disinformation would proliferate in future elections because of generative AI, and I think this is a problem that will continue moving forward. So to wrap up the presentation, what's the way forward? I've identified three things. Number one, and this is for an Indonesian context, of course I can't speak for every country since everyone has a very different contextual information landscape, but for Indonesia specifically, a multi-stakeholder approach involving government, civil society, and social media platforms will be needed to comprehensively address

PIETER ALEXANDER PANDIE: disinformation, whether during elections or in other instances. Obviously, with generative AI developing the way it is, it will be very, very difficult to create policy that will act as guardrails for it, since with increasing geopolitical tensions and the tech competition between great powers, I think we're going to see the rapid development of generative AI. So I think we need to do what we can and involve as many stakeholders as possible in that regard. Number two, as I said before, emerging technologies will intensify the speed, nature, and spread of disinformation. While there are still cases of generative AI video and audio that are a little easy to identify as fake, moving forward the capabilities of these technologies will improve to the point where it will be increasingly difficult even for the trained eye to detect whether something is disinformation. And lastly, and I think this is very important to say, especially for the Indonesian context, we need to strike a balance between effective governance of the information landscape and ensuring that democratic freedoms for civilians are still upheld. Drawing from previous research at the Safer Internet Lab: while there are policy responses from the government to address disinformation, oftentimes they can encroach on civil freedoms for expressing opinions and so on. So they don't address disinformation, but they limit freedom of expression. That balance is of course a very difficult one to strike, but it's something we need to keep in mind moving forward. I think that will be it for my presentation. I'll pass it back to Beltz.

BELTSAZAR KRISETYA: Thank you, Pieter. Before we move on to Dr Fitriani, allow me to delve further into something you just said. Please paint a further picture of the users. You've explained really well how threat perceptions inhibit the effort against information manipulation. You've also painted a picture of the different topography of threats in Southeast Asia. But what does the receiving end look like? What do the users look like? Do Indonesian users serve as fertile ground for disinformation, if you will? Because they have been the quote-unquote victims of disinformation by domestic actors, does that make them fertile ground for foreign interference, in your opinion?

PIETER ALEXANDER PANDIE: Right. So I think with disinformation, and this can be extrapolated not just to Indonesians but to people from other countries as well, disinformation is most effective when it reinforces certain opinions or ideas that someone already has. This is something I've spoken about with counterparts from the US and Australia as well: whether foreign or domestic, confirmation bias is a very, very big factor in how disinformation spreads. When you already have pre-existing notions of a certain idea or a certain political position, disinformation can reinforce those ideas and in fact make them stronger. And in the Indonesian context more specifically, we are one of the most populated countries in the world, I think number four right now. Digitalization is occurring rapidly and a lot of the youth are becoming more and more exposed to social media. And while that increase has happened, digital literacy has not increased with it. That's another challenge we need to tackle: improving digital literacy for social media users in Indonesia, whether young or old, so they are able to differentiate between fact and fiction, real or hoax information, is another really important step forward. This was also part of a public opinion survey that SAIL conducted last year, and the numbers were quite low for the number of people who had participated in a digital literacy program held by the government. Even though these programs existed for the public, not a lot of people were aware of them, and even fewer were involved in them. So I think this is another challenge moving forward.

BELTSAZAR KRISETYA: Thank you. Moving on to Dr Fitriani, Senior Analyst at the Australian Strategic Policy Institute. Can the IT team prepare Dr Fitriani's slides?

FITRI BINTANG TIMUR (FITRIANI): Hi Beltz, thank you. Good afternoon, everyone in Riyadh; in Canberra it's 1 a.m., so apologies if I look pretty sleepy. Thank you for having me. It's an honour to be able to speak at the Internet Governance Forum 2024, and I would like to extend my gratitude to CSIS, as well as Google, for bringing this timely discussion on an issue that is essential, I think, for our digital future and security. So my presentation today, if the IT team can manage to pull up the slides, focuses on how we can tackle information manipulation in Southeast Asia by drawing lessons from what has and has not worked in the Australian experience. If I can go to the next slide, I'll share how disinformation and foreign information manipulation is a global challenge. As we know, and as has been discussed, it undermines democratic processes, exacerbates societal divides, and weakens public trust in institutions. And I would argue here that Australia is similar to Southeast Asia, where threats find fertile ground in a society that is diverse socially, politically, and in opinion. In Australia, for example, we're open to protests on the street, and we have a large population coming from different parts of the world who have often left their home country but still have a connection to it. Sometimes the government of that country actually conducts information operations to influence them to say good things about the country they're from. If I can go back to the previous slide, I want to share how disinformation actually exploits sensitive issues across different political ideologies, and it is not uncommon for a state-sponsored actor to employ disinformation campaigns aimed at fostering division, confusion, and mistrust among the population, and, in the Australian experience, at wedging distrust against allies. The top example happened around the recent US election: BBC News reported that Simeon Boikov, an Australian-born individual known as a pro-Russian spokesperson in Australia, paid the X account Alphafox $7,800 to post a fake AI video falsely claiming that Haitian immigrants were engaging in voting fraud in the swing state of Georgia. This poses a concern for Australia, because such activities could tarnish Australia's reputation and its connection to allies, and it implies that Australia could be considered a launchpad for foreign interference in other countries, so this can be concerning. I'm not saying ASEAN countries might be like this, but with increasing geopolitical tension we can see that such a situation might happen in the future. Another example is how disinformation, as Pieter was sharing, has become more sophisticated and leverages social platforms. The second example, the photo below, is from Southeast Asia, where there's actually... (I think we're losing Fitri. Are you still with us?) ...a bogus website channel with news produced by AI, in posts that are unfounded, really fake. They used a drone or aircraft that was being used in Ukraine in an example about the South China Sea, actually trying to increase tension by saying that the US is sending anti-tank missiles to support the Philippines and so on.
And they were actually copy-pasting from ChatGPT, I think, because the posting actually said, "I am a language model AI and I cannot perform tasks that require real-time information." But concerningly, this news on the South China Sea was shared, one of the posts, over 25 times. And I think we need to be aware of how these campaigns are not only exacerbated by regional tension but pose significant risks to the security and stability of Southeast Asia. Here in my presentation, I would like to share how Australia's recent experience could provide valuable insights for addressing this challenge, and perhaps offer measures to combat information manipulation. So if we can go to the next slide, I will share how Australia dealt with information manipulation in last year's Voice to Parliament referendum, which asked whether the First Nations, the Indigenous Aboriginal people of Australia, could have a direct, allocated seat in Parliament. Unlike the previous Russian operations in Australia, this one was identified as allegedly linked with the Chinese Communist Party. TikTok and other social media were used to distribute false narratives, including about racial segregation, with the narrative, as you can read there, saying that it was a way to change how Australia currently works. Learning from how the Voice to Parliament failed to provide a stronger position for the First Nations people of Australia, the government and the people are trying to address this challenge in three main ways. If I can go to the next slide, the three main ways are: one, legislative effort; two, public and joint attribution; and three, fact-checking and awareness campaigns. So let me start with lawmaking. I know creating a law is a long process, and I don't know when the ASEAN 10, perhaps 11 with Timor-Leste joining, hopefully soon, can issue updated laws. But even in Australia, the proposed Combatting Misinformation and Disinformation Bill was actually shut down by people who disagreed, saying that perhaps this was just a way of trying to silence the people. So the misinformation and disinformation bill itself received a disinformation campaign. One of the senators actually thanked Elon Musk, because Musk shared the draft bill, saying that Australia was creating this bill, and after Musk tweeted it, the government received a wave of responses. Behind that, another local parliamentarian said: if you want to disagree with this bill, this is how you do it. And after that, there were 16,000 submissions saying the bill should not go ahead. So that bill failed, although the effort should be appreciated. The second is public and joint attribution. Attribution might be difficult and sometimes cannot be done; for example, small and medium countries ask: what's the benefit of saying that a major power is conducting information operations against us when we cannot respond to it? The way Australia responded to the APT40 cyber threat activity was by calling on other like-minded states that had also become victims of this advanced persistent threat, APT40, which was infiltrating government computer systems.
So the government called on the US, UK, Canada, New Zealand, South Korea, and Japan to issue a joint attribution, directed at a specific Chinese state-sponsored group. And the way it was done was not political attribution but technical attribution. So maybe this is one of the ways it can be done. The third way is fact-checking and awareness campaigns. The government endorses and supports this, although the effort is done by independent institutions such as RMIT FactLab and AAP FactCheck, which systematically debunk false claims. I think other countries in the Southeast Asian region have that too, such as Mafindo in Indonesia and VERA Files in the Philippines, for example, which maybe Maria will share later. So if I can go to the next slide: how is this relevant for Southeast Asia? Southeast Asia also has a diverse social and political environment that presents unique vulnerabilities to information manipulation, so I think the Australian experience is similar to Southeast Asia's. But the difference is that there is fragmented regulation that hinders platform accountability. To give you an example, the top right shows how several university journalism departments in Indonesia recently signed an agreement, an MOU, with Russia's state media Sputnik on how to do journalism. That can be a bit concerning. Meanwhile, other countries in Southeast Asia, for example Singapore, actually implement sanctions toward Russia. So there's a discrepancy in how regulation addresses certain actors, as Pieter was saying, that conduct information operations in the region. And this can be a concern, especially when limited public awareness exacerbates susceptibility. So what has happened in the region, I think, as well as in Australia, is that the government is then called on to play a greater role in verifying what is fact and what is disinformation. The bottom photo is an example where Singapore's Law Minister Shanmugam clarified how an Israeli diplomat was being insensitive in posting a comment on how many times the word Israel is mentioned in the Quran. It was insensitive because that post was shared in the heat of the Gaza conflict, but Singapore managed to control it and preserve the harmony of the country, not escalating the issue. So I call for regional cooperation to counter shared threats, to actually communicate and share information about what happened in one country. And perhaps content-sharing agreements, for example, are something the region needs to discuss together, because having a content-sharing agreement with, for example, Sputnik or other countries' state media that might not be democratic or might not be correct in reporting certain issues might increase tension in the region unnecessarily. If I can go to the next slide, on the recommendations for Southeast Asia, there is a diagram of what kind of content can be addressed and regulated. First, the measure is to address the content that leads to the most harm, and that would be matched with the level of intervention. There are five steps here that I suggest for how Southeast Asia can address information operations or information influence. The first is to adopt clear regulation.
So if there is a violation on a certain social media platform, and the government has established clear and enforceable regulations, then that violation can be brought to criminal justice processes. For example, the regulation should include minimum content moderation standards that are published, and mechanisms for holding platforms accountable. The second is the strengthening of regional cooperation and intelligence sharing, as well as the capacity of governments to address disinformation campaigns. The third is to enhance media literacy, and ASEAN actually did this with training of trainers under the education ministers, sharing approaches to countering disinformation. We have a model in ASEAN; the next step is to translate that model into the different ASEAN languages. The fourth, sorry, is to promote transparency by encouraging platforms to label trusted sources, for example, to label whether an image is AI-generated, whether a video is AI-generated. More difficult is perhaps voice: how can we label voice as AI-generated? But maybe we can find a way. The last one is to build a multi-stakeholder framework with civil society and the private sector, because the technology that hosts the disinformation is owned by the private sector, and civil society is the one that mostly does the checking, while the government supervises how the game is played. I think that's the end of my presentation. I thank you so much for the time given to me, and I hand it back to the moderator.

BELTSAZAR KRISETYA: Thank you, Fitri. Perhaps two minutes of elaboration on what lessons Southeast Asian countries can learn from the Australian experience in developing the code of conduct against misinformation and disinformation, and what parallels Southeast Asian countries can adopt, whether unilaterally or through regional organisations.

FITRI BINTANG TIMUR (FITRIANI): A good practical question. One thing that can be done is asking, for example, Google, as well as other platforms, to rank the most credible websites to show first, like news from the government. That actually happened during COVID times: there was a label saying "this is news related to COVID-19", and that actually helped people to be more aware. If they can do that for COVID-19, I think they can do that for other things, for example scams, which are quite prevalent not only in Australia but perhaps also in Southeast Asia, because there is a lot of scamming taking place on platforms as well. When a platform is showcasing, for example, job opportunities or advertisements, a discount or a sale somewhere, they need to have a verification, a government disclaimer saying please double-check before you input your details, for example. I think those two are the ones I recommend. Thank you.

BELTSAZAR KRISETYA: Thank you, Fitri. Let's move on to Maria Elize from the University of the Philippines Diliman. You have 10 to 15 minutes, and the floor is yours.

MARIA ELIZE H. MENDOZA: Okay. Hi. Good day, everyone. Good evening from Manila. I'm sorry I also cannot join you physically, but I'm pleased to be given this opportunity to join the panel. I am Assistant Professor Maria Elize Mendoza from the Philippines, and I'm here to present the case of the Philippines in terms of addressing information manipulation. I don't have slides, so I'll just be going through the suggested talking points. First is to provide an overview of the Philippine information landscape. One thing the Philippines has been known for over many years is that we are the social media capital of the world, and we are also known as the patient zero of global disinformation, almost like the petri dish or lab experiment of disinformation. Filipinos are hyper-connected to social media and are among the top internet users in the world, especially on Facebook, the top social media application used in our country. Television, radio, and the internet are among people's top three sources of information about politics and the government. But since the 2016 presidential campaign of former President Rodrigo Duterte, the country has seen an increase in the use of social media for political and electoral purposes. The 2016 presidential elections marked a pivotal shift towards social media-driven campaigning, and Duterte set the playbook for it. His victory was significantly influenced by coordinated digital campaigns on Facebook and YouTube, where content creators we have come to know as social media influencers or bloggers spread and amplified narratives supporting his policies, including the controversial and violent war against illegal drugs. In the 2019 midterm elections, in which voters chose several national and local positions, the same playbook was adopted and the opposition suffered an extreme blow in the Senate race: no opposition candidate won in the senatorial election, and all candidates allied with the Duterte administration won. And in our most recent presidential elections, in 2022, the victory of Ferdinand Marcos, Jr., the son of the late dictator Ferdinand Marcos, Sr., was also largely attributed to the spread of online disinformation across different social media platforms. The content spread on social media did not necessarily promote Marcos, Jr. as a candidate; rather, it twisted historical narratives and attempted to cleanse the family name of the Marcoses, who still have a lot to answer for regarding the atrocities committed during the dictatorship, and it also contributed to demonizing the political opposition. Disinformation during Duterte's time likewise attempted to demonize the political opposition, and this continued until the 2022 presidential elections. Investigative reports from civil society groups and independent media outlets show that Marcos, Jr. benefited the most from disinformation, at the expense of the main opposition candidate, our former vice president. So at present, the Philippine information system is saturated with so-called independent media practitioners. These are technically the vloggers or influencers who are not necessarily or formally affiliated with any political party.
What's interesting is that these vloggers and influencers, who are followed, watched, and heard by millions of Filipinos, are not covered by existing media accreditation policies or the regulations surrounding journalists, for example. They exert influence in shaping public opinion compared to official campaign teams of candidates because their online content is extensively consumed by the general public. There is also evidence that they have been hired by politicians in previous elections and that millions of pesos, which is around thousands or even millions of dollars, have been spent on these kinds of campaigns. And what's troubling is that the social media domain of these vloggers and influencers remains largely unregulated. The content is there, and add to that the poor content moderation policies of platforms such as Facebook and YouTube; these aggravate the problem. As a result of this saturation of the information ecosystem, a survey conducted in 2022 found that a majority of Filipinos find it difficult to detect fake news. Similarly, despite the internet being a top source of information about politics and the government, the internet is also perceived as a top source of disinformation, mostly spread by influencers. Moreover, Filipinos have developed a growing distrust towards traditional media and journalists. These findings, together with the fact that Filipinos are among the top social media users in the world, are a dangerous combination. So how does foreign information manipulation and interference, or FIMI, enter the picture? We have had our share of FIMI in the past, in the form of China, a sponsor of disinformation and propaganda, going back to the time of Duterte, who was relatively more friendly to China as compared to previous Philippine presidents. From 2018 to 2020, China launched a disinformation campaign known as Operation Naval Gazing, an attempt by China to penetrate the Philippine information space. What happened was that a network of fake accounts originating from China promoted and supported the Duterte family and Imee Marcos, the sister of the current president. From 2018 to 2020, these fake accounts attacked government critics, Duterte critics, including opposition senators and Philippine media. However, platforms such as Facebook have taken down some of these accounts, pages, and groups linked to China for coordinated inauthentic behavior. So in a nutshell, FIMI has not yet made impacts comparable to the domestic level of influence operations. A media outlet in the Philippines named SMNI is pro-Duterte and pro-China, but it was recently denied a legislative franchise to operate on television, so it mostly operates on social media. So in the Philippines, disinformation and influence operations are mostly domestically created and spread by these social media influencers, bloggers, celebrities, digital workers, independent media practitioners, or even ordinary Filipinos who make a living out of creating and spreading disinformation or hyper-partisan content online. The last part is actually interesting, the hyper-partisan content, because not all content is fake or false; some of it is fact, but exaggerated and twisted to suit a political agenda. Still, the threat of FIMI must not be disregarded, because we've had a glimpse of it in the form of pro-China content.
One thing that we must also be wary of is the potential use and misuse of generative AI in the upcoming elections. Very recently, a few months ago, our own president was a victim of this: an AI-generated audio clip of him ordering an attack against China in light of the West Philippine Sea issue was spread, and was flagged by the government as false. So given this, how has the Philippine government worked to address these challenges? Over the years, the Philippine government has failed to effectively address electoral disinformation. Three electoral cycles have passed since 2016, yet we are still facing a worsening problem, and we have an upcoming election in May 2025. Legislative proposals to combat false information and regulate social media campaigns have not seen any progress. As a result, civil society actors, particularly media groups and academic institutions, have shouldered the responsibility of ensuring the integrity of facts by launching fact-checking initiatives, digital literacy campaigns, and voter education programs. However, without robust government support, a comprehensive legal framework, and systemic changes, the impact of these initiatives is limited. It was only in September 2024 that the country's election commission released a resolution providing guidelines on the use of artificial intelligence and punishments for the use of mis- or disinformation in elections, just in time for the upcoming elections in 2025. This September 2024 resolution also establishes the COMELEC's (Commission on Elections) formal collaboration networks with civil society actors. However, this is very late, and it remains to be seen whether it will really be implemented effectively given the extent of the problem we have now. On the other hand, social media platforms such as Meta and TikTok have expressed their commitment to cooperate in the upcoming elections. This is good news, but proactive content moderation measures and accountability must still be demanded from and exercised by social media platforms. At present, content that is obviously false and hyper-partisan, even if posted in the last electoral cycle, is still present on these platforms; it has not been taken down despite multiple reports, so these content moderation policies really have to be looked at. Moving forward, COMELEC must also sustain and strengthen its engagement with civil society. Civil society actors alone cannot solve this problem, and they've been shouldering the burden of fighting against disinformation for the longest time, so strong cooperation between the government and civil society is needed. Moreover, the cybersecurity infrastructure in the country must also be strengthened. Outside of elections, Filipinos are highly susceptible to online scams, fraud, banking scams, and phishing attempts. Multiple government websites have been hacked recently, and there were instances of data breaches in government agencies in which millions of records were allegedly sold on the dark web. Lastly, to end my short presentation: in the long run, digital and media literacy must be fully incorporated into basic and higher education, because at present, under the Philippine education system, only students in their last two years of high school have media literacy in their curriculum. The rest is not really institutionalized.
So this needs to be expanded across all levels of education to fully empower citizens in the fight against disinformation and information manipulation. So that's my short presentation on the case of the Philippines. I'm very much looking forward to the questions and the discussion later. Thank you very much for having me.

BELTSAZAR KRISETYA: Thank you so much, Maria. Again, another quick question. I remember that during COVID times there was an influence operation, allegedly run by the Pentagon, targeting the Philippine public to sow disbelief in Chinese-made vaccines. And the Filipino public bought that idea; they chose to wait for a non-Chinese vaccine instead, which had consequences for Philippine public health during that time. So would you say that, in the realm of influence operations, what happens in the digital realm serves as an extension of geopolitical realities, particularly in the Philippines' relations with the great powers?

MARIA ELIZE H. MENDOZA: Probably yes, because in another forum that I attended, there were some analysts who looked at posts in China related to the Philippines. Some posts are actually discrediting the US-Philippines alliance while still supporting the Dutertes, because Duterte is known as a president who was friendly to China, and Marcos Jr. is not exactly that; it is widely perceived that Marcos Jr. leans more towards the United States. So there are posts being spread on Chinese social media discrediting Marcos because he's pro-US and discrediting the Philippines-US alliance. So yes, I think these kinds of disinformation can also be related to geopolitical realities.

BELTSAZAR KRISETYA: Thank you. So we've had case studies from Indonesia, Australia, and the Philippines, and none of them seems to bear good news. So we rely on you, Dr Bich Tran, for Vietnam. What does the situation look like in Vietnam?

BICH TRAN: Thank you, and I'm grateful for the opportunity to be here. First, I would like to give a brief description of Vietnam's information landscape. There are three main components: domestic media, foreign media with Vietnamese-language services, and social media. In terms of domestic media, most outlets are state-owned or related to the government, so they are heavily regulated by the Communist Party of Vietnam and, of course, adhere to official narratives. In terms of foreign media with Vietnamese-language services, there are actually several of them, but I will give some examples from China and some Western media. For China, there are the China Global Television Network, or CGTN, and the PeopleGov Radio and TV, both of which have Vietnamese-language services. For Western media, from the UK there's the BBC, and there are US-funded outlets as well, like Voice of America and Radio Free Asia. The third component is social media. Unlike in China, you can actually access a lot of Western platforms in Vietnam; according to several sources, Facebook, YouTube, and Instagram are among the top social media platforms in Vietnam. Besides that, there is a Vietnamese platform called Zalo, a messaging app like WhatsApp, and TikTok is also very popular. So there are very many social media platforms that the Vietnamese can access and use. In terms of foreign information manipulation and interference, I will focus on the foreign interference part. In Vietnam, because of its political system, FIMI in elections is actually not a big issue. The Vietnamese government is mostly concerned about China's disinformation about the South China Sea disputes, and also what it calls peaceful evolution from the West. Peaceful evolution is defined as efforts by external forces seeking regime change without the use of militaries. In terms of South China Sea issues, China has a lot of disinformation out there, but related to FIMI, I would say the first thing is that sometimes they misquote Vietnamese leaders. For example, in 2016, only two days after the ruling of the arbitral tribunal in the case initiated by the Philippines against China, the Vietnamese prime minister met his Chinese counterpart in Mongolia. After the event, a lot of Chinese media and newspapers reported that the Vietnamese prime minister had said that Vietnam supported China's stance regarding the ruling. But actually, he didn't say so. So the Vietnamese media immediately, because they got permission from the government, clarified that. They said that during the meeting, the Vietnamese prime minister mentioned things like the 2011 agreement between Vietnam and China on principles for settling sea-related issues, the Declaration on the Conduct of Parties, the code of conduct itself, and UNCLOS, and that he never said anything about Vietnam supporting China's stance. So this kind of false information can undermine the legitimacy of the Vietnamese Communist Party; that's the concern here.
Another Chinese narrative tries to drive a wedge between Vietnam and its Western partners by saying that close relationships with external powers will not help Vietnam in the South China Sea disputes. Then, in terms of peaceful evolution: the Vietnamese government perceives any kind of criticism of the Communist Party as peaceful evolution. Sometimes it is a narrative that the government is too weak in its response to China’s behavior in the South China Sea, for example, trying to undermine its legitimacy. Sometimes even the promotion of human rights or democracy can be seen as peaceful evolution. There are also narratives urging the Vietnamese people to be anti-China or pro-US, and this kind of discourse can cause disunity in society. And sometimes, around the South China Sea disputes, certain groups urge people to stand up and join protests; the Vietnamese government is concerned that protests against China could lead to other issues as well and cause instability in society. Here, I just want to emphasize that there is actually a very thin line between disinformation and FEMI. They are related, but they are two different concepts. And in the case of Vietnam, perceived FEMI can also be quite significant, because the government and the Communist Party of Vietnam have their own concerns. For that reason, I think it is sometimes very difficult for them to strike the balance between political stability and freedom of speech. In terms of what the Vietnamese government has done to deal with FEMI, I will focus on the government, because there is not much from civil society itself. The government has repeatedly rebuked China’s false narratives on the South China Sea, either through the spokespersons of the Ministry of Foreign Affairs or through state-owned media; they try to do that every time they discover disinformation from China. And to deal with peaceful evolution, in 2016 the Vietnamese Ministry of Defense created what they call Task Force 47 to counter wrong views on the internet. Only one year later, in 2017, they created a cyber command. It is interesting because, compared to some other countries’ cyber commands, the Vietnamese one is also in charge of countering peaceful evolution. So I will end here and open it up for discussion. Thank you.

BELTSAZAR KRISETYA: Thank you, Dr. Bich. Before we get to the discussion part of the session, one little question for you. You mentioned the balance between regulation and freedom of expression, but I believe that is not the only balance the government is facing: there is also the balance between countering information manipulation and economic dependence, or interdependence, on a certain actor. So how does the Vietnamese government balance this dependence with combating foreign interference?
BICH TRAN: Can you hear me now? Yes. Okay. So, I forgot to mention that, to deal with FEMI, people in Vietnam can still access Chinese media, the Chinese outlets with Vietnamese-language services, but they cannot access some other media, for example the BBC or Voice of America. I think, and this speaks to what Pieter and Fitri already mentioned, that the government knows that no matter what the Chinese say about the South China Sea, the Vietnamese people will not believe it. So they are not too concerned about Chinese media. But the Western media are a different issue, because Vietnam is a one-party state, so I think they are a little more sensitive in that area. And to your question about economic dependence on certain partners, I think that could be one of the reasons as well, but I believe what I mentioned earlier is the main reason: there is simply not much to worry about from Chinese media. Thank you.

BELTSAZAR KRISETYA: So I believe we have time for at least three questions. Anyone who wants to raise a question, please make yourself identifiable, and our staff will come to you.

AUDIENCE: Hello. Thank you for your presentations; they were very insightful. My name is Luisa. I am an advisor for the German-Brazilian Digital Dialogue Initiative to Promote Digital Transformation, and we also address disinformation as a topic. I haven’t had much contact with the Southeast Asian context so far, so I wanted to ask if you have any cases of disinformation having effects on the physical world, so to say. In Brazil, we had the attack on the Supreme Court, and in South Africa, I know there have been some complications with the Electoral Commission, et cetera. So are there any records of this in Southeast Asian countries as well? Thank you.

BELTSAZAR KRISETYA: So that’s one question on the impact of disinformation on real-life incidents. Shall we gather two more questions? Please, sir. And then the lady in the back. Okay. Thank you.

AUDIENCE: My name is Koichiro, from Japan. I’m a cybersecurity expert, and I have a few questions. First of all, regarding Fitriani’s presentation, I feel there is a contradiction: on one hand, we need to expect platforms to do more in this regard, and at the same time, countries like Australia, the United States, and others have already decided to ban certain online platforms from our markets. So I’d like to ask any of the panelists for their view on which is better: expect more from platforms, or ban them from your own economy? Of course, some of these initiatives are funded by one giant platform. So how can you trust one platform; how can you say one platform is more trustworthy than another? My last question: there’s a movement to revitalize the discussion at the ASEAN Regional Forum. While listening to all the presentations, I was wondering which is the best platform to discuss our next steps on FEMI and disinformation, since at the ASEAN Regional Forum we have China, Russia, and others. Of course, the IGF might be a decent platform as well. But I’d also like to ask the panelists where we should go for our next round of discussion. Thank you very much.

BELTSAZAR KRISETYA: Thank you, Koichiro-san. And the last question for this round, please.

AUDIENCE: Yeah, hi, my name is Nidhi, and I have a question. When it comes to dealing with misinformation, we’ve all discussed how you can have digital literacy campaigns and technical solutions along those lines. But as you talked about, a large part of misinformation comes from confirmation bias. And there is also something to be considered: the people with the most power actually tend to have a greater role in spreading it. So even if you did manage to achieve digital literacy, for which I think there are a lot of technical solutions, this is a larger sociological problem at this point. If you’re getting views from it, or getting power out of it, there’s no reason for anybody to stop putting out disinformation, or for people to stop believing it even when they know it’s wrong. So unless you have some way of tackling that larger sociological problem of what has become alternative truth, it won’t really matter so much what technical solutions you come up with. But I’m not so sure how you would go about doing that, because nobody has an incentive to do it right now.

BELTSAZAR KRISETYA: Thank you for the intervention. So let’s go through these three questions first before we open another round. The first question, from Luisa: has disinformation ever transformed into real-life incidents in Southeast Asia? From Koichiro-san, specifically to Fitri: which is better, should platforms do more, or should we ban them entirely? And also a question to all the speakers: what would be the best platform regionally to discuss this issue further, whether a multilateral platform such as the ASEAN Regional Forum or a multi-stakeholder forum such as the APrIGF, for example. And some remarks from Nidhi: no matter what technical solutions are available, there are key opinion leaders who can breeze past them and play to the confirmation bias of the audience. So is there a means for us to curb the influence of these people in power, whether in government or in tech platforms? Please, Pieter, you want to go first?

PIETER ALEXANDER PANDIE: Sure. For the first one, cases of disinformation affecting the physical world: there is the case we discussed earlier of the US influence operation in the Philippines, which was declassified by the Pentagon and reported by a Reuters investigation. The influence operation was more or less trying to sow distrust of Chinese-made vaccines in the Philippines, which resulted in people not taking the vaccine and waiting for Western options. So I think that’s a really big example of a foreign entity outside of Southeast Asia running an influence operation that had real-life physical effects. I’m sure there are others as well, but off the top of my head, that’s a big one we could reference. Then, to the question from Koichiro-san about the best platform to discuss FEMI in the Asia Pacific: I think the conversation shouldn’t start with which platform is best. We should take the discussion a step back towards whether or not countries in Southeast Asia or the Asia Pacific have the same threat perceptions towards FEMI. I can speak from a Southeast Asian perspective, where I don’t think everyone is on the same page as far as FEMI is concerned. I’ve said before that ASEAN has a cybersecurity cooperation strategy and a lot of different cyber initiatives, but they mostly focus on cyber crimes: financial scams, deepfakes, financial fraud, and so on. Especially in the Asia Pacific, where you have some victims and some threat actors, both government and non-state, getting everyone on the same page is the real challenge, because everyone has different threat perceptions and addresses FEMI however they want. And for the intervention from our colleague about confirmation bias and the broader socio-psychological problem of disinformation, I fully agree with your statement, and I think it’s why Fitri, Bich, Maria, and I are proposing that this research take on a more multi-stakeholder, multi-disciplinary approach. Most of us on this panel are IR or cybersecurity specialists, and involving people from different lines of academia and beyond would be a good step forward in understanding the problem more broadly.

BELTSAZAR KRISETYA: Fantastic. Bich, do you want to go next?

BICH TRAN: I would like to add to what Pieter said, regarding Nidhi’s question. Yes, even though certain biases give readers more of an appetite for disinformation, I still believe that digital literacy campaigns will help, especially for those who haven’t formed their opinions yet; the skills to identify trusted sources will serve them on a lot of issues.

BELTSAZAR KRISETYA: Thank you. Fitri, specific question on platforms.

FITRI BINTANG TIMUR (FITRIANI): Thank you. In Australia we have the Australian Communications and Media Authority, ACMA, voluntary code, which actually calls for digital media platforms to develop and report on safeguards to prevent harm that may arise from the propagation of mis- and disinformation on their services. So it’s a voluntary code. But there’s a concern about what happens if the code does not work, especially as we know there’s a certain platform that, after a rich person bought it, is being used for disinformation. That’s why in Australia there was a push for the misinformation and disinformation bill, which failed to pass and was shut down. So whether we call to regulate platforms or do away with them: I think having a voluntary code is good and quite mature. If we expect platforms to show goodwill in doing their business, they need to be able to show that they can prevent harm. But we know there are platforms, like Telegram, that very rarely respond to government requests even when there is, for example, information about terrorism, and that is quite concerning. So perhaps we can do both: we can allow the voluntary code to let platforms safeguard themselves, and when that doesn’t work, the government needs to have tools to actually intervene. So that’s one. And if I may answer the question on where we can discuss this regionally: in ASEAN, we have the ASEAN task force on countering fake news, and that task force actually managed to issue a guideline on how governments can manage and combat fake news. The task force was only established last year, and the guideline is also very recent. So if ASEAN can do it, I would encourage other regions to do it as well, because that guideline actually lays out government pathways, for example what a government can do when fake news is detected. So that’s my insight, my suggestion. Thank you.

BELTSAZAR KRISETYA: Thank you, Fitri. Maria, do you want to respond to any of the questions?

MARIA ELIZE H. MENDOZA: Okay, so, yeah, on the effects of disinformation on the physical world: the vaccine example is a good one. Aside from the campaign against Chinese vaccines, disinformation surrounding the side effects of vaccines in general has also had physical effects here in the Philippines, because there has been a high level of vaccine hesitancy in the past years due to an issue with another vaccine before COVID. So that’s one. And also, the lies that the Marcos family spread about themselves were actually cited by their supporters as reasons for voting for them, especially when they attended campaign rallies and were interviewed about their vote for the Marcoses. So I think that’s also an effect of disinformation on the physical world: people wholeheartedly believe these lies spread on social media. And regarding the question of confirmation bias, an additional insight I can provide is that tech platforms still have a responsibility on this issue because of how they control the algorithm. We know that if we react to or comment on the same kinds of posts, those posts will keep appearing on our feeds. So if hyper-partisan content keeps appearing on our feeds due to the algorithm, it worsens the problem. With that, tech platforms also have a responsibility regarding the transparency of the algorithm, or controlling the algorithm in general, because Facebook, for instance, has been under fire for allegedly prioritizing posts that have more angry reactions. Those that are really emotionally charged get more exposure on people’s news feeds, and in that way platforms also contribute to the problem. So even if it’s a sociological issue, Pieter is correct: a multi-sectoral approach involving digital platforms and civil society would still be an important step in solving this problem. Thank you.

BELTSAZAR KRISETYA: Thank you, Maria. One or two more questions before we close the session. Okay, the lady in the back.

AUDIENCE: Hi, my name is Eliza; I’m from Vietnam and working in Germany. My question is actually addressed to the first speaker, but I welcome responses and contributions from other speakers as well. In your research, how would you define FEMI? Do you include people from diasporic communities as perpetrators of disinformation? And in your findings, you mentioned that there are state and non-state actors; can you please give us an example of non-state actors? And did you also find evidence of the participation of the Islamic State in spreading disinformation in the case of Indonesia? I also want to add one input to Luisa’s question. When you ask about online disinformation and real-life incidents and consequences, I must emphasize that in the case of Vietnam, only the government can decide what is disinformation or not. In Vietnam’s one-party state, the legislative, executive, and judiciary powers all belong to the state, which means the heads of all these state agencies must be Communist Party members. So when they say something is disinformation, they have the power to punish. I would say that on a monthly basis there are cases where online disinformation, whether a small post critical of a state-backed company or a small video mimicking a state leader, is punished, and the highest punishment in Vietnam is 20 years of imprisonment. So I would just say that disinformation in Vietnam is very hard to define. Oh, sorry, I forgot one question, for Fitri: how do you see the political will of ASEAN in fighting disinformation spread and created by governments themselves? You talked about ASEAN fighting disinformation in general; how about disinformation spread deliberately by a government? Thank you.

BELTSAZAR KRISETYA: Thank you. And one last question from the gentleman in the back. Preferably a quick question.

AUDIENCE: Thank you so much. I’m Fawaz from the Center for Communication and Governance, New Delhi. We’ve been having very similar conversations, so it was very interesting and very useful to join this. We also had a general election this year, and one problem that I think we are facing across the board, which the last question also spoke to, is that the discourse around disinformation and misinformation has itself become weaponized: fact-checking and counter-disinformation narratives are often appropriated by the very people who sometimes might be causing real-world harm. So this is just a short intervention to say that we are seeing very real-world harm linked to online disinformation. At the same time, the lack of the kind of multi-stakeholder research we’ve been talking about is making this kind of appropriation possible. So we really do need not just multi-stakeholder but also inter-regional cooperation, to bring out how disinformation is happening, how it’s related to online events, and also how the discourse is being misappropriated. Thank you.

BELTSAZAR KRISETYA: Thank you, Fawaz. I think the parallel between Fawaz’s and Eliza’s interventions is the question: who is the arbiter of truth? With which power should we endow the government, civil society, or tech platforms to act as the arbiter of truth, and what kind of multi-stakeholder or multilateral cooperation can be built around that? A last response from each of the speakers regarding these two interventions. You can go first. Okay. Hello?

PIETER ALEXANDER PANDIE: Can you hear me? Okay. Yes. Great. So, addressing the question about defining FEMI: we’ve defined FEMI as a pattern of mostly manipulative information that threatens, or has the potential to negatively impact, values, procedures, and political processes in a country, conducted by a foreign state or non-state actor and their proxies. Still, while conducting this research we’ve also held focus group discussions with various experts from both Southeast Asia and external countries, and what we found through those discussions is that FEMI is still a very hard thing to define. The term was first coined by the European Union External Action Service, and that’s where we drew our first definition. But another step forward we can take is a more Southeast Asian or Asia Pacific specific definition of FEMI, and I think that’s one of the research directions we could pursue: finding a definition that is context-specific and more applicable, more palatable, to different information landscapes. So I understand that it is a very difficult thing to define. Another question was about the role of the Islamic State in information operations in Indonesia. Our research period was 2019 to 2024, and off the top of my head, while we’re still early in the data set and still adding cases to it, I don’t think we’ve found cases of the Islamic State perpetrating influence operations in Indonesia, though with the disclaimer that this is still very early in the data set and we could find reported cases later on. An explanation for that could be that, broadly speaking, terrorism and terrorist groups in Indonesia have seen a downturn in activity in recent years. I’m not a terrorism studies expert and I could be very wrong in that regard, but that is a broad assumption I could make as to why that has not occurred. Thank you.

BELTSAZAR KRISETYA: Bich, do you want to add to that?

BICH TRAN: I just want to say something about your question about who we should give the power to be the arbiter of truth. In the discussion, we mentioned digital literacy campaigns. If we make them mandatory in schools, they will reach more people, of course, but then whose textbooks would we use, right? What kind of curriculum, and whose definitions? That is actually the very big issue here.

BELTSAZAR KRISETYA: There you go. Okay. Fitri and Maria, quick responses.

FITRI BINTANG TIMUR (FITRIANI): Thank you. Difficult question. The guidance, the ASEAN guideline on the management of government information in combating fake news and disinformation in the media, actually states the perspective and stance of the ASEAN governments. But interestingly, there is a chapter there, if you want to take a look at it, on the types of approaches governments take to disinformation: a whole-of-government approach, a strategic government approach, or a combination. The whole-of-government approach involves different agencies, and civil society as well. But in the strategic government approach, as Beltsazar mentioned, it is the government that decides what is true and what the people can listen to. I think ASEAN embraces that and is aware of it. Being aware of it, and having multiple ways of approaching the issue, does not alienate countries that are non-democratic but are also struggling with disinformation, or with foreign information coming from abroad that tries to wedge ASEAN countries against each other. That is why ASEAN is actually trying to address disinformation.

BELTSAZAR KRISETYA: Thank you. Maria, one last remark.

MARIA ELIZE H. MENDOZA: Yeah, I would just like to agree with the last intervention regarding cooperation within the region, whether in Southeast Asia or the Indo-Pacific, because in the case of the Philippines, we really have a lot to catch up on in terms of addressing disinformation. As I kept mentioning in my presentation, there are no clear legislative frameworks at present to address this problem, but we also have to be very careful with passing legislation that might infringe on freedom of speech. Because as far as I know, there are some countries with anti-fake news laws that are being weaponized by the government of the day, such that anything that is dissent equals fake news. We must be careful regarding that. So we really have a lot to learn from our neighbors in Southeast Asia and the greater Indo-Pacific region in terms of addressing this problem, and I do agree that regional cooperation is important. And I think a single country like ours engaging with tech platforms and calling on them to be more accountable might have less effect than multiple countries coming together and really demanding action from tech platforms; that latter strategy might be more effective in getting platforms to really address this problem. So that is it from my end. Thank you.

BELTSAZAR KRISETYA: Thank you, Maria. Thank you to all the speakers and to the participants for engaging in the discussion. I will not conclude, simply because, one, there is not much time, and two, the only concluding remark I can deliver is that we have no option but to treat disinformation as more than an information issue: when it becomes an electoral issue, we have to answer it through electoral means; when it becomes an economic and trade issue, we also need to consider the participation of economic and trade actors; and so on. So the discussion needs to continue beyond this room and beyond the region of Southeast Asia. Please feel free to drop by our booth whenever you have time to learn more about our work and potentially cooperate on future research. Thank you very much for your participation. Please join me in giving a round of applause to the speakers, and best of luck with your IGF participation. Goodbye.

PIETER ALEXANDER PANDIE

Speech speed

171 words per minute

Speech length

2225 words

Speech time

776 seconds

Indonesia faces increasing use of AI-generated disinformation in elections

Explanation

In the 2024 Indonesian elections, there was a greater proliferation of disinformation incidents involving generative AI, particularly in visual and audio forms. This marks a shift from previous elections where disinformation was mostly text-based and image-based.

Evidence

Examples include a deepfake video of a former president supporting a candidate, an audio of an argument between a candidate and party head, and a candidate giving a speech in fluent Arabic when they couldn’t speak the language.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Explanation

Election bodies in Indonesia are still using strategies from previous elections to deal with disinformation. They were not adequately prepared for the proliferation of AI-generated disinformation in the 2024 elections.

Major Discussion Point

Government and societal responses to disinformation

Agreed with

MARIA ELIZE H. MENDOZA

Agreed on

Importance of digital literacy

Difficult to define and attribute foreign information manipulation and interference

Explanation

FEMI (Foreign Information Manipulation and Interference) is challenging to define and attribute. While the research used a definition based on the European Union External Action Service, there’s a need for a more context-specific definition for Southeast Asia or the Asia Pacific region.

Evidence

The research conducted focus group discussions with experts from Southeast Asia and external countries, revealing the complexity of defining FEMI.

Major Discussion Point

Challenges in combating disinformation

Agreed with

BICH TRAN

MARIA ELIZE H. MENDOZA

Agreed on

Challenges in defining and combating foreign interference

Differed with

BICH TRAN

Differed on

Perception of foreign interference threats

Multi-stakeholder approach involving government, civil society and platforms needed

Explanation

A comprehensive approach to addressing disinformation requires involvement from government, civil society, and social media platforms. This is particularly important in the context of rapidly developing AI technologies and increasing geopolitical tensions.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Agreed on

Need for multi-stakeholder approach

BICH TRAN

Speech speed

128 words per minute

Speech length

1490 words

Speech time

694 seconds

Vietnam is concerned about China’s disinformation on South China Sea disputes

Explanation

The Vietnamese government is primarily concerned about China’s disinformation regarding the South China Sea disputes. This includes instances of Chinese media misquoting Vietnamese leaders and spreading false narratives about Vietnam’s stance on regional issues.

Evidence

An example was given of Chinese media falsely reporting that the Vietnamese prime minister supported China’s stance on a 2016 arbitral tribunal ruling, which the Vietnamese government had to immediately clarify.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Differed with

PIETER ALEXANDER PANDIE

Differed on

Perception of foreign interference threats

Vietnam created Task Force 47 to counter “wrong views” on the internet

Explanation

In 2016, the Vietnamese Ministry of Defense established Task Force 47 to counter what they consider “wrong views” on the internet. This was followed by the creation of a cyber command in 2017, which is also responsible for countering “peaceful evolution”.

Major Discussion Point

Government and societal responses to disinformation

Differed with

MARIA ELIZE H. MENDOZA

Differed on

Role of government in combating disinformation

Balancing political stability and freedom of speech is challenging

Explanation

The Vietnamese government faces difficulties in striking a balance between maintaining political stability and ensuring freedom of speech. This challenge is particularly evident in their efforts to combat what they perceive as foreign information manipulation and interference.

Major Discussion Point

Challenges in combating disinformation

Agreed with

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Agreed on

Challenges in defining and combating foreign interference

MARIA ELIZE H. MENDOZA

Speech speed

149 words per minute

Speech length

2313 words

Speech time

928 seconds

Philippines information ecosystem is saturated with “independent” media practitioners spreading disinformation

Explanation

The Philippine information system is saturated with so-called independent media practitioners, including vloggers and influencers, who are not formally affiliated with political parties. These individuals have significant influence in shaping public opinion and are not covered by existing media accreditation policies.

Evidence

There is evidence that these influencers have been hired by politicians in previous elections, with millions of pesos spent on such campaigns.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Philippine government has failed to effectively address electoral disinformation

Explanation

Despite three electoral cycles since 2016, the Philippine government has not effectively addressed electoral disinformation. Legislative proposals to combat false information and regulate social media campaigns have not progressed.

Evidence

Civil society actors, particularly media groups and academic institutions, have had to shoulder the responsibility of ensuring the integrity of facts through fact-checking initiatives and digital literacy campaigns.

Major Discussion Point

Government and societal responses to disinformation

Agreed with

PIETER ALEXANDER PANDIE

BICH TRAN

Agreed on

Challenges in defining and combating foreign interference

Differed with

BICH TRAN

Differed on

Role of government in combating disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Explanation

The rapid digitalization in the Philippines has not been accompanied by an increase in digital literacy. This gap makes the population, especially the youth, more susceptible to disinformation on social media platforms.

Evidence

A public opinion survey conducted by SAIL last year showed low numbers of people participating in government-held digital literacy programs, with many unaware of their existence.

Major Discussion Point

Challenges in combating disinformation

Agreed with

PIETER ALEXANDER PANDIE

Agreed on

Importance of digital literacy

Digital literacy must be incorporated into education at all levels

Explanation

To combat disinformation effectively, digital and media literacy must be fully incorporated into basic and higher education in the Philippines. Currently, only students in their last two years of high school have media literacy in their curriculum.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

PIETER ALEXANDER PANDIE

Agreed on

Importance of digital literacy

FITRI BINTANG TIMUR (FITRIANI)

Speech speed

114 words per minute

Speech length

2782 words

Speech time

1453 seconds

Australia experiences foreign interference attempts, particularly from China

Explanation

Australia has faced foreign interference attempts, with a notable focus on China’s activities. These attempts have included disinformation campaigns aimed at fostering division, confusion, and mistrust among the population and wedging distrust against allies.

Evidence

An example was given of an Australian-born individual known as a Russian spokesperson in Australia paying for a fake AI video claiming Haitian immigrants were engaging in voting fraud in Georgia, a US swing state.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Australia is developing voluntary codes for platforms and considering legislation

Explanation

Australia has implemented a voluntary code through the Australian Communications and Media Authority (ACMA) that calls for digital media platforms to develop and report on safeguards against mis- and disinformation. There have also been attempts to introduce legislation, though a recent bill failed to pass.

Evidence

The misinformation and disinformation bill in Australia faced opposition and was ultimately withdrawn.

Major Discussion Point

Government and societal responses to disinformation

Regional cooperation and intelligence sharing should be strengthened

Explanation

To combat foreign information manipulation and interference effectively, there is a need for enhanced regional cooperation and intelligence sharing. This includes improving the capacity of governments to address disinformation campaigns.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Agreed on

Need for multi-stakeholder approach

BELTSAZAR KRISETYA

Speech speed

147 words per minute

Speech length

2319 words

Speech time

942 seconds

Confirmation bias makes people susceptible to believing disinformation

Explanation

Disinformation is most effective when it reinforces existing opinions or ideas that someone already holds. This confirmation bias plays a significant role in how disinformation spreads and is believed by individuals.

Major Discussion Point

Challenges in combating disinformation

Balance needed between effective governance and ensuring democratic freedoms

Explanation

There is a need to strike a balance between effective governance of the information landscape and ensuring that democratic freedoms for civilians are upheld. Policy responses to address disinformation should not infringe on civil liberties and freedom of expression.

Major Discussion Point

Recommendations for addressing disinformation

Agreements

Agreement Points

Need for multi-stakeholder approach

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Multi-stakeholder approach involving government, civil society and platforms needed

Philippine government has failed to effectively address electoral disinformation

Regional cooperation and intelligence sharing should be strengthened

The speakers agree that addressing disinformation requires collaboration between government, civil society, and tech platforms, as well as regional cooperation.

Challenges in defining and combating foreign interference

PIETER ALEXANDER PANDIE

BICH TRAN

MARIA ELIZE H. MENDOZA

Difficult to define and attribute foreign information manipulation and interference

Balancing political stability and freedom of speech is challenging

Philippine government has failed to effectively address electoral disinformation

The speakers highlight the difficulties in defining foreign interference and balancing efforts to combat it with maintaining freedom of speech and political stability.

Importance of digital literacy

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Digital literacy must be incorporated into education at all levels

The speakers emphasize the need for improved digital literacy to combat disinformation, particularly in the face of evolving technologies like AI.

Similar Viewpoints

Both speakers highlight the increasing sophistication of disinformation campaigns, particularly those originating from foreign actors, and their potential impact on domestic politics and regional disputes.

PIETER ALEXANDER PANDIE

BICH TRAN

Indonesia faces increasing use of AI-generated disinformation in elections

Vietnam is concerned about China’s disinformation on South China Sea disputes

Both speakers discuss the challenges posed by actors spreading disinformation, whether domestic ‘independent’ practitioners or foreign state-sponsored efforts, and the need for effective countermeasures.

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Philippines information ecosystem is saturated with “independent” media practitioners spreading disinformation

Australia experiences foreign interference attempts, particularly from China

Unexpected Consensus

Limitations of technical solutions

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

BELTSAZAR KRISETYA

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Confirmation bias makes people susceptible to believing disinformation

There was an unexpected consensus among speakers that technical solutions alone are insufficient to combat disinformation. They agreed that sociological factors, such as confirmation bias and lack of digital literacy, play a crucial role in the spread and belief of disinformation, necessitating a more holistic approach.

Overall Assessment

Summary

The main areas of agreement among speakers include the need for a multi-stakeholder approach to combat disinformation, the challenges in defining and addressing foreign interference, and the importance of digital literacy. There was also consensus on the limitations of purely technical solutions and the need to consider sociological factors.

Consensus level

The level of consensus among the speakers was moderate to high, particularly on the need for collaborative efforts and the complexity of the disinformation landscape. This consensus implies that addressing disinformation in Southeast Asia and beyond requires a comprehensive, multi-faceted approach involving various stakeholders and considering both technical and sociocultural aspects. However, the specific strategies and priorities may vary depending on each country’s unique context and challenges.

Differences

Different Viewpoints

Role of government in combating disinformation

BICH TRAN

MARIA ELIZE H. MENDOZA

Vietnam created Task Force 47 to counter “wrong views” on the internet

Philippine government has failed to effectively address electoral disinformation

While Vietnam has taken a more active and restrictive approach through government intervention, the Philippines has struggled to effectively address disinformation through government action, leading to civil society taking on more responsibility.

Perception of foreign interference threats

BICH TRAN

PIETER ALEXANDER PANDIE

Vietnam is concerned about China’s disinformation on South China Sea disputes

Difficult to define and attribute foreign information manipulation and interference

Vietnam has a clear focus on China as a source of disinformation, while the Indonesian perspective acknowledges the difficulty in defining and attributing foreign interference, suggesting a more nuanced view of the threat landscape.

Unexpected Differences

Approach to platform regulation

FITRI BINTANG TIMUR (FITRIANI)

MARIA ELIZE H. MENDOZA

Australia is developing voluntary codes for platforms and considering legislation

Philippine government has failed to effectively address electoral disinformation

While both countries face challenges with disinformation, Australia’s approach of developing voluntary codes and considering legislation contrasts with the Philippines’ lack of progress in this area. This difference is unexpected given that both are democratic countries facing similar challenges.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of government in combating disinformation, the perception of foreign interference threats, and the approaches to platform regulation.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the need to address disinformation, there are significant differences in how each country perceives and approaches the problem. These differences reflect the varied political systems, levels of digital development, and geopolitical contexts of the countries represented. The implications of these disagreements suggest that a one-size-fits-all approach to combating disinformation in Southeast Asia may not be effective, and regional cooperation efforts will need to account for these diverse perspectives and approaches.

Partial Agreements

All speakers agree on the need for a comprehensive approach to combat disinformation, but they emphasize different aspects: Pandie focuses on multi-stakeholder involvement, Mendoza on education, and Fitriani on regional cooperation. While these approaches are not mutually exclusive, they represent different priorities in addressing the issue.

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Multi-stakeholder approach involving government, civil society and platforms needed

Digital literacy must be incorporated into education at all levels

Regional cooperation and intelligence sharing should be strengthened

Takeaways

Key Takeaways

Foreign information manipulation and interference (FIMI) is an increasing concern in Southeast Asian countries, with different manifestations in each country

Governments in the region are struggling to effectively address disinformation, especially with the rise of AI-generated content

There is a need for multi-stakeholder approaches involving government, civil society, and tech platforms to combat disinformation

Digital literacy efforts are crucial but face challenges in implementation and reaching wide audiences

Balancing effective governance of information ecosystems with protecting democratic freedoms is a key challenge

Resolutions and Action Items

Explore developing a Southeast Asian or Asia-Pacific specific definition for FIMI

Strengthen regional cooperation and intelligence sharing on disinformation issues

Incorporate digital literacy education at all levels of schooling

Engage in multi-stakeholder and inter-regional cooperation to research disinformation

Unresolved Issues

How to define and attribute foreign information manipulation and interference in a consistent way

How to effectively regulate tech platforms without infringing on freedom of speech

Who should be the arbiter of truth in determining what constitutes disinformation

How to address confirmation bias and the sociological aspects of disinformation spread

How to balance political stability concerns with freedom of expression in addressing disinformation

Suggested Compromises

Implement voluntary codes for tech platforms while maintaining government ability to intervene if needed

Use a combination of whole-of-government and strategic government approaches to allow for different governance styles within ASEAN

Balance effective governance of information ecosystems with protections for democratic freedoms and civil liberties

Thought Provoking Comments

So in Southeast Asia, while there is, in ASEAN, for example, while there is the cybersecurity cooperation agreements and so on and so forth, these are still mostly led or hosted by countries such as Singapore or Malaysia, who have higher, I would say, cyber capabilities compared to other Southeast Asian states who are still building on those capabilities. So not everyone is on the same page, either threat perception-wise or capabilities-wise.

speaker

Pieter Alexander Pandie

reason

This comment highlights the disparity in cybersecurity capabilities and threat perceptions among Southeast Asian countries, which is a crucial factor in addressing regional information manipulation issues.

impact

It led to a deeper discussion on the challenges of regional cooperation and the need for context-specific approaches in combating disinformation.

Contents that are obviously false and hyper-partisan, even if they were posted in the last electoral cycle, are still present in these platforms. They have not yet been taken down despite multiple reports, so these content moderation policies really have to be looked at.

speaker

Maria Elize H. Mendoza

reason

This comment brings attention to the ongoing issue of ineffective content moderation by social media platforms, highlighting a critical gap in addressing disinformation.

impact

It shifted the discussion towards the responsibilities of tech platforms and the need for more effective content moderation policies.

So I think for the Vietnamese government, and this speaks to what Pieter and Fitri already mentioned, they know that no matter what the Chinese say about the South China Sea, the Vietnamese people will not believe it.

speaker

Bich Tran

reason

This comment provides insight into the unique dynamics of information manipulation in Vietnam, highlighting how cultural and historical factors influence the effectiveness of foreign disinformation campaigns.

impact

It introduced complexity to the discussion by showing how different countries may have varying vulnerabilities to foreign information manipulation based on their specific contexts.

Even if you did manage to achieve digital literacy, which I think there are a lot of technical solutions for, this is a larger sociological problem at this point, where if you’re getting views for it or if you’re getting power out of it, there’s no reason for anybody to stop sort of putting out disinformation.

speaker

Nidhi (audience member)

reason

This comment challenges the effectiveness of purely technical solutions to disinformation, highlighting the deeper sociological roots of the problem.

impact

It prompted the speakers to address the need for a multi-disciplinary approach to tackling disinformation, beyond just technical solutions.

Only the government can decide what is disinformation or not. And in the case of the one-party state in Vietnam, we have the legislative, executive, and judiciary powers belonging to the state, which means the heads of all these state agencies must be Communist Party members. And so when they say that is disinformation, they have the power to punish.

speaker

Eliza (audience member)

reason

This comment raises important questions about who has the authority to define and combat disinformation, especially in non-democratic contexts.

impact

It led to a discussion about the challenges of addressing disinformation in different political systems and the potential for misuse of anti-disinformation measures.

Overall Assessment

These key comments shaped the discussion by highlighting the complexity of addressing information manipulation and disinformation in Southeast Asia. They brought attention to the disparities in capabilities and threat perceptions among countries, the responsibilities of tech platforms, the influence of cultural and historical factors, the limitations of purely technical solutions, and the challenges of defining and combating disinformation in different political systems. The discussion evolved from a focus on specific country cases to a broader consideration of regional cooperation, multi-stakeholder approaches, and the need for context-specific strategies in combating disinformation.

Follow-up Questions

How can we define FEMI (Foreign Information Manipulation and Interference) in a way that is context-specific and more applicable to different information landscapes in Southeast Asia or the Asia Pacific?

speaker

Pieter Alexander Pandie

explanation

A more regionally-specific definition could help better understand and address FEMI issues in the context of Southeast Asian countries.

What are the best platforms or forums to discuss FEMI issues in the Asia Pacific region, considering the different threat perceptions and approaches of various countries?

speaker

Koichiro (audience member)

explanation

Identifying appropriate platforms for discussion could lead to more effective regional cooperation in addressing FEMI.

How can we address the broader sociological problem of confirmation bias and the incentives for spreading disinformation, beyond just technical solutions?

speaker

Nidhi (audience member)

explanation

Addressing the root causes of disinformation spread could lead to more effective long-term solutions.

How can we improve digital literacy campaigns and make them more effective, especially for those who haven’t formed their opinions yet?

speaker

Bich Tran

explanation

Effective digital literacy campaigns could help prevent the spread of disinformation and improve information resilience.

How can we balance the need for platform regulation with concerns about censorship and freedom of expression?

speaker

Fitri Bintang Timur (Fitriani)

explanation

Finding this balance is crucial for effective policy-making in combating disinformation while preserving democratic values.

How can we improve multi-stakeholder and inter-regional cooperation to better understand and address disinformation, its real-world impacts, and the misappropriation of anti-disinformation discourse?

speaker

Fawaz (audience member)

explanation

Enhanced cooperation could lead to more comprehensive and effective approaches to combating disinformation.

How can we develop curricula and textbooks for digital literacy that are objective and widely accepted?

speaker

Bich Tran

explanation

Developing appropriate educational materials is crucial for implementing effective digital literacy programs.

How can multiple countries work together to more effectively demand action from tech platforms in addressing disinformation?

speaker

Maria Elize H. Mendoza

explanation

Collective action by multiple countries could potentially have a greater impact on tech platform accountability.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.