Disinformation and Misinformation in Online Content and its Impact on Digital Trust

7 Jul 2025 14:00h - 14:45h


Session at a glance

Summary

This discussion, titled “More Truth Less Trust,” focused on the growing challenges of misinformation and disinformation in the digital age, particularly as AI technologies become more sophisticated. The panel featured Christine Strutt as moderator, along with Tara Harris from Prosus, Mike Mpanya from Newbie.ai, and Lori Schulman from INTA, examining how false information impacts public trust and exploring potential solutions.


The conversation began by distinguishing between misinformation (unintentional spreading of false information) and disinformation (deliberate deception intended to cause harm). Tara Harris highlighted how bad actors increasingly use deepfakes and AI impersonation to create investment scams targeting consumers, noting that current IP laws don’t adequately address these emerging threats. She emphasized the need for multi-faceted enforcement approaches and welcomed regulatory developments like France’s ban on sharing deepfakes and Denmark’s consideration of granting copyright to faces and physical likeness.


Mike Mpanya addressed a critical but often overlooked issue: how large language models trained primarily on data from the Global North create inherent biases that disadvantage users in the Global South. He explained that AI systems trained on historically biased datasets can perpetuate discrimination, particularly in healthcare and financial services, and stressed the need for testing data integrity before deployment. Mpanya advocated for establishing global frameworks and best practices for AI development, similar to engineering standards in other fields.


The discussion revealed tension between the desire for harmonized global AI regulation and the reality of fragmented regional approaches. While speakers agreed that harmonization would benefit smaller companies and startups, they acknowledged that current regulatory diversity actually favors large tech companies with resources to navigate multiple legal frameworks. Lori Schulman emphasized that solving AI’s trust and safety challenges requires multi-stakeholder collaboration, noting that these technological disruptions, while challenging, are not unprecedented and can be successfully managed through inclusive cooperation.


Key points

## Major Discussion Points:


– **Distinction between misinformation and disinformation**: The panel explored how misinformation involves unintentional spreading of false information (like a “mistake”), while disinformation is deliberately created to cause harm. They discussed how both concepts manifest in AI-generated content, deepfakes, and executive impersonation scams.


– **Bias and representation in AI training data**: A significant focus on how large language models are predominantly trained on data from the Global North (US and Western Europe), creating inherent biases that disadvantage users from the Global South. This affects everything from healthcare diagnostics to financial services, with historical biases (like apartheid-era data in South Africa) being perpetuated in AI systems.


– **Regulatory fragmentation vs. harmonization**: The speakers debated the challenges of navigating multiple, fragmented AI regulations across different jurisdictions. While harmonized regulation would benefit scaling and innovation, the current reality forces companies (especially smaller ones) to spend more time with lawyers than engineers, potentially favoring big tech companies over startups.


– **Enforcement challenges in current IP law frameworks**: Discussion of how existing intellectual property laws inadequately address AI-generated deepfakes, voice cloning, and executive impersonation. The panel noted emerging solutions like France making deepfakes illegal and Denmark considering granting copyright to faces and physical likeness.


– **Solutions and future outlook**: The conversation concluded with optimism about small language models, open-source AI development, and the need for multi-stakeholder collaboration. Speakers emphasized the importance of creating resource hubs and interdisciplinary cooperation to address these challenges.


## Overall Purpose:


The discussion aimed to examine the growing threat of AI-powered misinformation and disinformation, explore current challenges in combating these issues through existing legal frameworks, and identify potential solutions through better regulation, improved data practices, and multi-stakeholder collaboration. The panel sought to bridge perspectives from legal, technical, and policy domains to address how false information erodes public trust and what can be done about it.


## Overall Tone:


The discussion maintained a professional, collaborative tone throughout, with speakers building on each other’s points constructively. While the conversation began with a somewhat concerning overview of the misinformation landscape, it evolved into a more optimistic and solution-oriented discussion. The speakers demonstrated mutual respect and expertise, with the tone becoming increasingly hopeful toward the end as they discussed emerging technologies like small language models, open-source AI, and the potential for better resource sharing and collaboration to address these challenges.


Speakers

– **Christine Strutt** – Intellectual property attorney and partner at Von Seidels (IP law firm focusing on the African region), Chair of the Global Governance Subcommittee of the International Trademark Association’s (INTA) Internet Committee, Session moderator


– **Lori Schulman** – Former board member and senior director of Internet Policy at INTA, General counsel and intellectual property counsel for Fortune 100 companies and major non-profit organizations, Immediate past president of ICANN’s Intellectual Property Constituency (IPC), High-level facilitator at WSIS


– **Mike Mpanya** – Entrepreneur and AI strategist, Founder and CEO of Newbie.ai, Former leader of Africa’s largest youth organization, Has a foundation that trains young leaders, Background in engineering and public policy


– **Tara Harris** – Group IP Lead for Digital and Regulatory at Prosus (global consumer internet group and technology investor), Responsible for intellectual property strategy, enforcement, and risk management across global portfolio, Provides strategic support for digital policies and regulatory initiatives and AI governance frameworks


**Additional speakers:**


– **Audience** (specifically **Nanya Sudhir**) – Works at the ILO (International Labour Organization)


Full session report

# More Truth Less Trust: Comprehensive Discussion Report


## Executive Summary


The panel discussion “More Truth Less Trust” examined the challenges of misinformation and disinformation in the digital age, particularly as artificial intelligence technologies become increasingly sophisticated. Moderated by Christine Strutt, an intellectual property attorney and partner at Von Seidels, the session brought together diverse expertise from Tara Harris (Group IP Lead for Digital and Regulatory at Prosus), Mike Mpanya (Founder and CEO of Newbie.ai), and Lori Schulman (former INTA board member and senior director of Internet Policy).


The conversation evolved from initial concerns about AI-powered fraud and deepfakes to a broader examination of systemic biases in AI training data and regulatory challenges. Speakers generally agreed on the need for multi-stakeholder collaboration and better resources for smaller companies, while discussing various approaches to regulatory frameworks and technical solutions.


## Key Themes and Definitions


### Distinguishing Misinformation from Disinformation


The discussion began with establishing clear definitions between two related but distinct concepts. Tara Harris explained that misinformation involves the unintentional spreading of false information—essentially mistakes that propagate through digital channels. In contrast, disinformation represents the deliberate creation and dissemination of false information with the intent to cause harm.


Mike Mpanya introduced a more nuanced perspective, arguing that the most widespread form of misinformation stems from large language models themselves, which are trained on internet data that reflects “the most widespread information” rather than “the most correct information.”


### The Scope of AI-Generated Threats


Christine Strutt presented statistics demonstrating the rapid acceleration of AI-generated deceptive content, noting that video deepfakes tripled between 2022 and 2023, while voice deepfakes increased eightfold during the same period. However, she acknowledged getting some statistics from ChatGPT and invited skepticism about the data.


Tara Harris provided concrete examples of how bad actors exploit these technologies, describing how Prosus has been targeted by sophisticated schemes where criminals use deepfakes and voice cloning to impersonate their executives for Bitcoin scams and fraudulent investment schemes. These attacks target consumers through social media platforms, creating convincing audio and video content that appears to feature trusted business leaders endorsing fake investment opportunities.


## Systemic Bias and Global Representation


### The Global North Bias Problem


Mike Mpanya delivered one of the discussion’s most significant insights by highlighting how large language models perpetuate systemic bias through their training data. He explained that most large language models are trained predominantly on information from the Global North, particularly the United States and Western Europe, creating inherent disadvantages for users in the Global South.


This bias manifests in critical applications such as healthcare diagnostics and financial services. Mpanya provided a particularly striking example from South Africa, where AI systems trained on historical credit data perpetuate apartheid-era discrimination in lending decisions. As he explained, these systems continue to reflect historical biases that systematically disadvantaged certain populations.
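Mpanya’s call for testing data integrity before deployment can be made concrete. The minimal sketch below, in Python, checks a historical credit dataset for exactly the kind of group-level skew he describes; the file name and column names are hypothetical placeholders, not anything Newbie.ai disclosed.

```python
# Illustrative sketch only: flag a demographic-parity gap in historical
# credit decisions BEFORE the data is used to train a model.
# "historical_credit_decisions.csv" and its columns are hypothetical.
import csv
from collections import defaultdict

approved = defaultdict(int)
total = defaultdict(int)

with open("historical_credit_decisions.csv", newline="") as f:
    for row in csv.DictReader(f):
        group = row["demographic_group"]   # hypothetical column name
        total[group] += 1
        if row["approved"] == "1":         # hypothetical column name
            approved[group] += 1

# Approval rate per group; a wide spread signals the encoded bias
# the panel describes (e.g., apartheid-era lending patterns).
rates = {g: approved[g] / total[g] for g in total}
print("Approval rate by group:", rates)

gap = max(rates.values()) - min(rates.values())
print(f"Demographic-parity gap: {gap:.2%}")
```

A check like this is deliberately crude: it only surfaces one symptom of bias, but it shows how a data-integrity gate could sit at the start of the design process, as Mpanya suggests, rather than after deployment.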


### Generational Shifts in Information Verification


Mpanya identified a fundamental shift in how AI-native generations approach information verification. Unlike previous generations who might consult libraries or search engines to verify information, younger users increasingly turn to AI systems as their primary source of truth. This creates a circular problem where biased AI systems become both the source of misinformation and the tool used to verify information.


The implications of this shift extend beyond individual decision-making to broader societal trust in information systems. Mpanya returned to the theme of complexity when discussing regulation: “Some weeks I spend more time with lawyers than I do with engineers. And I don’t think that’s a great position to be in as a founder of a technology company.”


## Legal and Regulatory Challenges


### Inadequacy of Current IP Frameworks


Tara Harris emphasized that existing intellectual property laws prove inadequate for addressing AI-generated impersonation and deepfakes. Current legal frameworks were not designed to handle sophisticated AI-generated content that can convincingly replicate voices, faces, and mannerisms. This forces companies to pursue creative multi-jurisdictional enforcement approaches, often with limited success.


Christine Strutt highlighted a particular vulnerability: while celebrities and public figures have some recourse through defamation laws, everyday people lack similar protections against AI-powered impersonation.


### Emerging Regulatory Responses


The discussion revealed some encouraging developments in certain jurisdictions. France has made sharing deepfakes illegal, and Tara Harris mentioned that Denmark is considering granting copyright protection to faces and physical likeness. These initiatives represent early attempts to adapt legal frameworks to address AI-generated threats.


However, Lori Schulman raised fundamental questions about the regulatory rush, asking: “Do we know enough about how things work to regulate? What we’re seeing now is a lot of regulations come into place, and then either the ability to technically enforce them, or the principle behind the enforcement isn’t syncing up with, again, the technology.”


### The Regulatory Fragmentation Challenge


Mike Mpanya highlighted how fragmented regional regulations can favor large technology companies over smaller innovators. Only major corporations possess the resources to navigate compliance across multiple jurisdictions, effectively creating barriers for smaller competitors.


Lori Schulman noted the scope of this challenge, mentioning that “dozens, over 60 jurisdictions” have introduced “nearly if not more than 1,000 regulations,” creating uncertainty about enforcement and technical feasibility.


## Areas of Agreement and Collaboration


### Multi-Stakeholder Approaches


The speakers generally agreed on the necessity of multi-stakeholder approaches to AI governance. Tara Harris advocated for companies to adopt global ethical AI policies, noting that Prosus based their policy “largely on the OECD AI principles.” Mike Mpanya emphasized the need for interdisciplinary collaboration extending beyond engineering to include legal, social, and humanities expertise. Lori Schulman reinforced this view and mentioned INTA’s five principles for AI governance.


### Support for Smaller Players


The speakers agreed that smaller companies and Global South entrepreneurs require better access to resources and training for AI governance. Lori Schulman suggested that the ITU could create information hubs providing primers and training for entrepreneurs scaling AI solutions across jurisdictions.


### Technical Evolution Towards Specialization


Both Tara Harris and Mike Mpanya discussed how small language models trained on specific, local datasets can often outperform large language models. Mpanya mentioned that JP Morgan Chase uses small language models, suggesting this represents a promising direction for addressing bias and representation issues.
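To illustrate the pattern the speakers describe, here is a minimal sketch of fine-tuning a small open model on a locally curated corpus, assuming the Hugging Face transformers and datasets libraries; the model name and corpus file are placeholders, and this does not represent JP Morgan Chase’s or the panellists’ actual systems.

```python
# Sketch: adapt a small open causal LM to a local, domain-specific corpus,
# the "small language model on representative local data" idea from the panel.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # placeholder small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A locally curated text corpus representing the target community/domain.
dataset = load_dataset("text", data_files={"train": "local_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-local",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False configures standard causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is the design choice, not the specific stack: the curated local corpus, not the base model’s internet-scale training data, determines what the resulting tool reflects.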


## Audience Engagement and Youth Perspectives


An important part of the discussion involved an audience question from Nanya Sudhir from the International Labour Organisation about motivating AI-native generations to understand the gravity of biased data sources. This prompted detailed responses from all speakers about youth engagement with AI bias issues.


Mike Mpanya expressed optimism about youth consciousness regarding decoloniality and their demand for technology that reflects their experiences. The speakers discussed various approaches to education and engagement with younger generations who are growing up with AI as a primary information source.


## Solutions and Future Outlook


### Technical Solutions


The discussion revealed growing interest in technical solutions to bias and representation problems. Mike Mpanya highlighted that open-source AI development enables global communities to fine-tune tools to reflect their specific needs and contexts. Small language models offer particular promise, as they can be trained on carefully curated, representative datasets.


### Governance Frameworks


Speakers identified several approaches to governance challenges. Tara Harris advocated for voluntary adoption of global policies based on OECD AI principles. The discussion also touched on upcoming developments, including the EU AI Act code of principles being published “in the next few weeks” and Japan’s AI framework.


### Market Forces and Demographics


Mpanya provided an optimistic perspective on market incentives, noting demographic trends that suggest the Global South represents a significant market for technology products, creating business incentives for developing inclusive AI solutions.


## Ongoing Challenges and Questions


### Legal Protection Gaps


The discussion highlighted ongoing questions about how legal systems can provide protection for ordinary people against AI impersonation when current laws primarily protect celebrities and public figures.


### Data Quality and Training


Questions remain about the best approaches to addressing biased datasets—whether to improve existing datasets or build entirely new ones from scratch.


### Regulatory Implementation


Lori Schulman’s concerns about regulatory enforceability remain significant, particularly regarding the technical feasibility of enforcement and the alignment between legal principles and technological realities.


## Historical Context and Reassurance


Lori Schulman provided valuable historical perspective, drawing parallels to previous technology challenges. She mentioned the domain name system as an example of how the internet community has successfully navigated complex technical and policy challenges before, offering reassurance that “we got through it” and can address current AI challenges as well.


## Conclusion


The “More Truth Less Trust” discussion revealed the complexity of addressing misinformation and disinformation in the AI era. The conversation evolved from tactical concerns about fraud prevention to strategic questions about global technology governance, systemic bias, and regulatory approaches.


The speakers demonstrated broad agreement on fundamental challenges while exploring different approaches to solutions. Mike Mpanya’s insights about systemic bias and regulatory fragmentation, combined with Tara Harris’s practical enforcement experience and Lori Schulman’s historical policy perspective, created a comprehensive dialogue about both current challenges and potential paths forward.


The discussion highlighted the importance of multi-stakeholder collaboration, the need for better resources for smaller companies and Global South entrepreneurs, and the potential of technical solutions like small language models to address bias issues. While significant challenges remain, particularly around regulatory coordination and protecting ordinary citizens from AI-powered threats, the speakers’ various perspectives suggest multiple avenues for progress through continued collaboration and innovation.


Session transcript

Christine Strutt: Good afternoon everyone and thank you for joining our session that I’ve loosely renamed More Truth Less Trust. Of course this is a social media phrase that notes the predicament that the greatest tools enabling human communication and productivity nowadays are increasingly becoming the source of deception in the service of manipulating our minds and actions. Now in January this year the World Economic Forum in its global risk report classified misinformation and disinformation as the top short-term risk for the second year running, over and above risks like extreme weather events, societal polarization, cyber espionage and warfare. Now a number of reputable studies have in the last year concluded that between 75 and 90 percent of people are expressly concerned about AI’s role in spreading misinformation, and people’s ability to distinguish between real and fake content is becoming alarming, with apparently 40 percent of our guesses being accurate. Now those two statistics I got off ChatGPT, so I invite you to also approach them with some skepticism, but in terms of more academic studies, video deepfakes tripled and voice deepfakes increased eightfold between the years 2022 and 2023. Our speakers today are all at the forefront of dealing with these issues and can attest to how false information impacts and erodes public trust in the media, organizations and governments. I will do a quick round of introductions. My name is Christine Strutt, I’m an intellectual property attorney and a partner at Von Seidels, which is an IP law firm that focuses on the African region. I’m also the chair of the Global Governance Subcommittee of the International Trademark Association’s (INTA) Internet Committee, and it’s my honor to moderate this panel today and host all three of these distinguished speakers. On the screen, someone you’ll see in a minute, is Tara Harris, Group IP Lead for Digital and Regulatory at Prosus. Prosus is a global consumer internet group and one of the largest technology investors in the world, operating across transformative sectors, including e-commerce, fintech, food delivery, and educational technology. As a subsidiary of Naspers, Prosus powers leading digital platforms across emerging markets, with significant investments in companies such as Tencent and operations spanning India, Brazil, China, and beyond. As Group IP Lead at Prosus, Tara leads the organization’s intellectual property strategy, enforcement, and risk management across its global portfolio. She also provides strategic support to the company’s broader digital policies and regulatory initiatives as well as the AI governance frameworks. We’re also joined by Mr. Mike Mpanya, entrepreneur and AI strategist with a powerful track record of advancing inclusive, innovative AI solutions across Africa and the Global South. As the founder and CEO of Newbie.ai, he leads one of the continent’s most dynamic AI ventures, recognized for harnessing large language models and cutting-edge technologies to solve critical challenges in healthcare, education, financial inclusion, and governance. Mike has advised governments, multinational corporations, and startups on AI adoption, digital transformation, and ethical innovation. Drawing from a background in engineering and public policy, he bridges the gap between complex technologies and real-world impact, particularly in under-serviced communities. 
And then, with me in person, is Lori Schulman, former board member and senior director of Internet Policy at INTA. Lori is responsible for managing the association’s various Internet policy and advocacy initiatives, as well as representing INTA in forums such as the IGF, ICANN, where she is the immediate past president of the IPC, and of course, WSIS, where she’s served as a high-level facilitator on several occasions. Lori has a varied background as general counsel and intellectual property counsel for both Fortune 100 companies and major non-profit organizations. She’s a notable voice in matters concerning digital policy, data, and domain names. Thank you all for being here today. I’m going to start by posing a question to Tara. I wonder if we could find you on the screen, Tara. But Tara, what is the difference between misinformation and disinformation? And are they really distinguishable, independent concepts?


Tara Harris: Hi, Chris. My video seems to be disabled, so you’ll have to just do with my voice. I hope you can hear me, okay?


Christine Strutt: Okay. Intention. And I think there was another hand at the back. Same. Well, let’s see. Let’s see what our expert has to say. Tara, what, in your opinion, is the difference between those two concepts?


Tara Harris: Yes, indeed. Intention is very important. An easy way to remember it is that misinformation sounds a bit like mistake. So it is the unintentional spreading of false or untrue information, versus disinformation, where this was done on purpose to cause harm. Something that’s quite interesting, however, is what bad actors do with it. We get citizens and faithful customers that come to us and say, we’ve seen this great offering for this new investment, this option for us, for example, to buy into it. Is it real? And then we of course look into it and it’s often a scam. So yes, because we’re an investor, we often see bad actors creating scams or fake investment opportunities to try and trick consumers into believing that they can invest with us. And often there is nothing behind it. They’re just trying to collect their credit card information. So as these technologies are developing, we are starting to see deepfakes, for example, and we’re definitely seeing a much bigger increase towards bad actors impersonating our execs, trying to trick people with these technologies into thinking that they are real and then hooking them into illegal and often Bitcoin scams or similar.


Christine Strutt: Now Tara, you’ve mentioned the executive brands, deepfakes, impersonation. Those are not things that all IP laws adequately cover. How do you find taking enforcement measures or addressing those sorts of wrongs given the current IP laws that you are given to work with?


Tara Harris: Yeah, that’s a great question. I think as these issues are increasing, we’re having to become a lot smarter when it comes to enforcement. Luckily enough, we have a global footprint, and so we’re experienced in dealing with a large amount of different issues on different platforms. The first thing we’ll have to do is look at where the offense is taking place. Is it on a platform? Is it on an internet website? Is it on a Telegram or a messaging app? Try and see if there are terms of use. Otherwise, if we’re dealing with something that’s really targeting a certain jurisdiction, try and have a look locally. The issue is often when these problems arrive, you want to get them down as soon as possible. And so you’re trying to find normally a multitude of ways to attack them. It could be looking at IP rights or privacy rights, or, as you say, a certain right in a country. Recently, we’ve seen France making it illegal to share deepfakes. Of course, if there’s pornographic or sexual content, the penalties and fines are even worse. So it’s great to see changes like this happening. We’re also seeing a large policy shift in Denmark. I’m sure many of our audience have read in the news that Denmark is considering granting copyright to faces and physical likeness to try and give citizens the ability to enforce against deepfakes. So I think we’re starting to see a shift, but there’s far from harmonization at the moment.


Christine Strutt: So IP for humans. I’m not sure how I feel about that. But I think coming from a country where the concept of image rights doesn’t really exist in our laws, and you only have protections for well-known famous people in terms of defamation, I do think that is a trend that’s promising, because given the current state of affairs, celebrities have recourse or public figures, but the everyday I’m going to start with you, Mike. I think it’s very important to understand that the way that people think about the world today, whether it’s men on the street or women on the street, has no protection if they get impersonated or their likeness or voice gets copied. So that is a very concerning state of affairs to me. Mike, what Tara is describing, though, is deep fakes and sophisticated voice or image impersonations. That typically use generative AI. And, of course, this is top of mind for all of us. When it comes to online fraud, we typically find ourselves, or at least I hope that we find ourselves, discussing how to combat the risks that are presented by these tools. But there are also simpler forms of misrepresentation in the digital space that could occur without any bad actors being involved. And that’s a very important part of the conversation. So, Mike, I’m going to start with you. I’m going to start with you. I think it’s very important to understand that the way that people think about the world today, whether it’s men on the street or women on the street, has no protection if they get impersonated or their likeness or voice gets copied. So that is a very concerning state of affairs to me. In the context of this digital space, that could occur without any bad actors or intentional wrongdoing. Can you perhaps share with us some of the issues that you encounter in your line of work when it comes to the development of language models and applications that are, in fact, intended for good? Thanks for that, Christine. I’m glad that we can see Tara now. Just letting you know you’re visible on our end as well. I think the most widespread form of misinformation that, in my view, is going to become mainstream and where we really need to be the strictest is misinformation around large language models. In other words, someone going to ChatGPT to get information about the real world. And you, of course, Christine, started your conversation by saying and disclosing to us that you had received these stats off of ChatGPT. But you had the presence of mind to say, well, we have to look at other academic sources. And what we’re seeing as an increasingly challenging problem is for generations that are AI native. So, in other words, these are individuals who are coming of age, so anywhere between, you know, 12 and 18.


Mike Mpanya: Thanks for that, Christine. I’m glad that we can see Tara now. Just letting you know you’re visible on our end as well. I think the most widespread form of misinformation that, in my view, is going to become mainstream, and where we really need to be the strictest, is misinformation around large language models. In other words, someone going to ChatGPT to get information about the real world. And you, of course, Christine, started your conversation by disclosing to us that you had received those stats off of ChatGPT, but you had the presence of mind to say, well, we have to look at other academic sources. What we’re seeing as an increasingly challenging problem is for generations that are AI native. In other words, these are individuals who are coming of age, so anywhere between, you know, 12 and 18 and early 20s, during the AI era, where the first place they go to verify information is not a library or a search browser, as we have done historically, but AI. Now the predominant challenge with large language models is that large language models are neural networks that were trained on the internet. In other words, they’re an amalgamation of information that was available. And therefore, their training data has inherent bias, not towards what is the most correct information, but what is the most widespread information. So in other words, a challenge that we, in my line of work, engage with every day, is the fact that most large language models are trained on information from the Global North, in particular the United States and Western Europe. The majority of information on the internet comes from those markets. So when you’re engaging with a large language model around any topic, whether it’s around inclusive finance and best practice for starting a business in a rural community in Africa, or best practice for growing a business in a part of Southeast Asia, it’s going to give you information that is not necessarily correct. Now, those are perhaps some of the more benign examples. Where it becomes fundamentally more complex is in AI use cases where you’re trying to use AI for something like health or finance, and the underlying health data set does not include the market you’re trying to reach. In Newbie’s line of work, we’re trying to expand access to healthcare for those who need it most, with a particular focus on the Global South. You don’t have an abundance of training data, healthcare data, on the types of people living in the Global South. So that can cause some really challenging consequences when you’re trying to look at the risks that someone has for a particular disease, or when you’re trying to get the right form of diagnosis. A practical example we’ve encountered, and I’ll leave it here, is one in the banking space, where in the developing world, fintechs are very quickly becoming the most common route to finance. And all fintechs are exploring how you use large language models and machine learning to expand access to finance, in particular credit. The challenge in a country like South Africa, of course, is that South Africa 30 years ago was not an inclusive society. It was a society separated by race. So if you’re going to use a historical data set to credit-rate Africans, people of European origin, et cetera, the different race groups in South Africa, you’re going to be confronted with the challenge that your data set is inherently biased, because it reflects the society that the data set was created in. So in order for us to combat what I think will become the most dominant form of misinformation, we’re going to have to have a new standard for testing the integrity and the reliability of the underlying data sets we use to train these models.


Christine Strutt: Okay. So I agree about the integrity of the data set, but let’s say you’ve already got a distorted view. What’s the solution to improving that? Is it just feeding in synthetic data, or do you just have to build afresh? What’s your thinking around improving the quality of the data that you are going to use inevitably?


Mike Mpanya: So there are kind of two schools of thought around this. One is the reinforcement training that you can do to the model. So in other words, let’s get as many different people to use it as possible, and over time the model improves. 
The challenge is, if the model you’re building, or the tool or the use case, is supposed to make decisions in real time, are you comfortable with people being adversely affected because the model at the time has an inherent bias? And I think that’s something that, in my view, no company would be comfortable with: knowing that a certain segment of customers or clients was being adversely affected because the underlying data set is incorrect. I think the more practical option, and why conversations like these are so important, is that we actually, as a society and as AI practitioners, lawmakers, and users, have to begin to imagine and get creative around what requirements we think about when looking at what an underlying data set has to have. So if you look at many of the other disciplines in engineering, there are several best practices and codes of conduct that people ascribe to, either by law or willingly, because that’s just part and parcel of the best-practice culture when building a car or building a bridge. I think having high-quality data sets that are representative, that do take in diverse demographics, and that are tested for bias before they’re used needs to become part and parcel of the design process. So when you’re building a small language model, a large language model or an agent, at the beginning of the process we’re going to have to test the data sets, and we’re going to have to create at least a global framework or regional framework around what is best practice, to make sure that the data sets we’re building these models on actually have integrity and truth to them.


Christine Strutt: Thanks for that, Mike. Tara, I know you’ve got some views on best practices and regulation and standardization or harmonization. Could you say some of your thoughts around that?


Tara Harris: Indeed. So anyone who’s in the EU, I think, or even who’s not in the EU, has probably heard of the EU AI Act. Of course, when that first arrived, we at Prosus had to try and figure out how we are going to manage this risk. How are we going to make sure that our businesses are compliant, that they’re processing fair data, accurate data? Of course, they’ve been using ML and AI for many years, and now many of the normal algorithms are subject to this regulation. And we took the view that we would adopt a global policy on ethical and responsible AI development, and this was based largely on the OECD AI principles. I think they promote innovative, trustworthy AI that respects human rights and democratic values. The more that larger companies voluntarily decide to take up these policies and make sure that they’re trying to ensure that they’re developing and deploying safe and responsible AI, the more it will become the norm. I think this will also help companies such as Mike’s, for example, be able to scale, because while South Africa might not be subject to this, if they are already setting their benchmark pretty high at an EU level, that’s going to make it much easier for them to go into other regions, because many of the countries have got similar frameworks; most of these frameworks are global. So I think that they can be adopted and adapted by various industries. But I’d love to hear Mike’s view on this as well.


Christine Strutt: So would I, but I’ve just noticed our slides have sort of frozen, and I want to suggest that we actually just close them out and see only the speakers, if that’s okay. But whilst we do that, Mike: more regulation, harmonized regulation. Do you think that’s the way forward?


Mike Mpanya: I think if I could wave a magic wand, I would want harmonized regulation. I think Tara’s spot on when she’s talking about the challenges around scaling and growing. And that’s something we’re dealing with on a regular basis: when moving from a South Africa to a Bangladesh to a Nigeria, we’re deploying what should be the same solution. If you think about the logic behind startups and the traditional laws of scaling, it is that you create a particular product, that product has value, and you’re able to replicate and deploy that product all over the world. Unfortunately, because the current regulation is so regional and in many instances fragmented, what we’re dealing with is that each and every time we go into a new market, as opposed to focusing on the technical requirements of the solution, we’re focusing on the legal requirements of the solution. And I was having this conversation with Christine when we were in South Africa a few weeks ago, that some weeks I spend more time with lawyers than I do with engineers. And I don’t think that’s a great position to be in as a founder of a technology company. So yes, in a perfect world, I think what we would want is more harmonization. What we’re seeing, though, is an increasingly regional approach, where particular regions and particular countries are choosing how to regulate their data and using data sovereignty as a concept to justify that. What I would say is the downside of that is that it actually plays into the hands of big tech as opposed to small tech. So even though the logic is that if you have a regional framework or local framework, you’re going to make it harder for the big players to come into your market, actually what you do is squeeze out the small players. Because, to Tara’s initial point, only the large companies can afford armies of lawyers to understand, research, and figure out what is best practice and what should be done in each market. And in a world where you have multiple fragmented legal frameworks, what ends up happening is you push out competition in the tech space and the AI space, and you effectively leave the world vulnerable to a few major players with a lot of capital. So I would want more harmonization, clarity around harmonization. I think that would be easier for our scaling. However, in the short term, I do think something practical we’re all going to have to deal with is very fragmented regional approaches to governing information.


Christine Strutt: Those are such good observations. And I will just say, as the lawyer, whenever we have to advise on, you know, principles, best practices or themes in some other standard, that’s not great for us either, right? Because we are also just hypothesizing and trying to figure out what is the practical implication of that rule. So I understand that you can’t have fixed do’s and don’ts but, you know, that is inevitably why we end up spending so much time with startups and tech founders, because we are all together in this and trying to figure out this sort of uncharted territory of laws and regulations. Speaking of which, Lori, you’ve been very overlooked beside me here. I’d like to ask you more about regulation and policy, because INTA as an organization actually represents very diverse stakeholders. You know, we have the tech community, we have educational groups, non-profits, we have governmental agencies, we have private practice. What’s INTA’s view on AI regulation and policy?


Lori Schulman: I would say that INTA’s views are evolving. And I do believe that sitting in a room with a bunch of lawyers when you prefer to be coding is probably not the best situation, but it is the best situation in a world where the legal frameworks are not quite fixed and solid. So it’s not a waste of time, speaking as a lawyer. I’ve loved my career as a tech lawyer. Consult a lawyer. I mean, I’m just going to go there. But that being said, yes, we’ve noticed enormous trends in regulation. As you will hear in other sessions, over 60 jurisdictions have introduced nearly if not more than 1,000 regulations. So regulations are springing up all over the place, and it begs some questions. One, do we know enough about how things work to regulate? What we’re seeing now is a lot of regulations come into place, and then either the ability to technically enforce them, or the principle behind the enforcement, isn’t syncing up with, again, the technology. On the enforcement side, we lawyers don’t know how to counsel clients: well, you have to follow this law; I know you want our advice, but we’re all waiting to see. That’s a very tough spot to be in. And because I’m in a world where I go in front of governments and advocate for INTA’s members, we focus on brands and related intellectual property, and we are very concerned about two things. One, making sure that our members do have the space to innovate, and at the same time, that their innovations are well protected through established intellectual property laws, because we have seen, and have done many studies showing, that trademark-driven economies, economies that recognize intellectual property rights, grow faster and do better. If you go to inta.org, you will see these studies. They were done quite a number of years ago, but the information still holds up. So, you know, do we know enough to regulate, and can we truly future-proof? That’s what I ask regulators, and my members, all the time. I would say this: we’re seeing governmental practices emerging, we’re seeing voluntary practices emerging, and organizations like INTA are developing policy frameworks, where they can go and express to governments what they think might be the most appropriate way for the private sector and the governmental sector to move forward. I’m going to recognize a few jurisdictions, just so you can see the diversity of it. Japan has an AI framework that includes social principles on human-centric AI. The EU has the EU AI Act and the corresponding code of principles. For those of you who follow that, it’s very thorny. There’s a lot of questions. It’s very broad in some cases, extremely specific in other cases, and again, there’s uncertainty around whether or not the AI Act can be enforced, and if it’s enforced, whether the principles that we’ve been working on for the last year are the right ones. Those principles will be published in the next few weeks, so keep your eye out, because I think they will become the world’s guideposts simply because of the size and impact of the European Union on the rest of the world. As Tara already mentioned, the OECD has guidelines. You can go to the OECD website, and there’s a lot of great information about things to think about as you’re implementing an AI governance objective. When we speak about AI governance, we’re speaking about it at two levels. One, inside the company. How is the company going to govern its own AI development? 
But we’re also talking about what we talk about here at WSIS, which is global impacts, global infrastructure. How do we scale up in a world that has thousands of laws? This isn’t a new problem, and the other thing I like to emphasize when we talk about these problems is that they’re not new. This happened before. I was there almost 40 years ago now when domain names came on the scene, and commercialized domain spaces and websites were popping up 30 years ago in the late 90s. There was absolute panic. We got through it, folks. I mean, we’re not perfect, but we got through it. There’s a lot more understanding now, and I do believe that that will happen with AI. I would be remiss, before I give the mic back to Christine, if I didn’t talk about INTA’s efforts and where they relate to the Sustainable Development Goals, because as you know, it’s all about the SDGs. The SDGs that INTA is focusing on are SDG 9, which is innovation, industry, and economy, and SDG 16, which talks about justice and a just world, the rule of law. I don’t have the precise—yes, I do. Peace, justice, and strong institutions. We need both. One cannot coexist without the other and benefit the globe. I think most of us here feel that way. I know INTA certainly feels that way. So the five principles that we really support right now are these. Recognizing human versus machine contributions to inputs and outputs. Final decisions on granting or revocation of rights should be subject to human oversight; we don’t want to go off programming AI judges and AI gatekeepers without also having the human element of experience and intuition, and we’re not there yet with AI, certainly. Rights holders should be able to obtain lawful access to data for the purpose of enforcing their rights; we need to know the sources, we need to know who is the right and fair source to go to. Kind of going to the misinformation versus disinformation distinction: one is clearly a mistake, the other is intentional, and if it’s been intentional, there should be accountability. There must be accountability. And lastly, that transparency, however these frameworks are developed, should be balanced, and that balance should be with the need to protect proprietary information. Going back to innovation: patents, protecting what you develop; trademarks, protecting your brands. And what we’re hearing the most about in AI is copyright, protecting your content, potentially protecting your image. None of this, again, is new, but it does need to be rethought in a different type of technological space. And that’s my job.


Christine Strutt: Thanks, Lori. And I think those principles also speak to the WSIS action lines. I mean, off the cuff, if I’m thinking about access to accurate information, building confidence and security in ICTs, the role of the media, ethical dimensions, and of course, then the role of governments and stakeholders.


Lori Schulman: Absolutely. And as we know, the SDGs are tied in; the WSIS action lines flow up to the SDGs. I have come up with a little quote, but I like it because I think it’s right. I think we should let the SDGs be our North Star and let the multi-stakeholder inputs be our compass. There’s no one way to regulate AI. That, I’m convinced about. I mean, if you were in today’s plenary, you heard it’s not a question of either or, government versus non-governmental frameworks. It’s about inclusivity and collaboration at every single level in the stack, whether you call it a policy stack, an information stack, a service stack, right? 
It all integrates in some way. We can’t look to one and not the other. So I have come to a conclusion when you’re ready. I don’t know if you’d like to take questions first or if you’d like me to read my conclusion.


Christine Strutt: Yeah, I think you can do it either way. Yes, we have a couple of minutes. And before I ask for closing remarks from all the speakers, I do want to open the floor. This is labeled as a workshop, and if anyone has a question or comment, I think we’d love to hear it. Please feel free to direct or nominate it to any of the speakers. You have mics in front of you; if not, we have a roving mic.


Audience: Hi, thank you to all the speakers for all of their comments. I think they’re very pertinent and I learned a lot. My name is Nanya Sudhir. I work at the ILO. My question, since I’m usually concerned with, I would say, the sustainability of the UN organizations: we were talking a lot about how to ensure that data sources are, let’s say, decolonized. That we’re taking from all kinds of data sources. That the AI models that are developed take into account a range of sources and not just the biased ones that they currently do. I worry about this because I will live in the future, hopefully. The question here that I come up against a lot personally is how do we motivate a generation that may have grown up only with AI, people who are maybe even a decade younger than me who’ve never lived without the internet? How do we motivate or inspire them to be engaged in this when maybe they don’t realize the gravity of how biased the increasingly mainstream sources of data that currently give answers are?


Christine Strutt: I know that Mike has a youth organization, so I want to pass this question on to him. Mike, if you don’t mind addressing it. You also might have the most interactions with younger users and developers of AI content.


Mike Mpanya: Yeah, yeah. I would say, as someone who’s going to live in the future as well, I’m actually very hopeful around this. So to Christine’s point, I ran Africa’s largest youth organization for a number of years and still have a foundation that trains young leaders. And my sense when engaging with youth is that they’re incredibly conscious. They’re incredibly focused on decoloniality. And actually the demand for better and more open technology, and technology that’s reflective of them, I think is very, very high. And to give you comfort, I think the biggest reason why AI will be decolonized is because there’s a massive market for it. So even when I spend time in the US and in parts of Western Europe, there are tons of VCs and investors looking at how do we build technology for the Global South. And that is because that is the main market. When you look at the demographics data, something that is incredibly fascinating is that though in our cultural conversation looking at the world through a Western lens is normative, from a numbers perspective, the normative parts of the world are the Global South. Most parts of the world, which will struggle to have compute, and will struggle with energy and data centers, are not represented in large language models as they currently exist. So even in a world where the UN organizations might be slow to it, I think the private sector is increasingly going to find a need to be able to answer these questions. And when you talk about decoloniality, one of the things that I think is intimately linked to it is open source technology. And what you’re finding is that open source AI is very quickly growing much faster than closed source AI. And what that means is that people all over the world will be able to fine-tune the tools to be able to reflect them. And the reason why that’s happening is because people in India, in Bangladesh, in South Africa, in Nigeria, you know, in Venezuela want technology that is reflective of them and want a tool that’s able to answer their questions. So I would say that right now it might seem very hopeless when you look at the current dominant technology, but I think in the long term we’re going to have very, very representative tools being built.


Christine Strutt: I don’t know if any of the other speakers want to add to that.


Tara Harris: Yeah, I think mine would be that we’re certainly seeing in our industry a big increase in small language models. And I guess we’re just using the big language models now to power our own data. So I would imagine, and Mike can comment, but I would imagine if we’re going to be using something, for example, to help a specific sector or look at education, whilst we’ll be using the LLM to power the thinking, the actual datasets used to get to the query would in fact be the datasets of the relevant audience. So before, we just had these big LLMs and everyone just used the big LLMs, but now we’re seeing agents and we’re seeing the small language models. So I’d like to hear Mike’s thoughts on how that might change the datasets and the relevance of them.


Mike Mpanya: Yeah, I think Tara’s spot on there. What we’re seeing is that small language models are going to be the dominant form of interacting with AI. So when you think about the massive models, like a GPT model or a Llama model or any of these models with 13 to 32 billion parameters, that isn’t actually going to be how customers or people are going to be engaging with AI. People might take some of the architecture from that and really fine-tune it on local datasets. And we’re seeing companies as large as JP Morgan Chase who only use small language models in their banking sector. And that is increasingly becoming mainstream. Furthermore, I think what’s leading to a market that favors small language models is regulation. As we have more and more regulation around data remaining in country, you’re just not going to be able to host some of these large language models in these developing countries, because it’s just inefficient. So you’d much rather have bespoke tools built for purpose. And what we found in our work is that when you actually build small language models or specific agents that are really, really nuanced on a limited data set, they outperform large language models. And that is common not only in the developing world but even in the United States and Western Europe as well.


Lori Schulman: And I was going to add, from a public policy perspective, that I think that’s right. I think we’ve seen a lot of jumping to global. But the way things really work, and we’re seeing that even in the political sphere right now, is that we’re going back to multisectorial thinking. And we’re going back to thinking inside of borders, inside of regions, inside of certain interest groups. And I don’t think that’s a trend that’s going to end. I actually think that’s a trend that’s going to get stronger. In terms of the sustainability of the UN, there are a lot of questions about that. I’m certainly not here to answer them. But the only thing I will say is that you’re going to hear a lot this week about public-private partnerships and rethinking them. So in terms of how the UN operates, how its funding model might work, what is the appropriate role for the private sector, I don’t know that that question has been truly satisfactorily answered for the private sector. So I would argue that we need a lot more engagement there, because some of the financial resources that have been dependent upon governments may not be there right now. But they could be in the private sector. Some would argue they are. And so we have to get realistic about how resources flow.


Christine Strutt: I’d love to take more questions, but I think we have about three minutes left. So I’m going to ask each of the speakers just to give us one send-off, just a last thought about misinformation, disinformation, and how we can improve the situation for the future. Lori, would you like to go first?


Lori Schulman: I think it’s important that we can conclude that there’s no single way to solve the question that AI poses in terms of ensuring safety and trust. So it has to be multi-sectorial and multi-stakeholder based. I would hope that’s a given. The other last thought I would say is one call to action we would ask organizations like the ITU, and this is something I’m going to give Mike credit for, is perhaps the ITU, from a sustainability perspective, could form information hubs where entrepreneurs like Mike can go to a single resource to get primers, training on what needs to be thought about in terms of starting smaller and scaling upward. That could be a perfect place to put information that benefits entrepreneurs in any sector. And I just have to say one more thing, and I’m sorry because I talk a lot, but just because this is difficult doesn’t mean we should give up.


Christine Strutt: Thanks, Lori. Tara, any concluding remarks from your side?


Tara Harris: Yeah, I echo what Lori says about resources. I think, you know, we’re a big company. I’ve been doing this, like Lori said, since domain names were created. And even still, it is hard, but we have to work together. I think more hubs, more resources, resources for smaller companies, companies from Global South, from Asia, on how to adopt a basic voluntary AI governance framework, education on how to get harmful content down. I think, again, resources and sharing will go a long way.


Christine Strutt: Thanks, and Mike, from your side, a closing remark?


Mike Mpanya: I would reiterate what Lori and Tara said. I think they’re spot on, and we will appreciate those resources, or that portal, as soon as they’re made available. All I would add is that we are going to need as much interdisciplinary collaboration as possible. I think when you look at the history of technology, for a very long time it’s been dominated by the engineers. And I think the first stage of AI has been dominated by engineers and technicians. But if we actually want to make this tool something that creates a more inclusive world and that bridges divides as opposed to exacerbating them, we’re going to need as many people around a table as possible. So I think the hub shouldn’t just be focused on bringing technical expertise to the table, but legal expertise, social expertise, humanities expertise as well.


Christine Strutt: I couldn’t agree more. So thank you all for that insightful and revealing conversation. May you all continue to create awareness and drive positive, impactful change towards a secure and trustworthy online environment. Thank you everyone for joining the session. Enjoy your afternoon.


T

Tara Harris

Speech speed

161 words per minute

Speech length

1043 words

Speech time

387 seconds

Misinformation is unintentional spreading of false information (like a mistake), while disinformation is intentional spreading to cause harm

Explanation

Tara explains that the key difference between misinformation and disinformation lies in intention. She uses the memory aid that misinformation sounds like ‘mistake’ to help distinguish unintentional false information from disinformation, which is deliberately spread to cause harm.


Evidence

She provides examples of bad actors creating fake investment opportunities and scams targeting their company’s customers, often collecting credit card information through deceptive means.


Major discussion point

Definitions and Types of False Information


Topics

Content policy | Cybercrime | Consumer protection


Bad actors use deepfakes and voice cloning to impersonate executives for Bitcoin scams and fraudulent investment schemes

Explanation

Tara describes how criminals are increasingly using sophisticated AI technologies to create fake representations of company executives. These deepfakes are used to trick consumers into believing they can invest with legitimate companies, when in reality they are elaborate scams designed to steal personal and financial information.


Evidence

She mentions seeing ‘a much bigger increase towards bad actors impersonating our execs, trying to trick people with these technologies into thinking that they are real and then hooking them into illegal and often Bitcoin scams or similar.’


Major discussion point

Current Threats and Enforcement Challenges


Topics

Cybercrime | Consumer protection | Content policy


Current IP laws inadequately cover deepfakes and executive impersonation, requiring creative multi-jurisdictional enforcement approaches

Explanation

Tara explains that existing intellectual property laws don’t adequately address deepfakes and impersonation issues, forcing companies to become more strategic in enforcement. They must consider multiple approaches including platform terms of use, local jurisdictional laws, and various types of rights (IP, privacy) to combat these threats effectively.


Evidence

She cites examples of legal developments: ‘France making it illegal to share deepfakes’ and ‘Denmark is considering granting copyright to faces and physical likeness to try and give citizens the ability to enforce against deepfakes.’


Major discussion point

Current Threats and Enforcement Challenges


Topics

Intellectual property rights | Legal and regulatory | Jurisdiction


Voluntary adoption of global policies based on OECD AI principles can help establish norms and facilitate scaling across regions

Explanation

Tara argues that when larger companies voluntarily adopt ethical AI policies based on established frameworks like OECD principles, it helps normalize responsible AI development practices. This approach can also help smaller companies scale more easily across different regions by setting high standards that meet various regulatory requirements.


Evidence

She mentions that Prosus ‘took the view that we will adopt a global policy on ethical and responsible AI development’ based on OECD AI principles, and notes this helps companies ‘be able to scale because while South Africa might not be subject to this, if they are already setting their benchmark pretty high at an EU level, that’s going to make it much easier for them to go into other regions.’


Major discussion point

Regulatory Approaches and Harmonization


Topics

Legal and regulatory | Data governance | Digital standards


Agreed with

– Mike Mpanya
– Lori Schulman

Agreed on

Current regulatory fragmentation creates challenges for scaling and compliance


Companies should adopt global ethical AI policies based on established frameworks like OECD principles to ensure responsible development

Explanation

Tara advocates for companies to proactively adopt comprehensive AI governance policies rather than waiting for regulation. She suggests using established frameworks like OECD AI principles as a foundation for developing internal policies that promote trustworthy AI development while respecting human rights and democratic values.


Evidence

She explains that Prosus adopted ‘a global policy on ethical and responsible AI development’ based ‘largely on the OECD AI principles’ that ‘promote innovative, trustworthy AI that respects human rights and democratic values.’


Major discussion point

Industry Best Practices and Solutions


Topics

Data governance | Legal and regulatory | Human rights principles


Agreed with

– Mike Mpanya

Agreed on

Small language models and specialized AI solutions are becoming more practical and effective


Smaller companies and Global South entrepreneurs need accessible resources and training hubs for AI governance and harmful content removal

Explanation

Tara emphasizes that while large companies have resources to navigate complex AI governance challenges, smaller companies and those in developing regions need more accessible support. She advocates for creating shared resources and educational materials to help these organizations adopt basic AI governance frameworks and learn how to address harmful content.


Evidence

She mentions ‘we’re a big company’ and ‘even still, it is hard’ and calls for ‘more hubs, more resources, resources for smaller companies, companies from Global South, from Asia, on how to adopt a basic voluntary AI governance framework, education on how to get harmful content down.’


Major discussion point

Resource Needs and Collaboration


Topics

Capacity development | Digital access | Legal and regulatory


Agreed with

– Lori Schulman
– Mike Mpanya

Agreed on

Need for accessible resources and training hubs for smaller companies and Global South entrepreneurs


M

Mike Mpanya

Speech speed

191 words per minute

Speech length

2165 words

Speech time

679 seconds

Large language models create widespread misinformation by being trained on biased internet data that reflects most common rather than most correct information

Explanation

Mike explains that large language models are neural networks trained on internet data, which creates a fundamental problem: they prioritize the most widespread information rather than the most accurate information. This bias is particularly problematic because most internet content comes from the Global North, making these models unreliable for Global South contexts.


Evidence

He explains that ‘large language models are neural networks that were trained on the internet’ and ‘their training data has inherent bias, not towards what is the most correct information, but what is the most widespread information’ with ‘most large language models trained on information from the global north, in particular, the United States and Western Europe.’


Major discussion point

Definitions and Types of False Information


Topics

Content policy | Data governance | Cultural diversity
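
One way to picture this mechanism is a toy sketch (the corpus, claims, and function below are invented for illustration and were not presented in the session): a system that answers from raw frequency reproduces whichever claim is most repeated, regardless of accuracy.

    from collections import Counter

    # Toy web corpus: each string is one occurrence of a claim online.
    # The inaccurate claim simply appears more often than the accurate one.
    corpus = [
        "claim A: accurate but rarely published",
        "claim B: inaccurate but widely repeated",
        "claim B: inaccurate but widely repeated",
        "claim B: inaccurate but widely repeated",
    ]

    def most_widespread(claims):
        # Return the highest-frequency claim: a crude stand-in for how
        # prevalence in training data shapes what a model tends to reproduce.
        return Counter(claims).most_common(1)[0][0]

    print(most_widespread(corpus))  # prints the inaccurate but widespread claim B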


AI-native generations increasingly turn to AI rather than traditional sources for information verification, creating new risks

Explanation

Mike identifies a concerning trend where young people who have grown up during the AI era (ages 12-25) are using AI as their primary source for information verification instead of traditional sources like libraries or search engines. This creates significant risks because these AI systems have inherent biases and may provide incorrect information.


Evidence

He describes ‘generations that are AI native’ as ‘individuals who are coming of age, so anywhere between, you know, 12 and 18 and early 20s during the AI era, where the first place they go to verify information is not a library or a search browser, as we have done historically, but AI.’


Major discussion point

Definitions and Types of False Information


Topics

Online education | Content policy | Digital identities


Training data has inherent bias toward Global North information, creating problems for Global South applications in healthcare and finance

Explanation

Mike argues that because most internet data comes from developed countries, AI systems trained on this data are inadequate for Global South contexts. This creates serious problems when AI is used for critical applications like healthcare diagnosis or financial services in developing regions, where the training data doesn’t represent the target population.


Evidence

He provides examples: ‘when you’re trying to use AI for something like health or finance’ in the Global South, ‘you don’t have an abundance of training data, healthcare data, on the types of people living in the global south’ which ‘can cause some really challenging consequences when you’re trying to look at the risks that someone has for a particular disease, or when you’re trying to get the right form of diagnosis.’


Major discussion point

Data Bias and Representation Issues


Topics

Data governance | Inclusive finance | Cultural diversity


Historical datasets reflect past societal inequalities, such as apartheid-era credit data in South Africa affecting current AI lending decisions

Explanation

Mike illustrates how historical bias in datasets can perpetuate past injustices through AI systems. He uses South Africa as an example, where using historical credit data would reflect the inequalities of apartheid, leading to biased lending decisions that discriminate based on race due to the historical context in which the data was created.


Evidence

He explains that ‘South Africa 30 years ago was not an inclusive society. It was a society separated by race. So if you’re going to use a historical data set to credit rate Africans, people of European origin, et cetera, the different race groups in South Africa, you’re going to be confronted with the challenge that your data set is inherently biased because it reflects the society that the data set was created in.’


Major discussion point

Data Bias and Representation Issues


Topics

Inclusive finance | Data governance | Human rights principles


High-quality, representative datasets tested for bias should become standard practice in AI development, similar to engineering codes of conduct

Explanation

Mike advocates for establishing industry standards for AI development that require testing datasets for bias and ensuring they are representative of diverse demographics. He draws a parallel to other engineering disciplines that have established best practices and codes of conduct for safety and quality assurance.


Evidence

He notes that ‘if you look at many of the other disciplines in engineering, there [are] several best practices and codes and forms of conduct that people ascribe to either by law or willingly because that’s just part and parcel of the best practice culture when building a car or building a bridge’ and argues for similar standards in AI development.


Major discussion point

Data Bias and Representation Issues


Topics

Data governance | Digital standards | Legal and regulatory
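
One hedged sketch of what such a pre-deployment check might look like (the group labels, shares, and tolerance below are invented for illustration) is to compare each group's share of a training dataset against its share of the target population and flag under-represented groups before the data is used:

    def representation_gaps(dataset_shares, population_shares, tolerance=0.10):
        # Flag groups whose dataset share falls short of their population
        # share by more than `tolerance` (absolute difference).
        flagged = {}
        for group, pop_share in population_shares.items():
            data_share = dataset_shares.get(group, 0.0)
            if pop_share - data_share > tolerance:
                flagged[group] = {"population": pop_share, "dataset": data_share}
        return flagged

    # Invented example: a health dataset skewed toward the Global North.
    population = {"global_north": 0.17, "global_south": 0.83}
    dataset = {"global_north": 0.80, "global_south": 0.20}

    print(representation_gaps(dataset, population))
    # -> {'global_south': {'population': 0.83, 'dataset': 0.2}}

A check of this shape would be the software analogue of the engineering codes Mike references: a routine test run before a dataset is accepted, rather than an audit performed after harm occurs.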


Fragmented regional regulation favors big tech over small companies because only large corporations can afford legal compliance across multiple jurisdictions

Explanation

Mike argues that the current trend toward fragmented, regional AI regulation actually benefits large technology companies at the expense of smaller competitors. While the intention may be to protect local markets from big tech dominance, the reality is that only large companies can afford the legal resources needed to navigate multiple regulatory frameworks.


Evidence

He explains that ‘some weeks I spend more time with lawyers than I do with engineers’ and notes that ‘only the large companies can afford armies of lawyers to understand research and figure out what is best practice and what should be done in each market’ while fragmented frameworks ‘push out competition in terms of the tech space and the AI space.’


Major discussion point

Regulatory Approaches and Harmonization


Topics

Legal and regulatory | Digital business models | Jurisdiction


Agreed with

– Tara Harris
– Lori Schulman

Agreed on

Current regulatory fragmentation creates challenges for scaling and compliance


Disagreed with

– Lori Schulman

Disagreed on

Approach to AI regulation – harmonized vs. fragmented regional frameworks


Small language models trained on specific, local datasets often outperform large language models and are becoming the dominant form of AI interaction

Explanation

Mike explains that smaller, specialized AI models trained on focused datasets are becoming more popular and effective than large general-purpose models. These smaller models are more practical for specific use cases and often perform better because they are fine-tuned for particular applications rather than trying to be general-purpose tools.


Evidence

He points to ‘companies as large as JP Morgan and Chase who only use small language models in their banking sector’ and notes that ‘when you actually build small language models or specific agents that are really, really nuanced on a limited data set, they outperform large language models.’


Major discussion point

Industry Best Practices and Solutions


Topics

Digital business models | Data governance | Digital standards


Agreed with

– Tara Harris

Agreed on

Small language models and specialized AI solutions are becoming more practical and effective


Open source AI is growing faster than closed source, enabling global communities to fine-tune tools to reflect their specific needs and contexts

Explanation

Mike argues that open source AI development is outpacing proprietary systems because it allows communities worldwide to customize and adapt AI tools for their specific contexts and needs. This democratization of AI development is particularly important for underrepresented communities who want technology that reflects their experiences and can answer their specific questions.


Evidence

He states that ‘open source AI is very quickly growing much faster than closed source AI’ and explains that ‘people all over the world will be able to fine tune the tools to be able to reflect them’ because ‘people in India, in Bangladesh, in South Africa, in Nigeria, you know, in Venezuela want technology that is reflective of them and want a tool that’s able to answer their questions.’


Major discussion point

Industry Best Practices and Solutions


Topics

Digital access | Cultural diversity | Capacity development


Youth are highly conscious about decoloniality and demand technology that reflects their experiences, driving market demand for representative AI

Explanation

Mike expresses optimism about the future of AI representation based on his experience with young people. He argues that younger generations are very aware of decolonial issues and actively demand technology that represents their perspectives and experiences, creating market pressure for more inclusive AI development.


Evidence

He mentions running ‘Africa’s largest youth organization for a number of years’ and observes of engaging with youth that ‘they’re incredibly conscious. They’re incredibly focused on decoloniality. And actually the demands for better and more open technology and technology that’s reflective of them, I think is very, very high.’


Major discussion point

Future Outlook and Market Forces


Topics

Cultural diversity | Digital identities | Capacity development


Disagreed with

– Audience

Disagreed on

Optimism vs. concern about future AI representation and youth engagement


The Global South represents the main market demographically, creating business incentives for developing inclusive AI solutions

Explanation

Mike argues that despite Western cultural dominance in technology, the Global South represents the majority of the world’s population and therefore the primary market opportunity. This demographic reality creates strong business incentives for developing AI solutions that work for developing countries, even if current cultural conversations are dominated by Western perspectives.


Evidence

He notes that ‘though in our cultural conversation, looking at the world through a Western lens is normative. From a numbers perspective, the normative parts of the world are the global South’ and explains there’s ‘a massive market’ for building ‘technology for the global South’ with ‘tons of VCs, investors looking at how do we build technology for the global South.’


Major discussion point

Future Outlook and Market Forces


Topics

Digital business models | Digital access | Inclusive finance


Interdisciplinary collaboration beyond engineering is essential, requiring legal, social, and humanities expertise to create inclusive AI tools

Explanation

Mike emphasizes that creating truly inclusive and beneficial AI requires moving beyond the traditional engineering-dominated approach to AI development. He argues that meaningful progress requires bringing together experts from law, social sciences, humanities, and other disciplines to ensure AI tools bridge divides rather than exacerbate them.


Evidence

He observes that ‘for a very long time, it’s been dominated by the engineers. And I think the first stage of AI has been dominated by engineers and technicians’ but argues that ‘if we actually want to make this tool something that creates a more inclusive world and that bridges divides as opposed to exacerbating them, we’re going to need as many people around a table as possible.’


Major discussion point

Resource Needs and Collaboration


Topics

Interdisciplinary approaches | Capacity development | Human rights principles


Agreed with

– Tara Harris
– Lori Schulman

Agreed on

Multi-stakeholder and collaborative approaches are essential for AI governance


C

Christine Strutt

Speech speed

142 words per minute

Speech length

1706 words

Speech time

720 seconds

Video deepfakes tripled and voice deepfakes increased eightfold between 2022 and 2023, with 75-90% of people concerned about AI’s role in spreading misinformation

Explanation

Christine presents alarming statistics about the rapid growth of deepfake technology and public concern about AI-driven misinformation. She notes the exponential increase in both video and voice deepfakes over a single year period, alongside widespread public anxiety about AI’s role in spreading false information.


Evidence

She cites that ‘video deepfakes tripled and voice deepfakes increased eightfold between the years 2022 and 2023’ and ‘between 75 and 90 percent of people are expressly concerned about AI’s role in spreading misinformation,’ though she notes getting these statistics from ChatGPT and invites skepticism.


Major discussion point

Current Threats and Enforcement Challenges


Topics

Content policy | Cybercrime | Consumer protection


Everyday people lack protection against impersonation unlike celebrities who have defamation recourse

Explanation

Christine highlights a significant gap in legal protection where ordinary citizens have little recourse when their likeness or voice is copied or impersonated, unlike celebrities and public figures who have established legal protections through defamation laws. This creates a concerning inequality in protection against AI-generated impersonation.


Evidence

She mentions ‘coming from a country where the concept of image rights doesn’t really exist in our laws, and you only have protections for well-known famous people in terms of defamation’ and notes that ‘the everyday… man on the street or woman on the street has no protection if they get impersonated or their likeness or voice gets copied.’


Major discussion point

Current Threats and Enforcement Challenges


Topics

Human rights principles | Legal and regulatory | Privacy and data protection


L

Lori Schulman

Speech speed

162 words per minute

Speech length

1660 words

Speech time

614 seconds

The EU AI Act represents a comprehensive but complex regulatory framework that’s difficult to enforce with uncertain practical implications

Explanation

Lori describes the EU AI Act as a thorough but problematic regulatory approach that creates uncertainty for both legal practitioners and companies trying to comply. She notes that the Act is simultaneously too broad in some areas and too specific in others, making it difficult to provide clear guidance to clients or determine effective enforcement mechanisms.


Evidence

She explains that the EU AI Act is ‘very thorny. There’s a lot of questions. It’s very broad in some cases, extremely specific in other cases, and again, there’s uncertainty around whether or not the AI Act can be enforced, and if it’s enforced, are the principles that we’ve been working on for the last year the right ones.’


Major discussion point

Regulatory Approaches and Harmonization


Topics

Legal and regulatory | Data governance | Jurisdiction


Over 60 jurisdictions have introduced nearly 1,000 AI regulations, creating uncertainty about enforcement and technical feasibility

Explanation

Lori highlights the explosive growth in AI regulation worldwide, with numerous jurisdictions creating extensive regulatory frameworks. However, she questions whether regulators understand the technology well enough to create effective rules and whether these regulations can be practically enforced given current technical capabilities.


Evidence

She states ‘there have been dozens, over 60 jurisdictions that have introduced nearly if not more than 1,000 regulations’ and asks ‘do we know enough about how things work to regulate?’ noting that ‘regulations are springing up all over the place’ with enforcement challenges.


Major discussion point

Regulatory Approaches and Harmonization


Topics

Legal and regulatory | Jurisdiction | Digital standards


Agreed with

– Tara Harris
– Mike Mpanya

Agreed on

Current regulatory fragmentation creates challenges for scaling and compliance


Disagreed with

– Mike Mpanya

Disagreed on

Approach to AI regulation – harmonized vs. fragmented regional frameworks


Human oversight should be maintained for final decisions on rights granting or revocation rather than relying solely on AI systems

Explanation

Lori argues that while AI can assist in decision-making processes, human judgment should remain central to important decisions about intellectual property rights and similar matters. She emphasizes that AI systems are not yet sophisticated enough to replace human experience and intuition in complex legal and policy decisions.


Evidence

She states ‘Final decisions on granting or revocation of rights should be subject to human oversight. We don’t want to go off programming AI judges and AI gatekeepers without having also the human element of experience intuition. We’re not there yet with AI, certainly.’


Major discussion point

Policy Framework Principles


Topics

Human rights principles | Legal and regulatory | Intellectual property rights


Rights holders need lawful access to data sources for enforcement purposes, requiring transparency about AI training data origins

Explanation

Lori advocates for transparency in AI systems that allows rights holders to understand and access information about how their content or data is being used. This principle is essential for enabling proper enforcement of intellectual property rights and determining accountability when AI systems cause harm or infringe on rights.


Evidence

She explains that ‘Rights holders should be able to obtain lawful access to data for the purpose of enforcing their rights. We need to know the sources. We need to know who is the right and fair source to go to’ and connects this to distinguishing between mistakes and intentional harm.


Major discussion point

Policy Framework Principles


Topics

Intellectual property rights | Privacy and data protection | Legal and regulatory


Frameworks should balance transparency with protection of proprietary information and established intellectual property rights

Explanation

Lori emphasizes the need for AI governance frameworks that provide sufficient transparency for accountability while still protecting legitimate business interests and intellectual property rights. This balance is crucial for maintaining innovation incentives while ensuring responsible AI development and deployment.


Evidence

She states that ‘transparency, however these frameworks are developed, should be balanced. And that balance should be with the need to protect proprietary information’ and connects this to ‘innovation, patents, protecting what you develop, trademarks, protecting your brands.’


Major discussion point

Policy Framework Principles


Topics

Intellectual property rights | Legal and regulatory | Data governance


ITU could create information hubs providing primers and training for entrepreneurs scaling AI solutions across jurisdictions

Explanation

Lori suggests that international organizations like the ITU could play a valuable role in supporting AI entrepreneurs by creating centralized resources and training materials. These hubs would help smaller companies navigate the complex landscape of AI governance and scaling challenges without requiring extensive legal resources.


Evidence

She proposes that ‘perhaps the ITU, from a sustainability perspective, could form information hubs where entrepreneurs like Mike can go to a single resource to get primers, training on what needs to be thought about in terms of starting smaller and scaling upward.’


Major discussion point

Resource Needs and Collaboration


Topics

Capacity development | Digital access | Legal and regulatory


Agreed with

– Tara Harris
– Mike Mpanya

Agreed on

Need for accessible resources and training hubs for smaller companies and Global South entrepreneurs


Public-private partnerships need rethinking to address UN sustainability and funding challenges while leveraging private sector resources

Explanation

Lori acknowledges questions about UN sustainability and suggests that new models of public-private partnership may be necessary. She argues that while traditional government funding may be limited, private sector resources could help address these challenges if the appropriate frameworks for engagement can be developed.


Evidence

She notes ‘there’s a lot of questions about’ UN sustainability and explains ‘some of the financial resources that have been dependent upon governments may not be there right now. But they could be in the private sector. Some would argue they are. And so we have to get realistic about how resources flow.’


Major discussion point

Resource Needs and Collaboration


Topics

Sustainable development | Legal and regulatory | Digital business models


Multi-stakeholder and multi-sectoral approaches are essential since no single solution can address AI safety and trust challenges

Explanation

Lori emphasizes that the complexity of AI governance requires collaboration across different sectors and stakeholder groups. She argues that no single entity, whether government, private sector, or civil society, has all the answers needed to address AI safety and trust issues effectively.


Evidence

She concludes that ‘there’s no single way to solve the question that AI poses in terms of ensuring safety and trust. So it has to be multi-sectorial and multi-stakeholder based’ and emphasizes this should be ‘a given.’


Major discussion point

Future Outlook and Market Forces


Topics

Legal and regulatory | Human rights principles | Interdisciplinary approaches


Agreed with

– Tara Harris
– Mike Mpanya

Agreed on

Multi-stakeholder and collaborative approaches are essential for AI governance


A

Audience

Speech speed

144 words per minute

Speech length

199 words

Speech time

82 seconds

There is a need to motivate AI-native generations to engage with data decolonization and bias issues when they may not realize the gravity of mainstream data sources

Explanation

The audience member expresses concern about how to inspire younger generations who have grown up with AI and the internet to understand and address the biased nature of current AI data sources. They worry that these generations may not fully grasp how increasingly mainstream and biased the sources of data that provide AI answers currently are.


Evidence

The speaker mentions being concerned about ‘how do we motivate a generation that may have grown up only with AI? People who are maybe a decade even younger than me who’ve never lived without the internet’ and asks how to inspire engagement when ‘maybe they don’t realize the gravity of how increasingly mainstream the sources of data that give answers currently are.’


Major discussion point

Future Outlook and Market Forces


Topics

Online education | Cultural diversity | Capacity development


Disagreed with

– Mike Mpanya

Disagreed on

Optimism vs. concern about future AI representation and youth engagement


Agreements

Agreement points

Need for accessible resources and training hubs for smaller companies and Global South entrepreneurs

Speakers

– Tara Harris
– Lori Schulman
– Mike Mpanya

Arguments

Smaller companies and Global South entrepreneurs need accessible resources and training hubs for AI governance and harmful content removal


ITU could create information hubs providing primers and training for entrepreneurs scaling AI solutions across jurisdictions


Interdisciplinary collaboration beyond engineering is essential, requiring legal, social, and humanities expertise to create inclusive AI tools


Summary

All speakers agree that smaller companies and entrepreneurs, particularly in the Global South, need better access to resources, training, and support for AI governance and scaling across jurisdictions. They advocate for centralized hubs that provide practical guidance.


Topics

Capacity development | Digital access | Legal and regulatory


Multi-stakeholder and collaborative approaches are essential for AI governance

Speakers

– Tara Harris
– Lori Schulman
– Mike Mpanya

Arguments

Companies should adopt global ethical AI policies based on established frameworks like OECD principles to ensure responsible development


Multi-stakeholder and multi-sectoral approaches are essential since no single solution can address AI safety and trust challenges


Interdisciplinary collaboration beyond engineering is essential, requiring legal, social, and humanities expertise to create inclusive AI tools


Summary

There is strong consensus that AI governance requires collaboration across multiple stakeholders, sectors, and disciplines. No single entity or approach can adequately address the complex challenges of AI safety and trust.


Topics

Legal and regulatory | Human rights principles | Interdisciplinary approaches


Current regulatory fragmentation creates challenges for scaling and compliance

Speakers

– Tara Harris
– Mike Mpanya
– Lori Schulman

Arguments

Voluntary adoption of global policies based on OECD AI principles can help establish norms and facilitate scaling across regions


Fragmented regional regulation favors big tech over small companies because only large corporations can afford legal compliance across multiple jurisdictions


Over 60 jurisdictions have introduced nearly 1,000 AI regulations, creating uncertainty about enforcement and technical feasibility


Summary

All speakers acknowledge that the current fragmented regulatory landscape creates significant challenges for companies trying to scale AI solutions across jurisdictions, with particular disadvantages for smaller companies.


Topics

Legal and regulatory | Jurisdiction | Digital standards


Small language models and specialized AI solutions are becoming more practical and effective

Speakers

– Tara Harris
– Mike Mpanya

Arguments

Companies should adopt global ethical AI policies based on established frameworks like OECD principles to ensure responsible development


Small language models trained on specific, local datasets often outperform large language models and are becoming the dominant form of AI interaction


Summary

Both speakers agree that smaller, specialized AI models trained on specific datasets are becoming more practical and often outperform large general-purpose models, particularly for specific use cases and local contexts.


Topics

Digital business models | Data governance | Digital standards


Similar viewpoints

Both speakers highlight the growing threat of AI-generated misinformation, with Mike focusing on systemic bias in training data and Christine presenting statistics on the rapid growth of deepfake technology and public concern.

Speakers

– Mike Mpanya
– Christine Strutt

Arguments

Large language models create widespread misinformation by being trained on biased internet data that reflects most common rather than most correct information


Video deepfakes tripled and voice deepfakes increased eightfold between 2022-2023, with 75-90% of people concerned about AI’s role in spreading misinformation


Topics

Content policy | Cybercrime | Consumer protection


Both speakers recognize that existing legal frameworks are inadequate for addressing AI-related threats and enforcement challenges, requiring new approaches and greater transparency for rights holders.

Speakers

– Tara Harris
– Lori Schulman

Arguments

Current IP laws inadequately cover deepfakes and executive impersonation, requiring creative multi-jurisdictional enforcement approaches


Rights holders need lawful access to data sources for enforcement purposes, requiring transparency about AI training data origins


Topics

Intellectual property rights | Legal and regulatory | Privacy and data protection


Both speakers emphasize the need for established standards and human oversight in AI development, with Mike focusing on data quality standards and Lori on maintaining human judgment in decision-making processes.

Speakers

– Mike Mpanya
– Lori Schulman

Arguments

High-quality, representative datasets tested for bias should become standard practice in AI development, similar to engineering codes of conduct


Human oversight should be maintained for final decisions on rights granting or revocation rather than relying solely on AI systems


Topics

Data governance | Human rights principles | Legal and regulatory


Unexpected consensus

Optimism about youth engagement and market forces driving AI decolonization

Speakers

– Mike Mpanya
– Audience

Arguments

Youth are highly conscious about decoloniality and demand technology that reflects their experiences, driving market demand for representative AI


There is a need to motivate AI-native generations to engage with data decolonization and bias issues when they may not realize the gravity of mainstream data sources


Explanation

While the audience member expressed concern about motivating AI-native generations to understand bias issues, Mike responded with unexpected optimism, arguing that young people are actually highly conscious about decoloniality and actively demanding representative technology. This creates an interesting tension between concern and optimism about youth engagement.


Topics

Cultural diversity | Digital identities | Capacity development


Agreement on the inadequacy of current enforcement mechanisms despite different professional backgrounds

Speakers

– Tara Harris
– Lori Schulman
– Christine Strutt

Arguments

Current IP laws inadequately cover deepfakes and executive impersonation, requiring creative multi-jurisdictional enforcement approaches


The EU AI Act represents a comprehensive but complex regulatory framework that’s difficult to enforce with uncertain practical implications


Everyday people lack protection against impersonation unlike celebrities who have defamation recourse


Explanation

Despite representing different sectors (corporate IP, policy advocacy, and legal practice), all three speakers converge on the view that current legal and enforcement mechanisms are inadequate for addressing AI-related threats. This consensus across different professional perspectives strengthens the argument for systemic reform.


Topics

Legal and regulatory | Intellectual property rights | Human rights principles


Overall assessment

Summary

The speakers demonstrate strong consensus on key structural issues: the need for better resources and support for smaller companies, the importance of multi-stakeholder collaboration, the challenges of regulatory fragmentation, and the inadequacy of current enforcement mechanisms. They also agree on technical trends toward smaller, specialized AI models.


Consensus level

High level of consensus on systemic challenges and solutions, with speakers from different sectors (corporate, policy, legal, entrepreneurial) converging on similar conclusions. This suggests these issues are fundamental rather than sector-specific, strengthening the case for coordinated action on AI governance, resource sharing, and regulatory harmonization.


Differences

Different viewpoints

Approach to AI regulation – harmonized vs. fragmented regional frameworks

Speakers

– Mike Mpanya
– Lori Schulman

Arguments

Fragmented regional regulation favors big tech over small companies because only large corporations can afford legal compliance across multiple jurisdictions


Over 60 jurisdictions have introduced nearly 1,000 AI regulations, creating uncertainty about enforcement and technical feasibility


Summary

Mike strongly advocates for harmonized regulation, arguing that fragmented approaches hurt small companies and favor big tech, while Lori acknowledges the regulatory fragmentation but suggests it may reflect an inevitable trend toward multisectorial thinking and regional approaches, one she expects to strengthen rather than fade.


Topics

Legal and regulatory | Jurisdiction | Digital standards


Optimism vs. concern about future AI representation and youth engagement

Speakers

– Mike Mpanya
– Audience

Arguments

Youth are highly conscious about decoloniality and demand technology that reflects their experiences, driving market demand for representative AI


There is a need to motivate AI-native generations to engage with data decolonization and bias issues when they may not realize the gravity of mainstream data sources


Summary

Mike expresses strong optimism about youth consciousness and market forces driving decolonized AI, while the audience member expresses concern about whether AI-native generations understand the gravity of biased data sources and need motivation to engage with these issues.


Topics

Cultural diversity | Capacity development | Online education


Unexpected differences

Effectiveness of current regulatory trends

Speakers

– Mike Mpanya
– Lori Schulman

Arguments

Fragmented regional regulation favors big tech over small companies because only large corporations can afford legal compliance across multiple jurisdictions


Public-private partnerships need rethinking to address UN sustainability and funding challenges while leveraging private sector resources


Explanation

Unexpectedly, Mike and Lori have different perspectives on regulatory fragmentation: Mike sees it as problematic for innovation and competition, while Lori views it as potentially inevitable and suggests adapting through new partnership models. This disagreement is unexpected because both are concerned with supporting smaller players, yet they hold opposite views on whether fragmented regulation helps or hurts this goal.


Topics

Legal and regulatory | Digital business models | Sustainable development


Overall assessment

Summary

The speakers showed remarkable consensus on most major issues, with disagreements primarily centered on regulatory approaches and optimism levels about future trends. The main areas of disagreement were: 1) whether harmonized or fragmented regulation is preferable, 2) the level of optimism about youth engagement with AI bias issues, and 3) different emphases on implementation approaches for supporting smaller companies.


Disagreement level

Low to moderate disagreement level. The speakers fundamentally agreed on the problems (AI bias, need for better data, support for smaller companies) but differed on solutions and timelines. These disagreements are constructive rather than fundamental, suggesting different strategic approaches rather than conflicting values. The implications are positive: the disagreements highlight different valid pathways forward rather than irreconcilable differences, which could lead to more comprehensive solutions that incorporate multiple approaches.



Takeaways

Key takeaways

Misinformation (unintentional) and disinformation (intentional) both pose significant threats, with AI-generated deepfakes and voice cloning being used increasingly for fraud and impersonation


Large language models inherently contain bias toward Global North data, creating systemic misinformation for Global South applications in critical areas like healthcare and finance


Current intellectual property laws are inadequate for addressing AI-generated impersonation and deepfakes, requiring creative multi-jurisdictional enforcement approaches


Fragmented regional AI regulation favors large tech companies over smaller innovators who cannot afford compliance across multiple jurisdictions


Small language models trained on specific, local datasets often outperform large language models and represent the future of AI interaction


Open source AI development is growing faster than closed source, enabling communities to create more representative and inclusive tools


Multi-stakeholder collaboration involving technical, legal, social, and humanities expertise is essential for creating inclusive AI solutions


High-quality, bias-tested datasets should become standard practice in AI development, similar to engineering codes of conduct


Resolutions and action items

ITU should consider creating information hubs with primers and training resources for entrepreneurs scaling AI solutions across jurisdictions


Companies should voluntarily adopt global ethical AI policies based on established frameworks like OECD principles


Industry should develop standardized requirements for testing data integrity and bias before using datasets to train AI models


More resources and education should be provided to smaller companies and Global South entrepreneurs on AI governance and harmful content removal


Rights holders should be granted lawful access to data sources for enforcement purposes, requiring greater transparency about AI training data origins


Unresolved issues

How to provide legal protection for everyday people against AI impersonation when current laws only protect celebrities and public figures


Whether to improve biased datasets through synthetic data generation or build entirely new datasets from scratch


How to balance transparency requirements with protection of proprietary information in AI frameworks


How to motivate AI-native generations to seek diverse information sources rather than relying solely on AI for verification


How to achieve regulatory harmonization when countries are increasingly pursuing data sovereignty and regional approaches


How to ensure technical enforceability of the numerous AI regulations being introduced globally


How to restructure UN funding models and public-private partnerships to address sustainability challenges


Suggested compromises

Adopt voluntary global AI governance frameworks based on OECD principles while allowing regional adaptation for local needs


Balance transparency in AI frameworks with protection of proprietary information and established intellectual property rights


Use multi-stakeholder approaches that include both governmental and private sector input rather than relying solely on either approach


Focus on small language models with local datasets as a middle ground between large global models and completely fragmented regional solutions


Implement human oversight for final AI decisions while allowing automated processing for initial stages


Create shared resource hubs that serve multiple stakeholders rather than developing separate systems for each organization or region


Thought provoking comments

The most widespread form of misinformation that, in my view, is going to become mainstream and where we really need to be the strictest is misinformation around large language models… what we’re seeing as an increasingly challenging problem is for generations that are AI native… where the first place they go to verify information is not a library or a search browser, as we have done historically, but AI.

Speaker

Mike Mpanya


Reason

This comment reframes the misinformation problem from intentional bad actors to systemic issues with AI training data and generational behavioral shifts. It identifies a fundamental change in how people seek information and the inherent risks when AI becomes the primary source of truth for entire generations.


Impact

This shifted the discussion from focusing on deepfakes and intentional fraud to examining the more pervasive and subtle problem of biased training data. It led to deeper exploration of data quality, representation issues, and the need for new standards in AI development.


Large language models are neural networks that were trained on the internet… their training data has inherent bias, not towards what is the most correct information, but what is the most widespread information… most large language models are trained on information from the global north, in particular, the United States and Western Europe.

Speaker

Mike Mpanya


Reason

This insight reveals a critical distinction between ‘most widespread’ versus ‘most correct’ information, exposing how AI systems perpetuate geographic and cultural biases. It demonstrates how technical architecture decisions have profound social and political implications.


Impact

This comment fundamentally changed the conversation’s scope from individual protection against fraud to systemic global inequality in AI systems. It prompted discussions about decolonizing AI, the need for representative datasets, and sparked the audience question about motivating younger generations to engage with these issues.


In a world where you have multiple fragmented legal frameworks, what ends up happening is you push out competition in terms of the tech space and the AI space, and you effectively leave the world vulnerable to a few major players with a lot of capital.

Speaker

Mike Mpanya


Reason

This observation reveals an unintended consequence of well-intentioned regulation – that fragmented compliance requirements actually benefit big tech companies while harming smaller innovators and competition. It challenges the assumption that more regulation automatically leads to better outcomes.


Impact

This comment introduced a crucial paradox that reframed the entire regulatory discussion. It led Lori to acknowledge the complexity of enforcement and Tara to emphasize the need for voluntary global standards. It shifted the conversation from ‘how to regulate’ to ‘how to regulate effectively without stifling innovation.’


Do we know enough about how things work to regulate? What we’re seeing now is a lot of regulations come into place, and then either the ability to technically enforce them, or the principle behind the enforcement isn’t syncing up with, again, the technology.

Speaker

Lori Schulman


Reason

This comment challenges the rush to regulate by questioning whether regulators understand the technology well enough to create effective rules. It highlights the disconnect between legal frameworks and technical realities, drawing from decades of experience in tech policy.


Impact

This observation validated Mike’s concerns about regulatory fragmentation and introduced historical perspective from the domain name era. It led to a more nuanced discussion about the balance between innovation and protection, and emphasized the need for multi-stakeholder collaboration rather than top-down regulation.


Some weeks I spend more time with lawyers than I do with engineers. And I don’t think that’s a great position to be in as a founder of a technology company.

Speaker

Mike Mpanya


Reason

This vivid, personal observation crystallizes the practical burden that regulatory complexity places on innovation. It transforms abstract policy discussions into a concrete illustration of how legal fragmentation affects real entrepreneurs trying to solve global problems.


Impact

This comment resonated strongly with other speakers and led to Christine’s acknowledgment that lawyers also struggle with unclear regulations. It humanized the regulatory burden and prompted Lori’s suggestion for ITU information hubs to help entrepreneurs navigate compliance more efficiently.


Even when I spend time in the US and in parts of Western Europe, there are tons of VCs, investors looking at how do we build technology for the global South? And that is because that is the main market. When you look at the demographics data… the normative parts of the world are the global South.

Speaker

Mike Mpanya


Reason

This comment flips the conventional narrative about technology development by pointing out that the Global South represents the majority market. It suggests that economic incentives, rather than just ethical considerations, will drive more inclusive AI development.


Impact

This optimistic perspective provided a counterbalance to concerns about AI bias and offered hope that market forces would naturally drive decolonization of AI. It led to discussions about small language models and open-source solutions as practical paths forward.


Overall assessment

Mike Mpanya’s contributions were particularly transformative in this discussion, consistently reframing issues from new angles and introducing systemic perspectives that other speakers hadn’t considered. His insights about AI-native generations, the paradox of regulatory fragmentation, and the economic drivers of inclusive AI development elevated the conversation from tactical concerns about fraud prevention to strategic questions about the future of global technology governance. The interplay between his entrepreneurial experience and the policy expertise of Lori and Tara created a rich dialogue that moved beyond simple problem identification to explore complex trade-offs and unintended consequences. The discussion evolved from a focus on protecting against bad actors to examining how well-intentioned systems and regulations might themselves create new forms of bias and barriers to innovation.


Follow-up questions

What’s the solution to improving distorted data quality in AI models – synthetic data or building fresh datasets?

Speaker

Christine Strutt


Explanation

This addresses a critical technical challenge in AI development where existing datasets contain inherent biases, particularly affecting global south populations and historically marginalized communities


How can we create global or regional frameworks for testing data sets for bias before they’re used in AI models?

Speaker

Mike Mpanya


Explanation

This is essential for establishing industry standards and best practices to ensure AI systems are built on representative and unbiased data, similar to engineering codes of conduct


How do we motivate AI-native generations to be engaged in addressing data bias when they may not realize the gravity of mainstream data sources?

Speaker

Nanya Sudhir (Audience member)


Explanation

This addresses the challenge of educating younger users who have grown up with AI and may not understand the limitations and biases in current AI systems


What is the appropriate role for the private sector in UN operations and funding models?

Speaker

Lori Schulman


Explanation

This relates to sustainability of international organizations and how public-private partnerships should be restructured to address funding challenges


How can small language models change the relevance and representation of datasets compared to large language models?

Speaker

Tara Harris


Explanation

This explores whether more targeted, smaller AI models using specific datasets could address bias and representation issues better than large general-purpose models


Can the ITU create information hubs where entrepreneurs can access primers and training on AI governance and scaling considerations?

Speaker

Lori Schulman (crediting Mike Mpanya)


Explanation

This addresses the practical need for centralized resources to help smaller companies and entrepreneurs navigate complex AI regulations across different jurisdictions


How can we provide resources and education for smaller companies, particularly from the Global South and Asia, on adopting basic voluntary AI governance frameworks?

Speaker

Tara Harris


Explanation

This addresses the resource gap that prevents smaller organizations from implementing proper AI governance, which could level the playing field with larger corporations


How can we ensure interdisciplinary collaboration beyond just technical expertise in AI development?

Speaker

Mike Mpanya


Explanation

This emphasizes the need to include legal, social, and humanities expertise alongside technical knowledge to create more inclusive and equitable AI systems


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.