Toward Collective Action: Roundtable on Safe & Trusted AI

20 Feb 2026 18:00h - 19:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel examined what “safe and trusted AI” means for Africa, current progress, and collaborative pathways [5-8].


Ambassador Tigo warned that AI which creates dependency, extracts data, and concentrates value abroad erodes agency and amounts to digital neocolonialism [33-36].


Professor Shock flagged short-term risks of misinformation and gendered disinformation during elections, which can erode public trust [48-55][60-64].


Dr Okolo pointed out the scarcity of Africa-specific AI incident data, citing unnoticed AI-graded exam problems in Nigeria and South Africa [72-76].


Gaffley said a survey showed three-quarters of South Africans know little about AI, leading GCG to launch courses, scholarships, and a free MOOC [141-151].


Shock emphasized that AI must reflect local languages and contexts to empower users and preserve agency [155-162].


The panel noted a lack of AI policies and talent, urging an all-in cooperative approach over competition [101-108][111-119].


Tigo identified scientists, governments, and citizens as key actors and urged capacity-building so each can evaluate, regulate, and safely use AI [173-191].


He suggested embedding safety benchmarks in procurement, creating agile oversight, and protecting data sovereignty through local alternatives and negotiation tools [241-254][255-257].


Okolo advocated for independent AI safety institutes to reduce reliance on foreign donors and tailor standards to African values [260-267].


Existing collaborations such as Masakhane, the Deep Learning Indaba, GOAI Africa, and the African Compute Initiative illustrate how shared resources boost research [215-218].


The panel concluded that trustworthy, locally relevant AI needs coordinated governance, capacity building, and inclusive policies to empower citizens and prevent neocolonial exploitation [215-218][343-351].


Key points


Major discussion points


Risk of digital neocolonialism and loss of agency – The Ambassador warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and amount to “digital neocolonialism,” even posing an existential threat to the continent [33-36].


Misinformation, disinformation and malicious AI agents – Professor Shock highlighted the current surge of election-related misinformation and targeted disinformation (often gender-based), noting that AI amplifies these attacks and that single malicious actors can now build autonomous agents to spread false content [48-55][60-64].


Low public awareness and capacity gaps – Mark Gaffley’s survey showed that roughly 75% of South Africans know very little about AI, learning mainly through informal channels, underscoring the need for widespread education, short courses, scholarships and a free MOOC to build the skills required to define and demand trustworthy AI [141-146][147-151].


Call for pan-African collaboration and shared infrastructure – Multiple panelists stressed that competition must be replaced by cooperation: building regional compute resources (the African Compute Initiative), leveraging existing grassroots groups, and creating networks across academia, civil society, government and the private sector to empower local researchers [201-204][215-218].


Policy, procurement safeguards and data sovereignty – The Ambassador and Dr. Okolo pointed out the absence of continent-wide AI policies, the need for safety benchmarks in procurement contracts, agile regulatory mechanisms, and strategies for data localisation to keep AI development under African control [101-108][110-118][241-254][259-266].


Overall purpose / goal


The session was convened to answer three interlinked questions: what “safe and trusted AI” means for Africa, what progress has already been made, and which collaborative pathways can advance AI governance, safety and capacity building on the continent [5-8].


Tone of the discussion


The conversation began with a formal, introductory tone. It quickly shifted to a concerned and urgent mood when panelists described risks such as neocolonial exploitation and misinformation [33-36][48-55]. As the dialogue progressed, the tone became constructive and hopeful, focusing on education, capacity-building initiatives, and collaborative infrastructure [141-151][201-204][215-218]. Toward the end, the tone turned pragmatic and policy-oriented, emphasizing concrete steps for procurement, regulation, and negotiation with global tech firms [101-108][241-254][259-266]. Throughout, the panel maintained a collaborative spirit, repeatedly urging collective action over competition.


Speakers

Ambassador Philip Tigo – Special Technology Envoy of the Government of Kenya; serves as special envoy on technology for the President of Kenya and provides policy perspectives on AI safety and governance. [S1][S2]


Michelle Malonza – Co-moderator of the session; affiliated with the Center of Global AI Governance (GCG) as a panelist and contributes to discussions on AI trust and capacity building. [S4]


Speaker 2 – Moderator/chair of the panel; leads the Q&A, introduces questions, and guides the flow of the discussion. [S6][S7]


Mark Gaffley – Director of Legal and Operations at the Center of Global AI Governance (GCG); speaks on public awareness, capacity building, and policy implications of AI in Africa. [S9]


Dr. Chinasa Okolo – Founder of Technicultura; Policy AI Specialist at the United Nations Office for Digital and Emerging Technologies; provides insights on AI incident databases, advocacy, and African AI governance. [S10][S11][S12]


Speaker 1 – Opening host/moderator who introduces the panel, outlines the agenda, and closes the session with logistical information. [S13][S15]


Professor Jonathan Shock – Associate Professor in the Department of Mathematics and Applied Mathematics at the University of Cape Town; Director of the UCT AI Initiative; discusses risks, misinformation, and agency in AI systems. [S16]


Audience – Members of the audience who ask questions during the Q&A segment.


Additional speakers:


Zach – Mentioned as the person who will start the first set of questions; appears to act as a co-moderator or facilitator.


Prashok – Referred to as a participant who would take a question from the audience.


John – Briefly addressed by the moderator; no further context provided.


Iman – Named at the very end as the next person to take over after the panel discussion.


Full session report: comprehensive analysis and detailed insights


The session began with the moderator introducing the African-led research team – Marie-Ira Ducunda, Gatoni, Michelle Malonza and the AI Safety South Africa initiative – and outlining the three interlinked questions that would guide the discussion: what “safe and trusted AI” means for the continent, what progress has already been made, and which collaborative pathways should be pursued [5-8][9-14][15-18]. She also gave brief housekeeping instructions, directing participants to the QR-code and Slido for live questions and noting the event schedule.


The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and asked the panel to identify undesirable results in the African context. Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent [30-32][33-36].


Professor Jonathan Shock then highlighted the most pressing short-term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria. He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender-based campaigns) and noted that AI-enabled agents now allow single malicious actors to launch large-scale, automated attacks [48-55][60-64].


Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent-specific harms. She cited concrete but under-reported cases where AI-graded examinations in Nigeria and South Africa produced erroneous scores that students could not contest, illustrating how AI failures can go unnoticed without proper monitoring [72-76].


Addressing the capacity deficit, Mark Gaffley presented findings from a public-awareness survey embedded in the South African Social Attitudes Survey, which showed that nearly three-quarters of respondents knew very little about AI and relied on informal channels such as social media for information [141-145]. In response, the Center of Global AI Governance (GCG) has launched short courses on AI ethics and human-rights implications, offered scholarships for African women, and is preparing a free MOOC that uses relatable imagery to broaden access to AI knowledge [147-151].


Professor Shock expanded on the empowerment dimension, arguing that AI must enhance agency by being understandable and culturally relevant. He stressed that models lacking local language and contextual nuance cannot truly empower users, and advocated for human-in-the-loop designs that preserve decision-making authority [155-162][226-233].


Ambassador Tigo noted that, despite the emergence of several AI strategies on the continent, most countries still lack concrete AI policies and the technical talent to evaluate models, especially in the public sector, where AI is often equated simply with ChatGPT, a sign of low fluency [101-108][111-119]. He urged an “all-in” cooperative effort that transcends competition, arguing that fragmented attempts waste resources and undermine collective progress [201-207].


Building on this, Tigo identified three interdependent personas – scientists, governments and citizens – each requiring capacity building. Scientists need access to models for safety evaluation; governments must develop the expertise to hold multinational firms accountable; and citizens should be included in safe environments to prevent manipulation by malicious agents [176-191]. He linked these personas to the broader goal of developing indigenous models that reflect African cultures and data, thereby reducing reliance on external providers such as OpenAI, Anthropic or Gemini [186-191].


To translate these ideas into practice, Tigo advocated embedding safety benchmarks directly into procurement contracts, creating agile oversight mechanisms that can adapt to the rapid evolution of AI, and developing negotiation playbooks that give policymakers market insight and bargaining power against trillion-dollar companies [241-254][255-257]. Dr Okolo complemented this by recommending the establishment of independent AI safety institutes – akin to the US National Institute of Standards and Technology – that could certify models, test a range of products and operate without dependence on multilateral lenders or philanthropic donors [260-267].


The panel also highlighted existing collaborative infrastructure. Professor Shock cited grassroots initiatives such as Masakhane, the Deep Learning Indaba, GOAI Africa and Sasanke Biotic, and described the newly announced African Compute Initiative, which will provide a shared high-performance computing platform for researchers across the continent, exemplifying a network effect rather than competition with big-tech firms [215-218].


When discussing the deployment of AI in critical infrastructure, both Shock and Gaffley warned against a “move-fast-and-break-things” approach. They recommended human-in-the-loop, transparent systems and the preservation of analogue alternatives to avoid over-reliance on AI, especially where failures could jeopardise essential services [226-233][224-225]. Ambassador Tigo added that AI should be used to optimise development outcomes – for example, improving energy-grid efficiency – rather than being adopted for its own sake, thereby ensuring that technology serves concrete development goals [318-326].


The discussion moved to a Q&A segment. An audience member asked whether AI-generated media should be mandatorily watermarked; Professor Shock responded that watermarks are a short-term mitigation that can be circumvented by determined actors and should be complemented by broader provenance-tracking mechanisms [S1]. A second question addressed the digital divide; Mark Gaffley offered a philosophical view that the digitally excluded constitute a reservoir of creativity that should be preserved, while Ambassador Tigo stressed the need for basic connectivity, electricity and literacy before AI can be meaningfully applied, and suggested using AI to accelerate, not replace, foundational development projects such as hospitals and schools [S2]. A philosophical audience query about what socio-economic structure AI would choose was answered by Mark, who remarked that an AI-driven system would likely aim for “very efficient” outcomes and strict time-keeping [S3]. Finally, a question on how younger generations can influence AI policy prompted Dr Okolo to suggest open feedback periods, targeted research, legal analysis and the creation of informal channels where formal mechanisms are absent [S4].


In closing, the moderator thanked the panel members, invited participants to gather for a group photo, and reminded everyone of the post-event social gathering at Café Lota.


Session transcript: complete transcript of the session
Speaker 1

The first share of the research team, I believe, is here with us today, including Marie-Ira Ducunda. We have Gatoni as well, and Michelle Malonza, who will also be moderating with us today. And then we’ve got AI Safety South Africa, where we’re working on building local capacity to work on AI safety alongside evaluations research. So together, our organizations represent a growing ecosystem of African-led efforts on AI governance, safety, and capacity building. As you all must know, today we are exploring three interlinked questions. What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent, and by whom? And what are the most promising pathways for collaboration going forward?

And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on my left, who is the founder of Technicultura and a policy AI specialist at the UN Office for Digital and Emerging Technologies. And then we have Ambassador Philip Tigo, who serves as a special envoy on technology for the President of the Republic of Kenya. And then we have Professor Jonathan Shock, who is an associate professor in the Department of Mathematics and Applied Maths at UCT and the director of the UCT AI Initiative. And finally we also have Mark Gaffley, who is the director of legal and operations at the Center of Global AI Governance. Hopefully we’ll also have Dr.

Kola Ideson, who will join us in the next few minutes and who is the research director at Research ICT Africa. And in the next 47 minutes or so, we’ll spend about 30 minutes on the panel, followed by about 15 minutes for panel discussions. And then we’ll just conclude with some brief remarks to pull together the threads of what is discussed tonight. A few little housekeeping things before we start. So on the slide behind me, if you have not registered on NUMA, we’d love to stay connected and be in touch. And AI Safety South Africa and ELENA have exciting programs that you’d want to know about. So please scan this QR code on the top left of the screen.

With that link, you can leave us your contact details and also give us feedback on the event. And on the top right, you’ll see the link to Slido, which is the platform that we’ll use for Q&A. So you can just scan the code and then you’ll be redirected to a platform where you can leave your questions, and also upvote the questions that we should prioritize in the Q&A section. Okay, that’s all the points I had to share. So without further ado, let’s get into it. I’ll hand it over to you, Michelle. I believe Zach will be starting with the first couple of questions, then I’ll take over after him.

Speaker 2

Okay, thank you. So I’ll be moderating part of the session, and my colleague, Michelle, will be taking part of the questions. Afterward, we’ll progress to the Q&A. So I will start with the foundation, safe and trusted AI, which we can consider broadly as AI that delivers the outcomes we want. So I want to start with you, Ambassador, please. In the context of Africa in particular, what AI-driven outcomes would we consider undesirable?

Ambassador Philip Tigo

I think, and it’s quite interesting, I’ve been having this discussion of safety today the whole day. I think in the context of Africa, the first thing I want to be very careful about is that the African continent is not homogenous, right? So I’ll give a very specific Kenyan understanding of this, but I think it could potentially be something that is shared in the continent. I think the first part of this conversation is that, largely, if AI systems are creating a dependency rather than building capacity or capability, I think for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they’re capturing our African markets, and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then I think for me, as I said, it’s digital neocolonialism. I think that’s it. The second part, of course, is that if these continue to be built without our knowledge, wisdom, cultures, it creates an existential threat.

It’s almost a civilization extinction story. That, then, for me is not just undesirable. I think it goes beyond; it’s unacceptable. So those would be my two quick responses.

Speaker 2

Okay, thank you. So, Prof Jonathan, I will move over to you. So of the possible outcomes and risks, and some of what Ambassador Philip mentioned, what do you see as the trade-offs between short- and long-term risks? And which ones should we consider now, and which can we consider in the future?

Professor Jonathan Shock

Sure, thank you very much for the question. So I agree with Ambassador Tigo in terms of these ideas of neocolonialism, and the bias inherent in the models and the context. I think these things are all extremely important. But I think there’s something else which we have to be very aware of, which is happening right now. In fact, it happened before AI came along.

And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real risk of a complete breakdown in trust. And that’s misinformation and disinformation. We’re seeing already around times of elections within Africa, within Ghana, within South Africa, within Nigeria, that misinformation, but also disinformation, and I disambiguate those by misinformation being, it might be that people are spreading things that they just don’t know is correct, but disinformation is really targeted campaigns. And what we’re seeing is that those targeted campaigns are often gendered, that it’s often against female politicians, that technology-facilitated gender-based violence is a massive issue against politicians, but more broadly. But I think that for me, one of the real things…

is the breakdown in trust that we’re seeing in society. We’ve seen already with social media how echo chambers form. AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods and destabilize what’s happening. To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term. We can think about what might happen in the next few months, but thinking about the long term threats, people have talked about existential threats in terms of AI getting out of control. I think that’s something that’s extremely important to study, but I think that within particular contexts there are things that are real that are happening now that we have to worry about and try to mitigate.

I think that’s really important. The other thing that I think is happening at the moment that I don’t hear a lot of people within the space talk about, within the policy space maybe talk about, is the issue of agents. And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign. I think just over the last few months, we’ve seen that possibility come to light. And I think that’s a real worry and something that we need to understand. It’s not just now about the big tech firms. Of course, they have a major role to play in this. But I think now an individual actor can produce software that millions

Speaker 2

Okay, thank you. So, Dr. Chinasa, I’ll move over to you. So, given that the current development by frontier AI labs is kind of driving some of the risks we are talking about, how can Africa monitor and mitigate those risks, given that most of the existing development is outside of the context?

Dr. Chinasa Okolo

Yeah, great question. And this reminds me, I actually talked to an Alita researcher last year when I was at the Paris AI Action Summit about some work that they were interested in doing, like an AI incident database. And I think this is actually very important, because when I look at current databases, and they’re really comprehensive for the most part, but honestly, when I look up or type in Africa, for example, it reverts back to African American. And I’m based in the U.S., and that’s helpful for me to know, obviously, because I get coded as African American there, but finding this just basic information about AI harms on the continent is still very hard if you’re not tuned in.

I get stuff on Twitter that comes up all the time. There are a couple of cases where some African universities, particularly in Nigeria and also in South Africa, had issues with AI being used to automatically grade standardized exams, and students having issues with trying to rebut some of those scores that they received. And so that did not make mainstream news, probably in the countries, but not generally. And so I think that this is a really important one, so we understand how AI impacts the African continent and also communities on the continent, and then also that governments can respond accurately in crafting regulations that can serve the needs of communities and also ensure that the responsible parties are held responsible for the harms that they’re causing on different communities.

Speaker 2

Yeah, thank you. So just a kind of like follow-up on that. So you mentioned like holding kind of like responsible, kind of accountable. So like is there anything in particular like maybe our stakeholders can do in that regard, kind of like is there any short-term or like long-term effort that we can do?

Dr. Chinasa Okolo

Yeah, it’s hard to say because, you know, as you can tell by my accent, I am American. I’m also Nigerian, and so I do understand a little bit of the intricacies between both countries. In the U.S. there are a little bit more formal ways for advocacy. Like, you can actually write directly to your congressman. You can call their office. Most often you won’t get them directly, but you’ll get their staff members, and they often respond. Like, people write them for basic issues like, oh, I can’t get my passport in time, please help expedite this. And, oh, there’s this issue happening at my school, please help with this. And so, honestly, I’m not very aware of similar pathways across African countries.

But I think that this civil society advocacy, particularly grouping together, you know, forming these coalitions can have a lot of power. It’s just, again, like there are a lot of incentives in place for governments to suppress this, and we’ve seen this turn into violence, particularly against youth. And so I am aware of this, and I don’t want to recommend this so people get harmed. But I think there are ways that, you know, again, this coalition voting can be successful.

Ambassador Philip Tigo

I wanted to jump into that because you talk about policy. And I think, let’s be real, and that’s why, when Irina asked me to come to this through my colleague Stephanie, I thought it would be important, because this is a very Africa-centric discussion. I’ve been in all the global ones, I think five today. I think let’s be very clear: while we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent. So there’s already no mechanism to do this. And that’s AI in general; we’re not even talking specifically about safety. Secondly, we do not necessarily have the talent to do this in the continent.

I think that’s why what you guys are doing is important. And I say talent in the other spaces, not even in public sector. When you go into public sector, unfortunately my colleagues just think AI is ChatGPT. Let’s be honest. So there’s basically a fluency question. Safety is so far up the scale that they’re not even thinking about it. So I think for me, the sense that I kind of have in this is that it needs to be an all-in effort. And this is where my sense in the African continent is where that dichotomy between civil society and governments disappears. Because if it’s about existential risk to the continent, and I say existential risk, it’s about existential risk in terms of harms to society.

I’m not talking about… I mean, a few scientists like us can talk about models and harms to the model. The chances of an AI pressing a nuclear button in Africa, come on. And that’s my point. So we have to even redefine what existential risk for Africa on AI means. And I think this is where we really have to break from that. And we can have a few of our scientists doing the existential risk models, models running rogue and science fiction. I think that’s important work. But the risk that he’s mentioned is real, right? Threats to democracy, threats to harmony of society are real risks. And then this is how you begin to build guardrails from a point of understanding of what is really relevant to the African continent.

Otherwise, we get lost in the other conversation: chances of happening, nil, but important. But these are the risks: chances of happening, high, but less prioritized. Good data, folks.

Speaker 2

Okay. Thank you so much for that contribution. if you have a question, please use the QR codes and type your question. We’ll come back to that, but I’ll be moving over to my colleagues who will take over with the rest of the question. Michelle.

Michelle Malonza

Just to join the conversation that you’re already having, I think so far we’ve talked a lot about what we don’t want and the kind of risks that Africa should be focusing on versus the rest of the world, and now I’d like us to talk about how we define what we want the systems to look like and what trustful systems would look like. And so I’d like to start with Mark talking about what his work at GCG has revealed so far about what Africa should think about what they want from these systems.

Mark Gaffley

Cool. Thank you. Thank you for the question, Michelle, and obviously for the opportunity to speak this afternoon. I see the answer to this question as twofold. So first is the answer that addresses what we actually want to define from what we want from AI. As a high-level response, I would describe that as the desires of African citizens on the ground, especially our local communities and the marginalized and vulnerable amongst them who don’t necessarily have a voice or a seat at the decision-making table. The second response is the more likely scenario, in my view, that we remain subject to the whims, benevolent or otherwise, of those practitioners who are able to scale the most useful and not necessarily the most beneficial AI tools for our people.

Irrespective of whether those practitioners are based within national borders, across the broader continent or in foreign jurisdictions around the world. When I consider these responses in the context of GCG’s work, two things come to mind. The first is the results from a public awareness and perceptions of AI survey we released in September last year. The survey was a module in the annual South African Social Attitudes Survey, which is nationalized. The survey revealed that nearly 75% of respondents knew very little about AI. And for those who did know about AI, most of their learning was through informal and unstructured channels, including through social media and television. These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists.

This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want. On this, GCG’s other work I’d like to highlight are the various short courses we run on ethical and human rights implications of artificial intelligence through accredited universities in South Africa. These courses attract interest from all over the world, and for each iteration we’ve received applications in the thousands. As part of these offerings, we are also prioritizing awarding scholarships, scholarships for African women as part of our Women in Focus series. Why this work is important to the question is that the courses, even if incrementally, are slowly moving the needle on the figure I mentioned earlier, equipping participants with the skills to pass on knowledge to their peers about the many benefits and risks related to AI technologies.

Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently that will offer our course content freely to the public using relatable caricatures and imagery, which I hope will further drive this objective of equipping Africans to understand and make their own informed decisions about what AI technologies to allow into their lives and what outcomes they want those tools to achieve for them.

Michelle Malonza

Thank you. I think that’s really interesting because it ties right to what the Ambassador was saying: that in order to know what you want as Africans, you have to know that the technology exists, what AI technologies exist, and what exact technology we are talking about when we say AI. So maybe I should let the rest of the panel… I don’t know if I… say what they think Africans want, and then we’ll go into

Professor Jonathan Shock

So I think, you know, I don’t want to speak to what an individual person wants, but I think that what we all want is empowerment. We all want agency. And so there is a possibility that we can think about AI as a way to give agency, and I spoke about agents before, and I mean agency in a slightly different sense, for people to understand the possibilities that they have. And to increase that range of possibilities so that people can make choices. And so knowing that there is something out there that can give you, empower you, is great, but it has to be able to empower you within a context. And, you know, we’ve spoken many times about the, you know, the lack of context, local context within these models, the lack of language, contextual language information.

And until those things have been fixed, it's not actually going to empower people. So to me, it has to be about making sure that the model understands local context, and then making sure that it's actually giving people agency to make decisions. I think that's really important.

Dr. Chinasa Okolo

Awesome. Yeah, so I'll try to be a little bit nuanced about this because, again, I'm Nigerian-American; I grew up right in the middle of the United States, though I have been fortunate to travel across the continent very frequently over the past couple of years. Going off of what Jonathan said, I see an opportunity, one, to contribute to equitable governance structures and mechanisms, but also an opportunity to actually participate equitably in AI development more broadly. That's what I see a lot of young Africans wanting, particularly because the epidemic of underemployment is very stark on the continent, and also because these systems have the power to change the world and have changed the world already.

Our conversations on AI safety can also provide new avenues for African researchers, scientists, and engineers to contribute new research that we're still missing. Particularly when we consider the U.S. context, or even the prominent AI safety and fairness conferences, a lot of the work on bias is rooted in race, for example, which is a Western construct. If we understand how AI impacts people from different castes, tribes, religions, genders, and the intersections of all of these, I think this will, one, advance the field as a whole, but also provide more opportunities for these governance structures that are needed within African contexts.

Ambassador Philip Tigo

Sorry. No, I think a couple of things. I take a persona approach because, again, Africa and its communities are a little bit different, and I'll take the three important personas. One is basically our scientists. Our scientists need access, because you cannot talk about benchmarks and evaluations around safety if you don't have access to these models, and we are the ones who bear the brunt of these models. I've given an example: Kenya is among the biggest users of ChatGPT, and the top use of ChatGPT there is emotional advice. That's real data. So you're asking a model for emotional advice that doesn't understand your context. What does that mean? There has to be a way for our scientists to have access to these models, which also means capacity for them to be able to evaluate those models. The second persona is governments: working with scientists, governments can hold those companies to account for the potential adverse harms they can do to our society and community. That's where I see them working hand in hand. But what governments also need is capacity, because you're talking to five-trillion-dollar companies and your GDP is like a hundred billion dollars.

So I think this is where there has to be collaboration, because these companies understand market pressure, not necessarily regulatory pressure, so there has to be a nuanced approach to how you do that. The third persona, of course, is the citizenry. The citizenry, in my sense, just needs to be included, and part of inclusivity is the safety work: you must be included in a safe environment so that you're not left to the whims of agents or folks who can manipulate the crowd. So I look at those three personas. But the underlying infrastructure in this is basically looking at how we ensure that, as a collective on the continent, we can build our own models.

And I think that's important, because part of agency is human agency, but part of the challenge to agency is over-reliance on external models. The continent understands local context, understands culture, and the capability to build our own models that are nuanced to our own context is a good option. Then you are not left only with Gemini, Qwen, OpenAI, Anthropic, and so on. What choice do we have right now if we don't have an alternative, potentially built from open source?

Michelle Malonza

Thank you very much for all your responses. I really appreciate talking about how capacity and access are the ways we are going to achieve agency and empowerment. That brings me to the next question, which all of you have touched on: what is going to make it possible for us to strengthen cooperation and engagement across the region in Africa? Because that's a key part of making the access possible to begin with. I can see the Ambassador has immediate thoughts, so let's start with you, since you are very expressive, and then go to Dr. Chinasa and the rest of the panel.

Ambassador Philip Tigo

Stop competing. I'm sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It's not about who's going to build the best data centers, who's going to do X or Y. This is a collective, all-in effort. For me, that's the biggest shift we need to make: it's not about competition, it's about cooperation and collaboration. That's what will make us work together. And I'm saying this out of frustration because I see it. It's a waste of money. But also, it's just a waste.

Dr. Chinasa Okolo

Alrighty. So I know in the draft of this I mentioned I'd talk about some of the work at the UN; I'm speaking in my personal capacity too. We just recently launched the international independent scientific panel on AI. I read nearly every application for that, and I was very happy to see African representation on the panel: we have eight members, I believe, and I was thinking we would get around four or five at most. So it's really good to see that our voices are valued. More broadly, there are other efforts to complement the panel, including the Africa AI Council, and I look forward to seeing how this plays into the work the UN is doing, and also some of the other initiatives around the global AI dialogues, which play directly into the panel's work as well. Not to say that this inclusion alone will lead to actual change; sometimes it honestly doesn't. But I think the UN is a little bit special, and in some cases we've seen how the work done with the High-Level Advisory Body on AI really led to increased conversations and discourse on this idea of international AI cooperation.

And so I hope to see African governments do this kind of work individually. I had the chance to serve on the AU's Continental AI Strategy; I did this work when I was a PhD student, like four years ago, and I also served as a drafting member on Nigeria's National AI Strategy, all the way from the US. I think there are many opportunities for African countries, and for those throughout the global majority, to build their own initiatives for AI cooperation.

Professor Jonathan Shock

Yeah, I'd like to follow up, in particular, on Ambassador Tigo's point about the need to not be competing with each other. Within Africa, there are already really good examples of people working together: you've got Masakhane, you've got the Deep Learning Indaba, you've got GOAI Africa, you've got SisonkeBiotik. All of these grassroots organizations are already doing amazing work with limited resources; you then add some resources and you really superpower what people can do. At the University of Cape Town, the African Compute Initiative was announced today. The idea is that we happen to have a cluster, an HPC, a high-performance computing centre, currently with a lot of capacity, that is to say a lot of space. We are setting up an African Compute Initiative that researchers around Africa are going to be able to use: we're setting up a cloud platform, we're bringing in GPUs, state-of-the-art compute that's going to allow people at other universities to do their research. This is not a competition; this is really about how one set of people empowers another set of people. There is no competing with a trillion-dollar company, but what we do have is a network effect, and that's really powerful in and of itself. So we need to be working with academia, with civil society, with government, with the private sector; all of these groupings need to work together.

Michelle Malonza

Alright, so I'll do the final question before we get into the Q&A. You've all touched upon how you think engagement and policy should work around the continent, moving from strategies to policy. So if Africa is able to come up with its own systems, or find a way to gain leverage over the companies to localize the systems they deploy on the continent, what considerations should African governments be making when thinking about integrating AI into critical infrastructure? That somehow seems like an inevitability.

I can start with Mark, since he's the one who didn't answer in the last round of questions: the price for staying silent.

Mark Gaffley

The first consideration is whether it's actually necessary for the problem we're trying to solve for. Sorry, John. So, yeah, just to ask if it is actually necessary. And the other thing, recognising access and inclusion issues, is just to keep the alternatives open. So if you are going to digitise something, or use AI tools to solve a particular problem, make sure that those who can't access them still have their analogue approaches to doing things. I did mention to someone earlier that I was the against-tech person in the room, so I think that's why I'm pushing the analogue way.

Professor Jonathan Shock

Cool. So I think we have to be very, very careful here of the Silicon Valley approach of move fast and break things. If you try to take some sort of infrastructure system, be it a government department, and try to AI-ify it, there are massive, massive risks. That's not to say that we shouldn't be thinking about this and doing it very carefully. But we have to understand, and again I go back to agency, the agency that we remove when we get an AI system to make the decisions for us. I think there are really good ways to do this with a human in the loop, where we can have transparent systems, so we can understand what the decision-making process is.

But if we're simply going to a company selling a product, saying they can streamline your service, then we're really beholden to that company. And if it turns out that that's not the right solution, trying to undo it when you've lost the skills puts you in a really difficult position. So I think we need to move at a reasonable pace, but not break too many things along the way. That's a real risk.

Ambassador Philip Tigo

Well, I probably have an advantage because I'm in government, so we face a lot of these things. Partly it's to understand the challenge. The challenge, remember, is that especially for the African continent, with a median age of 19.7, a very young population is already engaging with these tools, while government is engaging with 19th-century technology, so there's a gap. There's already sufficient pressure for governments to engage with these new tools, so there's really not much room to make the rational choice of not using these new technologies, because you have a population that is already using them. So what does that leave you as options? For me, it means you need to start creating some form of guardrails even before you acquire the tools.

Procurement is one tool. We can write a lot of these rules into the procurement documents, and I don't think many of us are doing that. Include safety benchmarks in there; a lot of these companies don't want to be audited, so just get that in there, because they want your business. I have a sense that's the sweet spot, the point of decision-making: at that time, everybody wants to talk to you, and that's where African countries lose the game. The second part, of course, is that because the technology changes very quickly, I have a sense we need to continuously have agile mechanisms that keep pushing the foundational questions, because this is not one technology; it's not a laptop that you're going to buy and use for three years.

It's going to change in the next two, three months, so I think we need that. Third is contingency planning: this single-sourcing business should not work; we need options. And the fourth consideration for me is to always keep the local option open, because of data localization and sovereignty. It's about sovereignty. Part of it is that we don't do that, and that's where we also need to make strategic decisions separating the private sector, from global big tech to local private sector companies to small and medium enterprises. We need to do that deliberately, because then at least the local companies can be managed by domestic law.

With these other ones, you probably have to go to Silicon Valley to litigate. So for me, and it will keep on evolving, these are the things I'm seeing right now as potential options. But it still all boils down to the capacity of the decision maker or the policy maker to digest these insights. Where we lose is negotiations. Part of what my team continuously does, and maybe this is something you should consider, is to think about playbooks, guidebooks, negotiation tools, so that when they are negotiating, at least they have some knowledge as their power to engage. Then it's no longer just the hundred billion against the five trillion: when you have knowledge and market insights, you're actually in a better position to engage.

Negotiate.

Dr. Chinasa Okolo

Yeah, so I definitely agree with my co-panelists on a lot of the topics brought up. On the first one, particularly around the need for AI as an actual solution: governments really need to evaluate whether simple solutions, not based on AI or deep learning, would actually suffice. And then also around the need for guidelines on procurement. I've been doing some work with the World Bank, and we've seen in our work that a lot of African governments, across the majority of regions, are really being bombarded by suppliers to buy solutions. A lot of them, I think, are honestly unnecessary, and a lot of governments don't have the capacity to evaluate these and make decisions, let's say, transparently in-house.

And I think the key part of actually building that capacity will be establishing AI safety institutes, or whatever name governments want to call them. Within the United States, this is embedded in the National Institute of Standards and Technology, which tests more than technology: it's food, lotions, cosmetics, all that stuff too. This may not look the same across Africa, across Southeast Asia, South Asia, et cetera, but it really needs to be done, again just to have this independent capacity, and also to not be reliant on multilateral lenders, foreign organizations, or even philanthropic organizations that may be funding or providing solutions that may not be aligned with African needs and values, or may not even be necessary in the first place.

Michelle Malonza

Thank you so much for your very thoughtful responses. To figure out what we don't want, we think about what specifically African countries consider risky, treating the short term as the priority; and figuring out what we do want rests on our capacity, our autonomy, to make that decision and then to localize in that context. And in terms of how to collaborate across the board, the sense I'm getting across the panel is that we need to move away from competition so that we can have leverage with the big companies.

So thank you so much for your detailed and thoughtful responses. I'll hand it over to Zach to get us into the Q&A session.

Speaker 2

Okay, thank you. So we're going to take a few questions, and maybe I'll also take one or two questions from the audience. One of the questions here is broad, so maybe, Professor Shock, I'll hand it over to you for 30 seconds. The question is: to improve inclusivity and trust, what should an ideal AI model optimize for?

Professor Jonathan Shock

Gosh, that's a difficult question. I think part of it has to be about transparency: how is a decision being made? People talk about the black-box problem of AI systems. In fact, this isn't quite the right way to look at these systems: you can look at exactly what's happening inside the model, you can look at all the weights of the matrices, but it's really difficult to tell what's actually happening in there. So building transparent systems that are understandable, I think, is one way to build trust.

Speaker 2

Okay, thank you for that. There is one question here also about what the most significant misconceptions about the current state of AI are. Maybe Dr. Chinasa.

Dr. Chinasa Okolo

I'll probably be redundant with some of the earlier topics we discussed on the panel. But again, it's the idea that AI is a panacea, or a band-aid, or a solution for a lot of things, particularly development challenges. We see African governments doubling down on adopting and procuring these AI solutions when, honestly, building hospitals, paying teachers, and installing and sustaining reliable electrical grids would solve the problems better (maybe not easier, but better), and with less opportunity for funds to be diverted or wasted on a non-functional solution. So that's one thing; I think my fellow panelists would probably have other good comments as well.

Speaker 2

Alright, is there any question from the audience? Maybe we can take one question. Okay, I will take one, but very brief.

Audience

First of all, thank you for being digitally inclusive for those of us who couldn't use the QR code. My question is to Professor Shock. You talked about misinformation and disinformation, so maybe I can work my way back a little bit. I think in some ways we need to start talking about disincentivizing some types of AI, and this is what I mean. Usually when we talk about disinformation, we think about it from the user's perspective, right? But consider the tools themselves: I don't see why there's this massification of AI tools for media creation; it's not very necessary. There's a running joke about someone saying, well, I was hoping AI would be created to do some of the hard work I do at home, like laundry or housekeeping, so I'd have more time to actually do media and entertainment, but it's the reverse, right?

So we're having AI do all of that, and we're not really making as much progress on robotics and so on, relatively compared to LLMs. So my question is: should we have some sort of, say, mandatory watermark for AI-generated media? In that case, if I see some video or songs or pictures, I know it's AI-generated, and in some ways I'm naturally not inclined to believe it. Is that a workable solution?

Professor Jonathan Shock

I think the cat is out of the bag. It's great if some organizations do put watermarks on; indeed, within China and within some of the other companies, they are beginning to do that. But because we now have open-source models, and the open-source models are getting very, very good, if a malicious actor wants to set out a disinformation campaign, they're just going to choose the one that doesn't have the watermarks. One could, for instance, have media where there are requirements to carry information about whether or not it came from an AI system, but when there are choices between watermarked and non-watermarked output, the malicious actor is just going to choose the one that subverts the system.

So I think that it may be a stopgap, but I think it’s a very short one.

Speaker 2

Okay, thank you. In 20 seconds.

Audience

So this is to the panelists. About 64% of the continent of Africa doesn't have access to the internet and so is digitally excluded. My question is: how do we make sure that our advancements with AI are not widening the digital divide? I think it's a really big problem. As we move forward with AI, there are people who don't have access to the internet, electricity, and other things. So how do we ensure that we're also thinking about those digitally excluded individuals? Thank you.

Mark Gaffley

This is a very abstract response, but it's something I've been working on, so I'll float it here. It's this idea of the digitally excluded as the last vestiges of creativity left on the planet. If you play it out over time, those who don't have access avoid what I said earlier about cognitive decline, and become the ones we eventually come to for creative ideas and independent decision-making abilities. So, in a way, just to flip it: perhaps this focus on not having access, on being excluded, is potentially, way down the line, how you end up actually included, and in fact relied upon, because you kept your cognitive abilities intact.

So yeah, a bit out there, but I thought I'd float it.

Ambassador Philip Tigo

In that particular instance, this is where I think AI becomes interesting, and part of what I always speak about is the unfinished business that African governments need to do. It's about connectivity, it's about electricity, it's about literacy, the kind of basic infrastructure we've not built. So for the African continent, this is where you start to use AI to optimise development: AI accelerates development. If you look at what we're doing in Kenya, at least, that is what we're doing. For example, we've realized with artificial intelligence that a lot of our energy optimization was wrong, because we were going for last-mile electricity connectivity.

But now, with AI, we're realizing with the World Bank that you could do this a little bit differently. All I'm saying is that we can leverage this technology on those non-sensitive capabilities to actually accelerate development, so that it's not AI for AI's sake. So, African governments: don't get AI for chat. Get AI for something else that drives development.

Speaker 2

Alright, thank you. We only have one minute for questions, so I will take the last two questions together, and our panelists will answer briefly. One question here, one question there.

Audience

My question is a little philosophical. We talked about how AI right now is in a race, as happens whenever a new technology comes along: each country and each company is trying to one-up the other. Uniquely, though, AI might just be the technology that catches up with itself; there's that possibility, right? There are so many economic structures out there, like socialism and capitalism, which focus on optimizing certain things, like engagement on social media, for example. So if AI had to decide on an ideal structure for humanity, I would just like your opinions on that.

Speaker 2

Okay, thank you. We’ll take one question here.

Audience

Yeah, okay, thank you. I'm going to consider two things: policy, and our generation at large. So I wanted to ask, considering the zeal we have for learning AI, is the next generation safer as well? And considering what you're saying, that we need policy: should we just say we need policy? Because we can catch AI where it actually is now in Africa, considering it hasn't spread that far yet, and put policy around who is going to learn this and who is going to know this about AI.

Speaker 2

Okay, thank you. So I think these two questions will be split across our panelists, so who wants to go first?

Dr. Chinasa Okolo

All righty. Yeah, I'll take the policy one. I'm very hopeful for African governments in particular when it comes to AI policy. There is, let's say, a big learning curve, or actually an implementation curve, from the 20 or so strategies and two draft policy frameworks. And there is an opportunity for the younger generation to be involved. One obvious way is providing feedback on different strategies; a couple of countries have had open feedback periods, though most of them haven't, unfortunately. But despite that, doing research and legal analysis and providing those findings openly can actually drive a lot of change. If there happen to be formal mechanisms to provide this feedback, obviously take advantage of them.

If not, create your own avenues or pathways to do so. And I'll let my fellow panelists speak.

Speaker 2

Mark, do you want to add something? All right.

Mark Gaffley

Very briefly. Okay. Well, that would be my point, is I think if AI were to structure humanity, we’d be very efficient and we’d keep to time.

Speaker 2

All right. Thank you so much for your contribution. We'll hand it over to Iman so that she can wrap up.

Speaker 1

Thank you so much. I'll be super brief. Well, I'll first start by thanking our incredible panel. Thanks a lot for your insights and energy and time. Thanks to you all for coming. It's been a long few days, I imagine, being here at the conference. There are such great people to talk to and learn from. Before we wrap up, we'd love to take a picture with the panel. So I'll invite you to just step forward here so that we can grab a picture together. And as they do that, for everyone, we have a social happening at 7:30 today at Cafe Lota. That is in a museum close by. You could just, like, Google it. And we'd love to see you there.

We're going to be heading there at 7:30. Thanks, guys. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (8)
Confirmed (high confidence)

“The moderator, Michelle Malonza, introduced the African‑led research team – Marie‑Ira Ducunda, Gatoni, Michel Malonza and the AI Safety South Africa initiative.”

The knowledge base lists the same research team members (Marie-Ira Ducunda, Gatoni, Michel Malonza) as part of the roundtable, confirming their involvement [S2].

Correction (medium confidence)

“The moderator’s name is Michelle Malonza.”

The source identifies the co-moderator as Michel Malonza and notes this is the same person as Michelle Ma, suggesting the report’s spelling may be inaccurate [S2].

Additional Context (high confidence)

“Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent.”

A related warning about digital neocolonialism was made by Nicaragua at the UN, highlighting similar concerns about AI-driven dependence and concentration of value abroad [S20].

Confirmed (high confidence)

“Professor Jonathan Shock highlighted the most pressing short‑term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria.”

Multiple sources identify misinformation and disinformation as the biggest short-term risk to democratic trust, aligning with Shock’s assessment [S50] and the WEF Global Risks Report [S97].

Confirmed (medium confidence)

“He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender‑based campaigns).”

The distinction between misinformation (unintentional) and disinformation (intentional) is explicitly discussed in the IGF report on trust online [S101].

Confirmed (high confidence)

“AI‑enabled agents now allow single malicious actors to launch large‑scale, automated attacks.”

AI’s ability to enable micro-targeted, large-scale disinformation campaigns is documented in a discussion on AI-driven manipulation [S99].

Additional Context (low confidence)

“Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent‑specific harms.”

While the knowledge base does not address the specific database issue, it confirms Dr Okolo’s participation and her focus on AI bias and data sovereignty in Africa [S11].

Confirmed (medium confidence)

“The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and gave housekeeping instructions directing participants to the QR‑code and Slido for live questions.”

A similar event description notes the moderator repeatedly mentioning QR-codes and Slido polls for audience participation, confirming this procedural detail [S94].

External Sources (106)
S1
Responsible AI for Shared Prosperity — -Philip Thigo- His Excellency Ambassador, Special Technology Envoy of the Government of Kenya
S2
Toward Collective Action_ Roundtable on Safe & Trusted AI — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S3
S4
Toward Collective Action_ Roundtable on Safe & Trusted AI — -Michel Malonza: Mentioned as co-moderator but appears to be the same person as Michelle Malonza -Michelle Malonza: Co-…
S5
Agents of inclusion: Community networks & media meet-up | IGF 2023 — Elisa Heppner, the grants management lead for the APNIC Foundation, is instrumental in driving these ventures. She empha…
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S7
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S8
S9
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Dr. Chinasa Okolo- Mark Gaffley – Mark Gaffley- Professor Jonathan Shock
S10
https://dig.watch/event/india-ai-impact-summit-2026/toward-collective-action_-roundtable-on-safe-trusted-ai — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S11
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — Chinasa T. Okolo emphasized opportunities for smaller nations to lead through contextual innovation, data sovereignty, a…
S12
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Dr. Chinasa Okolo – Professor Jonathan Shock- Dr. Chinasa Okolo
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Professor Jonathan Shock – Professor Jonathan Shock- Audience Both recognize different types…
S17
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S18
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S19
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S20
UN General Assembly 66th Plenary Meeting – WSIS Plus 20 High-Level Review — Artificial Intelligence Governance and Emerging Technologies Criticism of Western countries’ selective information acce…
S21
Africa’s Prospects in the New Global Economy: A Comprehensive Analysis from Davos — Economic | Development Mene acknowledges that while the African Union has a critical minerals strategy, governments con…
S22
How Multilingual AI Bridges the Gap to Inclusive Access — Capacity development | Artificial intelligence He highlights that only a tiny pool of experts exists worldwide, stressi…
S23
How African knowledge and wisdom can inspire the development and governance of AI — Despite the significance of sharing information freely, the economic challenges faced by African experts often discourag…
S24
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill: Thank you so much, Jovan, and thank you to you, Diplo Foundation, and its partners for convening th…
S25
Scoping Civil Society engagement in Digital Cooperation | IGF 2023 — Regulations, standards and guardrails can be ways to address risks.
S26
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S27
Open Forum #46 Africa in CyberDiplomacy: Multistakeholder Engagement — How to reduce dependency on foreign technology providers while building local capabilities
S28
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Oluseyi Oyebisi:Yes, and thank you so much, Haiyan, for inviting me to speak this morning. I think in terms of African v…
S29
Towards a Safer South Launching the Global South AI Safety Research Network — – Ambassador Philip Thigo- Mr. Amir Banifatemi- Dr. Balaraman Ravindran – Dr. Rachel Sibande- Ms. Chenai Chair- Ambassa…
S30
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S31
AI: Lifting All Boats / DAVOS 2025 — Vijay Vythianathan Vaitheeswaran: Welcome, ladies and gentlemen, to our session on AI, lifting all boats. I’m Vijay Vy…
S32
Advancing Scientific AI with Safety Ethics and Responsibility — -Speaker 2 (P.T.): AI safety researcher with expertise in biosecurity and AI-enabled biological tools, associated with R…
S33
Open Forum #26 High-level review of AI governance from Inter-governmental P — 5. African Nations: Need to increase data infrastructure and sovereignty. Speaker 2: I’m sure you can hear me, right? …
S34
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Speaker: by the minister by the end of the year, early November. So, and also, we are also drafting an implementation st…
S35
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Audience: Good evening, everyone. Is it? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I’m looking at the team up…
S36
Finnovation — Research conducted by Georgadze’s firm reveals that women tend to require more confidence and knowledge before making fi…
S37
ACKNOWLEDGEMENTS — – Such initiatives should be designed to be inclusive, so marginalized groups and communities, especially people with di…
S38
Digital divides & Inclusion — In conclusion, the digital divide remains a serious and urgent issue that requires collective action. The lack of intern…
S39
Bridging the Digital Divide for Transition to a Greener Economy — Audience:Yes, thank you very much. My name is Tilman Kupfer. I’m an independent consultant from Brussels but with a back…
S40
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Another viewpoint raises concerns about the risks associated with AI. One such risk is “knowledge slavery,” where a cent…
S41
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — The discussion took a critical turn when Moctar Yedaly delivered a stark warning about Africa’s digital sovereignty. He …
S42
Gen AI: Boon or Bane for Creativity? — An election year is approaching with an expected overflow of misinformation and disinformation
S43
Main Topic 3 –  Identification of AI generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S44
High-Level Session 1: Navigating the Misinformation Maze: Strategic Cooperation For A Trusted Digital Future — Natalia Gherman: Thank you, and good morning, ladies and gentlemen. Great pleasure to be here, and just as Madam Moderat…
S45
How African knowledge and wisdom can inspire the development and governance of AI | WSIS+20 — Ubuntu philosophy in AI A foundational concept for AI governance is Ubuntu, a core African philosophy emphasising interc…
S46
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — -Need for regional cooperation and mutualization: Speakers advocated for pooling resources, knowledge, and infrastructur…
S47
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Audience: Good morning. I’m Levi Siansege with Internet Society, Zambia chapter, but also with the youth IGF. I love …
S48
Policy Network on Artificial Intelligence | IGF 2023 — Understanding the context of misinformation/disinformation generation is important
S49
Decoding Disinformation: Fostering Good Practices and Cooperation online course — Key initiatives to counter disinformation in the context of elections.We take a closer look at how disinformation is bei…
S50
Viewing Disinformation from a Global Governance Perspective | IGF 2023 WS #209 — Disinformation, which can impact democratic processes, is a topic of concern. However, solid evidence is needed to suppo…
S51
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — Isha Suri: Thank you, Professor Luka. I’ll just quickly share my screen. I’m joined by my co-author Shiva Kanwar and…
S52
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: Thank you. Yeah, I mean, just building on what Chet was saying, I think, and what you were saying, Olg…
S53
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — ### Government Procurement The session demonstrated broad agreement among diverse stakeholders on the need for human ri…
S54
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the import…
S55
GOVERNING AI FOR HUMANITY — xxix Institutionalizing such multi-stakeholder exchange under the auspices of the United Nations can provide a reliably …
S56
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S57
WS #279 AI: Guardian for Critical Infrastructure in Developing World — These key comments shaped the discussion by progressively broadening its scope from specific technical challenges to enc…
S58
Artificial intelligence — Critical infrastructure
S59
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Harmonization of policies across the region was identified as a critical goal to enable seamless transactions and integr…
S60
WS #270 Understanding digital exclusion in AI era — An audience member raises the question of whether to wait for government to introduce AI policies or let the industry le…
S61
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation o…
S62
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — There is unexpected consensus among speakers from different backgrounds (academia, industry startup, and large corporati…
S63
Toward Collective Action_ Roundtable on Safe & Trusted AI — This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. But it’s this idea of …
S64
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — European contexts focus heavily on regulatory compliance and managing cultural resistance within established bureaucraci…
S65
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Martin Koyabe:And first of all, thank you so much for inviting me, and also for giving the GFCE an opportunity to share …
S66
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S67
AI Safety at the Global Level Insights from Digital Ministers Of — “Is there a way to put guardrails around it?”[49]. “The second point I’d like to make is that ultimately as policymakers…
S68
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — High level of consensus on implementation approach and timeline, with moderate consensus on regulatory strategies. The a…
S69
From principles to practice: Governing advanced AI in action — Both speakers advocate for embedding safety and responsibility considerations from the initial design phase rather than …
S70
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S71
Africa and the Digital Divide: Perspectives and Policies for catch up (Africa Trade Network) — Improving internet infrastructure is essential in driving the African digital economy, with the potential to increase di…
S72
NRIs MAIN SESSION: DATA GOVERNANCE — Data access and shifting from narrow notions of national sovereignty are important considerations in African data govern…
S73
The Digital Town Square Problem: public interest info online | IGF 2023 Open Forum #132 — Cultural, religious, and policy differences among African countries were emphasized in the context of data generation. T…
S74
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — Nnenna Nwakanma brought passionate advocacy for African unity and dignity to the discussion. She emphasised the fundamen…
S75
Toward Collective Action_ Roundtable on Safe & Trusted AI — The discussion began with Ambassador Philip Tigo’s powerful reframing of AI safety concerns through an African lens. Rat…
S76
The fading of human agency in automated systems — In many domains today, humans remain formally responsible for decisions shaped by automated systems. A civil servant sig…
S77
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Ashana Kalemera: Music Good afternoon. Thank you so much for joining us this afternoon. I’ll also say good morning, good…
S78
UN General Assembly 66th Plenary Meeting – WSIS Plus 20 High-Level Review — Nicaragua warns that artificial intelligence’s benefits are being monopolized by a small number of corporate and state a…
S79
New Technologies and the Impact on Human Rights — Reference to how produced data used to inform people is not maintained locally and goes elsewhere, creating risks of cul…
S80
Gen AI: Boon or Bane for Creativity? — An election year is approaching with an expected overflow of misinformation and disinformation
S81
WS #255 AI and disinformation: Safeguarding Elections — Babu Ram Aryal: I was also supposed to come into the very topic, disinformation and the election in our topic. So who…
S82
Main Topic 3 –  Identification of AI generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S83
Breaking the Fake in the AI World: Staying Smart in the Age of Misinformation, Disinformation, Hate, and Deepfake — AHM Bazlur Rahman from Bangladesh News Network for Radio and Communication described grassroots-level interventions focu…
S84
Learning from the MOOC model — Anyone (with Internet access) can enrol in a MOOC, but certain skills are needed to benefit from the instruction: fluenc…
S85
How African knowledge and wisdom can inspire the development and governance of AI — Dr. Jovan Kurbalija Executive Director DiploFoundation:Thank you very much. Let’s start with the Ubuntu spirit. First, e…
S86
How African knowledge and wisdom can inspire the development and governance of AI | WSIS+20 — Ubuntu philosophy in AI A foundational concept for AI governance is Ubuntu, a core African philosophy emphasising interc…
S87
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — -Need for regional cooperation and mutualization: Speakers advocated for pooling resources, knowledge, and infrastructur…
S88
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion revealed that the challenge extends beyond inequitable distribution to an overall supply-demand gap affec…
S89
Open Forum #26 High-level review of AI governance from Inter-governmental P — 2. Addressing data localisation and sovereignty concerns, particularly for developing regions. 3. Data Sovereignty and …
S90
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Audience: Good morning. I’m Levi Siansege with Internet Society, Zambia chapter, but also with the youth IGF. I love …
S91
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — -Moderator- Session moderator facilitating the panel discussion
S92
WS #211 Disability & Data Protection for Digital Inclusion — Fawaz Shaheen: . . Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to …
S93
Day 0 Event #35 Empowering consumers towards secure by design ICTs — WOUT DE NATRIS: Thank you, Joao. And I think that shows how the two topics also intersect with each other, because w…
S94
World Economic Forum Town Hall on AI Ethics and Trust — He repeatedly mentioned the Slido polls, QR codes for audience participation, and encouraged questions from the audience…
S95
WS #25 Multistakeholder cooperation for online child protection — Gladys O. Yiadom: . Can the online moderator share her screen? Full screen, please. Thank you. So the firs…
S96
AI and Digital Developments Forecast for 2026 — Feudalism, at least the peasants had some sort of agency. Slavery is not only physical slavery, it’s basically not havin…
S97
Open Forum: Liberating Science — The new WEF Global Risks Report identified misinformation and disinformation as the biggest short-term risk
S98
DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — Viktoriia Romaniuk: Thank you very much. It’s a great honor to be here and share our experience. Among the organizations…
S99
What Proliferation of Artificial Intelligence Means for Information Integrity? — Septiaji Nugroho highlighted AI’s ability to enable micro-targeting of specific audiences such as elderly people and mig…
S100
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion revealed the complexity of platform governance in addressing different types of problematic content. Kend…
S101
(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144 — The issue of disinformation is also discussed, highlighting its intentional misleading of people and groups. It is noted…
S102
Global cyber capacity building efforts — Moctar Yedaly:Thank you, Martin. And thank you for the previous speakers. As I see in America, it’s very hard to follow,…
S103
Town Hall: How to Trust Technology — She cited instances such as airplane crashes, where people have demonstrated adverse overreactions. According to this pe…
S104
We are the AI Generation — Martin stressed that effective AI governance must be inclusive and globally representative, with AI systems reflecting l…
S105
Judiciary engagement — However, significant concerns emerged about AI implementation risks. Marcelja identified security vulnerabilities, histo…
S106
AI cheating scandal at University sparks concern — Hannah, a university student,admits to using AIto complete an essay when overwhelmed by deadlines and personal illness. …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ambassador Philip Tigo
6 arguments · 175 words per minute · 1976 words · 674 seconds
Argument 1
Undesirable AI outcomes – dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo)
EXPLANATION
The ambassador warns that AI systems that create dependency rather than building local capacity erode human agency. He also flags AI that extracts African data and concentrates value abroad as a form of digital neocolonialism, and warns that AI built without African knowledge poses existential threats.
EVIDENCE
He explains that AI creating dependency undermines agency for a continent still aspiring, describes AI as an extractor of African data and value concentration outside the continent, and calls this digital neocolonialism; he also says AI built without African knowledge creates an existential threat to civilization [33-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern about AI benefits being monopolised and creating digital neocolonialism is highlighted in the UN General Assembly discussion [S20], while calls for sovereign AI systems to avoid external dependence are echoed in IGF 2025 deliberations [S26] and multistakeholder engagement on reducing foreign tech reliance [S27].
MAJOR DISCUSSION POINT
Undesirable outcomes of AI in Africa
Argument 2
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
EXPLANATION
The ambassador stresses that AI development should be a collective, cooperative effort rather than a competitive race. He argues that competition wastes money and hampers progress, and that collaboration is essential for effective AI governance on the continent.
EVIDENCE
He states that AI is not about competition but about cooperation, describing competition as a waste of money and resources, and calls for an all-in effort across Africa [201-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both the roundtable on safe & trusted AI and the collective-action discussion stress the need for collaboration over competition, noting that competition wastes money and resources [S2]; the analysis of Africa’s global economic prospects also warns that individual negotiations undermine collective interests [S21].
MAJOR DISCUSSION POINT
Need for cooperation over competition
AGREED WITH
Dr. Chinasa Okolo
Argument 3
Lack of continent‑wide AI policies and talent; need to build capacity and expertise (Ambassador Philip Tigo)
EXPLANATION
He points out that Africa currently lacks comprehensive AI policies and sufficient talent, especially in the public sector, which hampers safe AI deployment. Building local expertise and policy frameworks is therefore crucial.
EVIDENCE
He notes that there are AI strategies but no AI policies on the continent, and a shortage of talent, particularly in the public sector where AI is often misunderstood as a cost centre [101-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on multilingual AI point to a tiny global pool of experts and the need for academic capacity building [S22], while studies on African knowledge highlight economic barriers that limit expert participation [S23]; the roundtable further identifies policy and talent gaps across the continent [S2].
MAJOR DISCUSSION POINT
Policy and talent gaps in African AI
AGREED WITH
Mark Gaffley, Dr. Chinasa Okolo, Professor Jonathan Shock, Michelle Malonza
Argument 4
Embed safety benchmarks and guardrails in procurement contracts; create agile, continuously updated mechanisms (Ambassador Philip Tigo)
EXPLANATION
The ambassador recommends incorporating safety standards into procurement processes and establishing agile mechanisms to keep pace with rapid AI changes. This approach aims to ensure that AI tools are vetted before acquisition and that policies remain current.
EVIDENCE
He suggests adding safety benchmarks to procurement documents, creating agile mechanisms to continuously push foundational questions, and emphasizes the need for ongoing updates as technology evolves rapidly [241-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines on digital cooperation stress the role of regulations, standards and guardrails in procurement processes [S25]; the launch of a Global South AI Safety Research Network proposes independent safety testing and certification mechanisms [S29]; and recent safety-focused discussions advise against anthropomorphising AI and call for agile safety frameworks [S24].
MAJOR DISCUSSION POINT
Safety-focused procurement and agile governance
AGREED WITH
Dr. Chinasa Okolo, Professor Jonathan Shock, Mark Gaffley
Argument 5
Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers
EXPLANATION
The ambassador argues that Africa must build capacity to access, evaluate and create its own AI models, ensuring that data stays on the continent and that AI systems reflect local contexts. This reduces dependence on external corporations and safeguards strategic interests.
EVIDENCE
He stresses that African scientists need access to models, notes that Kenya is among the heaviest users of ChatGPT yet receives culturally mismatched advice, and calls for building indigenous models that understand local culture; he argues that over-reliance on companies like OpenAI, Anthropic or others is risky and that open-source alternatives are needed for data localisation and sovereignty [186-191].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IGF 2025 highlighted the urgency of building sovereign, ethical AI systems tailored to African contexts [S26]; multistakeholder cyber-diplomacy forums discuss reducing dependency on foreign providers while building local capabilities [S27]; a high-level AI governance review notes the need for African data infrastructure and sovereignty [S33]; and open-source AI initiatives are positioned as catalysts for a home-grown digital economy [S34].
MAJOR DISCUSSION POINT
Promoting African data sovereignty and home‑grown AI models
Argument 6
Create negotiation playbooks and guidebooks for governments to strengthen bargaining power with AI vendors
EXPLANATION
The ambassador notes that African negotiators often lack market insight when dealing with trillion‑dollar AI firms. He proposes developing structured playbooks, guidebooks and negotiation tools so policymakers can secure better terms, safety benchmarks and guardrails.
EVIDENCE
He describes the need for “playbooks, guidebooks, negotiation tools” that give decision-makers knowledge and power to negotiate with large tech companies, emphasizing that knowledge is essential for effective engagement and protecting national interests [256-258].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of Africa’s mineral strategy argue for collective negotiation tools and shared principles to avoid fragmented deals [S21]; the safe-trusted AI roundtable also calls for structured negotiation tools to empower policymakers [S2].
MAJOR DISCUSSION POINT
Enhancing governmental negotiation capacity with AI providers
Speaker 2
1 argument · 149 words per minute · 527 words · 210 seconds
Argument 1
Safe AI as delivering the outcomes we want – “AI that delivers the outcome we want” (Speaker 2)
EXPLANATION
Speaker 2 defines safe and trusted AI as technology that reliably produces the desired outcomes for users. This framing emphasizes outcome‑orientation rather than abstract safety criteria.
EVIDENCE
He describes safe and trusted AI broadly as AI that delivers the outcome we want [30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The safety researcher’s perspective emphasizes outcome-oriented AI design as a core principle of trustworthy systems [S32], and broader governance frameworks stress aligning AI performance with desired societal outcomes [S30].
MAJOR DISCUSSION POINT
Outcome‑oriented definition of safe AI
Michelle Malonza
1 argument · 218 words per minute · 600 words · 164 seconds
Argument 1
Defining what Africans want from AI – need to know the technology exists before we can decide (Michelle Malonza)
EXPLANATION
Michelle argues that Africans cannot articulate their AI preferences without first being aware of the technologies available. Understanding AI’s existence is a prerequisite for defining desired outcomes.
EVIDENCE
She links the need to know what AI technologies exist to the ability to define what Africans want, emphasizing that awareness precedes desire [152-154].
MAJOR DISCUSSION POINT
Awareness as a prerequisite for defining AI needs
Mark Gaffley
3 arguments · 166 words per minute · 788 words · 284 seconds
Argument 1
Public awareness, education programmes, MOOCs and scholarships to empower marginalized citizens (Mark Gaffley)
EXPLANATION
Mark highlights the importance of raising AI awareness through surveys, short courses, scholarships for women, and a forthcoming free MOOC. These initiatives aim to equip citizens, especially marginalized groups, with knowledge to make informed AI choices.
EVIDENCE
He cites a survey showing 75% of respondents know little about AI, describes short courses attracting thousands of applicants, scholarships for African women, and an upcoming MOOC with relatable content [141-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive capacity-development initiatives targeting marginalized groups are recommended to ensure equitable access to AI education [S37]; research on gendered financial decision-making underlines the importance of tailored scholarships for women [S36]; and the scarcity of AI experts underscores the need for broad educational outreach [S22].
MAJOR DISCUSSION POINT
Education and outreach to build AI literacy
AGREED WITH
Dr. Chinasa Okolo, Ambassador Philip Tigo, Professor Jonathan Shock, Michelle Malonza
Argument 2
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
EXPLANATION
Mark stresses that when deploying AI solutions, societies must retain non‑digital alternatives for those lacking access. This ensures inclusivity and prevents exclusion of digitally marginalized populations.
EVIDENCE
He advises keeping analogue approaches for people who cannot access AI tools, emphasizing inclusion of those without digital access [224-225].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The digital-divide brief calls for collective action to keep non-digital options for populations lacking connectivity or electricity [S38]; inclusive programme guidelines also stress the necessity of analogue alternatives for the digitally excluded [S37].
MAJOR DISCUSSION POINT
Maintaining non‑digital options for the digitally excluded
AGREED WITH
Ambassador Philip Tigo, Dr. Chinasa Okolo, Professor Jonathan Shock
Argument 3
Treat the digitally excluded as a valuable source of creativity and ensure their inclusion as a strategic asset
EXPLANATION
Mark suggests that people without digital access retain unique creative capacities that could be leveraged in the future. Preserving analog alternatives not only prevents exclusion but also safeguards a reservoir of creativity that AI systems might later draw upon.
EVIDENCE
He describes the digitally excluded as “the kind of last vestiges of creativity left on the planet,” arguing that keeping analogue capabilities could be valuable for future innovation and should be considered when deploying AI solutions [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the digital divide highlight the strategic value of preserving analogue creativity and ensuring that excluded communities remain part of future innovation ecosystems [S38].
MAJOR DISCUSSION POINT
Valuing and integrating the digitally excluded as creative contributors
Dr. Chinasa Okolo
7 arguments · 178 words per minute · 1638 words · 551 seconds
Argument 1
African AI incident data is missing or mis‑classified; current databases revert to “African American” (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa points out that existing AI incident databases inadequately represent African harms, often mis‑labeling them under “African American.” This data gap hampers understanding of AI risks on the continent.
EVIDENCE
She observes that when searching for Africa in current databases, results default to “African American,” making it hard to find AI-related harms specific to the continent [72-73].
MAJOR DISCUSSION POINT
Data gaps in African AI incident reporting
Argument 2
Need for an AI incident database and better monitoring mechanisms to track harms on the continent (Dr. Chinasa Okolo)
EXPLANATION
She advocates for the creation of a dedicated AI incident database to systematically capture and monitor AI‑related harms in Africa. Better monitoring would enable informed policy and accountability.
EVIDENCE
She recounts a conversation with an AI researcher about an AI incident database and stresses its importance for tracking harms on the continent [69-71].
MAJOR DISCUSSION POINT
Establishing an African AI incident tracking system
AGREED WITH
Professor Jonathan Shock
Argument 3
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa argues that involving African researchers and engineers in AI development can address underemployment and bring diverse perspectives, moving beyond bias rooted in Western constructs.
EVIDENCE
She notes that young Africans seek equitable participation to combat underemployment, and that current bias research is often race-centric, a Western construct; inclusive participation would advance the field and governance structures [166-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on African knowledge and wisdom note that economic constraints limit African experts’ contributions, suggesting that equitable participation can both create jobs and diversify AI perspectives [S23]; broader discussions on inclusive AI development stress the need for diverse talent pools to move beyond Western-centric bias [S31]; and the shortage of multilingual AI experts further underscores this gap [S22].
MAJOR DISCUSSION POINT
Inclusive AI development for jobs and bias mitigation
AGREED WITH
Ambassador Philip Tigo
Argument 4
Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo)
EXPLANATION
She emphasizes that civil‑society coalitions can increase advocacy power, though she cautions about potential government suppression and associated risks.
EVIDENCE
She describes how civil-society advocacy, especially when grouped into coalitions, can be powerful, while also noting incentives for governments to suppress such movements, sometimes leading to violence [92-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Civil-society engagement reports identify regulations, standards and guardrails as mechanisms that become more effective when advocacy groups act in coalitions [S25]; the safe-trusted AI roundtable also highlights the power of multistakeholder collaboration for policy influence [S2].
MAJOR DISCUSSION POINT
Strength of civil‑society coalitions for AI advocacy
AGREED WITH
Ambassador Philip Tigo
Argument 5
Leverage UN, AU and national AI strategies to shape policy and ensure African voices are heard (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa highlights the importance of engaging with UN and AU platforms, as well as national AI strategies, to ensure African perspectives influence global AI governance.
EVIDENCE
She mentions African representation on a UN scientific panel, the Africa AI Council, her involvement in the AU Continental AI Strategy and Nigeria’s National AI Strategy, and the role of UN initiatives in fostering dialogue [210-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN General Assembly plenary on AI governance discusses the need for inclusive multilateral dialogue and warns against selective information access, aligning with calls to use UN and AU platforms for African influence [S20]; IGF 2025 emphasizes sovereign AI pathways that require engagement with continental bodies [S26]; and a high-level review notes that African voices are arriving late and need peer pressure to enter the global AI debate [S28].
MAJOR DISCUSSION POINT
Using multilateral mechanisms for African AI policy influence
Argument 6
Establish independent AI safety institutes for testing and certification, reducing dependence on external lenders (Dr. Chinasa Okolo)
EXPLANATION
She proposes creating AI safety institutes, akin to the US NIST model, to independently evaluate AI technologies, thereby decreasing reliance on foreign funding and ensuring alignment with African values.
EVIDENCE
She references the US NIST model that tests a range of products, suggesting a similar independent capacity is needed in Africa to avoid dependence on multilateral lenders or foreign organizations [264-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The launch of the Global South AI Safety Research Network proposes independent testing bodies akin to NIST to certify AI systems locally [S29]; earlier safety-focused discussions also call for independent evaluation mechanisms to avoid reliance on foreign entities [S24].
MAJOR DISCUSSION POINT
Independent AI safety testing bodies
Argument 7
Governments should first assess whether AI is truly necessary for a problem before procuring AI solutions (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa warns that many challenges could be solved more effectively with basic infrastructure such as schools, hospitals or reliable electricity, and that rushing to buy AI tools can waste resources. A critical evaluation of AI’s added value is essential.
EVIDENCE
She states that AI is often procured when simple solutions, such as building hospitals, hiring teachers, or providing reliable electricity, would address the issue more efficiently, cautioning against unnecessary AI procurement and emphasizing the need for careful assessment [259-261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines on digital cooperation stress the importance of guardrails and careful assessment before AI procurement to avoid unnecessary spending [S25]; governance frameworks further advise evaluating the added value of AI versus simpler solutions [S30].
MAJOR DISCUSSION POINT
Critical assessment of AI necessity before adoption
Professor Jonathan Shock
5 arguments · 184 words per minute · 1458 words · 473 seconds
Argument 1
Short‑term risk of misinformation, disinformation and breakdown of trust (Professor Jonathan Shock)
EXPLANATION
Professor Shock warns that AI‑enabled misinformation and disinformation campaigns are already undermining trust during elections across several African countries, posing an immediate threat.
EVIDENCE
He describes observed misinformation and targeted disinformation campaigns during elections in Ghana, South Africa, and Nigeria, noting the distinction between misinformation and disinformation and their gendered nature, leading to a breakdown of societal trust [48-55].
MAJOR DISCUSSION POINT
Immediate trust erosion from AI‑driven misinformation
AGREED WITH
Dr. Chinasa Okolo
Argument 2
Emerging threat of AI‑generated malicious agents for targeted campaigns (Professor Jonathan Shock)
EXPLANATION
He highlights that individual malicious actors can now create autonomous AI agents to conduct disinformation campaigns at scale, expanding the threat beyond large tech firms.
EVIDENCE
He notes that a single malicious actor can design their own agent for misinformation, a phenomenon observed over recent months, indicating a shift from big-tech-only threats [60-64].
MAJOR DISCUSSION POINT
Rise of AI agents as tools for malicious campaigns
Argument 3
Long‑term existential risk is less immediate; focus should be on current, observable harms (Professor Jonathan Shock)
EXPLANATION
While acknowledging theoretical existential risks, he argues that policy should prioritize the tangible, short‑term harms currently manifesting across the continent.
EVIDENCE
He states that long-term existential threats (e.g., AI pressing a nuclear button) are less immediate, and the priority should be on observable harms such as misinformation and trust breakdown [57-59].
MAJOR DISCUSSION POINT
Prioritizing present harms over speculative existential threats
Argument 4
Empowerment through agency, local language/context, and human‑in‑the‑loop design (Professor Jonathan Shock)
EXPLANATION
He argues that AI should enhance human agency by incorporating local languages and contexts, and by ensuring human‑in‑the‑loop mechanisms that allow people to make informed choices.
EVIDENCE
He stresses that empowerment requires AI to understand local context and language, and that without this, AI cannot truly empower; he also mentions the importance of human-in-the-loop design [155-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual AI research highlights the necessity of local language support to empower users and ensure relevance [S22]; inclusive AI discussions underline the role of human-in-the-loop designs for agency [S31]; and standards for safe AI stress human oversight as a core requirement [S25].
MAJOR DISCUSSION POINT
Designing AI for agency and contextual relevance
AGREED WITH
Mark Gaffley, Dr. Chinasa Okolo, Ambassador Philip Tigo, Michelle Malonza
Argument 5
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
EXPLANATION
He cautions against rapid, unchecked AI integration into critical infrastructure, advocating for transparent, human‑in‑the‑loop systems to prevent loss of control and over‑reliance.
EVIDENCE
He warns against the “move fast and break things” approach, recommends human-in-the-loop and transparent decision-making processes, and notes risks of becoming beholden to external vendors [226-233].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory recommendations call for transparent, human-in-the-loop mechanisms when integrating AI into critical infrastructure to prevent loss of control [S25]; broader AI governance literature stresses the need for ongoing oversight and transparent decision-making [S30]; and safety-focused dialogues advocate for such safeguards to reduce vendor lock-in [S24].
MAJOR DISCUSSION POINT
Safe AI integration in critical infrastructure
AGREED WITH
Ambassador Philip Tigo, Dr. Chinasa Okolo, Mark Gaffley
Speaker 1
1 argument · 103 words per minute · 627 words · 362 seconds
Argument 1
Promote community networking and post‑event engagement to strengthen regional ties (Speaker 1)
EXPLANATION
Speaker 1 encourages participants to continue networking after the event, highlighting a social gathering as an opportunity to build regional connections and sustain collaboration.
EVIDENCE
He invites attendees to a post-event group photo and a social gathering at Café Lota, emphasizing community interaction after the conference [369-376].
MAJOR DISCUSSION POINT
Fostering post‑event community networking
Audience
4 arguments · 170 words per minute · 574 words · 202 seconds
Argument 1
Implement mandatory watermarks on AI‑generated media to help users identify synthetic content and curb disinformation
EXPLANATION
The audience proposes that every piece of AI‑generated audio, video or image should carry a clear, mandatory watermark. Such labeling would allow people to recognise synthetic media and reduce the spread of misinformation and disinformation.
EVIDENCE
The audience explains that AI tools are being mass-produced for media creation and suggests a mandatory watermark for AI-generated videos, songs or pictures, asking whether this would be a workable solution to help users identify AI content [295-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standards and guardrails discussed in civil-society engagement reports include labeling and watermarking as tools to mitigate misinformation risks [S25].
MAJOR DISCUSSION POINT
Mitigating AI‑driven misinformation through labeling
Argument 2
AI development must not widen the digital divide; policies should explicitly include the digitally excluded population
EXPLANATION
The audience highlights that a large share of Africans lack internet access, electricity and digital skills, and warns that AI initiatives could exacerbate existing inequalities if these groups are ignored. Inclusive policies are needed to ensure AI benefits reach everyone.
EVIDENCE
The audience states that about 64 % of the continent does not have internet access, notes the lack of electricity and digital inclusion, and asks how to ensure AI advancements do not widen the digital divide [307-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The digital-divide brief stresses that AI initiatives must be inclusive and avoid exacerbating existing inequalities, recommending policies that address the needs of the offline population [S38]; inclusive programme guidelines further call for designing AI solutions that reach marginalized groups [S37].
MAJOR DISCUSSION POINT
Preventing AI from increasing digital exclusion
Argument 3
AI should not be allowed to decide humanity’s economic or political structure; human values must remain the guiding principle
EXPLANATION
The audience raises a philosophical concern that AI could be used to determine societal systems such as capitalism or socialism, and questions whether AI should be given that authority. The implication is that AI must remain subordinate to human‑defined values and governance.
EVIDENCE
The audience describes AI as a technology that might “decide on a structure for humanity” and asks panelists for opinions on the ideal economic system if AI were to choose, indicating apprehension about autonomous AI governance decisions [330-338].
MAJOR DISCUSSION POINT
Limits on AI’s role in determining societal structures
Argument 4
Proactive AI policy and youth engagement are essential to safeguard the next generation
EXPLANATION
The audience asks whether policy should be enacted now to protect younger people, emphasizing that early regulatory frameworks and education are required for safe AI adoption. It suggests that waiting for problems to emerge would be too late.
EVIDENCE
The audience questions if policy should be prioritized now, mentions the enthusiasm for AI knowledge, and wonders whether policy can make the next generation safer, highlighting the urgency of policy action [339-341].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI readiness discussion highlights the urgency of establishing policies now to create an enabling environment for safe AI adoption, especially for younger cohorts [S35]; broader governance frameworks also argue for early regulatory action to protect future generations [S30].
MAJOR DISCUSSION POINT
Early policy and education for AI safety
Agreements
Agreement Points
Broad consensus on the need for capacity development and AI literacy across the continent
Speakers: Mark Gaffley, Dr. Chinasa Okolo, Ambassador Philip Tigo, Professor Jonathan Shock, Michelle Malonza
Public awareness, education programmes, MOOCs and scholarships to empower marginalized citizens (Mark Gaffley)
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Lack of continent‑wide AI policies and talent; need to build capacity and expertise (Ambassador Philip Tigo)
Empowerment through agency, local language/context, and human‑in‑the‑loop design (Professor Jonathan Shock)
Defining what Africans want… need to know the technology exists before we can decide (Michelle Malonza)
All five speakers stress that without widespread AI awareness, education, and local expertise African societies cannot define or demand safe AI solutions; they cite low AI literacy, the need for training programmes, scholarships and the creation of local talent pools [141-151][166-172][101-108][155-163][152-154].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis on capacity building and AI literacy in African AI governance discussions, where regional strategies prioritize infrastructure development and skill acquisition to bridge the digital divide [S64][S71][S59].
Shared concern about digital neocolonialism and the need for African data sovereignty
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Undesirable AI outcomes — dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo)
Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers (Ambassador Philip Tigo)
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Both speakers warn that AI systems that extract African data or impose external models erode agency and constitute digital neocolonialism; they argue for building indigenous models and increasing African participation in AI development [33-36][186-191][166-172].
POLICY CONTEXT (KNOWLEDGE BASE)
The concern aligns with calls for African data self-determination and resistance to digital neocolonialism, as highlighted in analyses of data governance that stress shifting from narrow national sovereignty toward collaborative, continent-wide data frameworks [S72][S73][S59].
Consensus that cooperation, not competition, should drive African AI initiatives
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo)
Both emphasize that collaborative, coalition-based approaches are essential and that competing for resources wastes money and hampers progress [201-207][92-94].
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for cooperation reflects multistakeholder approaches advocated by UN-backed initiatives and the Global Digital Compact, which stress African unity and coordinated policy harmonisation across nations [S55][S74][S59].
Agreement on embedding safety guardrails and careful procurement of AI systems
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo, Professor Jonathan Shock, Mark Gaffley
Embed safety benchmarks and guardrails in procurement contracts; create agile, continuously updated mechanisms (Ambassador Philip Tigo)
Governments should first assess whether AI is truly necessary for a problem before procuring AI solutions (Dr. Chinasa Okolo)
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
All four speakers call for safety-focused procurement, including safety benchmarks, necessity assessments, human-in-the-loop designs, and retaining non-digital alternatives to avoid over-reliance on AI [241-248][259-261][226-233][224-225].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding safety guardrails and prudent procurement is consistent with human-rights-based AI governance frameworks and ministerial guidance on operationalising guardrails in AI procurement processes [S53][S67][S69][S70][S65].
Recognition of short‑term misinformation and disinformation risks
Speakers: Professor Jonathan Shock, Dr. Chinasa Okolo
Short‑term risk of misinformation, disinformation and breakdown of trust (Professor Jonathan Shock)
Need for an AI incident database and better monitoring mechanisms to track harms on the continent (Dr. Chinasa Okolo)
Both highlight that AI-enabled misinformation is already undermining trust in elections and that systematic incident tracking is needed to monitor and mitigate these harms [48-55][69-71][72-73].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognition of short-term misinformation and disinformation risks is supported by IGF 2023 sessions that underline the need to understand AI-generated false content and develop counter-disinformation strategies [S48][S49][S50][S51].
Importance of civil‑society involvement and coalition building
Speakers: Dr. Chinasa Okolo, Ambassador Philip Tigo
Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo)
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
Both stress that civil-society coalitions are vital for effective advocacy and that a cooperative, all-in approach should include these groups [92-94][201-207].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for civil-society involvement reflects the multistakeholder participation principle emphasized in global AI governance forums, which aim to balance power dynamics and include NGOs in policy design [S54][S55][S52].
Similar Viewpoints
Both see foreign‑centric AI as a threat to African agency and advocate for locally‑controlled data and models to avoid digital neocolonialism [33-36][186-191][166-172].
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Undesirable AI outcomes — dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo)
Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers (Ambassador Philip Tigo)
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Both argue that competition is counter‑productive and that inclusive, cooperative strategies—including retaining non‑digital options—are essential for equitable AI deployment [201-207][224-225].
Speakers: Ambassador Philip Tigo, Mark Gaffley
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
Both caution against over‑reliance on AI and stress the need for human oversight or analogue alternatives to safeguard critical services [226-233][224-225].
Speakers: Professor Jonathan Shock, Mark Gaffley
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
Unexpected Consensus
Preserving non‑digital/analogue alternatives while deploying AI
Speakers: Mark Gaffley, Professor Jonathan Shock
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
Although Mark focuses on keeping analogue options for the digitally excluded and Professor Shock emphasizes human-in-the-loop designs to avoid over-reliance, both converge on the principle that AI should not replace all existing non-digital processes, an alignment not explicitly anticipated at the start of the discussion [224-225][226-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Preserving non-digital alternatives resonates with discussions on the value of the digitally excluded as a source of creativity and the need to avoid marginalising them amid rapid AI deployment [S63][S60].
Overall Assessment

The panel displayed strong convergence on four main themes: (1) the urgent need for capacity building and AI literacy; (2) the risk of digital neocolonialism and the imperative for African data sovereignty; (3) the preference for cooperative, coalition‑based approaches over competition; and (4) the necessity of embedding safety guardrails, transparent human‑in‑the‑loop designs, and preserving analogue alternatives in AI procurement and deployment.

High consensus – the repeated alignment across multiple speakers and arguments indicates a solid shared understanding of the priorities for safe and trusted AI in Africa, providing a robust foundation for coordinated policy action and regional collaboration.

Differences
Different Viewpoints
Extent and timing of AI integration into critical infrastructure and development projects
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock, Mark Gaffley
AI can be used to optimise development, e.g., energy optimisation, and should be adopted (Ambassador Philip Tigo)
Move‑fast AI integration is risky; need human‑in‑the‑loop, transparent systems and avoid over‑reliance (Professor Jonathan Shock)
When deploying AI solutions, keep analogue alternatives for those without digital access to prevent exclusion (Mark Gaffley)
Ambassador Tigo promotes using AI now to accelerate development such as energy optimisation [318-326], while Professor Shock cautions that rapid AI-ification of services can break trust and recommends human-in-the-loop, transparent designs to avoid over-reliance [226-233]. Mark adds that any AI rollout must preserve non-digital options for the digitally excluded [224-225]. The three share a goal of safe AI use but diverge on how quickly and in what form AI should be embedded in critical services.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI integration into critical infrastructure reference studies highlighting AI’s role in safeguarding critical systems and the necessity for policy and capacity-building frameworks before large-scale deployment in developing contexts [S56][S57][S58].
Whether AI is necessary to solve development problems versus relying on basic infrastructure solutions
Speakers: Dr. Chinasa Okolo, Ambassador Philip Tigo
Governments should first assess if AI is truly needed; many challenges are better solved with hospitals, schools, electricity (Dr. Chinasa Okolo)
AI can be leveraged to optimise development (e.g., energy optimisation) and should be adopted rather than waiting for AI‑only solutions (Ambassador Philip Tigo)
Dr. Chinasa argues that AI procurement often occurs when simpler, non-AI solutions would be more effective and that a critical assessment of AI necessity is essential [259-261]. Ambassador Tigo counters that AI already offers concrete benefits, such as improving energy optimisation, and should be integrated into development agendas [318-326]. This reflects a disagreement on the priority of AI versus traditional development interventions.
POLICY CONTEXT (KNOWLEDGE BASE)
The question mirrors prior dialogues that prioritize foundational connectivity and infrastructure as prerequisites for effective AI adoption in Africa, while also noting pragmatic views that immediate AI applications should complement, not replace, basic infrastructure [S71][S62].
Preferred focus for building AI capacity on the continent
Speakers: Mark Gaffley, Ambassador Philip Tigo
Raise public awareness through surveys, short courses, scholarships and a free MOOC to empower citizens (Mark Gaffley)
Develop home‑grown AI models, ensure data sovereignty and give African scientists access to and ability to evaluate models (Ambassador Philip Tigo)
Mark emphasizes citizen-level education and outreach as the primary route to capacity building [141-151], whereas Ambassador Tigo stresses technical capacity for model development and data sovereignty as the cornerstone of African AI capability [186-191]. Both aim to strengthen capacity but differ on whether the priority is broad public AI literacy or technical model-building infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
Preferences for AI capacity-building focus are informed by reports that African strategies prioritize infrastructure development, innovation hubs, and policy harmonisation to foster sustainable digital economies [S64][S59][S71].
Preferred mechanisms for strengthening governmental negotiating power with AI vendors
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Create negotiation playbooks, guidebooks and tools to give policymakers market insight and bargaining power (Ambassador Philip Tigo)
Leverage UN, AU and national AI strategy platforms to shape policy and ensure African voices are heard (Dr. Chinasa Okolo)
Ambassador Tigo proposes practical negotiation toolkits for governments to secure better terms with large AI firms [256-258]. Dr. Chinasa advocates using multilateral fora such as the UN scientific panel, the Africa AI Council and continental strategies to influence policy and secure African interests [210-214]. The disagreement lies in whether the focus should be on tactical negotiation aids or on multilateral policy engagement.
POLICY CONTEXT (KNOWLEDGE BASE)
Strengthening governmental negotiating power is addressed in discussions on government procurement policies and the role of public-sector coordination to secure fair, transparent contracts with AI vendors [S53][S52].
Unexpected Differences
Value of preserving the digitally excluded as a creative resource versus prioritising rapid AI rollout
Speakers: Mark Gaffley, Ambassador Philip Tigo
Mark frames the digitally excluded as a valuable source of creativity that should be preserved and even leveraged in the future [315-317]
Ambassador Tigo pushes for an all-in AI effort, emphasizing cooperation and rapid adoption to avoid competition and to build capacity [201-207]
Mark’s unconventional view that the analog, digitally excluded population constitutes a strategic creative asset contrasts with Ambassador Tigo’s focus on accelerating AI adoption across the continent, revealing an unexpected tension between preserving non‑digital creativity and pursuing swift AI integration.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between preserving the digitally excluded as a creative resource and rapid AI rollout is highlighted in round-table debates that argue for inclusive approaches to avoid “mental arrest” of marginalized groups while advancing AI initiatives [S63][S60][S61].
Overall Assessment

The panel displayed broad consensus on the importance of AI safety, capacity building and inclusive governance, but diverged on how quickly AI should be deployed, whether AI is necessary for many development challenges, the primary focus of capacity building (public education vs technical model development), and the preferred strategy for strengthening governmental leverage with AI vendors.

Moderate to high – while participants share overarching goals (safe, trusted, inclusive AI), they hold contrasting views on implementation pathways, leading to potential delays or fragmented policies if not reconciled. These disagreements could affect the speed and effectiveness of regional AI collaboration, procurement standards, and the balance between AI adoption and addressing basic infrastructure needs.

Partial Agreements
Both see the need for safeguards, but Ambassador Tigo focuses on contractual and administrative levers while Professor Shock emphasizes technical design and operational safeguards.
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
Both agree AI safety must be embedded in procurement and governance processes.
Ambassador Tigo calls for safety benchmarks in procurement contracts and agile mechanisms [241-248]
Professor Shock stresses human-in-the-loop and transparent systems to avoid over-reliance [226-233]
They share the goal of inclusive empowerment, but Dr. Chinasa focuses on structural participation and advocacy, whereas Mark concentrates on education and skill‑building for citizens.
Speakers: Dr. Chinasa Okolo, Mark Gaffley
Both highlight the importance of inclusive participation and capacity building.
Dr. Chinasa stresses civil-society coalitions and equitable participation in AI development [92-94][166-172]
Mark stresses public awareness, education programmes and scholarships for marginalized groups [141-151]
Takeaways
Key takeaways
Safe and trusted AI for Africa means avoiding dependency, digital neocolonialism, and erosion of human agency; AI should deliver outcomes that African citizens want and understand.
Short‑term risks such as misinformation, disinformation, and AI‑generated malicious agents are more urgent than speculative long‑term existential threats.
There is a critical data gap: existing AI incident databases do not capture African‑specific harms, often mis‑classifying them as “African American.”
Capacity building, public awareness, and education (MOOCs, scholarships, short courses) are essential to empower marginalized communities and create local expertise.
Collaboration across academia, civil society, government, and the private sector is needed; competition among African actors wastes resources.
Policy and governance are weak: many countries lack AI strategies, talent, and procurement safeguards; embedding safety benchmarks and agile oversight is required.
Deploying AI in critical infrastructure must retain human‑in‑the‑loop controls, transparency, and fallback analogue systems to avoid over‑reliance.
Inclusion of the digitally excluded must be considered to prevent widening the digital divide; AI should be used to accelerate development rather than for its own sake.
Resolutions and action items
Develop and maintain an Africa‑focused AI incident database to track harms and inform policy.
Launch a publicly accessible MOOC on AI ethics, safety, and human rights for African audiences.
Expand scholarship programmes (e.g., Women in Focus) and short courses to build local AI expertise.
Create an African Compute Initiative providing shared high‑performance computing resources to researchers continent‑wide.
Incorporate AI safety benchmarks and guardrails into government procurement contracts and develop agile, continuously updated oversight mechanisms.
Establish independent AI safety institutes or certification bodies within African countries to evaluate models and certify compliance.
Foster coalition building among civil‑society groups to amplify advocacy and policy influence.
Encourage the development of locally‑trained AI models that reflect African languages, cultures, and contexts.
Unresolved issues
Specific mechanisms for enforcing accountability of multinational AI providers when harms occur on the continent.
Detailed guidelines for balancing AI adoption with non‑AI solutions in sectors like education, health, and infrastructure.
How to effectively monitor and mitigate AI model “leaks” and frontier‑model risks originating outside Africa.
Concrete steps for integrating AI into critical infrastructure while ensuring transparency, human‑in‑the‑loop control, and fallback options.
Implementation of mandatory watermarking or provenance labeling for AI‑generated media and its enforceability.
Strategies to bridge the digital divide for the 64 % of Africans lacking internet or reliable electricity.
Long‑term governance frameworks for existential AI risks specific to the African context.
Suggested compromises
Adopt a cooperative, continent‑wide approach rather than competitive national efforts, sharing resources and expertise.
Maintain analogue alternatives alongside AI solutions to ensure services remain accessible to those without digital access.
Use watermarks or provenance tags as a short‑term mitigation for AI‑generated disinformation, recognizing they can be bypassed.
Combine AI deployment with traditional development investments (e.g., infrastructure, education) rather than treating AI as a panacea.
Balance the use of foreign AI models with the development of open‑source, locally‑trained models to retain agency and data sovereignty.
Thought Provoking Comments
If AI systems are creating a dependency rather than building capacity or capability, eroding human agency and extracting African data while concentrating value outside the continent – that is digital neocolonialism and an existential threat.
Frames AI risk in terms of sovereignty and agency rather than technical safety, introducing the powerful concept of digital neocolonialism and linking it to existential risk for Africa.
Shifted the conversation from abstract safety concerns to concrete geopolitical and socio‑economic implications, prompting other panelists to discuss capacity building, local model development, and the need for African‑led governance structures.
Speaker: Ambassador Philip Tigo
We are already seeing a breakdown in trust caused by misinformation and disinformation campaigns, especially around elections, and now single malicious actors can create AI agents to spread targeted, gender‑based political violence at scale.
Highlights an immediate, observable threat—political manipulation via AI—while introducing the novel idea of AI‑driven agents as a new vector for disinformation.
Moved the discussion toward short‑term, real‑world harms, leading other speakers (e.g., Ambassador Tigo and Dr. Okolo) to emphasize urgent mitigation strategies such as regulation, monitoring, and capacity for rapid response.
Speaker: Professor Jonathan Shock
Current AI incident databases return ‘African American’ when I search for Africa; there is virtually no accessible record of AI harms on the continent, making it hard for governments to craft appropriate regulations.
Exposes a data gap that hampers visibility of African AI risks, calling attention to the need for continent‑specific incident tracking and knowledge sharing.
Prompted calls for better data collection and monitoring mechanisms, influencing subsequent remarks about building African‑focused safety institutes and the importance of localized research.
Speaker: Dr. Chinasa Okolo
We have AI strategies but not AI policies; there is a lack of talent and fluency in the public sector, so AI safety is not even on the radar. We must redefine what ‘existential risk’ means for Africa—not sci‑fi scenarios, but threats to democracy and societal harmony.
Challenges the assumption that existing global AI risk frameworks apply directly to Africa, urging a re‑orientation toward context‑specific risks.
Steered the panel toward discussing concrete policy gaps, capacity building, and the necessity of African‑centric risk definitions, influencing later suggestions about procurement guardrails and collaborative frameworks.
Speaker: Ambassador Philip Tigo
Our public awareness survey showed that nearly 75 % of South Africans know very little about AI, learning mainly through informal channels. We need education, MOOCs, and scholarships to empower citizens to define what they want from AI.
Provides empirical evidence of low AI literacy and proposes concrete capacity‑building solutions, linking public awareness directly to the ability to shape AI governance.
Introduced the theme of education as a prerequisite for meaningful participation, which was echoed by other speakers emphasizing empowerment and agency.
Speaker: Mark Gaffley
What we all want is empowerment – AI must give agency within local contexts. Without language and cultural relevance, models cannot truly empower people.
Reframes the goal of AI from abstract safety to tangible empowerment, emphasizing the necessity of contextualized models.
Deepened the analysis of what ‘trusted AI’ looks like, leading to discussions about building African‑specific models and the importance of local data and expertise.
Speaker: Professor Jonathan Shock
We should view scientists, governments, and citizenry as three interdependent personas; capacity‑building for scientists, regulatory ability for governments, and inclusion of citizens are all essential for safe AI deployment.
Offers a structured framework for stakeholder engagement, highlighting the interconnectedness of capacity, regulation, and inclusion.
Guided the conversation toward collaborative mechanisms and the need for coordinated action across sectors, influencing later remarks on cooperation versus competition.
Speaker: Ambassador Philip Tigo
The African Compute Initiative will provide a shared high‑performance computing platform for researchers across the continent – not a competition with big tech, but a network effect that empowers local AI development.
Introduces a concrete collaborative infrastructure that can democratize access to compute resources, directly addressing earlier concerns about dependence on foreign providers.
Shifted the tone from problem‑focused to solution‑oriented, reinforcing the panel’s call for collective, non‑competitive approaches.
Speaker: Professor Jonathan Shock
Stop competing. AI is not about who builds the biggest data centre; it’s a collective all‑in effort. Competition wastes money and hampers progress.
A succinct, emphatic call to abandon competitive mindsets, reinforcing the earlier theme of cooperation.
Re‑energized the discussion on collaboration, prompting other panelists to cite existing cooperative initiatives (e.g., Masa Kani, GOAI Africa) and to stress the importance of shared resources.
Speaker: Ambassador Philip Tigo
Procurement documents should embed safety benchmarks and agile oversight; without them governments lose bargaining power against trillion‑dollar firms.
Provides a practical policy lever—embedding safety criteria in procurement—to address power asymmetries with large AI vendors.
Directed the conversation toward actionable governance tools, influencing later suggestions about creating negotiation playbooks and continuous, agile regulatory mechanisms.
Speaker: Ambassador Philip Tigo
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the panel from abstract definitions of safe AI to concrete, Africa‑specific challenges and solutions. Ambassador Tigo’s framing of digital neocolonialism and the need to redefine existential risk set a geopolitical lens that other speakers expanded upon with evidence of misinformation, data gaps, and low AI literacy. Dr. Okolo’s observation about missing incident data and Mark Gaffley’s survey on public awareness highlighted the foundational need for knowledge and capacity. Professor Shock’s focus on trust, agency, and the emerging threat of AI‑driven agents deepened the analysis of short‑term harms. Together, these insights redirected the conversation toward practical pathways: education, collaborative compute infrastructure, inclusive stakeholder frameworks, and procurement safeguards, with an emphasis on cooperation over competition. The cumulative effect was a shift from problem identification to a coordinated, actionable agenda for building safe, trusted, and locally relevant AI across Africa.

Follow-up Questions
How can Africa monitor and mitigate AI frontier model leaks given most development is outside the continent?
Need mechanisms to detect, track, and respond to AI model leaks that originate abroad, as current capacity to monitor such leaks is limited.
Speaker: Dr. Chinasa Okolo
What pathways exist for civil society advocacy and holding AI actors accountable in African countries?
The speaker highlighted uncertainty about effective advocacy routes in Africa, indicating a gap in mechanisms for civil society to influence AI policy and accountability.
Speaker: Dr. Chinasa Okolo
How can African scientists gain access to AI models for evaluation and safety testing?
Access to proprietary models is essential for local safety assessments; without it, African researchers cannot conduct meaningful evaluations.
Speaker: Ambassador Philip Tigo
What are effective ways to build African‑owned AI models that reflect local context and reduce dependence on foreign providers?
Developing home‑grown models would address issues of cultural relevance, data sovereignty, and avoid over‑reliance on external tech firms.
Speaker: Ambassador Philip Tigo
How can local languages and cultural context be incorporated into AI models to empower users?
Current models lack African language and cultural nuance, limiting their ability to provide genuine agency and empowerment.
Speaker: Professor Jonathan Shock
What research is needed on AI bias across African social categories such as caste, tribe, religion, and gender intersections?
Understanding bias beyond race—covering tribal, religious, gender, and other intersections—is crucial for fair AI systems in Africa.
Speaker: Dr. Chinasa Okolo
How can an African AI incident database be created and maintained to track harms and incidents?
Existing incident databases do not capture African‑specific AI harms; a dedicated database would support monitoring and policy making.
Speaker: Dr. Chinasa Okolo
What are the short‑term and long‑term AI risks specific to Africa, especially regarding misinformation, disinformation, and AI agents?
Identifying immediate threats (election‑related misinformation) and future existential risks is needed for targeted mitigation strategies.
Speaker: Professor Jonathan Shock
How should AI procurement processes incorporate safety benchmarks and agile oversight mechanisms?
Embedding safety criteria in procurement contracts offers a practical lever to enforce responsible AI adoption.
Speaker: Ambassador Philip Tigo
What negotiation playbooks or guidebooks are needed for African governments to engage effectively with large tech companies?
Governments lack bargaining power and structured guidance; dedicated playbooks would improve negotiation outcomes.
Speaker: Ambassador Philip Tigo
How can AI deployment in critical infrastructure be designed to avoid excluding digitally disconnected populations?
Ensuring analogue alternatives and inclusive design prevents widening the digital divide when AI is integrated into essential services.
Speaker: Mark Gaffley, Ambassador Philip Tigo
How can youth and broader citizenry be meaningfully involved in AI policy formulation?
Inclusive feedback mechanisms and open consultation processes are needed to incorporate the perspectives of younger generations and the public.
Speaker: Dr. Chinasa Okolo
Would mandatory watermarks for AI‑generated media be an effective tool to combat disinformation in Africa?
Explores a policy option to label AI content, though its efficacy against malicious actors remains uncertain.
Speaker: Audience (follow‑up addressed by Professor Jonathan Shock)
How can African governments balance AI adoption with basic development priorities such as electricity, education, and healthcare?
Avoids misallocation of scarce resources to AI solutions when fundamental infrastructure needs may yield greater impact.
Speaker: Dr. Chinasa Okolo
What are best practices for establishing AI safety institutes in Africa, similar to NIST in the United States?
Creating independent, standards‑based bodies would provide systematic testing and certification of AI systems.
Speaker: Dr. Chinasa Okolo
How can the African Compute Initiative be scaled and coordinated across institutions to support AI research?
A shared high‑performance computing platform can amplify research capacity, but requires coordinated governance and resource sharing.
Speaker: Professor Jonathan Shock
What human‑in‑the‑loop designs are most effective for AI systems deployed in critical infrastructure to maintain transparency and agency?
Ensuring that AI decisions remain overseen by humans helps preserve trust and prevents loss of agency in essential services.
Speaker: Professor Jonathan Shock

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.