Toward Collective Action: Roundtable on Safe & Trusted AI
20 Feb 2026 18:00h - 19:00h
Summary
The panel examined what “safe and trusted AI” means for Africa, current progress, and collaborative pathways [5-8].
Ambassador Tigo warned that AI which creates dependency, extracts data, and concentrates value abroad erodes agency and amounts to digital neocolonialism [33-36].
Professor Shock flagged short-term risks of misinformation and gendered disinformation during elections, which can erode public trust [48-55][60-64].
Dr Okolo pointed out the scarcity of Africa-specific AI incident data, citing unnoticed AI-graded exam problems in Nigeria and South Africa [72-76].
Gaffley said a survey showed three-quarters of South Africans know little about AI, leading GCG to launch courses, scholarships, and a free MOOC [141-151].
Shock emphasized that AI must reflect local languages and contexts to empower users and preserve agency [155-162].
The panel noted a lack of AI policies and talent, urging an all-in cooperative approach over competition [101-108][111-119].
Tigo identified scientists, governments, and citizens as key actors and urged capacity-building so each can evaluate, regulate, and safely use AI [173-191].
He suggested embedding safety benchmarks in procurement, creating agile oversight, and protecting data sovereignty through local alternatives and negotiation tools [241-254][255-257].
Okolo advocated for independent AI safety institutes to reduce reliance on foreign donors and tailor standards to African values [260-267].
Existing collaborations such as Masakhane, the Deep Learning Indaba, GOAI Africa, and the African Compute Initiative illustrate how shared resources boost research [215-218].
The panel concluded that trustworthy, locally relevant AI needs coordinated governance, capacity building, and inclusive policies to empower citizens and prevent neocolonial exploitation [215-218][343-351].
Keypoints
Major discussion points
– Risk of digital neocolonialism and loss of agency – The Ambassador warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and amount to “digital neocolonialism,” even posing an existential threat to the continent [33-36].
– Misinformation, disinformation and malicious AI agents – Professor Shock highlighted the current surge of election-related misinformation and targeted disinformation (often gender-based), noting that AI amplifies these attacks and that single malicious actors can now build autonomous agents to spread false content [48-55][60-64].
– Low public awareness and capacity gaps – Mark Gaffley’s survey showed that roughly 75% of South African respondents know very little about AI, learning mainly from informal channels, underscoring the need for widespread education, short courses, scholarships and a free MOOC to build the skills required to define and demand trustworthy AI [141-146][147-151].
– Call for pan-African collaboration and shared infrastructure – Multiple panelists stressed that competition must be replaced by cooperation: building regional compute resources (the African Compute Initiative), leveraging existing grassroots groups, and creating networks across academia, civil society, government and the private sector to empower local researchers [201-204][215-218].
– Policy, procurement safeguards and data sovereignty – The Ambassador and Dr. Okolo pointed out the absence of continent-wide AI policies, the need for safety benchmarks in procurement contracts, agile regulatory mechanisms, and strategies for data localisation to keep AI development under African control [101-108][110-118][241-254][259-266].
Overall purpose / goal
The session was convened to answer three interlinked questions: what “safe and trusted AI” means for Africa, what progress has already been made, and which collaborative pathways can advance AI governance, safety and capacity building on the continent [5-8].
Tone of the discussion
The conversation began with a formal, introductory tone. It quickly shifted to a concerned and urgent mood when panelists described risks such as neocolonial exploitation and misinformation [33-36][48-55]. As the dialogue progressed, the tone became constructive and hopeful, focusing on education, capacity-building initiatives, and collaborative infrastructure [141-151][201-204][215-218]. Toward the end, the tone turned pragmatic and policy-oriented, emphasizing concrete steps for procurement, regulation, and negotiation with global tech firms [101-108][241-254][259-266]. Throughout, the panel maintained a collaborative spirit, repeatedly urging collective action over competition.
Speakers
– Ambassador Philip Tigo – Special Technology Envoy of the Government of Kenya; serves as special envoy on technology for the President of Kenya and provides policy perspectives on AI safety and governance. [S1][S2]
– Michelle Malonza – Co-moderator of the session; affiliated with the Center of Global AI Governance (GCG) as a panelist and contributes to discussions on AI trust and capacity building. [S4]
– Speaker 2 – Moderator/chair of the panel; leads the Q&A, introduces questions, and guides the flow of the discussion. [S6][S7]
– Mark Gaffley – Director of Legal and Operations at the Center of Global AI Governance (GCG); speaks on public awareness, capacity building, and policy implications of AI in Africa. [S9]
– Dr. Chinasa Okolo – Founder of Technicultura; Policy AI Specialist at the United Nations Office for Digital and Emerging Technologies; provides insights on AI incident databases, advocacy, and African AI governance. [S10][S11][S12]
– Speaker 1 – Opening host/moderator who introduces the panel, outlines the agenda, and closes the session with logistical information. [S13][S15]
– Professor Jonathan Shock – Associate Professor in the Department of Mathematics and Applied Mathematics at the University of Cape Town; Director of the UCT AI Initiative; discusses risks, misinformation, and agency in AI systems. [S16]
– Audience – Members of the audience who ask questions during the Q&A segment.
Additional speakers:
– Zach – Mentioned as the person who will start the first set of questions; appears to act as a co-moderator or facilitator.
– Prashok – Referred to as a participant who would take a question from the audience.
– John – Briefly addressed by the moderator; no further context provided.
– Iman – Named at the very end as the next person to take over after the panel discussion.
The session began with the opening moderator introducing the African-led research team – Marie-Ira Ducunda, Gatoni, Michelle Malonza and the AI Safety South Africa initiative – and outlining the three interlinked questions that would guide the discussion: what “safe and trusted AI” means for the continent, what progress has already been made, and which collaborative pathways should be pursued [5-8][9-14][15-18]. She also gave brief housekeeping instructions, directing participants to the QR code and Slido for live questions and noting the event schedule.
The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and asked the panel to identify undesirable results in the African context. Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent [30-32][33-36].
Professor Jonathan Shock then highlighted the most pressing short-term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria. He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender-based campaigns) and noted that AI-enabled agents now allow single malicious actors to launch large-scale, automated attacks [48-55][60-64].
Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent-specific harms. She cited concrete but under-reported cases where AI-graded examinations in Nigeria and South Africa produced erroneous scores that students could not contest, illustrating how AI failures can go unnoticed without proper monitoring [72-76].
Addressing the capacity deficit, Mark Gaffley presented findings from a public-awareness survey embedded in the South African Social Attitudes Survey, which showed that nearly three-quarters of respondents knew very little about AI and relied on informal channels such as social media for information [141-145]. In response, the Center of Global AI Governance (GCG) has launched short courses on AI ethics and human-rights implications, offered scholarships for African women, and is preparing a free MOOC that uses relatable imagery to broaden access to AI knowledge [147-151].
Professor Shock expanded on the empowerment dimension, arguing that AI must enhance agency by being understandable and culturally relevant. He stressed that models lacking local language and contextual nuance cannot truly empower users, and advocated for human-in-the-loop designs that preserve decision-making authority [155-162][226-233].
Ambassador Tigo noted that, despite the emergence of several AI strategies on the continent, most countries still lack concrete AI policies and the technical talent to evaluate models, especially in the public sector, where AI is often equated simply with ChatGPT, reflecting low fluency [101-108][111-119]. He urged an “all-in” cooperative effort that transcends competition, arguing that fragmented attempts waste resources and undermine collective progress [201-207].
Building on this, Tigo identified three interdependent personas – scientists, governments and citizens – each requiring capacity building. Scientists need access to models for safety evaluation; governments must develop the expertise to hold multinational firms accountable; and citizens should be included in safe environments to prevent manipulation by malicious agents [176-191]. He linked these personas to the broader goal of developing indigenous models that reflect African cultures and data, thereby reducing reliance on external providers such as OpenAI, Anthropic or Gemini [186-191].
To translate these ideas into practice, Tigo advocated embedding safety benchmarks directly into procurement contracts, creating agile oversight mechanisms that can adapt to the rapid evolution of AI, and developing negotiation playbooks that give policymakers market insight and bargaining power against trillion-dollar companies [241-254][255-257]. Dr Okolo complemented this by recommending the establishment of independent AI safety institutes – akin to the US National Institute of Standards and Technology – that could certify models, test a range of products and operate without dependence on multilateral lenders or philanthropic donors [260-267].
The panel also highlighted existing collaborative infrastructure. Professor Shock cited grassroots initiatives such as Masakhane, the Deep Learning Indaba, GOAI Africa and Sisonke Biotik, and described the newly announced African Compute Initiative, which will provide a shared high-performance computing platform for researchers across the continent, exemplifying a network effect rather than competition with big-tech firms [215-218].
When discussing the deployment of AI in critical infrastructure, both Shock and Gaffley warned against a “move-fast-and-break-things” approach. They recommended human-in-the-loop, transparent systems and the preservation of analogue alternatives to avoid over-reliance on AI, especially where failures could jeopardise essential services [226-233][224-225]. Ambassador Tigo added that AI should be used to optimise development outcomes – for example, improving energy-grid efficiency – rather than being adopted for its own sake, thereby ensuring that technology serves concrete development goals [318-326].
The discussion moved to a Q&A segment. An audience member asked whether AI-generated media should be mandatorily water-marked; Professor Shock responded that watermarks are a short-term mitigation that can be circumvented by determined actors and should be complemented by broader provenance-tracking mechanisms [S1]. A second question addressed the digital divide; Mark Gaffley offered a philosophical view that the digitally excluded constitute a reservoir of creativity that should be preserved, while Ambassador Tigo stressed the need for basic connectivity, electricity and literacy before AI can be meaningfully applied, and suggested using AI to accelerate, not replace, foundational development projects such as hospitals and schools [S2]. A philosophical audience query about what socio-economic structure AI would choose was answered by Mark, who remarked that an AI-driven system would likely aim for “very efficient” outcomes and strict time-keeping [S3]. Finally, a question on how younger generations can influence AI policy prompted Dr Okolo to suggest open feedback periods, targeted research, legal analysis and the creation of informal channels where formal mechanisms are absent [S4].
In closing, the moderator thanked the panel members, invited participants to gather for a group photo, and reminded everyone of the post-event social gathering at Café Lota.
Part of the research team, I believe, is here with us today, including Marie-Ira Ducunda. We have Gatoni as well, and Michelle Malonza, who will also be moderating with us today. And then we’ve got AI Safety South Africa, where we’re working on building local capacity to work on AI safety alongside evaluations research. So together, our organizations represent a growing ecosystem of African-led efforts on AI governance, safety, and capacity building. As you all must know, today we are exploring three interlinked questions. What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent, and by whom? And what are the most promising pathways for collaboration going forward?
And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on my left, who is the founder of Technicultura and a policy AI specialist at the UN Office for Digital and Emerging Technologies. And then we have Ambassador Philip Tigo, who serves as a special envoy on technology for the President of the Republic of Kenya. And then we have Professor Jonathan Shock, who is an associate professor in the Department of Mathematics and Applied Maths at UCT and the director of the UCT AI Initiative. And finally we also have Mark Gaffley, who is the director of legal and operations at the Center of Global AI Governance. Hopefully we’ll also have Dr.
Kola Ideson joining us in the next few minutes, who is the research director at Research ICT Africa. And in the next 47 minutes or so, we’ll spend about 30 minutes on the panel, followed by about 15 minutes for panel discussion. And then we’ll conclude with some brief remarks to pull together the threads of what is discussed tonight. A few little housekeeping things before we start. On the slide behind me, if you have not registered on NUMA, we’d love to stay connected and be in touch. And AI Safety South Africa and ELENA have exciting programs that you’d want to know about. So please scan the QR code on the top left of the screen.
With that link, you can leave us your contact details and also give us feedback on the event. And on the top right, you’ll see the link to Slido, which is the platform that we’ll use for Q&A. You can just scan the code and you’ll be redirected to a platform where you can leave your questions, and also upvote the questions you think we should prioritize in the Q&A section. Okay, that’s all the points I had to share. So without further ado, let’s get into it. I’ll hand it over to you, Michelle. I believe Zach will be starting with the first couple of questions, then I’ll take over after him.
Okay, thank you. So I’ll be moderating part of the session, and my colleague Michelle will be taking part of the questions. Afterward, we’ll progress to the Q&A. So I will start with the foundation, Safe and Trusted AI, which we can consider broadly as AI that delivers the outcomes we want. So I want to start with you, Ambassador, please. In the context of Africa in particular, what AI-driven outcomes would we consider undesirable?
I think, and it’s quite interesting, I’ve been having this discussion of safety the whole day today. In the context of Africa, the first thing I want to be very careful about is that the African continent is not homogenous, right? So I’ll give a very specific Kenyan understanding of this, but I think it could potentially be something that is shared across the continent. The first part of this conversation is that if AI systems are creating a dependency rather than building capacity or capability, for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they are capturing our African markets, and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then, as I said, it’s digital neocolonialism. The second part, of course, is that if these continue to be built without our knowledge, wisdom, cultures, it creates an existential threat.
It’s almost a civilization extinction story. That, for me, is not just undesirable; it goes beyond that, it’s unacceptable. So those would be my two quick responses.
Okay, thank you. So, Prof Jonathan, I will move over to you. Of the possible outcomes and risks, including some of what Ambassador Tigo mentioned, what do you see as the trade-offs between short- and long-term risks? Which ones should we consider now, and which can we consider in the future?
Sure, thank you very much for the question. So I agree with Ambassador Tigo in terms of these ideas of neocolonialism, and the biases inherent in the models and the context. I think these things are all extremely important. But I think there’s something else which we have to be very aware of, which is happening right now. In fact, it happened before AI came along.
And AI is allowing this to happen at a scale where we already see disruptions, but I think there’s real risk of a complete breakdown in trust. And that’s misinformation and disinformation. We’re seeing already, around times of elections within Africa, within Ghana, within South Africa, within Nigeria, both misinformation and disinformation. And I disambiguate those: misinformation might be people spreading things that they just don’t know are incorrect, while disinformation is really targeted campaigns. And what we’re seeing is that those targeted campaigns are often gendered, often against female politicians; technology-facilitated gender-based violence is a massive issue against politicians, but more broadly too. But I think that for me, one of the real things…
is the breakdown in trust that we’re seeing in society. We’ve seen already with social media how echo chambers form. AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods and destabilize what’s happening. To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term. We can think about what might happen in the next few months, but thinking about the long term threats, people have talked about existential threats in terms of AI getting out of control. I think that’s something that’s extremely important to study, but I think that within particular contexts there are things that are real that are happening now that we have to worry about and try to mitigate.
I think that’s really important. The other thing that I think is happening at the moment that I don’t hear a lot of people within the space talk about, within the policy space maybe talk about, is the issue of agents. And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign. I think just over the last few months, we’ve seen that possibility come to light. And I think that’s a real worry and something that we need to understand. It’s not just now about the big tech firms. Of course, they have a major role to play in this. But I think now an individual actor can produce software that millions
Okay, thank you. So, Dr. Chinasa, I’ll move over to you. Given that current frontier AI development is driving some of the risks we are talking about, how can Africa monitor and mitigate those risks, given that most of the existing development happens outside of the African context?
Yeah, great question. And this reminds me, I actually talked to an Alita researcher last year when I was at the Paris AI Action Summit about some work that they were interested in doing, like an AI incident database. And I think this is actually very important, because when I look at current databases, and they’re really comprehensive for the most part, honestly, when I look up or type in “Africa,” for example, it reverts back to “African American.” And I’m based in the U.S., and that’s helpful for me to know, obviously, because I get coded as African American there, but finding this just basic information about AI harms on the continent is still very hard if you’re not tuned in.
I get stuff on Twitter that comes up all the time. There are a couple of cases where some African universities, particularly in Nigeria and also in South Africa, had issues with AI being used to automatically grade standardized exams, and students had issues trying to rebut some of the scores that they received. And that probably made news in those countries, but it did not make mainstream news generally. And so I think that this is a really important one, so that we understand how AI affects the African continent and communities on the continent, and also so that governments can respond accurately by crafting regulations that can serve the needs of communities and ensure that the responsible parties are held accountable for the harms that they’re causing to different communities.
Yeah, thank you. So just a follow-up on that. You mentioned holding responsible parties accountable. Is there anything in particular our stakeholders can do in that regard, any short-term or long-term efforts?
Yeah, it’s hard to say because, you know, as you can tell by my accent, I am American. I’m also Nigerian, and so I do understand a little bit of the intricacies of both countries. In the U.S. there are somewhat more formal ways for advocacy. You can actually write directly to your congressman. You can call their office. Most often you won’t get them directly, but you’ll get their staff members, and they often respond. People write them for basic issues like, oh, I can’t get my passport in time, please help expedite this. And, oh, there’s this issue happening at my school, please help with this. And so, honestly, I’m not very aware of similar pathways across African countries.
But I think that this civil society advocacy, particularly grouping together, you know, forming these coalitions, can have a lot of power. It’s just, again, there are a lot of incentives in place for governments to suppress this, and we’ve seen this turn into violence, particularly against youth. And so I am aware of this, and I don’t want to recommend things that get people harmed. But I think there are ways that, again, this coalition building can be successful.
I wanted to jump in on that, because you talk about policy. And let’s be real; that’s why, when Irina asked me to come to this through my colleague Stephanie, I thought it would be important, because this is a very Africa-centric discussion. I’ve been in all the global ones, I think five today. Let’s be very clear: while we have a couple of AI strategies on the continent, we do not necessarily have AI policies on the continent. So there’s already no mechanism to do this. And that’s AI in general; we’re not even talking specifically about safety. Secondly, we do not necessarily have the talent to do this on the continent.
I think that’s why what you guys are doing is important. And I say talent in the other spaces, not even in the public sector. When you go into the public sector, unfortunately my colleagues just think AI is ChatGPT. Let’s be honest. So there’s basically a fluency question. Safety is so far up the scale that they’re not even thinking about it. So for me, the sense that I have is that it needs to be an all-in effort. And this is where, in the African continent, that dichotomy between civil society and governments disappears. Because if it’s about existential risk to the continent, and when I say existential risk, it’s existential risk in terms of harms to society.
I’m not talking about… I mean, a few scientists like us can talk about models and harms from the models. The chances of an AI pressing a nuclear button in Africa? Come on. And that’s my point. So we have to redefine what existential risk for Africa on AI even means. And I think this is where we really have to break from that. We can have a few of our scientists doing the existential risk models, models running rogue, the science fiction. I think that’s important work. But the risk that he mentioned is real, right? Threats to democracy, threats to the harmony of society are real risks. And this is how you then begin to build guardrails from a point of understanding what is really relevant to the African continent.
Otherwise, we get lost in the other conversation: chances of happening, nil, but considered important. Whereas these are the real risks: chances of happening, high, but less prioritized. Good data, folks.
Okay. Thank you so much for that contribution. If you have a question, please use the QR codes and type your question. We’ll come back to that, but I’ll be moving over to my colleague, who will take over with the rest of the questions. Michelle.
Just to join the conversation that you’re already having: I think so far we’ve talked a lot about what we don’t want and the kinds of risks that Africa should be focusing on versus the rest of the world, and now I’d like us to talk about how we define what we want these systems to look like, and what trustworthy systems would look like. And so I’d like to start with Mark, talking about what his work at GCG has revealed so far about what Africans should want from these systems.
Cool. Thank you. Thank you for the question, Michelle, and obviously for the opportunity to speak this afternoon. I see the answer to this question as twofold. So first is the answer that addresses what we actually want to define from what we want from AI. As a high -level response, I would describe that as the desires of African citizens on the ground. especially our local communities and the marginalized and vulnerable amongst them who don’t necessarily have a voice or a seat at the decision -making table. The second response is the more likely scenario, in my view, that we remain subject to the whims, benevolent or otherwise, of those practitioners who are able to scale the most useful and not necessarily the most beneficial AI tools for our people.
Irrespective of whether those practitioners are based within national borders, across the broader continent or in foreign jurisdictions around the world. When I consider these responses in the context of GCG’s work, two things come to mind. The first is the results from a public awareness and perceptions of AI survey we released in September last year. The survey was a module in the annual South African Social Attitudes Survey, which is run nationally. The survey revealed that nearly 75% of respondents knew very little about AI. And for those who did know about AI, most of their learning was through informal and unstructured channels, including through social media and television. These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists.
This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want. On this, GCG’s other work I’d like to highlight are the various short courses we run on ethical and human rights implications of artificial intelligence through accredited universities in South Africa. These courses attract interest from all over the world, and for each iteration we’ve received applications in the thousands. As part of these offerings, we are also prioritizing awarding scholarships, scholarships for African women as part of our Women in Focus series. Why this work is important to the question is that the courses, even if incrementally, are slowly moving the needle on the figure I mentioned earlier, equipping participants with the skills to pass on knowledge to their peers about the many benefits and risks related to AI technologies.
Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently that will offer our course content freely to the public using relatable caricatures and imagery, which I hope will further drive this objective of equipping Africans to understand and make their own informed decisions about what AI technologies to allow into their lives and what outcomes they want those tools to achieve for them.
Thank you. I think that’s really interesting because it ties right into what the Ambassador was saying: that in order to know what you want as Africans, you have to know that the technology exists, what AI technologies exist, and what exact technology we are talking about when we say AI. So maybe I should let the rest of the panel… I don’t know if I… say what they think Africans want, and then we’ll go into
So I think, you know, I don’t want to speak to what an individual person wants, but I think that what we all want is empowerment. We all want agency. And so there is a possibility that we can think about AI as a way to give agency, and I spoke about agents before, and I mean agency in a slightly different sense: for people to understand the possibilities that they have, and to increase that range of possibilities so that people can make choices. And so knowing that there is something out there that can empower you is great, but it has to be able to empower you within a context. And, you know, we’ve spoken many times about the lack of local context within these models, the lack of contextual language information.
And until those things have been fixed, it’s not actually going to empower people. So to me, it has to be about making sure that the model understands local context, and then making sure that it’s actually giving people agency to make decisions. I think that’s really important.
Awesome. Yeah, so I’ll try to be a little bit nuanced about this because, again, I’m Nigerian-American. I grew up right in the middle of the United States. I have been fortunate to travel across the continent very frequently over the past couple of years, but going off of what Jonathan said, I would say that I do see really just an opportunity, one, to contribute to equitable governance structures and mechanisms, but also even just an opportunity to actually participate equitably in AI development more broadly. That’s what I see a lot of young Africans want, particularly, one, because the epidemic of underemployment is very stark on the continent, and then also just generally because these systems have the power to change the world and have changed the world already.
I think a lot of our conversations on AI safety can also provide new avenues for African researchers, scientists, and engineers to contribute new research that we’re still missing. Because particularly when we consider the U.S. context, or even these prominent AI safety or fairness conferences, a lot of the work on bias is rooted in race, for example, which, again, is a Western construct. And so if we understand how AI impacts people from different castes, tribes, religions, genders, and the intersection of all of these, I think this will, one, advance the field as a whole, but, again, also provide more opportunities for these governance structures that are needed within African contexts.
Sorry. No, I think a couple of things. And I take this from a persona approach because, again, I think Africa and the communities are a little bit different, and I’ll take the three important ones. One, I think, is basically our scientists, right? Our scientists, for me, need access, because you cannot talk about benchmarks and evaluations around safety if you don’t have access to these models, and we are the ones who bear the brunt of these models. I’ve given an example: Kenya is the biggest user of ChatGPT, and the top use of ChatGPT is emotional advice. That’s real data. So you’re asking a model for emotional advice that doesn’t understand your context; what does that mean? So I think there has to be a way that our scientists have access to these models, which means also capacity for them to be able to evaluate these models. The second persona is governments: a way that, working with scientists, governments can hold those companies to account for the potential adverse harms they can do to our society and community. That’s where I see them working hand in hand. Now, that’s what governments want, but what governments also need is capacity, because you’re talking to five-trillion-dollar companies and your GDP is like a hundred billion dollars.
So I think potentially we have to… this is where there has to be collaboration, because these companies understand market pressure, not necessarily regulatory pressure. So there has to be a nuanced approach to how you do that. The third part, of course, I think is the citizenry, right? The citizenry, in my sense, just needs to be included. And part of inclusivity is the safety work, right? You must be included in a safe environment so that you’re not left to the whims of agents or folks who can manipulate the crowd. So I look at those three personas. But I think the underlying infrastructure in this is basically looking at how we ensure that, as a collective on the continent, we can build our own models.
And I think that’s important, right? Because part of agency is human agency, but part of the challenge to agency is over-reliance on external models. The continent understands local context, understands culture, but that capability to be able to build our own models that are nuanced to our own context, I think, is a good option. Then you are not left to Gemini, Qwen, OpenAI, Anthropic, I can mention five of them. What choice do we have right now if we don’t have an alternative, potentially built from open source?
Thank you very much for all your responses. I really appreciate the point that capacity and access are how we are going to achieve agency and empowerment. I think that brings me to the next question, which all of you have touched on: what is going to make it possible for us to strengthen cooperation and engagement across the region in Africa? Because that’s a key part of making the access possible to begin with. I can see Ambassador has immediate thoughts, so I guess we can start with you, since you are very expressive, and then go to Dr. Chinasa and the rest of the panel.
Stop competing. I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers, who’s going to do X or Y. This is a collective all-in effort. I think, for me, that’s the biggest shift that we need to make: it’s not about competition, it’s about cooperation and collaboration. That’s what will make us work together. And I’m saying this out of frustration because I see it. It’s a waste of money. But also, it’s just a waste.
Alrighty. So, I know in the draft of this I mentioned I’ll talk about some of the stuff at the UN; I’m speaking in my personal capacity too. But, you know, we just recently launched the international scientific panel on AI. I read nearly every application for that, and I was very happy to see African representation on the panel: we have eight, I believe, and I was thinking we were going to get at least around four or five or so. So it’s really good to see that our voices are valued, and also, more broadly, that there are other efforts to complement the panel, including the Africa AI Council. I also look forward to seeing how this plays into the work that the UN is doing, and some of the other initiatives around the global AI dialogues, which play directly into the panel’s work as well. Not to say that this inclusion alone will actually lead to actual change; sometimes, honestly, it doesn’t. But I think the UN is a little bit special in some cases, where we’ve seen how the work that was done with the HLAB on AI really led to increased conversations and discourse on this idea of international AI cooperation.
And so I hope to see African governments do this kind of work individually. I had the chance to serve on the AU’s Continental AI Strategy. I did this work when I was a PhD student, like four years ago, and then also served as a drafting member on the Nigeria National AI Strategy. And so I did this all the way from the US, and I think that there are many opportunities for African countries, and also those throughout the global majority, to build their own initiatives for this AI cooperation.
Yeah, I’d like to follow up on, in particular, Ambassador Tigo’s point about the need to not be competing with each other. And I think that within Africa, there are already really, really good examples of people working together. You’ve got Masakhane, you’ve got the Deep Learning Indaba, you’ve got GOAI Africa, you’ve got Sisonke Biotik. All of these grassroots organizations are already, with limited resources, doing amazing work. You then add some resources to this, and you really superpower what people can do. At the University of Cape Town, the African Compute Initiative was announced today. The idea of this is that we happen to have a cluster, an HPC, a high-performance computing center, currently with a lot of capacity, that is to say a lot of space. We are setting up an African Compute Initiative which researchers around Africa are going to be able to use; we’re setting up a cloud platform; we’re bringing in GPUs, state-of-the-art compute, that’s going to allow people at other universities to do their research. This is not a competition. This is really about how one set of people empowers another set of people, because, you know, there is no competing with a trillion-dollar company, but actually what we have is a network effect, and that’s really, really powerful in and of itself. So we need to be working with academia, with civil society, with government, with the private sector. All of these groupings need to work together.
All right, so I’ll do the final question before we get into the Q&A. I think you’ve all touched upon how the engagement and the policy should work around the continent, moving from strategies to policy. So if Africa is able to come up with its own systems, or find a way to have leverage over the companies to localize the systems they’re going to deploy on the continent, what considerations should be made while deploying those specific systems into our critical infrastructure? Because that somehow seems like an inevitability. So what considerations should African governments be making when thinking about integrating AI into critical infrastructure?
I can start with Mark, since he’s the one who didn’t answer in the last round of questions; that’s the price you pay for staying silent.
…for the problem we’re trying to solve for. Sorry, John. So, yeah, just to ask if it is actually necessary. And the other thing, recognising access and inclusion issues, is just to keep the alternatives open. So if you are going to digitise something, or use AI tools to solve a particular problem, just make sure that those who can’t access them still have their analogue approaches to doing things. I did mention to someone earlier that I was the against-tech person in the room, so I think that’s why I’m pushing the analogue way.
Cool. So I think we just have to be very, very careful here of the Silicon Valley approach of move fast and break things. If you try to take some sort of infrastructure system, be it a government department, and try to AI-ify it, there are massive, massive risks there. That’s not to say that we shouldn’t be thinking about this, and doing it very carefully. But we have to understand, and again I go back to agency, the agency that we remove when we get an AI system to make decisions for us. I think there are really good ways to do this with a human in the loop, where we can have transparent systems so we can understand what the decision-making process is.
But if we’re simply going to a company that sells a product, that says we can streamline your service, then we’re really beholden to that company. And if it turns out that that’s not the right solution, trying to undo it when you’ve lost the skills leaves you in a really difficult position. So I think we need to move at a reasonable pace but not break too many things along the way. That’s a real risk.
Well, I think I probably have an advantage because I’m in government, so we face a lot of these things. Partly, it’s to understand the challenge. The challenge, remember, is that the African continent is very young, with a median age of 19.7, and already engaging with these tools, while government is engaging with 19th-century technology, so there’s a gap. And so there’s already sufficient pressure for governments to engage with these new tools. There’s really not much room to make the rational choice of not using these new technologies, because you have a population that is already using them. So then what does that leave you as options? For me, it means that you need to start creating some form of guardrails even before you acquire the tools.
So procurement is one tool. We can write a lot of these rules into the procurement documents, and I don’t think many of us are doing that. Include safety benchmarks in there; a lot of these guys don’t want to be audited, so just get that in there, because they want your business. And I have a sense that’s the sweet spot, the point of the procurement decision, when everybody wants to talk to you, and that’s where African countries lose the game. The second part, of course, is that because the technology changes very quickly, I have a sense that we need to continuously have these agile mechanisms that keep pushing the foundational questions, because this is not one technology. It’s not a laptop that you’re going to buy and use for three years.
It’s going to change in the next two, three months. So I think we need that. Third, I think, is contingency planning: this single-sourcing business should not work; we need options. And the fourth consideration, for me, is to always keep the local option open, because of data localization and sovereignty. It’s about sovereignty. Part of it is that we don’t do that, and that’s where we also start to make strategic decisions, separating the private sector into global big tech, local private sector companies, and small and medium enterprises. I think we need to do that deliberately, because then at least the local companies can be managed by domestic law.
With these other ones, you probably have to go to Silicon Valley to litigate. So for me, and it will keep evolving, these are the things I’m seeing right now as potential options. But it still all boils down to the capacity of the decision maker or the policy maker to be able to discern these insights. Where we lose is negotiations. And part of what my team continuously does, and maybe this is something you should consider, is to think about these playbooks, guidebooks, negotiation tools, so that when people are negotiating, at least they have knowledge as their power to engage. Because against the hundred-billion or five-trillion player, when you have knowledge and market insights, you’re actually in a better position to engage.
Negotiate.
Yeah, so I definitely agree with my co-panelists on a lot of the topics brought up. I would say, for the first one, particularly around the need for AI as an actual solution: governments really need to evaluate whether simple, non-AI or non-deep-learning-based solutions would actually suffice. And then also around the need for guidelines on procurement: I’ve been doing some work with the World Bank, and we’ve seen in our work that a lot of African governments, across the majority of regions, are really being bombarded by suppliers to buy solutions. A lot of them, I think, are honestly unnecessary, and a lot of governments don’t have the capacity to evaluate these and make decisions, let’s say, transparently in-house.
And I think the key part of actually building the capacity will be establishing AI safety institutes, or whatever name governments want to call them. Within the United States, this is embedded within the National Institute of Standards and Technology, and they test more than technology: it’s food, lotions, cosmetics, all that stuff, too. This may not look the same across Africa, across Southeast Asia, South Asia, et cetera, but it really needs to be done, again, just to have this independent capacity, and also not be reliant on multilateral lenders and foreign organizations, or even philanthropic organizations, that may be funding or providing solutions that may not be aligned with African needs and values, or may not even be necessary in the first place.
Thank you so much for your responses; they were very thoughtful. To figure out what we don’t want, we think about what specifically African countries consider risky, prioritizing the short term; and what we want rests on our capacity, our autonomy, to decide for ourselves and then localize in that context. And in terms of how to collaborate across the board, the sense I’m getting from the panel generally is that we need to move away from competition so that we can have leverage with the big companies.
So thank you so much for your detailed and thoughtful responses. I’ll hand it over to Zach to get us into the Q&A session.
Okay, thank you. So we’re going to take a few questions, and maybe I’ll also take one or two questions from the audience. One of the questions here is kind of broad, so maybe, Professor Shock, I’ll hand it over to you for 30 seconds: to improve inclusivity and trust, what should an ideal AI model optimize for?
Gosh, that’s a difficult question. I think part of it has to be about transparency: how is a decision being made? People talk about the black-box problem of AI systems. In fact, this isn’t quite the right way to look at these systems. You can look at exactly what’s happening inside the model, you can look at all the weights of the matrices, but it’s really difficult to tell what’s actually happening in there. So building transparent systems that are understandable, I think, is one way to build trust. Yeah, I think that’s a way to think about it.
Okay, thank you for that. There is one question here also about what are the most significant misconceptions about the current state of AI. Maybe Dr. Chinasa.
I’ll probably be redundant with some of the earlier topics we discussed on the panel. But again, it’s the idea that AI is a panacea, a band-aid, or a solution for a lot of things, particularly development challenges. We see African governments particularly doubling down on adopting and procuring these AI solutions when, honestly, building hospitals, paying teachers, and installing or sustaining reliable electrical grids would actually solve the problems much better, maybe not more easily, but better, and also with less opportunity for funds being diverted or wasted on a non-functional solution. So that’s one thing; I think my fellow panelists would probably have other good comments as well.
All right, is there any question from the audience? Maybe we can take one question. Okay, I will take one, but very brief.
First of all, thank you for being digitally inclusive for those of us who couldn’t use the QR code. My question is to Professor Shock. You talked about misinformation and disinformation; maybe I can work my way back a little bit. I think in some ways we need to start talking about disincentivizing some types of AI, and this is what I mean. Usually when we talk about disinformation, we think about it from the user’s perspective, right? But if you create a tool, for example, I don’t see why there’s this sort of massification of the use of AI tools for media creation. It’s not very necessary. There’s a running joke about someone saying, well, I was hoping AI would be created to do some of the hard work that I do at home, like laundry or housekeeping, so I have more time to actually do media and entertainment, but it’s the reverse, right?
So we’re having AI do all of this sort of stuff, and we’re not really making much progress on robotics and the like, relatively speaking, compared to LLMs. So my question is: should we have some sort of, say, mandatory watermark for AI-generated media? In that case, if I see some video or some songs or some pictures, I know it’s AI-generated, and in some ways I’m naturally not inclined to believe it. Is that a workable solution?
I think the cat is out of the bag. It’s great if some organizations do put watermarks on; indeed, within China, and within some of the other companies, they are beginning to do that. But because we now have open-source models, and the open-source models are getting very, very good, if a malicious actor wants to set up a disinformation campaign, they’re just going to choose the one that doesn’t have the watermarks. I can see that one could, for instance, have media where there are requirements to carry information about whether or not it came from an AI system, but when there are choices between watermarked and non-watermarked output, the malicious actor is just going to choose the one that subverts the system.
So I think that it may be a stopgap, but I think it’s a very short one.
Okay, thank you. In 20 seconds.
So this is to the panelists. I would say about 64% of the continent of Africa doesn’t have access to the internet and so is digitally excluded. So my question is: how do we make sure that our advancements with AI are not widening the digital divide? I think it’s a really big problem. As we’re moving forward with AI, there are people who don’t have access to the internet, electricity, and other things. So how do we ensure that we’re also thinking about those digitally excluded individuals? Thank you.
This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. It’s this idea of the digitally excluded as the last vestiges of creativity left on the planet. If you play it out over time, those who don’t have access avoid what I said about mental arrest and cognitive decline, and become the ones we eventually come to for creative ideas and independent decision-making abilities. So, in a way, just to flip it: perhaps not having access, being excluded, is potentially, way down the line, how you end up actually included and, in fact, relied upon, because you kept your cognitive abilities intact.
So yeah, a bit out there, but I thought I’d float it.
…in that particular instance, and this is where I think AI becomes interesting. Part of what I always speak about is the unfinished business that African governments need to do: connectivity, electricity, literacy, the kind of old infrastructure that we’ve not built. So for the African continent, this is where you start to use AI to optimise development; AI accelerates development. And if you look at what we’re doing in Kenya, at least, that is what we’re doing. For example, we’ve realized with artificial intelligence that a lot of our energy optimization was wrong, because we were going for last-mile electricity connectivity.
But now, with AI, we’re realizing with the World Bank that you could do this a little bit differently. All I’m saying is that we can leverage this technology on non-sensitive capabilities to actually accelerate development, so that it’s not AI for AI’s sake. So, African governments: don’t get AI for chat, right? Get AI for something else that drives development.
All right, thank you. We only have one minute for questions, so I will take the last two questions together, and our panelists will answer them briefly. So, one question here, one question there.
My question is a little philosophical one. We talked about how right now AI is in a war, as with many new technologies: each country and each company is trying to be capitalistic and one-up the other. Uniquely with AI, though, AI might just be the one that catches up with us; there’s a possibility, right? And there are so many economic structures out there, like socialism and capitalism, each of which focuses on optimizing certain things, like engagement on social media, for example. So if AI had to decide on a structure for humanity, I would just like your opinions on that.
Okay, thank you. We’ll take one question here.
Yeah, okay, thank you. So I’m going to consider two things, which are policy and our generation at large. I wanted to ask, considering the zeal that we have for knowing AI, is the next generation safer also? And considering what you’re saying about policy, that we need policy: should we go around, or just say we need policy, because we can catch AI where it actually is now in Africa, considering it hasn’t gone that broad yet, and just put policy around who is going to learn this and who is going to know this about AI?
Okay, thank you. So I think these two questions will be split across our panelists, so who wants to go first?
All righty. Yeah, I’ll take the policy one. I think that I’m very hopeful for African governments in particular when it comes to AI policy. There is, let’s say, a big learning curve, or actually an implementation curve, from the 20 or so strategies and two draft policy frameworks. And there is an opportunity for the younger generation to be involved. One way, obviously, is providing feedback on different strategies; a couple of countries have had open feedback periods. Most of them haven’t, unfortunately. But despite that, I think doing research and legal analysis and providing these findings openly can actually drive a lot of change. Again, if there happen to be formal mechanisms to provide this feedback, obviously take advantage of them.
If not, you know, create your own avenues or pathways to do so. And then I’ll let my panelists speak. Okay.
Mark, do you want to add something? All right. Prof? Okay.
Very briefly. Okay. Well, that would be my point: I think if AI were to structure humanity, we’d be very efficient and we’d keep to time.
All right. Thank you so much for your contribution. We’ll hand it over to Iman so that she can close us out.
Thank you so much. I’ll be super brief. Well, I’ll first start by thanking our incredible panel. Thanks a lot for your insights and energy and time. Thanks to you all for coming. It’s been a long few days, I imagine, being here at the conference. There are such great people to talk to and learn from. Before we wrap up, we’d love to take a picture with the panel, so I’ll invite you to just step forward here so that we can grab a picture together. And as they do that, for everyone: we have a social happening at 7:30 today at Cafe Lota. That is in a museum close by. You could just, like, Google it. And we’d love to see you there.
We’re going to be heading there at 7:30. Thanks, guys. Thank you.
Event“The moderator, Michelle Malonza, introduced the African‑led research team – Marie‑Ira Ducunda, Gatoni, Michel Malonza and the AI Safety South Africa initiative.”
The knowledge base lists the same research team members (Marie-Ira Ducunda, Gatoni, Michel Malonza) as part of the roundtable, confirming their involvement [S2].
“The moderator’s name is Michelle Malonza.”
The source identifies the co-moderator as Michel Malonza and notes this is the same person as Michelle Ma, suggesting the report’s spelling may be inaccurate [S2].
“Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent.”
A related warning about digital neocolonialism was made by Nicaragua at the UN, highlighting similar concerns about AI-driven dependence and concentration of value abroad [S20].
“Professor Jonathan Shock highlighted the most pressing short‑term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria.”
Multiple sources identify misinformation and disinformation as the biggest short-term risk to democratic trust, aligning with Shock’s assessment [S50] and the WEF Global Risks Report [S97].
“He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender‑based campaigns).”
The distinction between misinformation (unintentional) and disinformation (intentional) is explicitly discussed in the IGF report on trust online [S101].
“AI‑enabled agents now allow single malicious actors to launch large‑scale, automated attacks.”
AI’s ability to enable micro-targeted, large-scale disinformation campaigns is documented in a discussion on AI-driven manipulation [S99].
“Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent‑specific harms.”
While the knowledge base does not address the specific database issue, it confirms Dr Okolo’s participation and her focus on AI bias and data sovereignty in Africa [S11].
“The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and gave housekeeping instructions directing participants to the QR‑code and Slido for live questions.”
A similar event description notes the moderator repeatedly mentioning QR-codes and Slido polls for audience participation, confirming this procedural detail [S94].
The panel displayed strong convergence on four main themes: (1) the urgent need for capacity building and AI literacy; (2) the risk of digital neocolonialism and the imperative for African data sovereignty; (3) the preference for cooperative, coalition‑based approaches over competition; and (4) the necessity of embedding safety guardrails and transparent human‑in‑the‑loop designs in AI procurement and deployment, while preserving analogue alternatives.
High consensus – the repeated alignment across multiple speakers and arguments indicates a solid shared understanding of the priorities for safe and trusted AI in Africa, providing a robust foundation for coordinated policy action and regional collaboration.
The panel displayed broad consensus on the importance of AI safety, capacity building and inclusive governance, but diverged on how quickly AI should be deployed, whether AI is necessary for many development challenges, the primary focus of capacity building (public education vs technical model development), and the preferred strategy for strengthening governmental leverage with AI vendors.
Moderate to high – while participants share overarching goals (safe, trusted, inclusive AI), they hold contrasting views on implementation pathways, leading to potential delays or fragmented policies if not reconciled. These disagreements could affect the speed and effectiveness of regional AI collaboration, procurement standards, and the balance between AI adoption and addressing basic infrastructure needs.
The discussion was shaped by a series of pivotal remarks that moved the panel from abstract definitions of safe AI to concrete, Africa‑specific challenges and solutions. Ambassador Tigo’s framing of digital neocolonialism and the need to redefine existential risk set a geopolitical lens that other speakers expanded upon with evidence of misinformation, data gaps, and low AI literacy. Dr. Okolo’s observation about missing incident data and Mark Gaffley’s survey on public awareness highlighted the foundational need for knowledge and capacity. Professor Shock’s focus on trust, agency, and the emerging threat of AI‑driven agents deepened the analysis of short‑term harms. Together, these insights redirected the conversation toward practical pathways—education, collaborative compute infrastructure, inclusive stakeholder frameworks, and procurement safeguards—emphasizing cooperation over competition. The cumulative effect was a shift from problem‑identification to a coordinated, actionable agenda for building safe, trusted, and locally relevant AI across Africa.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.