Toward Collective Action: Roundtable on Safe & Trusted AI
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This discussion focused on defining safe and trusted AI for the African context, exploring what progress has been made on the continent, and identifying pathways for future collaboration. The panel included experts from various organizations working on AI governance, safety, and capacity building across Africa. Ambassador Philip Tigo emphasized that undesirable AI outcomes for Africa include systems that create dependency rather than building capacity, extract African data while concentrating value outside the continent, and perpetuate digital neocolonialism. Professor Jonathan Shock highlighted immediate risks like misinformation and disinformation campaigns, particularly around election periods, noting that AI enables malicious actors to conduct targeted campaigns at unprecedented scale.
Dr. Chinasa Okolo stressed the need for better documentation of AI harms occurring on the continent, citing examples of automated grading systems in African universities that caused problems for students. The panelists agreed that Africa needs to move beyond AI strategies to actual policies, with Ambassador Tigo noting that most African governments lack both comprehensive AI policies and the technical talent to implement them effectively. Mark Gaffley presented survey data showing that 75% of South Africans know very little about AI, highlighting the need for public education and awareness programs.
The discussion emphasized that Africans want empowerment and agency from AI systems, requiring models that understand local contexts, languages, and cultures. Panelists stressed the importance of collaboration over competition among African nations, with Professor Shock announcing the African Compute Initiative to share computational resources across universities. For deploying AI in critical infrastructure, the experts recommended careful procurement processes with safety benchmarks, maintaining non-AI alternatives, and building local capacity through AI safety institutes. The overall consensus was that Africa must develop its own AI capabilities while establishing guardrails to ensure these technologies serve African communities’ needs and values.
Key points
Major Discussion Points:
– Defining AI risks and undesirable outcomes for Africa: The panel discussed Africa-specific concerns including digital neocolonialism, data extraction without local benefit, dependency rather than capacity building, misinformation/disinformation campaigns (especially targeting elections and female politicians), and systems built without African knowledge, wisdom, and cultural context.
– What Africans want from AI systems: Panelists emphasized the need for empowerment and agency rather than dependency, equitable participation in AI development and governance, access to models for evaluation and research, and systems that understand local context, languages, and cultures. They stressed the importance of building local capacity and having alternatives to foreign-developed models.
– Strengthening African cooperation and capacity: Discussion centered on moving from competition to collaboration across African countries, the need for actual AI policies (not just strategies), building technical talent and government fluency in AI, creating African AI safety institutes, and leveraging initiatives like compute sharing and grassroots organizations already doing important work.
– Considerations for deploying AI in critical infrastructure: The panel addressed procurement guidelines that include safety benchmarks, maintaining human-in-the-loop systems for transparency, avoiding single-sourcing and maintaining alternatives (including analog options), the importance of local private sector partnerships, and ensuring governments have negotiation tools and capacity when dealing with big tech companies.
– Addressing the digital divide and inclusion: Concerns were raised about ensuring AI advancement doesn’t widen existing gaps, with 64% of Africans lacking internet access, and the need to use AI to optimize development in areas like electricity and connectivity rather than just adopting AI for its own sake.
Overall Purpose:
The discussion aimed to explore what safe and trusted AI means specifically for African contexts, assess current progress on the continent, and identify promising pathways for collaboration. The panel sought to move beyond Western-centric AI safety discussions to focus on Africa-specific risks, needs, and solutions.
Overall Tone:
The discussion maintained a serious but constructive tone throughout, with participants showing both urgency about current challenges and optimism about potential solutions. There was notable frustration expressed about the gap between global AI development and African participation, but this was balanced by practical suggestions and examples of positive initiatives already underway. The tone became more collaborative and solution-focused as the discussion progressed, particularly when addressing cooperation and capacity-building opportunities.
Speakers
Speakers from the provided list:
– Speaker 1: Event moderator/organizer, appears to be affiliated with AI Safety South Africa and involved in building local capacity for AI safety and evaluations research
– Speaker 2: Co-moderator of the session (identified as Zach in the transcript)
– Michelle Malonza: Co-moderator of the session, colleague of Speaker 2
– Ambassador Philip Tigo: Special envoy on technology for the President of the Republic of Kenya
– Professor Jonathan Shock: Associate professor in the Department of Mathematics and Applied Maths at UCT (University of Cape Town), Director of the UCT AI Initiative
– Dr. Chinasa Okolo: Founder of Technicultura, Policy AI specialist at the UN Office for Digital and Emerging Technologies
– Mark Gaffley: Director of legal and operations at the Center for Global AI Governance (GCG)
– Audience: Multiple audience members who asked questions during the Q&A session
Additional speakers:
– Marie-Ira Ducunda: Member of the research team (mentioned by Speaker 1 but did not speak in the transcript)
– Gatoni: Member of the research team (mentioned by Speaker 1 but did not speak in the transcript)
– Michel Malonza: Mentioned as co-moderator but appears to be the same person as Michelle Malonza
– Dr. Kola Ideson: Research director at Research ICT Africa (mentioned as expected to join but did not appear to speak in the transcript)
Full session report
This panel discussion brought together leading experts in AI governance, safety, and capacity building to explore what safe and trusted AI means specifically for African contexts, moving beyond Western-centric frameworks to address continent-specific challenges and opportunities. The conversation was structured around three key questions: What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent and by whom? And what are the most promising pathways for collaborations going forward?
Defining AI Risks and Undesirable Outcomes for Africa
The discussion began with Ambassador Philip Tigo’s powerful reframing of AI safety concerns through an African lens. Rather than focusing on speculative future risks, he identified three critical areas of immediate concern. First, AI systems that create dependency rather than building local capacity represent a form of “digital neocolonialism” that erodes human agency—particularly problematic for a continent still working to build its aspirational capacity. Second, systems that extract African data whilst concentrating value outside the continent, leaving African institutions as mere implementers or users, perpetuate exploitative economic relationships. Third, AI systems built without incorporating African knowledge, wisdom, and cultures pose what he termed an “existential threat” that goes beyond undesirable to “unacceptable.”
Professor Jonathan Shock expanded on these immediate risks by highlighting the breakdown of social trust through misinformation and disinformation campaigns. He provided concrete evidence of how AI-enabled disinformation is already disrupting African democracies, particularly during election periods, with campaigns often targeting female politicians through technology-facilitated gender-based violence. Crucially, he noted that individual malicious actors can now design their own agents to carry out disinformation campaigns at unprecedented scale, moving beyond the traditional focus on big tech companies to include distributed threats.
Dr. Chinasa Okolo contributed a critical observation about the invisibility of African AI harms in global discourse. She noted that current AI incident databases, whilst comprehensive for other regions, fail to capture African contexts adequately—searching for “Africa” redirects to “African American” content. This documentation gap means that African governments lack the evidence base needed to craft appropriate regulations and hold responsible parties accountable for harms affecting their communities. She also highlighted specific examples of AI systems causing harm in African universities, including grading systems that disadvantage students and procurement of AI solutions that fail to function as promised.
The panellists reached a crucial consensus on redefining “existential risk” for African contexts. Ambassador Tigo argued passionately that whilst some scientists should study traditional existential risks like rogue AI systems, the real existential threats to Africa are immediate: threats to democracy, social harmony, and human agency. This reframing proved influential throughout the discussion, with other panellists consistently returning to immediate, contextually relevant risks rather than speculative future scenarios.
What Africans Want from AI Systems
The conversation revealed a sophisticated understanding of African aspirations for AI that centres on empowerment and agency rather than mere access. Professor Jonathan Shock articulated this as wanting AI that increases people’s range of possibilities and enables informed decision-making within local contexts. However, he emphasised that current AI systems cannot provide this empowerment because they lack understanding of local contexts, languages, and cultural nuances.
Dr. Chinasa Okolo highlighted two key desires emerging from her engagement with young Africans across the continent. First, there is strong demand for equitable participation in AI governance structures and development processes, driven partly by widespread underemployment and recognition that AI has world-changing potential. Second, African researchers, scientists, and engineers want opportunities to contribute new research that advances the field globally, particularly around understanding how AI impacts people from “different castes, tribes, religions, gender, and the intersection of all of these”—moving beyond Western constructs of bias that focus primarily on race.
Ambassador Tigo provided a persona-based analysis of what different African stakeholders need from AI. Scientists require access to AI models for evaluation and safety research, particularly crucial since African countries are among the biggest users of systems like ChatGPT. He noted a concerning trend where citizens are using culturally blind systems for emotional advice, highlighting the mismatch between available AI tools and local needs. Governments need capacity to hold AI companies accountable for potential harms whilst building negotiation capabilities to engage effectively with trillion-dollar companies. A key challenge he identified is that many governments think “AI is ChatGPT,” revealing a fluency problem in the public sector that hampers effective governance.
Mark Gaffley’s presentation of survey data provided sobering context for these aspirations. His research revealed that 75% of South Africans know very little about AI, with most learning through informal channels like social media and television. This finding suggests that African populations may be “some way away from being able to define what they want from AI” because many citizens are unaware the technology exists or understand its implications.
Strengthening African Cooperation and Capacity Building
The discussion revealed significant frustration with competitive approaches to AI development across African countries. Ambassador Tigo’s passionate intervention—”Stop competing. I’m really, it’s, I’m sorry, sometimes I stop being an ambassador at some point”—became a turning point in the conversation. He argued that AI is fundamentally different from traditional ICT infrastructure development and requires a “collective all-in effort” rather than competition over who builds the best data centres.
Professor Jonathan Shock provided concrete examples of successful collaborative initiatives already underway, including Masakhane (focused on African language technologies), Deep Learning Indaba (machine learning capacity building), GOAI Africa (AI governance), and Sisonke Biotik (biotechnology applications). He announced the launch of the African Compute Initiative at the University of Cape Town, which will provide shared computational resources, cloud platforms, and state-of-the-art GPUs to researchers across African universities. This initiative exemplifies the network effects possible when institutions focus on empowering others rather than competing.
Dr. Chinasa Okolo highlighted international opportunities for African voices in global AI governance, noting strong African representation on the UN’s International Scientific Panel on AI that exceeded expectations. She emphasised how her work with the World Bank on continental and national AI strategies demonstrates the possibilities for diaspora engagement in African AI governance, particularly around government procurement issues where African governments are being “bombarded” by suppliers selling often-unnecessary AI solutions.
The panellists identified several key infrastructure needs for effective collaboration: moving from AI strategies (which exist across many African countries) to actual implementable policies, building technical talent and government fluency in AI, creating African AI safety institutes, and leveraging existing grassroots organisations. Mark Gaffley’s educational initiatives, including MOOCs with relatable African imagery and scholarship programmes prioritising African women, represent practical steps toward building the foundational knowledge needed for informed participation.
Considerations for AI Integration into Critical Infrastructure
The discussion of AI deployment in critical infrastructure revealed sophisticated understanding of the challenges facing African governments. Ambassador Tigo acknowledged that governments face immense pressure to adopt AI technologies because young populations are already using these tools extensively, leaving little room for rational choices about non-adoption.
The panellists identified several key principles for responsible AI integration. First, procurement processes should include safety benchmarks and audit requirements, taking advantage of companies’ desire for African markets to negotiate better terms. Ambassador Tigo suggested developing negotiation tools and procurement guidelines to help governments engage more effectively with major tech companies. Second, governments should avoid single-sourcing and maintain alternatives, including both local private sector options and analogue systems for those unable to access digital solutions. Third, continuous monitoring and agile mechanisms are essential because AI technology evolves rapidly, unlike traditional infrastructure purchases.
Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with government systems. He advocated for transparent, human-in-the-loop systems that maintain human agency in decision-making processes, noting the risks of losing institutional skills and becoming beholden to external companies. When discussing transparency, he acknowledged that whilst you can examine model weights, “it’s really difficult to tell what’s actually happening in there,” highlighting the practical limitations of technical transparency.
Dr. Chinasa Okolo emphasised the need for independent evaluation capacity through AI safety institutes, drawing parallels with the US National Institute of Standards and Technology whilst noting that African versions would need different designs aligned with local needs and values. She also stressed the importance of evaluating whether AI solutions are actually necessary, citing her World Bank research showing African governments being pressured to adopt solutions that often fail to deliver promised benefits.
Addressing Digital Exclusion and Development Priorities
A critical audience question highlighted that 64% of Africans lack internet access whilst AI development accelerates globally, raising concerns about AI advancement widening existing inequalities rather than promoting inclusion. This digital divide became a central theme in discussing development priorities.
Ambassador Tigo offered a strategic perspective on using AI to accelerate traditional development priorities rather than pursuing AI for its own sake. He provided examples from Kenya where AI is being used to optimise energy distribution and infrastructure development, arguing that African governments should “get AI for something else that drives development” rather than adopting AI for basic functions like chatbots.
Dr. Chinasa Okolo emphasised that simple, non-AI solutions often address development challenges more effectively than complex AI systems. She argued that building hospitals, paying teachers, and installing reliable electrical grids would solve many problems better than AI solutions, whilst also reducing opportunities for funds to be diverted or wasted on non-functional technologies.
Mark Gaffley provided a contrarian perspective, suggesting that maintaining analogue alternatives might preserve valuable human capabilities. He posed the philosophical question of whether the “digitally excluded” might retain cognitive abilities that become valuable as others experience dependency on AI systems, though he acknowledged this idea was “a bit out there.”
Pathways Forward and Ongoing Initiatives
The discussion identified several concrete action items and initiatives already underway. The African Compute Initiative represents a practical step toward shared computational resources, whilst educational programmes like Mark Gaffley’s MOOCs and scholarship initiatives address the foundational knowledge gap. The development of negotiation tools and procurement guidelines could help African governments engage more effectively with major tech companies.
Professor Shock highlighted the importance of existing collaborative networks and their potential for expansion. The success of initiatives like Masakhane and Deep Learning Indaba demonstrates that effective pan-African collaboration is already happening and can be scaled up for AI governance and safety work.
Dr. Chinasa Okolo’s work with international organisations like the World Bank and UN panels shows how African expertise can influence global AI governance whilst building capacity for local implementation. Her emphasis on moving from strategies to policies represents a crucial next step for many African countries.
However, significant challenges remain unresolved. The fundamental power imbalance between African governments and trillion-dollar tech companies requires innovative approaches that leverage market pressure rather than relying solely on regulatory mechanisms. The need for comprehensive AI incident databases that capture African contexts remains unfulfilled, limiting evidence-based policy development.
When discussing content provenance and watermarking AI-generated content, Professor Shock noted that “the cat is out of the bag” regarding detection technologies, suggesting that technical solutions alone cannot address misinformation challenges.
Conclusion
This discussion demonstrated the sophistication of African thinking about AI governance and safety, moving well beyond simplistic narratives of technological adoption or rejection. The panellists articulated a vision of AI development that prioritises African agency, contextual understanding, and collaborative approaches whilst remaining pragmatic about the pressures and opportunities facing the continent.
The conversation’s most significant contribution may be its reframing of AI safety from an African perspective, emphasising immediate threats to democracy and social cohesion over speculative future risks. This contextualised approach to AI governance offers valuable insights not only for African policymakers but for the global AI governance community seeking to understand how AI risks and benefits manifest differently across diverse contexts.
The emphasis on collaboration over competition, capacity building over dependency, and contextual understanding over universal solutions provides a framework for African AI development that could serve as a model for other regions seeking to assert agency in global AI governance. The concrete initiatives already underway—from the African Compute Initiative to collaborative research networks—demonstrate that this vision is moving from aspiration to implementation.
However, the significant challenges identified—from digital divides to power imbalances with tech companies—underscore the complexity of translating these principles into effective policies and practices. The path forward requires sustained effort across multiple fronts: building technical capacity, developing appropriate governance frameworks, fostering international collaboration, and ensuring that AI development serves African development priorities rather than becoming an end in itself.
Session transcript
The first share of the research team, I believe, is here with us today, including Marie-Ira Ducunda. We have Gatoni as well, and Michel Malonza, who will also be moderating with us today. And then we’ve got AI Safety South Africa, where we’re working on building local capacity to work on AI safety alongside evaluations research. So together, our organization represents a growing ecosystem in African-led efforts on AI governance, safety, and capacity building. As you all must know, today we are exploring three interlinked questions. What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent and by whom? And what are the most promising pathways for collaborations going forward?
And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on my left. who is the founder of Technicultura and a policy AI specialist at the UN Office for Digital and Emerging Technologies. And then we have Ambassador Philip Tigo that serves as a special envoy on technology for the President of the Republic of Kenya. And then we have Professor Jonathan Shock who is an associate professor in the Department of Mathematics and Applied Maths at UCT and the director of the UCT AI Initiative. And finally we also have Mark Gaffley who is the director of legal and operations at the Center of Global AI Governance. Hopefully we’ll also have Dr. Kola Ideson that will join us in the next few minutes, who is the research director at Research ICT Africa.
And in the next 47 minutes or so, we’ll spend about 30 minutes on the panel, followed by about 15 minutes for panel discussions. And then we’ll just conclude with some brief remarks to pull the threads together of what is discussed tonight. A few little housekeeping things before we start. So in the slide behind me, if you have not registered on NUMA, we’d love to stay connected and be in touch. And AI Safety South Africa and ELENA have exciting programs that you’d want to know about. So please scan this QR code on the top left of the screen. With that link, you can leave us your contact details and also give us feedback on the event.
And on the top right, you’ll see the link to Slido, which is the platform that we’ll use for Q&A. So you can just scan the code and then you’ll be redirected to a platform where you can leave your questions, and also upvote the questions that we should prioritize in the Q&A section. Okay, that’s all the points I had to share. So without further ado, let’s get into it. I’ll hand it over to you, Michelle. I believe Zach will be starting with the first couple of questions, then I’ll take over after him.
Okay, thank you. So I’ll be moderating part of the session, and my colleague, Michelle, will be taking part of the questions. Afterward, we’ll progress to the Q&A. So I will start with the foundation, Safe and Trusted AI, which we can consider broadly as AI that delivers the outcomes we want. So I want to start with you, Ambassador, please. In the context of Africa in particular, what AI-driven outcomes would we consider undesirable?
I think, and it’s quite interesting, I’ve been having this discussion of safety the whole day. In the context of Africa, the first thing I want to be very careful about is that the African continent is not homogenous, right? So I’ll give a very specific Kenyan understanding of this, but I think it could potentially be something that is shared across the continent. The first part of this conversation is that, largely, if AI systems are creating a dependency rather than building capacity or capability, for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they are capturing our African markets and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then for me, as I said, it’s digital neocolonialism. The second part, of course, is that if these continue to be built without our knowledge, wisdom, cultures, it creates an existential threat.
It’s almost a civilization extinction story. That, for me, is not just undesirable; it goes beyond that, it’s unacceptable. So those would be my two quick responses.
Okay, thank you. So, Prof Jonathan, I will move over to you. So of the possible outcomes and risks, and some of what the Ambassador mentioned, what do you see as the trade-off between short- and long-term risks? And which ones should we consider now, and which can we consider in the future?
Sure, thank you very much for the question. So I agree with Ambassador Tigo in terms of these ideas of neocolonialism, and the bias inherent in the models and the context. I think these things are all extremely important. But I think there’s something else which we have to be very aware of, which is happening right now. In fact, it happened before AI came along.
And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real risk of a complete breakdown in trust. And that’s misinformation and disinformation. We’re seeing already around times of elections within Africa, within Ghana, within South Africa, within Nigeria, that misinformation, but also disinformation, and I disambiguate those by misinformation being, it might be that people are spreading things that they just don’t know is correct, but disinformation is really targeted campaigns. And what we’re seeing is that those targeted campaigns are often gendered, that it’s often against female politicians, that technology -facilitated gender -based violence is a massive issue against politicians, but more broadly. But I think that for me, one of the real things… is the breakdown in trust that we’re seeing in society.
We’ve seen already with social media how echo chambers form. AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods and destabilize what’s happening. To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term. We can think about what might happen in the next few months, but thinking about the long term threats, people have talked about existential threats in terms of AI getting out of control. I think that’s something that’s extremely important to study, but I think that within particular contexts there are things that are real that are happening now that we have to worry about and try to mitigate.
I think that’s really important. The other thing that I think is happening at the moment that I don’t hear a lot of people within the space talk about, within the policy space maybe talk about, is the issue of agents. And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign. I think just over the last few months, we’ve seen that possibility come to light. And I think that’s a real worry and something that we need to understand. It’s not just now about the big tech firms. Of course, they have a major role to play in this. But I think now an individual actor can produce software that millions
Okay, thank you. So, Dr. Chinasa, I’ll move over to you. So, given that the current development of frontier AI is driving some of the risks we are talking about, how can Africa monitor and mitigate those risks, given that most of the existing development is happening outside of the context?
Yeah, great question. And this reminds me, I actually talked to an Alita researcher last year when I was at the Paris AI Action Summit about some work that they were interested in doing, like an AI incident database. And I think this is actually very important because when I look at current databases, and they’re really comprehensive for the most part, but honestly, when I look up or type in Africa, for example, it reverts back to African American. And I’m based in the U.S., and that’s helpful for me to know, obviously, because I get coded as African American there, but finding this just basic information about AI harms on the continent is still very hard if you’re not tuned in.
I get stuff on Twitter that comes up all the time. There are a couple cases with some African universities, particularly in Nigeria, and also in South Africa, that had issues with AI being used to automatically grade standardized exams, and students having issues with trying to rebut some of those scores that they received. And so that did not make mainstream news, maybe some news in those countries, but not generally. And so I think that this is a really important one, so we understand how AI impacts and affects the African continent, and also communities on the continent, and then also that governments can respond accurately to crafting regulations that can serve the needs of communities and also ensure that the responsible parties are held accountable for the harms that they’re causing to different communities.
Yeah, thank you. So just a follow-up on that. You mentioned holding responsible parties accountable. So is there anything in particular that our stakeholders can do in that regard, any short-term or long-term efforts?
Yeah, it’s hard to say because, as you can tell by my accent, I am American. I’m also Nigerian, so I understand a little of the intricacies of both countries. In the U.S. there are more formal avenues for advocacy: you can actually write directly to your congressman, you can call their office. Most often you won’t get them directly, but you’ll get their staff members, and they often respond. People write to them for basic issues like, oh, I can’t get my passport in time, please help expedite this, or, there’s this issue happening at my school, please help with this. Honestly, I’m not very aware of similar pathways across African countries.
But I think that civil society advocacy, particularly grouping together and forming these coalitions, can have a lot of power. It’s just that, again, there are a lot of incentives in place for governments to suppress this, and we’ve seen this turn into violence, particularly against youth. I am aware of this, and I don’t want to recommend something that gets people harmed. But I think there are ways that, again, this coalition building can be successful.
I wanted to jump in on that, because you talk about policy. Let’s be real, and that’s why, when Irina asked me to come to this through my colleague Stephanie, I thought it would be important, because this is a very Africa-centric discussion; I’ve been in all the global ones, I think five today. Let’s be very clear: while we have a couple of AI strategies on the continent, we do not necessarily have AI policies on the continent. So there’s already no mechanism to do this, and that’s for AI in general; we’re not even talking specifically about safety. Secondly, we do not necessarily have the talent on the continent to do this.
I think that’s why what you guys are doing is important. And I mean talent in the other spaces, not even in the public sector. When you go into the public sector, unfortunately my colleagues just think AI is ChatGPT, let’s be honest. So there’s basically a fluency question; safety is so far up the scale that they’re not even thinking about it. So my sense is that this needs to be an all-in effort, and this is where, on the African continent, that dichotomy between civil society and governments disappears. Because it’s about existential risk to the continent, and when I say existential risk, I mean existential risk in terms of harms to society.
A few scientists like us can talk about models and harms from the models, but the chances of an AI pressing a nuclear button in Africa? Come on. And that’s my point: we have to redefine what existential risk from AI means for Africa, and this is where we really have to break from that framing. We can have a few of our scientists working on the existential risk models, models going rogue, the science fiction; I think that’s important work. But the risks he mentioned are real, right? Threats to democracy and threats to the harmony of society are real risks. And this is how you then begin to build guardrails from a point of understanding of what is really relevant to the African continent.
Otherwise, we get lost in the other conversation: risks whose chances of happening are basically nil, but which are treated as important, while these risks, whose chances of happening are high, are less prioritized. Good data, folks.
Okay, thank you so much for that contribution. If you have a question, please use the QR codes to type it in. We’ll come back to those, but I’ll now hand over to my colleagues, who will take over the rest of the questions. Michelle.
Just to join the conversation you’re already having: so far we’ve talked a lot about what we don’t want and the kinds of risks Africa should be focusing on versus the rest of the world, and now I’d like us to talk about how we define what we want these systems to look like, and what trusted systems would look like. I’d like to start with Mark, talking about what his work at GCG has revealed so far about what Africans should think about wanting from these systems.
Cool. Thank you. Thank you for the question, Michelle, and obviously for the opportunity to speak this afternoon. I see the answer to this question as twofold. The first answer addresses how we actually define what we want from AI. As a high-level response, I would describe that as the desires of African citizens on the ground, especially our local communities and the marginalized and vulnerable amongst them who don’t necessarily have a voice or a seat at the decision-making table. The second response is the more likely scenario, in my view: that we remain subject to the whims, benevolent or otherwise, of those practitioners who are able to scale the most useful, and not necessarily the most beneficial, AI tools for our people.
Irrespective of whether those practitioners are based within national borders, across the broader continent, or in foreign jurisdictions around the world. When I consider these responses in the context of GCG’s work, two things come to mind. The first is the results from a public awareness and perceptions of AI survey we released in September last year. The survey was a module in the annual South African Social Attitudes Survey, which is nationally representative. It revealed that nearly 75% of respondents knew very little about AI, and for those who did know about AI, most of their learning was through informal and unstructured channels, including social media and television. These findings suggest that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists.
This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want. On this, the other GCG work I’d like to highlight is the various short courses we run on the ethical and human rights implications of artificial intelligence through accredited universities in South Africa. These courses attract interest from all over the world, and for each iteration we’ve received applications in the thousands. As part of these offerings, we are also prioritizing awarding scholarships for African women as part of our Women in Focus series. Why this work matters to the question is that the courses, even if incrementally, are slowly moving the needle on the figure I mentioned earlier, equipping participants with the skills to pass on knowledge to their peers about the many benefits and risks of AI technologies.
Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently that will offer our course content freely to the public using relatable caricatures and imagery, which I hope will further drive this objective of equipping Africans to understand and make their own informed decisions about what AI technologies to allow into their lives and what outcomes they want those tools to achieve for them.
Thank you. I think that’s really interesting, because it ties right into what the Ambassador was saying: in order to know what you want as Africans, you have to know that the technology exists, what AI technologies exist, and what exact technology we are talking about when we say AI. So maybe I should let the rest of the panel say what they think Africans want, and then we’ll go on.
So I think, you know, I don’t want to speak to what an individual person wants, but I think that what we all want is empowerment. We all want agency. And so there is a possibility that we can think about AI as a way to give agency, and I spoke about agents before, but I mean agency in a slightly different sense: for people to understand the possibilities that they have, and to increase that range of possibilities so that people can make choices. Knowing that there is something out there that can empower you is great, but it has to be able to empower you within a context. And we’ve spoken many times about the lack of local context within these models, the lack of contextual language information.
And until those things have been fixed, it’s not actually going to empower people. So to me, it has to be about making sure that the model understands local context, and then making sure that it’s actually giving people agency to make decisions. I think that’s really important.
Awesome. Yeah, so I’ll try to be a little bit nuanced about this because, again, I’m Nigerian-American. I grew up right in the middle of the United States, and I’ve been fortunate to travel across the continent very frequently over the past couple of years. Going off what Jonathan said, I see an opportunity, one, to contribute to equitable governance structures and mechanisms, but also an opportunity to actually participate equitably in AI development more broadly. That’s what I see a lot of young Africans wanting, particularly because, one, the epidemic of underemployment is very stark on the continent, and also because these systems have the power to change the world and have changed it already, so I think this is something important.
A lot of our conversations on AI safety can also open new avenues for African researchers, scientists, and engineers to contribute new research that we’re still missing. Particularly when we consider the U.S. context, or even the prominent AI safety and fairness conferences, a lot of the work on bias is rooted in race, for example, which is, again, a Western construct. If we understand how AI impacts people from different castes, tribes, religions, and genders, and the intersection of all of these, I think this will, one, advance the field as a whole, but, again, also provide more opportunities for the governance structures that are needed within African contexts.
Sorry. No, I think a couple of things. I take a persona approach here because, again, Africa and its communities are a little bit different, and I’ll take the three important personas. One is basically our scientists. Our scientists need access, because you cannot talk about safety benchmarks and evaluations if you don’t have access to these models, and we are the ones who bear the brunt of these models. I’ve given an example: Kenya is the biggest user of ChatGPT, and the top use of ChatGPT there is emotional advice. That’s real data. So you’re asking a model that doesn’t understand your context for emotional advice; what does that mean? So there has to be a way for our scientists to have access to these models, which also means capacity for them to evaluate these models. The second persona is governments: working with scientists, governments should be able to hold those companies to account for the potential adverse harms they can do to our society and community. That’s where I see them going hand in hand. That’s what governments want, but what governments need is capacity, because you’re talking to five-trillion-dollar companies and your GDP is like a hundred billion dollars.
So I think, potentially, this is where there has to be collaboration, because these companies understand market pressure, not necessarily regulatory pressure. So there has to be a nuanced approach to how you do that. The third persona, of course, is the citizenry. The citizenry, in my sense, just needs to be included, and part of inclusivity is the safety work: you must be included in a safe environment so that you’re not left to the whims of agents or folks who can manipulate the crowd. So I look at those three personas. But the underlying infrastructure in all this is looking at how we ensure that, as a collective on the continent, we can build our own models.
And I think that’s important, right? Because part of agency is human agency, but part of the challenge to agency is over-reliance on external models. The capability to build our own models, models that understand local context and culture and are nuanced to our own circumstances, is a good option. Then you are not left to Gemini, Qwen, OpenAI, Anthropic; I can mention five of them. What choice do we have right now if we don’t have an alternative, potentially built from open source?
Thank you very much for all your responses. I really appreciate the point that capacity and access are how we are going to achieve agency and empowerment. That brings me to the next question, which all of you have touched on: what is going to make it possible to strengthen cooperation and engagement across the African region? That’s a key part of making the access possible to begin with. I can see the Ambassador has immediate thoughts, so I guess we can start with you, since you are very expressive, and then go to Dr. Chinasa and the rest of the panel.
Stop competing. I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers, who’s going to do X or Y. This is a collective, all-in effort. For me, that’s the biggest shift we need to make: it’s not about competition, it’s about cooperation and collaboration. That’s what will make us work together. And I’m saying this out of frustration, because I see it, and it’s a waste of money. But also, it’s just a waste.
Alrighty. So, I know in the draft of this I mentioned I’d talk about some of the work at the UN; I’m speaking in my personal capacity too. We just recently launched the international scientific panel on AI. I read nearly every application for that, and I was very happy to see African representation on the panel; we have eight members, I believe, and I was thinking we would get around four or five at most. So it’s really good to see that our voices are valued, and also, more broadly, that there are other efforts to complement the panel, including the Africa AI Council. I also look forward to seeing how this plays into the work the UN is doing, and some of the other initiatives around the global AI dialogues, which play directly into the panel’s work as well. Not to say that this inclusion will automatically lead to actual change; sometimes it honestly doesn’t. But I think the UN is a little bit special, and in some cases we’ve seen how the work done with the HLAB on AI really led to increased conversations and discourse on this idea of international AI cooperation.
And so I hope to see African governments do this kind of work individually. I had the chance to serve on the AU’s Continental AI Strategy; I did that work when I was a PhD student, like four years ago, and I also served as a drafting member on the Nigeria National AI Strategy. I did all of this from the US, and I think there are many opportunities for African countries, and for those throughout the global majority, to build their own initiatives for this AI cooperation.
Yeah, I’d like to follow up on, in particular, Ambassador Tigo’s point about the need not to compete with each other. Within Africa, there are already really good examples of people working together: you’ve got Masakhane, you’ve got the Deep Learning Indaba, you’ve got GOAI Africa, you’ve got Sisonke Biotik, all of these grassroots organizations already doing amazing work with limited resources. You then add some resources to this and you really superpower what people can do. At the University of Cape Town, the African Compute Initiative was announced today. The idea is that we happen to have a cluster, an HPC, a high-performance computing centre, currently with a lot of capacity, that is to say, a lot of space. We are setting up an African Compute Initiative which researchers around Africa are going to be able to use. We’re setting up a cloud platform and bringing in GPUs, state-of-the-art compute, that will allow people at other universities to do their research. This is not a competition; this is really about how one set of people empowers another set of people. Because, you know, there is no competing with a trillion-dollar company, but what we do have is a network effect, and that’s really powerful in and of itself. So we need to be working with academia, with civil society, with government, with the private sector; all of these groupings need to work together.
Alright, so I’ll do the final question before we get into the Q&A. You’ve all touched upon how engagement and policy should work around the continent, moving from strategies to policies. So if Africa is able to come up with its own systems, or find a way to gain leverage over the companies to localize the systems that will be deployed on the continent, what considerations should be made when deploying those systems into our critical infrastructure? That somehow seems like an inevitability, so what considerations should African governments be making when thinking about integrating AI into critical infrastructure?
I can start with Mark, since he’s the one who didn’t answer in the last round of questions; that’s the price to pay for staying silent.
So the first consideration is to ask whether AI is actually necessary for the problem we’re trying to solve for. Sorry, John. And the other thing, recognising access and inclusion issues, is just to keep the alternatives open. If you are going to digitise something, or use AI tools to solve a particular problem, make sure that those who can’t access them still have their analogue ways of doing things. I did mention to someone earlier that I was the against-tech person in the room, so I think that’s why I’m pushing the analogue way.
Cool. So I think we just have to be very, very careful here of the Silicon Valley approach of move fast and break things. If you take some sort of infrastructure system, be it a government department, and try to AI-ify it, there are massive, massive risks there. That’s not to say we shouldn’t be thinking about this, but we should do it very carefully. We have to understand, and again I go back to agency, the agency we remove when we get an AI system to make the decisions for us. I think there are really good ways to do this with a human in the loop, where we can have transparent systems so we can understand what the decision-making process is.
But if we simply go to a company selling a product who say they can streamline your service, then we’re really beholden to that company. And if it turns out that’s not the right solution, trying to undo it once you’ve lost the skills leaves you in a really difficult position. So I think we need to move at a reasonable pace and not break too many things along the way. That’s a real risk.
Well, I think I probably have an advantage because I’m in government, so we face a lot of these things. Partly it’s about understanding the challenge. Remember that, especially on the African continent, the median age is 19.7, very young, and that population is already engaging with these tools, while government is engaging with 19th-century technology, so there’s a gap. There’s already sufficient pressure on governments to adopt these new tools, so there’s really not much room to make the rational choice of not using these new technologies when you have a population that is already using them. So what does that leave you as options? For me, it means you need to start creating some form of guardrails even before you acquire the tools.
So: procurement is one tool. We can write a lot of these rules into the procurement documents, and I don’t think many of us are doing that. Include safety benchmarks in them; include audit requirements, because a lot of these guys don’t want to be audited, so just get that in there while they still want your business. I have a sense that’s the sweet spot: the point of decision making, when everybody wants to talk to you, and that’s where African countries lose the game. The second part, of course, is that because the technology changes very quickly, we need ongoing, agile mechanisms that keep pushing the foundational questions, because this is not one technology; it’s not a laptop that you buy and use for three years.
It’s going to change in the next two or three months, so I think we potentially need that. Third, I think, is contingency planning: this single-sourcing business should not continue; we need options. And the fourth consideration for me is to always keep the local option open, because, I mean, data localization, sovereignty; it’s about sovereignty. Part of it is that we don’t do that. And that’s where we also need to make strategic decisions separating global big tech from local private sector companies and small and medium enterprises, and I think we need to do that deliberately, because then at least the local companies can be managed under domestic law.
With these other ones, you probably have to go to Silicon Valley to litigate. So for me, and it will keep evolving, these are the things I’m seeing right now as potential options. But it still all boils down to the capacity of the decision maker or policy maker to absorb these insights. Where we lose is negotiations. Part of what my team continuously does, and maybe this is something you should consider, is building these playbooks, guidebooks, and negotiation tools, so that when they are negotiating, at least they have some knowledge as their power to engage. I’m not talking about matching them, you know, a hundred billion against five trillion, but when you have knowledge and market insights, you’re actually in a better position to engage.
Yeah, so I definitely agree with my co-panelists on a lot of the topics brought up. On the first one, around whether AI is actually the solution: governments really need to evaluate whether simple, non-AI or non-deep-learning solutions would suffice. And then on the need for guidelines on procurement: I’ve been doing some work with the World Bank, and we’ve seen that a lot of African governments, across the majority of regions, are really being bombarded by suppliers pushing them to buy solutions. A lot of them, I think, are honestly unnecessary, and a lot of governments don’t have the capacity to evaluate these and make decisions, let’s say, transparently in-house.
And I think a key part of actually building that capacity will be establishing AI safety institutes, or whatever name governments want to give them. Within the United States, this is embedded within the National Institute of Standards and Technology, and they test more than technology: it’s food, lotions, cosmetics, all that stuff, too. This may not look the same across Africa, or Southeast Asia, or South Asia, et cetera, but it really needs to be done, again, just to have this independent capacity and not be reliant on multilateral lenders and foreign or even philanthropic organizations that may be funding or providing solutions that aren’t aligned with African needs and values, or may not even be necessary in the first place.
Thank you so much for your responses; they were very thoughtful. To figure out what we don’t want, we have to think about what African countries specifically consider risky, prioritizing the short term; and to figure out what we want, we have to build the capacity and autonomy to decide for ourselves and localize in our own context. And in terms of collaborating across the board, the sense I’m getting from the panel is that we need to choose cooperation over competition so that we can have leverage against the big companies.
So thank you so much for your detailed and thoughtful responses. I’ll hand it over to Zach to get us into the Q&A session.
Okay, thank you. So we’re going to take a few questions, and maybe also one or two questions from the audience. One of the questions here is kind of broad, so maybe, Professor Shock, I’ll hand it over to you for 30 seconds. It says: to improve inclusivity and trust, what should an ideal AI model optimize for?
Gosh, that’s a difficult question. I think part of it has to be transparency: how is a decision being made? People talk about the black box problem of AI systems, but in fact that isn’t quite the right way to look at these systems. You can look at exactly what’s happening inside the model, you can look at all the weights of the matrices, but it’s really difficult to tell what’s actually happening in there. So building transparent systems that are understandable, I think that’s one way to build trust.
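Professor Shock’s point, that every weight is visible yet the computation remains hard to interpret, can be illustrated with a toy network. This is a minimal sketch; the layer sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny two-layer network: every parameter is fully inspectable.
W1 = rng.normal(size=(8, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 2))   # hidden -> output weights

def forward(x):
    h = np.maximum(W1.T @ x, 0)  # ReLU hidden layer
    return W2.T @ h

x = rng.normal(size=8)
y = forward(x)
# We can print every one of the weights, "transparency" in one sense,
# yet no single weight explains *why* y came out as it did.
print(W1.size + W2.size)  # prints 40
```

Even at 40 parameters the decision is opaque without interpretability tools; frontier models have hundreds of billions, which is why weight access alone does not equal understandability.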
Okay, thank you for that. There’s also a question here about the most significant misconceptions about the current state of AI. Maybe Dr. Chinasa.
I’ll probably be redundant with some of the earlier topics we discussed on the panel, but again: that AI is a panacea, a band-aid, a solution for a lot of things, particularly development challenges. We see African governments doubling down on adopting and procuring these AI solutions when, honestly, building hospitals, paying teachers, and installing and sustaining reliable electrical grids would solve the problems better, maybe not more easily, but better, and with less opportunity for funds being diverted or wasted on a non-functional solution. So that’s one thing; I think my fellow panelists probably have other good comments as well.
All right, is there any question from the audience? Maybe we can take one question. Okay, I will take one, but very brief.
First of all, thank you for being digitally inclusive for those of us who couldn’t use the QR code. My question is for Professor Shock. You talked about misinformation and disinformation; maybe I can work my way back a little bit. I think in some ways we need to start talking about disincentivizing some types of AI, and this is what I mean. Usually when we talk about disinformation, we think about it from the user’s perspective, right? But consider the tools themselves: I don’t see why there’s such a massification of AI tools for media creation; it’s not very necessary. There’s a running joke where someone says, I was hoping AI would be created to do the hard work I do at home, like laundry or housekeeping, so I’d have more time for media and entertainment, but it’s the reverse.
So we’re having AI do all of this media creation, and we’re not really making as much progress on robotics and such, relatively speaking, compared to LLMs, right? So my question is: should we have some sort of mandatory watermark for AI-generated media? In that case, if I see some video or songs or pictures, I’d know it’s AI-generated and would naturally be less inclined to believe it. Is that a workable solution?
I think the cat is out of the bag. It’s great if some organizations do put watermarks on; indeed, in China and within some of the other companies they are beginning to do that. But because we now have open-source models, and the open-source models are getting very, very good, if a malicious actor wants to run a disinformation campaign, they’re just going to choose the one that doesn’t have the watermarks. One could, for instance, require media to carry information about whether or not it came from an AI system, but when there is a choice between watermarked and unwatermarked output, the malicious actor is going to choose the one that subverts the system.
So I think that it may be a stopgap, but I think it’s a very short one.
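The watermark idea, and why it only works when the generator cooperates, can be sketched with a toy "green-list" scheme. This is illustrative only: real LLM watermarks operate on tokens with statistical tests, and all names here are invented for the sketch:

```python
import hashlib
import random

# Toy "green-list" watermark over lowercase letters.
VOCAB = [chr(c) for c in range(97, 123)]  # a-z

def green_list(prev_char, key="secret"):
    # Deterministically mark half the vocabulary "green" given context.
    h = hashlib.sha256((key + prev_char).encode()).digest()
    rnd = random.Random(h)
    return set(rnd.sample(VOCAB, len(VOCAB) // 2))

def generate(n, key="secret"):
    # A cooperating generator only ever emits green characters.
    out = ["a"]
    for _ in range(n):
        out.append(sorted(green_list(out[-1], key))[0])
    return "".join(out)

def green_fraction(text, key="secret"):
    # A detector holding the key counts green transitions.
    hits = sum(text[i + 1] in green_list(text[i], key)
               for i in range(len(text) - 1))
    return hits / max(len(text) - 1, 1)

wm = generate(50)
print(green_fraction(wm))  # prints 1.0: every transition is green
print(green_fraction("thequickbrownfox"))  # unwatermarked text: ~0.5
```

The detector only sees the signal when the generator plays along, which is exactly the objection raised: a malicious actor using an unwatermarked open-source model simply never produces it.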
Okay, thank you. In 20 seconds.
So this is for the panelists. About 64% of the continent of Africa doesn’t have access to the internet and is therefore digitally excluded. My question is: how do we make sure that our advancements with AI are not widening the digital divide? I think it’s a really big problem. As we move forward with AI, there are people who don’t have access to the internet, electricity, and other things. So how do we ensure that we’re also thinking about those digitally excluded individuals? Thank you.
This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. It’s this idea of the digitally excluded as the last vestiges of creativity left on the planet. If you play it out over time, those who don’t have access avoid the cognitive decline I mentioned, and become the ones we eventually come to for creative ideas and independent decision-making abilities. So, just to flip it: perhaps not having access, being excluded, is, way down the line, actually a way of being included, and in fact relied upon, because you kept your cognitive abilities intact.
So yeah, it’s a bit out there, but I thought I’d float it.
In that particular instance, this is where I think AI becomes interesting. Part of what I always speak about is the unfinished business that African governments need to do: connectivity, electricity, literacy, the kind of old infrastructure we’ve not built. For the African continent, this is where you start to use AI to optimise development; AI accelerates development. If you look at what we’re doing in Kenya, at least, that is what we’re doing. For example, we’ve realized with artificial intelligence that a lot of our energy optimization was wrong, because we were going for last-mile electricity connectivity.
But now with AI we’re realizing in the World Bank that you could do this a little bit differently. All I’m saying is that we can leverage this technology on those non -sensitive capabilities to actually accelerate development so that again it’s not AI for AI. So for African governments don’t get AI for chat, right? Get AI for something else that drives development.
Alright, thank you. We only have one minute left, so I will take the last two questions together and our panelists will answer them briefly. One question here, one question there.
My question is a little philosophical. We talked about how AI right now is in a kind of war, as with many new technologies: each country and each company is being capitalistic and trying to one-up the others. Uniquely, though, AI might be the one technology that catches up with itself; there’s a possibility, right? There are so many economic structures out there, like socialism and capitalism, each of which uniquely optimizes for certain things, like engagement on social media, for example. So if AI had to decide on a structure for humanity, what would that ideally look like? I would just like your opinions on that.
Okay, thank you. We’ll take one question here.
Yeah, okay, thank you. I’m going to consider two things: policy, and our generation at large. So I wanted to ask, considering the zeal that we have for learning AI, is the next generation safe as well? And on policy: rather than just saying we need policy, could we catch AI where it actually is now in Africa, considering it hasn’t spread that far yet, and put policy in place around who is going to learn and know AI?
Okay, thank you. So I think these two questions will be split across our panelists, so who wants to go first?
All righty. Yeah, I’ll take the policy one. I think I’m very hopeful for African governments in particular when it comes to AI policy. There is, let’s say, a big learning curve, or actually an implementation curve, from the 20 or so strategies and two draft policy frameworks. And there is an opportunity for the younger generation to be involved. One way, obviously, is providing feedback on different strategies; a couple of countries have had open feedback periods. Most of them haven’t, unfortunately. But despite that, I think doing research and legal analysis and sharing those findings openly can actually drive a lot of change.
Again, if there happen to be formal mechanisms to provide this feedback, obviously take advantage of them. If not, create your own avenues or pathways to do so. And with that, I’ll let my fellow panelists speak. Okay.
Mark, do you want to add something? All right. Prof? Okay.
Very briefly, okay. Well, that would be my point: I think if AI were to structure humanity, we’d be very efficient and we’d keep to time.
All right. Thank you so much for your contribution. We’ll hand it over to Iman so that she can close us out.
Thank you so much. I’ll be super brief. First, thanks to our incredible panel for your insights, energy, and time, and thanks to all of you for coming. It’s been a long few days, I imagine, being here at the conference, with such great people to talk to and learn from. Before we wrap up, we’d love to take a picture with the panel, so I’ll invite the panelists to step forward here. And as they do that, for everyone: we have a social happening at 7:30 today at Cafe Lota, which is in a museum close by. You can just Google it. We’d love to see you there.
We’re going to be heading there at 7:30. Thanks, guys. Thank you.
Speaker 1
Speech speed
103 words per minute
Speech length
627 words
Speech time
362 seconds
Session framing – introduction of panel and questions
Explanation
The moderator opened the session by stating that three interlinked questions would guide the discussion and introduced the panel members who would address them.
Evidence
“As you all must know, today we are exploring three interlinked questions” [1]. “And to explore those questions, we’ve got an amazing panel that I’m honored to introduce” [3].
Major discussion point
Session framing and moderation
Topics
Artificial intelligence | The enabling environment for digital development
Speaker 2
Speech speed
149 words per minute
Speech length
527 words
Speech time
210 seconds
Session framing – foundational questions and Q&A flow
Explanation
The co‑moderator set the stage by defining safe and trusted AI, asking what outcomes would be undesirable for Africa, and outlining the structure for the upcoming Q&A.
Evidence
“So I will start with the foundation, Safe and Trusted AI, which is like we can consider broadly as kind of AI that delivers the outcome we want” [5]. “In the context of Africa in particular, what AI‑driven outcome will we consider undesirable?” [16]. “Afterward, then we’ll progress to the Q&A” [17].
Major discussion point
Session framing and moderation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Ambassador Philip Tigo
Speech speed
175 words per minute
Speech length
1976 words
Speech time
674 seconds
Digital neocolonialism – AI dependency and data extraction
Explanation
The ambassador warned that AI systems that create dependency and harvest African data while concentrating value abroad amount to digital neocolonialism.
Evidence
“if AI systems are creating a dependency rather than building capacity or capability I think for me that’s undesirable because the erosion of human agency … if AI systems are extractors of African data … then I think for me … it’s digital neocolonialism” [28].
Major discussion point
Defining safe and trusted AI for Africa / undesirable outcomes
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Existential threat from AI lacking African knowledge
Explanation
He argued that AI built without African knowledge, wisdom, culture, and values poses an existential risk to societies on the continent.
Evidence
“if these continue to be built without our knowledge, wisdom, cultures, it creates an existential threat” [28].
Major discussion point
Defining safe and trusted AI for Africa / undesirable outcomes
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Policy pathways – need for procurement guardrails and safety benchmarks
Explanation
The ambassador suggested developing playbooks, negotiation tools and embedding safety benchmarks in procurement contracts to give governments leverage over multinational AI firms.
Evidence
“think about these playbooks, guidebooks, negotiation tools, so that when they are negotiating, at least they have some sense of knowledge as their power to engage” [73]. “So you have procurement is one tool” [74]. “Include safety benchmarks in that. A lot of these guys don’t want to be audited, so just get that in there, because they want your business” [102].
Major discussion point
Accountability, policy mechanisms, and civil‑society advocacy
Topics
Artificial intelligence | The enabling environment for digital development
All‑in cooperative approach – reject intra‑continental competition
Explanation
He emphasized that Africa should pursue a collective, cooperative effort rather than competing for AI resources among countries.
Evidence
“This is a collective all‑in effort” [90]. “That it’s not about competition” [89].
Major discussion point
Collaboration versus competition across Africa
Topics
Artificial intelligence | The enabling environment for digital development
Infrastructure first – internet, electricity, literacy before AI
Explanation
He stressed that without widespread internet, electricity, and literacy, AI could widen exclusion, and that AI should be used to accelerate basic infrastructure development.
Evidence
“it’s about connectivity, it’s about electricity, it’s about literacy, it’s about the kind of old infrastructures that we’ve not done” [123]. “because you cannot talk about benchmarks and evaluations around safety if you don’t have access to these models, because we are the ones who bear the brunt of these models” [105] (in the context of lacking infrastructure).
Major discussion point
Digital divide and inclusive AI development
Topics
Closing all digital divides | Artificial intelligence
Sovereignty – data localisation and local options in AI sourcing
Explanation
He advocated for contingency planning that keeps local sourcing and data localisation options open to preserve national sovereignty.
Evidence
“Third, I think, is just this contingency planning: this single-sourcing business should not work; we need options. And for me the fourth consideration is: always have the local option open, because, I mean, data localization, sovereignty” [101].
Major discussion point
Deployment of AI in critical infrastructure and procurement considerations
Topics
Artificial intelligence | The enabling environment for digital development
Professor Jonathan Shock
Speech speed
184 words per minute
Speech length
1458 words
Speech time
473 seconds
Short‑term risk – gender‑targeted misinformation and disinformation
Explanation
He highlighted that AI‑enabled misinformation campaigns often target women politicians, eroding public trust and causing societal disruption.
Evidence
“And what we’re seeing is that those targeted campaigns are often gendered, that it’s often against female politicians, that technology‑facilitated gender‑based violence is a massive issue” [42]. “And that’s misinformation and disinformation” [43]. “And AI is allowing this to happen at a scale … risk of a complete breakdown in trust” [44].
Major discussion point
Short‑term vs long‑term AI risks in Africa
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Long‑term risk – autonomous malicious agents launching large‑scale campaigns
Explanation
He warned that a single malicious actor can now design autonomous agents to conduct misinformation or disinformation campaigns at scale, beyond the influence of traditional big‑tech actors.
Evidence
“AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods” [46]. “And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign” [47].
Major discussion point
Short‑term vs long‑term AI risks in Africa
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Empowerment through local language and context
Explanation
He argued that AI models must understand local languages and contexts to give people agency rather than removing it.
Evidence
“So to me, it has to be about making sure that the model… understands local context, and then making sure that it’s actually giving people agency to make decisions” [82]. “But we have to understand, again, I go back to agency, the agency that we remove when we get an AI system to make the decisions for us” [83]. “And so knowing that there is something out there that can give you, empower you, is great, but it has to be able to empower you within a context” [84].
Major discussion point
Capacity building, awareness, and empowerment
Topics
Capacity development | Artificial intelligence
Grassroots collaboration – African Compute Initiative and networks
Explanation
He described existing grassroots networks (Masakhane, Deep Learning Indaba, GOAI Africa) and the newly announced African Compute Initiative, which provides shared high-performance computing resources across the continent.
Evidence
“the African Compute Initiative was announced today … we are building a cloud platform … bringing in GPUs … this is not a competition … it’s about how one set of people empower another set of people” [97]. “You’ve got Masakhane, you’ve got the Deep Learning Indaba, you’ve got GOAI Africa, you’ve got Sisonke Biotik” [98]. “I think that within Africa, there are already really, really good examples of people working together” [99].
Major discussion point
Collaboration versus competition across Africa
Topics
Artificial intelligence | The enabling environment for digital development
Watermark limitations – open‑source models bypass safeguards
Explanation
He noted that while watermarks can signal AI‑generated media, open‑source models allow malicious actors to avoid them, limiting their long‑term effectiveness.
Evidence
“I think the cat is out of the bag … because we now have open source models … if a malicious actor wants to set out a disinformation campaign they’re just going to choose the one that doesn’t have the watermarks” [114].
Major discussion point
Trust, transparency, and watermarking of AI‑generated media
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Dr. Chinasa Okolo
Speech speed
178 words per minute
Speech length
1638 words
Speech time
551 seconds
Data gaps – African cases missing from incident databases
Explanation
She pointed out that current AI incident databases rarely return African results, underscoring the need for an Africa‑specific incident tracking system.
Evidence
“when I look up or type in Africa, for example, it reverts back to African American” [65]. “the peers at AI… Action Summit about some work that they were interested in doing, like an AI incident database” [62].
Major discussion point
Monitoring AI harms and data gaps on the continent
Topics
Monitoring and measurement | Artificial intelligence
Policy vacuum – lack of formal AI policy pathways
Explanation
She highlighted that many African states lack formal AI policy pathways and that coalition‑based civil‑society advocacy can fill this gap.
Evidence
“But if we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent” [68]. “I think that this civil society advocacy, particularly grouping together, you know, forming these coalitions can have a lot of power” [61].
Major discussion point
Accountability, policy mechanisms, and civil‑society advocacy
Topics
The enabling environment for digital development | Artificial intelligence
AI as a panacea – prioritize basic development needs
Explanation
She critiqued the notion that AI alone can solve development challenges, arguing that investments in electricity, healthcare, and education often yield higher impact.
Evidence
“I think we see African governments particularly doubling down on adopting and procuring these AI solutions when honestly, building hospitals, paying teachers, installing and sustaining reliable electrical grids would solve the problems much easier and better” [67].
Major discussion point
Misconceptions about AI as a universal solution
Topics
Social and economic development | Artificial intelligence
Inclusive governance – equitable participation in AI development
Explanation
She emphasized that broader participation in AI governance structures can lead to more equitable outcomes across the continent.
Evidence
“I do see really just an opportunity, one, to contribute to equitable governance structures and mechanisms, but also even just an opportunity to actually participate equitably in AI development more broadly” [95].
Major discussion point
Collaboration versus competition across Africa
Topics
Artificial intelligence | The enabling environment for digital development
Mark Gaffley
Speech speed
166 words per minute
Speech length
788 words
Speech time
284 seconds
Low AI awareness – need for education and MOOCs
Explanation
He cited a survey showing that about 75 % of respondents have limited AI knowledge and promoted MOOCs and short courses to improve awareness and agency.
Evidence
“The survey revealed that nearly 75% of respondents knew very little about AI” [76]. “Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently” [77]. “On this, GCG’s other work I’d like to highlight are the various short courses we run on ethical and human rights implications of artificial intelligence” [78].
Major discussion point
Capacity building, awareness, and empowerment
Topics
Capacity development | Artificial intelligence
Analogue alternatives – avoid over‑reliance on AI
Explanation
He urged that when digitising or deploying AI tools, organisations should retain analogue approaches to ensure inclusion for those without access.
Evidence
“So if you are going to digitise something or, you know, use AI tools to solve for a particular problem, just make sure that those who can’t access them still have their kind of analogue approaches to doing things” [106]. “And the other thing, just, you know, recognising access and inclusion issues is just to keep the alternatives open” [107]. “I did mention to someone earlier I was the against tech person in the room, so I think that’s why I’m pushing the analogue way” [108].
Major discussion point
Deployment of AI in critical infrastructure and procurement considerations
Topics
Closing all digital divides | Artificial intelligence
Michelle Malonza
Speech speed
218 words per minute
Speech length
600 words
Speech time
164 seconds
Collaboration over competition – leverage against big companies
Explanation
She echoed the panel’s view that African actors should collaborate rather than compete, to build collective leverage against large AI firms.
Evidence
“I think that’s really interesting because it ties right to what Ambassador was saying, that in order to know what you want as Africans, you have to know that the technology exists…” [41]. “And then in terms of thinking about how to collaborate across the board, the sense that I’m getting across the panel generally is that we need to think against competition so that we can be able to have leverage against the big companies” [93].
Major discussion point
Collaboration versus competition across Africa
Topics
Artificial intelligence | The enabling environment for digital development
Audience
Speech speed
170 words per minute
Speech length
574 words
Speech time
202 seconds
Digital divide – concern for excluded populations
Explanation
Audience members asked how AI advancements could avoid widening the digital divide, noting lack of internet, electricity, and other basic infrastructure for many people.
Evidence
“So my question is how do we make sure that our advancements with AI are not widening the digital divide?” [94]. “As we’re moving forward with AI, there are people who don’t have access to the internet, electricity, and other things” [112].
Major discussion point
Digital divide and inclusive AI development
Topics
Closing all digital divides | Artificial intelligence
Agreements
Agreement points
AI systems should empower people and provide agency rather than create dependency
Speakers
– Ambassador Philip Tigo
– Professor Jonathan Shock
Arguments
AI systems creating dependency rather than building capacity represents digital neocolonialism
AI should provide empowerment and agency within local contexts and languages
Summary
Both speakers agree that AI should enhance human agency and empowerment rather than creating dependency. Ambassador Tigo frames dependency as digital neocolonialism, while Professor Shock emphasizes that AI should give people agency to understand possibilities and make choices within their local contexts.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Local context, knowledge, and cultural understanding are essential for AI systems
Speakers
– Ambassador Philip Tigo
– Professor Jonathan Shock
Arguments
AI built without African knowledge, wisdom, and cultures creates existential threats
AI should provide empowerment and agency within local contexts and languages
Summary
Both speakers emphasize the critical importance of incorporating local context, knowledge, and cultural understanding into AI systems. They argue that without this local grounding, AI systems cannot effectively serve African communities and may even pose threats.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Collaboration and cooperation among African countries is essential, not competition
Speakers
– Ambassador Philip Tigo
– Professor Jonathan Shock
Arguments
Competition between African countries on AI is counterproductive – cooperation is essential
Existing grassroots organizations like Masakhane and Deep Learning Indaba provide strong foundations
Summary
Both speakers strongly advocate for collaborative approaches among African countries and organizations rather than competitive ones. Ambassador Tigo explicitly calls for an end to wasteful competition, while Professor Shock highlights existing successful collaborative initiatives that should be built upon.
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Simple, non-AI solutions may be more appropriate than complex AI systems for many problems
Speakers
– Dr. Chinasa Okolo
– Mark Gaffley
Arguments
Simple non-AI solutions may be more appropriate than complex AI systems for many problems
The necessity of AI solutions should be evaluated before implementation
Summary
Both speakers advocate for careful evaluation of whether AI is actually necessary before implementation. Dr. Okolo argues that basic infrastructure like hospitals and electrical grids often solve problems better than AI, while Mark Gaffley emphasizes asking whether AI is actually necessary for the problem at hand.
Topics
Artificial intelligence | Social and economic development | Information and communication technologies for development
Capacity building and education are fundamental prerequisites for meaningful AI participation
Speakers
– Dr. Chinasa Okolo
– Mark Gaffley
– Michelle Malonza
Arguments
Africans want opportunities to contribute equitably to AI governance structures and development
Educational programs and MOOCs can equip Africans to make informed decisions about AI
Africans need to understand AI technology before they can define what they want from AI systems
Summary
All three speakers agree that education and capacity building are essential for meaningful participation in AI governance and development. They emphasize that people need to understand AI technology before they can make informed decisions about what they want from these systems.
Topics
Artificial intelligence | Capacity development | Closing all digital divides
Similar viewpoints
Both speakers emphasize the need for African institutions to have independent capacity to evaluate AI systems. Ambassador Tigo focuses on scientists needing access to models for evaluation, while Dr. Okolo advocates for establishing dedicated AI safety institutes to provide this capacity.
Speakers
– Ambassador Philip Tigo
– Dr. Chinasa Okolo
Arguments
African scientists need access to AI models to evaluate systems that impact their communities
AI safety institutes should be established to provide independent evaluation capacity
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both speakers recognize the importance of documenting and addressing AI-related harms, particularly those affecting African contexts. Professor Shock focuses on immediate risks from misinformation campaigns, while Dr. Okolo emphasizes the need for better documentation of AI incidents affecting Africa.
Speakers
– Professor Jonathan Shock
– Dr. Chinasa Okolo
Arguments
Misinformation and disinformation campaigns targeting elections and female politicians pose immediate risks
Current AI incident databases lack comprehensive coverage of African contexts and harms
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Monitoring and measurement
Both speakers advocate for institutional mechanisms to ensure responsible AI procurement and deployment by governments. They emphasize the need for governments to have capacity and frameworks to evaluate AI systems before adoption.
Speakers
– Ambassador Philip Tigo
– Dr. Chinasa Okolo
Arguments
Governments should establish guardrails and safety benchmarks in procurement processes
AI safety institutes should be established to provide independent evaluation capacity
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Unexpected consensus
The immediate priority of addressing basic infrastructure needs over advanced AI deployment
Speakers
– Ambassador Philip Tigo
– Dr. Chinasa Okolo
– Mark Gaffley
Arguments
AI should be used to optimize development and accelerate infrastructure projects
Simple non-AI solutions may be more appropriate than complex AI systems for many problems
The necessity of AI solutions should be evaluated before implementation
Explanation
Despite being at an AI safety conference, there was unexpected consensus that basic infrastructure needs (electricity, hospitals, education) should often take priority over AI deployment. This pragmatic approach suggests a mature understanding that AI should serve development goals rather than be pursued for its own sake.
Topics
Social and economic development | Information and communication technologies for development | Artificial intelligence
The potential value of digital exclusion in preserving human cognitive abilities
Speakers
– Mark Gaffley
– Professor Jonathan Shock
Arguments
The digitally excluded may retain cognitive abilities that become valuable as AI advances
Transparent, human-in-the-loop systems are preferable to black box solutions
Explanation
There was unexpected philosophical consensus that maintaining human agency and cognitive abilities is valuable, even if it means being less digitally integrated. This represents a counter-narrative to typical digital inclusion discussions, suggesting that some forms of exclusion might preserve important human capacities.
Topics
Closing all digital divides | Human rights and the ethical dimensions of the information society | Artificial intelligence
Overall assessment
Summary
The speakers demonstrated strong consensus on several key principles: the need for African agency and empowerment in AI development, the importance of collaboration over competition among African countries, the necessity of local context and cultural understanding in AI systems, and the priority of capacity building and education. There was also agreement on the need for institutional mechanisms to evaluate AI systems and the importance of addressing basic development needs.
Consensus level
High level of consensus with significant implications for African AI governance. The agreement suggests a mature, pragmatic approach that prioritizes African agency, collaborative development, and careful evaluation of AI necessity. This consensus provides a strong foundation for coordinated African approaches to AI governance and suggests that African stakeholders are aligned on fundamental principles, even if they may differ on specific implementation strategies.
Differences
Different viewpoints
Effectiveness of watermarking AI-generated content
Speakers
– Professor Jonathan Shock
– Audience
Arguments
Professor Shock argues that watermarking is ineffective because malicious actors will simply choose open source models without watermarks, making it a very short stopgap solution
Audience member suggests mandatory watermarks on AI-generated media could help people identify AI-generated content and be less inclined to believe it
Summary
Professor Shock believes watermarking is futile due to availability of non-watermarked alternatives, while audience member sees it as a viable solution for combating disinformation
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Prioritization of existential AI risks vs immediate practical risks
Speakers
– Ambassador Philip Tigo
– Professor Jonathan Shock
Arguments
Ambassador Tigo argues that for Africa, existential risk should be redefined away from science fiction scenarios like AI pressing nuclear buttons, focusing instead on real threats to democracy and social harmony
Professor Shock acknowledges existential threats are important to study but emphasizes immediate threats like misinformation campaigns and breakdown of trust in society
Summary
Both recognize different types of risks but disagree on which should receive priority attention – Ambassador Tigo dismisses traditional existential risk scenarios as irrelevant to Africa, while Professor Shock maintains they are important to study alongside immediate threats
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Value of digital exclusion
Speakers
– Mark Gaffley
– Dr. Chinasa Okolo
– Ambassador Philip Tigo
Arguments
Mark Gaffley presents digital exclusion as potentially valuable, suggesting the digitally excluded may retain cognitive abilities and creativity that become relied upon as others experience mental arrest from AI dependence
Dr. Chinasa Okolo and Ambassador Philip Tigo focus on using AI to address digital divides and accelerate development rather than viewing exclusion as beneficial
Summary
Mark Gaffley offers a contrarian philosophical view that digital exclusion might preserve human capabilities, while other panelists see digital inclusion and AI adoption as necessary for development
Topics
Closing all digital divides | Artificial intelligence | Human rights and the ethical dimensions of the information society
Unexpected differences
Role of competition vs cooperation among African countries
Speakers
– Ambassador Philip Tigo
Arguments
Ambassador Tigo strongly argues against competition between African countries on AI, expressing frustration at wasteful competition and emphasizing need for collective effort
Explanation
This was unexpected as no other panelist explicitly advocated for competition, yet Ambassador Tigo’s passionate response suggests this is a significant ongoing issue in African AI development that others may not have directly addressed
Topics
Artificial intelligence | The enabling environment for digital development
Necessity of AI adoption vs maintaining alternatives
Speakers
– Mark Gaffley
– Ambassador Philip Tigo
Arguments
Mark Gaffley advocates for questioning whether AI is actually necessary and maintaining analog alternatives for those who cannot access digital systems
Ambassador Philip Tigo argues that governments face pressure to adopt AI because young populations are already using these tools, making rational choices about non-adoption difficult
Explanation
This disagreement was unexpected as it reveals a fundamental tension between cautious, inclusive approaches versus pragmatic responses to technological inevitability that wasn’t explicitly debated but emerged through their different perspectives
Topics
Artificial intelligence | Closing all digital divides | The enabling environment for digital development
Overall assessment
Summary
The panel showed remarkable consensus on major goals (African agency, capacity building, avoiding digital colonialism) but revealed subtle yet significant disagreements on implementation approaches, risk prioritization, and the pace of AI adoption
Disagreement level
Low to moderate disagreement level with high strategic implications – while speakers largely agreed on desired outcomes, their different approaches to achieving these goals could lead to conflicting policy recommendations and resource allocation decisions across African countries
Partial agreements
All agree on the need for African oversight and evaluation of AI systems, but disagree on mechanisms – Ambassador Tigo focuses on access and government accountability, Professor Shock on transparency and human involvement, Dr. Okolo on institutional capacity building
Speakers
– Ambassador Philip Tigo
– Professor Jonathan Shock
– Dr. Chinasa Okolo
Arguments
Ambassador Tigo emphasizes need for African scientists to have access to AI models for evaluation and governments to hold companies accountable
Professor Shock advocates for transparent, human-in-the-loop systems rather than black box solutions
Dr. Chinasa Okolo argues for establishing AI safety institutes to provide independent evaluation capacity
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both agree that development priorities should guide technology adoption, but disagree on approach – Dr. Okolo favors basic infrastructure over AI solutions, while Ambassador Tigo sees AI as a tool to accelerate traditional development
Speakers
– Dr. Chinasa Okolo
– Ambassador Philip Tigo
Arguments
Dr. Okolo argues that simple non-AI solutions like building hospitals and paying teachers may be more appropriate than complex AI systems for many development problems
Ambassador Tigo argues that AI should be used to optimize development and accelerate infrastructure projects, leveraging technology for non-sensitive capabilities
Topics
Artificial intelligence | Social and economic development | Information and communication technologies for development
Similar viewpoints
Both speakers emphasize the need for African institutions to have independent capacity to evaluate AI systems. Ambassador Tigo focuses on scientists needing access to models for evaluation, while Dr. Okolo advocates for establishing dedicated AI safety institutes to provide this capacity.
Speakers
– Ambassador Philip Tigo
– Dr. Chinasa Okolo
Arguments
African scientists need access to AI models to evaluate systems that impact their communities
AI safety institutes should be established to provide independent evaluation capacity
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both speakers recognize the importance of documenting and addressing AI-related harms, particularly those affecting African contexts. Professor Shock focuses on immediate risks from misinformation campaigns, while Dr. Okolo emphasizes the need for better documentation of AI incidents affecting Africa.
Speakers
– Professor Jonathan Shock
– Dr. Chinasa Okolo
Arguments
Misinformation and disinformation campaigns targeting elections and female politicians pose immediate risks
Current AI incident databases lack comprehensive coverage of African contexts and harms
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Monitoring and measurement
Both speakers advocate for institutional mechanisms to ensure responsible AI procurement and deployment by governments. They emphasize the need for governments to have capacity and frameworks to evaluate AI systems before adoption.
Speakers
– Ambassador Philip Tigo
– Dr. Chinasa Okolo
Arguments
Governments should establish guardrails and safety benchmarks in procurement processes
AI safety institutes should be established to provide independent evaluation capacity
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Takeaways
Key takeaways
Safe and trusted AI for Africa must prioritize building local capacity over creating dependency, avoiding digital neocolonialism where value is extracted while leaving African institutions as mere users
Immediate AI risks for Africa include misinformation/disinformation campaigns targeting elections and vulnerable groups, rather than distant existential risks from rogue AI systems
African participation in AI development requires access to models for evaluation, capacity building for scientists and policymakers, and development of local AI systems that understand African contexts and languages
Cooperation rather than competition between African countries is essential for effective AI governance, leveraging existing grassroots organizations and shared computational resources
AI integration into critical infrastructure requires careful evaluation of necessity, transparent procurement processes with safety benchmarks, and maintaining human-in-the-loop decision making
Educational initiatives are crucial as 75% of South Africans know little about AI, with most learning through informal channels like social media
AI should be used strategically to accelerate development priorities like connectivity, electricity, and literacy rather than as an end in itself
Resolutions and action items
Launch of the African Compute Initiative at University of Cape Town to provide shared computational resources across African universities
Release of an online MOOC by the Center for Global AI Governance offering free AI education content with relatable African imagery and caricatures
Continued scholarship programs prioritizing African women through the Women in Focus series
Development of playbooks and negotiation tools to help African policymakers engage with large tech companies
Establishment of AI safety institutes or similar independent evaluation bodies within African governments
Integration of safety benchmarks and audit requirements into government AI procurement processes
Unresolved issues
How to effectively hold trillion-dollar tech companies accountable when African countries have much smaller GDPs and limited regulatory leverage
Addressing the digital divide where 64% of Africans lack internet access while advancing AI development
Balancing the pressure from young populations already using AI tools with the need for careful, regulated implementation in government systems
Creating comprehensive AI incident databases that adequately capture African contexts and harms
Developing formal advocacy pathways for civil society to influence AI policy across diverse African political systems
Moving from AI strategies (which exist) to actual implementable AI policies (which are largely missing)
Ensuring watermarking and other technical solutions remain effective against malicious actors using open-source models
Suggested compromises
Using market pressure rather than just regulatory pressure to influence tech companies, leveraging Africa’s significant user base (e.g., Kenya being the biggest ChatGPT user)
Maintaining analog alternatives alongside AI systems to ensure inclusion of those who cannot access digital solutions
Focusing AI safety efforts on immediate, contextually relevant risks rather than distant existential threats while still supporting some research on long-term risks
Combining civil society and government efforts rather than maintaining traditional separation, given the existential nature of AI risks to the continent
Prioritizing local private sector partnerships over global big tech to maintain domestic legal jurisdiction and control
Using AI strategically to optimize existing development challenges (energy, infrastructure) rather than implementing AI for its own sake
Thought provoking comments
I think the first part of this conversation is largely that if AI systems are creating a dependency rather than building capacity or capability, I think for me that's undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, capturing our African markets, and there's a concentration of value outside the continent while leaving our institutions as mere implementers or users, then I think for me, as I said, it's digital neocolonialism
Speaker
Ambassador Philip Tigo
Reason
This comment reframes AI safety from a uniquely African perspective, moving beyond technical risks to focus on economic sovereignty and human agency. It introduces the powerful concept of ‘digital neocolonialism’ and positions dependency vs. capacity-building as a central tension.
Impact
This comment established the foundational framework for the entire discussion, with subsequent speakers consistently returning to themes of agency, empowerment, and African-specific risks. It shifted the conversation away from Western-centric AI safety concerns toward contextually relevant issues.
Stop competing. I’m really, it’s, I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers. You know, who’s going to do X or Y. This is a collective all-in effort. I think for me, that’s the biggest shift that we need to make. That it’s not about competition. It’s about cooperation and collaboration.
Speaker
Ambassador Philip Tigo
Reason
This passionate interjection cuts through diplomatic language to address a fundamental strategic error. The raw emotion ('stop being an ambassador') and the clear distinction between AI and traditional ICT infrastructure reveal deep frustration with current approaches and offer a paradigm shift toward collaboration.
Impact
This comment created a turning point in the discussion about regional cooperation. It prompted other panelists to provide concrete examples of collaborative initiatives and reinforced the theme that African countries must work together rather than compete for scraps from global tech companies.
These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists. This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want.
Speaker
Mark Gaffley
Reason
This comment introduces a fundamental prerequisite that challenges the entire premise of the discussion – how can people define what they want from AI if they don’t know it exists? It grounds the theoretical discussion in empirical data (75% of respondents knew very little about AI) and identifies a critical gap in the pathway to agency.
Impact
This shifted the conversation toward the practical foundations needed before higher-level policy discussions can be meaningful. It influenced subsequent discussions about capacity building and highlighted the importance of public education as a prerequisite for democratic participation in AI governance.
But if we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent. So there's already no mechanism to do this. And that's AI in general. We're not even talking specifically about safety… So we have to even redefine what existential risk for Africa on AI means. And I think this is where we really have to break from that. And we can have a few of our scientists doing the existential risk models, models running rogue, and science fiction. I think that's important work. But the risk that he's mentioned is real, right? Threats to democracy, threats to harmony of society are real risks.
Speaker
Ambassador Philip Tigo
Reason
This comment makes a crucial distinction between strategies (aspirational documents) and policies (actionable frameworks) while challenging the global AI safety discourse. It argues for redefining ‘existential risk’ in African contexts – from sci-fi scenarios to immediate threats to democracy and social harmony.
Impact
This comment fundamentally reoriented the discussion about AI risks and safety priorities. It validated Professor Shock’s earlier points about misinformation while establishing a hierarchy of risks that puts immediate, contextually relevant threats above speculative future scenarios. This influenced how other panelists framed their subsequent responses about practical safety measures.
I've given an example: Kenya is the biggest user of ChatGPT, and the first use of ChatGPT is emotional advice. That's real data. So you're asking a model for emotional advice that doesn't understand your context. So what does that mean?
Speaker
Ambassador Philip Tigo
Reason
This specific, data-driven example powerfully illustrates the abstract concept of cultural misalignment. The image of Kenyans seeking emotional advice from a culturally blind AI system makes the risks tangible and immediate, moving beyond theoretical discussions to real human impact.
Impact
This concrete example gave weight to all the previous abstract discussions about context and cultural understanding. It provided other panelists with a clear reference point for discussing the importance of local context in AI systems and reinforced the urgency of the capacity-building discussions.
I think there’s something else which we have to be very aware of, which is happening right now… And that’s misinformation and disinformation… To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term… there are things that are real that are happening now that we have to worry about and try to mitigate.
Speaker
Professor Jonathan Shock
Reason
This comment introduces a temporal framework that prioritizes immediate, observable risks over speculative future threats. It also brings gender-based violence into the AI safety discussion, expanding the scope beyond traditional technical concerns to include social justice issues.
Impact
This established the short-term vs. long-term risk framework that other panelists, particularly Ambassador Tigo, built upon. It also introduced the theme of technology-facilitated gender-based violence, adding a social justice dimension to the safety discussion that influenced how other panelists framed empowerment and agency.
Overall assessment
These key comments fundamentally shaped the discussion by establishing an African-centric framework for AI safety that diverges significantly from Western discourse. Ambassador Tigo’s interventions were particularly influential, introducing concepts like digital neocolonialism and redefining existential risk, while his emotional plea for collaboration created a turning point that influenced how other panelists discussed regional cooperation. Mark Gaffley’s empirical grounding about public awareness provided a reality check that influenced discussions about capacity building, while Professor Shock’s focus on immediate risks created a temporal framework that other speakers adopted. Together, these comments moved the conversation from abstract global AI safety concerns toward concrete, contextually relevant challenges facing African communities, creating a more grounded and actionable discussion about pathways forward.
Follow-up questions
How can Africa develop comprehensive AI incident databases that capture harms specific to the continent?
Speaker
Dr. Chinasa Okolo
Explanation
Current AI incident databases don’t adequately capture African contexts – searching for ‘Africa’ redirects to ‘African American’ content, indicating a gap in documenting AI harms on the continent
What are the formal advocacy pathways available across different African countries for AI policy influence?
Speaker
Dr. Chinasa Okolo
Explanation
She acknowledged not being aware of similar advocacy pathways in African countries compared to the US system, highlighting a need to map out these mechanisms
How should Africa redefine ‘existential risk’ in the context of AI to focus on locally relevant threats?
Speaker
Ambassador Philip Tigo
Explanation
He emphasized that traditional AI existential risks may not be relevant to Africa, and the continent needs to define what existential risk means in their context – focusing on threats to democracy and social harmony rather than science fiction scenarios
What mechanisms can ensure African scientists get access to frontier AI models for evaluation and safety research?
Speaker
Ambassador Philip Tigo
Explanation
He noted that African countries are major users of AI systems but lack access to evaluate these models, which is essential for conducting safety assessments relevant to their contexts
How can African governments build capacity to negotiate effectively with trillion-dollar AI companies?
Speaker
Ambassador Philip Tigo
Explanation
He highlighted the power imbalance between African governments and major tech companies, suggesting need for negotiation tools, playbooks, and guidebooks
What would effective AI procurement guidelines look like for African governments?
Speaker
Ambassador Philip Tigo
Explanation
He suggested that procurement is a key leverage point where safety benchmarks and audit requirements can be included, but indicated this isn’t being done effectively currently
How can the effectiveness of mandatory watermarking for AI-generated content be evaluated as a solution to misinformation?
Speaker
Audience member
Explanation
This was raised as a potential technical solution to combat AI-generated misinformation, though Professor Shock expressed skepticism about its long-term effectiveness
How can AI advancement be leveraged to bridge rather than widen the digital divide in Africa?
Speaker
Audience member
Explanation
With 64% of Africa lacking internet access, there’s concern that AI advancement could exacerbate digital exclusion rather than promote inclusion
What would AI safety institutes or evaluation bodies look like when adapted to African contexts and needs?
Speaker
Dr. Chinasa Okolo
Explanation
She referenced the US model embedded in NIST but noted that African versions would need to be designed differently to align with local needs and values
How can the Africa AI Council complement UN AI governance initiatives effectively?
Speaker
Dr. Chinasa Okolo
Explanation
She mentioned looking forward to seeing how this regional body would interact with global UN AI governance efforts, suggesting need for clarity on coordination
What are the most effective models for continental AI collaboration that move beyond competition?
Speaker
Ambassador Philip Tigo
Explanation
He emphasized the need to shift from competitive to collaborative approaches but the specific mechanisms for achieving this continental cooperation need further exploration
How can open-source AI development be strategically leveraged to build African AI capacity and reduce dependency?
Speaker
Ambassador Philip Tigo
Explanation
He suggested building African models from open source as an alternative to dependence on major AI companies, but the practical implementation pathway needs development
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.