African AI: Digital Public Goods for Inclusive Development | IGF 2023 WS #317

9 Oct 2023 08:45h - 09:45h UTC

Event report

Speakers and Moderators

Speakers:
  • Susan Waweru, Government, African Group
  • Bobina Zulfa, Civil Society, African Group
  • Irura Mark, Intergovernmental Organization, African Group
Moderators:
  • Lilian Diana Awuor Wanzare, Government, African Group

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis covers several important topics related to the development of AI and its impact on various aspects of society. One of the key points discussed is the significance of data infrastructure and access to compute for the democratization of AI. It is noted that the lack of proper data infrastructure can hinder the development and use of AI, especially in contexts like Africa or the global South.

Another point raised is the need to address challenges regarding data infrastructure and compute access. While no specific supporting facts are provided, this suggests that there are issues that need to be discussed and resolved to ensure that AI can be effectively utilized and its benefits can be accessible to all.

The analysis also touches upon the presence of AI policies and legislation in Kenya. The question raised is whether Kenya has a specific AI policy in place and corresponding legislation to operationalise it. Unfortunately, no supporting facts or evidence are presented to explore this question further.

Lastly, the analysis considers the topic of human-robot interaction, specifically focusing on how human workers should perceive and interact with robots. However, no supporting facts or arguments are provided to delve deeper into this topic.

In conclusion, the analysis raises important questions and topics related to data infrastructure, access to compute, AI policies and legislation, and human-robot interaction. However, it is worth highlighting that the lack of supporting facts or evidence limits the depth of analysis and leaves several open-ended questions.

Yilmaz Akkoyun

AI has the potential to significantly impact inclusive development and help achieve the Sustainable Development Goals (SDGs). It can play a crucial role in improving access to medical services and increasing efficiency in agriculture, which can contribute to the goals of good health and well-being (SDG 3) and zero hunger (SDG 2). AI applications can facilitate medical service delivery by assisting in diagnostics, monitoring patients’ health, and providing personalized treatment. In agriculture, AI can enhance productivity, optimize resource usage, and improve food security.

However, there are challenges associated with the access and negative effects of AI that disproportionately affect developing countries (SDG 10). Only a fraction of the global population currently has access to AI applications tailored to their specific needs. This digital divide reinforces existing inequalities and limits the potential benefits of AI for those who need it the most. Moreover, negative impacts of AI, such as job displacements and bias in decision-making algorithms, can exacerbate existing inequalities in developing countries.

Ethical considerations and the regulation of AI are also critical. Risks associated with AI range from high greenhouse gas emissions to digital disinformation and risks to civil and democratic rights (SDG 16). To ensure the responsible and fair development and use of AI, it is essential to promote ethical principles and practices. This includes addressing issues such as algorithmic bias, ensuring transparency and accountability, and safeguarding privacy and human rights.

In order to reduce inequalities and ensure diverse representation, it is important to have AI expertise and perspectives from various regions, including African countries (SDG 10). Africa has seen the emergence of various AI initiatives, and it is crucial to involve these initiatives in shaping the global conversation around AI. This will help ensure more equitable development and minimize the risk of marginalization.

The German Federal Ministry for Economic Cooperation and Development (BMZ) is committed to supporting the realization of AI’s potential through local innovation in partner countries (SDGs 8 and 9). The BMZ believes that digital public goods, such as open AI training datasets and research, are important enablers of economic and political participation. These measures can enhance economic growth and create opportunities for communities to harness AI for their specific needs.

Access to open AI training data and research, as well as open-source AI models, is considered foundational for local innovation (SDG 9). By sharing relevant data, AI models, and methods openly as digital public goods, a global exchange of AI innovations can be fostered, benefiting various regions and promoting cross-cultural collaboration.

In conclusion, AI holds tremendous potential for inclusive development and the achievement of SDGs. However, challenges of access, negative effects, and ethical concerns must be addressed. It is essential to ensure diverse representation, particularly from regions such as Africa, and promote ethical AI practices. Open access to AI training data and research is crucial for fostering local innovation and accelerating progress towards the SDGs. The African AI initiatives are inspiring and underscore the need for continued dialogue and learning about AI’s impact on development.

Zulfa Bobina

AI technologies, though idealised as digital public goods, have not yet become a reality in that role. They are described as a future aspiration rather than something currently achievable. However, there is optimism about the future growth of AI technologies and collaborations. More work is being done in the advocacy space, which is believed to lead to wider adoption of AI technologies.

Civil society is seen as playing a vital role in addressing ethical considerations related to AI. It is believed that civil society can step in to address these concerns and ensure that AI technologies are developed and deployed ethically and responsibly. Efforts are being made to address these ethical concerns through research and advocacy.

There is a need for comprehensible communication regarding AI technologies. It is argued that explaining technologically complex concepts in simple language can help the general population understand and incorporate these technologies into their lives. The goal is to avoid elitism in technology comprehension and ensure that everyone has access to and understands AI technologies.

The often overlooked human workforce behind automated technologies is being highlighted and advocated for. It is recognized that automation and AI technologies can have a significant impact on the workforce. Therefore, efforts are being made to support and advocate for the rights of these workers to ensure fair treatment and protection in the face of technological advancements.

Harmonizing collective and individual rights is emphasized, particularly when it comes to data rights. It is argued that adopting western blueprints of data rights that focus solely on individual rights may not be suitable for African societies. There is a need for more balanced regulations that take into account both collective and individual rights.

Discussions around AI technologies as a public good are considered important. There are considerable discussions taking place, especially at events like the Kyoto summit. Furthermore, public interest in data and AI technologies is growing, highlighting the need for ongoing discussions and dialogue as technologies progress.

Overall, there is excitement about the various activities happening across the continent in the field of AI and technological developments. These advancements are seen as opportunities for growth and progress. While there are challenges and ethical considerations to address, there is an optimistic outlook for the future of AI technologies in Africa.

Darlington Akogo

Mino Health AI Labs, a leading healthcare technology company, has developed an advanced AI system that can interpret medical images and deliver results within seconds. This groundbreaking technology has received approval from FDA Ghana, the country's health regulator, and has attracted users from approximately 50 countries across the globe. By providing fast and accurate results in medical image interpretation, the AI system has the potential to significantly accelerate and streamline healthcare processes.

Although the benefits of AI applications in healthcare are evident, it is crucial to subject these systems to rigorous evaluation processes. Approval of AI systems by health regulators can be challenging and requires extensive testing to ensure their effectiveness, reliability, and safety. It is essential to distinguish between AI research or prototypes and their real-world implementations, as the latter demand meticulous scrutiny and validation.

Considering the perspective of users is another important aspect of AI implementation. Users should actively participate in determining the features and operations of AI systems, particularly in healthcare. This ensures that these systems seamlessly integrate into users’ workflow and effectively meet their specific needs. Their input provides valuable insights on optimizing the functionality and usability of AI solutions, ultimately enhancing their impact in healthcare.

Moreover, the concept of businesses being built around solving problems connected to the Sustainable Development Goals (SDGs) has gained prominence. Companies such as Mino Health align their business strategies with addressing issues related to healthcare access and food security, demonstrating a positive approach towards achieving the SDGs. By focusing on solving socially significant problems, businesses can contribute to broader societal goals and make a tangible difference in people’s lives.

To guide businesses in achieving a balance between profit and impact, the concept of an internal constitution has emerged. This moral code acts as a set of guidelines for the company’s operations and ensures that its decisions and actions align with its core values. In certain cases, even the CEO can be voted out if they deviate from the principles outlined in the internal constitution. This mechanism promotes a sense of ethical responsibility within the business and encourages a long-term view that prioritizes societal welfare alongside financial success.

Furthermore, businesses can be registered for public good, which implies an obligation to prioritize the public interest over the interests of shareholders and investors. This designation reinforces the idea that businesses should focus on the common good, aiming to create positive social impact rather than solely maximizing profits. By doing so, businesses can align their objectives with the well-being of communities and contribute to the achievement of the SDGs.

Artificial intelligence (AI) has tremendous potential in aiding the attainment of the SDGs. The ability of AI to process vast amounts of data and derive actionable insights can be instrumental in addressing complex societal challenges. Investing in AI can be a strategic approach to tackling the problems identified within the SDGs, as it enables the development of innovative solutions and the efficient allocation of resources.

However, while harnessing the power of AI is essential, it is equally important to exercise responsibility and adhere to ethical frameworks. The transformative nature of AI technology calls for careful consideration of its potential risks and impacts. Leveraging AI in a responsible manner involves addressing issues such as bias, accountability, and privacy, among others. Operating within ethical boundaries is crucial to prevent the emergence of new problems that could arise from unchecked deployment of AI systems.

In summary, Mino Health AI Labs has made significant advancements in the field of healthcare through the development of their AI system for medical image interpretation. However, the successful implementation of AI in healthcare requires rigorous evaluation, active user involvement, and a focus on aligning business strategies with the SDGs. The concept of an internal constitution and the registration of businesses for public good provide mechanisms to guide companies towards balancing profit and societal impact. AI, if invested in responsibly, holds the potential to address the challenges identified within the SDGs. At this pivotal juncture in history, there is a need to harness AI technology while ensuring its ethical and responsible use to avoid unforeseen consequences.

Meena Lysko

During the discussion on industry, innovation, infrastructure, and data privacy in South Africa, several important topics were addressed. One of the key points highlighted was the implementation of the Protection of Personal Information Act (POPI Act) and the Cyber Crimes Act. These acts were considered crucial for prioritising the safeguarding of personal information and for providing a legal framework to address various digital offences.

It was acknowledged that challenges arise in striking the balance between innovation and compliance in digital privacy. However, the speakers emphasised that the POPI Act and the Cyber Crimes Act play a vital role in ensuring responsible handling of data by organisations in South Africa.

Collaboration between businesses, individuals, and law enforcement agencies was emphasised as imperative in moving forward with the implementation of these acts. This collaboration is seen as a key factor in promoting the responsible use of personal information and in effectively addressing digital offences. The need for joint efforts in creating a secure and ethical digital environment was highlighted.

Another significant point discussed was the incorporation of ethics in the AI systems lifecycle. It was emphasised that ethics should be included from conception to production of AI systems. This includes the integration of a module on AI ethics and bias in training programmes. Ethical competence, which includes knowledge of laws and policies, was deemed necessary for individuals involved in AI development. Additionally, the need for an ethically tuned organisational environment was highlighted to ensure the responsible and ethical use of AI systems.

The importance of industry interaction in AI and data science training was also emphasised. The inclusion of industry experts in training sessions was seen as a means of facilitating knowledge sharing and promoting morally sound solutions. This collaboration between the training programmes and industry experts was found to be beneficial in keeping up with the latest trends and developments in the field.

The positive impact of training programmes on participants was highlighted with the assertion that these programmes support quality education, industry innovation, infrastructure development, zero hunger initiatives, and responsible consumption. The post-training feedback from previous programmes indicated that the training positively influenced the participants.

Lastly, the use of open AI systems was advocated as a means of contributing to sustainable digital development. It was noted that proprietary AI systems are generally used to make money, ensure security, empower technology, and simplify tasks. However, open AI systems were proposed as a more sustainable alternative for digital development.

In conclusion, the discussion highlighted the significance of the POPI Act and the Cyber Crimes Act in South Africa for ensuring personal data protection and addressing digital offences. Collaboration between businesses, individuals, and law enforcement agencies was deemed essential in moving forward with these acts. Ethics in AI systems development and the incorporation of industry interaction in training programmes were emphasised. The positive impact of training programmes on participants and the advocacy for the use of open AI systems in sustainable digital development were also discussed as important aspects of the conversation.

Susan Waweru

The Kenyan government has demonstrated a strong commitment to implementing and adhering to policies related to artificial intelligence (AI) and digital transformation. The Constitution of Kenya plays a significant role in guiding the development and use of AI. It includes provisions that emphasise transparency, accountability, and the protection of privacy rights. This indicates that the government recognises the fundamental importance of privacy in AI systems.

Moving beyond theoretical frameworks to actual implementation is a crucial step in the development of AI. The government understands the significance of leadership commitment in successfully executing plans. Without strong leadership support and commitment, the implementation and execution of policies become challenging.

The Kenyan government is actively pursuing digitisation and aims to develop an intelligent government. Key efforts in this direction include onboarding all government services onto eCitizen, a platform that provides online access to government services. The President himself is overseeing the Digital Transformation Agenda, highlighting the government’s high level of interest in digitisation. Currently, the government’s focus is on infrastructure development to support these digital initiatives.

Privacy and accessibility are two important principles emphasised in the development of digital public goods and AI technology. The government recognises that video surveillance mechanisms should respect privacy and not infringe on people’s freedoms. The Data Protection Act in Kenya primarily affects data controllers and processors, ensuring that personal data is handled with care and that individual privacy is protected.

To further support AI development, the Kenyan government is working towards separate legislation and strategies specifically for AI. This demonstrates a commitment to creating a comprehensive and focused approach to AI policy. The government is actively drafting AI legislation and has established a central working group to review and update tech-related legislations, policies, and strategies.

In line with their commitment to effective governance, the Kenyan government is developing an AI chatbot. This chatbot, using natural language processing with large datasets, is aimed at enhancing compliance and bringing government services closer to the people. It will be available 24/7, providing services in both English and Swahili.

Demystifying AI and promoting human-centred design are also important aspects. The government recognises that creating awareness and understanding among the public can enhance the adoption and reduce fear of AI. In addition, a focus on human-centred design ensures that AI development prioritises the needs of citizens over the benefits of organisations.

Finally, the benefits of AI, especially in public service delivery, are highlighted. The government acknowledges that AI has the potential to provide significant benefits to its citizens. The aim is to ensure that the advantages of AI technology outweigh any potential risks.

In conclusion, the Kenyan government has taken substantial steps towards implementing and adhering to AI and digital transformation policies. With a strong commitment to privacy, accessibility, and human-centred design, as well as efforts to develop separate AI legislation and strategies, the government is actively working to create a more inclusive and technologically advanced society. Through initiatives such as the AI chatbot and the digitisation agenda, the government aims to provide efficient and accessible services to its citizens.

Moderator – Mark Irura

During the discussion, several important topics related to healthcare and the implementation of digital solutions were discussed. Mark Irura emphasised the need for risk assessment and harm prevention when incorporating digital solutions. He highlighted the importance of evaluating potential risks and taking necessary precautions to protect individuals from physical, emotional, and psychological harm. Irura also stressed the importance of implementing data protection protocols to safeguard sensitive information and maintain citizens’ privacy.

The discussion also acknowledged the challenge of balancing business interests with Sustainable Development Goals (SDGs) and the integration of artificial intelligence (AI). It was recognised that business requirements and regulations may take precedence at times, making it difficult to align them with the objectives of sustainable development and the use of AI technologies. The speakers agreed that finding a harmonious balance between these different aspects is crucial to ensure the successful implementation of digital solutions that contribute positively to both business interests and the achievement of SDGs.

Mark Irura further emphasised the need for developing strategies that can effectively align business objectives, SDGs, and AI technologies. He inquired about the approach used to align these elements in addressing various challenges. This highlights the importance of creating a comprehensive framework and implementing strategies that consider all three components, providing a cohesive and integrated approach to problem-solving.

Overall, the speakers strongly emphasised the need for rigorous certification processes, active user involvement in decision-making processes, and robust data protection measures. These measures are crucial to mitigate risks and ensure the well-being of individuals when implementing digital solutions. The discussion conveyed the wider implications of the implementation process and the importance of responsible use of AI technologies in healthcare and other sectors.

Session transcript

Moderator – Mark Irura:
I want to check also if the colleagues online have been able to join. Bobina Zulfa, Dr. Meena Lysko and Darlington, are you online with us?

Meena Lysko:
This is Meena. Yes, I am online. Thank you.

Moderator – Mark Irura:
Bobina? Perfect.

Zulfa Bobina:
Hello. I’m here as well.

Moderator – Mark Irura:
All right, thank you. So we are missing Darlington, but we’ll start with the session. I will start with introductions. My name is Mark Irura. And today we are here to talk about AI and its use, particularly for sustainable development as far as digital public goods are concerned. It’s some work that we have been doing in Africa, and we kind of will do a little bit of a deep dive looking at some of the things that we have done as a program within GIZ, but also explore what are some of the risks that are coming out in the discussions that we have. With us today, we have Yilmaz from the Federal Ministry of Economic Cooperation and Development, BMZ. We have Susan Waweru seated beside me. She’s the head of legal at the Office of the Data Protection Commissioner. I have Dr. Meena Lysko. She brings on board her experience having worked with government, academia, and the private sector. And she’s currently a director at Move Beyond Consulting based in South Africa. And we have Bobina Zulfa. Bobina is an AI and data rights researcher at Pollicy, based in Uganda. Pollicy is a feminist collective of technologists, data scientists, creatives, and academics working at the intersection of data, design, and technology to see how government can improve on service delivery. We’ll start with a keynote from Yilmaz to talk to us a little bit about, from a high overview, what they are doing before we delve into the conversation. So over to you. Thank you.

Yilmaz Akkoyun:
Dear Mark, distinguished guests and colleagues, dear ladies and gentlemen, dear IGF friends, it’s a great honor on behalf of the German BMZ and pleasure to share a few opening remarks today highlighting the potentials of AI, especially African AI, for inclusive development. What is the potential of AI for inclusive development? I think we already heard a lot on day zero and today. In my view, it can be instrumental in achieving the SDGs. They can facilitate medical service delivery, increase efficiency in agriculture, and improve food security, challenges of our time. Yet, only a fraction of the population worldwide has access to AI applications that are tailored to their needs. And we want to change this. This is why we are here. And on top of that, the negative effects of AI disproportionately affect developing countries, especially in the global south. However, we also need to be aware of the risks related to AI. These risks range from high greenhouse gas emissions of large language models to digital disinformation and risks to civil and democratic rights. The international community is becoming increasingly aware of these issues, and we see it here at the IGF. Accordingly, in my view, the promotion of ethical, fair, and trustworthy AI, as well as the regulation of its risks, are beginning to be addressed at the global level, as we heard this morning in the G7 context of the AI Hiroshima process. AI has been addressed in the UN, G7, G20, and international organizations such as UNESCO and the OECD have published principles and clear recommendations that aim to protect human rights with AI being on the rise worldwide. And the EU is on the forefront of regulating AI with the EU AI Act. Secretary General Guterres is convening a multi-stakeholder high-level advisory board for AI that will include emerging and developing countries. 
I think these conversations between countries from the global north and the global south are essential so we can make sure that AI benefits all. And when talking about AI, we mostly hear about models and applications developed in Silicon Valley, California of the US, or in Europe, but there’s so much more. And we discuss large language models that represent and benefit only a fraction of the world population. That is why I’m especially excited to hear about AI use cases today that were developed and deployed in African countries and that truly represent African AI, and that were designed specifically to benefit the public in African countries. As the German Federal Ministry of Economic Cooperation and Development, we want to enhance economic, political participation of all people in our partner countries. And we are very eager to support our global partners to realize the potential of AI through local innovation in these countries that we are talking about here in this session. We are very committed to the idea that digital public goods are an important enabler. For example, to be more concrete, the access to open African language datasets is supporting local governments and the private sector in building AI-empowered services for citizens. For instance, our initiative Fair Forward contributes to the development of open AI training datasets in different languages, Kiswahili, Kinyarwanda, and Luganda, languages spoken by more than 150 million people collectively. And some of the examples we’ll get to know in this session are built on these language datasets. I’m looking very much forward to this. And to give you an outlook, we see open access to AI training data and research, as well as open source AI models as the foundation for local innovation. Therefore, relevant data, AI models, and methods should be shared openly as digital public goods. 
To realize the potential of AI for inclusive and sustainable development, we need to make sure at the same time that AI systems are treated as digital public goods. Open, transparent, and inclusive at the same time. In this way, a global exchange on AI innovations can emerge. This IGF with AI being mentioned in so many sessions is one starting point for the global exchange. And now, I’m looking very much forward to the use cases. And thank you so much for being part of this wonderful session.

Moderator – Mark Irura:
Thank you so much. So, before we dive in and building upon that, we are kind of taking a critical approach to try and see how are we beginning to define what AI means to us in the continent, in the African continent. And today, we specifically have this idea that we can actually build solutions and systems and not just look at it from a policy and a framework perspective, so to speak. And I will start with Susan, because she’s in the room and in the hot seat. And I will ask, I will start you with the framework, right? And what the Office of the Data Protection Commissioner is doing in Kenya as far as thinking about AI. And then, also explore if you have any ideas and context about what is happening in the rest of the continent.

Susan Waweru:
Thank you, Mark, for your question. And good evening to all. As you’ve heard, my name is Susan Waweru from the Office of the Data Protection Commissioner in Kenya. As we may be aware, in the AI context, privacy is of fundamental importance to ensure that AI works for the benefit of the people and not to their harm. Discussing the frameworks, in Kenya, the top echelon of frameworks is the Constitution of Kenya. Within that constitution, we have several provisions that sort of guide AI in Kenya now. One of them is the values and principles of governance. From a government perspective, we are bound to be transparent, to be accountable, to be ethical in everything that we do. This includes the deployment of AI for public service, in service delivery in all forms. Secondly, we have values and principles of public service. These are the values and principles that govern us as public servants in how we carry out our public duties. So, that is what, at a constitutional level, we will be guided by in the deployment of AI in delivery of service. Of most importance is the Bill of Rights and Fundamental Freedoms. Within that Bill of Rights and Fundamental Freedoms, which is also in our constitution, we have the right to privacy. The right to privacy is what backs data protection laws, data protection policies, and data protection frameworks in Kenya. So, having the top organ, the Grundnorm in Kenya, giving the guardrails by which AI, privacy, and data protection will be guided, gives a good background and a firm fundamental foundation from which any other strategy or policy in tech or in AI can then spring. It forms the global guardrail for everything that should be done. So, it may not specifically touch on AI, but these values and principles, the Bill of Rights and Fundamental Freedoms, give you the constitutional guardrail of what you can and cannot do in the AI space. So, it is from that, Mark, that the frameworks then flow. 
All other frameworks, including the AI strategy and the digital master plan, must adhere to it.

Moderator – Mark Irura:
Thanks. Thanks, Susan. And building upon that and the increasing knowledge that we have these things anchored in constitution or in principles that you want to develop, and then we have digital public goods. And digital public goods, then, are ways in which government is offering shared services. So, if you have a registry, for example, a civil birth and death registry, how can it be leveraged across government, rather than the social security and the hospital insurance fund and the tax administration all have their own registers, but kind of duplicating that effort. And so, from that knowledge, we are also beginning to think about how, because we have an opportunity and we have seen how multiple registers affect how government delivers services. Then we see the need for adopting this in government, because it can be cheaper. It can be cheaper in terms of how government procures these services. And I’m giving this background, because I do not know if Darlington is online. Is Darlington there? All right. So, you can introduce yourself and then move on to the question and share with us what you’re doing in West Africa. And then you can also talk a little bit about the lessons you’re learning from the work that you’re doing. And are you using any digital public goods approaches in your work? So, over to you, Darlington. Thank you.

Darlington Akogo:
Thank you for having me. My name is Darlington Akogo, founder and CEO of minoHealth AI Labs and KaraAgro AI. I’m in a moving vehicle, so apologies for any sounds. What we do at minoHealth AI Labs is build artificial intelligence solutions for healthcare. So, connected to the question, we have one AI system that is focused on medical image interpretation. And we got the health regulator in Ghana, FDA Ghana, to certify and approve this AI system. So we rolled it out. We have users from all over; we’ve had sign-ups from about 50 countries around the world. In Ghana, we have it being used in some of the top facilities in the capital city, but also in some small towns, and we are expanding access to even really rural areas. The benefit of this AI system to communities, for example, is this: by default, if you go to take a medical image, an x-ray, for example, it will take several weeks before you get the results, because there are very few radiologists. In Ghana, for example, there are about 40 radiologists. If you take an African country like Liberia, they have fewer than five radiologists. So what this AI system does is help speed up that process by using AI to interpret the medical image. We have it online at platform.minohealth.ai, and the AI system is able to generate results in just a few seconds; in about five to ten seconds, you get the results. So what used to take weeks can now take just a few seconds. It makes all the difference in healthcare, because you want to know exactly what is wrong with people quickly enough that you can respond to it. The lessons we’ve learned are quite a lot. One key one is that within the space of AI, there’s a huge difference between doing AI for research, or some sort of demo proof of concept, and building real-world AI that is meant to work with real humans.
There’s a whole lot of difference. The key thing is the rigorous evaluations you need to do, and this is super applicable in healthcare. Getting the AI system certified by the FDA or a health regulator is a very, very major step, and what it takes to get health regulators to certify an AI system for some use cases is quite a lot. But then you learn a lot of lessons. So one of the key things we learned is to just double down on rigorous evaluation. The other bit is that you don’t want to build the AI system in isolation and just hand it over to the users. Let them decide what kind of features they want and how they want the AI system to fit into their workflow. That is very important.

Moderator – Mark Irura:
Thank you so much, Darlington. Thank you. And moving from what you’ve just said, I will turn to you, Meena, and ask about what Darlington just raised: it’s very different when you’re doing a research project versus when you’re actually implementing a solution. There are a lot of things, a lot of risks. And from the word go, I think one of the approaches from a digital public goods perspective, or a DPI perspective, a digital public infrastructure perspective, is to ask: can we cause harm? What are the risks? How might we expose data we shouldn’t expose, and how do we protect the code bases we should protect, so that there are no harms that ultimately translate to the citizens? So, I will invite you to reflect on that. Thank you.

Meena Lysko:
Thank you. Thank you very much, Mark. Firstly, thank you for this platform and for giving me the opportunity to actually e-visit Japan. I do wish I could have been there in person; I could not, and I apologize for that. So thank you for this opportunity to e-visit Japan and the captivating and quite unique city of Kyoto. I know it is the birth city of Nintendo, and it hosts a phenomenal number of UNESCO World Heritage Sites. So, Mark and team, I am very envious of you. I was also intending to share some of the work we have done, or that we are doing, so I will come back to it. In terms of your question, looking at AI ethics and governance standards, first in South Africa: we have an ever-evolving digital landscape, and we have the Protection of Personal Information Act, or POPI Act, and the Cybercrimes Act, which stand as significant legal frameworks shaping the realm of data privacy, security, and digital crime prevention. The POPI Act, endorsed in South Africa, prioritizes the safeguarding of individuals’ personal information and encourages responsible data handling by organizations. The POPI Act’s emphasis on individual privacy is reshaping the way organizations collect and manage personal data, prompting them to adopt stringent data protection measures. Perhaps I can give an example. I frequently get these sorts of annoying calls, and I ask, how did you get my number? And then I go on to say, you do know that I have not shared my number with you willingly, so this is against the POPI Act. Very often the phone goes down immediately. So people in South Africa are very aware of the POPI Act, and people feel safeguarded through it. However, challenges do emerge in balancing innovation and compliance, especially in the age of digital privacy.
In parallel to the POPI Act, we have the Cybercrimes Act. This addresses the escalating threat of cybercrime by providing a legal structure to tackle various digital offenses, thereby fortifying the defenses against cyber threats. So, moving forward, I think it becomes quite imperative for businesses, individuals, and law enforcement agencies to collaborate in the implementation of these acts. Thank you, Mark.

Moderator – Mark Irura:
Thank you, Meena. And I turn to you, Bobina. We’ve talked about digital public goods. We’ve talked about how we protect citizens. And Susan gave a very good introduction to what is being done as far as the frameworks we have on data rights are concerned. SDG number 17 talks about partnerships. As civil society, in this particular field of digital public goods, do you have any collaborations with other stakeholders, whether in the private or public sector? And do you think, and that’s loaded, do you think that there’s alignment, from what you can see in the landscape, with sustainable development as far as AI is concerned right now? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. Please allow me to turn off my video because my internet is unstable. Can you hear me?

Moderator – Mark Irura:
Yes, yes, we can hear you.

Zulfa Bobina:
Yes?

Moderator – Mark Irura:
Yes, go for it.

Zulfa Bobina:
Good afternoon. So we lost you now. We can’t hear you. Okay, can you hear me now? Yes, yes, go for it. Okay, great. I’ll quickly get to your question, Mark. Very interesting discussion from the rest of the panelists; good to hear about the number of things you’ve been working on. I just want to say, from the get-go, I’ve been thinking about the idea of AI technologies, on the continent or globally as a whole right now, being digital public goods. It’s still very much an ideal that we are, in a sense, working towards, because that’s not really a reality at the moment: a lot of what you would describe as making something a digital public good, it being inclusive in a sense, is not really what’s happening at the moment. And so, trying to relate that to the SDGs: as the technology is being adopted here on the continent, how is that happening along the lines of intersection of different partners, and how are they possibly working towards the realisation of different SDGs? I do see a number of examples, for example, here in Uganda, where I’m based in Kampala, or across the African region as a whole. I see partnerships between academia and practice, especially. I’ll give one example, like the Lacuna Fund with the Makerere AI Lab. Here in Kampala, Uganda, I see a lot happening at the Makerere AI Lab, and a lot of that is in partnership with different partners. For example, the Lacuna Fund projects, which are building natural language text and speech datasets, are, I think, working in collaboration with Google. So a lot of what I see is within the academia space over to the private sector.
And civil society comes in to do more advocacy, speaking both to the issues of data workers and to the ethical considerations in the adoption of these technologies. So I think it’s something that is springing up, in a sense. It’s not happening on a very grand scale, but it’s something we see coming up. And I guess we can hope that, especially with more work being done around advocacy, we will see more of that happening over the coming months and years.

Moderator – Mark Irura:
All right. Thank you so much. Thank you so much, Bobina. And I come back to you, Susan. I’ll act like a journalist and say, “I’m sure the people in the room are wondering”, or, as they say in Kenya, “Kenyans are asking”, as if it’s one person. How do we move from these frameworks to actual implementation? And what are some of the things that you’re doing in this regard so that they don’t remain on paper?

Susan Waweru:
Mark, that’s a good question, and it’s one I’m passionate about. I’m known as the get-it-done girl; my reputation is for moving things from paper to actual implementation and execution. “Death in the drawer” is a concept we learn in policy and in business administration, where you can have the best policies, strategies, frameworks, and legislation all documented. And that is one thing you see in Kenya. We have some of the best documentation, even borrowed by the West, but implementation becomes one of the biggest challenges, not only in AI but in the tech space. So how you get it done, from my perspective, is: one, leadership matters. If you don’t have leadership commitment to getting what is on paper out to be physically seen, it will be a challenge. So what we do, as the technocrats in government, is seek to influence leadership. And we have some of our parliamentarians here with us. We seek to influence them on the importance of what has been documented, because if the policy is done at the strategy level and just benched, then it becomes a challenge. But as technocrats, influencing the leadership on the importance of the documents that have been prepared is key. Once you get the leadership buy-in, then it trickles down to the user and citizenry buy-in, because it is those using the frameworks who must implement them. For example, the Data Protection Act is an act passed by parliament that majorly affects data controllers and data processors, who are largely entities. So if we don’t get entities on board through awareness creation and advocacy, then that document doesn’t get done. And one way to get user buy-in, and we’ll talk about this later, is to have a free flow of information: to be transparent in what you do, and to be very simple and clear on what the compliance journey is for data protection and for privacy. So, leadership buy-in, because leadership matters, and citizenry buy-in. Another thing is collaboration.
Partnerships with organizations and entities who have executed what is in our documentation. Once we collaborate, for example, with other government agencies who have implemented their AI applications successfully in the Kenyan government, we learn from them how to do that. Currently in Kenya, I can say this get-it-done attitude is in high gear. In the tech space, the government has what it calls the Digital Transformation Agenda. It’s spearheaded by the presidency, with the president himself overseeing and calling out most of the projects. Currently that Digital Transformation Agenda is at the infrastructure development stage and onboarding all government services onto one platform, which we call eCitizen. And he gives specific timelines for when he wants all of that done and checks them himself. That’s the leadership. That’s the level at which the government of Kenya is interested in digitization, towards moving to an intelligent government where we don’t react to public sector needs; we preempt them and provide for them even before they happen. Those are the three ways, Mark, I would say, that we get documentation to the ground.

Moderator – Mark Irura:
Thanks, Susan. Of course, today we’ll wait to see what you’ve been doing as well with AI itself; I hope we can get a chance to see that if we don’t run out of time. I come to you, Dr. Meena, and I ask about training and capacity building. What does that mean to you for different stakeholders, whether they’re in policy or at the level of the graduates that we have? We know a lot of them have to go overseas or outside the continent to get their training to be able to come back and be part of an ecosystem. So what does that look like for you right now, especially now with the risks and the potential harms of AI being apparent? Thank you.

Meena Lysko:
Thank you, Mark. I’m really looking forward to sharing some of the programs we’re busy with currently, but I’ll hold back and address this particular key question. The emphasis should be on including ethics in the AI systems lifecycle, from conception all the way through production, and it’s a cycle. That means it should continue, as a sustained initiative, throughout the working stages of a particular system. Within some of the programs, for example the one I’m currently on, we’ve incorporated a module on AI ethics and bias. Now, albeit that we are looking at very hands-on development, we looked at the soft skill, if I can call it that, where we need our participants, our trainees, to understand that adopting ethics in AI is more than just knowing ethical frameworks and the AI systems lifecycle. You require awareness of ethics from the perspective of knowledge, skills, and attitudes. That means knowledge of laws, policies, standards, principles, and practices. And then we also need to integrate with that professional bodies and activists, and we have a number within South Africa itself. For example, we have an overarching AI representative body within South Africa; we have, I think it’s called DepHub in South Africa, which focuses on AI policies and data recommendations. And then we must also look at the application of ethical competence, so we need an ethically tuned organizational environment. And in tune with that, we have to look at ethical judgment. So we’ve been emphasizing that the participants in our training program are fully aware of these aspects. Their projects, their developments, need to be guided by ethical principles and philosophies; they need to be imbued with that. In the projects that they are in, they have to apply ethics throughout the design and development process.
And to ensure that we are training people well in AI and data science, as an example, we’ve also incorporated inviting industry experts into our sessions to engage with the participants, so that there is an encouragement of healthy knowledge sharing. But also, in the opposite direction, there are youthful perspectives shared on promoting morally sound solutions, from those not yet contaminated with what is going on in the market purely for profit. And we’ve seen this happening within our training programs as a very successful sharing mechanism. Thanks, Mark.

Moderator – Mark Irura:
Thanks, Meena. And I come to you, Darlington, and I ask a question that touches a little bit on what Dr. Meena said about working with industry experts. So you have a bucket where we have the SDGs, we have AI and the problem that needs to be solved (in your case, you talked about radiology and being able to read and interpret what those images mean), and then we have the complexities of running a business. So talk to us a little bit about strategies, if any exist, to align all this in the work that you’re doing. If you’re still there. Darlington is not there. Okay, then I will move to you, Bobina. Are you online?

Zulfa Bobina:
Yes, I am.

Moderator – Mark Irura:
All right. I will ask a question related to the ethical deployment of AI. What are you doing as civil society in this regard to make sure, for example, that people will not be left behind by digitization, and that children will learn at an earlier age about the risks of these technologies, even as they begin to use them? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. I hope you can hear me. That’s a really profound question, because I think it goes over a lot of the things we’re trying to unpack throughout this conversation, and Dr. Meena was also going over a number of the ethical concerns and how these are being navigated. But I’ll say, for instance, with the work we do at Pollicy, a lot of what we do is research, which is sociotechnical in a sense. So we’re not developers, but we look at the landscape and at how these technologies are being adopted and deployed in different communities. So in our role, a lot of what we’re doing is, on one hand, knowledge production with our research, and advocacy on the other. With the knowledge production, very broadly, one of the things we’re looking at very critically right now, in terms of addressing the ethical concerns, is bringing communities to understand the workings of these technologies, because we think it’s very elitist not to. We always compare this to the health conversation, where there is a disease outbreak and then governments find a language to communicate it to the general population, even when there is all this scientific language about it. So we’ve been trying to think around this: how do we come up with language so that the ordinary person within the country, or anywhere else across the continent, will be able to understand these technologies, how they impact them day-to-day, how they can incorporate them in their lives, and how this could be something to benefit their lives? On a broader scale, just given the time I have, going over those two points of knowledge production and advocacy, I think we’re looking very critically at, one, the issue of invisible data workers.
This is really a conversation about automation and the invisible workforce behind a lot of these technologies that we’re being told are working frictionlessly. So we are trying to get the understanding and buy-in of government, which is supposed, in a sense, to regulate these technologies that are being adopted, on behalf of, for example, the people doing this invisible work. So that’s one of the things we’re looking at. The other is the harmonization of collective and individual rights. A lot of the frameworks being developed, and I think this is a trickle-down from the West, where we’re getting a sort of blueprint from the GDPR, et cetera, are driven towards individual rights. I think that’s problematic. Especially as society is datafied more and more, there is a need for us to move towards a place where we harmonize both collective and individual rights, and that would bring in a participatory approach. Thank you.

Moderator – Mark Irura:
Hi, Bobina. Can you hear me?

Zulfa Bobina:
Yes, I can hear you. I don’t know if you can hear me.

Moderator – Mark Irura:
Yeah, I lost you temporarily. Just repeat the last sentence as you wind up. Okay. No, we’re here with you. Please, please. Yeah, yeah. Ah, I think we lost Bobina. Okay. Darlington, I see you’re settled now. Okay, perfect. Cool. I had a question for you, which I think you did not hear, but I will also take two questions from the audience; you can prepare them if anyone has a question, probably one from the room and one online. My question to you, Darlington, before we lost you in cyberspace, was this: we have the SDGs, the Sustainable Development Goals, we have you as a business, and then we have AI, and these interests are not always aligned. Sometimes it’s the business which has to take precedence. Sometimes, as in the problem that you gave us, you’re trying to solve a really impactful problem, and at times you just have to comply with certain regulations. What are some of the strategies that you have to align all of this in solving the problems that you want to solve while also aligning with the SDGs?

Darlington Akogo:
Yeah, I mean, that’s a very, very important question. The initial strategy is to make sure your business is built around solving a problem in itself, a problem connected to the SDGs. Then, fundamentally, there’s no conflict to begin with. If you are profiting off of something that is damaging the environment or destroying people’s health, then alignment becomes a really, really big problem. But if, fundamentally, you have a social enterprise, a business built around solving a problem, as in our case, where for minoHealth the whole business is built around providing healthcare and making it accessible to everyone, and for KaraAgro we’re making sure we solve food security, then the interests align. Outside of that, there are definitely instances where, if you took one route, you’d make a lot of profit but the impact might not be so much, and another route where it might be the reverse. I can give you a real-world example. We work on drug discovery with AI. There’s a scenario we’ve looked at where you could take certain conditions, work on new drugs for them, and it would be very expensive; there are certain medications where a few tablets cost tens of thousands of dollars, hundreds of thousands, even millions, and you could sell them to a few people and make a lot of money. But then the question is, are you actually building any equitable access to healthcare by doing that? So when it comes to those scenarios, you need to have guiding principles. What you can do is have an internal constitution that says, this is our moral backbone and we need to live by it, and the board is basically obliged to make decisions off of it. So even if the CEO veers off by not following that internal code of conduct, that constitution, they could be voted out. And depending on how serious you are about this, you can solidify this within the company’s constitution, and then it will be fully followed.
Some people go a step further, even in the way they register the business. There’s a category in some countries where you can register as a social enterprise, or “for profit, but for public good”, I think the term is. When you do that, it means your primary obligation is not to shareholders and investors; it’s to the public. So those are legal ways of making it binding, to make sure that you are actually focused on addressing the SDGs and not just maximizing profits.

Moderator – Mark Irura:
Thank you, thank you. And I guess what I’m hearing from you as well is the ability to also consider self-regulation, especially in this space, as you innovate, as you solve these problems, even where there might be a lacuna in the law or in the frameworks that exist. I don’t know if there are any questions coming in from the room to begin with or from online. Yes. Hi, Leah.

Audience:
Hi, everyone. I’m Leah from the Digital Public Goods Alliance. And I think we should also quickly talk about infrastructure. I mean, apparently we had some troubles here, which is a good bridge to talk about data infrastructure and access to compute. Obviously you need both of them in order to democratize the use of, the development of, and also the benefits of AI in an African or global South context. So how do you deal with these challenges in your project, in your country context? Thank you.

Moderator – Mark Irura:
So I will open up that question to anyone to take it up. Yeah.

Susan Waweru:
Thank you for the question. In Kenya, one of the things I mentioned under the Digital Transformation Agenda is that the first building block is infrastructure. So I know that for the next five years, the government has the last-mile connectivity project, which seeks to bring fibre connectivity to every market, every bus station, and every public entity. That will give free wifi, and that gives people access to digital public goods. So that was adopted as one of the first things to be done, because you can’t develop digital public goods without accessibility. Equal accessibility is very important, and I think it is one of the bedrocks to make AI, other tech, and DPGs successful. So that’s what I know from the Kenyan experience that’s happening.

Moderator – Mark Irura:
Anyone else? Dr. Meena, would you like to come in?

Meena Lysko:
Yes, sure, Mark. I was looking in Zoom to raise my hand, but thank you for asking me as well. From a South African perspective, and let’s see if I can turn the video on, you may have read in the news or be aware that we have this thing called load shedding. It’s a term coined within South Africa for a structured approach to managing power consumption within the country when there are constraints on the national grid. So this brings about, in addition to the challenges we already have, and I guess globally, the need for redundancy in the infrastructure that ensures connectivity. But with redundancy, we also need to ensure that it is affordable, and it must be affordable at every latitude and longitude, to the decimal of latitude and longitude, so that it reaches every sphere of life within the world, and, well, within the South African context. In running our bootcamp, for example, a program we are currently doing, it has been a challenge to run a hybrid program where people cannot stay online for the full duration of the training because of this matter of connectivity. Fortunately, we record sessions so they can follow up after the session; so we have solutions around it. So the one aspect is infrastructure, but it is also about redundancy. And then there is also the question of our reliance, in education and training, and now in our kind of 4IR, on infrastructure. The question becomes: what happens if someday, for whatever reason, and we’ve seen this through natural disasters in various parts of the world, infrastructure is affected? How do we then manage to come back online as expediently as possible? Because in this 4IR, in this AI- and data-evolved world, our reliance is fully on infrastructure to keep global economies going.
So the risk is quite high, and I think that is a call to action to look into this. Going to the opposite extreme, there is the question of the impact infrastructure has on the environment: energy consumption, for example, is massive within this context. So these are the sorts of things I think we have to be very mindful of and look into responsibly; we talk about responsibility, so we have got to be responsible about that as well. Thank you, Mark.

Moderator – Mark Irura:
Thank you, thank you. I don’t know, Sumaya, if you can find one question for us online to read out.

Audience:
Thank you, Mark. We have a few questions online. The first one is: do we have an AI policy in Kenya? If yes, is there legislation to operationalize the policy? Question number two: how should human workers perceive and interact with robots working alongside them? Are these robots supposed to be treated as tools or as colleagues by the humans working with them? So those are the questions. Thank you.

Moderator – Mark Irura:
Susan, I’ll direct the one for Kenya to you.

Susan Waweru:
Kenya has what is called the digital master plan, and in it are some aspects about AI. Recently, about two weeks ago, the government, led by the president, instructed that AI legislation be drafted, so that is ongoing work. Further, there’s a central working group that’s looking at all tech-related legislation, policies, and strategies, and one of the things that will be considered is having an AI policy in place. So the answer is yes: within the digital master plan, we have aspects of AI policy. However, there are efforts, I think within this year, to have legislation, policies, and strategies that will guide that.

Moderator – Mark Irura:
Thank you. And then the second question is existential, right? Should humans interact with robots? We already are, to some extent. If there are questions online and in the room, we will take them and continue to answer them, but I want to ask our panelists and everyone who has shared here to take a minute and wrap up before we leave the room. We wanted at least to hear what’s happening; we wanted to show you what’s happening; and we wanted you to appreciate that we are not just talking in Africa, we are doing something. I’ll begin with you, Yilmaz, if that’s okay; just a minute, yeah.

Yilmaz Akkoyun:
Yes, thank you so much. It’s a real privilege to listen to these different examples and use cases, because this was really inspiring for me: to hear more about creating African AI and how it works, and also the challenges and how you deal with them. This was super helpful. I would like to stay in close touch with you to continue this conversation; I think it’s just the starting point. And I’m so happy to see the success stories already. I can only congratulate this panel, you also for the amazing moderation, and the different panelists who participated remotely. Let’s please continue this conversation. I absorbed so much, because I hadn’t heard about much of it in advance, and that is why we are also here: to get in touch and join the IGF. I think it’s just the beginning.

Moderator – Mark Irura:
Thank you. Sumaya, are we going to put it up? Okay, good. As you prepare, then, I’ll go online and ask Bobina to give her closing remarks.

Zulfa Bobina:
Sure, thank you very much again for having me be a part of this conversation. As someone mentioned earlier, there have been a lot of conversations happening in Kyoto around AI technologies as a whole. So to be talking about this direction, how we get to realize AI being a digital public good and indeed being of benefit to everyone, is a step forward. We’re coming from the initial conversations around digitization, and now, as the public as a whole is getting datafied more and more, how do we let the conversation evolve as the technologies are evolving as well? So for me, I’m very excited to hear about some of the things that are happening here and there across the continent, very excited to see more of that, and very happy to keep in touch with you all to let this conversation keep going. Thank you.

Moderator – Mark Irura:
Thank you so much. And then I move to you, Dr. Mina.

Meena Lysko:
Thank you, Mark. In the context of the Sustainable Development Goals, our training has aimed to support quality education, industry innovation and infrastructure, zero hunger, and responsible consumption and production. I take with me today what you have said, and I think it could be a nice global call: self-regulate as you innovate. Our post-training feedback from previous programmes, as well as feedback from participants in our current programme, is giving a glimpse of how paying it forward is being achieved. And I want to sum up by saying this: proprietary AI systems are generally used to make money, enable security, empower technology, and simplify tasks that would otherwise be mundane and rudimentary. But if AI ecosystems could be designed to take advantage of openly available software systems, publicly accessible datasets, openly available AI models and standards, and open content, they would make digital public goods available for Africa as generally free works, and hence contribute to sustainable continental and international digital development. Thank you, Mark.

Moderator – Mark Irura:
Thank you. And then I move to you, Darlington.

Darlington Akogo:
Yeah, so I think we are at one of the best moments in human history, where we are building technology that finally digitizes what makes us special as a species, and potentially even surpasses it. The potential is beyond anything we can think of. We are what, seven years away from the deadline of the SDGs, and there is a lot of realization that we are not close to meeting the targets. I strongly believe that if we can double down on using AI properly and ambitiously, whether in Africa, Asia or anywhere else in the world, if we can seriously double down and invest properly in it, we can address just about everything on the SDGs. There is no limit to how far AI can go, especially in the context of foundation models now and how general they are. So I would say let’s double down on it, but let’s do this in a very responsible and ethical way, so that as we are solving the SDGs, we don’t create a new batch of problems to become the next set of targets. So let’s leverage AI and solve the SDGs.

Moderator – Mark Irura:
Thank you so much. And before Susan closes for us, her team has been working on an interesting project; maybe she can tell us about it and then give us her closing remarks. It’s being projected before you: a tool that can help citizens learn about the Act, and it can communicate in Sheng, a mixture of Swahili and English. So over to you, Susan.

Susan Waweru:
Thank you, Mark. So, just to quickly run through it: one of the things we are developing is an AI chatbot to provide the services that the ODPC should provide. This chatbot uses natural language processing, trained on large datasets of the questions the citizenry may have about the ODPC. It speaks both English and Swahili, the two official languages in Kenya. So, Sumayya, if you may, just ask it: what is a data controller? This is an awareness tool, a tool to enhance compliance, and a tool to bring services closer to people. It also overcomes challenges such as the eight-to-five working day: as a data controller or a data processor seeking to register or make a complaint, you are not limited to working hours; that can be done at any time. It gives information, it gives processes, and it is all free of charge, which makes it accessible. To end the session, my clarion call is that AI is inevitable. We are already using it; it is already on our phones, already in our public services. The main thing I would say is to keep it human-centred. Even when developing the chatbot, we put ourselves in the shoes of the citizen more than considering the benefit of the organization. So if we can promote human-centred AI and bring out the benefits more than the risks, that would be best. The way to do this is to demystify AI, and a panel such as this is one of the ways we do that. Currently AI is seen as a scary big monster, which is not what it truly is; that is only what it could be, and it has many more benefits, especially for public service delivery. And with that, Mark, I just want to say thank you to you and the organizers, Sumayya and Bonifaz, and largely to the IGF.

Moderator – Mark Irura:
Thank you so much. Let’s give a round of applause to everyone who’s contributed to this conversation. I hope the session has been valuable to you. I hope you learned something. And I hope we can connect. I hope we can talk more about the topic. Thank you so much. And thank you online as well for joining us. Thank you so much. Bye.

Audience

Speech speed

183 words per minute

Speech length

189 words

Speech time

62 secs

Darlington Akogo

Speech speed

185 words per minute

Speech length

1254 words

Speech time

406 secs

Meena Lysko

Speech speed

134 words per minute

Speech length

1635 words

Speech time

731 secs

Moderator – Mark Irura

Speech speed

148 words per minute

Speech length

2163 words

Speech time

876 secs

Susan Waweru

Speech speed

161 words per minute

Speech length

1629 words

Speech time

606 secs

Yilmaz Akkoyun

Speech speed

146 words per minute

Speech length

919 words

Speech time

377 secs

Zulfa Bobina

Speech speed

182 words per minute

Speech length

1291 words

Speech time

427 secs