Towards a Safer South: Launching the Global South AI Safety Research Network
20 Feb 2026 17:00h - 18:00h
Summary
The event marked the launch of the Global South Network for Trustworthy AI, introduced by Dr. Urvashi Aneja at the India AI Impact Summit to address AI deployment challenges in the Global South [8-9]. She highlighted that AI is rapidly being used in critical sectors across the Global South but that low institutional capacity and deep inequities create significant risks, and that the region is under-represented in global safety and governance structures [11-18]. Independent civil-society organisations were presented as uniquely positioned to provide grounded evidence from real-world deployments that can inform global benchmarks and standards [19-22].
The network’s core activities will include building an independent evidence base, conducting contextual real-world assessments, and advancing evaluation science beyond existing benchmarks [29-33]. Specific flagship projects for the coming year were announced: multilingual AI benchmarks with the Collective Intelligence Project, a gender-harm taxonomy with GXD Hub, and work to link evaluation outcomes to public-policy procurement mechanisms, which are seen as a lever for responsible innovation in the Global South [43-52]. Mr. Abhishek Singh emphasized that safe and trusted AI is a universal goal but that current multilingual benchmarks are lacking, noting India’s 22 official languages as an example and praising the New Delhi Frontier AI commitments as a step toward shared data and evaluation tools [66-77][88-92]. He warned that without capacity-building and resource sharing, the Global South will remain excluded from shaping AI safety standards [84-87][98-104].
Ambassador Philip Thigo reinforced the urgency of inclusion, calling the network timely yet late, and proposed regional nodes, multilingual benchmark datasets, an annual red-team exercise, and a Global South AI Safety Report to integrate the network into multilateral processes such as the UN AI governance panel [138-146][170-178][179-182]. Dr. Rachel Sibande argued that safety definitions must be re-defined to reflect local cultural, gender, and linguistic contexts, illustrating how mistranslation of a pregnant mother’s warning could miss a critical health signal [216-227]. Ms. Chenai Chair added that gender-biased voice interfaces and the diversity of African languages can exacerbate existing inequalities and even turn benign technologies into surveillance tools [240-269].
Natasha Crampton from Microsoft described the challenge of scaling community-led, multilingual evaluations to thousands of languages and stressed the need for sustainable, ongoing assessment processes [276-284]. Amir Banifatemi pointed out that safety is poorly defined, lacks financial incentives, and suffers from talent and infrastructure gaps, proposing open-source evaluation tools and incident-reporting systems to close feedback loops, especially where latency and regulatory mechanisms are weak [296-311][312-322]. Balaraman Ravindran noted the proliferation of overlapping AI safety initiatives and called for coordinated effort through a single node in the global accountability network to avoid duplication and amplify impact [330-337][338-342].
The speakers agreed that the network will serve as connective tissue between global governance, technology developers, and on-the-ground stakeholders, aiming to make AI trustworthy and inclusive for the Global South [37-39][59-60].
Keypoints
Major discussion points
– Urgent need for trustworthy AI in the Global South and the current under-representation of these regions in global safety governance.
The speakers note that AI is rapidly deployed in critical sectors across the Global South, but low institutional capacity and deep inequities create high risks, while the region remains “under-represented in global safety and governance infrastructures” and often lacks its own oversight institutes [11-14][15-18].
– The Global South Network for Trustworthy AI as a civil-society-driven platform to generate real-world evidence, improve contextual evaluation, and advocate for inclusive governance.
The network aims to “build an independent evidence base,” conduct “real-world deployment assessment,” and push the “science of evaluations” beyond standard benchmarks, while also “field building” and providing “connective tissue” between global governance and on-the-ground realities [29-33][34-38][43-50].
– Key structural challenges identified: multilingual and cultural mismatches, limited access to compute, concentration of benchmark-setting power, and gaps in talent and infrastructure.
Participants highlight the scarcity of “multilingual benchmarks” for the many languages spoken in the Global South, the “access to compute” problem for researchers, the fact that “benchmarks are not neutral” and are often defined by a handful of institutions, and the lack of “talent inclusion” and appropriate “infrastructure” for evaluation [73-78][158-162][165-166][304-311].
– Planned flagship projects to address these gaps, including multilingual benchmark development, gender-harm taxonomy, procurement-lever strategies, and sector-specific evaluations (e.g., health information systems).
The network will work with partners on “benchmarks for multilingual AI,” build a “taxonomy of gender harm,” support “procurement” as a lever for responsible innovation, and evaluate “labor market impacts” and “health information systems” in the Global South [43-50][54-58][59-60].
– Calls for coordinated, regional, and multilateral structures to amplify impact and avoid duplication of effort.
The Ambassador proposes “regional nodes” and a “Global South AI Safety Report,” while other speakers stress the need to “harmonize” the many emerging initiatives, integrate the network into the UN AI governance process, and create a shared “steering committee” that includes Indian and Kenyan representatives [171-179][184-186][330-342].
Overall purpose / goal of the discussion
The session was convened to launch the Global South Network for Trustworthy AI and to articulate its mission: creating a civil-society-led ecosystem that generates context-specific evidence, builds multilingual and culturally aware evaluation tools, and advocates for the inclusion of Global South perspectives in global AI safety standards and governance frameworks.
Overall tone and its evolution
– The opening remarks are enthusiastic and celebratory, thanking partners and expressing excitement about the launch [4-9][11-14].
– The conversation then shifts to a problem-focused, analytical tone, detailing systemic gaps, risks, and technical challenges [15-22][73-78][158-166].
– As the panel proceeds, the tone becomes collaborative and solution-oriented, highlighting concrete project plans, regional coordination ideas, and commitments from industry and multilateral actors [43-50][171-179][348-349].
– The closing moments retain a hopeful and forward-looking tone, emphasizing rapid action, partnership, and the urgency of turning discussion into tangible outcomes [353-358].
Overall, the discussion moves from celebration of the network’s inception, through a sober assessment of existing deficiencies, to a constructive agenda for collective action.
Speakers
– Dr. Urvashi Aneja – Founder and Director of Digital Futures Lab; host and moderator of the session.
– Mr. Abhishek Singh – Under-Secretary, Ministry of Electronics and Information Technology, Government of India [S7].
– Ambassador Philip Thigo – Special Envoy on Technology, Republic of Kenya [S4].
– Mr. Quintin Chou-Lambert – Chief of Office and AI Lead, UN Office for Digital and Emerging Technologies [S16].
– Ms. Natasha Crampton – Vice President and Chief Responsible AI Officer, Microsoft [S17].
– Dr. Rachel Sibande – Senior Program Officer, AI for Africa, Gates Foundation [S10].
– Ms. Chenai Chair – Director, Masakhane African Language Hub [S12].
– Dr. Balaraman Ravindran – Professor, IIT Madras; Head, Center of Responsible AI, IIT Madras; member of the UN scientific panel on AI [S1][S2].
– Mr. Amir Banifatemi – (Speaker; specific title not stated in the transcript).
Additional speakers:
– None identified beyond the list above.
The session opened with Dr. Urvashi Aneja welcoming participants to the India AI Impact Summit and formally launching the Global South Network for Trustworthy AI. Aneja highlighted that AI is being deployed rapidly across health, education, the judiciary, and government in the Global South, creating “immense” opportunities but also “immense” risks because many of these contexts suffer from low institutional capacity, deep societal inequities and low literacy levels [11-14]. She warned that the region is “under-represented in global safety and governance infrastructures” and that many countries lack their own oversight bodies, leaving local concerns at risk of being ignored [15-18]. She later opened the panel discussion by asking Dr. Rachel Sibande where clarity is lacking about safe AI in the Global South [210-214].
Aneja then positioned independent civil-society organisations as uniquely suited to fill this gap, arguing that their proximity to real-world deployments enables them to surface risks invisible to laboratory testing [19-21]. The Network’s core mission is to build an independent evidence base, conduct contextual real-world assessments, and advance the science of evaluation because existing benchmarks do not capture all societal risks [29-33]. The network will give visibility to technology companies designing tools and safety infrastructure, as well as to governments and international organisations shaping global AI-governance architecture [70-73]. It will also act as connective tissue between the global governance architecture, the global safety infrastructure, and what’s happening on the ground [70-73].
Five flagship projects for the first year were announced:
* development of multilingual AI benchmarks in partnership with the Collective Intelligence Project and CARIA [43-45];
* creation of a taxonomy of gender-related harms with GXD Hub and the Global Centre for AI Governance to improve incident-reporting databases [46-47];
* work on procurement levers, linking evaluation outcomes to public-policy procurement to shape markets for responsible innovation [48-53];
* a labour-market impact study; and
* health-information-system evaluations to test whether large language models meet clinicians’ needs in the Global South [54-58][59-60].
Mr Abhishek Singh reinforced that safe and trustworthy AI is a universal goal, but identified a critical shortfall: most benchmarks are English-centric, ignoring the 22 official languages of India and the linguistic diversity of other Global South nations [73-77]. He praised the New Delhi Frontier AI commitments, which require model developers to share usage data and to publish multilingual performance benchmarks, and asked how compliance can be ensured and capacity built across the region [84-92][98-104]. Singh cautioned that without tools and benchmarks, merely identifying risks is insufficient [71-73].
Ambassador Philip Thigo echoed the urgency, noting that the Global South has been “systematically excluded” from safety conversations and that Kenya is currently the only member of the international AI-safety-institute network [138-141]. He enumerated four structural gaps: limited team capacity, access to compute, linguistic and cultural mismatches, and the non-neutrality of benchmarks, which concentrates power in a few institutions [157-166]. He proposed establishing regional nodes (e.g., an African hub) [160-162], creating multilingual benchmark datasets, organising an annual red-team exercise, and publishing a Global South AI Safety Report to feed into multilateral processes such as the UN AI-governance panel [170-179][180-182].
Dr Rachel Sibande (Gates Foundation) argued that safety must be re-defined to reflect local cultural, gender, religious and linguistic norms. She illustrated the danger of mistranslation with a pregnant mother’s phrase “waters have broken”, which could be rendered as “I have thrown away water” and thus miss a critical health alert [216-227]. She called for community-informed analyses of societal, ethical and distributional risks [216-218][229-232].
Ms Chenai Chair, Director of the Masakhane African Language Hub, added that developers often overlook user experience and gender dynamics, citing a voice-enabled agricultural tool that used a male-sounding voice in a context of gender-based violence, thereby exacerbating existing inequalities [236-247]. She highlighted the vast linguistic diversity of Africa (over 2,000 documented languages, of which Masakhane currently supports only about 50), which leads to mismatches when tools are deployed in local dialects [248-255]. She warned that benign technologies can quickly become surveillance tools when communities are not consulted, giving the example of luggage-tracking devices being misused [256-269].
From the industry side, Natasha Crampton (Microsoft) described the challenge of scaling community-led, multilingual evaluations. She noted that projects like Samishka, which combined civil-society insight with research, must be turned into sustainable, ongoing evaluation pipelines that can operate across thousands of languages and cultural settings [276-284]. She stressed that benchmarks cannot be a one-off activity; they need to be run continuously to capture shifts in model behaviour [281-284].
Amir Banifatemi (ITS Rio) pointed out that safety is poorly defined and rarely costed into financial planning, meaning firms lack incentives to prioritise it [296-311]. He identified gaps in compute access, talent inclusion, and system-wide evaluation tools, arguing that current assessments focus narrowly on model design and ignore the broader ecosystem of APIs, data pipelines and infrastructure [312-322]. He advocated for open-source incident-reporting tools that capture contextual harms and for mechanisms to accelerate feedback loops in the Global South, where institutional latency hampers rapid response [321-322].
Professor Balaraman Ravindran (IIT Madras) observed a proliferation of overlapping AI-safety initiatives, including networks in Africa and China and UN-led capacity-building programmes, creating a risk of duplication [330-342]. He urged the Global South Network to serve as a single node within the broader accountability network, coordinating efforts and harmonising activities to amplify impact [330-337][338-342].
Mr Quintin Chou-Lambert (UN Office for Digital and Emerging Technologies) warned that technical standards alone cannot ensure safety, as a one-size-fits-all approach fails to capture contextual nuances [191-194]. He argued that field-tested, low-resource examples are essential to surface challenges that large-scale models overlook, and that the Network can feed such empirical evidence into the UN Global Dialogue on AI Governance [195-199].
Rapid-fire commitments
– Microsoft pledged to honour the New Delhi Frontier AI commitments by sharing multilingual data and investing $50 billion by the end of this decade in Global South infrastructure to support scalable evaluation [348-349].
– The Gates Foundation committed to institutionalise safety evaluation at the point of deployment, ensuring issues are caught early [355].
– Masakhane announced a benchmarking initiative for African languages to be delivered within the year [356].
– Amir’s labs in Bangalore and San Francisco will release open-source, culturally contextual incident-reporting tools for public use [358].
Across the discussion, participants converged on the need for multilingual, culturally aware benchmarks (Aneja, Singh, Crampton, Thigo, Chenai Chair, Sibande) [29-33][34-38][43-45][73-77][174-175][216-227][236-247]; they agreed that civil-society insight and inclusive talent are essential for surfacing risks (Aneja, Thigo, Chenai Chair, Banifatemi, Sibande) [19-22][138-141][236-247][304-315][216-218]; and they recognised capacity-building, compute access and infrastructure investment as prerequisites (Singh, Thigo, Crampton, Banifatemi, Aneja) [71-73][158-160][276-284][296-311][34-38].
Key points of disagreement emerged around who should define benchmarks. Singh called for multilingual benchmarks to address risks [73-77], while Thigo warned that benchmarks are not neutral and should not be set by a handful of institutions [161-165]; Banifatemi added that evaluations must consider the whole system, not just model performance [319-320]. On incentives, Banifatemi argued that safety is not costed into financial planning, reducing corporate motivation [307-311], whereas Singh stressed that safety should complement, not stifle, innovation [105-108]; Aneja suggested using public procurement as a lever to drive responsible AI [50-53]. Regarding the scope of safety, Singh focused on technical risk identification [68-73], Thigo broadened it to include environmental, misinformation and lifecycle harms [154-156], Banifatemi emphasised system-wide evaluation [319-320], and Sibande called for a culturally grounded definition of harm [216-218].
Take-aways: (i) AI deployment in the Global South offers great promise but also risks amplifying existing social, gender, linguistic and environmental harms; (ii) identifying risks is insufficient without tools, benchmarks and capacity-building; (iii) the Global South is systematically under-represented in AI-safety governance, and the Network aims to provide field-tested evidence and act as a bridge to global policy forums; (iv) English-centric benchmarks must be replaced by multilingual, culturally aware ones; (v) capacity gaps-compute, talent, sustainable evaluation mechanisms-must be addressed; (vi) governance must be de-concentrated, ensuring benchmarks are not dictated by a few institutions and that safety is financially incentivised; (vii) coordination across overlapping initiatives is essential to avoid duplication and maximise impact [29-33][34-38][43-50][174-175][216-227][236-247][276-284][319-322][330-342].
Unresolved issues include: (a) a precise, universally accepted definition of “safety” and “harm” that captures diverse cultural contexts; (b) concrete mechanisms to cost safety into corporate financial planning or impose penalties for unsafe AI; (c) design of ongoing, scalable evaluation frameworks beyond one-off tests; (d) equitable access to high-performance compute for Global South researchers; (e) detailed pathways for the Network to integrate with UN AI-governance processes; (f) strategies to de-concentrate benchmark authority and ensure inclusive risk prioritisation; (g) methods to close the accountability loop so that technical evaluations translate into tangible citizen-level benefits [216-218][307-311][281-284][158-160][170-179][161-165][180-182].
Suggested compromises involve establishing regional nodes to balance rapid activation with local expertise, adopting an open-source, collaborative benchmarking framework that allows multiple institutions to contribute, leveraging the New Delhi Frontier AI commitments as a baseline while expanding multilingual evaluation work, combining top-down UN engagement with bottom-up civil-society evidence generation, and using pilot projects and incremental infrastructure investments (e.g., Microsoft’s $50 bn pledge) as stepping stones toward a sustainable, global evaluation ecosystem [348-349][170-176][S3].
Overall, the launch marked a decisive step toward a coordinated, inclusive AI-safety ecosystem for the Global South, with broad consensus on the need for multilingual, context-sensitive evaluation and capacity-building, alongside notable divergences on benchmark governance, incentive structures and the breadth of safety considerations that will shape the Network’s future trajectory.
Thank you. Good evening, everyone. My name is Urvashi Aneja. I am the founder and director of Digital Futures Lab. And I am so excited to see all of you here and to have you all here for the launch of this network. So it’s a real pleasure to welcome you to the launch of the Global South Network for Trustworthy AI here at the India AI Impact Summit. On behalf of Digital Futures Lab and our other founding partners, CeRAI from IIT Madras, the Global Center for AI Governance, ITS Rio, International Innovation Corps, thank you all for being here. And we’re especially grateful to Mr. Abhishek Singh and Ambassador Philip Thigo and Mr.
Quintin Chou-Lambert and to all our distinguished speakers and guests who are joining us today. Across the Global South, AI systems are being rapidly deployed in critical social sectors such as healthcare, education, the judiciary, and government. And while the opportunities are immense, many of these contexts are also marked by low institutional capacity, deep societal inequities, polarization, and populations with low levels of literacy. So while the potential is immense, the risks and harms are also immense. And so it’s particularly important that we figure out ways to make AI safe and trustworthy in these contexts, to ensure not only that we protect these populations and that we don’t exacerbate existing harms, but also that we build the infrastructure for safe and inclusive AI adoption.
Unfortunately, Global South organizations, Global South communities, Global South states remain underrepresented in global safety and governance infrastructures. And many countries in the Global South are actually unlikely to even have, in the near term, their own safety or oversight institutes. And there’s a real risk, therefore, that the concerns and priorities of these countries, of these communities, remain underrepresented in the global safety infrastructure, precisely those countries that have the most potential or the most opportunity to leverage AI. Independent civil society organizations are uniquely positioned to address this gap. Their proximity to real-world deployment contexts enables them to surface risks that are invisible to lab-based evaluations or testing. The form of grounded evidence that civil society organizations can bring can inform global safety benchmarks, standard-setting processes, and risk assessments, providing corrective signals to technical and regulatory institutions.
The Global South Network for Trustworthy AI works to advance exactly these objectives – to evaluate the real-world impact of AI systems, to build the trust and oversight mechanisms localized to different linguistic, cultural, and infrastructural contexts, and to elevate Global South perspectives in global AI governance forums. It is particularly encouraging that this initiative also aligns closely with the recently announced New Delhi Frontier AI commitments. The Global South Network for Trustworthy AI brings together some of the leading research institutions from across the Global South. We are joined by a community of organizations from Asia, from Africa, from Latin America, whose names you see displayed behind you. I also want to take this opportunity to highlight some of the key activities that we’re going to be doing as part of the network. I think one of the key things that we want to do as part of the network is to really build an independent evidence base to generate community-informed analysis of the societal, ethical, and distributional risks of AI systems across diverse contexts.
We also want to do real-world deployment assessment: to conduct contextual and public evaluations of models and applications across diverse social contexts. We also want to push the field of evaluations, push the science of evaluations, where we say that benchmarks are very important, but benchmarks as they stand today do not necessarily capture all the societal risks that we see in the Global South. So how do we ensure that the evaluation work that we’re doing also captures some of those harms? In some sense, what we want to do with the network is field building. We want to bring together Global South civil society organizations to pool their collective intelligence, to pool their capacities, and to advocate together for the representation of Global South concerns on global governance forums.
So what we are trying to do here is field building within the Global South around AI safety and around building that trust infrastructure. And eventually what we hope is that all of this amounts to collective advocacy. We see an important role that the network will play in creating a connective tissue between the global governance architecture, between the global safety infrastructure, and what’s happening on the ground. We hope the network can provide that visibility into real-world impact to technology companies who are designing tools, who are designing safety infrastructure, as well as to governments and international organizations who are building the architecture of global AI governance. So with that, I want to thank you all. Oh, wait, I have one more thing to share with all of you.
I’m not ready to thank you yet. I also want to showcase some of the projects that we’ll be doing in the coming year. Picking up on yesterday’s commitments, one of the things that we’ll be doing is building benchmarks for multilingual AI. This is with our network partners, the Collective Intelligence Project and CARIA, and we’re really excited to start this work. We’re also going to be doing work on gender and safety. This is with our partners at GXD Hub and the Global Center for AI Governance to build a taxonomy of gender harm so that we can start building a more robust incident reporting database when it comes to gender-related harms and really advance gender safety in digital spaces.
The third piece that we’re going to be working on this year is around procurement. All of the evaluation work that we do, all the benchmarks that we build, all of that has to eventually feed into public policy. And so we hope that some of this work can support procurement. And procurement, we think, is a really important lever for countries in the global south to shape markets for responsible innovation. I think we’ve all heard a lot about the kind of third way of AI governance that India brings to the global governance landscape. And procurement can be an important lever of making that third way a reality and setting the bar for what responsible innovation looks like.
Like I mentioned earlier, we also want to push on the science of evaluation. What does good evaluation look like? What are the kind of methodologies that we need? What are the kind of methodologies that reflect the concerns and the capacities of communities in the global south? So we’re very excited to be doing this work with ITS Rio, who’s also one of the founding partners, and specifically to implement and advance this discussion on evaluations. We’ll be looking at labor market impacts in the global south. And finally, we’re going to be looking at evaluations of health information systems: do the existing generative AI tools and large language models that we see deliver for clinicians? Do they deliver for doctors? What more can they do to support the needs of healthcare professionals in the global south?
So those are the five kind of big flagship projects that we’re going to be launching within the coming year. We’re going to be very busy, as you can see; we have a lot that we’re going to try and get done, and we’re really excited to be on this journey with all of you. We would love to engage with all of you post the launch and see how we build this civil society and research infrastructure together. So with that, I am delighted to welcome our keynote speakers first, and I would like to give the floor to Mr. Abhishek Singh. Sir, thank you for your continued support. Thank you for the network and for your leadership on the India AI Summit.
Over to you, sir.
Thank you, Urvashi. And first and foremost, I’d like to congratulate all the team, the network which has brought this together, this Global South Network for Trustworthy AI. A few months back, when we started discussing this concept with Urvashi, with Kalika, with my team, we asked ourselves: how do we go about it? Because safe and trusted AI is something that nobody disagrees with. Everybody says that whenever AI innovation is happening, we must ensure that we protect ourselves, that we secure ourselves from the harms that can come from misuse of AI or from the risks that frontier AI poses. So yes, we did have Yoshua Bengio’s report, the scientific panel report, which is part of all the impact summits, the Action Summit and the Bletchley Park Summit, which identifies the risks that frontier AI models pose. But what we do believe is that just identifying the risk is not sufficient. We need to think of how we address those risks. And for addressing those risks, you need to first have the technical tools, the capacity to identify those risks. What are the benchmarks on which you will evaluate them? Some of which Urvashi identified, like how do various models perform on multilingual benchmarks? Because very often, most models are evaluated on benchmarks which are predominantly in the English language. But if you look at India, a diverse country, we have 22 official languages and multiple other dialects. How do we evaluate how a model performs on various domains in prompts given in those languages?
We don’t have specific linguistic benchmarks. The same applies to many countries of the Global South. So we felt that while limited expertise exists in some institutions where research is going on, like CeRAI, where Professor Balaraman Ravindran is leading it, there are many labs, of course, whether it’s Microsoft Research or other labs wherein such work is going on. The AI Security Institute in the UK is doing some work in this direction. The OECD has been doing some work. But how do we ensure that we enable access to such resources, such tools, such studies for the larger global majority? So with that, this whole concept of creating a Global South Network for Trustworthy AI came in.
And then we immediately had these conversations with all the key stakeholders, partners. We got a lot of support from almost all stakeholders. And along with that, the conversation for the New Delhi Frontier AI commitments was also going on, which Kalika from my team was leading. And luckily, we were able to announce it, in which all models committed to those two commitments about sharing usage data as also multilingual performance benchmarks. So that was a huge achievement. And I feel that the launch of this Global South Network for Trustworthy AI is a further step in that direction. How do we enable compliance with those commitments? How do we ensure that this data will be shared?
How do we create tools for evaluating models in various languages? How do we build up capacity in all countries of the Global South? How do we share resources? How do we share knowledge across? So this is just the beginning, and I feel that with support from all industry organizations, the frontier AI labs, the research organizations, and governments across the world, this can really grow into a resource that can be a global utility. So I compliment all the team which is involved in doing that. The launch of the network is the first step. But how do we action it out? How do we make it functional? How do we ensure that we get necessary support from all stakeholders?
Very often, whenever we talk about trusted AI or safe AI, some people think that we are trying to stifle innovation. That is not the objective. We always say that while the primary objective is to ensure the diffusion of AI — to ensure that more and more users benefit from its usage — we need to do that in a responsible, safe, and trustworthy manner, to limit the harm that can be caused. So this Global South Network for Trustworthy AI, which is being launched, will work in that direction. It will be an institution that supports not only India but the entire Global South.
And with the presence of all the speakers in this session, and the strong commitment that industry, countries, and multilateral organizations are showing to this initiative, I am sure it will be further strengthened in the days to come. There is a lot of work that Urvashi and team are taking on their own, but we will be there to provide all necessary support through the India AI Mission, and we will work towards ensuring that you get the same level of support from every participating country that is here. So thank you once again, congratulations on this launch, and I look forward to working towards these objectives in the near future.
Thank you.
MODERATOR: Thank you, sir, for your remarks and most importantly for your support. It means a lot to us to be working so closely with the India AI Mission, and we’re really excited to be able to deliver on this promise. It’s now my honor to invite Ambassador Philip Thigo, the Special Envoy on Technology from the Republic of Kenya, to share his reflections.
AMBASSADOR PHILIP THIGO, SPECIAL ENVOY ON TECHNOLOGY, KENYA: Thank you so much for this opportunity to share my reflections. I notice that this is really a women-led network, so again, congratulations, Urvashi and Rachel, for putting this together. Before we celebrate the launch of the network, I think we must acknowledge that we are working with the right people, and that we have a lot of good people here. But we must also acknowledge the structural problem around the safety conversations and the infrastructure that has been built up around safety in the last three years. The Global South has always been excluded from this conversation. I say this from a position of strength, because Kenya is, I think, the only member from our region in the international network of AI safety institutes.
And so there’s a challenge there. A model that is not inclusive of the global majority — which in most cases bears the brunt and the impacts of AI — is not acceptable. So this network, in my sense, is timely but also late, and there’s almost an urgency to how closely we need to work on scaling up what this network does. The second part, as I mentioned, is that the global majority countries are not at the table, yet they are the ones that bear not just the brunt of the models but their adverse societal harms. Kenya is one of the countries that most uses one of these models,
and from the use cases we see that people use it for the wrong reasons — emotional support or companionship, not necessarily for anything meaningful or for productivity. So as the world advances, it behoves us to work with these frontier model companies to ensure that their models are safe beyond secure, but also more trustworthy. The next part, of course, is that model evaluation assumes access. We now know that a lot of my colleagues doing model evaluations are doing it from an external point of view. So we need to be very clear about global majority countries — and when I say global majority countries, we also have a new global south in AI, because it’s not just the global majority.
We know the global north of artificial intelligence is two countries and a few companies. So we must, beyond this, extend to include other colleagues, whether from Western Europe or Latin America. Safety must also go beyond technology towards socio-technical issues. We look at AI in countries like Kenya from mines to models, and so safety must include environmental harms, biases, misinformation and disinformation, but also harms to water and the environment — we need full lifecycle accountability. It’s good to evaluate the models, but it’s also good to evaluate the footprints of the models. There are four structural gaps that we see, and this is why I love this network. One: yes, you want global majority folks to evaluate the models, but we have red-teaming capacity gaps, and I hope that this network will look at this.
Secondly, there are issues of access to compute. We can’t have global majority researchers trying to evaluate models without access to the compute to do so. The third part, which has already been mentioned, is issues around linguistic and cultural mismatch. The other part is benchmarking as governance power. Benchmarks are not neutral — I like to be honest, because that’s what evaluation demands. In most cases, we need to ensure that only a handful of institutions do not get to define what risks are measured, what harms are prioritized, and what safe performance means. Governance is about power, and we must deconcentrate that power, even if the concentration is unintentional.
Finally, for me, evaluation is also about agency. We must bring a notion of agency to these models, including sovereign capability. As we know, a lot of our countries are trying to build sovereign models, but also sovereign capabilities across the stack. What should this network deliver, in my view? I’ll humbly make these quick suggestions. One: it’s good to have the network, but can we have regional nodes? Because Africa — and I speak for Africa — is not a country; it’s 54 countries, so expand to have nodes. Secondly, include multilingual benchmark datasets. An annual red-teaming exercise could be interesting. And why not publish a Global South AI Safety Report with an expansive definition of what safety is.
And I would be remiss if I didn’t ask how we fit this into the multilateral process. We already have a global UN scientific panel on AI, and there’s a global dialogue on AI governance. I’m one of the champions for this, so hopefully we will get this in there. Finally, let’s close the accountability loop. How does all this ultimately matter for citizens? We can evaluate all we want, but if the results don’t translate into outcomes for citizens, they won’t matter.
Thank you, Ambassador, for highlighting the urgency of this work and for reframing the safety conversation for the Global South. And just to say, we are planning to have regional hubs. The point about how we engage with the multilateral system is very important: we will have the India AISI as part of our steering committee, and we hope we can work with the government of Kenya as well. And, of course, we have Professor Ravindran, who is part of the scientific council, so we will be relying on him too. Thank you for your remarks. With that, I’d like to call our final keynote speaker for the day, who represents the UN Office for Digital and Emerging Technologies.
I’m pleased to invite Mr. Quenchen Chow Lambert, the Chief of Office and AI Lead, to deliver the next keynote.
There is less, perhaps, infrastructure or energy connection to go around. So the concept of AI safety edges into this more contextual field, and that’s where local perspectives and field-tested examples can be very helpful to surface — which is what we’re missing. And I’d say the idea of AI standards as purely technical standards doesn’t solve that issue, because a one-size-fits-all standard will not be contextually sensitive. So moving from scaling a very concentrated, highly expensive model across a massive user base to small language models tailored to context turns AI safety into a fuzzier discussion, and one which really needs empirical evidence.
And consider the trends in the institutional discussions: from Bletchley Park to Seoul, where around 30 countries signed the declaration, to Paris, where you had 60-plus, and now here, with over 100 countries engaging. We now have the United Nations Global Dialogue on AI Governance, which will include all 193 member states, informed by analysis from an independent international scientific panel on AI that will look at the risks as well as the opportunities and impacts of AI. As the conversation in these summit settings and at the international level has widened to include more countries, more people, and more of humanity, the focus — helped also by open-source developments — has been allowed to broaden to encompass other perspectives.
And that’s why, to close and to echo Ambassador Thigo, these kinds of networks play a crucial role in connecting and bringing examples of the challenges that we face — cases of threats from various sources to local people — into discussions, so that international discussions do not ignore, omit, or discount the perspectives of the vast majority of people on the planet. Thank you very much.
Thank you, Mr. Chow, for those remarks. I’d now like to call our panelists onto the stage: Ms. Natasha Crampton, Vice President and Chief Responsible AI Officer at Microsoft. Dr. Rachel Sibande, Senior Program Officer, AI for Africa, at the Gates Foundation. Before you sit, we’re going to take one quick picture. Ms. Chenai Chair — I don’t see you. Oh, there you are. Yes, okay — Director of the Masakhane African Language Hub. Mr. Amir Banifatemi, Chief Responsible AI Officer at Cognizant. And last but certainly not least, Dr. Balaraman Ravindran, Head of the Centre for Responsible AI at IIT Madras. Yes, and can we get the keynote speakers as well? Thank you. As with all good things in life, we’re short on time.
But let’s get started. Rachel, I’m going to start with you. Where, according to you, do we still lack clarity on how safe and reliable AI systems are when they’re deployed in real-world contexts in the Global South?
Thank you. A couple of things — maybe three. Number one: we need to redefine what is safe and what is harmful, as far as AI models or applications are concerned, according to the socio-cultural context they are deployed in. That means that having models or applications that are great at understanding data or patterns to generate content is not enough if they do not understand the social norms, the gender dynamics, the religious beliefs, the political sensitivities, or indeed even the humor, the slang, or the tone — particularly now that voice is becoming a key channel for the delivery of AI. So we need to redefine safety and harm in the context in which AI models are deployed.
I think we’re missing that, but hopefully we get there. The second piece is around language. It’s not enough for a large language model to have strong translation capabilities. Language is not just about vocabulary; it’s also about lived meaning and lived experience. I come from a beautiful country called Malawi, also known as the warm heart of Africa. Now, suppose you’re deploying a model there for pregnant mothers to access advisory messaging. If a mother says her waters have broken — which clinically is a critical incident that should warrant referring that mother to a health facility — and you translate that from the local language into English, which is what most of these large language models and applications have been benchmarked on, it will literally mean "I have thrown away water."
So if the model is not trained to understand that context, you will miss that flag. And finally, we also need to understand the harms that emerge as people use AI models. Currently, much of the benchmarking is done on content and predefined metrics. A final example: personally, I use my AI companion as my therapist. It’s the one persona that knows a lot about my personality from all spheres — as a mother, as a career person, my finances, all of that. But at what point can we track whether I’m substituting that AI model or application for my own cognitive capabilities, or becoming overly emotionally dependent on it?
So I think there are those three areas that we’re missing, and hopefully we can get better at it. Thank you.
Thank you, Rachel, and thank you also for those powerful examples. We’ve been saying some of this at an almost theoretical level, and those examples really bring home the gaps in the current safety conversation. Chenai, from a civil society perspective, what do you feel companies or developers often miss about the safety implications of deploying AI systems in the Global South?
That’s one thing they miss: the user experience. On a more serious note, thank you, Urvashi. It’s great to piggyback on what you said — I was wondering whether we were reading the same notes. What is really missed when people deploy some of these solutions is the context in which they’re deploying the tool. Look at the African continent: there are high levels of gender inequality, a very youthful population with young people often unemployed, and older people forgotten in the development of technologies. So I don’t know who we’re developing for, but sometimes we don’t consider that diversity and the inequalities that exist.
So you can find that when these tools are deployed, they actually further exacerbate a situation of inequality. I’ll give you one example: an agricultural tool with a voice system, meant to give farmers or women information on what to plant, may have a male-sounding voice. If in that context there are high levels of gender-based violence or a lack of trust, and the community members were not consulted in the design process, what it leads to is exacerbating an already existing situation. And that actually did happen when people were deploying Internet solutions for a community. Secondly, think about who gets left behind when these solutions are deployed.
This is where language, as Rachel was mentioning, comes in. On the African continent, we have over 2,000 documented languages. Masakhane is working on only 50 of those African languages to build up quality datasets. So what you find is that when people deploy technologies, even in something like Kiswahili, which now has a large number of datasets, people just don’t speak the same Kiswahili across East Africa. In Kenya, if you go to Nairobi, what’s spoken will be Sheng — it’s not even Kiswahili, as I’m being corrected. And if you go to the coast, in Mombasa, it will be completely different. So we have to take into account the context and nuance of what is being deployed.
And lastly, there’s the way the technology is actually used: if deployment doesn’t take into account the whole ecosystem of the end user, it can result in misuse. I want to specifically say that there are two forms of misuse here. There are people who unintentionally carry out a problematic, harmful act online based on how they’re interacting with the technology — and we know that content moderation for the global majority is not sufficient, particularly for content in local languages, and that moderators are underpaid, as we’ve seen in the cases that came out about content moderators in Kenya. Then there’s intentional misuse. This is where we find gendered disinformation and the use of deepfakes to discredit people, particularly around election periods.
And now that AI is increasingly open and people can just type something and get something back, we are seeing a high level of deployment without thinking about the downstream impact. To close it off — because I’m talking about AI as if it’s coming later — consider AirTags. When they were deployed, it was great: I can track my missing bag on a flight. They have now been put into women’s bags or children’s bags by people they do not know, who track them. That’s an act of surveillance that, if people had been consulted, might have been mitigated against. Yes, I do want to know where my bag is, but I don’t want to be tracked unknowingly.
Thanks, Chenai, for that, and for bringing the gender dimension to the table and highlighting how quickly what seems like useful technology can become surveillance technology. I’d like to now bring the industry perspective into this conversation. Natasha, maybe I can start with you. As you scale systems globally, what are some of the hardest constraints that you as a company face in ensuring context-sensitive safety?
Well, thanks for that question, Urvashi, and congratulations to everyone on the establishment of the network — I think it’s a really important step forward. When I think about Microsoft, I think about Microsoft’s scale: our mission is to empower every person and every organization in the world to achieve more. And one of the challenges we face in scaling up our efforts here is how to take the very deep, careful, thoughtful, community-led evaluation work that animated a project like Samishka — which Karya, the Collective Intelligence Project, and Microsoft Research worked on together, and which developed very context-aware evaluations appropriate for the use case.
And how do we take that thoughtful work and really scale it up? Because we want to do that type of work for thousands of languages and probably millions of different cultural settings. So we really need to think about the system by which we build multilingual and multicultural evaluations that we can run broadly. Sometimes we don’t appreciate how sustainably evaluations need to be run: you can’t just do it once before you release a product. You need to run the evaluations on an ongoing basis to understand how there might have been shifts. And so I really think we need to think about this as a system.
How are we going to build a sustainable, grounded, community -led system of scalable evaluation?
Thanks, Natasha. I hope the network can play at least part of that function — building coherence in the space of evaluation and helping us develop a shared vocabulary and a shared set of methodologies together as organizations. Amir, what do you think needs to change, whether internally within companies or externally in the ecosystem we’re operating in, to make such grounded evaluations — the kind Natasha was talking about — standard practice for industry? Should they be standard practice? And if so, how do we get there?
Thank you for that question, and first, congratulations — I’m happy to be part of this network and to support it. Natasha mentioned part of the foundational questions. Putting on my hat as Cognizant’s Chief Responsible AI Officer: we work with a lot of companies and governments on deploying new scenarios — call them systems, applications, or anything else. The concept of safety, as was mentioned, is diffuse. It’s not very clear what we’d call safety, so evaluating the underlying element that needs to be changed or addressed is not obvious. And when we talk about models, a model is not just one thing that you deploy. It goes into an application; there’s a system, infrastructure, network access, APIs, connected data access.
All of them are contextually different, as was mentioned before. And then one of the problems — you didn’t ask me about problems, but there is one — is a lack of imagination. The people building systems have no awareness of the context in which those situations occur, how they occur, what the causes are, and how likely the harms are to happen. Absent that, all this context — of which language is a part, and culture is a part — is not captured. And without it, there is very little capability to address the issue from a regulation or incentive perspective. Safety, on the other side, is not costed into financial systems.
There is no penalty for not being safe. As long as there is no constraint that puts safety into the cost structure, backed by a strong mandate, companies will not pay attention, or not enough attention. If it’s not part of the financial planning and the processes, it won’t happen. So there is a disconnect between what we do as enterprises to make sure that systems and platforms are properly built and deployed, and the systems in which they are deployed. At the same time, there is a talent inclusion piece that is missing: the talent building these safety conversations is not the talent exposed to these issues.
That absent voice also needs to be addressed, not just from a skilling perspective but from an integration perspective. And finally, the infrastructure part. Infrastructure is not just systems, models, and data; it’s also the tooling and the evaluation. It was mentioned that evaluation has to be done differently — and if you don’t know what harm or safety means, evaluation must indeed be different. There is probably an opportunity here to come up with a series of evaluation tools built not only for model design but also for system deployment: as we go from pilot to scale, what issues occur, what examples arise, what incidents happen. Incident reporting is a huge opportunity here, because it will capture, nested in the reports, some of the hidden elements — control issues, data access, absence of regulation, or anything else.
Finally, there is a latency issue. In the Global North — and there are only a handful of countries there, while the list on the other side is much bigger — there are institutional frameworks: you have the rule of law, a very active civil society, and legal frameworks that create an accelerated feedback loop on all these safety incidents. In most Global South countries these mechanisms don’t exist, which delays the feedback loop and compounds the possible harm. So there is probably an opportunity to figure out how to accelerate the learning capabilities and the speed at which we capture knowledge and data, tied to tools that need to be implemented and deployed — either open source or free access — and built with the local contextual environment and talent pool, so that ownership sits with the Global South. All these pieces are important, and the network can incentivize these different pieces, which complete each other, so the Global South can better understand where safety issues lie, where harm can happen, and what corrections can be made at the rhythm that is needed — because rhythms are not exportable, and what we do in one country does not transfer to another.
And finally the network could probably help bring it together.
Thank you for laying that out, and for pointing out how all the pieces link to each other — we can’t go at it at one level alone — and the importance of capacity across all of them. Professor Ravindran, AI deployment is accelerating in the Global South, in India and in many other countries. But so far we haven’t seen as much investment in safety and safety infrastructure. Would you agree? You’re actually asking an academic about investment? Sure, of course there’s not enough money. Why not, and how do we change it?
So I’m going to answer a different question — sure, perfect, like a true academic — I’m sorry, I’ll connect it back to what you asked. There are a whole lot of initiatives being announced at the summit, and things I discovered while having various conversations: there are multiple networks getting launched or already in operation. There is a network in Africa looking at capacity building; there is apparently a network in China, which none of us seems to have heard about, being launched on AI safety and capacity building; there is our network getting launched; and there is the UN initiative on building a network of capacity-building institutes for the Global South, which we had a meeting about this morning as well. So there are just too many of these initiatives getting launched.
And we have to figure out how we coordinate operations among these initiatives. That would be a great multiplier, instead of everybody going out and saying, okay, let me see what small piece of the pie I can get so that I can do these activities, with much more coordination after that. If you remember our initial conversations about starting this, the idea was to be one node in the global AISI network — I can’t even say "global network of safety institutes" anymore, can I? They’re not even safety institutes now. So AISI institutes, whatever AISIs are — this should be one node in the AISI network that represents the unheard voices there, because, as the Ambassador was pointing out, except for Kenya —
and of course India, I presume — we don’t have safety institutes in the Global South that can participate in the dialogue. That kind of larger collaboration framework is something we should enable. Even if you go to Gates, how many different networks would Gates want to spend their money on? If we can say there is one whole operation happening, that would be a great way of harmonizing our efforts. I’ll turn it back to the question. Thank you.
No, I think you raised a really important issue about harmonizing these efforts, and about how this network can play a really important role in the larger AISI network. Luckily, the S remains the same, so we can still go with the acronym, I guess, on the safety network. We’re almost at time, so let’s do one quick rapid-fire round with all the panelists. Natasha, maybe I can start with you: what is the one concrete step your institution, Microsoft, could take in the next year to strengthen AI safety in the Global South?
Well, I’m looking forward to making good on the New Delhi Frontier AI commitments that Microsoft made, which will help advance multilingual and multicultural evaluation work, as well as share data that will help policymakers understand AI adoption within their countries and make the sorts of choices and policy interventions that bring broader access — so, if I can be sneaky, that counts as one thing. The second thing I’m really excited about is that we’re making large infrastructure investments across the Global South, to the tune of 50 billion dollars by the end of this decade. That infrastructure, as Amir and others on the panel have mentioned, is essential to building up this scaled system of sustainable evaluation, so I’m looking forward to those investments too.
Thank you.
Is that a fire alarm or something?
No, no, no, they’re telling us that we have to wrap up I think.
Okay, great — so, wrapping up: we have to get the work rolling. Talking about it is one thing, but we need to actually start the collaboration and get these research efforts going, and we’d love to reach out to partners across the globe. In fact, I’m part of the other UN network as well, and we have been talking about looking at problems that necessarily require cross-border collaboration — as opposed to problems we would solve in our own geography anyway, and then just work with somebody else to solve in two geographies. If we can pick problems that necessarily require people across borders to collaborate, that will certainly drive this, and it will also demonstrate the importance of having the network itself: not just information sharing, but problem solving that can be done only across the network.
Thank you. Rachel, 30 seconds.
Thirty seconds. From the foundation side, it is to really institutionalize the evaluation of the safety of AI solutions right at deployment, because what we see now is that safety issues almost always emerge post-deployment. Thank you.
From the hub side, we actually have a benchmarking initiative going on this year, so we will be contributing to the African benchmarking work, and that will be our output and contribution.
Amazing — looking forward to that. Thank you, Chenai. And Amir, last but not least.
We’re already working with our two labs — one in Bangalore, actually, and one in San Francisco — on safety evaluations, mostly on incident reporting, and we’ve already made it culturally contextual. So I hope we can be helpful by providing open-source tools for evaluation, disseminating them, and making them accessible to the public and available to all partners.
Thank you.
“And while the opportunities are immense, in many of these contexts, many of these contexts are also marked by low institutional capacity, deep societal inequities, popularization, and populations wit…
“Dr Urvashi Aneja formally launched the Global South Network for Trustworthy AI at the India AI Impact Summit”
The knowledge base lists Dr Urvashi Aneja as a participant in the launch of the Global South AI Safety Research Network, confirming the launch event and her role [S3].
“Independent civil‑society organisations are uniquely suited to surface risks invisible to laboratory testing”
Civil-society organisations are described as able to bridge the gap between citizens and governments by conducting independent assessments and surfacing risks that may not appear in lab settings [S49].
“The network will act as connective tissue between the global governance architecture, the global safety infrastructure, and what’s happening on the ground”
The knowledge base notes that such networks play a crucial role in connecting local challenges to global discussions and that they bring unique regional expertise to facilitate sharing and capacity-building across the Global South [S16] and [S128].
“The Global South is under‑represented in global safety and governance infrastructures, with many countries lacking their own oversight bodies”
Discussion in the knowledge base highlights infrastructural barriers and a lack of assurance mechanisms for many Global South countries, underscoring under-representation in safety and governance frameworks [S34].
“Ambassador Philip Thigo is involved in the network and echoed the urgency of addressing AI safety in the Global South”
Ambassador Philip Thigo is listed among the participants in the launch of the Global South AI Safety Research Network, confirming his involvement and support for the initiative [S3].
The discussion reveals strong convergence around three core pillars: (1) the creation of multilingual, culturally‑aware benchmarks and evaluation tools; (2) the centrality of local civil‑society insight, inclusive talent, and contextual understanding; and (3) the necessity of capacity‑building, compute access, and financial investment to operationalise safety measures. Participants from academia, civil society, industry, and diplomacy all endorse these themes, indicating a shared vision for a coordinated, inclusive AI safety ecosystem in the Global South.
High consensus – most speakers, across sectors, articulate overlapping priorities, suggesting that future collaborative actions (regional nodes, open‑source tools, coordinated networks) have broad stakeholder buy‑in and are likely to shape policy and implementation agendas.
The discussion reveals broad consensus on the necessity of a Global South network for trustworthy AI, but significant divergences arise around benchmark governance, incentive structures, and the scope of safety. While speakers align on goals—building evidence, fostering capacity, and ensuring inclusive governance—their preferred pathways (regional nodes, corporate investment, open‑source tools, procurement levers, or regulatory penalties) differ markedly. These disagreements highlight challenges in harmonising technical standards, financing mechanisms, and interdisciplinary safety definitions across diverse stakeholders.
Moderate to high: The core objective is shared, yet the lack of agreement on implementation strategies and the breadth of safety considerations could impede coordinated action unless reconciled. The implications are that without a unified approach to benchmarks, incentives, and scope, the network may face fragmentation, slower adoption of standards, and uneven protection for vulnerable populations.
The discussion was shaped by a series of pivotal remarks that moved the conversation from a high-level problem statement to concrete, multidimensional solutions. Dr. Aneja’s opening framed the urgency, while Mr. Singh introduced actionable benchmarks. Ambassador Thigo’s enumeration of structural gaps broadened the lens to include power and resource inequities, prompting participants to surface real-world examples (Rachel Sibande, Chenai Chair) that illustrated cultural and gendered harms. Technical scalability concerns (Natasha) and economic incentives (Amir) added layers of operational complexity, and Dr. Ravindran’s call for coordination highlighted the risk of fragmented efforts. Together, these comments redirected the dialogue toward actionable, context-aware, and collaborative pathways, culminating in a rapid-fire round in which each participant committed to concrete steps. The key comments thus acted as turning points that deepened the analysis, shifted perspectives, and forged a shared agenda for the Global South network.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.