Democratizing AI: Building Trustworthy Systems for Everyone

20 Feb 2026 12:00h - 13:00h


Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel opened by asking what the greatest obstacle is to coordinating a global AI effort, to which Dr. Saurabh Garg identified governance of sharing mechanisms, the interdependence of hardware-software-protocol ecosystems, and a shortage of talent and institutional capability as the primary challenges [6-12]. He emphasized that while infrastructure can be acquired, expertise must be developed to democratize AI worldwide [9-10].


Microsoft’s chief responsible AI officer, Natasha Crampton, announced a $50 billion commitment to bring AI to the global south by 2030, framing the initiative around five inter-linked pillars [28-33]. The first pillar focuses on building data-centre and connectivity infrastructure while respecting national sovereignty through configurable public and private cloud controls [34-41]. The second pillar targets large-scale skilling, including a program to teach AI-specific skills to two million Indian teachers, recognizing that education drives rapid technology diffusion [46-53]. The third and fourth pillars address multilingual, multicultural AI and local innovation, with collaborations such as the Lingua Africa project and partnerships with Indian AI firms to share adoption data for policy making [54-69].


Natasha stressed that AI products must be “trusted by design” and offer configurable defaults so that different jurisdictions can apply the same models within their own legal and cultural contexts [80-86]. She noted that conflicts between jurisdictions can be mitigated by open-source models and a robust partner ecosystem that enables local adaptation [90-93].


Peter Mattson of ML Commons argued that the current bottleneck for AI is reliability, which can only be improved through common, industrial-scale benchmarks and metrics [108-124]. He described federated evaluation techniques, such as the MedPerf healthcare benchmark, that allow diverse data sets and legal regimes to be tested securely and at scale [135-137]. Both Mattson and Dr. Garg highlighted that measurement of AI performance, energy use, and domain-specific models is essential for widening diffusion and reducing compute costs [310-312][328-330].


The discussion also touched on the role of open data, with Wendy Hall noting that while not all data can be fully open, shared repositories and UN-backed data-governance frameworks are critical for trustworthy AI development [305-311]. Participants concluded that achieving trustworthy, inclusive AI will require coordinated governance, sustained investment in infrastructure and skills, culturally aware technologies, open benchmarking, and rigorous measurement to guide policy and practice [1-5][70-74][94-100][321-324].


Keypoints

Major discussion points


Global coordination and governance challenges for AI collaboration – The panel opened with a question about the biggest hurdles in international AI work, to which Dr. Garg highlighted resource sharing, inter-dependence of hardware-software-protocol layers, governance of sharing mechanisms, and the need for talent and institutional capability [5][6-12]. Later, Justin asked how differing national laws (e.g., the “Brussels effect”) can be reconciled, and Natasha explained Microsoft’s need to embed configurable controls so each jurisdiction can apply the technology safely [75-78][80-90].


Microsoft’s five-pillar strategy for AI diffusion to the Global South – Natasha described a $50 billion commitment structured around (1) infrastructure (data-centres, connectivity, sovereignty controls) [33-40]; (2) skilling (e.g., training 2 million Indian teachers) [46-53]; (3) multilingual & multicultural AI (Lingua Africa, safety benchmarks for Hindi, Tamil, etc.) [54-61]; (4) support for local innovation and data sharing with policy makers [62-69]; and (5) partnerships with governments and other funders [42-45].


Trustworthiness, reliability and the need for robust measurement – Both Dr. Garg and Natasha stressed that trustworthy AI requires reliable systems and governance [7-12][80-90]. Peter Mattson expanded on this, arguing that AI’s biggest barrier is reliability, which must be demonstrated through industrial-grade, multilingual safety and security benchmarks, federated evaluation, and continuous measurement [106-136][138-146][328-331].


Open data, open-source models and collaborative ecosystems – The panel repeatedly linked openness to trust: Microsoft’s open-weight model family and open-source releases empower ecosystems [92-93]; Wendy highlighted the importance of open data while noting that not all data can be fully public, and called for cross-border data-sharing mechanisms and registries [305-311]; Peter echoed that open benchmarks and open-source models are essential for sovereign capability building [92-93][106-112].


Inclusion, equity and societal impact – Wendy warned that AI discussions often exclude women, children and marginalized groups, stressing the need for “all-inclusive” governance [258-270]; the participant from the Gates Foundation emphasized language, edge-computing, sustainability, and reaching the “bottom 50%” of the population to avoid new divides [183-229]; Natasha’s teacher-training initiative also illustrated a focus on equitable skill development [51-53].


Overall purpose / goal of the discussion


The panel was convened to explore how the global AI community can democratize and responsibly diffuse AI technologies, especially to the Global South, by addressing governance, infrastructure, talent, measurement, and inclusivity. Speakers presented concrete initiatives (Microsoft’s $50 bn plan, ML Commons benchmarks, UN data-governance work) and debated the policies and technical standards needed to build trustworthy, sovereign AI capabilities worldwide.


Tone of the discussion


The conversation began with a formal, appreciative tone (thanks, acknowledgments) and quickly shifted to a constructive, solution-focused dialogue about challenges and concrete strategies. Throughout, participants remained optimistic and collaborative, interspersed with occasional informal remarks and humor (e.g., jokes about “the man who drank bleach”). By the end, the tone became reflective and rallying, emphasizing collective responsibility and calling for continued measurement and open collaboration, culminating in a warm, appreciative closing.


Speakers

Dr. Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India; AI governance expert focusing on resource sharing, interdependence of AI ecosystem, and talent development [S1].


Natasha Crampton – Microsoft’s first Chief Responsible AI Officer; leads the Office of Responsible AI; drives AI diffusion to the Global South and oversees AI infrastructure, skilling, multilingual AI, and policy measurement [S4].


Participant – Representative of the Gates Foundation (identified in the transcript as “Dr. Aya”); discusses philanthropic support for trustworthy AI in low-infrastructure settings, focusing on health, agriculture, edge computing, sustainability, and open-source models.


Wendy Hall – Dame Wendy Hall, Regius Professor of Computer Science and Associate Vice-President, International Engagement, University of Southampton; Director of the Web Science Institute; former member of the UN high-level expert advisory body on AI; involved in UK AI measurement and security initiatives [S10].


Peter Mattson – President and founder of ML Commons; Senior Staff Engineer at Google; former head of the Programming Systems and Applications group at NVIDIA; works on open benchmarks, reliability, multilingual safety, and federated evaluation [S12].


Justin Carsten – Moderator and panel host; leads discussion on AI democratization, governance, and measurement.


Additional speakers:


Dr. Clark – Mentioned in the closing rapid-fire round; likely an AI researcher or policy expert (specific role not detailed in the transcript).


Dr. Aya – Gates Foundation representative (identified as the “Participant” above); senior figure in the foundation’s health and agriculture AI initiatives.


Harish – Referred to by name during the rapid-fire segment; appears to be the same individual as the “Participant” representing the Gates Foundation, though the transcript treats the name separately.


Brad – Cited by Justin as having given a speech earlier in the summit; no direct remarks recorded in this transcript.


Tim Berners-Lee – Mentioned by Wendy Hall in reference to the invention of the web; not an active speaker in this session.


Nigel Shadbolt – Referenced by Wendy Hall regarding a prior review; not a speaker in this session.


Vint Cerf – Mentioned as an intended participant who could not attend; not a speaker in this session.


Ms. Asha – Name called by Justin near the end, but no spoken contribution recorded.


Full session report

Comprehensive analysis and detailed insights

The session opened with moderator Justin Carsten thanking the audience and the panelists and framing the discussion around the difficulty of coordinating a truly global AI effort. He asked the working-group chair what the biggest obstacle to such international collaboration might be [5][13-15]. Dr. Saurabh Garg responded that the most pressing problems lie not in the physical hardware alone but in the governance of sharing mechanisms, the inter-dependence of hardware, software and protocol layers, and the scarcity of talent and institutional capability to manage these resources [6-12]. He stressed that while data-centre infrastructure can be purchased, the expertise required to operate it responsibly must be cultivated [9-10].


Carsten then highlighted the political backdrop of the summit – noting the photograph of Prime Minister Modi with tech leaders and the presence of Microsoft – before introducing Microsoft’s first Chief Responsible AI Officer, Natasha Crampton, who leads the Office of Responsible AI [16-20][21-24]. Crampton announced that Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South, organising the effort around five inter-linked components [28-33].


The first component concerns the construction of data-centre and connectivity infrastructure that respects national sovereignty. Microsoft plans to invest in new data-centres and power-grid upgrades while offering both public-cloud and private-cloud options that embed “sovereignty controls” for host countries [34-41]. Crampton stressed that these facilities will be co-designed with government partners to ensure agency for the nations that house them [42-45].


The second component targets large-scale skilling. Recognising that technology diffusion historically follows education, Microsoft will train two million Indian teachers in AI-specific skills, partnering with national standards bodies to embed AI literacy at the grassroots level [46-53].


Components three and four focus on multilingual, multicultural AI and local innovation. Microsoft is collaborating with ML Commons to extend safety benchmarks to Hindi, Tamil, Malay, Japanese and Korean, and has launched the “Lingua Africa” initiative to collect rich, locally-sourced spoken-language data in partnership with the Gates Foundation [54-61][62-69]. These efforts aim to ensure AI systems operate correctly in the languages and cultural contexts of end-users, thereby supporting home-grown solutions and informing policy through shared adoption data [64-69].


The fifth component underlines the necessity of deep partnerships with governments, NGOs and other funders, acknowledging that the scale of required infrastructure cannot be met by the private sector alone [70-74].


When Carsten raised the “Brussels effect” (the tendency of EU regulations such as GDPR to become de facto global standards), Crampton explained that Microsoft builds its models “trusted by design” with configurable defaults, allowing each jurisdiction to adjust controls to meet local legal and cultural requirements [75-79][80-90]. She added that open-weight model families, such as Microsoft’s Phi family of models, enable ecosystems to adapt technology without compromising sovereignty [91-93].


Peter Mattson of ML Commons shifted the conversation to reliability, arguing that the principal barrier to AI adoption is not capability but trustworthiness. He called for industrial-grade, repeatable benchmarks and described “federated evaluation” – exemplified by the MedPerf healthcare project – which tests models across disparate data sets while preserving privacy through confidential compute [106-124][135-137]. Mattson warned that turning experimental benchmarks into dependable, multilingual safety and security standards is a massive technical undertaking that must be sustained over time [128-136].
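The federated-evaluation pattern Mattson describes can be sketched in a few lines: each site scores the model on its own private data inside its trust boundary and shares only aggregate counts, never raw records. The names and toy data below are purely illustrative, not MedPerf's actual API; real systems add orchestration, attestation, and confidential-compute layers on top.

```python
# Illustrative sketch of federated evaluation: sites evaluate locally,
# the coordinator only ever sees aggregate metrics.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class SiteResult:
    n_examples: int  # how many private examples were evaluated
    n_correct: int   # how many the model answered correctly


def evaluate_locally(model: Callable[[str], str],
                     private_data: List[Tuple[str, str]]) -> SiteResult:
    """Runs inside a site's trust boundary; raw data never leaves."""
    correct = sum(1 for x, y in private_data if model(x) == y)
    return SiteResult(n_examples=len(private_data), n_correct=correct)


def aggregate(results: List[SiteResult]) -> float:
    """The central coordinator sees only counts, not examples."""
    total = sum(r.n_examples for r in results)
    right = sum(r.n_correct for r in results)
    return right / total if total else 0.0


# Toy model and two "sites" holding disjoint private datasets.
model = lambda x: x.upper()
site_a = [("a", "A"), ("b", "B")]                # 2/2 correct
site_b = [("c", "C"), ("d", "X"), ("e", "E")]    # 2/3 correct

results = [evaluate_locally(model, site_a), evaluate_locally(model, site_b)]
print(aggregate(results))  # pooled accuracy across both sites: 4/5 = 0.8
```

The key property is that only `SiteResult` counts cross the trust boundary, which is what allows disparate datasets and legal regimes to participate in one benchmark.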


Justin then introduced Harish, the Gates Foundation participant, noting a recent blog post he co-authored with Brad (“Brad’s speech yesterday… based upon a recent blog post you and Brad put out”) [70-73]. Harish outlined several practical concerns for the Global South: the need for edge-inference capabilities in low-connectivity settings such as healthcare and agriculture; worries about energy consumption and the importance of lower-parameter, low-energy models for sustainability; exploration of novel hardware architectures (e.g., “multi-parameter, multi-state compute capabilities”) as future enablers of edge AI; the centrality of open-source models because many governments cannot afford large proprietary offerings; the state-level policy variation in India (e.g., differing maternal-risk rules in Uttar Pradesh vs. Telangana) that AI tools must respect; and the broader social impact of creating jobs, avoiding a digital divide within countries, and ensuring AI benefits the bottom 50% of the population [150-180].


Dame Wendy Hall, Director of the Web Science Institute at the University of Southampton, broadened the discussion to AI metrology. She advocated for a new science of AI measurement, likening it to the National Physical Laboratory’s work on weather forecasting, and announced the UK’s Centre for AI Measurement and the AI Security Institute as institutional anchors for systematic trust metrics [290-299]. Hall also highlighted the importance of open-data governance, proposing cross-border data-sharing mechanisms and global data registries while acknowledging that not all data can be fully open [305-311]. She noted the conference size (“250,000 people here”) and described a “love-hate relationship” with the event [250-255]. When asked directly about the UK’s sovereign AI strategy, Hall declined to answer and shifted to broader commentary on AI hype [252-267].


Across the panel, several points of agreement emerged. All speakers concurred that effective governance of sharing mechanisms is essential for international AI collaboration [5][6-7][70-71]; that deep, multi-stakeholder partnerships are required to deliver the five-component strategy [70-74][72-74]; that systematic measurement, whether through AI metrology institutes or industrial-grade benchmarks, is vital for trustworthy AI [290-299][306-309][328-330]; that multilingual, culturally adapted AI is a prerequisite for global diffusion [54-61][124-126][192-199]; and that large-scale skilling and talent development underpin sustainable diffusion [46-53][8-12]. Both Crampton and Mattson stressed the role of open-source models and open benchmarks in lowering entry barriers and enabling local customisation [91-93][92-93].


Nevertheless, the panel revealed notable disagreements. Hall argued that while open data is valuable, privacy and sovereignty constraints mean that only “exchangeable, shareable” datasets, not fully open ones, should be circulated, and she called for global data registries [305-311]; Crampton, by contrast, presented data sharing as a core component of Microsoft’s five-component plan without foregrounding such limits [64-69]. A second tension arose between Crampton’s $50 billion private-sector investment model and Harish’s view that open-source models are a more affordable route for the Global South [30-33][150-180]. Finally, Hall’s proposal for AI metrology institutions differed from Mattson’s emphasis on industrial-grade benchmark development as the primary path to reliability [290-299][106-124]; Hall also unexpectedly declined to answer the direct question about the UK’s sovereign AI strategy [252-267].


Thought-provoking remarks punctuated the discussion. Garg’s warning that governance and talent, rather than raw compute, are the real bottlenecks reframed the debate [6-7]; Crampton’s concrete $50 bn, five-component roadmap gave the panel a tangible agenda [30-33]; Mattson’s claim that “reliability, not capability, is the real barrier” and his illustration of federated evaluation provided a clear technical solution [116-124][135-137]; Hall’s call for a new science of AI metrology linked measurement to interdisciplinary collaboration [290-299]; and Harish highlighted the practical challenges of edge inference, language diversity, energy consumption and the need for open-source accessibility in low-connectivity settings [150-180][216-220].


Key take-aways


– Global AI collaboration is hampered by complex governance, talent shortages and the inter-dependence of the AI stack [5][6-12].


– Microsoft’s $50 bn, five-component plan seeks to close the North-South AI gap through sovereign-aware infrastructure, massive skilling, multilingual data collection, local innovation and policy-oriented data sharing [28-33][34-45][46-53][54-69][70-74].


– Deep partnerships with governments, NGOs and the broader ecosystem are indispensable for realising each component [70-74][72-74].


– Trustworthy AI hinges on industrial-grade benchmarks, federated evaluation and emerging AI metrology institutions [106-124][290-299][306-309].


– Open data and open-source models can lower barriers but must be balanced against privacy and sovereignty concerns [91-93][305-311].


– Efficient, domain-specific, low-energy models are needed to make AI viable in low-resource environments [150-180].


– Inclusive development, addressing language, cultural norms, and gender and age representation, is essential to avoid creating new digital divides [54-61][124-126][190-208].


The panel also identified unresolved issues: designing governance frameworks that reconcile conflicting national regulations while preserving interoperability; securing sustainable financing beyond private investment; delivering reliable edge AI in low-connectivity contexts; establishing concrete, multi-dimensional metrics that link technical performance to societal impact; and creating global data-registry and cross-border sharing mechanisms that respect privacy and sovereignty. Suggested compromises included configurable defaults in AI products, hybrid sovereign-cloud models, leveraging open-source families alongside private investment, matching corporate funds with public and venture-capital contributions, and employing federated evaluation with confidential compute to enable cross-jurisdictional benchmarking.


In closing, the participants reiterated that democratising trustworthy AI will require coordinated governance, sustained investment, robust measurement, multilingual and culturally aware technologies, and inclusive talent development. Carsten praised the panel’s collaborative spirit, thanked the speakers and noted a brief round of applause before ending the session [340-345][70-74][94-100][321-324]. The consensus, though tempered by differing views on data openness and financing models, points toward a shared commitment to build an AI ecosystem that is both globally interoperable and locally trustworthy.


Session transcript

Complete transcript of the session
Justin Carsten

Thank you. Thank you so much, Dr. Garg. It really highlights one of the things about collaboration, and I’ll be talking to a number of the panelists about that. I’ve been so impressed this week at how much people are really coming together for the community. You know, this is a much bigger summit than we’ve had previously, many more people, really opening it up to everyone. But if I can just ask you one thing, because the working group that you’re doing, I think, is excellent; it’s going to be really important. What do you see as the biggest challenge around that? With the vast experience that you’ve got of coming together, do you think there are any particular challenges in coordinating that international effort?

Dr. Saurabh Garg

Of course, there would be a number of challenges, but as I mentioned, one doesn’t need to really control every layer of the resources that is there. And while sharing the foundational compute resources would be a major challenge, I think a bigger challenge might be to manage the interdependence of the AI ecosystem, because it spans hardware, software, and the protocols, so to say, or the ethics around that. So I think one of the biggest challenges would be the governance around these sharing mechanisms, sharing protocols, and managing the framework. And the other would be the talent and the institutional capability, which is in a way required. Well, the infrastructure can be acquired, but expertise has to be developed.

And I think that’s critical if you want to democratize, and to ensure that the Global South is integral to that. And I think, you know, we don’t need to focus so much on whether each country is owning each layer of the AI, but on the capability and confidence in the systems that manage it: do we have the required methods to ensure that it takes care of the priorities and the values that each country wants to push forward?

Justin Carsten

Thank you so much. And I agree with you. It’s a big challenge, but I’m glad that you’re there to take that forward. And this week, you may have seen the photograph of Modi here with many of the leaders in tech. And it’s a great pleasure that one of the large organizations in the private sector, Microsoft, has got representation here. So I come to you, Natasha. So Natasha Crampton is Microsoft’s first chief responsible AI officer and leads the Office of Responsible AI. And it was interesting how long that’s been going. I heard earlier this week. But she’s putting Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach to responsible AI. The office also collaborates with internal and external stakeholders to shape new laws, norms, and standards to help ensure that the promise of AI technologies is realized for the benefit of all.

As I said, that’s been a key theme. So I saw Brad speak yesterday. It was a fantastic speech, and that was based upon a recent blog post that you and Brad put out just a couple of days ago. So can you tell us a little bit about that for some people who haven’t had the chance to absorb in this session, please?

Natasha Crampton

Sure. Thank you, Justin, and it’s a pleasure to be here with the panel and the audience today. So I think our announcement earlier in the week was about how Microsoft is contributing to bringing AI to the global south, and the headline that you might have seen is that we’re on track to spend 50 billion US dollars in order to do that by the end of the decade. What we’re seeing from the diffusion data that we have access to, and that we’ve publicly published already, is that there is an urgent need to focus on the diffusion of AI to the global south, and what it’s going to take to do that broadly and beneficially, because we are already seeing that diffusion in the global north is roughly double what we see in the global south.

And so for Microsoft, as a private sector player here, we think we have a role to play in helping to close that gap, and we see it as being centred on five different components. First, as Dr. Garg mentioned initially, we need to help build out the infrastructure that is needed for broad AI diffusion. So this is both investments in data centres to power AI applications and investments in connectivity as well. There are real electricity needs that need to be met. We’re trying to do that with an eye towards the sovereignty of countries around the world. We realise that the world is a fragmented place, and so we design our data centres, and also the services that run on top of them, with a recognition that there needs to be real agency for the countries hosting those data centres.

And so we have a range of different controls that we put into our data centres, which include sovereignty controls and public clouds. Sometimes we build private clouds. But most importantly, it’s all built on a foundation of collaborating with our government partners around the world. The scale of the infrastructure investment that’s needed is just so great. It’s really hard to see how we’ll achieve what we need to without significant private sector investment as well as funding from a range of different sources: governments, venture capitalists and others. So the first limb is all about infrastructure. The second limb is all about skilling. What we’ve learnt from the history of diffusion of other general purpose technologies, like electricity, for example, is that the countries that succeed in these really transformative economic moments are not actually the countries that necessarily invent the new general purpose technology.

They’re the countries that diffuse and adopt that technology fastest. And if you look back at history, skilling turns out to be one of the major unlocks to that adoption and broad diffusion. So, as I said, we’ve made a range of skilling announcements. One that I’m particularly energised by myself is a very specific one focused on educating educators to help them with an AI-driven educational future. And of course, when you teach teachers, you’re teaching students, and therefore the workforce of the future as well. So we committed to teach AI-specific skills to 2 million Indian teachers in partnership, of course, with Indian national standards and training institutions, which is an exciting thing to me to support the future.

Third, the third limb is all about investments in multilingual and multicultural AI. You know, AI is no good to you if it does not work in the language that you speak and the culture in which you use the system. So we’ve been pleased to collaborate with Peter Mattson from ML Commons on an expansion of some safety benchmarks that ML Commons has played a key role in standing up, to represent Hindi, Tamil, Malay, Japanese, and Korean. But we’re working upstream of testing and evaluation as well. So we’re pleased to announce a Lingua Africa initiative where we are working with local communities in partnership with the Gates Foundation and others to really make sure that we’re collecting lots of that really rich local data with and for communities.

All of that data is not well represented on the internet, and spoken languages in particular require that careful collection. The fourth limb is all about supporting local innovation. I think it’s critically important that as the private sector we really deeply understand that AI will only be meaningful in people’s lives if it’s actually solving the local problems that matter to them. So we announced some initiatives here in India and further afield that are designed to really support that local innovation. Last, we announced also, as part of the New Delhi Frontier AI commitments that several leading Indian AI companies and frontier AI companies from around the world signed on to yesterday, that we’re going to be contributing our data as to what we can see about adoption and usage of AI in the economy into some central projects.

Including one led by the World Bank, so that policy makers are in a good position to understand: how is AI being adopted in the economy? Where are the places where it’s going faster than expected? Where are the places where it’s going slower? Because I think that kind of data is incredibly useful for policy making, because it allows you to spot those places where you might need a skilling intervention or an infrastructure intervention.

Justin Carsten

That was fantastic. And if you ever want to know about really believing in something, having such a complex blog and then just reeling off the five pillars, and that really just shows that commitment, I think, that we’re seeing from Microsoft taking that leading role. And actually, collaboration has been, since Brad’s presidency really, has been one of the things that he really encouraged about saying, look, we’ve got to work together.

Natasha Crampton

Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those five pillars is really important. Building those partnerships and deeply investing in them over time is really what’s going to give us the outsized impact here.

Justin Carsten

And if we think about this, because Microsoft is a global corporation, you’ve got lots of countries, each with, just as Dr Garg said, they’ve got their own customisations, they think. They’ve got their own local laws and regulations. And some things, you know, there’s something called the Brussels effect around GDPR, for example, which went pretty global, but it’s not the case for AI, for example. How do you think you manage that challenge of trying to make sure that it’s broad enough but focuses for the individual needs of nations? Have you come across that challenge?

Natasha Crampton

Yes, that is part of what I work on day in, day out at Microsoft, because part of my role is working very closely with our product teams to make sure that we are building our products, our models, in a way that’s trusted and trustworthy by design. And so we are building products and technologies that we aim to share with the world. And it is absolutely true that not every part of the world has the same rules or expectations. And part of what we need to do is to make sure that we’re building technology in a way that has enough sort of controls and choices that people can make, downstream of what we choose to do at Microsoft, to apply that technology in their own context.

So we ourselves do have a point of view about how we want our technology to show up in the world. So, you know, we do think carefully about if we’re making available a service that’s got some configurable controls, we do think carefully about what we think the default should be. But we also really do recognize… the need for that agency, and we do deeply understand that not every part of the world is homogenous. I think it’s, you know, here in India, it’s just a beautiful place to recognize the sort of linguistic and cultural diversity of the world. Quite honestly, if we don’t build technology that can be easily adapted and applied in people’s local contexts with their values, with their laws, we’re just missing the opportunity to, you know, have our technology reach the world.

So there are complex challenges. Sometimes there are direct conflicts between what one jurisdiction wants and what another jurisdiction has sort of declared as a matter of law. They can be worked through, and this is partly why you also need a great partner ecosystem, right? Being able to make available models open source or in an open-weight form, which Microsoft has long done, for example, with our Phi family of models. This is another way of empowering the ecosystem to adapt and build based on that.

Justin Carsten

Thank you so much. And you just touched on ML Commons and on being culturally sensitive. It's interesting: there is a report that's been released by ML Commons this week on robust and defensible benchmarks. And part of that was some great work from the Singaporean agency IMDA, which found that the response from an AI has to be culturally sensitive. And that's the point that you made. I think culture is important because what is seen as acceptable in one culture may not be in another. So that brings me nicely to Dr. Peter Mattson, who is the president of ML Commons and a senior staff engineer at Google. He was a founder of ML Commons and was previously the head of the programming systems and applications group at NVIDIA.

So on that ML Commons, I think it’s done some great work, as we’ve heard. It’s played a major role in benchmarking performance and efficiency of AI. How do you see that open benchmarks can contribute to building sovereign capabilities, Peter?

Peter Mattson

I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specific. And the broad context I want to start in is why is trust and reliability so vital for AI? AI has tremendous potential to change everything we do. But in order for it to do that, people need to feel comfortable adopting it. And we’re all… smart, we don’t adopt things we don’t trust. You don’t give them your banking information. You don’t give them your business information. You don’t give them your medical information or trust what they say or do about it if they’re not reliable. And so the question becomes, how do we make AI reliable?

Because if I had to point to anything that's holding back AI today, it's not capability, it's reliability, right? Is it correct? Is it secure? Is it safe all the time? And if we can make AI truly reliable, the potential for benefits to everyone around the world, and frankly, the potential for businesses and markets, is fantastic. But the way that we drive that is with metrics, with evaluations. AI is an incredibly complex black-box system. So to make it better, you need to have common yardsticks that you use to measure progress. And we need those common yardsticks broadly, for all aspects of reliability. So you alluded to the work on security with IMDA. Natasha alluded to some of the work around multilingual safety that we're collaborating on with Microsoft, and with folks at Google as well.

These are examples of what's necessary to drive that push towards reliability. But they're very technically hard. This is something that I don't think people appreciate enough. They see someone publish a paper: we made a benchmark for something, right? They made a data set and they did it once. But there's a tremendous amount of technology to go to industrial-quality benchmarking, which is what we need for industrial-level reliability. Here's one example: we need to work to take the experiments we're doing in multilingual benchmarking and turn those into a dependable framework that empowers people around the world to produce very high-quality multilingual safety and security benchmarks, and then to maintain and evolve them over time, right?

If ML Commons can help lift the resources there, so that people can make the choices about language and culture where they have expertise without having to grapple with the really hard technical questions of how you do AI benchmarking, we hope that could be very empowering. An example from the healthcare space: we have a MedPerf project that uses what we call federated evaluation, where it sends models out to different facilities, tests them on a small bit of data, and accumulates the results. This is how you do healthcare benchmarking for reliability, for correctness, against very, very diverse data sets, potentially around the world. It's technology like that, like dependable industrial-scale multilingual safety and security benchmarking, or medical benchmarking made possible across data sets in disparate legal systems through techniques like federated evaluation and confidential compute, that we believe really unlocks that future of high-reliability systems.
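The federated-evaluation pattern described here, where the model travels to each site and only aggregate scores leave, can be sketched as follows. This is a minimal illustration under stated assumptions, not MedPerf's actual API; the toy model, the facilities and the scoring rule are all invented.

```python
# Sketch of federated evaluation: the model is run at each facility,
# scored on local data that never leaves the site, and only aggregate
# counts are returned for accumulation. All data here is illustrative.

def evaluate_locally(model, local_cases):
    """Run the model on one facility's private cases; return counts, not data."""
    correct = sum(1 for case in local_cases if model(case["input"]) == case["label"])
    return {"n": len(local_cases), "correct": correct}

def federated_eval(model, facilities):
    """Accumulate per-site results into a single benchmark score."""
    totals = {"n": 0, "correct": 0}
    for site_cases in facilities:
        result = evaluate_locally(model, site_cases)
        totals["n"] += result["n"]
        totals["correct"] += result["correct"]
    return totals["correct"] / totals["n"]

# Toy model and two "facilities" holding disjoint private data
toy_model = lambda x: "positive" if x > 0.5 else "negative"
site_a = [{"input": 0.9, "label": "positive"}, {"input": 0.2, "label": "negative"}]
site_b = [{"input": 0.7, "label": "negative"}]  # a case the toy model gets wrong

score = federated_eval(toy_model, [site_a, site_b])
print(round(score, 2))  # 2 of 3 cases correct
```

The key property is in `evaluate_locally`: raw cases stay at the facility and only summary statistics cross the boundary, which is what makes evaluation across disparate legal regimes tractable.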

Justin Carsten

That's excellent. Thank you. And the repeated use of that term, reliable: what we need is reliable LLMs, but we need the reliable benchmarks, as you said. Yes, yes. And I think this point about healthcare is really interesting, because what we need, you mentioned industrial scale as well, is a process that can be trusted. And that's one thing that I found working with ML Commons: how we all come together, the people from industry, many academics around the world. You just look at any of the papers released, you can go to the website, and see how many authors and how many years of expertise are donated to that effort. Yes, yes. Where do you see, Peter, the next sort of big movements for ML Commons?

Because these yardsticks will change. You’ve done healthcare. Where do you think is the important area for you in benchmarking in the near future?

Peter Mattson

I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we are getting very in-demand experts from some of the leading companies to contribute to this work. People really care about doing AI right; that is unarguable if you look at, as you say, the author list. What we need to do is leverage that expertise to scale. It's not enough to do a benchmark and publish a paper. We need to make that benchmark available to the industry.

Today, most benchmarking is single prompt response. You ask a question, you look at the answer, you see whether it's safe or secure or correct.
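The single prompt-response style of benchmarking described here can be sketched as a simple harness. The items, the keyword-matching judge and the canned "model" are hypothetical stand-ins; real benchmarks use curated data sets and much stronger graders.

```python
# Sketch of a single-turn benchmark harness: each item is one prompt,
# the model answers once, a judge scores the answer, and the results
# aggregate into a headline metric. The judge is a trivial keyword check.

BENCHMARK = [
    {"prompt": "What is 2 + 2?", "must_contain": "4"},
    {"prompt": "Name a primary colour.", "must_contain": "red"},
]

def judge(answer, item):
    """Hypothetical judge: pass if the required string appears in the answer."""
    return item["must_contain"] in answer.lower()

def run_benchmark(model, items):
    """Return the fraction of benchmark items the model passes."""
    passed = sum(1 for item in items if judge(model(item["prompt"]), item))
    return passed / len(items)

# A toy "model" with canned answers, standing in for a real LLM call
canned = {"What is 2 + 2?": "The answer is 4.", "Name a primary colour.": "Red."}
score = run_benchmark(lambda p: canned[p], BENCHMARK)
print(score)  # fraction of items passed
```

Multi-turn and agentic evaluation replaces the single `model(prompt)` call with a whole conversation or tool-use episode, which is why the harness itself, not just the data set, has to evolve.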

But the future, as everyone knows, is multi-turn and agentic. And so we need to drive, you know, wider and deeper at the same time. There is tremendous demand for what we do. It is tremendously resource-intensive, and

Justin Carsten

You mentioned the work of Google, so I'm going to come to Dr. Aya from the Gates Foundation in a moment, just talking about some of the conversations. We were hoping to have Vint Cerf, who some of you may know. I know, Wendy, you know him very well. But he doesn't travel so much, does he? No, yeah, that's the thing. He's got some issue that meant he couldn't travel. Our next panelist works to improve public health and economic development. He's a strategic partner between Indian researchers, you're based over here in India, global partners and Gates Foundation teams in areas including vaccine-preventable diseases, disease surveillance and modelling. So thank you for joining us today. We've heard a little bit, of course, that India has really pushed forward with its digital public infrastructure.

And we've heard in the last session, which Dr. Garg was in, from Sanjay Jain, your colleague, about MOSIP, which is modelled on Aadhaar in some ways and is an open-source initiative. So what I'd like to ask you is: where countries lack foundational infrastructure, what role do philanthropic organisations like the Gates Foundation play in enabling access to… to trustworthy AI capabilities?

Participant

Thank you so much for inviting me. This is obviously a very complex question, not fully settled, I will say for sure. Most of my experience in this field is in India. So first off, I'd like to start by saying it's great that India is hosting this summit. It's fantastic, and it's showcasing a lot of the work that the country has done, the capability and the use cases that we are very closely supporting. I think the trustworthiness question, and I would say sustainability as well is another question that we have to think about, is very much about what sort of models we need to have. Are they large centralized models?

Or are they dispersed, decentralized models on the edge? What do we need in countries with poor connectivity? So trustworthiness has many aspects to it. Is it going to be ready to work when you want it to work? Again, a lot of my work is in health and agriculture and things like that. If you are a frontline worker who has to make inferences in primary care, can you make those inferences, if needed, on the edge? If you are a health-system person and you want to improve the working of a health system, making sure the right experts are in the right facility, the right medicines are there, patients are taken care of, there is a great opportunity to make this very high quality. But again, the question becomes: how do you access the compute? How quickly can inferences come? How easy is it to prompt? All of this matters, because if it doesn't work well, then you lose trust.

That's the thing: it just doesn't work. The next-level question is language. I think Dr. Garg talked about it, the whole Bhashini project in India, and there are similar projects that we've been involved in. There's been a lot of debate, even within the foundation, as to which models can perform well on language, which systems can interpret something super complex. I think we heard from the other speakers about how complex this is, what works well. So trustworthiness will partly come from how systems respond and from the lived experience, in terms of simple things like: is it accessible? Is it the right language? Is it relevant? I mean, India is a continent on its own; between different states, the health system and approaches are often different based on local policies.

How does it work in terms of policy in a particular state? One thing I'm particularly familiar with is pregnancy risk stratification. We talk a lot about how to reduce maternal mortality, infant mortality, stillbirth. The rules in Uttar Pradesh, for example, may be different from the rules in Telangana. If you have a tool that supports frontline workers in understanding and improving identification of risk for pregnant mothers, how do you make sure that it works in that context? So this context is important. I think trust has all of these things built into it. I'll also talk a little bit about sustainability questions. Sustainability also requires these kinds of questions to be answered well.

What's the energy consumption? Are there simpler, lower-parameter, lower-energy-consuming models rather than the giant models? To me, it's a core question. And I think… it's nice to know that there are researchers in the country who are thinking about that. Beyond that, can compute hardware itself look different? Beyond digital, let's say, I saw researchers recently looking at, you know, multi-parameter, multi-state compute capabilities, and that was really fascinating. I just saw it two weeks ago because I was prepping for a bunch of meetings. Can those be great opportunities? Maybe they are further in the future, to improve the likelihood of edge computing and edge inferences. So there's a lot there, and then I think finally, open source.
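The point about lower-parameter models and energy can be made concrete with a back-of-envelope estimate. The rule of thumb that transformer inference costs roughly 2 FLOPs per parameter per generated token is a common approximation, not a precise measurement, and the model sizes below are illustrative.

```python
# Back-of-envelope inference cost: roughly 2 FLOPs per parameter per
# generated token (a common approximation for transformer decoding).
# On fixed hardware, energy scales with FLOPs, so cost per response
# grows linearly with model size. Model sizes are illustrative.

def inference_flops(num_params, num_tokens):
    """Approximate FLOPs to generate num_tokens with a num_params model."""
    return 2 * num_params * num_tokens

large = inference_flops(700e9, 500)  # hypothetical 700B-parameter model
small = inference_flops(7e9, 500)    # hypothetical 7B-parameter model
print(large / small)  # the smaller model is ~100x cheaper per response
```

This linear scaling is why domain-specific small models are attractive for edge and low-connectivity deployments: the same response costs a hundredth of the compute and energy.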

I think open source is going to be, in my mind, a critical aspect of it. We'll have to see how far the open-source movement gains traction here. I believe this because many governments in the global south may not be able to afford the large amounts of money that may be needed for a long period of time. How do you do these use cases well? So that, I think, is going to be another aspect of it that allows for adoption and trust at the highest levels. Again, I'm talking about the bottom 50% of the pyramid. The top 10% of the pyramid, they'll do what they have to do. But ultimately, to build trust, you need to get to the bottom 50% of the pyramid.

And so there are different, in quotes, "markets" here as well: people who can pay at different levels. Even within a country like India, obviously there are multiple different levels. How can you make sure that this thing can reach everybody and doesn't create a divide, not just between global north and global south, but even within countries? You want to make sure that this doesn't create a divide. And that's, I think, another important part of building societal trust. The last point, which I think is also important, is: what is the impact on society of this technology? I think this is going to be an important one as well. Are you able to create jobs, employment? And there's a meta question about how does

Justin Carsten

Thank you so much. And we’ll come back to some of those points in a minute if I may, Harish. Because, as you may have seen, we’ve just been joined by Dame Professor Wendy Hall, someone I’ve…

Wendy Hall

It's Professor Dame, but I don't mind. Carsten, you should know that. You're a Brit.

Justin Carsten

I'm not a Dame. But if you were a Sir… it's always Professor Sir. But if I keep being nice to you, maybe you'll put a word in for me. So I've known Wendy for a long time. She's Regius Professor of Computer Science and Associate Vice President, International Engagement, at the University of Southampton, where she's also Director of the Web Science Institute. There are so many accolades. She's been a Dame Commander since 2009, is a Fellow of the Royal Society, the Royal Academy of Engineering and the ACM, and was President of several of those organisations, including the British Computer Society, the BCS. And most notably she was the co-chair of the UK government's AI review and a member of the AI Council.

We've talked also about skills actually, Wendy. We were both on the, I think you were probably leading it, but I was just a member of it, the review with Nigel Shadbolt into computer science, if you remember.

Wendy Hall

No, he did that one. That was Professor Sir Nigel. No, I didn’t.

Justin Carsten

Okay, okay. Anyway, you’ve been involved in advising many governments around the world and could you tell us a little bit about the UK’s approach to developing sovereign AI capabilities?

Wendy Hall

No, I'm not going to answer that question, because this is a trustworthy panel, right? And I want to talk about trustworthiness. Okay. And that's why I was asking what the panel was about, because I'm doing three panels this morning and I've got a lunch date to go to, an important one. So I was asking Peter what the panel was about, and he said it's about trustworthy AI, right? Yeah. So I want to say, if you don't mind, Carsten, I could tell you what the UK is doing, but it's very parochial. I'm very excited that this conference has been in India, but I have a love-hate relationship with it. It's been a really difficult conference to navigate: 250,000 people here, but you end up talking to rooms of tens of people. Okay, it's out on YouTube. Does AI need this sort of jamboree? I don't know, for the future. But it is fabulous to have the spotlight on India. I'm a member of the MOSIP

Justin Carsten

of course you are

Wendy Hall

I've been involved. I'm in awe of what India has done with Aadhaar and built with the digital public infrastructure, and I want to see how that works. I would love to see how that works in the UK, but it doesn't translate. It works in developing countries; it's much harder to translate it to an old world that has long-established rules and regs and ways of working. Anyway, so I'm really excited it's here. And it was fabulous also to see the young people here, because in the UK, and I think it's probably true in most of Europe and the US, people are really worried about AI. They're scared, because that's what they get: they get scaremongering. They're scared it's going to attack them, they're scared it's going to wipe the world out, they're scared they're going to lose their jobs. Here the kids are going, wow, what an opportunity, right? And for India, I mean, that's been an eye-opener for me. I've been working in India long enough to know, I mean, I helped introduce the web into India, right? The web and the internet, work I've done here. And I know what you can do with the power of that technology for people who can't read and write and live in the rural areas. It's just amazing what it does. Add AI on top of that. They're not worried about the deepfakes yet; what they want is to get the information to their people in the fields, the farmers in the fields in rural India. I suppose deepfakes, I mean, I don't know, but that's not what they're worried about at the moment. So it has been fabulous, and I love the slogan here: in India, AI is all-inclusive. But it isn't. AI is missing out 50% of the population, right? This technology, and I've been fighting this sort of thing all my career, is totally male-dominated. Totally male-dominated. And I'm very sorry, but the way we talk about women's safety: women aren't involved in these discussions, right?

Children aren't involved in these discussions. 50% of us are women, and we're not involved in the discussions about keeping us safe. Actually, we need to keep men safe too, right? Men suffer from deepfakes as much as women do. Well, maybe someone's not agreeing with that, and you know, it could be disproportionately hitting women and children, but I don't want to exclude the men here. So I have become even more passionate. I talked about it in my keynote on Wednesday, not in the talk itself but in the conversation: it's so important that this is really all-inclusive and that women are involved at the top level in the decision-making about what we do. Take, for example, the Australian experiment to stop kids under 16 using social media.

Now that is an experiment. Everything about this world is a global experiment, and people are doing different bits of it. The web was like that. The web itself, from the genius that is Tim Berners-Lee, was a worldwide experiment. There are many different ways that you could have built a hypermedia network on top of the internet. Boy, I tried to do one myself. And it was better than the web. But what Tim did was give it away, make it fantastic, make it open. And actually that led to the rise of the use of it. But it's also left us with the stuff we've got today. Because anyone can do anything on the web. So bad people can do bad things.

And bad things happen unintentionally. "The unintended consequences" is what I called my talk on Wednesday. So, this ban on social media: we've got to be able to study the effects. Now, I know the Australians are. We heard Macron say here that in France it's going to be under 15. Keir Starmer's saying under 16, but he changes his mind on a penny, so it'll probably change. That's a joke for the Brits. But I think Spain has said under 16. In the US, of course, Trump says, no, we won't need to worry about safety. I made this joke in the other panel: he's the man that drank bleach during COVID. But the point is we have to study.

And people say, oh, it's all moving so fast. The alpha males say that, right? The alpha males say, it's all moving so fast, and I'm bigger, better, faster and cheaper than you are, right? All that sort of alpha-male stuff. We have to think about how we actually measure the effects of what we're doing. So, two good things that have come out of the UK; this is my last point. Just this last month, the National Physical Laboratory, I'm their AI advisor but that's beside the point, it's like the UK equivalent of NIST: they do our metrology, the science of measurement. It's a word I've learnt to say very well; weather forecasting is meteorology, studying the weather. If we can do that, we can do flipping AI, because weather is complicated too. The thing about AI, of course, is that it's got people in it, not just physical objects doing things: systems. So it's harder in that sense. But the National Physical Laboratory announced two weeks ago, backed by the UK government, the Centre for AI Measurement. And the UK AI Security Institute, which was founded by Rishi Sunak at Bletchley Park, is part of the network of security institutes.

And the US, this is the man again who drank bleach during COVID, says no regulation. So we can't talk about the network being a network of safety institutes; why would we want to be safe? Sorry, joke. But they've renamed it the Network for AI Measurement and Evaluation. Now, this is brilliant. Brilliant. So with my ACM hat on, and everything else I can do in the dying embers of my career (no, it's not dying yet), the idea is to start a science of AI that's about AI metrology. But what we're doing, of course, is measuring the effects of social machines, which is difficult. The social scientists have taught me how you have to gather the data.

How do you gather the evidence? And we can do it. There is time to do this; the world is not going to end at the end of this year because of AI. Other things, yes, but not because of AI. So that's where I want to leave you, with the thought that if we can develop this new science, put all the compute power and the best brains from social science and computer science and psychology and all the other disciplines we need, the law, everything, we can really start to think about how we measure trust. One of the metrics in AI metrology will be the trust factor. I'll leave it there. Thank you very much; a round of applause, please. And I'm ever so sorry, but I've got to go in two minutes; you can ask me one thing.

Justin Carsten

I'll ask you one thing very briefly then: open data, you've been a proponent of it, right, with Tim and Nigel?

Wendy Hall

Yeah, yeah, yeah.

Justin Carsten

So I just wanted to ask: openness and collaboration are important, and we've talked about open source. What role do you think open data has in trustworthiness?

Wendy Hall

Well, there are two things about that. The open data movement has been really important, but not all data can be open; it can't be. And you can have data that is exchangeable and shareable that won't necessarily be open. Another thing I'm on is the UN CSTD, the Commission on Science and Technology for Development, data governance working group, and I could tell you in much more detail about that. For me, we ignore data governance when we talk about AI governance at our peril, and we've really got to build on that. From the UN report we did, the General Assembly accepted all the points we recommended, and they're being implemented; that's the other panel I should have been on today, there's a UN panel. They accepted everything that we recommended: the global scientific panel, the global dialogue, the global fund. And the Secretary-General yesterday asked for three billion, which is not very much, you know, for a global fund to develop AI in the global south. But our recommendations on data governance were not accepted, because the countries would not vote for them; it's so difficult, it's so complicated. So another thing I'm working hard on is how we can actually do cross-border data sharing, how we get the data flows so we can actually share data sets. And another thing we need to do, which is something I want to do, is tell people where the data is. We need data repositories, or at least registries, around the world, so researchers know where the data is and can do these studies. I'll leave you with that; that's something else that was on my agenda.

Justin Carsten

Thank you so much, Wendy. Yes, thank you. Thanks so much. I'm going to go to each of the panelists for just 30 seconds. I'll start with Dr. Garg, then Harish, then Natasha, and then Peter, just to keep us busy. Just one comment for the audience about how we really push this democratizing AI and trustworthiness.

Dr. Saurabh Garg

Yes. I think one issue, which I mentioned in the earlier panel, is that we perhaps need to give a lot more attention to the models, because more efficient models will help reduce the requirement for compute and energy, which is among the biggest costs presently. And having models which are more domain-specific would also enable better usage of those models and widen diffusion. Thank you so much.

Justin Carsten

Harish.

Participant

Just very quickly: I think real-world evidence is going to be very important in terms of, is it actually useful? We all assume it's useful, and I'm talking about the social and development sector; I can imagine so many ways it's useful. But it would be good to make sure we build evidence on how it can be trusted and, of course, be useful, and to metricize this a bit more. Thank you.

Justin Carsten

Thank you. Natasha? Well, I

Natasha Crampton

think one of the points that has come out clearly in this discussion is that trustworthy AI diffusion is not going to just happen by itself. We have to make choices that lead to that outcome. And so for that reason, I am excited about these attempts at measurement in multiple dimensions: measurement of the systems, but also measurement of the changes in our economy, so that we can then start to see whether the interventions that we're putting in place are actually having the desired effect. Because we get to write this future, but we have to actively guide it. And I think data in multiple dimensions is really important. Thank you. And the

Justin Carsten

final word on measurement should go to Peter. So Peter. I’m going

Peter Mattson

to echo the obvious point, which is that measurement is tremendously important. And then the hidden point, which is the scope of measurement is vast. And so we need to get really good at it, both in terms of quality and the efficiency, the cost efficiency with which we can implement it and with which we can evolve it. Thank you. Could you

Justin Carsten

please give a round of applause to an excellent panel. Thank you so much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (20)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (medium confidence)

“Justin Carsten served as the moderator/host of the panel discussion.”

The knowledge base lists Justin Carsten as the moderator/host of the session [S2].

Correction (high confidence)

“Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South.”

Microsoft’s announcement referenced in the knowledge base states the company is on pace to spend $50 billion by the end of this year, not by the end of the decade [S39].

Additional Context (low confidence)

“The first component involves building data‑centre and connectivity infrastructure that respects national sovereignty, offering public‑cloud and private‑cloud options with “sovereignty controls”.”

The knowledge base discusses Microsoft’s sovereign-cloud approach and the importance of data-centre sovereignty for states, providing additional detail on how such controls are being designed [S26] and [S120].

External Sources (120)
S1
The Foundation of AI Democratizing Compute Data Infrastructure — I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.
S2
S3
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S4
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S5
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those fiv…
S6
Democratizing AI Building Trustworthy Systems for Everyone — – Natasha Crampton- Participant – Peter Mattson- Natasha Crampton
S7
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Thank you so much for inviting me. I think this is obviously a very complex question, not fully settled, I will say for …
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And I think that’s something that’s very evident in this conversation. So it’s great to be part of this club. So C -Line…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — And so it’s very easy for the students who are in a school. You know, they can do their assignments in a minute or in a …
S10
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S11
Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516 — Prateek Waghre:Thank you very much for having me. I was told I have about 10 minutes, so I’ve just started my timer to m…
S12
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it …
S13
Democratizing AI Building Trustworthy Systems for Everyone — – Peter Mattson- Natasha Crampton- Participant – Peter Mattson- Wendy Hall- Participant
S14
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — And if we think about this, because Microsoft is a global corporation, you’ve got lots of countries, each with, just as …
S15
S16
WS #84 The Venn Intersection of Cyber and National Security — Karsan Gabriel: Thank you very much. My name is Carsten. I work as the coordinator of the African Parliamentarian Ne…
S17
Overcoming the fragmentation of the digital governance: what role for the Global Digital Compact and e-trade rules? (South Centre) — Developing countries, in particular, face challenges in keeping track of discussions and negotiations related to digital…
S18
Multistakeholder Partnerships for Thriving AI Ecosystems — Not only the big players. So all those things need framework and need governance. And we have to make sure that the outc…
S19
Keynote-Surya Ganguli — Energy Efficiency: Learning from Biological Computation So, I work in a unified science of intelligence across both bra…
S20
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sha…
S21
Building Public Interest AI Catalytic Funding for Equitable Compute Access — “computer capability collaboration connectivity compliance and context”[3]. “From these discussions, there were six foun…
S22
Published by DiploFoundation — An important argument of this paper is that traditional diplomatic training is no longer adequate to address the global …
S23
Authors — Governments have a leading role to play in developing cybersecurity norms; however their challenge is that they must do …
S24
Safeguarding Children with Responsible AI — Cultural, contextual, and inclusion considerations She highlights the need for global norms that respect cultural and r…
S25
AI in 2026: Learning to live with powerful systems — Purpose-built models designed for specific domainsbegin to play a more prominent role. In healthcare, education, public …
S26
WS #43 States and Digital Sovereignty: Infrastructural Challenges — Ekaterine Imedadze: Thank you so much for amazing question. Thank you. Actually, you pointed out in the question, the to…
S27
Data centres now deemed critical national infrastructure in the UK — Great Britain has recently designated its data centres as ‘critical national infrastructure’, a move designed to bolster t…
S28
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — Kossi AMESSINOU: Thank you, Moderator. Data is very important for government, because when we don’t have data, we don’…
S29
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And then we are coupling that with investments in skilling. So we have made some big-number commitments around how we a…
S30
Leaders TalkX: ICT application to unlock the full potential of digital – Part I — Himanshu Rai: So, thank you for the question. You know, I’ll foreground it in a little bit of a fact about what is the m…
S31
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf: Sure. Yeah, it’s kind of hard to go after, you know, Elena. And that was a very, very good point an…
S32
Conversational AI in low income & resource settings | IGF 2023 — In conclusion, the analysis underscores the potential of conversational AI in addressing healthcare gaps and improving g…
S33
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — The widespread adoption of efficient infrastructure implementations across sectors is supported by arguments that model …
S34
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S35
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S37
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Gurumurthy argues that mainstream AI solutions often fail Global South contexts and advocates for alternative approaches…
S38
India’s AI Leap Policy to Practice with AIP2 — Thanks, Doreen. As you can see, Doreen has spent her career in ensuring. Every country, every community has access to or…
S39
Keynote-Brad Smith — -Infrastructure investment requirements: The need for massive investment in data centers, compute power, connectivity, a…
S40
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27] “AI may shape the balance of power, but it is the governance of AI t…
S41
Can we test for trust? The verification challenge in AI — Anja Kaspersen stressed the importance of bringing technical professional organizations into governance conversation…
S42
Regulating Open Data: Principles, Challenges and Opportunities — It is also evident in the market concentration of hyperscale cloud providers whose global dominance shapes where data is…
S43
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Isabella Hampton: Thank you for the question. So the key consideration that I think organizations should make is framing,…
S44
Connecting open code with policymakers to development | IGF 2023 WS #500 — Building trust in open source was another significant argument put forth. In Nepal, for instance, there was a lack of tr…
S45
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — The research does not provide specific supporting facts in this regard, but it implies that efforts should be made to id…
S46
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S47
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern rais…
S48
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychol…
S49
Driving India’s AI Future Growth Innovation and Impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S50
Multistakeholder Partnerships for Thriving AI Ecosystems — And what are the conditions that helped ensure these collaborations? Translated into sustained impact rather than… and…
S51
Building Population-Scale Digital Public Infrastructure for AI — To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Ken…
S52
Democratizing AI Building Trustworthy Systems for Everyone — “Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right?”[62]….
S53
Building the Next Wave of AI: Responsible Frameworks & Standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S54
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S55
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou: OK. Wonderful. So I really hear your concerns, in fact. And it’s interesting, starting by I was expecting, in …
S56
From India to the Global South: Advancing Social Impact with AI — Darren Farrant from the United Nations Information Centre, speaking with his characteristic Australian humor about crick…
S57
Democratizing AI: Open foundations and shared resources for global impact — The model incorporates over 1,000 languages, including Swiss minority languages, addressing critical gaps in AI accessib…
S58
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Preserving multilingual societies is essential because different language structures enable different ways of thinking a…
S59
How the Global South Is Accelerating AI Adoption: Finance Sector Insights — And I think that’s true in the short term when the ecosystem is getting prepared. But in longer term, frauds and mis-se…
S60
Building Scalable AI Through Global South Partnerships — In India, we are a subcontinental scale. There are 22 official languages and many other languages which need to be taugh…
S61
Democratising AI: the promise and pitfalls of open-source LLMs — At the Internet Governance Forum 2024 in Riyadh, the session Democratising Access to AI with Open-Source LLMs explored a tr…
S62
WS #100 Integrating the Global South in Global AI Governance — 4. Leveraging Private Sector Involvement Jill: Thank you, Fadi. I think in a nutshell, I think it’s important to ackno…
S63
Open Forum #58 Collaborating for Trustworthy AI an OECD Toolkit and Spotlight on AI in Government — Challenges and Implementation Barriers. Anne Rachel: Thank you very much and good afternoon everybody. I’m actually v…
S64
AI and Digital @ WEF 2024 in Davos — AI access and control should not be exclusive to a few corporations but accessible to all, including the developing worl…
S65
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S66
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Thank you so much, Ahitha, for presenting me today. I’m so glad to be here to discuss this very importa…
S67
Artificial Intelligence & Emerging Tech — Another important aspect highlighted in the analysis is the ethical considerations in AI development. It is argued that …
S68
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S69
UK AI plan calls for AI sovereignty and bottom-up developments — The UK government has launched an ambitious AI Opportunities Action Plan to accelerate the adoption of AI to drive economi…
S70
Global AI Policy Framework: International Cooperation and Historical Perspectives — Given your role in leading AI policy at United Nations Office for Digital and Emerging Technologies, what are the AI pri…
S71
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S72
Policymaker’s Guide to International AI Safety Coordination — Moderate disagreement with significant implications – while speakers share common concerns about AI safety, their differ…
S73
Powering AI | Global Leaders Session | AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S74
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S75
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — The convergence on skills development as a critical priority, combined with innovative approaches to infrastructure shar…
S76
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We’re investing to train everyone.”[15]. “We have over 350,000 people here, and we are growing ourselves.”[208]. “And w…
S77
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick: Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S78
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S79
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S80
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S81
Smart Regulation Rightsizing Governance for the AI Revolution — However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond …
S82
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Global AI Governance Alignment: The critical need for international coordination on AI regulation to avoid fragmentatio…
S83
Democratizing AI Building Trustworthy Systems for Everyone — A key component addresses multilingual and multicultural AI development, as “AI is no good to you if it does not work in…
S84
Welfare for All Ensuring Equitable AI in the World’s Democracies — -Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western econ…
S85
Microsoft commits $17.5 billion to AI in India — The US tech giant, Microsoft, has announced its largest investment in Asia, committing US$17.5 billion to India over four …
S86
Building Scalable AI Through Global South Partnerships — “So the examples I’ve given of TB, government has a wonderful platform called Nikshay”[8]. “Rajasthan, as an example, ha…
S87
Discussion Report: AI Implementation and Global Accessibility — $4 billion announcement for enabling capacity building for nearly 20 million people across the world over the next two t…
S88
Can we test for trust? The verification challenge in AI — This comment fundamentally reframes the discussion by deconstructing the oversimplified concept of ‘trust’ in AI. It pro…
S89
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier: If I can say something, because that’s very important. So most of the people that went to work with me did…
S90
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Chris Albon: I think when it comes to regulation, I agree with Jim. I would love to see space for people, particularly pe…
S91
Regulating Open Data: Principles, Challenges and Opportunities — It is also evident in the market concentration of hyperscale cloud providers whose global dominance shapes where data is…
S92
Global Perspectives on Openness and Trust in AI — Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. Okay…
S93
From Technical Safety to Societal Impact Rethinking AI Governance — This comment created a pivotal moment that shifted the discussion from theoretical safety concerns to examining the very…
S94
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S95
Global AI Governance: Reimagining IGF’s Role & Impact — Shamira Ahmed, Paloma Lara-Castro, William Bird. AI presents shared challenges and opportunities for humanity, requiri…
S96
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — A lot of efforts are concentrated in a handful of countries and companies. Developing countries need to be included for …
S97
Next-Gen Education: Harnessing Generative AI | IGF 2023 WS #495 — Involving different stakeholders, organizations, and companies is emphasized throughout the discussions. This inclusive …
S98
Main Session on Artificial Intelligence | IGF 2023 — Audience: Okay. Hello, everybody. This is Hossein Mirzapour from data for governance lab for the record. Thank you for br…
S99
Leveraging the UN system to advance global AI Governance efforts — Daren Tang: I think the most important thing is that we need to be the platform where we are big tent and we’re inclusive…
S100
WS #97 Interoperability of AI Governance: Scope and Mechanism — 3. The need to streamline UN agencies and define clear duties (Mauricio Gibson) Mauricio Gibson emphasized the need for…
S101
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Cross-cultural understanding is also important for translating research into a global aspect. Ethical considerations, in…
S102
Plenary session on CBMs and capacity building — Team Pink: Once again, let me say thanks, Mr Chair, for affording me the privilege and our group to interact. I think one…
S103
Futuring Peace in Northeast Asia in the Digital Era | IGF 2023 Open Forum #169 — The analysis then delves into the challenges of international cooperation, particularly in regions with differing stages…
S104
EU institutions are close to reaching a deal on data sharing between businesses and governments — The Data Act, a landmark law governing how data is accessed, transferred, and shared, was the subject of an update commun…
S105
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S106
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — From these discussions, there were six foundational pillars that we had to address. And we thought need to form the back…
S107
Cybersecurity of Civilian Nuclear Infrastructure | IGF 2023 WS #220 — Building trust and cooperation with industry is crucial for the IAEA. While the organization has purchased commercial pr…
S108
Internet standards and human rights | IGF 2023 WS #460 — Lastly, Perkins asserts that engagement in standard development requires time, effort, and expertise. He emphasizes that…
S109
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S110
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S111
Shaping the Future AI Strategies for Jobs and Economic Development — So, Narendra, first of all, welcome. Narendra is the MD of RackBank and NeveCloud. Narendra, you’ve heard all the challe…
S112
Microsoft Reimagine Tomorrow Summit — Microsoft held its first virtual summit titled ‘Forward Together. Reimagine Tomorrow’ on 12-13 October 2020. During the …
S113
Microsoft to boost AI investment in South Africa — Microsoft has announced plans to invest an additional 5.4 billion rand (about $296.81 million) by 2027 to enhance its clou…
S114
Brazil to benefit from major $2.7 billion Microsoft AI Investment — Microsoft has committed $2.7 billion to enhance cloud and artificial intelligence infrastructure in Brazil. The investment…
S115
Microsoft at 50 – A journey through code, cloud, and AI — Microsoft, the American tech giant, was founded 50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his child…
S116
Conversation: 01 — Krishnan outlined the Trump administration’s three-pillar strategy developed over 13 months. The first pillar focuses on…
S117
Digital sovereignty: the end of the open Internet as we know it? (Part 1) — Perceptions are changing drastically and fast, because the political project of liberalism is being overridden by a neo-m…
S118
Analyst flags potential slowdown in Microsoft’s data centre expansion — Microsoft has reportedly scrapped leases for significant data centre capacity in the United States, raising concerns about …
S119
Microsoft to invest in Sweden’s digital transformation and open sustainable data centers in 2021 — Microsoft has announced its plan to invest in Sweden’s digital transformation and open sustainable data centres in 2021. T…
S120
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — Bullwinkel acknowledged the legitimacy of sovereignty concerns whilst emphasising that trust forms the foundation of all…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Saurabh Garg
3 arguments, 132 words per minute, 297 words, 134 seconds
Argument 1
Governance complexity and sharing mechanisms (Garg)
EXPLANATION
Garg highlights that coordinating AI resources across nations faces significant governance hurdles, especially around how foundational computing resources are shared. Effective governance structures and sharing protocols are essential to manage the interdependent AI ecosystem.
EVIDENCE
He notes that while sharing foundational compute resources is a major challenge, a larger issue is managing the interdependence of the AI ecosystem across hardware, software, and protocols, and he stresses the need for governance around sharing mechanisms and protocols [6-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenges of fragmented digital governance and the need for comprehensive frameworks are highlighted in discussions on digital governance fragmentation [S17] and multistakeholder partnership governance [S18].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
AGREED WITH
Justin Carsten
Argument 2
Need for talent and institutional capability to manage AI ecosystems (Garg)
EXPLANATION
Garg argues that merely acquiring infrastructure is insufficient; developing expertise and institutional capacity is crucial for sustainable AI collaboration. Talent development and institutional capability are required to operationalize shared AI resources.
EVIDENCE
He points out that while infrastructure can be acquired, expertise must be developed, emphasizing the importance of talent and institutional capability for managing AI ecosystems [8-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Garg’s emphasis on capability development and talent is reflected in the AI democratization report that notes his prioritization of talent and domain-specific models [S1] and in the identified talent pillar of public-interest AI funding frameworks [S21].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
Argument 3
More efficient, domain‑specific models reduce compute and energy costs, aiding diffusion (Garg)
EXPLANATION
Garg suggests that creating more efficient, domain‑specific AI models can lower compute and energy requirements, making AI more affordable and easier to diffuse globally. This approach can also help address the high cost of large models.
EVIDENCE
He states that focusing on more efficient models will reduce compute and energy costs, and that domain-specific models enable better usage and wider diffusion across regions [310-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for less-power domain models is supported by Garg’s prioritization of such models in the democratizing compute report [S1] and by broader industry trends toward purpose-built domain models [S25].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
Natasha Crampton
6 arguments, 140 words per minute, 1432 words, 611 seconds
Argument 1
$50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton)
EXPLANATION
Microsoft has pledged a $50 billion investment by 2030 to accelerate AI diffusion to the Global South, organized around five strategic pillars: infrastructure, skilling, multilingual AI, local innovation, and data for policy. Each pillar targets a specific barrier to AI adoption.
EVIDENCE
She announces a $50 billion commitment and outlines the five pillars: building data-centre and connectivity infrastructure, large-scale skilling, multilingual and multicultural AI, supporting local innovation, and providing data for policy makers, all aimed at closing the AI gap between the Global North and South [30-33][33-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five-pillar strategy with sovereignty-aware infrastructure is described in the democratizing AI report outlining the five-pillar approach [S2] and echoed in the six-pillar framework of catalytic funding initiatives [S21].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
DISAGREED WITH
Participant
Argument 2
Deep partnerships with governments and NGOs are required to deliver each pillar (Crampton)
EXPLANATION
Crampton stresses that the success of each pillar depends on strong collaborations with governments, NGOs, and other stakeholders. Partnerships enable sovereign‑respectful infrastructure, funding, and local relevance.
EVIDENCE
She describes designing data centres with sovereignty controls and collaborating with government partners worldwide, noting that significant private-sector and governmental funding are needed, and later affirms that none of the five limbs is possible without deep partnership [38-42][72-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of deep government and NGO partnerships is emphasized in multistakeholder partnership discussions that call for governance frameworks and open-source outcomes [S18].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
Argument 3
Configurable controls enable AI products to respect diverse legal and cultural contexts (Crampton)
EXPLANATION
Crampton explains that Microsoft builds AI products with configurable controls, allowing downstream users to adapt them to local laws, cultural norms, and values. This flexibility is key to ensuring trust and relevance across jurisdictions.
EVIDENCE
She notes that Microsoft builds technology with enough controls and choices for downstream adaptation, carefully considers defaults while recognizing the need for agency and local context, and emphasizes that without such adaptability the technology could not achieve global reach [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI guidelines stress cultural and contextual adaptability, aligning with configurable controls for legal diversity [S24], while sovereignty-aware design of data-centre services also highlights such flexibility [S2].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
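The "same models, locally configured" idea can be made concrete with a minimal sketch: a shared policy object whose fields downstream operators override per jurisdiction, falling back to common defaults everywhere else. All names here (ModelPolicy, the override keys) are hypothetical illustrations, not Microsoft's actual controls.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ModelPolicy:
    """Deployment-time controls a downstream operator can adjust."""
    content_filter: str = "strict"      # shared "trusted by design" default
    data_residency: str = "in-region"
    default_language: str = "en"

BASE = ModelPolicy()  # the defaults shipped with the product

# Overrides a local operator supplies for its own jurisdiction (illustrative).
OVERRIDES = {
    "IN": {"default_language": "hi", "data_residency": "in-country"},
    "JP": {"default_language": "ja"},
}

def policy_for(jurisdiction: str) -> ModelPolicy:
    """Same model everywhere; only the configurable defaults change."""
    return replace(BASE, **OVERRIDES.get(jurisdiction, {}))

print(policy_for("IN").default_language)  # overridden locally
print(policy_for("US"))                   # falls back to the shared defaults
```

The point of the sketch is that one product can satisfy different legal and cultural contexts without forking the underlying model: only the configuration layer varies.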
Argument 4
Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton)
EXPLANATION
Crampton outlines investments in data centres and connectivity, emphasizing that designs incorporate sovereignty controls so host nations retain agency over their infrastructure. This respects fragmented global regulations and promotes trust.
EVIDENCE
She describes investments in data centres and connectivity, the need to meet electricity requirements, and the inclusion of sovereignty controls and private-cloud options to give countries agency over hosted data centres [33-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sovereignty-controlled data-centre investments are detailed in the five-pillar description [S2] and reinforced by discussions on digital sovereignty challenges and the classification of data centres as critical infrastructure [S26][S27].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
AGREED WITH
Wendy Hall
Argument 5
Large‑scale skilling programmes, e.g., training 2 million teachers in India, to drive diffusion (Crampton)
EXPLANATION
Crampton highlights a targeted initiative to educate 2 million Indian teachers on AI, recognizing that teacher training cascades knowledge to students and the future workforce, thereby accelerating AI adoption.
EVIDENCE
She states that Microsoft committed to teach AI-specific skills to 2 million Indian teachers in partnership with national standards and training institutions, linking educator training to broader workforce development [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Large-scale skilling commitments in India, targeting millions of learners, are reported in the scaling initiative for 10 million Indians by 2030 [S29] and contextualized within higher-education missions [S30].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
Argument 6
Culturally and linguistically adapted AI, with configurable controls, ensures relevance and acceptance in diverse societies (Crampton, Participant)
EXPLANATION
Both speakers stress that AI must work in local languages and cultural contexts, and that configurable controls allow adaptation to varied legal frameworks. This cultural and linguistic alignment is essential for trust and widespread uptake.
EVIDENCE
Crampton details collaborations to expand safety benchmarks for Hindi, Tamil, Malay, Japanese, and Korean, as well as the Lingua Africa initiative to collect rich local data, emphasizing the need for AI to work in users’ languages and cultures [54-61]; participants add that language support, local policy differences, and culturally appropriate models are critical for trustworthiness [192-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for multilingual, culturally adapted AI is highlighted in inclusive AI for development discussions [S20], responsible AI cultural considerations [S24], and low-resource AI deployment studies emphasizing language support [S32].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
Participant
3 arguments, 158 words per minute, 1007 words, 381 seconds
Argument 1
Edge compute, language support, and sustainable low‑energy models are critical for trust in low‑connectivity settings (Participant)
EXPLANATION
The participant argues that for regions with poor connectivity, AI must run on edge devices, support local languages, and be energy‑efficient to maintain reliability and user trust. Sustainable, low‑parameter models are needed to ensure accessibility.
EVIDENCE
He discusses the need for dispersed, decentralized models on the edge, language suitability, reliable inference for frontline workers, and the importance of lower-parameter, lower-energy models to address sustainability and trust in low-connectivity environments [190-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on AI in low-income settings underscores the importance of edge compute, local language support, and low-parameter, energy-efficient models for trustworthy deployment [S32] and links these needs to broader sustainability concerns [S33].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
Argument 2
Open‑source models lower cost barriers, making AI accessible to the Global South (Participant)
EXPLANATION
The participant emphasizes that open‑source AI reduces financial barriers for governments in the Global South, enabling broader adoption where funding is limited. Open‑source also fosters local innovation and trust.
EVIDENCE
He notes that many governments in the Global South cannot afford large, long-term costs, and that open-source can help them adopt AI use cases more affordably [216-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Investments in open-source to make AI outcomes widely available are discussed in multistakeholder partnership frameworks [S18], and open-source is identified as a key pillar in catalytic funding for equitable compute access [S21].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
DISAGREED WITH
Natasha Crampton
Argument 3
Culturally and linguistically adapted AI, with configurable controls, ensures relevance and acceptance in diverse societies (Crampton, Participant)
EXPLANATION
The participant adds that language and cultural relevance are vital for trust, citing examples of state‑specific policies and the need for AI tools to adapt to local regulations and linguistic nuances. This complements Crampton’s emphasis on multilingual AI.
EVIDENCE
He references the Bhashini project, state-specific rules in Uttar Pradesh vs. Telangana, and the necessity for AI tools to work in the appropriate language and cultural context to be trusted [192-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for multilingual, culturally adapted AI is highlighted in inclusive AI for development discussions [S20], responsible AI cultural considerations [S24], and low-resource AI deployment studies emphasizing language support [S32].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
Wendy Hall
2 arguments, 156 words per minute, 1740 words, 667 seconds
Argument 1
Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall)
EXPLANATION
Hall describes the creation of UK AI measurement bodies, such as the Centre for AI Measurement and the AI Security Institute, to develop systematic metrics for AI trustworthiness. She likens this effort to metrology in physical sciences.
EVIDENCE
She explains that the National Physical Laboratory, acting as the UK equivalent of NIST, announced the Centre for AI Measurement and the AI Security Institute, aiming to build a science of AI metrology and develop trust metrics like a ‘trust factor’ [290-299].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
DISAGREED WITH
Peter Mattson
Argument 2
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall)
EXPLANATION
Hall argues that while open data is valuable, not all data can be fully open; instead, mechanisms for cross‑border sharing, data registries, and robust data governance are needed to balance openness with privacy and sovereignty concerns.
EVIDENCE
She notes that the open data movement is important but limited, emphasizes the need for exchangeable but not necessarily open data, mentions her work with the UN CSTD data governance working group, and calls for global data registries to facilitate research while respecting privacy [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five-pillar approach includes open data governance with sovereignty controls [S2], and broader data-governance pillars emphasize cross-border sharing and registries while balancing privacy [S21].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
DISAGREED WITH
Natasha Crampton
Peter Mattson
2 arguments · 172 words per minute · 901 words · 314 seconds
Argument 1
Open, industrial‑grade benchmarks are necessary to make AI systems reliable (Mattson)
EXPLANATION
Mattson stresses that reliable AI requires common, industrial‑grade benchmarks that go beyond academic prototypes. Such benchmarks provide consistent yardsticks for safety, security, and performance across the industry.
EVIDENCE
He explains that AI reliability hinges on common yardsticks and that moving from experimental benchmarks to industrial-quality benchmarking is essential, highlighting the need for dependable multilingual safety and security benchmarks and noting the repeated emphasis that a benchmark must be made available to industry, not just published in a paper [121-124][132-164].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
DISAGREED WITH
Wendy Hall
Argument 2
Open benchmarks and federated evaluation enable reliable, privacy‑preserving testing across jurisdictions (Mattson)
EXPLANATION
Mattson presents federated evaluation, exemplified by the MedPerf project, as a way to test AI models on distributed data while preserving privacy and complying with diverse legal regimes. This approach supports trustworthy, cross‑jurisdictional AI deployment.
EVIDENCE
He describes the MedPerf project that uses federated evaluation to send models to different facilities, test on local data, and aggregate results, illustrating how federated evaluation and confidential compute enable reliable, privacy-preserving benchmarking across varied legal systems [133-137].
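The federated-evaluation pattern described here can be sketched in a few lines: the model travels to each facility, is scored against local data that never leaves the site, and only aggregate metrics are returned. This is a minimal illustration of the general technique; the function names and toy parity model below are hypothetical and are not MedPerf's actual API.

```python
# Illustrative sketch of federated evaluation: the model is evaluated at
# each facility against private local records, and only aggregate counts
# (not raw data) are shared back for pooling.

def local_evaluate(model, records):
    """Score the model on one facility's private records; return counts only."""
    correct = sum(1 for x, y in records if model(x) == y)
    return {"correct": correct, "total": len(records)}

def federated_accuracy(model, facilities):
    """Aggregate per-site metrics without ever pooling the raw data."""
    totals = {"correct": 0, "total": 0}
    for records in facilities:
        metrics = local_evaluate(model, records)  # runs at the facility
        totals["correct"] += metrics["correct"]   # only counts cross the wire
        totals["total"] += metrics["total"]
    return totals["correct"] / totals["total"]

# Toy model and two "facilities" holding private labelled data
model = lambda x: x % 2            # predicts parity of the input
site_a = [(1, 1), (2, 0), (3, 1)]  # all labels match parity
site_b = [(4, 0), (5, 0)]          # one mislabeled record
print(federated_accuracy(model, [site_a, site_b]))  # 0.8
```

In a real deployment the `local_evaluate` step would run inside each facility's infrastructure (potentially in confidential-compute enclaves, as mentioned in the session), which is what lets the same benchmark span jurisdictions with different data-protection regimes.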
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
Justin Carsten
2 arguments · 81 words per minute · 1457 words · 1070 seconds
Argument 1
Collaboration across nations is essential for progress (Carsten)
EXPLANATION
Carsten underscores that the scale of the summit and the willingness of many stakeholders to work together illustrate the importance of international collaboration for AI progress. He frames collaboration as a key challenge and opportunity.
EVIDENCE
He remarks on the larger summit, the increased openness, and the need to coordinate international working groups, asking what the biggest challenges are and praising the collaborative spirit [5][70-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder partnership discussions stress the importance of international collaboration and shared governance frameworks for AI ecosystem development [S18].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
AGREED WITH
Natasha Crampton, Peter Mattson
Argument 2
Systematic measurement is vital for democratizing trustworthy AI (Carsten)
EXPLANATION
Carsten calls for robust measurement frameworks to ensure AI systems are trustworthy and democratically accessible. He emphasizes that measurement guides policy and validates interventions.
EVIDENCE
He urges the panel to focus on democratizing AI and trustworthiness, noting the need for measurement to assess impact and guide policy, and later thanks the panel for their contributions [306-309][332-334].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
AGREED WITH
Wendy Hall, Peter Mattson, Natasha Crampton
Agreements
Agreement Points
Governance and coordination challenges are a central barrier to international AI collaboration
Speakers: Dr. Saurabh Garg, Justin Carsten
Governance complexity and sharing mechanisms (Garg) Collaboration across nations is essential for progress (Carsten)
Both speakers highlight that effective governance structures and sharing protocols are critical to manage the interdependent AI ecosystem across countries, and that coordinating such efforts is a major challenge [6-7][5][70-71].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors calls for multistakeholder governance frameworks highlighted in discussions on AI ecosystem partnerships [S50] and reflects the coordination bottlenecks identified in national data-centre planning in India [S73]. It also aligns with the broader governance disagreement noted in international AI policy analyses [S71].
Deep partnerships with governments, NGOs and ecosystem players are essential to deliver AI diffusion pillars
Speakers: Natasha Crampton, Justin Carsten, Peter Mattson
Deep partnerships with governments and NGOs are required (Crampton) Collaboration across nations is essential for progress (Carsten) Partner ecosystem needed for adaptable models (Mattson)
All three emphasize that none of the AI diffusion components can succeed without strong, multi-stakeholder partnerships, including government, NGOs and a vibrant partner ecosystem [72-74][5][70-71][91-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of public-private-NGO collaborations is underscored by the multistakeholder partnership principles in [S50] and by the Gates Foundation’s scaling-hub model that channels government funding through regional partners in Africa [S51]. Private-sector involvement in capacity-building is further emphasized in the Global South forum [S62].
Systematic measurement and metrics are vital for trustworthy and democratic AI
Speakers: Wendy Hall, Peter Mattson, Justin Carsten, Natasha Crampton
Establishing AI metrology and measurement institutes (Hall) Open, industrial‑grade benchmarks are necessary (Mattson) Systematic measurement is vital for democratizing trustworthy AI (Carsten) Measurement important for guiding interventions (Crampton)
The panel converges on the need for robust, standardized measurement frameworks, ranging from AI metrology institutes to industrial-grade benchmarks, to assess trustworthiness and guide policy and deployment [290-299][328-330][306-309][320-324].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for common yardsticks and industrial-grade benchmarks for AI reliability are documented in expert testimonies on trustworthy AI [S52], the development of testing frameworks for AI systems [S53], and the emphasis on interoperable protocols for trustworthy AI [S54].
Multilingual and culturally adapted AI is essential for global diffusion
Speakers: Natasha Crampton, Participant, Peter Mattson
Culturally and linguistically adapted AI (Crampton, Participant) Open, industrial‑grade benchmarks, including multilingual safety (Mattson)
All agree that AI must work in local languages and cultural contexts, with benchmarks and initiatives specifically targeting multilingual safety and data collection to ensure relevance and trust [54-61][192-199][124-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual AI initiatives covering over 1,000 languages have been showcased as a way to close accessibility gaps [S57]; the need to preserve multilingual societies and decolonize AI is discussed in [S58]; India’s experience with 22 official languages illustrates large-scale cultural adaptation [S60]; and open-source models are highlighted for supporting diverse linguistic contexts [S64].
Skilling and talent development are crucial for AI diffusion
Speakers: Natasha Crampton, Dr. Saurabh Garg
Large‑scale skilling programmes (Crampton) Need for talent and institutional capability (Garg)
Both stress that building expertise, through massive teacher training programmes and broader talent development, is a prerequisite for sustainable AI adoption [51-54][8-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Skill-building is a priority in national AI roadmaps, as seen in India’s AI education agenda [S74], the impact of AI on UK hiring and workforce policies [S68], and large-scale training commitments reported by industry leaders [S76].
Efficient, low‑energy models and edge compute are needed for trustworthy AI in low‑connectivity settings
Speakers: Participant, Dr. Saurabh Garg
Edge compute, language support, and sustainable low‑energy models (Participant) More efficient, domain‑specific models reduce compute and energy costs (Garg)
Both highlight that energy-efficient, possibly edge-deployed models are essential to maintain reliability and trust where connectivity and resources are limited [190-208][310-312].
Open data and open‑source models lower barriers and support trustworthy AI
Speakers: Wendy Hall, Participant, Peter Mattson
Open data governance, cross‑border sharing, data registries (Hall) Open‑source models lower cost barriers (Participant) Open, industrial‑grade benchmarks (Mattson)
The speakers concur that openness, whether through data sharing frameworks, open-source models, or publicly available benchmarks, facilitates broader, more affordable, and trustworthy AI deployment [305-311][216-220][92-93].
POLICY CONTEXT (KNOWLEDGE BASE)
The democratizing potential of open‑source large language models is a recurring theme at the Internet Governance Forum and other forums [S61, S64, S65, S66], emphasizing lower entry barriers and broader trust through transparency.
Infrastructure investments must respect national sovereignty and provide configurable controls
Speakers: Natasha Crampton, Wendy Hall
Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton) Open data governance, cross‑border sharing respecting privacy and sovereignty (Hall)
Both underline that AI infrastructure and data initiatives need to embed sovereignty-aware designs and configurable controls to align with diverse legal and cultural regimes [33-39][305-311].
POLICY CONTEXT (KNOWLEDGE BASE)
The UK AI Opportunities Action Plan explicitly calls for AI sovereignty and bottom-up development of infrastructure [S69]; broader sovereign-aware infrastructure considerations are discussed in UN-level AI policy dialogues [S70] and in calls to decolonize AI systems [S58].
Similar Viewpoints
Both see multilingual safety benchmarks and culturally adapted AI as foundational to trustworthy global AI deployment [54-61][124-126].
Speakers: Natasha Crampton, Peter Mattson
Culturally and linguistically adapted AI (Crampton, Participant) Open, industrial‑grade benchmarks, including multilingual safety (Mattson)
Both argue that standardized, industrial‑grade measurement tools are essential for AI reliability and trustworthiness [290-299][328-330].
Speakers: Wendy Hall, Peter Mattson
Establishing AI metrology and measurement institutes (Hall) Open, industrial‑grade benchmarks are necessary (Mattson)
Both stress that multi‑stakeholder collaboration is the backbone of successful AI diffusion initiatives [5][70-71][72-74].
Speakers: Justin Carsten, Natasha Crampton
Collaboration across nations is essential for progress (Carsten) Deep partnerships with governments and NGOs are required (Crampton)
Unexpected Consensus
Both a UK academic (Wendy Hall) and a Microsoft executive (Natasha Crampton) prioritize sovereignty‑aware infrastructure and measurement despite differing regional perspectives
Speakers: Wendy Hall, Natasha Crampton
Open data governance, cross‑border sharing respecting privacy and sovereignty (Hall) Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton)
It is surprising that a UK-based academic, who initially declined to discuss UK sovereign AI, aligns closely with Microsoft’s emphasis on sovereignty-aware data-centre design and measurement, indicating cross-sector convergence on respecting national legal frameworks while promoting openness [305-311][33-39].
POLICY CONTEXT (KNOWLEDGE BASE)
Their focus on sovereignty-aware infrastructure aligns with the UK AI sovereignty strategy outlined in the national AI plan [S69] and with international discussions on sovereign-respectful AI infrastructure in UN forums [S70].
Agreement between a private‑sector leader (Peter Mattson, ML Commons) and a public‑sector academic (Wendy Hall) on the necessity of industrial‑grade benchmarks for AI reliability
Speakers: Peter Mattson, Wendy Hall
Open, industrial‑grade benchmarks are necessary (Mattson) Establishing AI metrology and measurement institutes (Hall)
Despite coming from different sectors, both converge on the need for rigorous, standardized benchmarking infrastructure to underpin trustworthy AI, a point not explicitly raised by other participants [328-330][290-299].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for industrial-grade benchmarks is reinforced by expert commentary on common yardsticks for AI reliability [S52] and by the development of testing frameworks that enable benchmark-based evaluation of AI systems [S53].
Overall Assessment

The panel shows strong convergence on several fronts: the need for robust governance and coordination mechanisms; the centrality of deep, multi‑stakeholder partnerships; the importance of systematic measurement and benchmarking; the necessity of multilingual, culturally aware AI; and the role of skilling, efficient models, and open data/open‑source approaches. These shared positions cut across private‑sector, public‑sector, academic and civil‑society perspectives.

High consensus across most themes, indicating a shared understanding that trustworthy AI diffusion requires coordinated governance, partnership, measurement, and contextual adaptation. This broad agreement suggests that future policy and investment initiatives are likely to find common ground, facilitating collaborative action toward equitable AI deployment.

Differences
Different Viewpoints
Extent and openness of data sharing for trustworthy AI
Speakers: Wendy Hall, Natasha Crampton
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall) $50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton)
Wendy Hall argues that while open data is valuable, not all data can be fully open and stresses the need for exchangeable data, cross-border sharing mechanisms and global registries to balance openness with privacy and sovereignty [305-311]. Natasha Crampton emphasizes large-scale data sharing for policy making as part of Microsoft’s five-pillar strategy, presenting data sharing as a key component of AI diffusion without highlighting limits on openness [64-69]. The two positions differ on how openly data should be shared and the mechanisms required.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over open data versus security concerns are reflected in discussions on the dual nature of open-source AI, including potential vulnerabilities [S65], as well as arguments for open-source models to democratize access while managing trustworthiness [S61, S64].
Preferred mechanism to enable AI adoption in the Global South – massive private‑sector investment vs. open‑source models
Speakers: Natasha Crampton, Participant
$50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton) Open‑source models lower cost barriers, making AI accessible to the Global South (Participant)
Natasha Crampton outlines a $50 billion private-sector commitment by Microsoft, structured around five strategic pillars to close the AI gap between North and South [30-33]. The Participant counters that open-source AI models can reduce financial barriers for governments in the Global South, suggesting a lower-cost, community-driven approach instead of relying on large private investments [216-220]. This reflects a disagreement on the primary pathway to democratize AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Private-sector scaling-hub investments in Africa illustrate the massive investment pathway [S51], while capacity-building arguments stress the role of private actors [S62]; contrastingly, open-source LLM initiatives advocate for low-cost, locally adaptable models as a complementary route [S61, S64].
Primary approach to measuring and ensuring AI trustworthiness – AI metrology vs. industrial‑grade benchmarks
Speakers: Wendy Hall, Peter Mattson
Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall) Open, industrial‑grade benchmarks are necessary to make AI systems reliable (Mattson)
Wendy Hall proposes building AI metrology institutions (e.g., Centre for AI Measurement, AI Security Institute) to develop systematic trust metrics akin to physical-science metrology [290-299]. Peter Mattson argues that reliable AI requires common, industrial-grade benchmarks and federated evaluation frameworks, emphasizing the need for robust benchmarking beyond academic prototypes [121-124][132-164]. The two experts differ on whether measurement should focus on metrology institutions or on benchmark development.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between metrology-style measurement and benchmark-driven evaluation mirrors the call for common yardsticks and industrial-grade benchmarks in trustworthy AI discussions [S52, S53] and is highlighted as a source of disagreement in AI governance roadmaps [S71].
Unexpected Differences
Refusal to address UK sovereign AI strategy and off‑topic commentary on AI hype
Speakers: Wendy Hall, Other panelists (e.g., Justin Carsten, Natasha Crampton)
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall) Wendy Hall declines to answer about the UK’s sovereign AI approach and instead makes jokes about AI scaremongering and broader societal impacts (Hall)
When asked to describe the UK’s approach to sovereign AI capabilities, Wendy Hall explicitly refuses to answer and shifts the discussion to personal remarks about AI hype, scaremongering, and societal experiments, which was unexpected given the panel’s focus on trustworthy AI diffusion [252-256][258-267]. This departure created a surprising divergence from the expected substantive discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
The relevance of the UK sovereign AI strategy is underscored in the official AI Opportunities Action Plan [S69] and in analyses of UK AI policy impacts on hiring and workforce planning [S68], making avoidance of the topic a notable divergence from established policy discourse.
Overall Assessment

The panel shows broad consensus on the importance of trustworthy AI diffusion, yet diverges on the means to achieve it: governance and talent development, large private‑sector investment, open‑source models, benchmarking, and AI metrology. Disagreements are most pronounced around data openness, funding models, and measurement strategies, while an unexpected deviation occurs when Wendy Hall sidesteps a direct question about sovereign AI policy.

Moderate disagreement: while all participants share the same overarching goal, they propose distinct pathways, leading to substantive but not antagonistic conflicts. Coordinated policy will need to reconcile these differing approaches, balancing governance, investment, open‑source, and measurement frameworks, to build a cohesive global AI strategy.

Partial Agreements
All speakers agree that trustworthy AI diffusion is essential, but they propose different primary levers: Garg stresses governance and talent development; Crampton focuses on massive investment across five pillars; Mattson emphasizes industrial‑grade benchmarking; Hall calls for AI metrology and trust metrics; the Participant highlights edge compute, language support, and low‑energy models for reliability in low‑connectivity environments [6-7][30-33][121-124][290-299][190-208].
Speakers: Dr. Saurabh Garg, Natasha Crampton, Peter Mattson, Wendy Hall, Participant
Governance complexity and sharing mechanisms (Garg) $50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton) Open, industrial‑grade benchmarks are necessary to make AI systems reliable (Mattson) Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall) Edge compute, language support, and sustainable low‑energy models are critical for trust in low‑connectivity settings (Participant)
Takeaways
Key takeaways
International AI collaboration faces complex governance and coordination challenges, especially around sharing mechanisms, talent development, and institutional capability.
Microsoft announced a $50 billion, multi‑pillar strategy to accelerate AI diffusion to the Global South, focusing on infrastructure, skilling, multilingual/cultural AI, local innovation, and data for policy making.
Deep partnerships with governments, NGOs, and other private‑sector actors are essential to deliver each pillar and respect national sovereignty and cultural contexts.
Trustworthiness and reliability of AI depend on industrial‑grade, open benchmarks and systematic measurement (AI metrology), as advocated by ML Commons and the UK’s new AI measurement institutes.
Open data, open‑source models, and federated evaluation are critical for cross‑border testing, privacy preservation, and lowering cost barriers for low‑connectivity regions.
Efficient, domain‑specific and low‑energy models are needed to reduce compute and energy costs, facilitating broader diffusion.
Inclusive development, addressing language diversity, cultural norms, and gender and age representation, is necessary to avoid creating new digital divides.
Resolutions and action items
Microsoft will invest $50 billion by 2030 to build data‑centre infrastructure, improve connectivity, and support sovereign cloud options.
Microsoft commits to up‑skill 2 million Indian teachers on AI‑driven education in partnership with national standards bodies.
Launch of the Lingua Africa initiative to collect and curate multilingual data with local communities and the Gates Foundation.
Microsoft and partner AI companies will contribute adoption and usage data to a World Bank‑led central project for policy insight.
ML Commons will advance industrial‑scale, multilingual safety and security benchmarks and develop federated evaluation tools for sectors such as healthcare.
The UK’s National Physical Laboratory will establish the Centre for AI Measurement and the AI Security Institute to create AI metrology standards.
Participants called for creation of global data registries and cross‑border data‑sharing frameworks under UN data‑governance initiatives.
Unresolved issues
How to design and enforce governance frameworks that reconcile conflicting national AI regulations while maintaining interoperability.
Sustainable financing models for the massive infrastructure required in the Global South beyond private‑sector investment.
Technical and policy solutions for delivering trustworthy AI on edge devices in low‑connectivity environments.
Concrete mechanisms for measuring the real‑world impact of AI interventions and linking those metrics to policy decisions.
Balancing open‑data benefits with privacy, security, and sovereignty constraints; specifics of cross‑border data sharing remain undefined.
Ensuring inclusive participation (gender, age, regional) in AI governance and development processes.
Standardizing and scaling benchmark maintenance to keep pace with rapidly evolving AI capabilities.
Suggested compromises
Implement configurable controls and default settings in AI products so jurisdictions can adapt models to local laws and cultural values.
Combine sovereign (private) cloud deployments with shared public‑cloud resources to respect national data sovereignty while leveraging economies of scale.
Leverage open‑source model families to lower entry barriers for the Global South, while allowing local customization.
Adopt a partnership model where private investment is matched with public funding and venture capital to spread financial risk.
Use federated evaluation and confidential compute to enable cross‑jurisdictional benchmarking without moving raw data.
Develop AI measurement institutes that provide common metrics but allow region‑specific extensions to address local priorities.
Thought Provoking Comments
One of the biggest challenges would be the governance around sharing mechanisms, sharing protocols, and managing the framework. And the other would be the talent and institutional capability required to develop expertise, not just acquire infrastructure.
Highlights that technical resources alone are insufficient; governance and human capital are critical bottlenecks for global AI collaboration.
Shifted the conversation from purely technical infrastructure to the need for policy frameworks and capacity building, prompting later speakers (Natasha, Peter) to discuss measurement, standards, and partnership models.
Speaker: Dr. Saurabh Garg
Microsoft will spend $50 billion by the end of the decade to close the AI diffusion gap between the Global North and South, focusing on five pillars: infrastructure, skilling, multilingual & multicultural AI, local innovation, and data sharing for policy‑making.
Provides a concrete, multi‑dimensional roadmap that links private‑sector investment to societal outcomes, introducing the notion of sovereign‑controlled data centres and education of 2 million teachers.
Set a concrete agenda that other panelists referenced (e.g., Peter’s benchmarks, Wendy’s call for measurement), and steered the dialogue toward practical implementation and the role of large corporations.
Speaker: Natasha Crampton
Reliability, not capability, is the real barrier to AI adoption. We need industrial‑grade, repeatable benchmarks—like MedPerf’s federated evaluation—to turn experimental datasets into trustworthy, globally‑usable metrics.
Frames the core problem as trustworthiness and introduces federated evaluation as a technical solution, moving the discussion from high‑level policy to concrete evaluation methodology.
Prompted deeper discussion on measurement, inspired Wendy’s remarks on AI metrology, and reinforced the panel’s focus on trustworthy AI as a measurable objective.
Speaker: Peter Mattson
We need a new science of AI metrology – a systematic way to measure trust, safety, and societal impact, similar to how the National Physical Laboratory measures weather. This requires collaboration across computer science, social science, law, and psychology.
Introduces the ambitious concept of AI metrology, linking technical measurement to societal trust and emphasizing interdisciplinary collaboration, while also critiquing current governance gaps.
Created a turning point that broadened the conversation to include measurement standards, data governance, and inclusivity, influencing subsequent remarks about open data and the need for metrics.
Speaker: Prof. Dame Wendy Hall
Trustworthiness must consider edge inference, language diversity, energy consumption, and open‑source accessibility, especially for frontline workers in health and agriculture in low‑connectivity settings.
Brings a ground‑level perspective on practical constraints—connectivity, language, sustainability—that challenge the lofty goals of AI diffusion, emphasizing real‑world usability.
Added nuance to the earlier high‑level strategies, prompting the panel to acknowledge the importance of lightweight models, multilingual benchmarks, and open‑source solutions.
Speaker: Harish (Participant, Gates Foundation)
We should give more attention to developing efficient, domain‑specific models to reduce compute and energy costs, which will also widen diffusion across regions.
Re‑emphasizes the link between model efficiency and equitable access, tying back to earlier points about talent and infrastructure while offering a concrete technical direction.
Reinforced the earlier discussion on sustainability and guided the final round‑up toward actionable research priorities.
Speaker: Dr. Saurabh Garg (closing remark)
Open data is vital but not all data can be open; we need exchangeable, shareable datasets, cross‑border data flows, and global registries so researchers know where data resides.
Balances the ideal of openness with practical privacy and sovereignty concerns, and proposes a concrete mechanism (global data registries) to support trustworthy AI development.
Extended the earlier conversation about governance and measurement, linking data accessibility directly to the ability to create reliable benchmarks and trustworthy systems.
Speaker: Prof. Dame Wendy Hall
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad celebration of collaboration to a focused examination of the concrete levers needed for trustworthy AI diffusion. Dr. Garg’s emphasis on governance and talent reframed the problem beyond hardware. Natasha’s five‑pillar plan supplied a tangible corporate commitment, which Peter then grounded in the technical necessity of reliable, industrial‑scale benchmarks. Wendy’s call for AI metrology and data‑governance frameworks broadened the scope to include interdisciplinary measurement and inclusivity, while Harish’s on‑the‑ground concerns about edge use‑cases and sustainability added practical urgency. These comments collectively redirected the panel toward actionable strategies—standardized metrics, efficient models, multilingual support, and open yet controlled data—thereby deepening the conversation and setting a roadmap for future collaboration.

Follow-up Questions
How do you manage the challenge of ensuring AI solutions are broad enough yet tailored to individual nations’ needs?
Balancing global AI standards with diverse local regulations, cultural contexts, and sovereignty concerns is critical for widespread adoption.
Speaker: Justin Carsten
Where do you see the next big movements for ML Commons, particularly which areas of benchmarking will be important after healthcare?
Identifying future focus areas will guide research priorities, funding, and community effort toward the most impactful benchmarks.
Speaker: Justin Carsten
What role does open data play in building trustworthy AI?
Understanding the benefits and limits of open data is essential for transparency, validation, and responsible AI deployment while respecting privacy and security.
Speaker: Justin Carsten
How can we develop real‑world evidence and metrics to assess the trustworthiness and usefulness of AI in health and development contexts?
Empirical evidence is needed to validate AI interventions, inform policy, and ensure that AI delivers reliable benefits in practical settings.
Speaker: Harish (Participant)
How can we create more efficient, domain‑specific AI models to reduce compute and energy costs and accelerate diffusion?
Reducing resource demands makes AI sustainable and accessible, especially for low‑resource regions, and promotes broader diffusion.
Speaker: Dr. Saurabh Garg
What measurement frameworks are needed to evaluate AI systems across multiple dimensions (technical, economic, societal) and to track the impact of interventions?
Multi‑dimensional metrics are essential for guiding, monitoring, and assessing the effectiveness of AI democratization efforts.
Speaker: Natasha Crampton
How can we improve the quality, cost‑efficiency, and scalability of AI reliability benchmarks?
Robust, affordable benchmarks are foundational for establishing trustworthy AI across industries and for continuous improvement.
Speaker: Peter Mattson
What mechanisms are required for cross‑border data sharing, data repositories, and governance to support AI development while respecting privacy and sovereignty?
Effective data governance and infrastructure enable global collaboration and trust while protecting national interests and individual rights.
Speaker: Wendy Hall
How can multilingual and culturally sensitive AI be advanced through local data collection and community involvement?
Ensuring AI works in diverse languages and cultural contexts is vital for equitable benefits and adoption worldwide.
Speaker: Natasha Crampton
What governance structures and talent development strategies are needed to manage the interdependence of the AI ecosystem globally?
Coordinated governance and capacity building are identified as major challenges for international AI collaboration and responsible deployment.
Speaker: Dr. Saurabh Garg
How can open‑source models and weight spaces empower ecosystems to adapt AI to local laws and values?
Open models enable customization, allowing jurisdictions to apply AI within their regulatory and ethical frameworks.
Speaker: Natasha Crampton
How can a science of AI metrology be developed to measure trust and other social impacts of AI systems?
Standardized metrics for trust and societal effects would support regulation, accountability, and public confidence in AI technologies.
Speaker: Wendy Hall

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.