Democratizing AI: Building Trustworthy Systems for Everyone
20 Feb 2026 12:00h - 13:00h
Summary
The panel opened by asking what the greatest obstacle is to coordinating a global AI effort, to which Dr. Saurabh Garg identified governance of sharing mechanisms, the interdependence of hardware-software-protocol ecosystems, and a shortage of talent and institutional capability as the primary challenges [6-12]. He emphasized that while infrastructure can be acquired, expertise must be developed to democratize AI worldwide [9-10].
Microsoft’s chief responsible AI officer, Natasha Crampton, announced a $50 billion commitment to bring AI to the global south by 2030, framing the initiative around five inter-linked pillars [28-33]. The first pillar focuses on building data-centre and connectivity infrastructure while respecting national sovereignty through configurable public and private cloud controls [34-41]. The second pillar targets large-scale skilling, including a program to teach AI-specific skills to two million Indian teachers, recognizing that education drives rapid technology diffusion [46-53]. The third and fourth pillars address multilingual, multicultural AI and local innovation, with collaborations such as the Lingua Africa project and partnerships with Indian AI firms to share adoption data for policy making [54-69].
Natasha stressed that AI products must be “trusted by design” and offer configurable defaults so that different jurisdictions can apply the same models within their own legal and cultural contexts [80-86]. She noted that conflicts between jurisdictions can be mitigated by open-source models and a robust partner ecosystem that enables local adaptation [90-93].
Peter Mattson of ML Commons argued that the current bottleneck for AI is reliability, which can only be improved through common, industrial-scale benchmarks and metrics [108-124]. He described federated evaluation techniques, such as the MedPerf healthcare benchmark, that allow diverse data sets and legal regimes to be tested securely and at scale [135-137]. Both Mattson and Dr. Garg highlighted that measurement of AI performance, energy use, and domain-specific models is essential for widening diffusion and reducing compute costs [310-312][328-330].
The discussion also touched on the role of open data, with Wendy Hall noting that while not all data can be fully open, shared repositories and UN-backed data-governance frameworks are critical for trustworthy AI development [305-311]. Participants concluded that achieving trustworthy, inclusive AI will require coordinated governance, sustained investment in infrastructure and skills, culturally aware technologies, open benchmarking, and rigorous measurement to guide policy and practice [1-5][70-74][94-100][321-324].
Keypoints
Major discussion points
– Global coordination and governance challenges for AI collaboration – The panel opened with a question about the biggest hurdles in international AI work, to which Dr. Garg highlighted resource sharing, inter-dependence of hardware-software-protocol layers, governance of sharing mechanisms, and the need for talent and institutional capability [5][6-12]. Later, Justin asked how differing national laws (e.g., the “Brussels effect”) can be reconciled, and Natasha explained Microsoft’s need to embed configurable controls so each jurisdiction can apply the technology safely [75-78][80-90].
– Microsoft’s five-pillar strategy for AI diffusion to the Global South – Natasha described a $50 billion commitment structured around (1) infrastructure (data-centres, connectivity, sovereignty controls) [33-40]; (2) skilling (e.g., training 2 million Indian teachers) [46-53]; (3) multilingual & multicultural AI (Lingua Africa, safety benchmarks for Hindi, Tamil, etc.) [54-61]; (4) support for local innovation and data sharing with policy makers [62-69]; and (5) partnerships with governments and other funders [42-45].
– Trustworthiness, reliability and the need for robust measurement – Both Dr. Garg and Natasha stressed that trustworthy AI requires reliable systems and governance [7-12][80-90]. Peter Mattson expanded on this, arguing that AI’s biggest barrier is reliability, which must be demonstrated through industrial-grade, multilingual safety and security benchmarks, federated evaluation, and continuous measurement [106-136][138-146][328-331].
– Open data, open-source models and collaborative ecosystems – The panel repeatedly linked openness to trust: Microsoft’s open-weight model family and open-source releases empower ecosystems [92-93]; Wendy highlighted the importance of open data while noting that not all data can be fully public, and called for cross-border data-sharing mechanisms and registries [305-311]; Peter echoed that open benchmarks and open-source models are essential for sovereign capability building [92-93][106-112].
– Inclusion, equity and societal impact – Wendy warned that AI discussions often exclude women, children and marginalized groups, stressing the need for “all-inclusive” governance [258-270]; the participant from the Gates Foundation emphasized language, edge-computing, sustainability, and reaching the “bottom 50%” of the population to avoid new divides [183-229]; Natasha’s teacher-training initiative also illustrated a focus on equitable skill development [51-53].
Overall purpose / goal of the discussion
The panel was convened to explore how the global AI community can democratize and responsibly diffuse AI technologies, especially to the Global South, by addressing governance, infrastructure, talent, measurement, and inclusivity. Speakers presented concrete initiatives (Microsoft’s $50 bn plan, ML Commons benchmarks, UN data-governance work) and debated the policies and technical standards needed to build trustworthy, sovereign AI capabilities worldwide.
Tone of the discussion
The conversation began with a formal, appreciative tone (thanks, acknowledgments) and quickly shifted to a constructive, solution-focused dialogue about challenges and concrete strategies. Throughout, participants remained optimistic and collaborative, interspersed with occasional informal remarks and humor (e.g., jokes about “the man who drank bleach”). By the end, the tone became reflective and rallying, emphasizing collective responsibility and calling for continued measurement and open collaboration, culminating in a warm, appreciative closing.
Speakers
– Dr. Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India; AI governance expert focusing on resource sharing, interdependence of AI ecosystem, and talent development [S1].
– Natasha Crampton – Microsoft’s first Chief Responsible AI Officer; leads the Office of Responsible AI; drives AI diffusion to the Global South and oversees AI infrastructure, skilling, multilingual AI, and policy measurement [S4].
– Participant – Representative of the Gates Foundation (identified in the transcript as “Dr. Aya”); discusses philanthropic support for trustworthy AI in low-infrastructure settings, focusing on health, agriculture, edge computing, sustainability, and open-source models.
– Wendy Hall – Dame Wendy Hall, Regius Professor of Computer Science and Associate Vice-President, International Engagement, University of Southampton; Director of the Web Science Institute; former member of the UN high-level expert advisory body on AI; involved in UK AI measurement and security initiatives [S10].
– Peter Mattson – President and founder of ML Commons; Senior Staff Engineer at Google; former head of the Programming Systems and Applications group at NVIDIA; works on open benchmarks, reliability, multilingual safety, and federated evaluation [S12].
– Justin Carsten – Moderator and panel host; leads discussion on AI democratization, governance, and measurement.
Additional speakers:
– Dr. Clark – Mentioned in the closing rapid-fire round; likely an AI researcher or policy expert (specific role not detailed in the transcript).
– Dr. Aya – Gates Foundation representative (identified as the “Participant” above); senior figure in the foundation’s health and agriculture AI initiatives.
– Harish – Referred to by name during the rapid-fire segment; appears to be the same individual as the “Participant” representing the Gates Foundation, though the transcript treats the name separately.
– Brad – Cited by Justin as having given a speech earlier in the summit; no direct remarks recorded in this transcript.
– Tim Berners-Lee – Mentioned by Wendy Hall in reference to the invention of the web; not an active speaker in this session.
– Nigel Shadbolt – Referenced by Wendy Hall regarding a prior review; not a speaker in this session.
– Vint Cerf – Mentioned as an intended participant who could not attend; not a speaker in this session.
– Ms. Asha – Name called by Justin near the end, but no spoken contribution recorded.
The session opened with moderator Justin Carsten thanking the audience and the panelists and framing the discussion around the difficulty of coordinating a truly global AI effort. He asked the working-group chair what the biggest obstacle to such international collaboration might be [5][13-15]. Dr. Saurabh Garg responded that the most pressing problems lie not in the physical hardware alone but in the governance of sharing mechanisms, the interdependence of hardware, software and protocol layers, and the scarcity of talent and institutional capability to manage these resources [6-12]. He stressed that while data-centre infrastructure can be purchased, the expertise required to operate it responsibly must be cultivated [9-10].
Carsten then highlighted the political backdrop of the summit – noting the photograph of Prime Minister Modi with tech leaders and the presence of Microsoft – before introducing Microsoft’s first Chief Responsible AI Officer, Natasha Crampton, who leads the Office of Responsible AI [16-20][21-24]. Crampton announced that Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South, organising the effort around five inter-linked components [28-33].
The first component concerns the construction of data-centre and connectivity infrastructure that respects national sovereignty. Microsoft plans to invest in new data-centres and power-grid upgrades while offering both public-cloud and private-cloud options that embed “sovereignty controls” for host countries [34-41]. Crampton stressed that these facilities will be co-designed with government partners to ensure agency for the nations that house them [42-45].
The second component targets large-scale skilling. Recognising that technology diffusion historically follows education, Microsoft will train two million Indian teachers in AI-specific skills, partnering with national standards bodies to embed AI literacy at the grassroots level [46-53].
Components three and four focus on multilingual, multicultural AI and local innovation. Microsoft is collaborating with ML Commons to extend safety benchmarks to Hindi, Tamil, Malay, Japanese and Korean, and has launched the “Lingua Africa” initiative to collect rich, locally-sourced spoken-language data in partnership with the Gates Foundation [54-61][62-69]. These efforts aim to ensure AI systems operate correctly in the languages and cultural contexts of end-users, thereby supporting home-grown solutions and informing policy through shared adoption data [64-69].
The fifth component underlines the necessity of deep partnerships with governments, NGOs and other funders, acknowledging that the scale of required infrastructure cannot be met by the private sector alone [70-74].
When Carsten raised the ‘Brussels effect’ (the tendency of EU regulations such as GDPR to become de facto global standards), Crampton explained that Microsoft builds its models “trusted by design” with configurable defaults, allowing each jurisdiction to adjust controls to meet local legal and cultural requirements [75-79][80-90]. She added that open-weight model families, such as Microsoft’s Phi family of models, enable ecosystems to adapt technology without compromising sovereignty [91-93].
Peter Mattson of ML Commons shifted the conversation to reliability, arguing that the principal barrier to AI adoption is not capability but trustworthiness. He called for industrial-grade, repeatable benchmarks and described “federated evaluation” – exemplified by the MedPerf healthcare project – which tests models across disparate data sets while preserving privacy through confidential compute [106-124][135-137]. Mattson warned that turning experimental benchmarks into dependable, multilingual safety and security standards is a massive technical undertaking that must be sustained over time [128-136].
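The federated-evaluation pattern Mattson describes, in which raw data never leaves each site and only aggregate metrics are pooled, can be sketched roughly as follows. This is a toy illustration with hypothetical names, not the actual MedPerf implementation:

```python
# Toy sketch of federated evaluation: each data holder scores the model
# locally and shares only aggregate metrics, never the raw records.

def local_evaluate(model, records):
    """Run the model on one site's private records; return only counts."""
    correct = sum(1 for x, label in records if model(x) == label)
    return {"correct": correct, "total": len(records)}

def federated_accuracy(model, sites):
    """Pool per-site counts into a single global accuracy score."""
    totals = {"correct": 0, "total": 0}
    for records in sites:
        metrics = local_evaluate(model, records)  # raw data stays at the site
        totals["correct"] += metrics["correct"]
        totals["total"] += metrics["total"]
    return totals["correct"] / totals["total"]

# Two "jurisdictions" holding private labelled data, and a trivial model
model = lambda x: x >= 5
site_a = [(3, False), (7, True), (6, True)]
site_b = [(4, True), (9, True)]
print(federated_accuracy(model, [site_a, site_b]))  # 0.8
```

A real deployment such as MedPerf layers orchestration, confidential compute, and auditability on top of this basic pattern, so that sites can verify what is run against their data and regulators can trust the reported aggregates.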
Justin then introduced Harish, the Gates Foundation participant, noting a recent blog post he co-authored with Brad (“Brad’s speech yesterday… based upon a recent blog post you and Brad put out”) [70-73]. Harish outlined several practical concerns for the Global South: the need for edge-inference capabilities in low-connectivity settings such as healthcare and agriculture; worries about energy consumption and the importance of lower-parameter, low-energy models for sustainability; exploration of novel hardware architectures (e.g., “multi-parameter, multi-state compute capabilities”) as future enablers of edge AI; the centrality of open-source models because many governments cannot afford large proprietary offerings; the state-level policy variation in India (e.g., differing maternal-risk rules in Uttar Pradesh vs. Telangana) that AI tools must respect; and the broader social impact of creating jobs, avoiding a digital divide within countries, and ensuring AI benefits the bottom 50% of the population [150-180].
Dame Wendy Hall, Director of the Web Science Institute at the University of Southampton, broadened the discussion to AI metrology. She advocated for a new science of AI measurement, likening it to the National Physical Laboratory’s work on weather forecasting, and pointed to the UK’s Centre for AI Measurement and the AI Security Institute as institutional anchors for systematic trust metrics [290-299]. Hall also highlighted the importance of open-data governance, proposing cross-border data-sharing mechanisms and global data registries while acknowledging that not all data can be fully open [305-311]. She noted the conference size (“250,000 people here”) and described a “love-hate relationship” with the event [250-255]. When asked directly about the UK’s sovereign AI strategy, Hall declined to answer and shifted to broader commentary on AI hype [252-267].
Across the panel, several points of agreement emerged. All speakers concurred that effective governance of sharing mechanisms is essential for international AI collaboration [5][6-7][70-71]; that deep, multi-stakeholder partnerships are required to deliver the five-component strategy [70-74][72-74]; that systematic measurement, whether through AI metrology institutes or industrial-grade benchmarks, is vital for trustworthy AI [290-299][306-309][328-330]; that multilingual, culturally adapted AI is a prerequisite for global diffusion [54-61][124-126][192-199]; and that large-scale skilling and talent development underpin sustainable diffusion [46-53][8-12]. Both Crampton and Mattson stressed the role of open-source models and open benchmarks in lowering entry barriers and enabling local customisation [91-93][92-93].
Nevertheless, the panel revealed notable disagreements. Hall argued that while open data is valuable, privacy and sovereignty constraints mean that only “exchangeable, shareable” datasets, not fully open ones, should be circulated, and she called for global data registries [305-311]; Crampton, by contrast, presented data sharing as a core component of Microsoft’s five-component plan without foregrounding such limits [64-69]. A second tension arose between Crampton’s $50 billion private-sector investment model and Harish’s view that open-source models are a more affordable route for the Global South [30-33][150-180]. Finally, Hall’s proposal for AI metrology institutions differed from Mattson’s emphasis on industrial-grade benchmark development as the primary path to reliability [290-299][106-124]; Hall also unexpectedly declined to answer the direct question about the UK’s sovereign AI strategy [252-267].
Thought-provoking remarks punctuated the discussion. Garg’s warning that governance and talent, rather than raw compute, are the real bottlenecks reframed the debate [6-7]; Crampton’s concrete $50 bn, five-component roadmap gave the panel a tangible agenda [30-33]; Mattson’s claim that “reliability, not capability, is the real barrier” and his illustration of federated evaluation provided a clear technical solution [116-124][135-137]; Hall’s call for a new science of AI metrology linked measurement to interdisciplinary collaboration [290-299]; and Harish highlighted the practical challenges of edge inference, language diversity, energy consumption and the need for open-source accessibility in low-connectivity settings [150-180][216-220].
Key take-aways
– Global AI collaboration is hampered by complex governance, talent shortages and the inter-dependence of the AI stack [5][6-12].
– Microsoft’s $50 bn, five-component plan seeks to close the North-South AI gap through sovereign-aware infrastructure, massive skilling, multilingual data collection, local innovation and policy-oriented data sharing [28-33][34-45][46-53][54-69][70-74].
– Deep partnerships with governments, NGOs and the broader ecosystem are indispensable for realising each component [70-74][72-74].
– Trustworthy AI hinges on industrial-grade benchmarks, federated evaluation and emerging AI metrology institutions [106-124][290-299][306-309].
– Open data and open-source models can lower barriers but must be balanced against privacy and sovereignty concerns [91-93][305-311].
– Efficient, domain-specific, low-energy models are needed to make AI viable in low-resource environments [150-180].
– Inclusive development that addresses language, cultural norms, and gender and age representation is essential to avoid creating new digital divides [54-61][124-126][190-208].
The panel also identified unresolved issues: designing governance frameworks that reconcile conflicting national regulations while preserving interoperability; securing sustainable financing beyond private investment; delivering reliable edge AI in low-connectivity contexts; establishing concrete, multi-dimensional metrics that link technical performance to societal impact; and creating global data-registry and cross-border sharing mechanisms that respect privacy and sovereignty. Suggested compromises included configurable defaults in AI products, hybrid sovereign-cloud models, leveraging open-source families alongside private investment, matching corporate funds with public and venture-capital contributions, and employing federated evaluation with confidential compute to enable cross-jurisdictional benchmarking.
In closing, the participants reiterated that democratising trustworthy AI will require coordinated governance, sustained investment, robust measurement, multilingual and culturally aware technologies, and inclusive talent development. Carsten praised the panel’s collaborative spirit, thanked the speakers and noted a brief round of applause before ending the session [340-345][70-74][94-100][321-324]. The consensus, though tempered by differing views on data openness and financing models, points toward a shared commitment to build an AI ecosystem that is both globally interoperable and locally trustworthy.
Thank you so much, Dr. Garg. It really highlights one of the things about collaboration, and I’ll be talking to a number of the panelists about that. I’ve been so impressed this week at how much people are really coming together for the community. This is a much bigger summit than we’ve had previously, with many more people, really opening it up to everyone. But if I can just ask you one thing: the working group that you’re leading is, I think, excellent, and it’s going to be really important. What do you see as the biggest challenge around that? With the vast experience you’ve got of bringing people together, do you think there are any particular challenges in coordinating that international effort?
Of course, there would be a number of challenges, but, as I mentioned, one doesn’t need to really control every layer of the resources that is there. While sharing foundational compute resources would be a major challenge, I think a bigger challenge might be managing the interdependence of the AI ecosystem, because it spans hardware, software, and the protocols, so to say, or the ethics around that. So I think one of the biggest challenges would be the governance around these sharing mechanisms and protocols, and managing the framework. And the other would be the talent and the institutional capability that is required. The infrastructure can be acquired, but expertise has to be developed.
And I think that’s critical to ensure that if you want to democratize, the Global South is integral to that. And, you know, we don’t need to focus so much on whether each country owns each layer of the AI stack, but on how one can do that: what is the capability and confidence in the systems that manage it, and whether we have the required methods to ensure that it takes care of the priorities and the values that each country wants to push forward?
Thank you so much. And I agree with you. It’s a big challenge, but I’m glad that you’re there to take that forward. And this week, you may have seen the photograph of Modi here with many of the leaders in tech. And it’s a great pleasure that one of the large organizations in the private sector, Microsoft, has got representation here. So I come to you, Natasha. Natasha Crampton is Microsoft’s first Chief Responsible AI Officer and leads the Office of Responsible AI. It was interesting to hear earlier this week how long that office has been going. She’s putting Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach to responsible AI. The office also collaborates with internal and external stakeholders to shape new laws, norms, and standards to help ensure that the promise of AI technologies is realized for the benefit of all.
As I said, that’s been a key theme. I saw Brad speak yesterday. It was a fantastic speech, and it was based on a recent blog post that you and Brad put out just a couple of days ago. So can you tell us a little bit about that, for people who haven’t had the chance to absorb it, please?
Sure. Thank you, Justin, and it’s a pleasure to be here with the panel and the audience today. Our announcement earlier in the week was about how Microsoft is contributing to bringing AI to the Global South, and the headline that you might have seen is that we’re on track to spend 50 billion US dollars in order to do that by the end of the decade. What we’re seeing from the diffusion data that we have access to, and that we’ve already published publicly, is that there is an urgent need to focus on the diffusion of AI to the Global South, and on what it’s going to take to do that broadly and beneficially, because we are already seeing that diffusion in the global north is roughly double what we see in the global south.
And so for Microsoft, as a private sector player here, we think we have a role to play in helping to close that gap, and we see it as being centred on five different components. First, as Dr. Garg mentioned initially, we need to help build out the infrastructure that is needed for broad AI diffusion. So this is both investments in data centres to power AI applications and investments in connectivity as well. There are real electricity needs that need to be met. We’re trying to do that with an eye towards the sovereignty of countries around the world. We realise that the world is a fragmented place, and so we design our data centres, and also the services that run on top of them, with a recognition that there needs to be real agency for the countries hosting those data centres.
And so we have a range of different controls that we put into our data centres, which include sovereignty controls and public clouds. Sometimes we build private clouds. But most importantly, it’s all built on a foundation of collaborating with our government partners around the world. The scale of the infrastructure investment that’s needed is just so great; it’s really hard to see how we’ll achieve what we need to without significant private sector investment as well as funding from a range of different sources: governments, venture capitalists and others. So the first limb is all about infrastructure. The second limb is all about skilling. What we’ve learnt from the history of diffusion of other general purpose technologies, like electricity, for example, is that the countries that succeed in these really transformative economic moments are not actually the countries that necessarily invent the new general purpose technology.
They’re the countries that diffuse and adopt that technology fastest. And if you look back at history, skilling turns out to be one of the major unlocks to that adoption and broad diffusion. So, as I said, we’ve made a range of skilling announcements. One that I’m particularly energised by myself is a very specific one focused on educating educators to help them with an AI-driven educational future. And of course, when you teach teachers, you’re teaching students, and therefore the workforce of the future as well. So we committed to teach AI-specific skills to 2 million Indian teachers in partnership, of course, with Indian national standards and training institutions, which is an exciting thing to me to support the future.
The third limb is all about investments in multilingual and multicultural AI. You know, AI is no good to you if it does not work in the language that you speak and the culture in which you use the system. So we’ve been pleased to collaborate with Peter Mattson from ML Commons on an expansion of some safety benchmarks that ML Commons has played a key role in standing up, to represent Hindi, Tamil, Malay, Japanese, and Korean. But we’re working upstream of testing and evaluation as well. So we’re pleased to announce a Lingua Africa initiative where we are working with local communities, in partnership with the Gates Foundation and others, to really make sure that we’re collecting lots of that really rich local data with and for communities.
All of that data is not well represented on the internet, and spoken languages in particular require that careful collection. The fourth limb is all about supporting local innovation. I think it’s critically important that as the private sector we really deeply understand that AI will only be meaningful in people’s lives if it’s actually solving the local problems that matter to them. So we announced some initiatives here in India and further afield that are designed to really support that local innovation. Last, we announced, as part of the New Delhi Frontier AI commitments that several leading Indian AI companies and frontier AI companies from around the world signed on to yesterday, that we’re going to be contributing our data on what we can see about adoption and usage of AI in the economy into some central projects.
Including one led by the World Bank, so that policy makers are in a good position to understand how AI is being adopted in the economy. Where are the places where it’s going faster than expected? Where are the places where it’s going slower? I think that kind of data is incredibly useful for policy making because it allows you to spot those places where you might need a skilling intervention or an infrastructure intervention.
That was fantastic. And if you ever want to know about really believing in something, having such a complex blog and then just reeling off the five pillars, and that really just shows that commitment, I think, that we’re seeing from Microsoft taking that leading role. And actually, collaboration has been, since Brad’s presidency really, has been one of the things that he really encouraged about saying, look, we’ve got to work together.
Absolutely. I mean, not one of those five limbs is possible without deep partnership, and the coordination of those five pillars is really important. Building those partnerships and deeply investing in them over time is really what’s going to give us the outsized impact here.
And if we think about this: because Microsoft is a global corporation, you’ve got lots of countries, each with, just as Dr. Garg said, their own customisations. They’ve got their own local laws and regulations. And there’s something called the Brussels effect around GDPR, for example, which went pretty global, but that’s not the case for AI. How do you think you manage that challenge of trying to make sure that it’s broad enough but focused on the individual needs of nations? Have you come across that challenge?
Yes, that is part of what I work on day in, day out at Microsoft, because part of my role is working very closely with our product teams to make sure that we are building our products and our models in a way that’s trusted and trustworthy by design. And so we are building products and technologies that we aim to share with the world. And it is absolutely true that not every part of the world has the same rules or expectations. And part of what we need to do is to make sure that we’re building technology in a way that has enough controls and choices that people can make, downstream of what we choose to do at Microsoft, to apply that technology in their own context.
So we ourselves do have a point of view about how we want our technology to show up in the world. So, you know, we do think carefully about if we’re making available a service that’s got some configurable controls, we do think carefully about what we think the default should be. But we also really do recognize… the need for that agency, and we do deeply understand that not every part of the world is homogenous. I think it’s, you know, here in India, it’s just a beautiful place to recognize the sort of linguistic and cultural diversity of the world. Quite honestly, if we don’t build technology that can be easily adapted and applied in people’s local contexts with their values, with their laws, we’re just missing the opportunity to, you know, have our technology reach the world.
So there are complex challenges. Sometimes there are direct conflicts between what one jurisdiction wants and what another jurisdiction has declared as a matter of law. They can be worked through, and this is partly why you also need a great partner ecosystem, right? Being able to make models available open source or as open weights, which Microsoft has long done, for example, with our Phi family of models, is another way of empowering the ecosystem to adapt and build based on that.
Thank you so much. And you just mentioned ML Commons, and you touched on being culturally sensitive. It’s interesting: there is a report that’s been released by ML Commons this week on robust and defensible benchmarks, and part of that was some great work from the Singaporean agency IMDA, which found that the response from an AI has to be culturally sensitive. And that’s the point that you made. I think culture is important because what is seen as acceptable in one culture may not be in another. So that brings me nicely to Dr. Peter Mattson, who is the president of ML Commons. He’s a senior staff engineer at Google; he founded ML Commons himself and was previously the head of the programming systems and applications group at NVIDIA.
So on that ML Commons, I think it’s done some great work, as we’ve heard. It’s played a major role in benchmarking performance and efficiency of AI. How do you see that open benchmarks can contribute to building sovereign capabilities, Peter?
I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specific. And the broad context I want to start in is: why is trust and reliability so vital for AI? AI has tremendous potential to change everything we do. But in order for it to do that, people need to feel comfortable adopting it. And we’re all smart; we don’t adopt things we don’t trust. You don’t give them your banking information. You don’t give them your business information. You don’t give them your medical information, or trust what they say or do about it, if they’re not reliable. And so the question becomes: how do we make AI reliable?
Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it correct? Is it secure? Is it safe all the time? And if we can make AI truly reliable, the potential for benefits to everyone around the world, and frankly, the potential for businesses and markets, is fantastic. But the way that we drive that is with metrics, is with evaluations. AI is an incredibly complex black-box system. So to make it better, you need to have common yardsticks that you use to measure progress. And we need those common yardsticks applied widely for all aspects of reliability. So you alluded to the work on security with IMDA. Natasha alluded to some of the work around multilingual safety that we’re collaborating on with Microsoft and with folks at Google as well.
These are examples of what’s necessary to drive that push towards reliability. But they’re very technically hard. This is something that I don’t think people appreciate enough. They see someone publish a paper: we made a benchmark for something, right? They made a data set and they did it once. But there’s a tremendous amount of technology needed to get to industrial-quality benchmarking, which is what we need for industrial-level reliability. For one, we need to take the experiments we’re doing in multilingual benchmarking and turn those into a dependable framework that empowers people around the world to produce very high-quality multilingual safety and security benchmarks, and then to maintain and evolve them over time, right?
If ML Commons can help lift the resources there so that people can make the choices about language and culture where they have expertise, without having to grapple with the really hard technical questions of how you do AI benchmarking, we hope that could be very empowering. An example from the healthcare space: we have a MedPerf project that uses what we call federated evaluation, where it sends models out to different facilities, tests them on a small bit of data, and accumulates the results. This is how you do healthcare benchmarking for reliability, for correctness, against very, very diverse data sets, potentially around the world. It’s technology like that, dependable industrial-scale multilingual safety and security benchmarking, or medical benchmarking working with data sets across disparate legal systems through technology like federated evaluation and confidential compute, that we believe really unlocks that future of high-reliability systems.
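The federated-evaluation pattern Mattson describes, where the model travels to each site, is scored against that site’s private data locally, and only aggregate metrics are pooled, can be sketched in a few lines. This is a loose illustration, not MedPerf’s actual API; the site names, function names, and toy data here are invented for the example:

```python
# Minimal sketch of federated evaluation: score a model at each site
# on that site's private data, and pool only the aggregate counts.
from dataclasses import dataclass

@dataclass
class SiteResult:
    site: str
    correct: int
    total: int

def evaluate_at_site(site_name, model, local_examples):
    """Run the model on one facility's private data; return only counts."""
    correct = sum(1 for x, y in local_examples if model(x) == y)
    return SiteResult(site_name, correct, len(local_examples))

def federated_accuracy(model, sites):
    """Accumulate per-site results into one global accuracy figure.

    `sites` maps a site name to its (input, label) pairs; in a real
    deployment the raw pairs would never be pooled centrally."""
    results = [evaluate_at_site(name, model, data)
               for name, data in sites.items()]
    total = sum(r.total for r in results)
    return sum(r.correct for r in results) / total, results

# Toy demonstration with a trivial "model" and two synthetic sites.
model = lambda x: x % 2                      # predicts parity
sites = {
    "hospital_a": [(1, 1), (2, 0), (3, 1)],
    "hospital_b": [(4, 0), (5, 0)],          # one deliberately mislabeled
}
accuracy, per_site = federated_accuracy(model, sites)
print(round(accuracy, 2))  # 0.8: 4 of 5 predictions match the labels
```

Real systems such as MedPerf add the pieces this sketch omits: secure model distribution, confidential compute at each site, and auditability of what leaves the facility.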
That’s excellent. Thank you. And the repeated use of that term: reliable. So what we need is reliable LLMs, but we also need reliable benchmarks, as you said. (Yes, yes.) And I think this point about healthcare is really interesting, because, you mentioned industrial scale as well, we need a process that can be trusted. And that’s one thing that I found working with ML Commons: how we all come together, people from industry, many academics around the world. Just look at any of the papers released on the website, and how many authors and how many years of expertise are donated to that effort. (Yes, yes.) Where do you see, Peter, the next sort of big movements for ML Commons?
Because these yardsticks will change. You’ve done healthcare. Where do you think the important area for benchmarking will be in the near future?
I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we are getting very in-demand experts from some of the leading companies to contribute to this work. People really care about doing AI right; that is unarguable if you look at, as you say, the author list. What we need to do is leverage that expertise to scale. It’s not enough to do a benchmark and publish a paper; we need to make that benchmark available to the industry. And today, most benchmarking is single prompt-response: you ask a question, you look at the answer, you see whether it’s safe or secure or correct.
But the future, as everyone knows, is multi-turn and agentic. And so we need to drive wider and deeper at the same time. There is tremendous demand for what we do. It is tremendously resource-intensive, and…
You mentioned the work of Google, so I’m going to come to Dr. Aya from the Gates Foundation in a moment. We were hoping to have Vint Cerf, who some of you may know. I know, Wendy, you know him very well. But he doesn’t travel so much, does he? No, that’s the thing; he’s got some issue and he couldn’t travel. Our next panelist works to improve public health and economic development. He builds strategic partnerships between Indian researchers, you’re based over here in India, global partners, and Gates Foundation teams, in areas including vaccine-preventable diseases, disease surveillance and modelling. So thank you for joining us today. We’ve heard a little bit, of course, about how India has really pushed forward with its digital public infrastructure.
And we heard in the last session, which Dr. Garg was in, from Sanjay Jain, your colleague, about MOSIP, which is modelled on Aadhaar in some ways and is an open-source initiative. So what I’d like to ask you is: where countries lack foundational infrastructure, what role do philanthropic organisations like the Gates Foundation play in enabling access to trustworthy AI capabilities?
Thank you so much for inviting me. This is obviously a very complex question, not fully settled, I will say for sure. Most of my experience in this field is in India. So first off, I’d like to start by saying it’s great that India is hosting this summit. It’s fantastic, and it’s showcasing a lot of the work that the country has done, the capability and the use cases that we are very closely supporting. I think the trustworthiness question, and I would say sustainability as well is another question that we have to think about, comes down to what sort of models we need to have. Are they large centralized models?
Or are they dispersed, decentralized models on the edge, which we may need in countries with poor connectivity? So trustworthiness has many aspects to it. Is it going to be ready to work when you want it to work? Again, a lot of my work is in health and agriculture and things like that. If you are a frontline worker in primary care, how do you make sure that they can make inferences, if needed, on the edge? If you are a health-system person and you want to improve the working of a health system, making sure the right experts are in the right facility, the right medicines are there, and patients are taken care of, there is a great opportunity to make this very high quality. But again, the question becomes: how do you access the compute? How quickly can inferences come? How easy is it to prompt? All of this matters, because if it doesn’t work well, then you lose trust.
That’s the thing: it just doesn’t work. The next-level question is language. I think Dr. Garg talked about it, the whole Bhashini project in India, and there are similar projects that we’ve been involved in, and there’s been a lot of debate even within the foundation as to which models can perform well on language, and which systems can interpret something super complex. I think we heard from the other speakers about how complex this is, what works well. So trustworthiness will partly come from how systems respond and the lived experience, in terms of simple things like: is it accessible? Is it the right language? Is it relevant? India is a continent on its own; between different states, the health system and approaches are often different, based on local policies.
How does it work in terms of policy in a particular state? One thing I’m particularly familiar with is pregnancy risk stratification. We talk a lot about how to reduce maternal mortality, infant mortality, stillbirth. The rules in Uttar Pradesh, for example, may be different from the rules in Telangana. If you have a tool that supports frontline workers in understanding and improving identification of risk for pregnant mothers, how do you make sure that it works in that context? So this context is important. I think trust has all of these things built into it. I’ll also talk a little bit about sustainability questions. Sustainability also requires these kinds of questions to be answered well.
What’s the energy consumption? Are there simpler, lower-parameter, lower-energy-consuming models, rather than the giant models? To me, that’s a core question. And it’s nice to know that there are researchers in the country who are thinking about that. Beyond that, can compute hardware itself look different, beyond digital, say? I saw researchers recently looking at multi-parameter, multi-state compute capabilities, and that was really fascinating. I just saw it two weeks ago because I was prepping for a bunch of meetings. Can those be great opportunities, maybe further in the future, to improve the likelihood of edge computing and edge inferences? And then, I think finally, open source.
I think open source is going to be, in my mind, a critical aspect of it. We’ll have to see how far the open-source movement gains traction here, because many governments in the global south may not be able to afford the large amounts of money that may be needed for a long period of time. How do you do these use cases well? So that, I think, is going to be another aspect that allows for adoption and trust at the highest levels. Again, I’m talking about the bottom 50% of the pyramid. The top 10% of the pyramid will do what they have to do. But ultimately, to build trust, you need to get to the bottom 50% of the pyramid.
And so there are different, in quotes, “markets” here as well: people who can pay at different levels. Even within a country like India, obviously there are multiple different levels. How can you make sure that this thing can reach everybody and doesn’t create a divide, not just between global north and global south, but even within countries? You want to make sure that this doesn’t create a divide. And that’s, I think, another important part of building societal trust. The last point, which I think is also important, is: what is the impact on society of this technology? This is going to be an important one as well. Are you able to create jobs, employment? And there’s a meta question about how does…
Thank you so much. And we’ll come back to some of those points in a minute if I may, Harish. Because, as you may have seen, we’ve just been joined by Dame Professor Wendy Hall, someone I’ve…
It’s Professor Dame, but I don’t mind. Carsten, you should know that. You’re a Brit.
I’m not a Dame. But if you were a Sir, it’s always Professor Sir. And if I keep being nice to you, maybe you’ll put a word in for me. So, I’ve known Wendy for a long time. She’s Regius Professor of Computer Science and Associate Vice President, International Engagement, at the University of Southampton, where she’s also Director of Web Science. There are so many accolades: she’s been a Dame Commander since 2009, is a Fellow of the Royal Society, the Royal Academy of Engineering and the ACM, and was President of several of those organisations, including the British Computer Society, the BCS. And most notably she was the co-chair of the UK government’s AI review and a member of the AI Council.
We’ve also talked about skills, actually, Wendy. We were both on, I think you were probably leading it, but I was just a member of it, the review with Nigel Shadbolt into computer science, if you remember.
No, he did that one. That was Professor Sir Nigel. No, I didn’t.
Okay, okay. Anyway, you’ve been involved in advising many governments around the world and could you tell us a little bit about the UK’s approach to developing sovereign AI capabilities?
No, I’m not going to answer that question, because this is a trustworthy panel, right? And I want to talk about trustworthiness. Okay. And that’s why I was asking what the panel was about, because I’m doing three panels this morning and I’ve got a lunch date to go to, an important one. So I was asking Peter what the panel was about, and he said it’s about trustworthy AI, right? Yeah. So I want to say, if you don’t mind, Carsten, I could tell you what the UK is doing, but it’s very parochial. I’m very excited that this conference has been in India, but I have a love-hate relationship with it. It’s been a really difficult conference to navigate: 250,000 people here, but you end up talking to rooms of tens of people. Okay, it’s out on YouTube. Does AI need this sort of jamboree for the future? I don’t know. But it is fabulous to have the spotlight on India. I’m a member of the MOSIP…
Of course you are.
I’ve been involved; I’m in awe of what India has done with Aadhaar and the digital public infrastructure it has built, and I want to see how that works. I would love to see how that works in the UK, but it doesn’t translate. It works in developing countries; it’s much harder to translate to an old world that has long-established rules and regs and ways of working. Anyway, I’m really excited it’s here. And it was fabulous also to see the young people here, because in the UK, and I think it’s probably true in most of Europe and the US, people are really worried about AI. They’re scared, because that’s what they get: scaremongering. They’re scared it’s going to attack them, scared it’s going to wipe the world out, scared they’re going to lose their jobs. Here the kids are going: wow, what an opportunity, right? And for India, that’s been an eye-opener for me. I’ve been working in India long enough to know; I helped introduce the web into India, right, the web and the internet and the website work I’ve done here, and I know what you can do with the power of that technology for people who can’t read and write and live in the rural areas. It’s just amazing what it does. Add AI on top of that. They’re not worried about the deepfakes yet; what they want is to get the information to their people in the fields, the farmers in the fields in rural India. Deepfakes, I don’t know, but that’s not what they’re worried about at the moment. So it has been fabulous. And I love the slogan here: in India, AI is all-inclusive. But it isn’t; AI is missing out 50% of the population, right? This technology, and I’ve been fighting this sort of thing all my career, is totally male-dominated, totally male-dominated. And I’m very sorry, but the way we talk about women’s safety: women aren’t involved in these discussions, right?
Children aren’t involved in these discussions. Fifty per cent of us are women, and we’re not involved in the discussions about keeping us safe. Actually, we need to keep men safe too, right? Men suffer from deepfakes as much as women do. Well, maybe someone’s not agreeing with that; it could be disproportionately hitting women and children, but I don’t want to exclude the men here. So I have become even more passionate. I talked about it in my keynote on Wednesday, not in the talk itself but in the conversation: it’s so important that this is really all-inclusive, and that women are involved at the top level in the decision-making about what we do. Take, for example, the Australian experiment to stop kids under 16 using social media.
Now that is an experiment. Everything about this world is a global experiment, and people are doing different bits of it. The web was like that. The web itself, from the genius that is Tim Berners-Lee, was a worldwide experiment. There are many different ways that you could have built a hypermedia network on top of the internet. Boy, I tried to do one myself. And it was better than the web. But what Tim did was give it away, make it fantastic, make it open. And actually that led to the rise of the use of it. But it’s also left us with the stuff we’ve got today. Because anyone can do anything on the web, bad people can do bad things.
And bad things happen unintentionally; “the unintended consequences” is what I called my talk on Wednesday. So with this ban on social media, we’ve got to be able to study the effects. Now, I know the Australians are. We heard Macron say that in France it’s going to be under 15. Keir Starmer’s saying under 16, but he changes his mind on a penny, so it’ll probably change; that’s a joke for the Brits. I think Spain has said under 16. In the US, of course, Trump says no, we won’t need to worry about safety. I made this joke in the other panel: he’s the man that drank bleach during COVID. But the point is, we have to study.
And people say, oh, it’s all moving so fast. The alpha males say that, right? The alpha males say: it’s all moving so fast, and I’m bigger, better, faster, and cheaper than you are. All that sort of alpha-male stuff. We have to think about how we actually measure the effects of what we’re doing. So, two good things that have come out of the UK; this is my last point. Just this last month, the National Physical Laboratory (I’m their AI advisor, but that’s beside the point), which is like the UK equivalent of NIST, they do our metrology. It’s a word I’ve learnt to say very well: metrology, not meteorology. Weather forecasting is meteorology, studying the weather; if we can do that, we can do flipping AI, because that’s complicated too. The thing about AI is, of course, it’s got people in it, not just physical objects doing things in systems, so it’s harder in that sense. But the National Physical Laboratory announced two weeks ago, backed by the UK government, the Centre for AI Measurement. And the UK AI Security Institute, which was founded by Rishi Sunak at Bletchley Park, is part of the network of security institutes.
And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network being a network of safety institutes; why would we want to be safe? Sorry, joke. But they’ve renamed it the Network for AI Measurement and Evaluation. Now, this is brilliant. Brilliant. So with my ACM hat on, and everything else I can do in the dying embers of my career (no, it’s not dying yet), the aim is to start a science of AI that’s about AI metrology. But what we’re doing, of course, is measuring the effects of social machines, which is difficult. The social scientists have taught me how you have to gather the data.
How do you gather the evidence? And we can do it. There is time to do this; the world is not going to end at the end of this year because of AI. Other things, yes, but not because of AI. So that’s where I want to leave you the thought: I think if we can develop this new science, put in all the compute power and the best brains from social science and computer science and psychology and all the other disciplines we need, the law, everything, we can really start to think about how we measure trust. One of the metrics in AI metrology will be the trust factor. I leave it there. Thank you very much. A round of applause, please. Thank you. And I’m ever so sorry; you can ask me one thing, but I’ve got to go in two minutes.
I’ll ask you one thing very briefly then. Open data: you’ve been a proponent of it, right, with Tim and Nigel?
Yeah, yeah.
So I just wanted to ask: openness and collaboration are important, and we’ve talked about open source. What role do you think open data has in trustworthiness?
Well, there are two things about that. The open-data movement has been really important, but not all data can be open; it can’t be. And you can have data that is exchangeable and shareable that won’t necessarily be open. So another thing I’m on is the UN CSTD, the Commission on Science and Technology for Development, data governance working group, and I could tell you in much more detail about that. For me, we ignore data governance when we talk about AI governance at our peril, and we’ve really got to build on that. From the UN report we did, the General Assembly accepted all the points we recommended, and they’re being implemented. That’s the other panel I should have been on today; there’s a UN panel. They accepted everything that we recommended: the global scientific panel, the global dialogue, the global fund. And the Secretary-General yesterday asked for three billion, which is not very much, you know, for a global fund to develop AI in the global south. But our recommendations on data governance were not accepted, because the countries would not vote for them; it’s so difficult, it’s so complicated. So there’s another thing I’m working hard on: how can we actually do cross-border data sharing? How do we get the data flows so we can actually share data sets? And another thing we need to do, which is something I want to do, is tell people where the data is. We need data repositories, or at least registries, around the world, so researchers know where the data is and can do this study. I’ll leave you with that; that’s something else that was on my agenda.
Thank you so much, Wendy. Yes, thank you. Thanks so much. I’m going to go to each of the panelists for just 30 seconds. I’ll start with Dr. Garg, then Harish, then Natasha, and then Peter. Just one comment for the audience about how we really push this democratizing of AI and trustworthiness.
Yes. One issue which I mentioned in the earlier panel is that we perhaps need to give a lot more attention to the models, because more efficient models will help reduce the requirement for compute and energy, which is among the biggest costs presently. And having models which are more domain-specific would also enable better usage of those models and widen diffusion. Thank you so much.
Harish.
Just very quickly: I think real-world evidence is going to be very important in terms of whether it is actually useful. I think we all assume it’s useful, but I’m talking about the social and development sector. I can imagine so many ways it’s useful, but it would be good to make sure we build evidence on how it can be trusted and, of course, be useful, and to metricize this a bit more. Thank you.
Thank you. Natasha?

Well, I think one of the points that has come out clearly in this discussion is that trustworthy AI diffusion is not going to just happen by itself. We have to make choices that lead to that outcome. And so for that reason, I am excited about these attempts at measurement in multiple dimensions: measurement of the systems, but also measurement of the changes in our economy, so that we can then start to see whether the interventions that we’re putting in place are actually having the desired effect. Because we get to write this future, but we have to actively guide it. And I think data in multiple dimensions is really important.

Thank you. And the final word on measurement should go to Peter. So, Peter.

I’m going to echo the obvious point, which is that measurement is tremendously important. And then the hidden point, which is that the scope of measurement is vast. And so we need to get really good at it, both in terms of quality and the cost-efficiency with which we can implement it and with which we can evolve it.

Thank you. Could you please give a round of applause to an excellent panel. Thank you so much.
Event“Justin Carsten served as the moderator/host of the panel discussion.”
The knowledge base lists Justin Carsten as the moderator/host of the session [S2].
“Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South.”
Microsoft’s announcement referenced in the knowledge base states the company is on pace to spend $50 billion by the end of this year, not by the end of the decade [S39].
“The first component involves building data‑centre and connectivity infrastructure that respects national sovereignty, offering public‑cloud and private‑cloud options with “sovereignty controls”.”
The knowledge base discusses Microsoft’s sovereign-cloud approach and the importance of data-centre sovereignty for states, providing additional detail on how such controls are being designed [S26] and [S120].
The panel shows strong convergence on several fronts: the need for robust governance and coordination mechanisms; the centrality of deep, multi‑stakeholder partnerships; the importance of systematic measurement and benchmarking; the necessity of multilingual, culturally aware AI; and the role of skilling, efficient models, and open data/open‑source approaches. These shared positions cut across private‑sector, public‑sector, academic and civil‑society perspectives.
High consensus across most themes, indicating a shared understanding that trustworthy AI diffusion requires coordinated governance, partnership, measurement, and contextual adaptation. This broad agreement suggests that future policy and investment initiatives are likely to find common ground, facilitating collaborative action toward equitable AI deployment.
The panel shows broad consensus on the importance of trustworthy AI diffusion, yet diverges on the means to achieve it—ranging from governance and talent development, large private‑sector investments, open‑source models, benchmarking, to AI metrology. Disagreements are most pronounced around data openness, funding models, and measurement strategies, while an unexpected deviation occurs when Wendy Hall sidesteps a direct question about sovereign AI policy.
Moderate disagreement: while all participants share the same overarching goal, they propose distinct pathways, leading to substantive but not antagonistic conflicts. The implications suggest that coordinated policy will need to reconcile these differing approaches—balancing governance, investment, open‑source, and measurement frameworks—to build a cohesive global AI strategy.
The discussion was shaped by a series of pivotal insights that moved it from a broad celebration of collaboration to a focused examination of the concrete levers needed for trustworthy AI diffusion. Dr. Garg’s emphasis on governance and talent reframed the problem beyond hardware. Natasha’s five‑pillar plan supplied a tangible corporate commitment, which Peter then grounded in the technical necessity of reliable, industrial‑scale benchmarks. Wendy’s call for AI metrology and data‑governance frameworks broadened the scope to include interdisciplinary measurement and inclusivity, while Harish’s on‑the‑ground concerns about edge use‑cases and sustainability added practical urgency. These comments collectively redirected the panel toward actionable strategies—standardized metrics, efficient models, multilingual support, and open yet controlled data—thereby deepening the conversation and setting a roadmap for future collaboration.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.