Building the Next Wave of AI: Responsible Frameworks & Standards

20 Feb 2026 13:00h - 14:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by stressing that AI must be safe, responsible, ethical, inclusive and explainable, and that effective safety benchmarks should arise from real-world deployment rather than isolated research labs, be co-created with industry and academia, and remain living infrastructure that evolves with technology [4-6][8-13][24-27]. The moderator introduced ICOM’s RAISE Index as a pioneering framework for quantifying AI safety and responsibility across development and deployment, accessible via a QR code for testing AI solutions [13-15][38-42]. He also highlighted the Telangana Data Exchange, a sandboxed public-data platform that lets startups validate their models against actual datasets before launch [16-19][20-23]. Emphasising India’s unique position, he argued that the country’s multilingual, large-scale environment gives it a competitive edge in shaping global AI standards, and noted that the RAISE Index harmonises requirements from the EU AI Act, NIST, Singapore and UK frameworks [29-35][39-42].


Kamesh then invited panelists, asking Arundhati Bhattacharya how a global enterprise balances rapid AI innovation with trust and accountability [50-57]. Arundhati explained that Salesforce created a “humane and ethical use of technology” office in 2014 to review every product, stressing that AI’s misuse requires a global compact and transparent information exchange [58-66][67-68]. She further stated that trust is Salesforce’s top value and described a “TrustLayer” that safeguards against data leakage, bias, toxicity and hallucination [112-118][132-138].


Karna argued that responsible AI should be productised, embedding governance guardrails and a human-in-the-loop directly into AI agents to enable mass adoption [96-102]. Ankush added that sovereign AI demands on-prem or edge solutions giving clients full data control, and that trust is built through explainability, privacy and purpose-driven models [108-110]. He suggested delivering compliance as reusable APIs so enterprises can select required regulations without burdening probabilistic AI systems [151-158][159-162]. Karna also advocated default-protective settings for user data and making explainability a core API output to support decision-making [165-172].


The session concluded with Kazim Rizvi urging participants to adopt the RAISE Index and embed responsible AI by design, emphasizing the shared responsibility of technologists, policymakers and startups to ensure AI benefits society without unintended harms [202-209].


Keypoints

Major discussion points


Establishing practical, co-created safety benchmarks for AI – The moderator stressed that benchmarks must emerge from real-world deployment, be co-created with industry, academia and government, and remain “living infrastructure” that evolves with AI capabilities. The RAISE Index (India’s first quantitative framework) and the Telangana Data Exchange sandbox were presented as concrete tools to validate and continuously improve these benchmarks [4-12][13-15][16-24][27-28][30-34].


Global collaboration and a “trust compact” to curb misuse – Arundhati Bhattacharya described Salesforce’s early creation of a “humane and ethical use” office and argued that preventing bad-actor exploitation of AI requires a transparent, worldwide agreement and shared standards [58-66][67-71][112-118].


Embedding responsible AI into startup and MSME products – Karna Chokshi highlighted the need to bake governance, observability and human-in-the-loop controls directly into the AI product (rather than as a separate 200-page PDF), turning compliance into reusable APIs and making responsible AI a value proposition that drives mass adoption [96-102][151-168][174-177].


Data sovereignty, trust layers and explainability for enterprise clients – Ankush Sabharwal explained that large organisations demand full control over data (on-premise or edge solutions) and that trust is built through strict access controls, bias/toxicity filters and explainability mechanisms; Salesforce echoed this focus on a “TrustLayer” that safeguards data and model outputs [108-110][112-119][130-137].


Choosing between large-scale LLMs and smaller, task-specific models – When asked about the rise of small language models (SLMs), Karna noted that enterprises often start with powerful LLMs for speed, then migrate to SLMs for lower latency and cost once the use-case is clarified [191-193].


Overall purpose / goal of the discussion


The panel was convened to close the Global AI Summit by crystallising how the AI community (governments, innovation hubs, academia, large firms and startups) can jointly develop, benchmark and continuously refine safe, ethical and inclusive AI systems, and to promote concrete tools (e.g., the RAISE Index, Telangana Data Exchange) that operationalise responsible AI by design [4-7][28-30][202-208].


Overall tone and its evolution


The conversation began with an optimistic, forward-looking tone emphasizing collaboration and the promise of responsible AI [4][48]. As the dialogue progressed, speakers adopted a more pragmatic, problem-solving tone, detailing concrete technical challenges (governance integration, data sovereignty, model selection) and practical solutions [96-102][108-110][191-193]. Throughout, the tone remained constructive and collegial, ending on a hopeful note encouraging continued ecosystem cooperation [84][190][209].


Speakers

Kamesh Shekar – Area of expertise: Artificial Intelligence & Emerging Tech; Role: Moderator of the panel, Youth Ambassador at The Internet Society; Title: Youth Ambassador, Moderator [S1][S2]


Karna Chokshi – Area of expertise: AI productization for startups; Role: Startup founder/CEO (voice-agent solutions)


Moderator – Role: Session moderator (opening remarks)


Arundhati Bhattacharya – Area of expertise: Responsible AI, AI ethics; Role: Executive at Salesforce, Global Enterprise Leader


Ankush Sabharwal – Area of expertise: AI infrastructure, sovereign AI solutions; Role: Leader of AI solutions company (Vada GPT appliance)


Kazim Rizvi – Area of expertise: AI policy & governance; Role: Founding Director of The Dialogue, Moderator; Title: Founding Director, Moderator [S11][S12]


Additional speakers:


Sarj


Fani


Sahish


Full session report: comprehensive analysis and detailed insights

The panel opened with the moderator emphasizing that the ultimate challenge of AI innovation is to ensure its impact is safe, responsible, ethical, inclusive and explainable, a goal that must be pursued holistically [4-6]. He argued that the week’s lessons highlight the need for benchmarks derived from real-world deployment rather than isolated research labs, and that governments, innovation hubs, academia and startups all share responsibility for shaping such standards [7-10]. Crucially, he stressed that benchmarks must be co-created with industry and academia and function as “living infrastructure” that evolves alongside AI capabilities [11-13][24-27].


To illustrate concrete tools for this vision, the moderator presented ICOM’s RAISE Index, described as the first quantitative framework that measures AI safety and responsibility across both development and deployment phases [13-15]. Attendees could scan a QR code on the screen to access the full framework and test their own AI solutions against it [14-15]. He added that the methodology is open and adaptable for other jurisdictions, enabling broader applicability [39-42]. He also highlighted the Telangana Data Exchange, a first-of-its-kind digital public infrastructure within the realm of AI that gives startups sandboxed access to government datasets for validating models against real data, use-cases and constraints before launch [16-23]. These initiatives embody the principle that benchmarks should be validated in situ and continuously refined.


The moderator then addressed three distinct points.


1. Practical benchmark validation – exemplified by the Telangana Data Exchange, which allows startups to test against real-world data [16-23].


2. India’s strategic advantage – he asked, “How is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI?” and answered that India’s multilingual, large-scale environment turns infrastructure constraints and massive scale into a competitive edge, offering a unique perspective for global AI standards [29-34].


3. Rapid-startup-friendly frameworks – noting that startups move at a fast pace, he called for benchmarks that are agile enough to keep up with their speed of innovation [96-102].


He positioned India’s context as a strategic advantage for shaping global AI standards [29-34]. He noted that most existing frameworks assume high-resource, homogeneous settings, whereas India operates under infrastructure constraints and massive scale, turning these challenges into a competitive edge [30-33]. The RAISE Index, he explained, harmonises requirements from the EU AI Act, the NIST AI Risk Management Framework, Singapore’s guidelines and the UK AI Assurance framework, offering a single portable assessment for organisations operating across jurisdictions [39-42][43]. He concluded by emphasising that the Index employs a phase-based assessment that keeps benchmarks relevant to a company’s maturity stage and must evolve continuously to keep pace with rapid AI advances [44-48].
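The phase-based idea can be illustrated with a toy scoring sketch: the same assessment criteria are weighted differently depending on a company’s maturity stage. The stages, criteria and weights below are invented purely for illustration; they are not the actual RAISE Index methodology.

```python
# Toy illustration of a phase-based assessment. Stages, criteria, and weights
# are hypothetical, NOT the real RAISE Index scoring scheme.

WEIGHTS = {
    # stage: weights for (data_governance, model_testing, deployment_monitoring)
    "prototype":  (0.6, 0.3, 0.1),   # early on, data practices matter most
    "pilot":      (0.3, 0.5, 0.2),
    "production": (0.2, 0.3, 0.5),   # in production, live monitoring dominates
}

def assess(stage: str, scores: tuple[float, float, float]) -> float:
    """Weighted readiness score in [0, 1] for the given maturity stage."""
    w = WEIGHTS[stage]
    return round(sum(wi * si for wi, si in zip(w, scores)), 3)

# The same raw scores yield different readiness depending on the stage assessed,
# which is the point of keeping benchmarks tied to maturity.
same_scores = (0.9, 0.5, 0.2)
early = assess("prototype", same_scores)
late = assess("production", same_scores)
```

The design point is that a static checklist would score both companies identically, while a phase-based scheme re-weights the same evidence as the organisation matures.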


When the panel began, Arundhati Bhattacharya recounted that Salesforce established an “Office for the Humane and Ethical Use of Technology” in 2014, which reviews every product and process before market release [58-61]. She argued that preventing misuse by bad actors requires a global compact and transparent information exchange, noting the proliferation of deep-fakes and the need for societal safeguards [65-71]. Trust, she said, is Salesforce’s number-one value, embodied in a TrustLayer that protects against data leakage, bias, toxicity and hallucination, and the company deliberately delayed its Copilot-like offering until this layer was robust [112-118][119-136][132-138].


Karna Chokshi shifted the focus to startups, insisting that responsible AI must be productised: governance, observability and human-in-the-loop controls should be baked into the core AI product rather than relegated to a lengthy PDF [96-102]. She described a design where guardrails are applied at the prompt, during tool-calling and at output, and where the human-in-the-loop is treated as a first-class feature, not a failure point [98-99]. By productising these safeguards, her company has enabled 30,000 organisations to deploy voice-agent interview tools within minutes, demonstrating mass-adoption potential [100-102]. She further advocated turning compliance into reusable APIs with sensible defaults, arguing that such infrastructure-level solutions would make governance scalable and encourage default-protective settings for user data [151-169][174-177].
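The guardrail lifecycle described here (checks at input, before each tool call, and at output, with escalation to a human as a first-class path rather than a failure) could be sketched roughly as follows. All names and the toy policy are hypothetical, not drawn from any real product.

```python
# Minimal sketch of lifecycle guardrails in an agent, assuming a toy
# keyword-based policy. Everything here is illustrative, not a real system.

BLOCKLIST = {"ssn", "password"}  # toy policy: flag text touching sensitive data

def passes_guardrail(text: str) -> bool:
    """Return True if text passes the toy guardrail policy."""
    return not any(term in text.lower() for term in BLOCKLIST)

def run_agent(prompt: str, plan_tool_call, generate) -> str:
    # Guardrail 1: screen the input before any reasoning happens.
    if not passes_guardrail(prompt):
        return "escalate:human"          # human-in-the-loop, not a failure
    # Guardrail 2: screen each proposed tool call (e.g. a CRM write)
    # before it executes.
    tool_call = plan_tool_call(prompt)
    if not passes_guardrail(tool_call):
        return "escalate:human"
    # Guardrail 3: screen the final output before it reaches the user.
    output = generate(prompt)
    return output if passes_guardrail(output) else "escalate:human"

# Toy stand-ins for the tool planner and the model.
result = run_agent(
    "schedule an interview for the candidate",
    plan_tool_call=lambda p: f"crm.write(note='{p}')",
    generate=lambda p: f"Done: {p}",
)
```

The structural point matches the panel’s argument: the checks wrap the agent loop itself rather than living in a separate compliance document, and the human escalation path is an ordinary return value of the product.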


Ankush Sabharwal highlighted the imperative of data sovereignty for high-stakes sectors such as defence and finance. He explained that clients demand full control over data, prompting the development of on-premise and edge AI appliances (e.g., the Vada GPT super-computer) that keep processing within the customer’s premises [108-110]. Trust, for his clients, is built on near-perfect accuracy (99.9%), rigorous bias and hallucination checks, and the ability to opt in to data use rather than defaulting to it [111-118][141-148][165-168]. He also noted that compliance can be addressed through software-level APIs or hardware-level data control, allowing organisations to select the regulations they need without over-burdening probabilistic AI systems [108-110].
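The idea of compliance delivered as selectable, reusable checks might look something like this minimal sketch, where an enterprise opts into only the regulations it needs. The regulation names and rules below are placeholders, not real rule sets or any vendor’s API.

```python
# Hypothetical sketch of "compliance as reusable APIs": each regulatory check is
# an independent, composable function, and clients select only what they need.

from typing import Callable

def gdpr_check(record: dict) -> list[str]:
    # Placeholder rule: require recorded user consent.
    return [] if record.get("consent") else ["gdpr: missing user consent"]

def residency_check(record: dict) -> list[str]:
    # Placeholder rule: data must stay in the home region ("in" = India here).
    return [] if record.get("region") == "in" else ["residency: data left home region"]

REGISTRY: dict[str, Callable[[dict], list[str]]] = {
    "gdpr": gdpr_check,
    "data_residency": residency_check,
}

def run_compliance(record: dict, selected: list[str]) -> list[str]:
    """Run only the regulations the client opted into; return all violations."""
    issues: list[str] = []
    for name in selected:
        issues.extend(REGISTRY[name](record))
    return issues

violations = run_compliance({"consent": True, "region": "us"}, ["gdpr", "data_residency"])
```

Because each check is an ordinary function in a registry, the same pattern works whether the checks run as software-level APIs in the cloud or inside an on-premise appliance, which is the design choice the panel contrasted.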


During the audience Q&A, a participant asked about the emerging small language models (SLMs) versus large language models (LLMs). Karna responded that enterprises typically start with powerful LLMs to accelerate value creation, then transition to SLMs when latency, cost or data-sensitivity considerations become paramount [191-193].


The panel converged on three recurring themes: (1) trust as the foundational value of AI, (2) the necessity of embedding governance, observability and built-in trust layers directly into AI products, and (3) the promotion of the RAISE Index as a unifying, iterative assessment tool [12][24-27][151-169][202-208]. Different perspectives were offered on how trust is delivered (cloud-native trust layers versus on-premise sovereign appliances) and how compliance might be realised (software-level APIs versus hardware-level data control) [112-118][108-110][151-169].


In closing, Kazim Rizvi thanked the participants, reiterated that the RAISE Index represents India’s first responsible-AI readiness tool, and urged the audience to adopt it to embed responsible AI by design [202-208]. He called for continued ecosystem collaboration among technologists, policymakers, think-tanks and startups to ensure AI delivers societal benefits without unintended harms, and announced further policy conversations on AI governance led by The Dialogue [209-211].


Overall, the discussion mapped a roadmap from high-level principles of safe, ethical AI to practical mechanisms: co-created, evolving benchmarks; the first-of-its-kind Telangana Data Exchange; product-centric governance, observability, and built-in trust layers; sovereign data solutions; and a globally harmonised, phase-based assessment framework. Agreed-upon actions include publishing and iterating the RAISE Index, expanding the Telangana Data Exchange, open-sourcing compliance APIs, and pursuing a global compact on responsible AI to align standards and prevent misuse [13][39-42][151-169][202-208].


Session transcript: complete transcript of the session
Moderator

Thank you. Good afternoon, everyone. I know it’s Friday afternoon, almost end of a fantastic Global AI Summit. And good afternoon to my fellow distinguished panelists. I think the topic of this particular panel, it’s probably the apt one to wrap up this Global AI Summit because the most important arc in this innovation, the innovation of AI is making sure… the impact of the AI is safe, responsible, ethical, inclusive, and explainable, right? And it has to be holistic at the end of the day. I think there’s a lot that we have learned over the course of this week, listening to a number of different thought leaders talking about how AI could be channeled in a manner where it delivers the intended impact without getting into unintended consequences.

I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing this safe and ethical AI, right? Starting with: benchmarks must emerge from deployment reality, and not just research labs. Safety benchmarks fail when developed in isolation; the most effective ones come from institutions building, deploying, and maintaining AI at scale, right? Government innovation hubs sit at this critical intersection between policy intent and operational reality, surfacing failure modes and trust gaps. The second most important element in this framework is to ensure these safety benchmarks are co-created with the industry and with academia and the research institutions. ICOM and The Dialogue developed a one-of-its-kind index called the RAISE Index over the last year and a half that we have been working together, which is the first of its kind in quantifying the impact, or the value, of AI, within deployment and during development, on the safety and responsibility matrix.

And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to the entire framework and you could even test your respective AI solutions or AI systems that you might be developing or you already have in production, test it against that and then see what the index comes back and tells you. The third is making benchmarks practical. And in Telangana, we have launched Telangana Data Exchange, which is first of its kind, digital public infrastructure, within the realm of AI. It provides startups access to government data sets in a sandboxed environment. This is where benchmarks get validated and time tested. Startups can test their AI systems against actual data, actual use cases, actual constraints before deployment.

The third is we all understand and recognize that startups move at a rapid pace. So when startups are deploying AI solutions, there’s a number of risks that emerge. And we are providing this index again as part and parcel of the whole startup ecosystem that we are building. And as a result, we expect them to detect any early warning signs within this framework and continue to improve this. The last is benchmarks and frameworks must be living infrastructure, not static checklists, right? AI capabilities evolve faster than regulatory cycles. Static benchmarks become obsolete. Hubs must institutionalize continuous benchmark evolution. This RAISE Index methodology includes phase-based assessment, ensuring benchmarks remain relevant to company maturity stages. So if you take this broader framework of making sure, how do we make sure AI systems are safe and responsible and ethical, the question comes down to how is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI.

What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed for high-resource, homogenous environments. India operates in the context that most of the developing world shares, which is multilingual populations, infrastructure, and innovation. It has infrastructure constraints, massive scale, and the imperative to serve both economic growth and social inclusion. This is not a limitation. This is a significant competitive advantage that India has in shaping the global standards. Number two is demonstrating responsible AI in high-stakes, high-scale deployments, which we are offering. ICOM, the first of its kind AI innovation entity out of Telangana, with its research and co-innovation pillar, helps build AI solutions for healthcare, agriculture, climate, financial inclusion, where failures have immediate societal impact.

When we document how these systems are designed, tested, and governed, we contribute frameworks that have been validated under real-world complexity, not just lab conditions. This particular RAISE Index is India’s contribution to global standardization. You will notice, the more you dig into this index, the index harmonizes requirements across leading global frameworks, be it the EU AI Act, the NIST AI Risk Management Framework, the Singapore MAS Guidelines or the UK AI Assurance. We brought it all together into a single portable assessment. Organizations operating in multiple markets can use one assessment to evaluate alignment with diverse regulatory expectations. The methodology is open and adaptable for other jurisdictions. And I would leave you with a last but very important point of institutionalized continuous learning in responsible AI practice, right?

Most frameworks are static standards. ICOM believes in creating systems with ongoing feedback, tracking system performance over time, updating benchmarks as models evolve, incorporating new research. And the RAISE Index is designed as an iterative framework. What we are releasing today is the first edition, and it will continue to evolve through pilot phases and stakeholder consultation. It’s not a one-time standard. We all know AI is an evolving technology and this has to evolve, but our intent and goal and hope is this would keep pace with the pace with which the technology is moving, and that is very critical. And that’s a common responsibility that we all hold, be it technologists, be it policy makers, be it think tanks, or be it researchers or startups. We all have to come together as an ecosystem to ensure the technology that we put out there, with the intent of doing benefit for the society, does exactly that, without any unintended consequences. So I think we are up for a fantastic panel and you guys absolutely would enjoy the conversation that is going to be held now.

Thank you.

Kamesh Shekar

Thank you so much, sir, for setting the context. And I think like that deep, like sets the perfect context for us to like pick up the conversation from there, which is going to be like we are discussing today in terms of like reimagining like responsible AI. What are we trying to like do today in this panel is to like, you know, understand like what are the shifts that are needed like when it comes to responsibilities with evolving innovations and like how we can take the needle forward when it comes to responsibilities. I would like to start with Ms. Arundhati Bhattacharya here. Thank you so much, ma’am, for taking the time. It’s absolutely a pleasure to host you.

And first question is to you, ma’am, is that is like as you are a global enterprise leader, how do you see the balance between the rapid AI innovation with there is a need for a trust and accountability and customer protection as well? So how do you see that balance?

Arundhati Bhattacharya

So, you know, in the company that I work for, Salesforce, we started our AI journey in 2014. And in 2014, we also set up within the company an office for the humane and ethical use of technology. So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market. Because we realized very early on that while technology and AI could give us many advantages, it would also be used by bad actors for doing things which it was never intended for. And that is true of every single thing that, you know, we come up with. Whether it be a new medicine, whether it be nuclear energy, whether it be anything that we come up with, it can have its good use.

It can also be used for the wrong reasons. And that is something that we must come together in a global compact in order to defeat and in order to stop. Again, this has to be a global compact. It’s not something that one country or one organization or one effort can probably ensure. Because unless and until we have sufficient transparent information exchange, unless and until we all say together that this is not something that we will allow, it would be very difficult for us to stop the bad actors. It’s not easy. Today you see the kind of deep fakes that are there, stuff that we never thought of in our childhood, families having safe words amongst themselves.

It’s not something that was there at all. But today, in fact, I was asking a colleague from the US. And he was saying, yes, we do have a safe word in the family because we don’t know when somebody is going to get a call that’s going to sound like me. And it’s going to say that I’m in the hospital and I need so much money. Please come and get me. And it might be somebody entirely different trying to scam you. So we do have safe words. Now, imagine the extent to which we have gone, where we are having to teach children that these are the ways that you can be sure and you can be safe.

Now, this is not something that we want, because obviously, AI is also something that can speed up things like medical research. It can actually speed up skilling. It can speed up many things which enable us and empower us to come up to potential. So a technology this powerful should not and cannot be stopped because bad actors are misusing it. And therefore, it’s up to all of us to come up with a framework. A global compact, again, as I say, a framework that will enable us to ensure that we are all of us together trying to stop the bad actors and ensuring that this is being used for the good of humanity.

Kamesh Shekar

Excellent point, ma’am. I think a very interesting aspect is your starting remark in terms of putting together an office on the humane aspect, which actually shows that it’s not only the technical side can solve the problem when we talk about responsibility, it’s also organizational ethics and organizational ethos which kind of brings that kind of essence to it. And great submission on the global compact, and I think that’s something that we should all strive towards, and I hope the summit will kickstart that process for us as well. I’ll come back to you, ma’am. I know you have a hard stop, but I’ll come back to you for one more question. But now I would like to go to Karna here.

Thank you so much, Karna, for joining. We did hear from ma’am in terms of what can be actually done in terms of… Thank you… from how larger organizations are looking from this. But I would like to pick your brains in terms of, as a startup and an MSME, what are the operational challenges that you guys face when you are trying to balance this equation of responsibility versus innovation? And also you guys are looking at it from a foresightedness and new technologies. So any thoughts there would be

Karna Chokshi

make the AI technology, which comes with a lot of power, be a bit more enterprise-software-ish in terms of compliance, governance, observability. So that’s what we do. Which means, the way we believe is, if governance looks like a 200-page PDF for all companies, MSMEs, to figure out, we will see them struggle. And our idea is it should be a part of the core product. As a lot of us are building solutions for customers, governance should be the core product. We believe: productize it, and that allows mass adoption. And the way we do it is, so, governance to productize it, just writing into the prompt is just the first line of defense.

It should be the core part through the entire agentic lifecycle. Which means, at the time you’re giving it an input and it’s reasoning, there are guardrails it checks before it does some tool calling, which is like, hey, I’m gonna write to the CRM, or I’m gonna talk to one of your customers on this topic; there is again guardrails before that. And even when you do an output, there needs to be guardrails. And the guardrails should be a part of the core product, and that is important to drive mass adoption. And secondly, the way we think is, knowing we build voice agents for companies, we still believe human in the loop is a first-class feature, not a failure point. Which means you should design the system that, in the intent to give an answer, it doesn’t give wrong answers. It’s okay to figure out when it should transition from a fully autonomous to an assisted agent, to a human. And that principle of using humans in the right place should be the core of our product. And that productization has allowed us… So we also have another company up now, which is a hiring platform, which allows around 3 lakh companies.

Now, because what we saw beautifully when we productize a lot of these: every year, every month in fact, 3,000 MSMEs are building voice interview agents on their own. They’re not even realizing, because we have productized it, that at the back of it there are three agents they are creating and training for their recruiting process, and they’re deploying it within a matter of five minutes. So, and that has driven to adoption of 30,000 companies who are doing it on their own. And if we want the entire India, all companies, to leverage it, more and more as software, agent-based software builders, the more we productize it, the better the adoption will be.

Kamesh Shekar

That’s an excellent point, right? Like, I think like this is something that like we kind of like also keep speaking, is that productization of responsible AI from a value proposition perspective, right? Like how can responsible AI be embedded as a value proposition towards the product that you’re building, which also is one of the selling points for like whatever that is like taken. That’s a great, great point. So I’ll definitely come back to you, but I would like to go to Ankush, and then like I’ll come back to ma’am again. Quickly, like, Ankush, wanted to like understand: you guys build AI systems, so what are the governance challenges that you see most are like, you know, different when it’s for public and private?

Ankush Sabharwal

Yeah, I think one is control. I think when it’s about sovereign AI, so it’s not just the data residency which matters to our client; they want the complete control. No one else, no other government, no other party should be able to even see that, sniff that, audit that. So I think that is something which our clients ask for, and that’s why, though we work with almost all the cloud providers, we let the decision be with our clients, like which data center they want us to hold. And now we see the huge demand of on-premise solutions, and that’s why now, even, we had seen the need of the edge AI. Day before yesterday, with NVIDIA, we have launched the Vada GPT desk AI appliance. So that’s a supercomputer itself that processes around one petaflops floating point instructions, and, you know, 4db hard disk, and that can run a model with one trillion parameters. Huge, right? But our Vada GPT model is just half a billion parameters, so means they can do multiple models, multiple use cases, just one box. And we’ll be announcing that soon. We’re working with the defense, and now there’s a huge need to have it not just in India, not just on premise; it’s just in the room, on the desk, right now, when the army is doing critical meetings. So they don’t want the data to even go out of the room. So even that kind of, but with complete processing, complete sovereignty. And they also don’t want to limit the use cases also, right? So they want to start with minutes of meeting, and the aspirations keep increasing, so we needed to have a supercomputer, thanks to NVIDIA who’s powering our box there. So I think that is the major part. Rest, we all know about explainability, inclusivity and privacy and purpose. So I think this is something where, I think, that’s why many, many data centers are coming up in the country; there is a need of having our own data center here.

Kamesh Shekar

That’s excellent. Like, I think like what you’re trying to underline is the trust over the solutions, and that’s coming through the sovereignty of the data: the more they have control over it, the more it is.

Ankush Sabharwal

That’s correct. So now the tagline is AI with purpose and trust. Trust is of course important for any relationship, like vendor. So I think with AI the trust is more important, because they are trusting us, they are giving us data to create the models. So that’s why many new companies are coming up, you know; of course I thank and welcome them to come to the table, but I think now the old players are still being valued, so the work is still concentrated here, though the deliveries are taking time and all that. But there is definitely now a need. And I think my message to all the new startups and AI startups is: yes, innovation you have to keep showing, doing, but show the trustworthy part of it. Said about observability, I think that’s very, very important. So enterprises want more of trust, scale, security than the innovation. I’m not saying don’t do the innovation, but the trust part is very important, especially when AI comes.

Kamesh Shekar

That’s a great, great, important submission. So, but ma’am, over to you. I think like you have to leave in five, so like any closing remarks that you would like to, like, you know, provide?

Arundhati Bhattacharya

No, the one thing that I wanted to talk about was trust, because that’s what was being discussed: trust in Salesforce. Trust is our number one value. We have five values. The first is trust. The second is customer success, followed by innovation, equality and sustainability. But trust is definitely number one. Now, having said that, we are number one in trust. We are also a cloud-native company, so we do not have on-prem systems. And we also believe that it is important for us to adopt asset-light models, mainly because today the need for storage and compute is so high, given the fact that AI is able to handle trillions and trillions of data points.

And the more data points you have, the better your answers will be. Of course, not for everything; you don’t need to boil the ocean for every single thing. But where there are really deep questions that will benefit from the diversity and the extent of the data, it is very important for us to have the right kind of compute and storage facility. Now, obviously, if you’re going to have that kind of storage and compute facility entirely on-prem, it also means a pretty high amount of investment into the hardware resource, and India is not very well known for having deep pools of resources. So given the fact that we necessarily have to have capital-light models, it’s important for us to find ways and means of ensuring logical security and trust.

And there are ways of doing this, several ways of doing this. One of the reasons, by the way, why we were behind Copilot in bringing our enterprise-level offerings to the market was because we were working very hard on the trust layer. Because the trust layer is not only about access. It’s also about ensuring not only that your data doesn’t go out, but also that your data doesn’t have any toxicity, that your data doesn’t have bias, that your model is not hallucinating. And by the way, the bigger the amount of data, the greater the tendency to hallucinate. And obviously, you don’t want something as important as this to hallucinate and give you a wrong answer.

So the Trust Layer actually performs a number of these actions, all meant towards ensuring that the results that come out are not only responsible, they are trustworthy. Thank you.
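The checks Arundhati lists, data leakage, toxicity, bias and hallucination, can be sketched as a pipeline that every model answer passes through before release. The rules below are toy stand-ins invented for illustration, not Salesforce’s actual Trust Layer:

```python
import re

# Illustrative "trust layer" sketch: mask leakage, block toxic output,
# and reject answers not grounded in the supplied source passages.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
TOXIC_TERMS = {"idiot", "stupid"}  # stand-in for a real toxicity classifier


def mask_leakage(text: str) -> str:
    """Redact obvious PII (here, just email addresses) before release."""
    return EMAIL.sub("[REDACTED]", text)


def is_toxic(text: str) -> bool:
    return any(term in text.lower() for term in TOXIC_TERMS)


def is_grounded(answer: str, sources: list) -> bool:
    """Naive hallucination check: leading words must appear in some source."""
    return all(
        any(word in s.lower() for s in sources)
        for word in answer.lower().split()[:3]
    )


def trust_layer(answer: str, sources: list) -> dict:
    answer = mask_leakage(answer)
    if is_toxic(answer):
        return {"released": False, "reason": "toxicity"}
    if not is_grounded(answer, sources):
        return {"released": False, "reason": "ungrounded"}
    return {"released": True, "answer": answer}
```

A real deployment would swap each predicate for a trained classifier, but the shape, gate the output rather than trust the model, is the point being made.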

Ankush Sabharwal

And we created it, we launched it, when we had seen, and I’m still not saying we are 100% safe, but I’ve seen the world is now okay with having inaccuracies, right? We are a bit risk-averse; we are not such risk takers, even when the whole world was okay with it. Because of the clients we have, you see: our clients IRCTC, LIC, NPCI, and the Army and defense, they used to expect 99.9% accuracy. When the whole world was okay getting wrong answers from these general-purpose LLMs, they got more convinced, and most of our clients came before the ChatGPT days, so that was classic NLP. I liked your point that we don’t have to answer everything, right? So guardrails really are important.

But now most of our clients have gone to Gen AI. Not only Gen AI, though: we do composite AI. We still follow, for conversation, the classic NLP-based intent classification and entity extraction. You would not believe it: 80 to 90 percent of our interactions happen through classic NLP, without Gen AI, because we think we all are different, but we are not, right? Say, in one of them, IRCTC: four million people come to IRCTC, but if I open the dashboard there are only eight to ten intents. You have to book, cancel, change boarding station, whatever. So for 80 percent of use cases, if someone is saying “I want to travel from Bangalore to Delhi tomorrow,” there is no Gen AI involved; only NLU is involved, that old model works, it just calls the API and gets the data. No Gen AI. If someone says “hey, I have three pets, how do I do it?”, well, if it is one pet, that is a policy that we know; but “I have three pets, can I carry them in my train?”, probably that answer is not there in classic NLP, and for that we do the RAG-based approach with BharatGPT. So if safety is important, that should be the core of the design, and then composite AI: don’t do just Gen AI because Gen AI is easily available, and don’t use Gen AI because you have money to buy GPUs and burn the tokens. The idea is: do purpose-led innovation, begin with the end in mind. I have said this line I think 10 times today: first see what problem you are solving, then you see which solution, then which model. If a model is available, use the available model; if not, build it.
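The composite-AI routing Ankush describes, deterministic intent classification for the head of the traffic with a generative fallback for the long tail, can be sketched roughly as below. The intents, keywords and RAG stand-in are hypothetical, not the production system he is describing:

```python
from typing import Optional

# Head-of-distribution intents handled without Gen AI (illustrative only).
INTENT_KEYWORDS = {
    "book_ticket": ["travel from", "book a ticket"],
    "cancel_ticket": ["cancel"],
}


def classify_intent(query: str) -> Optional[str]:
    """Classic-NLP stand-in: keyword-match the query to a known intent."""
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return None  # long-tail query: no known intent


def rag_answer(query: str) -> str:
    # Stand-in for retrieval-augmented generation over policy documents.
    return f"RAG fallback for: {query}"


def handle(query: str) -> str:
    intent = classify_intent(query)
    if intent is not None:            # the 80-90% path: no Gen AI involved
        return f"deterministic API call for intent '{intent}'"
    return rag_answer(query)          # the long-tail path
```

The design choice the sketch captures is that the cheap, deterministic path is tried first, and the probabilistic model is only reached when the classifier declines.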

Kamesh Shekar

That’s an excellent point. Thank you so much, Ankush, for making the time. Quickly moving to Karna: any closing remarks, and also whatever you want to add to your previous point?

Karna Chokshi

Yeah, so to the point Ankush was mentioning: AI technology is fundamentally designed on a probabilistic model, and we are all used to software working in a deterministic manner, right? It has to do exactly this. Now when it comes to large processes for large enterprises, I think compliance is one area which is super hard to think about. AI is probabilistic, but compliance you always want to be correct. So to enable the ecosystem, what we believe is that we are converting compliance into APIs. What I mean by that is: we’re deploying a voice agent in one of the large mutual fund houses, and all the compliances for that industry are checkboxes.

So every company can pick what compliances they need. They just need to take the APIs which they want to enforce, and that makes the entire ecosystem flourish; and these APIs should ideally get open-sourced in the market, so there is enough validation across all players that, hey, this SEBI guideline is an API which you can invoke in your agent and the agent will follow it. And this has been pressure-tested. This takes away the burden of ensuring AI works 100% correctly in all use cases, which is not the nature of the technology; if we don’t think like that, then we’ll become very restrictive in its application. We work a lot on reaching P99 accuracy, but there is always a probabilistic chance of error.
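Karna’s “compliance as APIs” idea can be sketched as a registry of reusable checks that each deployment opts into. Both rules below are invented for illustration; they are not real SEBI guideline text:

```python
from typing import Callable, Dict, List

# Each regulation becomes a reusable predicate the agent runs on its draft
# reply. Deployments select only the "checkboxes" their industry requires.
COMPLIANCE_CHECKS: Dict[str, Callable[[str], bool]] = {
    "risk_disclosure_present": lambda t: "subject to market risks" in t.lower(),
    "no_guaranteed_returns": lambda t: "guaranteed returns" not in t.lower(),
}


def run_compliance(draft: str, selected: List[str]) -> Dict[str, bool]:
    """Run only the checks this deployment opted into."""
    return {name: COMPLIANCE_CHECKS[name](draft) for name in selected}
```

If such checks were open-sourced, as he suggests, every player could invoke the same pressure-tested predicate rather than re-implementing the rule and re-proving its correctness.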

And I think the second point we should think about is that the human state of mind works on defaults versus options. What I mean by that is: whatever is the default selection in anything you do gets 80 or 90% adoption, and whatever requires a change gets 20%. So the way we think about it, a lot of things should be protective by default. Customer data should not be used by default to train LLMs or models; it should be an optional add-on rather than the other way around, which is what you see today. Because otherwise most startups, MSMEs and businesses would ignore it, and the scale of innovation will not happen if that’s not the default state.
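The default-protective setting he argues for amounts to making training on customer data opt-in, so a tenant that configures nothing stays excluded. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TenantSettings:
    # Protective default: doing nothing keeps customer data out of training.
    allow_training_on_customer_data: bool = False


def training_corpus(records: List[str], settings: TenantSettings) -> List[str]:
    """Customer records reach the training set only on explicit opt-in."""
    if settings.allow_training_on_customer_data:
        return records
    return []
```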

And lastly, explainability is extremely important, because as models are making decisions, how do you know why a decision was made? If we make that a core output of the API, rather than thinking “oh, if something breaks, we will figure out how it works,” we enable our partners to be decision makers with us when designing AI solutions for them. So that’s what we focus on: how do we make AI, a probabilistic technology, P99-available for enterprises. And governance is the prime topic that comes up when we ask what the missing element is to get mass adoption, and that’s something which I want the entire ecosystem to embrace.
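Explainability as a core API output, rather than an after-the-fact reconstruction, can be sketched as a decision object that always carries its reasons. The rules and schema below are hypothetical:

```python
from typing import Dict, List, Union

# Every decision returned by the (illustrative) agent carries the evidence
# behind it, so partners can audit the "why" alongside the "what".


def decide(application: Dict[str, int]) -> Dict[str, Union[str, List[str]]]:
    reasons: List[str] = []
    if application["income"] < 30000:
        reasons.append("income below 30000 threshold")
    if application["prior_defaults"] > 0:
        reasons.append("prior defaults on record")
    return {
        "decision": "rejected" if reasons else "approved",
        "explanation": reasons or ["all checks passed"],
    }
```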

Can we make it an API? Can compliance and governance be infrastructure rather than paperwork? Because if they stay paperwork, then we’re going to see slower adoption in India than maybe in other parts of the world.

Kamesh Shekar

That’s a great point. Thank you so much, Karna. But we have very few minutes left, and we have one panelist who has dedicated full time for us, so kudos to that. So, opening up to the floor: any questions? I think we can take two questions, given the time frame. Any questions to Karna? Anybody? Yeah.

Audience

Hi, good evening. Hello. So, my question is related to small language models, which are becoming increasingly popular within the developer community. For businesses like yours, do you see a profitable path ahead for SLMs, or do we continue depending on these LLMs, which I think will be a race to the bottom?

Karna Chokshi

Yeah, no, great question. We think about it a lot, and a lot of our customers ask: hey, would you be using an SLM or will you use an LLM? I think at the stage where we are, we will all benefit from the flexibility of LLMs, because frankly most companies are deploying their first or second actual large-scale deployment, and it is helpful to leverage the power of the larger models at that time. Over time you will learn what is actually needed, and you can transition from LLMs to an SLM, where you get the advantage of sometimes latency, sometimes cost, depending on what your use case optimizes for. But in the interest of speed of innovation, it’s okay to just use an LLM, figure out where the value is coming to your business, and then explore the journey to an SLM, which can give you additional advantages. Thank you.

Kamesh Shekar

Anyone else? Awesome. So thank you. I would now request Sarj to take it over.

Moderator

Thank you so much, Kamesh. Thank you so much to all of our panel members. I think it’s been a really, really interesting discussion on where responsible AI is now and its future, particularly with artificial intelligence going ahead. I’ll call Mr. Kazim Rizvi, the founding director of The Dialogue, to give the closing remarks for the session. Kazim?

Kazim Rizvi

This works, this doesn’t work... Thank you, I think this mic works. Yeah, okay, great. Thanks a lot, Sahish. And thank you, Kamesh. Thank you to all those who stayed back till now. I think we are crossing the limit of event fatigue; I know a lot of us are quite tired and very exhausted, too many events. But the last one week has been fantastic. We’ve had the pleasure and the honor of hosting a few events over the last week. But specifically on Responsible AI, as Fani was saying in the beginning, The Dialogue and ICOM have developed India’s first tool to assess Responsible AI readiness. So we urge and encourage and motivate all of you to look into that.

But thank you, Kamesh, for moderating, and thank you to all our speakers for joining in. I think it’s important that we all work towards building Responsible AI practices from the beginning, by design; that’s something which even the tool will encourage, so please have a look at it. All of you have a good evening for what is left of the AI summit. It’s been a fantastic summit, and hopefully all of us got to learn a lot; I did myself. I look forward to seeing you all soon. The Dialogue will be hosting multiple conversations on AI policy, and we encourage you all to join. But until then, have a good evening, enjoy your weekend, and thank you to all our panelists again. Thank you.


Related Resources
Knowledge base sources related to the discussion topics (17)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Benchmarks must emerge from deployment reality rather than isolated research labs.”

The knowledge base states that benchmarks should emerge from deployment reality and not just research labs, confirming the claim [S2].

Confirmed (high)

“Attendees could scan a QR code on the screen to access the full framework and test their own AI solutions against it.”

Both sources describe a QR code that provides access to the entire framework and allows users to test their AI solutions, confirming the claim [S13] and [S77].

Additional Context (medium)

“The Telangana Data Exchange is a first‑of‑its‑kind digital public infrastructure within the realm of AI that gives startups sandboxed access to government datasets for validating models.”

While the knowledge base does not mention Telangana specifically, it discusses India’s approach of treating AI as a shared public infrastructure, which adds context to the claim about a sandboxed data exchange for startups [S84].

External Sources (88)
S1
Artificial Intelligence & Emerging Tech — Kamesh Shekar, Youth Ambassador at The Internet Society
S2
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comprehensive panel discussion served as the closing session of the Global AI Summit, bringing together enterprise …
S3
Building the Next Wave of AI_ Responsible Frameworks & Standards — – Karna Chokshi- Ankush Sabharwal – Karna Chokshi- Arundhati Bhattacharya – Karna Chokshi- Arundhati Bhattacharya- Kaz…
S4
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S5
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S6
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S8
S9
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S11
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — Kazim Rizvi:I hope I’m audible. Thank you to the chair, thank you to GIGANET and IGF for hosting us today in Kyoto on a …
S12
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Kazim Rizvi- Moderator/Host of the panel discussion This panel discussion on heterogeneous computing and AI infrastruc…
S13
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S14
Setting the Rules_ Global AI Standards for Growth and Governance — I didn’t realize that. No, the one thing I wanted to add in terms of like a goal for where we can find ourselves two yea…
S15
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Brandon Mello from GenSpark identified adoption challenges, noting that 95% of AI pilots fail to reach production due to…
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — This insight challenges the conventional view that linguistic diversity is a barrier to AI development. Instead, Raghava…
S17
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S18
Panel Discussion: 01 — We are expecting our other guests to join us very soon as Ms. Devjani Khosh, Distinguished Fellow Niti Aayog is going to…
S19
Data first in the AI era — International coordination is necessary beyond national frameworks Melamed argues that while there have been many data …
S20
https://dig.watch/event/india-ai-impact-summit-2026/shaping-the-future-ai-strategies-for-jobs-and-economic-development — Governments willing to move decisively, private sector actors willing to collaborate, technologists willing to design fo…
S21
Panel Discussion Data Sovereignty India AI Impact Summit — “One, of course, is basically the policies need to evolve along with the infrastructure.”[37]. “As far as governments ar…
S22
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S23
Responsible AI in India Leadership Ethics & Global Impact part1_2 — And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at tra…
S24
S25
Safe and Responsible AI at Scale Practical Pathways — Prem Ramaswami from Google’s Data Commons project provided a complementary perspective on making public data accessible …
S26
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Common definitions on data sovereignty are required Enabling a free flow of data is essential for access to new technol…
S27
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou acknowledged the tension between developing efficient small models and continuing to advance the field thr…
S28
Understanding the language of modern AI — Large Language Models (LLMs)are trained on vast datasets containing billions or trillions of words from across the inter…
S29
How Small AI Solutions Are Creating Big Social Change — So in our paper, we are providing all these three CPs to follow to get the best boost in terms of performance. What I wo…
S30
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya advocates for cloud-native solutions with trust layers to ensure security while leveraging shared compute r…
S31
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S32
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S33
Digital policy issues emphasised at the G20 Leaders’ Summit — A reference is made to the need to ensure respect for privacy and personal data protection in the context of any action …
S34
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Building trust is highlighted as a fundamental requirement for data governance in multilateral environments. Trust can b…
S35
Operationalizing data free flow with trust | IGF 2023 WS #197 — In conclusion, the analysis presents a comprehensive overview of the various facets of data flows, their impact on compe…
S36
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S37
Secure Finance Risk-Based AI Policy for the Banking Sector — This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automat…
S38
Conversational AI in low income & resource settings | IGF 2023 — Rajendra Pratap Gupta: But Sameer, even after the Sarbanes-Oxley Act in the financial markets, we had the subprime crisis…
S39
eTrade for all leadership roundtable: The role of partnership for a more inclusive and sustainable digital future — These entities possess the advantage of agility, risk-tolerance, and innovation, making them valuable contributors to po…
S40
How AI Is Transforming Indias Workforce for Global Competitivene — Moderate disagreement with significant implications – while speakers share common goals of inclusive AI development and …
S41
Day 0 Event #142 Navigating Innovation and Risk in the Digital Realm — Noha argues that the speed of digital innovation is outpacing the development of national strategies, digital skills, an…
S42
Setting the Rules_ Global AI Standards for Growth and Governance — So in summary, and thank you, dear panelists, for the great discussion. So you heard today that standards are important….
S43
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S44
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S45
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Audience: Yeah thank you Elizabeth Ponsleit speaking a member of the Policy Network for AI. What I want is to get from v…
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S47
Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39 — Data needed for policy making needs to reflect their specific local contexts
S48
WS #102 Harmonising approaches for data free flow with trust — Dave Pendle: Just take maybe 15, 20 seconds, but I mean, cooperation on data governance requires trust and you’ll nev…
S49
Nri Collaborative Session Data Governance for the Public Good Through Local Solutions to Global Challenges — – Consider hybrid approaches that balance sovereignty with practical needs Nancy Kanasa: Good morning, everyone. I’m Na…
S50
Global AI Policy Framework: International Cooperation and Historical Perspectives — And I think that’s been foundational to the summit and all the activities that’s been happening. And so I think there’s …
S51
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S52
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S53
Agentic AI in Focus Opportunities Risks and Governance — Benchmarks created jointly by academia and industry are needed to test multi‑agent behaviours before deployment.
S54
AI Safety at the Global Level Insights from Digital Ministers Of — The evaluation ecosystem should be multi-stakeholder, involving government, industry, researchers, civil society, and in…
S55
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S57
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S58
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — “At GSMA, about 12 months ago, we formed a coalition called Cross -Sector Any Scam Task Force”[60]. “And the important t…
S59
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S60
Responsible AI in India Leadership Ethics & Global Impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S61
WS #123 Responsible AI in Security Governance Risks and Innovation — Both industry and humanitarian perspectives converged on integrating governance considerations throughout the entire AI …
S62
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Sovereignty has multiple layers: data, operations, technology stack – can control three out of four
S63
S64
Operationalizing data free flow with trust | IGF 2023 WS #197 — In summary, the fear of government access to data poses a threat to the free flow of data with trust. Microsoft’s statis…
S65
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Common definitions on data sovereignty are required Enabling a free flow of data is essential for access to new technol…
S66
Understanding the language of modern AI — Large Language Models (LLMs)are trained on vast datasets containing billions or trillions of words from across the inter…
S67
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Balance between large foundational models and small specialized models Ioanna Ntinou acknowledged the tension between d…
S68
WS #219 Generative AI Llms in Content Moderation Rights Risks — Marlene Owizniak: And before I open it up to the floor, I just wanted to highlight a few of the key risks that we found,…
S69
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S70
Ethical AI_ Keeping Humanity in the Loop While Innovating — Innovation is much more than that. innovation is really challenging ourselves to go further. And I want to go back to a …
S71
WS #110 AI Innovation Responsible Development Ethical Imperatives — – Ke GONG- Dr. Yik Chan Chin- Moderator Godoi emphasizes that if innovation is not for everyone, then something is miss…
S72
Panel Discussion Inclusion Innovation & the Future of AI — And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an are…
S73
AI for food systems — Seizo Onoe argues that by providing shared digital infrastructure and conducting pilot programs, the initiative will ena…
S74
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S75
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Described Warsaw’s values-first approach to AI governance, beginning with stakeholder engagement and citizen consultatio…
S77
Digital Safety and Cyber Security Curriculum | IGF 2023 Launch / Award Event #71 — In addition to cybersecurity, the analysis touches upon other topics as well. It mentions the creation of interactive sc…
S78
Protecting vulnerable groups online from harmful content – new (technical) approaches — The speaker, evidently in a coordinating role, commenced with vital updates for the attendees, underlining their intenti…
S79
WS #211 Disability &amp; Data Protection for Digital Inclusion — Fawaz Shaheen: . . Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to …
S80
Day 0 Event #35 Empowering consumers towards secure by design ICTs — WOUT DE NATRIS: Thank you, Joao. And I think that shows how the two topics also intersect with each other, because w…
S81
Unlocking Trust and Safety to Preserve the Open Internet | IGF 2023 Open Forum #129 — The jurisdiction may affect the approach to different cases
S82
Ad Hoc Consultation: Friday 2nd February, Afternoon session — The delegation has formally expressed its support for the European Union’s proposal to alter the terminology in a docume…
S83
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Many jurisdictions have expanded their reach and legal basis with some form of extraterritoriality
S84
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S85
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Valeriya Ionan:So I would like to echo, in some ways, the previous speakers. Well, we believe in golden triangle of rela…
S86
Regional Leaders Discuss AI-Ready Digital Infrastructure — The country offers attractive tax incentives and customs exemptions for investors willing to build data centers worth ov…
S87
vi CONTENTS — Overall, the contributors consider the fundamental issues which must be raised in order to understand how multilateralis…
S88
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — I didn’t realize that. No, the one thing I wanted to add in terms of like a goal for where we can find ourselves two yea…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Moderator
4 arguments · 45 words per minute · 1115 words · 1463 seconds
Argument 1
Co‑created, living benchmarks are essential
EXPLANATION
The moderator stresses that safety benchmarks should be developed together with industry, academia and research institutions rather than in isolation. Living benchmarks that reflect real‑world deployment are needed to close trust gaps.
EVIDENCE
He notes that the second most important element is co-creation of safety benchmarks with industry and academia [12]. He also points out that benchmarks must emerge from deployment reality, not just research labs, and that safety benchmarks fail when developed in isolation [10-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety benchmarks should be co-created with industry, academia and research institutions and validated beyond labs, as emphasized in [S2].
MAJOR DISCUSSION POINT
Co‑created, living benchmarks are essential
AGREED WITH
Karna Chokshi, Kamesh Shekar
Argument 2
The RAISE Index unifies global standards
EXPLANATION
The moderator describes the RAISE Index as a tool that aggregates requirements from major AI regulatory frameworks into a single, portable assessment. It enables organisations operating in multiple markets to evaluate alignment with diverse regulations through one methodology.
EVIDENCE
He explains that ICOM and the dialogue developed the RAISE Index, the first of its kind to quantify AI impact on safety and responsibility [13], and that the index harmonises requirements across the EU AI Act, NIST AI RMF, Singapore guidelines and the UK AI Assurance [39-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RAISE Index aggregates requirements from major AI regulatory frameworks into a single assessment methodology, and a QR-code provides access to the full framework for testing, as noted in [S2] and [S13].
MAJOR DISCUSSION POINT
The RAISE Index unifies global standards
AGREED WITH
Kazim Rizvi
Argument 3
Benchmarks must evolve continuously with AI capabilities
EXPLANATION
The moderator argues that AI capabilities outpace regulatory cycles, so static checklists quickly become obsolete. Benchmarks therefore need to be treated as living infrastructure that is continuously updated.
EVIDENCE
He states that AI capabilities evolve faster than regulatory cycles, making static benchmarks ineffective, and that hubs must institutionalise continuous benchmark evolution [24-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Living, iterative benchmarks are advocated over static checklists, with the RAISE Index designed as an iterative framework that evolves with AI capabilities, according to [S2].
MAJOR DISCUSSION POINT
Benchmarks must evolve continuously with AI capabilities
Argument 4
India’s multilingual, large‑scale context gives it a competitive advantage in shaping inclusive AI standards
EXPLANATION
The moderator highlights that India operates in a multilingual, resource‑constrained environment that mirrors many developing nations. This unique context positions India to influence global AI standards toward inclusivity and scalability.
EVIDENCE
He notes that most global AI frameworks are designed for high-resource, homogeneous settings, whereas India deals with multilingual populations, infrastructure constraints and massive scale, turning these challenges into a competitive advantage [29-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s multilingual, resource-constrained environment is identified as a competitive advantage for shaping inclusive AI standards in [S15] and [S16].
MAJOR DISCUSSION POINT
India’s multilingual, large‑scale context gives it a competitive advantage in shaping inclusive AI standards
Arundhati Bhattacharya
3 arguments · 111 words per minute · 929 words · 498 seconds
Argument 1
Salesforce created an “Office for Humane and Ethical Use of Technology” and calls for a global compact
EXPLANATION
Arundhati explains that Salesforce established a dedicated office to review every product and process for humane and ethical considerations before market launch. She argues that preventing misuse of AI requires a worldwide compact with transparent information exchange.
EVIDENCE
She recounts that Salesforce set up an Office for the Humane and Ethical Use of Technology in 2014, which reviews all products before release [58-60], and stresses the need for a global compact to stop bad actors through shared transparency and collective commitment [65-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Salesforce’s establishment of an Office for Humane and Ethical Use of Technology and the call for a global compact are documented in [S2].
MAJOR DISCUSSION POINT
Salesforce created an “Office for Humane and Ethical Use of Technology” and calls for a global compact
Argument 2
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
EXPLANATION
Arundhati states that trust is the number‑one value at Salesforce and that the company has built a Trust Layer to protect data, eliminate bias, and prevent hallucinations in AI outputs. This layer underpins the reliability of their AI services.
EVIDENCE
She lists trust as the first of five core values and claims Salesforce is the market leader in trust [112-118]. She then details how the Trust Layer safeguards data, checks for toxicity, bias and hallucination, especially as model size grows, to deliver responsible results [119-136].
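The trust-layer idea described here can be sketched in miniature. The following is a hypothetical illustration only, not Salesforce's actual Trust Layer; the PII pattern, toxicity check and grounding check are toy stand-ins for the real safeguards:

```python
# Hypothetical sketch of a "trust layer": mask sensitive data before the
# model sees it, then screen the model's output for toxicity and for
# claims not grounded in retrieved sources. All checks are illustrative.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(prompt: str) -> str:
    """Replace email addresses so they never leave the trust boundary."""
    return EMAIL.sub("[EMAIL]", prompt)

def screen_output(answer: str, source_docs: list[str]) -> str:
    """Flag toxic wording and answers unsupported by the source documents."""
    if "idiot" in answer.lower():  # toy toxicity check
        return "[blocked: toxicity]"
    grounded = any(answer.lower() in doc.lower() for doc in source_docs)
    return answer if grounded else "[flagged: possible hallucination]"

safe_prompt = mask_pii("Summarise the ticket from alice@example.com")
print(safe_prompt)  # -> Summarise the ticket from [EMAIL]
```

In a production system each stage would call real moderation and grounding services; the point of the sketch is only that the checks sit between the user, the model and the response.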
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Salesforce’s Trust Layer that safeguards data, mitigates bias and hallucinations is detailed in [S2].
MAJOR DISCUSSION POINT
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
AGREED WITH
Ankush Sabharwal
DISAGREED WITH
Ankush Sabharwal
Argument 3
A global compact is needed to prevent misuse of AI and ensure worldwide cooperation
EXPLANATION
Arundhati reiterates that AI misuse can only be curbed through a coordinated international agreement that binds all actors to shared norms. She emphasizes that no single country or organisation can succeed alone.
EVIDENCE
She argues that stopping bad actors requires sufficient transparent information exchange and a collective declaration that such misuse will not be tolerated, calling for a global compact [65-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a global AI compact and international coordination beyond national frameworks is discussed in [S2] and reinforced by [S19].
MAJOR DISCUSSION POINT
A global compact is needed to prevent misuse of AI and ensure worldwide cooperation
Ankush Sabharwal
3 arguments · 170 words per minute · 971 words · 342 seconds
Argument 1
Trust is the primary enterprise requirement; solutions must be risk‑averse and demonstrably reliable
EXPLANATION
Ankush asserts that enterprises prioritize trust above all, demanding highly reliable AI that minimizes risk. He notes that clients expect near‑perfect accuracy and that his company adopts a risk‑averse stance to meet those expectations.
EVIDENCE
He explains that clients such as IRCTC, LIC, NPCI and the Army expect 99.9% accuracy, and that his firm is risk-averse, preferring safe, reliable solutions over rapid innovation [108-110]; he reiterates the need for high accuracy and risk aversion in later remarks [141-148].
MAJOR DISCUSSION POINT
Trust is the primary enterprise requirement; solutions must be risk‑averse and demonstrably reliable
AGREED WITH
Arundhati Bhattacharya
DISAGREED WITH
Karna Chokshi
Argument 2
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
EXPLANATION
Ankush describes how customers require complete sovereignty over their data, leading his firm to offer on‑premise and edge AI appliances that keep processing within the client’s premises. This approach satisfies stringent security and compliance needs.
EVIDENCE
He mentions that clients want absolute control, with no external party able to see their data, and that his company provides on-premise and edge AI appliances such as the Vada-GPT desk-AI appliance with petaflop capability [108-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Clients’ demand for sovereign, on-premise processing is supported by discussions on data sovereignty and innovation in [S9]; however, a contrasting view promotes cloud-native solutions with trust layers for security, as presented in [S2].
MAJOR DISCUSSION POINT
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
AGREED WITH
Karna Chokshi
DISAGREED WITH
Karna Chokshi
Argument 3
Building local data centers and offering choice of data residency are critical for trust in high‑stakes deployments
EXPLANATION
Ankush highlights the strategic importance of establishing data centres within the country to give clients the option of data residency, which bolsters trust for mission‑critical applications such as defense and finance.
EVIDENCE
He notes the growing demand for local data centres and the need for on-premise solutions for high-stakes deployments, emphasizing that sovereignty and residency choices are essential for trust [108-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The strategic importance of building local data centres and offering data-residency choices to enhance trust is highlighted in [S9].
MAJOR DISCUSSION POINT
Building local data centers and offering choice of data residency are critical for trust in high‑stakes deployments
Karna Chokshi
2 arguments · 173 words per minute · 1177 words · 407 seconds
Argument 1
Governance should be built into the core product, not a separate PDF, to enable mass adoption
EXPLANATION
Karna argues that governance cannot be a lengthy document; it must be embedded directly into the AI product so that compliance is automatic and scalable. This integration drives widespread adoption among SMEs.
EVIDENCE
She explains that governance should be part of the core product rather than a 200-page PDF, describing how their voice-agent platform incorporates guardrails at input, reasoning, tool-calling and output stages, enabling mass adoption [96-102].
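The staged-guardrail idea can be illustrated with a small sketch. This is a hypothetical pipeline, not Karna's actual voice-agent platform; the policy lists, tool names and checks are invented for illustration:

```python
# Hypothetical sketch of guardrails at each stage of an agent pipeline:
# input, reasoning/tool-calling, and output. All policies are invented.

BLOCKED_TOPICS = {"medical advice"}                 # input-stage policy
ALLOWED_TOOLS = {"lookup_balance", "send_receipt"}  # tool-calling whitelist

def input_guardrail(prompt: str) -> str:
    """Reject prompts that violate the input policy before the model sees them."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("prompt violates input policy")
    return prompt

def tool_guardrail(tool_name: str) -> str:
    """Allow only whitelisted tools at the tool-calling stage."""
    if tool_name not in ALLOWED_TOOLS:
        raise ValueError(f"tool '{tool_name}' not permitted")
    return tool_name

def output_guardrail(text: str) -> str:
    """Screen the final answer; escalate to a human reviewer if a check fails."""
    if "guaranteed returns" in text.lower():  # toy compliance check
        return "[escalated to human reviewer]"
    return text

def run_agent(prompt: str) -> str:
    """Input -> (reasoning picks a tool) -> tool call -> output, each guarded."""
    prompt = input_guardrail(prompt)
    tool = tool_guardrail("lookup_balance")     # stand-in for the reasoning step
    draft = f"Used {tool} to answer: {prompt}"  # stand-in for model output
    return output_guardrail(draft)

print(run_agent("What is my account balance?"))
```

Because every request passes through the same guarded path, the human-in-the-loop escalation becomes a first-class feature of the product rather than an external review step.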
MAJOR DISCUSSION POINT
Governance should be built into the core product, not a separate PDF, to enable mass adoption
AGREED WITH
Moderator, Kamesh Shekar
DISAGREED WITH
Ankush Sabharwal
Argument 2
Compliance can be delivered as reusable APIs with sensible defaults, turning governance into infrastructure
EXPLANATION
Karna proposes converting compliance requirements into modular APIs that can be plugged into AI solutions, with default settings that reflect industry standards. This turns governance from paperwork into a reusable infrastructure component.
EVIDENCE
She details how compliance checklists are exposed as APIs that companies can select, using the example of a mutual-fund house where each regulatory rule is an API, and stresses the importance of defaults and open-sourcing these APIs [151-169].
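The compliance-as-API idea might look roughly like the sketch below; the rule names, checks and defaults are hypothetical, not actual regulatory requirements:

```python
# Hypothetical sketch of "compliance as infrastructure": each regulatory
# rule is a small callable in a registry, a sensible default set applies
# to everyone, and a customer composes the extra rules it needs.

from typing import Callable, Optional

ComplianceRule = Callable[[str], bool]  # returns True if the text passes

RULE_REGISTRY: dict[str, ComplianceRule] = {
    "no_return_guarantees": lambda text: "guaranteed" not in text.lower(),
    "risk_disclaimer_present": lambda text: "subject to market risk" in text.lower(),
}

DEFAULT_RULES = ["no_return_guarantees"]  # sensible default every tenant gets

def check_compliance(text: str, selected: Optional[list[str]] = None) -> dict[str, bool]:
    """Run the default rules plus any the customer opted into."""
    names = set(DEFAULT_RULES) | set(selected or [])
    return {name: RULE_REGISTRY[name](text) for name in names}

# A mutual-fund house opts into the disclaimer rule on top of the defaults.
report = check_compliance(
    "Returns are subject to market risk.",
    selected=["risk_disclaimer_present"],
)
print(report)  # both rules pass for this text
```

Open-sourcing such a registry would let each jurisdiction's checklist be maintained once and reused across products, which is the "governance as infrastructure" point being made here.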
MAJOR DISCUSSION POINT
Compliance can be delivered as reusable APIs with sensible defaults, turning governance into infrastructure
AGREED WITH
Ankush Sabharwal
Kamesh Shekar
1 argument · 162 words per minute · 768 words · 283 seconds
Argument 1
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
EXPLANATION
Kamesh highlights that positioning responsible AI as a selling point adds commercial value while ensuring ethical compliance. This framing assists startups in reconciling rapid innovation with societal responsibilities.
EVIDENCE
He remarks that productisation of responsible AI from a value-proposition perspective is a key discussion point, linking responsible AI to the product’s market appeal [103-106].
MAJOR DISCUSSION POINT
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
AGREED WITH
Moderator, Karna Chokshi
DISAGREED WITH
Ankush Sabharwal, Karna Chokshi
Kazim Rizvi
1 argument · 87 words per minute · 279 words · 192 seconds
Argument 1
The RAISE Index, India’s first responsible‑AI assessment tool, is urged for adoption to embed responsible AI by design
EXPLANATION
Kazim calls on the audience to adopt the RAISE Index, positioning it as India’s inaugural tool for measuring responsible AI readiness. He stresses that using the index will help embed responsible AI principles from the design stage.
EVIDENCE
He references The Dialogue and ICOM’s development of India’s first responsible-AI assessment tool, urging participants to explore it and embed responsible AI by design [202-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RAISE Index, India’s first responsible-AI assessment tool, is presented as an iterative framework for embedding responsible AI by design, with access via a QR code for testing, as described in [S2] and [S13].
MAJOR DISCUSSION POINT
The RAISE Index, India’s first responsible‑AI assessment tool, is urged for adoption to embed responsible AI by design
AGREED WITH
Moderator
Agreements
Agreement Points
Trust is the foundational value for AI systems and must be engineered into products
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
Trust is the primary enterprise requirement; solutions must be risk‑averse and demonstrably reliable
Both speakers stress that trust is the number-one priority for AI deployments. Arundhati describes Salesforce’s Trust Layer that protects data, mitigates bias and hallucinations, while Ankush notes that enterprise clients demand near-perfect accuracy and a risk-averse approach, making trust the decisive factor for adoption [112-118][119-136][108-110][141-148].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on trust aligns with the view of AI as critical infrastructure requiring data control and secure compute to be trustworthy [S31], and with calls for traceability and “trustability” in AI systems [S36]. It also reflects broader governance agendas that prioritize trust in public services [S44] and the need for measurable standards [S42].
Benchmarks and governance should be co‑created, embedded in products, and continuously evolved
Speakers: Moderator, Karna Chokshi, Kamesh Shekar
Co‑created, living benchmarks are essential
Governance should be built into the core product, not a separate PDF, to enable mass adoption
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
The moderator argues for co-created, living safety benchmarks that evolve with AI capabilities, Karna stresses that governance must be part of the core AI product (e.g., guardrails and compliance APIs) to achieve scale, and Kamesh highlights that positioning responsible AI as a product value proposition aids startups in reconciling innovation with accountability [12][24-27][151-169][103-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Co-creation of benchmarks is echoed in discussions on designing standards for fast-moving AI ecosystems and the need for continuous measurement [S42], as well as in calls for transparent governance frameworks that evolve with technology [S44] and address practical translation challenges [S43].
The RAISE Index unifies global AI standards and should be adopted widely
Speakers: Moderator, Kazim Rizvi
The RAISE Index unifies global standards
The RAISE Index, India’s first responsible‑AI assessment tool, is urged for adoption to embed responsible AI by design
Both the moderator and Kazim describe the RAISE Index as a single, portable assessment that harmonises requirements from the EU AI Act, the NIST AI RMF, Singapore’s guidelines and the UK’s AI assurance framework, and they call on participants to adopt it to embed responsible AI from the design stage [13][39-42][202-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for a unified index mirrors international efforts to build trust through common norms, standards and law-enforcement mechanisms [S34], and to harmonise data-free-flow with trust at a global level [S35]. It also resonates with the broader agenda for global AI standards and interoperability [S51].
Data sovereignty and on‑premise/edge solutions are critical for high‑stakes AI deployments
Speakers: Ankush Sabharwal, Karna Chokshi
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
Compliance can be delivered as reusable APIs with sensible defaults, turning governance into infrastructure
Ankush emphasizes that clients (e.g., defense, finance) require absolute data control, leading to on-premise and edge AI appliances, while Karna proposes delivering compliance via modular APIs with defaults, both approaches aiming to ensure sovereign, trustworthy AI in sensitive contexts [108-110][151-169].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is supported by debates on cloud-native trust layers versus on-premise solutions that stress complete data sovereignty and edge computing needs [S30], as well as by policy papers highlighting data control and secure compute as prerequisites for trustworthy AI [S31]. Panel discussions on data sovereignty in India further underline the consensus on national-level control balanced with global collaboration [S32], and G20 statements reinforce privacy and data-protection as foundations for trust [S33].
Similar Viewpoints
Both argue that trust must be engineered into AI solutions, with concrete technical safeguards and a risk‑averse posture to satisfy enterprise and societal expectations [112-118][119-136][108-110][141-148].
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
Trust is the primary enterprise requirement; solutions must be risk‑averse and demonstrably reliable
Both stress that AI governance cannot be a static document; it must be integrated into the product lifecycle and continuously updated to stay relevant [12][24-27][151-169].
Speakers: Moderator, Karna Chokshi
Co‑created, living benchmarks are essential
Governance should be built into the core product, not a separate PDF, to enable mass adoption
Both promote the RAISE Index as a unifying, iterative framework for responsible AI that should be widely adopted across markets [13][39-42][202-208].
Speakers: Moderator, Kazim Rizvi
The RAISE Index unifies global standards
The RAISE Index, India’s first responsible‑AI assessment tool, is urged for adoption to embed responsible AI by design
Both see productisation of responsible AI and compliance as a commercial value proposition that can drive adoption among startups and SMEs [96-102][103-106].
Speakers: Karna Chokshi, Kamesh Shekar
Governance should be built into the core product, not a separate PDF, to enable mass adoption
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
Unexpected Consensus
Embedding governance and trust directly into AI products is championed both by a large multinational (Salesforce) and a startup focused on voice agents
Speakers: Arundhati Bhattacharya, Karna Chokshi
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
Governance should be built into the core product, not a separate PDF, to enable mass adoption
It is notable that a global enterprise and a nascent startup converge on the principle that responsible AI mechanisms (trust layers, guardrails, compliance APIs) must be baked into the product itself rather than treated as after-the-fact documentation, indicating a cross-scale alignment on product-centric governance [112-118][119-136][96-102].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding governance mirrors the broader push for standards-driven product design and measurable trust frameworks discussed in global AI governance forums [S42] and in initiatives to revitalize trust in public services through AI [S44].
Both a policy‑focused moderator and a data‑sovereignty‑focused entrepreneur stress the need for local, sovereign solutions to build trust
Speakers: Moderator, Ankush Sabharwal
Co‑created, living benchmarks are essential
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
While the moderator talks about living benchmarks emerging from deployment reality, Ankush highlights on-premise and edge appliances to ensure data sovereignty. The convergence on locality (benchmarks derived from real deployments and data residency) was not anticipated given their different focal points [24-27][108-110].
POLICY CONTEXT (KNOWLEDGE BASE)
The convergence of policy and entrepreneurial perspectives on local, sovereign AI solutions is reflected in the cloud-native versus on-premise debate emphasizing sovereignty [S30], the consensus on data sovereignty from the India AI Impact Summit [S32], and panel dialogues on digital sovereignty and trusted AI at scale [S46, S49].
Overall Assessment

The panel shows strong convergence on three pillars: (1) trust as the core value of AI, (2) the necessity of embedding governance and responsible‑AI safeguards directly into products and keeping them alive through co‑creation and continuous evolution, and (3) the promotion of the RAISE Index as a unifying, iterative assessment framework. These shared positions cut across enterprise, startup, and policy perspectives, indicating a high level of consensus on how to operationalise responsible AI.

High consensus – the alignment across diverse stakeholders (large corporations, startups, policy‑makers) suggests that future initiatives are likely to focus on trust‑centric product design, living benchmark ecosystems, and the adoption of the RAISE Index, which could accelerate coherent global standards and practical implementation.

Differences
Different Viewpoints
Architecture for achieving trust and data security
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
Arundhati argues that trust can be delivered through a cloud-native Trust Layer that protects data, mitigates bias and hallucinations without on-premise hardware [112-118][119-136]. Ankush counters that enterprise clients require absolute data sovereignty, favouring on-premise or edge AI appliances that keep processing within the client’s premises to guarantee security and compliance [108-110][141-148]. The two positions reflect a fundamental disagreement on whether trust is best achieved via cloud-based services or on-premise, sovereign solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
Architectural approaches are contested in literature contrasting cloud-native trust layers with on-premise sovereign appliances [S30], and in broader discussions on secure compute as a core requirement for trustworthy AI systems [S31].
How governance and compliance should be delivered in AI products
Speakers: Karna Chokshi, Ankush Sabharwal
Governance should be built into the core product, not a separate PDF, to enable mass adoption
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
Karna proposes that governance be embedded directly into AI products through built-in guardrails and exposed as reusable compliance APIs with sensible defaults, turning governance into infrastructure rather than paperwork [96-102][151-169]. Ankush focuses on meeting client trust requirements by offering sovereign, on-premise hardware solutions, implying that compliance is achieved by isolating data and processing rather than integrating governance into the software stack [108-110][141-148]. The disagreement lies in whether compliance is best realized through software-level integration or through hardware-level data control.
POLICY CONTEXT (KNOWLEDGE BASE)
The delivery of governance is debated in contexts that call for measurable, standards-based approaches [S42], highlight obstacles in translating governance principles into practice [S43], and advocate for integrated governance to boost public trust [S44].
Risk tolerance versus speed of innovation
Speakers: Ankush Sabharwal, Karna Chokshi
Trust is the primary enterprise requirement; solutions must be risk‑averse and demonstrably reliable
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
Ankush stresses a risk-averse approach, insisting on near-perfect accuracy (99.9 %) for high-stakes clients and prioritising trust over rapid innovation [108-110][141-148]. Karna, by contrast, advocates productising responsible AI as a marketable value proposition, encouraging startups to embed responsible AI features directly into their offerings to achieve both innovation and accountability [103-106][96-102]. The tension is between a cautious, accuracy-first stance and a more agile, product-centric strategy.
POLICY CONTEXT (KNOWLEDGE BASE)
Tensions between rapid AI innovation and regulatory risk management have been noted in remarks about over-regulation and its limited impact on systemic crises [S38], as well as concerns that digital innovation outpaces national strategies and policy frameworks [S41], and observations of moderate disagreement on implementation strategies [S40].
Unexpected Differences
Cloud‑native trust layer versus on‑premise sovereign AI appliances
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top value; a dedicated Trust Layer ensures data security, bias mitigation, and hallucination control
Clients demand full control over data; on‑premise and edge AI appliances provide sovereign, secure processing
Given both speakers represent leading AI organisations, one might expect convergence on a common trust architecture. Instead, they advocate opposite technical models (cloud-based trust services versus on-premise, data-sovereign hardware), revealing an unexpected split in strategic direction for enterprise AI security [112-118][119-136][108-110][141-148].
POLICY CONTEXT (KNOWLEDGE BASE)
This core disagreement is directly addressed in debates advocating cloud-native trust layers for resource efficiency [S30] versus calls for complete data sovereignty via on-premise/edge solutions [S30, S49], and in panel discussions on building trusted AI at scale that weigh both approaches [S46].
Overall Assessment

The panel largely concurs on the importance of responsible, trustworthy AI and the need for collaborative standards. Disagreements cluster around implementation pathways: cloud‑native trust layers versus on‑premise sovereign solutions; software‑integrated governance APIs versus hardware‑centric data control; and risk‑averse accuracy‑first approaches versus rapid, product‑centric innovation.

Moderate – while foundational goals are shared, the divergent technical strategies could hinder the formation of unified standards unless a flexible framework accommodates both cloud and on‑premise models. The implications are a need for hybrid benchmark designs that recognize multiple trust architectures and for policy that allows both approaches to coexist.

Partial Agreements
Both emphasize the need for collaborative, globally coordinated mechanisms (a global compact or co‑created benchmarks) to ensure AI is used responsibly, though the Moderator focuses on benchmark creation while Arundhati stresses institutional governance structures [12][65-68].
Speakers: Arundhati Bhattacharya, Moderator
Salesforce created an “Office for Humane and Ethical Use of Technology” and calls for a global compact
Co‑created, living benchmarks are essential
Both agree that responsible AI must be integrated into the product itself to drive adoption, differing only in the specific mechanisms (guardrails/APIs vs value‑proposition framing) [96-102][103-106].
Speakers: Karna Chokshi, Kamesh Shekar
Governance should be built into the core product, not a separate PDF, to enable mass adoption
Embedding responsible AI as a product value proposition helps startups balance innovation with accountability
Takeaways
Key takeaways
Responsible AI requires co‑created, living benchmarks rather than static checklists.
The RAISE Index, developed by ICOM/Dialogue, unifies multiple global AI regulatory frameworks into a single, portable assessment tool.
Benchmarks must be continuously updated to keep pace with rapid AI capability evolution.
Corporate trust is paramount; Salesforce’s “Office for Humane and Ethical Use of Technology” and its Trust Layer illustrate how large enterprises embed ethics, bias mitigation, and hallucination control.
Start‑ups and MSMEs need governance built directly into their products (e.g., guardrails at prompt, tool‑calling, and output stages) to achieve mass adoption.
Treating compliance as reusable APIs with sensible defaults can turn governance into infrastructure rather than paperwork.
Data sovereignty and on‑premise/edge AI appliances are critical for high‑stakes sectors (defence, finance) to maintain control and trust.
India’s multilingual, large‑scale environment provides a competitive advantage for shaping inclusive, responsible‑AI standards globally.
A global compact is essential to prevent misuse of AI and to align stakeholders across borders.
Resolutions and action items
Release the first edition of the RAISE Index and make it publicly accessible via the QR code shown in the presentation.
Encourage organizations to pilot the RAISE Index against their AI systems and provide feedback for iterative improvement.
Promote the concept of embedding governance and compliance as APIs within AI products, with an invitation to open‑source such APIs.
Advocate for the adoption of a global compact on responsible AI, leveraging India’s experience and the RAISE Index as a reference framework.
Continue development of the Telangana Data Exchange sandbox to allow startups to test AI solutions on real government data sets.
Unresolved issues
Specific mechanisms and governance structures needed to establish a binding global compact on AI ethics remain undefined.
How to standardize and certify compliance APIs across different industries and jurisdictions was discussed but not resolved.
The optimal balance between using large language models (LLMs) for rapid innovation versus transitioning to smaller, domain‑specific models (SLMs) lacks a concrete roadmap.
Details on the operational process for continuous benchmark evolution (e.g., frequency of updates, stakeholder governance) were not finalized.
Methods for ensuring data privacy and security while still leveraging cloud‑native AI services, especially for enterprises that prefer on‑premise solutions, were left open.
Suggested compromises
Make data usage for model training an optional add‑on rather than a default, respecting privacy while still enabling innovation.
Adopt a phased approach: start with LLMs for speed of value creation, then migrate to SLMs where latency, cost, or data sensitivity demand it.
Embed governance as part of the core product (guardrails at prompt, tool‑calling, and output) rather than as a separate compliance document, balancing regulatory needs with product agility.
Provide default compliance settings via APIs, allowing customers to opt in to stricter controls as needed, thus reconciling mass adoption with regulatory rigor.
Thought Provoking Comments
Safety benchmarks must emerge from deployment reality, not just research labs. The most effective ones come from institutions building, deploying, and maintaining AI at scale.
Highlights the gap between theoretical safety standards and practical, real‑world validation, urging a shift toward evidence‑based benchmarks that reflect operational complexities.
Set the agenda for the panel by framing the need for industry‑grounded metrics, prompting later speakers (e.g., Karna and Ankush) to discuss concrete ways to embed governance and trust directly into products and infrastructure.
Speaker: Moderator
We set up an Office for the Humane and Ethical Use of Technology in 2014, which reviews every product and process before it reaches the market.
Demonstrates a proactive, organization‑wide commitment to ethics that predates many current AI governance initiatives, offering a concrete model for other enterprises.
Introduced the concept of internal ethical oversight, leading the discussion toward institutional mechanisms (e.g., global compact, trust layers) and influencing Karna’s emphasis on embedding governance into the product itself.
Speaker: Arundhati Bhattacharya
We need a global compact with transparent information exchange; no single country or organization can stop bad actors alone.
Calls for coordinated international action, moving the conversation from isolated corporate policies to a broader, collaborative regulatory ecosystem.
Shifted the tone from company‑centric solutions to a call for worldwide standards, which the Moderator later linked to the RAISE Index that aims to harmonize multiple global frameworks.
Speaker: Arundhati Bhattacharya
Governance should be part of the core product – guardrails at the prompt, tool‑calling, and output stages – and human‑in‑the‑loop is a first‑class feature, not a failure point.
Proposes a practical, product‑centric approach to responsible AI that makes compliance automatic and scalable, addressing the pain point of bulky PDFs and manual checklists.
Redirected the discussion toward implementation tactics, inspiring Ankush to talk about trust through data sovereignty and prompting further dialogue on making compliance an API (later echoed by Karna again).
Speaker: Karna Chokshi
Our clients demand full data sovereignty; we therefore deliver on‑premise AI appliances (Vada GPT) that keep processing and data inside the customer’s premises.
Introduces the concept that control over data location and processing is a core trust factor, especially for high‑stakes public and defense use‑cases.
Added a new dimension to the trust discussion, moving it from policy to technical architecture, and reinforced the earlier point about living benchmarks needing to adapt to such sovereign requirements.
Speaker: Ankush Sabharwal
We should convert compliance into reusable APIs, making compliance an infrastructure layer rather than paperwork; and the default for data use should be protective, with users opting in to sharing rather than being enrolled by default.
Offers a concrete engineering solution to the compliance bottleneck, linking regulatory needs with software development practices and emphasizing user‑centric defaults.
Deepened the technical conversation, providing a bridge between high‑level governance ideas and actionable developer tools, and set the stage for the final Q&A on small vs. large language models.
Speaker: Karna Chokshi
Trust is our number one value at Salesforce; we built a TrustLayer that prevents data leakage, bias, and hallucination, ensuring results are both responsible and trustworthy.
Articulates a concrete, layered security and quality framework that operationalizes the abstract notion of ‘trust’, addressing practical concerns like hallucination in large models.
Reinforced the earlier themes of trust and responsibility, providing a tangible example that resonated with Ankush’s emphasis on risk‑averse deployments and with the audience’s concerns about model reliability.
Speaker: Arundhati Bhattacharya
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the conversation from abstract principles to concrete, implementable solutions. The Moderator’s opening call for real‑world benchmarks set the stage, while Arundhati’s early description of an internal ethics office and the call for a global compact broadened the scope to international cooperation. Karna’s product‑centric view of embedding governance directly into AI systems and converting compliance into APIs offered a practical pathway for mass adoption. Ankush’s focus on data sovereignty introduced a technical trust mechanism that complemented the earlier governance ideas. Together, these comments created a progressive narrative: starting with the need for grounded standards, moving through organizational and global frameworks, and culminating in actionable engineering approaches that address trust, compliance, and scalability. This sequence steered the panel toward actionable outcomes, such as the promotion of the RAISE Index and the emphasis on building living, adaptable governance infrastructures.

Follow-up Questions
How can a global compact be created to prevent misuse of AI by bad actors?
Arundhati emphasized the need for a worldwide agreement to stop malicious use of AI, indicating that mechanisms and leadership for such a compact are still undefined and require further exploration.
Speaker: Arundhati Bhattacharya
How should safety benchmarks be continuously updated to keep pace with rapid AI capability evolution?
The moderator highlighted that static benchmarks become obsolete quickly, suggesting the need for research into processes and institutions that can maintain living, evolving safety standards.
Speaker: Moderator
How can the RAISE Index be adapted and adopted across different jurisdictions and industries?
Arundhati noted that the RAISE methodology is open and adaptable, but practical guidance for localization and cross‑jurisdictional adoption remains an open area.
Speaker: Arundhati Bhattacharya
What mechanisms are needed to turn compliance requirements into reusable APIs for AI systems?
Karna proposed converting compliance checklists into APIs to simplify integration, indicating a need for standards, open‑source implementations, and validation frameworks.
Speaker: Karna Chokshi
What should be the default policy regarding the use of customer data for training LLMs versus an opt‑in approach?
Karna argued that data usage should be optional rather than default, raising the question of optimal default settings to balance innovation and privacy.
Speaker: Karna Chokshi
How can explainability be integrated as a core output of AI APIs rather than an after‑the‑fact debugging step?
Karna stressed the importance of built‑in explainability, suggesting research into API designs that automatically provide decision rationale.
Speaker: Karna Chokshi
What are the best practices for building a trust layer that ensures data security, bias mitigation, and hallucination control in large‑scale AI deployments?
Both speakers discussed trust mechanisms (e.g., TrustLayer) but acknowledged ongoing challenges, indicating a need for systematic best‑practice frameworks.
Speaker: Arundhati Bhattacharya; Ankush Sabharwal
What are the trade‑offs between on‑premise, edge, and cloud AI deployments for sovereign data requirements, especially in high‑security sectors?
Ankush highlighted client demand for data sovereignty and on‑premise solutions, prompting further study of performance, cost, and security implications of different deployment models.
Speaker: Ankush Sabharwal
What is the long‑term profitability and business model for small language models (SLMs) compared to large language models (LLMs) for enterprises?
An audience question raised the strategic decision between using SLMs for cost/latency benefits versus LLMs for capability, a topic that remains open for deeper economic analysis.
Speaker: Audience member (unidentified)
How effective is the Telangana Data Exchange sandbox in validating AI benchmarks, and what metrics can assess its impact?
The moderator mentioned the sandbox as a validation tool but did not provide evidence of its efficacy, suggesting research into its outcomes and measurable impact.
Speaker: Moderator
How can human‑in‑the‑loop be designed as a first‑class feature without becoming a failure point?
Karna advocated for human‑in‑the‑loop as a strength, yet practical design patterns and failure‑mode analyses are needed to operationalize this principle.
Speaker: Karna Chokshi
What processes are needed to ensure continuous feedback loops for responsible AI practice across organizations?
The moderator called for institutionalized continuous learning, indicating a gap in defined feedback mechanisms and governance structures for ongoing AI risk management.
Speaker: Moderator

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.