Building the Next Wave of AI: Responsible Frameworks & Standards
20 Feb 2026 13:00h - 14:00h
Session at a glance
Summary
This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with trust, accountability, and ethical considerations. The opening speaker introduced India’s first responsible AI assessment tool, the RAISE Index, developed by ICOM and The Dialogue, which quantifies AI impact on safety and responsibility metrics across development and deployment phases. The framework emphasizes that safety benchmarks must emerge from real-world deployment rather than isolated research labs, and should be co-created with industry and academia.
Arundhati Bhattacharya from Salesforce highlighted how her company established an office for humane and ethical use of technology in 2014, reviewing every product before market release. She emphasized the need for a global compact to prevent bad actors from misusing AI while allowing beneficial applications in healthcare, education, and research to flourish. Bhattacharya stressed that trust is Salesforce’s primary value, requiring robust trust layers to prevent data toxicity, bias, and hallucinations.
Startup representatives Karna Chokshi and Ankush Sabharwal discussed operational challenges in balancing responsibility with innovation. They advocated for productizing governance as core features rather than afterthoughts, implementing “governance as product” approaches with built-in guardrails throughout the AI lifecycle. Both emphasized the importance of human-in-the-loop systems and composite AI approaches that combine traditional NLP with generative AI only when necessary.
The discussion concluded with calls for making compliance API-driven and treating responsible AI practices as default infrastructure rather than optional add-ons, enabling broader adoption while maintaining safety standards.
Keypoints
Major Discussion Points:
– Development of India’s RAISE Index for Responsible AI Assessment: The opening speaker introduced a comprehensive framework co-created by ICOM and The Dialogue to quantify AI safety and responsibility during development and deployment. This index harmonizes global standards (EU AI Act, NIST, Singapore guidelines, UK AI Assurance) into a single, portable assessment tool that evolves with technology.
– Balancing Innovation Speed with Trust and Accountability: Panelists discussed the challenge of maintaining rapid AI innovation while ensuring customer protection and ethical use. Salesforce’s approach of establishing an “office for the humane and ethical use of technology” in 2014 was highlighted as an example of embedding ethics into organizational structure from the beginning.
– Productization of Responsible AI Governance: The conversation emphasized making AI governance a core product feature rather than external compliance paperwork. This includes embedding guardrails throughout the AI lifecycle, implementing human-in-the-loop as a first-class feature, and converting compliance requirements into APIs for easier adoption by startups and MSMEs.
– Trust Through Data Sovereignty and Control: Significant discussion centered on enterprise needs for complete control over AI systems, particularly in sensitive sectors like defense. This includes on-premise solutions, edge AI deployment, and ensuring data never leaves controlled environments while maintaining full functionality.
– Hybrid AI Approaches for Reliability: Panelists advocated for composite AI solutions that combine traditional NLP with generative AI selectively, using deterministic methods for routine tasks (80-90% of interactions) and generative AI only when necessary, prioritizing accuracy and purpose-driven innovation over technology adoption for its own sake.
Overall Purpose:
The discussion aimed to explore practical approaches for implementing responsible AI practices while maintaining innovation momentum, with a focus on India’s unique position in shaping global AI governance standards through real-world deployment experience in diverse, resource-constrained environments.
Overall Tone:
The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authoritative presentation of frameworks and evolved into a pragmatic exchange of real-world experiences. The tone was professional yet accessible, with speakers building upon each other’s points constructively. There was a sense of urgency about getting responsible AI practices right, but balanced with optimism about India’s potential to lead global standards through practical innovation rather than theoretical frameworks.
Speakers
Speakers from the provided list:
– Moderator (opening speaker) – Role: Set the session context; associated with ICOM (AI innovation entity from Telangana); expertise in AI policy, innovation hubs, and responsible AI frameworks
– Kamesh Shekar – Role: Panel moderator/host for the responsible AI discussion session
– Arundhati Bhattacharya – Role: Global enterprise leader at Salesforce; expertise in AI ethics, enterprise AI implementation, and technology governance
– Karna Chokshi – Role: Startup founder/leader; expertise in voice AI agents, enterprise AI solutions, and AI governance for MSMEs; builds solutions for hiring platforms and customer service
– Ankush Sabharwal – Role: AI systems builder/entrepreneur; expertise in sovereign AI, on-premise AI solutions, defense AI applications, and enterprise AI security
– Kazim Rizvi – Role: Founding Director of The Dialogue; expertise in AI policy, responsible AI frameworks, and policy research
Additional speakers:
– None identified beyond the provided speakers list
Full session report
This comprehensive panel discussion served as the closing session of the Global AI Summit, bringing together enterprise leaders, startup founders, and policy experts to explore practical approaches for implementing responsible AI governance while maintaining innovation momentum. Moderated by Kamesh Shekar, the panel featured Arundhati Bhattacharya from Salesforce, Karna Chokshi from a startup platform serving 3 lakh companies, and Ankush Sabharwal, whose company works with major clients including IRCTC, LIC, NPCI, and Army Defense.
Setting the Context: India’s RAISE Index Framework
The session’s opening speaker introduced India’s RAISE Index, developed collaboratively by ICOM (Innovation Centre of Excellence for AI and Machine Learning) and The Dialogue over the past year and a half. The framework is presented as the first quantitative tool for assessing AI safety and responsibility across both development and deployment phases, addressing a critical gap in the global AI governance landscape.
He emphasized that this index harmonizes requirements from leading international frameworks, including the EU AI Act, NIST AI Risk Management Framework, Singapore’s Model AI Governance guidelines, and the UK’s AI Assurance standards, into a single, portable assessment tool. The framework is designed as living infrastructure rather than a static compliance checklist, acknowledging that AI capabilities evolve faster than traditional regulatory cycles.
He also highlighted India’s unique competitive advantage in shaping global AI standards, noting that while most international frameworks are designed for high-resource, homogeneous environments, India operates within constraints that mirror those of most developing nations: multilingual populations, infrastructure limitations, and the imperative to serve both economic growth and social inclusion simultaneously. This positioning allows India to contribute frameworks validated under real-world complexity rather than idealized laboratory conditions.
Enterprise Perspectives: Trust as a Core Value
Arundhati Bhattacharya from Salesforce provided insights into how established technology companies approach responsible AI implementation. She revealed that Salesforce established an “office for the humane and ethical use of technology” in 2014, coinciding with the company’s AI journey. This office reviews every product and process before market release, demonstrating proactive organizational commitment to ethical AI development.
Bhattacharya explained that trust ranks first among Salesforce’s five core values: trust, customer success, innovation, equality, and sustainability. Their trust layer addresses multiple dimensions beyond data security, including toxicity prevention, bias mitigation, and hallucination reduction. She noted that as AI systems handle larger datasets, the tendency to hallucinate increases, making comprehensive trust measures increasingly critical.
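Bhattacharya described the trust layer functionally rather than technically. As a rough illustration of the pattern (not Salesforce’s implementation; mask_pii and is_toxic below are hypothetical placeholders for production-grade classifiers), a trust layer can be pictured as pre- and post-checks wrapped around every model call:

```python
import re

def mask_pii(text: str) -> str:
    # Redact email addresses before the prompt leaves the trust boundary.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[redacted]", text)

def is_toxic(text: str) -> bool:
    # Stand-in for a real toxicity/bias classifier; a word list is only illustrative.
    banned = {"threat", "bribe"}
    return any(word in text.lower() for word in banned)

def trusted_generate(prompt: str, generate) -> str:
    # Pre-generation checks: mask PII and screen the prompt.
    prompt = mask_pii(prompt)
    if is_toxic(prompt):
        return "Request declined by input guardrail."
    answer = generate(prompt)  # any model call supplied by the caller
    # Post-generation checks: screen the draft before the user ever sees it.
    if is_toxic(answer):
        return "Response withheld by output guardrail."
    return answer
```

A real trust layer would add grounding checks against retrieved sources to catch hallucinations; the point of the sketch is only the placement of the checks on both sides of the model call.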
Addressing practical deployment challenges, Bhattacharya argued that while complete on-premise deployment might seem more secure, the computational and storage requirements of effective AI systems push companies toward capital-light, cloud-based models, particularly given India’s limited pools of capital for hardware investment. The solution lies in developing sophisticated trust layers that ensure logical security while leveraging shared computational resources.
Perhaps most compellingly, Bhattacharya illustrated how AI risks now penetrate intimate human relationships, describing how families require “safe words” to verify identity due to sophisticated deepfake technologies. This vivid example transformed abstract AI risks into tangible concerns affecting fundamental human trust, emphasizing why global cooperation is essential for addressing AI misuse.
Startup Innovation: Governance as Product
Karna Chokshi introduced the concept of “governance as product,” arguing that compliance should be a core product feature rather than external oversight. His hiring platform serves 3 lakh companies, with 30,000 companies deploying voice interview agents on their own, demonstrating responsible AI implementation at scale.
Chokshi’s approach involves embedding guardrails throughout the entire AI lifecycle: during input processing, reasoning phases, tool calling, and output generation. He emphasized that “human-in-the-loop is a first-class feature, not a failure point,” challenging conventional assumptions about AI autonomy. Rather than viewing human intervention as a system limitation, this approach designs graduated autonomy levels that can transition seamlessly between fully autonomous, assisted, and human-controlled modes, as the sketch below illustrates.
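Chokshi’s lifecycle guardrails and graduated autonomy can be pictured as checkpoints around each phase of an agent turn. The sketch below is one possible shape of that design, assuming hypothetical guard predicates, a reason/call_tool pair supplied by the platform, and an escalate handler; it is not his platform’s actual code:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"   # agent acts on its own
    ASSISTED = "assisted"       # agent drafts, a human approves
    HUMAN = "human"             # full handoff to a person

def escalate(mode: Mode, context):
    # Stepping down the autonomy ladder is a designed outcome, not an error.
    return {"mode": mode.value, "handoff_context": context}

def agent_turn(user_input, reason, call_tool, guards):
    if not guards["input"](user_input):        # guardrail 1: input
        return escalate(Mode.HUMAN, user_input)
    plan = reason(user_input)                  # model proposes tool calls and a reply
    if not guards["plan"](plan):               # guardrail 2: reasoning
        return escalate(Mode.ASSISTED, plan)
    for action in plan["tool_calls"]:          # e.g. a CRM write, a customer callback
        if not guards["tool"](action):         # guardrail 3: tool calling
            return escalate(Mode.ASSISTED, action)
        call_tool(action)
    output = plan["draft_reply"]
    if not guards["output"](output):           # guardrail 4: output
        return escalate(Mode.HUMAN, output)
    return output
```

Because each guard is an ordinary function, the same checks can be unit-tested offline before they gate live traffic.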
A key innovation Chokshi proposed is converting compliance requirements into APIs, making regulatory adherence modular, testable, and shareable across the ecosystem. He provided a concrete example of a mutual fund house deployment where compliance APIs ensure regulatory adherence while maintaining system flexibility. This approach could significantly reduce the compliance burden on startups and MSMEs while ensuring consistent standards.
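As a hedged sketch of what compliance-as-an-API could look like, the snippet below registers named rules as small, independently testable functions; mf_risk_disclosure is a hypothetical illustration, not an actual SEBI-mandated check:

```python
# Compliance rules as shareable, independently testable functions.
COMPLIANCE_REGISTRY = {}

def compliance_rule(name):
    def register(fn):
        COMPLIANCE_REGISTRY[name] = fn
        return fn
    return register

@compliance_rule("mf_risk_disclosure")   # hypothetical mutual-fund rule
def requires_risk_disclosure(utterance: str) -> bool:
    # Passes only if the mandated risk phrase is present in the reply.
    return "subject to market risks" in utterance.lower()

def violations(utterance: str, enabled: list[str]) -> list[str]:
    # Names of the enabled rules the utterance fails; non-empty blocks the reply.
    return [name for name in enabled if not COMPLIANCE_REGISTRY[name](utterance)]

# Usage sketch: each deployment ticks only the checkboxes it needs.
# if violations(agent_reply, enabled=["mf_risk_disclosure"]):
#     rewrite the reply or route it for human review instead of sending it.
```

Open-sourcing such rule functions, as Chokshi suggests, would let the ecosystem converge on shared, pressure-tested interpretations of each guideline.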
Purpose-Led AI Development and Data Sovereignty
Ankush Sabharwal emphasized the importance of composite AI approaches, revealing that “80 to 90 percent of our interactions happen on classic NLP without Gen AI.” This strategic technology selection prioritizes accuracy and purpose over technological novelty, using generative AI only when complex reasoning is specifically required. His principle of “begin with the end in mind” provides a practical framework for responsible AI deployment.
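The composite pattern Sabharwal describes amounts to a deterministic router placed in front of the generative model: classify the intent first, answer from backend APIs when the intent is known, and fall back to retrieval-augmented generation only for the long tail. A minimal sketch, where classify_intent stands in for a trained NLU model and the intent list and threshold are illustrative:

```python
KNOWN_INTENTS = {"book", "cancel", "change boarding", "refund status"}

def classify_intent(utterance: str):
    # Stand-in for a classic NLU intent classifier returning (intent, confidence).
    text = utterance.lower()
    for intent in KNOWN_INTENTS:
        if intent in text:
            return intent, 0.95
    return None, 0.0

def handle(utterance: str, call_backend_api, rag_answer):
    intent, confidence = classify_intent(utterance)
    if intent is not None and confidence >= 0.8:
        # The bulk of traffic: deterministic path, no generative model involved.
        return call_backend_api(intent, utterance)
    # Long tail (e.g. "can I carry three pets on the train?"):
    # retrieval-augmented generation over policy documents.
    return rag_answer(utterance)
```

The router keeps the high-volume paths cheap and auditable while reserving probabilistic generation for questions no template covers.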
Working with high-stakes clients like IRCTC (serving 4 million users with only “eight to ten intents”), LIC, NPCI, and Army Defense, Sabharwal detailed varying accuracy expectations across sectors. While some applications can operate effectively acknowledging AI’s inherent probabilistic nature, sectors like defense and financial services demand 99.9% accuracy, requiring more conservative approaches.
Addressing enterprise and government requirements for complete data sovereignty, Sabharwal detailed the development of edge AI solutions, including a collaboration with NVIDIA to create a desktop AI appliance capable of processing “around one petaflop of floating-point operations,” with substantial onboard storage. Their own model uses “half a billion parameters,” while the appliance can run models of up to “one trillion parameters.” This on-premise solution ensures that sensitive data never leaves controlled environments while maintaining full AI functionality.
Balancing Innovation with Responsibility
The panel addressed the fundamental tension between AI’s probabilistic nature and enterprise requirements for deterministic compliance. The discussion revealed different approaches to achieving trust through control—while Bhattacharya advocated for cloud-native solutions with sophisticated trust layers, Sabharwal emphasized complete data sovereignty requiring on-premise deployment where no external party can access, audit, or detect data processing.
During the Q&A session, the panelists discussed the strategic considerations between large language models versus small language models. Chokshi advocated for beginning with flexible LLMs to enable rapid innovation and learning, then transitioning to more specialized SLMs as use cases become clearer. This approach balances experimentation needs with long-term efficiency and cost considerations.
The conversation emphasized the importance of explainability as a core system output rather than a post-hoc investigation tool, enabling partners and stakeholders to understand AI decision-making processes in real-time and facilitating collaborative decision-making through transparency.
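Treating explainability as a core output means the decision payload ships with its own rationale rather than being reconstructed from logs after an incident. One hedged sketch of such a schema, built around a hypothetical candidate-screening rule:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedDecision:
    decision: str                 # what the system did or recommended
    rationale: str                # plain-language reason, produced with the decision
    evidence: list[str] = field(default_factory=list)  # rule IDs or source snippets
    confidence: float = 0.0       # calibrated score, if one is available

def screen_candidate(candidate: dict) -> ExplainedDecision:
    # Hypothetical rule: the explanation is assembled at decision time,
    # not reverse-engineered from logs after something breaks.
    ok = candidate.get("years_experience", 0) >= 2
    return ExplainedDecision(
        decision="advance" if ok else "human_review",
        rationale="Meets the 2-year experience threshold." if ok
        else "Below the experience threshold; routed to a human reviewer.",
        evidence=["rule:min_experience>=2"],
        confidence=0.9 if ok else 0.5,
    )
```

Because the rationale and evidence travel with every response, partners can contest or approve decisions in real time rather than after the fact.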
Global Cooperation and Implementation
The panel concluded with strong emphasis on the need for global cooperation in responsible AI development. Bhattacharya’s call for a “global compact” to address AI misuse while enabling beneficial applications resonated throughout the discussion, acknowledging that no single country or organization can address these challenges independently.
The discussion highlighted the importance of making responsible AI practices default settings rather than optional add-ons, recognizing that most organizations will adopt whatever configuration is provided initially. This insight has significant implications for AI platform design and regulatory approaches.
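That default-versus-optional observation translates directly into configuration design: ship the safe configuration as the zero-argument default, and make the risky option the one that requires an explicit, auditable flag. A small sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    # Responsible settings cost zero keystrokes; risky ones need an explicit flag.
    use_customer_data_for_training: bool = False   # opt-in, never opt-out
    guardrails_enabled: bool = True
    human_handoff_enabled: bool = True
    log_explanations: bool = True

safe = DeploymentConfig()  # the typical integrator inherits the safe posture
risky = DeploymentConfig(use_customer_data_for_training=True)  # deliberate choice
```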
Key Takeaways and Future Directions
The panel demonstrated a maturing field where practitioners are converging on core principles while offering complementary implementation approaches. The emphasis on productizing governance, embedding human oversight, and maintaining purpose-driven technology selection provides a practical pathway for balancing innovation with responsibility.
As Kazim Rizvi, founding director of The Dialogue, noted in his closing remarks, the RAISE Index represents India’s contribution to global responsible AI efforts, providing an open, adaptable methodology that other jurisdictions can adopt and modify. The framework’s iterative design ensures continuous evolution through pilot phases and stakeholder consultation, maintaining relevance as AI technology advances.
The session concluded with encouragement for attendees to access and test the RAISE Index framework, emphasizing that responsible AI is not merely a technical challenge but requires comprehensive organizational commitment, appropriate regulatory frameworks, and international cooperation. The frameworks and approaches discussed provide a foundation for ensuring AI development serves humanity’s best interests while maintaining the innovation momentum necessary for addressing global challenges.
Session transcript
Thank you. Good afternoon, everyone. I know it’s Friday afternoon, almost end of a fantastic Global AI Summit. And good afternoon to my fellow distinguished panelists. I think the topic of this particular panel, it’s probably the apt one to wrap up this Global AI Summit because the most important arc in this innovation, the innovation of AI is making sure… the impact of the AI is safe, responsible, ethical, inclusive, and explainable, right? And it has to be holistic at the end of the day. I think there’s a lot that we have learned over the course of this week, listening to a number of different thought leaders talking about how AI could be channeled in a manner where it delivers the intended impact without getting into unintended consequences.
I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing this safe and ethical AI, right? Starting with: benchmarks must emerge from deployment reality. And not just research labs. Safety benchmarks fail when developed in isolation; the most effective ones come from institutions building, deploying, and maintaining AI at scale, right? Government innovation hubs sit at this critical intersection between policy intent and operational reality, surfacing failure modes and trust gaps. The second most important element in this framework is to ensure these safety benchmarks are co-created with the industry and with academia and the research institutions. ICOM and The Dialogue developed a one-of-its-kind index called the RAISE Index over the last year and a half that we have been working together, which is the first of its kind in quantifying the impact of AI during development and deployment on the safety and responsibility matrix.
And you can see up here on the screen the QR code; you can scan it and you’ll get access to the entire framework, and you could even test your respective AI solutions or AI systems that you might be developing, or already have in production, against it, and see what the index comes back and tells you. The third is making benchmarks practical. In Telangana, we have launched the Telangana Data Exchange, a first-of-its-kind digital public infrastructure within the realm of AI. It provides startups access to government data sets in a sandboxed environment. This is where benchmarks get validated and time tested. Startups can test their AI systems against actual data, actual use cases, actual constraints before deployment.
The fourth is: we all understand and recognize that startups move at a rapid pace. So when startups are deploying AI solutions, there are a number of risks that emerge. And we are providing this index, again, as part and parcel of the whole startup ecosystem that we are building, and as a result, we expect them to detect any early warning signs within this framework and continue to improve. The last is benchmarks and frameworks must be living infrastructure, not static checklists, right? AI capabilities evolve faster than regulatory cycles; static benchmarks become obsolete. Hubs must institutionalize continuous benchmark evolution. This RAISE Index methodology includes phase-based assessment, ensuring benchmarks remain relevant to company maturity stages. So if you take this broader framework of making sure AI systems are safe and responsible and ethical, the question comes down to how is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI.
What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed for high-resource, homogenous environments. India operates in the context that most of the developing world shares: multilingual populations, infrastructure constraints, massive scale, and the imperative to serve both economic growth and social inclusion. This is not a limitation. This is a significant competitive advantage that India has in shaping the global standards. Number two is demonstrating responsible AI in high-stakes, high-scale deployments, which we are offering. ICOM, the first-of-its-kind AI innovation entity out of Telangana, with its research and co-innovation pillar, helps build AI solutions for healthcare, agriculture, climate, and financial inclusion, where failures have immediate societal impact.
When we document how these systems are designed, tested, and governed, we contribute frameworks that have been validated under real-world complexity, not just lab conditions. This particular RAISE Index is India’s contribution to global standardization. You will notice, the more you dig into this index, that it harmonizes requirements across leading global frameworks, be it the EU AI Act, the NIST AI Risk Management Framework, the Singapore MAS Guidelines, or the UK AI Assurance. We brought it all together into a single portable assessment. Organizations operating in multiple markets can use one assessment to evaluate alignment with diverse regulatory expectations. The methodology is open and adaptable for other jurisdictions. And I would leave you with the last but very important point of institutionalized continuous learning in responsible AI practice, right?
Most frameworks are static standards. ICOM believes in creating systems with ongoing feedback: tracking system performance over time, updating benchmarks as models evolve, incorporating new research. And the RAISE Index is designed as an iterative framework. What we are releasing today is the first edition, and it will continue to evolve through pilot phases, stakeholder consultation; it’s not a one-time standard. We all know AI is an evolving technology and this has to evolve. But our intent and goal and hope is that this would keep pace with the pace at which the technology is moving, and that is very critical. And that’s a common responsibility that we all hold, be it technologists, be it policy makers, be it think tanks, or be it researchers or startups. We all have to come together as an ecosystem to ensure the technology that we put out there with the intent of doing benefit for society does exactly that, without any unintended consequences. So I think we are up for a fantastic panel, and you will absolutely enjoy the conversation that is going to be held now.
Thank you.
Thank you so much, sir, for setting the context. That sets the perfect context for us to pick up the conversation from there, which is what we are discussing today: reimagining responsible AI. What we are trying to do in this panel is to understand what shifts are needed in responsibility as innovation evolves, and how we can move the needle forward on responsibility. I would like to start with Ms. Arundhati Bhattacharya here. Thank you so much, ma’am, for taking the time. It’s absolutely a pleasure to host you.
And the first question is to you, ma’am: as a global enterprise leader, how do you see the balance between rapid AI innovation and the need for trust, accountability, and customer protection? How do you see that balance?
So, you know, in the company that I work for, Salesforce, we started our AI journey in 2014. And in 2014, we also set up within the company an office for the humane and ethical use of technology. So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market. Because we realized very early on that while technology and AI could give us many advantages, it would also be used by bad actors for doing things which it was never intended for. And that is true of every single thing that, you know, we come up with. Whether it be a new medicine, whether it be nuclear energy, whether it be anything that we come up with, it can have its good use.
It can also be used for the wrong reasons. And that is something that we must come together in a global compact in order to defeat and in order to stop. Again, this has to be a global compact. It’s not something that one country or one organization or one effort can probably ensure. Because unless and until we have sufficient transparent information exchange, unless and until we all say together that this is not something that we will allow, it would be very difficult for us to stop the bad actors. It’s not easy. Today you see the kind of deep fakes that are there, stuff that we never thought of in our childhood, families having safe words amongst themselves.
It’s not something that was there at all. But today, in fact, I was asking a colleague from the US. And he was saying, yes, we do have a safe word in the family because we don’t know when somebody is going to get a call that’s going to sound like me. And it’s going to say that I’m in the hospital and I need so much money. Please come and get me. And it might be somebody entirely different trying to scam you. So we do have safe words. Now, imagine the extent to which we have gone, where we are having to teach children that these are the ways that you can be sure and you can be safe.
Now, this is not something that we want, because obviously, AI is also something that can speed up things like medical research. It can actually speed up skilling. It can speed up many things which enable us and empower us to come up to our potential. So a technology this powerful should not and cannot be stopped because bad actors are misusing it. And therefore, it’s up to all of us to come up with a framework. A global compact, again, as I say, a framework that will enable us to ensure that all of us together are trying to stop the bad actors and ensuring that this is being used for the good of humanity.
Excellent point, ma’am. I think a very interesting aspect is your starting remark about putting together an office on the humane aspect, which actually shows that it’s not only the technical side that can solve the problem when we talk about responsibility; it’s also organizational ethics and organizational ethos which brings that kind of essence to it. And a great submission on the global compact; I think that’s something we should all strive towards, and I hope the summit will kickstart that process for us as well. I’ll come back to you, ma’am. I know you have a hard stop, but I will come back to you for one more question. But now I would like to go to Karna here.
Thank you so much, Karna, for joining. We did hear from ma’am about what can actually be done and how larger organizations are looking at this. But I would like to pick your brain: as a startup and an MSME, what are the operational challenges that you face when trying to balance this equation of responsibility versus innovation? And you are also looking at it with foresight at new technologies. So any thoughts there would be great.
So what we do is make AI technology, which comes with a lot of power, a bit more enterprise-software-ish in terms of compliance, governance, and observability. That’s what we do. The way we believe it: if governance looks like a 200-page PDF that all companies and MSMEs have to figure out, we will see them struggle. Our idea is that it should be part of the core product; as a lot of us are building solutions for customers, governance should be the core product. We believe in “governance as product”, and that allows mass adoption. And the way we do it: with governance as product, just writing it into the prompt is only the first line of defense.
It should be core through the entire agentic lifecycle. Which means: at the time you’re giving it an input and it’s reasoning, there are guardrails it checks. Before it does some tool calling, which is like, “hey, I’m going to write to the CRM” or “I’m going to talk to one of your customers on this topic,” there are again guardrails. And even when you do an output, there needs to be a guardrail. And the guardrails should be part of the core product; that is important to drive mass adoption. And secondly, the way we think, knowing we build voice agents for companies: we still believe human-in-the-loop is a first-class feature, not a failure point. Which means you should design the system so that, in its intent to give an answer, it doesn’t give wrong answers; it’s okay to figure out when it should transition from a fully autonomous agent to an assisted agent to a human. And that principle of using humans in the right place should be core to our product. And that productization has allowed us… so we also have another company now, which is a hiring platform that serves around 3 lakh companies.
Now, because of what we saw beautifully when we productized a lot of these: every year, every month in fact, 3,000 MSMEs are building voice interview agents on their own. They don’t even realize, because we have productized it, that at the back of it there are three agents they are creating and training for their recruiting process, and they’re deploying them within a matter of five minutes. That has driven adoption by 30,000 companies who are doing it on their own. And if we want the entire India, all companies, to leverage it more and more, then the more we, as agent-based software builders, productize it, the better the adoption will be.
That’s an excellent point, right? I think this is something that we also keep speaking about: the productization of responsible AI from a value proposition perspective. How can responsible AI be embedded as a value proposition in the product that you’re building, which also becomes one of its selling points? That’s a great, great point. So I’ll definitely come back to you, but I would like to go to Ankush, and then I’ll come back to ma’am again. Quickly, Ankush, I wanted to understand: you guys build AI systems, so what are the governance challenges you see, and how are they different for public versus private deployments?
Yeah, I think one is control. When it’s about sovereign AI, it’s not just data residency that matters to our clients; they want complete control. No other government, no other party should be able to even see that, sniff that, audit that. That is something our clients ask for, and that’s why, though we work with almost all the cloud providers, we let the decision be with our clients as to which data center they want us to use. And now we see huge demand for on-premise solutions. We had also seen the need for edge AI: the day before yesterday, with NVIDIA, we launched the BharatGPT Desk AI appliance. That’s a supercomputer in itself, which processes around one petaflop of floating-point operations, with a 4TB hard disk, and it can run a model with one trillion parameters. Huge, right? But our BharatGPT model is just half a billion parameters, which means they can run multiple models, multiple use cases, on just one box. We’ll be announcing that soon; we’re working with the defense. And now there’s a huge need to have it not just in India, not just on-premise: it’s just in the room, on the desk, right now, when the army is doing critical meetings. They don’t want the data to even go out of the room, but with complete processing, complete sovereignty. And they also don’t want to limit the use cases, right? They want to start with minutes of meetings, and the aspirations keep increasing, so we needed to have a supercomputer; thanks to NVIDIA, who’s powering our box there. So I think that is the major part. The rest we all know about: explainability, inclusivity, privacy, and purpose. I think that’s why many, many data centers are coming up in the country; there is a need to have our own data centers here.
That’s excellent. I think what you’re trying to underline is the trust over the solutions, and that’s coming through the sovereignty of the data: the more they have control over it, the more it is trusted.
That’s correct. So now our tagline is “AI with purpose and trust”. Trust is of course important for any relationship, like a vendor relationship, but with AI the trust is more important, because they are trusting us; they are giving us data to create the models. That’s why many new companies are coming up; of course I thank and welcome them to come to the table, but I think the old players are still being valued, so the work is still concentrated here, though the deliveries are taking time and all that. But there is definitely now a need, and I think my message to all the new AI startups is: yes, you have to keep showing and doing innovation, but show the trustworthy part of it. What was said about observability, I think that’s very, very important. So enterprises want more trust, scale, and security than innovation. I’m not saying don’t do the innovation, but the trust part is very important, especially when AI comes in.
That’s a great and important submission. But ma’am, over to you: I think you have to leave in five, so any closing remarks that you would like to provide?
No, the one thing that I wanted to talk about was trust, because that’s what was being discussed. Trust in Salesforce: Trust is our number one value. We have five values. The first is trust; the second is customer success, followed by innovation, equality, and sustainability. But trust is definitely number one. Now, having said that (we are number one in trust), we are also a cloud-native company. OK, so we do not have on-prem systems. And we also believe that basically it is important for us to adopt asset-light models, mainly because today the need for storage and compute is so high, given the fact that AI is able to handle trillions and trillions of data points.
And the more data points you have, the better your answers will be. Of course, not for everything; you don’t need to boil the ocean for every single thing. But where there are really deep questions that will benefit from the diversity and the extent of the data, it is very important for us to have the right kind of compute and storage facility. Now, obviously, if you’re going to have that kind of storage and compute facility entirely on-prem, it also means a pretty high amount of investment into the hardware resource, and India is not very well known for having deep pools of resources. So given the fact that we necessarily have to have capital-light models, it’s important for us to find ways and means of ensuring logical security and trust.
And there are ways of doing this. There are several ways of doing this. One of the reasons why, by the way, we were behind Copilot in bringing our enterprise-level offerings to the market was because we were working very hard on the trust layer. Because the trust layer is not only about access. It’s also about ensuring not only that your data doesn’t go out, but also that your data doesn’t have any toxicity, that your data doesn’t have bias, that your model is not hallucinating. And by the way, the bigger the amount of data, the more is the tendency to hallucinate. And obviously, you don’t want something as important as this to hallucinate and give you a wrong answer.
So TrustLayer actually performs a number of these actions, which is all meant towards ensuring that the results that come are not only responsible, they are trustworthy. Thank you.
And we created it. We launched it when we had seen, and I’m still not saying we are 100% safe, but I’ve seen the world is now okay with having inaccuracies, right? So we are a bit risk-averse, right? We were not big risk-takers when the whole world was okay with it. Because of the clients we have: you see, our clients IRCTC, LIC, NPCI, and Army Defense, they used to expect 99.9% accuracy. When the whole world was okay getting wrong answers from these general-purpose LLMs, they got more convinced. And most of our clients came on board before the ChatGPT days, so that was classic NLP. I liked your point that we don’t have to answer everything, right?
So guardrails really are important. But now most of our clients have gone to Gen AI. Not just Gen AI, not only Gen AI: what we do is composite AI. So we still follow, for conversation, the classic NLP-based intent classification and entity extraction. You would not believe it: 80 to 90 percent of our interactions happen on classic NLP without Gen AI, because we think we all are different, right? So, say, in one of them, IRCTC: say four million people come to IRCTC. If I open the dashboard, there are only eight to ten intents. People have to book, cancel, change boarding station, whatever. So for 80 percent of use cases, if someone is saying “I want to travel from Bangalore to Delhi tomorrow,” there is no Gen AI involved. NLU is involved; that old model works, it just calls the API and gets the data right. No Gen AI. If someone says “hey, I have three pets, then how do I do it?” If it is one pet, that is a policy that we know. But “I have three pets, can I carry them on my train?” Probably that answer is not there in classic NLP; for that you do RAG-based retrieval with BharatGPT. So I think if safety is important, that should be the core of the design, and then composite AI. Don’t do just Gen AI because Gen AI is easily available, and don’t use Gen AI because you have money to buy GPUs and burn the tokens. The idea is: do purpose-led innovation, begin with the end in mind. I have told this line I think 10 times today. First see what problem you are solving, and then you see which solution, then which model. If a model is available, use the available model; if not, build the model.
That’s an excellent point. Thank you so much, Ankush, for making the time. And quickly moving to Karna: any closing remarks that you have, and whatever you want to add to your previous point?
Yeah, so to the point Ankush was mentioning: AI technology is fundamentally built on probabilistic models, and we are all used to software working in a deterministic manner, right? It has to do exactly this. Now when it comes to large processes for large enterprises, I think compliance is one area which is super hard to think about. AI is probabilistic, but compliance you always want to be correct. So to enable the ecosystem, what we believe is we are converting compliance into APIs. What I mean by that is: we’re deploying a voice agent in one of the large mutual fund houses, and all the compliances for that industry are checkboxes.
So every company can pick what compliances they need. They just need to take the APIs they want to ensure, and that makes the entire ecosystem flourish. And these APIs should ideally get open-sourced in the market, so there is enough validation across all players that, hey, this SEBI guideline, this is an API which you can invoke in your agent and the agent will follow it. And this is pressure-tested. This takes away the burden of ensuring AI works 100% correctly in all use cases, which is not the power of the technology. But if we don’t think like that, then we’ll become very restrictive in its application. While we work a lot on reaching P99 accuracy, there is always a probabilistic chance of error.
And I think the second point we should think about is that the human state of mind works well in default versus optional. What I mean by that is: whatever is the default selection in anything you do has 90% adoption, or 80% adoption, and whatever is the change is 20%. So the way we think about it is that a lot of things should be a default. Yes: customer data should not be used by default to train LLMs or models; it should be an optional add-on rather than the other way around, which is what you see today. Because otherwise most startups, MSMEs, and businesses would ignore it, and the scale of innovation will not happen if that’s not the default state.
And lastly, explainability is extremely important, because as models are making decisions, how do you know why a decision was made? If we make that a core output of the API, and don’t think of it as “oh, if something breaks, we will figure out how it works”, we enable our partners to be decision makers with us when designing AI solutions for them. So that’s what we focus on: how do we make this probabilistic technology P99-reliable for enterprises. And governance is the prime topic that comes up when we ask what the missing element for mass adoption is, and that’s something I want the entire ecosystem to embrace.
Can we make it an API? Can compliance and governance be more of an infrastructure rather than paperwork? Because if it stays paperwork, then we’re going to see slower adoption in India than maybe in other parts of the world.
That’s a great point. Thank you so much, Karna. We have very few minutes left, and we have one panelist who has dedicated full time for us, so kudos to that. Opening up to the floor: any questions? I think we can take two questions, given the time frame. Any questions to Karna? Anybody? Yeah. Hi, good evening. Hello. So my question is related to small language models, which are becoming increasingly popular within the developer community. So for businesses like yours, do you see a profitable path ahead for SLMs, or do we continue depending on these LLMs, which I think will be a race to the bottom?
Yeah, no, great question. I think we think about it a lot, and a lot of our customers ask the question: hey, would you be using an SLM, or will we use an LLM? I think, where we are, we will all benefit from the flexibility of LLMs, because frankly most companies are deploying their first or second actual large-scale deployment. It is helpful to leverage the power of the larger models at that time, and over time you will learn what is actually needed, and you can transition from LLMs to an SLM, where you get the advantage of sometimes latency, sometimes cost, depending on what your use case optimizes for. But I think, in the interest of speed of innovation, it’s okay to just use an LLM, figure out where the value is coming to your business, and then explore the journey to an SLM model, which can give you additional advantages. Thank you.
Anyone else? Awesome. So thank you. I would now request Sarj to take it over.
Thank you so much, Kamesh. Thank you so much to all of our panel members. I think it’s been a really, really interesting discussion on where responsible AI is now and where its future lies, particularly with artificial intelligence going ahead. I’ll call Mr. Kazim Rizvi, the founding director of The Dialogue, to give the closing remarks for the session. Kazim.
This works, this doesn’t work… I think this mic works. Yeah, okay, great. Thanks a lot, Sahish. And thank you, Kamesh. Thank you to all those who stayed back till now. I think we are crossing the limit of event fatigue; I know a lot of us are quite tired and very, very exhausted, too many events. But I think the last one week has been fantastic. We’ve had the pleasure and the honor of hosting a few events over the last week. But I think specifically on Responsible AI, as Fani was talking in the beginning, The Dialogue and ICOM have developed India’s first tool to assess Responsible AI readiness. So we urge and we encourage and we motivate all of you guys to sort of look into that.
But thank you, Kamesh, for moderating. Thank you to all our speakers for joining in. I think it’s important that we all work towards building Responsible AI practices from the beginning, by design. I think that’s something which, you know, even the tool will encourage. So please have a look at that. But all of you have a good evening for what is left of the AI summit. It’s been a fantastic summit, and hopefully all of us got to learn a lot; I did myself. Look forward to seeing you all soon. The Dialogue will be hosting multiple conversations on AI policy, and we encourage you all to join those. But until then, have a good evening, enjoy your weekend, and thank you to all our panelists again. Thank you.
Moderator
Speech speed
45 words per minute
Speech length
1115 words
Speech time
1463 seconds
RAISE Index as a holistic, living benchmark
Explanation
The moderator describes the RAISE Index as an iterative, phase‑based framework that must evolve continuously and remain holistic rather than a static checklist. This design ensures benchmarks stay relevant to a company’s maturity and to changing AI risks.
Evidence
“And the RAISE Index is designed as an iterative framework” [1]. “This RAISE Index methodology includes phase-based assessment, ensuring benchmarks remain relevant to company maturity stages” [4]. “The last is benchmarks and frameworks must be living infrastructure, not static checklists, right?” [9]. “And it has to be holistic at the end of the day” [10]. “Hubs must institutionalize continuous benchmark evolution” [5].
Major discussion point
Institutional Frameworks and Benchmarks for Responsible AI
Topics
Artificial intelligence | Monitoring and measurement | The enabling environment for digital development
Co‑creation of practical benchmarks with industry and academia
Explanation
The moderator emphasizes that safety benchmarks should be co‑created with industry partners, academia and research institutions so that they are validated, time‑tested and not limited to research labs.
Evidence
“The second most important element in this framework is to ensure these safety benchmarks are co-created with the industry and with academia and the research institutions” [16]. “This is where benchmarks get validated and time tested” [17]. “And not just research labs” [18].
Major discussion point
Institutional Frameworks and Benchmarks for Responsible AI
Topics
Artificial intelligence | The enabling environment for digital development
India’s unique position to shape global AI standards through the RAISE Index
Explanation
The moderator points out that India’s innovation hubs give it a competitive advantage to influence global AI standards, leveraging the RAISE Index as a vehicle for inclusive and responsible AI dialogue worldwide.
Evidence
“What is interesting is India is uniquely positioned in this global AI discourse” [27]. “This is a significant competitive advantage that India has in shaping the global standards” [28]. “…how is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI” [29].
Major discussion point
Institutional Frameworks and Benchmarks for Responsible AI
Topics
Artificial intelligence | The enabling environment for digital development
Ecosystem‑wide collaboration and continuous learning
Explanation
The moderator calls for governments, innovation hubs, academia and startups to work together, institutionalising continuous learning so that AI standards keep pace with rapid technological change.
Evidence
“I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing this safe and ethical AI, right?” [22]. “I would leave you with the last but very important point of institutionalized continuous learning in responsible AI practice, right?” [37]. “stakeholder consultation; it’s not a one-time standard. We all know AI is an evolving technology and this has to evolve…” [98].
Major discussion point
Future Directions, Community Engagement, and Call to Action
Topics
Capacity development | The enabling environment for digital development | Artificial intelligence
Kazim Rizvi
Speech speed
87 words per minute
Speech length
279 words
Speech time
192 seconds
Promotion of India’s first Responsible AI readiness tool
Explanation
Kazim Rizvi announces that the Dialogue and ICOM have created India’s first assessment tool for Responsible AI readiness, positioning it as a concrete step toward responsible AI practice.
Evidence
“But I think specifically on Responsible AI, as Fani was talking in the beginning, The Dialogue and ICOM have developed India’s first tool to assess Responsible AI readiness” [30]. “I think that’s something which, you know, even the tool will encourage” [38].
Major discussion point
Institutional Frameworks and Benchmarks for Responsible AI
Topics
Artificial intelligence | Monitoring and measurement
Invitation to adopt the RAISE assessment tool and engage in policy dialogue
Explanation
He urges participants to explore and adopt the RAISE assessment tool and to join ongoing policy conversations, reinforcing the call for broad uptake of responsible AI standards.
Evidence
“So we urge and we encourage and we motivate all of you guys to sort of look into that” [100]. “So please have a look at that” [104].
Major discussion point
Future Directions, Community Engagement, and Call to Action
Topics
Artificial intelligence | Capacity development
Arundhati Bhattacharya
Speech speed
111 words per minute
Speech length
929 words
Speech time
498 seconds
Establishment of a Humane and Ethical Use of Technology office
Explanation
Arundhati explains that Salesforce created, in 2014, a dedicated office focused on the humane and ethical use of technology, which reviews every product and process before market release.
Evidence
“And in 2014, we also set up within the company an office for the humane and ethical use of technology” [43]. “So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market” [48].
Major discussion point
Corporate Trust and Ethical Governance
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Trust as the top corporate value and foundation of a Trust Layer
Explanation
She repeatedly stresses that trust is Salesforce’s number‑one value and that a dedicated Trust Layer underpins product reliability, access control and overall customer confidence.
Evidence
“Trust is our number one value” [54]. “But trust is definitely number one” [55]. “The first is trust” [56]. “Because the trust layer is not only about access” [57]. “One of the reasons why, by the way, we were behind Copilot in bringing our enterprise‑level offerings to the market was because we were working very hard on the trust layer” [58].
Major discussion point
Corporate Trust and Ethical Governance
Topics
Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs | Artificial intelligence
Call for a global compact to prevent AI misuse
Explanation
Arundhati advocates for an international compact that would unite stakeholders to stop malicious actors and ensure AI is used for the good of humanity.
Evidence
“And that is something that we must come together in a global compact in order to defeat and in order to stop” [65]. “Again, this has to be a global compact” [66].
Major discussion point
Corporate Trust and Ethical Governance
Topics
Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Kamesh Shekar
Speech speed
162 words per minute
Speech length
768 words
Speech time
283 seconds
Productizing trust as a core value proposition for AI solutions
Explanation
Kamesh argues that embedding responsible AI and trust directly into product value propositions creates a market differentiator and a compelling selling point for AI offerings.
Evidence
“How can responsible AI be embedded as a value proposition in the product that you’re building, which also becomes one of its selling points?” [39]. “productization of responsible AI from a value proposition perspective” [41]. “productize it, the better the adoption will be” [40].
Major discussion point
Corporate Trust and Ethical Governance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The digital economy
Organizational ethos as a necessary complement to technical safeguards
Explanation
He observes that responsibility cannot be achieved through technical measures alone; organizational ethics and ethos, exemplified by a dedicated ethics office, give responsible AI its essence.
Evidence
“it’s not only the technical side that can solve the problem when we talk about responsibility; it’s also organizational ethics and organizational ethos which brings that kind of essence to it”.
Major discussion point
Corporate Trust and Ethical Governance
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Reinforcing trust as central in enterprise AI deployments
Explanation
Kamesh highlights that trust, especially data sovereignty and control, is the key factor for enterprises and defense clients, outweighing pure innovation.
Evidence
“what you’re trying to underline is the trust over the solutions, and that’s coming through the sovereignty of the data: the more they have control over it, the more it is trusted” [60].
Major discussion point
Data Sovereignty, Control, and Trust in Enterprise AI Deployments
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Karna Chokshi
Speech speed
173 words per minute
Speech length
1177 words
Speech time
407 seconds
Embedding governance and compliance directly into the core product
Explanation
Karna argues that governance should be built into the product itself rather than delivered as a lengthy PDF, turning compliance into an infrastructure that scales with the solution.
Evidence
“Can compliance and governance be more of an infrastructure rather than paperwork?” [73]. “as a lot of us are building solutions for customers, governance should be the core product” [74]. “we are converting compliance into APIs” [24].
Major discussion point
Embedding Governance and Compliance into AI Products for Startups/SMEs
Topics
Artificial intelligence | The enabling environment for digital development | Monitoring and measurement
Human‑in‑the‑loop as a first‑class feature
Explanation
He stresses that human-in-the-loop must be treated as a primary capability, enabling safe transitions from autonomous to assisted operation rather than being seen as a failure.
Evidence
“human-in-the-loop is a first-class feature, not a failure point … design the system … transition … to a human” [79].
Major discussion point
Embedding Governance and Compliance into AI Products for Startups/SMEs
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Delivering compliance via reusable APIs and default‑on settings
Explanation
Karna describes a strategy of exposing compliance checks as APIs with default‑on configurations, ensuring that every AI call passes guardrails automatically.
Evidence
“make that a core output of the API, and don’t think of it as ‘oh, if something breaks, we will figure out how it works’” [77]. “a lot of things should be a default” [88]. “the human state of mind works well in default versus optional” [82].
Major discussion point
Embedding Governance and Compliance into AI Products for Startups/SMEs
Topics
Artificial intelligence | The enabling environment for digital development | Monitoring and measurement
Ankush Sabharwal
Speech speed
170 words per minute
Speech length
971 words
Speech time
342 seconds
Demand for on‑premise and edge AI to ensure data sovereignty
Explanation
Ankush explains that clients, especially in defense and critical sectors, require complete control over data, driving demand for on‑premise and edge AI appliances that keep processing within the customer’s premises.
Evidence
“they want complete control. No other government, no other party should be able to even see that, sniff that, audit that … huge demand for on-premise solutions … we launched the BharatGPT Desk AI appliance … not just on-premise: it’s just in the room, on the desk … complete processing, complete sovereignty” [92].
Major discussion point
Data Sovereignty, Control, and Trust in Enterprise AI Deployments
Topics
Data governance | Building confidence and security in the use of ICTs | Artificial intelligence
Composite AI prioritizing accuracy and purpose over generative novelty
Explanation
He explains that the bulk of interactions are resolved with classic NLP-based intent classification and entity extraction, with generative AI (via retrieval-augmented generation) invoked only for long-tail queries, so that clients’ stringent accuracy expectations can be met.
Evidence
“80 to 90 percent of our interactions happen on classic NLP without Gen AI”. “our clients IRCTC, LIC, NPCI, and Army Defense, they used to expect 99.9% accuracy”.
Major discussion point
Data Sovereignty, Control, and Trust in Enterprise AI Deployments
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Prioritizing trust over pure innovation for enterprise and defense clients
Explanation
Ankush stresses that enterprises and defense customers value trust, security and reliability more than rapid innovation, and that AI providers must demonstrate these attributes to win business.
Evidence
“trust is more important because they are trusting us, they are giving us data … enterprises want more of trust, scale, security than the innovation” [61].
Major discussion point
Data Sovereignty, Control, and Trust in Enterprise AI Deployments
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Agreements
Agreement points
Trust as fundamental requirement for AI adoption
Speakers
– Arundhati Bhattacharya
– Ankush Sabharwal
– Kamesh Shekar
Arguments
Trust as the number one value requiring comprehensive trust layers beyond just data security
Complete data sovereignty and control as essential for enterprise and government clients
Trust in AI solutions comes through data sovereignty and user control
Summary
All speakers emphasized that trust is the foundational element for AI adoption, whether through comprehensive trust layers, data sovereignty, or user control mechanisms
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Integration of governance and ethics into core AI product design
Speakers
– Karna Chokshi
– Arundhati Bhattacharya
– Kazim Rizvi
Arguments
Productization of governance as core product feature rather than separate compliance exercise
Office for humane and ethical use of technology reviewing all products before market release
Importance of building responsible AI practices from the beginning by design
Summary
Speakers agreed that responsible AI practices should be embedded into the core product design and organizational processes from the beginning, rather than being treated as separate compliance exercises
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Need for human oversight in AI systems
Speakers
– Karna Chokshi
– Ankush Sabharwal
Arguments
Human-in-the-loop as a first-class feature rather than failure point in AI systems
Composite AI approach using classical NLP for routine tasks and generative AI only when necessary
Summary
Both speakers advocated for intentional human involvement in AI systems, whether through designed human-in-the-loop features or strategic use of different AI technologies based on task complexity
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Importance of explainability and transparency in AI systems
Speakers
– Karna Chokshi
– Arundhati Bhattacharya
Arguments
Explainability as core API output rather than post-hoc investigation tool
Trust as the number one value requiring comprehensive trust layers beyond just data security
Summary
Both speakers emphasized that AI systems must be transparent and explainable, with Chokshi advocating for explainability as a core output and Bhattacharya highlighting comprehensive trust layers that address bias and hallucinations
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Similar viewpoints
Both emphasized the necessity of collaborative, multi-stakeholder approaches to AI governance, with the Moderator focusing on co-creation for benchmarks and Bhattacharya advocating for global compacts to address misuse
Speakers
– Moderator
– Arundhati Bhattacharya
Arguments
Need for co-creation between government, industry, and academia for effective benchmarks
Need for global compact to prevent misuse by bad actors while enabling beneficial AI applications
Topics
Artificial intelligence | The enabling environment for digital development
Both advocated for dynamic, adaptable approaches to AI governance that can evolve with technology rather than remaining static, with the Moderator emphasizing living infrastructure and Chokshi proposing API-based compliance
Speakers
– Moderator
– Karna Chokshi
Arguments
Need for living infrastructure rather than static checklists for AI benchmarks
Converting compliance into APIs to make governance more accessible for enterprises
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
Both speakers advocated for strategic, purpose-driven approaches to AI technology selection, emphasizing the importance of using the right technology for specific tasks rather than applying generative AI universally
Speakers
– Ankush Sabharwal
– Karna Chokshi
Arguments
Composite AI approach using classical NLP for routine tasks and generative AI only when necessary
Flexibility of LLMs for initial deployments with transition to SLMs over time
Topics
Artificial intelligence | The enabling environment for digital development
Unexpected consensus
Organizational structures and processes are as important as technical solutions
Speakers
– Arundhati Bhattacharya
– Kamesh Shekar
– Kazim Rizvi
Arguments
Office for humane and ethical use of technology reviewing all products before market release
Organizational ethics and ethos are crucial for responsible AI beyond technical solutions
Importance of building responsible AI practices from the beginning by design
Explanation
There was unexpected consensus that responsible AI requires significant organizational commitment and structural changes, not just technical fixes. This suggests a mature understanding that AI governance is fundamentally about organizational culture and processes
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
India’s unique position as an advantage rather than limitation in AI development
Speakers
– Moderator
– Ankush Sabharwal
Arguments
India’s unique position to shape global AI standards due to multilingual, resource-constrained environment
Complete data sovereignty and control as essential for enterprise and government clients
Explanation
Both speakers viewed India’s constraints (multilingual populations, infrastructure limitations, sovereignty requirements) as competitive advantages rather than obstacles, suggesting a shift from deficit-based to asset-based thinking about developing country contexts in AI
Topics
Artificial intelligence | Closing all digital divides | The enabling environment for digital development
Overall assessment
Summary
The speakers demonstrated strong consensus on fundamental principles of responsible AI development, including the primacy of trust, the need for embedded governance, human oversight, and collaborative approaches. There was also agreement on practical implementation strategies such as dynamic frameworks, purpose-driven technology selection, and organizational commitment to ethics.
Consensus level
High level of consensus with complementary perspectives rather than conflicting viewpoints. This suggests a maturing field where practitioners are converging on core principles while offering different implementation approaches. The implications are positive for developing coherent AI governance frameworks that can be practically implemented across different organizational contexts and scales.
Differences
Different viewpoints
Cloud-native vs on-premise AI deployment approaches
Speakers
– Arundhati Bhattacharya
– Ankush Sabharwal
Arguments
Trust as the number one value requiring comprehensive trust layers beyond just data security
Complete data sovereignty and control as essential for enterprise and government clients
Summary
Bhattacharya advocates for cloud-native solutions with trust layers to ensure security while leveraging shared compute resources, arguing that India lacks deep resource pools for individual on-premise investments. Sabharwal emphasizes complete data sovereignty requiring on-premise and edge solutions where no external party can access data, particularly for defense and government clients.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Approach to AI model selection and deployment strategy
Speakers
– Karna Chokshi
– Ankush Sabharwal
Arguments
Flexibility of LLMs for initial deployments with transition to SLMs over time
Composite AI approach using classical NLP for routine tasks and generative AI only when necessary
Summary
Chokshi recommends starting with large language models for flexibility and transitioning to smaller models over time as needs become clearer. Sabharwal advocates for a composite approach using classical NLP for 80-90% of routine tasks and only employing generative AI when specifically needed for complex queries.
Topics
Artificial intelligence | The enabling environment for digital development
Unexpected differences
Acceptance of AI inaccuracy levels in enterprise applications
Speakers
– Karna Chokshi
– Ankush Sabharwal
Arguments
Converting compliance into APIs to make governance more accessible for enterprises
Composite AI approach using classical NLP for routine tasks and generative AI only when necessary
Explanation
An unexpected disagreement emerged regarding tolerance for AI inaccuracies. Chokshi acknowledges the probabilistic nature of AI and suggests working within P99 accuracy while designing systems that can handle the inherent uncertainty. Sabharwal takes a more risk-averse approach, noting that their clients expect 99.9% accuracy and emphasizing the use of classical NLP to avoid generative AI’s probabilistic uncertainties for routine tasks.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Overall assessment
Summary
The main areas of disagreement centered around technical implementation approaches for achieving AI trust and security, with fundamental philosophical differences about cloud vs on-premise deployment, AI model selection strategies, and acceptable accuracy levels for enterprise applications.
Disagreement level
Moderate disagreement level with significant implications for AI deployment strategies. While all speakers agreed on the importance of trust, security, and responsible AI, their different approaches reflect varying risk tolerances and client requirements. These disagreements highlight the need for flexible frameworks that can accommodate different deployment models and use cases rather than one-size-fits-all solutions.
Partial agreements
All speakers agree on the fundamental need for trust and security in AI systems, but disagree on implementation approaches. Bhattacharya emphasizes global cooperation and cloud-based trust layers, Chokshi focuses on product-integrated governance with human oversight, while Sabharwal prioritizes complete data sovereignty through on-premise solutions.
Speakers
– Arundhati Bhattacharya
– Karna Chokshi
– Ankush Sabharwal
Arguments
Need for global compact to prevent misuse by bad actors while enabling beneficial AI applications
Human-in-the-loop as a first-class feature rather than failure point in AI systems
Complete data sovereignty and control as essential for enterprise and government clients
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both speakers agree that AI governance should be practical and integrated into core systems rather than being separate compliance exercises. However, they differ in their technical approaches – Chokshi focuses on API-based compliance integration while Sabharwal emphasizes selective use of different AI technologies based on task requirements.
Speakers
– Karna Chokshi
– Ankush Sabharwal
Arguments
Productization of governance as core product feature rather than separate compliance exercise
Composite AI approach using classical NLP for routine tasks and generative AI only when necessary
Topics
Artificial intelligence | The enabling environment for digital development
Takeaways
Key takeaways
India has developed the RAISE Index, its first quantitative framework for assessing AI safety and responsibility, harmonizing global standards such as the EU AI Act, NIST, and Singapore guidelines into a single assessment tool
Trust must be the foundational value in AI development, requiring comprehensive trust layers that address not just data security but also bias, toxicity, and hallucination prevention
A global compact is essential to prevent AI misuse by bad actors while enabling beneficial applications, as no single country or organization can address these challenges alone
Responsible AI governance should be productized as core features rather than separate compliance exercises, with compliance converted into APIs for easier enterprise adoption
India’s unique position with multilingual populations and infrastructure constraints provides a competitive advantage in shaping global AI standards for developing nations
Composite AI approaches using classical NLP for routine tasks and generative AI only when necessary can improve accuracy and reduce risks
Human-in-the-loop should be designed as a first-class feature rather than a failure point in AI systems
AI benchmarks must be living infrastructure that evolves continuously rather than static checklists, given the rapid pace of AI development
Resolutions and action items
Participants encouraged to test their AI solutions against the RAISE Index framework using the provided QR code
The Dialogue will host multiple future conversations on AI policy with encouragement for broader participation
RAISE Index methodology will continue evolving through pilot phases and stakeholder consultation as an iterative framework
Telangana Data Exchange launched as digital public infrastructure providing startups access to government datasets in sandboxed environments
Default settings should favor privacy and security (e.g., customer data not used for training by default) rather than making these optional add-ons
Unresolved issues
How to effectively implement and enforce global AI standards across different jurisdictions with varying regulatory approaches
The ongoing challenge of balancing AI innovation speed with comprehensive safety and responsibility measures
Technical challenges of achieving 99.9% accuracy expectations from enterprise clients while working with probabilistic AI models
The transition pathway from LLMs to SLMs for enterprises and the optimal timing for such transitions
Specific mechanisms for preventing AI misuse by bad actors on a global scale beyond general cooperation principles
How to maintain the pace of benchmark evolution to match the rapid advancement of AI capabilities
Suggested compromises
Use flexible LLMs for initial AI deployments to enable faster innovation, then transition to more specialized SLMs over time as use cases become clearer
Implement composite AI approaches that use classical NLP for routine tasks (80-90% of interactions) and reserve generative AI for complex queries requiring deeper reasoning
Balance cloud-native scalability with on-premise sovereignty requirements by offering both deployment options based on client security needs
Design AI systems with graduated autonomy levels that can transition from fully autonomous to assisted to human-controlled based on context and confidence levels
Make governance and compliance features default settings while allowing optional customization, rather than making responsible AI practices optional add-ons
Thought provoking comments
In 2014, we also set up within the company an office for the humane and ethical use of technology. So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market.
Speaker
Arundhati Bhattacharya
Reason
This comment is insightful because it demonstrates proactive organizational commitment to ethical AI from the very early stages of AI development (2014), showing that responsible AI isn’t just a recent concern but requires institutional embedding. It challenges the notion that ethics can be an afterthought and introduces the concept of systematic ethical review as a business process.
Impact
This comment set the tone for the entire discussion by establishing that responsible AI requires organizational structure and process, not just good intentions. It influenced subsequent speakers to discuss practical implementation approaches rather than just theoretical frameworks.
Today you see the kind of deep fakes that are there, stuff that we never thought of in our childhood, families having safe words amongst themselves… we are having to teach children that these are the ways that you can be sure and you can be safe.
Speaker
Arundhati Bhattacharya
Reason
This comment is deeply thought-provoking because it illustrates how AI risks have penetrated into the most intimate aspects of human relationships – family trust. The image of families needing ‘safe words’ to verify identity transforms the abstract concept of AI risk into a visceral, relatable reality that affects everyone.
Impact
This shifted the conversation from technical and business considerations to deeply human and societal impacts. It emphasized the urgency of the responsible AI challenge and helped frame why global cooperation is essential – because the risks affect fundamental human relationships.
Governance should be the core product … and that allows mass adoption … At the time you’re giving it an input and it’s reasoning, there are guardrails … before it does some tool calling, there are again guardrails before that … and even when you do an output, there needs to be a guardrail, and the guardrails should be a part of the core product.
Speaker
Karna Chokshi
Reason
This comment reframes responsible AI from a compliance burden to a product feature and competitive advantage. The insight that governance should be ‘productized’ rather than treated as external oversight is revolutionary – it suggests that safety and responsibility can drive adoption rather than hinder it.
Impact
This comment fundamentally shifted the discussion from viewing responsibility as a constraint on innovation to seeing it as an enabler of mass adoption. It influenced the conversation toward practical implementation strategies and demonstrated how startups can make responsible AI a business advantage.
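A minimal sketch of that layered design, with guardrail hooks at the input, tool-calling, and output stages. The checks are deliberately trivial placeholders; none of the names or rules come from the session.

def guard_input(prompt: str) -> str:
    # Input-stage check, e.g. screening for prompt injection (placeholder rule).
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("blocked at input: possible prompt injection")
    return prompt

def guard_tool_call(tool_name: str, allowed: set) -> None:
    # Pre-tool-call check: an allow-list before the agent touches external systems.
    if tool_name not in allowed:
        raise PermissionError(f"blocked tool call: {tool_name!r} not allow-listed")

def guard_output(text: str) -> str:
    # Output-stage check, e.g. leakage screening (placeholder rule).
    return "[withheld by output guardrail]" if "password" in text.lower() else text

def run_guarded_agent(reason, prompt: str, tools: dict, allowed: set) -> str:
    """Guardrails are part of the pipeline itself, not bolted on afterwards."""
    prompt = guard_input(prompt)
    plan = reason(prompt)  # stand-in for the model's reasoning step
    for name in plan.get("tools", []):
        guard_tool_call(name, allowed)
        tools[name]()  # executed only if the allow-list check passed
    return guard_output(plan.get("answer", ""))

# Stand-in reasoning step for demonstration:
demo = lambda p: {"tools": ["search"], "answer": "Found 3 results."}
print(run_guarded_agent(demo, "find my order", {"search": lambda: None}, {"search"}))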
Human in the loop is a first class feature, not a failure point … you should design the system so that, in its intent to give an answer, it doesn’t give wrong answers … it’s okay to figure out when it should transition from a fully autonomous agent to an assisted agent to a human.
Speaker
Karna Chokshi
Reason
This insight challenges the common assumption that AI systems should strive for complete autonomy. Instead, it reframes human involvement as a design strength rather than a system limitation, offering a nuanced approach to AI deployment that acknowledges both capabilities and limitations.
Impact
This comment introduced a more sophisticated understanding of AI system design that influenced subsequent discussions about accuracy, trust, and practical deployment. It helped move the conversation away from binary thinking about AI autonomy toward more nuanced hybrid approaches.
AI technology is fundamentally designed on a probabilistic model and we are all used to software working in a deterministic manner… So we are converting compliance into APIs… these APIs should ideally get open sourced in the market.
Speaker
Karna Chokshi
Reason
This comment addresses a fundamental tension in AI deployment – the mismatch between probabilistic AI behavior and deterministic compliance requirements. The solution of converting compliance into APIs is innovative because it makes regulatory requirements modular, testable, and shareable across the ecosystem.
Impact
This insight helped resolve a key tension that had been building throughout the discussion about how to ensure compliance with probabilistic systems. It provided a concrete technical solution that could enable broader AI adoption while maintaining regulatory compliance.
80 to 90 percent of our interactions happen via classic NLP, without Gen AI, because we think we all are different … don’t do just Gen AI because Gen AI is easily available, and don’t use Gen AI because you have money to buy GPUs and burn the tokens … the idea is: do purpose-led innovation, begin with the end in mind.
Speaker
Ankush Sabharwal
Reason
This comment challenges the prevailing hype around generative AI by advocating for ‘composite AI’ approaches that use the right tool for each specific task. It’s insightful because it promotes efficiency and purpose over technological novelty, which is crucial for responsible deployment.
Impact
This comment grounded the discussion in practical reality and challenged assumptions about always using the most advanced AI technology. It reinforced the theme of purposeful, measured AI deployment that considers both effectiveness and responsibility.
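As an illustration of the composite approach, a sketch in which a deterministic matcher answers routine queries and a generative model is invoked only as a fallback. The intents, threshold, and similarity measure are all invented for the example; a production system would use a proper intent classifier.

from difflib import SequenceMatcher

INTENTS = {
    "where is my order": "Track your order under My Account > Orders.",
    "refund status": "Refunds are processed within 5-7 business days.",
}
MATCH_THRESHOLD = 0.8  # hypothetical; tuned on real traffic in practice

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def answer(query: str, generative_model=None) -> str:
    # 1. Deterministic path: meant to cover the routine majority of traffic.
    best = max(INTENTS, key=lambda intent: similarity(query, intent))
    if similarity(query, best) >= MATCH_THRESHOLD:
        return INTENTS[best]
    # 2. Generative path: reserved for queries the classic pipeline cannot place.
    if generative_model is not None:
        return generative_model(query)
    return "Connecting you to a human agent."

print(answer("refund status?"))  # deterministic hit, no generative tokens spent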
Overall assessment
These key comments collectively transformed the discussion from abstract principles to concrete, actionable approaches for responsible AI. The conversation evolved through three distinct phases: first establishing the human stakes and organizational requirements (Bhattacharya’s contributions), then reframing responsibility as a product advantage and technical challenge (Chokshi’s insights), and finally grounding the discussion in practical deployment realities (Sabharwal’s composite AI approach). The most impactful insight was the reconceptualization of responsible AI from a constraint to an enabler – showing how governance, human-in-the-loop design, and purposeful technology selection can drive rather than hinder adoption. This shift from viewing responsibility as overhead to viewing it as competitive advantage fundamentally changed the tenor of the discussion and provided a pathway for practical implementation that balances innovation with safety.
Follow-up questions
How can we establish a global compact to prevent bad actors from misusing AI technology while ensuring beneficial uses continue?
Speaker
Arundhati Bhattacharya
Explanation
This addresses the critical need for international cooperation to combat AI misuse while preserving innovation, requiring coordination across countries and organizations.
How can compliance and governance be converted into APIs and infrastructure rather than paperwork to enable mass adoption?
Speaker
Karna Chokshi
Explanation
This explores making regulatory compliance more accessible and automated for startups and MSMEs, potentially accelerating responsible AI adoption across the ecosystem.
What is the profitable path ahead for small language models (SLMs) versus large language models (LLMs) for businesses?
Speaker
Audience member
Explanation
This question addresses the economic viability and strategic considerations for businesses choosing between different model sizes, impacting cost, performance, and deployment decisions.
How can we balance the probabilistic nature of AI with the deterministic requirements of compliance and enterprise applications?
Speaker
Karna Chokshi and Ankush Sabharwal
Explanation
This addresses a fundamental technical challenge where AI’s inherent uncertainty conflicts with business needs for reliable, compliant operations.
How can the RAISE Index methodology be adapted and implemented across different jurisdictions and regulatory frameworks?
Speaker
Moderator
Explanation
This explores the scalability and international applicability of India’s responsible AI assessment framework, potentially influencing global standards.
What are the specific requirements and challenges for implementing sovereign AI solutions across different sectors (defense, healthcare, finance)?
Speaker
Ankush Sabharwal
Explanation
This addresses the growing demand for data sovereignty and on-premise AI solutions, particularly in sensitive sectors requiring complete control over data and processing.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.