Building the Next Wave of AI: Responsible Frameworks & Standards
20 Feb 2026 13:00h - 14:00h
Summary
The panel opened by stressing that AI must be safe, responsible, ethical, inclusive and explainable, and that effective safety benchmarks should arise from real-world deployment rather than isolated research labs, be co-created with industry and academia, and remain living infrastructure that evolves with technology [4-6][8-13][24-27]. The moderator introduced ICOM’s RAISE Index as a pioneering framework for quantifying AI safety and responsibility across development and deployment, accessible via a QR code for testing AI solutions [13-15][38-42]. He also highlighted the Telangana Data Exchange, a sandboxed public-data platform that lets startups validate their models against actual datasets before launch [16-19][20-23]. Emphasising India’s unique position, he argued that the country’s multilingual, large-scale environment gives it a competitive edge in shaping global AI standards, and noted that the RAISE Index harmonises requirements from the EU AI Act, NIST, Singapore and UK frameworks [29-35][39-42].
Kamesh then invited panelists, asking Arundhati Bhattacharya how a global enterprise balances rapid AI innovation with trust and accountability [50-57]. Arundhati explained that Salesforce created a “humane and ethical use of technology” office in 2014 to review every product, stressing that AI’s misuse requires a global compact and transparent information exchange [58-66][67-68]. She further stated that trust is Salesforce’s top value and described a “TrustLayer” that safeguards against data leakage, bias, toxicity and hallucination [112-118][132-138].
Karna argued that responsible AI should be productised, embedding governance guardrails and a human-in-the-loop directly into AI agents to enable mass adoption [96-102]. Ankush added that sovereign AI demands on-prem or edge solutions giving clients full data control, and that trust is built through explainability, privacy and purpose-driven models [108-110]. Ankush also suggested delivering compliance as reusable APIs so enterprises can select the regulations they need without burdening probabilistic AI systems [151-158][159-162]. Karna further advocated default-protective settings for user data and making explainability a core API output to support decision-making [165-172].
The session concluded with Kazim Rizvi urging participants to adopt the RAISE Index and embed responsible AI by design, emphasizing the shared responsibility of technologists, policymakers and startups to ensure AI benefits society without unintended harms [202-209].
Keypoints
Major discussion points
– Establishing practical, co-created safety benchmarks for AI – The moderator stressed that benchmarks must emerge from real-world deployment, be co-created with industry, academia and government, and remain “living infrastructure” that evolves with AI capabilities. The RAISE Index (India’s first quantitative framework) and the Telangana Data Exchange sandbox were presented as concrete tools to validate and continuously improve these benchmarks [4-12][13-15][16-24][27-28][30-34].
– Global collaboration and a “trust compact” to curb misuse – Arundhati Bhattacharya described Salesforce’s early creation of a “humane and ethical use” office and argued that preventing bad-actor exploitation of AI requires a transparent, worldwide agreement and shared standards [58-66][67-71][112-118].
– Embedding responsible AI into startup and MSME products – Karna Chokshi highlighted the need to bake governance, observability and human-in-the-loop controls directly into the AI product (rather than as a separate 200-page PDF), turning compliance into reusable APIs and making responsible AI a value proposition that drives mass adoption [96-102][151-168][174-177].
– Data sovereignty, trust layers and explainability for enterprise clients – Ankush Sabharwal explained that large organisations demand full control over data (on-premise or edge solutions) and that trust is built through strict access controls, bias/toxicity filters and explainability mechanisms; Salesforce echoed this focus on a “TrustLayer” that safeguards data and model outputs [108-110][112-119][130-137].
– Choosing between large-scale LLMs and smaller, task-specific models – When asked about the rise of small language models (SLMs), Karna noted that enterprises often start with powerful LLMs for speed, then migrate to SLMs for lower latency and cost once the use-case is clarified [191-193].
Overall purpose / goal of the discussion
The panel was convened to close the Global AI Summit by crystallising how the AI community (governments, innovation hubs, academia, large firms and startups) can jointly develop, benchmark and continuously refine safe, ethical and inclusive AI systems, and to promote concrete tools (e.g., the RAISE Index, Telangana Data Exchange) that operationalise responsible AI by design [4-7][28-30][202-208].
Overall tone and its evolution
The conversation began with an optimistic, forward-looking tone emphasizing collaboration and the promise of responsible AI [4][48]. As the dialogue progressed, speakers adopted a more pragmatic, problem-solving tone, detailing concrete technical challenges (governance integration, data sovereignty, model selection) and practical solutions [96-102][108-110][191-193]. Throughout, the tone remained constructive and collegial, ending on a hopeful note encouraging continued ecosystem cooperation [84][190][209].
Speakers
– Kamesh Shekar – Area of expertise: Artificial Intelligence & Emerging Tech; Role: Moderator of the panel, Youth Ambassador at The Internet Society; Title: Youth Ambassador, Moderator [S1][S2]
– Karna Chokshi – Area of expertise: AI productization for startups; Role: Startup founder/CEO (voice-agent solutions); Title:
– Moderator – Area of expertise: ; Role: Session moderator; Title:
– Arundhati Bhattacharya – Area of expertise: Responsible AI, AI ethics; Role: Executive at Salesforce, Global Enterprise Leader; Title:
– Ankush Sabharwal – Area of expertise: AI infrastructure, sovereign AI solutions; Role: Leader of AI solutions company (Vada GPT appliance); Title:
– Kazim Rizvi – Area of expertise: AI policy & governance; Role: Founding Director of The Dialogue, Moderator; Title: Founding Director, Moderator [S11][S12]
Additional speakers:
– Sarj – Area of expertise: ; Role: ; Title:
– Fani – Area of expertise: ; Role: ; Title:
– Sahish – Area of expertise: ; Role: ; Title:
The panel opened with the moderator emphasizing that the ultimate challenge of AI innovation is to ensure its impact is safe, responsible, ethical, inclusive and explainable, a goal that must be pursued holistically [4-6]. He argued that the week’s lessons highlight the need for benchmarks derived from real-world deployment rather than isolated research labs, and that governments, innovation hubs, academia and startups all share responsibility for shaping such standards [7-10]. Crucially, he stressed that benchmarks must be co-created with industry and academia and function as “living infrastructure” that evolves alongside AI capabilities [11-13][24-27].
To illustrate concrete tools for this vision, the moderator presented ICOM’s RAISE Index, described as the first quantitative framework that measures AI safety and responsibility across both development and deployment phases [13-15]. Attendees could scan a QR code on the screen to access the full framework and test their own AI solutions against it [14-15]. He added that the methodology is open and adaptable for other jurisdictions, enabling broader applicability [39-42]. He also highlighted the Telangana Data Exchange, a first-of-its-kind digital public infrastructure within the realm of AI that gives startups sandboxed access to government datasets for validating models against real data, use-cases and constraints before launch [16-23]. These initiatives embody the principle that benchmarks should be validated in situ and continuously refined.
The moderator then addressed three distinct points.
1. Practical benchmark validation – exemplified by the Telangana Data Exchange, which allows startups to test against real-world data [16-23].
2. India’s strategic advantage – he asked, “How is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI?” and answered that India’s multilingual, large-scale environment turns infrastructure constraints and massive scale into a competitive edge, offering a unique perspective for global AI standards [29-34].
3. Rapid-startup-friendly frameworks – noting that startups move at a fast pace, he called for benchmarks that are agile enough to keep up with their speed of innovation [96-102].
He positioned India's context as a strategic advantage for shaping global AI standards: most existing frameworks assume high-resource, homogeneous settings, whereas India operates under infrastructure constraints and massive scale, turning these challenges into a competitive edge [29-34]. The RAISE Index, he explained, harmonises requirements from the EU AI Act, the NIST AI Risk Management Framework, Singapore's guidelines and the UK AI Assurance, offering a single portable assessment for organisations operating across jurisdictions [39-42][43]. He concluded by urging continuous, phase-based benchmark evolution that keeps benchmarks relevant both to a company's maturity stage and to the rapid pace of AI advances [44-48].
When the panel began, Arundhati Bhattacharya recounted that Salesforce established an “Office for the Humane and Ethical Use of Technology” in 2014, which reviews every product and process before market release [58-61]. She argued that preventing misuse by bad actors requires a global compact and transparent information exchange, noting the proliferation of deep-fakes and the need for societal safeguards [65-71]. Trust, she said, is Salesforce’s number-one value, embodied in a TrustLayer that protects against data leakage, bias, toxicity and hallucination, and the company deliberately delayed its Copilot-like offering until this layer was robust [112-118][119-136][132-138].
Karna Chokshi shifted the focus to startups, insisting that responsible AI must be productised: governance, observability and human-in-the-loop controls should be baked into the core AI product rather than relegated to a lengthy PDF [96-102]. She described a design where guardrails are applied at the prompt, during tool-calling and at output, and where the human-in-the-loop is treated as a first-class feature, not a failure point [98-99]. By productising these safeguards, her company has enabled 30,000 organisations to deploy voice-agent interview tools within minutes, demonstrating mass-adoption potential [100-102]. She further advocated turning compliance into reusable APIs with sensible defaults, arguing that such infrastructure-level solutions would make governance scalable and encourage default-protective settings for user data [151-169][174-177].
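The guardrail design described above (checks at the prompt, before each tool call, and on the output, with escalation to a human treated as a normal outcome) can be sketched in code. This is a hypothetical illustration only: the names `AgentPipeline`, `GuardrailResult`, and `no_pii` are invented for this sketch and do not correspond to any product mentioned by the panel.

```python
# Illustrative sketch of "guardrails at every stage of the agentic lifecycle".
# All class and function names here are hypothetical, not a real API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""

# A guardrail check is any function from text to an allow/deny decision.
Check = Callable[[str], GuardrailResult]

def no_pii(text: str) -> GuardrailResult:
    # Toy check: flag content that looks like it contains an email address.
    if "@" in text:
        return GuardrailResult(False, "possible PII (email) detected")
    return GuardrailResult(True)

@dataclass
class AgentPipeline:
    input_checks: List[Check] = field(default_factory=list)
    tool_checks: List[Check] = field(default_factory=list)
    output_checks: List[Check] = field(default_factory=list)

    def _run(self, checks: List[Check], text: str, stage: str) -> str:
        for check in checks:
            result = check(text)
            if not result.allowed:
                # Human-in-the-loop as a first-class outcome, not an error:
                # the agent hands off instead of risking a wrong answer.
                return f"ESCALATE[{stage}]: {result.reason}"
        return "OK"

    def handle(self, prompt: str, tool_call: str, draft_output: str) -> str:
        # Guardrails run at all three stages; the first failure escalates.
        for stage, text, checks in [
            ("input", prompt, self.input_checks),
            ("tool", tool_call, self.tool_checks),
            ("output", draft_output, self.output_checks),
        ]:
            verdict = self._run(checks, text, stage)
            if verdict != "OK":
                return verdict
        return draft_output

pipeline = AgentPipeline(input_checks=[no_pii], tool_checks=[no_pii],
                         output_checks=[no_pii])
print(pipeline.handle("schedule interview", "crm.write(candidate)", "Interview booked."))
# -> Interview booked.
print(pipeline.handle("email alice@example.com", "crm.write(x)", "done"))
# -> ESCALATE[input]: possible PII (email) detected
```

The point of the sketch is that the guardrails live in the product's control flow rather than in a policy document, which is what makes the pattern adoptable at scale.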
Ankush Sabharwal highlighted the imperative of data sovereignty for high-stakes sectors such as defence and finance. He explained that clients demand full control over data, prompting the development of on-premise and edge AI appliances (e.g., the Vada GPT super-computer) that keep processing within the customer's premises [108-110]. Trust, for his clients, is built on near-perfect accuracy (99.9%), rigorous bias and hallucination checks, and the ability to opt in to data use rather than defaulting to it [111-118][141-148][165-168]. He also noted that compliance can be addressed through software-level APIs or hardware-level data control, allowing organisations to select the regulations they need without over-burdening probabilistic AI systems [108-110].
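The "compliance as selectable APIs" idea above can be made concrete with a small sketch: an enterprise picks only the regulation modules it needs, and each module contributes checks that run against a description of the AI system. Everything here is illustrative, the module names, rule names, and metadata keys are assumptions for the sake of the example, not a real compliance catalogue.

```python
# Hypothetical sketch of compliance delivered as reusable, selectable modules.
# Module names, rules, and metadata fields are invented for illustration.

from typing import Callable, Dict, List

# Each "regulation module" maps a rule name to a predicate over system metadata.
REGULATION_MODULES: Dict[str, Dict[str, Callable[[dict], bool]]] = {
    "eu_ai_act": {
        "human_oversight": lambda meta: meta.get("human_in_loop", False),
        "transparency": lambda meta: meta.get("explainability", False),
    },
    "data_sovereignty": {
        "on_prem_or_edge": lambda meta: meta.get("deployment") in ("on_prem", "edge"),
    },
}

def assess(system: dict, selected: List[str]) -> List[str]:
    """Return the failed rule names for the selected regulation modules only."""
    failures = []
    for module in selected:
        for rule, predicate in REGULATION_MODULES[module].items():
            if not predicate(system):
                failures.append(f"{module}.{rule}")
    return failures

# A client opts in to exactly the regimes it must satisfy; unselected
# modules impose no burden on the system at all.
defence_bot = {"deployment": "edge", "human_in_loop": True, "explainability": False}
print(assess(defence_bot, ["eu_ai_act", "data_sovereignty"]))
# -> ['eu_ai_act.transparency']
```

The design choice this illustrates is that compliance becomes additive infrastructure: a deployment only pays for the checks it has selected, which matches the panel's point about not over-burdening probabilistic systems with every regime at once.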
During the audience Q&A, a participant asked about the emerging small language models (SLMs) versus large language models (LLMs). Karna responded that enterprises typically start with powerful LLMs to accelerate value creation, then transition to SLMs when latency, cost or data-sensitivity considerations become paramount [191-193].
The panel converged on three recurring themes: (1) trust as the foundational value of AI, (2) the necessity of embedding governance, observability and built-in trust layers directly into AI products, and (3) the promotion of the RAISE Index as a unifying, iterative assessment tool [12][24-27][151-169][202-208]. Different perspectives were offered on how trust is delivered (cloud-native TrustLayers versus on-premise sovereign appliances) and how compliance might be realised (software-level APIs versus hardware-level data control) [112-118][108-110][151-169].
In closing, Kazim Rizvi thanked the participants, reiterated that the RAISE Index represents India's first responsible-AI readiness tool, and urged the audience to adopt it to embed responsible AI by design [202-208]. He called for continued ecosystem collaboration among technologists, policymakers, think-tanks and startups to ensure AI delivers societal benefits without unintended harms, and announced further Dialogue-led policy conversations on AI governance [209-211].
Overall, the discussion mapped a roadmap from high-level principles of safe, ethical AI to practical mechanisms: co-created, evolving benchmarks; the first-of-its-kind Telangana Data Exchange; product-centric governance, observability, and built-in trust layers; sovereign data solutions; and a globally harmonised, phase-based assessment framework. Agreed-upon actions include publishing and iterating the RAISE Index, expanding the Telangana Data Exchange, open-sourcing compliance APIs, and pursuing a global compact on responsible AI to align standards and prevent misuse [13][39-42][151-169][202-208].
Thank you. Good afternoon, everyone. I know it’s Friday afternoon, almost the end of a fantastic Global AI Summit. And good afternoon to my fellow distinguished panelists. I think the topic of this particular panel is probably the apt one to wrap up this Global AI Summit, because the most important arc in the innovation of AI is making sure the impact of AI is safe, responsible, ethical, inclusive, and explainable, right? And it has to be holistic at the end of the day. I think there’s a lot that we have learned over the course of this week, listening to a number of different thought leaders talking about how AI could be channeled in a manner where it delivers the intended impact without getting into unintended consequences.
I think there is a significant role that governments, innovation hubs, academia, and startups have to play in developing this safe and ethical AI, right? Starting with: benchmarks must emerge from deployment reality, not just research labs. Safety benchmarks fail when developed in isolation; the most effective ones come from institutions building, deploying, and maintaining AI at scale, right? Government innovation hubs sit at this critical intersection between policy intent and operational reality, surfacing failure modes and trust gaps. The second most important element in this framework is to ensure these safety benchmarks are co-created with the industry, with academia, and with the research institutions. ICOM and The Dialogue developed a one-of-its-kind index called the RAISE Index over the last year and a half that we have been working together, which is the first of its kind in quantifying the impact of AI, during development and within deployment, on the safety and responsibility matrix.
And this is, you can see up here on the screen, the QR code. You can scan the QR code and you’ll get access to the entire framework, and you could even test your respective AI solutions or AI systems, whether you are still developing them or already have them in production, against it and see what the index comes back and tells you. The third is making benchmarks practical. In Telangana, we have launched the Telangana Data Exchange, which is a first-of-its-kind digital public infrastructure within the realm of AI. It provides startups access to government data sets in a sandboxed environment. This is where benchmarks get validated and time tested. Startups can test their AI systems against actual data, actual use cases, actual constraints before deployment.
The fourth is, we all understand and recognize that startups move at a rapid pace. So when startups are deploying AI solutions, there are a number of risks that emerge. And we are providing this index again as part and parcel of the whole startup ecosystem that we are building. As a result, we expect them to detect any early warning signs within this framework and continue to improve. The last is, benchmarks and frameworks must be living infrastructure, not static checklists, right? AI capabilities evolve faster than regulatory cycles; static benchmarks become obsolete. Hubs must institutionalize continuous benchmark evolution. The RAISE Index methodology includes phase-based assessment, ensuring benchmarks remain relevant to company maturity stages. So if you take this broader framework of making sure AI systems are safe and responsible and ethical, the question comes down to: how is India leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI?
What is interesting is that India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed for high-resource, homogeneous environments. India operates in the context that most of the developing world shares: multilingual populations, infrastructure constraints, massive scale, and the imperative to serve both economic growth and social inclusion. This is not a limitation; this is a significant competitive advantage that India has in shaping the global standards. Number two is demonstrating responsible AI in high-stakes, high-scale deployments, which we are offering. ICOM, the first-of-its-kind AI innovation entity out of Telangana, with its research and co-innovation pillar, helps build AI solutions for healthcare, agriculture, climate, and financial inclusion, where failures have immediate societal impact.
When we document how these systems are designed, tested, and governed, we contribute frameworks that have been validated under real-world complexity, not just lab conditions. This particular RAISE Index is India’s contribution to global standardization. You will notice, the more you dig into this index, that it harmonizes requirements across leading global frameworks, be it the EU AI Act, the NIST AI Risk Management Framework, the Singapore MAS guidelines, or UK AI Assurance. We brought it all together into a single portable assessment. Organizations operating in multiple markets can use one assessment to evaluate alignment with diverse regulatory expectations. The methodology is open and adaptable for other jurisdictions. And I would leave you with a last but very important point: institutionalized continuous learning in responsible AI practice, right?
Most frameworks are static standards. ICOM believes in creating systems with ongoing feedback: tracking system performance over time, updating benchmarks as models evolve, incorporating new research. And the RAISE Index is designed as an iterative framework. What we are releasing today is the first edition, and it will continue to evolve through pilot phases and stakeholder consultation. It’s not a one-time standard. We all know AI is an evolving technology, and this has to evolve, but our intent, goal, and hope is that this would keep pace with the pace at which the technology is moving. That is very critical, and that’s a common responsibility that we all hold, be it technologists, be it policymakers, be it think tanks, be it researchers or startups. We all have to come together as an ecosystem to ensure the technology that we put out there with the intent of benefiting society does exactly that, without any unintended consequences. So I think we are up for a fantastic panel, and you absolutely will enjoy the conversation that is going to be held now.
Thank you.
Thank you so much, sir, for setting the context. I think that sets the perfect context for us to pick up the conversation from there, which is what we are discussing today: reimagining responsible AI. What we are trying to do in this panel is to understand what shifts are needed when it comes to responsibility amid evolving innovations, and how we can move the needle forward on responsibility. I would like to start with Ms. Arundhati Bhattacharya here. Thank you so much, ma’am, for taking the time. It’s absolutely a pleasure to host you.
And the first question is to you, ma’am: as a global enterprise leader, how do you see the balance between rapid AI innovation and the need for trust, accountability, and customer protection as well? How do you see that balance?
So, you know, in the company that I work for, Salesforce, we started our AI journey in 2014. And in 2014, we also set up within the company an office for the humane and ethical use of technology. So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market. Because we realized very early on that while technology and AI could give us many advantages, it would also be used by bad actors for doing things which it was never intended for. And that is true of every single thing that, you know, we come up with. Whether it be a new medicine, whether it be nuclear energy, whether it be anything that we come up with, it can have its good use.
It can also be used for the wrong reasons. And that is something that we must come together in a global compact in order to defeat and in order to stop. Again, this has to be a global compact. It’s not something that one country or one organization or one effort can probably ensure. Because unless and until we have sufficient transparent information exchange, unless and until we all say together that this is not something that we will allow, it would be very difficult for us to stop the bad actors. It’s not easy. Today you see the kind of deep fakes that are there, stuff that we never thought of in our childhood, families having safe words amongst themselves.
It’s not something that was there at all. But today, in fact, I was asking a colleague from the US. And he was saying, yes, we do have a safe word in the family because we don’t know when somebody is going to get a call that’s going to sound like me. And it’s going to say that I’m in the hospital and I need so much money. Please come and get me. And it might be somebody entirely different trying to scam you. So we do have safe words. Now, imagine the extent to which we have gone, where we are having to teach children that these are the ways that you can be sure and you can be safe.
Now, this is not something that we want, because obviously, AI is also something that can speed up things like medical research. It can actually speed up skilling. It can speed up many things which enable us and empower us to come up to potential. So a technology this powerful should not and cannot be stopped because bad actors are misusing it. And therefore, it’s up to all of us to come up with a framework. A global compact, again, as I say, a framework that will enable us to ensure that we are all of us together trying to stop the bad actors and ensuring that this is being used for the good of humanity.
Excellent point, ma’am. I think a very interesting aspect is your starting remark about putting together an office on the humane aspect, which shows that it’s not only the technical side that can solve the problem when we talk about responsibility; it’s also organizational ethics and organizational ethos which bring that essence to it. And a great submission on the global compact; I think that’s something we should all strive towards, and I hope the summit will kickstart that process for us as well. I’ll come back to you, ma’am. I know you have a hard stop, but I will come back to you for one more question. But now I would like to go to Karna here.
Thank you so much, Karna, for joining. We did hear from ma’am how larger organizations are looking at this. But I would like to pick your brains: as a startup and an MSME, what are the operational challenges that you face when you are trying to balance this equation of responsibility versus innovation? And you are also looking at it with foresight and at new technologies, so any thoughts there would be helpful.
Our approach is to make AI technology, which comes with a lot of power, a bit more enterprise-software-ish in terms of compliance, governance, and observability. That’s what we do. The way we believe is: if governance looks like a 200-page PDF that all companies and MSMEs have to figure out, we will see them struggle. Our idea is that it should be part of the core product. As a lot of us are building solutions for customers, governance should be the core product; we productize it, and that allows mass adoption. And the way we do it: just writing governance into the prompt is only the first line of defense.
It should be a core part of the entire agentic lifecycle. Which means: at the time you’re giving it an input and it’s reasoning, there are guardrails it checks. Before it does some tool calling, which is like, “hey, I’m going to write to the CRM” or “I’m going to talk to one of your customers on this topic,” there are again guardrails before that. And even when you do an output, there needs to be a guardrail, and the guardrails should be part of the core product. That is important to drive mass adoption. Secondly, the way we think, knowing we build voice agents for companies, we still believe human-in-the-loop is a first-class feature, not a failure point. Which means you should design the system so that, in the intent to give an answer, it doesn’t give wrong answers; it’s okay to figure out when it should transition from a fully autonomous agent to an assisted agent to a human. Those principles of using humans in the right place should be at the core of our product. And that productization has allowed us, we also have another company now, which is a hiring platform, which serves around 3 lakh companies.
Now, because what we saw, beautifully, when we productized a lot of this: every year, every month in fact, 3,000 MSMEs are building voice interview agents on their own. They’re not even realizing, because we have productized it, that at the back of it there are three agents they are creating and training for their recruiting process, and they’re deploying it within a matter of five minutes. That has driven adoption by 30,000 companies who are doing it on their own. And if we want all of India, all companies, to leverage it, then the more we, as agent-based software builders, productize it, the better the adoption will be.
That’s an excellent point, right? I think this is something we also keep speaking about: productization of responsible AI from a value-proposition perspective. How can responsible AI be embedded as a value proposition in the product that you’re building, which also becomes one of its selling points? That’s a great, great point. I’ll definitely come back to you, but I would like to go to Ankush, and then I’ll come back to ma’am again. Quickly, Ankush: you build AI systems, so what are the governance challenges you see, and how do they differ between the public and private sectors?
Yeah, I think one is control. When it’s about sovereign AI, it’s not just data residency which matters to our clients; they want complete control. No other government, no other party should be able to even see that, sniff that, audit that. That is something our clients ask for, and that’s why, though we work with almost all the cloud providers, we let the decision be with our clients as to which data center they want us to use. And now we see huge demand for on-premise solutions, and even a need for edge AI. The day before yesterday, with NVIDIA, we launched the Vada GPT desk AI appliance. That’s a supercomputer itself: it processes around one petaflop of floating-point instructions, with a 4 TB hard disk, and it can run a model with one trillion parameters. Huge, right? But our Vada GPT model is just half a billion parameters, which means they can run multiple models, multiple use cases, on just one box. And we’ll be announcing that soon; we’re working with defence, and now there’s a huge need to have it not just in India, not just on premise, but just in the room, on the desk, when the army is doing critical meetings, so they don’t want the data to even go out of the room, but with complete processing, complete sovereignty. And they also don’t want to limit the use cases: they want to start with minutes of meetings, and then the aspirations keep increasing, so we needed to have a supercomputer; thanks to NVIDIA, who is powering our box there. I think that is the major part; the rest we all know about: explainability, inclusivity, privacy, and purpose. And that’s why many, many data centers are coming up in the country; there is a need to have our own data centers here.
That’s excellent. I think what you’re trying to underline is trust in the solutions, and that’s coming through the sovereignty of the data: the more control they have over it, the more trust there is.
That’s correct. So now our tagline is “AI with purpose and trust.” Trust is of course important for any relationship, like a vendor relationship, but with AI, trust is even more important, because they are trusting us; they are giving us data to create the models. That’s why many new companies are coming up, and of course I thank and welcome them to the table, but I think the old players are still being valued, so the work is still concentrated here, though the deliveries are taking time and all that. There is definitely a need now, and my message to all the new AI startups is: yes, you have to keep showing innovation, but show the trustworthy part of it. What was said about observability is very, very important. Enterprises want trust, scale, and security more than innovation. I’m not saying don’t innovate, but the trust part is very important, especially when AI comes in.
That’s a great, important submission. But ma’am, over to you; I think you have to leave in five minutes, so are there any closing remarks you would like to provide?
No, the one thing that I wanted to talk about was trust, because that’s what was being discussed. Trust in Salesforce: trust is our number one value. We have five values. The first is trust; the second is customer success, followed by innovation, equality, and sustainability. But trust is definitely number one. Now, having said that, we are number one in trust; we are also a cloud-native company. So we do not have on-prem systems. And we also believe that it is important for us to adopt asset-light models, mainly because today the need for storage and compute is so high, given that AI is able to handle trillions and trillions of data points.
And the more data points you have, the better your answers will be. Of course, not for everything; you don't need to boil the ocean for every single thing. But where there are really deep questions that will benefit from the diversity and extent of the data, it is very important for us to have the right kind of compute and storage facility. Obviously, if you're going to have that kind of storage and compute entirely on-prem, it also means a pretty high amount of investment in hardware, and India is not very well known for having deep pools of resources. So given that we necessarily have to have asset-light models, it's important for us to find ways and means of ensuring logical security and trust.
And there are several ways of doing this. One of the reasons why, by the way, we were behind Copilot in bringing our enterprise-level offerings to the market was that we were working very hard on the trust layer. Because the trust layer is not only about access. It's also about ensuring not only that your data doesn't go out, but also that your data doesn't have any toxicity, that your data doesn't have bias, and that the model is not hallucinating. And by the way, the bigger the amount of data, the greater the tendency to hallucinate. Obviously, you don't want something as important as this to hallucinate and give you a wrong answer.
So the Trust Layer actually performs a number of these actions, all meant to ensure that the results that come out are not only responsible but trustworthy. Thank you.
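Editor's note: the trust-layer idea described above, where prompts and responses pass through checks for data leakage, toxicity, and ungrounded answers before reaching the user, can be sketched roughly as follows. This is a minimal illustration with toy check logic, not Salesforce's actual Trust Layer implementation; every pattern and threshold here is an assumption for demonstration.

```python
# Sketch of a trust-layer style guardrail pipeline. The PII patterns,
# toxicity word list, and grounding heuristic are illustrative assumptions.
import re

def mask_pii(text: str) -> str:
    """Redact simple PII patterns before the prompt leaves the trust boundary."""
    text = re.sub(r"\b\d{16}\b", "[CARD]", text)              # card-like numbers
    text = re.sub(r"\b[\w.]+@[\w.]+\.\w+\b", "[EMAIL]", text)  # email addresses
    return text

def toxicity_flag(text: str) -> bool:
    """Toy stand-in for a toxicity classifier."""
    return any(word in text.lower() for word in ("hate", "slur"))

def grounded(response: str, sources: list) -> bool:
    """Crude hallucination check: most response terms must appear in sources."""
    terms = set(response.lower().split())
    corpus = " ".join(sources).lower()
    overlap = sum(1 for t in terms if t in corpus)
    return overlap / max(1, len(terms)) > 0.5

def guarded_call(prompt: str, model, sources: list) -> str:
    """Mask the prompt, call the model, and block untrustworthy responses."""
    safe_prompt = mask_pii(prompt)
    response = model(safe_prompt)
    if toxicity_flag(response) or not grounded(response, sources):
        return "[blocked: response failed trust checks]"
    return response

# Demo with a stub model standing in for an LLM call.
stub_model = lambda prompt: "refund policy allows cancellation within 24 hours"
print(guarded_call("email me at bob@example.com about the refund policy",
                   stub_model,
                   ["refund policy: cancellation allowed within 24 hours of booking"]))
```

The design point is that the checks wrap every call by default, rather than being applied after an incident.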
And we created it. We launched it when we saw, and I'm still not saying we are 100% safe, but we've seen that the world is now okay with some inaccuracy. We are a bit risk-averse; we are not big risk-takers, even when the whole world was okay with it. Because of the clients we have, you see: IRCTC, LIC, NPCI, and the Army and defense, they used to expect 99.9% accuracy. When the whole world was okay getting wrong answers from these general-purpose LLMs, our clients got more convinced, and most of them came on board before the ChatGPT days, so that was classic NLP. I liked your point that we don't have to answer everything; guardrails are really important.
But now most of our clients have gone to Gen AI. Though not only Gen AI: what we do is composite AI. We still follow classic NLP-based intent classification and entity extraction for conversations. You would not believe it, but 80 to 90 percent of our interactions happen on classic NLP without Gen AI, because we are not all that different from each other. Say, on IRCTC, four million people come; if I open the dashboard, there are only eight to ten intents: people have to book, cancel, change boarding station, and so on. So for 80 percent of use cases, if someone says "I want to travel from Bangalore to Delhi tomorrow," there is no Gen AI involved. The NLU is involved; that old model works, it just calls the API and gets the data. No Gen AI. If someone says, "Hey, I have three pets, how do I do this?", well, if it is one pet, that is a policy we know; but "I have three pets, can I carry them in my train?", that answer is probably not there in classic NLP, and for that you do the RAG-based approach with BharatGPT. So I think if safety is important, it should be at the core of the design, and then composite AI: don't do just Gen AI because Gen AI is easily available, and don't use Gen AI just because you have money to buy GPUs and burn the tokens. The idea is purpose-led innovation: begin with the end in mind. I have said this line I think ten times today. First see what problem you are solving, then which solution, then which model. If a model is available, use the available model; if not, build it.
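Editor's note: the composite-AI pattern described above, routing high-frequency, well-known intents through classic NLU and reserving generative retrieval for the long tail, can be sketched roughly as follows. The intent names, keyword matching, and confidence threshold are illustrative assumptions, not CoRover's actual implementation.

```python
# Sketch of a composite-AI router: classic intent classification first,
# generative RAG only for queries no deterministic intent covers.
# Intent names, keywords, threshold, and handlers are illustrative.

KNOWN_INTENTS = {
    "book_ticket": ["book", "travel from"],
    "cancel_ticket": ["cancel"],
    "change_boarding": ["change board", "boarding station"],
}

CONFIDENCE_THRESHOLD = 0.7

def classify_intent(query: str):
    """Toy keyword-based stand-in for an NLU intent classifier."""
    q = query.lower()
    for intent, keywords in KNOWN_INTENTS.items():
        hits = sum(1 for kw in keywords if kw in q)
        if hits:
            return intent, min(1.0, 0.6 + 0.4 * hits)  # crude confidence score
    return None, 0.0

def handle_query(query: str) -> str:
    """Route to a deterministic handler when an intent matches; else fall back to RAG."""
    intent, confidence = classify_intent(query)
    if intent and confidence >= CONFIDENCE_THRESHOLD:
        # Deterministic path: call the backing API directly, no LLM involved.
        return f"[classic NLU] routing to handler for '{intent}'"
    # Long-tail path: fall back to retrieval-augmented generation.
    return "[RAG fallback] answering from policy documents via LLM"

print(handle_query("I want to travel from Bangalore to Delhi tomorrow"))
print(handle_query("I have three pets, can I carry them on the train?"))
```

In this shape, the common 80 to 90 percent of traffic never touches a generative model, which is exactly the cost and safety argument made above.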
That's an excellent point. Thank you so much, Ankush, for making the time. Quickly moving to Karna: any closing remarks, and also anything you want to add to your previous point?
Yeah, so to the point Ankush was mentioning: AI technology is fundamentally designed on a probabilistic model, and we are all used to software working in a deterministic manner, where it has to do exactly this. Now, when it comes to large processes in large enterprises, I think compliance is one area which is super hard to think about. AI is probabilistic, but compliance you always want to be correct. So to enable the ecosystem, what we believe in is converting compliance into APIs. What I mean by that is: we're deploying a voice agent in one of the large mutual fund houses, and all the compliances for that industry are checkboxes.
So every company can pick what compliances they need; they just take the APIs for the rules they want to enforce, and that makes the entire ecosystem flourish. These APIs should ideally get open-sourced in the market, so there is enough validation across all players that, hey, this SEBI guideline is an API which you can invoke in your agent, and the agent will follow it. And this gets pressure-tested. It takes away the burden of ensuring AI works 100% correctly in all use cases, which is not in the nature of the technology; if we don't think like that, we'll become very restrictive in its application. We work a lot on getting to P99 accuracy, but there is always a probabilistic chance of error.
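Editor's note: the "compliance as APIs" idea above, where each regulation becomes a reusable, independently testable check that an agent invokes on its draft output, can be sketched roughly as follows. The rule names and check logic are illustrative assumptions, not actual SEBI guideline text.

```python
# Sketch of compliance-as-APIs: each rule is a composable check an agent
# runs on its draft output. Rules and banned phrases are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ComplianceResult:
    rule: str
    passed: bool
    detail: str

def no_assured_returns(text: str) -> ComplianceResult:
    """Flag language promising guaranteed returns (illustrative rule)."""
    banned = ["guaranteed return", "assured return", "risk-free"]
    hit = next((b for b in banned if b in text.lower()), None)
    return ComplianceResult("no_assured_returns", hit is None,
                            f"found '{hit}'" if hit else "ok")

def has_risk_disclaimer(text: str) -> ComplianceResult:
    """Require a standard risk disclaimer (illustrative rule)."""
    ok = "subject to market risk" in text.lower()
    return ComplianceResult("has_risk_disclaimer", ok,
                            "ok" if ok else "missing disclaimer")

def run_compliance(text: str,
                   checks: List[Callable[[str], ComplianceResult]]
                   ) -> Tuple[bool, List[ComplianceResult]]:
    """Companies pick the checks their industry needs, like checkboxes."""
    results = [check(text) for check in checks]
    return all(r.passed for r in results), results

draft = "This fund offers guaranteed returns with no downside."
passed, results = run_compliance(draft, [no_assured_returns, has_risk_disclaimer])
print(passed)  # False: the draft fails both illustrative checks
```

Because each check is deterministic code rather than a prompt instruction, it can be pressure-tested and reused across every agent in the industry, which is the point made above.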
And the second point we should think about is that the human state of mind works well with defaults versus options. What I mean is that whatever is the default selection gets 80 to 90 percent adoption, and whatever requires a change gets the remaining 20 percent. So the way we think about it is that a lot of protections should be the default. Customer data should not be used by default to train LLMs or models; using it should be an optional add-on, rather than the other way around, which is what you often see. Because otherwise most startups, MSMEs, and businesses would ignore it, and the scale of innovation will not happen if protection is not the default state.
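Editor's note: the default-protective stance described above, where data is excluded from model training unless a customer explicitly opts in, can be sketched as a settings object whose safe values are the defaults. The field names are illustrative assumptions, not any vendor's actual API.

```python
# Sketch of default-protective settings: the safe behaviour requires no
# action, and sharing is the explicit, deliberate opt-in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TenantPrivacySettings:
    # Protective values are the defaults; tenants must opt in to change them.
    use_data_for_training: bool = False
    retain_prompts: bool = False
    share_telemetry: bool = False

def effective_settings(overrides: Optional[dict] = None) -> TenantPrivacySettings:
    """Start from protective defaults and apply only explicit opt-ins."""
    settings = TenantPrivacySettings()
    for key, value in (overrides or {}).items():
        if hasattr(settings, key):
            setattr(settings, key, value)
    return settings

# Most tenants never touch the settings, so they stay protected.
assert effective_settings().use_data_for_training is False
# Opting in is a deliberate, per-tenant action.
assert effective_settings({"use_data_for_training": True}).use_data_for_training
```

The asymmetry is deliberate: inaction leaves the tenant protected, which matches the observation that defaults capture most adoption.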
And lastly, explainability is extremely important, because as models are making decisions, how do you know why a decision was made? If we think of it as "oh, if something breaks, we will figure out how it works" rather than making it a core output of the API, you will not enable your partners to be decision-makers with you when you're designing AI solutions for them. So that's what we focus on: how do we make the technology P99-reliable for enterprises, and governance is the prime topic that comes up when we ask what the missing element for mass adoption is. That's something I want the entire ecosystem to embrace.
Can we make it an API? Can compliance and governance be more of an infrastructure than a paperwork exercise? Because if it stays paperwork, then we're going to see slower adoption in India than in other parts of the world.
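Editor's note: the idea of explainability as a core API output, raised just above, can be sketched as a response schema in which every decision ships with the factors behind it. The decision rule, thresholds, and field names are illustrative assumptions, not any vendor's actual API.

```python
# Sketch of explainability as a first-class API output: the explanation
# travels with the decision instead of being reconstructed after the fact.
# The toy rule, thresholds, and schema are illustrative assumptions.
import json

def decide_loan(applicant: dict) -> dict:
    """Toy decision with its explanation built into the response."""
    reasons = []
    score = 0
    if applicant["income"] >= 50_000:
        score += 1
        reasons.append("income >= 50,000 threshold")
    if applicant["defaults"] == 0:
        score += 1
        reasons.append("no prior defaults")
    approved = score == 2
    return {
        "decision": "approve" if approved else "refer_to_human",
        # The explanation ships with every response, so partners can
        # review the reasoning rather than treating it as a black box.
        "explanation": {"factors": reasons, "score": score},
    }

result = decide_loan({"income": 60_000, "defaults": 0})
print(json.dumps(result, indent=2))
```

Making the explanation part of the contract, rather than an afterthought in the logs, is what lets partners act as co-decision-makers.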
That's a great point. Thank you so much, Karna. We have very few minutes left, and we have one panelist who has dedicated full time for us, so kudos to that. Opening up to the floor: any questions? I think we can take two questions, given the time frame. Any questions for Karna? Anybody? Yes. Hi, good evening. My question is related to small language models, which are becoming increasingly popular within the developer community. For businesses like yours, do you see a profitable path ahead for SLMs, or do we continue depending on these LLMs, which I think will be a race to the bottom?
Yeah, great question. We think about it a lot, and a lot of our customers ask: would you be using an SLM, or will you use an LLM? I think where we are now, we all benefit from the flexibility of LLMs, because frankly most companies are deploying their first or second actual large-scale deployment. It is helpful to leverage the power of the larger models at that stage, and over time you learn what is actually needed and can transition from an LLM to an SLM, where you gain an advantage in sometimes latency, sometimes cost, depending on what your use case optimizes for. But in the interest of speed of innovation, it's okay to just use an LLM, figure out where the value is coming to your business, and then explore the journey to an SLM, which can give you additional advantages. Thank you.
Anyone else? Awesome. So thank you. I would now request Sarj to take it over.
Thank you so much, Kamesh, and thank you so much to all of our panel members. I think it's been a really interesting discussion on where responsible AI is now and on its future. I'll now call Mr. Kazim Rizvi, the founding director of The Dialogue, to give the closing remarks for the session. Kazim?
This works, this doesn't work... I think this mic works. Yeah, okay, great. Thanks a lot, Sahish, and thank you, Kamesh. Thank you to all those who stayed back till now. I think we are crossing the limit of event fatigue; I know a lot of us are quite tired and exhausted from too many events. But the last one week has been fantastic, and we've had the pleasure and honor of hosting a few events over that time. Specifically on responsible AI, as Fani mentioned at the beginning, The Dialogue and ICOM have developed India's first tool to assess responsible-AI readiness, so we urge and encourage all of you to look into that.
But thank you, Kamesh, for moderating, and thank you to all our speakers for joining in. It's important that we all work towards building responsible-AI practices from the beginning, by design; that's something the tool will encourage, so please have a look at it. All of you, have a good evening for what is left of the AI summit. It's been a fantastic summit, and hopefully we all got to learn a lot; I certainly did. I look forward to seeing you all soon. The Dialogue will be hosting multiple conversations on AI policy, and we encourage you all to join. Until then, have a good evening, enjoy your weekend, and thank you to all our panelists again. Thank you.
“Benchmarks must emerge from deployment reality rather than isolated research labs.”
The knowledge base states that benchmarks should emerge from deployment reality and not just research labs, confirming the claim [S2].
“The Telangana Data Exchange is a first‑of‑its‑kind digital public infrastructure within the realm of AI that gives startups sandboxed access to government datasets for validating models.”
While the knowledge base does not mention Telangana specifically, it discusses India’s approach of treating AI as a shared public infrastructure, which adds context to the claim about a sandboxed data exchange for startups [S84].
The panel shows strong convergence on three pillars: (1) trust as the core value of AI, (2) the necessity of embedding governance and responsible‑AI safeguards directly into products and keeping them alive through co‑creation and continuous evolution, and (3) the promotion of the RAISE Index as a unifying, iterative assessment framework. These shared positions cut across enterprise, startup, and policy perspectives, indicating a high level of consensus on how to operationalise responsible AI.
High consensus – the alignment across diverse stakeholders (large corporations, startups, policy‑makers) suggests that future initiatives are likely to focus on trust‑centric product design, living benchmark ecosystems, and the adoption of the RAISE Index, which could accelerate coherent global standards and practical implementation.
The panel largely concurs on the importance of responsible, trustworthy AI and the need for collaborative standards. Disagreements cluster around implementation pathways: cloud‑native trust layers versus on‑premise sovereign solutions; software‑integrated governance APIs versus hardware‑centric data control; and risk‑averse accuracy‑first approaches versus rapid, product‑centric innovation.
Moderate – while foundational goals are shared, the divergent technical strategies could hinder the formation of unified standards unless a flexible framework accommodates both cloud and on‑premise models. The implications are a need for hybrid benchmark designs that recognize multiple trust architectures and for policy that allows both approaches to coexist.
The discussion was shaped by a handful of pivotal remarks that moved the conversation from abstract principles to concrete, implementable solutions. The Moderator’s opening call for real‑world benchmarks set the stage, while Arundhati’s early description of an internal ethics office and the call for a global compact broadened the scope to international cooperation. Karna’s product‑centric view of embedding governance directly into AI systems and converting compliance into APIs offered a practical pathway for mass adoption. Ankush’s focus on data sovereignty introduced a technical trust mechanism that complemented the earlier governance ideas. Together, these comments created a progressive narrative: starting with the need for grounded standards, moving through organizational and global frameworks, and culminating in actionable engineering approaches that address trust, compliance, and scalability. This sequence steered the panel toward actionable outcomes, such as the promotion of the RAISE Index and the emphasis on building living, adaptable governance infrastructures.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.