Scaling Enterprise-Grade Responsible AI Across the Global South

20 Feb 2026 18:00h - 19:00h

Session transcript

Sunita Mohanty

Thank you very much. Thank you everyone, and thank you to our esteemed panelists and to everyone who has come here braving the traffic. I know it’s the tail end of the AI Impact Summit, and as people were saying, they’ve heard so much about AI this week that they could decompress for the next month and not hear about it again. But we can’t wish it away, because it’s a very significant part of our lives. I’m Sunita Mohanty, Managing Director at Primus Partners, and it’s a pleasure being here. We started with the inaugural session on the 16th and we are ending with a session today, so it’s a very significant moment for us to be here. We have a really good set of panelists, so I’m going to start with you, Babak.

We’ve been attending a lot of sessions where people are talking about what is real in AI. Today’s topic is how we connect India and the Global South, and what guardrails we can build here. As Chief AI Officer at Cognizant, you’re seeing how AI is impacting real life, and enterprises are really moving to AI delivery architectures and operating models in mission-critical infrastructure like banking and healthcare. From your point of view, what guardrails and trust frameworks are organizations creating to make sure these systems are safe? And what would your advice be for India and the Global South on what kinds of frameworks should be adopted?

Babak Hodjat

Yeah, AI is real, and both the promise and the risk are real, so guardrails are needed. We can’t fall off either ledge: either mistrusting AI to the point where we’re debilitated by having a human rubber-stamp every single step, or the other way, thinking it’s some magic pixie dust that you just pour over your organization, turn it on, and it’s AI-enabled. So guardrails are important, and there are different ways to build them; there’s no panacea for ensuring the safety and reliability of these systems. One of the biggest risks is the notion that, because AI systems respond and reason very well after one or two reasoning steps, we can allow them to reason continuously. They do make mistakes, even very trivial mistakes, after several hundred reasoning steps.

We’ve been here before, for example with telecommunications, where a bit might flip when a truck is driving down the road. We know how to error-correct through redundancy and other means; we know how to engineer systems that are reliable. Those engineered systems might require a human in the loop or on the loop, for sure, but also agents in the loop and on the loop: checking other agents’ work, assessing uncertainty in an agent’s output, and deciding not to take that output at face value. We take the output, along with the agent’s own measure of certainty in it, as the basis for deciding whether or not to bring a human in.
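The uncertainty-gated escalation Babak describes can be sketched in a few lines. This is an illustrative toy, not Cognizant’s implementation; the function names and the 0.8 threshold are assumptions:

```python
def route_step(agent_output: str, confidence: float, threshold: float = 0.8):
    """Route one agent output: accept it, or escalate to a human reviewer.

    `confidence` is the agent's own (or an external critic's) certainty
    estimate; below `threshold` we do not take the output at face value
    and instead bring a human into the loop.
    """
    if confidence >= threshold:
        return ("accept", agent_output)
    return ("escalate_to_human", agent_output)


def run_chain(steps):
    """Check confidence at every reasoning step, rather than letting the
    agent run unsupervised for hundreds of steps."""
    for output, confidence in steps:
        decision, payload = route_step(output, confidence)
        if decision == "escalate_to_human":
            return decision, payload  # halt the chain and ask a person
    return "accept", steps[-1][0]
```

For example, a chain whose second step is low-confidence halts there instead of compounding the error over later steps.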

So these are just some techniques, but there is a multitude of techniques that can be used. There is also, increasingly, the issue of agentic identity. When you’re building a system fully in-house for your own use, you pretty much have control over the agents: you know which agent is talking to which agent, and they’re all built in-house. But increasingly we’re moving into a world where you have agents from third parties, maybe another business, maybe your consumer represented by an agent in a B2C setting, coming in and talking to your agents. How do you assess and determine the identity of such an agent? We don’t really have well-established standards for that just yet.
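As a toy illustration of the agent-identity problem, here is a minimal signed identity token using an HMAC over the agent’s claims. This is a hypothetical sketch with an invented shared secret, not the A2A protocol or any established standard; real deployments would use per-partner keys or a PKI:

```python
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"demo-secret"  # assumption: a pre-shared key, for illustration only


def issue_agent_token(agent_id: str, issuer: str) -> dict:
    """Issue a token asserting which agent this is and who vouches for it."""
    claims = {"agent_id": agent_id, "issuer": issuer, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify_agent_token(token: dict) -> bool:
    """Recompute the signature over the claims; any tampering breaks it."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

A receiving agent would verify the token before trusting the caller’s identity; an attacker who rewrites `agent_id` without the key fails verification.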

I know our friends at Google are working on that with A2A, and there are other standards coming out, but it’s still not well established. So there are risks external to your agentic systems as well. I’ve just listed a whole bunch of different areas. When it comes to India, I know there’s talk about, for example, building these systems within India, like sovereign LLMs to back the agentic systems. Regulation does play a part. Again, there’s a risk of over-regulating versus under-regulating; it’s important not to fall off the ledge on either side. I have opinions on that too, but I’ve realized I’m talking too long, so I should stop.

Sunita Mohanty

Thank you, Babak, really good points. I know you have a very difficult job at Cognizant. You mentioned keeping agents and humans in the loop, and over this week we also heard a lot of people talk about putting humans at the center of everything you build, which is amazing. This morning we also heard about regulation versus innovation: the US is at the innovation end, Europe is at the regulation end, and where do India and the Global South stand? So with that, I’ll move to the academic point of view with you next, Anupam. Much of responsible AI research still assumes that data is clean and infrastructure is stable, but that’s not true of the Global South, where we operate with very heterogeneous data, intermittent compute access, and multilingual environments. From a research perspective, what new technical directions, such as hardware-aware AI, robust architectures, and evaluation models, are needed to make AI truly trustworthy?

Anupam Chattopadhyay

I think this is a very important question. We do see the scale and pace of innovation; things are moving at such a high rate that it’s not always easy to take a back seat and treat research as a standalone component that matures and then goes to industry. People are releasing tools, and things are going out of hand very quickly. For that reason, we find it’s always good to keep the research very grounded and test the waters with real-world scenarios. One example I’ll pick up is a spin-off we had from a research group working on deepfake detection. There we face exactly these problems: the models we begin training with show very poor results when we test subjects from the Global South, whether for images or for audio, or when we put the audio in circumstances with a lot of noise.

Because the models were tested on very clean data, detection accuracy fails in noisy conditions, and that’s a huge concern, because people are not always educated about this. There is already a digital barrier, and on top of that an AI barrier is coming up, which is making people fall prey to a lot of cyber scams very easily. That’s the bad side of AI that we are observing, and we are trying to defend against it. One of the technologies we bring in is creating synthetic datasets: we apply tunable noise addition on top of the data. Another is collecting as much data as possible, say by scraping the internet, but for deepfakes that brings a different problem: you see a video or an image and, from a human point of view, it’s not even discernible whether it’s a deepfake; it looks so original. So we had to create a separate automatic fact-checker that looks for news linking that image to something; only when the news comes from a trustworthy source do we call it an original image, and otherwise it is flagged as fake.
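The tunable noise addition Anupam mentions can be sketched with a target signal-to-noise ratio as the tuning knob. The function and parameter names are illustrative assumptions, not the group’s actual tooling:

```python
import random


def add_noise(samples, snr_db: float, seed: int = 0):
    """Corrupt clean audio samples with Gaussian noise at a target SNR (dB).

    Sweeping snr_db during training data generation exposes the detector
    to conditions from near-clean down to heavily noisy, instead of only
    studio-quality audio.
    """
    rng = random.Random(seed)  # seeded for reproducible augmentation
    signal_power = sum(s * s for s in samples) / len(samples)
    noise_power = signal_power / (10 ** (snr_db / 10))
    scale = noise_power ** 0.5  # standard deviation of the added noise
    return [s + rng.gauss(0, scale) for s in samples]
```

Lower `snr_db` values produce harsher corruption; a training sweep might generate copies of each clip at, say, 0, 10, 20, and 40 dB.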

So that is the data collection issue. When it comes to implementation, of course not everyone has access to high-performance computing, and we have to cut the data and the models back to the bare minimum. There we resort to techniques such as mixture of experts, where there are different models with different detection capabilities and we put them together. Sometimes the models are proprietary and we want to take one from a particular vendor and merge them, or an organization has its own contextual model but doesn’t want to share the model as-is. For that we have techniques like federated learning for merging models while still guaranteeing that their training data or their models will never be leaked.
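The federated merging Anupam describes can be illustrated with a FedAvg-style weighted average of parameter vectors. This is a simplified sketch: real federated learning also involves repeated local training rounds and, often, secure aggregation so individual updates stay private:

```python
def federated_average(client_weights, client_sizes):
    """Merge model parameter vectors with a data-size-weighted average.

    Each client contributes only parameters, never raw training data, so
    proprietary or contextual data stays with its owner.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    merged = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            merged[i] += weights[i] * (n / total)
    return merged
```

A client with three times the data pulls the merged model three times as hard toward its own parameters.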

So it’s privacy-aware by construction. We do have all the technologies and tools; that was just a short glimpse of them.

Sunita Mohanty

Thank you, Anupam. One of the things we are always discussing is that there is not enough data to train the models, which is why there was so much emphasis this week on building language models so that there is enough Indic-language data, as well as data from across APAC. The other thing is synthetic data, which is very important for keeping the data clean. One of the conversations we were also having is how to enable the creation of synthetic data in countries like India and the Global South by creating "AI in a box": very modular infrastructure available to students and researchers in a small, minimal environment so they can create some of this data.

Thank you. With that, Amod, I wanted to speak to you about AI infrastructure, given that you are in that business with SubMod, because infrastructure is now becoming central to the responsible AI debate as well. The IEA estimates that data center electricity demand will roughly double by 2030, and AI workloads are a key driver. How should enterprises and governments think about responsible AI, not just from a model-creation perspective but from an infrastructure, environment, and ecosystem perspective, in terms of cooling, energy transparency, and resilience?

Amod Kabade

From our point of view, responsible AI starts with the design of the infrastructure. If my data center design is sustainable, that is how I will be able to achieve my goals efficiently and sustainably. To do that, today we can leverage liquid cooling technologies, which minimize the overheads of cooling this infrastructure and allow us to scale AI rapidly for the betterment of people and the planet. As governments, I would say we need to get to a point where we define KPIs around energy consumption per token or water consumption per token for these massive infrastructures, and incentivize the players who actually achieve or beat those KPIs.
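The per-token KPIs Amod proposes are straightforward to define once power draw, water use, and tokens served are metered over the same reporting window. A minimal sketch, with illustrative function names and units:

```python
def energy_per_token_wh(power_draw_kw: float, hours: float, tokens_served: int) -> float:
    """Watt-hours of electricity consumed per generated token."""
    watt_hours = power_draw_kw * 1000 * hours
    return watt_hours / tokens_served


def water_per_token_ml(litres_consumed: float, tokens_served: int) -> float:
    """Millilitres of cooling water consumed per generated token."""
    return litres_consumed * 1000 / tokens_served
```

For example, a facility drawing 500 kW for 24 hours while serving 1.2 billion tokens lands at 0.01 Wh per token; a regulator could set a ceiling on such figures and reward operators who come in below it.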

Essentially, it is all about making these data center designs sustainable from a power-consumption standpoint and achieving the much better outcome we want in terms of AI and its scale.

Sunita Mohanty

That’s a good point, because Tanvi and I, and Babak, you as well, have all come from Davos, and one of the main conversations there was around energy and how to make it efficient. In one of the conversations at Bloomberg we heard about ROI and how to measure the cost of a query on the infrastructure. I hope that with renewable energy and efficient cooling systems we get better, more optimized query capabilities. With that, Tanvi, I wanted to move on to you, especially because you’re creating sovereign LLMs now with the Vatican as well as with New York City. Drawing from your experience leading AI transformations, how can deep-tech startups in critical sectors like BFSI build advanced functionality across complex regulatory systems? And what’s your sense of the definition of sovereignty, given that it’s a very loosely used term we’re hearing a lot this week?

Tanvi Singh

Thank you, Sunita, and thank you everyone for having me here. I’m very happy and excited to come back to my homeland, to Delhi, from Zurich, and the conversation has been very enlightening. We call what India is hosting now with the AI Impact Summit "Davos 2.0". So thank you for having me. The question is super loaded; I could go on for multiple hours, but let me pivot into the conversation on ROI. I come from a banking background, having worked for more than a decade in Swiss banking, and the conversation is always: if you’re putting in an investment, what’s the return on it? Whether we’re talking about the use of LLMs or frontier models, we all know what the return on investment for the consumer is.

I think this is the first technology that touched consumers first, and enterprises and governments later; we always had technology touch enterprises and governments first and consumers later. Since the equation has turned lopsided, there are lots of factors that go into ROI. Going back to sovereignty, I think President Trump has really done the marketing and sales for sovereignty, and everybody fends for themselves, whether it comes to defense or to owning your own infrastructure, your data, and your intelligence and cognition, which we call models. But one factor always resonates, coming from banking: if you cannot control, and are not accountable for, what you present, you will never pass the regulatory bar on using the technology.

We had this very famous team called Model Risk Management, and we used it for AI/ML for the longest time; anybody in banking would resonate with that, and similarly for healthcare and other regulated industries. With the use of LLMs, and I had the opportunity to work very closely with OpenAI, Microsoft’s primary partner, during my time at UBS, we have the entire world’s data in ChatGPT and all the other LLMs. There’s no way we can guarantee what output the system is going to throw at you. So control over cognition and intelligence is as important as control over infrastructure; that is paramount, and it gave birth to what we’re now building at ECTA, which is the domain-specific model.

It doesn’t get trained on open-source data. It’s not American-first or Chinese-first or French-first; we’re talking a lot about France and Mistral. It’s your own model. From a sovereignty perspective, it’s important that we can build our own models where data is not a constraint: you can use your own content, your own organization’s or government’s data, in your native language, with no translation required. And you can apply it to multiple use cases across that domain, which is extremely applicable and hopefully delivers the ROI on the sovereign stacks that different governments and organizations are building for themselves. Because if your model is in your control, you can put it into consumer-facing use cases, not just internal productivity use cases.

And the value of this whole technology for enterprises and governments is only realized when the end consumer gets to use it the way retail consumers are using OpenAI’s and Anthropic’s models.

Sunita Mohanty

Thank you so much for that. One of the other things I heard this week at the conference was not only protection and guardrails at the model level; there was also a demo of a product where, at the hardware level, they are trying to put in controls so that there is a break. We’ll get to that topic. I wanted to move to you, Balaji, because Flipkart is right in the middle of consumers, very much like our LLM models. How do you operate at population scale, in India and the Global South, in a high-velocity environment? Where does responsible AI collide with business realities?

For example, how do you manage personalization, data security, fairness, and marketplace trust?

Balaji Thiagarajan

Yeah, Sunita, thank you for the question. You’re right, Flipkart operates at pretty much internet scale; we have 500 million users. When you talk about fairness, it cuts across multiple areas, and we’ll talk about sovereignty separately. First, pricing: we have to be fair. Second, the quality of the things we sell in our marketplace has to be good, so that when buyers see something in our applications and marketplaces and buy it, they get exactly what they expect. Third, alongside fairness in pricing is quality of service. It’s one thing to deliver milk or groceries, but if you’re going to deliver big equipment like an air conditioner, quality of service is not just about the delivery.

It’s also about helping customers understand how to use the product, how to install it, and how to get after-sales service, and we have companies in the Flipkart group, like Jeeves, that do that. So for us, fairness spans a broad spectrum, from the beginning of the customer journey all the way through servicing the customer for the life of the product. Now, if you think about how we achieve that, it’s not a formula we know exactly how to implement; there’s a recipe, what we call a standard operating procedure. It starts with data. We need good-quality data, because if we don’t have that, everything from there on gets diluted further and further.

On top of that good-quality data, the other thing we do is access control: who can access what data is very important, and that’s where we bring in the security aspects from an access-control perspective. Then, when you’re interchanging data between organizations, between services and so on, it’s not only data at rest, it’s also data in motion, so how do you secure that? That’s all about encryption and everything else that goes with it. And then, when we get to the modeling layer at Flipkart, as Anupam mentioned, we use a mixture of experts.

The concept of a single world model serving our needs, or anybody’s needs, at the required fidelity and accuracy of information is something I have not seen work. At a broad information level, the LLMs of the world, the ChatGPTs, Claude Opus and so on, work well, but not when you get into very specific tasks. I’ll give you an example: we work on image-generating models. A seller today can take whatever SKU or listing they want to sell, take a picture, and based on that picture we can actually create a listing in a catalog, so the seller can be in business in the marketplace in a matter of 20 minutes. To do that, we have to recognize what the picture is, extract from it everything we need to create a catalog listing, and, based on that listing, also tell the seller what price range they can sell the item for.

When you go through all these things, an LLM is going to give you a range based on some international data, but you have to train what we call domain-specific models, which is what Tanvi was talking about. We call them SLMs: for the specific domain, the specific region, the specific demography, which is India in this case, and then price accordingly. Sellers are not selling to somebody sitting in the US or England; they are selling to somebody in India. And by the way, we can also tell them that if you’re selling in Mumbai versus Delhi versus Kolkata versus somewhere in Bihar, these are the price ranges you can have.

That’s

Sunita Mohanty

That’s a good point. But Balaji, I want to ask a quick question: when you use agents in services like yours for customer service, which is a very important component of the job, are you transparent about whether it’s a bot versus a human? That conversation has also come up.

Balaji Thiagarajan

Yeah, so look, today in customer service, when we deploy our agents, they are primarily co-pilots. The reason is we have not mastered the technology yet in terms of voice bots that can directly talk and respond to somebody in a multilingual way. And when we say we haven’t mastered it: we know how to do it conceptually, but the models hallucinate. That’s number one. Number two, we have a very strong ethics and compliance system in-house which says fair disclosure and transparency are by far the most important things for winning customer trust. So if you’re going to have a conversation with an agent, our UX experience teams look at it from the perspective of how the customer will understand who they are talking to, and we have a disclaimer saying that you might be talking to a machine here, and if you do not want to have that conversation, you can always opt out.

So opt-out is the default position: you have to opt in to have the conversation, rather than opt out of having it. If you look at a lot of companies, including the Apples and Googles of the world, the default is opted-in, and you have to think about it very carefully, because if you’re not conscious of it, you’ve just opted in. That’s not how we do it: we default to opt-out and then have people opt in with us.

Sunita Mohanty

That’s very refreshing to hear. I’ll go back to Babak next, but before that, I have a very unusual request: they want us to huddle together for another group photo in the middle of this. So please, can I request everyone? Okay, moving on after the picture. Back to you, Babak. One of the questions that a lot of government representatives have been asked over the last week is: what is the framework for building a viable government AI stack? If you have to do scaled AI deployment in the Global South, including monitoring, human oversight, and vendor accountability, what framework would you recommend, or what is your advice to governments on how to look at it?

Babak Hodjat

I would start with processing capacity. That’s the underpinning for building these systems in-house and running inference on them, if you really want to build something internally. And I would actually create publicly available processing capacity. It’s something everybody complains about everywhere around the world: most processing capacity is concentrated in private or large companies and not available to researchers, students, or the public to experiment and build with. Then rely on academia, students, researchers, and government entities in the public domain to build on top of that. So that’s one thing I would suggest. It would attract talent and reinvigorate innovation outside the very exclusive few big companies that can innovate in AI.

Then I would also create a sandbox, a sort of sovereign sandbox, in which to invite entrepreneurs, startups, academia, and the regulator to try out, in a safe and controlled environment, various applications and various forms of interoperability between these agentic systems, and to come up with the regulatory framework that is well suited to India specifically. I don’t know that the role of the government is to actually build an AI stack; I would think the government’s role is to create the ecosystem within which this stack can be organically and safely created. We talked briefly about regulation: you can’t front-run regulation, but you also can’t be completely negligent of it.

It’s risky either way. And the best way to handle that, I think, is some form of safe sandbox environment where the regulator can try different things and observe. If something goes wrong within a sandbox, you have control over it and the implications are limited, and then you gradually move out to more general usage. That would be my recommendation.

Sunita Mohanty

No, that’s music to our ears, Babak, because, to be honest, the Indian government has actually done exactly that under the aegis of the AI Mission. They’ve procured 60,000 GPUs and provided them to states and institutions to build with, and we’re seeing a lot of innovation come out of this. We saw some of our sovereign LLM models that are now going to go open source with everything they have created, which is amazing, and in some of last week’s announcements Sarvam spoke about the models they are creating. With that, I’m going to you next, Amod. We spoke about infrastructure: you’ve worked on enterprise and data center operations, and organizations are now moving from small AI pilots to sustained high-density production environments.

Based on your experience across projects, what patterns have you seen in organizations that are successfully scaling their AI infrastructure? And what are one or two cases where early design choices, whether in how you cool, your density, or your deployment planning, have made a decisive impact on reliability and trust?

Amod Kabade

The traditional approach is no longer working. Now one needs to look at the chips being used today and the chips on your future roadmap; that needs to be a core part of your design and build. Even then, the design still needs to be modular, flexible, and, most importantly, sustainable. And why do we need that? Because traditionally, designing and building data centers takes anywhere between two and three years, and there are cases where it exceeds that, but let’s say two to three years on average. In that period, as all those tracking Nvidia’s activities know, three or four generations of new chips will have launched, and suddenly what you planned for has become redundant or obsolete.

So whatever you plan for today needs to be flexible enough to accommodate all those future roadmaps. You do that by designing your data centers in a modular fashion, leveraging technologies that allow you to accommodate future chips, which are going to be ever more resource-hungry and generate ever more heat, so that your designs can sustain that over a long period. That is one pattern clearly coming out: people moving from pilot to production, or from prototype to pilot, understand that aspect and are making it a key design consideration.

Coming to cases of benefits: by using these sustainability-focused cooling technologies, we are seeing customers who have been live for more than three years with zero IT failures, which says a lot about the reliability of a setup designed this way. To summarize, it is all about making design decisions that keep the infrastructure flexible, modular, and scalable. And I would like to leave one thought here. The way we see cars manufactured in factories today, where many components are sourced, certain components are manufactured by the manufacturers in their own factories, and everything gets assembled and rolled out as a product, we see data centers moving in that direction: the electrical, mechanical, and IT systems will be designed and manufactured as modules and then rolled out to sites as modular, scalable, sustainable infrastructure for the AI factories of the future.

Thank you.

Sunita Mohanty

Great. And I really hope we get a great design playbook for building data centers with access to renewable power, better cooling systems, and better ROI. With that, back to you, Anupam. From a research viewpoint, how should academia and industry jointly rethink model efficiency, reliability, and assurance as a single design problem, rather than treating ethics, performance, and infrastructure as separate layers?

Anupam Chattopadhyay

Okay. I’ll take a step back from this problem to highlight that any technology has a good side and a bad side, and before it rolls out to industry and the masses in general, there need to be enough safeguards in place. In academia we have the liberty to take pot shots and say "this is wrong." Right now we feel there are a lot of gaps in the cybersecurity of AI, and as part of our research we are trying to raise as much attention to that as possible: models are not properly trained, there are possible loopholes, hallucinations, and alignment issues, and if these things are not properly regulated before rollout to industry, there will be repercussions and setbacks. So we urge caution.

To address that problem, particularly in the Global South, what is needed, and I spent a lot of time doing research in Europe, so I can make the comparison, is a very strong industry-academia partnership: industry brings a problem and says, this is what needs to be solved, and we want your students to learn this before they come to industry, and we try to align with that philosophy. One thing I like very much, from my perspective in Singapore and NTU, is that they started AI.sg as a single-window consortium with multiple stages: research funding, technology innovation, technology transfer and commercialization, dissemination, and regulation. No matter whether you are a researcher, a university, or a company, you can participate at any level. The problems can be very different, because a university is a melting pot: we build a model with a little training and a small amount of data, but when it goes out, the problem becomes AI for automotive, AI for perception modules, AI for agents, which we cannot control, because every industry has its own regulations and requirements and moves at a different pace. That is what we try to address with a single-point window and clearly defined parameters and benchmarks.

For example, fairness and ethics, a recurring theme in this discussion, are often underrepresented: we highlight performance but not the ethical lapses, hallucinations, and alignment lapses. Jailbreaking, or getting data out of a model, is so easy that we are genuinely scared when someone says, okay, start rolling this out. From an academic point of view we know it’s weak, but we cannot control this unless enterprises and policymakers step in and say it must be regulated.

Sunita Mohanty

That’s a good point, and the examples you took from Europe and Singapore are instructive. At least with artificial intelligence, I have seen a lot of collaboration happening between industry and academia throughout the world, and we hope that continues. So to you, Tanvi: given your work with platforms like Palantir and OpenAI, how should AI applications balance broad interoperability with deep, scalable domain integration? We’d also love to hear about your experience in New York City and the Vatican, and the learnings we can take from there.

Tanvi Singh

Thank you, Sunita. You ask about learnings from Palantir and OpenAI, and I was fortunate to be a design partner in both cases through my work at the bank. With Palantir, this was way back when they were more of a government technology provider for the U.S. defense services and wanted to make an enterprise play, and my bank was a design partner from financial services. Seeing that transition from a defense services company to a platform company in the space of AI and ML has been very interesting. Because, to Balaji’s point, there is no one model that can fit everything, and Palantir is obviously one of the best pieces of software out there when it comes to AI and ML.

So they developed a stack on which you could do customized AI and ML at scale, and that was a huge learning. Being in a bank, one size doesn’t fit all, and you can’t think of a domain as just a financial domain or a healthcare domain, because the way we do finance in Switzerland is very different from the way we do it in the UK, and the regulators are different. Our retail use cases are very different from the wealth use cases, and one size does not fit all, especially in a regulated industry. So that was a very important learning. With OpenAI, the context was that 80% of the enterprise data still remains somewhere that we keep storing and archiving in Switzerland.

You have to keep ten years’ worth of every single conversation that has happened with clients, every single piece of data that has been produced while doing any regulatory work. So we have that data, and we never used it, not even with Palantir, which is very AI/ML-oriented. With OpenAI, you get this whole unbounded data that you could use for a lot of interesting things: to manage your regulatory and compliance requirements, which is the biggest technology cost for a bank, but also to engage with your clients better. But then it’s an API, right? A platform is what I got to experience with Palantir, and with API access we could get going in the early 2023-24 timeframe.

So with those two learnings: what if you could create a scalable, customizable platform like Palantir, but for generative AI? That is what we started building at ECTA. The idea is very much that you build in the guardrails and the security as part of the four layers we have at ECTA, and you use your domain knowledge, your domain corpus of information, to train it for your clients. So it’s very much yours. There’s no translation required. It’s very language-oriented and very deeply culturally oriented, and that’s why the work with the Vatican was so significant: if the church is going to trust you with their literature and their information as a benchmark against some of the hardest questions that get asked of the church, then we have a fair chance of being introduced to enterprises and governments.

And from a New York perspective, there’s a lot of work we’re doing, starting with AI in education, which is what we’re also hoping to do more of in India. The challenge remains at least 50 students to every teacher, lots of languages, lots of cultural aspects, and infrastructure that does not yet match what the students really need. But now with AI you can hyper-personalize the experience for every student, so you do not have to learn English to learn math. You can very much do math in your local language, in Marathi, Bihari, or any other state language, and that sort of barrier can go away. And I think the proof is always in the pudding.

So we get to see how these domain models work in enterprises as well as in governments.

Sunita Mohanty

Wonderful. And you must have a lot of insights on what gets asked of the church, so we’ll have to catch you on that someday. But thank you. So coming back to you, Balaji: from Flipkart’s perspective, how do you decide what to build internally using AI and what to adopt, where to rely on internal capability versus an external model, and how do these choices affect your long-term decisions with the business and customers?

Balaji Thiagarajan

Yeah, you know, I think we talked about this. As far as I can tell, unless we decide to build our own foundational models from the ground up, we will always use a mixture of experts, where at different layers we use different kinds of parametrized models. Usually, if you look at a workflow that is getting executed, like a shopping journey where I’m trying to buy something, or deciding in a discovery funnel, the top of the funnel is usually a very generic statement. That’s where the trillion-parameter LLMs actually help. And for us it works, because at that point all you’re dealing with is an intent and trying to understand what the user is trying to say.

But as you start getting into the finer details and the intent becomes clearer, where we want to provide the right recommendations, hyper-personalized information, or adapt to what the customer is doing, that’s where the smaller models, what we call SLMs, the small language models, come in. The way we think about this is that we have an agentic orchestration framework. Each agent decides what the task at hand is, and based on that task we have SLMs that have been trained for a specific task domain, or even a specific task. The agent knows, at that point in time, that it has to go to this particular LLM or SLM infrastructure

and then get the answers from there. So we have an agentic orchestration framework that is a dynamically learning framework: it understands what’s going on, adapts to what is happening in the ecosystem, makes decisions online, and, depending on what is happening, redirects the traffic to the right SLM. For example, if the consumer asks, "show me the best price for these categories of products in a specific region," that’s usually the pricing and promotions domain, and for that domain we might have trained a specific SLM on a specific catalog of items for that particular area. Now, if somebody comes and says, "I’m just looking for running shoes," right?

That’s a very, very different query. For that query, you actually look at the whole catalog, then marry the catalog results with you as a person and your interests, and then we filter that down and serve it. So that’s the way it usually works. Today, as Nandan Nilekani was saying, everybody uses UPI in India, but nobody knows what the technology behind it is. So hopefully we’ll get to a point where we don’t know what the technology behind this is either, but it makes every user’s life so easy and contextual that it has actually had an impact.
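The routing pattern described here, a generic intent step at the top of the funnel handing off to task-specific SLMs, can be sketched roughly as follows. This is an illustrative toy, not Flipkart's actual system: the keyword-based intent classifier, the registry names, and the stub "models" are all assumptions standing in for real trained models.

```python
# Minimal sketch of agentic routing: classify coarse intent, then dispatch
# to a small language model trained for that task domain. All names here
# (classify_intent, SLM_REGISTRY, route) are hypothetical.

def classify_intent(query: str) -> str:
    """Stand-in for the large general model that extracts coarse intent."""
    q = query.lower()
    if "best price" in q or "discount" in q:
        return "pricing_promotions"
    if "shoes" in q or "shirt" in q:
        return "catalog_discovery"
    return "general"

# Each task domain maps to a stub standing in for a domain-trained SLM;
# unknown intents fall through to a large general-purpose model.
SLM_REGISTRY = {
    "pricing_promotions": lambda q: f"[pricing-SLM] {q}",
    "catalog_discovery":  lambda q: f"[catalog-SLM] {q}",
    "general":            lambda q: f"[general-LLM] {q}",
}

def route(query: str) -> str:
    """Orchestrator: decide the task at hand, then dispatch to the matching model."""
    intent = classify_intent(query)
    return SLM_REGISTRY[intent](query)

print(route("show me the best price for phones in this region"))
print(route("I'm just looking for running shoes"))
```

In a real deployment the classifier would itself be a model call and the registry entries would be inference endpoints, but the control flow, intent first, then a domain-specific model, is the same.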

Sunita Mohanty

So with that one last question for all of you. So Babak, I’ll start with you and we’ll go from left to right. What’s your feeling about the last one week? What is your key takeaway from this? What are you taking back outside of the traffic and the crowd? And any piece of advice that you would give?

Babak Hodjat

You know, I was at the AI Everything Summit Africa last week in Egypt, and they said it was huge, one of the biggest summits: 23,000 people. And I came here and they told me it’s 300,000 people. So just the scale and the scope. And, you know, India is in a unique position in that its starting point is one of technology and IT, so I think it’s much better prepared to understand AI, its implications, and how it can be used. A very strong startup scene; I was very impressed by that. So to me, this is one of the largest and most interesting of these conferences, and I go to a lot of them. Very, very impressive.

Sunita Mohanty

Yeah, that’s good, because a lot of the planning started in October or even before that, and I don’t think we ever anticipated the size of the event. When we saw the footfall, with government, researchers, students, and business all there, it’s just amazing that we could really run at that scale. So thank you so much. Amod?

Amod Kabade

I think it has been a fantastic week here in Delhi participating in the AI Impact Summit. I’ll just go back to the three sutras: people, planet, and progress. I would only say that it is our responsibility to build AI infrastructure, and the entire ecosystem around AI, in a way that is planet-friendly, and to focus on the real use cases that address the last mile, the last citizen of the country. Progress is something that is bound to follow.

Tanvi Singh

Okay, so I can trace the journey from Paris, where I was last year and where the summit was still just a dialogue between political leaders, to Davos earlier this year in January, where sovereignty, and building AI for everyone rather than just the big frontier models coming out of America or the competition from DeepSeek and other major players from China and the Global South, became the main theme of conversation, to actually seeing it implemented here across the halls of Bharat Mandapam. It’s fascinating. I feel very proud to be of Indian origin. And also, taking what India has done to Geneva, as part of the organizing committee in Switzerland where I come from, I think these will be very hard shoes to fill.

And coming at it from an ECTA perspective, my company, I think the sky’s the limit on the opportunity. Hearing from Balaji and many, many other practitioners, including ServiceNow and others, it just seems like the opportunity is there: people are ready to experiment, and people are looking not for pilots but for actual return on investment. We see that with infrastructure, and we see it in what really works even without customization. This is the deepest and most important part of every organization and every government: what we do with our data, and how we use cognition where we have control over that cognition. And I liked what Mr. Farnovi said: we don’t want the American and the Chinese babies.

I like what ECTA is doing: it’s bringing a lot of Indian babies to the world, which is what domain models do. So I’m very much looking forward to hosting many of you in Geneva next year. It has been a very big learning and a very impactful week that India has organized for the world. Thank you.

Sunita Mohanty

Anupam? Okay.

Anupam Chattopadhyay

In one word, the summit is just fantastic. I have not seen scale like this, because in academia we go to technical conferences ranging from very small ones of around 100 people; the largest one I attended had 9,000 attendees. That’s AAAI, also an AI conference. But here it’s a complete order of magnitude more. And it is very much essential that we have the dialogue between researchers, entrepreneurs, policymakers, and ministers all on a single stage. That’s really, really wonderful. One thing I was curious about, and maybe as part of the organizing team you can throw some light on this, is how much AI was actually used to arrange this: to defend against cyberattacks, and in all the systems to detect people passing through.

So that I am curious about. So that would be like AI in action in hosting an AI summit.

Sunita Mohanty

We did use a significant amount of AI, but obviously not for everything. One of the most amazing things, and I don’t know how many of you saw the Prime Minister’s address, was that we had an AI agent doing real-time translations, mainly for accessibility purposes. So those are examples of where we really used it, and of course in the planning. And this was not just the government: a lot of people from business and academia all came together, so it’s primarily a win across India. I haven’t seen that scale of partnership before. We have a team that sits in the ministry, and for the last six to seven months the number of people who have been coming in, volunteering, and supporting has been just amazing to see.

Balaji?

Balaji Thiagarajan

I’ve been to this first AI Impact Summit; I’ve not been to the other venues. But the way I look at it, the commitment to AI, the Government of India deciding to do this, is a masterstroke for multiple reasons. One is that it brings the government, the industries, the academia, the students, and the imagination of the whole country together around the idea that this is doable. The art of the possible is absolutely there. And more importantly, India’s technology underpinnings came from a services-based industry, right? If you hark back to the world of telecommunications, where we leapfrogged landlines to mobile, I think this is the opportunity for India, for India-based companies, and for any company that wants to operate in India, to leapfrog the whole set of SaaS-based and web-based technologies.

And India can take that opportunity and become the number one software provider, not of services, but of systems and products at world scale. We do not have a software brand in India that sells worldwide; services is not a software brand. This is an opportunity for India to leapfrog, because we have the scale, the people, the intelligence, and the ability to think very, very differently, at a price point that nobody else can imagine, to be honest. And now the government is behind this, and with the public infrastructure it is also reinforcing all the research that needs to happen. So this is an opportunity for India to take or lose, as the case may be, but I think India is going to take it.

Sunita Mohanty

No, thank you so much, and on that optimistic note, thank you all for being here. We started the conference by talking about the theme, Sarvajana Hitaya, Sarvajana Sukhaya, welfare for all and happiness for all, and I hope we carry this message across the Global South into Geneva and bring Europe and the US into this as well. Thank you so much.


Babak Hodjat


AI Guardrails, Trust Frameworks, and Regulation

Explanation

Babak stresses that AI systems require human‑in‑the‑loop or on‑the‑loop oversight, uncertainty assessment, and clear agent identity to be safe. He also proposes public processing capacity and sovereign sandboxes while warning against both over‑ and under‑regulation, calling for balanced frameworks.


Evidence

“And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agents.” [1]. “yeah AI is real and both the promise and the risk is real and so guardrails are needed we can’t fall off either ledge of trusting AI either mistrusting it to the point where we debilitated by basically having you know a human rubber stamp every single step or the other way basically thinking that it’s you know some magic pixie dust that you just pour over your organization and then turn it on and it’s AI enabled so guardrails are important” [2]. “assessing uncertainty in an agent’s output and deciding not to take its output at face value, basically taking the output as well as its own measure of certainty in its output as a measure of whether or not we bring a human in.” [8]. “how do you determine the identity of this agent?” [9]. “And I would also create a sandbox, sort of a sovereign sandbox, in which to invite both entrepreneurs, startups, academia, the regulator, to, in a safe environment, safe and controlled environment, be able to try out various different applications, various different interoperability between these agentic systems.” [16]. “And I would actually create a publicly available processing capacity.” [17]. “Again, there’s a risk of over -regulating versus under -regulating.” [30]. “Regulation does play a part.” [31].


Major discussion point

AI Guardrails, Trust Frameworks, and Regulation


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs



Anupam Chattopadhyay


Robust Models and Synthetic Data for the Global South

Explanation

Anupam points out that AI models must handle noisy, multilingual data and proposes synthetic data generation and federated learning as key techniques for low‑resource environments.


Evidence

“because it was tested on very much clean data and under noisy atmosphere the accuracy of the detection is failing it’s a huge concern because people are also not always educated there is a already a digital barrier and on top of that there is a ai barrier that’s coming up so that’s the bad side of the ai that we are observing we are trying to defend it and for that the technologies that we are trying to bring in is of course one is to create synthetic data sets so we have a tunable noise addition on top of the data then collecting as much as possible say data by scraping the internet but for deep fake it brings a different problem you see a video or an image and from a human point it’s not even discernible that it’s deep fake or not it looks so original right so we had to create a separate automatic fact -checker which is looking if there is a news that is linking that image with something and news is coming from a trustworthy source only when then we call it say original image or otherwise it is refit.” [45]. “there are different models with different defecitation capabilities we put them together and sometimes the models are proprietary and we want to take it from a particular vendor, merge them together … we have techniques like federated learning on how to merge the models and still guarantee that their training data or their models will never be leaked.” [42]. “And there we are facing exactly the problems that the models that we begin with when we start training it, it is actually showing very poor results when we are testing subjects from global, like for the images or for the audio, or if we are putting the audio under some circumstances where there is a lot of noise.” [47].


Major discussion point

Technical Challenges and Research Directions for the Global South


Topics

Artificial intelligence | Capacity development | Closing all digital divides


Single‑Window Consortium for Academia‑Industry Partnership

Explanation

Anupam advocates a single‑window model (AI.sg) that integrates research funding, technology innovation, transfer, commercialization, and regulation, fostering strong academia‑industry collaboration.


Evidence

“they started this AI .sg as a single window consortium sort of stuff which has multiple steps like starting from the research that they are giving funding then there is a technology innovation then there is a technology transfer and commercialization, there is a dissemination and there is a regulation so no matter who is a researcher or university or the company, they can participate at any level of this so the problems can be very different because university is like a melting pot so we are making some model with little bit of training and little amount of data but when it goes out then we see the problem is now becoming AI for automotive or AI for perception module, AI for agents, this is not what we can control because every industry have their own regulation, their requirements moves at different pace right so this is what we try to address by having the single point window and clearly define the parameters and benchmarks for example fairness and ethical thing is what is the recurring theme in the discussion here is often underrepresented we highlight the performance but not the ethical lapses and the hallucination and the alignment lapses as much as possible jailbreaking or getting data out of a model it’s so easy that we are really scared before someone says okay start rolling out this cloud but we know from academic point of view it’s weak but we cannot control this unless enterprises and the policy makers steps in and say this must be regulated” [37].


Major discussion point

Academia‑Industry Collaboration for Ethics and Efficiency


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development



Sunita Mohanty


Synthetic Data and AI‑in‑a‑Box for the Global South

Explanation

Sunita highlights the need for clean synthetic data and a modular “AI‑in‑a‑box” platform that can be used by students and researchers in resource‑constrained countries.


Evidence

“The other thing is about synthetic data which is very important for us to keep the data clean and one of the conversations we were also having is how do you enable creation of synthetic data in countries like India and the Global South by creating AI in a box which is a very modular infrastructure that is available to students, researchers in a very small minimal environment for them to be able to create some of this data.” [25].


Major discussion point

Technical Challenges and Research Directions for the Global South


Topics

Artificial intelligence | Closing all digital divides | Capacity development


Sovereign LLMs and Language Coverage

Explanation

Sunita stresses building sovereign large language models to ensure coverage of Indic and APAC languages, giving nations control over data and cognition.


Evidence

“I think one of the things we are always discussing about there is not enough data to train the models and that’s why there was a lot of emphasis during this week around getting language models in so that there is enough Indic language as well as across APAC.” [43]. “When it comes to India, I know that there’s talk about, for example, building these systems within India, like sovereign LLMs to back the agentic systems.” [99].


Major discussion point

Sovereignty, Domain‑Specific Models, and ROI


Topics

Artificial intelligence | Data governance | Human rights and the ethical dimensions of the information society


Responsible AI Consumer Transparency

Explanation

Sunita raises the need for clear disclosure when AI agents interact with consumers, emphasizing transparency as a trust‑building measure.


Evidence

“But again, Balaji, I want to ask a quick question because when you use agents in services like yours for customer service, which is a very important component of the job, are you transparent about this being a bot versus a human?” [119].


Major discussion point

Responsible AI in Consumer Platforms


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The digital economy


Sustainable AI Infrastructure ROI and Renewable Energy

Explanation

Sunita calls for renewable‑powered, efficiently cooled data centers with clear ROI metrics such as query cost and energy‑per‑token to make AI deployment responsible and cost‑effective.


Evidence

“I really hope we get a great design playbook for building data centers that are accessing renewable power, better cooling systems, and better ROI.” [75]. “I think one of the main conversations there was around energy and how do you make it efficient and in one of the conversations at Bloomberg we did hear about ROI and how do you measure the cost of query, like what is it on the infrastructure.” [87]. “So I hope that at least with renewable energy and efficient cooling system, we get better as well as optimized query capabilities.” [79].


Major discussion point

Sustainable AI Infrastructure and Design


Topics

Environmental impacts | Financial mechanisms | Artificial intelligence



Amod Kabade


Sustainable Data‑Center Design with Liquid Cooling and KPIs

Explanation

Amod proposes token‑based energy and water KPIs, liquid‑cooling technologies, and modular, sustainable data‑center designs to reduce environmental impact and support future AI workloads.


Evidence

“We need to, as government I would say, we need to get to a point where we can define KPIs around energy consumption per token or water consumption per token for these type of massive infrastructures and incentivize the players who are actually achieving or crossing those KPIs.” [39]. “So to do that, today we can leverage liquid cooling technologies which will minimize the overheads in terms of cooling this infrastructure requirements and allow us to scale AI rapidly for betterment of people and the planet.” [62]. “I would like to just leave a thought here that the way we see cars getting manufactured in factories today, … we see data centers moving in that fashion wherein the electrical, mechanical the IT will be manufactured, designed and manufactured as modules and then rolled out to the sites as modular, scalable, sustainable infrastructure for AI factories of future.” [61].


Major discussion point

Sustainable AI Infrastructure and Design


Topics

Environmental impacts | Artificial intelligence | The enabling environment for digital development


Modular, Flexible Infrastructure for Future Chips

Explanation

Amod emphasizes designing data centers in a modular, flexible way so they can accommodate future, more demanding AI chips, ensuring long‑term reliability and sustainability.


Evidence

“all those future road maps as well and how do you do that is by designing your data centers in a modular fashion leveraging technologies which allow you to accommodate future chips which are going to be all the more resource hungry they are going to generate all the more heat so we need to have those technologies in place so that your designs can sustain that over a long period of time.” [81]. “Whatever you plan for today needs to be flexible enough to accommodate.” [85]. “That needs to be a core part of your design and build and even with that you still need that design to be modular, flexible and most importantly sustainable.” [84].


Major discussion point

Sustainable AI Infrastructure and Design


Topics

Environmental impacts | Artificial intelligence | The enabling environment for digital development



Tanvi Singh


Sovereign and Domain‑Specific Models for ROI

Explanation

Tanvi argues that building sovereign, language‑oriented domain models gives nations control over data and delivers clear ROI, linking cognition control with infrastructure control.


Evidence

“from a sovereignty perspective, it’s important that we can build our own models where data is not a constraint.” [49]. “But here, since the equation has turned lopsided, there are lots of factors that goes into ROI.” [50]. “It’s very language -oriented.” [51]. “control on cognition and intelligence is as important as the control on infrastructure is, and that gave birth to what we’re now building at ECTA, which is the Domain Specific Model.” [90].


Major discussion point

Sovereignty, Domain‑Specific Models, and ROI


Topics

Artificial intelligence | Financial mechanisms | Data governance


Customizable AI Platform with Built‑in Guardrails

Explanation

Tanvi proposes a scalable, customizable generative‑AI platform (akin to Palantir) that embeds security guardrails and leverages domain knowledge for client training.


Evidence

“And the idea here is very much, you build in the guardrails, you build in the security as part of your four layers that we have at Ecta, and use your domain knowledge, your domain corpus of information to train and train your clients.” [5]. “So with those two learnings, what if you could create a scalable, customizable platform like Palantir, but for generative AI, which is what we started building at Ecta.” [63].


Major discussion point

Academia‑Industry Collaboration for Ethics and Efficiency


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs



Balaji Thiagarajan


Fairness, Data Quality, Access Controls, and Encryption

Explanation

Balaji states that fairness in pricing and service requires high‑quality data, strict access controls, and encryption to protect consumer trust and ensure equitable outcomes.


Evidence

“Pricing, we have to be fair.” [40]. “We need to have good quality data, right?” [44]. “the other thing that we do is the access controls on the data and who can access what data is also very important.” [110]. “So that is all about encryption and everything else that goes on.” [115].


Major discussion point

Responsible AI in Consumer Platforms


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs


Transparency and Opt‑out Default in Customer‑Service Bots

Explanation

Balaji advocates for a default opt‑out setting and clear disclosure that a bot, not a human, is handling the interaction, to build consumer trust.


Evidence

“There is an opt out position by default.” [118]. “are you transparent about this being a bot versus a human?” [119]. “We opt out and then we have folks opt in for us.” [120].


Major discussion point

Responsible AI in Consumer Platforms


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The digital economy


Domain‑Specific SLMs within Agentic Orchestration

Explanation

Balaji describes an agentic orchestration framework where domain‑specific small language models (SLMs) are selected dynamically to handle tasks efficiently and cost‑effectively.


Evidence

“And for that, the way we think about this is we have an agentic orchestration framework.” [11]. “you have to train the what we call a domain specific models which is what Tanvi was talking about.” [52]. “we have SLMs that have been trained for a specific task domain or a specific task even.” [94]. “It redirects the traffic to the right SLM.” [102].


Major discussion point

Sovereignty, Domain‑Specific Models, and ROI


Topics

Artificial intelligence | Data governance | Financial mechanisms


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.