Scaling Enterprise-Grade Responsible AI Across the Global South

20 Feb 2026 18:00h - 19:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit examined how India and the Global South can adopt trustworthy AI, focusing on guardrails, regulation, and sovereign models [9-10]. Sunita introduced the theme of connecting India with the Global South and asked the chief AI officer about trust frameworks [9].


Babak emphasized that AI systems need balanced safeguards, combining human-in-the-loop and agentic oversight, while noting the lack of standards for third-party agent identities and the risk of both over- and under-regulation, especially as sovereign LLM initiatives emerge in India [11-24][28-32]. He warned that continuous reasoning can accumulate even trivial mistakes over hundreds of steps, and that redundancy and uncertainty assessment are needed to decide when to involve humans [11][16-18].


Anupam highlighted that models trained on clean data perform poorly on the noisy, multilingual data typical of the Global South, proposing synthetic data generation, federated learning and privacy-aware techniques to improve robustness [41-44]. Amod argued that responsible AI must start with sustainable data-center design, using liquid cooling and energy-per-token KPIs to make AI infrastructure environmentally sound [50-54].


Tanvi argued that true sovereignty means building domain-specific LLMs that keep data and cognition under local control, enabling ROI for regulated sectors [64-88]. She noted that controlling model outputs and avoiding hallucinations is essential for banking, healthcare and other regulated industries [76-80]. Balaji described Flipkart’s fairness strategy, which relies on high-quality data, strict access controls, encryption, and transparent bot interactions that default to opt-out for users [100-112][113-118][129-135]. He explained that Flipkart uses a mixture-of-experts architecture, routing generic queries to large foundation models and domain-specific queries to regional SLMs for localized pricing and recommendations [225-238].


Babak later recommended creating publicly available processing capacity and sovereign sandboxes to let academia, startups and regulators experiment safely, avoiding both regulatory overreach and neglect [145-165]. Participants pointed to India’s proactive steps such as provisioning 60,000 GPUs and adopting modular, future-proof data-center designs to support AI scaling [166-170][177-184]. The summit’s scale and cross-sector collaboration were praised as evidence that India is uniquely positioned to lead trustworthy AI deployment across the Global South [252-259][306-312].


Keypoints


Major discussion points


Guardrails and trust frameworks for AI deployments – Babak emphasized the need for balanced safeguards, including human-in-the-loop/on-the-loop, uncertainty assessment, and mechanisms to verify the identity of third-party agents, while warning against both over- and under-regulation [11-18][20-24][28-32].


Technical and data challenges unique to the Global South – Anupam highlighted that models trained on clean data fail on noisy, multilingual, and intermittent-compute environments; he advocated synthetic data generation, noise-robust training, federated learning, and privacy-preserving model merging to make AI trustworthy in such settings [34-43][44].


Sustainable AI infrastructure as a foundation for responsible AI – Amod described how responsible AI starts with data-center design: liquid cooling, modular and flexible architectures, and clear energy-per-token KPIs to ensure scalability, reliability, and low environmental impact [50-54][177-185].


Sovereignty, domain-specific models, and ROI – Tanvi argued that true AI sovereignty comes from building locally-controlled, domain-specific LLMs (SLMs) that avoid reliance on foreign “frontier” models, thereby satisfying regulatory model-risk requirements and delivering measurable ROI for enterprises and governments [64-71][74-88][85-89].


Operationalizing AI at Internet scale (e-commerce) – Balaji explained how Flipkart ensures fairness, data quality, and security through access controls, mixture-of-experts architectures, and an agentic orchestration framework that routes tasks to either large foundation models or specialized SLMs, while maintaining transparency about bot interactions [98-108][110-118][127-136][225-236].


Overall purpose / goal of the discussion


The panel aimed to explore how India and the broader Global South can adopt and scale AI responsibly: establishing effective guardrails, addressing data and infrastructure constraints, fostering AI sovereignty, and translating research into practical, high-impact applications across sectors such as finance, healthcare, and e-commerce.


Overall tone and its evolution


The conversation began with a cautiously optimistic tone, acknowledging AI hype and the risks of both mistrust and over-regulation. As speakers moved into technical details, the tone shifted to pragmatic problem-solving, offering concrete methods (synthetic data, modular data-centers, domain-specific models). Toward the end, the tone became enthusiastic and celebratory, highlighting India’s progress, the scale of the summit, and a collective sense of pride and momentum for future AI initiatives.


Speakers

Sunita Mohanty


– Role/Title: Managing Director, Primus Partners


– Areas of Expertise: AI strategy, responsible AI, AI policy, conference moderation


Babak Hodjat


– Role/Title: Chief AI Officer, Cognizant (as referenced in the opening)


– Areas of Expertise: AI safety, guardrails, agentic systems, AI governance, regulatory frameworks


– Citations: [S4]


Tanvi Singh


– Role/Title: AI Transformation Leader, involved with sovereign LLM initiatives (e.g., Vatican, New York City); partner at ECTA


– Areas of Expertise: Sovereign large language models, domain-specific AI, AI for education, AI governance, AI-driven personalization


– Citations: [S7], [S8], [S9]


Anupam Chattopadhyay


– Role/Title: Researcher/Academic (focus on deep-fake detection, synthetic data, federated learning)


– Areas of Expertise: Computer vision, deep-fake detection, synthetic data generation, federated learning, AI security, AI ethics


– Citations: [S10], [S11], [S12]


Balaji Thiagarajan


– Role/Title: Senior Executive, Flipkart (lead for AI/ML and marketplace trust)


– Areas of Expertise: Large-scale consumer AI, fairness, personalization, data security, marketplace trust, agentic orchestration frameworks


– Citations: [S13], [S14], [S15]


Amod Kabade


– Role/Title: Leader at SubMod (AI infrastructure and data-center solutions)


– Areas of Expertise: Sustainable AI infrastructure, liquid cooling, modular data-center design, energy-efficient AI compute, AI-factory architecture


– Citations: [S1], [S2]


Additional speakers:


Mr. Farnovi – Mentioned briefly by Sunita; no role or expertise detailed.


Nandan Nilekani – Cited by Balaji in an example; no role or expertise detailed.


Amol – Referred to by Sunita (likely a mis-address to Amod Kabade); no separate speaker role.


Full session report: comprehensive analysis and detailed insights

Babak Hodjat – Guardrails & Public Compute


Sunita Mohanty opened the closing session of the AI Impact Summit, thanking the audience, panelists and organisers, and noting that the summit had begun with an inaugural session on the 16th and was now drawing to a close, with the aim of “connecting India and the Global South” on trustworthy AI [1-4][5-10]. She then asked Babak Hodjat, Chief AI Officer at Cognizant, about the guardrails and trust frameworks organisations are building for mission-critical AI in sectors such as banking and healthcare [9-10].


Babak answered that AI’s promise and its risks are both real, so balanced safeguards are essential; neither blind trust nor total scepticism is acceptable [11]. He described a suite of techniques: human-in-the-loop or human-on-the-loop oversight, agents that monitor each other, and explicit uncertainty assessment that triggers human intervention [15-18]. He warned that continuous reasoning can accumulate trivial errors after many steps, a failure mode that must be mitigated through redundancy and error-correction [11-13]. Regarding the emerging multi-agent ecosystem, he noted the lack of robust standards for verifying third-party agent identity, a gap that Google’s A2A work is beginning to address [20-26]. He concluded with a call for balanced regulation, cautioning against both over- and under-regulation as India pursues sovereign large-language models (LLMs) [28-32].
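
The escalation logic Babak sketches can be illustrated in a few lines of Python. Everything here is an assumption for illustration (the function names, the 0.85 threshold, the use of a single checker agent), not a description of Cognizant's actual stack:

```python
# Hypothetical "uncertainty assessment" guardrail: an agent's output is only
# auto-approved when both its own confidence estimate and an independent
# checker agent's agreement score clear a policy threshold; otherwise a
# human is brought into the loop.

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, tuned per deployment


def route(output: str, self_confidence: float, checker_agreement: float) -> str:
    """Return 'auto-approve' or 'escalate-to-human' for an agent output."""
    # Take the weaker of the two signals: either low self-confidence or
    # disagreement from the monitoring agent forces human review.
    score = min(self_confidence, checker_agreement)
    return "auto-approve" if score >= CONFIDENCE_THRESHOLD else "escalate-to-human"


print(route("transaction looks legitimate", 0.95, 0.92))  # auto-approve
print(route("transaction looks legitimate", 0.95, 0.40))  # escalate-to-human
```

The point of the `min` is that both signals must agree before the human is taken out of the loop, which mirrors the "agents checking other agents' work" idea in the answer above.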


Anupam Chattopadhyay – Synthetic Data & Federated Learning


Anupam highlighted that many AI models are trained on clean, well-curated data but perform poorly in the noisy, multilingual, intermittently-connected environments typical of the Global South. Using a deep-fake detection project, he showed how accuracy collapsed when the model encountered noisy audio or images from diverse regions [41-42]. To address this, his team creates synthetic datasets with tunable noise, scrapes additional data from the web, and builds an automatic fact-checking pipeline that cross-references news from trusted sources [41-42]. Because high-performance compute is scarce, they employ federated learning and mixture-of-experts techniques to merge proprietary models while preserving data and model privacy [42-44].
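
A minimal sketch of the "tunable noise" idea: one clean sample is expanded into a family of synthetic samples at controlled signal-to-noise ratios. The SNR-in-decibels formula is standard, but the function and dataset names are invented for illustration:

```python
import math
import random


def with_noise(signal: list[float], snr_db: float) -> list[float]:
    """Return a copy of `signal` degraded with white noise at `snr_db` dB.

    Lower snr_db means noisier output, so sweeping snr_db turns one clean
    recording into synthetic training samples spanning clean through
    heavily degraded conditions.
    """
    power = sum(x * x for x in signal) / len(signal)       # mean signal power
    noise_power = power / (10 ** (snr_db / 10.0))          # SNR definition in dB
    sigma = math.sqrt(noise_power)
    return [x + random.gauss(0.0, sigma) for x in signal]


# One clean waveform becomes a small synthetic dataset at three noise levels.
clean = [math.sin(0.05 * i) for i in range(2000)]
synthetic = {snr: with_noise(clean, snr) for snr in (20, 10, 0)}
```

A detector trained across such a sweep sees degraded conditions during training, which is the robustness property the deep-fake project above was missing.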


Amod Kabade – Sustainable & Modular Data-Centre Design


Amod advocated liquid-cooling technologies to reduce cooling overheads and suggested quantitative KPIs such as “energy-per-token” or “water-per-token” to incentivise efficient operation [50-54]. He further stressed that data-centre designs need to be modular, flexible and future-proof, allowing easy integration of newer AI chips that generate more heat and demand higher density [177-184]. By treating the data-centre as a set of interchangeable modules (electrical, mechanical and IT), organisations can achieve long-term reliability and lower carbon footprints [185].
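
The KPIs Amod proposes are simple ratios once the meter readings exist. The figures below are made up for illustration; real reporting would come from facility meters and inference-serving logs:

```python
# Toy computation of "energy-per-token" style sustainability KPIs.

def sustainability_kpis(facility_kwh: float, it_kwh: float, tokens: int) -> dict:
    """Return PUE (facility/IT power ratio) and watt-hours per served token."""
    return {
        # PUE of 1.0 is ideal; liquid cooling cuts the non-IT share,
        # pushing this ratio down.
        "pue": facility_kwh / it_kwh,
        # Total facility energy (converted kWh -> Wh) amortised per token.
        "wh_per_token": facility_kwh * 1000.0 / tokens,
    }


kpis = sustainability_kpis(facility_kwh=1_200_000, it_kwh=1_000_000,
                           tokens=3_000_000_000)
print(kpis)  # {'pue': 1.2, 'wh_per_token': 0.4}
```

An incentive scheme of the kind described would then reward operators whose reported `wh_per_token` falls below an agreed bar.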


Tanvi Singh – AI Sovereignty & Domain-Specific Models


Tanvi explained that true AI sovereignty means building domain-specific LLMs trained on locally owned data in native languages, eliminating translation bottlenecks and giving organisations control over the cognition that regulators audit [64-71][74-80][85-89]. She linked this to the Model Risk Management framework used in banking, arguing that without such control the technology cannot pass regulatory scrutiny [76-78]. She illustrated the approach with concrete collaborations: trust-building work with the Vatican’s literature and a hyper-personalised education pilot in New York City [200-208]. Deploying “Domain Specific Models” that are not tied to foreign ecosystems, she argued, enables measurable ROI while respecting data-locality and compliance [81-88][89].


Balaji Thiagarajan – Responsible AI at Scale & Agentic Orchestration


Balaji described Flipkart’s operationalisation of responsible AI at Internet scale. He said fairness spans pricing, product quality and after-sales service, all of which depend on high-quality data and strict access-control policies [100-108][110-112]. Data in motion is protected through encryption, and the modelling layer uses a mixture-of-experts architecture: large foundation models handle generic intent detection, while smaller, region-specific SLMs provide precise pricing and catalogue generation for Indian demographics [225-238][119-124]. He also detailed the agentic orchestration framework that routes each query either to a generic LLM or to a region-specific SLM based on the task, ensuring optimal performance and compliance [260-268]. To maintain user trust, Flipkart’s customer-service agents are presented as co-pilots with a default opt-out disclosure; users must explicitly opt-in to interact with a bot, a practice Balaji said is essential for transparency and compliance [127-135][136-137].
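
The routing decision in such an orchestration layer can be sketched as below. The keyword list and tier names are invented; a production router would use a trained intent classifier rather than string matching:

```python
# Hypothetical sketch of mixture-of-experts style query routing:
# domain-specific commerce intents go to a regional SLM, generic
# intents to a large foundation model.

DOMAIN_KEYWORDS = ("price", "pricing", "catalog", "listing", "delivery")


def route_query(query: str) -> str:
    """Pick the model tier for a query: 'regional-slm' or 'foundation-llm'."""
    q = query.lower()
    if any(keyword in q for keyword in DOMAIN_KEYWORDS):
        return "regional-slm"       # localized pricing, catalogue tasks
    return "foundation-llm"         # broad, generic intent handling


print(route_query("What price range should I list this AC at in Mumbai?"))
# regional-slm
print(route_query("Summarize this customer review"))
# foundation-llm
```

The design choice mirrors the panel discussion: the expensive general model is reserved for open-ended queries, while region-trained SLMs answer the questions where local fidelity matters most.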


Policy Levers & Public Compute


When asked about concrete policy levers, Babak proposed two complementary measures. First, a publicly available processing-capacity platform that would democratise access to compute resources currently concentrated in a few large firms [145-152]. Second, a sovereign sandbox where startups, academia, regulators and entrepreneurs can safely experiment with agentic systems and co-develop appropriate regulations [155-165]. He stressed that the government’s role should be to nurture an ecosystem rather than to build every AI stack itself [156-158][162-164]. Sunita noted that India has already begun to materialise this vision by provisioning 60,000 GPUs to states and institutions, enabling the creation of open-source sovereign LLMs [166-170].


AI-in-a-Box Question


Sunita also raised a forward-looking question about an “AI-in-a-box” modular infrastructure that could enable synthetic-data creation for students and researchers across the Global South, highlighting its potential to democratise access to AI tools [90-93].


Consensus Points


All speakers agreed that balanced guardrails (human-in-the-loop oversight, uncertainty quantification and clear user disclosure) are indispensable to avoid both blind reliance and excessive rubber-stamping [11-18][33-34][127-135]. There was broad agreement on the need for publicly accessible compute resources and sandbox environments to democratise innovation [145-165][166-170], and on the importance of sustainable, modular data-centre designs with measurable energy KPIs [50-54][177-185].


Complementary Approaches


The panel highlighted two complementary approaches: Babak emphasized the need for publicly available compute resources and sandbox environments [145-152][155-165], while Tanvi stressed the importance of building sovereign, domain-specific models that reduce dependence on external LLMs [80-87].


Actionable Take-aways


– Implement guard-rails that combine human oversight with automated uncertainty checks.


– Develop standards for agentic identity verification.


– Establish public compute platforms and sovereign sandboxes.


– Build synthetic data pipelines with tunable noise and federated-learning workflows for heterogeneous, multilingual data.


– Accelerate domain-specific sovereign LLMs for low-resource languages.


– Adopt liquid-cooling, modular data-centre designs and KPI-based incentives.


– Enforce high-quality data, strict access controls, encryption and transparent bot disclosures in large-scale e-commerce platforms.


These suggestions reflect the collective recommendations of the speakers [145-165][166-170][177-185][41-44][80-89][100-108][110-118][127-135].


Conclusion & Future Work


The summit showcased India’s unique position, grounded in a strong service-based IT ecosystem, a proactive AI mission that has already distributed massive GPU resources, and a vibrant startup community, to lead responsible AI deployment across the Global South. The panel indicated that future work should focus on translating these consensus points into concrete policies, standards and public-private partnerships that can sustain the momentum generated over the past week [300-312].


Session transcript: complete transcript of the session
Sunita Mohanty

Thank you very much. Thank you everyone, and thank you to our esteemed panelists and everyone who’s come here braving the traffic. I know it’s the fag end of the AI Impact Summit and, as people were saying, they’ve heard so much of AI this week that they could decompress for the next one month and not hear it. But however, we can’t wish it away, because it’s a very significant part of our life. I’m Sunita Mohanty, Managing Director at Primus Partners, and it’s a pleasure being here. We started with the inaugural session on the 16th and we are ending with a session today. So it’s a very significant moment for us to be here today. So I’m going to quickly ask, we have a really good set of panelists here, so I’m going to start with you, Babak.

So we’ve been talking a lot, we’ve been attending a lot of sessions, people are talking about what is real in AI and today’s topic is about how do we connect India and the global south and what are some of the guardrails we can build here and especially as the chief AI officer at Cognizant, you’re seeing how AI is really impacting real life and enterprises are really moving to delivery architectures and operating models in AI in mission critical infrastructures like banking and healthcare. So from your point of view, what are you seeing as the guardrails and the trust frameworks that organizations are creating to make sure that these are safe and what would your advice be for India and the global south, what kind of frameworks should be adopted?

Babak Hodjat

Yeah, AI is real, and both the promise and the risk are real, and so guardrails are needed. We can’t fall off either ledge: trusting AI, or mistrusting it to the point where we’re debilitated by basically having, you know, a human rubber-stamp every single step; or the other way, basically thinking that it’s, you know, some magic pixie dust that you just pour over your organization and then turn it on and it’s AI-enabled. So guardrails are important, and there are different ways; there’s no panacea to ensure safety as well as reliability of these systems. One of the biggest risks is this notion that because the AI systems respond and reason very well after one or two reasoning steps, we can allow them to just continuously reason. They do make mistakes, even very trivial mistakes, after several hundred reasoning steps.

So, we’ve been here before, for example, with telecommunications, where a bit might flip when a truck is driving down the road. And so we know how to error-correct through redundancy and through other means. We know how to engineer systems that are reliable. And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agents in the loop and on the loop: checking other agents’ work, assessing uncertainty in an agent’s output and deciding not to take its output at face value, basically taking the output as well as its own measure of certainty in its output as a measure of whether or not we bring a human in.

So these are just some techniques, but there are a multitude of techniques that can be used. There is also increasingly this issue of agentic identity. When you’re building a system fully in-house for your own use, then you pretty much have control over the agents. You know which agent is talking to which agent, and they’re all built in-house. But increasingly we’re moving into a world where you have agents from third parties, maybe another business, maybe your consumer represented by an agent, B2C maybe an agent, coming in and talking to your agents. How do you assess, how do you determine the identity of this agent? We don’t really have very well-established standards for that just yet.

I know our friends at Google are working on that in A2A, and there are other standards coming out, but it’s still not well-established. So there are risks external to your agentic systems as well. So I just listed a whole bunch of different areas. When it comes to India, I know that there’s talk about, for example, building these systems within India, like sovereign LLMs to back the agentic systems. Regulation does play a part. Again, there’s a risk of over-regulating versus under-regulating. I think it’s important, again, not to fall off the ledge from one side or the other. I have opinions on that too, but I’ve realized I’m talking too long, so I should

Sunita Mohanty

Now, thank you, Babak, really good point. I know you have a very difficult job at Cognizant, but the two things that you mentioned about keeping agents and humans in the loop: we also heard in this one week a lot of people talking about having humans at the center of everything that you build, so that’s amazing. This morning we also heard about regulation versus innovation: the US is at the point of innovation, Europe is at the point of regulation, and where do India and the Global South stand? So that’s good. I’ll move to the academician point of view, Anupam, with you next. Much of the responsible AI research still assumes that data is clean and there is a stable infrastructure, but that’s not true of the Global South, because we do operate here on very heterogeneous data, intermittent compute access and multilingual environments. So from a research perspective, what are some of the new technical directions, hardware-aware AI and robust architecture evaluation models, that are needed to make AI really trustworthy?

Anupam Chattopadhyay

I think this is a very important question. We do see the scale of innovation and the pace; this is going at so high a rate. It’s not always easy to just take a back seat and think of the research as a standalone component that matures and goes to industry. People are releasing tools and things are going out of hand very quickly. And for that reason, what we figure out is it’s always good to keep the research very grounded and try to test the waters with some real-world scenario. And one of the examples that I pick up here is one spin-off that we had from a research group that’s on deep-fake detection. And there we are facing exactly the problems that the models that we begin with, when we start training, are actually showing very poor results when we are testing subjects globally, like for the images or for the audio, or if we are putting the audio under some circumstances where there is a lot of noise.

Because it was tested on very clean data, under a noisy atmosphere the accuracy of the detection is failing. It’s a huge concern, because people are also not always educated; there is already a digital barrier, and on top of that there is an AI barrier that’s coming up, which is making people fall prey to a lot of cyber scams very easily. So that’s the bad side of the AI that we are observing, and we are trying to defend against it. And for that, the technologies that we are trying to bring in: of course, one is to create synthetic data sets, so we have a tunable noise addition on top of the data; then collecting as much data as possible, say, by scraping the internet. But for deep fakes it brings a different problem: you see a video or an image, and from a human point it’s not even discernible whether it’s deep fake or not, it looks so original, right? So we had to create a separate automatic fact-checker, which is looking if there is a news item that is linking that image with something, and the news is coming from a trustworthy source; only then we call it, say, an original image, or otherwise it is refit.

So that is the data collection issue. But then it goes to even the implementation aspects: of course, not everyone has access to high-performance computing, and we have to cut all the data or the models back to the bare minimum. And there we have to resort to techniques where we are doing, say, mixture of experts: there are different models with different detection capabilities, and we put them together. And sometimes the models are proprietary and we want to take them from a particular vendor and merge them together, or an organization has their own contextual model but they don’t want to share the model as it is, and for that we have techniques like federated learning on how to merge the models and still guarantee that their training data or their models will never be leaked.

So it’s a privacy aware building up. So we do have all the technologies and tools just a short glimpse of that.

Sunita Mohanty

Thank you Anupam. I think one of the things that we are always discussing about there is not enough data to train the models and that’s why there was a lot of emphasis during this week around getting language models in so that there is enough Indic language as well as across APAC. The other thing is about synthetic data which is very important for us to keep the data clean and one of the conversations we were also having is how do you enable creation of synthetic data in countries like India and the Global South by creating AI in a box which is a very modular infrastructure that is available to students, researchers in a very small minimal environment for them to be able to create some of this data.

So thank you. With that, Amod, I wanted to speak to you about AI infrastructure, given that you are in that business with SubMod, and that is now becoming central to the responsible AI debate as well. The IEA estimates that data-center electricity demand will roughly double by 2030, and AI workloads are a key driver. How do enterprises and governments think about responsible AI not just from a model creation perspective but generally at an infrastructure, environment, ecosystem perspective, in terms of cooling, energy transparency, resilience?

Amod Kabade

So from our point of view, responsible AI starts from the design of the infrastructure. If the design of my data center is sustainable, that is where I am then going to be able to achieve my goals efficiently and sustainably. To do that, today we can leverage liquid cooling technologies, which will minimize the overheads in meeting this infrastructure’s cooling requirements and allow us to scale AI rapidly for the betterment of people and the planet. As government, I would say, we need to get to a point where we can define KPIs around energy consumption per token or water consumption per token for these types of massive infrastructures, and incentivize the players who are actually achieving or crossing those KPIs.

Essentially it is all about making these data center designs sustainable from the power consumption standpoint and achieving a much better outcome that we want to achieve in terms of AI and its scale.

Sunita Mohanty

So that’s a good point, because Tanvi and I, and Babak, you as well, have all come from Davos. I think one of the main conversations there was around energy and how you make it efficient, and in one of the conversations at Bloomberg we did hear about ROI and how you measure the cost of a query, like what it is on the infrastructure. So I hope that, at least with renewable energy and efficient cooling systems, we get better as well as optimized query capabilities. So with that, Tanvi, I wanted to move on to you, especially because you’re creating sovereign LLMs now with the Vatican as well as with New York City. So drawing from your experience of leading AI transformations, how do you think deep-tech startups in critical sectors like BFSI can build advanced functionality across complex regulatory systems, and also what’s your sense of the definition of sovereignty, given it’s a very loosely used term and we’re hearing a lot of it this week?

Tanvi Singh

Thank you, Sunita. Thank you everyone for having me here. I’m very happy, very excited to come back to my homeland in Delhi again from Zurich, and the conversation was very enlightening. We call it Davos 2.0, what India is hosting now with the AI Impact Summit. So thank you for having me. I think the question is super loaded; I can go on and on and on for multiple hours. But let’s just pivot into the conversation on ROI. I come from a banking background, worked for more than a decade in Swiss banking. And the conversation is always around: if you’re putting in an investment, what’s the return on it? So whether we are calling it the use of LLMs, the frontier models, we all know what the return on investment for the consumer is.

I think this is the first technology that touched consumers first, enterprise and governments later. We always had technology touch enterprise and government first and consumers later. But here, since the equation has turned lopsided, there are lots of factors that go into ROI. So going back to sovereignty, I think President Trump has really done the marketing and sales for sovereignty. And everybody fends for themselves, whether it comes to defense, or whether it comes to owning your own infrastructure, your data and your intelligence and your cognition, which we call models. But one of the factors that always resonates, coming from banking: if you cannot control, if you’re not accountable for what you present, you would never pass the regulatory bars on using the technology.

So we had this very famous team called Model Risk Management, and we used it for AI/ML for the longest time. I think anybody in banking would resonate with that, similarly for healthcare, similarly for the regulated industries. So with the use of LLMs, and I had the opportunity of working very closely with OpenAI during my time at UPS, being Microsoft’s primary partners, we have the entire world’s data in ChatGPT and all the other LLMs. There’s no way we can guarantee what output the system is going to throw at you. So the control on cognition and intelligence is as important as the control on infrastructure, and that is paramount, and which gave birth to what we’re now building at ECTA, which is the Domain Specific Model.

It doesn’t get trained on an open source. It’s not an American first or a Chinese first or a French first. We’re talking a lot about France and Mistral. It’s your own model. And from a sovereignty perspective, it’s important that we can build our own models where data is not a constraint. You could use the data of your own content, of your own organization, of your own government in your native language, and there’s no translation required. And you could use multiple use cases across that domain, which is extremely applicable and hopefully gives the ROI to the sovereign stacks that different governments and different organizations are building for themselves. Because if your model is in your control, you could put them into consumer -facing use cases and not the internal productivity use cases.

And the value of this whole technology for enterprise and government is only applicable when the end consumer gets to use it the way retail consumers are using OpenAI and Anthropic.

Sunita Mohanty

Thank you so much for that. Because I think one of the other things that I hoped for this week at the conference is also not only protection and guardrails at a model level, but there was also a demo of a product where, at the hardware level, they are trying to put some kind of controls so that there is a break. So we’ll get to that topic. I wanted to move to you, Balaji, because Flipkart is right in the middle of consumers, very much like our LLM models. So how do you think you operate at a population scale like in India and the Global South, and in a high-velocity environment? Where does responsible AI collide with business realities?

For example, how do you manage personalization, data security, fairness, and marketplace trust?

Balaji Thiagarajan

Yeah, Sunita, thank you for the question. You’re right, Flipkart operates at Internet scale pretty much, right? So we have 500 million users. See, when you talk about fairness, it’s across multiple areas, and we’ll talk about sovereignty separately. Pricing, we have to be fair. The quality of the things that we sell in our marketplace, it has got to be good quality so that when buyers buy something and they see the quality that they see in our applications and our marketplaces, they get exactly what they expect. Third, around fairness and pricing is also quality of service. When we deliver something, like it’s okay to deliver milk or groceries, but if you’re going to deliver some big equipment like an air conditioner, the quality of service is also not about just delivering.

It’s also about helping them understand how to use the product, how to install it, how to do after-sales service. And so we have companies in the Flipkart group, like Jeeves, that also do that. So for us, fairness is across a broad spectrum of things, starting from the beginning of the customer journey all the way through servicing the customer through the life of the product. Right. Now, if you think about how we achieve that, you know, it’s not a formula that we know exactly how to kind of implement. There is a recipe, what we call a standard operating procedure. It starts with data. We need to have good quality data, right? If we don’t have good quality data, then from there on, everything starts getting diluted further and further and further.

Now, on top of that good-quality data, the access controls on the data, and who can access what, are also very important. That's where we bring in the security aspects from an access-control perspective. Then, when you're interchanging data between organizations, between services, and so on, it's not only data at rest, it's also data in motion. How do you secure that data? That is all about encryption and everything else that goes with it. And then, when we get to the modeling layer at Flipkart, as Anupam mentioned, we use a mixture of experts.

The concept of a single world model being able to serve our needs, or anybody's needs, with the required fidelity and accuracy is something I have not seen work. At a broad information level, the LLMs of the world, the ChatGPTs, Claude Opus and so on, work fine, but not when you get into very specific tasks. I'll give you an example. We work on image-generating models. A seller today can take whatever SKU or listing they want to sell, take a picture of it, and based on that picture we can create a listing in the catalog, and the seller can be in business in the marketplace in a matter of 20 minutes. To do that, we have to recognize what the picture shows, extract from it everything we need to create a catalog listing, and, based on that listing, also tell the seller what kind of price range they can sell the item for.
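The seller-onboarding flow Balaji describes, photo in, priced catalog listing out, can be outlined roughly as below. This is an illustrative sketch only; none of the function names, attributes, or price bands are Flipkart's. The recognizer and pricing table stand in for the image model and the region-specific SLM he mentions:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    category: str
    attributes: dict
    price_range: tuple  # (low, high); invented INR bands for illustration

def recognize(image_bytes: bytes) -> dict:
    # Stand-in for the image-recognition model that labels the product.
    return {"category": "air_conditioner", "brand": "Acme", "capacity": "1.5 ton"}

def suggest_price(category: str, region: str) -> tuple:
    # Stand-in for a region-aware pricing SLM; bands are made-up examples.
    bands = {
        ("air_conditioner", "Mumbai"): (28000, 45000),
        ("air_conditioner", "Patna"): (25000, 40000),
    }
    return bands.get((category, region), (20000, 50000))

def create_listing(image_bytes: bytes, region: str) -> Listing:
    # Photo -> attributes -> catalog listing with a regional price band.
    attrs = recognize(image_bytes)
    category = attrs.pop("category")
    return Listing(category=category,
                   attributes=attrs,
                   price_range=suggest_price(category, region))

listing = create_listing(b"<photo>", "Mumbai")
```

The point of the sketch is the shape of the pipeline, recognition feeding a region-conditioned price suggestion, not the stubbed-out models themselves.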

So when you go through all these things, an LLM is going to give you a range that takes some international data into account. But you have to train what we call domain-specific models, which is what Tanvi was talking about. We call them SLMs, trained for the specific domain, the specific region, the specific demography, which is India in this case, and then price accordingly. Sellers are not selling to somebody sitting in the US or England; they are selling to somebody in India. And by the way, we can also tell them that if you are selling in Mumbai versus Delhi versus Kolkata versus somewhere in Bihar, these are the kinds of price ranges you can have.


Sunita Mohanty

That's a good point. But Balaji, a quick follow-up: when you use agents in services like yours for customer service, which is a very important part of the job, are you transparent about whether it's a bot versus a human? That conversation has come up as well.

Balaji Thiagarajan

Yeah, so look, today in customer service, when we deploy our agents, they are primarily co-pilots. The reason is that we have not mastered the technology yet in terms of voice bots that can directly talk to somebody and respond in a multilingual way. And when we say we have not mastered it: we know how to do it conceptually, but the models hallucinate. That's number one. Number two, we have a very strong ethics and compliance system in-house which says fair disclosure and transparency are by far the most important things for winning customer trust. So if an agent is going to have a conversation with you, our UX teams look at it from the angle of how the customer will understand who they are talking to, and we have a disclaimer saying that you might be talking to a machine here, and if you do not want to have that conversation, you can always opt out.

So the default position is opted out. You have to opt in to actually have the conversation, rather than having to opt out of one. If you look at a lot of companies, including the Apples and Googles of the world, the default is opt-in, and you have to think carefully, because if you are not conscious about it, you have just opted in. That's not how we do it. We default to opt-out and then have people opt in.

Sunita Mohanty

That's very refreshing to hear. I would go back to Babak next, but before I do, I have a very unusual request: they want us to huddle together for another group photo in the middle of this. So please, can I request everyone? Okay, moving on after the picture. Back to you, Babak. One of the questions a lot of government representatives have been asked over the last week is: what is a viable framework for building a government AI stack? If you have to do scaled AI deployment in the Global South, including monitoring, human oversight and vendor accountability, what framework would you recommend, and what is your advice to governments on how to look at it?

Babak Hodjat

I would start off with processing capacity. That's the underpinning for building these systems in-house and running inference on them, if you really want to build something internally. And I would actually create publicly available processing capacity. It's something everybody is complaining about everywhere around the world: most processing capacity is concentrated in private or large companies and is not available to, for example, students or the public to experiment and build with. Then rely on academia, students, researchers and government entities in the public domain to build on top of that. That's one thing I would suggest. It would attract talent and reinvigorate innovation outside of the very exclusive few big companies that can currently innovate in AI.

And then I would also create a sandbox, a sort of sovereign sandbox, in which to invite entrepreneurs, startups, academia and the regulator to try out, in a safe and controlled environment, various applications and various forms of interoperability between these agentic systems, and to come up with a regulatory framework well suited to India specifically. I don't know if the role of the government is to actually build an AI stack. I would think the role of the government is to create the ecosystem within which this stack can be organically and safely created. We talked briefly about regulation: you can't front-run regulation, but you also can't be completely negligent of it.

It's risky either way. And the best way to handle that, I think, is some form of safe sandbox environment where the regulator can try different things and observe. If something goes wrong within a sandbox, you have control over it; the implications are limited. Then you gradually move it out to more general usage. That would be my recommendation.

Sunita Mohanty

No, that's music to our ears, Babak, because, to be honest, the Indian government has actually done exactly that under the aegis of the AI Mission. They've procured 60,000 GPUs and provided them to states and institutions, and we're seeing a lot of innovation come out of this. We saw some of our sovereign LLM models that are now going to go open source with everything they have created, which is amazing. We were at some of the announcements happening last week, and Sarvam spoke about the models they are creating. So with that, I'm coming to you next, Amod. We spoke about the infrastructure. You've worked on enterprise and data-center operations, and you are now…

You are now moving from small AI pilots to sustained high-density production environments. Based on your experience across projects, what patterns have you seen in organizations that successfully scale their AI infrastructure? And what are one or two cases where early design choices, whether in how you cool, your density, or your deployment planning, made a decisive impact on reliability and trust?

Amod Kabade

…is no longer working. Now one needs to look at the chips being used today and the chips on the future roadmap. That needs to be a core part of your design and build, and even then the design still needs to be modular, flexible and, most importantly, sustainable. Why? Because traditionally, designing and building a data center takes anywhere between two and three years, and there are cases where it exceeds that, but let's say two to three years on average. In that period, as anyone tracking Nvidia's activities knows, they will have launched three or four generations of new chips, and suddenly what you planned for has become redundant or obsolete.

So whatever you plan for today needs to be flexible enough to accommodate all those future roadmaps as well. How do you do that? By designing your data centers in a modular fashion, leveraging technologies that allow you to accommodate future chips, which are going to be even more resource-hungry and generate even more heat. We need those technologies in place so that your designs can be sustained over a long period. That is one pattern clearly emerging: people moving from pilot to production, or from prototype to pilot, are understanding this aspect and making it a key design consideration.

Coming to cases of benefits: by using these sustainability-focused cooling technologies, companies are seeing significant gains. We have customers who have been live for more than three years with zero IT failures, which says a lot about the reliability of the setups being designed here. To summarize, it is all about making design decisions that keep the infrastructure flexible, modular and scalable. And I would like to leave a thought here. The way we see cars manufactured in factories today, where many components are sourced, some are manufactured by the manufacturer in their own factory, and everything is assembled and rolled out as a product, we see data centers moving in the same direction: the electrical, the mechanical and the IT will be designed and manufactured as modules and then rolled out to sites as modular, scalable, sustainable infrastructure for the AI factories of the future.

Thank you.

Sunita Mohanty

Great. And I really hope we get a great design playbook for building data centers that access renewable power, better cooling systems, and better ROI. With that, Anupam, from your vantage point in research, how should academia and industry jointly rethink model efficiency, reliability, and assurance as a single design problem, rather than treating ethics, performance, and infrastructure as separate layers?

Anupam Chattopadhyay

Okay. So I'll take a step back from this problem to highlight that any technology has a good side and a bad side, and before it rolls out to industry and the masses in general, there need to be enough safeguards in place. In academia we have the liberty to take pot shots and say, "this is wrong." Right now we feel there are a lot of gaps in the cybersecurity of AI, and we are trying to raise as much attention to that as possible. We are doing that as part of our research: the models are not properly trained, there are possible loopholes through hallucinations, and there can be alignment issues. Without these things being properly regulated before roll-out to industry, there will be repercussions and setbacks, so we draw caution to this.

To address that problem, particularly in the Global South, what is needed is a very strong industry–academia partnership. I spent a lot of time doing research in Europe, so I have seen this and can make a comparison: industry brings up a problem and says, this is what needs to be solved, and we want your students to learn this before they come to us, and we try to align with that philosophy. One thing I like very much, from my perspective in Singapore and NTU, is that they started AI.sg as a single-window consortium with multiple stages: research funding, then technology innovation, then technology transfer and commercialization, then dissemination, and then regulation. Whether you are a researcher, a university or a company, you can participate at any level. The problems can be very different, because a university is a melting pot: we build a model with a little training and a small amount of data, but when it goes out, the problem becomes AI for automotive, AI for a perception module, AI for agents. This is not something we can control, because every industry has its own regulations and requirements and moves at a different pace. That is what we try to address with the single-window point of entry and clearly defined parameters and benchmarks. For example, fairness and ethics, the recurring theme of this discussion, are often underrepresented: we highlight performance but not the ethical, hallucination and alignment lapses. Jailbreaking a model or extracting data from it is so easy that we are really scared when someone says, okay, start rolling this out. From an academic point of view we know it's weak, but we cannot control this unless enterprises and policymakers step in and say this must be regulated.

Sunita Mohanty

That's a good point, and I think the examples you took from Europe and Singapore are critical. At least with artificial intelligence, there has been a lot of collaboration between industry and academia throughout the world, and we hope that continues. So to you, Tanvi: given your work with platforms like Palantir and OpenAI, how should AI applications balance broad interoperability with deep, scalable domain integration? And we'd also love to hear about your experience in New York City and the Vatican, and the learnings we can take from there.

Tanvi Singh

Thank you, Sunita. When you ask about learnings from Palantir and OpenAI, I was fortunate to be a design partner in both cases through my work at the bank. With Palantir, this was way back when they were more a government technology provider for the U.S. defense services and wanted to make an enterprise play, and my bank was a design partner from financial services. Seeing that transition from a defense-services company to a platform company in the AI and ML space has been very interesting. Because, to Balaji's point, there is no one world model that can fit everything, and Palantir is obviously one of the best software platforms out there when it comes to AI and ML.

So they developed a stack on which you could do customized AI/ML at scale, and that was a huge learning. Being in a bank, one size doesn't fit all, and you can't think of a domain simply as "financial" or "healthcare", because the way we do finance in Switzerland is very different from the way we do it in the UK, and the regulators are different. Our retail use cases are very different from our wealth use cases, and one size does not fit all, especially in a regulated industry. That was a very important learning. With OpenAI, this was different: 80% of enterprise data still remains somewhere, data we keep storing and archiving in Switzerland.

You have to keep 10 years' worth of every single conversation that has happened with clients, every single piece of information manufactured as data while doing any regulatory work. So we had that data, but we never used it, not even with Palantir, which is very much AI/ML. With OpenAI, you get this whole unbound data that you could use for a lot of interesting things: managing your regulatory and compliance requirements, which is the biggest technology cost for a bank, but also engaging with your clients better. But then it's an API. With Palantir I got to experience a platform; with API access, we could get going in the early 2023–24 timeframe.

So with those two learnings: what if you could create a scalable, customizable platform like Palantir, but for generative AI? That is what we started building at Ecta. The idea is very much that you build in the guardrails and the security as part of the four layers we have at Ecta, and use your domain knowledge, your domain corpus of information, to train for your clients. So it's very much yours. There's no translation required; it's very language-oriented, very deeply culturally oriented. That's why the work with the Vatican was so significant: if the church is going to trust you with their literature, with their information as a benchmark against some of the hardest questions that get asked of the church, then we have a fair chance of being introduced to enterprises and governments.

And from a New York perspective, there's a lot of work we're doing, starting with AI in education, which is what we're also hoping to do more of in India. The challenge remains at least 50 students to every teacher, lots of languages, lots of cultural aspects, and infrastructure that is not yet there to match what students really need. But now with AI, you can hyper-personalize the experience for every student, so you do not have to learn English to learn math. You can do math in your local language, in Marathi, Bihari, or any other state language, and that barrier can go away. And I think the proof is always in the pudding.

So we get to see how these domain models work in enterprises as well as in governments.

Sunita Mohanty

Wonderful. And you must have a lot of insight into what gets asked of the church, so we'll have to catch you on that someday. But thank you. Coming back to you, Balaji: from Flipkart's perspective, how do you decide what to build internally with AI and where to rely on an external model, and how do these choices affect your long-term relationships with business and customers?

Balaji Thiagarajan

Yeah, I think we touched on this. As far as I can tell, unless we decide to build our own foundation models from the ground up, we will always use a mixture of experts, where at different layers we use differently parameterized models. If you look at a workflow being executed, say a shopping journey or a discovery funnel, the top of the funnel is usually a very generic statement. That's where the trillion-parameter LLMs help, and for us it works because at that point all you're doing is understanding intent, what the user is trying to say.

But as you get into further details and the intent becomes clearer and clearer, where we want to provide the right recommendations, hyper-personalize information, or adapt to what the customer is doing, that's where the smaller language models, what we call SLMs, come in. The way we think about this is that we have an agentic orchestration framework. Each agent decides what the task on hand is, and based on that task we have SLMs that have been trained for a specific task domain, or even a specific task. The agent knows, at that point, that it has to go to that particular LLM or SLM infrastructure

and then get the answers from there. So we have an agentic orchestration framework, a dynamically learning framework, that understands what's going on, adapts to what is happening in the ecosystem, makes decisions online, and redirects traffic to the right SLM depending on what is happening. For example, if the consumer asks, "show me the best price for these categories of products in a specific region," that's usually the pricing-and-promotion domain, and that might be a domain of data on which we have trained a specific SLM over a specific catalog of items for that particular area. Now, if somebody comes and says, "I'm just looking for running shoes"?

That's a very, very different query. For that query, you look at the whole catalog, then marry those catalog results with you as a person and your interests, filter that down, and serve it. That's the way it usually works. As Nandan Nilekani was saying, everybody uses UPI in India, but nobody knows the technology behind it. Hopefully we'll get to a point where nobody needs to know the technology behind this either, because it makes every user's life so easy and contextual that it has real impact.
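The routing Balaji describes, generic intents to a large foundation model and narrow intents to task-specific SLMs, can be caricatured in a few lines. This is a hypothetical sketch, not Flipkart's orchestration framework; a production router would use a learned intent classifier rather than the keyword matching below, and the model names are invented:

```python
def classify_intent(query: str) -> str:
    # Keyword matching stands in for a learned intent classifier.
    q = query.lower()
    if "price" in q:
        return "pricing"
    if "looking for" in q or "recommend" in q:
        return "discovery"
    return "generic"

# Illustrative route table: narrow intents go to small domain models,
# open-ended intents to a large general-purpose model.
ROUTES = {
    "pricing": "pricing-slm",      # trained on regional catalog + pricing data
    "discovery": "catalog-slm",    # search/recommendation over the catalog
    "generic": "foundation-llm",   # large general-purpose model
}

def route(query: str) -> str:
    return ROUTES[classify_intent(query)]
```

For example, "show me the best price for ACs in Delhi" would route to the pricing SLM, while an open-ended greeting would fall through to the foundation model.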

Sunita Mohanty

So with that one last question for all of you. So Babak, I’ll start with you and we’ll go from left to right. What’s your feeling about the last one week? What is your key takeaway from this? What are you taking back outside of the traffic and the crowd? And any piece of advice that you would give?

Babak Hodjat

You know, I was at the AI Everything Summit Africa last week in Egypt, and they said it's huge, one of the biggest summits, 23,000 people. Then I came here and they told me it's 300,000 people. So just the scale and the scope. And India is in a unique position in that its starting point is technology and IT, so I think it's much better prepared to understand AI, its implications, and how it can be used. Very strong startup scene; I was very impressed by that. To me it's one of the largest and most interesting of these conferences, and I go to a lot of them. Very, very impressive.

Sunita Mohanty

Yeah, that's good, because a lot of the planning started in October or even before that, and I don't think we ever anticipated the size of the event. When we saw the footfall, with government, researchers, students and business all there, it's just amazing that we could really run at that scale. So thank you so much. Amod?

Amod Kabade

I think it has been a fantastic week here in Delhi, participating in the AI Impact Summit. I'll just go back to the three sutras: people, planet and progress. I would only say that it is our responsibility to build AI infrastructure, and the entire ecosystem around AI, in a planet-friendly way, focusing on real use cases that reach the last mile, the last citizen of the country. Progress is something that is bound to follow.

Tanvi Singh

Okay, so I can articulate the journey: from Paris, where I was last year and where this was just a dialogue between political leaders; to Davos earlier this year in January, where sovereignty, and building AI for everyone rather than just the big frontier models coming out of America, or the competition from DeepSeek and other major players from China, became the main theme of conversation for the Global South; to actually seeing it implemented here across the halls of Bharat Mandapam. It's fascinating, and I feel very proud to be of Indian origin. And taking what India has done to Geneva, as part of the organizing committee in Switzerland where I come from, I think these will be very hard shoes to fill.

And from an Ecta perspective, my company, I think the sky's the limit on the opportunity. Hearing from Balaji and many other practitioners, including ServiceNow and others, the opportunity is clearly there: people are ready to experiment, and they are looking not for pilots but for actual return on investment. We see that with infrastructure, and we see what really works and what doesn't without customization. This is the deepest and most important question for every organization and every government: what we do with our data, and how we use cognition where we have control over that cognition. And I liked what Mr. Farnovi said: we don't want the American and the Chinese babies.

I like what Ecta is doing: bringing a lot of Indian babies to the world, which is what domain models do. So I'm very much looking forward to hosting many of you in Geneva next year. It's been a very big learning and a very impactful week that India has organized for the world. Thank you.

Sunita Mohanty

Anupam? Okay.

Anupam Chattopadhyay

In one word, the summit is just fantastic. I have not seen scale like this, because in academia we go to technical conferences ranging from very small, around 100 people, upward. The largest one I attended had 9,000 attendees; that's AAAI, also an AI conference. But here it's a complete order of magnitude more. And it is essential that we have this dialogue between researchers, entrepreneurs, policymakers and ministers, all on a single stage. That's really wonderful. One thing I was curious about, and maybe as part of the organizing team you can throw some light on it: how much AI was actually used to arrange this, to defend against cyberattacks, and in the systems that detect people passing through?

That I am curious about. It would be AI in action, hosting an AI summit.

Sunita Mohanty

We did use a significant amount of AI, though obviously not for everything. One of the most amazing things, I don't know how many of you saw the Prime Minister's address, was an AI agent doing real-time translation, largely for accessibility purposes. Those are examples of where we have really used it, and of course in the planning. And this wasn't just the government: a lot of people from business and academia came together, so it's primarily a win across India. I haven't seen that scale of partnership before. We have a team that sits in the ministry, and for the last six or seven months the number of people who have been coming in, volunteering and supporting has been just amazing to see.

Balaji?

Balaji Thiagarajan

I've only been to this first AI Impact Summit, not the other ones. But the way I look at it, the government of India's commitment to AI, its decision to do this, is a masterstroke for multiple reasons. One, it brings together the government, the industries, the academia, the students, and the imagination of the whole country: this is doable, the art of the possible is absolutely there. More importantly, India's technology underpinnings came from a service-based industry. If you hark back to the world of telecommunications, where we leapfrogged landlines to mobile, I think this is the opportunity for India, India-based companies, and any company that wants to operate in India, to leapfrog this whole generation of SaaS-based technologies, web-based technologies, what have you, and jump directly ahead.

And India can take that opportunity and become the number one software provider, not of services but of systems and products at world scale. We do not have a software brand in India that sells worldwide; services are not a software brand. This opportunity lets India leapfrog, because we have the scale, the people, the intelligence, and the ability to think very differently, at a price point that, honestly, nobody else can imagine. And now the government is behind this, and with the public infrastructure it is also reinforcing all the research that needs to happen. So this is an opportunity for India to take or to lose, as the case might be, but I think India is going to take it.

Sunita Mohanty

No, thank you so much, and on that optimistic note, thank you all for being here. We started the conference with the theme Sarvajana Hitaya, Sarvajana Sukhaya, welfare for all and happiness for all, and I hope we carry this message across the Global South, into Geneva, and bring Europe and the US into this as well. Thank you so much.

Related Resources — Knowledge base sources related to the discussion topics (16)
Factual Notes — Claims verified against the Diplo knowledge base (9)
Confirmed (high)

“Babak Hodjat emphasized that AI’s promise and its risks are both real, requiring balanced safeguards; neither blind trust nor total scepticism is acceptable.”

The knowledge base notes that both innovation advocates and public safety advocates have valid concerns that need to be balanced, confirming the need for a middle-ground approach [S97].

Additional Context (medium)

“He described techniques such as human‑in‑the‑loop or human‑on‑the‑loop oversight, agents monitoring each other, and explicit uncertainty assessment that triggers human intervention.”

The AUDA-NEPAD White Paper outlines three levels of human oversight for AI systems, providing additional detail on human-in-the-loop frameworks [S40].

Additional Context (medium)

“Continuous reasoning can accumulate trivial errors after many steps, requiring redundancy and error‑correction mechanisms.”

Research on internet resilience highlights cascading failure risks and the importance of redundancy and error-correction in complex systems [S103].

Additional Context (medium)

“There is a lack of robust standards for verifying third‑party agent identity, a gap that Google’s A2A work is beginning to address.”

Google’s recent AI agent toolkit release (Agent Development Kit) represents an effort to create standards and interoperability for AI agents, aligning with the described gap [S106].

Additional Context (high)

“Balanced regulation is needed in India as the country pursues sovereign large‑language models, avoiding both over‑ and under‑regulation.”

Discussions on AI regulation in India stress the delicate balance between business-friendliness and oversight, echoing the call for measured regulation [S26] and concerns about over-regulation outcomes [S95].

Confirmed (high)

“Many AI models trained on clean, well‑curated data perform poorly on noisy, multilingual, intermittently‑connected environments typical of the Global South.”

Evidence shows that language models exhibit significantly lower performance on non-English languages such as Telugu, confirming challenges with noisy and multilingual contexts [S110].

Additional Context (medium)

“Amod advocated liquid‑cooling technologies to reduce data‑centre cooling overheads and suggested KPIs like “energy‑per‑token” or “water‑per‑token”.”

The AI Impact Summit highlighted unprecedented power and cooling demands of AI workloads, underscoring the relevance of liquid-cooling and efficiency metrics for data centres [S8].

Additional Context (low)

“Data‑centre designs should be modular, flexible, and future‑proof to accommodate newer AI chips that generate more heat and demand higher density.”

Resilience guidelines for IoT and data-centre services emphasize modular, scalable designs to maintain operation under varying loads and power constraints [S104].

Additional Context (medium)

“Babak Hodjat described the evolution from single‑agent systems to complex multi‑agent ecosystems where agents must coordinate while protecting their interests.”

A dedicated session on multi-agent systems at the summit featured Babak Hodjat discussing this evolution, providing additional detail on his perspective [S4].

External Sources (111)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — -Prime Minister Modi: Role/Title: Honorable Prime Minister of India; Area of expertise: Government leadership, policy
S2
Announcement of New Delhi Frontier AI Commitments — -Brad: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified -S…
S3
IGF 2023 Global Youth Summit — Audience:Thank you, everyone. My name is Emad Karim. I’m from UN Women, working on online gender-based violence. And my …
S4
Challenging the status quo of AI security — ### Multi-Agent Systems at Enterprise Scale (Babak Hodjat) Sounil Yu: Thanks, Babak. And one of the standards that we a…
S5
Subrata K. Mitra Jivanta Schottli Markus Pauli — An analysis of India’s foreign policy over seven decades will inevitably reveal evidence of both change and continuity i…
S6
The reality of science fiction: Behind the scenes of race and technology — ‘Every desire is an end and every end is a desire then the end of the world is a desire of the world what type of end do you de…
S7
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi – Tanvi Lall – Jigar Halani
S8
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “Sir, my question is directly to you”[1]. “I wanted to know on that”[2]. “My name is Umesh Prasad Singh and I’m an assoc…
S9
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S10
POST-QUANTUM CRYPTOGRAPHY — – [86] Submission requirements and evaluation criteria for the post-quantum cryptography standardization process, 2016. …
S11
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thank you sir, that was quite reassuring as well And since you spoke about quantum I want to bring in Dr. Anupam Chattop…
S12
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — Hi, my name is Anupama. I am one of AI Kiran members. Professionally, I’m a data scientist. Now moved to a technical lea…
S14
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S15
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — By all means. The second layer is one of the layer is the serving layer when you build these applications. How do you do…
S16
Safe and Responsible AI at Scale Practical Pathways — – Rohit Bardawaj- Audience LLMs comprise only 10-15% of a solution, with the remaining 85% being guardrails, human-in-l…
S17
Agentic AI in Focus Opportunities Risks and Governance — They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it. So havin…
S18
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S19
WS #283 AI Agents: Ensuring Responsible Deployment — Carter argues that developing standards for how agents authenticate themselves and identify themselves to third parties …
S20
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — An interesting observation from the discussion is that sandboxes can facilitate the growth of digital banks and electron…
S21
Dynamic Coalition Collaborative Session — Panelists discussed balancing ethical guidelines with practical implementation. Gupta warned against over-regulation tha…
S22
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — The Governor challenges the common perception that regulation stifles innovation, arguing instead that appropriate regul…
S23
E-commerce and Sustainability: an overlooked nexus (Brazilian Center for International Relation – CEBRI) — They caution against excessive regulation, as it may stifle innovation and economic progress, particularly in developing…
S24
WS #100 Integrating the Global South in Global AI Governance — – Regulatory uncertainty is a major challenge for companies 2. Regulatory Uncertainty Salma Alkhoudi: So this slide i…
S25
Regulating Open Data_ Principles Challenges and Opportunities — Global governance and increasingly the global south is not merely observing this evolution, it is participating in it. I…
S26
Building fair markets in the algorithmic age (The Dialogue) — In India, a delicate balance must be maintained between being business-friendly and regulating dominant platforms. Key p…
S27
Shaping the Future AI Strategies for Jobs and Economic Development — -Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI depl…
S28
Robotics and the Medical Internet of Things /MIoT — Another crucial aspect discussed was the energy efficiency of data centres, which are vital in supporting human-computer…
S29
Ethical AI_ Keeping Humanity in the Loop While Innovating — Innovation is much more than that. innovation is really challenging ourselves to go further. And I want to go back to a …
S30
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S31
Driving Indias AI Future Growth Innovation and Impact — Rajgopal advocates for minimal regulation to avoid stifling innovation, arguing that benefits outweigh risks and issues …
S32
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Caio Machado:is yours. Thank you very much. It’s great seeing all of you. I’m going to quickly put a slide up with my co…
S33
The rise and risks of synthetic media — The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in he…
S34
Artificial intelligence (AI) – UN Security Council — Finally, synthetic data enhances representativeness by allowing for the creation of diverse and comprehensive datasets th…
S35
Trusted Personal Data Management Service — He suggests the use of privacy-protecting technologies, such as federated learning, where the data remains with the orig…
S36
AI for Good Technology That Empowers People — “So to make it even faster and achieve the sub 10 milliseconds, you actually have to bring in inference and training to …
S37
Transforming Health Systems with AI From Lab to Last Mile — Implement federated learning approaches that allow local data privacy while contributing to model improvement
S38
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S39
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — At the same time, these people have to be trained to give back more. If we can get every person to be evaluated or value…
S40
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Academia-industry partnerships are crucial for fostering innovation, addressing industry challenges, and bridging the ga…
S41
Open Forum #17 AI Regulation Insights From Parliaments — Countries in the Global South face multiple challenges including lack of computational power, data access gaps, and insu…
S42
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from our excellent speakers at the very begin…
S43
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S44
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S45
Building Indias Digital and Industrial Future with AI — This comment introduced nuance to the sovereignty debate and influenced the conversation toward finding balance between …
S46
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S47
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S48
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Understanding algorithms used in consumer interactions is another key area of focus for the ACCC. Regulators must be abl…
S49
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: Thank you First of all, I want to mention that digital literacy is not the same thing as AI litera…
S50
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “But then there is also a policy and regulatory landscape for discovering price of power for data centers”[60]. “Data ce…
S51
Scaling Enterprise-Grade Responsible AI Across the Global South — Great. And I really hope we get a great design playbook for building data centers that are accessing renewable power, be…
S52
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Achieving a sustainable and resilient future in 2025 will require collaboration across sectors, robust governance, and st…
S53
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S54
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion revealed strong alignment between industry needs, academic capabilities, and government policy. David Fre…
S55
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innov…
S57
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S58
The fading of human agency in automated systems — This gap between language and reality matters, especially in governance contexts where assurances of human oversight are…
S59
Why science metters in global AI governance — helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so polic…
S60
Projecting Digital economy rules on Global South’s AI regulations: what is needed to safeguard human rights? ( Data Privacy Brasil Research Association) — Lastly, the analysis underscores the importance of global solidarity and the pursuit of a fairer level playing field in …
S61
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Based on the analysis provided, AI is significantly transforming consumer protection. It is crucial to strike the right …
S62
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — Artificial intelligence | Data governance | Capacity development T. Srinivasan explained the development of a sovereign…
S63
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Armando Guio Espanol: Perfect, no, not 30 seconds. No, well, I was just going to say that definitely this is very contex…
S64
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Despite coming from very different industries (healthcare vs. payments), both speakers independently emphasized the crit…
S65
The Global Power Shift India’s Rise in AI &amp; Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S66
What policy levers can bridge the AI divide? — ## Infrastructure as Foundation A central theme throughout the discussion was that meaningful AI implementation cannot …
S67
Artificial intelligence (AI) – UN Security Council — Finally, synthetic data enhances representativeness by allowing for the creation of diverse and comprehensive datasets th…
S68
World Economic Forum 2025 at Davos — Finally, the use of synthetic data can enhance the representativeness of datasets, particularly in scenarios where real-wo…
S69
EU Artificial Intelligence Act — (60n)    It is appropriate to establish a methodology for the classification of general purpose AI models as general pur…
S70
What is it about AI that we need to regulate? — Based on the available meeting transcripts from the Internet Governance Forum 2025, the question of leveraging synthetic…
S71
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes…
S72
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Modern regulation requires innovative approaches including data-driven regulation and regulatory sandboxes for experimen…
S73
Secure Finance Risk-Based AI Policy for the Banking Sector — Implement a balanced regulatory approach that encourages experimentation through sandboxes while maintaining institution…
S74
WS #283 AI Agents: Ensuring Responsible Deployment — Lazanski warned that the attack surface for agentic AI will be enormous, requiring shared security practices among compa…
S75
Safe and Responsible AI at Scale Practical Pathways — “guardrails human in the loop risk assessment these are the tools which are available today …”[95]. “If we immediately…
S76
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S77
Towards a Safer South Launching the Global South AI Safety Research Network — All speakers identify capacity building as a fundamental challenge, noting gaps in technical capacity, institutional fra…
S78
WS #100 Integrating the Global South in Global AI Governance — Use of synthetic data to address data scarcity issues in the Global South
S79
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: Can you hear me? Am I on? Okay, thank you very much. Yeah, I am going to, yeah, first issue you a f…
S80
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Sustainable development | Development | Infrastructure
S81
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. W…
S82
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Zhang and Professor Gong Ke agreed on the fundamental importance of infrastructure development for AI advancement. Their…
S83
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — He warns that water resources are already stressed by data‑center cooling, making the adoption of liquid and two‑phase s…
S84
Panel Discussion Data Sovereignty India AI Impact Summit — This comment reframes the entire sovereignty debate by distinguishing between isolation and strategic control. It moves …
S85
Building Indias Digital and Industrial Future with AI — This comment introduced nuance to the sovereignty debate and influenced the conversation toward finding balance between …
S86
Building Sovereign and Responsible AI Beyond Proof of Concepts — Okay, everyone is sovereignty. Sorry, did you say something else? A responsible AI? I think that could also be here, bec…
S87
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: Thank you First of all, I want to mention that digital literacy is not the same thing as AI litera…
S88
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S89
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Regulators must be able to explain how these algorithms operate to ensure transparency and fairness in the marketplace. …
S90
Powering AI Global Leaders Session AI Impact Summit India — -Prime Minister: (mentioned as having spoken the day before, but did not speak in this transcript) -Sam Altman: CEO and…
S91
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Honourable Prime Minister Modi, Excellencies, dear colleagues, ladies and gentlemen. It is a great honour for me to be i…
S92
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of cou…
S93
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S94
Bridging the AI innovation gap — This was mentioned as part of their research sharing but indicates a need for further development of sector-specific fra…
S95
Conversational AI in low income &amp; resource settings | IGF 2023 — Rajendra Pratap Gupta:But Sameer, even after the Sarbanes-Oxley Act in the financial markets, we had the subprime crisis…
S96
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S97
Optimism for AI – Leading with empathy — Recognition that both innovation advocates and public safety advocates have valid concerns that need to be balanced
S98
Seeing, moving, living: AI’s promise for accessible technology — Privacy frameworks must evolve to account for technologies that are simultaneously personal and public. A blind person u…
S99
The Overlooked Peril: Cyber failures amidst AI hype — This is not to say that we should abandon discussions about the potential long-term risks of AI. Rather, we must strike …
S100
National Disaster Management Authority — The Minister stressed the critical importance of creating digital twins and thermal maps for emergency response, but str…
S101
Strategic prudence in AI: Experts advise incremental approach for meaningful advancements — At TechCrunch Disrupt 2024, data management leaders advised AI-driven businesses to focus on incremental, practical applic…
S102
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — From the analysis of these arguments, it can be inferred that while third-party tools offer convenience and efficiency i…
S103
WS #139 Internet Resilience Securing a Stronger Supply Chain — Complex interdependencies create cascading failure risks Despite hiring top engineers and implementing redundancy measu…
S104
Introduction — Resilience should be built in to IoT devices and services where required by their usage or by other relying systems, tak…
S105
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S106
Google unveils new AI agent toolkit — This week at Google Cloud Next in Las Vegas, Googlerevealedits latest push into ‘agentic AI’. A software designed to act…
S107
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S108
Steering the future of AI — Limitations and Future of Large Language Models (LLMs)
S109
Multi-stakeholder Discussion on issues about Generative AI — Luciano Mazza de Andrade:Sorry I was off. Thank you very much, Yoshi. Well, I think our colleagues and previous speakers…
S110
How can AI improve multilingualism — ChatGPT-4’s performance in languages other than English is of lower quality. In a recent test, ChatGPT-4 scored 85% on a …
S111
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Babak Hodjat
4 arguments · 127 words per minute · 954 words · 449 seconds
Argument 1
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
EXPLANATION
Babak stresses that AI systems need guardrails that avoid both blind trust and excessive mistrust. He proposes techniques such as keeping humans in the loop, assessing the uncertainty of an agent’s output, and deciding when to intervene based on confidence levels.
EVIDENCE
He explains that AI promises and risks are real and that guardrails are essential to prevent over-trust or mistrust, citing the need for human-in-the-loop or on-the-loop mechanisms and uncertainty assessment of agent outputs as safeguards [11-12][15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The predominance of guardrails (85% of solution) and the need for human-in-the-loop and uncertainty assessment are emphasized in [S16]; further discussion of these safeguards appears in [S17] and [S18].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty, Balaji Thiagarajan
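The confidence-gated escalation Babak describes can be sketched minimally; the threshold value, the confidence score, and the function names below are illustrative assumptions, not details from the session.

```python
# Sketch of uncertainty-gated oversight: the agent acts autonomously only when
# its confidence clears a threshold; low-confidence outputs are escalated to a
# human reviewer (human-in-the-loop). The 0.8 threshold is an assumption.
def route(answer, confidence, threshold=0.8):
    """Return ('agent', answer) when confident, ('human', answer) otherwise."""
    if confidence >= threshold:
        return ("agent", answer)       # agent proceeds on its own
    return ("human", answer)           # flagged for human review

print(route("approve", 0.95))  # ('agent', 'approve')
print(route("approve", 0.40))  # ('human', 'approve')
```

In practice the confidence signal might come from model log-probabilities, agreement between redundant agents, or a separate verifier; the routing logic stays the same.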
Argument 2
Need standards for agent identity and third‑party agents in multi‑agent ecosystems (Babak Hodjat)
EXPLANATION
He points out that as AI systems incorporate agents from multiple external parties, there is currently no clear way to verify their identities. Establishing standards for agentic identity is crucial to manage risks from third‑party interactions.
EVIDENCE
Babak describes the challenge of identifying agents when third-party or consumer agents interact with internal agents, noting the lack of well-established standards and mentioning Google’s work on A2A as an early effort [20-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Identity management challenges for agentic AI and the call for authentication standards are covered in [S4] and [S19]; the lack of established standards is noted in [S13].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
Argument 3
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
EXPLANATION
He recommends creating publicly accessible compute resources and a sovereign sandbox where innovators can test AI applications under regulatory oversight. This would democratise access to AI infrastructure and foster safe experimentation.
EVIDENCE
Babak proposes a publicly available processing capacity for students, academia, and startups, and a sovereign sandbox that brings together entrepreneurs, regulators, and academia to trial applications and shape regulation in a controlled environment [145-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sandbox approaches for regulated experimentation are described in [S20]; similar sandbox concepts for AI governance are referenced in [S18].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty
DISAGREED WITH
Tanvi Singh
Argument 4
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
EXPLANATION
He warns that excessive regulation can stifle innovation while insufficient regulation can expose societies to AI risks. A balanced approach, possibly via sandbox testing, is needed to navigate this tension.
EVIDENCE
Babak notes the risk of over-regulating versus under-regulating and stresses the importance of not falling off either ledge, suggesting a sandbox as a way to test and refine regulation safely [30-32][158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing ethical guidelines with practical implementation and avoiding over-regulation is discussed in [S21]; the view that appropriate regulation enables innovation appears in [S22] and [S31]; concerns about excessive regulation stifling growth are raised in [S23] and [S24].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty
DISAGREED WITH
Tanvi Singh
Sunita Mohanty
3 arguments · 117 words per minute · 2003 words · 1024 seconds
Argument 1
Framing the tension between regulation and innovation for India and the Global South (Sunita Mohanty)
EXPLANATION
Sunita highlights the global debate in which the US pushes rapid AI innovation while Europe emphasizes regulation, and asks where India and the Global South should position themselves. She seeks guidance on balancing these forces.
EVIDENCE
She references the ongoing conversation about regulation versus innovation, noting the US is at the innovation stage and Europe at the regulation stage, and asks where India and the Global South should stand [33-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The debate on regulation versus innovation for emerging economies is highlighted in [S22], [S23], [S24] and [S31].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Babak Hodjat
Argument 2
Emphasis on renewable energy, efficient cooling and ROI‑focused infrastructure planning (Sunita Mohanty)
EXPLANATION
Sunita stresses that AI infrastructure must be environmentally sustainable, using renewable power and efficient cooling, while also delivering clear return‑on‑investment metrics for queries and operations.
EVIDENCE
She mentions discussions at Davos and Bloomberg about energy efficiency, renewable power, efficient cooling, and measuring query cost to improve ROI for AI workloads [55-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure energy challenges, renewable power and cooling efficiency for AI workloads are examined in [S27] and [S28]; a design playbook for renewable-powered data centres is mentioned in [S18].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
AGREED WITH
Amod Kabade
Argument 3
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
EXPLANATION
Sunita calls for AI systems that keep humans at the centre of design and operation, coupled with transparent regulatory frameworks that enable innovation without compromising user safety.
EVIDENCE
She references earlier remarks about keeping humans at the centre of AI development and the broader debate on regulation versus innovation, underscoring the need for clear guidance [33-34][93-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop and ethical AI principles are discussed in [S29]; inclusive, human-centered governance frameworks are presented in [S30] and [S31].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
AGREED WITH
Babak Hodjat, Balaji Thiagarajan
Anupam Chattopadhyay
3 arguments · 173 words per minute · 1226 words · 423 seconds
Argument 1
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
EXPLANATION
Anupam proposes creating synthetic datasets where controlled noise can be added, enabling deep‑fake detectors to perform reliably on real‑world noisy and multilingual inputs.
EVIDENCE
He describes building synthetic data with tunable noise to address poor detection performance on noisy, multilingual images and audio, noting the challenge of models trained on clean data failing in real conditions [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of synthetic data for robustness and representativeness is described in [S33] and [S34].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Sunita Mohanty
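The "tunable noise" idea can be sketched as follows; Gaussian perturbation of numeric features is an assumed stand-in for the image and audio degradations discussed, and all names are illustrative.

```python
import random

# Synthetic augmentation with a controllable noise level: clean samples are
# perturbed so detectors can be trained and evaluated across realistic
# degradation levels. Gaussian noise on numeric features is an assumption.
def add_noise(sample, noise_level, seed=None):
    """Return a noisy copy of a numeric sample; noise_level 0 is a no-op."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_level) for x in sample]

clean = [0.2, 0.5, 0.9]
print(add_noise(clean, 0.0))          # [0.2, 0.5, 0.9] -- unchanged
print(add_noise(clean, 0.1, seed=7))  # a mildly perturbed variant
```

Sweeping `noise_level` from zero upward yields a family of datasets that ranges from curated-clean to field-noisy, matching the robustness gap the argument describes.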
Argument 2
Federated learning techniques to merge proprietary models while preserving data and model privacy (Anupam Chattopadhyay)
EXPLANATION
He suggests using federated learning to combine models from different vendors or organisations without exposing underlying data, thereby maintaining privacy and intellectual property.
EVIDENCE
Anupam explains that federated learning allows merging of proprietary models while guaranteeing that training data or model parameters are never leaked, supporting privacy-aware model building [42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Federated learning as a privacy-preserving method for collaborative model building is covered in [S35], [S36] and [S37].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
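A minimal federated-averaging sketch illustrates the privacy property Anupam describes: only parameter values reach the aggregator, never the raw data. The FedAvg-style averaging and the parameter layout below are illustrative assumptions.

```python
# FedAvg-style aggregation sketch: each client trains locally on its own
# private data and sends only parameter values; the aggregator averages them.
def federated_average(client_weights):
    """Average per-parameter values across clients (equal-length lists)."""
    n = len(client_weights)
    return {
        name: [sum(w[name][i] for w in client_weights) / n
               for i in range(len(values))]
        for name, values in client_weights[0].items()
    }

client_a = {"w": [1.0, 2.0], "b": [0.0]}  # trained on client A's private data
client_b = {"w": [3.0, 4.0], "b": [0.2]}  # trained on client B's private data
print(federated_average([client_a, client_b]))  # {'w': [2.0, 3.0], 'b': [0.1]}
```

Real deployments add secure aggregation or differential privacy on top of this averaging step so that even individual parameter updates are not exposed.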
Argument 3
Single‑window consortium linking research funding, technology transfer, commercialization and regulation to close the academia‑industry gap (Anupam Chattopadhyay)
EXPLANATION
He highlights the AI.sg model, a single‑window consortium that integrates research funding, innovation, technology transfer, commercialization, and regulation, enabling seamless collaboration across stakeholders.
EVIDENCE
Anupam details the AI.sg single-window consortium that coordinates research funding, technology innovation, transfer, commercialization, and regulation, allowing universities, companies, and policymakers to participate at any stage [188-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of academia-industry partnerships and coordinated consortia for AI innovation is highlighted in [S38] and [S40].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Sunita Mohanty
Tanvi Singh
2 arguments · 179 words per minute · 1571 words · 526 seconds
Argument 1
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
EXPLANATION
Tanvi argues for building domain‑specific large language models that are trained on local data, avoiding reliance on open‑source or foreign models and eliminating the need for translation.
EVIDENCE
She explains that their domain-specific model is not trained on open-source data, is owned locally, uses native language content, and removes translation requirements, thereby supporting sovereignty [80-87].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Balaji Thiagarajan
DISAGREED WITH
Babak Hodjat
Argument 2
Sovereign LLMs give organisations control over their data, eliminate translation bottlenecks and improve return on AI investment (Tanvi Singh)
EXPLANATION
She emphasizes that sovereign LLMs let organisations train on their own data, maintain data sovereignty, avoid translation delays, and deliver better ROI for AI deployments.
EVIDENCE
Tanvi reiterates that sovereign LLMs allow use of an organisation’s own content in native language without translation, improving control and ROI for sovereign AI stacks [85-87].
MAJOR DISCUSSION POINT
Sovereignty, Domain‑Specific Models, and ROI
Amod Kabade
2 arguments, 131 words per minute, 694 words, 316 seconds
Argument 1
Adoption of liquid‑cooling and definition of KPIs such as energy‑per‑token to make data centres climate‑friendly (Amod Kabade)
EXPLANATION
Amod recommends using liquid‑cooling technologies to reduce cooling overhead and establishing metrics like energy‑per‑token to monitor and improve the environmental performance of AI data centres.
EVIDENCE
He notes that liquid cooling can minimise cooling overhead and suggests defining KPIs such as energy consumption per token to drive sustainable AI infrastructure [52-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable AI infrastructure, energy consumption metrics and cooling efficiency are discussed in [S27] and [S28].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
AGREED WITH
Sunita Mohanty
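The energy-per-token KPI proposed above is simple to make concrete. The sketch below shows one way such a metric and a compliance check could be computed; the function names and the example figures are illustrative assumptions, not values from the panel.

```python
# Illustrative energy-per-token KPI for an AI data-centre workload.
# All names and numbers here are hypothetical.

def energy_per_token(total_energy_kwh: float, tokens_served: int) -> float:
    """kWh consumed per token served; lower is better."""
    if tokens_served <= 0:
        raise ValueError("tokens_served must be positive")
    return total_energy_kwh / tokens_served

def meets_target(total_energy_kwh: float, tokens_served: int,
                 target_kwh_per_token: float) -> bool:
    """Compliance check that an incentive scheme could be built on."""
    return energy_per_token(total_energy_kwh, tokens_served) <= target_kwh_per_token

# Example: 1,200 kWh spent serving 3 billion tokens.
kpi = energy_per_token(1_200, 3_000_000_000)   # 4e-7 kWh per token
```

An analogous `water_per_token` ratio, as the panel also suggests, would follow the same pattern with litres in the numerator.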
Argument 2
Modular, future‑proof data‑centre design that can accommodate rapidly evolving AI chips and workloads (Amod Kabade)
EXPLANATION
He advocates designing data centres in a modular, flexible fashion so they can be upgraded to support new AI chip generations and higher heat loads, avoiding obsolescence.
EVIDENCE
Amod describes modular, future-proof designs that allow incorporation of newer, more resource-hungry chips, emphasizing flexibility and sustainability to keep infrastructure relevant over time [177-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future-proof, modular data-centre designs and the need to address growing AI compute and energy demands are examined in [S27] and [S28].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
Balaji Thiagarajan
3 arguments, 173 words per minute, 1838 words, 634 seconds
Argument 1
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
EXPLANATION
Balaji explains that beyond generic large models, Flipkart employs smaller, domain‑specific language models to deliver region‑specific pricing, generate product listings quickly, and personalize offers.
EVIDENCE
He provides examples of using SLMs to create catalog listings from seller images within 20 minutes and to give price ranges tailored to specific Indian cities, demonstrating regional hyper-personalisation [119-124].
MAJOR DISCUSSION POINT
Sovereignty, Domain‑Specific Models, and ROI
AGREED WITH
Tanvi Singh
Argument 2
Fairness across pricing, product quality and service delivery; requires high‑quality data, strict access controls and privacy‑preserving model orchestration (Balaji Thiagarajan)
EXPLANATION
Balaji outlines that fairness at Flipkart spans pricing, product quality, and service, and can be achieved through good data, robust access controls, encryption, and privacy‑aware orchestration of multiple expert models.
EVIDENCE
He discusses fairness in pricing, quality of goods, service delivery, the need for high-quality data, access controls, encryption for data in motion, and a mixture-of-experts approach to model orchestration [100-108][110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fair market principles and algorithmic fairness considerations are outlined in [S26]; ethical AI and human-centered governance that support fairness are discussed in [S29] and [S30].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
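The routing behaviour described in this argument can be sketched minimally. In a production mixture-of-experts system the gating is learned; the keyword router below is a deliberately crude stand-in used only to show the control flow of sending generic queries to a large foundation model and domain queries to specialist SLMs. The model names and keywords are our illustrative assumptions.

```python
# Crude stand-in for mixture-of-experts routing: domain intents go to
# smaller specialist models (SLMs), everything else to a generic LLM.
# Keywords and model names are illustrative, not Flipkart's actual system.

DOMAIN_EXPERTS = {
    "pricing": "slm-regional-pricing",
    "catalog": "slm-catalog-generation",
}

def route(query: str) -> str:
    """Pick an expert model for a query; fall back to the generic LLM."""
    q = query.lower()
    for keyword, model in DOMAIN_EXPERTS.items():
        if keyword in q:
            return model
    return "generic-foundation-llm"
```

For example, a question about fair pricing in a given city would be routed to the pricing SLM, while an open-ended product question falls through to the foundation model.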
Argument 3
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
EXPLANATION
He states that Flipkart’s AI agents act as co‑pilots and that customers are shown a disclaimer with an opt‑out default, ensuring users are aware when they are speaking with a machine.
EVIDENCE
Balaji explains that agents are co-pilots, a disclaimer is shown indicating possible machine interaction, and the system defaults to opt-out, requiring users to actively opt-in for bot conversations [127-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency and human-in-the-loop requirements for AI systems are emphasized in [S29]; inclusive governance frameworks that promote disclosure are presented in [S30].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
AGREED WITH
Babak Hodjat, Sunita Mohanty
Agreements
Agreement Points
Balanced guardrails with human‑in‑the‑loop, uncertainty assessment and transparency to avoid over‑trust or mistrust of AI systems
Speakers: Babak Hodjat, Sunita Mohanty, Balaji Thiagarajan
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
All three speakers stress that AI systems must incorporate guardrails such as human-in-the-loop mechanisms, uncertainty evaluation and clear disclosure to users, thereby preventing blind trust or excessive mistrust of AI outputs [11-18][33-34][127-135].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on human-in-the-loop and robust guardrails aligns with enterprise AI safety guidelines and the EU AI Act’s systemic-risk classification, and reflects panel calls for clear safety measures in high-stakes AI deployments [S57][S58][S69][S56].
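The guardrail pattern the speakers converge on can be sketched as an uncertainty-gated escalation: the system acts autonomously only when confidence clears a threshold, and redundancy (disagreement among parallel models) supplies a crude uncertainty signal. This is a minimal illustration under our own assumptions; thresholds, labels, and function names are not from the panel.

```python
# Uncertainty-gated human-in-the-loop sketch: act autonomously only when
# confident, otherwise escalate to a human. Redundant-model disagreement
# serves as a simple uncertainty estimate. All names are illustrative.

def ensemble_uncertainty(predictions):
    """Majority answer plus disagreement rate among redundant models."""
    majority = max(set(predictions), key=predictions.count)
    agreement = predictions.count(majority) / len(predictions)
    return majority, 1.0 - agreement

def decide(confidence: float, threshold: float = 0.9) -> str:
    """Return 'auto' when confident enough, else hand off to a human."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return "auto" if confidence >= threshold else "escalate_to_human"

answer, uncertainty = ensemble_uncertainty(["approve", "approve", "reject"])
action = decide(1.0 - uncertainty)
```

Here two of three redundant models agree, confidence is roughly 0.67, and the decision is escalated rather than rubber-stamped.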
Public processing capacity and sovereign sandbox to democratise AI experimentation and foster safe innovation
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Government provision of 60,000 GPUs and creation of sandbox‑like ecosystem for AI innovation (Sunita Mohanty)
Both speakers advocate for publicly available compute resources and sandbox environments that enable students, startups and regulators to safely develop and test AI applications, reducing concentration of power in a few large firms [145-165][166-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are promoted as a way to enable sovereign, safe AI experimentation while maintaining oversight, a view echoed in IGF workshops and recent policy roadmaps on sandbox-driven innovation [S71][S72][S73][S66].
Balanced regulatory approach – avoid both over‑regulation and under‑regulation
Speakers: Babak Hodjat, Sunita Mohanty
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
Framing the tension between regulation and innovation for India and the Global South (Sunita Mohanty)
Both emphasize the need for a middle-ground regulatory stance that protects society without stifling AI innovation, warning against the risks of too much or too little regulation [30-32][33-34].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based AI policy frameworks argue that clear, proportionate regulation reduces uncertainty and accelerates innovation, mirroring recommendations from CGI’s policy roadmap and IGF discussions on balanced AI governance [S56][S59][S61][S73].
Renewable energy, efficient cooling and KPI‑based metrics for sustainable AI data‑centre operation
Speakers: Amod Kabade, Sunita Mohanty
Adoption of liquid‑cooling and definition of KPIs such as energy‑per‑token to make data centres climate‑friendly (Amod Kabade)
Emphasis on renewable energy, efficient cooling and ROI‑focused infrastructure planning (Sunita Mohanty)
Both call for environmentally sustainable AI infrastructure, highlighting liquid cooling, renewable power and quantitative KPIs (e.g., energy per token) to improve efficiency and demonstrate ROI [52-54][55-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable data-centre operation is a policy priority, highlighted in the AI Impact Summit’s focus on power pricing and cooling, and reinforced by national efficiency targets such as Germany’s binding renewable-energy mandates for data centres [S50][S51][S52].
Development and use of sovereign, domain‑specific LLMs (SLMs) for regional languages, pricing and hyper‑personalisation
Speakers: Tanvi Singh, Balaji Thiagarajan
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
Both highlight the strategic importance of building locally owned, domain-specific language models that operate in native languages, enable region-specific pricing and rapid catalog creation, thereby supporting AI sovereignty and ROI [80-87][119-124].
POLICY CONTEXT (KNOWLEDGE BASE)
Sovereign, domain-specific LLMs are advocated to keep data local and lower training costs, exemplified by a tax-domain LLM using LoRA adaptation and regional language model initiatives [S62][S63].
Synthetic data generation and noise‑tuning to improve model robustness in heterogeneous, multilingual environments
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
Synthetic data importance for keeping data clean and enabling AI in the Global South (Sunita Mohanty)
Both agree that synthetic data, especially with controllable noise, is essential to train robust AI models that perform well on noisy, multilingual real-world data typical of the Global South [41-42][45-47].
POLICY CONTEXT (KNOWLEDGE BASE)
Synthetic data is recognised for enhancing dataset representativeness and model robustness across diverse linguistic contexts, as discussed in UN and World Economic Forum analyses [S67][S68].
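The tunable-noise idea both speakers endorse can be sketched concretely: start from clean samples and inject corruption at a controlled rate, so models can be trained and evaluated at known noise levels. The corruption scheme below (random character drops) and all parameter names are our illustrative assumptions, not the panel's actual pipeline.

```python
# Synthetic-data sketch with tunable noise: clean text is corrupted at a
# controllable rate so robustness can be trained and measured per level.
# The noise model and parameters are illustrative.
import random

def add_noise(text: str, noise_level: float, seed: int = 0) -> str:
    """Randomly drop each character with probability `noise_level` (0..1)."""
    rng = random.Random(seed)  # seeded for reproducible corpora
    return "".join(ch for ch in text if rng.random() >= noise_level)

def make_noisy_dataset(clean_samples, noise_levels):
    """Cross clean samples with several noise levels; keep the clean target."""
    return [
        (add_noise(sample, level), sample, level)
        for sample in clean_samples
        for level in noise_levels
    ]

dataset = make_noisy_dataset(["hello world"], [0.0, 0.3])
```

Because the noise level is an explicit parameter, the same clean corpus yields a spectrum of degraded variants, which is what makes robustness measurable rather than anecdotal.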
Strong academia‑industry partnership through coordinated consortia to bridge research, commercialization and regulation
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Single‑window consortium linking research funding, technology transfer, commercialization and regulation to close the academia‑industry gap (Anupam Chattopadhyay)
Calls for collaboration across academia, industry and government to sustain AI innovation (Sunita Mohanty)
Both stress the need for structured, multi-stakeholder platforms that align research, funding, technology transfer and policy, facilitating seamless collaboration and faster deployment of responsible AI solutions [188-196][33-34].
POLICY CONTEXT (KNOWLEDGE BASE)
Collaboration between academia and industry is deemed essential for responsible AI development, reflected in IGF and WTO reports emphasizing coordinated consortia and knowledge exchange [S53][S55][S56].
Fairness in AI‑driven commerce requires high‑quality data, access controls, encryption and modular model orchestration
Speakers: Balaji Thiagarajan
Fairness across pricing, product quality and service delivery; requires high‑quality data, strict access controls and privacy‑preserving model orchestration (Balaji Thiagarajan)
Balaji outlines that achieving fairness at scale hinges on reliable data, robust access-control mechanisms, encryption for data in motion and a modular mixture-of-experts architecture to ensure accurate, equitable outcomes [100-108][110-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Fairness and consumer protection in AI-enabled commerce are highlighted in IGF consumer-protection forums, which call for strong data-governance, encryption and modular architectures to prevent unfair practices [S61].
Similar Viewpoints
Both see the creation of publicly accessible compute resources and sandbox environments as essential for democratizing AI development and ensuring safe experimentation [145-165][166-170].
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Government provision of 60,000 GPUs and creation of sandbox‑like ecosystem for AI innovation (Sunita Mohanty)
Both advocate for locally owned, domain‑specific language models that address regional language needs and enable tailored commercial applications such as pricing and catalog creation [80-87][119-124].
Speakers: Tanvi Singh, Balaji Thiagarajan
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
Both consider synthetic data a crucial tool for improving model robustness and overcoming data scarcity in heterogeneous, multilingual contexts [41-42][45-47].
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
Synthetic data importance for keeping data clean and enabling AI in the Global South (Sunita Mohanty)
Unexpected Consensus
Both technology‑focused speakers (Babak Hodjat and Amod Kabade) highlighted modular, future‑proof infrastructure design as a key enabler for AI scalability, despite coming from different domains (AI governance vs data‑centre engineering)
Speakers: Babak Hodjat, Amod Kabade
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Modular, future‑proof data‑centre design that can accommodate rapidly evolving AI chips and workloads (Amod Kabade)
While Babak focused on compute access and sandboxing, and Amod on physical data-centre modularity, both converged on the necessity of flexible, upgradable infrastructure to support AI growth, an alignment not explicitly anticipated in the agenda [145-165][177-184].
Overall Assessment

The panel displayed strong consensus on the need for balanced guardrails, transparent human‑in‑the‑loop mechanisms, sustainable and publicly accessible AI infrastructure, and the strategic development of sovereign, domain‑specific models. There was also broad agreement on the importance of renewable‑energy‑driven data‑centres, synthetic data for robustness, and structured academia‑industry collaborations.

High consensus across technical, regulatory and sustainability dimensions, indicating a shared vision that responsible AI deployment in India and the Global South requires coordinated policy, infrastructure investment and localized model development.

Differences
Different Viewpoints
How AI model development should be sourced and supported – public shared compute resources vs building sovereign, in‑house models
Speakers: Babak Hodjat, Tanvi Singh
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Babak argues that democratising AI requires publicly available processing capacity and a sandbox where innovators can experiment on shared infrastructure [145-152][155-165]. Tanvi, by contrast, stresses the need for organisations to build their own sovereign, domain-specific large language models that run on locally owned data and avoid dependence on external compute or foreign models [80-87]. The two positions differ on whether the primary solution is shared public resources or self-contained sovereign stacks.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between public compute provision and sovereign model development mirrors case studies of tax-domain LLMs and regional AI infrastructure strategies that stress data locality and cost-effective training [S62][S63][S66].
Preferred regulatory approach – balanced sandbox‑driven experimentation versus strong sovereignty‑driven control
Speakers: Babak Hodjat, Tanvi Singh
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
Sovereign LLMs give organisations control over their data, eliminate translation bottlenecks and improve ROI (Tanvi Singh)
Babak warns that excessive regulation can choke innovation while too little leaves societies exposed, proposing a sandbox to test rules in a controlled way [30-32][158-162]. Tanvi’s focus on sovereign models implies a tighter, nation-centric control over AI assets and data, which can be interpreted as favouring a more protective, possibly stricter regulatory stance to safeguard sovereignty [73-87]. The speakers therefore diverge on how much regulatory oversight is appropriate versus how much self-reliance should be pursued.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sandbox-driven experimentation versus sovereign control are reflected in IGF workshops advocating harmonised sandbox frameworks while respecting national policy objectives [S71][S72][S73].
Unexpected Differences
Assumptions about the availability of AI infrastructure versus the need for new public resources
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
India has already provisioned 60,000 GPUs to states and institutions, enabling sovereign LLM development (Sunita Mohanty)
Babak proposes creating new publicly accessible compute capacity because most processing power is concentrated in private firms [145-152]. Sunita, however, points out that the Indian government has already distributed a large GPU fleet to foster innovation, suggesting that the immediate need for additional public compute may be less urgent than Babak assumes [166-168]. This contrast between perceived scarcity and reported abundance was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs stress that reliable AI deployment depends on foundational connectivity and public infrastructure, prompting calls for new public resources to meet growing AI compute demands [S66][S63][S50].
Overall Assessment

The panel largely converged on the importance of guardrails, human‑centric design, and sustainable infrastructure. Divergences emerged around the preferred route to AI capability – shared public compute and sandbox experimentation versus building sovereign, in‑house models – and around the regulatory posture needed to protect sovereignty while fostering innovation. An unexpected tension appeared between the claim of limited public compute resources and the reported government‑provided GPU fleet.

Moderate. While there is broad consensus on the goals of responsible, sustainable AI, the differing strategies for infrastructure provision and regulatory balance could shape policy and investment decisions in India and the Global South. These disagreements suggest that future work will need to reconcile public‑resource democratisation with sovereign model development, and to clarify the appropriate level of regulatory intervention.

Partial Agreements
All three agree that AI systems need safeguards that keep humans informed and in control. Babak stresses technical guardrails such as human‑in‑the‑loop and uncertainty checks [15-18]. Sunita calls for keeping humans at the centre of design and clear regulation [33-34]. Balaji implements this through transparent opt‑out disclosures for bot interactions [127-135]. The disagreement lies in the concrete mechanism (technical uncertainty metrics vs policy guidance vs UI disclosure) rather than the shared goal of responsible, human‑centric AI.
Speakers: Babak Hodjat, Sunita Mohanty, Balaji Thiagarajan
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
Takeaways
Key takeaways
Effective AI guardrails require a balance between human‑in‑the‑loop oversight and automated uncertainty assessment, avoiding both blind trust and excessive rubber‑stamping.
Standards for agent identity and interoperability in multi‑agent ecosystems are still immature and need development.
Public processing capacity and sovereign sandbox environments are essential to enable academia, startups, and regulators to experiment safely.
Technical robustness for the Global South must address heterogeneous, noisy, multilingual data through synthetic data generation, tunable noise, and federated learning to preserve privacy.
Domain‑specific, sovereign LLMs are critical for low‑resource languages, reducing translation overhead and improving ROI for enterprises and governments.
Sustainable AI infrastructure should prioritize liquid cooling, renewable energy, and clear KPIs such as energy‑per‑token, while employing modular, future‑proof data‑center designs.
Responsible AI at consumer scale demands high‑quality data, strict access controls, fairness across pricing and service quality, and transparent disclosure (opt‑out) for AI‑driven customer interactions.
A coordinated academia‑industry‑government pipeline (single‑window consortium) can align research, technology transfer, commercialization, and regulation.
Resolutions and action items
Create a publicly accessible processing‑capacity platform to democratize AI experimentation (suggested by Babak).
Establish a sovereign sandbox where startups, academia, regulators, and enterprises can test agentic systems and co‑develop regulatory frameworks (Babak).
Define and adopt energy‑per‑token and water‑per‑token KPIs for data‑center operations, incentivising compliance (Amod).
Adopt modular, scalable data‑center designs that can accommodate future AI chip generations (Amod).
Develop synthetic data pipelines with tunable noise for training robust models on noisy, multilingual data (Anupam).
Implement federated‑learning workflows to merge proprietary models while preserving data/model privacy (Anupam).
Accelerate development of sovereign, domain‑specific LLMs for Indic and other low‑resource languages (Tanvi).
Integrate explicit opt‑out disclosures for AI‑driven customer service bots and enforce transparency policies (Balaji).
Promote a balanced regulatory approach that avoids both over‑regulation and under‑regulation, using sandbox feedback to inform policy (Babak, Sunita).
Unresolved issues
No established industry standards exist for verifying the identity and trustworthiness of third‑party AI agents in multi‑agent ecosystems.
Specific mechanisms for measuring the ROI of AI deployments across diverse sectors remain vague.
How to uniformly enforce fairness and quality across millions of sellers and products on large marketplaces like Flipkart is still an open challenge.
The exact process for scaling sovereign LLMs to cover all regional languages and dialects has not been finalized.
Details on how the government will sustain and fund the public processing‑capacity platform and sandbox over the long term were not addressed.
Methods for continuous monitoring and human oversight of AI systems in national‑scale deployments need further definition.
Suggested compromises
Adopt a balanced regulatory stance—neither overly restrictive nor completely laissez‑faire—using sandbox experiments to calibrate rules (Babak).
Implement default opt‑out for AI‑driven interactions, allowing users to opt in if they wish, balancing transparency with user convenience (Balaji).
Combine generic large‑scale LLMs for high‑level intent detection with smaller, domain‑specific models for detailed, localized tasks, achieving both breadth and precision (Balaji).
Design data‑center infrastructure that is modular and future‑proof, allowing incremental upgrades without full rebuilds, thereby reconciling sustainability goals with rapid AI workload growth (Amod).
Thought Provoking Comments
One of the biggest risks is this notion that because the AI systems respond and reason very well, after one or two reasoning steps we can let them continuously reason – they make trivial mistakes after several hundred reasoning steps.
Highlights a subtle but critical failure mode of AI systems: error accumulation over long inference chains, which is often overlooked in hype‑driven discussions.
Shifted the conversation from generic guardrails to concrete technical challenges, prompting later speakers (e.g., Anupam) to discuss robustness and error‑mitigation techniques such as synthetic data and uncertainty estimation.
Speaker: Babak Hodjat
When you’re building a system fully in‑house you control the agents, but increasingly we have third‑party agents talking to ours – we don’t have well‑established standards to determine the identity of these agents.
Introduces the emerging problem of agentic identity and interoperability, a gap in current AI governance frameworks.
Led Sunita to ask about sovereign LLMs and regulation, and set up Babak’s later suggestion for a public sandbox and processing capacity to address ecosystem‑wide standards.
Speaker: Babak Hodjat
Our deep‑fake detection model performed well on clean data but failed dramatically on noisy, multilingual inputs; we tackled this by creating synthetic noisy datasets, automatic fact‑checking pipelines, and federated learning to merge proprietary models without leaking data.
Provides a concrete, real‑world example of how data quality, heterogeneity, and privacy constraints affect AI reliability in the Global South.
Expanded the discussion from abstract guardrails to practical research solutions, influencing subsequent dialogue on synthetic data, hardware‑aware AI, and the need for domain‑specific models (referenced later by Tanvi and Balaji).
Speaker: Anupam Chattopadhyay
Sovereignty means building domain‑specific models that are trained on our own data, in our own language, so we control the cognition and can meet regulatory accountability – this is why we are creating ‘Domain Specific Models’ rather than relying on open‑source or foreign LLMs.
Frames AI sovereignty as a technical and regulatory imperative, linking model ownership to compliance, ROI, and national security.
Prompted Sunita to explore ROI and sovereign LLMs, and inspired Balaji’s explanation of internal vs external model usage and the agentic orchestration framework.
Speaker: Tanvi Singh
Fairness at Flipkart spans the entire customer journey – from pricing, product quality, to after‑sales service – and is achieved through high‑quality data, strict access controls, encryption, and a mixture‑of‑experts architecture that selects domain‑specific models for each task.
Connects abstract fairness concepts to operational practices at massive scale, illustrating how data governance, model selection, and architecture intertwine.
Deepened the conversation on practical implementation of responsible AI, leading to follow‑up questions about transparency (bot vs human) and influencing Babak’s later policy‑sandbox proposal.
Speaker: Balaji Thiagarajan
We need KPIs such as energy‑per‑token or water‑per‑token for data centers and incentives for those who meet them; sustainable design (e.g., liquid cooling) is the foundation of responsible AI.
Introduces measurable, infrastructure‑level guardrails that link AI usage directly to environmental impact, a perspective often missing in model‑centric debates.
Steered the discussion toward the physical layer of AI responsibility, prompting Sunita to tie infrastructure considerations to ROI and later to Amod’s modular, future‑proof data‑center design recommendations.
Speaker: Amod Kabade
Create publicly available processing capacity and a sovereign sandbox where startups, academia, regulators, and entrepreneurs can safely experiment; the government’s role is to nurture the ecosystem, not to build every stack itself.
Proposes a concrete policy mechanism that balances innovation with oversight, addressing earlier concerns about over‑ and under‑regulation.
Served as a turning point toward actionable governance recommendations, influencing Amod’s emphasis on modular design and Anupam’s call for a single‑window consortium linking research to regulation.
Speaker: Babak Hodjat
The AI.sg model in Singapore provides a single‑window consortium that links research funding, technology innovation, transfer, commercialization, dissemination, and regulation – a template for strong industry‑academia partnership.
Offers a proven governance framework that integrates multiple stages of the AI lifecycle, addressing the fragmented approach observed elsewhere.
Inspired Sunita and other panelists to consider similar structures for India and the Global South, reinforcing the theme of ecosystem‑wide collaboration.
Speaker: Anupam Chattopadhyay
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that moved the conversation from high‑level optimism to concrete, actionable challenges. Babak’s early warnings about cumulative reasoning errors and agentic identity opened a technical‑policy gap that was later filled by Tanvi’s sovereignty argument and Babak’s sandbox proposal. Anupam’s deep‑fake case study grounded the debate in data quality and privacy realities of the Global South, prompting Balaji to showcase how large‑scale commerce can embed fairness through architecture and governance. Amod’s focus on measurable infrastructure KPIs added an environmental dimension, while his modular data‑center vision linked back to Babak’s ecosystem‑building recommendation. Collectively, these comments reframed the dialogue around practical guardrails—spanning model reliability, data stewardship, regulatory sandboxes, and sustainable infrastructure—shaping a nuanced, multi‑layered roadmap for responsible AI in India and the broader Global South.

Follow-up Questions
What standards and protocols are needed to reliably identify and verify third‑party AI agents (agentic identity) in multi‑agent systems?
Without well‑established standards, integrating external agents poses security, trust, and interoperability risks for enterprises.
Speaker: Babak Hodjat
How can synthetic data creation be enabled in India and the Global South through modular “AI‑in‑a‑box” platforms for students and researchers?
Synthetic data can address data scarcity and privacy constraints, fostering trustworthy AI development in resource‑limited settings.
Speaker: Sunita Mohanty
What metrics (e.g., energy consumption per token, water consumption per token, query cost) should be used to measure AI infrastructure ROI and sustainability?
Transparent, quantifiable metrics are essential for responsible, cost‑effective, and environmentally friendly AI deployment.
Speaker: Sunita Mohanty
How should ‘sovereignty’ be defined and operationalised for AI models in critical sectors like BFSI, especially regarding data control and regulatory compliance?
Clear definition of AI sovereignty impacts model risk management, regulatory approval, and trust in regulated industries.
Speaker: Tanvi Singh
What framework should governments in the Global South adopt for building scalable AI stacks that include monitoring, human oversight, and vendor accountability?
A structured framework guides safe, transparent, and inclusive public AI deployments while balancing regulation and innovation.
Speaker: Babak Hodjat
What early design patterns (e.g., modular data‑center architecture, chip roadmap alignment, liquid cooling) enable reliable and trustworthy scaling of AI infrastructure?
Identifying proven design choices helps organisations avoid costly retrofits and ensures sustainable high‑density AI operations.
Speaker: Amod Kabade
How can academia and industry jointly treat model efficiency, reliability, and assurance as a single design problem rather than separate ethical, performance, and infrastructure layers?
A unified approach aligns research outcomes with real‑world constraints, reducing gaps between theory and deployment.
Speaker: Anupam Chattopadhyay
How can AI applications balance broad interoperability with deep, scalable domain‑specific integration, as learned from work with Palantir, OpenAI, the Vatican, and New York City?
Finding this balance ensures solutions are reusable across contexts while meeting specialized regulatory and cultural requirements.
Speaker: Tanvi Singh
What criteria should organisations use to decide when to build internal AI models versus adopting external models, and how do these choices affect long‑term business strategy?
Strategic model selection affects cost, control, compliance, and competitive advantage for large‑scale consumer platforms.
Speaker: Balaji Thiagarajan
To what extent was AI actually used in organising the AI Impact Summit (e.g., for cyber‑security, logistics, real‑time translation), and what lessons can be drawn?
Understanding real‑world AI deployment at scale showcases capabilities and reveals gaps for future event‑level AI applications.
Speaker: Anupam Chattopadhyay
How can publicly available compute capacity (e.g., a shared GPU cloud) be created to democratise AI experimentation for students, startups, and researchers in the Global South?
Open compute resources lower entry barriers, stimulate innovation, and reduce concentration of AI capabilities in a few large firms.
Speaker: Babak Hodjat
What would a ‘single‑window’ consortium model (like AI.sg) look like for end‑to‑end AI development—from research funding to regulation—and how can it be replicated in other regions?
A unified platform streamlines collaboration, accelerates technology transfer, and ensures coordinated governance across stakeholders.
Speaker: Anupam Chattopadhyay
What standardised benchmarks should be established for fairness, ethical lapses, hallucinations, alignment issues, and jailbreak resistance in AI models?
Measurable standards are needed to assess and enforce trustworthy AI behaviour across diverse applications.
Speaker: Anupam Chattopadhyay
What key performance indicators (KPIs) such as ‘energy per token’ or ‘water per token’ should be defined for AI data‑centers, and how can incentives be structured to promote compliance?
KPIs linked to sustainability drive greener AI infrastructure and provide clear targets for industry and regulators.
Speaker: Amod Kabade
What best practices ensure transparent disclosure in AI‑driven customer service (e.g., opting out vs. opting in by default) to maintain user trust and meet compliance?
Clear disclosure policies affect consumer confidence, regulatory adherence, and ethical deployment of conversational agents.
Speaker: Balaji Thiagarajan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.