Survival Tech Harnessing AI to Manage Global Climate Extremes
20 Feb 2026 18:00h - 19:00h
Summary
The panel convened to explore how artificial intelligence can be applied to India’s climate challenges, especially extreme weather and sustainability [1]. Amit Sheth explained that the Indian Research Organisation (IRO) was created after a December 2023 meeting with the Prime Minister to develop original, small and agile AI models tailored to Indian needs rather than relying on large foundational models, focusing on weather, health and pharma verticals [20-24][26-34][35-38]. He emphasized building hyper-local models for extreme weather that integrate spatial-temporal dynamics without the “baggage” of pre-trained large language models [27-30][31-33].
M. Ravichandran highlighted that traditional physics-based forecasts capture spatial patterns but miss fine-grained temporal rhythms, requiring a fusion of numerical models with AI to predict high-impact events such as cloudbursts [47-61][62-66]. Shivkumar Kalyanaraman added that low-cost cameras, multimodal sensors and low-Earth-orbit satellites can provide real-time visual data that, when combined with generative AI, enable short-term cloud forecasting and insight-level fusion across modalities [76-84][85-89]. Praphul Chandra pointed out that achieving such specificity depends on “small-data fine-tuning” of large foundation models, questioning how minimal a dataset can be while still delivering accurate domain performance [104-108]. Karthik Kashinath argued that transfer learning and benchmark datasets, similar to ImageNet’s role in computer vision, are essential to adapt global models to India’s hyper-local, data-sparse regions [110-114][115-119].
Shivkumar described the ANRF’s dual funding streams: a grant programme for non-profit research and a one-lakh-crore RDI capital fund for private-sector translation, along with targeted initiatives such as the AI-for-Weather & Climate track and the Leapfrog Demonstrators for Societal Innovation [193-210][216-224][225-232]. He noted recent hackathons in partnership with IBM and IIT Delhi that provide curated datasets to accelerate prototype development, while urging collaboration with agencies like NDMA and MoES [218-224]. Akshara and Sandeep emphasized that public-private partnerships, open IP licensing, and industry-academia consortia are being promoted to move solutions from TRL 1-2 to operational readiness and to attract both government and philanthropic capital [233-242][280-287].
Manish Bhardwaj illustrated how AI-enhanced early-warning systems that fuse terrestrial, satellite and sensor data can improve evacuation planning for multi-hazard events such as cloudbursts, landslides and flash floods, thereby reducing mortality [161-169][174-186]. Praphul Chandra gave a concrete example of AI-driven hyper-local solar generation forecasts feeding into India’s digitized grid (India Energy Stack) to enable demand flexibility and better load balancing [291-298]. Dev Niyogi argued that weather services must become decision-oriented “digital twins” that translate forecasts into actionable, monetizable products, such as insurance pricing, rather than generic climate data [313-322][327-330][337-342]. The participants agreed that building trustworthy, validated models and establishing robust data sharing, funding, and partnership mechanisms are critical steps toward operational AI solutions for climate resilience in India [144-147][155-158][233-242].
Overall, the discussion concluded that coordinated AI research, targeted funding, and cross-sector collaboration can transform climate prediction into actionable services that protect vulnerable populations and support sustainable economic growth [156-160][331-336].
Keypoints
Major discussion points
– Building purpose-built, hyper-local AI models instead of relying on large foundational models – IRO is focusing on “very agile, small, specific models” for extreme-weather use-cases and deliberately avoiding the “baggage” of big language models [26-30]. Panelists stressed the need to fuse physics-based numerical forecasts with AI time-series methods to capture both “elephant”-scale and “ant”-scale phenomena and to improve prediction of events such as cloudbursts [47-66].
– Data availability, open-access and interdisciplinary collaboration as the backbone of trustworthy AI forecasts – India’s massive historical weather archives provide a “huge” data resource that must be opened up for broader use, and young talent should be mobilised to “interpret the data differently” and reduce error and uncertainty [126-138]. Experts also highlighted the creation of benchmark datasets and metrics (e.g., the ERA5-based WeatherBench suite) as essential for achieving operational quality at hyper-local scales [263-270].
– Funding mechanisms and public-private partnerships to move from research to operational products – ANRF’s grant programmes, the one-lakh-crore RDI fund, hackathons, and “Leapfrog Demonstrators for Societal Innovation” are being deployed to catalyse AI-for-weather projects and to ensure industry-academia collaboration [193-232]. The venture-capital perspective reinforced that startups must partner with government, segment markets, and identify monetisable pathways (e.g., insurance, enterprise services) while leveraging public-private capital [280-287].
– Concrete AI-driven applications for climate resilience – Early-warning dissemination through trusted DPG systems, multimodal sensor networks (including low-cost cameras and LEO satellites), voice-assistant tools for household-level action, digital twins for decision-specific forecasting, and AI-enhanced renewable-energy grid management were all cited as high-impact use cases [70-75][76-84][91-100][291-298][337-342].
– Technical challenges that must be solved to realise these applications – Small-data fine-tuning of foundation models, transfer learning across data-rich and data-sparse regions, and establishing validation/verification pipelines were identified as research frontiers that will determine trust and adoption [104-108][109-114][263-270].
Overall purpose / goal of the discussion
The panel was convened to map India’s strategic roadmap for leveraging artificial intelligence to tackle climate-related challenges, particularly extreme weather and sustainability, by (i) defining the scientific and technical directions (hyper-local modelling, data fusion, validation), (ii) identifying institutional and funding levers (ANRF, RDI, public-private consortia), and (iii) pinpointing immediate, high-impact applications that can be piloted and scaled across the country.
Tone of the discussion
The conversation began with an optimistic, visionary tone, emphasizing the promise of AI-driven breakthroughs for climate science. As the dialogue progressed, it became increasingly pragmatic, focusing on concrete hurdles (data openness, benchmark creation, trust) and practical mechanisms (funding programmes, partnership models). The closing remarks retained the collaborative spirit but shifted toward a call to action, urging stakeholders to translate ideas into operational solutions. Overall, the tone remained constructive and forward-looking throughout.
Speakers
– Akshara Kaginalkar – Panel moderator/host of the AI Summit discussion.
– Amit Sheth – Founder/CEO of IRO (Institute for Research in AI for climate and sustainability); leads development of small, agile AI models for extreme weather and health applications.
– M. Ravichandran – Secretary, Ministry of Earth Sciences, Government of India; oversees weather, climate and sustainability initiatives. [S16]
– Manish Bhardwaj – Secretary, National Disaster Management Authority (NDMA), India; responsible for disaster preparedness and early-warning systems. [S15]
– Shivkumar Kalyanaraman – AI researcher and speaker on multimodal models for weather forecasting and climate-impact applications.
– Sandeep Singhal – Venture capitalist; manages investment portfolios in energy transition, mobility and climate-tech startups. [S1]
– Dev Niyogi – Professor, University of Texas at Austin; affiliated with IIT Roorkee; member of the founding team of IRO. [S2][S3]
– Praphul Chandra – Professor; Head of the Center for Excellence for Data Sciences and Dean of R&D, Atria University, Bangalore. [S8][S9]
– Karthik Kashinath – Director, Center for Excellence for Data Sciences; Distinguished Scientist at NVIDIA. [S10]
– Audience – Audience participant who raised a question on insurance and climate risk.
Additional speakers:
– Dr. Shiv Kumar – CEO, NRF (National Research Foundation); champion of AI for science and supporter of the panel discussion.
The panel convened to chart a national roadmap for applying artificial intelligence to India’s climate-related challenges, with a particular focus on extreme weather, disaster resilience and sustainability [1-13]. The moderator, Akshara Kaginalkar, introduced a cross-sectoral panel that included the Secretary of the Ministry of Earth Sciences, a venture capitalist, university professors, the NRF CEO and the NDMA Secretary, underscoring the breadth of expertise required for the task [2-14][15-17]. Akshara also referenced the “dew effect” as an illustration of how micro-scale phenomena can influence larger weather patterns, highlighting the need for models that operate across scales [260-262].
Dr. Amit Sheth explained that the Indian Research Organisation (IRO) was created after a direct meeting with the Prime Minister in December 2023, where the leader asked for home-grown AI solutions that would not simply imitate Western or Chinese models [20-24]. IRO’s strategy is to develop “very agile, small, specific models” for hyper-local extreme-weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque [26-31]. The institute also plans to extend this approach to health and pharma verticals, leveraging partnerships with the Indian Pharma Alliance and health organisations [32-38].
Dr. M. Ravichandran, Secretary, Ministry of Earth Sciences, highlighted the limits of conventional physics-based forecasts, which capture broad spatial patterns but miss fine-grained temporal rhythms. He used the metaphor of “the elephant plus the ant” to argue that both spatial (physics-driven) and temporal (AI-driven) components must be fused to predict high-impact events such as cloudbursts [47-66]. He also emphasized that robust validation and verification frameworks are essential to build confidence in AI-augmented forecasts [145-147]. This hybrid vision was echoed by Manish Bhardwaj, who called for a trusted, low-cost early-warning system that blends AI with terrestrial sensors, satellite feeds and existing alert-generation agencies, thereby improving granularity even where sensor coverage is sparse [70-75][175-180].
Prof. Shivkumar Kalyanaraman described multimodal AI pipelines that combine time-series models with visual data from inexpensive cameras, infrared or multispectral sensors, and low-Earth-orbit satellites. He argued that the focus should shift from raw data fusion, which is “painfully complex”, to “insight-level fusion” that can deliver nowcasts of clouds a few hours ahead [76-84][85-89]. Such multimodal approaches could be integrated into existing nowcasting and forecasting systems to amplify impact.
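As an illustrative aside, insight-level fusion can be sketched as combining per-modality predictions rather than raw streams. The following minimal Python example is purely hypothetical: every modality name, probability and weight is an invented stand-in for a real model’s output, not part of any system discussed on the panel.

```python
# Hypothetical sketch of "insight-level fusion": instead of merging raw
# camera, satellite and gauge streams, each modality produces its own
# short-term rain probability, and only those insights are combined.

def fuse_insights(predictions, weights):
    """Weighted average of per-modality event probabilities.

    predictions: dict mapping modality name -> probability in [0, 1]
    weights:     dict mapping modality name -> relative trust
    """
    total = sum(weights[m] for m in predictions)
    return sum(predictions[m] * weights[m] for m in predictions) / total

# Each number below is an illustrative stand-in, not real model output.
nowcast = fuse_insights(
    predictions={"sky_camera": 0.8, "leo_satellite": 0.6, "rain_gauge": 0.7},
    weights={"sky_camera": 2.0, "leo_satellite": 1.0, "rain_gauge": 1.0},
)
```

The design point is that each modality’s model can evolve independently; only calibrated probabilities (the “insights”) need a common interface, sidestepping the raw data-fusion complexity the panel flagged.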
Data was identified as both a strength and a bottleneck. Ravichandran noted that India holds roughly 150 years of legacy IMD weather records, but these archives are not yet fully exploitable because they remain siloed [126-128]. He called for open-access policies that would allow the nation’s “young brains” to interpret the data in diverse ways, reduce model error, improve initial conditions and enhance down-scaling to kilometre-scale forecasts [129-144][145-147]. The need for benchmark datasets was reinforced by Prof. Karthik Kashinath, NVIDIA, who likened the situation to the ImageNet breakthrough: creating standard datasets and metrics (e.g., the WeatherBench suite built on ECMWF’s ERA5 reanalysis) would drive operational quality at hyper-local resolutions [263-270]. He also pointed to super-resolution techniques already used in the Earth-2 programme, suggesting that generative-AI methods could further shrink the resolution gap within the next two to three years [271-274].
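For readers unfamiliar with downscaling, the baseline that learned super-resolution improves on is plain interpolation of a coarse forecast grid. The toy sketch below, in pure Python with invented numbers, bilinearly refines a 2x2 temperature field to 3x3; generative super-resolution replaces this interpolation with a model trained to add physically plausible fine-scale structure.

```python
# Toy statistical downscaling baseline: bilinear interpolation of a
# coarse 2-D temperature grid to a finer one. Learned super-resolution
# replaces this with a model that adds realistic fine-scale detail.

def bilinear_downscale(grid, factor):
    """Upsample a 2-D grid (list of lists) by an integer factor."""
    rows, cols = len(grid), len(grid[0])
    out_rows, out_cols = (rows - 1) * factor + 1, (cols - 1) * factor + 1
    out = []
    for i in range(out_rows):
        y = i / factor
        y0, y1 = int(y), min(int(y) + 1, rows - 1)
        fy = y - y0
        row = []
        for j in range(out_cols):
            x = j / factor
            x0, x1 = int(x), min(int(x) + 1, cols - 1)
            fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

coarse = [[300.0, 302.0],
          [301.0, 303.0]]             # 2x2 coarse temperature field (K)
fine = bilinear_downscale(coarse, 2)  # 3x3 refined field
```

Interpolation can only smooth between coarse values; the panel’s interest in generative methods is precisely that they can reconstruct sharp features (e.g., convective cells) that smoothing erases.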
The question of how to cope with data scarcity generated divergent views. Praphul Chandra asked how small a dataset could be while still fine-tuning a large foundation model for a specific climate task, framing this as a potential breakthrough [104-108]. By contrast, Sheth argued for building original, lightweight models from scratch, avoiding the “baggage” of pre-trained large models altogether [26-31]. A related disagreement concerned transfer learning: while Chandra advocated re-using knowledge from data-rich regions for India’s data-sparse locales [110-114], Sheth’s approach favours locally engineered models that do not depend on external pre-training [26-31].
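The small-data fine-tuning idea can be illustrated with a deliberately tiny sketch: a frozen “pretrained” feature extractor (standing in for a foundation-model backbone) plus a two-parameter head fit on only three local samples. All functions and numbers here are hypothetical, chosen only to show the freeze-the-backbone, fit-the-head pattern, not any real weather model.

```python
# Minimal sketch of small-data fine-tuning: keep a pretrained backbone
# frozen and fit only a tiny task-specific head on very few samples.
import math

def pretrained_features(x):
    """Frozen backbone: maps a raw input to two generic features."""
    return (math.sin(x), x * x)

def fit_head(samples):
    """Fit head weights (w1, w2) by least squares on very few samples."""
    # Normal equations for y ~ w1*f1 + w2*f2 (no bias term, for brevity).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, y in samples:
        f1, f2 = pretrained_features(x)
        a11 += f1 * f1; a12 += f1 * f2; a22 += f2 * f2
        b1 += f1 * y;  b2 += f2 * y
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three "hyper-local" observations suffice to fit the 2-parameter head.
w1, w2 = fit_head([(0.5, 1.0), (1.0, 2.0), (2.0, 5.0)])
```

Because only two parameters are learned, three samples are already enough; this is the intuition behind Chandra’s question of how far down the data requirement can be pushed when the backbone carries most of the knowledge.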
Funding and translation pathways were outlined by Prof. Shivkumar Kalyanaraman on behalf of the National Research Foundation (NRF). The NRF runs an AI-for-Science & Engineering programme with a dedicated AI-for-Weather & Climate track that collaborates with MoES on the Mission Mausam programme [196-199], a one-lakh-crore RDI capital fund for private-sector scaling, and a forthcoming “Leapfrog Demonstrators for Societal Innovation” scheme that rewards high-impact, non-incremental solutions [193-210][216-224]. Recent hackathons, co-organised with IBM and IIT Delhi, already provide curated datasets to accelerate prototype development [218-224]. The NRF has also launched Translation Research Centres that require joint industry-academic participation to move prototypes toward commercial deployment [190-193][225-227].
Public-private partnership (PPP) models were repeatedly stressed as essential for moving from research (TRL 1-2) to market-ready services (TRL 5-6). Akshara highlighted the government’s push for consortium-based proposals, open IP licensing and hub-spoke collaborations, which would allow startups to pick up academic IP and translate it quickly [233-242][245-251]. Sandeep Singhal added that successful scaling requires clear market segmentation, distinguishing public-good services from monetisable private-good offerings such as insurance or enterprise risk tools, and that both government capital and emerging philanthropic funds are ready to back such ventures [280-287][340-345].
Concrete application domains emerged across the discussion. Bhardwaj described AI-enhanced early-warning pipelines that could predict cascading hazards (cloudburst → landslide → flash flood) and enable timely evacuations, thereby reducing mortality [161-169][174-186]. Praphul Chandra illustrated how hyper-local solar generation forecasts, fed into the India Energy Stack, can support demand flexibility and peer-to-peer energy trading, turning weather predictions into direct grid-management value [291-298]. Dev Niyogi introduced the concept of decision-specific “box models” and digital twins that translate raw forecasts into actionable recommendations, ranging from long-term hedging to immediate shade-seeking decisions, thereby turning weather into a monetisable product and addressing the “tragedy of the commons” [313-322][327-330].
From the discussion, four recurring themes emerge: (1) hybrid AI-physics or AI-sensor systems for hyper-local forecasting; (2) open, benchmarked data and collaborative consortia to build trustworthy models; (3) robust PPP frameworks with open IP to accelerate translation; and (4) a preference for lightweight, domain-specific models or fine-tuned foundations that can be deployed rapidly [47-66][70-75][126-144][233-242][104-108][263-270]. Remaining points of contention, namely whether to prioritise bespoke small models versus fine-tuning large foundations, and how expansive digital-twin architectures should be, highlight the need for coordinated research agendas that accommodate both approaches [26-31][104-108][313-322][331-333].
In closing, participants identified a set of immediate research and policy actions: develop benchmark hyper-local datasets; explore small-data fine-tuning and transfer-learning pipelines; establish validation and verification protocols for AI-augmented forecasts; open legacy weather archives to the broader community; design multimodal insight-fusion frameworks; launch voice-based personal resilience assistants; pilot AI-driven climate-risk insurance products; and embed AI forecasts within the India Energy Stack for renewable-grid optimisation. These steps, underpinned by the NRF’s funding mechanisms and the IRO’s model-building agenda, aim to transform India’s climate prediction ecosystem from a purely physics-driven service into an actionable, decision-oriented platform that safeguards vulnerable populations while supporting sustainable economic growth [263-274][291-298][313-322][193-224][233-242].
top-down approaches in terms of finding the AI solutions, India’s critical problems and weather and climate is a major vertical. So welcome, sir. We have Dr. Ravichandran, he doesn’t need any introduction, but he’s the Ministry of Earth Sciences Secretary and everything and anything under weather and climate and sustainability, sir, is heading it and we look forward to your contribution. We have Mr. Singhal, who is a venture capitalist and he will give a very, very important aspect about how funding and economy is going to drive the solutions in AI for climate. Professor Dev Niyogi, he is professor from UT Austin, that is University of Texas at Austin. Also, he’s affiliated to IIT Roorkee and now one of the founding team of IRO.
Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar, who is the NRF CEO and a very, very great supporter of AI for science. And we look forward to your support as well as your inputs on how we can proceed on this. And we have Mr. Manish Bhardwaj, who has a very critical role in India as the Secretary of NDMA, the disaster management authority. And we have Professor Praphul Chandra. He’s heading the Center for Excellence for Data Sciences, and he is also Dean of R&D, Atria University, Bangalore. And we have Dr. Karthik, who is the director of the Center for Excellence for Data Sciences. He’s a distinguished scientist and engineer at NVIDIA. And he has played a major role in the very famous AI models which all of us are hearing about.
And they are, you know, changing the scenario of modeling and the way science is going to happen. So welcome. So I look forward to your contribution. Oh, okay. Can we stand just here? Okay. So before we open up the panel, I just wanted to ask a very quick question to Professor Sheth in terms of what the objective was, what we are looking for, when you started IRO. You know, in India we wanted to have this type of a research organization. So can you quickly tell us about what the thought process behind IRO was and what you foresee?
So the idea of IRO was initiated when I had a chance to meet the PM in December of 2023. I was asked to come and discuss with him. He is always very curious about technology and so he wanted to hear about the ideas on AI. Since I had multiple interactions on research and AI with him during his CM time, this was a fantastic opportunity for me to meet and discuss where India can shine and not necessarily follow the West or China in what we need to do. And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas where we can make big economic and social impact, as well as how we can support the startup ecosystem, where deep AI technology drives global products from India.
So that was a broad idea. And so IRO currently is developing original work on building very agile, small, specific models. In this context, for example, if you want to make a model for serving extreme weather related issue that is hyper local, then all the spatial temporal aspects, all the relative modeling aspects, all the prediction algorithms, those are the things that we will bring in. But we will not be building on the top of large language models or so -called foundational model, which come with a lot of baggage. We don’t know what kind of data it has been trained on, many other things. So original research in creating new, small, agile models. And so it will be a platform on the top of India AI structure to be able to create models.
And one area in which we would love to create models, where we have technology expertise here, Dev and many other people, is earth science, including disaster and, you know, sustainability issues; that is one of the verticals. The other two are health and pharma. In pharma, we have a very strong partnership with the Indian Pharma Alliance and the 23 major pharma companies, which represent 80% of India’s pharma output. And similarly, we are working with some health partners and all. So here you see the potential partners that we could have in making an impact in the sustainability and health areas. So thank you.
We would like to now start with one open question and then we’ll have individual questions because, I’m very sorry, the time is very short. The whole format actually was a one-day full workshop and we had to squeeze it in. So to start, yeah, one disclaimer, it’s not my personal preference, but I may request you to finish in time. We would definitely like to hear a lot from all of you, but we are constrained by time. So first, the opening question we would like all of you to answer: what would be one AI application or discovery that would excite you about AI helping in this domain of climate as well as extreme events, and sustainability as a broader thing, because everything is driven by weather and climate.
We have energy, we have health, we have economics and we have agriculture, many, many aspects of it. So we’d like to see what do you foresee and how do you would like to say that which one development will help us. And we’ll start with you, sir.
When you talk about the weather, of course, it now depends on various applications. When we were doing the weather forecast earlier, we used to say, suppose an elephant is going along: I am able to see that elephant, how it is going, and I am able to tell that tomorrow it will come here. But now the problem is that, because of climate change and other things, the space and time scales have changed. Now we have to see that on the elephant some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus the ant. So I want to see two things. One is time series. The other one is spatial.
On the spatial side, I think the physics-based numerical model is doing the better job. But if you want to go for time series, the local rhythm, then AI is better. So we need to integrate, or fuse, both together in order to understand the local weather at a fine scale. And suppose a cloudburst is there: you cannot do it with the numerical model alone, nor with AI alone. So we need to blend both. That is more important. So we want to know how to predict high-impact weather events, especially cloudbursts and other things. We do not know how to predict them. That is why we are looking at whether AI can help or not.
That is one of the objectives. Thank you.
I fully agree with what Ravichandran sir has just said. From the early warning point of view, the idea is to have a DPG (digital public good) sort of asset for the public so that we are able to disseminate early warning for all. So the idea is to have trusted early warning for all, given to the citizens at low cost, and this is where AI can definitely play a supporting role. It cannot be purely AI. It has to be a hybrid model which is connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert-generating agencies, but a source of trusted, reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role.
Thank you. Yeah, I
think I’ll just double down on the multimodal models that are coming out. I mean, one is the time series model. There are spatial models. And I will also mention that today, with generative AI, you can just put a camera pointed at the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead, two hours ahead, even four hours ahead. Make it an IR camera or make it some other multispectral camera, while all the costs are dramatically dropping. So you can imagine a network of sensors that complements the great work that’s being done in Mission Mausam and so on. And plus, now with the low Earth orbit satellites going up and also giving much more Earth observability, I think there is the opportunity to fuse insights as opposed to fusing data.
I mean, data fusion is a painfully, you know, mind-bogglingly and unnecessarily complex thing. But now there’s an opportunity to take insights from A, insights from B and fuse them across modes, and also to forecast across these modes. I think that’s a wonderful opportunity. I think that’ll have a huge impact. And once you integrate that into, you know, sort of nowcasting and other systems, I think we can have a great amount of impact. The other dimension is, of course, AI helping in the discovery of new materials and, you know, sort of simulations and so on. I think these have wonderful opportunities. And of course, as you know, the Nobel Prize for Chemistry went to somebody from an AI background.
Sure.
So I will put a consumer lens to this. Sirs have brought up the point around what technology is needed. I think with what is happening with voice agents right now, there is a need to have a simple voice framework, or a voice sort of app, which allows you to send not just information but actually create a resilience approach for the person, who can literally just click a button and say, OK, in the next week, these are the things that you need to do to survive whatever is happening from a climate perspective. Right. Or what do you need to do in the next month? So there is a forecasting aspect to it.
But more importantly, how does it integrate with my life? Do I need to stay at home? If I’m a farmer, what do I do? If I’m a, you know, labourer, what do I do? So it is that ability to bring it into my day-to-day life and allow me to actually act a certain way because of what I expect to see in the environment around me. And that includes daily air. I’ll
just add one term. You guys know this word, Jugaad. So this is a very India thing, Jugaad. So there is a framework that is mathematically feasible, that we can model very well, that follows equations, that follows laws of nature. And then there is a human element, that we always beat the system and make that happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play: it brings the human dimensions and it brings the societal aspect in with the physical constraints. And this is what is most exciting about it, that it will be becoming much more accessible, is where I think we’ll be going. We had heard also about the agentic AI; now I heard about the ant AI, thanks to you.
I’m going to pick up where Professor Niyogi and Dr. Shiv Kumar left off. You know, we work across several AI foundation models in biology, in materials, and we have looked at foundation models in weather. I think the breakthrough that I am most anxious to look for is what we call small-data fine-tuning. What that means is that when you look at these large foundation models, they are fairly general in their applicability, and as Professor Sheth was saying, when you have to fine-tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine-tune large foundation models? I think if you are able to have that breakthrough, it has applications across the multiple domains that we talked about.
I think a lot has already been shared which is very exciting on many different fronts. One thing I would like to see more used in practice is transfer learning which of course some regions of the world are data rich and some others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are in the planet. But at the same time, there’s uniqueness at hyperlocal scales. But if we can transfer learn efficiently from one region to another with constraints of what exactly we’re trying to transfer learn, I think that would be very impactful.
Thank you, Dr. Karthik. I think we have a mic here. We saw, right from the spatial side, as sir said, we can see everything from the sky down. And I think that matters. I remember once, I think I was discussing with sir, he said even the dew effect you have on the immediate temperature can affect your surroundings and everything. So from small to big is definitely there. And with AI also, from small to big we should see. And that leads to the next round of questions, which are very, very specific to the areas in which all of you are working and have a lot of influence, and that’s where we would like to hear from you and to have a direction on which way we can go.
At the end of this panel, that’s what, you know, we can all consolidate: can we look at what the three to four immediate things are which we can do? And with that, I would like to ask Dr. Ravichandran: how can India’s national capabilities in AI research, technology development, and, very importantly, human resources evolve to enable the transition from the current physics-driven prediction systems to AI-enabled, user-specific decision systems? What are the bottlenecks in that and how can we overcome them?
So as pointed out, basically one of the strengths we have is the data. The data volumes are huge nowadays because we have around 150 years of old IMD legacy records, as well as current data available. Now, how to utilize this data? And we have the young brains of so many young people, but we have not fully utilized that, because each one can interpret the data a different way. But finally it has to come out into a concrete solution. When you talk about AI and weather, why do we want to go for AI in the first place? Because in the numerical model we have a lot of assumptions. Because of those assumptions, the error grows.
Now, whether with AI we can reduce that growing error, that is number one. When your initial condition is better, you can predict better. So we have to get the initial condition in a better way by reducing the error. Many people are working in AI, different people. I think we need to pull many resource people into our domain so that they can look at the data differently, and also use it to minimize the error and reduce the uncertainties. And there are various techniques to improve the forecast. Nowadays, downscaling is one of the important things.
In the large-scale model, the fine detail is lost. So AI can downscale in a better way to the localized, suppose one-kilometre-resolution weather forecast; we want to know how we can forecast that. So we need more and more minds, and more and more people have to work on it. And I think we need to open up the data, so that different people can come and work on it. I have only one important point, which is this: when you are talking about AI/ML, the trust is most important, as you pointed out. I think we need to have better trust in the forecast system. There is a need for validation and verification; that is also very important, and AI/ML can help with it.
Our capabilities are huge, but we need to utilize them, backed by the strength of our data. Even people in biology are now working with AI/ML; the same can happen in our field. One more point: we are often wedded to one mindset, thinking there is only one way, when in fact there are multiple ways. That is why people from other disciplines should also look at this problem. Because it is data driven, another discipline can look at it differently, and together we can find a new pathway forward.
That is a very important point, because we tend to look at weather only from the physics angle. And that leads to what matters for disaster management services, which depend heavily on extreme events, and managing those is very difficult. So how do you foresee the adoption of AI for infrastructural preparedness in disaster management, and especially for reducing the severity of impacts on vulnerable populations? Cities, and those with access to resources, can cope, but we have a large vulnerable population. How do you see AI helping in the last mile?
A very apt question. As you are all aware, India is vulnerable to multiple hazards: not only cyclones, tsunamis, earthquakes, landslides and flash floods, but also glacial lake outburst floods (GLOFs). Given the vast geography and the population that can be impacted, it is essential from a disaster management point of view to have adequate preparedness and early warning capabilities. The country, through a whole-of-government approach, has undertaken various mitigation measures, but we can only mitigate the effects of disasters. So we have to protect the population by ensuring that our early warning capabilities are of the highest order,
so that we are able to minimize loss of life. Various agencies, particularly the IMD, as Dr. Ravichandran rightly said, have over time developed enormous capability to predict, say, a cyclone's path and trajectory very precisely, five days ahead of landfall. That enables timely evacuation and pre-positioning of response teams, which helps in minimizing casualties and even achieving zero-mortality milestones. But there are other hazards, and the way the hazard scenario has unfolded in the last few years, it has become a multi-hazard, cascading scenario in which one hazard leads to others.
There are incidents of cloudbursts, which currently cannot be predicted; there are various technical issues behind that. Cloudbursts leading to landslides and then flash floods are a serious concern. So how do we prepare ourselves, given the current state of resources? This is where AI can definitely pitch in. The idea is to take all the data coming from the alert-generating agencies, from terrestrial observations, satellite feeds and sensor networks, and use it for predictive forecasting, and also to improve nowcasting and increase the granularity of the early warning signal, because there are limits to how many satellite systems we can put in place.
It is not possible to map each and every hill in the vulnerable areas, and that is where the complications arise. Since development also has to take place in vulnerable regions, particularly the Himalayan zone, the challenge is to use technology to the maximum. What I foresee is that data from multiple sources can be analyzed, even with the current sensor network capabilities, to deliver pinpointed, accurate forecasts and early warning signals for the targeted population. That will help the district and state authorities carry out timely evacuation, response and relief operations.
This is one field where NDMA in particular is collaborating with multiple national agencies, and the IMD and the Ministry of Earth Sciences are playing a major contributory role in developing such a digital public good. I am very sure that the startup ecosystem in our country has the agility to support the efforts of NDMA and the national agencies in taking this mission forward. This is where I believe we can definitely strengthen our early warning capabilities, particularly for flash floods, glacial lake outburst floods, lightning and landslides. And we are very hopeful that, with the support of the IMD and the Ministry of Earth Sciences, we can take further steps towards predicting, or at least identifying, the most vulnerable or potential cloudburst-type situations, so that we are able to warn the public in time.
Thank you, sir. That is an important point, as Dr. Ravichandran said, linking the need for data and infrastructure to the setup we already have. We have seen in the expo how many people are working on climate and sustainability. How can we bring all of that together and get the best out of it? That leads to a question for Dr. Shivkumar. ANRF is enabling the research ecosystem as well as the product ecosystem. We would like to hear how ANRF is helping to create AI funds, what advice you can give the community on developing products, and what sort of support we can expect from ANRF.
Okay. For folks who may not know: how many of you know about ANRF? Maybe a quick show of hands. All right, not too many. ANRF is a statutory body of the Government of India, and Dr. Ravichandran is on my board as well. It is a body meant to catalyze research and development funding in India. We have grant funding, and we also have a capital fund called RDI, a one-lakh-crore fund meant only for the private sector. The grant funding is typically for the not-for-profit research sector, which includes academia, labs,
Section 8 companies and others, that is, research entities recognized by DSIR and similar agencies. Our thinking is to offer not only broad-based funding, like what the National Science Foundation does, but also more focused funding in mission mode. So we have a couple of programs that might be of interest. One is AI for Science and Engineering, currently underway, and one of its tracks is AI for Weather and Climate. In addition, in about a month we will launch a major program called Leapfrog Demonstrators for Societal Innovation.
The idea is that you take a societal problem and, rather than just talking about it, do something about it, and not something incremental: it should be a leapfrog demonstrator, an actual demonstrator rather than a theoretical exercise. Alongside this, we are also running challenges, and we will introduce more challenge-mode calls for the kinds of things we do not see coming bottom-up through our proposal formats. As part of that, we are collaborating deeply: within the AI for Science and Engineering Weather and Climate track we are working with MoES and their Mission Mausam program, linking their sensor expertise and data with AI expertise. We hope to collaborate similarly with other parts of the government, and I would strongly urge collaboration from NDMA as well at this stage.
That is the general approach. I also want to mention that just two days back we announced a hackathon, the AI for Science and Engineering hackathon for Weather and Climate. It is currently open, run in partnership with IBM and IIT Delhi, and we have put out datasets, also in partnership with MoES and others, to encourage work in this area. In addition, as I said, there is the societal innovation program, which can admit newer types of proposals where you bring disciplines together and actually go solve real problems.
So that is the nature of what we will try to do. The RDI fund is meant for translation and scaling. In addition, we have translation centers, with a program that is open right now. These are the various programs and mechanisms we plan to run. The goal of all of this is to focus on impact and work backwards, rather than doing undirected research; we want to drive research in a more directed way towards impact. At the same time, we do support curiosity-driven, broad-based research as well. That is the balance we are trying to strike.
If we would like to have consistent solutions, not just demonstration products but operational services delivering output every day, how do you see the public-private partnership coming in? In all our mission-mode programs, the goal is to accelerate things from a lower technology readiness level, TRL 1 or 2, to the mid-range, TRL 5 or 6. That is their purpose. As part of all of those programs, we support work at a critical scale, so we encourage consortia to come and bid, or a hub-and-spoke type setup. We are explicitly saying: do not submit individual proposals; they have to be collaborative proposals.
In some of our programs we have put out open IP licensing, so that a company or a startup can partner with academia, pick up the IP and translate it quickly. That will also encourage rapid translation. So we are introducing IP and other innovations to drive translation, and we will extend this to a few more programs. In addition, we have the Translational Research Centers program, which mandates partnership with industry as well. We are using different mechanisms, all of them driving collaboration. Plus there is the RDI fund, a one-lakh-crore fund; by the time it hits the market, it will become three or four lakh crores.
It is only for industry, but if industry lacks the capabilities, they must collaborate with academia, so there will be demand for industry-academia collaboration from that side as well. We are attacking the problem from multiple directions, and all of these mechanisms are meant to encourage collaboration for impact. That collaboration leads us to industry. As we know, NVIDIA is very much a pioneer in this space, with many models coming out, and Dr. Karthik is part of that model development. Foundational AI weather and climate models such as Earth-2, GraphCast and AIFS are now demonstrating good performance at a global scale.
So what further development do you see? How can physics be incorporated and interpreted within the AI models? And validation is very important, as sir has said, at a very local scale: we are talking about air quality at 400 metres, or floods at 10 metres. What more needs to be done to make these models operationally robust at a hyper-local scale? Thank you.
Yes, that is a rich question, but I will keep it fairly brief because it could take the next 30 minutes. I will touch on three things. One is creating the benchmark datasets and benchmark metrics needed to achieve operational quality. If you look at what led to the developments at the global scale, at 25-kilometre resolution, it is the ERA5 dataset from ECMWF and the benchmark problems defined on that dataset, such as WeatherBench. If we want to get down to hyper-local scales, which of course depends on the region and the metrics you care about, it would be very helpful to create the benchmark datasets and associated benchmark metrics that can drive us there.
If we wind the clock back, the whole deep learning revolution in AI began because of ImageNet, about 12 years ago; it defined the benchmark datasets and metrics that drove the field forward. We can do the same thing at the hyper-local level. The second is to leverage the super-resolution techniques that AI has shown to be so powerful. We are already doing that in the Earth-2 program, taking 25-kilometre data and super-resolving it to one kilometre. Weather and climate science has also been doing this for decades through downscaling, the process of turning coarse-resolution simulations into high-resolution ones. If we can stretch that even further, down to these hyper-local scales, I am fairly confident the generative AI technologies needed either already exist or will be invented in the next two to three years.
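The benchmark-metric point can be made concrete. Below is a sketch of the latitude-weighted RMSE used by WeatherBench-style global benchmarks: grid cells near the poles cover less area, so rows are weighted by the cosine of latitude. The grids and values are toy data, not ERA5, and the function is an illustration of the metric's shape rather than any benchmark's reference implementation.

```python
import numpy as np

def latitude_weighted_rmse(pred: np.ndarray, truth: np.ndarray,
                           lats_deg: np.ndarray) -> float:
    """Latitude-weighted RMSE over a (lat, lon) grid: per-row cos(latitude)
    weights, normalized to mean 1, applied to the squared error field."""
    w = np.cos(np.deg2rad(lats_deg))
    w = w / w.mean()                      # normalize weights to mean 1
    sq = (pred - truth) ** 2              # per-cell squared error
    return float(np.sqrt((w[:, None] * sq).mean()))

# Tiny illustrative grid: 3 latitude rows x 4 longitude columns.
lats = np.array([60.0, 0.0, -60.0])
truth = np.zeros((3, 4))
pred = np.full((3, 4), 1.0)              # uniform 1-unit error everywhere
print(latitude_weighted_rmse(pred, truth, lats))  # 1.0 for a uniform error
```

Because the weights are normalized to mean 1, a uniform error field yields the same score as unweighted RMSE; the weighting only matters when errors concentrate at particular latitudes, which is exactly what a hyper-local benchmark would need to capture for a specific region.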
So I’m hopeful that that will help us get there. Thank you.
I think that is important, and that is where public-private partnership comes into the picture, because India-specific means region-specific too: we have very different climates across the country, north, south, east and west. So having small models per region may also be the future. Once we have the system in place, knowing what is to be done and having the modelling ready, we need computational power, and all these models still require significant investment. That brings us to Mr. Sandeep Singhal. Your investment portfolios cover energy transition and mobility, and when we speak of weather and climate it is not just weather and climate; it touches everything, from cloud to energy to health. Looking at your portfolios, what advice would you give startups to successfully scale up in these individual domains as well as across integrated domains?
In terms of scale-up, the first thing that is very clear, at least in the climate space, is that partnership with the government is critical, because the government is driving everything we have discussed on data and on deployment. So for any of our portfolio companies working in this space, we involve the government institutions they would work with, and we build those relationships with ministries at the fund level so that we can introduce companies to the various government programs. Beyond that, the other advice is that you have to start segmenting the market you are targeting.
There is the general population, and that goes through the government. That funding, as Dr. Shivkumar said, has to come through public-private partnership, because collaboration, an important word you used, applies both to deployment and to funding. It is great to see what the government has done with ANRF and RDI and the capital that is becoming available. There is also philanthropic capital now entering this space: philanthropists looking at programs at scale and saying that if a program can scale, they will put money behind it. That is one part. The other segment is where monetization is possible. There are enough segments where a core business is being impacted by weather or other events, and that core business is willing to pay. So you have to separate the two: you are building for a public good, but a distillation of that allows you to build something for private good and charge for it.
Because climate is now tied very closely to economics; climate and economics are effectively one and the same. And it is not just the short term; we have to worry about the next 10, 20, 30 years. That is a very important point, and it leads to how we are preparing ourselves, which brings us to Dr. Praful. A key challenge for India is balancing economic growth while protecting our natural ecosystems. Can you give an example of a real-world application where AI can enable this transition and create solutions that strike that balance?
I am going to pick up on something that Dr. Karthik said and Manish also mentioned: the intersection of weather and energy. India is transitioning from a fossil-fuel-based economy to a renewable-energy-powered economy, and renewable energy here is dominated by solar. The kinds of models now becoming available for hyper-local forecasting also give us much more predictive power over how much energy a given rooftop solar panel will generate, which is critical for managing the grid. India's grid needs to be digitized, and in fact we have a university team here demonstrating how digital public infrastructure from the Ministry of Power, the India Energy Stack, can be combined with AI models that use weather forecasts to predict grid loads and enable energy trading between consumers and producers.
Or to enable demand flexibility, which I again see as critically important as we talk about sustainable AI. When you move to a data-centre economy, with its huge energy consumption, you need to support dynamic demand flexibility using a combination of AI and public infrastructure. So the intersection of AI and energy deserves a great deal of attention, and I think we are well placed to address it.
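The link from a hyper-local weather forecast to a grid-relevant quantity can be sketched very simply: map forecast irradiance and temperature to the output of one rooftop array. All parameters below (panel area, module efficiency, temperature coefficient) are illustrative placeholders, not figures for any real system or for the India Energy Stack.

```python
# Minimal sketch: turn a hyper-local irradiance forecast into a rooftop
# PV output estimate for grid planning. All parameters are hypothetical.

def pv_output_kw(irradiance_w_m2: float, cell_temp_c: float,
                 area_m2: float = 20.0, efficiency: float = 0.20,
                 temp_coeff_per_c: float = -0.004,
                 ref_temp_c: float = 25.0) -> float:
    """DC output of a rooftop array: irradiance x area x efficiency,
    derated linearly for cell temperature above the 25 C reference."""
    derate = 1.0 + temp_coeff_per_c * (cell_temp_c - ref_temp_c)
    return irradiance_w_m2 * area_m2 * efficiency * derate / 1000.0  # kW

# Hourly (irradiance W/m^2, cell temperature deg C) forecast pairs.
forecast = [(200, 28), (650, 38), (900, 45), (400, 33)]
hourly_kw = [pv_output_kw(g, t) for g, t in forecast]
daily_kwh = sum(hourly_kw)  # 1-hour steps, so kW values sum to kWh
print(round(daily_kwh, 2))
```

Aggregating such estimates across many rooftops is the kind of signal a grid operator would feed into load forecasting and energy-trading decisions; a production system would of course calibrate these parameters per installation.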
Thank you. We have the data in place, the policies in place, the science in place, and now the money in place. What matters is how we deliver these solutions to stakeholders and end users, which leads to a question for Professor Niyogi, who has experience connecting science to governance and to the actual stakeholders. You have been leading digital twin and AI-driven modelling frameworks. You have done this in Austin, but in India we are all aware of the very different types of cities we have. What opportunities do you see in building digital twins that support the climate-extreme and disaster-management goals we have just deliberated upon? The challenges are there, the solutions are there; how do we link them?
Right. I have two minutes, it looks like, before we end the session, and this is a course I teach over two semesters. What I will say is that weather is a tragedy of the commons: everyone is affected by it, but no one wants to pay for it. In the same way, when institutional investments come up, the question is how to turn this into a monetizable product. This morning, Director General Mohapatra mentioned that we can create box models that are simple, scalable and transferable, and we can create digital twins that are very decision-specific. We do not need to predict every variable at every scale for everything.
If we define why we are creating models and what decision we are going to guide, then on that data-to-decision framework we can build a very intelligent, scalable modelling system. That, I think, is where the joy of bringing together AI, physics, and human decisions and dimensions comes into the picture. People do not need weather; they need weather that helps them make a decision. We need to move from simply producing weather output to adding something that helps someone make an intelligent decision, whatever that may be: long-term hedging against a risk, or the short-term choice of whether to walk in the sun or in the shade.
If we achieve that, I think we will transform the way we predict: not for a variable of interest, but for a decision we want to make. That is where digital twins come into the picture. I will stop there.
So digital twins can be one of our first steps. We can look at the complete AI spectrum, from monitoring to processing to modelling to reaching end users: a complete portfolio of AI applications. This brings us to the end of the session, and we would like to open the floor for just half a minute. I am very sorry for this format; disclaimer, it is not my doing.
One word I did not hear much of was insurance. Climate risk typically shows up in insurance rates becoming very high, or a house simply going uninsured, which is happening in Northern California and Florida. I am not sure how prevalent this is in India, but ultimately people have to stay where they are; it is difficult to move. So how do you marry the two?
Yes, I referred to that somewhat in the notion of translating the work being done on the DPI side and bringing that technology into more monetizable products. Insurance actually ends up being one of the first monetizable products that comes out of this.
We can take just one more question, and we can always continue the discussion outside; this is a good opportunity with the experts here. I have a few questions myself, but I will ask them outside. I also want to quickly mention that at this AI Summit we have announced partnerships with NVIDIA, Google and Qualcomm, and we are doing other things with the Gates Foundation. Many things are happening, so I invite my colleagues here to work with us further and to focus on India as well, in addition to the world. Thank you, sir. We would like to thank everyone; it was great listening to all of you, and we look forward to working with you. And do not get me wrong: I was thinking that there are eight people here and I am the only one, that it should have been an equal number, and that bothered me. Thank you.
“The moderator introduced a cross‑sectoral audience that included the NRF CEO and the NDMA secretary.”
The knowledge base lists Dr. Shiv Kumar as the NRF CEO and Manish Bharadwaj as a key figure from NDMA, confirming their presence on the panel [S2].
“Dr. Amit Sheth is the founder of the Indian AI Research Organization (IRO) and promotes compact, custom models rather than large foundational models.”
Source S5 explicitly describes Dr. Amit Sheth as the founder of IRO and his advocacy for small, explainable neurosymbolic models, matching the report’s description.
“IRO’s strategy is to develop very agile, small, specific models for hyper‑local extreme‑weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque.”
Both S5 and S49 emphasize IRO’s focus on “small AI” – practical, affordable, locally-relevant models that avoid the opacity of large foundation models [S5] and [S49].
“Manish Bhardwaj called for a trusted, low‑cost early‑warning system that blends AI with terrestrial sensors, satellite feeds and existing alert‑generation agencies.”
The knowledge base notes Manish Bhardwaj’s emphasis on reliable, trusted, and accessible early-warning systems for disaster preparedness, aligning with the report’s statement [S1].
“Dr. Amit Sheth’s approach emphasizes explainability, safety and alignment in AI models for specific problems.”
S5 adds that IRO’s models are designed with explainability, safety, and alignment as core qualities, providing additional nuance to the report’s description of the institute’s strategy.
The panel shows strong convergence on four pillars: (1) hybrid AI‑physics/sensor systems for hyper‑local forecasting, (2) the necessity of open, benchmarked data and collaborative consortia, (3) public‑private partnership and dedicated funding as the engine for translation, and (4) a shared preference for small, agile, or fine‑tuned models that are easy to deploy. These agreements span technical, institutional, and economic dimensions, indicating a cohesive national roadmap for AI‑enabled climate resilience.
High consensus – the majority of speakers align on the same strategic directions, suggesting that India’s AI‑climate agenda is likely to move forward with coordinated policy support, funding structures, and a focus on lightweight, data‑efficient models.
The panel largely converged on the need for hybrid AI‑physics solutions, public‑private collaboration, and open data to improve early‑warning and hyper‑local forecasting. The most pronounced disagreements centered on the technical route for model development (small bespoke models vs fine‑tuned large foundations) and the architectural scope of digital twins. These methodological splits reflect differing risk appetites and resource strategies rather than fundamental opposition to AI’s role in climate resilience.
Moderate – while core objectives (enhanced forecasting, disaster preparedness, and scalable deployment) are shared, the divergent views on model architecture and digital‑twin scope could slow consensus on research funding priorities and implementation road‑maps, requiring explicit coordination to align technical pathways.
The discussion was shaped by a series of pivotal insights that moved the conversation from a high‑level vision of AI for climate to concrete, actionable pathways. Amit Sheth’s emphasis on small, domain‑specific models set the strategic tone, which was deepened by Ravichandran’s hybrid‑model metaphor and Shivkumar’s multimodal sensor vision. Consumer‑centric ideas from Singhal and societal integration from Niyogi broadened the scope to end‑user impact. Technical breakthroughs around data efficiency (Chandra) and transfer learning (Kashinath) offered feasible research directions, while the ANRF funding framework provided the necessary policy and financial scaffolding. Finally, the framing of weather as a tragedy of the commons and the push for decision‑specific digital twins unified the technical, social, and economic threads, steering the panel toward a roadmap that links AI research, public‑private partnerships, and marketable solutions.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.