Survival Tech Harnessing AI to Manage Global Climate Extremes

20 Feb 2026 18:00h - 19:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel convened to explore how artificial intelligence can be applied to India’s climate challenges, especially extreme weather and sustainability [1]. Amit Sheth explained that the Indian Research Organisation (IRO) was created after a December 2023 meeting with the Prime Minister to develop original, small and agile AI models tailored to Indian needs rather than relying on large foundational models, focusing on weather, health and pharma verticals [20-24][26-34][35-38]. He emphasized building hyper-local models for extreme weather that integrate spatial-temporal dynamics without the “baggage” of pre-trained large language models [27-30][31-33].


M. Ravichandran highlighted that traditional physics-based forecasts capture spatial patterns but miss fine-grained temporal rhythms, requiring a fusion of numerical models with AI to predict high-impact events such as cloudbursts [47-61][62-66]. Shivkumar Kalayanaraman added that low-cost cameras, multimodal sensors and low-Earth-orbit satellites can provide real-time visual data that, when combined with generative AI, enable short-term cloud forecasting and insight-level fusion across modalities [76-84][85-89]. Praphul Chandra pointed out that achieving such specificity depends on "small-data fine-tuning" of large foundation models, questioning how minimal a dataset can be while still delivering accurate domain performance [104-108]. Karthik Kashinath argued that transfer learning and benchmark datasets (similar to ImageNet's role in computer vision) are essential to adapt global models to India's hyper-local, data-sparse regions [110-114][115-119].


Shivkumar described the ANRF's dual funding streams, a grant programme for non-profit research and a one-lakh-crore RDI capital fund for private-sector translation, along with targeted initiatives such as the AI-for-Weather & Climate track and the Leapfrog Demonstrators for Societal Innovation [193-210][216-224][225-232]. He noted recent hackathons in partnership with IBM and IIT Delhi that provide curated datasets to accelerate prototype development, while urging collaboration with agencies such as NDMA and MoES [218-224]. Akshara and Sandeep emphasized that public-private partnerships, open IP licensing, and industry-academia consortia are being promoted to move solutions from TRL 1-2 to operational readiness and to attract both government and philanthropic capital [233-242][280-287].


Manish Bhardwaj illustrated how AI-enhanced early-warning systems that fuse terrestrial, satellite and sensor data can improve evacuation planning for multi-hazard events such as cloudbursts, landslides and flash floods, thereby reducing mortality [161-169][174-186]. Praphul Chandra gave a concrete example of AI-driven hyper-local solar generation forecasts feeding into India's digitized grid (India Energy Stack) to enable demand flexibility and better load balancing [291-298]. Dev Niyogi argued that weather services must become decision-oriented "digital twins" that translate forecasts into actionable, monetizable products, such as insurance pricing, rather than generic climate data [313-322][327-330][337-342]. The participants agreed that building trustworthy, validated models and establishing robust data sharing, funding, and partnership mechanisms are critical steps toward operational AI solutions for climate resilience in India [144-147][155-158][233-242].


Overall, the discussion concluded that coordinated AI research, targeted funding, and cross-sector collaboration can transform climate prediction into actionable services that protect vulnerable populations and support sustainable economic growth [156-160][331-336].


Keypoints


Major discussion points


Building purpose-built, hyper-local AI models instead of relying on large foundational models – IRO is focusing on “very agile, small, specific models” for extreme-weather use-cases and deliberately avoiding the “baggage” of big language models [26-30]. Panelists stressed the need to fuse physics-based numerical forecasts with AI time-series methods to capture both “elephant”-scale and “ant”-scale phenomena and to improve prediction of events such as cloudbursts [47-66].


Data availability, open access and interdisciplinary collaboration as the backbone of trustworthy AI forecasts – India's massive historical weather archives provide a "huge" data resource that must be opened up for broader use, and young talent should be mobilised to "interpret the data differently" and reduce error and uncertainty [126-138]. Experts also highlighted the creation of benchmark datasets and metrics (e.g., ECMWF's ERA5-based WeatherBench set) as essential for achieving operational quality at hyper-local scales [263-270].


Funding mechanisms and public-private partnerships to move from research to operational products – ANRF's grant programmes, the one-lakh-crore RDI fund, hackathons, and "Leapfrog Demonstrators for Societal Innovation" are being deployed to catalyse AI-for-weather projects and to ensure industry-academia collaboration [193-232]. The venture-capital perspective reinforced that startups must partner with government, segment markets, and identify monetisable pathways (e.g., insurance, enterprise services) while leveraging public-private capital [280-287].


Concrete AI-driven applications for climate resilience – Early-warning dissemination through trusted DPG systems, multimodal sensor networks (including low-cost cameras and LEO satellites), voice-assistant tools for household-level action, digital twins for decision-specific forecasting, and AI-enhanced renewable-energy grid management were all cited as high-impact use cases [70-75][76-84][91-100][291-298][337-342].


Technical challenges that must be solved to realise these applications – Small-data fine-tuning of foundation models, transfer learning across data-rich and data-sparse regions, and establishing validation/verification pipelines were identified as research frontiers that will determine trust and adoption [104-108][109-114][263-270].


Overall purpose / goal of the discussion


The panel was convened to map India's strategic roadmap for leveraging artificial intelligence to tackle climate-related challenges, particularly extreme weather and sustainability, by (i) defining the scientific and technical directions (hyper-local modelling, data fusion, validation), (ii) identifying institutional and funding levers (ANRF, RDI, public-private consortia), and (iii) pinpointing immediate, high-impact applications that can be piloted and scaled across the country.


Tone of the discussion


The conversation began with an optimistic, visionary tone, emphasizing the promise of AI-driven breakthroughs for climate science. As the dialogue progressed, it became increasingly pragmatic, focusing on concrete hurdles (data openness, benchmark creation, trust) and concrete mechanisms (funding programmes, partnership models). The closing remarks retained the collaborative spirit but shifted toward a call-to-action, urging stakeholders to translate ideas into operational solutions. Overall, the tone remained constructive and forward-looking throughout.


Speakers

Akshara Kaginalkar – Panel moderator/host of the AI Summit discussion.


Amit Sheth – Founder/CEO of IRO (Institute for Research in AI for climate and sustainability); leads development of small, agile AI models for extreme weather and health applications.


M. Ravichandran – Secretary, Ministry of Earth Sciences, Government of India; oversees weather, climate and sustainability initiatives.


Manish Bhardwaj – Secretary, National Disaster Management Authority (NDMA), India; responsible for disaster preparedness and early-warning systems.


Shivkumar Kalayanaraman – AI researcher and speaker on multimodal models for weather forecasting and climate-impact applications.


Sandeep Singhal – Venture capitalist; manages investment portfolios in energy transition, mobility and climate-tech startups.


Dev Niyogi – Professor, University of Texas at Austin; affiliated with IIT Roorkee; member of the founding team of IRO.


Praphul Chandra – Professor; Head of the Center for Excellence for Data Sciences and Dean R&D, Atria University, Bangalore.


Karthik Kashinath – Director, Center for Excellence for Data Sciences; Distinguished Scientist at NVIDIA.


Audience – Audience participant who raised a question on insurance and climate risk.


Additional speakers:


Dr. Shiv Kumar – CEO, NRF (National Research Foundation); champion of AI for science and supporter of the panel discussion.


Full session report: comprehensive analysis and detailed insights

The panel convened to chart a national roadmap for applying artificial intelligence to India's climate-related challenges, with a particular focus on extreme weather, disaster resilience and sustainability [1-13]. The moderator, Akshara Kaginalkar, introduced a cross-sectoral panel that included the Secretary of the Ministry of Earth Sciences, a venture capitalist, university professors, the NRF CEO and the NDMA secretary, underscoring the breadth of expertise required for the task [2-14][15-17]. Akshara also referenced the "dew effect" as an illustration of how micro-scale phenomena can influence larger weather patterns, highlighting the need for models that operate across scales [260-262].


Dr. Amit Sheth explained that the Indian Research Organisation (IRO) was created after a direct meeting with the Prime Minister in December 2023, where the leader asked for home-grown AI solutions that would not simply imitate Western or Chinese models [20-24]. IRO’s strategy is to develop “very agile, small, specific models” for hyper-local extreme-weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque [26-31]. The institute also plans to extend this approach to health and pharma verticals, leveraging partnerships with the Indian Pharma Alliance and health organisations [32-38].


Dr. M. Ravichandran, Secretary, Ministry of Earth Sciences, highlighted the limits of conventional physics-based forecasts, which capture broad spatial patterns but miss fine-grained temporal rhythms. He used the metaphor of “the elephant plus the ant” to argue that both spatial (physics-driven) and temporal (AI-driven) components must be fused to predict high-impact events such as cloudbursts [47-66]. He also emphasized that robust validation and verification frameworks are essential to build confidence in AI-augmented forecasts [145-147]. This hybrid vision was echoed by Manish Bhardwaj, who called for a trusted, low-cost early-warning system that blends AI with terrestrial sensors, satellite feeds and existing alert-generation agencies, thereby improving granularity even where sensor coverage is sparse [70-75][175-180].


Prof. Shivkumar Kalayanaraman described multimodal AI pipelines that combine time-series models with visual data from inexpensive cameras, infrared or multispectral sensors, and low-Earth-orbit satellites. He argued that the focus should shift from raw data-fusion, which is “painfully complex”, to “insight-level fusion” that can deliver now-casting forecasts of clouds a few hours ahead [76-84][85-89]. Such multimodal approaches could be integrated into existing now-casting and forecasting systems to amplify impact.
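The "insight-level fusion" idea can be illustrated with a small, hypothetical sketch: rather than merging raw camera, satellite and station data, each modality first produces its own event probability (e.g., "rain in the next hour"), and only those probabilities are combined. The log-odds averaging rule and the example numbers below are illustrative assumptions, not anything stated in the session:

```python
import math

def fuse_insights(probs, weights=None):
    """Fuse per-modality event probabilities by averaging their log-odds.
    This operates on each modality's *insight* (a probability), never on
    the raw sensor data itself."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    # Weighted sum of log-odds, mapped back to a probability via the sigmoid.
    logit = sum(w * math.log(p / (1.0 - p)) for p, w in zip(probs, weights))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical nowcast inputs: sky camera 0.8, satellite 0.6, station series 0.7.
fused = fuse_insights([0.8, 0.6, 0.7])
print(round(fused, 3))
```

A design note on this choice: averaging in log-odds space keeps the fused value inside the range spanned by the inputs and lets unreliable modalities be down-weighted without touching the others' pipelines.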


Data was identified as both a strength and a bottleneck. Ravichandran noted that India holds roughly 150 years of legacy IMD weather records, but these archives are not yet fully exploitable because they remain siloed [126-128]. He called for open-access policies that would allow the nation's "young brains" to interpret the data in diverse ways, reduce model error, improve initial conditions and enhance downscaling to kilometre-scale forecasts [129-144][145-147]. The need for benchmark datasets was reinforced by Prof. Karthik Kashinath of NVIDIA, who likened the situation to the ImageNet breakthrough: creating standard datasets and metrics (e.g., ECMWF's ERA5-based WeatherBench) would drive operational quality at hyper-local resolutions [263-270]. He also pointed to super-resolution techniques already used in the Earth-2 programme, suggesting that generative-AI methods could further shrink the resolution gap within the next two to three years [271-274].
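As a toy illustration of the statistical downscaling the panel alludes to (not any specific MoES or NVIDIA method), a hyper-local correction can be learned from history by regressing station observations on the coarse-grid forecast. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: coarse-grid forecast temperature vs. a station that runs
# systematically cooler and more variable than the grid-cell average.
coarse = rng.normal(30.0, 3.0, size=200)
local = 1.2 * coarse - 9.0 + rng.normal(0.0, 0.5, size=200)

# Fit the simple downscaling map local ≈ a * coarse + b by least squares.
A = np.column_stack([coarse, np.ones_like(coarse)])
(a, b), *_ = np.linalg.lstsq(A, local, rcond=None)

# Apply it to tomorrow's coarse forecast of 32 °C for this grid cell.
print("downscaled:", round(a * 32.0 + b, 1))
```

Operational downscaling (dynamical, or deep-learning super-resolution) is far richer than this two-parameter map, but the structure is the same: learn a coarse-to-local transfer from archived pairs, then apply it to new forecasts.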


The question of how to cope with data scarcity generated divergent views. Praphul Chandra asked how small a dataset could be while still fine-tuning a large foundation model for a specific climate task, framing this as a potential breakthrough [104-108]. By contrast, Sheth argued for building original, lightweight models from scratch, avoiding the "baggage" of pre-trained large models altogether [26-31]. A related disagreement concerned transfer learning: while Chandra advocated re-using knowledge from data-rich regions for India's data-sparse locales [110-114], Sheth's approach favours locally engineered models that do not depend on external pre-training [26-31].
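Chandra's "small-data fine-tuning" question can be made concrete with a minimal sketch: keep a pretrained backbone frozen and fit only a tiny task head on a handful of local samples. The backbone here is a stand-in with made-up fixed weights, not a real weather foundation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(x):
    """Stand-in for a pretrained foundation model's feature extractor.
    Its weights stay fixed; only the small head below is trained."""
    W = np.array([[0.9, -0.3], [0.2, 1.1], [-0.5, 0.4]])  # pretend pretrained
    return np.tanh(x @ W.T)

# A deliberately tiny "small data" set: 12 hyper-local samples.
X = rng.normal(size=(12, 2))
y = 2.0 * np.tanh(X @ np.array([0.9, -0.3])) + 0.1 * rng.normal(size=12)

# Fine-tune only a linear head on the frozen features (closed-form least squares).
F = frozen_backbone(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

pred = frozen_backbone(X) @ head
print("train MSE:", round(float(np.mean((pred - y) ** 2)), 4))
```

The point of the sketch is the parameter count: with the backbone frozen, only three head weights are estimated, which is why a dozen samples can suffice when the pretrained features already capture the relevant structure.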


Funding and translation pathways were outlined by Prof. Shivkumar Kalayanaraman on behalf of the National Research Foundation (NRF). The NRF runs an AI-for-Science & Engineering programme with a dedicated AI-for-Weather & Climate track that collaborates with MoES on the Mission Mausam programme [196-199], administers a one-lakh-crore RDI capital fund for private-sector scaling, and is preparing a "Leapfrog Demonstrators for Societal Innovation" scheme that rewards high-impact, non-incremental solutions [193-210][216-224]. Recent hackathons, co-organised with IBM and IIT Delhi, already provide curated datasets to accelerate prototype development [218-224]. NRF has also launched Translation Research Centres that require joint industry-academia participation to move prototypes toward commercial deployment [190-193][225-227].


Public-private partnership (PPP) models were repeatedly stressed as essential for moving from research (TRL 1-2) to market-ready services (TRL 5-6). Akshara highlighted the government's push for consortium-based proposals, open IP licensing and hub-and-spoke collaborations, which would allow startups to pick up academic IP and translate it quickly [233-242][245-251]. Sandeep Singhal added that successful scaling requires clear market segmentation (distinguishing public-good services from monetisable private-good offerings such as insurance or enterprise risk tools) and that both government capital and emerging philanthropic funds are ready to back such ventures [280-287][340-345].


Concrete application domains emerged across the discussion. Bhardwaj described AI-enhanced early-warning pipelines that could predict cascading hazards (cloudburst → landslide → flash flood) and enable timely evacuations, thereby reducing mortality [161-169][174-186]. Praphul Chandra illustrated how hyper-local solar generation forecasts, fed into the India Energy Stack, can support demand flexibility and peer-to-peer energy trading, turning weather predictions into direct grid-management value [291-298]. Dev Niyogi introduced the concept of decision-specific "box models" and digital twins that translate raw forecasts into actionable recommendations, ranging from long-term hedging to immediate shade-seeking decisions, thereby turning weather into a monetisable product and addressing the "tragedy of the commons" [313-322][327-330][318-322].
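The grid-management example can be sketched as a simple decision rule that reduces a hyper-local solar forecast and a load forecast to an hourly demand-flexibility signal. The function, signal names and threshold below are invented for illustration and are not part of the India Energy Stack:

```python
def demand_flex_signal(solar_mw, load_mw, headroom=0.1):
    """Turn forecast solar supply and load into a coarse grid signal:
    shift flexible consumption into surplus hours, shed or import in deficits."""
    balance = solar_mw - load_mw
    if balance > headroom * load_mw:
        return "shift-load-here"   # surplus: encourage flexible demand now
    if balance < -headroom * load_mw:
        return "shed-or-import"    # deficit: curtail or draw on the wider grid
    return "hold"                  # roughly balanced: no action needed

# Three forecast hours as (solar, load) pairs in MW, purely illustrative.
hourly = [(120, 90), (100, 100), (60, 110)]
print([demand_flex_signal(s, l) for s, l in hourly])
```

This is the "decision-oriented" framing in miniature: the forecast itself never reaches the consumer, only the action it implies.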


From the discussion, four recurring themes emerge: (1) hybrid AI-physics or AI-sensor systems for hyper-local forecasting; (2) open, benchmarked data and collaborative consortia to build trustworthy models; (3) robust PPP frameworks with open IP to accelerate translation; and (4) a preference for lightweight, domain-specific models or fine-tuned foundations that can be deployed rapidly [47-66][70-75][126-144][233-242][104-108][263-270]. Remaining points of contention (whether to prioritise bespoke small models versus fine-tuning large foundations, and how expansive digital-twin architectures should be) highlight the need for coordinated research agendas that accommodate both approaches [26-31][104-108][313-322][331-333].


In closing, participants identified a set of immediate research and policy actions: develop benchmark hyper-local datasets; explore small-data fine-tuning and transfer-learning pipelines; establish validation and verification protocols for AI-augmented forecasts; open legacy weather archives to the broader community; design multimodal insight-fusion frameworks; launch voice-based personal resilience assistants; pilot AI-driven climate-risk insurance products; and embed AI forecasts within the India Energy Stack for renewable-grid optimisation. These steps, underpinned by the NRF’s funding mechanisms and the IRO’s model-building agenda, aim to transform India’s climate prediction ecosystem from a purely physics-driven service into an actionable, decision-oriented platform that safeguards vulnerable populations while supporting sustainable economic growth [263-274][291-298][313-322][193-224][233-242].


Session transcript: complete transcript of the session
Akshara Kaginalkar

top-down approaches in terms of finding the AI solutions, India's critical problems and weather and climate is a major vertical. So welcome, sir. We have Dr. Ravichandran, he doesn't need any introduction, but he's the Ministry of Earth Sciences Secretary and everything and anything under weather and climate and sustainability, sir, is heading it and we look forward to your contribution. We have Mr. Singhal, who is a venture capitalist and he will give a very, very important aspect about how funding and economy is going to drive the solutions in AI for climate. Professor Dev Niyogi, he is professor from UT Austin, that is University of Texas at Austin. Also, he's affiliated to IIT Roorkee and now one of the founding team of IRO.

Again, sir doesn't need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for science. And we look forward to your support as well as your inputs on how can we proceed on this. And we have Mr. Manish Bharadwaj, who has a very critical role in India as the secretary of disaster NDMA. And we have Professor Praphul Chandra. He's heading the Center for Excellence for Data Sciences, as well as he's Dean R&D, Atria University, Bangalore. And we have Dr. Karthik, who is the director of the Center for Excellence for Data Sciences. He's a distinguished scientist and engineer, NVIDIA. And he has played a major role in the very famous AI models, which all of us are hearing.

And they are, you know, changing the scenario of modeling and the way science is going to happen. So welcome. So I look forward to your contribution. Oh, okay. Can we stand just here? Okay. So before we open up the panel, I just wanted to have a very quick question to Professor Sheth in terms of what was the objective, what we are looking when you started IRO as a, you know, in India, we wanted to have this type of a research organization. So if you can quickly tell us about what was the thought process behind IRO and what do you foresee?

Amit Sheth

So the idea of IRO kind of was initiated when I had a chance to meet the PM in December of 2023. I was asked to come and discuss with him. He is always very curious in technology and so he wanted to hear about the ideas on AI. Since I had multiple interactions on research and AI with him during his CM time, this was a fantastic opportunity for me to meet and kind of discuss where India can shine and not necessarily follow the West or China in what we need to do. And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas of where we can make big economic and social impact, as well as we can support the startup ecosystem where AI can empower deep AI technology that drives the global products from India.

So that was a broad idea. And so IRO currently is developing original work on building very agile, small, specific models. In this context, for example, if you want to make a model for serving extreme weather related issue that is hyper local, then all the spatial temporal aspects, all the relative modeling aspects, all the prediction algorithms, those are the things that we will bring in. But we will not be building on the top of large language models or so-called foundational model, which come with a lot of baggage. We don't know what kind of data it has been trained on, many other things. So original research in creating new, small, agile models. And so it will be a platform on the top of India AI structure to be able to create models.

And one area in which we would love to create models, we have technology expertise here, Dev and many other people. And we can, you know, so earth science, including disaster, including, you know, sustainability issues is one of the vertical. Other two are health and pharma. Pharma, we have very strong partnership with Indian Pharma Alliance and the 23 major pharma, which is 80% of India's pharma, you know, kind of output. And similarly, we are working with some health partners and all. But here you see the potential partners that we could have in making impact into the sustainability and health area. So thank you.

Akshara Kaginalkar

We would like to now start with one open question and then we'll have an individual question because I'm very sorry, the time is very short. The whole format is actually we had a one day full workshop and we had to squeeze it in. Yeah, so one disclaimer that it's not my personal thing, but I may request you to finish it in time. Definitely would like to hear a lot from all of you, but due to constraint of time. So first one, opening questions, what we'd like to have is all of us would like you to say is what would be one AI application or a discovery that would excite you about AI helping in this domain of climate as well as extreme events and sustainability as a broader thing, because everything is driven by weather and climate.

We have energy, we have health, we have economics and we have agriculture, many, many aspects of it. So we’d like to see what do you foresee and how do you would like to say that which one development will help us. And we’ll start with you, sir.

M. Ravichandran

When you talk about the weather, of course, it is now depends on various applications. So when we are doing the weather forecast, earlier we used to tell that, suppose how the elephant is going, I'm able to see that elephant, how it is going. I'm able to tell that tomorrow it will come here. But now the problem is, because of the climate change and other things, the space and time has changed. Now, we have to see on the elephant some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus ant. So I want to see two things. One is time series. Other one is a spatial.

If anything on spatial, I think the physics based numerical model is doing better job. But if you want to go for time series, local rhythm, then AI is better. So we need to integrate or we need to fuse both together in order to understand the local weather in a fine scale. And you want to go suppose cloudburst is there. So you cannot do only with numerical model and with AI also. So, we need to blend both. That is more important. So, we want to go for high impact weather events, how to predict, especially cloud burst and other thing. We do not know how to predict. So, that is why we are looking at whether AI can help or not.

That is one of the objectives. Thank you.

Manish Bhardwaj

I fully agree with what Ravichandran sir has just said. From the early warning point of view, the idea is to have DPG sort of asset for the public so that we are able to disseminate early warning for all. So, idea is to have trusted early warning for all to be given to the citizens. at low cost and this is where AI can definitely play a supporting role. It cannot be purely an AI. It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert generating agencies but to have a source of a trusted and reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role.

Thank you. Yeah, I

Shivkumar Kalayanaraman

think I'll just double down on the multimodal models that are coming out. I mean one is the time series model. There are spatial models and I will also mention that today with generative AI you can just put a camera pointed to the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead. Two hours ahead, even four hours ahead. Make it an IR camera or make it some other multispectral camera when all the costs are dramatically dropping. So you can imagine a network of sensors that complements also the great work that's being done in Mission Mausam and so on. And plus now with the low Earth orbit satellites going up and also having much more Earth observability, I think the opportunity to fuse insights as opposed to fusing data.

I mean, data fusion is a painfully, you know, mind-bogglingly complex, unnecessary and complex as a thing. But now there's an opportunity to take insights from A, insights from B and fuse it across modes and also forecasting across these modes. I think that's a wonderful opportunity. I think that'll have a huge thing. And once you integrate that into, you know, sort of nowcasting and other systems, I think we can have a great amount of impact. The other dimension is, of course, AI helping in discoveries of new materials and, you know, sort of simulations and so on. I think these have wonderful opportunities. And of course, as you know, the Nobel Prize for Chemistry went to somebody from an AI background.

Sure.

Sandeep Singhal

So I will put a consumer lens to this. Sirs have brought up the point around what is the technology needed. I think with what is happening with the voice agents right now, I think there is a need to have a simple voice framework or a voice sort of app which allows you to send not just information, but actually create a resilience approach for the person who is who can literally just click a button and say, OK, in the next week, these are the things that you need to do to survive the whatever is happening from a climate perspective. Right. Or what do you need to do in the next month? So there is a there is a forecasting aspect to it.

But more importantly, how it integrates with my life. Do I need to stay at home? If I’m a farmer, what do I do? If I’m a, you know, liberal, what do I do? So that ability to bring that to my day to day life and allow me to actually act a certain way because of what I expect, what I expect to see in the environment around me. And that includes daily air. I’ll

Dev Niyogi

just add one term, you guys know this word, Jugaad. So this is a very India thing, Jugaad. So there is a framework that is mathematically feasible, that we can model very well, that follows equations, that follows laws of nature. And then there is a human element, that we always beat the system and make that happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play: it brings the human dimensions and it brings the societal aspect with the physical constraints, and this is what is most exciting about it, in a way that it will be becoming much more accessible is where I think we'll be going. We had heard also about the agentic AI; now I heard about the ant AI, thanks to you.

Praphul Chandra

I'm going to pick up where Professor Niyogi and Dr. Shiv Kumar left off. You know, we work across several AI foundation models in biology, in materials, and we have looked at foundation models in weather. I think the breakthrough that I am most anxious to look for is what we call small data fine tuning. What that means is that when you look at these large foundation models they are fairly general in their applicability and as Professor Sheth was saying when you have to fine tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine tune large foundation models? I think if you are able to have that breakthrough it has applications across multiple domains that we talked about.

Karthik Kashinath

I think a lot has already been shared which is very exciting on many different fronts. One thing I would like to see more used in practice is transfer learning which of course some regions of the world are data rich and some others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are in the planet. But at the same time, there’s uniqueness at hyperlocal scales. But if we can transfer learn efficiently from one region to another with constraints of what exactly we’re trying to transfer learn, I think that would be very impactful.

Akshara Kaginalkar

Thank you, Dr. Karthik. I think we have a mic here. We saw right from the spatial, as sir said, it's like from Akashse and Tak, we can see everything. And I think that matters. I remember once I think I was discussing with sir, he said even the dew effect you have on the immediate temperature and that can affect your surrounding and everything. So from small to big is definitely there. And AI also from small to big we should see. And that leads to now I will ask the next round of question is very, very specific to areas in which all of you are working as well as having a lot of influence and that's where we would like to hear from you and to have a direction in what way we can go.

At the end of this panel, that's what, you know, can we all consolidate and can we look at, you know, what are the three to four immediate things which we can do it. And with that respect, I would like to ask Dr. Ravichandran, how can India's national capabilities in AI research, technology development, and very importantly, human resource also, evolve to enable the transition from current physics driven prediction system to AI enabled user specific decision systems. What are the bottlenecks in that and how can we overcome?

M. Ravichandran

So as pointed out, basically we have a capability, basically one of the strengths what we have is basically the data. The data volumes are huge nowadays because we have about 150 years of IMD's legacy data available. Now how to utilize this data? And we have young brains of so many young people but we have not fully utilized that one because each one can interpret the data different way. But finally it has to come out into concrete solution. When you talk about AI and weather, if you are talking about, why we want to go for AI first of all because the numerical model, we have a lot of assumptions. Because of that assumptions, the error grows.

Now that error grows whether with the AI we can reduce that is number one. When you are going for initial condition is better, you can predict better. So we have to have a initial condition in better way by reducing the error. So I think many people, even some of the people, many people are working in AI, different people. I think we need to pull the many resource people in our domain so that they can look at data differently and also they can use how to minimize the error and also how to reduce the uncertainties. And also there are various techniques to improve the forecast. So that’s what I, because nowadays the downscaling is one of the important things.

In the large scale model, it defiles. So the AI can downscale better way in the localized, suppose one kilometer resolution weather forecast, we want to forecast how we can do. So we need to have more and more minds and more and more people have to work on it. And I think we need to open up the data so that we have to, that means different people can come back and work on that. I have only one important thing is basically this, when you are talking about AI/ML, the trust is more important, as you pointed out. I think we need to have a better trust in the forecast system. I think where there's a need for validation and verification, that also very important in AI/ML can make it.

So our capabilities are huge, but we need to utilize them together with the strength of our data. Even biology people are working in AI/ML now; we can do the same. One more important point: our people are always attached to the same mindset. We think this is the only way, but there are multiple ways. That is why people from other disciplines should also look at this, because it is data driven; another discipline can look at it differently, and we can find some pathway or way forward. That may be one of the things we can look at.
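The error reduction and downscaling ideas above can be illustrated with one of the simplest statistical techniques used in climate post-processing: quantile-mapping bias correction. This is a generic sketch with synthetic numbers, not IMD data or any method named by the panel.

```python
# Hedged sketch: quantile-mapping bias correction, one simple way a
# statistical/ML layer can reduce systematic error in a coarse model.
# All data below are synthetic illustrations, not IMD records.
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Map new model values through the historical model->observation quantile relation."""
    # Rank each new value within the historical model distribution...
    quantiles = np.searchsorted(np.sort(model_hist), model_new) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # ...and read off the observed value at the same quantile.
    return np.quantile(np.sort(obs_hist), quantiles)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, 5000)                 # "observed" rainfall (mm/day)
model = obs * 0.7 + rng.normal(0, 1, 5000)      # model output with a dry bias
corrected = quantile_map(model, obs, model)
# The corrected mean sits much closer to the observed mean than the raw model does.
print(abs(corrected.mean() - obs.mean()) < abs(model.mean() - obs.mean()))  # True
```

Operational systems use far richer corrections, but the shape of the idea is the same: learn the systematic mismatch between model and observations from history, then apply it to new forecasts.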

Akshara Kaginalkar

That's a very important point, because we look at weather only from the physics angle, maybe, or the weather angle; looking beyond that is very important. And that leads to what is important for the disaster management service, which we would like to ask about, because it is highly dependent on extreme events, and managing those is very difficult. So how do you foresee the adoption of AI for infrastructural preparedness in disaster management, and especially for reducing the severity of impacts on the vulnerable population? Cities, and those who have access to many good things, can handle it, but we have a large vulnerable population. So how do you see AI helping in the last-mile application?

Manish Bhardwaj

Very apt questions. As you are all aware, India is vulnerable to multiple hazards: not only cyclones but tsunamis, earthquakes, landslides, flash floods, even glacial lake outburst floods (GLOFs). Looking at the vast geography and the population that can be impacted, it is very essential, from the disaster management point of view, that we have a system of adequate preparedness and early warning capabilities. And secondly, though the country has, with a whole-of-government approach, undertaken various mitigation measures, we can only mitigate the effects of disasters. So we have to keep the population prepared in such a way that the early warning capabilities are of the highest order,

so that we are able to minimize loss of lives. Now, this is a very important challenge. Various agencies, particularly the IMD, as Dr. Ravichandran has rightly said, have over a period of time developed enormous capability: we can predict, say, a cyclone's path and trajectory very clearly five days ahead of its landfall. So we are able to do timely evacuation and repositioning of the response teams, which helps in minimizing mortality and even achieving zero-mortality milestones. But there are other hazards. And secondly, the way the hazard scenario has unfolded in the last few years, it has become a multi-hazard, cascading-hazard scenario in which one hazard leads to other hazards.

So there are incidents of cloudburst, which currently cannot be predicted, because there are various technical issues behind it; but cloudbursts leading to landslides, leading to flash floods, are a serious concern. So how do we prepare ourselves, given the current state of resources and developments? This is where AI can definitely pitch in. The idea is to take, from the various alert-generating agencies, all the data coming from our terrestrial networks, the satellite data and the sensor data, and then use it for predictive forecasting, and also to better the nowcasting and increase the granularity of the early warning signal, because there are limits on how many satellite systems we can put into place.

It is not possible to map each and every hill in the vulnerable areas; this is where the complications arise. And since development also has to take place in the vulnerable zones, particularly the Himalayan zone, the challenge is to use technology to the maximum. What I foresee is that the data available from multiple sources can definitely be analyzed, even with the current sensor-network capabilities, to pinpoint accurately the forecast and the early warning signals for the targeted population. That will help the district and state authorities to carry out timely evacuation, response and relief operations.

So this is one field where NDMA in particular is collaborating with multiple national agencies; the IMD and the Ministry of Earth Sciences are playing a very major contributory role in the development of such a DPG. I am very sure that the startup ecosystem in our country carries the agility to collaboratively support the efforts of the NDMA and the national agencies in taking this mission forward. This is where I believe we can definitely increase our early warning capabilities, particularly regarding flash floods, glacial lake outburst floods, lightning and landslides. And we are very hopeful that, with the support of the IMD and the Ministry of Earth Sciences, we can also take major steps towards predicting or identifying the most vulnerable or potential cloudburst-type situations, so that we are able to warn the public in time.

Akshara Kaginalkar

Thank you, sir. That is an important point: as Dr. Ravichandran said, and as you have taken it up, there is the need for data and infrastructure, linked to the setup we already have, and we have seen it in the expo. So many people are working on climate and sustainability; how can we bring that together and get the best out of it? That leads to a question for Dr. Shivkumar. ANRF is enabling the research ecosystem as well as the product ecosystem. We would like to hear how ANRF is helping in terms of creating AI funds, what advice you can give to the community on making and developing products, and what sort of support we can expect from ANRF on that.

Shivkumar Kalayanaraman

Okay. For folks who may not know: how many of you know about ANRF? Maybe I can get a show of hands. All right, not too many. ANRF is a statutory body of the Government of India, and Dr. Ravichandran is on my board as well. It is a body meant to catalyze research and development funding in India. We have grant funding, and we also have a capital fund called RDI, a one-lakh-crore fund meant only for the private sector. The grant funding is typically for the not-for-profit research sector, which includes academia, labs,

Section 8 companies and others; that is, research entities recognized by DSIR and so on. Our thinking is that we should not only have broad-based funding, like what the National Science Foundation does, but also more focused funding in a mission mode. So we have a couple of programs that might be of interest. One is AI for Science and Engineering, a program currently underway, and one of its tracks is AI for Weather and Climate, so it is already there. In addition, in about a month we are going to launch a major program called Leapfrog Demonstrators for Societal Innovation.

The idea is that you take a societal problem and, rather than talking about it, do something about it, and not just an incremental thing: it should be a leapfrog, and it should be a demonstrator, not just a theoretical exercise. These are the kinds of things we are doing. Alongside, we are also doing challenges; we will be introducing more challenge-mode calls for the sorts of things we don't see coming bottom-up in our proposal formats. As part of that, we are also collaborating deeply. In AI for Science and Engineering, on the Weather and Climate track, we are collaborating with MoES and with their Mission Mausam program. We are linking up to get both the expertise and the data, so that we can put together the AI expertise along with the sensor expertise and data, and we hope to similarly collaborate with other parts of the government. I would strongly urge collaboration from NDMA also at this stage.

That is the general approach, and it accelerates things. I also want to mention that just two days back we announced a hackathon: an AI for Science and Engineering hackathon, for Weather and Climate actually. It is currently open and run in partnership with IBM and IIT Delhi; we put out datasets, also in partnership with MoES and others, and we are encouraging work on them. In addition, we will be doing more: as I said, there is the societal innovation program, which can also admit newer types of proposals, where you bring disciplines together and actually go and solve real problems.

So I think that is the nature of what we will try to do. Then the RDI fund is meant for translation and scaling. In addition, we also have translation centers; we have a program that is open right now. These are the various programs and mechanisms we plan to run. But the goal of all of this is always to focus on impact and work backwards, rather than doing undirected research. We want to drive research in a more directed way towards impact, while at the same time also supporting curiosity-driven, broad-based research. That is the balance we are trying to strike.

Akshara Kaginalkar

What we would like to have are consistent solutions: not only a demonstration product, but an operational one, where services come out of it every day. How do you see the public-private partnership coming in?

Shivkumar Kalayanaraman

In all our mission-mode programs, the goal is to accelerate things from a lower technology readiness level, like TRL 1 or 2, to the mid-range, like TRL 5 or 6. That is their purpose. As part of all of those programs, we are supporting work at a critical scale, so we are encouraging consortiums to come and bid, or a hub-and-spoke type setup. We are explicitly saying: don't make individual proposals; they have to be collaborative proposals.

In some of our programs, we have put out open IP licensing, so that a company or a startup can partner with academia, pick up the IP and quickly translate it; that also encourages rapid translation. So we are introducing IP and other innovations to drive translation, and we are going to do this in a few more programs. Plus, we have the Translational Research Centers program, which mandates partnership with industry as well. So we are using different mechanisms, and all of them drive collaboration. Plus the RDI fund, the one-lakh-crore fund: by the time it hits the market, it will become three or four lakh crores.

It is only for industry, but if industry does not have the capabilities, they must collaborate with academia and so on, so there will be a demand for industry-academia collaboration coming from that side as well. We are attacking the problem from multiple directions, and all of these are meant to encourage collaboration for impact.

Akshara Kaginalkar

That collaboration leads us to industry. As we know, NVIDIA is very much into, and pioneering, many of the models coming in, and Dr. Karthik is part of that model development. Foundational AI weather and climate models such as Earth-2, GraphCast, AIFS and many more are now demonstrating good performance at a global scale.

So what further development do you see? Basically, how can we interpret the physics coming into the AI models? And validation is very, very important, as sir has said, at the very local scale: we are talking about air quality at even 400 metres, or floods at 10 metres, or something like that. So what more needs to be done to make models operationally robust at a hyper-local scale? Thank you.

Karthik Kashinath

Yeah, that's a rich question, but I'm going to keep it fairly brief, because it could take the next 30 minutes to get through. I'll touch on three things. One is creating the benchmark datasets and the benchmark metrics that are needed to achieve operational quality. If you look at what has led to the developments at the global scale at 25-kilometre resolution, it is the ERA5 dataset from ECMWF and the benchmark problems defined on that dataset, like WeatherBench, for example. So if we want to get down to the hyper-local scales, which of course depends on the region you are talking about and the types of metrics you care about, it would be very helpful to create the benchmark datasets and the associated benchmark metrics that can drive towards that.

And if we just wind the clock back, the whole deep learning revolution in AI began because of ImageNet, about 12 years ago: it defined benchmark datasets and benchmark metrics that drove the revolution. I think we can do the same thing at the hyper-local level. The second is to leverage the super-resolution techniques that AI has shown to be very powerful. We are already doing that in the Earth-2 program, taking 25-kilometre data and super-resolving it to one kilometre. We have also been doing this in weather and climate for decades with downscaling, the process of taking coarse-resolution simulations and making them high resolution. So if we can stretch that even further, down to these hyper-local scales, I am fairly confident that the generative AI technologies needed to get us there either already exist or will be invented in the next two to three years.

So I’m hopeful that that will help us get there. Thank you.

Akshara Kaginalkar

I think that's important, and we look forward to it. That's where public-private partnership comes into the picture, because we see it very specific to India, and within India very specific to a region, because we have very different climates all across, right from north and south to east and west. So having small models for a region can also be a future direction. Once we have this system in place, in terms of what is to be done, and we have the modeling in place, we need computational power for it, because all these models still need a lot of compute. That brings us to the investments, and that's where we would like to ask Mr.

Sandeep Singhal: your investment portfolios span energy transition and mobility. When we speak of weather and climate, it is not just weather and climate; it is broadly everything, in terms of cloud, energy, health, all those things. So, looking at your portfolios, what advice would you give to startups so that they can successfully scale up, in these individual domains as well as in integrated domains?

Sandeep Singhal

So I think, in terms of scale-up, the first thing that is very clear, at least in the climate space, is that partnership with the government is critical, because all the discussion we are having on data and on deployment, the government is the one driving it. So for any of our portfolio companies working in this space, we end up involving the government institutions they would work with, and we build those relationships with ministries at the fund level as well, so that we can introduce them to the various government programs. Beyond that, the other piece of advice is that you have to start segmenting the market you are targeting.

So there is the general population, and that goes to the government. That funding, as Dr. Shivkumar said, has to come in a public-private partnership, because collaboration, the word you used, matters both on the deployment side and on the funding side. It is great to see what the government has done with ANRF and with RDI, and that capital becoming available. There is also philanthropic capital now becoming available in this space: there are philanthropists looking at programs at scale and saying, if this program can scale, we will put money behind it. That is one part. The other segment is that you also have to think about where monetization is possible. There are enough segments where core business is getting impacted because of weather or other events, and that core business is willing to pay. So you have to segregate the two: in some ways, you are building for a public good, but the distillation of that allows you to build something for private good and charge for it.

Akshara Kaginalkar

Because climate is now linked very much to economics; climate and economics are one and the same thing. And it is not just the short term; we have to worry about the next 10, 20, 30 years, everything. So that's a very important point, and it leads to how we are preparing ourselves, which brings us to Dr. Praful. A key challenge for India is balancing economic growth while protecting our natural ecosystem. Can you give an example of a real-world application where AI can enable this transition and the creation of solutions which balance the two?

Praphul Chandra

I am going to pick up on something that Dr. Karthik said, and Manish also mentioned, which is the intersection of weather and energy. India is transitioning from a fossil-fuel-based economy to a renewable-energy-powered economy, and renewable energy is dominated by solar. Now, the kinds of models becoming available for hyper-local forecasting are also giving us much more predictive power over how much energy one rooftop solar panel will generate, which is critical for managing the grid. India's grid needs to be digitized, and in fact we have a team from the university here doing a demo that combines digital public infrastructure from the Ministry of Power, the India Energy Stack, with AI models that use weather forecasting to forecast grid loads, so as to trade energy between consumers and producers.

Or to do demand flexibility. Demand flexibility is, again, something I see as critically important as we talk about sustainable AI. When you move to a data-center economy, with its huge consumption of energy, you need to be able to support dynamic demand flexibility using a combination of AI and public infrastructure. So I think the intersection of AI and energy deserves quite a bit of attention, and I think we are there to address it.
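The rooftop-solar point above reduces, in its simplest form, to converting an irradiance forecast into expected panel output. This is a hedged back-of-the-envelope sketch; the panel parameters and hourly figures are illustrative, not from the India Energy Stack demo.

```python
# Hedged sketch: turning a hyper-local irradiance forecast into an estimate
# of rooftop panel output, the quantity a grid operator actually plans around.
# Panel area, efficiency and performance ratio below are illustrative defaults.

def panel_output_kw(irradiance_w_m2, area_m2=1.7, efficiency=0.20, performance_ratio=0.8):
    """Instantaneous output of one panel in kW: irradiance x area x efficiency x losses."""
    return irradiance_w_m2 * area_m2 * efficiency * performance_ratio / 1000.0

# Hourly irradiance forecast (W/m^2) for a clear afternoon -> kWh over 4 hours.
forecast = [800, 900, 850, 600]
energy_kwh = sum(panel_output_kw(g) for g in forecast)   # each hour contributes kW x 1 h
print(round(energy_kwh, 3))  # 0.857
```

Sharpening the irradiance forecast (via cloud nowcasting, say) tightens this estimate directly, which is why hyper-local weather models and grid digitization reinforce each other.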

Akshara Kaginalkar

Thank you. See, we have data in place, policies in place, science in place, and now money in place. So what is important is how you get these solutions to the stakeholders and end users, and that leads to a question for Professor Dev Niyogi, because he has experience connecting the science to the governance to the actual stakeholders, and he has been leading digital-twin and AI-driven modeling frameworks. So what opportunities do you see? You have done it in Austin, but in India we are all aware of the different types of cities we have. What opportunities do you see in building digital twins that support the climate-extreme and disaster-management goals that all of us have just deliberated upon?

The challenges are there. The solutions are there. How do you link it?

Dev Niyogi

Right. I have two minutes, it looks like, before we end the session, and this is a course I teach over two semesters. But what I'll say is that weather is the tragedy of the commons: everyone is affected by it, but no one wants to pay for it. In the same way, when we need institutional investments, the question comes up: how do you make this into a monetizable product? This is where, as Director General Mahapatra mentioned this morning, we can create box models which are very simple, scalable and transferable, and we can create digital twins which are very decision-specific. We don't need to predict every variable at every scale for everything.

If we define why we are creating the models and what decision we are going to guide, then based on that data-to-decision framework we can make this into a very intelligent, scalable modeling system. That, I think, is where the joy of bringing together AI, physics and the human decisions and dimensions comes into the picture. People don't need weather; they need weather that can help them make a decision. So we need to move from simply producing weather output to adding something that helps me make an intelligent decision, whatever that may be: a long-term hedge against something, or a short-term decision about whether I walk in the sun or in the shade.

And if we achieve that, I think we will make this into something that transforms the manner in which we predict: not for a variable of interest, but for a decision that we want to make. That is where I think digital twins come into the picture. I'll stop there.
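The data-to-decision framing above has a classical minimal form: the cost-loss rule, under which you act when the forecast probability of an event exceeds the ratio of protection cost to avoidable loss. A decision-specific digital twin could expose exactly this kind of layer; the numbers here are illustrative, not from the panel.

```python
# Hedged sketch: the classic cost-loss decision rule, a minimal
# "weather that helps you make a decision" layer. Expected loss without
# action is p * L; with action it is C; so act when p > C / L.

def should_act(p_event, cost, loss):
    """Mitigate (pre-position teams, shift demand, etc.) iff p_event > cost/loss."""
    return p_event > cost / loss

print(should_act(0.30, cost=10, loss=100))  # True: 0.30 > 0.10, protection pays off
print(should_act(0.05, cost=10, loss=100))  # False: too unlikely to justify the cost
```

Note that the forecast variable itself never appears; only the event probability and two decision-side numbers do, which is the sense in which the prediction target becomes the decision rather than the variable.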

Akshara Kaginalkar

So I think the digital twin can be one of our first steps: we can look at the complete AI spectrum, right from monitoring to processing to modeling to reaching the end users, and have a complete portfolio of AI applications. This brings us to the end of the session, and we would like to open the floor for just half a minute. I'm very sorry for this format; disclaimer, it's not my doing.

Audience

One word I didn't hear much of was insurance. Climate risk typically reflects in insurance rates becoming very high, or your house simply going uninsured, which is happening in Northern California and Florida. I'm not sure how predominant this is in India, but ultimately people have to stay there; it's difficult to move. So how do you marry the two?

Sandeep Singhal

Yeah, I sort of referred to that in the notion of translating the work you are doing on the DPI side and bringing that technology into more monetizable products. Insurance actually ends up being one of the first monetizable products that comes out of this.

Akshara Kaginalkar

We can take just one more question, maybe, and we can always discuss outside, because this is a very good opportunity: we have the experts here. I have a few questions myself, but I'll ask you outside.

I just want to quickly mention that we have announced at this AI Summit partnerships with NVIDIA, with Google and Qualcomm, and we are doing other things with the Gates Foundation. So many things are happening, and I invite my colleagues here to work with us more and focus on India as well, in addition to the world.

Thank you, sir. We would like to thank you all; it was great listening to all of you, and we look forward to more. And see, don't get me wrong: I was thinking, you know, there are eight people and I am the only one; then I was thinking it should be an equal number, and that disturbed me. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (15)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator introduced a cross‑sectoral audience that included the NRF CEO and the NDMA secretary.”

The knowledge base lists Dr. Shiv Kumar as the NRF CEO and Manish Bharadwaj as a key figure from NDMA, confirming their presence on the panel [S2].

Confirmed (high)

“Dr. Amit Sheth is the founder of the Indian AI Research Organization (IRO) and promotes compact, custom models rather than large foundational models.”

Source S5 explicitly describes Dr. Amit Sheth as the founder of IRO and his advocacy for small, explainable neurosymbolic models, matching the report’s description.

Confirmed (high)

“IRO’s strategy is to develop very agile, small, specific models for hyper‑local extreme‑weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque.”

Both S5 and S49 emphasize IRO’s focus on “small AI” – practical, affordable, locally-relevant models that avoid the opacity of large foundation models [S5] and [S49].

Confirmed (high)

“Manish Bhardwaj called for a trusted, low‑cost early‑warning system that blends AI with terrestrial sensors, satellite feeds and existing alert‑generation agencies.”

The knowledge base notes Manish Bhardwaj’s emphasis on reliable, trusted, and accessible early-warning systems for disaster preparedness, aligning with the report’s statement [S1].

Additional Context (medium)

“Dr. Amit Sheth’s approach emphasizes explainability, safety and alignment in AI models for specific problems.”

S5 adds that IRO’s models are designed with explainability, safety, and alignment as core qualities, providing additional nuance to the report’s description of the institute’s strategy.

External Sources (73)
S1
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Sandeep Singhal- Venture capitalist with investment portfolios in energy transition and mobility
S2
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S3
Survival Tech Harnessing AI to Manage Global Climate Extremes — – Amit Sheth- Praphul Chandra- Dev Niyogi – M. Ravichandran- Dev Niyogi- Akshara Kaginalkar
S4
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Akshara Kaginalkar- Panel moderator/host This panel discussion at an AI Summit brought together leading experts from g…
S6
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Professor Seth- Referenced in transcript but appears to be referring to Amit Sheth
S7
Survival Tech Harnessing AI to Manage Global Climate Extremes — – Shivkumar Kalayanaraman- Sandeep Singhal
S8
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S10
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Dr. Kartik- Mentioned in introduction as director of Center for Excellence for Data Sciences, distinguished scientist a…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S15
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Manish Bhardwaj- Secretary of NDMA (National Disaster Management Authority), disaster management
S16
Survival Tech Harnessing AI to Manage Global Climate Extremes — -M. Ravichandran- Ministry of Earth Sciences Secretary, leading weather, climate and sustainability initiatives
S17
The Foundation of AI Democratizing Compute Data Infrastructure — Focus on developing domain-specific, smaller models that require less computational power and infrastructure
S18
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S19
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister, for having us. As my colleagues have said, India will no doubt be a powerhouse in AI in many …
S20
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S21
India harnesses AI for advanced weather forecasting amid climate challenges — India is leveraging AI to enhance its weather forecasting capabilities in response to the escalating challenges posed by…
S22
Regulating Open Data_ Principles Challenges and Opportunities — I think we now need to look at what data sets are needed for research, which could be academia and research students and…
S23
Connecting open code with policymakers to development | IGF 2023 WS #500 — Helani Galpaya:And I agree with the minister. Some of the solutions are technical. We’ve certainly worked with different…
S24
The AI revolutionizing weather forecasting — The European Centre for Medium-Range Weather Forecasts (ECMWF)has teamed up with Huawei to develop an AI-based forecasti…
S25
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S26
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — In addition to supporting climate action, AI is expected to play a significant role in digitally managed energy systems….
S27
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Waqas Hassan:I’d like to add one thing to say, we would just start, and I said, she’s spoken about global cooperation as…
S28
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Focus on smaller, task-specific models while not neglecting progress made with large language models
S29
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S31
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development The speaker argues that public-private partnerships are not optional but …
S32
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S33
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S34
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — It is argued that understanding the environmental consequences can catalyse more efficient methods for reducing and mana…
S35
Building Climate-Resilient Systems with AI — “we are quite privileged to work with the Grail team and, of course, global experts to start to now quantify, both in te…
S36
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — In conclusion, sandboxes are valuable tools for testing and implementing regulatory policies. The Brazil case highlights…
S37
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S38
Survival Tech Harnessing AI to Manage Global Climate Extremes — “One thing I would like to see more used in practice is transfer learning which of course some regions of the world are …
S39
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — Marie Granis suggests that instead of building a pan-African LLM, each country should develop small models for their spe…
S40
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — When facing limited datasets for minor Indian languages, India launched crowd-sourcing initiatives that allowed people t…
S41
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S42
World Meteorological Organization — WMO recognises the potential power of Artificial Intelligence to revolutionise weather forecasts and early warnings. WMO…
S43
AI: Lifting All Boats / DAVOS 2025 — Dowidar mentioned ongoing work with UNDP on AI-powered early warning systems. Further research on implementation and sca…
S44
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance, and Innovation for Development — The conversation touched on artificial intelligence, with a call for proactively shaping AI policies to reflect regional…
S45
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise
S46
The Foundation of AI Democratizing Compute Data Infrastructure — Focus on developing domain-specific, smaller models that require less computational power and infrastructure
S47
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Balance between large foundational models and small specialized models Development | Infrastructure | Economic Ioanna …
S48
Part 7: ‘Converging realities: Embedding governance through digital twins’ — Digital twin governance begins at the intersection of technical design and responsibility. To function effectively withi…
S49
How AI Drives Innovation and Economic Growth — Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that addre…
S50
DIGITAL DIVIDENDS — As digital development proceeds from emerging to transitioning and then to transforming, policy reforms beco…
S51
Survival Tech Harnessing AI to Manage Global Climate Extremes — “And so IRO currently is developing original work on building very agile, small, specific models”[1]. “So original resea…
S52
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Focus on smaller, task-specific models while not neglecting progress made with large language models
S53
OPENING SESSION | IGF 2023 — Ema Arisa:Thank you, Ms. Wan. I would like to move on to the next question. So the guiding principles and code of conduc…
S54
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S55
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — The aim is for GPAI to have an independent identity, similar to that of the World Health Organization (WHO) in the field …
S56
AI and Data Driving India’s Energy Transformation for Climate Solutions — If wrong data will be fed to the tool, the wrong decisions will be indicated. So as it has been told by my colleague, we…
S57
Agenda item 6 — Establishing grant programs and public-private partnerships as potential funding mechanisms
S58
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — It has to be collaborative proposals. In some of our programs, we have put out open IP licensing so that when you have a…
S59
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S60
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S61
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S62
Building Climate-Resilient Systems with AI — Artificial intelligence | Environmental impacts
S63
HIGH LEVEL LEADERS SESSION IV — The deployment of emerging technologies, such as artificial intelligence, is seen as promising in addressing climate cha…
S64
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Building trust is highlighted as a fundamental requirement for data governance in multilateral environments. Trust can b…
S65
Can we test for trust? The verification challenge in AI — Industry Standards and Regulatory Approaches Legal and regulatory | Infrastructure Trager identifies the need for a st…
S66
The Final Frontier: Emerging Tech and Space Economy for Sustainable Earth — Moderator encourages audience to introduce themselves and ask questions to any panelist Moderator invites audience ques…
S67
GUIDE ON THE APPLICATION OF NEW TECHNOLOGY AND RESEARCH TO PUBLIC WEATHER SERVICES — – Long-range forecasting (from 30 days up to two years): monthly outlook: description of averaged weather parameters …
S68
Manual on the Global Data-processing and Forecasting System — – (c) Areas of showers: large shower symbols distributed over the area with the symbol for rain, snow or hail added as…
S69
www.ssoar.info — (1) The tendency in global health to focus primarily on controlling and treating specific diseases (in developing …
S70
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Geralyn Miller:A couple of things, I think, from the pandemic, and that’s a really great question, because as a society,…
S71
[Tentative Translation] — 202 Currently under the consideration of the Integrated Innovation Strategy Promotion Council as of March 2021. fundin…
S72
Scaling AI for Billions_ Building Digital Public Infrastructure — A critical concern emerged around the fragility of existing digital infrastructure and organisations’ readiness for AI i…
S73
Quantum hype and predictions for the future of technology — He illustrated his uncertainty using parallels with aviation:
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Amit Sheth
2 arguments · 141 words per minute · 394 words · 166 seconds
Argument 1
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth)
EXPLANATION
IRO intends to create original, lightweight AI models that are tailored to specific tasks such as hyper‑local extreme‑weather prediction. The strategy deliberately avoids relying on large foundation models because of their opaque training data and computational baggage.
EVIDENCE
He explained that IRO is developing original work on building very agile, small, specific models for hyper-local extreme weather issues, and that they will not be built on top of large language or foundational models which come with a lot of baggage and unknown training data [26-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IRO’s focus on lightweight, domain-specific models is described in the discussion overview and aligns with calls for smaller models in AI democratization literature [S1][S17][S18].
MAJOR DISCUSSION POINT
National AI research vision for climate
AGREED WITH
Praphul Chandra, Dev Niyogi
DISAGREED WITH
Praphul Chandra
Argument 2
Position India to lead in AI‑driven climate solutions and support startups in health, pharma, and sustainability (Amit Sheth)
EXPLANATION
The vision presented to the Prime Minister highlighted AI as a lever for large economic and social impact, especially by empowering startups in health, pharma, and sustainability sectors. Partnerships with industry bodies are meant to translate research into global products originating from India.
EVIDENCE
He recounted presenting to the PM a broad idea that includes a core foundational AI focus on enterprises, supporting the startup ecosystem, and specific partnerships such as with the Indian Pharma Alliance covering 80% of India’s pharma output and health partners, aiming to make impact in sustainability, health and pharma [24-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision to position India as a leader in AI-driven climate and startup support is highlighted in the opening remarks and in a plenary noting India’s AI ambitions [S1][S19][S18].
MAJOR DISCUSSION POINT
National AI research vision for climate
M. Ravichandran
4 arguments · 169 words per minute · 744 words · 263 seconds
Argument 1
Fuse physics‑based numerical models with AI to capture both spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran)
EXPLANATION
Ravichandran argues that accurate local weather prediction requires a hybrid approach that combines the spatial strength of physics‑based numerical models with the temporal pattern‑recognition ability of AI. This integration is essential for forecasting high‑impact events like cloudbursts at fine scales.
EVIDENCE
He used the analogy of needing to see both the elephant (large-scale) and the ant (small-scale) and stated that while physics-based models handle spatial aspects, AI is better for time-series, so both must be fused to understand local weather and predict events such as cloudbursts [47-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ravichandran’s call to fuse physics-based models with AI mirrors recommendations for an optimal AI-physics blend and the need to integrate both spatial and temporal aspects [S1][S2][S20].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
AGREED WITH
Manish Bhardwaj
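The fusion Ravichandran describes can be made concrete with a minimal sketch (all data are synthetic and the function names are invented for illustration, not any operational system): a statistical corrector is trained on the residuals between a physics-based (NWP) forecast and observations, then applied to new forecasts — the numerical model carries the spatial physics, and the learned layer absorbs its systematic error.

```python
import numpy as np

def fit_residual_corrector(nwp_forecast, observations):
    """Fit a least-squares correction mapping a physics-based (NWP)
    forecast onto observed values, learning the systematic residual."""
    # Design matrix: [forecast, bias term]
    X = np.column_stack([nwp_forecast, np.ones_like(nwp_forecast)])
    coeffs, *_ = np.linalg.lstsq(X, observations, rcond=None)
    return coeffs  # slope and intercept of the learned correction

def correct(nwp_forecast, coeffs):
    """Apply the learned correction to a new physics-based forecast."""
    slope, intercept = coeffs
    return slope * nwp_forecast + intercept

# Synthetic example: the NWP model runs ~2 degrees too warm on average
rng = np.random.default_rng(0)
truth = rng.uniform(20.0, 35.0, size=200)           # observed temperatures
nwp = truth + 2.0 + rng.normal(0.0, 0.3, size=200)  # biased model output

coeffs = fit_residual_corrector(nwp, truth)
corrected = correct(nwp, coeffs)
raw_error = np.abs(nwp - truth).mean()
corrected_error = np.abs(corrected - truth).mean()
```

The learned layer removes the warm bias while leaving the physics-based forecast as the backbone, which is the sense in which both panellists insist AI should complement, not replace, numerical models.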
Argument 2
Use AI to improve prediction of extreme events such as cloudbursts, enabling timely evacuations and response (M. Ravichandran)
EXPLANATION
He points out that current forecasting systems cannot reliably predict cloudbursts, which cascade into landslides and floods. AI is being explored as a tool to fill this gap and support early‑warning and evacuation decisions.
EVIDENCE
He noted that they do not know how to predict cloudbursts and are investigating whether AI can help, and later elaborated on multi-hazard scenarios where cloudbursts trigger landslides and flash floods, emphasizing AI’s role in improving early-warning signals for targeted populations [65-68][161-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s efforts to use AI for extreme-event forecasting, such as floods and cloudbursts, are reported in recent climate-forecasting initiatives [S21][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
Manish Bhardwaj
Argument 3
Open data policies and collaborative consortia are needed to turn research into deployable systems (M. Ravichandran)
EXPLANATION
Ravichandran stresses that India possesses vast historical weather data, but its full potential can be realized only if the data are openly shared and multidisciplinary teams are engaged. Open data would enable diverse researchers to reduce model errors, improve initial conditions, and build trustworthy forecasts.
EVIDENCE
He highlighted the huge volumes of legacy IMD data, the need to utilize it, the importance of opening up the data so many minds can work on it, and the necessity of validation, verification, and trust in AI-enabled forecast systems [126-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of open data and collaborative consortia is discussed in policy guidance on open data principles and in calls for shared datasets for AI research [S22][S23][S1].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
AGREED WITH
Karthik Kashinath
Argument 4
AI can be used for downscaling coarse‑resolution weather models to hyper‑local (kilometer‑scale) forecasts, improving prediction accuracy for localized events.
EXPLANATION
Ravichandran explains that while large‑scale numerical models provide broad forecasts, AI techniques can refine these outputs to much finer spatial resolutions, enabling accurate local weather predictions such as one‑kilometer forecasts.
EVIDENCE
He notes that AI can downscale better, mentioning the need for one-kilometer resolution forecasts and that AI can achieve this downscaling, thereby improving localized weather prediction [140-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based downscaling approaches, building on global AI weather models such as Pangu-Weather, demonstrate the feasibility of converting coarse forecasts to kilometer-scale predictions [S24][S20][S1].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
Shivkumar Kalayanaraman
4 arguments · 178 words per minute · 924 words · 311 seconds
Argument 1
Deploy multimodal AI (time‑series, vision, multispectral cameras) for ultra‑short‑term weather prediction from low‑cost sensors (Shivkumar Kalayanaraman)
EXPLANATION
Kalayanaraman envisions using inexpensive cameras and multispectral sensors to capture sky patterns and generate forecasts a few hours ahead. By fusing insights across modalities rather than raw data, AI can deliver rapid, localized predictions.
EVIDENCE
He described pointing a camera at the sky, using IR or multispectral cameras to forecast one to four hours ahead, noting the dramatic cost drop of sensors and the opportunity to fuse insights across modes instead of complex data fusion [76-84].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
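Camera-based nowcasting of the kind Kalayanaraman sketches can be illustrated in a few lines, under the strong simplifying assumption that cloud motion is a uniform pixel shift (the data and names below are illustrative, not any deployed system): estimate the displacement between two sky frames, then advect the latest frame forward.

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame, max_shift=5):
    """Find the (dy, dx) pixel shift that best aligns two sky images,
    by brute-force search over small displacements."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.abs(np.roll(prev_frame, (dy, dx), axis=(0, 1))
                         - curr_frame).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def nowcast(curr_frame, shift, steps=1):
    """Advect the latest cloud field forward by the estimated motion."""
    dy, dx = shift
    return np.roll(curr_frame, (dy * steps, dx * steps), axis=(0, 1))

# Synthetic example: a cloud blob drifting 2 pixels right per frame
frame = np.zeros((20, 20))
frame[8:12, 2:6] = 1.0
prev_frame = frame
curr_frame = np.roll(frame, (0, 2), axis=(0, 1))

shift = estimate_shift(prev_frame, curr_frame)   # recovers (0, 2)
forecast = nowcast(curr_frame, shift, steps=1)   # cloud advected onward
```

Real systems replace the brute-force search with optical flow or a learned video model, but the structure — motion estimation on cheap imagery followed by extrapolation — is the same.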
Argument 2
NRF’s targeted funding programs accelerate AI solutions for weather, climate, and disaster risk (Shivkumar Kalayanaraman)
EXPLANATION
Kalayanaraman outlines how the National Research Foundation (NRF) provides both grant and capital funding, mission‑mode programs, and challenge‑driven initiatives to fast‑track AI research in weather, climate and disaster risk. These mechanisms aim to move from basic research to demonstrable, impact‑oriented solutions.
EVIDENCE
He explained that NRF is a statutory body offering grant funding, a one-lakh-crore RDI capital fund, the AI for Science & Engineering program (including a track for Weather and Climate), the Leapfrog Demonstrators for Societal Innovation, a hackathon with IBM and IIT Delhi, and collaborations with MoES and other agencies [193-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
NRF’s funding mechanisms, including grant programmes and the one-lakh-crore RDI fund, are outlined as accelerators for AI weather solutions [S2][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
Akshara Kaginalkar, Sandeep Singhal
Argument 3
NRF’s AI for Science & Engineering program and Leapfrog Demonstrators provide mission‑mode funding and challenge‑driven collaboration (Shivkumar Kalayanaraman)
EXPLANATION
He emphasizes two flagship NRF initiatives: the AI for Science & Engineering program, which funds weather and climate AI research, and the Leapfrog Demonstrators, which focus on rapid, high‑impact societal solutions. Both are designed to encourage collaborative, outcome‑focused projects.
EVIDENCE
He detailed the AI for Science & Engineering program’s weather and climate track, the upcoming Leapfrog Demonstrators for Societal Innovation, and the emphasis on working backwards from impact while still supporting curiosity-driven research [203-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI for Science & Engineering programme and Leapfrog Demonstrators are described as mission-mode, challenge-driven funding streams in NRF’s strategy documents [S2].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 4
The AI for Science & Engineering hackathon, organized with IBM and IIT Delhi, provides curated datasets and a collaborative platform to accelerate AI research for weather and climate.
EXPLANATION
Kalayanaraman highlights that the hackathon releases specific weather and climate datasets to participants, fostering rapid prototyping and community engagement, which speeds up the development of AI solutions for societal challenges.
EVIDENCE
He mentions a hackathon conducted in partnership with IBM and IIT Delhi that provides data sets for weather and climate AI research and encourages participants to develop solutions in this domain [218-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI for Science & Engineering hackathon, co-organized with IBM and IIT Delhi, provides curated datasets for rapid prototyping [S2].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Karthik Kashinath
1 argument · 166 words per minute · 436 words · 157 seconds
Argument 1
Develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
EXPLANATION
Kashinath proposes creating benchmark datasets and metrics analogous to ImageNet to drive AI progress at hyper‑local scales. He also suggests leveraging super‑resolution methods to downscale coarse climate data to kilometer‑level resolution, enabling operational local forecasts.
EVIDENCE
He cited ECMWF’s ERA5 reanalysis and WeatherBench as examples of benchmarks that spurred global-scale AI models, argued for similar hyper-local benchmarks, and described using super-resolution (e.g., in the Earth-2 program) to transform 25 km data to 1 km resolution, expecting generative AI to fill remaining gaps [261-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Benchmark datasets and super-resolution methods are central to recent AI weather projects such as Pangu-Weather, supporting operational hyper-local modelling [S24][S20].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
AGREED WITH
M. Ravichandran
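Statistical downscaling of the kind Kashinath describes can be sketched as follows — a deliberately simplified stand-in for super-resolution, with fully synthetic data and invented names: a coarse field is upsampled, and a learned correction driven by a static high-resolution covariate (here, elevation) adds the fine-scale detail.

```python
import numpy as np

def upsample(coarse, factor):
    """Nearest-neighbour upsampling of a coarse field to a finer grid."""
    return np.kron(coarse, np.ones((factor, factor)))

def fit_downscaler(coarse_fields, fine_fields, elevation, factor):
    """Learn a linear correction predicting the fine-scale residual
    from a static high-resolution covariate (here: elevation)."""
    residuals = np.concatenate([(f - upsample(c, factor)).ravel()
                                for c, f in zip(coarse_fields, fine_fields)])
    elev = np.tile(elevation.ravel(), len(coarse_fields))
    X = np.column_stack([elev, np.ones_like(elev)])
    coeffs, *_ = np.linalg.lstsq(X, residuals, rcond=None)
    return coeffs

def downscale(coarse, elevation, coeffs, factor):
    slope, intercept = coeffs
    return upsample(coarse, factor) + slope * elevation + intercept

# Synthetic example: temperature falls with elevation (~6.5 K per km)
rng = np.random.default_rng(1)
factor = 4
elevation = rng.uniform(0.0, 3.0, size=(8, 8))     # km, high-res grid
coarse_fields, fine_fields = [], []
for _ in range(20):
    base = rng.uniform(10.0, 30.0, size=(2, 2))     # 2x2 coarse cells
    fine = upsample(base, factor) - 6.5 * elevation # true fine field
    coarse_fields.append(base)
    fine_fields.append(fine)

coeffs = fit_downscaler(coarse_fields, fine_fields, elevation, factor)
pred = downscale(coarse_fields[0], elevation, coeffs, factor)
err = np.abs(pred - fine_fields[0]).mean()
```

Operational super-resolution replaces the linear map with a deep generative model, but the principle is identical: the coarse model supplies the large-scale signal, and high-resolution static data teach the system where local deviations occur.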
Dev Niyogi
3 arguments · 193 words per minute · 448 words · 139 seconds
Argument 1
Incorporate human and societal dimensions into models to make forecasts decision‑oriented (Dev Niyogi)
EXPLANATION
Niyogi stresses that purely physical models miss the human and societal factors that affect outcomes. AI can bridge this gap by embedding the ‘Jugaad’ mindset and societal constraints, making forecasts more accessible and actionable.
EVIDENCE
He introduced the Indian concept of ‘Jugaad’, explained that while equations capture natural laws, the human element is often missing, and argued that AI brings societal dimensions into predictive models, making them more usable [103-108].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
Argument 2
Build decision‑specific digital twins that turn raw weather data into actionable insights for users (Dev Niyogi)
EXPLANATION
Niyogi proposes creating digital twins that are tailored to specific decisions rather than generic weather outputs. By defining the decision context, these twins can provide scalable, transferable models that directly support user actions.
EVIDENCE
He described ‘box models’ that are simple, scalable, and transferable, and explained that decision-specific digital twins focus on why a model is created and what decision it informs, turning raw data into intelligent, actionable insights for users ranging from long-term hedging to short-term shade decisions [313-330].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
Argument 3
Develop simple, scalable “box models” that can be transferred across cities for disaster and climate decision support (Dev Niyogi)
EXPLANATION
He suggests building modular ‘box models’ that can be quickly adapted to different urban contexts, providing a common decision‑support framework for disaster and climate management. Such models avoid the need to predict every variable at every scale.
EVIDENCE
He referenced the creation of simple, scalable box models and decision-specific digital twins, emphasizing that defining the decision-to-data framework allows for transferable solutions across cities [318-322].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
AGREED WITH
Amit Sheth, Praphul Chandra
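A decision-specific “box model” can be as small as the sketch below (the thresholds and city names are illustrative assumptions, not values from the session): the model is defined by the decision it serves — here a shade advisory — and transfers across cities by changing parameters rather than physics.

```python
from dataclasses import dataclass

@dataclass
class ShadeDecisionTwin:
    """A toy 'box model': defined by the decision it serves (issue a
    shade advisory) rather than by the physics it resolves."""
    heat_threshold_c: float = 35.0   # advisory trigger (assumed value)
    min_hours: int = 2               # sustained exposure before acting

    def decide(self, hourly_temps_c):
        """Return True when the forecast justifies a shade advisory."""
        run = 0
        for temp in hourly_temps_c:
            run = run + 1 if temp >= self.heat_threshold_c else 0
            if run >= self.min_hours:
                return True
        return False

# The same box model, transferred across cities: only thresholds change
delhi_twin = ShadeDecisionTwin(heat_threshold_c=40.0)
bengaluru_twin = ShadeDecisionTwin(heat_threshold_c=34.0)

forecast = [31, 33, 35, 36, 37, 34]     # hourly temperatures (C)
delhi_alert = delhi_twin.decide(forecast)          # never reaches 40 C
bengaluru_alert = bengaluru_twin.decide(forecast)  # >= 34 C for 2+ hours
```

The decision-to-data framing shows up directly in the interface: the twin consumes raw forecast data but exposes only the action it was built to inform.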
Manish Bhardwaj
2 arguments · 125 words per minute · 804 words · 384 seconds
Argument 1
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
EXPLANATION
Bhardwaj advocates for an early‑warning architecture that combines AI analytics with physical sensor networks and satellite data to provide reliable alerts at minimal cost. The system must be hybrid, not purely AI, to ensure trust and resilience.
EVIDENCE
He emphasized the need for a trusted early-warning asset for the public, describing a hybrid model that integrates AI with sensor fabrics, satellite data, and alerts from various agencies, positioning AI as a supporting role rather than the sole solution [70-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hybrid AI-sensor early-warning architectures are part of India’s AI-enhanced disaster management pilots, emphasizing trusted low-cost alerts [S21][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
M. Ravichandran
Argument 2
AI can increase the granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data, even when sensor coverage is limited.
EXPLANATION
Bhardwaj argues that integrating multiple data streams through AI enables more precise, localized warnings for hazards such as flash floods and glacial‑lake outburst floods, supporting targeted evacuations and response actions.
EVIDENCE
He describes using data from alert-generating agencies, terrestrial sensors, satellite observations, and other sensory inputs to improve the granularity of early-warning signals despite limitations in satellite coverage, thereby enhancing targeted early warnings [175-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fusing satellite, sensor and agency data to improve warning granularity is highlighted in national AI-for-weather initiatives [S21].
MAJOR DISCUSSION POINT
Early warning and disaster management
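One simple way to realise the fusion Bhardwaj describes is a reliability-weighted combination of hazard probabilities from heterogeneous sources; the weights, probabilities, and threshold below are illustrative assumptions, not operational values.

```python
def fuse_alerts(sources):
    """Fuse hazard probabilities from heterogeneous sources into one
    score, weighting each source by an assumed reliability/coverage.

    sources: list of (probability, weight) pairs, e.g. agency bulletin,
    terrestrial sensor, satellite observation.
    """
    total_weight = sum(w for _, w in sources)
    if total_weight == 0:
        return 0.0
    return sum(p * w for p, w in sources) / total_weight

# Flash-flood cell: sparse sensors, but agency + satellite both alarmed
cell_sources = [
    (0.9, 2.0),   # river-gauge sensor nearby: high weight
    (0.7, 1.0),   # satellite rainfall estimate: partial coverage
    (0.8, 1.5),   # agency bulletin for the district
]
score = fuse_alerts(cell_sources)
alert = score >= 0.75   # evacuation-trigger threshold (assumed)
```

Down-weighting a source when its coverage of a given cell is poor is what lets the fused score stay meaningful even where, as Bhardwaj notes, sensor networks are thin.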
Sandeep Singhal
2 arguments · 171 words per minute · 580 words · 203 seconds
Argument 1
Public‑private partnerships, clear market segmentation, and monetization pathways (e.g., insurance, enterprise services) are critical for scaling AI climate solutions (Sandeep Singhal)
EXPLANATION
Singhal highlights that collaboration with government agencies is essential for data access and deployment, while startups must segment their markets (public vs. private) and identify revenue streams such as insurance or enterprise services. He also notes the growing role of philanthropic capital.
EVIDENCE
He stated that partnership with the government is critical for data and deployment, advised startups to segment markets (general public vs. government), mentioned public-private partnership funding, philanthropic capital, and identified insurance as an early monetizable product [280-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private partnership models and market-segmentation guidance are emphasized in NRF’s RDI fund and in broader discussions of India’s AI strategy [S2][S19].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
AGREED WITH
Akshara Kaginalkar, Shivkumar Kalayanaraman
Argument 2
Insurance is a natural first monetizable product for AI‑driven climate risk assessments (Sandeep Singhal)
EXPLANATION
He points out that climate‑risk AI outputs can be directly packaged into insurance products, providing a clear commercial route for early‑stage AI solutions in the climate domain.
EVIDENCE
He explicitly said that insurance ends up being one of the first monetizable products that emerges from AI-driven climate risk work [341-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Insurance as an early monetisation route for climate-risk AI is noted in reports on AI-driven risk-assessment products [S21].
MAJOR DISCUSSION POINT
Climate risk insurance and monetization
Praphul Chandra
5 arguments · 156 words per minute · 373 words · 142 seconds
Argument 1
Achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
EXPLANATION
Chandra questions how little data is needed to fine‑tune large foundation models for specific climate applications, proposing that breakthroughs in small‑data fine‑tuning could unlock cross‑domain utility.
EVIDENCE
He described the concept of ‘small-data fine-tuning’, asking how small the dataset can be for effective fine-tuning of large foundation models and noting its potential across multiple domains [104-108].
MAJOR DISCUSSION POINT
Data challenges, small‑data fine‑tuning, and transfer learning
AGREED WITH
Amit Sheth, Dev Niyogi
DISAGREED WITH
Amit Sheth
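Chandra’s question can be made concrete with a toy sketch: the “backbone” below is just a frozen random feature map standing in for a pretrained foundation model, and all data are synthetic. Only a lightweight linear head is fitted, on a dozen labelled examples — the small-data fine-tuning idea in miniature.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained foundation backbone: a fixed (frozen)
# random feature map. In practice this would be a large model's
# penultimate layer; here it is a random projection plus tanh.
W_frozen = rng.normal(size=(8, 64))

def backbone(x):
    return np.tanh(x @ W_frozen)   # frozen features, never retrained

def fine_tune_head(x_small, y_small, ridge=1e-3):
    """Fit only a lightweight linear head on a tiny labelled dataset,
    leaving the backbone untouched."""
    feats = backbone(x_small)
    A = feats.T @ feats + ridge * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ y_small)

# Tiny domain dataset: 12 labelled examples over 8 input variables
x_small = rng.normal(size=(12, 8))
y_small = x_small[:, 0] * 2.0 + 0.5      # hidden target relationship

head = fine_tune_head(x_small, y_small)
pred = backbone(x_small) @ head
train_err = np.abs(pred - y_small).mean()
```

Because only the head’s 64 parameters are trained, a handful of examples suffices to fit the task — which is precisely why the open question is how small the dataset can be before the adapted model stops generalising, not whether it can fit.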
Argument 2
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions, reducing data requirements (Praphul Chandra)
EXPLANATION
He suggests using transfer learning to apply models trained in data‑rich regions to data‑sparse areas, acknowledging that while physics is universal, hyper‑local uniqueness requires careful constraint handling.
EVIDENCE
He noted that some regions are data-rich while others are data-sparse, the physics of weather is the same globally, but hyper-local uniqueness exists, and efficient transfer learning could be impactful [110-114].
MAJOR DISCUSSION POINT
Data challenges, small‑data fine‑tuning, and transfer learning
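Transfer learning across regions can be sketched minimally (synthetic data; the linear “model” is a stand-in for a real forecast model): weights learned in a data-rich region are reused, and only a single hyper-local parameter is estimated from a handful of sparse-region observations — the physics is shared, the local uniqueness is the residual.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_linear(x, y, ridge=1e-6):
    """Ridge-regularised least squares."""
    A = x.T @ x + ridge * np.eye(x.shape[1])
    return np.linalg.solve(A, x.T @ y)

# Data-rich region: many samples of a shared physical relationship
w_shared = np.array([3.0, -1.0, 0.5, 2.0])   # "physics is the same"
x_rich = rng.normal(size=(500, 4))
y_rich = x_rich @ w_shared

# Data-sparse region: same physics plus a hyper-local offset,
# observed only a handful of times
local_offset = 1.5
x_sparse = rng.normal(size=(5, 4))
y_sparse = x_sparse @ w_shared + local_offset

# Transfer: keep the rich-region weights, estimate only the single
# local parameter from the five sparse observations
w_transfer = fit_linear(x_rich, y_rich)
offset_est = float((y_sparse - x_sparse @ w_transfer).mean())

x_test = rng.normal(size=(50, 4))
y_test = x_test @ w_shared + local_offset
err = np.abs(x_test @ w_transfer + offset_est - y_test).mean()
```

Fitting all five weights from the sparse data alone would be fragile; transferring the shared structure reduces the local learning problem to one parameter, which is the economy Chandra is pointing at.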
Argument 3
Hyper‑local solar generation forecasts enable precise grid load balancing and demand‑flexibility mechanisms (Praphul Chandra)
EXPLANATION
Chandra explains that AI models capable of forecasting rooftop solar output at a hyper‑local level can help grid operators balance loads and implement demand‑flexibility, crucial for a renewable‑energy‑driven grid.
EVIDENCE
He highlighted that hyper-local forecasts can predict how much energy a rooftop solar panel will generate, which is critical for managing the grid, and linked this to demand-flexibility and data-center energy consumption [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled hyper-local solar forecasts for grid balancing are highlighted in discussions of AI for energy management and digital marketplaces [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
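The link from hyper-local solar forecasts to demand flexibility can be sketched as follows (the panel area, efficiency, and forecast values are illustrative assumptions): convert an irradiance forecast into PV output, then schedule a shiftable load — say, a data-centre batch job — into the window of highest forecast generation.

```python
def pv_output_kw(irradiance_w_m2, panel_area_m2=10.0, efficiency=0.2):
    """Convert a hyper-local irradiance forecast into rooftop PV output."""
    return irradiance_w_m2 * panel_area_m2 * efficiency / 1000.0

def best_flex_window(irradiance_forecast, window_h=2):
    """Pick the contiguous window where a shiftable load (e.g. a batch
    job) soaks up the most forecast solar generation."""
    gen = [pv_output_kw(g) for g in irradiance_forecast]
    best_start, best_sum = 0, -1.0
    for start in range(len(gen) - window_h + 1):
        total = sum(gen[start:start + window_h])
        if total > best_sum:
            best_start, best_sum = start, total
    return best_start, best_sum

# Hourly irradiance forecast (W/m^2) from early morning onwards
forecast = [50, 200, 450, 700, 850, 800, 600, 300]
start, energy_kwh = best_flex_window(forecast, window_h=2)
```

With this forecast the scheduler lands on the two peak-irradiance hours; the same pattern, run over many prosumers through a shared digital public infrastructure, is what turns hyper-local forecasts into grid-level demand flexibility.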
Argument 4
Combine AI weather forecasts with the India Energy Stack to create digital marketplaces for energy trading (Praphul Chandra)
EXPLANATION
He describes integrating AI‑driven weather forecasts with the India Energy Stack—a digital public infrastructure—to enable a marketplace where consumers and producers can trade energy based on predictive load information.
EVIDENCE
He mentioned a team combining the India Energy Stack with AI weather models to forecast grid loads, facilitating energy trading between consumers and producers and supporting demand flexibility [294-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI weather forecasts with the India Energy Stack to enable energy trading platforms is described in AI-energy system sessions [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
Argument 5
AI‑driven demand‑flexibility solutions can reduce data‑center energy consumption by aligning workloads with weather‑informed grid load forecasts.
EXPLANATION
Chandra points out that using AI weather forecasts together with the India Energy Stack enables dynamic adjustment of data‑center demand, supporting sustainable AI operations and improving overall grid stability.
EVIDENCE
He notes that demand flexibility for data centers can be supported by AI and public infrastructure, linking weather forecasts to dynamic energy management to reduce consumption [296-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Demand-flexibility for data-centers using weather-informed grid forecasts is mentioned in AI-energy management contexts [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
Audience
1 argument · 154 words per minute · 75 words · 29 seconds
Argument 1
Climate‑risk AI outputs can be packaged into insurance products, providing a viable commercial avenue (Audience)
EXPLANATION
An audience member highlighted that climate risk directly influences insurance pricing and availability, and suggested that AI‑derived risk assessments could be integrated into insurance offerings.
EVIDENCE
He noted that climate risk affects insurance rates and leads to situations where houses become uninsurable, asking how AI risk assessments could be married to insurance products [337-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Audience comment aligns with documented use cases of AI in climate-risk insurance offerings [S21].
MAJOR DISCUSSION POINT
Climate risk insurance and monetization
Akshara Kaginalkar
3 arguments · 146 words per minute · 2193 words · 897 seconds
Argument 1
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services.
EXPLANATION
Kaginalkar stresses that scaling AI solutions requires collaborative proposals, hub‑and‑spoke consortia, and open IP licensing so startups can partner with academia. She highlights multiple mechanisms such as translational research centres and the large RDI fund that mandate industry‑academic collaboration to move prototypes to market.
EVIDENCE
She describes encouraging consortium bids, hub-and-spoke setups, open IP licensing for startups to partner with academia, translational research centres mandating industry partnership, and the one-lakh-crore RDI fund that pushes industry-academia collaboration for scaling AI solutions [236-251].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
PPP models with open IP licensing and consortium structures are advocated in NRF’s RDI fund and collaborative policy frameworks [S2][S18].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 2
Strategic partnerships with global technology leaders (NVIDIA, Google, Qualcomm) and foundations (Gates) are critical to accelerate AI‑driven climate and disaster solutions in India.
EXPLANATION
Kaginalkar notes that the AI Summit has secured collaborations with major tech companies and philanthropic foundations, indicating that leveraging their expertise, platforms, and resources will fast‑track the development and deployment of AI applications for weather, energy, and disaster management.
EVIDENCE
She mentions announced partnerships with NVIDIA, Google, Qualcomm, and the Gates Foundation as part of the AI Summit initiatives, underscoring their role in supporting AI for climate and disaster projects [345-346].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic collaborations with NVIDIA, Google, Qualcomm and the Gates Foundation are announced as part of AI Summit partnerships supporting climate solutions [S2][S1].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 3
Digital twins should be integrated across the entire AI pipeline—from monitoring to user‑facing applications—to deliver end‑to‑end climate services.
EXPLANATION
Kaginalkar proposes that digital twins can link data acquisition, processing, modeling, and delivery, creating a comprehensive portfolio of AI applications that support climate extremes and disaster management at the user level.
EVIDENCE
She states that digital twins can cover the whole AI spectrum from monitoring to reaching end users, suggesting a holistic approach to climate services [331-333].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
Agreements
Agreement Points
Hybrid AI‑physics and sensor approaches are needed for accurate hyper‑local weather forecasts and early warnings.
Speakers: M. Ravichandran, Manish Bhardwaj
Fuse physics‑based numerical models with AI to capture both spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran)
Use AI to improve prediction of extreme events such as cloudbursts, enabling timely evacuations and response (M. Ravichandran)
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
AI can increase the granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data, even when sensor coverage is limited (Manish Bhardwaj)
Both speakers stress that AI alone is insufficient; it must be combined with physics-based numerical models or physical sensor networks to capture spatial patterns and temporal rhythms, thereby enabling reliable, fine-scale forecasts of extreme events such as cloudbursts and delivering trusted early warnings [47-61][65-68][70-75][175-180].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the UK Met Office’s strategic plan to blend AI with physics-based forecasting and the World Meteorological Organization’s call for AI-enhanced early warning systems [S41], [S42].
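The "physics plus AI" fusion the speakers converge on can be sketched in a toy form: a statistical model learns the systematic residual between a physics-based forecast and local observations, then corrects future forecasts from the same physics model. Everything below is synthetic and illustrative (the data, the linear bias structure, and the feature names are assumptions, not anything stated in the session):

```python
import numpy as np

# Toy hybrid-forecast sketch (synthetic data, assumed linear bias):
# a statistical model learns the residual between a physics-based
# numerical forecast and local observations, then corrects new forecasts.

rng = np.random.default_rng(0)
n = 200

# Synthetic physics-model forecast, e.g. temperature in deg C
physics_forecast = rng.normal(25.0, 3.0, n)
# Hypothetical hyper-local features, e.g. elevation and land-use indices
local_features = rng.normal(0.0, 1.0, (n, 2))
# "Observations": physics forecast plus a systematic local bias
true_obs = physics_forecast + 1.5 + local_features @ np.array([0.8, -0.4])

# Fit the residual with ordinary least squares (intercept + features)
X = np.hstack([np.ones((n, 1)), local_features])
residual = true_obs - physics_forecast
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)

# Hybrid forecast = physics forecast + learned residual correction
hybrid = physics_forecast + X @ coef

rmse_physics = np.sqrt(np.mean((physics_forecast - true_obs) ** 2))
rmse_hybrid = np.sqrt(np.mean((hybrid - true_obs) ** 2))
print(f"physics-only RMSE: {rmse_physics:.2f}  hybrid RMSE: {rmse_hybrid:.2f}")
```

In this deliberately simple setup the residual is exactly linear, so the correction removes nearly all of the bias; real residuals are far messier, which is where the nonlinear AI models discussed by the panel come in.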
Open data, benchmark datasets and collaborative consortia are essential to develop operational hyper‑local AI weather models.
Speakers: M. Ravichandran, Karthik Kashinath
Open data policies and collaborative consortia are needed to turn research into deployable systems (M. Ravichandran); develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Ravichandran highlights India’s massive legacy weather archives and calls for open access to enable many researchers to improve model error and trust, while Karthik proposes creating benchmark datasets and metrics, akin to ImageNet, to drive progress at kilometer-scale resolution; both underscore the centrality of shared data for AI advancement [126-144][261-274].
POLICY CONTEXT (KNOWLEDGE BASE)
Crowd-sourcing data initiatives (e.g., India’s language data effort) illustrate the push for open, community-generated datasets, while international forums stress collaborative consortia for sharing weather AI resources [S40], [S45].
Public‑private partnerships and dedicated funding mechanisms are critical to translate AI research into scalable climate and disaster solutions.
Speakers: Akshara Kaginalkar, Sandeep Singhal, Shivkumar Kalayanaraman
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services (Akshara Kaginalkar); public‑private partnerships, clear market segmentation, and monetization pathways (e.g., insurance, enterprise services) are critical for scaling AI climate solutions (Sandeep Singhal); NRF’s targeted funding programs accelerate AI solutions for weather, climate, and disaster risk (Shivkumar Kalayanaraman)
All three speakers emphasize that government-backed funding (NRF grants, RDI capital fund), collaborative consortium structures, and open IP licensing are needed to move AI prototypes to market-ready services, with startups advised to align with ministries and investors urged to support these PPP models [236-251][280-288][193-224].
POLICY CONTEXT (KNOWLEDGE BASE)
UNDP collaborations on AI-powered early warning systems and multiple forum discussions highlight the necessity of public-private partnerships and earmarked funding for scalable climate AI solutions [S43], [S45].
Emphasis on small, domain‑specific, agile models (or fine‑tuned foundation models with minimal data) and simple transferable ‘box’ models for climate applications.
Speakers: Amit Sheth, Praphul Chandra, Dev Niyogi
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth); achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra); develop simple, scalable “box models” that can be transferred across cities for disaster and climate decision support (Dev Niyogi)
The three speakers converge on the need for lightweight, purpose-built AI solutions: Amit advocates original small models; Praphul explores fine-tuning large models with tiny data; Dev proposes modular box models that are easy to transfer, all aiming for rapid, context-aware climate services [26-31][104-108][318-322].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on “small AI” advocate for domain-specific, low-compute models and a balanced use of large foundation models for climate tasks [S46], [S47], [S49].
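One common reading of "small-data fine-tuning" is to freeze a pretrained feature extractor and fit only a tiny task-specific head on a handful of labelled examples. The sketch below illustrates that pattern under heavy assumptions: the "backbone" is a stand-in random projection rather than a real foundation model, and all data are synthetic:

```python
import numpy as np

# Hypothetical small-data fine-tuning sketch: keep a pretrained feature
# extractor frozen and fit only a tiny task-specific head. The "backbone"
# is a stand-in (a fixed nonlinear random projection), not a real
# foundation model, and the data are synthetic.

rng = np.random.default_rng(1)

def frozen_backbone(x, W):
    """Frozen pretrained features: a fixed nonlinear projection."""
    return np.tanh(x @ W)

d_in, d_feat = 8, 6
W_pretrained = rng.normal(size=(d_in, d_feat))  # never updated

# Only 12 labelled samples for the target task (e.g. one station's data)
X_small = rng.normal(size=(12, d_in))
true_head = rng.normal(size=d_feat)
y_small = frozen_backbone(X_small, W_pretrained) @ true_head

# "Fine-tuning" = ridge regression for a linear head on frozen features
F = frozen_backbone(X_small, W_pretrained)
lam = 1e-3
head = np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ y_small)

# The adapted model generalises despite the tiny training set
X_test = rng.normal(size=(50, d_in))
y_pred = frozen_backbone(X_test, W_pretrained) @ head
y_true = frozen_backbone(X_test, W_pretrained) @ true_head
print("max abs test error:", float(np.max(np.abs(y_pred - y_true))))
```

The design point is that only `head` (here 6 parameters) is learned from the small dataset, which is what makes a dozen samples sufficient; fine-tuning a full foundation model with so little data would be far harder.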
Similar Viewpoints
Both speakers advocate leveraging inexpensive visual or multispectral sensors combined with AI to produce very short‑term forecasts or early warnings, stressing cost‑effectiveness and the need for AI to interpret sensor outputs rather than replace physical infrastructure [76-84][70-75].
Speakers: Shivkumar Kalayanaraman, Manish Bhardwaj
Deploy multimodal AI (time‑series, vision, multispectral cameras) for ultra‑short‑term weather prediction from low‑cost sensors (Shivkumar Kalayanaraman); create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
Both highlight strategies to overcome data scarcity: Praphul through transfer learning across regions, and Karthik via benchmark datasets and super‑resolution that repurpose coarse data for fine‑scale modeling, indicating a common focus on data‑efficient model development [110-114][261-274].
Speakers: Praphul Chandra, Karthik Kashinath
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions, reducing data requirements (Praphul Chandra); develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Unexpected Consensus
Insurance as an early monetizable product for AI‑driven climate risk assessments.
Speakers: Audience, Sandeep Singhal
Climate‑risk AI outputs can be packaged into insurance products, providing a viable commercial avenue (Audience); insurance is a natural first monetizable product for AI‑driven climate risk assessments (Sandeep Singhal)
While most participants focused on technical and policy aspects, an audience member and the venture capitalist Sandeep Singhal independently identified insurance as the first marketable application of climate AI, revealing an unanticipated convergence on a concrete commercial pathway [337-340][341-342].
Overall Assessment

The panel shows strong convergence on four pillars: (1) hybrid AI‑physics/sensor systems for hyper‑local forecasting, (2) the necessity of open, benchmarked data and collaborative consortia, (3) public‑private partnership and dedicated funding as the engine for translation, and (4) a shared preference for small, agile, or fine‑tuned models that are easy to deploy. These agreements span technical, institutional, and economic dimensions, indicating a cohesive national roadmap for AI‑enabled climate resilience.

High consensus – the majority of speakers align on the same strategic directions, suggesting that India’s AI‑climate agenda is likely to move forward with coordinated policy support, funding structures, and a focus on lightweight, data‑efficient models.

Differences
Different Viewpoints
Model development strategy – building small, agile, domain‑specific models from scratch versus fine‑tuning large foundation models with very small datasets
Speakers: Amit Sheth, Praphul Chandra
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth); achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
Sheth argues that IRO will create original lightweight models and explicitly avoid large foundation models because of their unknown training data and computational baggage [26-31]. Chandra, in contrast, asks how small a dataset can be while still fine-tuning large foundation models effectively for climate applications, seeing this as a breakthrough that could serve many domains [104-108]. The two positions conflict on whether to build new small models or to adapt existing large ones.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert discussions note the trade-off between efficient small models and the capabilities of large foundation models, urging a balanced, context-driven approach [S46], [S47], [S49].
Approach to data scarcity – transfer learning from data‑rich regions versus building new small models locally
Speakers: Praphul Chandra, Amit Sheth
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions (Praphul Chandra); develop original small, agile models for each specific task without depending on large pre‑trained models (Amit Sheth)
Chandra proposes using transfer learning to apply models trained where data are abundant to data-sparse Indian contexts, emphasizing efficient reuse of knowledge [110-114]. Sheth prefers to create original, locally-tailored models from the ground up, avoiding reliance on existing foundation models [26-31]. This reflects a methodological disagreement on how to handle limited data.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on climate extremes emphasizes transfer learning from data-rich to data-sparse regions, while regional recommendations favor locally built small models and shared protocols [S38], [S39].
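A minimal sketch of the transfer-learning idea, under synthetic assumptions: coefficients fitted in a data-rich source region act as a prior, and the data-sparse target region updates them with a handful of local samples by penalising deviation from the source model rather than from zero. The regions, coefficients, and shift below are all invented for illustration:

```python
import numpy as np

# Toy transfer-learning sketch (all data and coefficients synthetic):
# a source-region model serves as the ridge prior for a target region
# that has only a few local observations.

rng = np.random.default_rng(2)
d = 5

# Data-rich source region: plenty of samples, reliable fit
beta_true_src = rng.normal(size=d)
X_src = rng.normal(size=(500, d))
y_src = X_src @ beta_true_src
beta_src, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Data-sparse target region: same underlying physics, shifted coefficients
beta_true_tgt = beta_true_src + np.array([0.3, 0.0, -0.2, 0.0, 0.1])
X_tgt = rng.normal(size=(8, d))  # only 8 local samples
y_tgt = X_tgt @ beta_true_tgt

# Transfer: the ridge penalty pulls the estimate toward the source model
lam = 1.0
A = X_tgt.T @ X_tgt + lam * np.eye(d)
beta_transfer = np.linalg.solve(A, X_tgt.T @ y_tgt + lam * beta_src)

# Naive baseline: same ridge, but shrunk toward zero instead
beta_local = np.linalg.solve(A, X_tgt.T @ y_tgt)

err_transfer = np.linalg.norm(beta_transfer - beta_true_tgt)
err_local = np.linalg.norm(beta_local - beta_true_tgt)
print(f"transfer error: {err_transfer:.3f}  local-only error: {err_local:.3f}")
```

Because the source and target coefficients differ only slightly (the "universal physics, hyper-local nuance" framing), shrinking toward the source model typically lands closer to the target truth than shrinking toward zero does.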
Scope of digital‑twin development – decision‑specific, minimal “box” models versus end‑to‑end AI pipelines covering monitoring to user delivery
Speakers: Dev Niyogi, Akshara Kaginalkar
Build decision‑specific digital twins that are simple, scalable, and transferable, focusing on the decision‑to‑data framework (Dev Niyogi); integrate digital twins across the whole AI spectrum—from monitoring to processing to user‑facing applications (Akshara Kaginalkar)
Niyogi suggests creating lightweight “box models” and digital twins that are tied to a specific decision context, avoiding the need to predict every variable [318-322]. Kaginalkar envisions digital twins as a holistic layer that links data acquisition, modeling, and delivery to end users, covering the full AI pipeline [331-333]. The disagreement lies in the breadth and complexity of the twin architecture.
POLICY CONTEXT (KNOWLEDGE BASE)
Digital twin governance literature stresses the need to define scope and integration within broader systems, informing the debate between lightweight decision-specific twins and full-stack pipelines [S48].
Unexpected Differences
AI model strategy – small bespoke models vs fine‑tuning large foundation models
Speakers: Amit Sheth, Praphul Chandra
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth); achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
Both speakers are senior AI experts, yet they propose opposite technical routes for climate AI: Sheth rejects large foundation models altogether, while Chandra sees them as the core asset to be adapted with minimal data. This contrast was not anticipated given the shared goal of rapid climate impact.
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable AI policy analyses call for a balance between small, specialized models and large foundational models to ensure efficiency and impact [S46], [S47], [S49].
Breadth of digital‑twin implementation – minimal decision‑specific twins vs full‑stack AI‑driven twins
Speakers: Dev Niyogi, Akshara Kaginalkar
Build decision‑specific digital twins that are simple, scalable, and transferable (Dev Niyogi); integrate digital twins across the whole AI pipeline from monitoring to end‑user services (Akshara Kaginalkar)
Niyogi’s emphasis on lightweight, purpose‑built twins contrasts with Kaginalkar’s vision of comprehensive, end‑to‑end twin ecosystems. The divergence in scope and complexity was not overtly discussed elsewhere, making it an unexpected point of contention.
POLICY CONTEXT (KNOWLEDGE BASE)
Governance frameworks for digital twins discuss choices between lightweight decision-specific twins and comprehensive AI-driven twins, highlighting policy implications of each approach [S48].
Overall Assessment

The panel largely converged on the need for hybrid AI‑physics solutions, public‑private collaboration, and open data to improve early‑warning and hyper‑local forecasting. The most pronounced disagreements centered on the technical route for model development (small bespoke models vs fine‑tuned large foundations) and the architectural scope of digital twins. These methodological splits reflect differing risk appetites and resource strategies rather than fundamental opposition to AI’s role in climate resilience.

Moderate – while core objectives (enhanced forecasting, disaster preparedness, and scalable deployment) are shared, the divergent views on model architecture and digital‑twin scope could slow consensus on research funding priorities and implementation road‑maps, requiring explicit coordination to align technical pathways.

Partial Agreements
All three stress that AI alone is insufficient; a hybrid approach that combines physical models or sensor networks with AI analytics is needed to produce reliable, fine‑grained early‑warning and forecasting systems [47-61][70-75][175-180].
Speakers: M. Ravichandran, Manish Bhardwaj, Shivkumar Kalayanaraman
Fuse physics‑based numerical models with AI to capture spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran); create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings (Manish Bhardwaj); use AI to increase the granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data (Shivkumar Kalayanaraman)
While the focus differs (PPP mechanisms vs technical benchmarks), all agree that coordinated structures—whether through funding programmes, open IP, or shared benchmark data—are required to move AI prototypes to scalable, operational services [236-251][280-288][261-274].
Speakers: Akshara Kaginalkar, Sandeep Singhal, Karthik Kashinath
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services (Akshara Kaginalkar); public‑private partnerships, clear market segmentation, and monetisation pathways (including insurance) are critical for scaling AI climate solutions (Sandeep Singhal); develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Takeaways
Key takeaways
The Indian Research Organisation (IRO) will focus on building small, agile, domain‑specific AI models rather than relying on large foundational models, targeting climate, health, and pharma verticals.
Effective weather and climate forecasting requires a hybrid approach that fuses physics‑based numerical models with AI to capture both spatial and temporal dynamics, especially for hyper‑local events.
Multimodal AI (time‑series, vision, multispectral sensors) and low‑cost sensor networks can enable ultra‑short‑term predictions and improve now‑casting.
Benchmark datasets, super‑resolution techniques, and transfer learning are critical to achieve operational, hyper‑local AI models across data‑rich and data‑sparse regions.
Early‑warning systems must be trusted, low‑cost, and integrated with existing sensor and satellite data; AI can enhance prediction of extreme events such as cloudbursts, flash floods, and landslides.
NRF’s AI for Science & Engineering program, the upcoming Leapfrog Demonstrators, and a recent hackathon provide mission‑mode funding and challenge‑driven pathways for AI‑climate solutions.
Public‑private partnerships, open data policies, and open IP licensing are essential to translate research into deployable products and scale startups.
Small‑data fine‑tuning of large foundation models and Jugaad‑style integration of human/social dimensions can make AI solutions more accessible and decision‑oriented.
AI‑driven hyper‑local solar generation forecasts can support grid load balancing, demand flexibility, and digital energy marketplaces (India Energy Stack).
Decision‑specific digital twins and simple “box models” can turn raw weather data into actionable insights for disaster management and everyday user decisions.
Climate‑risk insurance is identified as a natural first monetizable product for AI‑generated risk assessments.
Resolutions and action items
NRF will continue and expand the AI for Weather and Climate track within its AI for Science & Engineering program and launch the Leapfrog Demonstrators for Societal Innovation.
NRF announced an AI for Science & Engineering hackathon (partnering with IBM and IIT Delhi) to provide datasets and stimulate solutions.
IRO will develop original, small, agile AI models for extreme‑weather use cases, avoiding reliance on large foundational models.
Stakeholders agreed to open up weather and climate data to broader research communities to enable diverse AI approaches.
Create benchmark datasets and metrics for hyper‑local forecasting, modeled after the ECMWF ERA5 benchmark, to drive operational quality.
Promote transfer learning and small‑data fine‑tuning techniques to leverage knowledge from data‑rich regions for data‑sparse Indian locales.
Encourage public‑private consortia and hub‑spoke collaborations, with open IP licensing, to accelerate translation of AI models into products.
Integrate AI weather forecasts with the India Energy Stack to enable digital energy marketplaces and demand‑flexibility mechanisms.
Develop voice‑based consumer applications that translate forecasts into actionable resilience recommendations for end users.
Explore insurance‑linked monetization pathways for AI‑driven climate risk assessments.
Unresolved issues
Establishing robust validation and verification frameworks to build trust in AI‑augmented forecasts.
Defining concrete mechanisms for sustained open data sharing while protecting privacy and security.
Detailing business models and market segmentation for scaling AI climate solutions beyond pilot projects.
Operationalizing hyper‑local models at scale, including computational resource requirements and deployment pipelines.
Integrating AI outputs into insurance underwriting processes and determining regulatory implications.
Addressing multi‑hazard cascading events (e.g., cloudburst → landslide → flash flood) within AI prediction frameworks.
Clarifying the role of human/social dimensions (Jugaad) in model design and how to quantify them.
Finalizing IP and licensing terms for collaborative projects between academia, startups, and industry.
Suggested compromises
Adopt a hybrid modeling approach that combines physics‑based numerical models with AI to leverage the strengths of both.
Use large foundation models as a starting point but fine‑tune them with minimal domain‑specific data to reduce dependence on massive datasets.
Balance mission‑mode, impact‑driven funding with curiosity‑driven, broad‑based research to ensure both innovation and applicability.
Adopt a public‑private partnership model in which government provides data, validation, and policy support while the private sector supplies agility and capital.
Couple open data access with rigorous validation protocols to maintain trust while encouraging diverse AI development.
Thought Provoking Comments
IRO will focus on building very agile, small, specific models for hyper‑local extreme‑weather issues, rather than building on top of large foundational models that come with a lot of baggage.
Challenges the prevailing trend of using massive foundation models and proposes a fundamentally different, India‑centric research strategy that emphasizes domain‑specific, lightweight AI.
Set the agenda for the panel by framing the discussion around bespoke, small‑scale models; prompted other speakers to consider how such models could be integrated with existing physics‑based systems and opened the conversation to data efficiency and deployment challenges.
Speaker: Amit Sheth
We need to see the elephant plus the ant – i.e., combine spatial (physics‑based numerical) models with fine‑grained time‑series AI to predict high‑impact events like cloudbursts.
Uses a vivid metaphor to illustrate the necessity of hybridizing traditional weather models with AI, highlighting a gap in current forecasting capabilities.
Shifted the discussion toward hybrid modeling approaches; other participants (Manish Bhardwaj, Shivkumar) expanded on multimodal data fusion and early‑warning systems, deepening the technical focus.
Speaker: M. Ravichandran
Multimodal models can fuse insights from cameras, IR, low‑cost sensors, and low‑Earth‑orbit satellites, moving from data‑fusion (which is complex) to insight‑fusion for now‑casting and forecasting.
Introduces a concrete, technology‑driven pathway for real‑time weather sensing and forecasting, emphasizing the practical deployment of AI at scale.
Prompted the panel to discuss sensor networks, cost reductions, and the role of generative AI in operational forecasting; influenced later remarks on hyper‑local modeling and public‑private collaborations.
Speaker: Shivkumar Kalayanaraman
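Insight-level fusion, as opposed to raw data fusion, can be illustrated with a simple late-fusion rule: each modality's model emits its own event probability, and only those probabilities are combined. The pooling rule, modality list, and skill weights below are hypothetical illustrations, not the system described in the session:

```python
import numpy as np

# Illustrative "insight-level fusion" sketch: each modality model emits
# its own probability of an event (e.g. heavy rain in the next hour),
# and the fusion step combines those insights with skill-based weights
# instead of merging the raw sensor streams.

def fuse_insights(probs, weights):
    """Weighted log-odds pooling of per-modality event probabilities."""
    probs = np.clip(np.asarray(probs, float), 1e-6, 1 - 1e-6)
    w = np.asarray(weights, float) / np.sum(weights)
    log_odds = np.log(probs / (1 - probs))
    fused = np.sum(w * log_odds)
    return 1.0 / (1.0 + np.exp(-fused))

# Hypothetical per-modality "insights" for a heavy-rain nowcast
p_camera = 0.80          # sky-camera vision model
p_ir = 0.65              # infrared / multispectral model
p_satellite = 0.55       # low-Earth-orbit satellite model
skill = [0.5, 0.3, 0.2]  # assumed per-modality skill weights

p_event = fuse_insights([p_camera, p_ir, p_satellite], skill)
print(f"fused event probability: {p_event:.2f}")
```

The appeal of fusing at this level is that each modality pipeline stays independent and cheap to swap out; the cost is that cross-modal correlations visible only in the raw data are lost.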
A simple voice framework/app that lets a user say ‘OK, what should I do next week to stay safe from climate impacts?’ – turning forecasts into actionable personal resilience advice.
Brings the consumer perspective into the conversation, highlighting the need to translate AI outputs into everyday decision‑making tools.
Shifted the tone toward end‑user engagement; led to discussions about personalization, market segmentation, and the monetization of climate AI services.
Speaker: Sandeep Singhal
Jugaad – AI can bring the human and societal dimensions into predictive models, bridging the gap between mathematically feasible equations and real‑world human behavior.
Introduces a culturally resonant concept to argue for socio‑technical integration, expanding the scope beyond pure technical accuracy.
Encouraged participants to consider social factors in model design; influenced later comments on decision‑specific digital twins and the ‘tragedy of the commons’ framing.
Speaker: Dev Niyogi
The breakthrough we need is small‑data fine‑tuning of large foundation models – how little data can we use to adapt a model for a specific use case?
Raises a critical research question about data efficiency, directly relevant to India’s data‑sparse regions and the feasibility of deploying AI solutions.
Spurred discussion on transfer learning and benchmark datasets (later echoed by Karthik Kashinath); highlighted a concrete research direction for the community.
Speaker: Praphul Chandra
Transfer learning across data‑rich and data‑sparse regions, leveraging the universal physics of weather while adapting to hyper‑local uniqueness, can dramatically accelerate AI adoption.
Offers a practical solution to the data disparity problem and connects it to proven AI techniques, providing a roadmap for scaling models nationwide.
Guided the conversation toward methodological strategies (benchmark datasets, super‑resolution) and reinforced the need for collaborative, cross‑regional efforts.
Speaker: Karthik Kashinath
ANRF’s Leapfrog Demonstrators for Societal Innovation and challenge‑mode funding aim to move from incremental research to high‑impact, operational solutions, with open IP licensing to accelerate industry‑academia collaboration.
Outlines concrete funding mechanisms and policy levers that can turn ideas into scalable products, addressing the earlier identified bottlenecks of data access and translation.
Provided a clear pathway for turning technical ideas into funded projects; prompted participants to discuss partnerships, IP strategies, and the role of government in de‑risking innovation.
Speaker: Shivkumar Kalayanaraman (ANRF)
Weather is the tragedy of the commons – everyone is affected but no one can pay for it. We must create decision‑specific digital twins that turn weather data into monetizable, actionable products.
Reframes the entire problem from raw forecasting to decision support and economic sustainability, linking technical, societal, and business dimensions.
Served as a concluding turning point, steering the discussion toward productization, market models, and the need for a decision‑centric approach; resonated with earlier consumer‑focus and funding comments.
Speaker: Dev Niyogi
AI‑enabled hyper‑local solar generation forecasts can be combined with the India Energy Stack to enable demand flexibility and grid trading, turning weather predictions into direct energy market value.
Connects climate AI directly to a critical economic sector (energy), illustrating a tangible use‑case where AI adds measurable value.
Bridged the climate‑AI discussion with the broader economic transition narrative; reinforced the earlier point about AI’s role in renewable integration and attracted interest from investors and policymakers.
Speaker: Praphul Chandra
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved the conversation from a high‑level vision of AI for climate to concrete, actionable pathways. Amit Sheth’s emphasis on small, domain‑specific models set the strategic tone, which was deepened by Ravichandran’s hybrid‑model metaphor and Shivkumar’s multimodal sensor vision. Consumer‑centric ideas from Singhal and societal integration from Niyogi broadened the scope to end‑user impact. Technical breakthroughs around data efficiency (Chandra) and transfer learning (Kashinath) offered feasible research directions, while the ANRF funding framework provided the necessary policy and financial scaffolding. Finally, the framing of weather as a tragedy of the commons and the push for decision‑specific digital twins unified the technical, social, and economic threads, steering the panel toward a roadmap that links AI research, public‑private partnerships, and marketable solutions.

Follow-up Questions
Develop benchmark datasets and metrics for hyperlocal weather AI models to drive operational quality
Benchmark datasets have historically accelerated AI progress (e.g., ImageNet). Creating similar standards for hyperlocal scales will enable consistent evaluation and rapid improvement of models.
Speaker: Karthik Kashinath
Investigate small‑data fine‑tuning techniques for large foundation models in weather and climate applications
If large models can be effectively adapted with minimal domain‑specific data, AI solutions become feasible in data‑sparse regions, expanding impact across India.
Speaker: Praphul Chandra
Research efficient transfer learning methods to adapt models from data‑rich regions to data‑sparse regions while preserving local specificity
Weather physics is universal, but hyperlocal nuances matter; transfer learning can leverage global knowledge and reduce the need for extensive local data collection.
Speaker: Karthik Kashinath
Establish robust validation and verification frameworks to build trust in AI‑enabled weather forecasts
Decision makers require confidence in AI predictions; systematic V&V will address concerns about model reliability and facilitate adoption.
Speaker: M. Ravichandran
Create open data platforms that provide broad access to historical and real‑time weather datasets for the research community
Open data enables diverse teams to experiment, fostering innovation and preventing siloed efforts in AI for weather.
Speaker: M. Ravichandran
Develop multimodal insight‑level fusion approaches that combine satellite, sensor, and AI outputs without raw data overload
Insight‑level fusion simplifies integration, reduces computational complexity, and can improve forecast accuracy across modalities.
Speaker: Shivkumar Kalayanaraman
Design hybrid AI‑physics early warning systems that integrate sensor networks, satellite data, and AI predictions for extreme events
Combining physical models with AI can enhance the timeliness and reliability of warnings, especially for cascading hazards like cloudbursts.
Speaker: Manish Bhardwaj
Create voice‑based personal resilience assistants that translate climate forecasts into actionable recommendations for individuals (e.g., farmers, citizens)
A voice interface can bridge the gap between technical forecasts and everyday decision‑making, increasing user adoption and safety.
Speaker: Sandeep Singhal
Explore AI‑driven climate‑risk insurance products and pricing models to address affordability and coverage gaps
Insurance is a critical monetization pathway for climate AI; developing accurate risk models can make coverage sustainable and protect vulnerable populations.
Speakers: Audience (unspecified) and Sandeep Singhal
Build decision‑specific digital twins that convert weather data into actionable insights for various stakeholders
Digital twins focused on decisions (rather than raw variables) can provide tailored guidance, making AI weather outputs directly useful for planning and response.
Speaker: Dev Niyogi
Integrate AI with hyperlocal renewable energy forecasting (e.g., rooftop solar) and grid demand‑flexibility using digital public infrastructure
Accurate local generation forecasts are essential for grid stability and efficient energy trading as India scales its renewable portfolio.
Speaker: Praphul Chandra
Establish effective public‑private partnership frameworks, open IP licensing, and translation centers to move AI climate solutions from early research (TRL 1‑2) to operational deployment (TRL 5‑6)
Coordinated collaboration and clear IP policies accelerate commercialization and ensure that innovations reach end‑users.
Speaker: Shivkumar Kalayanaraman
Coordinate disparate climate and sustainability initiatives across agencies and stakeholders to avoid duplication and maximize national impact
A unified strategy is needed to align research, funding, and deployment efforts, ensuring resources are used efficiently.
Speaker: Akshara Kaginalkar
Develop AI models capable of predicting cloudburst events and associated cascading hazards such as landslides and flash floods
Current models cannot predict cloudbursts, a critical gap for disaster preparedness; targeted AI research could fill this void.
Speaker: M. Ravichandran
Apply AI for the discovery of new sustainable materials and chemicals to accelerate climate mitigation technologies
AI‑driven material discovery can speed up the development of greener alternatives, supporting broader sustainability goals.
Speaker: Shivkumar Kalayanaraman

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.