Survival Tech Harnessing AI to Manage Global Climate Extremes
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This panel discussion focused on the application of artificial intelligence to address India’s critical challenges in weather, climate, and sustainability. The conversation was moderated by Akshara Kaginalkar and featured experts from government agencies, academia, venture capital, and technology companies, including representatives from the Ministry of Earth Sciences, NDMA, ANRF, NVIDIA, and the newly established India Research Organization (IRO).
Professor Amit Sheth explained that IRO was conceived following discussions with the Prime Minister to develop original AI research focused on creating small, agile, domain-specific models rather than relying on large foundational models. The organization aims to build hyper-local solutions for extreme weather prediction, health, and pharmaceutical applications. Dr. M. Ravichandran from the Ministry of Earth Sciences emphasized the need to integrate physics-based numerical models with AI for time-series analysis, particularly for predicting high-impact weather events like cloudbursts that current models cannot forecast accurately.
Several participants highlighted the importance of multimodal AI approaches that combine satellite data, sensor networks, and ground observations. Dr. Shivkumar Kalyanaraman discussed how generative AI could enable simple camera-based forecasting systems, while Karthik Kashinath from NVIDIA emphasized the potential of transfer learning and super-resolution techniques to achieve hyperlocal predictions. The discussion revealed that while global-scale AI weather models are showing promising results, significant work remains to achieve operational robustness at the hyperlocal scales relevant to Indian conditions.
A key theme throughout the discussion was the critical importance of public-private partnerships and the need for trusted, accessible early warning systems. Manish Bhardwaj from NDMA stressed the requirement for reliable disaster preparedness systems that can serve vulnerable populations, while Sandeep Singhal highlighted the monetization challenges and opportunities in climate-related AI applications. The panel concluded that successful implementation requires collaboration between government agencies, research institutions, and private sector partners to create scalable solutions that balance economic growth with environmental protection.
Keypoints
Overall Purpose
This panel discussion was part of an AI Summit focused on exploring how artificial intelligence can address India’s critical challenges in weather, climate, and sustainability. The session brought together government officials, researchers, venture capitalists, and technology experts to discuss practical applications, funding mechanisms, and collaborative approaches for implementing AI solutions in climate science and disaster management.
Major Discussion Points
– Integration of AI with Physics-Based Models: Multiple speakers emphasized the need to blend traditional numerical weather prediction models with AI approaches rather than replacing one with the other. The focus was on creating hybrid systems that leverage AI’s strength in time-series analysis and pattern recognition while maintaining the spatial accuracy of physics-based models, particularly for predicting extreme events like cloudbursts.
– Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providing highly localized predictions (down to 1km resolution or finer) by combining multiple data sources including satellite imagery, ground sensors, and even simple camera-based systems. The emphasis was on creating “small, agile models” rather than large foundational models for specific regional applications.
– Early Warning Systems and Disaster Preparedness: The conversation highlighted the critical need for trusted, accessible early warning systems that can reach vulnerable populations. This included discussion of voice-based applications that provide actionable guidance to different user groups (farmers, urban residents, etc.) and the integration of AI with existing disaster management infrastructure.
– Funding and Public-Private Partnerships: Significant attention was given to funding mechanisms through ANRF (Anusandhan National Research Foundation), venture capital, and the need for collaborative approaches between government agencies, academia, and private sector. The discussion covered both grant funding for research and translation funding for scaling solutions.
– Data Accessibility and Cross-Disciplinary Collaboration: Speakers emphasized the importance of opening up weather and climate data to researchers from diverse backgrounds beyond traditional meteorology, encouraging “jugaad” (innovative problem-solving) approaches that combine human behavioral elements with physical constraints in predictive models.
Overall Tone
The discussion maintained an optimistic and collaborative tone throughout, with participants showing enthusiasm for AI’s potential while acknowledging current limitations. The conversation was solution-oriented and practical, focusing on immediate actionable steps rather than theoretical possibilities. There was a strong emphasis on India-specific solutions and leveraging the country’s strengths in data availability and human resources. The tone remained consistently forward-looking, with participants building on each other’s ideas and expressing willingness to collaborate across sectors and disciplines.
Speakers
Speakers from the provided list:
– Akshara Kaginalkar – Panel moderator/host
– M. Ravichandran – Ministry of Earth Sciences Secretary, leading weather, climate and sustainability initiatives
– Amit Sheth – Founder/founding team member of IRO (India Research Organization), AI researcher with focus on enterprise AI and foundational models
– Manish Bhardwaj – Secretary of NDMA (National Disaster Management Authority), disaster management
– Shivkumar Kalyanaraman – ANRF CEO, research funding and AI for science initiatives
– Sandeep Singhal – Venture capitalist with investment portfolios in energy transition and mobility
– Dev Niyogi – Professor at UT Austin (University of Texas at Austin), affiliated to IIT Roorkee, founding team member of IRO, digital twin and AI-driven modeling frameworks
– Praphul Chandra – Dean R&D at Atria University Bangalore, heading Center for Excellence for Data Sciences
– Karthik Kashinath – Director of Center for Excellence for Data Sciences, Distinguished Scientist and Engineer at NVIDIA, AI model development (Earth2, weather and climate models)
– Audience – Unidentified audience member who asked about insurance and climate risk
Additional speakers:
– Dr. Shiv Kumar – Mentioned in the introduction as NRF CEO and supporter of AI for science (appears to be the same person as Shivkumar Kalyanaraman)
– Dr. Kartik – Mentioned in introduction as director of Center for Excellence for Data Sciences, distinguished scientist at NVIDIA (appears to be the same person as Karthik Kashinath)
– Professor Seth – Referenced in transcript but appears to be referring to Amit Sheth
Full session report
This panel discussion at an AI Summit brought together leading experts from government agencies, academia, venture capital, and technology companies to explore how artificial intelligence can address India’s critical challenges in weather, climate, and sustainability. Moderated by Akshara Kaginalkar, the session featured representatives from the Ministry of Earth Sciences, NDMA, ANRF, NVIDIA, and the newly established India Research Organisation (IRO), creating a comprehensive dialogue between policy makers, researchers, funders, and technologists.
The Genesis of India’s AI Climate Initiative
Professor Amit Sheth opened the discussion by explaining the origins of IRO, which emerged from a December 2023 meeting with the Prime Minister. The organisation was conceived to develop India’s unique AI capabilities rather than following Western or Chinese approaches. IRO’s strategy focuses on creating small, agile, domain-specific models rather than large foundational models, with earth science, health, and pharmaceuticals as key verticals. This approach aims to avoid the “baggage” of large language models with unknown training data, instead building original research capabilities that can serve India’s specific needs whilst supporting the global startup ecosystem.
The Complexity Challenge: From Elephants to Ants
Dr. M. Ravichandran from the Ministry of Earth Sciences provided perhaps the most memorable metaphor of the discussion, describing how weather prediction has evolved from tracking “elephants” (large-scale weather patterns) to needing to observe “ants sitting on elephants” (hyperlocal phenomena). This vivid illustration captured the fundamental challenge facing modern meteorology: climate change has altered spatial and temporal scales, requiring unprecedented granularity in forecasting to predict both large-scale patterns and hyperlocal phenomena simultaneously.
Traditional physics-based numerical models excel at spatial predictions, whilst AI demonstrates superior capabilities in time-series analysis, necessitating hybrid approaches that integrate both methodologies. The challenge is particularly acute for extreme weather events like cloudbursts, which current models cannot predict effectively. Dr. Ravichandran emphasised that neither purely numerical models nor standalone AI systems can address these phenomena adequately, and stressed the critical importance of trust, validation, and verification in AI/ML forecasting systems for operational deployment.
The solution lies in blending both approaches whilst leveraging India’s strength in data availability—over 150 years of meteorological records from IMD. Crucially, Dr. Ravichandran advocated for opening this data to researchers from diverse disciplines beyond traditional meteorology, recognising that interdisciplinary approaches could unlock new insights for weather prediction challenges.
Technological Innovations and Multimodal Approaches
The discussion revealed significant enthusiasm for emerging AI technologies that could democratise weather prediction. Dr. Shivkumar Kalyanaraman highlighted the potential of generative AI to enable simple camera-based forecasting systems, where cameras pointed at the sky could provide one- to four-hour forecasts. This approach, combined with the dropping costs of multispectral cameras and expanding low Earth orbit satellite networks, could create comprehensive sensor networks that complement existing infrastructure like Mission Mausam.
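The camera-based forecasting idea can be sketched in a few lines: estimate the dominant cloud motion between two consecutive sky frames, then advect the latest frame forward. This is a deliberately minimal illustration (brute-force cross-correlation on toy arrays), not the generative-AI approach discussed in the session:

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate dominant cloud motion (dy, dx) between two grayscale
    sky frames by brute-force cross-correlation over small shifts."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * curr)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def nowcast(prev, curr, steps=1):
    """Advect the latest frame forward by the estimated motion,
    i.e. assume clouds keep drifting the same way (persistence)."""
    dy, dx = estimate_shift(prev, curr)
    forecast = curr
    for _ in range(steps):
        forecast = np.roll(np.roll(forecast, dy, axis=0), dx, axis=1)
    return forecast

# Toy example: a cloud blob drifting one pixel right per frame.
frame0 = np.zeros((8, 8)); frame0[3:5, 1:3] = 1.0
frame1 = np.roll(frame0, 1, axis=1)      # observed motion: +1 column
pred = nowcast(frame0, frame1, steps=1)  # extrapolated next frame
```

Real systems would replace the correlation step with learned optical flow or a generative video model, but the persistence-plus-advection baseline above is the standard starting point for very-short-range nowcasting.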
A key insight emerged around the distinction between data fusion and insight fusion. Rather than attempting the “mind-bogglingly complex” task of fusing raw data from multiple sources, the focus should shift to fusing insights from different modalities. This approach could significantly simplify the technical challenges whilst improving forecasting accuracy across spatial and temporal scales.
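One simple reading of "insight fusion" is to combine each modality's final estimate, weighted by its confidence, rather than merging raw data streams. The inverse-variance weighting below is a standard statistical device, offered purely as an illustration; the per-modality numbers are hypothetical:

```python
import numpy as np

def fuse_insights(estimates, variances):
    """Fuse independent rainfall estimates (mm/h) from different
    modalities by inverse-variance weighting: trust the more
    confident source more, without ever touching the raw data."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var
    fused = np.sum(w * est) / np.sum(w)
    fused_var = 1.0 / np.sum(w)   # fused estimate is more confident
    return fused, fused_var

# Hypothetical insights: satellite, ground sensor, sky camera.
fused, fused_var = fuse_insights([12.0, 10.0, 11.0], [4.0, 1.0, 2.0])
```

Note that the fused variance is always lower than the best individual source's variance, which is the formal sense in which combining insights improves on any single modality.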
Dr. Karthik Kashinath from NVIDIA emphasised the importance of transfer learning for adapting knowledge between data-rich and data-sparse regions whilst maintaining local uniqueness. The success of global-scale AI weather models at 25-kilometre resolution, driven by benchmark datasets like ERA5 from ECMWF, demonstrates the potential for similar breakthroughs at hyperlocal scales. However, this requires creating appropriate benchmark datasets and metrics for fine-scale applications, similar to how ImageNet revolutionised computer vision.
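A toy sketch of the transfer-learning idea: learn shared "physics" (regression slopes) from a data-rich region, then adapt only a local offset from a handful of samples in a data-sparse region. This is an illustrative linear stand-in, not any model discussed by the panel:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares fit with an added bias column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Data-rich region: plenty of (predictors, temperature) pairs.
slopes = np.array([2.0, -1.0, 0.5])           # shared "physics"
X_rich = rng.normal(size=(500, 3))
y_rich = X_rich @ slopes + 1.0 + 0.1 * rng.normal(size=500)
w_rich = fit_linear(X_rich, y_rich)

# Data-sparse region: same physics, different local offset, 5 samples.
X_sparse = rng.normal(size=(5, 3))
y_sparse = X_sparse @ slopes + 3.0

# Transfer: keep the learned slopes, re-estimate only the local bias.
w_transfer = w_rich.copy()
w_transfer[-1] = np.mean(y_sparse - X_sparse @ w_rich[:-1])

err = float(np.abs(predict(w_transfer, X_sparse) - y_sparse).mean())
```

Five samples are nowhere near enough to refit all four parameters, but they suffice to re-estimate one local offset, which is exactly the leverage transfer learning provides in data-sparse regions.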
Early Warning Systems and Disaster Management
Manish Bhardwaj from NDMA provided crucial insights into the practical applications of AI for disaster management. India’s vulnerability to multiple hazards—cyclones, tsunamis, earthquakes, landslides, flash floods, and cloudbursts—across its vast geography and population requires sophisticated early warning capabilities. Whilst India has achieved remarkable success in cyclone prediction and evacuation, achieving zero mortality milestones, other hazards present greater challenges.
The emerging pattern of cascading, multi-hazard scenarios—where cloudbursts lead to landslides and flash floods—requires AI systems capable of analysing multiple data sources simultaneously. Current sensor networks cannot map every vulnerable location, particularly in the Himalayan regions where development continues despite risks. AI offers the potential to enhance granularity and accuracy of early warning signals by integrating terrestrial, satellite, and sensor data for predictive forecasting and improved nowcasting.
The vision extends to creating Digital Public Goods (DPG) that provide trusted early warning for all citizens at low cost. This requires hybrid models connecting AI with physical sensor networks and satellite data from various alert-generating agencies, creating resilient early warning systems that can reach vulnerable populations effectively.
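The hybrid model described here, physical sensor triggers combined with an AI risk score, might look schematically like the following. All thresholds, weights, and the `ai_risk` input are invented for illustration and are not operational guidance:

```python
def warning_level(rain_mm_hr, river_gauge_m, ai_risk):
    """Hybrid early-warning logic: a hard physical trigger from
    sensors is never overridden by the model, while the AI score
    can escalate a watch before thresholds are crossed.
    ai_risk is an assumed model output in [0, 1]."""
    # Physical sensor readings alone can force an evacuation.
    if river_gauge_m > 6.0 or rain_mm_hr > 100.0:
        return "EVACUATE"
    # Otherwise blend sensor proximity-to-threshold with the AI score.
    score = (0.5 * min(rain_mm_hr / 100.0, 1.0)
             + 0.2 * min(river_gauge_m / 6.0, 1.0)
             + 0.3 * ai_risk)
    if score > 0.7:
        return "ALERT"
    if score > 0.4:
        return "WATCH"
    return "NORMAL"
```

The design choice worth noting is the asymmetry: the AI component can only raise vigilance earlier, never suppress a warning that the physical sensor network has already triggered, which is one way to keep such a system trustworthy.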
Funding Mechanisms and Market Opportunities
Dr. Shivkumar Kalyanaraman, speaking as ANRF CEO, outlined comprehensive support for AI climate research through multiple funding mechanisms. The organisation provides grant funding for not-for-profit research entities and operates a one lakh crore RDI fund specifically for private sector applications. Key programmes include "AI for Science and Engineering" with a dedicated Weather and Climate track, and the upcoming "Leapfrog Demonstrators for Societal Innovation" programme, which emphasises collaborative proposals addressing real societal problems with demonstrable impact.
ANRF’s strategy explicitly encourages consortium-based applications rather than individual proposals, recognising that complex challenges require interdisciplinary collaboration. The organisation announced partnerships with IBM and IIT Delhi for an AI for Science and Engineering hackathon, along with collaboration with MOES on Mission Mausam, demonstrating practical implementation of these funding strategies.
Sandeep Singhal brought a venture capital perspective, emphasising the critical importance of government partnerships for startups working in climate AI. The economic model recognises that climate represents both a public good requiring government support and a commercial opportunity where businesses affected by weather events are willing to pay for predictive services. This dual approach enables sustainable business models whilst ensuring broad societal benefit, with philanthropic capital increasingly available for large-scale programmes.
Human-Centric AI and Cultural Innovation
Professor Dev Niyogi introduced a uniquely Indian perspective through the concept of “Jugaad”—the cultural practice of innovative problem-solving that enables people to “beat the system” and adapt to challenging circumstances. This human element has been notoriously difficult to incorporate into predictive models, yet represents a crucial factor in how communities respond to weather events.
AI’s capability to map human behavioural elements and societal aspects alongside physical constraints offers the potential to create more accessible and accurate prediction systems. The discussion emphasised moving from weather output generation to decision support systems. As Professor Niyogi noted, “People don’t need weather. They need weather that can help them make a decision.” This paradigm shift from data provision to actionable intelligence represents a fundamental reframing of weather services.
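The shift from weather output to decision support can be made concrete with a small rule layer that maps a single forecast to different actions per user group. The groups, thresholds, and advice strings below are entirely hypothetical:

```python
def advise(user_group, forecast):
    """Turn a raw forecast into an action, per user group.
    Thresholds and groups are illustrative, not operational guidance."""
    heavy_rain = forecast.get("rain_mm", 0) > 50
    heat = forecast.get("max_temp_c", 0) > 42
    if user_group == "farmer":
        if heavy_rain:
            return "Delay sowing and secure drainage channels."
        if heat:
            return "Irrigate early morning; shade nurseries."
        return "Normal field operations."
    if user_group == "urban_resident":
        if heavy_rain:
            return "Avoid low-lying underpasses; expect waterlogging."
        if heat:
            return "Stay indoors 12:00-16:00; hydrate."
        return "No special precautions."
    return "No guidance available for this group."

# Same forecast, different actionable advice per audience.
msg = advise("farmer", {"rain_mm": 80, "max_temp_c": 31})
```

In a deployed system this rule layer would sit behind a voice interface of the kind discussed on the panel; the point of the sketch is only that the decision logic is separate from, and downstream of, the forecast itself.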
Energy Integration and Sustainability Applications
Dr. Praphul Chandra highlighted the critical intersection between AI weather forecasting and India’s renewable energy transition. As the country shifts from fossil fuels to solar-dominated renewable energy, hyperlocal weather predictions become essential for grid management. He mentioned a demonstration by a local university team combining digital public infrastructure from the Ministry of Power with AI models for energy applications, illustrating the practical potential of these integrated approaches.
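As a rough illustration of why hyperlocal forecasts matter for grid management, a scheduler could convert a cloud-cover forecast into expected solar output. The clear-sky curve and the cloud attenuation factor below are simplifications assumed purely for the sketch:

```python
import math

def pv_output_mw(capacity_mw, cloud_cover, hour):
    """Rough expected solar farm output given a hyperlocal
    cloud-cover forecast (0 = clear, 1 = overcast). The sinusoidal
    clear-sky shape and 75% maximum attenuation are illustrative."""
    if hour < 6 or hour > 18:
        return 0.0                                    # no daylight
    clear_sky = math.sin(math.pi * (hour - 6) / 12)   # peaks at noon
    attenuation = 1.0 - 0.75 * cloud_cover            # clouds cut output
    return capacity_mw * clear_sky * attenuation

# Scheduling example: 100 MW plant at noon under 40% cloud cover.
expected = pv_output_mw(100.0, 0.4, 12)
```

A grid operator running this for every plant, hour by hour, can see how a one-hour error in a passing cloud band translates directly into tens of megawatts of scheduling error, which is the commercial case for hyperlocal prediction.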
This application demonstrates how AI climate solutions can simultaneously address multiple challenges—energy security, grid stability, economic efficiency, and environmental sustainability—whilst creating commercially viable business models that support broader adoption.
Technical Challenges and Collaborative Frameworks
The discussion identified several critical technical challenges requiring continued research. Small data fine-tuning emerged as a key breakthrough requirement—the ability to fine-tune large foundation models to specific use cases with minimal local data. This capability would unlock AI applications in data-sparse regions whilst maintaining the benefits of large-scale model training.
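Small-data fine-tuning can be illustrated by freezing a stand-in "foundation" feature extractor and fitting only a small, regularised head on a few local samples. Everything here (the random frozen backbone, the toy target) is assumed for demonstration and is not any model referenced in the session:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "foundation model": a frozen, pre-trained feature extractor.
W_frozen = rng.normal(size=(16, 64))

def features(x):
    return np.tanh(x @ W_frozen)    # frozen backbone, never updated

def fine_tune_head(X_small, y_small, ridge=1.0):
    """Fit only a small linear head on top of frozen features.
    Ridge regularisation keeps the fit stable with very few samples."""
    F = features(X_small)
    A = F.T @ F + ridge * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ y_small)

# Only 20 local samples for the new use case.
X_small = rng.normal(size=(20, 16))
y_small = np.sin(X_small[:, 0])           # toy local target
head = fine_tune_head(X_small, y_small)
train_err = float(np.mean((features(X_small) @ head - y_small) ** 2))
```

The backbone's parameters never change; only the 64-dimensional head is estimated, which is why 20 samples are enough here. Techniques like adapters or low-rank updates generalise this same idea to deep models.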
The creation of benchmark datasets and metrics for hyperlocal applications parallels the role ImageNet played in computer vision development. Without standardised benchmarks, progress in hyperlocal weather prediction will remain fragmented and difficult to measure.
The discussion revealed strong consensus around the necessity of collaborative approaches spanning government agencies, research institutions, and private sector partners. IRO’s partnerships with NVIDIA, Google, and Qualcomm, announced at this AI Summit, exemplify this collaborative approach, combining technological capabilities with domain expertise and implementation resources.
Challenges and Future Outlook
Despite the optimistic tone and clear pathways forward, significant challenges remain. Cloudburst prediction continues to elude current modelling capabilities, representing a critical gap in disaster preparedness. The validation and verification of AI-enabled forecasting systems requires substantial work to build operational trust, particularly for life-critical applications.
However, the discussion demonstrated remarkable alignment among participants on fundamental principles and approaches. The convergence around hybrid AI-physics models, collaborative partnerships, hyperlocal applications, and data accessibility suggests a mature understanding of both challenges and solutions.
The emphasis on India-specific solutions whilst contributing to global knowledge represents a balanced approach that leverages local strengths—extensive historical data, young talent, and cultural innovation—whilst participating in international scientific collaboration. This strategy positions India to become a leader in AI climate applications rather than merely adopting technologies developed elsewhere.
Conclusion
This panel discussion revealed a sophisticated understanding of how AI can address India’s climate and weather challenges whilst contributing to global solutions. The convergence of government policy support, research funding, technological capabilities, and market opportunities creates an unprecedented opportunity for breakthrough applications.
The shift from traditional weather prediction to decision-support systems, combined with the integration of human behavioural factors and cultural innovation, represents a uniquely comprehensive approach to climate AI. The emphasis on hyperlocal applications, collaborative partnerships, and sustainable business models provides a realistic pathway for scaling solutions from research demonstrations to operational systems serving millions of citizens.
The success of this initiative will depend on continued collaboration across sectors, sustained investment in research and infrastructure, and the ability to translate technical capabilities into trusted, accessible services that enhance resilience and support sustainable development.
Session transcript
top-down approaches in terms of finding the AI solutions, India’s critical problems, and weather and climate is a major vertical. So welcome, sir. We have Dr. Ravichandran, he doesn’t need any introduction, but he’s the Ministry of Earth Sciences Secretary, and everything and anything under weather and climate and sustainability, sir is heading it, and we look forward to your contribution. We have Mr. Singhal, who is a venture capitalist, and he will give a very, very important aspect about how funding and economy are going to drive the solutions in AI for climate. Professor Dev Niyogi, he is a professor from UT Austin, that is the University of Texas at Austin. Also, he’s affiliated to IIT Roorkee, and now one of the founding team of IRO.
Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar, who is NRF CEO and a very, very great supporter of now AI for science. And we look forward to your support as well as your inputs on how we can proceed on this. And we have Mr. Manish Bharadwaj, who has a very critical role in India as the secretary of disaster, NDMA. And we have Professor Praful Chandra. He’s heading the Center for Excellence for Data Sciences, as well as being dean R&D, Atria University, Bangalore. And we have Dr. Kartik, who is the director of the Center for Excellence for Data Sciences. He’s a distinguished scientist and engineer, NVIDIA. And he has played a major role in the very famous AI models, which all of us are hearing about.
And they are, you know, changing the scenario of modeling and the way science is going to happen. So welcome. I look forward to your contribution. Oh, okay. Can we stand just here? Okay. So before we open up the panel, I just wanted to have a very quick question to Professor Seth in terms of what was the objective, what we are looking at, when you started IRO as a, you know, in India, we wanted to have this type of a research organization. So if you can quickly tell us about what was the thought process behind IRO and what do you foresee?
So the idea of IRO was kind of initiated when I had a chance to meet the PM in December of 2023. I was asked to come and discuss with him. He is always very curious about technology, and so he wanted to hear about the ideas on AI. Since I had multiple interactions on research and AI with him during his CM time, this was a fantastic opportunity for me to meet and kind of discuss where India can shine and not necessarily follow the West or China in what we need to do. And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas where we can make big economic and social impact, as well as where we can support the startup ecosystem, where AI can empower deep AI technology that drives global products from India.
So that was the broad idea. And so IRO currently is developing original work on building very agile, small, specific models. In this context, for example, if you want to make a model for serving an extreme-weather-related issue that is hyper-local, then all the spatial-temporal aspects, all the relative modeling aspects, all the prediction algorithms, those are the things that we will bring in. But we will not be building on top of large language models or so-called foundational models, which come with a lot of baggage. We don’t know what kind of data they have been trained on, many other things. So original research in creating new, small, agile models. And so it will be a platform on top of the India AI structure to be able to create models.
And one area in which we would love to create models, we have technology expertise here, Dev and many other people. And we can, you know, so earth science, including disaster, including, you know, sustainability issues, is one of the verticals. The other two are health and pharma. In pharma, we have a very strong partnership with the Indian Pharma Alliance and the 23 major pharma companies, which is 80% of India’s pharma output. And similarly, we are working with some health partners and all. But here you see the potential partners that we could have in making an impact in the sustainability and health areas. So thank you.
We would like to now start with one open question, and then we’ll have individual questions, because, I’m very sorry, the time is very short. The whole format is actually, we had a one-day full workshop and we had to squeeze it in. So to start, yeah, one disclaimer that it’s not my personal thing, but I may request you to finish in time. I would definitely like to hear a lot from all of you, but due to the constraint of time. So the first, opening question, what we’d like to have is, all of us would like you to say: what would be one AI application or a discovery that would excite you about AI helping in this domain of climate as well as extreme events, and sustainability as a broader thing, because everything is driven by weather and climate.
We have energy, we have health, we have economics and we have agriculture, many, many aspects of it. So we’d like to see what you foresee, and which one development will help us. And we’ll start with you, sir.
When you talk about the weather, of course, it now depends on various applications. So when we are doing the weather forecast, earlier we used to tell, suppose, how the elephant is going: I’m able to see that elephant, how it is going, I’m able to tell that tomorrow it will come here. But now the problem is, because of climate change and other things, the space and time have changed. Now we have to see, on the elephant, some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus the ant. So I want to see two things. One is time series. The other one is spatial.
If anything is on the spatial side, I think the physics-based numerical model is doing the better job. But if you want to go for time series, local rhythm, then AI is better. So we need to integrate, or we need to fuse, both together in order to understand the local weather at a fine scale. And suppose you want to go for a cloudburst: you cannot do it only with a numerical model, nor with AI alone. So we need to blend both. That is more important. So we want to go for high-impact weather events, how to predict them, especially cloudbursts and other things. We do not know how to predict them. So that is why we are looking at whether AI can help or not.
That is one of the objectives. Thank you.
I fully agree with what Ravichandran sir has just said. From the early warning point of view, the idea is to have a DPG sort of asset for the public, so that we are able to disseminate early warning for all. So the idea is to have trusted early warning for all, given to the citizens at low cost, and this is where AI can definitely play a supporting role. It cannot be purely AI. It has to be a hybrid model, which has to be connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert-generating agencies, but to have a source of trusted and reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role.
Thank you. Yeah, I
think I’ll just double down on the multimodal models that are coming out. I mean, one is the time series model. There are spatial models. And I will also mention that today, with generative AI, you can just put a camera pointed to the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead, two hours ahead, even four hours ahead. Make it an IR camera, or make it some other multispectral camera, when all the costs are dramatically dropping. So you can imagine a network of sensors that complements the great work that’s being done in Mission Mausam and so on. And plus, now with the low Earth orbit satellites going up and also having much more Earth observability, I think the opportunity is to fuse insights as opposed to fusing data.
I mean, data fusion is a painfully, you know, mind-bogglingly complex thing, unnecessarily complex. But now there’s an opportunity to take insights from A, insights from B, and fuse them across modes, and also do forecasting across these modes. I think that’s a wonderful opportunity. I think that’ll have a huge impact. And once you integrate that into, you know, sort of nowcasting and other systems, I think we can have a great amount of impact. The other dimension is, of course, AI helping in discoveries of new materials and, you know, sort of simulations and so on. I think these have wonderful opportunities. And of course, as you know, the Nobel Prize for Chemistry went to somebody from an AI background.
So I will put a consumer lens to this. Sirs have brought up the point around what is the technology needed. I think, with what is happening with voice agents right now, there is a need to have a simple voice framework, or a voice sort of app, which allows you to send not just information but actually create a resilience approach for the person, who can literally just click a button and say, OK, in the next week, these are the things that you need to do to survive whatever is happening from a climate perspective. Right? Or what do you need to do in the next month? So there is a forecasting aspect to it.
But more importantly, how does it integrate with my life? Do I need to stay at home? If I’m a farmer, what do I do? If I’m a, you know, laborer, what do I do? So that ability to bring that to my day-to-day life and allow me to actually act a certain way because of what I expect to see in the environment around me. And that includes daily air. I’ll
just add one term. You guys know this word, Jugaad; this is a very India thing, Jugaad. So there is a framework that is mathematically feasible, that we can model very well, that follows equations, that follows laws of nature. And then there is a human element, where we always beat the system and make that happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play: it brings the human dimensions and it brings the societal aspect together with the physical constraints. This is what is most exciting about it, that it will be becoming much more accessible, is where I think we’ll be going. We had heard also about the agentic AI; now I heard about the ant AI, thanks to you. So
I’m going to pick up where Professor Niyogi and Dr. Shiv Kumar left off. You know, we work across several AI foundation models in biology, in materials, and we have looked at foundation models in weather. I think the breakthrough that I am most anxious to look for is what we call small-data fine-tuning. What that means is that when you look at these large foundation models, they are fairly general in their applicability, and as Professor Sheth was saying, when you have to fine-tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine-tune large foundation models? I think if you are able to have that breakthrough, it has applications across the multiple domains that we talked about.
I think a lot has already been shared which is very exciting on many different fronts. One thing I would like to see more used in practice is transfer learning which of course some regions of the world are data rich and some others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are in the planet. But at the same time, there’s uniqueness at hyperlocal scales. But if we can transfer learn efficiently from one region to another with constraints of what exactly we’re trying to transfer learn, I think that would be very impactful.
Thank you, Dr. Kalpik. I think we have a mic here. We saw right from the spatial, as sir said, it’s like from Akashse and Tak, we can see everything. And I think that matters. I remember once I think I was discussing with sir, he said even the dew effect you have on the immediate temperature and that can affect your surrounding and everything. So from small to big is definitely there. And AI also from small to big we should see. And that leads to now I will ask the next round of question is very, very specific to. areas in which all of you are working as well as having a lot of influence and that’s where we would like to hear from you and to have a direction in what way we can go.
At the end of this panel, can we all consolidate and look at, you know, the three or four immediate things which we can do? With that in mind, I would like to ask Dr. Ravichandran: how can India's national capabilities in AI research, technology development and, very importantly, human resources evolve to enable the transition from the current physics-driven prediction systems to AI-enabled, user-specific decision systems? What are the bottlenecks, and how can we overcome them?
So as pointed out, one of the strengths we have is the data. The data volumes are huge nowadays, because we have IMD's legacy of nearly 150 years of observations as well as newly available data. Now, how do we utilize this data? We have so many young brains, but we have not fully utilized them, because each one can interpret the data in a different way; finally it has to come out as a concrete solution. When you talk about AI and weather, why do we want to go for AI in the first place? Because numerical models carry a lot of assumptions.
Because of those assumptions, the error grows. So number one is whether, with AI, we can reduce that error. When the initial condition is better, you can predict better, so we have to improve the initial conditions by reducing the error. Many people are working in AI, in different places. I think we need to pool those resource people into our domain, so that they can look at the data differently and work out how to minimize the error and reduce the uncertainties. There are also various techniques to improve the forecast, because nowadays downscaling is one of the important things.
In a large-scale model, the fine detail gets lost. AI can downscale in a better way to the localized scale: suppose we want a one-kilometre-resolution weather forecast, how can we do that? We need more and more minds, more and more people working on it. And I think we need to open up the data, so that different people can come and work on it. I have one more important point: when you are talking about AI/ML, trust is most important, as you pointed out. We need better trust in the forecast system, and there is a need for validation and verification; that is also very important, and AI/ML can help deliver it.
So our capabilities are huge, but we need to, what is called, utilize them, with data as our strength. Even biologists are now working in AI/ML; we can bring in the same kind of people. One more important point: our people are always wedded to the same mindset. We think this is the only way, but there are multiple ways. That is why people from other disciplines should also look at this, because it is data-driven; another discipline can look at it differently, and we can find some pathway or way forward. That may be one of the things we can look at.
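The point about error growth from imperfect initial conditions can be made concrete with a toy chaotic system. A minimal, purely illustrative sketch, using Lorenz-63 (a classic stand-in for the atmosphere) with a crude forward-Euler integrator, none of which comes from the panel:

```python
# Two "forecasts" started from almost-identical states diverge rapidly in a
# chaotic system -- the reason better initial conditions mean better forecasts.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 equations (crude but sufficient
    # for illustration; operational models use far better integrators).
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

truth = (1.0, 1.0, 1.0)
analysis = (1.0 + 1e-6, 1.0, 1.0)   # initial-condition error: one part in a million

errors = {}
for steps in (100, 2000):           # short-range vs. longer-range "forecast"
    a, b = run(truth, steps), run(analysis, steps)
    errors[steps] = max(abs(p - q) for p, q in zip(a, b))
# The tiny initial error stays tiny at first, then grows by orders of magnitude.
```

The same mechanism is why reducing initial-condition error, whether by better data assimilation or by AI, pays off directly in forecast skill.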
That is a very, very important point, because we tend to look at weather only from a physics or weather angle, so looking at it from other angles is very important. And that leads to what is important for disaster management services, which depend heavily on extreme events, and managing those is very difficult. So how do you foresee the adoption of AI for infrastructural preparedness in disaster management, and especially for reducing the severity of impacts on vulnerable populations? Cities, and those who have access to many good things, can perhaps handle it, but we have a large vulnerable population. So how do you see AI helping in the last-mile application?
Very apt questions. As you are all aware, India is vulnerable to multiple hazards: not only cyclones, tsunamis, earthquakes, landslides and flash floods, but even GLOFs (glacial lake outburst floods) and so on, given the vast geography and the population which can be impacted. From the disaster management point of view, it is very essential that we have a system of adequate preparedness and early warning capabilities. Secondly, though the country, through a whole-of-government approach, has undertaken various mitigation measures, we can only mitigate the effects of disasters. So we have to prepare the population in such a way that the early warning capabilities are of the highest order.
Only then are we able to minimize loss of lives. Now, this is a very important challenge. Various agencies, particularly, as Dr. Ravichandran has rightly said, the IMD and several others, have over time developed enormous capability to predict, say, a cyclone's path and trajectory very clearly, five days ahead of landfall. So we are able to do timely evacuation and repositioning of response teams, which helps in minimizing casualties and even achieving zero-mortality milestones. But there are other hazards. And secondly, the way the hazard scenario has unfolded in the last few years, it has become a multi-hazard, cascading-hazard sort of scenario, in which one hazard leads to other hazards.
So there are incidents of cloudburst, which currently cannot be predicted, because there are various technical issues behind it; but cloudbursts leading to landslides, leading to flash floods, are a serious concern. So how do we prepare ourselves, given the current state of resources and developments? This is where AI can definitely pitch in. The idea is to take all the data coming from the alert-generating agencies, from our terrestrial networks, the satellite data, the sensor data, and use it for predictive forecasting, and also to improve nowcasting and increase the granularity of the early warning signal, because there are limits to how many satellite systems we can put into place.
It is not possible to map each and every hill in the vulnerable areas. This is where the complications arise. And since development also has to take place in the vulnerable zones, particularly in the Himalayan zone, the challenge is to use technology to the maximum. What I foresee is that the data available from multiple sources can definitely be analysed and used, even with the current set of sensor network capabilities, to pinpoint and accurately forecast early warning signals for the targeted population. That will then help the district and state authorities carry out timely evacuation, response and relief operations.
So this is one field where NDMA in particular is collaborating with multiple national agencies; the IMD and the Ministry of Earth Sciences are playing a very major contributory role in the development of such a DPG (digital public good). I am very sure that the startup ecosystem in our country carries the agility to collaboratively support the efforts of NDMA and the national agencies in taking this mission forward. This is where I believe we can definitely increase our early warning capabilities, particularly regarding flash floods, glacial lake outburst floods, lightning and landslides. And we are very hopeful that with the support of the IMD and the Ministry of Earth Sciences, we can take major steps towards even predicting or identifying the most vulnerable or potential cloudburst-type situations, so that we are able to warn the public in time.
Thank you, sir. That is an important point, as Dr. Ravichandran also said, and you have linked it to the need for data and infrastructure and to the setup which we have, which we have also seen in the expo: so many people are working on climate and sustainability. How can we put that together and get the best out of it? That leads to a question for Dr. Shivkumar. ANRF is enabling the research ecosystem as well as the product ecosystem. We would like to hear how ANRF is helping in terms of creating AI funds, what advice you can give to the community on developing products, and what sort of support we can expect from ANRF.
Okay. So for folks who may not know: how many of you know about ANRF? Maybe I can just get a show of hands. Okay, all right, not too many. So ANRF is a statutory body of the Government of India, and Dr. Ravichandran is on my board as well. This is a body which is meant to catalyse research and development funding in India. So we have grant funding, and also a capital fund called RDI, which is a one lakh crore fund meant only for the private sector. The grant funding is typically for the not-for-profit research sector, which includes academia and labs,
Section 8 companies and others, right? So research entities are recognized by SARU, DSIR and so on. Our thinking is that we not only have broad-based funding, like what the National Science Foundation does, but also more focused funding in a mission mode. So we have a couple of programs that might be of interest. One is AI for Science and Engineering, a program currently underway, and one of its tracks is AI for Weather and Climate, so it is already there. In addition, in about a month we are going to be launching a major program called Leapfrog Demonstrators for Societal Innovation.
So the idea is that you take a societal problem and, rather than talking about it, do something about it, okay? And not just an incremental thing: it should be a leapfrog demonstrator, and an actual demonstrator, not just a theoretical thing. These are the kinds of things we are doing. Alongside, we are also doing challenges; we will be introducing more challenge-mode programs for things we don't see coming bottom-up in our proposal formats. As part of that we are also collaborating deeply: in AI for Science and Engineering, on the weather and climate track, we are actually collaborating with MoES and their Mission Mausam program. So we are linking up, getting both the expertise and the data, so that we can put together the AI expertise along with the sensor expertise and data, and we hope to similarly collaborate with other parts of the government. And I would strongly urge collaboration from NDMA also at this stage.
So that is the general approach, and that accelerates things. I also just want to mention that two days back we announced a hackathon: an AI for Science and Engineering hackathon, for weather and climate actually. It is currently open, done in partnership with IBM and IIT Delhi. We have put out datasets, also in partnership with MoES and others, and we are encouraging work there. In addition, as I said, there is the societal innovation program, which can also admit newer types of proposals where you bring together disciplines and actually go and solve real problems.
So I think that is the nature of what we will try to do. The RDI fund is meant for translation and scaling. In addition, we also have translation centres; we have a program that is open right now, and so on. So these are the various programs and mechanisms we plan to run. But the goal of all of this is always to focus on impact and work backwards, rather than doing undirected research. We want to drive research in a more directed way towards impact, but at the same time we do support curiosity-driven, broad-based research as well. That is the balance we are trying to strike.
Coming to what we are doing: if we would like to have consistent solutions, not only as demonstration products but as operational services, with services coming out every day, how do you see the public-private partnership shaping up? In all our mission-mode programs, the goal is to accelerate things from a lower technology readiness level, like TRL 1 or 2, to mid-range, like TRL 5 or 6. That is the purpose. And as part of all of those programs, we are supporting work at a critical scale, so we are encouraging consortiums to come and bid, or a hub-and-spoke type setup. We are explicitly saying: don't make individual proposals.
It has to be collaborative proposals. In some of our programs we have put out open IP licensing, so that a company or a startup can partner with academia, pick up the IP and quickly translate it; that will also encourage rapid translation. So we are introducing IP and other innovations to drive translation, and we are going to be doing this in a few more programs. Plus, we have the Translational Research Centres program, which mandates partnership with industry as well. So we are using different mechanisms, and all of them are driving collaboration. Plus there is the RDI fund, the one lakh crore fund; by the time it hits the market, it will become three or four lakh crores.
It is only for industry, but if industry does not have the capabilities, they must collaborate with academia and so on. So there will be a demand for industry-academia collaboration coming from that side as well. We are attacking the problem from multiple directions, and all of these are meant to encourage collaboration for impact. So that collaboration leads us to industry. As we know, NVIDIA is very much into, and pioneering, many of the models coming in, and Dr. Karthik is part of the model development. Foundational AI weather and climate models such as Earth-2, GraphCast and AIFS, and many more, are now demonstrating good performance at the global scale.
So what further development do you see, basically on the physics side: how can we bring the physics into the AI models? And validation is very, very important, as sir has said, at the very local scale; we are talking about even air quality at a 400-metre scale or floods at a 10-metre scale. So what more needs to be done to make these models operationally robust at a hyperlocal scale? Thank you.
Yeah, that is a rich question, but I am going to keep it fairly brief because it could take the next 30 minutes to get through it. So I will touch on three things. One is creating the benchmark datasets and benchmark metrics that are needed to achieve operational quality. If you look at what has led to the developments at the global scale at 25-kilometre resolution, it is the ERA5 dataset from ECMWF and the benchmark problems defined on that dataset, like WeatherBench, for example. So if we want to get down to hyperlocal scales, which of course depends on the region you are talking about and the types of metrics you care about, it would be very helpful to create the benchmark datasets and the associated benchmark metrics that can drive towards that.
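The skill scores behind such benchmarks are, at heart, simple formulas. A minimal sketch of two WeatherBench-style metrics, RMSE and the anomaly correlation coefficient, here without the latitude weighting real benchmarks apply, and on made-up toy data:

```python
import math

def rmse(forecast, truth):
    # Root-mean-square error over all grid points.
    n = len(forecast)
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(forecast, truth)) / n)

def acc(forecast, truth, climatology):
    # Anomaly correlation coefficient: correlate forecast and observed
    # departures from climatology rather than the raw fields.
    fa = [f - c for f, c in zip(forecast, climatology)]
    ta = [t - c for t, c in zip(truth, climatology)]
    num = sum(f * t for f, t in zip(fa, ta))
    den = math.sqrt(sum(f * f for f in fa) * sum(t * t for t in ta))
    return num / den

truth = [300.0, 301.0, 299.0, 302.0]           # e.g. 2 m temperature (K)
clim = [300.0, 300.0, 300.0, 300.0]            # climatological mean
perfect = list(truth)                          # a perfect forecast scores 0 / 1
```

Correlating anomalies instead of raw values matters because raw temperature fields are dominated by climatology, so even a trivial forecast would correlate well with truth.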
And if we just wind the clock back, the whole AI revolution in deep learning began because of ImageNet. That was 15, well, 12 years ago. They defined benchmark datasets and benchmark metrics that drove the revolution in AI, and I think we can do the same thing if we take it down to the hyperlocal level. The second is to leverage the super-resolution techniques that AI has shown to be very powerful. We are already doing that right now in the Earth-2 program, taking 25-kilometre data and super-resolving it to one kilometre. We have also been doing this in weather and climate for decades with downscaling, the process of taking coarse-resolution simulations and making them high-resolution.
So if we can stretch that even further, to go down to these hyperlocal scales, I am fairly confident that the technologies needed in generative AI to get us to that scale either already exist or will be invented in the next two to three years. So I am hopeful that will help us get there. Thank you.
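As a point of reference for what super-resolution improves upon, here is a toy bilinear-interpolation baseline for refining a coarse grid; learned models like those discussed above add physically plausible detail on top of such smooth interpolants. The 2x2 field below is made up for illustration:

```python
# Toy downscaling baseline: bilinear interpolation of a coarse 2-D field
# (e.g. temperature) onto a grid `factor` times finer along each axis.

def bilinear_upsample(grid, factor):
    rows, cols = len(grid), len(grid[0])
    out_rows, out_cols = (rows - 1) * factor + 1, (cols - 1) * factor + 1
    out = []
    for i in range(out_rows):
        y = i / factor
        y0 = min(int(y), rows - 2)       # lower coarse row index
        wy = y - y0                      # vertical interpolation weight
        row = []
        for j in range(out_cols):
            x = j / factor
            x0 = min(int(x), cols - 2)   # left coarse column index
            wx = x - x0                  # horizontal interpolation weight
            v = ((1 - wy) * (1 - wx) * grid[y0][x0]
                 + (1 - wy) * wx * grid[y0][x0 + 1]
                 + wy * (1 - wx) * grid[y0 + 1][x0]
                 + wy * wx * grid[y0 + 1][x0 + 1])
            row.append(v)
        out.append(row)
    return out

coarse = [[10.0, 20.0],
          [30.0, 40.0]]
fine = bilinear_upsample(coarse, 4)   # 2x2 field -> 5x5 field
```

A generative super-resolution model is trained to predict the residual structure (fronts, terrain effects, convective cells) that this smooth baseline cannot represent.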
I think that is important, and we look forward to it. That is where public-private partnership comes into the picture, because we need things very specific to India, and within India very specific to a region, since we have very different climates all across, right from north to south, east to west. So having small models for a region can also be a way forward. Once we have this system in place, in terms of what is to be done, and the modelling in place, we need computational power, because all these models still need a lot of compute. That brings us to investments, and that is where we would like to ask Mr.
Sandeep Singhal: your investment portfolios span energy transition and mobility, and when we speak of weather and climate, it is not just weather and climate; it touches broadly everything, in terms of cloud, in terms of energy, in terms of health, all those things. So when you look at your portfolios, what advice would you give to startups to successfully scale up in these individual domains as well as across integrated domains?
So I think, in terms of scale-up, the first thing that is very clear, at least in the climate space, is that partnership with the government is critical, because all the discussion we are having on data, all the discussion we are having on deployment, the government is the one driving it. So for any of our portfolio companies working in this space, we end up involving the government institutions they would work with, and we build those relationships with ministries at the fund level as well, so that we can introduce them to the various government programs. Beyond that, the other advice is that you have to start thinking about segmenting the market you are targeting.
So there is the general population, and that goes to the government. That funding, I think, as Dr. Shivkumar said, has to come through a public-private partnership, because collaboration, I think, is the important word you used. And that collaboration is both on the deployment side and on the funding side. So it is great to see what the government has done with ANRF, with RDI, and the capital that is becoming available. There is also philanthropic capital now becoming available in this space: there are philanthropists looking at programs at scale and saying, okay, if this program can scale, we will put money behind it. That is one part. But the other segment is that you also have to think about where monetization is possible, and there are enough segments where a core business is getting impacted because of weather or other events, and that core business is willing to pay. So you have to segregate the two: in some ways, you are building for a public good, but the distillation of that allows you to build something for a private good and charge for it.
Because now climate is linked very much to economics; absolutely, climate and economics are one and the same thing. And it is not just short-term: we have to worry about the next 10, 20, 30 years, everything. So that is a very, very important point, and that leads to how we are preparing ourselves, and that brings us to Dr. Praful. A key challenge for India is balancing economic growth while protecting our natural ecosystem. So can you give an example of a real-world application where AI can enable this transition, and the creation of solutions which balance both?
I am going to pick up on something that Dr. Karthik said and Manish also mentioned, which is the intersection of weather and energy. India is transitioning from a fossil-fuel-based economy to a renewable-energy-powered economy, and renewable energy is dominated by solar, right? Now, the kind of models that are becoming available for hyperlocal forecasting are also giving us much more predictive power over how much energy a single rooftop solar panel will generate, which is critical for managing the grid. India's grid needs to be digitized, and in fact we have a team from the university here doing a demo that combines digital public infrastructure from the Ministry of Power, the India Energy Stack, with AI models which use weather forecasting to forecast grid loads, to enable trading energy between consumers and producers.
Or to do demand flexibility. Now, demand flexibility is again something I see as critically important as we talk about sustainable AI. When you move to a data-centre economy, with huge consumption of energy, you need to be able to support dynamic demand flexibility using a combination of AI and public infrastructure. So I think the intersection of AI and energy deserves quite a bit of attention, and I think we are there to address it.
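The rooftop-solar forecasting mentioned above reduces, at first order, to converting an irradiance forecast into expected power. A back-of-envelope sketch; the panel area, efficiency, performance ratio and forecast values below are illustrative assumptions, not numbers from the session:

```python
# First-order PV yield estimate: P = G * A * eta * PR,
# i.e. irradiance x panel area x module efficiency x performance ratio.

def pv_power_kw(irradiance_w_m2, area_m2=10.0, efficiency=0.20,
                performance_ratio=0.8):
    # Returns instantaneous output in kW for one rooftop installation.
    return irradiance_w_m2 * area_m2 * efficiency * performance_ratio / 1000.0

# Hourly irradiance forecast (W/m^2) for one rooftop, e.g. from a hyperlocal
# AI weather model; made-up values spanning sunrise to sunset.
forecast = [0, 150, 420, 780, 950, 820, 510, 180, 0]

# Energy over the day, assuming 1-hour intervals (kW x 1 h = kWh).
energy_kwh = sum(pv_power_kw(g) for g in forecast)
```

A grid operator would aggregate such per-rooftop estimates to anticipate distributed generation, which is what makes the forecast actionable for load balancing and energy trading.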
Thank you. See, we have data in place, we have policies in place, we have science in place; now, money in place. So what is important is how you deliver these solutions to the stakeholders and end users, and that leads to a question for Professor Dave, because he has experience of connecting the science to the governance to the actual stakeholders, and you have been leading digital twin and AI-driven modelling frameworks. So what opportunities do you see? You have done it in Austin, but in India, we are all aware of the different types of cities we have. So what opportunities do you see in building digital twins that support climate-extremes and disaster-management goals, the goals which all of us have just deliberated upon?
The challenges are there. The solutions are there. How do you link them?
Right. I have two minutes, it looks like, before we end the session; this is a course I teach over two semesters. But what I will say is that weather is the tragedy of the commons: everyone is affected by it, but no one wants to pay for it. In the same way, when we have to have institutional investments, the question comes up: how do you make this into a monetizable product? This is where, as Director General Mahapatra mentioned this morning, we can create box models which are very simple, scalable and transferable, and we can create digital twins which are very decision-specific. We do not need to predict every variable at every scale for everything, and we should not try to do that.
So if we define why we are creating models and what decision we are going to guide, based on that data-to-decision framework, we can make it into a very intelligent, scalable modelling system. And that, I think, is where the joy of bringing together AI, physics and human decisions and dimensions comes into the picture. People don't need weather; they need weather that can help them make a decision. And this is where we need to move from simply producing weather output to adding something that will help me make an intelligent decision, whatever that may be: a long-term hedge against something, or a short-term decision of whether I walk inside or in the shade.
And if we achieve that, I think we are going to turn this into something which could transform the manner in which we predict: not for a variable of interest, but for a decision that we want to make. That is where I think digital twins come into the picture. I will stop there.
So I think the digital twin can be one of our first steps; we can look into the complete AI spectrum, right from monitoring to processing to modelling to reaching out to the end users, and have a complete portfolio of AI applications. This brings us to the end of the session, and we would like to open the floor for just half a minute. I am very sorry for this format; disclaimer, it is not my doing. Yeah.
One word I didn't hear much of was insurance. Climate risk typically reflects in insurance rates becoming very high, or your house simply going uninsured, which is happening in Northern California and Florida. I am not sure how predominant this is in India, but ultimately people have to stay there; it is difficult to move. So how do you marry the two?
Yeah, so I somehow referred to that in this notion of translating the work you are doing on the DPI side and bringing that technology into more monetizable products. Insurance actually ends up being one of the first monetizable products that comes out of this.
We can take just one more question, maybe, and we can always discuss outside, because this is a very good opportunity with the experts here. I have a few questions, but I will ask you outside. I just want to quickly mention that at this AI Summit we have announced partnerships with NVIDIA, with Google and Qualcomm, and we are doing other things with the Gates Foundation. So there are many things happening, and I invite my colleagues here to work with us more and to focus on India as well, in addition to the world. Thank you, sir. We would like to thank all of you; it was great listening to you, and we look forward to more. And see, I will tell you, don't get me wrong: I was thinking, you know, there are eight people and I am the only one; I was thinking it should be an equal number, and I was disturbed. Thank you.
Amit Sheth
Speech speed
141 words per minute
Speech length
394 words
Speech time
166 seconds
Establishing IRO to develop small, agile AI models
Explanation
Amit Sheth explains that the India Research Organization (IRO) will focus on creating original, small and domain‑specific AI models rather than building on large foundational models, which he views as burdensome.
Evidence
“And so IRO currently is developing original work on building very agile, small, specific models” [1]. “So original research in creating new, small, agile models” [2]. “But we will not be building on the top of large language models or so -called foundational model, which come with a lot of baggage” [4]. “So the idea of IRO kind of” [5].
Major discussion point
Strategic vision for AI research institutions in India
Topics
Artificial intelligence | The enabling environment for digital development
Positioning AI research to address critical verticals (weather, health, pharma)
Explanation
Sheth outlines that the AI platform under IRO will target high‑impact verticals such as weather, health and pharmaceuticals, leveraging partnerships with industry and academia to generate economic and social benefits.
Evidence
“And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas of where we can make big economic and social impact, as well as we can support the startup ecosystem where AI can empower deep AI technology that drives the global products from India” [10].
Major discussion point
Strategic vision for AI research institutions in India
Topics
Artificial intelligence | Social and economic development | The enabling environment for digital development
M. Ravichandran
Speech speed
169 words per minute
Speech length
744 words
Speech time
263 seconds
Fusing AI time‑series methods with physics‑based models for hyper‑local forecasts
Explanation
Ravichandran stresses that to capture fine‑scale weather events like cloudbursts, AI must be integrated with traditional numerical weather models, combining data‑driven and physics‑driven approaches.
Evidence
“When you talk about AI and weather, if you are talking about, why we want to go for AI first of all because the numerical model, we have a lot of assumptions” [20]. “Integrate or we need to fuse both together in order to understand the local weather in a fine scale” [31]. “So we want to go for high impact weather events, how to predict, especially cloud burst and other thing” [32].
Major discussion point
AI‑driven weather and climate forecasting, extreme‑event prediction
Topics
Artificial intelligence | Environmental impacts
Improving initial conditions and downscaling through AI to reduce forecast errors
Explanation
He points out that better initial conditions and AI‑enhanced downscaling can lower error growth in physics‑driven forecasts, leading to more accurate hyper‑local predictions.
Evidence
“So the AI can downscale better way in the localized, suppose one kilometer resolution weather forecast, we want to forecast how we can do” [22]. “So we have to have a initial condition in better way by reducing the error” [63]. “When you are going for initial condition is better, you can predict better” [64]. “Now that error grows whether with the AI we can reduce that is number one” [65]. “So that’s what I, because nowadays the downscaling is one of the important things” [67].
Major discussion point
Hybrid AI‑physics early warning systems
Topics
Artificial intelligence | Environmental impacts | Capacity development
Manish Bhardwaj
Speech speed
125 words per minute
Speech length
804 words
Speech time
384 seconds
Hybrid AI‑physics early‑warning system that is low‑cost and trusted
Explanation
Bhardwaj envisions an early‑warning architecture that blends AI with sensor fabrics, satellite feeds and physical models to deliver reliable, affordable alerts for disasters such as flash floods and landslides.
Evidence
“It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert generating agencies but to have a source of a trusted and reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role” [35]. “So, idea is to have trusted early warning for all to be given to the citizens” [57]. “From the early warning point of view, the idea is to have DPG sort of asset for the public so that we are able to disseminate early warning for all” [58]. “So, and this is where I believe that we can definitely reduce the, we can definitely increase our, the early warning capabilities, particularly regarding flash floods and the glacial lake outburst floods, the lightning and the landslides” [60].
Major discussion point
Hybrid AI‑physics early warning systems
Topics
Artificial intelligence | Environmental impacts | Financial mechanisms
Shivkumar Kalayanaraman
Speech speed
178 words per minute
Speech length
924 words
Speech time
311 seconds
AI for Weather and Climate track and community hackathon
Explanation
Shivkumar describes the dedicated AI‑for‑Weather and Climate track within the AI for Science and Engineering program and mentions a recent hackathon to accelerate research collaborations.
Evidence
“And one of the tracks of that is AI for Weather and Climate” [17]. “So that’s the general approach and then in the, so that accelerates and also I just want to mention that just two days back we have announced a hackathon also, AI for Science and Engineering hackathon for you know Weather and Climate actually” [18].
Major discussion point
AI‑driven weather and climate forecasting, extreme‑event prediction
Topics
Artificial intelligence | Environmental impacts
Leveraging multimodal and generative AI (sky‑camera, multispectral) for short‑term forecasts
Explanation
He highlights that generative AI combined with sky‑camera or multispectral sensors can predict cloud patterns an hour ahead, opening new low‑cost forecasting avenues.
Evidence
“There are special models and I will also mention that today with generative AI you can just put a camera pointed to the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead” [41]. “make it an IR camera or make it some other multispectral camera when all the costs are dramatically dropping” [45].
Major discussion point
AI‑driven weather and climate forecasting, extreme‑event prediction
Topics
Artificial intelligence | Environmental impacts
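The sky-camera idea above can be sketched with a toy persistence-advection nowcast. This is an illustrative stand-in, not the generative-AI system the speaker describes: it estimates cloud motion between two frames by searching integer shifts, then advects the latest frame forward by that motion.

```python
# Minimal sketch (not the speaker's system): a persistence-advection nowcast.
# Estimate cloud motion between two sky-camera frames by testing integer
# shifts, then extrapolate the latest frame one step along that motion.

def best_shift(prev, curr, max_shift=2):
    """Find the (dy, dx) shift that maximizes overlap of shifted prev with curr."""
    rows, cols = len(prev), len(prev[0])
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(rows):
                for x in range(cols):
                    sy, sx = y - dy, x - dx
                    if 0 <= sy < rows and 0 <= sx < cols:
                        score += curr[y][x] * prev[sy][sx]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def nowcast(curr, shift):
    """Advect the current frame one step along the estimated motion."""
    rows, cols = len(curr), len(curr[0])
    dy, dx = shift
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            sy, sx = y - dy, x - dx
            if 0 <= sy < rows and 0 <= sx < cols:
                out[y][x] = curr[sy][sx]
    return out

# A cloud blob moving one cell to the right between frames.
frame0 = [[0, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
frame1 = [[0, 0, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0]]
shift = best_shift(frame0, frame1)   # (0, 1): one cell to the right
forecast = nowcast(frame1, shift)    # blob advected one more cell right
```

A learned generative model would replace both the motion estimate and the extrapolation, but the coarse input-to-forecast mapping is the same.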
Insight‑level fusion rather than raw data fusion
Explanation
Shivkumar argues that fusing insights from different modalities is more effective than attempting to fuse raw sensor data, which he describes as overly complex.
Evidence
“data fusion is a painfully, you know, mind‑bogglingly complex, unnecessary and complex as a thing” [71]. “And plus now with the low Earth orbit satellites going up and also having much more Earth observability, I think the opportunity to fuse insights as opposed to fusing data” [69].
Major discussion point
Hybrid AI‑physics early warning systems
Topics
Artificial intelligence | Capacity development
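Insight-level fusion can be illustrated with a small sketch (the weighting scheme here is an assumption, not something the speaker specified): each modality independently emits an event probability, and only those insights are combined, never the raw sensor streams.

```python
import math

# Hedged sketch of insight-level fusion: each modality (say satellite, rain
# gauge network, sky camera) independently emits an event probability; we
# fuse those insights with reliability weights in log-odds space, without
# ever merging the underlying raw data.

def to_logodds(p):
    return math.log(p / (1 - p))

def fuse_insights(probs, weights):
    """Weighted log-odds average of per-modality event probabilities."""
    z = sum(w * to_logodds(p) for p, w in zip(probs, weights)) / sum(weights)
    return 1 / (1 + math.exp(-z))

# Satellite says 0.8, gauges 0.6, sky camera 0.7; trust the satellite most.
fused = fuse_insights([0.8, 0.6, 0.7], [3.0, 1.0, 2.0])
```

The fused probability lands between the individual insights, pulled toward the most trusted source, which is the behavior one wants from this kind of ensemble.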
Funding mechanisms: ANRF grants, Leapfrog Demonstrators, RDI capital fund
Explanation
He outlines the Anusandhan National Research Foundation's (ANRF) suite of financial instruments – grant funding for not‑for‑profit research, the Leapfrog Demonstrators program, and a large RDI capital fund – to catalyze AI solutions for climate.
Evidence
“Leapfrog Demonstrators for Societal Innovation” [90]. “And then the RDI fund is meant for translation and scaling” [91]. “We have grant funding, oops, and also we have, you know, a capital fund called RDI, which is a one lakh crore fund, which is meant only for the private sector” [95]. “The grant funding is typically for the, you know, not‑for‑profit research sector, which includes academia, labs” [97]. “And we are linking, we are getting you know both the expertise as well as the data and you know so that we can put together the AI expertise along with the sensor expertise and data and we hope to similarly collaborate with other parts of the government” [19].
Major discussion point
Funding mechanisms, public‑private partnerships, and ecosystem support
Topics
Financial mechanisms | The enabling environment for digital development | Artificial intelligence
AI for material discovery and broader sustainability research
Explanation
He notes that AI’s impact extends beyond weather to material science, enabling new discoveries that support sustainability.
Evidence
“The other dimension is, of course, AI helping in discoveries and of new materials and you know, sort of simulations and so on” [127].
Major discussion point
AI applications in energy management and sustainability
Topics
Artificial intelligence | Environmental impacts
Sandeep Singhal
Speech speed
171 words per minute
Speech length
580 words
Speech time
203 seconds
Voice‑activated personal assistants for climate resilience
Explanation
Singhal proposes building voice‑based agents that not only deliver forecasts but also give actionable daily recommendations to help users prepare for climate events.
Evidence
“I think with what is happening with the voice agents right now, I think there is a need to have a simple voice framework or a voice sort of app which allows you to send not just information, but actually create a resilience approach for the person who is who can literally just click a button and say, OK, in the next week, these are the things that you need to do to survive the whatever is happening from a climate perspective” [77].
Major discussion point
Consumer‑oriented AI tools for climate resilience
Topics
Artificial intelligence | Social and economic development
Public‑private partnership and market segmentation for scaling AI climate solutions
Explanation
He emphasizes that government partnership, market segmentation, and collaboration with philanthropic and private capital are essential to scale AI‑driven climate products.
Evidence
“I think in terms of scale up the first thing that at least in the climate space is very clear is that partnership with the government is critical because that’s where all the discussion we are having on data all the discussion we are having on deployment the government is the one that’s driving it” [105]. “There is that funding, I think, as Dr. Shukman said, has to come in a public‑private partnership because collaboration, I think, is an important word you used” [109]. “And I think that collaboration is both on the deployment side but also on the funding side” [114].
Major discussion point
Funding mechanisms, public‑private partnerships, and ecosystem support
Topics
Financial mechanisms | The enabling environment for digital development
Insurance as an early monetizable climate‑risk product
Explanation
Singhal points out that climate‑risk assessments powered by AI can be packaged into insurance products, representing a near‑term revenue opportunity.
Evidence
“And insurance actually ends up being one of the first monetizable product that comes out of this” [130]. “Yeah, so that I sort of somehow refer to it in this notion of translating the work that you’re doing on the DPI side and bringing that technology into sort of more monetizable projects” [133].
Major discussion point
Monetization pathways: insurance and climate risk products
Topics
Financial mechanisms | Artificial intelligence | Environmental impacts
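Singhal's monetization point can be made concrete with a toy parametric-insurance price (all numbers and the loading scheme are illustrative assumptions, not his proposal): an AI risk model supplies the event probability, and the premium is the expected payout plus a loading factor.

```python
# Hedged sketch of a parametric climate-insurance price: an AI risk model
# supplies the trigger-event probability; the premium is the expected
# payout plus a proportional loading. Figures are illustrative only.

def parametric_premium(event_prob, payout, loading=0.25):
    """Actuarially fair premium (prob * payout) with a proportional loading."""
    return event_prob * payout * (1.0 + loading)

# A flood trigger with 5% annual probability and a fixed 100,000 payout.
premium = parametric_premium(0.05, 100_000)   # 5,000 expected loss * 1.25
```

Better hyperlocal risk estimates feed directly into `event_prob`, which is why improved forecasting translates so quickly into an insurable product.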
Dev Niyogi
Speech speed
193 words per minute
Speech length
448 words
Speech time
139 seconds
Decision‑specific digital twins that translate forecasts into actions
Explanation
Niyogi advocates a data‑to‑decision framework in which AI models produce weather‑driven digital twins tailored to specific user decisions rather than raw output variables.
Evidence
“So if we define why we are creating models, what decision we are going to guide based on that data‑to‑decision framework, we can make that into a very intelligent, scalable modeling system” [15]. “And this is where we need to move from simply creating the weather output to adding something which is going to help me make an intelligent decision, whatever that may be” [46]. “And we can create digital twins which are very decision‑specific” [84]. “That is where I think digital twins come into picture” [86].
Major discussion point
Digital twins and decision‑specific AI frameworks
Topics
Artificial intelligence | Social and economic development | Capacity development
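The decision-specific idea can be sketched as follows. The names and thresholds here are invented for illustration: the "twin" wraps the forecast and answers only the question the user actually has, for example whether a reservoir should pre-release water ahead of heavy rain.

```python
# Hedged illustration (class name and thresholds are invented): a
# decision-specific "digital twin" returns an action, not a weather variable.

from dataclasses import dataclass

@dataclass
class ReservoirTwin:
    capacity_mm: float       # storage headroom expressed as rainfall depth
    current_level_mm: float

    def decide(self, forecast_rain_mm, safety_margin=0.8):
        """Map the forecast directly to an operational decision."""
        headroom = self.capacity_mm - self.current_level_mm
        if forecast_rain_mm > safety_margin * headroom:
            return "pre-release"
        return "hold"

twin = ReservoirTwin(capacity_mm=100.0, current_level_mm=70.0)
action = twin.decide(forecast_rain_mm=40.0)   # 40 > 0.8 * 30 -> "pre-release"
```

The same forecast feeding a different twin (irrigation, power dispatch, evacuation) would yield a different, equally decision-shaped output.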
Praphul Chandra
Speech speed
156 words per minute
Speech length
373 words
Speech time
142 seconds
Small‑data fine‑tuning of large foundation models for climate tasks
Explanation
Chandra raises the question of how little labeled data is needed to adapt large foundation models to specific weather or climate applications, emphasizing the importance of data efficiency.
Evidence
“Can you use small data to fine tune large foundation models?” [3]. “I’m going to pick up where Professor Neogi and Dr. Shiv Kumar left you know we work across several AI foundation models in biology in materials and we have looked at foundation models in weather I think the breakthrough that I am most anxious to look for is what we call small data fine tuning” [14]. “What that means is that when you look at these large foundation models they are fairly general in their applicability and as Professor Sheth was saying when you have to fine tune them for a specific use case you still need data” [119]. “How small can that data be?” [120].
Major discussion point
Technical advances: small‑data fine‑tuning, transfer learning, benchmarks, and super‑resolution
Topics
Artificial intelligence | Capacity development
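Small-data fine-tuning can be sketched in miniature (the "backbone" here is a stand-in function, not a real foundation model): the pretrained feature extractor stays frozen, and only a tiny linear head is fit on a handful of labeled examples.

```python
# Hedged sketch of small-data fine-tuning: keep a "pretrained" feature
# extractor frozen and fit only a small linear head on very few labeled
# examples. The backbone below is a toy stand-in, not a foundation model.

def frozen_backbone(x):
    """Pretend pretrained model: maps raw input to a single feature."""
    return 2.0 * x + 1.0   # frozen -- never updated during fine-tuning

def fit_head(xs, ys):
    """Closed-form least squares for y = a * feature + b on few samples."""
    fs = [frozen_backbone(x) for x in xs]
    n = len(fs)
    mf, my = sum(fs) / n, sum(ys) / n
    a = sum((f - mf) * (y - my) for f, y in zip(fs, ys)) / \
        sum((f - mf) ** 2 for f in fs)
    return a, my - a * mf

# Only three labeled examples for the new task (true head: y = 3*f - 2).
xs = [0.0, 1.0, 2.0]
ys = [3.0 * frozen_backbone(x) - 2.0 for x in xs]
a, b = fit_head(xs, ys)
```

Because only two head parameters are learned, three examples suffice, which is the data-efficiency question Chandra is posing at foundation-model scale.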
Hyper‑local solar generation forecasts combined with AI‑driven grid load predictions for demand flexibility
Explanation
He describes using AI‑enhanced solar output forecasts together with grid load predictions to enable peer‑to‑peer energy trading and flexible demand management.
Evidence
“India’s grid needs to be digitized and in fact we have a team from the University here which is doing a demo on combining digital public infrastructure from the Ministry of Power, which is India Energy Stack, combined with AI models, which use weather forecasting and do forecasting about grid loads to be able to trade energy between consumers and producers” [28]. “Now if you look at the kind of models that are becoming available for hyper local forecasting they are also giving us much more predictive power in terms of how much energy will one rooftop solar panel generate which is critical for managing the grid right” [50]. “When you move to a data center economy, which is huge consumption of energy, you need to be able to support dynamic demand flexibility using a combination of AI and public infrastructure” [110]. “Demand flexibility is, again, something that I see critically important as we talk about sustainable AI” [125].
Major discussion point
AI applications in energy management and sustainability
Topics
Environmental impacts | Artificial intelligence | Financial mechanisms
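The solar-plus-grid idea reduces to a simple scheduling sketch (panel ratings and irradiance values are illustrative assumptions): convert a rooftop irradiance forecast into expected output, then shift a flexible load into the highest-output hour.

```python
# Hedged sketch: combine a rooftop solar forecast with a shiftable load to
# decide when to run a flexible appliance. Ratings and irradiance numbers
# are illustrative only.

def solar_output_kw(irradiance_w_m2, panel_kw=5.0, derate=0.8):
    """Rooftop output from forecast irradiance (1000 W/m2 = rated output)."""
    return panel_kw * derate * irradiance_w_m2 / 1000.0

def schedule_flexible_load(irradiance_forecast):
    """Run the shiftable load in the hour with the highest solar output."""
    outputs = [solar_output_kw(i) for i in irradiance_forecast]
    return max(range(len(outputs)), key=outputs.__getitem__)

# Hourly irradiance forecast (W/m2) from 6:00 onwards; peak near midday.
forecast = [50, 200, 450, 700, 850, 900, 820, 600]
best_hour = schedule_flexible_load(forecast)   # index of the 900 W/m2 slot
```

Better hyperlocal forecasts sharpen the `irradiance_forecast` input, which is exactly where Chandra sees AI adding predictive power for grid management.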
Karthik Kashinath
Speech speed
166 words per minute
Speech length
436 words
Speech time
157 seconds
Creating benchmark datasets and metrics for hyper‑local operational quality
Explanation
Kashinath argues that defining benchmark data and associated metrics, akin to ImageNet or WeatherBench, is essential to drive progress toward reliable hyper‑local forecasting.
Evidence
“So I think if we want to get down to the hyperlocal scales, which of course depends on the region that you’re talking about and the types of metrics that you care about, it would be very helpful to create the benchmark data sets and the associated benchmark metrics that can drive towards that” [47]. “One is I think creating the benchmark data sets and the benchmark metrics that are needed to achieve operational quality” [48]. “And they defined benchmark data sets and benchmark metrics that drove the revolution in AI” [49]. “And if you look at what has led to the developments at the global scale at 25 kilometer resolution is the ERIF data set from ECMWF and the benchmark problems that they’ve defined on that data set, like the weather bench for example” [52]. “So I think we can do the same thing if we take it down to the hyperlocal level” [53].
Major discussion point
Technical advances: small‑data fine‑tuning, transfer learning, benchmarks, and super‑resolution
Topics
Artificial intelligence | Monitoring and measurement | Capacity development
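A WeatherBench-style benchmark reduces to three fixed ingredients: a dataset, a target, and an agreed score. The sketch below (data is made up, and plain RMSE plus a persistence skill score stand in for the fuller metric suites Kashinath has in mind) shows how such a metric puts different models on the same footing.

```python
import math

# Hedged sketch of a benchmark metric: a fixed dataset, a fixed target,
# and a single agreed score (plain RMSE, plus skill against a persistence
# baseline). All data below is made up for illustration.

def rmse(pred, truth):
    n = len(pred)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / n)

truth       = [30.1, 31.4, 29.8, 28.5]   # observed temperature (C)
persistence = [29.5, 29.5, 29.5, 29.5]   # naive baseline: yesterday's value
model       = [30.0, 31.0, 30.0, 29.0]   # candidate AI model

baseline_score = rmse(persistence, truth)
model_score = rmse(model, truth)
skill = 1.0 - model_score / baseline_score   # > 0 means beats the baseline
```

Publishing `truth` and the scoring function is what turns scattered local evaluations into a benchmark that can drive progress the way ImageNet did.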
Transfer learning across data‑rich and data‑sparse regions
Explanation
He highlights the potential of transfer learning to share weather‑physics knowledge from data‑rich to data‑sparse regions, improving model performance where observations are limited.
Evidence
“One thing I would like to see more used in practice is transfer learning which of course some regions of the world are data rich and some others are data sparse” [121]. “But if we can transfer learn efficiently from one region to another with constraints of what exactly we’re trying to transfer learn, I think that would be very impactful” [122]. “The physics of weather and climate are the same no matter where you are on the planet” [123]. “But at the same time, there’s uniqueness at hyperlocal scales” [124].
Major discussion point
Technical advances: small‑data fine‑tuning, transfer learning, benchmarks, and super‑resolution
Topics
Artificial intelligence | Capacity development
Super‑resolution techniques to achieve hyper‑local forecast resolution
Explanation
He notes ongoing work that upscales 25 km data to 1 km and anticipates further advances to reach even finer scales using generative AI.
Evidence
“We’re already doing that right now in the Earth2 program with taking 25 kilometer data and super resolving it to one kilometer” [54]. “So if we can stretch that even further to go down to these hyperlocal scales, I’m fairly confident that the technologies needed in generative AI to get us to that scale either already exist or will be invented in the next two to three years” [55].
Major discussion point
Technical advances: small‑data fine‑tuning, transfer learning, benchmarks, and super‑resolution
Topics
Artificial intelligence | Environmental impacts | Monitoring and measurement
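The Earth2 work uses learned generative models; as a hedged stand-in, the sketch below super-resolves a coarse grid 4x with plain bilinear interpolation, just to show the coarse-to-fine mapping the quotes describe (a generative model would add physically plausible fine-scale detail rather than smooth interpolation).

```python
# Hedged illustration of super-resolution: upsample a coarse "25 km" grid
# to a finer grid with bilinear interpolation. A generative model (as in
# Earth2) would learn this mapping and add realistic fine-scale structure.

def bilinear_upsample(grid, factor):
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(rows * factor):
        y = i / factor
        y0 = min(int(y), rows - 1)
        y1 = min(y0 + 1, rows - 1)
        fy = y - y0
        row = []
        for j in range(cols * factor):
            x = j / factor
            x0 = min(int(x), cols - 1)
            x1 = min(x0 + 1, cols - 1)
            fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# Coarse 2x2 field of rainfall values, super-resolved to an 8x8 grid.
coarse = [[0.0, 4.0],
          [8.0, 12.0]]
fine = bilinear_upsample(coarse, 4)
```

Swapping the interpolation for a trained generative model is what lets the 25 km to 1 km step recover realistic hyperlocal extremes instead of smoothed averages.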
Akshara Kaginalkar
Speech speed
146 words per minute
Speech length
2193 words
Speech time
897 seconds
Highlighting weather and climate as a critical vertical for AI solutions
Explanation
Akshara stresses that weather and climate constitute a major vertical for AI interventions in India, requiring top‑down approaches to address critical national problems.
Evidence
“top‑down approaches in terms of finding the AI solutions, India’s critical problems and weather and climate is a major vertical” [16].
Major discussion point
Strategic vision for AI research institutions in India
Topics
Artificial intelligence | The enabling environment for digital development
Announcing partnerships with major technology firms and foundations
Explanation
She notes collaborations with NVIDIA, Google, Qualcomm and the Gates Foundation to strengthen AI capabilities for climate applications.
Evidence
“I just want to quickly also mention that we’ve announced in this AI Summit partnerships with NVIDIA, with Google and Qualcomm as well as we’re doing other things at the Gates Foundation” [27].
Major discussion point
Funding mechanisms, public‑private partnerships, and ecosystem support
Topics
Financial mechanisms | The enabling environment for digital development
Promoting open IP licensing and industry‑academic consortia for translation
Explanation
Akshara describes initiatives such as open IP licensing, translational research centers, and consortium‑based bidding to accelerate AI‑driven climate solutions.
Evidence
“We are introducing, you know, IP and other innovations to drive translation” [99]. “In some of our programs, we have put out open IP licensing so that when you have a company or a startup and so on, they can actually partner with academia, pick up the IP and quickly translate” [100]. “We have this Translational Research Centers program, which has mandates partnership with industry as well” [111]. “So there’ll be a demand for industry academic collaboration coming from that side as well” [112]. “So we are encouraging consortiums to come and bid for it, or a hub‑and‑spoke type setup” [113].
Major discussion point
Funding mechanisms, public‑private partnerships, and ecosystem support
Topics
Financial mechanisms | The enabling environment for digital development | Capacity development
Emphasizing data governance and open data for broader participation
Explanation
She calls for opening up weather data to enable multiple stakeholders to develop AI solutions and stresses the importance of validation at very local scales.
Evidence
“And you have to open up the data so that we have to, that means different people can, can come back and work on that” [83]. “So how do you see what is more to be done in terms of models operationally robust at a hyper local scale?” [36]. “So what further development do you see basically the physics, how can we interpret the physics coming in the AI models and the validation is very, very important as sir has said that very, very local scale” [30].
Major discussion point
AI‑driven weather and climate forecasting, extreme‑event prediction
Topics
Data governance | Artificial intelligence | Capacity development
Audience
Speech speed
154 words per minute
Speech length
75 words
Speech time
29 seconds
Insurance as a primary monetizable climate‑risk product
Explanation
An audience member points out that climate risk translates directly into insurance pricing and coverage, making insurance a key market for AI‑enhanced risk assessments.
Evidence
“One word I didn’t hear too much of was insurance and climate risk typically climate risk reflects in insurance rates either becoming so high or just your house goes uninsured which is happening” [132].
Major discussion point
Monetization pathways: insurance and climate risk products
Topics
Financial mechanisms | Artificial intelligence | Environmental impacts
Agreements
Agreement points
Integration of AI with physics-based models is essential for effective weather prediction
Speakers
– M. Ravichandran
– Manish Bhardwaj
Arguments
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Trusted early warning systems for all citizens require hybrid models connecting AI with physical sensor networks and satellite data
Summary
Both speakers emphasize that purely AI-based solutions are insufficient and that hybrid approaches combining AI with physics-based models and physical sensor networks are necessary for reliable weather prediction and early warning systems
Topics
Artificial intelligence | Environmental impacts | Information and communication technologies for development
Collaboration and partnerships are crucial for scaling AI climate solutions
Speakers
– Shivkumar Kalayanaraman
– Sandeep Singhal
– Akshara Kaginalkar
Arguments
Leapfrog Demonstrators for Societal Innovation program will support collaborative proposals that address real societal problems with demonstrable impact
Public-private partnerships are essential for scaling solutions, with philanthropic capital also becoming available for large-scale programs
Interdisciplinary collaboration for AI climate solutions
Summary
All three speakers stress the importance of collaborative approaches, whether through government programs encouraging consortiums, public-private partnerships for funding and deployment, or interdisciplinary expertise integration
Topics
Financial mechanisms | The enabling environment for digital development | Social and economic development
Hyperlocal scale predictions are critical for practical applications
Speakers
– M. Ravichandran
– Karthik Kashinath
– Akshara Kaginalkar
Arguments
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Superresolution techniques can downscale global models from 25km to 1km resolution and potentially to hyperlocal scales
Hyperlocal scale requirements for practical climate services
Summary
All speakers recognize that moving from large-scale global models to hyperlocal predictions (down to meters or kilometers) is essential for practical climate applications and disaster management
Topics
Artificial intelligence | Environmental impacts | Social and economic development
Data accessibility and standardization are fundamental requirements
Speakers
– M. Ravichandran
– Karthik Kashinath
Arguments
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
Benchmark datasets and metrics at hyperlocal scales are needed to drive AI development, similar to how ImageNet revolutionized computer vision
Summary
Both speakers emphasize that making data accessible to diverse researchers and creating standardized benchmark datasets are essential for advancing AI applications in weather and climate
Topics
Data governance | Artificial intelligence | Capacity development
Similar viewpoints
Both speakers advocate for application-specific AI solutions that focus on particular decision-making needs rather than general-purpose predictions, emphasizing practical utility over comprehensive modeling
Speakers
– Dev Niyogi
– Praphul Chandra
Arguments
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
AI weather forecasting can enable renewable energy grid management by predicting solar panel output for demand flexibility and energy trading
Topics
Artificial intelligence | The digital economy | Social and economic development
Both speakers recognize the need for differentiated funding and business models that separate public good applications (requiring government support) from commercial applications (that can be monetized)
Speakers
– Shivkumar Kalayanaraman
– Sandeep Singhal
Arguments
ANRF provides grant funding for research entities and RDI fund for private sector, focusing on mission-mode programs like AI for Weather and Climate
Startups need government partnerships for data access and deployment, while segmenting markets between public good and monetizable private applications
Topics
Financial mechanisms | The digital economy | The enabling environment for digital development
Both speakers focus on techniques to make large AI models more accessible and applicable to specific use cases with limited data, whether through transfer learning or small data fine-tuning
Speakers
– Karthik Kashinath
– Praphul Chandra
Arguments
Transfer learning can help apply weather models across different regions while accounting for hyperlocal variations
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Topics
Artificial intelligence | Capacity development | Information and communication technologies for development
Unexpected consensus
Need for interdisciplinary collaboration beyond traditional weather science
Speakers
– M. Ravichandran
– Dev Niyogi
– Akshara Kaginalkar
Arguments
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
AI can map human behavioral elements and societal aspects with physical constraints for more accessible predictions
Interdisciplinary collaboration for AI climate solutions
Explanation
It’s unexpected that a government meteorological official (Ravichandran) would strongly advocate for opening up data to researchers from completely different disciplines like biology, aligning with academic perspectives on interdisciplinary approaches
Topics
Capacity development | Data governance | Artificial intelligence
Focus on small, specialized models rather than large general models
Speakers
– Amit Sheth
– Praphul Chandra
– Dev Niyogi
Arguments
IRO focuses on building small, agile, specific models rather than large language models, with earth science as a key vertical
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
Explanation
There’s unexpected consensus across different types of organizations (research institute, academia, and industry-academia hybrid) on moving away from the current trend of ever-larger AI models toward smaller, more specialized solutions
Topics
Artificial intelligence | Information and communication technologies for development | The enabling environment for digital development
Overall assessment
Summary
The speakers demonstrate strong consensus on the need for hybrid AI-physics approaches, collaborative partnerships, hyperlocal scale predictions, and data accessibility. There’s also unexpected agreement on interdisciplinary collaboration and preference for specialized over general AI models.
Consensus level
High level of consensus with complementary rather than conflicting viewpoints. The implications are positive for coordinated action, as speakers from different sectors (government, academia, industry, funding) align on key principles while bringing different expertise to implementation. This suggests a mature understanding of the challenges and realistic pathways forward for AI applications in climate and weather prediction.
Differences
Different viewpoints
Approach to AI model development – large foundation models vs. small specific models
Speakers
– Amit Sheth
– Praphul Chandra
Arguments
IRO focuses on building small, agile, specific models rather than large language models, with earth science as a key vertical
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Summary
Sheth advocates for building original small, agile models from scratch to avoid the ‘baggage’ of large language models with unknown training data, while Chandra sees potential in fine-tuning existing large foundation models with small datasets for specific applications
Topics
Artificial intelligence | Information and communication technologies for development
Data fusion vs. insight fusion approaches
Speakers
– Shivkumar Kalayanaraman
– M. Ravichandran
Arguments
Multimodal models combining time series and spatial data with generative AI can forecast weather patterns using simple camera networks
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Summary
Kalayanaraman emphasizes fusing insights from different modes rather than raw data fusion, advocating for generative AI with simple sensor networks, while Ravichandran focuses on integrating traditional physics-based numerical models with AI approaches
Topics
Artificial intelligence | Environmental impacts | Information and communication technologies for development
Unexpected differences
Role of interdisciplinary collaboration in weather prediction
Speakers
– M. Ravichandran
Arguments
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
Explanation
Ravichandran uniquely emphasized bringing in researchers from biology and other non-meteorological disciplines to work on weather data, suggesting that weather experts may be too constrained by traditional thinking. This was unexpected as other speakers focused on technical integration rather than disciplinary diversity
Topics
Capacity development | Data governance | Artificial intelligence
Consumer-focused vs. institutional applications
Speakers
– Sandeep Singhal
– Other speakers
Arguments
Voice-based consumer applications should provide personalized resilience guidance for different user types during climate events
Explanation
Singhal was the only speaker to emphasize direct consumer applications with voice interfaces for individual resilience, while other speakers focused on institutional, research, or infrastructure-level solutions. This represents an unexpected divide between consumer-facing and institutional approaches
Topics
Social and economic development | Closing all digital divides | The digital economy
Overall assessment
Summary
The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts centered on technical approaches rather than goals. Main areas of disagreement included AI model development strategies (small specific models vs. fine-tuned large models) and implementation approaches (data fusion vs. insight fusion). Most speakers agreed on the need for hybrid AI-physics approaches, public-private partnerships, and hyperlocal applications.
Disagreement level
Low to moderate disagreement level with high convergence on goals but some divergence on methods. The implications are positive for the field as speakers complement rather than contradict each other, suggesting multiple viable pathways toward AI-enabled climate solutions. The main challenge will be coordinating these different approaches rather than resolving fundamental conflicts.
Partial agreements
All speakers agree on the need for hybrid approaches combining AI with traditional methods, but disagree on the specific implementation – Ravichandran emphasizes physics-based model integration, Bhardwaj focuses on sensor network integration, and Kalayanaraman advocates for insight fusion over data fusion
Speakers
– M. Ravichandran
– Manish Bhardwaj
– Shivkumar Kalayanaraman
Arguments
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Trusted early warning systems for all citizens require hybrid models connecting AI with physical sensor networks and satellite data
Multimodal models combining time series and spatial data with generative AI can forecast weather patterns using simple camera networks
Topics
Artificial intelligence | Environmental impacts | Building confidence and security in the use of ICTs
Both agree on the importance of public-private partnerships and funding mechanisms, but Singhal emphasizes market segmentation between public good and commercial applications, while Kalayanaraman focuses on institutional funding structures and collaborative research programs
Speakers
– Sandeep Singhal
– Shivkumar Kalayanaraman
Arguments
Startups need government partnerships for data access and deployment, while segmenting markets between public good and monetizable private applications
ANRF provides grant funding for research entities and RDI fund for private sector, focusing on mission-mode programs like AI for Weather and Climate
Topics
Financial mechanisms | The enabling environment for digital development | The digital economy
Both agree on the need for more targeted, practical applications, but Niyogi emphasizes decision-focused modeling for specific user needs, while Kashinath focuses on creating standardized benchmarks and datasets to drive technical development
Speakers
– Dev Niyogi
– Karthik Kashinath
Arguments
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
Benchmark datasets and metrics at hyperlocal scales are needed to drive AI development, similar to how ImageNet revolutionized computer vision
Topics
Artificial intelligence | Social and economic development | Monitoring and measurement
Takeaways
Key takeaways
AI integration with physics-based weather models is essential for accurate hyperlocal predictions, especially for extreme events like cloudbursts that current numerical models cannot predict effectively
India’s competitive advantage lies in its 150+ years of weather data and young talent pool, but this requires opening up data access to diverse researchers from multiple disciplines beyond traditional meteorology
Trusted early warning systems for disaster management need hybrid AI models that combine multiple data sources (satellite, sensor networks, terrestrial) to provide granular, targeted warnings to vulnerable populations
Transfer learning and small data fine-tuning techniques can enable AI weather models to work across different regions while maintaining hyperlocal specificity
Digital twins should be decision-focused rather than attempting to predict every variable, transforming weather data into actionable intelligence for specific use cases
Public-private partnerships are critical for scaling AI climate solutions, with government providing data access and deployment infrastructure while private sector handles monetization
The intersection of AI weather forecasting with renewable energy grid management represents a significant opportunity for India’s energy transition
Resolutions and action items
ANRF announced collaboration with MOES on the Mission Mausam program for AI Weather and Climate applications
ANRF launched AI for Science and Engineering hackathon for Weather and Climate in partnership with IBM and IIT Delhi
ANRF will launch Leapfrog Demonstrators for Societal Innovation program within a month to support collaborative proposals addressing societal problems
IRO partnerships announced with NVIDIA, Google, Qualcomm, and Gates Foundation to focus on India-specific AI applications
Need to create benchmark datasets and metrics for hyperlocal scale weather prediction similar to ImageNet for computer vision
Requirement to open up India’s weather data for broader research community access to enable diverse approaches to error reduction
Unresolved issues
How to predict cloudbursts and other extreme weather events that current models cannot forecast
Specific mechanisms for validating and building trust in AI-enabled weather prediction systems for operational use
Technical details of how to effectively integrate physics-based models with AI approaches without losing interpretability
Scalability challenges of deploying hyperlocal AI weather models across India’s diverse geographic and climatic regions
How to balance the computational requirements of advanced AI models with practical deployment constraints
Specific frameworks for translating AI weather predictions into actionable guidance for different user segments (farmers, urban populations, etc.)
Integration challenges between climate risk prediction technology and insurance industry applications
Suggested compromises
Hybrid approach combining physics-based numerical models for spatial predictions with AI for time series and local patterns rather than purely AI-based solutions
Segmented market approach where public good applications are funded through government partnerships while private monetizable applications support business sustainability
Collaborative consortium-based research proposals rather than individual efforts to leverage diverse expertise and resources
Focus on small, agile, specific AI models for particular use cases rather than attempting to build large general-purpose foundation models
Gradual scaling from global 25km resolution models to 1km through super-resolution techniques, then further to hyperlocal scales as technology matures
Open IP licensing arrangements to encourage rapid translation from academic research to industry applications
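As a toy illustration of the super-resolution compromise above, the sketch below (a hypothetical NumPy example, not drawn from the session) shows the simple interpolation baseline that learned super-resolution models aim to improve on: a coarse 4x4 field, a stand-in for a 25km grid, is upsampled onto a 5x finer grid. Real super-resolution methods learn to add physically plausible fine-scale structure on top of such a baseline.

```python
import numpy as np

def bilinear_upsample(coarse, factor):
    """Upsample a 2-D field by linear interpolation along each axis."""
    ny, nx = coarse.shape
    y = np.linspace(0, ny - 1, ny * factor)
    x = np.linspace(0, nx - 1, nx * factor)
    # Interpolate each row onto the fine x-grid, then each column onto the fine y-grid.
    rows = np.array([np.interp(x, np.arange(nx), r) for r in coarse])
    cols = np.array([np.interp(y, np.arange(ny), c) for c in rows.T]).T
    return cols

coarse = np.arange(16, dtype=float).reshape(4, 4)   # stand-in "25 km" field
fine = bilinear_upsample(coarse, 5)                 # 5x finer grid, (20, 20)
```

The interpolated field preserves the coarse values at grid corners; everything a learned model adds beyond this baseline is the actual "super-resolved" detail.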
Thought provoking comments
When you talk about the weather… earlier we just used to tell, suppose, how the elephant is going. I am able to see that elephant, how it is going, and I am able to tell that tomorrow it will come here. But now the problem is that, because of climate change and other things, the space and time scales have changed. Now we have to see that on the elephant some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus the ant.
Speaker
M. Ravichandran
Reason
This metaphor brilliantly captures the fundamental challenge in modern weather prediction – the need to understand both macro and micro-scale phenomena simultaneously. It illustrates how climate change has made weather systems more complex and unpredictable, requiring unprecedented granularity in forecasting.
Impact
This comment set the tone for the entire discussion by establishing the core challenge that AI must address. It influenced subsequent speakers to focus on multi-scale, multi-modal approaches and the need for hybrid physics-AI models. The metaphor became a reference point for discussing the complexity of modern weather prediction.
I think I’ll just double down on the multimodal models that are coming out… today with generative AI you can just point a camera at the sky and then not only see the patterns of clouds but actually forecast one hour ahead… the opportunity is to fuse insights as opposed to fusing data. I mean, data fusion is a painfully, mind-bogglingly, unnecessarily complex thing.
Speaker
Shivkumar Kalayanaraman
Reason
This comment introduces a paradigm shift from traditional data fusion to insight fusion, which is a more sophisticated approach. It also highlights how accessible technology (simple cameras) can now provide sophisticated forecasting capabilities, democratizing weather prediction.
Impact
This shifted the conversation from discussing complex technical infrastructure to more accessible, scalable solutions. It influenced the discussion toward practical, deployable technologies and reinforced the theme of making AI weather solutions more democratized and cost-effective.
I’ll just add one term. You guys know this word Jugaad; this is a very Indian thing, Jugaad… There is a framework that is mathematically feasible, that we can model very well, that follows equations and the laws of nature. And then there is a human element: we always beat the system and make things happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play, in that it brings the human dimensions and the societal aspect along with the physical constraints.
Speaker
Dev Niyogi
Reason
This comment brilliantly connects Indian cultural innovation (‘Jugaad’) with AI’s capability to model human behavior and societal factors alongside physical phenomena. It recognizes that weather prediction isn’t just about physics but about how humans adapt and respond to weather systems.
Impact
This comment introduced a uniquely Indian perspective to the global AI-weather discussion and emphasized the importance of incorporating human behavioral patterns into predictive models. It influenced the conversation to consider cultural and social factors in AI model development, making the discussion more holistic and locally relevant.
Weather is the tragedy of commons. Everyone is affected by it, but no one can pay for it… People don’t need weather. They need weather that can help them make a decision… we need to move from simply creating the weather output to adding something which is going to help me make an intelligent decision
Speaker
Dev Niyogi
Reason
This comment reframes the entire value proposition of weather prediction from a technical exercise to a decision-support system. It addresses the fundamental economic challenge of weather services and proposes a solution-oriented approach that focuses on actionable intelligence rather than raw data.
Impact
This comment fundamentally shifted the discussion from technical capabilities to user-centric value creation. It influenced the conversation toward practical applications and monetization strategies, connecting with earlier points about public-private partnerships and making weather services economically sustainable.
The breakthrough that I am most anxious to look for is what we call small data fine tuning… when you have to fine tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine tune large foundation models?
Speaker
Praphul Chandra
Reason
This addresses a critical practical challenge in AI deployment – the data requirement paradox. While foundation models are powerful, they often require substantial data for fine-tuning, which may not be available for specific local applications. This comment identifies a key technical breakthrough needed for widespread adoption.
Impact
This comment focused the technical discussion on a specific, solvable problem that could unlock broader AI applications in weather and climate. It influenced subsequent discussions about transfer learning and the practical deployment of AI models in data-sparse regions.
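As a hedged sketch of the small-data fine-tuning idea raised here (a NumPy toy under assumed shapes and data, not any speaker's actual method), the example below freezes a stand-in "foundation model" weight matrix and fits only a low-rank adapter, LoRA-style, on just five samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for a pretrained "foundation model": one linear layer.
W_frozen = rng.normal(size=(8, 8))

def model(x, A=None, B=None):
    """Frozen backbone plus an optional low-rank (LoRA-style) adapter."""
    W = W_frozen if A is None else W_frozen + A @ B
    return x @ W.T

# Tiny local dataset: only 5 samples, whose true mapping deviates
# slightly from the pretrained weights.
X_small = rng.normal(size=(5, 8))
true_shift = 0.1 * rng.normal(size=(8, 8))
Y_small = X_small @ (W_frozen + true_shift).T

init_loss = np.mean((model(X_small) - Y_small) ** 2)

# Train only a rank-2 adapter: 32 parameters instead of the backbone's 64.
r, lr = 2, 0.05
A = 0.1 * rng.normal(size=(8, r))   # small random init
B = np.zeros((r, 8))                # zero init: training starts at the backbone
for _ in range(500):
    err = model(X_small, A, B) - Y_small      # (5, 8) residuals
    G = err.T @ X_small / len(X_small)        # grad of squared error w.r.t. W (up to a constant)
    gA, gB = G @ B.T, A.T @ G                 # chain rule through W = W_frozen + A @ B
    A -= lr * gA
    B -= lr * gB

final_loss = np.mean((model(X_small, A, B) - Y_small) ** 2)
```

The point of the sketch is the parameter count: the adapter has half as many free parameters as the frozen layer, so five samples suffice to reduce the local error without touching the pretrained weights.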
One thing I would like to see used more in practice is transfer learning. Of course, some regions of the world are data rich and others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are on the planet. But at the same time, there is uniqueness at hyperlocal scales.
Speaker
Karthik Kashinath
Reason
This comment addresses the global-local paradox in weather modeling and proposes transfer learning as a solution to bridge data inequality between regions. It recognizes both the universality of physical laws and the uniqueness of local conditions.
Impact
This comment provided a technical pathway for addressing data scarcity issues raised earlier and connected with the discussion about hyperlocal forecasting. It influenced the conversation toward collaborative, global approaches to AI model development while maintaining local relevance.
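A minimal sketch of the transfer-learning pattern described here (a hypothetical NumPy example with made-up data, not drawn from the session): a shared model is fit where observations are abundant, then only a two-parameter affine correction is fit from ten observations in a data-sparse region with a hyperlocal gain and offset:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Data-rich" region: plenty of (predictor -> temperature) pairs.
X_rich = rng.normal(size=(2000, 4))
w_true = np.array([2.0, -1.0, 0.5, 0.3])
y_rich = X_rich @ w_true + rng.normal(scale=0.1, size=2000)

# Fit the shared model where data is abundant (ordinary least squares).
w_global, *_ = np.linalg.lstsq(X_rich, y_rich, rcond=None)

# "Data-sparse" region: same physics, but a hyperlocal gain and offset
# (think valley shading or an urban heat island). Only 10 observations.
X_sparse = rng.normal(size=(10, 4))
y_sparse = 0.8 * (X_sparse @ w_true) + 3.0 + rng.normal(scale=0.1, size=10)

# Transfer: keep w_global frozen, fit only a 2-parameter affine correction.
f = X_sparse @ w_global                        # frozen global predictions
design = np.column_stack([f, np.ones_like(f)]) # [gain, offset] columns
(gain, offset), *_ = np.linalg.lstsq(design, y_sparse, rcond=None)

def predict_local(X):
    return gain * (X @ w_global) + offset

# Held-out check in the sparse region.
X_test = rng.normal(size=(200, 4))
y_test = 0.8 * (X_test @ w_true) + 3.0
err_raw = np.mean((X_test @ w_global - y_test) ** 2)
err_adapted = np.mean((predict_local(X_test) - y_test) ** 2)
```

The shared physics lives in the frozen global weights; the ten local samples only have to identify two correction parameters, which is the essence of transferring from data-rich to data-sparse regions.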
Overall assessment
These key comments collectively transformed the discussion from a technical showcase of AI capabilities to a nuanced exploration of practical challenges and solutions. The elephant-ant metaphor established the complexity challenge, while the Jugaad concept introduced cultural and human dimensions. The ‘tragedy of commons’ reframing shifted focus from technology to value creation, and the technical insights about small data fine-tuning and transfer learning provided concrete pathways forward. Together, these comments created a comprehensive framework that balanced technical innovation with practical deployment, economic sustainability, and social relevance – making the discussion uniquely valuable for understanding how AI can address India’s specific weather and climate challenges while contributing to global solutions.
Follow-up questions
How can we predict cloudbursts and other extreme weather events that are currently unpredictable?
Speaker
M. Ravichandran
Explanation
This is a critical gap in current forecasting capabilities that affects disaster preparedness and early warning systems
How can we effectively blend physics-based numerical models with AI for better local weather prediction?
Speaker
M. Ravichandran
Explanation
Integration of traditional modeling approaches with AI is essential for improving forecast accuracy at fine scales
How small can the data be for fine-tuning large foundation models for specific use cases?
Speaker
Praphul Chandra
Explanation
Small data fine-tuning is a breakthrough needed to make foundation models more applicable to specific regional or local problems
How can we create benchmark datasets and metrics for hyperlocal scale weather prediction?
Speaker
Karthik Kashinath
Explanation
Standardized benchmarks are needed to drive AI development for hyperlocal applications, similar to how ImageNet drove computer vision advances
How can we effectively transfer learning from data-rich regions to data-sparse regions while maintaining local specificity?
Speaker
Karthik Kashinath
Explanation
This addresses the global challenge of uneven data availability while leveraging shared physics across regions
How can AI help predict glacial lake outburst floods and improve early warning for cascading multi-hazard scenarios?
Speaker
Manish Bhardwaj
Explanation
These complex, cascading disasters require advanced prediction capabilities that current systems cannot provide
How can we develop voice-based AI frameworks that provide personalized resilience guidance for different user types (farmers, urban dwellers, etc.)?
Speaker
Sandeep Singhal
Explanation
Consumer-facing applications need to translate complex weather information into actionable guidance for different user segments
How can we map human behavioral elements (Jugaad) into predictive models using AI?
Speaker
Dev Niyogi
Explanation
Human adaptation and innovation in response to weather events is difficult to model but crucial for accurate predictions
How can we develop decision-specific digital twins rather than comprehensive weather models?
Speaker
Dev Niyogi
Explanation
Moving from general weather prediction to decision-support systems requires targeted modeling approaches
How can climate risk assessment be integrated with insurance products in the Indian context?
Speaker
Audience member
Explanation
Insurance applications represent a monetizable use case for climate AI that could drive private sector investment
How can we validate and verify AI/ML weather forecasts to build trust in the system?
Speaker
M. Ravichandran
Explanation
Trust in AI-based forecasting systems is crucial for operational adoption and public acceptance
How can we open up weather data to enable broader participation from different disciplines in AI/ML development?
Speaker
M. Ravichandran
Explanation
Cross-disciplinary collaboration could bring new perspectives and solutions to weather prediction challenges
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.