National Disaster Management Authority

20 Feb 2026 15:00h - 16:00h


Session at a glance

Summary

This panel discussion focused on integrating artificial intelligence into disaster risk reduction (DRR) systems to build national resilience, particularly examining how countries like India can develop scalable AI-enabled early warning systems. The moderator emphasized that the next frontier in DRR involves institutionalizing AI within national resilience architecture rather than simply developing better algorithms. Minister Avinash Ramtohul from Mauritius highlighted the importance of creating digital twins that bridge physical and virtual worlds, while stressing the need for human oversight in automated decision-making processes to prevent dangerous outcomes from fully automated systems. He also warned about cybersecurity threats as disasters affecting the virtual world that require equal attention.


Beth Woodhams from the UK Met Office explained their approach of gradually blending machine learning models with traditional physics-based weather models rather than completely replacing them, emphasizing the importance of co-development with international partners and standardized benchmarking. Som Satsangi identified critical infrastructure gaps, noting that India’s current 40 petaflop computing capacity is insufficient compared to the 1-2 exaflop systems deployed in the United States for real-time disaster management. He emphasized that addressing infrastructure, procurement policies, and power requirements through public-private partnerships is essential for scaling AI solutions.


Pankaj Shukla from Google Cloud outlined a five-layer AI architecture spanning infrastructure to applications, highlighting the need for systems that can operate in disconnected environments during disasters. Nikhilesh Kumar from Vassar Labs demonstrated how startups can contribute by developing Digital Public Goods (DPGs) that connect scattered data across agencies, citing successful real-time dam monitoring systems. Dr. Mohapatra from India’s Meteorological Department confirmed IMD’s adoption of hybrid AI-physical models while acknowledging computational limitations, and Dr. Krishna Vatsa from NDMA outlined plans for massive expansion of observational networks but identified the challenge of processing exponentially increasing data volumes. The discussion concluded with recognition that while India has ambitious plans for AI-driven disaster management, significant investments in infrastructure, clear architectural frameworks, and collaborative approaches between government, private sector, and startups are essential for successful implementation at population scale.


Key Points

Major Discussion Points:

AI Integration with Physical Models for Weather Forecasting: The discussion emphasized that AI should complement, not replace, traditional physical weather models. Speakers highlighted the need for hybrid approaches that blend AI-driven insights with physics-based models to maintain trust and accuracy in meteorological forecasting.


Infrastructure and Resource Challenges: A critical gap was identified in India’s computational infrastructure capacity. While countries like the US have exaflop-level systems for real-time disaster management, India currently has less than 100 petaflops, creating limitations for implementing AI-driven early warning systems at the required scale.


Human-in-the-Loop Decision Making: Multiple panelists stressed the importance of maintaining human oversight in AI-driven disaster management systems, particularly for life-saving decisions. The discussion highlighted concerns about fully automated systems and the need for human verification, especially in sensitive early warning communications.


Data Integration and Interoperability: The conversation focused on the challenge of creating unified systems that can integrate diverse data sources across multiple agencies and governance levels. This includes developing digital public goods (DPGs) and ensuring sovereign data architectures work across federal and state government structures.


Last-Mile Implementation and Accessibility: Speakers addressed the need for AI systems that can function in low-connectivity, high-risk environments and reach vulnerable populations. This includes developing rugged, disconnected systems that can operate during disasters when traditional infrastructure may be compromised.


Overall Purpose:

The discussion aimed to explore how India can develop and institutionalize AI-enabled disaster risk reduction (DRR) systems at a national scale. The goal was to move beyond pilot projects to create sustainable, integrated AI frameworks for early warning systems, emergency response, and resilience building that can protect India’s 1.2+ billion citizens from increasing climate-related disasters.


Overall Tone:

The discussion maintained a collaborative and solution-oriented tone throughout, with participants sharing both challenges and opportunities. While there was acknowledgment of significant infrastructure and resource gaps, the tone remained optimistic about India’s potential to become a global leader in AI-driven disaster management. The conversation was technical yet accessible, with speakers building on each other’s insights to create a comprehensive view of the path forward. The tone became more urgent when discussing the scale of investment needed but remained constructive in proposing public-private partnerships and incremental implementation strategies.


Speakers

Speakers from the provided list:


Moderator – Session moderator facilitating the panel discussion on AI for disaster risk reduction


Avinash Ramtohul – Minister for Information Technology, Communication and Innovation, Republic of Mauritius; key contributor to national strategies for AI in resilient infrastructure and South-South cooperation


Beth Woodhams – Senior Manager, UK Met Office; specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction


Som Satsangi – Former SVP and Managing Director, Hewlett Packard Enterprise India; industry insights on AI deployment in geospatial and climate analytics


Nikhilesh Kumar – CEO and co-founder, Vassar Labs; innovator in leveraging AI for disaster risk reduction


Pankaj Shukla – Head of Customer Engineering, Google Cloud India; practical AI applications, hazard mapping, predictive analytics, and early warning systems scale-up


Dr. Mrutyunjay Mohapatra – Director General, India Meteorological Department (IMD); expertise in meteorological services and early warning systems


Dr. Krishna Vatsa – Member and Head of Department, National Disaster Management Authority (NDMA); national disaster management and policy


Additional speakers:


Dr. Kamal Kishore – Mentioned as respected official (specific role not detailed in transcript)


Adas Nand – Mentioned as respected official (specific role not detailed in transcript)


Krishnamurthy – Mentioned as respected official (specific role not detailed in transcript)


Martin – Referenced speaker from previous panel (specific details not provided)


Full session report

This comprehensive panel discussion examined the critical challenge of integrating artificial intelligence into disaster risk reduction (DRR) systems to build national resilience, with particular focus on how countries like India can develop scalable, AI-enabled early warning systems. The moderator established that while disasters are increasing in frequency, intensity, and complexity due to climate variability and urbanisation, unprecedented advances in AI offer new opportunities for resilience building. The central thesis presented was that “the next frontier in DRR is not better algorithms alone, it is institutionalizing AI within national resilience architecture,” requiring a fundamental shift from pilot projects to comprehensive national and global resilience systems.


Policy Frameworks and Governance Challenges

Minister Avinash Ramtohul from Mauritius provided a unique perspective by fundamentally expanding the conceptual framework of disaster risk reduction. He argued that disaster management must encompass both physical and virtual worlds, noting that “just like disaster can strike the physical world, disaster can also strike the virtual world” through cybersecurity attacks. He emphasized that “just like virus infects people virus also infects systems and virus get contagious in computers as well,” highlighting the vulnerability of AI systems to cyber threats.


The Minister stressed the critical importance of creating digital twins and thermal maps for emergency response, but strongly cautioned against complete automation in life-critical decisions, arguing that “100% automation in the field of AI where it concerns the lives of people can be dangerous.” His advocacy for “human in the loop” or “human on the loop” approaches became a recurring theme throughout the discussion. He referenced that “Prime Minister Modi ji also mentioned in his intervention yesterday” the necessity of keeping humans in decision-making processes for AI applications in disaster management.


For small island developing states, Ramtohul outlined policy reforms including cell broadcast systems planned for deployment in Mauritius, with human-verified messaging protocols to prevent the dissemination of false information that could create panic during emergencies.


Meteorological Integration and Trust Building

Beth Woodhams from the UK Met Office provided crucial insights into how national meteorological agencies can integrate AI while maintaining scientific rigour and public trust. She explicitly stated that machine learning weather models should complement rather than replace physics-based models, with implementation occurring through gradual blending approaches. The Met Office’s strategy involves “step-by-step implementation through hybrid models or blending outputs from both physics-based and machine learning models,” acknowledging that “we don’t know what the answer to this solution is yet” regarding optimal blending methods.
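The blending approach Woodhams describes can be illustrated with a minimal sketch. This is purely illustrative (the Met Office notes the optimal blending method is still an open question); the function name, grids, and weight are hypothetical:

```python
import numpy as np

def blend_forecasts(physics, ml, ml_weight=0.3):
    """Linearly blend a physics-based forecast grid with an ML forecast grid.

    ml_weight in [0, 1]: 0 keeps the pure physics model, 1 the pure ML model.
    A gradual rollout could raise ml_weight step by step as confidence grows.
    """
    if not 0.0 <= ml_weight <= 1.0:
        raise ValueError("ml_weight must be between 0 and 1")
    physics = np.asarray(physics, dtype=float)
    ml = np.asarray(ml, dtype=float)
    return (1.0 - ml_weight) * physics + ml_weight * ml

# Toy 2x2 temperature grids (deg C) standing in for model output fields.
physics_temp = np.array([[30.0, 31.0], [29.5, 30.5]])
ml_temp = np.array([[30.4, 31.2], [29.9, 30.3]])
blended = blend_forecasts(physics_temp, ml_temp, ml_weight=0.25)
```

In practice blending can also happen inside hybrid models or post-hoc per variable and lead time; the single scalar weight here only captures the "step-by-step increase" idea.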


Woodhams emphasized the importance of co-development with international partners, particularly highlighting the Met Office’s collaboration with India through programmes like WCSSP India and WISER Asia Pacific. She stressed that “sovereign capability remains important for public sectors,” but co-development of both models and benchmarking standards is essential. Her focus on developing standardised benchmarking and evaluation methods reflected concerns that current metrics may not address the needs most important to end users.


Infrastructure and Computational Challenges

Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, provided a stark reality check regarding India’s computational infrastructure capabilities. Despite ambitious plans under the National Supercomputing Mission launched in 2015 with ₹4,500 crore investment, India has developed only 37 supercomputers with 40 petaflops of capacity over the past decade. He contrasted this with United States systems like El Capitan (1.8 exaflops), Frontier (1.3 exaflops), and Aurora (1 exaflop), noting that one exaflop is close to a thousand petaflops, while the whole of India does not even have 100 petaflops of capacity today.
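The scale of the gap follows directly from the figures quoted in the session (1 exaflop = 1,000 petaflops); a back-of-the-envelope check:

```python
# Capacity figures as cited by Som Satsangi in this session.
PFLOPS_PER_EXAFLOP = 1_000

us_systems_exaflops = {
    "El Capitan": 1.8,
    "Frontier": 1.3,
    "Aurora": 1.0,
}
india_pflops = 40  # National Supercomputing Mission systems cited

us_total_pflops = sum(us_systems_exaflops.values()) * PFLOPS_PER_EXAFLOP
ratio = us_total_pflops / india_pflops
print(f"US systems combined: {us_total_pflops:.0f} PFLOPS")
print(f"India cited capacity: {india_pflops} PFLOPS ({ratio:.1f}x gap)")
```

Just these three US machines together represent roughly a hundredfold more capacity than the 40 petaflops cited for India, which is the quantitative core of Satsangi’s argument.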


This infrastructure gap represents a fundamental barrier to implementing real-time AI-driven early warning systems at the scale required for India’s population. Satsangi noted that each advanced system costs between $400-500 million to $1 billion, making government procurement challenging. Beyond computational capacity, he identified power, energy, and water as critical bottlenecks for advanced AI systems requiring hundreds of thousands of GPUs and CPUs.


Despite these challenges, Satsangi expressed confidence in India’s capability to deliver at scale, citing successful implementations such as UIDAI (Aadhaar) and COVID-era applications during his nine-year tenure. He argued that with proper infrastructure, procurement policy reforms, and public-private partnerships, India could leverage its capabilities to become a global leader in AI-driven disaster management.


Cloud Architecture and Edge Computing Solutions

Pankaj Shukla from Google Cloud articulated a comprehensive five-layer AI architecture spanning infrastructure, operating systems, platform services, models, and applications. He emphasized that effective disaster management requires transforming “chaotic reality on the ground into actionable intelligence” by consolidating fragmented data from multiple ministries and social media sources.


Shukla’s vision involved creating “living intelligence” through multi-modal AI models capable of processing structured and unstructured data at unprecedented speed. His proposed federated architecture enables central intelligence development while maintaining the ability to deploy applications at tactical locations that may become disconnected during disasters. This approach allows organizations to bring hyperscaler cloud capabilities to on-premises environments with air-gapped, zero-trust security.


The discussion highlighted the need for rugged devices containing essential AI models that can operate independently during disasters when connectivity is compromised, addressing the reality that disaster response often occurs in environments with limited connectivity.


Startup Innovation and Digital Public Goods

Nikhilesh Kumar from Vassar Labs demonstrated how startups can contribute to population-scale DRR through Digital Public Goods development. He outlined a four-layer framework encompassing hazard prediction, data integration, asset and people impact assessment, and workflow translation into actionable responses.


Kumar provided a compelling example of real-time dam monitoring systems that leverage AI with satellite and radar data to provide nowcasting for nearly one million water bodies. During cyclone season, approximately 5,000 dams received real-time monitoring, demonstrating AI’s potential for addressing critical infrastructure vulnerabilities. His work highlighted how AI can fill data gaps and provide hydraulic analysis for unregulated dams lacking traditional forecasting capabilities.


Additionally, Kumar addressed risk assessment challenges, proposing the use of AI to extract structured information from unstructured news reports and historical records, creating comprehensive databases of location-specific hazard frequency and intensity for insurance sectors and risk assessment frameworks.
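The extraction idea Kumar raises, turning unstructured news text into structured hazard records, can be sketched in miniature. A production system would use NLP or LLM-based extraction; this toy regex version is hypothetical throughout and only shows the target schema:

```python
import re
from dataclasses import dataclass

@dataclass
class HazardRecord:
    """One structured row for a location-specific hazard database."""
    hazard: str
    location: str
    year: int

HAZARDS = ("flood", "cyclone", "drought", "earthquake")

def extract_records(text):
    """Pull (hazard, location, year) triples from simple news sentences.

    Deliberately naive: matches only 'HAZARD hit/struck Place in YYYY'.
    """
    pattern = re.compile(
        r"(?P<hazard>" + "|".join(HAZARDS) + r")\s+(?:hit|struck)\s+"
        r"(?P<location>[A-Z][a-z]+)\s+in\s+(?P<year>\d{4})",
        re.IGNORECASE,
    )
    return [
        HazardRecord(m["hazard"].lower(), m["location"], int(m["year"]))
        for m in pattern.finditer(text)
    ]

news = "A severe flood struck Chennai in 2015. A cyclone hit Odisha in 1999."
records = extract_records(news)
```

Aggregating such records over decades of archives is what would yield the per-location frequency and intensity statistics Kumar describes for insurance and risk assessment uses.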


National Implementation Perspectives

Dr. Mrutyunjay Mohapatra from India Meteorological Department provided authoritative insights into national-scale implementation challenges. He contextualized the discussion within the UN and WMO’s “Early Warning for All” initiative, which aims to achieve 100% global coverage by 2027, noting that less than 50% of countries had adequate early warning systems when launched in 2022.


Dr. Mohapatra confirmed that IMD utilizes AI through hybrid models combined with physical models, emphasizing that “we cannot get away with the physical models because physical models provide you the physical understanding, the reasoning.” He highlighted that weather forecasting remains an initial value problem requiring accurate observational data.


A critical revelation was that only 5% of satellite data is currently usable due to quality issues, representing a massive untapped resource that AI could help exploit. With IMD currently operating “at least now 28 petaflops” of computing capacity, Dr. Mohapatra acknowledged computational limitations while noting that AI offers opportunities for affordable GPU-based solutions that could enable resource-constrained countries to access advanced forecasting capabilities.


Institutional Integration and Scaling Challenges

Dr. Krishna Vatsa from NDMA provided crucial insights into institutional challenges of scaling AI-driven disaster management systems. He outlined ambitious plans for expanding observational networks, stating that “every village in India will have an automated weather station” within five years, along with quadrupling seismometers and strong motion accelerographs for earthquake monitoring.


However, Dr. Vatsa identified a critical gap between data collection and actionable citizen-focused applications, noting that “it’s one thing to set up the observational network. The another thing is to how to collect the data, process the data and generate the information that could be used and more so when it comes to informing the common citizens.” He emphasized that systems are being developed for people affected by disasters, not for scientists.


The discussion revealed institutional uncertainty about optimal architecture for integrating data centres with individual early warning agencies. Dr. Vatsa explicitly stated that “we need more clarity” on justifying massive data centre investments when early warning agencies operate independently, highlighting the need for comprehensive governance frameworks.


Technical Consensus and Hybrid Approaches

Throughout the discussion, strong consensus emerged around hybrid approaches that blend AI capabilities with traditional physics-based models and human oversight. This reflected recognition that AI should augment rather than replace existing capabilities, maintaining scientific understanding and human judgment while leveraging AI’s data processing capabilities.


Technical challenges identified included data quality issues, computational infrastructure gaps, interoperability requirements across diverse governance ecosystems, and the need for explainable AI systems in life-critical decisions. Speakers emphasized the importance of co-development with international partners, standardized benchmarking methods, and gradual implementation strategies that build trust incrementally.


Conclusion and Future Directions

The panel successfully established a comprehensive framework for understanding both opportunities and challenges of integrating AI into national disaster risk reduction systems. The discussion evolved from broad conceptual frameworks to specific technical and institutional implementation challenges, emphasizing that successful AI integration requires coordinated approaches spanning technology, policy, governance, and international cooperation.


Key themes included the critical importance of maintaining human oversight in life-critical decisions, the need for hybrid approaches combining AI with physics-based models and human expertise, and recognition that massive infrastructure investments and institutional reforms are required for population-scale implementation. The panel concluded with recognition that India has the technical capability and experience to become a global leader in AI-driven disaster management, but achieving this vision requires coordinated action across government, private sector, academia, and international partners to address infrastructure gaps and develop appropriate governance frameworks that translate advanced AI capabilities into effective protection for vulnerable populations.


Session transcript

Moderator

defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters are increasing. Climate variability is compounding existing vulnerabilities. Urbanization is concentrating risk, and cascading hazards are challenging traditional response models. At the same time, we are witnessing unprecedented advances in AI. So, at this point of time, how does India bring or develop a model with AI for resilience? We believe that the next frontier in DRR is not better algorithms alone, it is institutionalizing AI within national resilience architecture. Thank you very much. Thank you very much. from pilot projects to national and global resilience systems. Before we start the discussion, let me invite and call on the stage for the panel discussion His Excellency Dr. Avinash Ramtohul, the Minister for Information Technology, Communication and Innovation from the Republic of Mauritius.

Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South-South cooperation. I would like to invite Ms. Beth Woodhams, Senior Manager from the UK Met Office. She is a specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction. Welcome. I would like to invite Mr. Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, with industry insights on AI deployment in geospatial and climate analytics. Welcome, Mr. Som. I would like to now call upon Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. He is an innovator in leveraging AI for DRR. Welcome, Nikhilesh. And lastly, Mr. Pankaj Shukla, Head of Customer Engineering, Google Cloud India, for Practical AI Applications, Hazard Mapping, Predictive Analytics, and EWS Scale-Up.

Thank you. Thank you. And focus on data integration during this panel discussion. So my question first to the Minister for IT, Communication and Innovation, Republic of Mauritius. Minister, the small island developing states face existential climatic threats. From your perspective, what policy reforms are required to institutionalize AI-enabled early warning and alerting systems within national governance frameworks? And how can countries with limited resources ensure sustainability in such ventures?

Avinash Ramtohul

Thank you and good morning, everybody. Thank you for the opportunity to be here amongst you. First of all, I would like to say a couple of points before I get into the actual response. Today, just like we have the physical world, we have a virtual world as well, in which we all live. And that virtual world is so much bigger than the physical world we can see here in front of us at the moment. And just like disaster can strike the physical world, and that is the scope of the discussion, disaster can also strike the virtual world. And as we grow in dependency on the virtual world, on our digital systems, we should be well aware that disaster is not just the flood, the cyclone, the drought.

Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very important that the scope of the discussions, when we look at disaster, be also extended to the virtual world and cybersecurity attacks. Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world. And I will explain myself. Just imagine, as we speak here, there is a big fire that breaks out in one organization. And because it broke out, there are, you know, automated connections that go to the fire services, to the medical services; they will proactively now start driving to this place. But when they come to this place, how would they know where the people are? Because their main objective is to save the lives of the people; secondary, the material. How would they know where the people are? Do they have a plan, a structural plan, of this space? Do they know where the pipes cross? Now, I am talking about a digital twin. It is really important that we create that digital twin, which will be the bridge between the physical world and the virtual world, and the architectural map of that digital twin should be accessible to a certain set of operators: the medical, the fire services. Now, this is part of the reform that we are looking at. And in a small administration, it becomes easier to do it, as opposed to a huge administration like India.

Now, there’s one more thing in there. As, let’s say, we also have the structural plan, how do we know where the people are? Can we have heartbeat indication? Can we have the thermal map of the place, so that we know wherever there’s 37, 38, 39 degrees (well, 37, 38 is better), do we know where the people are located, so that when the fire services come, they go straight to that spot? So this is very important.

And another reform that is important that we be aware of is that when there is some kind of a pandemic which is contagious, there is human-to-human virus transfer. Now, we are all very excited about artificial intelligence, but we are also aware that there is this possibility of virus infecting systems, right? And just like virus infects people, virus also infects systems, and virus gets contagious in computers as well; we all know that. Therefore, we need to also have mechanisms to protect, because if we have a message that goes through an early warning system to people, this already creates an alert in the minds of people; the adrenaline surge starts already. But if that message is infected, it can create a lot of disruption in our daily lives, and this we need to be very careful of. Therefore, in terms of reform, the decision-making process, and I think somebody mentioned this earlier in the previous panel, the decision-making process is automated now. And 100% automation in the field of AI, where it concerns the lives of people, can be dangerous.

Therefore, human in the loop or human on the loop is critical in these kinds of environments. And this is also part of what we are looking at in Mauritius. Yes, it’s true that as a small island developing state, we call it SIDS, we have our own set of flash floods that can actually occur. Within a couple of hours, we can have flash floods and we can see cars floating around already. And this has happened in the country. And we don’t want that to happen again. Therefore, there are early warning systems that we are deploying, like cell broadcast systems, which we have planned to deploy. Now, again, the message that goes into that system should be a message that is human verified.

That is, decisions like these that are sensitive, highly sensitive, cannot be 100% automated. That’s part of our policy as well, and we want to ensure that humans are involved, because machines cannot decide for humans; humans decide for machines. This needs to be treated as critical and given the attention that it deserves. And I believe our Prime Minister Modi ji also mentioned in his intervention yesterday that there is a great necessity to ensure that humans remain part of the decision-making process in the application of AI for disaster management. So these are a few points I wanted to mention. Thank you.

Moderator

Thank you, Minister, for your insights in developing resilient governance frameworks which are actually scalable across nations. And this is the way to go: a resilient system which is resilient even to cyber attacks, which is sustainable and meaningful, and which does not cause, say, alert fatigue, is very much vital for a robust system to be effective across all disasters. So we now come to the second panelist, Ms. Woodhams. My question to you is: the national meteorological agencies play a crucial role in operational forecasting and early warning delivery. From the perspective of the UK Met Office, how can AI complement physical weather and climatic models to improve forecast lead time and impact-based warnings, to gain public trust?

And what institutional partnerships are necessary to ensure that AI-driven meteorological insights translate into actionable decisions and actions at national and local levels, with special emphasis on low-resource countries?

Beth Woodhams

Hello? Yeah. Right. Thank you. Thank you for your question; it’s a real honour to be part of this panel. So at the Met Office we are currently developing machine learning weather models, and we absolutely do not see these as a replacement for our physical models. Our plan over the coming years is to step by step implement these models through blending. This could be hybrid models, physics-based and machine-learning-based; it could be blending the output from both of these models after they’ve run. The truth is we don’t know what the answer to this solution is yet, but in order to build the trust amongst the users of our models, the customers of our data, we’re certainly not going to have a complete shift.

We are going to do this step by step, increasing our blending as we become more confident with the data. So from this conference, you know, it’s very clear that companies from the private sector are developing these models; in the public sector, of course, we’re developing them too, and sovereign capability remains really important. But for public sectors we really need to have that co-development. At the Met Office we have a long history of co-developing with partners like India: through WCSSP India and through WISER Asia Pacific we have these partnerships, we’ve co-developed physics-based models, and we really want to do the same with machine learning models as well. At the Met Office we’re standardising our benchmarking and evaluation. We really want to make sure that when we’re doing comparisons between machine learning and physics-based models, we’re being focused on the same thing.

There’s a lot of metrics we can look at that show machine learning models are doing well, but are these the metrics that are most important to users? Therefore, not only do we want to co -develop the actual models with partners, we want to co -develop the benchmarking and the tests that we do on these models. Thank you.

Moderator

Thank you, Beth, for giving your insight into how the national meteorological agencies may plan to use AI in their systems. So now we move towards the technologies: how do we really create resilient systems for forecasting. So my first question would be to Mr. Som Satsangi. Private sector innovation has advanced rapidly. So the first and foremost question, I think, which comes to my mind is: how can technology providers design AI systems that are interoperable with sovereign data architectures? Because that is the crucial issue to be cracked. So we have to design AI systems that are interoperable with sovereign data architectures and compatible with diverse governance ecosystems. For a country like India, with the federal government and the state governments, this is a very vital nut to be cracked from the technology point of view. And similarly, what standards of explainability are necessary when AI informs life-saving decisions?

Som Satsangi

Thanks, Manish. Really a great question, and probably in this room I’ll be calling out something which is very, very important, because just when I walked in I heard Mr. Martin, and he spoke a couple of points which are so important and critical for a country like India. He spoke about the government. He spoke about the procurement policies and the scale. So all these three things are so important and critical when we look from India’s standpoint, with 1.2 billion plus citizens. I’ve been the managing director of Hewlett Packard Enterprise for the last nine years, and I’ve been involved in almost all large critical infrastructure projects, whether it’s UIDAI or any kind of transaction, COVID, all applications.

And we know that all these things we have developed at this scale and delivered. Probably, when we look, the most important aspect is human life, with the climate change and the disasters which are happening across the world, along the length and breadth of India on the coastal side. But somehow, are we ready to do it? I don’t think we are ready. And I’ll give you some pointers which are very important on why it’s not happening, to Mr. Martin’s point. India had a very ambitious plan for the National Supercomputing Mission way back in 2015, where India said, okay, we’ll be investing 4,500 crore to develop some of the supercomputers which will be high class.

But in the last 10 years, what we have developed is some 37 supercomputers with just 40 petaflops of capacity. Is that sufficient? Now we are planning to add another 50 petaflops. But look at the global level at the kind of infrastructure needed if we have to manage these alert and warning systems in real time, and I'll give you one or two examples from the United States. The top systems that have been developed and deployed to do these things have a capacity of almost one to two exaflops, and one exaflop is close to a thousand petaflops. In the whole of India we do not have even 100 petaflops today. In the US there are multiple systems which provide this real-time information: El Capitan, which is 1.8 exaflops, and the Frontier system deployed at Oak Ridge National Laboratory, which has 1.3 exaflops of capacity.

Aurora, recently deployed at Argonne National Laboratory, has one exaflop of power and capability. These are the kinds of systems deployed so that the power of AI can be harnessed in a real-time environment, whether it is geospatial data, satellite information, or any kind of live feed, analyzed with the help of AI in real time to provide alerts well ahead of the event. Somehow we are not able to provide that. So if we want early warning systems in India, I think our main focus needs to be on building the core infrastructure that will meet this requirement.
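The capacity comparison above is simple arithmetic (1 exaflop = 1,000 petaflops); a small illustrative script, using only the figures quoted in Som's remarks (the ratio helper and labels are my own):

```python
# Comparing aggregate compute capacities cited in the discussion.
# 1 exaflop = 1,000 petaflops.

PFLOPS_PER_EXAFLOP = 1000

systems_pflops = {
    "India (aggregate, ~37 systems)": 40,
    "El Capitan": 1.8 * PFLOPS_PER_EXAFLOP,
    "Frontier": 1.3 * PFLOPS_PER_EXAFLOP,
    "Aurora": 1.0 * PFLOPS_PER_EXAFLOP,
}

def ratio_to_india(name: str) -> float:
    """How many times larger a system is than India's cited aggregate."""
    return systems_pflops[name] / systems_pflops["India (aggregate, ~37 systems)"]

for name, pflops in systems_pflops.items():
    print(f"{name}: {pflops:.0f} PFLOPS ({ratio_to_india(name):.0f}x India)")
```

On these numbers, a single El Capitan-class machine is 45 times India's cited aggregate capacity, which is the gap the speaker is pointing at.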

And over the last couple of days, in every discussion with global CIOs and CEOs, this is what has come out: India needs the core infrastructure, which we have not yet developed. Now, we might say we are putting 10,000 crore into AI and so on, but that is distributed across a large number of tech and SMB players who are developing applications. What the government needs, because it is sovereign data, is to buy this kind of infrastructure itself. But I know each such system will cost anything between 400-500 million dollars and a billion dollars, and the government may not be able to spend that kind of money. So that is where public-private partnership becomes very, very critical.

So my request is that the department should look at how large global institutions and technology partners can bring in the core infrastructure and technology, because today technology is not the barrier; the barriers are infrastructure, scale, the procurement process, and some of these policies. How the various data will be integrated is a problem. If we can address these things at the scale India has achieved on the DPI side, where UIDAI, the best example at the global level, is used by more than 800-900 million citizens in the country, we can deliver this. We have the capability, and with the AI transformation that is happening, our Honourable Prime Minister has already said that India is going to leapfrog and become a global leader in the AI space, with all that technology embedded alongside the capability India already has.

What is required is the infrastructure, but infrastructure comes at a huge cost. And with the infrastructure comes another element: power, energy, and water. That is going to be very critical; somebody has to look at all three aspects. You can get the infrastructure, but it is useless if you don't have the power, so we need power sources that can run these kinds of systems, and alternative power resources are going to be very, very critical. These will all be water-cooled systems, because they will have hundreds of thousands of GPUs and CPUs running together, requiring huge power and huge water capacity. We need to plan for that.

So India needs to start thinking along those lines, if we are to protect our people and deliver the right early warning alerts to save the lives of millions of citizens in the country. Thank you.

Moderator

Thank you. DRR definitely also offers an opportunity for us to ponder this. Taking forward from Mr. Som, I'll go to Mr. Pankaj Shukla ji, head of customer engineering at Google Cloud AI. Cloud computing and AI platforms enable real-time analytics at scale. So what critical infrastructure investments are essential to support AI deployment in low-connectivity and high-risk environments? That is very vital, looking at the geography of our nation. And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk? Your insights on that.

Pankaj Shukla

Good afternoon, everyone. Irrespective of the technology, when we talk of disaster management and resilience, essentially what we are trying to do is turn the chaotic reality on the ground into actionable intelligence. For example, data is fragmented across multiple ministries, social media, and various other places; all of that first needs to be brought to one place, or at least you should have the ability to bring all of that data together and turn it into a living intelligence. Once the data, structured as well as unstructured, is there, today's multi-modal AI models have the ability to make sense of completely chaotic, noisy data and turn it into real intelligence at unimaginable speed. That is essentially what AI is all about. When it comes to the real implementation of this entire architecture, and the panelists spoke about multiple aspects of how we can use AI and how we actually implement it on the ground, if you look at what we need, AI broadly exists at five layers.

One is the infrastructure layer. Second is the operating system layer which runs on top of the infrastructure; I am not talking just about servers and data centers, but an operating system layer that scales from a central location to edge locations to multiple regional locations. Third, on top of that, are the platform services required to build AI applications and make use of the right models. Fourth are the models themselves: the multi-modal models like Gemini and those from various other hyperscalers, and, for example under the IndiaAI Mission, the models that many Indian providers and other companies, as ma'am mentioned, are building.

The question, then, is how we make use of this diverse set of models in a dynamic manner, use agentic AI on top of them, and, as the fifth layer, build applications that turn all of this into real action which can be disseminated where we want it: proactively, during the response, and after the response. So how do we implement it? Implementation will require a framework, an architecture, with a central living intelligence over all the data, on which you experiment, pre-train and tune models of different types, and build applications. The real application of that is going to happen at a place which might get completely disconnected from the central location.

So you should be able to build all of these AI applications, make use of that data, maintain a single source of truth centrally, but have the ability to send that intelligence back to a tactical location. Today this is entirely possible. Organizations, Google among them, are working to bring all the goodness of the hyperscaler cloud, the entire infrastructure and managed-services layer as well as the AI tooling, on-premises, with the ability to run it in a completely disconnected, air-gapped environment in a zero-trust manner, so you have the security of your data and applications. You should also be able to connect the edge locations to the central place in a federated manner; and if required, during a disaster, you should be able to carry a rugged device that holds a small subset of the central intelligence, with the necessary models, to take action on the ground. That action could be finding out where your assets are, where the maximum impact has happened, or how to send information to various places.

All of those things are absolutely possible today. While a huge amount of infrastructure is certainly required to train and build models, that is happening across the country; what we need is the ability to take the good models, the right models for the right task, run them on-premises on a smaller set of infrastructure, and distill a smaller set that can run in a tactical location with very limited infrastructure and compute. That is what we need.
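The "central truth, tactical edge" pattern described here can be sketched minimally as follows. This is an illustrative toy, not a Google Cloud API: every class and method name is hypothetical, and real systems would add conflict resolution, authentication, and versioning.

```python
# Minimal sketch: an edge node caches a subset of central intelligence,
# keeps acting while disconnected, and reconciles when the link returns.

from dataclasses import dataclass, field

@dataclass
class CentralStore:
    """Single source of truth held at the central location."""
    records: dict = field(default_factory=dict)

    def publish(self, key, value):
        self.records[key] = value

@dataclass
class EdgeNode:
    """Tactical node that must keep operating when disconnected."""
    cache: dict = field(default_factory=dict)
    pending: dict = field(default_factory=dict)  # actions taken while offline
    online: bool = False

    def sync_down(self, central: CentralStore):
        # Pull the latest central intelligence while connectivity exists.
        if self.online:
            self.cache.update(central.records)

    def act(self, key, value):
        # Works regardless of connectivity; queued for later reconciliation.
        self.cache[key] = value
        self.pending[key] = value

    def sync_up(self, central: CentralStore):
        # Push queued field observations back once the link returns.
        if self.online:
            for key, value in self.pending.items():
                central.publish(key, value)
            self.pending.clear()

central = CentralStore()
central.publish("flood_model", "v3")

edge = EdgeNode(online=True)
edge.sync_down(central)            # pull intelligence while connected
edge.online = False                # disaster: link goes down
edge.act("impact_zone", "ward-7")  # still able to act locally
edge.online = True
edge.sync_up(central)              # reconcile once connectivity returns
```

The design choice the speaker emphasises is visible in `act`: the edge node never blocks on the central store, so the system degrades to local operation rather than failing outright.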

Moderator

Thank you, Pankaj, for giving Google's insight into building rugged systems and deploying AI solutions at scale in low-connectivity, high-risk environments. Another contribution to DRR, particularly to AI deployment in DRR, can come from startups, and we have Mr. Nikhilesh Kumar, CEO and founder of Vassar Labs. Nikhilesh, can you enlighten us on how startups can contribute to developing DPGs at population scale for DRR, particularly for countries like India?

Nikhilesh Kumar

The modeling layer, which transforms the data into various insights and hazards. The asset and people layer, which is what gets impacted, where we need to know today, with personalization and precision, exactly what we talked about: if a flood is coming, which road, which houses; if a landslide is coming, which area. And the fourth, which is most important, is the workflow layer that translates insights into actions. This is where we see a role for DPI and DPG, because these four layers are not built by one actor; they draw on data scattered across various agencies. We need DPIs and DPGs built across this data, right from the institutions bringing meteorological data, to institutions bringing water and other asset-related data, to institutions creating different layers, like the Survey of India in India's case, on earthquakes and various other hazards.

We also see AI playing a role today, and I will give an example. As extreme events increase, one of the first points of pressure is the water sector, where we see extreme floods and sudden gushes of water coming into large dams, and dams are among the most extreme and vulnerable assets we are all exposed to. For the large dams we perhaps have a good handle on control during a disaster, but we have a big number of dams in the country that are unregulated, scattered in large numbers, with no forecast available for them.

So how do we churn out forecasts in near real time, both hourly and at daily scale, for close to 1 million water bodies which can become vulnerable at any time? One solution we recently saw was bridging that data gap using AI: AI sitting on real-time satellite data at 30-minute intervals and on radar data, all currently available from IMD, and translating that into a nowcast. That nowcast layer is then translated through hydraulics to each of the dams, and in the cyclone month close to 5,000 dams were covered in real time. Such use cases show data getting connected, available in real time in an interoperable format, with players who can then translate this data into actions.
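The nowcast-to-dam pipeline described above can be sketched crudely as rainfall nowcast, then a per-dam hydraulic step, then an alert decision. Everything below is invented for illustration: the nowcast values, the runoff coefficient, the 90% threshold, and the dam records are all placeholders, not Vassar Labs' actual method.

```python
# Toy pipeline: nowcast -> per-dam inflow estimate -> alert list.

def nowcast_rainfall(cell_id: str) -> float:
    """Stand-in for a satellite/radar-driven nowcast (mm over next hour)."""
    return {"cell-a": 55.0, "cell-b": 8.0}.get(cell_id, 0.0)

def inflow_estimate(rain_mm: float, catchment_km2: float,
                    runoff_coeff: float = 0.5) -> float:
    """Very crude hydraulic step: rainfall -> inflow volume (10^3 m^3)."""
    # 1 mm of rain over 1 km^2 equals 1,000 m^3 of water.
    return rain_mm * catchment_km2 * runoff_coeff

def alerts(dams, fill_threshold=0.9):
    """Flag dams whose projected storage exceeds a fill threshold."""
    flagged = []
    for dam in dams:
        inflow = inflow_estimate(nowcast_rainfall(dam["cell"]),
                                 dam["catchment_km2"])
        projected = dam["storage"] + inflow
        if projected > fill_threshold * dam["capacity"]:
            flagged.append(dam["name"])
    return flagged

dams = [
    {"name": "dam-1", "cell": "cell-a", "catchment_km2": 10,
     "storage": 400.0, "capacity": 500.0},   # volumes in 10^3 m^3
    {"name": "dam-2", "cell": "cell-b", "catchment_km2": 10,
     "storage": 100.0, "capacity": 500.0},
]
print(alerts(dams))  # dam-1 is near capacity under heavy nowcast rain
```

Run hourly over a registry of water bodies, this shape of loop is what "5,000 dams covered in real time" amounts to; the hard parts in practice are the nowcast and hydraulic models this sketch stubs out.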

I see the contribution as bringing such platforms to national and state scale, with these use cases packaged and made available for the different recipient departments to translate into actions. And one more thing I would like to add, taking this as a forum, sir. Risk assessment and risk reduction both have a very big gap when it comes to data, especially for events such as earthquakes and other types of disasters where historic parametric measurements have not been available, and knowing the location-specific frequency of these hazards has been lacking because you don't have a database. Now AI can play a very good role here, because the information is lying in a lot of news reports, which contain unstructured information on the location, on the hazard, and on the damage it caused.

AI can actually uncover this information and create structured datasets, hazard by hazard, and these will also feed into various DPGs that will further unlock the insurance sector, which will benefit from knowing the location-specific intensity and frequency of the risks. I will close with that, sir.
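The news-to-structured-data idea can be illustrated with a deliberately simple keyword-and-regex sketch. A real system would use NER or LLM-based extraction; the hazard list, patterns, and sample text below are purely illustrative.

```python
# Toy extraction of structured hazard records from unstructured news text.

import re

HAZARDS = ["flood", "landslide", "earthquake", "cyclone"]

def extract_events(text: str):
    """Pull (hazard, location, year) records out of free text."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        hazard = next((h for h in HAZARDS if h in sentence.lower()), None)
        location = re.search(r"\bin ([A-Z][a-zA-Z]+)", sentence)
        year = re.search(r"\b(19|20)\d{2}\b", sentence)
        if hazard and location:  # keep only sentences naming both
            events.append({
                "hazard": hazard,
                "location": location.group(1),
                "year": int(year.group(0)) if year else None,
            })
    return events

news = ("A severe flood in Chennai during 2015 damaged thousands of homes. "
        "Officials reviewed budgets. A landslide in Wayanad blocked roads.")
print(extract_events(news))
```

Aggregated over archives, records of this shape are exactly what gives the location-specific frequency and intensity estimates the speaker says the insurance sector lacks.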

Moderator

Thank you, Nikhilesh. You have aptly summarized that startups in this sector can definitely play a vital role, particularly in developing rugged AI systems for India at population scale. We have now heard the panelists; from the Indian perspective, since we are running large systems, we would also like all the members to benefit from insights into how the national systems function in India and how technology is being deployed at scale for DRR. So first I would like to get insights from Dr. Mrutyunjay Mohapatra, DG of IMD, who can elaborate on how robust AI-based systems are being deployed at population scale in the Indian context.

Dr. Mohapatra?

Dr. Mrutyunjay Mohapatra

Namaskar. Good morning to all of you. Respected Dr. Kamal Kishore sir, Adas Nand sir, our Krishnamurthy sir, distinguished panelists, delegates, friends and colleagues. At the outset, I congratulate NDMA for organizing this session, which has given much food for thought to each of us represented here. I'll start with the initiative taken up by the UN and the WMO: the clarion call given in 2022 for Early Warnings for All. Early warning for all means all countries, all people, all sectors, all strata of society. When that call came, actually less than 50% of countries had early warning systems in place.

Now the number is increasing, but the time is short: by 2027 we have to achieve 100%. Early Warnings for All is a long-term goal, and if we review now, we find that over these last five years there has been a huge jump in technology, and AI is one such technology helping extend early warning to all. Looking at its various components, you first need risk knowledge at each and every point, as our friend Nikhilesh told us. It is not possible with the existing network of any country to have risk knowledge at every location, but at the same time there is unstructured data, as was said, which can be utilized to create that knowledge, the risk, hazard, and vulnerability assessment; historical knowledge which can be utilized in real time when you go for the prediction of any severe weather event. The next component is the early warning itself.

On the early warning side, you will see there has also been a huge jump in recent years with the inclusion of many AI-based models. You will find that each and every large, established NMHS is utilizing AI, and IMD is also utilizing AI for taking decisions with respect to early warnings. At the same time, I will tell you, AI has come up as a hybrid model alongside the physical models. We cannot do away with the physical models, because physical models provide the physical understanding, the reasoning, and hence human knowledge gets into the picture with the help of these physical models. Therefore, AI has to be suitably connected with the physical models.

That is what everyone is doing, from the European Centre to the Indian centres, and there have been many collaborations and integrations towards that. After that, if you look at the basic backbone, which is the modeling: the modeling starts with the basic premise that weather forecasting is an initial value problem. You cannot give a weather forecast if you do not know the initial state of the Earth, ocean, and atmosphere. That is already defined in the physical modeling system; unless you improve the initial data, with all types of observational tools and techniques, you cannot improve the weather forecast. Therefore, collecting or creating data with the help of AI will also go a long way in improving not only the AI models but also the physical and hybrid models.

Once you have good data, its quality can be improved with the help of AI. I'll tell you: from satellites we get a lot of data, but only five percent of it is usable; the rest we cannot use because of quality issues. Then there is the quantity: you cannot accommodate all types of data in the physical modeling system because, as our friends have been saying, you need infrastructure, and we do not have the computational infrastructure to utilize 100% of the satellite data. So yes, it is true that in India we do not have sufficient computing infrastructure; we have at least 28 petaflops now in IMD, and outside, with the National Supercomputing Mission, more has come up, but that is not sufficient. Therefore there is scope for public engagement to augment the computational and other digital infrastructure. But at the same time there is another opportunity because of AI: the box model has come up. A poor, small island nation cannot venture, or even dream, of having a high-performance computing system, but it can go for an AI system, a box model, where with the help of a few GPU nodes it can produce forecasts. That has come up and will grow gradually, and we will have affordable early warning with the help of these GPU-based, AI-driven or data-driven models.

After that comes the forecast. Once you come to the forecast, we now have an AI consensus as well; the physical consensus plus the AI consensus together give you the final forecast. Then finally you go to the sectoral applications. There is huge scope here, with the improvement of economic and societal conditions in every country, to improve decision-making for each and every sector, and there AI/ML can play a role. So I urge all the industries, academia, R&D, and think tanks to collaborate with NMHSs, especially with the India Meteorological Department and other organizations here, for authentic, specific, and judicious utilization of AI with the limited but reasonable resources available in the country.

So thank you very much.
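The "physical consensus plus AI consensus" step Dr. Mohapatra describes can be sketched as a weighted blend of two ensemble means. The weights, member values, and function names below are illustrative only, not IMD's operational scheme.

```python
# Toy sketch of blending physics-based and data-driven forecast ensembles.

def consensus(members):
    """Equal-weight consensus (mean) of one model family's forecasts."""
    return sum(members) / len(members)

def blended_forecast(physical_members, ml_members, w_physical=0.6):
    """Weighted blend of the two consensus values (weights sum to 1)."""
    phys = consensus(physical_members)
    ml = consensus(ml_members)
    return w_physical * phys + (1 - w_physical) * ml

# Hypothetical 24 h rainfall forecasts (mm) from each model family.
physical = [42.0, 38.0, 40.0]   # physics-based ensemble members
ml = [50.0, 46.0]               # data-driven model runs

print(blended_forecast(physical, ml))  # 0.6 * 40.0 + 0.4 * 48.0 = 43.2
```

In practice the weight would be tuned from verification statistics rather than fixed, which is also where the common benchmarking Beth Woodhams called for comes in.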

Moderator

Thank you, Dr. Mohapatra, for your valuable insight. Now, since NDMA is the apex national body which will integrate all these varied systems into a rugged AI system, I would like the entire audience to benefit from the vision of NDMA, from Member and HOD Dr. Krishna Vatsa. Sir, can you please elaborate on how NDMA intends to take this forward to create a sustainable, low-cost, at-scale model for the country?

Dr. Krishna Vatsa

Thank you very much for giving me this platform. I would like to mention that a huge amount of data already exists for almost all the hazards. Look at earthquakes: we record all the micro-earthquakes for the entire country, and even the data for earthquakes below magnitude 3 can give us a very good indication of the kind of earthquakes we can experience in the Himalayas and other regions. And the availability of data is going to increase exponentially as we invest in observational networks: almost every mitigation program we run now includes a significant early warning component.

In the next five years or so, every village in India will have an automated weather station. We will have a large number of instruments for measuring landslides, and we are going to at least quadruple the seismometers and strong-motion accelerographs. So we will be investing a huge amount of money in improving our observational networks across the hazards, which means we will have access to a still larger amount of data. What is important is that we need the capacity to process that data, apply the AI models, and improve the precision of early warning. That is the sphere where we are struggling right now. It is one thing to set up the observational network.

It is another to collect the data, process it, and generate information that can be used, more so when it comes to informing the common citizen. Serving scientists is one thing; we are getting a huge amount of data, but we are not doing this for scientists, we are doing it for the people who get affected by disasters. So how do we go about it? The roadmap is not sufficiently clear, and I keep talking to all kinds of people. Somebody will come and say: set up a huge data center. Fine, great. But people also say: if you are setting up a huge data center and you are not really empowering all the early warning agencies, then how are you going to justify the investment in data centers?

The data comes to individual agencies. How do the data center and the individual early warning agencies interact so that we have a good model? And we do not have unlimited resources. So the point is, this is where we need more clarity: how do we use our existing networks to improve precision in early warning and risk information, through a gradual, incremental way of building capacity that of course includes the data center and improving our connection with LLM models. But it is also very, very important that we find a way of improving the overall architecture. That is one area where we are struggling and where we need some guidance. Thank you very much.

Moderator

Thank you, sir. I think we are coming to the close of the discussion. I'll request Krishna Vatsa sir to please present the mementos to our panelists. I would also request all our dignitaries in the front row, once the mementos are done, to stay for a quick photograph, and then we vacate the room. I'll request the leadership from the states of Tamil Nadu, Andhra Pradesh, and Telangana also to come to the front for the photograph. We are very happy to note that most of the states are also represented through their State Disaster Management Authorities. Thank you very much.

A

Avinash Ramtohul

Speech speed

153 words per minute

Speech length

918 words

Speech time

358 seconds

Digital‑twin bridge between physical and virtual worlds

Explanation

Avinash calls for policy reform to create a digital twin that links the physical environment with its virtual counterpart, providing an architectural map accessible to emergency operators and improving early‑warning coordination.


Evidence

“Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world” [1]. “and because it broke out there are you know automated connections that go to the fire services to the medical services … I’m talking about a digital twin it is really important that we create that digital twin which will be the bridge between the physical world and the virtual world and the architectural map of that digital twin should be accessible to a certain set of operators the medical the fire services” [2].


Major discussion point

Policy and Governance for AI‑Enabled Early Warning Systems


Topics

The enabling environment for digital development | Artificial intelligence | Data governance


Mandatory human verification of AI‑generated alerts

Explanation

He stresses that any AI‑produced warning must be vetted by a human before dissemination to avoid misinformation, cyber‑risk, and the dangers of fully automated decision‑making.


Evidence

“Now, again, the message that goes into that system should be a message that is human verified” [17]. “intelligence but we are also aware that there is this possibility of virus infecting systems … automation can be and 100% automation in the field of AI where it concerns the lives of people can be dangerous” [21]. “and as well and want to ensure that humans are involved because machines cannot decide for humans human decide for machines and this needs to be critical” [27].


Major discussion point

Policy and Governance for AI‑Enabled Early Warning Systems


Topics

Building confidence and security in the use of ICTs | Artificial intelligence | The enabling environment for digital development


B

Beth Woodhams

Speech speed

154 words per minute

Speech length

385 words

Speech time

149 seconds

Hybrid blending of physics‑based and machine‑learning weather models

Explanation

Beth outlines a step‑by‑step plan to blend machine‑learning models with traditional physics‑based forecasts, emphasizing that ML will augment, not replace, existing models.


Evidence

“This could be hybrid models, physics based and machine learning based” [12]. “Our plan over the coming years is to step by step implement these models through blending” [31]. “We are currently developing machine learning weather models and we absolutely do not see these as a replacement for our physical models” [32]. “We are going to do this step by step increasing our blending” [33]. “It could be blending the output from both of these models after they’ve run” [34].


Major discussion point

Integration of AI with Physical Models and Building Trust


Topics

Artificial intelligence | Capacity development | Data governance


Co‑development of models and benchmarking standards with partner agencies

Explanation

She calls for joint development of AI and physics models together with common benchmarking and evaluation frameworks, citing collaborations with India and other partners.


Evidence

“Therefore, not only do we want to co-develop the actual models with partners, we want to co-develop the benchmarking and the tests that we do on these models” [41]. “… we have a long history of co-developing with partners like india … we really want to do the same with machine learning models as well … we really want to make sure that when we’re doing comparisons between machine learning and physics-based models we’re being focused on the same thing” [42].


Major discussion point

Integration of AI with Physical Models and Building Trust


Topics

Data governance | Capacity development | Artificial intelligence


S

Som Satsangi

Speech speed

147 words per minute

Speech length

983 words

Speech time

398 seconds

Exaflop‑scale sovereign supercomputing capacity; private‑public partnership

Explanation

Som argues that AI‑driven early warning requires exaflop‑level computing, which is costly, and recommends leveraging global technology partners and public‑private partnerships to provide the core infrastructure.


Evidence

“The top which have been developed and deployed to do these things, they have got a capacity of almost one to two exaflop” [58]. “Only what is required the infrastructure, but infrastructure will come with a huge cost” [64]. “my request is that probably department should be looking how the large global institution and technology partner can bring the core infrastructure and technology, because today technology is not a barrier” [65].


Major discussion point

Technical Infrastructure, Interoperability, and Architecture for AI in DRR


Topics

Artificial intelligence | Financial mechanisms | The enabling environment for digital development


Private‑sector expertise for large‑scale infrastructure, procurement & standards

Explanation

He highlights the need for private‑sector knowledge to manage procurement, set standards, and deliver sovereign data infrastructure for AI‑enabled early warning.


Evidence

“It’s an infrastructure and the scale and the procurement process and some of these policies” [51]. “But what government needs, because it’s a sovereign data, that government need to buy this kind of infrastructure” [55].


Major discussion point

Private‑Sector and Startup Contributions to AI‑Driven DRR


Topics

Financial mechanisms | The enabling environment for digital development | Data governance


P

Pankaj Shukla

Speech speed

161 words per minute

Speech length

761 words

Speech time

283 seconds

Layered AI architecture with federated edge, zero‑trust, offline rugged devices

Explanation

Pankaj describes a multi‑layered AI stack—infra, OS, services, models, applications—designed to run on‑prem, in air‑gap mode, and on rugged edge devices that can operate disconnected from the cloud.


Evidence

“One is the infrastructure layer” [9]. “Second is the operating system layer which runs on top of infrastructure” [62]. “for the entire infrastructure and managed services layer as well as AI tooling to on‑prem and then ability to also run those in a completely disconnected air gap environment in a zero trust manner” [67]. “Then on top of that the services which are required, platform services which are required to basically build the AI applications” [68]. “… you should be able to carry a rugged device which basically has a basic small set of central intelligence sitting into that with all these necessary models to basically take action on the ground” [69]. “Implementation of this will require a framework or an architecture which essentially has a central living intelligence of all the data” [71]. “I am not talking about just servers and data centers but an operating system layer which scales from a central to an edge location to a multiple regional locations” [74].


Major discussion point

Technical Infrastructure, Interoperability, and Architecture for AI in DRR


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs


Cloud‑based scalable AI services; real‑time analytics and last‑mile dissemination via edge devices

Explanation

He advocates using hyperscaler cloud and multi‑modal AI to ingest fragmented data, generate real‑time insights, and push intelligence to tactical edge locations for last‑mile delivery.


Evidence

“So you have the security of your data and your applications and then also you should have an ability to connect the edge locations in a federated manner to the central place but if required during a disaster you should be able to carry it a rugged device…” [69]. “So you should be able to actually build all of these applications, AI applications, make use of that data, make a single central single truth centrally but ability to send that intelligence back to a tactical location” [73]. “Basically bring all the goodness of hyperscaler cloud” [88].


Major discussion point

Private‑Sector and Startup Contributions to AI‑Driven DRR


Topics

Artificial intelligence | Information and communication technologies for development | Building confidence and security in the use of ICTs




Dr. Mrutyunjay Mohapatra

Speech speed

171 words per minute

Speech length

982 words

Speech time

344 seconds

Hybrid AI‑physical forecasting framework

Explanation

He argues that AI should be coupled with physics‑based models, creating a consensus between AI and physical predictions to enhance early‑warning accuracy.


Evidence

“And hence, the human knowledge gets into the picture with the help of these physical models” [5]. “So therefore, here, by collecting or by creating the data with the help of AI also will go a long way in improving not only the AI models, but also the physical models and the hybrid models” [11]. “At the same time, I will tell you, AI has come off as a hybrid model along with the physical models” [14]. “So therefore, AI has to be suitably connected with the physical models” [30]. “But physical consensus plus AI consensus, then again you will go for the final forecast” [36].
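The "physical consensus plus AI consensus" idea can be illustrated with a minimal sketch. This is not IMD's operational scheme; the ensemble members, the mean-based consensus, and the blending weight are all illustrative assumptions.

```python
# Hypothetical sketch of a hybrid physics + AI consensus forecast.
# Values and weights are illustrative, not IMD's actual method.

def consensus(forecasts):
    """Simple ensemble consensus: the mean of the member forecasts."""
    return sum(forecasts) / len(forecasts)

def hybrid_forecast(physics_members, ai_members, ai_weight=0.5):
    """Blend the physics consensus with the AI consensus.

    In practice ai_weight would be tuned from past verification
    scores; here it is a fixed illustrative parameter.
    """
    p = consensus(physics_members)
    a = consensus(ai_members)
    return (1.0 - ai_weight) * p + ai_weight * a

# Example: 24-hour rainfall forecasts (mm) from two model families.
physics = [42.0, 38.0, 40.0]   # physics-based ensemble members
ai      = [35.0, 37.0]         # ML-based ensemble members
print(hybrid_forecast(physics, ai, ai_weight=0.4))
```

The design choice mirrors the argument in the evidence: the physics ensemble keeps the physical reasoning in the loop, while the AI ensemble contributes skill, and a forecaster makes the final call on the blended output.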


Major discussion point

Integration of AI with Physical Models and Building Trust


Topics

Artificial intelligence | Capacity development | Data governance


AI‑driven extraction of hazard information from unstructured news

Explanation

He highlights that AI can parse unstructured news sources to generate structured, location‑specific hazard datasets, feeding risk assessments and insurance products.


Evidence

“Now AI can play a very good role here where the information is lying in lots of news which has happened and which contains unstructured information on location, unstructured information on the hazard that has impacted the damages” [95]. “AI can actually uncover these informations and create a structured data sets each hazard wise and these will also feed into various DPGs” [93].
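Turning unstructured news into hazard-wise structured records can be sketched with simple pattern matching. A production system would use NER or LLM-based extraction; the hazard list, regexes, and sample snippet below are toy assumptions.

```python
# Illustrative sketch: extracting a structured hazard record from a
# free-text news snippet. The patterns here are deliberately simple
# stand-ins for real NER / LLM extraction pipelines.
import re

HAZARDS = ["flood", "cyclone", "landslide", "drought", "earthquake"]

def extract_hazard_record(text):
    """Return a {hazard, location, houses_damaged} dict, or None."""
    lowered = text.lower()
    hazard = next((h for h in HAZARDS if h in lowered), None)
    if hazard is None:
        return None
    loc = re.search(r"\bin ([A-Z][a-zA-Z]+)", text)      # crude place name
    dmg = re.search(r"(\d+)\s+houses", lowered)           # damage count
    return {
        "hazard": hazard,
        "location": loc.group(1) if loc else None,
        "houses_damaged": int(dmg.group(1)) if dmg else None,
    }

snippet = "Flash flood in Guwahati damaged 240 houses on Tuesday."
print(extract_hazard_record(snippet))
```

Records of this shape, accumulated hazard-wise, are what could feed the risk-assessment DPGs and insurance products mentioned in the explanation.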


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Artificial intelligence | Data governance | Capacity development


Enhancing satellite data quality and initial condition datasets using AI

Explanation

He notes that only a small fraction of satellite data is usable, but AI can improve its quality and make it suitable for forecasting.


Evidence

“Once you have the good data, the quality of the data can be improved with the help of AI” [90]. “Five percent of the data from satellite is usable” [101]. “We cannot use the data from satellites because of the quality” [102].


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Artificial intelligence | Environmental impacts | Data governance


AI “box” models for affordable forecasting in low‑resource nations

Explanation

He proposes lightweight AI box models that run on a few GPU nodes, enabling small or poor countries to generate forecasts without high‑performance supercomputers.


Evidence

“… a poor small island nation cannot venture or cannot even dream to have a high performance computing system, they can go for AI system, a box model has come up where you can give it a small island nation and there with the help of a few GPU nodes they can have the forecast” [103].


Major discussion point

Sustainability and Low‑Resource Contexts


Topics

Artificial intelligence | Closing all digital divides | The enabling environment for digital development



Dr. Krishna Vatsa

Speech speed

126 words per minute

Speech length

507 words

Speech time

240 seconds

Scaling data‑processing capacity and linking data centres with early‑warning agencies

Explanation

He stresses the need for sufficient processing power and clear integration between data centres and warning agencies to deliver precise, actionable alerts.


Evidence

“What is important is that we need to have that capacity to process the data and apply the AI models and improve the precision of early warning” [24]. “How do the data center and the individual early warning agencies interact so that we have a good model available?” [44]. “But people also say if you are setting up a huge data center and you are not really empowering all the early warning agencies, then how are you going to justify the investment in data centers?” [76]. “How do we go about using our existing networks to improve the precision in early warning risk information through a gradually incremental way of building capacities” [77].


Major discussion point

Technical Infrastructure, Interoperability, and Architecture for AI in DRR


Topics

Artificial intelligence | Data governance | Capacity development


Expanding observational networks and improving data‑processing pipelines for population‑scale early warnings

Explanation

He outlines major investments in observational infrastructure to exponentially increase data availability, which will enhance early‑warning precision for large populations.


Evidence

“It’s one thing to set up the observational network” [105]. “And this data, the availability of data is going to increase exponentially as we are investing in the observational networks” [106]. “So we will be investing a huge amount of money in improving our observational networks across the hazards, which will mean that we will have access” [107].


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Environmental impacts | Artificial intelligence | Information and communication technologies for development



Moderator

Speech speed

87 words per minute

Speech length

1167 words

Speech time

797 seconds

Ensuring AI‑driven dissemination reaches last‑mile users while mitigating misinformation risk

Explanation

The moderator raises the challenge of using AI to deliver alerts to remote populations without spreading false or misleading information.


Evidence

“And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk?” [16].


Major discussion point

Policy and Governance for AI‑Enabled Early Warning Systems


Topics

Building confidence and security in the use of ICTs | Information and communication technologies for development



Nikhilesh Kumar

Speech speed

128 words per minute

Speech length

627 words

Speech time

293 seconds

Platform‑level integration for actionable AI use cases

Explanation

Nikhilesh stresses that AI platforms deployed at national and state levels must package use cases and make them available to various recipient departments, turning interoperable data into concrete disaster‑response actions.


Evidence

“I see that contributions where such platforms are brought at national and state scale and these use cases are also packaged and made available for different recipient departments to translate them into actions.” [3]. “So such use cases you see is where data is getting connected is available in real time in an interoperable format and then there are players who can translate this data into actions.” [15].


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Data governance | Artificial intelligence | Capacity development


Layered data architecture across agencies

Explanation

He points out that disaster‑related data is scattered across many agencies and requires a dedicated modeling layer to transform raw inputs into hazard insights, leveraging meteorological, water‑resource and asset datasets from multiple institutions.


Evidence

“They are looking to data scattered across various agencies.” [4]. “The modeling layer, which is transforming the data into various insights, hazards.” [5]. “different institutions which are bringing metrological data, some institutions which are bringing water and other asset related data, some institutions which are creating different layers like survey of India in case of India on earthquake and various other layers.” [9].


Major discussion point

Technical Infrastructure, Interoperability, and Architecture for AI in DRR


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Real‑time nowcasting using AI on satellite and radar feeds

Explanation

Nikhilesh highlights that AI can bridge data gaps by ingesting 30‑minute interval satellite imagery and radar feeds to produce immediate nowcasts, enhancing early‑warning capabilities.


Evidence

“one of the solutions that we recently saw was translating and making leveraging actually the data gap utilizing AI and AI sitting on real time satellite data 30 minutes interval, radar data coming from, currently all these assets are available from IMD and translating that into a nowcast.” [10].
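A nowcast built on fixed-interval observations can be illustrated with a trend-extrapolation toy. Operational nowcasting uses optical flow or learned advection over full satellite and radar fields; the linear extrapolation and sample values below are assumptions that only show the "recent frames in, short-range forecast out" shape.

```python
# Toy nowcast: linear extrapolation of rain-cell intensity from the
# two most recent 30-minute observations. Illustrative only; real
# systems advect whole precipitation fields, not a single value.

def linear_nowcast(obs, steps_ahead=1):
    """obs: list of (minutes, intensity_mm_per_h) at 30-min spacing.
    Extrapolates the most recent trend forward by steps_ahead frames."""
    (t0, v0), (t1, v1) = obs[-2], obs[-1]
    rate = (v1 - v0) / (t1 - t0)        # intensity change per minute
    dt = 30 * steps_ahead
    return max(0.0, v1 + rate * dt)     # intensity cannot go negative

frames = [(0, 2.0), (30, 5.0), (60, 9.0)]  # mm/h at 30-min intervals
print(linear_nowcast(frames, steps_ahead=1))
```

Even this crude extrapolation makes the latency argument concrete: each new 30-minute frame immediately updates the next-half-hour picture, which is the window where physics-based models are too slow to help.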


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Artificial intelligence | Environmental impacts | Capacity development


Granular asset‑and‑population impact mapping

Explanation

He advocates AI‑driven creation of precise, personalized impact layers—identifying affected roads, houses or landslide zones—to enable targeted, location‑specific response actions.


Evidence

“The asset and the people layer, which is getting impacted, where we need to know today in personalized and in precision, where exactly we talked about, like if flood is coming, which road, which houses, if a landslide is coming, which area.” [11].
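Mapping a hazard footprint onto specific assets reduces, at its simplest, to a point-in-polygon test. This is a minimal sketch with made-up coordinates and asset names; real systems run such overlays in a GIS (e.g. PostGIS or Shapely) against authoritative asset layers.

```python
# Illustrative sketch: flag which assets fall inside a forecast flood
# polygon using a ray-casting point-in-polygon test. All coordinates
# and asset names are hypothetical.

def point_in_polygon(pt, poly):
    """Ray-casting test: is point (x, y) inside polygon [(x, y), ...]?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

flood_extent = [(0, 0), (4, 0), (4, 3), (0, 3)]       # toy flood polygon
assets = {"school": (1, 1), "bridge": (5, 1), "clinic": (3, 2)}
impacted = [name for name, pt in assets.items()
            if point_in_polygon(pt, flood_extent)]
print(impacted)  # prints ['school', 'clinic']
```

The output is exactly the "which road, which houses" layer the argument calls for: a named list of exposed assets rather than a district-level warning.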


Major discussion point

Data Integration, Quality, and AI‑Driven Risk Assessment


Topics

Artificial intelligence | Social and economic development | Data governance


AI‑enabled workflow automation via DPI/DPG

Explanation

He underscores the need for Digital Public Infrastructure (DPI) and Digital Public Goods (DPGs) to automate the workflows that translate AI-generated insights into concrete disaster-management actions.


Evidence

“And the fourth, which is most important, is the workflows to translate the actions.” [6]. “And this is where we see a role of DPI and DPG, because all these four layers are not done by one person.” [7]. “And we need to have DPIs and DPGs, which are built across this data right from.” [8].


Major discussion point

Technical Infrastructure, Interoperability, and Architecture for AI in DRR


Topics

Artificial intelligence | Data governance | Capacity development


Agreements

Agreement points

AI should complement rather than replace physical models in weather forecasting and disaster prediction

Speakers

– Beth Woodhams
– Dr. Mrutyunjay Mohapatra

Arguments

AI should complement physical weather models through hybrid approaches rather than complete replacement


Physical models provide essential reasoning and human knowledge that cannot be replaced by AI alone


AI consensus should work alongside physical consensus for final forecasting decisions


Summary

Both speakers emphasize that AI models must work in conjunction with traditional physical models rather than replacing them entirely. Physical models provide crucial understanding and reasoning that AI cannot replicate, making hybrid approaches essential for reliable forecasting.


Topics

Artificial intelligence | Social and economic development


Human oversight is critical in AI systems making life-sensitive decisions

Speakers

– Avinash Ramtohul
– Moderator

Arguments

Human-in-the-loop or human-on-the-loop approaches are critical for life-sensitive automated decisions


100% automation in AI systems concerning human lives can be dangerous and requires human verification


Standards of explainability are necessary when AI informs life-saving decisions


Summary

There is strong consensus that AI systems dealing with human lives in disaster scenarios must maintain human oversight and verification. Complete automation is considered dangerous when lives are at stake, requiring human-in-the-loop approaches.


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Massive computational infrastructure gaps exist and require significant investment

Speakers

– Som Satsangi
– Dr. Mrutyunjay Mohapatra
– Dr. Krishna Vatsa

Arguments

India lacks sufficient computational infrastructure with only 40 petaflops compared to US systems with 1-2 exaflops


IMD currently has 28 petaflops but needs more computational infrastructure for utilizing satellite data effectively


Need for clear roadmap on how data centers and early warning agencies interact to justify investments


Summary

All speakers acknowledge that current computational infrastructure is vastly insufficient for effective AI-powered disaster management. There is consensus on the need for massive infrastructure investments, though concerns exist about cost and implementation strategies.


Topics

The enabling environment for digital development | Artificial intelligence


Data integration across multiple agencies and sources is essential for effective AI systems

Speakers

– Pankaj Shukla
– Nikhilesh Kumar
– Dr. Krishna Vatsa

Arguments

Data fragmentation across multiple ministries needs to be consolidated into unified intelligence systems


DPIs and DPGs are essential for connecting scattered data across various agencies and institutions


Massive amounts of existing hazard data need better processing capacity and AI model application


Summary

There is unanimous agreement that fragmented data across various government agencies and institutions must be integrated into unified systems. Digital Public Infrastructure and Digital Public Goods are seen as crucial for enabling this integration.


Topics

Data governance | Information and communication technologies for development


AI can transform unstructured disaster data into actionable intelligence

Speakers

– Pankaj Shukla
– Nikhilesh Kumar

Arguments

AI can transform chaotic disaster data into actionable intelligence at unprecedented speed using multi-modal capabilities


AI can help create structured datasets from unstructured historical disaster information in news reports


Summary

Both speakers agree that AI’s ability to process unstructured data from multiple sources (social media, news reports, government databases) and convert it into structured, actionable intelligence is a key advantage for disaster management.


Topics

Artificial intelligence | Data governance


Similar viewpoints

Both technology industry representatives emphasize the enormous scale of infrastructure investment required and the need for systems that can operate in challenging, disconnected environments during disasters.

Speakers

– Som Satsangi
– Pankaj Shukla

Arguments

Large-scale AI deployment requires massive infrastructure investments costing $400-500 million to $1 billion per system


Cloud architecture should enable central intelligence with ability to operate in disconnected, air-gapped environments


Topics

The enabling environment for digital development | Financial mechanisms


Both speakers recognize the potential of AI to better utilize existing data sources, particularly satellite data, and to provide real-time monitoring and forecasting for critical infrastructure.

Speakers

– Dr. Mrutyunjay Mohapatra
– Nikhilesh Kumar

Arguments

Only 5% of satellite data is currently usable due to quality issues that AI could help improve


Real-time nowcasting for water bodies and dams can provide critical early warnings for vulnerable assets


Topics

Artificial intelligence | Data governance


Both speakers acknowledge resource constraints and emphasize the need for sustainable, incremental approaches to AI implementation that work within existing capabilities rather than requiring massive upfront investments.

Speakers

– Avinash Ramtohul
– Dr. Krishna Vatsa

Arguments

Small island developing states need sustainable AI solutions despite limited resources


Incremental capacity building approach needed to improve early warning precision within existing resource constraints


Topics

Closing all digital divides | Capacity development


Unexpected consensus

Cybersecurity as part of disaster management scope

Speakers

– Avinash Ramtohul
– Pankaj Shukla

Arguments

Disaster scope should extend beyond physical events to include cybersecurity attacks on digital systems


Early warning messages must be protected from virus infections that could create disruption and panic


Cloud architecture should enable central intelligence with ability to operate in disconnected, air-gapped environments


Explanation

There was unexpected consensus on expanding the traditional definition of disasters to include cyber threats. This represents a broader understanding of disaster risk that encompasses both physical and virtual vulnerabilities, which is significant for comprehensive resilience planning.


Topics

Building confidence and security in the use of ICTs | Social and economic development


Affordable AI solutions for resource-constrained countries

Speakers

– Dr. Mrutyunjay Mohapatra
– Avinash Ramtohul

Arguments

GPU-based AI models offer affordable alternatives for countries unable to invest in high-performance computing


Small island developing states need sustainable AI solutions despite limited resources


Explanation

Unexpected agreement emerged on the potential for AI to democratize disaster management capabilities through affordable, GPU-based solutions that don’t require massive supercomputing infrastructure. This suggests AI could help bridge the digital divide in disaster preparedness.


Topics

Closing all digital divides | Artificial intelligence


Overall assessment

Summary

Strong consensus exists on the need for hybrid AI-physical model approaches, human oversight in life-critical decisions, massive infrastructure investments, and data integration across agencies. There is also agreement on AI’s potential to process unstructured data and the importance of sustainable solutions for resource-constrained environments.


Consensus level

High level of consensus with complementary perspectives rather than conflicting views. The agreement spans technical, policy, and implementation aspects, suggesting a mature understanding of both AI’s potential and limitations in disaster risk reduction. This consensus provides a solid foundation for developing comprehensive AI-enabled disaster management frameworks that balance innovation with safety and sustainability.


Differences

Different viewpoints

Level of automation in AI decision-making for disaster management

Speakers

– Avinash Ramtohul
– Pankaj Shukla

Arguments

100% automation in the field of AI where it concerns the lives of people can be dangerous. Therefore, human in the loop or human on the loop is critical in these kinds of environments.


AI can transform chaotic disaster data into actionable intelligence at unprecedented speed using multi-modal capabilities


Summary

Ramtohul strongly advocates for human oversight in automated AI systems affecting human lives, emphasizing that 100% automation is dangerous. Shukla focuses on AI’s capability to process data at unprecedented speed without explicitly addressing human oversight requirements.


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Infrastructure investment approach – centralized vs distributed

Speakers

– Som Satsangi
– Dr. Krishna Vatsa

Arguments

Large-scale AI deployment requires massive infrastructure investments costing $400-500 million to $1 billion per system


Incremental capacity building approach needed to improve early warning precision within existing resource constraints


Summary

Satsangi advocates for massive infrastructure investments comparable to US systems, while Vatsa emphasizes working within existing resource constraints through gradual, incremental improvements.


Topics

The enabling environment for digital development | Financial mechanisms


AI model implementation strategy

Speakers

– Beth Woodhams
– Dr. Mrutyunjay Mohapatra

Arguments

AI should complement physical weather models through hybrid approaches rather than complete replacement


GPU-based AI models offer affordable alternatives for countries unable to invest in high-performance computing


Summary

Woodhams emphasizes gradual blending of AI with physical models to build trust, while Mohapatra suggests AI box models as standalone affordable alternatives for resource-constrained countries.


Topics

Artificial intelligence | Closing all digital divides


Unexpected differences

Scope of disaster management beyond physical disasters

Speakers

– Avinash Ramtohul
– Other panelists

Arguments

Disaster scope should extend beyond physical events to include cybersecurity attacks on digital systems


Most other speakers focused primarily on natural disasters and physical hazards


Explanation

Ramtohul uniquely expanded the disaster management scope to include cybersecurity threats in the virtual world, while other panelists remained focused on traditional natural disasters. This represents an unexpected broadening of the discussion scope.


Topics

Building confidence and security in the use of ICTs | Social and economic development


Data utilization efficiency

Speakers

– Dr. Mrutyunjay Mohapatra
– Som Satsangi

Arguments

Only 5% of satellite data is currently usable due to quality issues that AI could help improve


India lacks sufficient computational infrastructure with only 40 petaflops compared to US systems with 1-2 exaflops


Explanation

An unexpected disagreement emerged on the primary bottleneck – Mohapatra identifies data quality as the main issue (only 5% of satellite data usable) while Satsangi emphasizes computational capacity limitations. This suggests different perspectives on where to prioritize improvements.


Topics

Data governance | The enabling environment for digital development


Overall assessment

Summary

The discussion revealed moderate levels of disagreement primarily around implementation approaches rather than fundamental goals. Key areas of disagreement included the level of automation appropriate for life-critical decisions, infrastructure investment strategies (massive vs incremental), and AI model implementation approaches (hybrid vs standalone).


Disagreement level

Moderate disagreement with significant implications for policy and implementation. While speakers agreed on the importance of AI for disaster risk reduction, their different approaches to human oversight, resource allocation, and technical implementation could lead to fundamentally different system architectures and governance frameworks. The disagreements suggest a need for more detailed policy discussions to reconcile these different perspectives before large-scale implementation.


Partial agreements

Partial agreements

Both agree that physical models remain essential and AI should not completely replace them, but they differ on implementation – Woodhams focuses on gradual blending for trust-building while Mohapatra emphasizes the irreplaceable reasoning capabilities of physical models.

Speakers

– Beth Woodhams
– Dr. Mrutyunjay Mohapatra

Arguments

AI should complement physical weather models through hybrid approaches rather than complete replacement


Physical models provide essential reasoning and human knowledge that cannot be replaced by AI alone


Topics

Artificial intelligence | Social and economic development


Both acknowledge India’s insufficient computational capacity for processing available data, but disagree on solutions – Satsangi advocates for massive new investments while Vatsa prefers incremental improvements within existing constraints.

Speakers

– Som Satsangi
– Dr. Krishna Vatsa

Arguments

India lacks sufficient computational infrastructure with only 40 petaflops compared to US systems with 1-2 exaflops


Massive amounts of existing hazard data need better processing capacity and AI model application


Topics

The enabling environment for digital development | Data governance


Both recognize the critical need to integrate fragmented data across agencies, but approach it differently – Shukla focuses on technical cloud architecture solutions while Kumar emphasizes Digital Public Infrastructure and Digital Public Goods frameworks.

Speakers

– Pankaj Shukla
– Nikhilesh Kumar

Arguments

Data fragmentation across multiple ministries needs to be consolidated into unified intelligence systems


DPIs and DPGs are essential for connecting scattered data across various agencies and institutions


Topics

Data governance | Information and communication technologies for development




Takeaways

Key takeaways

AI should complement rather than replace physical weather models through hybrid approaches that build user trust gradually


India faces critical infrastructure gaps with only 40 petaflops of computing capacity compared to US systems with 1-2 exaflops needed for real-time disaster management


Human-in-the-loop approaches are essential for AI systems making life-critical decisions – 100% automation is dangerous


Digital twins bridging physical and virtual worlds are necessary for effective emergency response coordination


Data fragmentation across multiple agencies must be consolidated through DPIs and DPGs for effective AI implementation


Cybersecurity threats to early warning systems pose equal risks to physical disasters and require protection


Massive infrastructure investments ($400-500 million to $1 billion per system) along with alternative power and cooling resources are required


Only about 5% of satellite data is currently usable due to quality issues; AI can improve data quality and turn more of it into actionable intelligence


Early warning systems must achieve 100% global coverage by 2027 as per UN/WMO goals


GPU-based AI models offer affordable alternatives for resource-constrained countries


Resolutions and action items

IMD to continue developing AI consensus models alongside physical consensus for improved forecasting


NDMA to work on clarifying the roadmap for data center and early warning agency interactions


Need for public-private partnerships to address computational infrastructure gaps


Co-development of benchmarking and evaluation methods with international partners for model validation


Investment in observational networks to provide automated weather stations in every Indian village within five years


Quadrupling of seismometers and strong motion accelerographs for improved earthquake monitoring


Development of rugged AI systems capable of operating in disconnected, air-gapped environments


Implementation of cell broadcast systems with human-verified messaging protocols


Unresolved issues

How to justify massive data center investments when early warning agencies operate independently


Lack of clear architecture for integrating existing networks with new AI capabilities within resource constraints


Uncertainty about optimal blending methods for physics-based and machine learning models


Insufficient clarity on standards for AI explainability in life-saving decisions


Gap between risk assessment and risk reduction due to lack of parametric measurement databases


How to ensure AI-driven systems reach last-mile populations while preventing misinformation


Procurement policy challenges for acquiring high-cost computational infrastructure


Integration challenges between sovereign data architectures and interoperable AI systems


Suggested compromises

Step-by-step implementation of AI models through gradual blending rather than complete replacement of existing systems


Hybrid models combining physics-based understanding with AI capabilities to maintain reasoning transparency


Incremental capacity building approach to improve early warning precision within existing resource constraints


Public-private partnerships to share the cost burden of expensive computational infrastructure


GPU-based AI solutions as affordable alternatives for countries unable to invest in high-performance computing


Co-development approaches with international partners to share costs and expertise


Federated architecture allowing central intelligence with tactical edge deployment capabilities


Human verification requirements for highly sensitive automated disaster communications


Thought provoking comments

Today, just like we have the physical world, we have a virtual world as well in which we all live. And that virtual world is so much bigger… disaster can also strike the virtual world… disaster is not just the flood, the cyclone, the drought. Disaster can also be the cybersecurity attacks that can actually create havoc in our lives.

Speaker

Avinash Ramtohul


Reason

This comment fundamentally expanded the scope of disaster risk reduction beyond traditional natural disasters to include cyber threats. It introduced a paradigm shift by recognizing the interconnectedness of physical and virtual vulnerabilities, which is particularly relevant as AI systems become more integral to disaster management.


Impact

This comment set the tone for the entire discussion by broadening the conceptual framework. It influenced subsequent speakers to consider the security and reliability aspects of AI systems, and established the need for human oversight in automated systems. The Minister’s emphasis on ‘human in the loop’ became a recurring theme throughout the panel.


While India has very ambitious plan for national supercomputer mission… in last 10 year, what we have developed is some 37 supercomputer with just the 40 petaflop of capital… US, there are multiple systems which provide this real time information and each of them… has 1.8 exaflop… So these are the kind of systems which are deployed so that actually they can take the power of AI in a real-time environment

Speaker

Som Satsangi


Reason

This comment provided a stark reality check by quantifying India’s computational infrastructure gap compared to global standards. It moved the discussion from theoretical AI applications to practical implementation challenges, highlighting that India’s total computing capacity is less than what single US systems possess.


Impact

This comment fundamentally shifted the discussion from ‘how to use AI’ to ‘how to build the infrastructure necessary for AI.’ It prompted other panelists to address practical deployment challenges and influenced Dr. Mohapatra to acknowledge the computational limitations. It also led to discussions about public-private partnerships and alternative deployment models like ‘box models’ for resource-constrained environments.


We are trying to turn the chaotic reality on the ground into actionable intelligence… the data fragmentation which is sitting with multiple ministries and social media various places all across all of that first of all needs to be brought to a place… and turn into a living intelligence

Speaker

Pankaj Shukla


Reason

This comment elegantly captured the core challenge of disaster management – transforming fragmented, chaotic data into coherent, actionable insights. The phrase ‘living intelligence’ introduced a dynamic concept of AI systems that continuously learn and adapt, rather than static analytical tools.


Impact

This comment helped bridge the gap between infrastructure challenges raised by Som and practical applications. It influenced the discussion toward federated systems and edge computing solutions, and provided a framework for understanding how AI can operate across different organizational boundaries while maintaining data sovereignty.


AI has come up as a hybrid model along with the physical models. We cannot do away with the physical models because physical models provide you the physical understanding, the reasoning… So therefore, AI has to be suitably connected with the physical models.

Speaker

Dr. Mrutyunjay Mohapatra


Reason

This comment provided crucial scientific grounding to the discussion by emphasizing that AI should complement, not replace, physics-based models. It addressed concerns about over-reliance on AI while maintaining scientific rigor in weather forecasting and disaster prediction.


Impact

This comment validated the hybrid approach mentioned by Beth Woodhams and provided authoritative support for gradual AI integration rather than wholesale replacement of existing systems. It influenced the discussion toward more nuanced implementation strategies and reinforced the importance of maintaining scientific understanding alongside AI capabilities.


It’s one thing to set up the observational network. The other thing is how to collect the data, process the data and generate the information that could be used, more so when it comes to informing the common citizens… We are doing it for the people who get affected by disasters… the roadmap is not sufficiently clear

Speaker

Dr. Krishna Vatsa


Reason

This comment highlighted the critical gap between data collection and citizen-centric application. It shifted focus from technical capabilities to end-user needs and acknowledged the institutional uncertainty about implementation pathways despite significant investments in observational networks.


Impact

This comment brought the discussion full circle by emphasizing the human-centered purpose of all technological investments. It highlighted the need for clearer implementation roadmaps and influenced the conversation toward practical governance frameworks that can translate technical capabilities into citizen benefits.


Overall assessment

These key comments collectively shaped the discussion by progressively expanding and then focusing the scope of AI in disaster risk reduction. The conversation evolved from a broad conceptual framework (physical vs. virtual disasters) to practical infrastructure realities, then to technical implementation strategies, and finally to citizen-centric applications. The comments created a comprehensive narrative that acknowledged both the transformative potential of AI and the significant challenges in implementation. Most importantly, they established a balanced perspective that emphasized human oversight, hybrid approaches, and the ultimate goal of protecting citizens rather than pursuing technology for its own sake. The discussion successfully moved beyond theoretical possibilities to address real-world constraints and implementation pathways.


Follow-up questions

How can countries with limited resources ensure sustainability in AI-enabled early warning systems?

Speaker

Moderator


Explanation

This question was posed to Minister Ramtohul but wasn’t fully addressed; his response focused instead on policy reforms and technical architecture


What is the optimal approach for blending machine learning and physics-based weather models?

Speaker

Beth Woodhams


Explanation

She explicitly stated ‘The truth is we don’t know what the answer to this solution is yet’ regarding the best method for hybrid model implementation


What are the most important metrics for evaluating machine learning models from a user perspective?

Speaker

Beth Woodhams


Explanation

She questioned whether current metrics showing ML models performing well are actually the most important to users


How can India develop the massive computational infrastructure required for real-time AI-driven disaster management?

Speaker

Som Satsangi


Explanation

He highlighted that India has less than 100 petaflops while the US has multiple exaflop systems, and questioned if India is ready for large-scale implementation


How can public-private partnerships be structured to provide the expensive infrastructure needed for AI disaster management systems?

Speaker

Som Satsangi


Explanation

He noted that systems cost $400-500 million to $1 billion and suggested private partnerships are critical but didn’t elaborate on implementation


How can power and water requirements for large-scale AI infrastructure be sustainably met?

Speaker

Som Satsangi


Explanation

He identified power, energy, and water as critical bottlenecks for AI infrastructure but didn’t provide solutions


How can AI-driven systems effectively mitigate misinformation risks in disaster communications?

Speaker

Moderator


Explanation

This question was posed to Pankaj Shukla but wasn’t specifically addressed in his response


How can historical unstructured data be systematically converted into structured datasets for risk assessment?

Speaker

Nikhilesh Kumar


Explanation

He mentioned AI can extract information from news and other sources but didn’t detail the methodology or implementation approach


How can data quality from satellites be improved beyond the current 5% usability rate?

Speaker

Dr. Mrutyunjay Mohapatra


Explanation

He identified this as a major limitation but didn’t provide specific solutions for improving satellite data quality


How should the overall architecture integrate data centers with individual early warning agencies?

Speaker

Dr. Krishna Vatsa


Explanation

He explicitly stated ‘this is where we need more clarity’ and mentioned struggling with how to justify investments without clear integration models


What is the optimal incremental approach for building AI capacities in disaster management with limited resources?

Speaker

Dr. Krishna Vatsa


Explanation

He mentioned needing guidance on gradually building capacities but didn’t receive specific recommendations


How can complex AI-generated risk information be effectively communicated to common citizens rather than just scientists?

Speaker

Dr. Krishna Vatsa


Explanation

He emphasized the challenge of translating technical data for public use but this wasn’t addressed by the panel


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.