National Disaster Management Authority
20 Feb 2026 15:00h - 16:00h
Summary
The panel opened by highlighting the rising frequency, intensity and complexity of disasters worldwide and the parallel surge in artificial-intelligence capabilities, framing the question of how India can develop AI-driven resilience models [1-5]. The moderator emphasized that the next frontier in disaster risk reduction is not merely better algorithms but the institutionalisation of AI within national resilience architectures [7].
The discussion then turned to policy reforms, with Mauritius’s Minister Avinash Ramtohul stressing that disasters now affect both the physical and virtual realms, including cyber-attacks, and that governance must bridge these domains [26-32]. He advocated creating digital-twin representations of critical infrastructure to enable emergency services to locate people and assets in real time, and argued that such digital maps should be accessible to authorised responders as part of reform [33-36]. Ramtohul also warned that fully automated decision-making can be dangerous, insisting on a “human-in-the-loop” approach for AI-based early-warning alerts, a stance reflected in Mauritius’s policy to require human-verified messages in its cell-broadcast system [45-55].
Beth Woodham from the UK Met Office described the agency’s strategy of developing hybrid weather models that blend physics-based forecasts with machine-learning outputs, proceeding incrementally while co-developing benchmarks with partners such as India [65-73]. She noted that building trust requires aligning evaluation metrics with user needs and that joint development of both models and their testing frameworks is essential for operational adoption [72-74].
Som Satsangi highlighted India’s current shortfall in high-performance computing, noting that the country’s roughly 37 supercomputers total about 40 petaflops, still under 100 petaflops nationwide, compared with the exaflop-scale systems used in the United States for real-time AI analytics [92-100]. He argued that the gap in core infrastructure, where a single exaflop-class machine can cost hundreds of millions of dollars, must be closed through public-private partnerships, and that power and cooling requirements are equally critical for deploying large-scale AI clusters [106-124].
Pankaj Shukla outlined a five-layer AI architecture (infrastructure, operating system, platform services, models and applications) and stressed the need for a central “living intelligence” that can be synchronised with edge or air-gapped devices to deliver actionable insights even in disconnected, high-risk settings [136-144][145-152]. He explained that today’s hyperscaler clouds can be extended on-premises in a zero-trust fashion, allowing rugged devices to run distilled models locally for rapid response [150-152].
Startup founder Nikhilesh Kumar added that effective disaster-risk platforms must integrate four layers (modeling, asset, people and workflow) and that AI can transform scattered, unstructured data from satellites, social media and agency records into near-real-time nowcasts, such as the dam-level forecasts demonstrated for thousands of Indian reservoirs [155-166]. He further pointed out that AI can extract hazard information from news and other unstructured sources to build location-specific risk databases that support insurance and mitigation planning [169-171].
Dr. Mrutyunjay Mohapatra reiterated the global “early warning for all” agenda, emphasizing that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity, and suggested low-cost GPU-based box models as a viable solution for resource-constrained nations [185-194][203-207]. Finally, Dr. Krishna Vatsa described India’s expanding observational networks and the pressing need to develop processing capacity and clear data-center architectures to turn the growing data streams into reliable, citizen-focused early warnings, concluding that coordinated investment, partnership and governance are essential to realise AI-enabled disaster resilience at scale [220-229][230-247].
Keypoints
Major discussion points
– Policy & governance: bridging the physical and virtual worlds – The Minister of IT (Mauritius) emphasized that disaster risk must cover both physical hazards and cyber-threats, calling for a “digital twin” that links real-world assets to virtual models and insisting that critical AI-driven alerts remain human-verified and “human-in-the-loop” to avoid fully automated decisions that could cause harm[26-33][34-42][45-47][52-55].
– Hybrid AI-physical modelling and co-development – The UK Met Office highlighted that AI will augment, not replace, traditional physics-based weather models through blended or hybrid approaches, and stressed the need for joint benchmarking and evaluation frameworks with partners (including low-resource countries) to build trust in AI-generated forecasts[65-71][72-74].
– National-scale infrastructure and resource constraints – Hewlett Packard Enterprise’s Som Satsangi pointed out that India’s current super-computing capacity (≈ 40 petaflops across some 37 systems) is far below the exaflop-scale systems used elsewhere, making the cost, power, and cooling requirements for AI-driven early-warning platforms a major barrier; he called for public-private partnerships to acquire the necessary sovereign data infrastructure[92-100][106-108][109-126].
– Cloud-edge architecture for low-connectivity, high-risk environments – Google Cloud’s Pankaj Shukla described a five-layer architecture (infrastructure, operating system, platform services, models and applications) that creates a central “living intelligence” while enabling edge-deployed, zero-trust, rugged devices to operate even when disconnected, ensuring real-time analytics and safe dissemination of warnings[136-152].
– Start-up driven DPIs/DPGs and workflow translation – Vassar Labs’ Nikhilesh Kumar outlined four AI-enabled layers (modeling, asset, people and workflow) delivered through DPIs/DPGs, and illustrated how startups can integrate scattered agency data, generate near-real-time nowcasts for nearly a million water bodies, and convert unstructured news into structured risk datasets that feed insurance and mitigation systems[155-167][168-172].
Overall purpose / goal of the discussion
The panel was convened to explore how Artificial Intelligence can be institutionalized within national disaster risk reduction (DRR) frameworks, especially for India, by examining policy reforms, technical integration, infrastructure needs, and collaborative models (government, industry, academia, and startups) that together can build scalable, trustworthy, and inclusive early-warning and resilience systems.
Tone of the discussion
– The conversation began with a formal, forward-looking tone, framing AI as the “next frontier” in disaster governance.
– It then shifted to a technical and pragmatic tone, with speakers detailing concrete challenges (cyber-security, super-computing gaps, data quality) and realistic constraints.
– As the dialogue progressed, the tone became collaborative and solution-oriented, emphasizing partnerships, co-development, and actionable road-maps.
– The session concluded on a constructive, call-to-action tone, urging coordinated effort across agencies and sectors to translate AI advances into operational resilience.
Speakers
– Moderator – Role: Moderator of the panel discussion.
– Avinash Ramtohul – Minister for Information Technology, Communication and Innovation, Republic of Mauritius [S9]; expertise: AI-enabled early warning systems, digital twins, cybersecurity, policy reforms for disaster risk governance.
– Beth Woodham – Senior Manager, UK Met Office [S1]; expertise: disaster risk reduction, weather forecasting, integration of machine-learning models with physical weather models, international co-development of AI-driven meteorological services.
– Som Satsangi – Former SVP & Managing Director, Hewlett Packard Enterprise India [S12]; expertise: AI deployment in geospatial and climate analytics, sovereign data architectures, large-scale infrastructure for real-time disaster alerts.
– Pankaj Shukla – Head of Customer Engineering, Google Cloud India [S14]; expertise: AI-driven analytics, hazard mapping, predictive analytics, cloud and edge infrastructure for low-connectivity, high-risk environments.
– Nikhilesh Kumar – CEO & Co-founder, Vassar Labs [S11]; expertise: AI solutions for disaster risk reduction, modeling layers, data integration, startup-driven platforms for population-scale early warning.
– Dr. Mrutyunjay Mohapatra – Director General, India Meteorological Department (IMD) [S7]; expertise: AI-enhanced weather forecasting, hybrid physical-AI models, early warning systems at national scale.
– Dr. Krishna Vatsa – Head of Department, National Disaster Management Authority (NDMA) [S3]; expertise: data integration, AI for early warning precision, building scalable disaster-risk information systems.
Additional speakers:
– Mr. Martin – Mentioned in the discussion but did not speak; no role or expertise provided.
The panel opened by underscoring that disasters are becoming more frequent, intense and complex worldwide, while advances in artificial intelligence (AI) are occurring at an unprecedented pace[1-5]. The moderator framed the central challenge: “How can India develop an AI-enabled model for resilience?” and argued that the next frontier in disaster risk reduction (DRR) is the institutionalisation of AI within national resilience architectures[1-5][7].
Policy and governance – bridging the physical and virtual worlds
Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber-attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones[30-32]. He advocated the creation of digital-twin representations of critical infrastructure so that emergency services can locate people and assets in real time, and insisted that these virtual maps be accessible to authorised responders (fire, medical, etc.) as part of a broader reform agenda[34-36]. Crucially, he warned against fully automated decision-making, calling for a human-in-the-loop approach and for all early-warning messages to be human-verified before broadcast, a policy already being piloted in Mauritius’s cell-broadcast system[45-55].
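To make the digital-twin idea concrete, the sketch below shows one way a facility twin might expose occupancy and thermal data to authorised responders. It is a minimal illustration, not anything the panel specified: the schema, field names and the 36 °C threshold are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """One mapped area of a building in the digital twin (hypothetical schema)."""
    name: str
    thermal_c: float   # latest thermal-camera reading, deg C
    occupants: int     # estimated head count from sensors
    gas_lines: bool    # utility routing known to responders

@dataclass
class FacilityTwin:
    facility_id: str
    zones: list[Zone] = field(default_factory=list)

    def priority_zones(self, body_temp_min: float = 36.0) -> list[Zone]:
        """Zones likely to contain people, ordered by estimated head count."""
        hits = [z for z in self.zones
                if z.occupants > 0 or z.thermal_c >= body_temp_min]
        return sorted(hits, key=lambda z: z.occupants, reverse=True)

# A responder's view during the minister's fire scenario.
twin = FacilityTwin("HQ-01", [
    Zone("lobby", thermal_c=24.0, occupants=0, gas_lines=False),
    Zone("lab-2", thermal_c=37.5, occupants=4, gas_lines=True),
])
for zone in twin.priority_zones():
    print(zone.name, zone.occupants)
```

The point of the structure is the minister’s “bridge”: responders query the virtual model for where people and hazards are before entering the physical site.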
Hybrid AI-physical modelling – the Met Office perspective
Beth Woodham explained that the UK Met Office is developing machine-learning weather models that will augment, not replace, physics-based forecasts. The agency plans a gradual rollout in which AI outputs are blended with traditional model results, creating hybrid forecasts that increase confidence as the AI component matures[65-71]. To build trust, the Met Office is co-developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization, emphasizing that co-development and joint benchmarking are essential to operationalise AI-driven forecasts in low-resource settings[71-74].
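As a rough illustration of the blending Woodham describes, a hybrid forecast can be written as a weighted combination of the two model outputs, with the machine-learning weight raised step by step as benchmarking builds confidence. The scheme below is a generic sketch under that assumption, not the Met Office’s actual method.

```python
import numpy as np

def blended_forecast(physics: np.ndarray, ml: np.ndarray, w: float) -> np.ndarray:
    """Blend a physics-based and an ML forecast field.

    w is the trust weight given to the ML output (0 = physics only,
    1 = ML only); in a staged rollout w would start near 0 and grow
    as evaluation against user-agreed metrics builds confidence.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("blend weight must lie in [0, 1]")
    return w * ml + (1.0 - w) * physics

# Example: 2 m temperature fields (deg C) on a tiny grid.
physics_t2m = np.array([[28.1, 27.6], [29.0, 28.4]])
ml_t2m      = np.array([[27.8, 27.9], [28.6, 28.2]])
print(blended_forecast(physics_t2m, ml_t2m, w=0.3))
```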
Computational infrastructure – India’s current shortfall
Som Satsangi highlighted that India’s super-computing capacity (≈ 40 petaflops across some 37 systems, and under 100 petaflops nationwide) is far below the exaflop-scale systems (1-2 exaflops) used in the United States for real-time AI analytics[92-100]. He noted that each exaflop-class machine costs US$400-500 million to $1 billion and requires massive power and water-cooling resources, making it unaffordable for a single government entity[106-108][109-126]. Consequently, he called for public-private partnerships to acquire and operate the necessary sovereign data infrastructure[106-108][109-110].
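The scale of the gap follows directly from the figures quoted: one exaflop equals 1,000 petaflops, so a single 1.8-exaflop machine represents roughly 45 times India’s cited ~40-petaflop installed base. A quick back-of-the-envelope check:

```python
# Scale comparison implied by the panel's figures (1 exaflop = 1,000 petaflops).
india_petaflops = 40          # ~37 systems under the National Supercomputing Mission
el_capitan_pf   = 1.8 * 1000  # 1.8 exaflops
frontier_pf     = 1.3 * 1000  # 1.3 exaflops
aurora_pf       = 1.0 * 1000  # ~1 exaflop

print(f"El Capitan alone is ~{el_capitan_pf / india_petaflops:.0f}x India's installed capacity")
# -> ~45x; even the planned +50 PF leaves a gap of more than an order of magnitude.
```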
Interoperability, sovereign data and governance
Satsangi stressed that AI systems must be built on sovereign-compatible data architectures and that clear governance mechanisms are needed for life-saving decisions, especially in a federal context such as India where multiple state and central agencies must interoperate[80-81][46-55]. The moderator asked about “standards of explainability” for AI-driven alerts, but the panel did not reach a concrete consensus on specific metrics[7].
Cloud-edge architecture for low-connectivity, high-risk settings
Pankaj Shukla described a five-layer AI stack – infrastructure, operating system, platform services, models and applications – that creates a central “living intelligence” while allowing distilled models to run on edge or air-gapped rugged devices. This architecture enables real-time analytics even when connectivity is lost, supports zero-trust security, and mitigates misinformation by delivering verified alerts directly to field teams[136-144][148-152].
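One common pattern behind such an architecture is a connectivity-aware fallback: route requests to the central model when the link is up, and to a locally synced, distilled model otherwise. The sketch below is a generic illustration of that pattern, not Google’s actual tooling; the endpoint and both query functions are hypothetical stand-ins.

```python
import socket

def cloud_reachable(host: str = "central.example", port: int = 443,
                    timeout: float = 2.0) -> bool:
    """Cheap TCP probe; the host is a placeholder, not a real endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def query_central_model(payload: dict) -> dict:
    """Stand-in for a call to the full model behind the central 'living intelligence'."""
    return {"source": "central", "answer": f"full-model result for {payload}"}

def query_local_distilled_model(payload: dict) -> dict:
    """Stand-in for a distilled model pre-synced to the rugged edge device."""
    return {"source": "edge", "answer": f"distilled-model result for {payload}"}

def run_inference(payload: dict) -> dict:
    """Prefer the central model; fall back to the local one when disconnected."""
    if cloud_reachable():
        return query_central_model(payload)
    return query_local_distilled_model(payload)

print(run_inference({"query": "assets in flooded sector 7"}))
```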
Start-up contribution – DPIs and DPGs
Nikhilesh Kumar outlined a four-layer framework (modeling, asset, people and workflow) that startups can use to turn fragmented, unstructured data into actionable insights, with DPIs (digital public infrastructure) and DPGs (digital public goods) serving as the delivery mechanisms. He gave the example of nowcasting for nearly one million water bodies by fusing 30-minute satellite imagery and radar data, then translating the output into hydraulic forecasts for about 5,000 dams in real time during the cyclone season[155-166]. He also described how AI can extract hazard-specific information from news and social media to build location-specific risk databases that support insurance and mitigation planning[169-172].
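As an illustration of the unstructured-to-structured step, the toy pipeline below pulls hazard keywords and a crude location guess out of a news snippet. A real system would use an LLM or a trained named-entity model; everything here (the hazard list, the regex, the record schema) is a simplified assumption.

```python
import re
from dataclasses import dataclass

HAZARDS = ("flood", "landslide", "cyclone", "earthquake")

@dataclass
class HazardRecord:
    hazard: str
    location: str | None
    source: str

def extract_records(article: str, source: str) -> list[HazardRecord]:
    """Naive keyword pass; a production pipeline would use an LLM or NER model."""
    records = []
    for hazard in HAZARDS:
        if hazard in article.lower():
            # Crude location guess: capitalized word(s) after "in".
            m = re.search(r"\bin ([A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)?)", article)
            records.append(HazardRecord(hazard, m.group(1) if m else None, source))
    return records

print(extract_records("Flash flood reported in Port Louis after heavy rain.",
                      "news-2026-02-20"))
```

Records like these, accumulated per location and hazard, are what would feed the frequency and intensity databases Kumar mentions for insurance and mitigation.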
National implementation – hybrid models and low-cost alternatives
Dr Mrutyunjay Mohapatra linked the discussion to the UN “early-warning for all” agenda, noting that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity[185-194]. He pointed out that only about 5 % of satellite data is currently usable, and that improving data quality with AI benefits both AI and physics models[203-207]. For resource-constrained contexts, he promoted a GPU-based “box-model” that can deliver acceptable forecasts without the need for exaflop supercomputers, offering an affordable pathway for small island states and low-income regions[207-208].
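The economics of the “box model” rest on the split between training and inference: the expensive training happens once, elsewhere, and the recipient nation only runs forward passes, which a few GPU nodes can handle. The snippet below uses a tiny stand-in network to show the inference-only pattern; it is purely illustrative, with placeholder shapes, and assumes PyTorch is available.

```python
import torch
from torch import nn

# Minimal stand-in for a distilled, data-driven "box model": the point is that
# inference needs only a few GPU nodes, since training happened elsewhere.
class TinyForecastNet(nn.Module):
    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):  # x: (batch, channels, lat, lon)
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyForecastNet().to(device).eval()
state = torch.randn(1, 8, 64, 64, device=device)  # placeholder analysis fields
with torch.inference_mode():
    next_step = model(state)  # one forecast step, forward pass only
print(next_step.shape)
```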
Observational networks and processing bottlenecks
Dr Krishna Vatsa (NDMA) described India’s ambitious plan to quadruple seismometers, install automated weather stations in every village and expand landslide sensors, thereby generating massive new data streams[226-229]. However, he highlighted a critical gap: the lack of processing capacity and a clear data-centre architecture to turn this raw data into citizen-focused early warnings[230-247]. He called for a coordinated roadmap that incrementally builds AI-processing capability while ensuring that data centres meaningfully serve early-warning agencies[238-247].
Areas of consensus
All participants concurred that AI should be embedded within a coherent national resilience framework, that human oversight is essential for life-saving alerts, and that hybrid AI-physics models are the preferred technical approach. They also agreed on the necessity of large-scale computing resources, interoperable sovereign data architectures, and strong cross-sector collaboration (government, industry, academia, startups)[7][46-55][65-71][80-81][136-144][155-162].
Key points of disagreement
A tension emerged between the emphasis on sovereign-compatible data architectures (Satsangi) and the Met Office’s advocacy for open co-development and shared benchmarking (Woodham), reflecting differing priorities in balancing security with collaborative innovation[80-81][71-74]. A second divergence concerned computational scale: Satsangi argued that India must acquire exaflop-scale supercomputers to support real-time AI alerts, whereas Mohapatra suggested that GPU-based box models provide a viable, low-cost alternative for nations lacking such resources[92-100][207-208].
Conclusions and actionable recommendations
The panel distilled several key takeaways: AI must be institutionalised, human-in-the-loop, and blended with physics models; India needs to close a substantial computing-capacity gap while exploring affordable GPU solutions; a five-layer cloud-edge architecture should underpin a central living intelligence; startups can deliver DPIs/DPGs that integrate hazard, asset, and population data; and expanding observational networks must be matched with clear data-centre strategies.
Proposed action items:
1. Develop integrated digital-twin and hybrid forecasting platforms for critical infrastructure and meteorological services (Ramtohul, Woodham).
2. Create a sovereign-data architecture with defined governance and audit standards for AI-driven decisions (Satsangi).
3. Pursue public-private partnership models to fund either exaflop-scale or GPU-based compute clusters, incorporating sustainable power and cooling solutions (Satsangi).
4. Deploy the five-layer AI stack and ensure edge-ready, zero-trust devices for last-mile dissemination (Shukla).
5. Encourage startup-led DPIs/DPGs that translate multi-agency data into actionable workflows and risk databases (Kumar).
6. Promote GPU-based box-model forecasting as an interim solution for low-resource settings while larger infrastructure is built (Mohapatra).
7. Accelerate the rollout of automated weather stations, seismic sensors and landslide monitors, coupled with a roadmap for integrating these streams into AI-enabled early-warning pipelines (Vatsa).
Unresolved issues include financing the required supercomputing capacity, finalising standards for AI explainability, safeguarding AI-driven alerts against cyber-threats, and defining a governance model that balances sovereign data protection with collaborative benchmarking. Addressing these challenges will be essential for realising a scalable, trustworthy, and inclusive AI-enabled disaster resilience system across India and comparable jurisdictions[45-55][65-74][92-100][107-110][136-152][155-172][185-194][203-207][220-247].
Transcript
…defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters are increasing. Climate variability is compounding existing vulnerabilities. Urbanization is concentrating risk, and cascading hazards are challenging traditional response models. At the same time, we are witnessing unprecedented advances in AI. So, at this point of time, how does India bring or develop a model with AI for resilience? We believe that the next frontier in DRR is not better algorithms alone, it is institutionalizing AI within national resilience architecture. Thank you very much. …from pilot projects to national and global resilience systems. Before we start the discussion, let me invite and call on the stage for the panel discussion His Excellency Dr.
Avinash Ramtohul, the Minister for Information Technology, Communication and Innovation from the Republic of Mauritius. Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South-South cooperation. I would like to invite Ms. Beth Woodham, Senior Manager from UK Met Office. She is a specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction. Welcome. I would like to invite Mr. Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, with industry insights on AI deployment in geospatial and climate analytics. Welcome, Mr. Som. I would like to now call upon Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. He is an innovator in leveraging AI for DRR.
Welcome, Nikhilesh. And lastly, Mr. Pankaj Shukla, Head of Customer Engineering, Google Cloud India, for practical AI applications, hazard mapping, predictive analytics, and EWS scale-up. Thank you. …and focus on [data] integration during this panel discussion. So my question first to the Minister for IT, Communication and Innovation, Republic of Mauritius. Minister, the small island developing states face existential climatic threats. From your perspective, what policy reforms are required to institutionalize AI-enabled early warning and alerting systems within national governance frameworks? And how can countries with limited resources ensure sustainability in such ventures?
Thank you and good morning, everybody. Thank you for the opportunity to be here amongst you. First of all, I would like to say a couple of points before I get into the actual response there. Today, just like we have the physical world, we have a virtual world as well in which we all live. And that virtual world is so much bigger than the physical world we can see here in front of us at the moment. And just like disaster can strike the physical world, and that is the scope of the discussion, disaster can also strike the virtual world. And as we grow in dependency on the virtual world, on our digital systems, we should be well aware that disaster is not just the flood, the cyclone, the drought.
Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very important that the scope of the discussions, when we look at disaster, be also extended to the virtual world and cybersecurity attacks. Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world. And I will explain myself. Just imagine, as we speak here, there is a big fire that breaks out in one organization. And because it broke out there, there are, you know, automated connections that go to the fire services, to the medical services. They will proactively now start driving to this place. But when they come to this place, how would they know where the people are? Because their main objective is to save the lives of the people, secondary the material. How would they know where the people are? Do they have a plan, a structural plan, of this space? Do they know where the pipes cross? Now, I am talking about a digital twin. It is really important that we create that digital twin, which will be the bridge between the physical world and the virtual world. And the architectural map of that digital twin should be accessible to a certain set of operators: the medical, the fire services. Now, this is part of the reform that we are looking at. And in a small administration, it becomes easier to do it, as opposed to a huge administration like India.
Now, there’s one more thing in there. As, let’s say, we also have the structural plan, how do we know where the people are? Can we have heartbeat indication? Can we have the thermal map of the place so that we know wherever there’s 37, 38, 39 degrees (well, 37, 38 is better), do we know where the people are located, so that when the fire services come, they go straight to that spot? So this is very important. And another reform that is important that we be aware of is that when there is some kind of a pandemic which is contagious, there is human-to-human virus transfer. Now, we are all very excited. We are very excited about artificial
intelligence, but we are also aware that there is this possibility of virus infecting systems, right? And just like virus infects people, virus also infects systems, and virus gets contagious in computers as well; we all know that. Therefore, we need to also have mechanisms to protect, because if we have a message that goes through an early warning system to people, this already creates an alert in the minds of people; the adrenaline surge starts already. But if that message is infected, it can create a lot of disruption in our daily lives, and this we need to be very careful of. Therefore, in terms of reform, the decision-making process (and I think somebody mentioned this in the previous panel) is automated now. Automation can be, and 100% automation in the field of AI where it concerns the lives of people can be, dangerous.
Therefore, human in the loop or human on the loop is critical in these kinds of environments. And this is also part of what we are looking at in Mauritius. Yes, it’s true that as a small island developing state, we call it SIDS, we have our own set of flash floods that can actually occur. Within a couple of hours, we can have flash floods and we can see cars floating around already. And this has happened in the country. And we don’t want that to happen again. Therefore, there are early warning systems that we are deploying, like cell broadcast systems, which we have planned to deploy. Now, again, the message that goes into that system should be a message that is human verified.
That is, decisions like these that are sensitive, highly sensitive, cannot be 100% automated. That’s part of our policy as well, and we want to ensure that humans are involved, because machines cannot decide for humans; humans decide for machines. And this is critical and needs to be given the attention that it deserves. And I believe our Prime Minister Modi ji also mentioned in his intervention yesterday that there is a great necessity to ensure that humans remain part of the decision-making process in the application of AI for disaster management. So these are a few points I wanted to mention. Thank you.
…insights in developing resilient governance frameworks which actually are scalable across the nations, and this is the way to go: a resilient system which is resilient to even cyber attacks, and is sustainable and meaningful, not giving any, say, fatigue of alerts, is also very much vital for a robust system to be effective across all disasters. So we now come to the second panelist, Ms. Woodham. My question to you is: the national meteorological agencies play a crucial role in operational forecasting and early warning delivery. From the perspective of the UK Met Office, how can AI complement physical weather and climatic models to improve forecast lead time and impact-based warnings to gain public trust?
And what institutional partnerships are necessary to ensure that AI-driven meteorological insights translate into actionable decisions and actions at national and local levels, with special emphasis on low-resource countries?
Hello? Yeah. Right. Thank you. Thank you for your question. Thank you, it’s a real honour to be part of this panel. So at the Met Office we are currently developing machine learning weather models and we absolutely do not see these as a replacement for our physical models. Our plan over the coming years is to step by step implement these models through blending. This could be hybrid models, physics based and machine learning based. It could be blending the output from both of these models after they’ve run. The truth is we don’t know what the answer to this solution is yet but in order to build the trust amongst the users of our models, the customers of our data we’re certainly not going to have a complete shift.
We are going to do this step by step, increasing our blending as we become more confident with the data. So from this conference, you know, it’s very clear that companies from the private sector are developing these models; in the public sector, of course, we’re developing them too, and sovereign capability remains really important. But for public sectors we really need to have that co-development. At the Met Office we have a long history of co-developing with partners like India, so through WCSSP India and through WISER Asia-Pacific we have these partnerships; we’ve co-developed physics-based models and we really want to do the same with machine learning models as well. At the Met Office we’re standardizing our benchmarking and evaluation. We really want to make sure that when we’re doing comparisons between machine learning and physics-based models, we’re being focused on the same thing.
There’s a lot of metrics we can look at that show machine learning models are doing well, but are these the metrics that are most important to users? Therefore, not only do we want to co-develop the actual models with partners, we want to co-develop the benchmarking and the tests that we do on these models. Thank you.
Thank you, Beth, for giving your insight into how the national meteorological agencies may plan to use AI in their systems. So now we move towards the technologies: how do we really create resilient systems for forecasting? So my first question would be to Mr. Som Satsangi. Private sector innovation has advanced rapidly. So the first and foremost question, I think, which comes to my mind is: how can technology providers design AI systems that are interoperable with sovereign data architectures? Because that is the crucial issue to be cracked. So we have to design AI systems that are interoperable with sovereign data architectures and compatible with diverse governance ecosystems, for a country like India with a federal government and the state governments. So this is a very vital nut to be cracked from the technology point of view. And similarly, what standards of explainability are necessary when AI informs life-saving decisions?
Thanks, Manish. Really a great question, and probably in this room I’ll be calling out something which is very, very important, because just when I walked in I heard Mr. Martin, and he spoke a couple of points which are so important and critical for a country like India. He spoke about the government. He spoke about the procurement policies and the scale. So all these three things are so important and critical when we look from India’s standpoint, with 1.2 billion plus citizens. I’ve been the managing director of Hewlett Packard Enterprise for the last nine years, and I know I’ve been involved in almost all large critical infrastructure projects, whether it’s UIDAI or any kind of transaction, COVID, all applications.
And we know that all these things we have developed at this scale and delivered. But probably the most important aspect of human life, with the climate change and the disasters which are happening across the world and along the length and breadth of India on the coastal side: somehow, are we ready to do it? I don’t think we are ready. And I’ll give you some pointers which are very important on why it’s not happening, to Mr. Martin’s point. India has had a very ambitious plan, the National Supercomputing Mission, way back in 2015, where India said, okay, we’ll be investing 4,500 crore to develop some supercomputers which will be high class.
But in the last 10 years, what we have developed is some 37 supercomputers with just 40 petaflops of capacity. Is that sufficient? Now we are planning, okay, we’ll add another 50 petaflops. But look at the global level, at the kind of infrastructure needed if we have to manage this alert and warning system in real time, and I’ll give you one or two examples in the United States. The top systems which have been developed and deployed to do these things have got a capacity of almost one to two exaflops. And one exaflop is close to around a thousand petaflops. In the whole of India we don’t have even 100 petaflops today. And in the US, there are multiple systems which provide this real-time information: one is El Capitan, which is 1.8 exaflops; then the Frontier system, which has been deployed at Oak Ridge National Laboratory, has got 1.3 exaflops of
capacity. Aurora, which was recently deployed at Argonne National Laboratory, has got one exaflop of power and capability. So these are the kinds of systems which are deployed so that they can actually take the power of AI in a real-time environment, whether it’s geospatial data or satellite information data or any kind of live information, analyze these things with the help of AI in a real-time environment, and provide the alert much ahead of those things. Somehow we are not able to provide that. So in India, if we want this early warning system to be done, I think our main focus needs to be how we can have the core infrastructure which will meet the requirement of this.
And in the last couple of days, in every discussion, this is what is coming out with the global CIOs and CEOs: that probably India needs the core infrastructure which we have not developed. Now, we might say, okay, we are doing 10,000 [GPUs] for the AI and all, but that is getting distributed to a large number of tech and SMB guys who are developing the applications. But what government needs, because it’s sovereign data, is that government needs to buy this kind of infrastructure. But I know each of these systems will cost anything between 400-500 million dollars to a billion dollars. Government may not be able to spend that kind of money. So probably that’s a place where private partnership becomes very, very important and critical.
So my request is that probably the department should be looking at how the large global institutions and technology partners can bring the core infrastructure and technology, because today technology is not a barrier. It’s the infrastructure and the scale and the procurement process and some of these policies. How the various data will be getting integrated is a problem. So if we can address these things, at the scale India has done on the DPI side, where we have implemented the best example at the global level, where UIDAI is being used by more than 800-900 million citizens in the country, we can deliver that. We have got the capability, and with all this AI transformation which is happening, our Honorable Prime Minister already said that India is going to be leapfrogging on those things and going to be the global leader in the AI space, with all those technologies embedded along with the capability that India has got.
Only what is required is the infrastructure, but infrastructure will come with a huge cost. When you are going to get the infrastructure, another element comes in: the power, energy and water. That’s going to be very critical. So somebody has to look at all the three aspects. You cannot run the infrastructure if you don’t have the power. So we need to have the power which can help and power these kinds of systems. So alternative power resources are going to be very, very critical. They’ll be all water-cooled systems, because they will have hundreds of thousands of GPUs and CPUs running together. They will require huge power and huge water capabilities. So we need to have that.
So India needs to start thinking on those lines to create that, if we have to protect and we have to get the right early warning alert to save the lives of millions of citizens in the country. Thank you.
Thank you. And definitely DRR also offers an opportunity for us to ponder. So taking forward from Mr. Som, I’ll go to Mr. Pankaj Shuklaji, the Head of Customer Engineering at Google Cloud. Basically, cloud computing and AI platforms enable real-time analytics at scale. So what are the critical infrastructure investments which are essential to support AI deployment in low-connectivity and high-risk environments? That’s very vital, looking at the geography of our nation. And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk? So, your insights on that.
Good afternoon, everyone. So irrespective of the technology, when we talk of disaster management and resilience, essentially what we are trying to do is turn the chaotic reality on the ground into actionable intelligence. So, for example, the data fragmentation, the data which is sitting with multiple ministries, social media, various places: all of that first of all needs to be brought to one place, or at least you should have an ability to bring all of that data and turn it into a living intelligence. So once the data is there, which is structured as well as unstructured data, then we have the ability of our AI models today, which are multi-modal, to make sense of completely chaotic, noisy data and turn it into real intelligence at unimaginable speed. That is essentially what AI is all about. So when it comes to the real implementation of this entire architecture, and panelists spoke about multiple aspects of how we can use AI and how do we actually implement it on the ground: if you look at what we need, and essentially if we talk of AI broadly, it is there at five layers.
One is the infrastructure layer. Second is the operating system layer, which runs on top of infrastructure; I am not talking about just servers and data centers, but an operating system layer which scales from a central location to an edge location to multiple regional locations. Then, on top of that, the services which are required, the platform services which are required to basically build the AI applications, make use of the right models, etc. Then you have got the models, the multi-modal ability of models like Gemini and various others which the hyperscalers provide; and, for example under the [IndiaAI] mission, a lot of Indian providers are building models. Ma’am spoke about a lot of the models which, for example, other companies are building.
So the question is: how are we able to make use of all these diverse sets of models in a dynamic manner, and use agentic AI on top of that to build applications and turn that into real action, which can be disseminated at the places where we want, both proactively, during the response, as well as after the response? So the question is: how do we implement it? So implementation of this will require a framework or an architecture which essentially has a central living intelligence of all the data, on which you are basically experimenting and pre-training the models and tuning, using different types of models, and building applications. The real application of that is going to happen at a place which might get completely disconnected from the central place.
So you should be able to actually build all of these applications, AI applications, make use of that data, make a single source of truth centrally, but have the ability to send that intelligence back to a tactical location. Today it is exactly possible. So organizations, for example Google and many other organizations, are actually trying to basically bring all the goodness of the hyperscaler cloud, for the entire infrastructure and managed services layer as well as AI tooling, to on-prem, and then the ability to also run those in a completely disconnected, air-gapped environment in a zero-trust manner. So you have the security of your data and your applications; and then you should also have an ability to connect the edge locations in a federated manner to the central place; but, if required during a disaster, you should be able to carry a rugged device which basically has a basic small set of the central intelligence sitting in it, with all the necessary models, to basically take action on the ground. And that action could be related to either finding out where your assets are, where the maximum impact has happened, or how you actually send the information to various places.
So all of those things are absolutely possible today. And while a huge amount of infrastructure in its own right is actually required to train models and build models, and that’s happening across the country, we should have an ability to bring all the good models, the right models for the right thing, to basically run on-prem at a smaller set of infrastructure, and make a smaller set of that which can run in a tactical location which sits in possibly very limited infrastructure and compute. That is what we…
Thank you, Pankaj, I think, for giving Google’s insight into building rugged systems for at-scale deployment of AI solutions in low-cost and high-risk environments. Another contributor to DRR, particularly AI deployment in DRR, could be the startups, and we have Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. Basically, Nikhilesh, can you enlighten us about how startups can contribute in developing a DPG at population scale for DRR, particularly for countries like India?
…The modeling layer, which is transforming the data into various insights, hazards. The asset and the people layer, which is getting impacted, where we need to know today, in a personalized way and with precision, exactly, as we talked about: if a flood is coming, which road, which houses; if a landslide is coming, which area. And the fourth, which is most important, is the workflows to translate the actions. And this is where we see a role for DPI and DPG, because all these four layers are not done by one person. They are looking at data scattered across various agencies. And we need to have DPIs and DPGs which are built across this data, right from different institutions which are bringing meteorological data, some institutions which are bringing water and other asset-related data, some institutions which are creating different layers, like the Survey of India in the case of India, on earthquake and various other layers.
Now, we also see AI playing a role today, and I will just give an example of that. As extreme events are happening, one of the first pressures is in the water sector, where we see extreme floods, sudden gushes of water coming into the large dams, and dams are one of the most extreme and vulnerable assets that we are all exposed to. We would also see that for the large dams we have perhaps got a good handle to control them during a disaster, but we have a big number of dams in the country which are unregulated, and they are scattered across in large numbers, and there is no forecast available for them.
So how do we churn out, in near real time, both hourly and on a timescale of days, forecasts for close to 1 million water bodies which at any time can be vulnerable? One of the solutions that we recently saw was actually bridging the data gap by utilizing AI: AI sitting on real-time satellite data at 30-minute intervals and radar data (currently all these assets are available from IMD), and translating that into a nowcast. That nowcast layer was then translated into hydraulics for each of the dams, and in the cyclone months close to 5,000 dams were given forecasts in real time. Such use cases you see where data is getting connected, is available in real time in an interoperable format, and then there are players who can translate this data into actions.
I see the contribution where such platforms are brought to national and state scale, and these use cases are also packaged and made available for different recipient departments to translate them into actions. And one more thing I would just like to add, taking this as a forum, sir. Risk assessment and risk reduction both have a very big gap when it comes to data, especially for various events (take earthquakes, take other types of disasters) where historic parametric measurements have not been available, and knowing the location-specific frequency of these hazards has actually been lacking, because you don’t have a database. Now AI can play a very good role here, where the information is lying in lots of news which has happened, and which contains unstructured information on location, unstructured information on the hazard that has caused the damages.
So AI can actually uncover this information and create structured data sets, hazard-wise, and these will also feed into various DPGs that will further unlock value for the insurance sector, which will basically benefit from knowing location-specific intensity and also frequency of the risks. I will just close with that, sir.
Thank you, Nikhilesh. You have aptly summarized that startups in this sector can definitely play a very vital role, particularly for developing rugged AI systems for India at population scale. Now we have heard the panelists; from the Indian perspective, since we are running large systems, we would also like all the members to have the benefit of insights into how the national systems are functioning in India and how technology is basically being deployed at scale from a DRR perspective. So firstly I would like to get an insight from Dr. Mrutyunjay Mohapatra, DG of IMD, who is here and can elaborate on how robust AI-based systems are being deployed at population scale in the Indian context.
Namaskar. Good morning to all of you. Respected Dr. Kamal Kishore sir, Adas Nand sir, our Krishnamurthy sir, and distinguished panelists, delegates, friends and colleagues. At the outset, I congratulate NDMA for organizing this session, which has given a lot of thought to each of us who are represented here. I’ll just start with the initiative that has been taken up by the UN and the WMO. A clarion call was given in 2022: early warning for all. And when you go for early warning for all, it includes all the countries, all the people, all sectors, all the strata of the society. When that call came, actually less than 50% of countries had early warning at that time.
Now the number is increasing, but still the time is short: by 2027 we have to achieve the 100%, that is, the early warning for all. It is a long goal, and if we review now, we find that during these last 5 years there has been a huge jump in technology, and AI is one such technology which is helping the extension of early warning for all. Looking at the various components of that, you need first the risk knowledge at each and every point, as our friend Nikhilesh told us. It is not possible with the existing network of any country to have the risk knowledge at each and every location, but at the same time there is unstructured data, as was told, which can be utilized to create the knowledge, to create the risk, hazard and vulnerability assessment; that historical knowledge can be utilized in real time when you go for the prediction of any severe weather event. Next point comes the early warning.
Yes, on the early warning aspects, you will see that there has also been a huge jump in recent years with the inclusion of many AI-based models. You will find that each and every large NMHS (established NMHS, you can say) is utilizing AI. IMD is also utilizing AI for taking decisions with respect to early warning. At the same time, I will tell you, AI has come up as a hybrid model along with the physical models. We cannot get away from the physical models, because physical models provide you the physical understanding, the reasoning, and hence the human knowledge gets into the picture with the help of these physical models. So therefore AI has to be suitably connected with the physical models.
That’s what everyone is doing, starting with the European Centre or the Indian centres. And to do that also, there have been many collaborations and integrations towards that. So after that, if you look at the basic backbone, which is the modeling: the modeling starts with the basic assumption that weather forecasting is an initial value problem. You cannot give a weather forecast if you do not have the initial status of Earth, ocean, and atmosphere. So therefore the basic thing which we are talking of now is already defined in the physical modeling system. Unless you improve the initial data, with all types of observational tools and techniques, you cannot improve the weather forecast. So therefore, here, collecting or creating the data with the help of AI also will go a long way in improving not only the AI models, but also the physical models and the hybrid models.
Once you have the good data, the quality of the data can be improved with the help of AI. I’ll tell you: from satellite we get a lot of data, but only five percent of the data from satellite is usable. We cannot use the rest of the data from satellites because of the quality. And further there is the quantity: you cannot accommodate all types of data in the physical modeling system. As our friends have been telling, you need infrastructure; we do not have a computational infrastructure where you can utilize 100% of the satellite data. So yes, it is true that in India we do not have sufficient computing infrastructure. We have at least now 28 petaflops in IMD, and outside, of course, with the National Supercomputing Mission we have come up like that, but that is not sufficient, and therefore there is scope for public engagement for augmenting the computational infrastructure and other digital infrastructure. But at the same time there is another opportunity because of AI: now a box model has come up. A poor country, a poor small island nation, cannot venture or cannot even dream to have a high-performance computing system; they can go for an AI system. A box model has come up which you can give to a small island nation, and there, with the help of a few GPU nodes, they can have the forecast. That has come up, and it will grow gradually, and we will have the affordability of early warning with the help
of these GPU-based, AI-driven or data-driven models. So after that comes the forecast. Once you come to forecast, now we have already come up to an AI consensus; but physical consensus plus AI consensus, then again you will go for the final forecast. Then finally you go to the sectoral applications. There is a huge scope here, with the improvement of economic conditions and societal conditions in every country, to improve our decision-making for each and every sector, and there AI/ML can play a role. So I urge upon all the industries, academia, R&D, and think tanks to collaborate with the NMHSs, especially with the India Meteorological Department and other organizations here, to have very authentic, specific, and judicious utilization of AI with the limited reasonable resources available in the country.
So thank you very much.
Thank you, Dr. Mohapatra, for your valuable insight. Now, since NDMA is the apex national body which will basically integrate all the varied systems into creating a rugged AI system, I would like the entire audience to get the benefit of the vision of NDMA from Member and HOD Dr. Krishna Vatsa. Sir, can you please elaborate how NDMA intends to take this forward to create a sustainable, low-cost, at-scale model for the country?
Thank you very much for giving me this platform. I would like to mention that we already have a huge amount of data that exists in relation to almost all the hazards. Look at the earthquakes: we record all the micro-earthquakes for the entire country. The kind of data that exists for the earthquakes which are below magnitude 3 can also give us a very good indication of the kind of earthquakes that we can experience in the Himalayas and other regions. And the availability of data is going to increase exponentially as we invest in the observational networks. In almost every mitigation program that we are doing, we have included a significant aspect of early warning systems. In the next five years or so, every village in India will have an automated weather station.
We will have a large number of instruments for measuring landslides. We are going to at least quadruple the seismometers and strong-motion accelerographs. So we will be investing a huge amount of money in improving our observational networks across the hazards, which will mean that we will have access to a still larger amount of data. What is important is that we need to have the capacity to process the data, apply the AI models, and improve the precision of early warning. That is the sphere where we are struggling right now. It’s one thing to set up the observational network; another thing is how to collect the data, process the data, and generate the information that could be used, more so when it comes to informing the common citizens.
Scientists are one thing; you are getting a huge amount of data. But we are not doing it for the scientists. We are doing it for the people who get affected by disasters. So how do we go about it? The roadmap is not sufficiently clear, and I keep talking to all kinds of people. Somebody would come and say: you set up a huge data center. Okay, that’s fine, great. But people also say, if you are setting up a huge data center and you are not really empowering all the early warning agencies, then how are you going to justify the investment in data centers? The data comes to individual agencies. How do the data center and the individual early warning agencies interact so that we have a good model available?
And we don’t have unlimited resources. So the point is, this is where we need more clarity: how do we go about using our existing networks to improve the precision of early warning risk information, through a gradually incremental way of building capacities? That of course includes the data center, and it should include improving our connection with the LLM models. But it’s also very, very important that we find a way of improving the overall architecture. That is one area where we are struggling and where we need some guidance. Thank you very much.
Thank you, sir. I think we are coming to the close of the discussion. I’ll request Krishna Vatsa sir to please give a memento to our panelists. I would also then request all our dignitaries in the front row, after the mementos are over, for a quick photograph, and then we vacate the room. Thank you. I’ll request the leadership from the states of Tamil Nadu, Andhra Pradesh and Telangana also to come to the front for the photograph, please. We are very happy to inform that most of the states are also represented by their State Disaster Management Authorities. Thank you very much.
Event“Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber‑attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones.”
Sources describe Ramtohul outlining policy reforms that address cyber threats alongside conventional hazards, confirming his expanded disaster definition [S1] and [S100].
“He advocated the creation of digital‑twin representations of critical infrastructure so that emergency services can locate people and assets in real time.”
While the report attributes the digital-twin idea to Ramtohul, the knowledge base discusses digital twins as a broader concept for climate-extreme management, providing additional context on the technology’s relevance [S20].
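To make the digital-twin idea concrete, the sketch below shows a minimal asset registry that responders could query for critical infrastructure near an incident. All class names, coordinates and statuses are hypothetical illustrations, not part of any Mauritian system; a real twin would stream live sensor data and enforce the access controls for authorised responders that Ramtohul describes.

```python
# Hypothetical sketch of a digital-twin asset registry for emergency response.
# A production twin would sync live sensor feeds and restrict queries to
# authorised responders; everything here is illustrative.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Asset:
    asset_id: str
    kind: str          # e.g. "hospital", "substation", "shelter"
    lat: float
    lon: float
    status: str        # last reported operational status

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def assets_near(registry, lat, lon, radius_km):
    """Return assets within radius_km of an incident, nearest first."""
    hits = [(haversine_km(lat, lon, a.lat, a.lon), a) for a in registry]
    return [a for d, a in sorted(hits, key=lambda x: x[0]) if d <= radius_km]

registry = [
    Asset("H-01", "hospital", 19.076, 72.877, "operational"),
    Asset("S-14", "shelter", 19.100, 72.860, "operational"),
]
for a in assets_near(registry, lat=19.08, lon=72.88, radius_km=5.0):
    print(a.asset_id, a.kind, a.status)
```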
“He warned against fully automated decision‑making, calling for a human‑in‑the‑loop approach and for all early‑warning messages to be human‑verified before broadcast, a policy already being piloted in Mauritius’s cell‑broadcast system.”
The knowledge base records that Mauritius is planning cell-broadcast systems with human-verified messaging protocols to avoid misinformation during emergencies, confirming the human-in-the-loop policy [S1] and [S45-55].
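A minimal sketch of what such a human-in-the-loop gate might look like in software: an AI-drafted alert cannot reach the broadcast step until a named duty officer explicitly approves it. The function and field names here are assumptions for illustration, not drawn from Mauritius’s actual cell-broadcast stack.

```python
# Sketch of a human-in-the-loop gate for AI-drafted early-warning alerts,
# mirroring the human-verified cell-broadcast policy described above.
# The broadcast step is a stand-in; all names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAlert:
    hazard: str
    area: str
    text: str
    approved: bool = False
    approver: Optional[str] = None

def human_review(alert: DraftAlert, reviewer: str, approve: bool) -> DraftAlert:
    """An authorised duty officer must explicitly approve each draft."""
    alert.approved = approve
    alert.approver = reviewer if approve else None
    return alert

def broadcast(alert: DraftAlert) -> None:
    """Refuse to send anything that has not passed human verification."""
    if not alert.approved:
        raise PermissionError("Alert blocked: human verification required")
    print(f"CELL BROADCAST [{alert.area}] {alert.text} (verified by {alert.approver})")

draft = DraftAlert("cyclone", "coastal district 7",
                   "Severe cyclonic storm expected tonight; move to designated shelters.")
broadcast(human_review(draft, reviewer="duty_officer_3", approve=True))
```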
“The UK Met Office is developing machine‑learning weather models that will augment, not replace, physics‑based forecasts.”
Met Office documentation outlines a strategic plan to integrate AI/ML with traditional physics-based weather and climate models, confirming the hybrid approach described in the report [S32].
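One common way to realise such a hybrid, shown below on purely synthetic data, is to let a learned component correct a physics-based forecast rather than replace it; here the “ML” part is deliberately reduced to a least-squares bias correction so the idea stays visible. This is an illustrative toy under those assumptions, not the Met Office’s method.

```python
# Toy hybrid forecast: an ML component learns a correction to a physics-based
# prediction instead of replacing it. Data are synthetic; the actual Met
# Office models are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
truth = 20 + 5 * np.sin(np.linspace(0, 6, 200))       # "observed" temperature
physics = truth + 1.5 + rng.normal(0, 0.5, 200)       # physics model with a bias

# Fit a linear correction (the simplest possible learned component).
A = np.vstack([physics, np.ones_like(physics)]).T
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
hybrid = A @ coef

print("physics RMSE:", np.sqrt(np.mean((physics - truth) ** 2)).round(3))
print("hybrid  RMSE:", np.sqrt(np.mean((hybrid - truth) ** 2)).round(3))
```

The point of the design is that the physics model keeps supplying the dynamics while the learned layer removes systematic error, which is one reading of the “augment, not replace” framing above.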
“The Met Office is co‑developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization.”
The knowledge base explicitly mentions co-development of benchmarking and testing frameworks with partner organisations, aligning with the report’s statement [S8].
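At its simplest, co-developed benchmarking means partners agree on metrics and baselines before comparing models. The snippet below sketches one such agreed metric, a skill score against a persistence baseline; the metric choice and data are assumptions for illustration, not the actual Met Office–India framework.

```python
# Minimal shared benchmark in the spirit of a co-developed evaluation
# framework: any partner can score a forecast against the agreed metric.
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def skill_score(pred, obs, baseline):
    """1.0 = perfect; 0.0 = no better than the agreed baseline forecast."""
    return 1.0 - rmse(pred, obs) / rmse(baseline, obs)

obs = [30.1, 31.4, 29.8, 28.9]
model = [30.0, 31.0, 30.2, 29.1]
persistence = [30.5] * 4          # naive baseline: repeat the last observation
print("skill:", round(skill_score(model, obs, persistence), 3))
```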
“Calls for human‑in‑the‑loop AI systems echo broader concerns about preserving human agency in automated decision‑making.”
Other sources highlight similar warnings about over-reliance on algorithms and stress the need for human oversight, providing broader context to Ramtohul’s stance [S105] and [S108].
There is strong, multi‑dimensional consensus among the moderator and all panelists that AI for disaster risk reduction must be hybrid, human‑centred, built on robust computational and data‑processing infrastructure, governed by interoperable sovereign‑compatible architectures, financed through public‑private partnerships, and delivered securely to end‑users. The shared emphasis on trust, explainability, and last‑mile accessibility underscores a unified vision for scalable, inclusive AI‑enabled resilience.
Consensus is high across technical, policy, financial and ethical dimensions, indicating that future initiatives are likely to focus on integrated hybrid models, capacity‑building infrastructure, collaborative governance frameworks and secure, human‑overseen deployment.
The panel broadly concurs on the strategic importance of AI for disaster risk reduction, yet key tensions emerge around the scale and financing of computing infrastructure, the governance model for data (sovereign versus open co‑development), and the pace and architecture of AI deployment. A notable surprise is the differing view on whether cyber‑security incidents should be treated as disasters alongside traditional physical hazards.
The significance of these divergences is moderate to high. While there is consensus on the goal of AI‑enhanced resilience, the disagreements on infrastructure investment, data governance, and the scope of the disaster definition could impede coordinated policy action unless reconciled. They suggest the need for a hybrid policy framework that accommodates both high‑performance national infrastructure and low‑cost alternatives, aligns sovereign data requirements with collaborative benchmarking, and broadens disaster definitions to include cyber threats.
The discussion evolved from a broad conceptualization of disaster risk (including cyber threats) to concrete challenges of infrastructure, data quality, and implementation. Key comments—especially the digital‑twin vision, the quantified supercomputing gap, the five‑layer AI architecture, and the hybrid AI‑physical modeling approach—acted as turning points that redirected the conversation toward practical, scalable solutions and highlighted the necessity of public‑private partnerships, interoperable standards, and user‑centric design. Collectively, these insights shaped a nuanced narrative: while AI offers transformative potential for DRR, realizing it at national scale demands coordinated policy reforms, robust yet affordable computational resources, and integrated data‑to‑action pipelines.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.