National Disaster Management Authority

20 Feb 2026 15:00h - 16:00h


Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel opened by highlighting the rising frequency, intensity and complexity of disasters worldwide and the parallel surge in artificial-intelligence capabilities, framing the question of how India can develop AI-driven resilience models [1-5]. The moderator emphasized that the next frontier in disaster risk reduction is not merely better algorithms but the institutionalisation of AI within national resilience architectures [7].


The discussion then turned to policy reforms, with Mauritius’s Minister Avinash Ramtohul stressing that disasters now affect both the physical and virtual realms, including cyber-attacks, and that governance must bridge these domains [26-32]. He advocated creating digital-twin representations of critical infrastructure to enable emergency services to locate people and assets in real time, and argued that such digital maps should be accessible to authorised responders as part of reform [33-36]. Ramtohul also warned that fully automated decision-making can be dangerous, insisting on a “human-in-the-loop” approach for AI-based early-warning alerts, a stance reflected in Mauritius’s policy to require human-verified messages in its cell-broadcast system [45-55].


Beth Woodhams from the UK Met Office described the agency’s strategy of developing hybrid weather models that blend physics-based forecasts with machine-learning outputs, proceeding incrementally while co-developing benchmarks with partners such as India [65-73]. She noted that building trust requires aligning evaluation metrics with user needs and that joint development of both models and their testing frameworks is essential for operational adoption [72-74].


Som Satsangi highlighted India’s current shortfall in high-performance computing, noting that the country’s supercomputers total roughly 40 petaflops (with another 50 planned), compared with the exaflop-scale systems used in the United States for real-time AI analytics [92-100]. He argued that the gap in core infrastructure, where a single system can cost hundreds of millions of dollars, must be closed through public-private partnerships, and that power and cooling requirements are equally critical for deploying large-scale AI clusters [106-124].


Pankaj Shukla outlined a five-layer AI architecture (infrastructure, operating system, platform services, models and applications) and stressed the need for a central “living intelligence” that can be synchronised with edge or air-gapped devices to deliver actionable insights even in disconnected, high-risk settings [136-144][145-152]. He explained that today’s hyperscaler clouds can be extended on-premises in a zero-trust fashion, allowing rugged devices to run distilled models locally for rapid response [150-152].


Startup founder Nikhilesh Kumar added that effective disaster-risk platforms must integrate four layers (modeling, asset, people and workflow) and that AI can transform scattered, unstructured data from satellites, social media and agency records into near-real-time nowcasts, such as the dam-level forecasts demonstrated for thousands of Indian reservoirs [155-166]. He further pointed out that AI can extract hazard information from news and other unstructured sources to build location-specific risk databases that support insurance and mitigation planning [169-171].


Dr. Mrutyunjay Mohapatra reiterated the global “early warning for all” agenda, emphasizing that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity, and suggested low-cost GPU-based box models as a viable solution for resource-constrained nations [185-194][203-207]. Finally, Dr. Krishna Vatsa described India’s expanding observational networks and the pressing need to develop processing capacity and clear data-center architectures to turn the growing data streams into reliable, citizen-focused early warnings, concluding that coordinated investment, partnership and governance are essential to realise AI-enabled disaster resilience at scale [220-229][230-247].


Keypoints

Major discussion points


Policy & governance: bridging the physical and virtual worlds – The Minister of IT (Mauritius) emphasized that disaster risk must cover both physical hazards and cyber-threats, calling for a “digital twin” that links real-world assets to virtual models and insisting that critical AI-driven alerts remain human-verified and “human-in-the-loop” to avoid fully automated decisions that could cause harm[26-33][34-42][45-47][52-55].


Hybrid AI-physical modelling and co-development – The UK Met Office highlighted that AI will augment, not replace, traditional physics-based weather models through blended or hybrid approaches, and stressed the need for joint benchmarking and evaluation frameworks with partners (including low-resource countries) to build trust in AI-generated forecasts[65-71][72-74].


National-scale infrastructure and resource constraints – Hewlett Packard Enterprise’s Som Satsangi pointed out that India’s current supercomputing capacity (≈ 40 petaflops) is far below the exaflop-scale systems used elsewhere, making the cost, power, and cooling requirements for AI-driven early-warning platforms a major barrier; he called for public-private partnerships to acquire the necessary sovereign data infrastructure[92-100][106-108][109-126].


Cloud-edge architecture for low-connectivity, high-risk environments – Google Cloud’s Pankaj Shukla described a five-layer architecture (infrastructure, operating system, platform services, models, and applications) that creates a central “living intelligence” while enabling edge-deployed, zero-trust, rugged devices to operate even when disconnected, ensuring real-time analytics and safe dissemination of warnings[136-152].


Start-up driven DPIs/DPGs and workflow translation – Vassar Labs’ Nikhilesh Kumar outlined four AI-enabled layers (modeling, asset/people, workflow, DPI/DPG) and illustrated how startups can integrate scattered agency data, generate near-real-time nowcasts for millions of water bodies, and convert unstructured news into structured risk datasets that feed insurance and mitigation systems[155-167][168-172].


Overall purpose / goal of the discussion


The panel was convened to explore how Artificial Intelligence can be institutionalized within national disaster risk reduction (DRR) frameworks, especially for India, by examining policy reforms, technical integration, infrastructure needs, and collaborative models (government, industry, academia, and startups) that together can build scalable, trustworthy, and inclusive early-warning and resilience systems.


Tone of the discussion


– The conversation began with a formal, forward-looking tone, framing AI as the “next frontier” in disaster governance.


– It then shifted to a technical and pragmatic tone, with speakers detailing concrete challenges (cyber-security, super-computing gaps, data quality) and realistic constraints.


– As the dialogue progressed, the tone became collaborative and solution-oriented, emphasizing partnerships, co-development, and actionable road-maps.


– The session concluded with a constructive call to action, urging coordinated effort across agencies and sectors to translate AI advances into operational resilience.


Speakers

Moderator – Role: Moderator of the panel discussion.


Avinash Ramtohul – Minister for Information Technology, Communication and Innovation, Republic of Mauritius [S9]; expertise: AI-enabled early warning systems, digital twins, cybersecurity, policy reforms for disaster risk governance.


Beth Woodhams – Senior Manager, UK Met Office [S1]; expertise: disaster risk reduction, weather forecasting, integration of machine-learning models with physical weather models, international co-development of AI-driven meteorological services.


Som Satsangi – Former SVP & Managing Director, Hewlett Packard Enterprise India [S12]; expertise: AI deployment in geospatial and climate analytics, sovereign data architectures, large-scale infrastructure for real-time disaster alerts.


Pankaj Shukla – Head of Customer Engineering, Google Cloud India [S14]; expertise: AI-driven analytics, hazard mapping, predictive analytics, cloud and edge infrastructure for low-connectivity, high-risk environments.


Nikhilesh Kumar – CEO & Co-founder, Vassar Labs [S11]; expertise: AI solutions for disaster risk reduction, modeling layers, data integration, startup-driven platforms for population-scale early warning.


Dr. Mrutyunjay Mohapatra – Director General, India Meteorological Department (IMD) [S7]; expertise: AI-enhanced weather forecasting, hybrid physical-AI models, early warning systems at national scale.


Dr. Krishna Vatsa – Head of Department, National Disaster Management Authority (NDMA) [S3]; expertise: data integration, AI for early warning precision, building scalable disaster-risk information systems.


Additional speakers:


Mr. Martin – Mentioned in the discussion but did not speak; no role or expertise provided.


Full session report

Comprehensive analysis and detailed insights

The panel opened by underscoring that disasters are becoming more frequent, intense and complex worldwide, while advances in artificial intelligence (AI) are occurring at an unprecedented pace[1-5]. The moderator framed the central challenge: “How can India develop an AI-enabled model for resilience?” and argued that the next frontier in disaster risk reduction (DRR) is the institutionalisation of AI within national resilience architectures[1-5][7].


Policy and governance – bridging the physical and virtual worlds


Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber-attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones[30-32]. He advocated the creation of digital-twin representations of critical infrastructure so that emergency services can locate people and assets in real time, and insisted that these virtual maps be accessible to authorised responders (fire, medical, etc.) as part of a broader reform agenda[34-36]. Crucially, he warned against fully automated decision-making, calling for a human-in-the-loop approach and for all early-warning messages to be human-verified before broadcast, a policy already being piloted in Mauritius’s cell-broadcast system[45-55].


Hybrid AI-physical modelling – the Met Office perspective


Beth Woodhams explained that the UK Met Office is developing machine-learning weather models that will augment, not replace, physics-based forecasts. The agency plans a gradual rollout in which AI outputs are blended with traditional model results, creating hybrid forecasts that increase confidence as the AI component matures[65-71]. To build trust, the Met Office is co-developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization, emphasizing that co-development and joint benchmarking are essential to operationalise AI-driven forecasts in low-resource settings[71-74].


Computational infrastructure – India’s current shortfall


Som Satsangi highlighted that India’s supercomputing capacity (≈ 40 petaflops) is far below the exaflop-scale systems (1-2 exaflops) used in the United States for real-time AI analytics[92-100]. He noted that each exaflop-class machine costs US $400 million-$1 billion and requires massive power and water-cooling resources, making it unaffordable for a single government entity[106-108][109-126]. Consequently, he called for public-private partnerships to acquire and operate the necessary sovereign data infrastructure[106-108][109-110].
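For scale, the figures Satsangi cites can be put side by side with a few lines of back-of-the-envelope arithmetic (a sketch using the panel’s numbers only, not authoritative benchmark data):

```python
# Illustrative arithmetic only: comparing India's reported supercomputing
# capacity with the exaflop-scale US systems cited on the panel.
# All figures below are the panel's, not verified benchmark results.

PETAFLOPS_PER_EXAFLOP = 1000

india_pf = 40                 # current capacity cited by Satsangi
planned_additional_pf = 50    # planned addition cited by Satsangi
us_systems_ef = {"El Capitan": 1.8, "Frontier": 1.3, "Aurora": 1.0}

# Combined capacity of the cited US systems, in petaflops
us_total_pf = sum(us_systems_ef.values()) * PETAFLOPS_PER_EXAFLOP

# How many times larger the cited US systems are than India's
# current-plus-planned capacity
gap_ratio = us_total_pf / (india_pf + planned_additional_pf)

print(f"US cited systems: {us_total_pf:.0f} PF")
print(f"India (current + planned): {india_pf + planned_additional_pf} PF")
print(f"Gap: roughly {gap_ratio:.0f}x")
```

On these figures, the three cited US systems alone exceed India’s current-plus-planned capacity by a factor of roughly forty-five, which is the gap the public-private partnership argument is meant to address.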


Interoperability, sovereign data and governance


Satsangi stressed that AI systems must be built on sovereign-compatible data architectures and that clear governance mechanisms are needed for life-saving decisions, especially in a federal context such as India where multiple state and central agencies must interoperate[80-81][46-55]. The moderator asked about “standards of explainability” for AI-driven alerts, but the panel did not reach a concrete consensus on specific metrics[7].


Cloud-edge architecture for low-connectivity, high-risk settings


Pankaj Shukla described a five-layer AI stack – infrastructure, operating system, platform services, models and applications – that creates a central “living intelligence” while allowing distilled models to run on edge or air-gapped rugged devices. This architecture enables real-time analytics even when connectivity is lost, supports zero-trust security, and mitigates misinformation by delivering verified alerts directly to field teams[136-144][148-152].


Start-up contribution – DPIs and DPGs


Nikhilesh Kumar outlined a four-layer framework (modeling, asset/people, workflow, DPI/DPG) that startups can use to turn fragmented, unstructured data into actionable insights, with DPIs (digital public infrastructure) and DPGs (digital public goods) as the delivery mechanisms. He gave the example of nowcasting for nearly one million water bodies by fusing 30-minute satellite imagery and radar data, then translating the output into hydraulic forecasts for thousands of dams in real time[155-166]. He also demonstrated how AI can extract hazard-specific information from news and social media to build location-specific risk databases that support insurance and mitigation planning[169-172].


National implementation – hybrid models and low-cost alternatives


Dr Mrutyunjay Mohapatra linked the discussion to the UN “early-warning for all” agenda, noting that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity[185-194]. He pointed out that only about 5% of satellite data is currently usable, and that improving data quality with AI benefits both AI and physics models[203-207]. For resource-constrained contexts, he promoted a GPU-based “box model” that can deliver acceptable forecasts without the need for exaflop supercomputers, offering an affordable pathway for small island states and low-income regions[207-208].


Observational networks and processing bottlenecks


Dr Krishna Vatsa (NDMA) described India’s ambitious plan to quadruple seismometers, install automated weather stations in every village and expand landslide sensors, thereby generating massive new data streams[226-229]. However, he highlighted a critical gap: the lack of processing capacity and a clear data-centre architecture to turn this raw data into citizen-focused early warnings[230-247]. He called for a coordinated roadmap that incrementally builds AI-processing capability while ensuring that data centres meaningfully serve early-warning agencies[238-247].


Areas of consensus


All participants concurred that AI should be embedded within a coherent national resilience framework, that human oversight is essential for life-saving alerts, and that hybrid AI-physics models are the preferred technical approach. They also agreed on the necessity of large-scale computing resources, interoperable sovereign data architectures, and strong cross-sector collaboration (government, industry, academia, startups)[7][46-55][65-71][80-81][136-144][155-162].


Key points of disagreement


A tension emerged between the emphasis on sovereign-compatible data architectures (Satsangi) and the Met Office’s advocacy for open co-development and shared benchmarking (Woodhams), reflecting differing priorities in balancing security with collaborative innovation[80-81][71-74]. A second divergence concerned computational scale: Satsangi argued that India must acquire exaflop-scale supercomputers to support real-time AI alerts, whereas Mohapatra suggested that GPU-based box models provide a viable, low-cost alternative for nations lacking such resources[92-100][207-208].


Conclusions and actionable recommendations


The panel distilled several key take-aways: AI must be institutionalised, human-in-the-loop, and blended with physics models; India needs to close a substantial computing-capacity gap while exploring affordable GPU solutions; a five-layer cloud-edge architecture should underpin a central living intelligence; startups can deliver DPIs/DPGs that integrate hazard, asset, and population data; and expanding observational networks must be matched with clear data-centre strategies.


Proposed action items:


1. Develop integrated digital-twin and hybrid forecasting platforms for critical infrastructure and meteorological services (Ramtohul, Woodhams).


2. Create a sovereign-data architecture with defined governance and audit standards for AI-driven decisions (Satsangi).


3. Pursue public-private partnership models to fund either exaflop-scale or GPU-based compute clusters, incorporating sustainable power and cooling solutions (Satsangi).


4. Deploy the five-layer AI stack and ensure edge-ready, zero-trust devices for last-mile dissemination (Shukla).


5. Encourage startup-led DPIs/DPGs that translate multi-agency data into actionable workflows and risk databases (Kumar).


6. Promote GPU-based box-model forecasting as an interim solution for low-resource settings while larger infrastructure is built (Mohapatra).


7. Accelerate the rollout of automated weather stations, seismic sensors and landslide monitors, coupled with a roadmap for integrating these streams into AI-enabled early-warning pipelines (Vatsa).


Unresolved issues include financing the required supercomputing capacity, finalising standards for AI explainability, safeguarding AI-driven alerts against cyber-threats, and defining a governance model that balances sovereign data protection with collaborative benchmarking. Addressing these challenges will be essential for realising a scalable, trustworthy, and inclusive AI-enabled disaster resilience system across India and comparable jurisdictions[45-55][65-74][92-100][107-110][136-152][155-172][185-194][203-207][220-247].


Session transcript

Complete transcript of the session
Moderator

defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters are increasing. Climate variability is compounding existing vulnerabilities, urbanization is concentrating risk, and cascading hazards are challenging traditional response models. At the same time, we are witnessing unprecedented advances in AI. So, at this point of time, how does India bring or develop a model with AI for resilience? We believe that the next frontier in DRR is not better algorithms alone; it is institutionalizing AI within national resilience architecture, moving from pilot projects to national and global resilience systems. Thank you very much. Before we start the discussion, let me invite and call on the stage for the panel discussion His Excellency Dr.

Avinash Ramtohul, the Minister for Information Technology, Communication and Innovation from the Republic of Mauritius. Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South-South cooperation. I would like to invite Ms. Beth Woodhams, Senior Manager from the UK Met Office. She is a specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction. Welcome. I would like to invite Mr. Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, with industry insights on AI deployment in geospatial and climate analytics. Welcome, Mr. Som. I would like to now call upon Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. He is an innovator in leveraging AI for DRR.

Welcome, Nikhilesh. And lastly, Mr. Pankaj Shukla, Head of Customer Engineering, Google Cloud India, for practical AI applications, hazard mapping, predictive analytics, and EWS scale-up. Thank you. We will focus on double integration during this panel discussion. So my question first goes to the Minister for IT, Communication and Innovation, Republic of Mauritius. Minister, small island developing states face existential climatic threats. From your perspective, what policy reforms are required to institutionalize AI-enabled early warning and alerting systems within national governance frameworks? And how can countries with limited resources ensure sustainability in such ventures?

Avinash Ramtohul

Thank you and good morning, everybody. Thank you for the opportunity to be here amongst you. First of all, I would like to say a couple of points before I get into the actual response. Today, just like we have the physical world, we have a virtual world as well in which we all live. And that virtual world is so much bigger than the physical world we can see here in front of us at the moment. And just like disaster can strike the physical world, and that is the scope of the discussion, disaster can also strike the virtual world. And as we grow in dependency on the virtual world, on our digital systems, we should be well aware that disaster is not just the flood, the cyclone, the drought.

Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very important that the scope of the discussions, when we look at disaster, be extended to the virtual world and cybersecurity attacks as well. Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world. And I will explain myself. Just imagine, as we speak here, there is a big fire that breaks out in one organization. And because it broke out, there are automated connections that go to the fire services and the medical services, and they will proactively start driving to this place. But when they come to this place, how would they know where the people are? Because their main objective is to save the lives of the people; the material is secondary. How would they know where the people are? Do they have a structural plan of this space? Do they know where the pipes cross? Now, I am talking about a digital twin. It is really important that we create that digital twin, which will be the bridge between the physical world and the virtual world, and the architectural map of that digital twin should be accessible to a certain set of operators: the medical and the fire services. Now, this is part of the reform that we are looking at. And in a small administration, it becomes easier to do it, as opposed to a huge administration like India.

Now, there’s one more thing in there. As, let’s say, we also have the structural plan, how do we know where the people are? Can we have heartbeat indication? Can we have a thermal map of the place, so that we know wherever there’s 37, 38, 39 degrees, well, 37, 38 is better. Do we know where the people are located, so that when the fire services come, they go straight to that spot? So this is very important. And another reform that is important that we be aware of is that when there is some kind of pandemic which is contagious, there is human-to-human virus transfer. Now, we are all very excited. We are very excited about artificial intelligence.

But we are also aware that there is this possibility of viruses infecting systems. Just like viruses infect people, viruses also infect systems, and viruses get contagious in computers as well; we all know that. Therefore, we need to have mechanisms to protect, because if we have a message that goes through an early warning system to people, this already creates an alert in the minds of people; the adrenaline surge starts already. But if that message is infected, it can create a lot of disruption in our daily lives, and this we need to be very careful of. Therefore, in terms of reform, the decision-making process, and I think it was mentioned in the previous panel, the decision-making process is automated now. And 100% automation in the field of AI, where it concerns the lives of people, can be dangerous.

Therefore, human in the loop or human on the loop is critical in these kinds of environments. And this is also part of what we are looking at in Mauritius. Yes, it’s true that as a small island developing state, we call it SIDS, we have our own set of flash floods that can actually occur. Within a couple of hours, we can have flash floods and we can see cars floating around already. And this has happened in the country. And we don’t want that to happen again. Therefore, there are early warning systems that we are deploying, like cell broadcast systems, which we have planned to deploy. Now, again, the message that goes into that system should be a message that is human verified.

That is, decisions like these that are sensitive, highly sensitive, cannot be 100% automated. That’s part of our policy as well. We want to ensure that humans are involved, because machines cannot decide for humans; humans decide for machines. This is critical and needs to be given the attention that it deserves. And I believe our Prime Minister Modi ji also mentioned in his intervention yesterday that there is a great necessity to ensure that humans remain part of the decision-making process in the application of AI for disaster management. So these are a few points I wanted to mention. Thank you.

Moderator

insights in developing resilient governance frameworks which are scalable across nations. This is the way to go: a resilient system which is resilient even to cyber attacks; and alerts that are sustainable and meaningful, without causing alert fatigue, are also very much vital for a robust system to be effective across all disasters. So we now come to the second panelist, Ms. Woodhams. My question to you is: national meteorological agencies play a crucial role in operational forecasting and early warning delivery. From the perspective of the UK Met Office, how can AI complement physical weather and climatic models to improve forecast lead time and impact, basically impact-based warnings, to gain public trust?

And what institutional partnerships are necessary to ensure that AI-driven meteorological insights translate into actionable decisions and actions at national and local levels, with special emphasis on low-resource countries?

Beth Woodhams

Hello? Yeah. Right. Thank you. Thank you for your question. Thank you, it’s a real honour to be part of this panel. So at the Met Office we are currently developing machine learning weather models and we absolutely do not see these as a replacement for our physical models. Our plan over the coming years is to step by step implement these models through blending. This could be hybrid models, physics based and machine learning based. It could be blending the output from both of these models after they’ve run. The truth is we don’t know what the answer to this solution is yet but in order to build the trust amongst the users of our models, the customers of our data we’re certainly not going to have a complete shift.

We are going to do this step by step, increasing our blending as we become more confident with the data. From this conference, it’s very clear that companies from the private sector are developing these models; in the public sector, of course, we’re developing them too, and sovereign capability remains really important. But for public sectors we really need to have that co-development. At the Met Office we have a long history of co-developing with partners like India, so through WCSSP India and through WISER Asia-Pacific we have these partnerships; we’ve co-developed physics-based models, and we really want to do the same with machine learning models as well. At the Met Office we’re standardizing our benchmarking and evaluation. We really want to make sure that when we’re doing comparisons between machine learning and physics-based models, we’re focused on the same thing.

There’s a lot of metrics we can look at that show machine learning models are doing well, but are these the metrics that are most important to users? Therefore, not only do we want to co-develop the actual models with partners, we want to co-develop the benchmarking and the tests that we do on these models. Thank you.

Moderator

Thank you, Beth, for giving your insight into how national meteorological agencies may plan to use AI in their systems. So now we move towards the technologies: how do we really create resilient systems for forecasting? My first question is to Mr. Som Satsangi. Private sector innovation has advanced rapidly. The first and foremost question that comes to my mind is: how can technology providers design AI systems that are interoperable with sovereign data architectures? Because that is the crucial issue to be cracked. We have to design AI systems that are interoperable with sovereign data architectures and compatible with diverse governance ecosystems. For a country like India, with a federal government and state governments, this is a very vital nut to be cracked from the technology point of view. And similarly, what standards of explainability are necessary when AI informs life-saving decisions?

Som Satsangi

Thanks, Manish. Really a great question, and probably in this room I’ll be calling out something which is very, very important, because just when I walked in I heard Mr. Martin, and he spoke a couple of points which are so important and critical for a country like India. He spoke about the government. He spoke about the procurement policies and the scale. So all these three things are so important and critical when we look from India’s standpoint, with 1.2 billion-plus citizens. I’ve been the managing director of Hewlett Packard Enterprise for the last nine years, and I know I’ve been involved in almost all large critical infrastructure projects, whether it’s UIDAI or any kind of transaction, COVID, all applications.

And we know that all these things we have developed at this scale and delivered. Probably the most important aspect is human life, with the climate change and the disasters which are happening across the world, along the length and breadth of India on the coastal side. But somehow, are we ready to do it? I don’t think we are ready. And I’ll give you some pointers which are very important on why it’s not happening, to Mr. Martin’s point. India had a very ambitious plan for the National Supercomputing Mission way back in 2015, where India said, okay, we’ll be investing 4,500 crore to develop some supercomputers which will be high class.

But in the last 10 years, what we have developed is some 37 supercomputers with just 40 petaflops of capacity. Is that sufficient? Now we are planning, okay, we’ll add another 50 petaflops. But when you look at the global level, at the kind of infrastructure needed if we have to manage this alert and warning system in real time, and I’ll give you one or two examples from the United States: the top systems which have been developed and deployed to do these things have a capacity of almost one to two exaflops, and one exaflop is close to around a thousand petaflops. In the whole of India we don’t have even 100 petaflops today. And in the US, there are multiple systems which provide this real-time information. One is El Capitan, which is 1.8 exaflops; then the Frontier system, which has been deployed by Oak Ridge National Laboratory, has got 1.3 exaflops of

capacity. Aurora, which has recently been deployed by Argonne National Laboratory, has got one exaflop of power and capability. So these are the kind of systems which are deployed so that they can actually take the power of AI in a real-time environment, whether it’s geospatial data, satellite information, or any kind of live information, analyze these things with the help of AI in real time, and provide the alert much ahead of those things. Somehow we are not able to provide that. So in India, if we want this early warning system to be done, I think our main focus needs to be how we can have the core infrastructure which will meet this requirement.

And over the last couple of days, in every discussion with global CIOs and CEOs, this is what has come out: India probably needs the core infrastructure, which we have not developed. Now, we might say we are putting some ₹10,000 crore into AI, but that is getting distributed across a large number of tech and SMB players who are developing applications. What government needs, because it is sovereign data, is to procure this kind of infrastructure itself. But each of these systems will cost anything between 400-500 million dollars and a billion dollars, and government may not be able to spend that kind of money. So that is probably where public-private partnership becomes very, very important.

So my request is that the department should look at how large global institutions and technology partners can bring in the core infrastructure and technology, because today technology is not the barrier; the barriers are infrastructure, scale, the procurement process and some of these policies, and how the various data sources get integrated. If we can address these things at the scale India has achieved on the DPI side, where UIDAI's Aadhaar is used by more than 800-900 million citizens in the country, we can deliver it. We have the capability, and with all this AI transformation happening, our Honorable Prime Minister has already said that India is going to leapfrog and become a global leader in the AI space, with all this technology embedded alongside the capability India has.

All that is required is the infrastructure, but infrastructure comes at a huge cost. And when you get the infrastructure, the next elements are power, energy and water; these are going to be very critical. Somebody has to look at all three aspects: you can get the infrastructure, but you cannot run it without the power. So we need power sources that can feed these kinds of systems, and alternative power resources are going to be very, very critical. These will all be water-cooled systems, because they will have hundreds of thousands of GPUs and CPUs running together, and they will require huge power and huge water capacity. So we need to have that in place.

So India needs to start thinking along those lines, if we are to get the right early warning alerts out and save the lives of millions of citizens in the country. Thank you.

Moderator

Thank you. And DRR definitely offers an opportunity for us to ponder. Taking forward from Mr. Som, I'll go to Mr. Pankaj Shukla ji, head of customer engineering at Google Cloud AI. Cloud computing and AI platforms enable real-time analytics at scale. So what are the critical infrastructure investments essential to support AI deployment in low-connectivity and high-risk environments? That is vital, looking at the geography of our nation. And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk? Your insights on that.

Pankaj Shukla

Good afternoon, everyone. Irrespective of the technology, when we talk of disaster management and resilience, essentially what we are trying to do is turn the chaotic reality on the ground into actionable intelligence. For example, the data fragmented across multiple ministries, social media and various other places first needs to be brought to one place, or at least you should have the ability to bring all of that data together and turn it into living intelligence. Once the data is there, structured as well as unstructured, today's multi-modal AI models have the ability to make sense of completely chaotic, noisy data and turn it into real intelligence at unimaginable speed. That is essentially what AI is all about. When it comes to real implementation of this entire architecture, and the panelists spoke about multiple aspects of how we can use AI and how we actually implement it on the ground, if you look at what we need, AI broadly sits across five layers.

One is the infrastructure layer. Second is the operating system layer which runs on top of the infrastructure; I am not talking just about servers and data centers, but an operating system layer which scales from a central location to the edge and to multiple regional locations. Then, on top of that, the platform services required to build AI applications and make use of the right models. Then you have the models themselves, the multi-modal models like Gemini and others that hyperscalers provide; and, for example, under the IndiaAI Mission, a lot of Indian providers are building models, and ma'am spoke about models that other companies are building as well.
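As an illustrative sketch only, with hypothetical names (this is not a Google Cloud API or product), the five layers described here can be written down as a simple structure:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the five-layer stack described in the talk;
# the layer names follow the speaker, all values are illustrative.
@dataclass
class AIStack:
    infrastructure: str            # data centers, GPUs, edge hardware
    operating_system: str          # runtime scaling from central to edge
    platform_services: list[str]   # tooling to build/serve AI applications
    models: list[str]              # multi-modal foundation and domain models
    applications: list[str] = field(default_factory=list)  # agentic apps on top

stack = AIStack(
    infrastructure="central + regional + edge",
    operating_system="unified runtime across locations",
    platform_services=["data pipeline", "model registry", "serving"],
    models=["multi-modal LLM", "geospatial model"],
    applications=["alert dissemination", "damage assessment"],
)
print(len(stack.__dataclass_fields__))  # five layers
```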

So the question is how we can make use of this diverse set of models in a dynamic manner, use agentic AI on top of that to build applications, and turn it all into real action which can be disseminated where we want it: proactively, during the response, and after the response. So how do we implement this? Implementation will require a framework or architecture which has a central living intelligence over all the data, on which you experiment, pre-train and tune different types of models and build applications. But the real application of that intelligence may happen at a place which gets completely disconnected from the centre.

So you should be able to build all of these AI applications and maintain a single source of truth centrally, but with the ability to send that intelligence back to a tactical location. Today that is entirely possible. Organizations such as Google and many others are trying to bring all the goodness of the hyperscaler cloud, the entire infrastructure and managed-services layer as well as the AI tooling, to on-prem environments, with the ability to run them in a completely disconnected, air-gapped environment in a zero-trust manner, so the security of your data and applications is preserved. You should also be able to connect the edge locations to the centre in a federated manner, but, if required during a disaster, you should be able to carry a rugged device which holds a small subset of the central intelligence, with the necessary models, to take action on the ground. That action could be finding out where your assets are, where the maximum impact has occurred, or how to send information to various places.
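The central-plus-disconnected-edge pattern described here can be caricatured in a few lines. This is a toy sketch under my own assumptions, not any vendor's API: an edge node carries a model snapshot, keeps working while air-gapped, and exchanges state when connectivity returns.

```python
# Illustrative sketch of the pattern described: a central "single source
# of truth" whose intelligence is pushed to edge nodes that must keep
# working when disconnected. All names here are hypothetical.
class EdgeNode:
    def __init__(self):
        self.local_model = None   # small model snapshot carried to the field
        self.pending = []         # observations queued while offline

    def sync(self, central_snapshot):
        """During a connectivity window: pull the latest central
        intelligence and upload locally collected observations."""
        self.local_model = central_snapshot
        uploaded, self.pending = self.pending, []
        return uploaded

    def act(self, observation):
        """While air-gapped: record the observation and decide using
        whatever snapshot the node has."""
        self.pending.append(observation)
        return "use local model" if self.local_model else "fallback rules"

node = EdgeNode()
assert node.act("flooded road") == "fallback rules"     # offline, no snapshot yet
node.sync({"version": 1})                               # connectivity window
assert node.act("bridge damaged") == "use local model"
assert node.sync({"version": 2}) == ["bridge damaged"]  # observation uploaded
```

The design point is that decision-making degrades gracefully: the node never blocks on the centre, it only improves when a sync succeeds.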

All of those things are absolutely possible today. While a huge amount of infrastructure in its own right is required to train and build models, and that is happening across the country, we should have the ability to take the good models, the right models for the right task, run them on-prem on a smaller set of infrastructure, and then make an even smaller subset that can run in a tactical location with very limited infrastructure and compute. That is what we are working towards.

Moderator

Thank you, Pankaj, for giving Google's insight into building rugged systems and deploying AI solutions at scale in low-cost and high-risk environments. Another contribution to DRR, particularly AI deployment in DRR, can come from startups, and we have Mr. Nikhilesh Kumar, CEO and founder of Vassar Labs. Nikhilesh, can you enlighten us on how startups can contribute to developing a DPG at population scale for DRR, particularly for countries like India?

Nikhilesh Kumar

The modeling layer, which transforms the data into various insights about hazards. The asset and people layer, which is what gets impacted, where we need to know today, in a personalized and precise way, exactly which road and which houses if a flood is coming, which area if a landslide is coming. And the fourth, which is most important, is the workflows that translate all this into action. This is where we see a role for DPI and DPG, because these four layers are not built by one actor; they draw on data scattered across various agencies. We need DPIs and DPGs built across this data: from institutions which bring meteorological data, institutions which bring water and other asset-related data, and institutions which create different layers, such as the Survey of India in India's case for earthquakes and various other layers.

Now we also see AI playing a role today, and I will give an example. As extreme events happen, one of the first pressures is in the water sector, where we see extreme floods and sudden gushes of water coming into large dams, and dams are among the most critical and vulnerable assets we are all exposed to. We also see that while we perhaps have a good handle on controlling the large dams during a disaster, we have a big number of dams in the country which are unregulated, scattered in large numbers, and for which no forecast is available.

So how do we churn through, in near real time, both hourly and on a daily scale, close to 1 million water bodies, any of which can become vulnerable at any time? One of the solutions we recently saw was bridging this data gap using AI: AI sitting on real-time satellite data at 30-minute intervals and on radar data, all currently available from IMD, and translating that into a nowcast. That nowcast layer was then translated, through hydraulics, to each of the dams, and in the cyclone month close to 5,000 dams were covered in real time. Such use cases arise where data is connected, available in real time in an interoperable format, and there are players who can translate that data into actions.
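The rainfall-nowcast-to-dam-alert step described above can be sketched with a textbook rational-method inflow estimate. This is purely illustrative: the runoff coefficient and threshold are made-up numbers, not Vassar Labs' operational hydraulics.

```python
# Hypothetical sketch: a rainfall nowcast mapped onto one dam's catchment
# and turned into an alert. Coefficients and thresholds are illustrative.
def dam_alert(rain_mm_per_hr, catchment_km2, runoff_coeff=0.5,
              threshold_m3s=500):
    # Rational-method style estimate: Q = C * i * A
    # i converted to m/s, A to m^2, so Q comes out in m^3/s.
    i = rain_mm_per_hr / 1000 / 3600
    a = catchment_km2 * 1e6
    inflow = runoff_coeff * i * a
    return ("WARNING" if inflow > threshold_m3s else "normal"), round(inflow, 1)

print(dam_alert(rain_mm_per_hr=60, catchment_km2=120))  # heavy rain, mid-size catchment
print(dam_alert(rain_mm_per_hr=5, catchment_km2=50))    # light rain, small catchment
```

A real pipeline would run such a computation per dam, hourly, against the gridded nowcast; the point of the sketch is only the shape of the data flow.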

I see the contribution in bringing such platforms to national and state scale, and in packaging these use cases so they are available for different recipient departments to translate into actions. And one more thing I would like to add, taking this as a forum, sir: both risk assessment and risk reduction have a very big gap when it comes to data, especially for various events, take earthquakes, take other types of disasters, where historic parametric measurements have not been available, and knowing the location-specific frequency of these hazards has been lacking because there is no database. Now AI can play a very good role here, because the information is lying in a lot of news reports, which contain unstructured information on location, on the hazard, and on the damages it caused.

So AI can actually uncover this information and create structured, hazard-wise datasets, and these will also feed into various DPGs, which will further unlock the insurance sector, which benefits from knowing the location-specific intensity and frequency of risks. I will close with that, sir.
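The news-to-structured-records idea can be illustrated with a toy extractor. A real system would use an LLM or a trained named-entity model rather than regular expressions; the pattern and field names below are my own assumptions.

```python
import re

# Illustrative only: turn unstructured news sentences into structured
# hazard records of the kind that could feed a DPG or insurance dataset.
PATTERN = re.compile(
    r"(?P<hazard>flood|landslide|earthquake).{0,40}?in (?P<location>[A-Z][a-z]+)"
)

def extract(text):
    records = []
    for m in PATTERN.finditer(text):
        records.append({"hazard": m.group("hazard"),
                        "location": m.group("location")})
    return records

news = ("A severe flood was reported in Guwahati on Tuesday. "
        "Officials also confirmed a landslide in Shimla.")
print(extract(news))
```

Aggregating such records over decades of archives is what would yield the location-specific frequency estimates the speaker mentions.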

Moderator

Thank you, Nikhilesh. You have aptly summarized that startups in this sector can definitely play a very vital role, particularly in developing rugged AI systems for India at population scale. We have heard the panelists from the Indian perspective, since we are running large systems, so we would also like all members to have the benefit of insights into how the national systems function in India and how technology is being deployed at scale for DRR. First, I would like an insight from Dr. Mrutyunjay Mohapatra, DG of IMD, who can elaborate on how robust AI-based systems are being deployed at population scale in the Indian context.


Dr. Mrutyunjay Mohapatra

Namaskar. Good morning to all of you. Respected Dr. Kamal Kishore sir, Adas Nand sir, our Krishnamurthy sir, and distinguished panelists, delegates, friends and colleagues. At the outset, I congratulate NDMA for organizing this session, which has given a lot of food for thought to each of us represented here. I will start with the initiative taken up by the UN and the WMO: a clarion call was given in 2022 for Early Warnings for All. And Early Warnings for All includes all countries, all people, all sectors, all strata of society. When that call came, actually less than 50% of countries had early warning systems in place.

Now the number is increasing, but the time is short: by 2027 we have to reach 100%. Early Warnings for All is a long-term goal, and if we review now, we find there has been a huge jump in technology during these last five years, and AI is one such technology helping to extend early warnings to all. Looking at the components, you first need risk knowledge at each and every point, as our friend Nikhilesh told us. It is not possible with the existing network of any country to have risk knowledge at every location, but at the same time there are unstructured data, as was said, which can be utilized to create that knowledge, to create the hazard, risk and vulnerability assessment. That historical knowledge can then be utilized in real time when you go to predict any severe weather event. The next component is the early warning itself.

Yes, on the early warning side, you will see there has also been a huge jump in recent years with the inclusion of many AI-based models. You will find that every large, established NMHS is utilizing AI, and IMD is also utilizing AI in taking decisions with respect to early warnings. At the same time, I will tell you, AI has come up as a hybrid model alongside the physical models. We cannot do away with the physical models, because the physical models provide the physical understanding, the reasoning, and hence human knowledge gets into the picture through these physical models. Therefore AI has to be suitably connected with the physical models.
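The hybrid approach described here, ML output anchored to a physics-based forecast, can be caricatured as a weighted blend. The weighting scheme and numbers are illustrative assumptions, not IMD's or anyone's operational method; real centres tune such combinations against verification statistics.

```python
# Toy sketch of a hybrid forecast: a machine-learning value nudges a
# physics-based model output rather than replacing it. The weight is
# purely illustrative.
def hybrid_forecast(physics_value, ml_value, ml_weight=0.25):
    """Keep the physical model as the anchor; let ML adjust it."""
    return (1 - ml_weight) * physics_value + ml_weight * ml_value

# e.g. 24h rainfall forecast in mm
print(hybrid_forecast(physics_value=80.0, ml_value=100.0))  # -> 85.0
```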

That is what everyone is doing, from the European Centre to the Indian centres, and there have been many collaborations and integrations towards that. After that, if you look at the basic backbone, which is the modeling, it starts with the basic premise that weather forecasting is an initial value problem. You cannot give a weather forecast if you do not know the initial state of the Earth, ocean and atmosphere. That is already defined in the physical modeling system: unless you improve the initial data, with all types of observational tools and techniques, you cannot improve the weather forecast. Therefore, collecting or creating data with the help of AI will also go a long way in improving not only the AI models, but the physical models and the hybrid models as well.

Once you have good data, its quality can also be improved with the help of AI. I will tell you: we get a lot of data from satellites, but only five percent of it is usable. We cannot use the rest because of quality, and further because of quantity: you cannot accommodate all types of data in the physical modeling system. As our friends have been saying, you need infrastructure, and we do not have the computational infrastructure to utilize 100% of the satellite data. So yes, it is true that in India we do not have sufficient computing infrastructure. We have at least 28 petaflops now in IMD, and outside, of course, the National Supercomputing Mission has come up, but that is not sufficient, and therefore there is scope for public engagement in augmenting the computational and other digital infrastructure. At the same time, there is another opportunity because of AI: the box model has come up. A poor, small island nation cannot venture, cannot even dream of having a high-performance computing system, but it can go for an AI system. A box model has come up which you can give to a small island nation, and there, with the help of a few GPU nodes, they can produce forecasts. That has come up, it will grow gradually, and we will have affordable early warnings with the help of these GPU-based, AI-driven or data-driven models.

After that comes the forecast. Once you come to the forecast, we already have an AI consensus; the physical consensus plus the AI consensus then leads to the final forecast. Finally you go to the sectoral applications. With the improvement of economic and societal conditions in every country, there is huge scope here to improve decision-making for each and every sector, and there AI/ML can play a role. So I urge all the industries, academia, R&D and think tanks to collaborate with the NMHSs, especially with the India Meteorological Department and the other organizations here, for very authentic, specific and judicious utilization of AI with the limited but reasonable resources available in the country.
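The satellite quality-control point above, that only a small fraction of raw observations survives screening before assimilation, can be illustrated with a toy filter. The thresholds and field names are my own assumptions, not IMD's actual QC criteria.

```python
# Illustrative sketch: raw satellite observations are quality-controlled
# before assimilation, so only a fraction reaches the model.
def usable(obs, max_cloud=0.2, max_error_k=1.5):
    """Reject cloud-contaminated or high-error radiance observations."""
    return (obs["cloud_fraction"] <= max_cloud
            and obs["est_error_k"] <= max_error_k)

raw = [
    {"cloud_fraction": 0.10, "est_error_k": 0.8},  # passes
    {"cloud_fraction": 0.90, "est_error_k": 0.5},  # cloudy  -> rejected
    {"cloud_fraction": 0.05, "est_error_k": 3.0},  # noisy   -> rejected
]
kept = [o for o in raw if usable(o)]
print(f"{len(kept)}/{len(raw)} observations assimilated")
```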

So thank you very much.

Moderator

Thank you, Dr. Mohapatra, for your valuable insight. Now, since NDMA is the apex national body which will integrate all the varied systems into creating rugged AI systems, I would like the entire audience to get the benefit of NDMA's vision from Member and HOD Dr. Krishna Vatsa. Sir, can you please elaborate on how NDMA intends to take this forward to create a sustainable, low-cost, at-scale model for the country?

Dr. Krishna Vatsa

Thank you very much for giving me this platform. I would like to mention that a huge amount of data already exists in relation to almost all the hazards. Look at earthquakes: we record all the micro-earthquakes for the entire country, and the data that exists even for earthquakes below magnitude 3 can give us a very good indication of the kind of earthquakes we can experience in the Himalayas and other regions. And the availability of data is going to increase exponentially as we invest in observational networks. Almost every mitigation program we are doing includes a significant early warning component. In the next five years or so, every village in India will have an automated weather station.

We will have a large amount of instrumentation for measuring landslides, and we are going to at least quadruple the seismometers and strong-motion accelerographs. So we will be investing a huge amount of money in improving our observational networks across the hazards, which means we will have access to a still larger amount of data. What is important is that we need the capacity to process that data, apply the AI models, and improve the precision of early warning. That is the sphere where we are struggling right now. It is one thing to set up the observational network; it is another to collect the data, process it, and generate information that can be used, more so when it comes to informing the common citizen.

Serving scientists is one thing; they get a huge amount of data, but we are not doing this for the scientists. We are doing it for the people who get affected by disasters. So how do we go about it? The roadmap is not sufficiently clear, and I keep talking to all kinds of people. Somebody will come and say: set up a huge data center. Okay, that's fine, great. But people also say that if you set up a huge data center without really empowering all the early warning agencies, how are you going to justify the investment? The data comes to individual agencies. How do the data center and the individual early warning agencies interact so that we have a good model available?

And we don't have unlimited resources. So the point is, this is where we need more clarity: how do we use our existing networks to improve the precision of early warning and risk information through a gradual, incremental way of building capacities, which of course includes the data center and improving our connection with LLM models. But it is also very, very important that we find a way of improving the overall architecture. That is one area where we are struggling and where we need some guidance. Thank you very much.

Moderator

Thank you, sir. I think we are coming to the close of the discussion. I'll request Krishna Vatsa sir to please present the mementos to our panelists, and then request all our dignitaries in the front row, once the mementos are done, to join a quick photograph before we vacate the room. I'll also request the leadership from the states of Tamil Nadu, Andhra Pradesh and Telangana to please come to the front for the photograph. We are very happy to note that most of the states are also represented through their State Disaster Management Authorities. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (medium confidence)

“AI advances are occurring at an unprecedented pace.”

The knowledge base notes that artificial intelligence is advancing rapidly and unpredictably, confirming the claim of unprecedented AI pace [S92] and [S93].

Confirmed (high confidence)

“Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber‑attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones.”

Sources describe Ramtohul outlining policy reforms that address cyber threats alongside conventional hazards, confirming his expanded disaster definition [S1] and [S100].

Additional Context (medium confidence)

“He advocated the creation of digital‑twin representations of critical infrastructure so that emergency services can locate people and assets in real time.”

While the report attributes the digital-twin idea to Ramtohul, the knowledge base discusses digital twins as a broader concept for climate-extreme management, providing additional context on the technology’s relevance [S20].

Confirmed (high confidence)

“He warned against fully automated decision‑making, calling for a human‑in‑the‑loop approach and for all early‑warning messages to be human‑verified before broadcast, a policy already being piloted in Mauritius’s cell‑broadcast system.”

The knowledge base records that Mauritius is planning cell-broadcast systems with human-verified messaging protocols to avoid misinformation during emergencies, confirming the human-in-the-loop policy [S1] and [S45-55].

Confirmed (high confidence)

“The UK Met Office is developing machine‑learning weather models that will augment, not replace, physics‑based forecasts.”

Met Office documentation outlines a strategic plan to integrate AI/ML with traditional physics-based weather and climate models, confirming the hybrid approach described in the report [S32].

Confirmed (medium confidence)

“The Met Office is co‑developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization.”

The knowledge base explicitly mentions co-development of benchmarking and testing frameworks with partner organisations, aligning with the report’s statement [S8].

Additional Context (low confidence)

“Calls for human‑in‑the‑loop AI systems echo broader concerns about preserving human agency in automated decision‑making.”

Other sources highlight similar warnings about over-reliance on algorithms and stress the need for human oversight, providing broader context to Ramtohul’s stance [S105] and [S108].

External Sources (109)
S1
National Disaster Management Authority — Beth Woodhams from the UK Met Office explained their approach of gradually blending machine learning models with traditi…
S2
Beneath the Shadows: Private Surveillance in Public Spaces | IGF 2023 — Beth Curley, a programme officer with the National Endowment for Democracy’s International Forum for Democracy, contribu…
S3
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa – Som Satsangi- Dr. Mrutyunjay Mohapatra- Dr. Krishna Vatsa
S4
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S5
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S6
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S7
National Disaster Management Authority — – Beth Woodhams- Dr. Mrutyunjay Mohapatra – Som Satsangi- Dr. Mrutyunjay Mohapatra- Dr. Krishna Vatsa
S8
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S9
National Disaster Management Authority — Minister Avinash Ramtohul from Mauritius provided a unique perspective by fundamentally expanding the conceptual framewo…
S10
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S11
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa
S12
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S13
National Disaster Management Authority — – Som Satsangi- Dr. Krishna Vatsa
S14
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa
S15
Open Forum #33 Building an International AI Cooperation Ecosystem — – Balancing technological development with national security interests Participant: ≫ Distinguished guests, dear friend…
S16
WS #31 Cybersecurity in AI: balancing innovation and risks — – Gladys Yiadom: Moderator AUDIENCE: So I was just going to add today about, if you look at the traffic on the intern…
S17
Steering the future of AI — – **Nicholas Thompson**: Moderator from The Atlantic Yann LeCun: reach human level intelligence or something approachin…
S18
Building the Next Wave of AI_ Responsible Frameworks & Standards — “The second most important element in this framework is to ensure these safety benchmarks are co -created with the indus…
S19
MedTech and AI Innovations in Public Health Systems — How can the data show to them that, this is the… key problem in this particular area. We’ve been talking about with An…
S20
Survival Tech Harnessing AI to Manage Global Climate Extremes — “It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the sa…
S21
UNSC meeting: Artificial intelligence, peace and security — Gabon:Thank you, Madam President. I thank the United Kingdom for organizing this debate on artificial intelligence at a …
S22
AI Without the Cost Rethinking Intelligence for a Constrained World — But I will pick which ones I need based on my input or dynamically. And that is called dynamic sparsity, right? So So I’…
S23
Shaping the Future AI Strategies for Jobs and Economic Development — “They are giving GPUs available at 65 rupees per month.”[119]. “so there are quite a few no no it’s public it’s all publ…
S24
From India to the Global South_ Advancing Social Impact with AI — Cross-sector movement of professionals between government, academia, and industry is essential for knowledge transfer
S25
From KW to GW Scaling the Infrastructure of the Global AI Economy — A central theme was India’s potential to become a global AI hub, with projections suggesting the country will scale from…
S26
The Global Power Shift India’s Rise in AI & Semiconductors — -Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital…
S27
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S28
Building Sovereign and Responsible AI Beyond Proof of Concepts — It wasn’t applying it the same to equal to everyone. And there was no agreed process for how you would escalate if there…
S29
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S30
How to make AI governance fit for purpose? — ### Chinese Perspective ### Singapore Perspective **Additional speakers:** Anne Bouverot: Thank you so much, Gabriela…
S31
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Elena Plexida:Thank you, Miapetra, hello, everyone. Indeed, ICANN coordinates the internet unique identifiers, the names…
S32
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — He explains that remote, low‑connectivity scenarios benefit from edge deployment, while most workloads run on the cloud.
S34
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S35
AI for food systems — Boogaard argued that AI can be transformative by connecting smallholders to help them grow, process, distribute, and acc…
S36
Policy Network on Artificial Intelligence | IGF 2023 — The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge hi…
S37
Conversational AI in low income & resource settings | IGF 2023 — Addressing the digital divide is important, as 2.6 billion people globally lack reliable internet access, hindering effe…
S38
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Talla N’diaye Merci, merci beaucoup. Tout d’abord, je tiens à vous remercier, à remercier Henri et toute l’équipe de l’O…
S39
Operationalizing data free flow with trust | IGF 2023 WS #197 — Amid global health, financial and geopolitical crises that pose risks to the very functioning of a rules-based multilate…
S40
The Challenges of Data Governance in a Multilateral World — In conclusion, India’s progress in embracing technology and digitization, as demonstrated by the Digital Personal Data P…
S41
AI as critical infrastructure for continuity in public services — “Two, standardized API so that system -to -system communication will be smooth.”[24]. “And second, we also have a harmon…
S42
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Souhila Amazouz: Thank you. Good morning. Do you hear me? Yes, yes. Yes, good morning, everybody. And thank you, m…
S43
How to construct a global governance architecture for digital trade — Current governance arrangements that underpin data flows are incoherent and fragmented, reflecting conflicting private i…
S44
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — The analysis delves into various aspects of cross-border data, open data, and data protection. Shilpa, a researcher at t…
S45
WSIS Action Line C7 E-environment — This infrastructure enhancement will improve data sharing, forecasting accuracy, and integration with early warning syst…
S46
AI to improve forecasts and early warnings worldwide — The World Meteorological Organisation has highlighted the potential of AI to improve weather forecasts and early warning s…
S47
AI model improves long-range space weather forecasts — Scientists from Southwest Research Institute and the National Center for Atmospheric Research, supported by the National…
S48
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategic plan for integrating AI, specifically machine learning (ML), with traditional…
S49
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — Governance must extend across the full AI lifecycle: pre-design, design, development, evaluation, testing, procurement, …
S50
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S51
HIGH LEVEL LEADERS SESSION IV — The analysis highlights several key points regarding the importance of a human rights-based approach to new technologies…
S52
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S53
The State of Digital Fragmentation (Digital Policy Alert) — Disruption is required across various spaces. Global challenges require some form of disruption. Enforcement of laws an…
S54
Top digital policy developments in 2019: A year in review — But this potential cannot be fully exploited if the world continues to be split between those who have access to digital…
S55
Building a Digital Society, from Vision to Implementation — Small island developing states face common challenges and should work together
S56
High Level Dialogue: Strengthening the Resilience of Telecommunication Submarine Cables — Small Island States and Landlocked Countries. Sandra Maximiano: So as we actually just listen here, many accidents h…
S57
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — The analysis also highlights the importance of knowledge-sharing in the context of cybersecurity. It suggests the creati…
S58
Cybersecurity emerges as policy topic — Cybersecurity emerges as a policy, technical, and diplomatic issue.
S59
Cybersecurity, cybercrime, and online safety — The analysis also recognises the importance of multistakeholder governance in ensuring a safer cybersecurity environment…
S60
National Disaster Management Authority — There was unexpected consensus on expanding the traditional definition of disasters to include cyber threats. This repre…
S61
AI Without the Cost Rethinking Intelligence for a Constrained World — So the energy savings come from the three orders of magnitude lower compute costs. We’ve done four presentations with NV…
S62
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S63
AI: Lifting All Boats / DAVOS 2025 — Dowidar mentioned ongoing work with UNDP on AI-powered early warning systems. Further research on implementation and sca…
S64
National Disaster Management Authority — This comment fundamentally expanded the scope of disaster risk reduction beyond traditional natural disasters to include…
S65
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Elena Plexida: Thank you, Miapetra, hello, everyone. Indeed, ICANN coordinates the internet unique identifiers, the names…
S66
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategic plan for integrating AI, specifically machine learning (ML), with traditional…
S67
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Hello? Yeah. Right. Thank you. Thank you for your question. Thank you, it’s a real honour to be part of this panel. So a…
S68
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Vivek outlines India’s national supercomputing capability, noting the installed petaflop capacity and the large number o…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S70
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — He explains that remote, low‑connectivity scenarios benefit from edge deployment, while most workloads run on the cloud.
S71
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Fuad Siddiqui: Thank you. Good morning. Yeah, I’m delighted to be here. And it’s always great to be back in Saudi. I …
S72
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S73
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S74
Open Forum #30 High Level Review of AI Governance Including the Discussion — The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared c…
S75
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S76
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S77
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S78
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S79
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S80
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S81
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S82
WSIS Action Line C7 E-learning — The discussion maintained a professional and collaborative tone throughout, with speakers demonstrating cautious optimis…
S83
High-Level Dialogue: The role of parliaments in shaping our digital future — The discussion maintained a tone of cautious optimism throughout. Speakers acknowledged significant challenges and risks…
S84
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S85
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S86
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S87
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S88
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S89
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S90
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S91
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — The discussion maintained a consistently optimistic and action-oriented tone throughout. While speakers acknowledged ser…
S92
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S93
Opening — Pace of technological progress is accelerating unpredictably
S94
9821st meeting — 2. Creation of an International Scientific Panel on Artificial Intelligence. Ecuador: Mr. President, I thank the United S…
S95
Agenda item 5: Day 2 Morning session — Vietnam: Thank you Chair. Again in our first intervention this week we would like to reaffirm our strong support for the …
S96
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S97
Resilient infrastructure for a sustainable world — Helen Ng – Works at UNDRR (United Nations Office for Disaster Risk Reduction), focuses on resilient infrastructure…
S98
WS #49 Benefit everyone from digital tech equally & inclusively — – The need for national platforms to coordinate disaster risk reduction efforts. 3. Information Governance for Disaster…
S99
AI Meets Agriculture Building Food Security and Climate Resilien — These key comments fundamentally shaped the discussion by introducing critical frameworks that moved the conversation be…
S100
Agenda item 5: Day 4 Morning session — Mauritius: Good morning, Chair. In an increasingly interconnected world where the threat landscape is constantly evolving…
S101
Protecting critical infrastructure in a fragile cyberspace — ‘Securing Critical Infrastructure in Cyber: Who and How?’ is the name of one of the main panels at IGF 2024 in Riyadh, w…
S102
Dynamic Coalition Collaborative Session — Development | Economic | Infrastructure Rajendra warns that without proper classification of certain technologies as di…
S103
Roundtable — His insights notably advance the ongoing discourse regarding the identification and protection of critical infrastructur…
S104
Host Country Open Stage — D Silva emphasized the transformative potential of sustainability reporting, stating that “transparency is not just abou…
S105
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S106
UN Secretary-General warns humanity cannot rely on algorithms — UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than thre…
S107
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S108
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — Abbasi warns against over-reliance on algorithmic decision-making without proper human oversight. She argues that this a…
S109
JMA to test AI-enhanced weather forecasting — The Japan Meteorological Agency (JMA) is exploring the use of AI to improve the accuracy of weather forecasts, with a par…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Avinash Ramtohul
1 argument · 153 words per minute · 918 words · 358 seconds
Argument 1
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
EXPLANATION
Ramtohul stresses that disaster response must integrate a digital twin of physical infrastructure to guide emergency services, and that critical decisions should always involve a human element rather than full automation. He argues that this bridge between the virtual and physical worlds enhances situational awareness and safety during incidents such as fires.
EVIDENCE
He describes a scenario where a fire triggers automated alerts to fire and medical services, but stresses the need for a digital twin that provides structural plans, pipe locations, and real-time occupancy data so responders know exactly where people are (digital twin concept) [35-42]. He also emphasizes that decision-making must retain a human-in-the-loop to avoid dangerous 100% automation, especially for life-saving alerts, and cites Mauritius’s policy of human-verified messages in early warning systems [46-55].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop & digital‑twin for emergency response
AGREED WITH
Beth Woodhams, Som Satsangi, Pankaj Shukla
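The human-verification policy described above can be sketched as a simple dispatch gate. This is a minimal illustration, not Mauritius’s actual cell-broadcast system; the `Alert` type, officer identifier, and function names are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    hazard: str
    message: str
    verified_by: Optional[str] = None  # duty officer who approved the text

def dispatch(alert: Alert, broadcast_log: list) -> bool:
    """Release an alert to cell broadcast only after human verification."""
    if alert.verified_by is None:
        return False  # policy: no fully automated life-saving alerts
    broadcast_log.append(alert.message)
    return True

# A machine-generated draft is held until an officer signs off.
sent = []
draft = Alert("flood", "Evacuate low-lying areas immediately")
blocked = dispatch(draft, sent)    # held: no human in the loop yet
draft.verified_by = "duty_officer"
released = dispatch(draft, sent)   # released after sign-off
```

The point of the sketch is that automation drafts and routes the message, but release remains a human act, which is the distinction Ramtohul draws.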
Moderator
1 argument · 87 words per minute · 1167 words · 797 seconds
Argument 1
AI must be embedded in national resilience architecture, not just algorithms (Moderator)
EXPLANATION
The moderator argues that the next frontier in disaster risk reduction lies in institutionalising AI within the broader national resilience framework, rather than focusing solely on algorithmic improvements. Embedding AI at the governance level ensures systematic, scalable, and sustainable use across disaster management processes.
EVIDENCE
The opening remarks state that “the next frontier in DRR is not better algorithms alone, it is institutionalizing AI within national resilience architecture” [7].
MAJOR DISCUSSION POINT
AI must be embedded in national resilience architecture, not just algorithms
Beth Woodhams
2 arguments · 154 words per minute · 385 words · 149 seconds
Argument 1
Hybrid blending of AI and physics models, phased rollout (Beth Woodhams)
EXPLANATION
Woodhams explains that the Met Office will not replace physical weather models with AI but will gradually introduce machine‑learning components through hybrid blending. This phased approach allows confidence to build as AI outputs are combined with established physics‑based forecasts.
EVIDENCE
She outlines the plan to develop machine-learning weather models and to implement them step-by-step by blending physics-based and ML outputs, noting that the Met Office will increase blending as confidence grows [65-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Met Office’s strategy of gradually blending machine-learning weather models with established physics-based forecasts is documented in the NDMA report on Beth Woodhams’ presentation [S1].
MAJOR DISCUSSION POINT
Hybrid blending of AI and physics models, phased rollout
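The phased blending Woodhams describes can be illustrated with a simple weighted combination; the weight `alpha` standing in for confidence in the ML component is our own illustrative parameter, not a Met Office formula.

```python
def blend(physics: float, ml: float, alpha: float) -> float:
    """Combine a physics-based and an ML forecast; alpha is the ML weight."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * physics + alpha * ml

# Rollout starts with a small ML weight and raises it as confidence grows.
physics_t, ml_t = 30.0, 32.0            # e.g. temperature forecasts, deg C
early = blend(physics_t, ml_t, 0.1)     # mostly physics, approx. 30.2
mature = blend(physics_t, ml_t, 0.5)    # equal blend, 31.0
```

Raising `alpha` over successive seasons, only as verification supports it, is one concrete reading of the "increase blending as confidence grows" approach.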
Argument 2
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
EXPLANATION
Woodhams stresses the importance of co‑creating AI models and their evaluation frameworks with external partners, ensuring that performance metrics align with user needs. Joint benchmarking will help maintain transparency and trust in AI‑enhanced forecasts.
EVIDENCE
She describes existing co-development partnerships with India and other regions, and the Met Office’s effort to standardise benchmarking and evaluation of ML versus physics models, emphasizing metrics that matter to users [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-creation of safety benchmarks and joint evaluation frameworks with industry and academia is emphasized in the responsible AI framework discussion [S18] and reinforced by the NDMA summary of Woodhams’ remarks [S1].
MAJOR DISCUSSION POINT
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics
DISAGREED WITH
Som Satsangi
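One way to make benchmarks "matter to users", as Woodhams urges, is to score warning events rather than average error. The contingency-table sketch below (thresholds and data invented for illustration) computes hit rate and false-alarm ratio, two scores widely used in forecast verification.

```python
def warning_scores(obs, fcst, thresh):
    """Contingency-table scores for threshold-based warnings.

    For early-warning users, hit rate and false-alarm ratio at the
    warning threshold often matter more than average error.
    """
    hits = misses = false_alarms = 0
    for o, f in zip(obs, fcst):
        warned, happened = f >= thresh, o >= thresh
        if warned and happened:
            hits += 1
        elif not warned and happened:
            misses += 1
        elif warned and not happened:
            false_alarms += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return {"hit_rate": pod, "false_alarm_ratio": far}

# Rainfall (mm) scored against a 50 mm warning threshold.
scores = warning_scores(obs=[60, 10, 55, 20], fcst=[58, 52, 30, 15], thresh=50)
```

Benchmarking ML and physics models on the same event-based scores is one way a joint evaluation framework could keep metrics aligned with user needs.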
Dr. Mrutyunjay Mohapatra
3 arguments · 171 words per minute · 982 words · 344 seconds
Argument 1
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality (Dr. Mrutyunjay Mohapatra)
EXPLANATION
Mohapatra notes that AI should augment, not replace, physical weather models, creating hybrid systems that improve forecast accuracy. He also highlights that data quality—especially from satellites—is a limiting factor and that AI can help enhance both data and model performance.
EVIDENCE
He states that large NMHSs, including IMD, are using AI alongside physical models, and that AI must be suitably connected with physical models to retain physical reasoning [190-197]. He further points out that only about five percent of satellite data is usable due to quality issues, and that AI can improve data quality and increase usable observations [203-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A hybrid approach that links AI with physical sensor and satellite systems for resilient early-warning is advocated in the “Survival Tech Harnessing AI to Manage Global Climate Extremes” briefing [S20].
MAJOR DISCUSSION POINT
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality
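Mohapatra’s point that AI can raise the share of usable satellite observations can be illustrated with a toy quality-control filter. Real systems compare observations against a model background with learned models; the median-based rule, threshold, and sample values here are our own stand-ins.

```python
from statistics import median

def qc_filter(values, k=5.0):
    """Keep observations within k median-absolute-deviations of the median.

    A toy stand-in for learned quality control; it tolerates the odd
    corrupted value in a way a mean/stdev rule would not.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return [True] * len(values)  # no spread to judge against
    return [abs(v - med) / mad <= k for v in values]

# One corrupted satellite radiance is rejected; the rest remain usable.
readings = [210.1, 211.4, 209.8, 950.0, 210.6]
usable = qc_filter(readings)
```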
Argument 2
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
EXPLANATION
Mohapatra introduces the concept of a low‑cost “box‑model” that runs on a few GPU nodes, enabling small or low‑resource countries to generate AI‑driven forecasts without massive supercomputing infrastructure. This approach democratises access to advanced forecasting capabilities.
EVIDENCE
He explains that a box-model using GPU-based AI can provide forecasts for poor or small island nations, allowing them to achieve early warning with limited hardware, and that such models are becoming increasingly affordable [207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of low-cost, GPU-based AI models using dynamic sparsity to reduce compute requirements is described in the “AI Without the Cost” talk [S22], and the need to democratize GPU access for innovators in India is noted in the AI scaling discussion [S23].
MAJOR DISCUSSION POINT
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations
DISAGREED WITH
Som Satsangi
Argument 3
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management (Dr. Mrutyunjay Mohapatra)
EXPLANATION
He urges industries, academia, research institutions, and think‑tanks to work together with national meteorological and disaster agencies to ensure authentic, judicious AI deployment. Collaborative effort is presented as essential for scaling AI benefits in disaster risk reduction.
EVIDENCE
He calls on all sectors (industry, academia, R&D, and think-tanks) to collaborate with NMHSs and other organisations to achieve effective AI utilisation in disaster management [213-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector movement of professionals between government, academia, and industry as essential for knowledge transfer is highlighted in the “From India to the Global South” report [S24].
MAJOR DISCUSSION POINT
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management
Som Satsangi
3 arguments · 147 words per minute · 983 words · 398 seconds
Argument 1
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
EXPLANATION
Satsangi compares India’s existing supercomputing capability (tens of petaflops) with the exaflop‑scale systems used in the United States for real‑time AI‑driven early warning. He argues that the gap in computational power hampers India’s ability to deliver timely alerts at national scale.
EVIDENCE
He cites the 40-petaflop target of India’s 2015 National Supercomputing Mission and the current 37 supercomputers totaling ~40 petaflops, contrasted with US systems like El Capitan (1.8 exaflops), Frontier (1.3 exaflops), and Aurora (1 exaflop) that support real-time AI analytics [92-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between India’s petaflop-scale supercomputers and the exaflop systems used for real-time AI alerts in the US is discussed in the AI infrastructure scaling overview for India [S25], and the need for massive capital investment is underscored in the public-private partnership analysis [S26].
MAJOR DISCUSSION POINT
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts
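The scale of the gap Satsangi describes follows from simple unit conversion (1 exaflop = 1,000 petaflops). The aggregate below uses only the figures cited on the panel and is a back-of-envelope comparison, not a benchmark.

```python
INDIA_PETAFLOPS = 40  # aggregate of ~37 systems, per Satsangi
US_EXAFLOP_SYSTEMS = {"El Capitan": 1.8, "Frontier": 1.3, "Aurora": 1.0}

# Convert the three US leadership systems to petaflops and compare.
us_petaflops = sum(US_EXAFLOP_SYSTEMS.values()) * 1000  # 1 EF = 1000 PF
gap_factor = us_petaflops / INDIA_PETAFLOPS
print(f"Three US systems alone hold ~{gap_factor:.0f}x India's aggregate capacity")
```

Roughly a hundredfold difference from just three machines is the arithmetic behind the argument that incremental additions cannot close the gap without major new investment.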
Argument 2
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential (Som Satsangi)
EXPLANATION
He highlights that building exaflop‑scale AI infrastructure would cost hundreds of millions to a billion dollars and would demand substantial electricity and water for cooling. Consequently, he advocates for private‑sector partnerships to share the financial and operational burden.
EVIDENCE
He notes that each exaflop-class system can cost $400-$500 million to $1 billion, and that such facilities need massive power, alternative energy sources, and water-cooling for hundreds of thousands of GPUs/CPUs [107-125].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The high financial, electricity, and water demands of exaflop-class AI data centres and the recommendation for private-public partnership models are detailed in the AI capital requirements briefing [S26] and the scaling-capacity projection for India [S25].
MAJOR DISCUSSION POINT
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential
Argument 3
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
EXPLANATION
Satsangi stresses that AI systems for disaster response must be built on architectures that respect sovereign data constraints and provide transparent, explainable outputs for critical decisions. He links this requirement to procurement policies and the need for interoperable, standards‑based solutions.
EVIDENCE
He references the necessity for AI systems to be interoperable with sovereign data architectures and compatible with diverse governance ecosystems, especially in a federal country like India, and calls for standards of explainability when AI informs life-saving decisions [80-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for sovereign-compatible AI architectures and transparent, explainable outputs for critical decisions are made in the “Building Sovereign and Responsible AI Beyond Proof of Concepts” discussion [S28] and reinforced by the UN Security Council AI governance summary on transparency [S29].
MAJOR DISCUSSION POINT
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
DISAGREED WITH
Beth Woodhams
Dr. Krishna Vatsa
2 arguments · 126 words per minute · 507 words · 240 seconds
Argument 1
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
EXPLANATION
Vatsa explains that while India is rapidly expanding its observational infrastructure (weather stations, seismometers, etc.), the country lacks sufficient data‑processing capacity and a coherent architecture to turn this data into actionable early warnings for the public. This gap limits the effectiveness of early‑warning systems.
EVIDENCE
He details the planned deployment of automated weather stations in every village, increased landslide instrumentation, and quadrupling of seismometers, while noting the current struggle to process the resulting data and deliver precise citizen-focused warnings [220-236]. He also points out the unclear roadmap for integrating data centres with early-warning agencies [238-247].
MAJOR DISCUSSION POINT
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
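The processing burden Vatsa anticipates can be roughed out with back-of-envelope arithmetic. All three inputs are loudly hypothetical: ~600,000 stations (on the order of the number of Indian villages), a 15-minute observation cadence, and 1 kB per record.

```python
# Back-of-envelope data volume for a village-level station network.
stations = 600_000            # assumption: one station per village
obs_per_day = 24 * 60 // 15   # 96 observations per station per day
record_bytes = 1_000          # assumption: ~1 kB per observation record

daily_gb = stations * obs_per_day * record_bytes / 1e9
print(f"~{daily_gb:.0f} GB/day of raw station data before QC and archiving")
```

Even under these modest assumptions the network produces tens of gigabytes of raw observations daily, before adding landslide sensors and a quadrupled seismometer network, which is the processing-capacity gap Vatsa highlights.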
Argument 2
Massive investment in observational infrastructure (weather stations, seismometers) requires parallel development of data‑processing and AI capabilities (Dr. Krishna Vatsa)
EXPLANATION
Vatsa emphasizes that the large financial outlay for expanding observational networks must be matched by investments in computational infrastructure and AI tools to fully exploit the data. Without parallel development, the observational data cannot be transformed into high‑precision early‑warning information.
EVIDENCE
He mentions the upcoming nationwide rollout of automated weather stations, extensive landslide sensors, and a four-fold increase in seismometers, and stresses the need for processing capacity and AI models to improve early-warning precision [226-231].
MAJOR DISCUSSION POINT
Massive investment in observational infrastructure (weather stations, seismometers) requires parallel development of data‑processing and AI capabilities
Pankaj Shukla
2 arguments · 161 words per minute · 761 words · 283 seconds
Argument 1
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability (Pankaj Shukla)
EXPLANATION
Shukla outlines a five‑layer architecture for AI in disaster management, starting from the physical infrastructure up to AI‑driven applications. This layered approach creates a central “living intelligence” that can be extended to edge locations for real‑time decision support.
EVIDENCE
He describes the layers: infrastructure, operating system (central to edge), platform services for building AI applications, multi-modal models (e.g., Gemini), and applications that turn intelligence into action, emphasizing the need for a central living intelligence that feeds edge devices [136-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A layered AI architecture that creates a central “living intelligence” and extends to edge devices is described in the “Building Trusted AI at Scale” keynote [S27].
MAJOR DISCUSSION POINT
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
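Shukla’s five layers can be written down as a plain data model to make the separation of concerns concrete. The field names and sample values are our paraphrase of his description, not any vendor’s actual architecture.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIStack:
    infrastructure: str       # compute, storage, network
    operating_system: str     # central-to-edge runtime
    platform_services: str    # tooling for building AI applications
    models: List[str]         # multi-modal foundation models
    applications: List[str]   # turn intelligence into action

# Illustrative instantiation of the five layers for disaster management.
stack = AIStack(
    infrastructure="GPU cluster plus rugged edge nodes",
    operating_system="central-to-edge orchestration",
    platform_services="pipelines, vector stores, APIs",
    models=["multi-modal foundation model"],
    applications=["evacuation routing", "damage assessment"],
)
```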
Argument 2
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation (Pankaj Shukla)
EXPLANATION
He explains that AI solutions must operate in air‑gapped, low‑connectivity environments using rugged devices that maintain zero‑trust security. This capability ensures that critical alerts can be delivered at the last mile while protecting against misinformation and data breaches.
EVIDENCE
He details how AI applications can be packaged to run on rugged, disconnected devices with a small set of central intelligence, maintaining zero-trust security, and can still provide actionable information such as asset location and impact assessment during disasters [148-152].
MAJOR DISCUSSION POINT
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
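Zero-trust delivery to disconnected devices implies that each alert must carry proof of origin the device can verify offline. A minimal sketch using a pre-provisioned symmetric key follows; key management, rotation, and the message format are all hand-waved assumptions, not Shukla’s actual system.

```python
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-deployment"  # hypothetical provisioning step

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Central authority signs each alert before dissemination."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """A disconnected device checks authenticity without calling home."""
    return hmac.compare_digest(sign(message, key), tag)

alert = b"Cyclone landfall expected 18:00 IST; move to shelter 4"
tag = sign(alert)
authentic = verify(alert, tag)              # genuine alert accepted
forged = verify(b"All clear, return home", tag)  # tampered message rejected
```

Because verification needs only the message, the tag, and the locally stored key, a rugged air-gapped device can reject forged "all clear" messages, the misinformation-mitigation property the argument describes.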
Nikhilesh Kumar
3 arguments · 128 words per minute · 627 words · 293 seconds
Argument 1
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale (Nikhilesh Kumar)
EXPLANATION
Kumar describes a four-layer framework (hazard modeling, asset impact, population impact, and workflow translation) that integrates data from multiple agencies to generate actionable disaster-risk insights. This integrated approach is essential for scaling decision-support platforms (DPIs/DPGs).
EVIDENCE
He outlines the four layers: hazard modeling, asset impact, people impact, and workflows for action, noting that data is scattered across agencies (meteorological, water, survey) and must be combined into DPIs/DPGs for effective response [155-162].
MAJOR DISCUSSION POINT
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
Argument 2
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data (Nikhilesh Kumar)
EXPLANATION
He presents a use case where AI processes 30‑minute interval satellite and radar data to nowcast water levels for roughly one million water bodies, delivering real‑time alerts to thousands of dams during cyclone events. This demonstrates AI’s capacity for large‑scale, near‑real‑time hazard monitoring.
EVIDENCE
He explains that AI leverages real-time satellite and radar data to nowcast conditions for about one million water bodies, translating the nowcast into hydraulic models for roughly 5,000 dams during cyclone periods, providing timely alerts [163-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of hybrid AI models that ingest real-time satellite and radar observations for large-scale water-body nowcasting is presented in the “Survival Tech Harnessing AI to Manage Global Climate Extremes” briefing [S20].
MAJOR DISCUSSION POINT
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data
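The final step of the pipeline Kumar describes, turning per-water-body nowcasts into dam alerts, can be sketched as a threshold check over the latest value of each series. Dam names, capacities, and the 90% warning fraction are invented for illustration.

```python
def dam_alerts(levels, capacity, warn_frac=0.9):
    """Flag dams whose latest nowcast level nears capacity.

    `levels` holds a 30-minute-interval nowcast series per dam;
    thresholds and units here are illustrative only.
    """
    alerts = []
    for dam, series in levels.items():
        latest = series[-1]
        if latest >= warn_frac * capacity[dam]:
            alerts.append(f"{dam}: {latest:.0f} of {capacity[dam]:.0f} units")
    return alerts

# Rising series at dam_a crosses the 90% warning mark; dam_b stays safe.
nowcast = {"dam_a": [70.0, 82.0, 93.0], "dam_b": [40.0, 42.0, 41.0]}
caps = {"dam_a": 100.0, "dam_b": 100.0}
flagged = dam_alerts(nowcast, caps)
```

Scaled up to roughly a million water bodies and thousands of dams, this is the kind of decision rule the AI nowcast feeds during cyclone periods.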
Argument 3
Extraction of structured risk information from unstructured news to support insurance and risk‑reduction efforts (Nikhilesh Kumar)
EXPLANATION
Kumar argues that AI can mine unstructured news reports to extract location‑specific hazard and damage information, creating structured datasets that feed into DPIs and insurance models. This enhances risk assessment and supports targeted risk‑reduction strategies.
EVIDENCE
He notes that AI can process large volumes of news containing unstructured location and hazard details, converting them into structured, hazard-wise datasets that can be used by DPIs and the insurance sector for location-specific risk intensity and frequency analysis [169-172].
MAJOR DISCUSSION POINT
Extraction of structured risk information from unstructured news to support insurance and risk‑reduction efforts
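The unstructured-to-structured conversion Kumar describes can be caricatured with keyword and pattern matching. A production pipeline would use a trained named-entity-recognition model; the hazard list, the "in <Place>" regex, and the sample article below are all our own simplifications.

```python
import re

HAZARDS = ("flood", "landslide", "cyclone", "earthquake")

def extract_events(article):
    """Rough extraction of hazard mentions with a nearby capitalised
    place name; a sketch, not a real NER pipeline."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", article):
        for hazard in HAZARDS:
            if hazard in sentence.lower():
                place = re.search(
                    r"\bin ([A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*)", sentence)
                events.append({"hazard": hazard,
                               "location": place.group(1) if place else None})
    return events

news = ("Heavy rain caused a flood in West Bengal on Tuesday. "
        "Officials also reported a landslide in Himachal Pradesh.")
records = extract_events(news)
```

Records of this shape (hazard type plus location) are what could be aggregated into hazard-wise frequency and intensity datasets for DPIs and insurers.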
Agreements
Agreement Points
Hybrid AI‑physics models improve forecast accuracy and early warning
Speakers: Beth Woodhams, Dr. Mrutyunjay Mohapatra
Hybrid blending of AI and physics models, phased rollout
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality
Both speakers emphasize that AI should augment, not replace, physical weather models, using a blended or hybrid approach to increase confidence and accuracy of forecasts and early warnings [65-71][190-197].
POLICY CONTEXT (KNOWLEDGE BASE)
The World Meteorological Organisation and the UK Met Office have highlighted AI-physics hybrid models as a way to boost forecast skill and early-warning lead times, urging public-private cooperation to deploy them [S46][S48][S45].
Substantial computational and data‑processing infrastructure is essential for AI‑driven disaster management
Speakers: Som Satsangi, Dr. Krishna Vatsa, Pankaj Shukla
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
All three highlight the need for large-scale computing resources, data-centre capacity and a layered AI architecture to process the massive data streams from expanded sensor networks and deliver real-time alerts [92-100][106-108][111-125][220-231][238-247][136-144].
POLICY CONTEXT (KNOWLEDGE BASE)
UN-endorsed frameworks treat AI as critical infrastructure, calling for standardized APIs and robust compute resources, while recent analyses stress the high energy and cost demands of large-scale models [S41][S61][S62].
Public‑private and cross‑sector collaboration is required to finance and implement AI for DRR
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra, Nikhilesh Kumar
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
The speakers agree that the financial, technical and operational challenges of AI-enabled DRR can only be met through partnerships among government, industry, academia and startups, sharing costs and expertise [107-110][213-214][155-162].
POLICY CONTEXT (KNOWLEDGE BASE)
The WMO and UNDP have repeatedly called for joint financing and multi-stakeholder partnerships to scale AI-enabled early-warning systems [S46][S63][S36].
Interoperable, sovereign‑compatible data architectures and integrated data layers are critical
Speakers: Som Satsangi, Nikhilesh Kumar, Pankaj Shukla, Dr. Krishna Vatsa
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
All underline the necessity of a clear, interoperable data governance framework that respects sovereign data constraints and links multiple hazard, asset and population data streams into a unified AI-driven decision support system [80-81][155-162][136-144][238-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at the IGF and policy papers on data sovereignty stress the need for interoperable, trust-based data flows that respect national sovereignty while enabling cross-border sharing [S38][S39][S40][S41].
Trust, explainability and human oversight are essential for AI‑driven alerts
Speakers: Avinash Ramtohul, Beth Woodhams, Som Satsangi, Pankaj Shukla
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
There is consensus that AI systems must retain human verification, provide transparent metrics, adhere to explainability standards and operate securely, especially when life-saving decisions are involved [46-55][71-74][80-81][148-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines from the UN and multistakeholder bodies emphasize explainability, transparency, and mandatory human oversight throughout the AI lifecycle for high-risk applications such as disaster alerts [S49][S50][S51][S52].
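The human-in-the-loop stance can be illustrated with a minimal sketch (the names here — `Alert`, `AlertGate`, the fields — are hypothetical, not from the session): AI may draft and rank alerts, but nothing is broadcast until a named human reviewer approves it, regardless of model confidence.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    hazard: str
    region: str
    confidence: float      # model confidence in [0, 1]
    approved: bool = False
    approved_by: str = ""

class AlertGate:
    """AI drafts alerts; only human-approved alerts may be broadcast."""

    def __init__(self) -> None:
        self.pending: list[Alert] = []
        self.outbox: list[Alert] = []

    def draft(self, alert: Alert) -> None:
        # Every AI-drafted alert waits for a human reviewer -- automation
        # never bypasses the loop, even at high model confidence.
        self.pending.append(alert)

    def approve(self, alert: Alert, reviewer: str) -> None:
        alert.approved = True
        alert.approved_by = reviewer   # audit trail for accountability
        self.pending.remove(alert)
        self.outbox.append(alert)

    def broadcast(self) -> list[Alert]:
        # Only human-verified alerts reach the dissemination channel.
        return [a for a in self.outbox if a.approved]

gate = AlertGate()
gate.draft(Alert("cyclone", "coastal district", confidence=0.92))
gate.approve(gate.pending[0], reviewer="duty officer")
```

Recording who approved each alert gives the audit trail that the explainability and accountability discussion calls for; the broadcast step would map onto a cell-broadcast channel of the kind Mauritius describes.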
AI solutions must reach the last mile in low‑connectivity settings while avoiding alert fatigue and misinformation
Speakers: Pankaj Shukla, Nikhilesh Kumar, Moderator
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data
Resilient governance frameworks must be scalable nationwide, withstand even cyber‑attacks, and avoid alert fatigue if the system is to remain effective across all disasters
All three stress that AI-enabled early warning must be delivered to end-users in remote or disconnected areas, be designed to prevent alert fatigue and guard against misinformation, and remain robust across disaster types [148-152][163-166][56-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on AI for food systems and the digital divide underline the importance of designing solutions that function in low-connectivity environments and reach underserved populations, warning against over-alerting and misinformation [S35][S37][S54].
Similar Viewpoints
All participants concur that AI should be institutionalised within a coherent national resilience framework, involving governance structures, interoperable architectures, human oversight and collaborative development to ensure effective, trustworthy disaster risk reduction [7][46-55][71-74][80-81][136-144][155-162].
Speakers: Moderator, Avinash Ramtohul, Beth Woodhams, Som Satsangi, Pankaj Shukla, Nikhilesh Kumar
AI must be embedded in national resilience architecture, not just algorithms (Moderator)
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
Five‑layer AI stack enabling central “living intelligence” with edge capability (Pankaj Shukla)
Multi‑layer data integration to produce actionable insights at scale (Nikhilesh Kumar)
Unexpected Consensus
Human‑verified alerts and digital‑twin bridging of physical and virtual worlds are needed even for small island states and large federal nations alike
Speakers: Avinash Ramtohul, Dr. Krishna Vatsa
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
Despite the difference in scale, both the Minister of Mauritius and the Indian Meteorological Director stress that disaster alerts must be grounded in accurate situational data (digital twin or sensor networks) and verified by humans before dissemination, revealing an unexpected alignment of priorities between a small island developing state and a large federal country [46-55][235-237].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs for Small Island Developing States highlight the need for digital-twin tools and verified alerts to enhance resilience, especially for critical infrastructure like submarine cables [S55][S56][S45].
Overall Assessment

There is strong, multi‑dimensional consensus among the moderator and all panelists that AI for disaster risk reduction must be hybrid, human‑centred, built on robust computational and data‑processing infrastructure, governed by interoperable sovereign‑compatible architectures, financed through public‑private partnerships, and delivered securely to end‑users. The shared emphasis on trust, explainability, and last‑mile accessibility underscores a unified vision for scalable, inclusive AI‑enabled resilience.

High consensus across technical, policy, financial and ethical dimensions, indicating that future initiatives are likely to focus on integrated hybrid models, capacity‑building infrastructure, collaborative governance frameworks and secure, human‑overseen deployment.

Differences
Different Viewpoints
Scale and cost of computing infrastructure for AI‑driven early warning
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
Som argues that India must acquire exaflop-scale supercomputers (costing $400-500 million to $1 billion each) and secure massive power and water resources, requiring private-public partnerships to meet real-time AI alert needs [92-100]. Mohapatra counters that a low-cost GPU-based “box-model” can deliver forecasts for small or low-resource countries without such massive infrastructure, presenting an alternative, affordable path [207-208].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research shows that next-generation AI models can reduce compute costs by orders of magnitude, yet financing large-scale infrastructure remains a challenge for many regions [S61][S62].
Data governance approach: sovereign‑data architectures and explainability vs open co‑development and benchmarking
Speakers: Som Satsangi, Beth Woodhams
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
Som stresses that AI systems must respect sovereign data constraints and provide transparent, explainable outputs for critical decisions, linking this to procurement and interoperability requirements [80-81]. Woodhams emphasizes collaborative model creation and shared benchmarking with external partners, focusing on user-centric performance metrics rather than explicit sovereignty safeguards [71-74].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates at the IGF contrast sovereign-centric data regimes with calls for open, benchmarked AI development, reflecting broader tensions in global data governance frameworks [S38][S39][S49].
Unexpected Differences
Inclusion of cyber‑security as a disaster domain
Speakers: Avinash Ramtohul, Other panelists (e.g., Beth Woodhams, Som Satsangi, Dr. Mohapatra, etc.)
Disaster can also strike the virtual world; cybersecurity attacks can create havoc (Avinash Ramtohul)
Other speakers focus exclusively on physical hazards (weather, floods, earthquakes) without mentioning cyber threats
Avinash expands the definition of disaster to include virtual-world incidents such as cyber-attacks and stresses the need for AI-enabled safeguards in that domain [30-33]. The remaining panelists discuss only physical hazards and AI for forecasting, response and infrastructure, showing an unexpected divergence in the scope of what constitutes a disaster in the AI-DRR context.
POLICY CONTEXT (KNOWLEDGE BASE)
National disaster management authorities and IGF sessions have begun to classify cyber incidents as disasters, urging integrated cybersecurity policies within resilience planning [S60][S57][S58].
Overall Assessment

The panel broadly concurs on the strategic importance of AI for disaster risk reduction, yet key tensions emerge around the scale and financing of computing infrastructure, the governance model for data (sovereign versus open co‑development), and the pace and architecture of AI deployment. A notable surprise is the differing view on whether cyber‑security incidents should be treated as disasters alongside traditional physical hazards.

Moderate to high. While there is consensus on the goal of AI‑enhanced resilience, the disagreements on infrastructure investment, data governance, and scope of disaster definition could impede coordinated policy action unless reconciled. These divergences suggest the need for a hybrid policy framework that accommodates both high‑performance national infrastructure and low‑cost alternatives, aligns sovereign data requirements with collaborative benchmarking, and broadens disaster definitions to include cyber threats.

Partial Agreements
All speakers agree that AI should be integrated into disaster risk reduction to improve early warning and response, but they diverge on the preferred implementation pathway: Avinash calls for digital twins and human‑in‑the‑loop safeguards; Woodhams proposes a gradual hybrid blending of ML with physics models; Som pushes for large‑scale sovereign‑compatible supercomputing infrastructure; Mohapatra suggests low‑cost GPU box models; Pankaj outlines a layered architecture that can operate at edge locations; Nikhilesh stresses multi‑layer data integration across agencies; and Vatsa highlights the need to match expanding sensor networks with processing capacity. These differing methods reflect varied views on speed, cost, governance and technical architecture [46-55][65-71][92-100][207-208][136-144][155-162][226-236].
Speakers: Avinash Ramtohul, Beth Woodhams, Som Satsangi, Dr. Mrutyunjay Mohapatra, Pankaj Shukla, Nikhilesh Kumar, Dr. Krishna Vatsa
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Hybrid blending of AI and physics models, phased rollout (Beth Woodhams)
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability (Pankaj Shukla)
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale (Nikhilesh Kumar)
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
Takeaways
Key takeaways
AI must be embedded within national disaster risk governance structures, not treated as a stand‑alone technology.
Human‑in‑the‑loop and digital‑twin concepts are essential for safe, life‑saving AI decisions (Avinash Ramtohul).
Hybrid blending of AI‑based machine‑learning models with traditional physics‑based weather models is the preferred path; rollout should be incremental and benchmarked with partners (Beth Woodhams, Dr. Mrutyunjay Mohapatra).
India’s current high‑performance computing capacity (≈28 PFLOPS) is far below the exaflop scale required for real‑time, nation‑wide AI alerts; massive investment in infrastructure, power, and cooling is needed (Som Satsangi).
Sovereign‑data‑compatible architectures, clear explainability standards, and robust procurement policies are critical for interoperable AI systems across federal and state agencies (Som Satsangi).
A five‑layer AI stack (infrastructure, OS, platform services, models, applications) enables a central “living intelligence” with edge‑capable, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation (Pankaj Shukla).
Start‑ups can add value by integrating multi‑layer data (hazard, asset, people, workflow), delivering nowcasts for millions of water bodies, and extracting structured risk information from unstructured news for insurance and risk reduction (Nikhilesh Kumar).
Box‑model GPU‑based AI forecasting offers an affordable path for resource‑constrained nations and can complement larger supercomputing efforts (Dr. Mrutyunjay Mohapatra).
Massive expansion of observational networks (weather stations, seismometers, landslide sensors) will generate huge data streams; processing capacity and a clear data‑center architecture are still lacking (Dr. Krishna Vatsa).
Cross‑sector collaboration (government, academia, industry, startups, international partners) is repeatedly called for to build, benchmark, and operationalize AI for DRR.
Resolutions and action items
Develop and maintain digital‑twin representations of critical infrastructure that are accessible to emergency services (suggested by Avinash Ramtohul).
Implement a phased hybrid AI‑physics modelling approach with joint benchmarking frameworks involving national meteorological agencies and international partners (Beth Woodhams).
Create a sovereign‑data architecture with defined explainability and audit standards for AI‑driven life‑saving decisions (Som Satsangi).
Pursue public‑private partnerships to fund and deploy exaflop‑scale computing resources, including power and water‑cooling solutions (Som Satsangi).
Design and roll out the five‑layer AI stack, ensuring edge‑ready, zero‑trust devices for disconnected environments (Pankaj Shukla).
Encourage startups to build digital public goods (DPGs) that integrate hazard, asset, people, and workflow data, and to package these for state and national agencies (Nikhilesh Kumar).
Promote the box‑model GPU‑based forecasting approach for low‑resource settings as an interim solution while larger infrastructure is built (Dr. Mrutyunjay Mohapatra).
Accelerate deployment of automated weather stations and other sensors to achieve village‑level coverage, coupled with a roadmap for data‑center integration and AI processing capacity (Dr. Krishna Vatsa).
Establish a national coordination forum (e.g., under NDMA) to define architecture, data‑center roles, and incremental capacity‑building steps (Dr. Krishna Vatsa).
Unresolved issues
Funding mechanisms and timelines for acquiring exaflop‑scale supercomputing infrastructure and associated power/water resources.
Specific governance model for sharing and protecting sovereign data across ministries, states, and private partners.
Detailed standards for AI explainability and accountability in emergency decision‑making; no consensus reached.
Operational plan for integrating AI‑generated alerts with existing early‑warning channels (cell broadcast, SMS, sirens) while preventing misinformation.
Clear roadmap for scaling the central data‑center architecture to serve numerous state‑level early‑warning agencies.
Mechanisms to ensure cybersecurity of AI‑driven alert messages and to prevent malicious manipulation of warning systems.
Allocation of responsibilities among ministries, NDMA, and the private sector for building and maintaining the five‑layer AI stack.
Suggested compromises
Adopt a gradual hybrid AI‑physics model rollout, blending outputs and increasing the AI share as confidence grows (Beth Woodhams).
Maintain human verification for high‑impact alerts while allowing AI to automate lower‑risk data processing (Avinash Ramtohul).
Leverage public‑private partnerships to share the financial burden of high‑cost infrastructure, rather than relying solely on government spending (Som Satsangi).
Use affordable box‑model GPU clusters for immediate forecasting needs in low‑resource contexts, while continuing to develop larger supercomputing capacity (Dr. Mrutyunjay Mohapatra).
Implement incremental capacity‑building: first enhance observational networks, then develop processing pipelines, followed by full AI integration, rather than attempting a single large‑scale deployment (Dr. Krishna Vatsa).
Thought Provoking Comments
Disasters are not only physical (floods, cyclones) but also virtual – cyber‑attacks can cause havoc, and we need a bridge between the physical and virtual worlds via digital twins that map structures, utilities and even real‑time human presence.
He expanded the definition of disaster to include cybersecurity and introduced the concept of a digital twin as a critical policy reform, linking physical response with virtual data infrastructure.
Shifted the discussion from traditional DRR to a broader, integrated view that includes cyber resilience; prompted later speakers to consider data architecture, interoperability, and the need for human‑in‑the‑loop decision making.
Speaker: Avinash Ramtohul (Minister, Republic of Mauritius)
We will not replace physical weather models with AI; instead we will blend physics‑based and machine‑learning models, co‑develop benchmarks with partners, and ensure the metrics we use reflect what users actually need.
She highlighted a pragmatic, hybrid modelling approach and stressed co‑development and user‑centric evaluation, challenging any notion of AI as a silver‑bullet replacement.
Guided the conversation toward collaborative model development and the importance of trustworthy metrics, influencing subsequent remarks on standards, explainability, and partnership models.
Speaker: Beth Woodhams (Senior Manager, UK Met Office)
India lacks the exaflop‑scale supercomputing infrastructure needed for real‑time AI‑driven early warning; building such capacity costs billions, so private‑sector partnerships and massive power/water resources are essential.
He provided a stark reality check on India’s computational capacity, quantified the gap with global examples, and linked infrastructure to policy and procurement challenges.
Created a turning point focusing the panel on resource constraints and the role of public‑private collaboration; later speakers (Google, NDMA) addressed how to work around these limitations with edge and federated solutions.
Speaker: Som Satsangi (Former SVP, Hewlett Packard Enterprise India)
AI deployment requires a five‑layer architecture—infra, operating system, platform services, models, and applications—plus edge/federated capabilities that can run in air‑gapped, zero‑trust environments and even on rugged devices for disconnected disaster zones.
He articulated a concrete technical framework for scaling AI in low‑connectivity, high‑risk settings, moving the discussion from abstract policy to actionable system design.
Steered the conversation toward practical implementation strategies, influencing the startup perspective on modular platforms and prompting NDMA to consider integration of data centers with field operations.
Speaker: Pankaj Shukla (Head of Customer Engineering, Google Cloud India)
Four layers are needed for disaster‑risk platforms: hazard modeling, asset mapping, people mapping, and workflow translation. AI can turn unstructured news and satellite data into structured hazard databases, unlocking insurance and risk‑reduction opportunities.
He introduced a startup‑centric view that connects data ingestion, AI‑driven insight, and actionable workflows, emphasizing the role of AI in filling data gaps for risk assessment and insurance.
Expanded the dialogue to include private‑sector innovation and the importance of data pipelines, leading to acknowledgment of the need for interoperable formats and DPIs/DPGs by other panelists.
Speaker: Nikhilesh Kumar (CEO, Vassar Labs)
The UN’s ‘Early Warning for All’ goal demands hybrid AI‑physical models; AI can improve data quality (e.g., only 5 % of satellite data is usable) and even low‑resource nations can use box‑model GPU solutions instead of massive supercomputers.
He linked global policy targets with technical realities, highlighted AI’s role in data quality, and offered a scalable solution for poorer nations, reinforcing the earlier infrastructure concerns.
Reinforced the need for hybrid approaches and democratized AI access, influencing the conversation on affordable models and encouraging collaborative efforts across agencies.
Speaker: Dr. Mrutyunjay Mohapatra (Director General, India Meteorological Department)
We have massive observational data (e.g., micro‑earthquakes, automated weather stations) but lack the capacity to process it and deliver actionable warnings to citizens; the challenge is integrating data centers with early‑warning agencies in a cost‑effective, incremental way.
He pinpointed the bottleneck between data collection and actionable dissemination, emphasizing the need for clear architecture and incremental capacity building.
Served as a synthesis point, bringing together earlier themes of infrastructure, interoperability, and user‑focused delivery, and set the stage for concluding remarks on coordinated national strategy.
Speaker: Dr. Krishna Vatsa (Head of Department, NDMA)
Overall Assessment

The discussion evolved from a broad conceptualization of disaster risk (including cyber threats) to concrete challenges of infrastructure, data quality, and implementation. Key comments—especially the digital‑twin vision, the quantified supercomputing gap, the five‑layer AI architecture, and the hybrid AI‑physical modeling approach—acted as turning points that redirected the conversation toward practical, scalable solutions and highlighted the necessity of public‑private partnerships, interoperable standards, and user‑centric design. Collectively, these insights shaped a nuanced narrative: while AI offers transformative potential for DRR, realizing it at national scale demands coordinated policy reforms, robust yet affordable computational resources, and integrated data‑to‑action pipelines.

Follow-up Questions
How can a digital twin of critical infrastructure be created and made accessible to emergency services for real-time response?
Digital twin bridges physical and virtual worlds, essential for locating people and assets during disasters.
Speaker: Avinash Ramtohul
What cybersecurity safeguards are needed to protect AI-driven early warning messages from malicious code or virus infection?
Early warning messages could be compromised, leading to misinformation and panic.
Speaker: Avinash Ramtohul
What governance frameworks ensure human-in-the-loop oversight for AI decisions affecting lives in disaster management?
Full automation can be dangerous; human verification is critical.
Speaker: Avinash Ramtohul
Which performance metrics of AI weather models are most relevant to end‑users and how should they be benchmarked?
Need to build trust; metrics must reflect user needs, not just technical scores.
Speaker: Beth Woodhams
How can standardized benchmarking and evaluation protocols be co‑developed for hybrid AI‑physical weather forecasting models?
Consistent evaluation ensures comparability and trust across partners.
Speaker: Beth Woodhams
What cost‑effective strategies can India adopt to develop exaflop‑scale high‑performance computing infrastructure required for real‑time AI‑driven disaster alerts?
Current capacity far below needed; infrastructure is a bottleneck.
Speaker: Som Satsangi
What sustainable power and cooling solutions are required to support large AI supercomputers for disaster risk reduction?
High energy and water demand; need environmentally viable options.
Speaker: Som Satsangi
How can public‑private partnership models be structured to finance and operate AI infrastructure for national early warning systems?
Government alone cannot bear costs; private sector involvement essential.
Speaker: Som Satsangi
What technical standards are needed to ensure AI systems are interoperable with sovereign data architectures across federal and state levels?
Interoperability is crucial for unified DRR across jurisdictions.
Speaker: Som Satsangi
What explainability standards should be applied when AI informs life‑saving disaster response decisions?
Transparency needed for trust and accountability.
Speaker: Som Satsangi
How can AI pipelines be scaled to provide near‑real‑time nowcasting for millions of water bodies and dams using satellite and radar data?
Critical for flood management; requires handling massive data streams.
Speaker: Nikhilesh Kumar
What AI techniques can extract structured hazard and damage information from unstructured news and social media sources to build comprehensive risk databases?
Current lack of historic hazard frequency data; AI can fill gaps.
Speaker: Nikhilesh Kumar
How should digital public infrastructure (DPI) and digital public goods (DPGs) be designed to translate multi‑agency data into actionable disaster response workflows?
Coordination across agencies is needed for effective action.
Speaker: Nikhilesh Kumar
What is the feasibility and performance of low‑cost GPU‑based ‘box model’ AI solutions for early warning in small island developing states?
Offers affordable alternative to supercomputers for resource‑constrained nations.
Speaker: Mrutyunjay Mohapatra
What mechanisms can engage the public or community to augment national computational infrastructure for AI‑driven DRR?
Leveraging broader resources could bridge infrastructure gaps.
Speaker: Mrutyunjay Mohapatra
What architectural model best links central data centers with distributed early warning agencies to ensure timely information flow?
Need clear integration to justify data center investments.
Speaker: Krishna Vatsa
What phased roadmap should be followed to incrementally build AI capacity for disaster risk reduction within limited resources?
Gradual capacity building needed to avoid over‑investment and ensure sustainability.
Speaker: Krishna Vatsa

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.