Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal

20 Feb 2026 12:00h - 13:00h


Session at a glance: summary, key points, and speakers overview

Summary

In his keynote address, a senior Indian Army officer highlighted how artificial intelligence is reshaping modern warfare, recalling his early experience with paper maps and slow information flow in the 1980s [6-8]. He contrasted that with today’s operation rooms dominated by massive digital displays that fuse sensor data and provide near-real-time battlefield pictures, a shift he described as “like Star Wars coming to life” [9].


To illustrate the risks of over-reliance on AI, he recounted a high-tempo mission where an algorithm recommended an immediate strike with a high confidence score and a decision window measured in seconds [10-13]. The commander paused, not because he distrusted the technology but because his experience sensed an anomaly and he asked, “What does the machine not know?” [14-16]. He discovered that a civilian evacuation had just begun and was not yet reflected in the data, meaning the target could include non-combatants, so he delayed the strike and spared innocent lives [19-21]. He used this episode to assert that AI can accelerate recommendations, but only humans can exercise judgment and bear responsibility for the outcome [24-25].


Citing recent statements by the Prime Minister, he emphasized that AI guardrails are not optional for the military but mandatory given the high stakes [26-27]. He noted that the Indian Armed Forces operate in a uniquely complex security environment across contested borders, multiple domains, dense populations, and high escalation intensity [28-30]. Accordingly, the forces view AI as a force multiplier in intelligence fusion, surveillance, logistics and other functions, and have declared this the “year of networking and data-centricity” [31-33].


Indigenous platforms such as ACOM AI-as-a-service, Sama Drishti, Shakti and Akash Teer have been developed through collaboration with industry and startups, and the army remains open to further partnerships for self-reliant transformation [35-38]. The speaker outlined four responsible-AI principles: (i) certain decisions must remain human-controlled and legally accountable [41-44]; (ii) AI-enabled systems should be treated as weapons and tested in contested conditions [46-48]; (iii) transparency must turn the “black box” into a “glass box” so commanders know the data and training provenance [52-55]; and (iv) commanders and staff must be trained to integrate algorithms into operations [56].


He linked these principles to broader governance efforts, referencing India’s AI governance guidelines and ongoing United Nations discussions on meaningful human control and accountability for autonomous weapons [57-60]. Finally, he asserted that India, as a major military power and emerging AI hub grounded in ethical traditions, has both the capacity and credibility to lead the global conversation on responsible AI in warfare [61-63]. The address concluded that while AI will continue to transform the battlefield, human judgment, ethical safeguards, and international governance remain essential to ensure security and moral responsibility [24-25][57-60].


Key points

Major discussion points


Rapid transformation of battlefield decision-making through AI – The speaker contrasts the early days of paper maps and slow information flow ([6-9]) with today’s “massive digital display” that fuses sensor data instantly and forces decisions within seconds ([10-13]). He illustrates this shift with a high-tempo scenario where a commander pauses a machine-generated strike, asks “What does the machine not know?” and saves civilian lives by applying human judgment ([14-23]).


Human control, accountability and ethical guardrails are non-negotiable – AI may recommend actions, but ultimate authority must remain with humans; legal and moral responsibility cannot be delegated to machines ([41-45]). Because AI-enabled systems can cause harm, they must be treated as weapons, rigorously tested in contested conditions, and not just as software ([46-48]).


Indigenous AI development and industry-startup collaboration – The Indian Armed Forces are deploying home-grown applications such as ACOM AI, Sama Drishti, Shakti and Akash Teer, all built through partnerships with industry and startups, reflecting a push for self-reliant, data-centric capabilities ([35-38]).


Training and capacity-building for commanders – To operate safely in a data-rich, AI-augmented battlespace, today’s commanders and staff must be educated on algorithm integration, system command, and ethical decision-making ([55-56]).


Need for robust governance frameworks and international cooperation – The speaker calls for AI-specific legal provisions, referencing India’s AI Governance Guidelines and ongoing UN discussions on “meaningful human control” and accountability, positioning India to lead the conversation on responsible AI use in warfare ([57-60]).


Overall purpose / goal


The address aims to inform and persuade an audience of military leaders, industry innovators, and policymakers that while AI is a decisive force multiplier for the Indian Armed Forces, its deployment must be paired with strict human oversight, transparent development, rigorous testing, and comprehensive governance. The speaker seeks to rally support for collaborative, indigenous innovation and to position India as a responsible global leader in military AI ethics and regulation.


Tone of the discussion


Opening: Formal and proud, highlighting personal experience and the Army’s legacy ([1-5]).


Transition: Cautiously urgent, emphasizing the speed of modern AI-driven operations and the critical need for human judgment ([9-23]).


Prescriptive: Deliberate and normative when outlining responsibilities, accountability, and safety measures ([41-55]).


Collaborative: Optimistic and inviting, stressing partnerships with startups and industry ([35-38]).


Aspirational: Visionary and diplomatic toward the end, calling for international governance and positioning India as a moral leader ([57-63]).


Overall, the tone moves from reflective pride to a warning-laden call for responsibility, then to constructive collaboration, and finally to a forward-looking, diplomatic appeal.


Speakers

Speaker 1


– Role/Title: Keynote speaker representing the Indian Army and the Indian Armed Forces (identified in the session title as Lt Gen Vipul Shinghal)


– Area of Expertise: Military operations, AI integration in defence, strategic decision‑making, AI governance and safety in the armed forces


Additional speakers:


(none identified)


Full session report: comprehensive analysis and detailed insights

The senior Indian Army officer began his keynote by acknowledging the programme’s length, greeting a diverse audience of industry leaders, academics, AI innovators, fellow uniformed colleagues, and students, and noting the honour of representing the Indian Armed Forces on this occasion [1-5].


He then traced the evolution of battlefield decision-making, recalling the analogue era of his first war-game thirty-five years ago when information arrived slowly on paper maps, notes and telephone reports and commanders deliberated with ample time [6-8]. He contrasted this with today’s operation rooms dominated by massive digital displays that fuse data from numerous sensors in real time, with AI instantly analysing the stream to produce a living picture of the battlespace – a change he likened to “Star Wars coming to life” [9].


To illustrate the risks inherent in this speed-driven environment, he described a recent high-tempo mission in which an AI system generated a high-confidence recommendation to strike a target within a decision window measured in seconds. Although the probability score was high, the senior commander deliberately paused, not out of distrust of the technology but because his experience sensed an anomaly. He asked, “What does the machine not know?” and discovered that a civilian evacuation had just begun and was not yet reflected in the sensor data, meaning the algorithm was misidentifying civilians as enemy troops. By exercising judgement and delaying the strike, the commander spared innocent lives while still achieving the mission objective [10-23].


From this episode he drew a fundamental conclusion: AI can inform, accelerate and recommend actions, but only humans are capable of exercising moral judgement and bearing responsibility for the outcomes of lethal decisions [24-25].


He reinforced this point by referring to recent statements from the Honorable Prime Minister and other eminent speakers, who stressed that safety guardrails for AI are not optional in the military context but mandatory given the high stakes involved [26-27].


The officer highlighted the uniquely complex security environment in which the Indian Armed Forces operate: contested borders, multi-domain challenges, dense civilian populations and a high intensity of escalation. This makes AI an essential force multiplier across intelligence fusion, surveillance, decision support, maintenance and logistics [28-32].


In line with the vision of technological transformation, the Indian Armed Forces have declared this year the “year of networking and data-centricity” and are committed to fully equipping the services with AI-enabled, data-centric capabilities [33-35]. Indigenous development has been central to this shift. Home-grown applications such as ACOM AI-as-a-Service, the battlefield situational-awareness platform Sama Drishti, and the sensor-shooter fusion systems Shakti and Akash Teer have been created through close collaboration with industry leaders and startups, and the army remains open to further partnerships to deepen self-reliant transformation [35-38].


Four concrete principles for responsible AI deployment were outlined:


1. Human control over lethal decisions – decisions of a lethal nature must never be delegated to machines; legal authority and moral accountability must remain with the commander, not the algorithm [41-45].


2. Treat AI-enabled systems as weapons – because they are designed to cause harm, they must be subjected to rigorous testing under contested battlefield conditions where sensors may be degraded by dust, smoke or deception [46-51].


3. Transparency (“glass-box” AI) – the “black box” of data must become a “glass box”, enabling commanders to understand the data sources and training regimes behind AI outputs [52-55].


4. Dedicated training for commanders and staff – personnel must master algorithm integration, system command and rapid OODA cycles to operate safely in a data-rich, AI-augmented battlespace [55-56].
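The decision protocol running through principles 1 and 3 — the algorithm may only recommend, while authorization and a check on data currency stay with a named human — can be sketched as a minimal gate. Everything below (class names, fields, the 60-second staleness threshold) is a hypothetical illustration for this report, not a description of any fielded system:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A machine-generated engagement recommendation (illustrative only)."""
    target_id: str
    confidence: float                                  # model probability score, 0.0-1.0
    data_sources: list = field(default_factory=list)   # provenance, the "glass box"
    data_timestamp: float = 0.0                        # time of the newest sensor input

def decide(rec, commander_approves, max_data_age_s=60.0, now=0.0):
    """Gate a recommendation: no action is taken on model confidence alone.

    Stale data (e.g. an evacuation not yet in the feed) blocks escalation
    regardless of the confidence score, and a human must explicitly approve.
    """
    if now - rec.data_timestamp > max_data_age_s:
        return "HOLD: data may not reflect ground truth"
    if not commander_approves(rec):
        return "HOLD: commander withheld authorization"
    return "ENGAGE (human-authorized)"
```

In this sketch the confidence score is deliberately never compared against a threshold that could trigger engagement on its own; the only paths to action run through the human callback, mirroring the speaker’s point that a 90% score must not become a “moral buffer”.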


He also recalled that long-standing treaties such as the rules governing the use of nuclear-biological-chemical (NBC) weapons, the Geneva Convention on the Treatment of Prisoners of War, and the Convention on the Use of Landmines have historically guided armed conflict [26-27].


These domestic measures dovetail with broader governance efforts. India’s newly released AI Governance Guidelines address the risks of generative AI and embed safety guardrails, while the summit’s daily declaration reinforced those guidelines, underscoring their path-breaking nature [57-60]. The speaker noted that the UN Secretary-General had addressed these initiatives just the previous day, and that the United Nations is actively discussing “meaningful human control” and accountability for autonomous weapons [57-60]. Although consensus on international conventions is still evolving, the very fact of the debate reflects a shared concern for preventing unchecked autonomy that could destabilise strategic stability [57-60].


Positioning India as uniquely suited to lead the global conversation on responsible military AI, he drew on the nation’s status as a major military power, a burgeoning AI hub, and a civilisation rooted in ethical restraint, embodied in the concepts of Shakti (force) and Dharma (righteousness). He asserted that India possesses both the capacity and the credibility to shape international norms and to champion a “Manav Vision for AI” that integrates moral and ethical systems into technology development [61-63]. He stressed that while the nature of war may evolve, the conscience of the nation must remain unchanged [61-63].


In summary, the address charted the rapid transformation of warfare from paper-based maps to AI-fused digital battle-spaces, underscored the indispensable role of human judgement and legal accountability, outlined a roadmap for indigenous AI development and industry collaboration, called for rigorous testing, transparency and training, linked national initiatives to emerging international governance frameworks, and concluded with a moral reminder that technological progress must be anchored in an unchanged national conscience.


Session transcript: complete transcript of the session
Speaker 1

Firstly, let me just say this: you know, I know I’m the last speaker of a long day, so I’ll do this quickly and come to the essentials. Distinguished guests, leaders of industry and academia, AI innovators, my colleagues in uniform, who are also innovators, students, ladies and gentlemen, a very good evening to you all. It’s a privilege to be speaking here, delivering a keynote address representing the Indian Army and the Indian Armed Forces. You know, 35 years ago, when I joined the Army as a young lieutenant, my first war game unfolded in a room dominated by large paper maps. Information arrived slowly: handed-in notes, verbal updates, reports from the field taken on telephone. We pieced that picture together, physically marked it on the map using color-coded pins and flags, and presented it to the commander, who then took a decision deliberately and with reflection, fully aware that the adversary was operating within similar timelines.

Twenty years later, the rhythm began to change. Intelligence became sharper and faster. Operation rooms had a few screens displaying maps; presentations moved to PowerPoint. The volume of information increased and timelines got compressed, but there was still space to pause and breathe, and the OODA cycle could still breathe. Today, when I walk into an operation room, the difference is stark. It’s like Star Wars coming to life. A massive digital display dominates the wall; inputs stream in continuously from multiple sensors; intelligence is fused almost instantly and analyzed by AI, presenting a living, dynamic picture of the battle space. Some of the work we did as left-handers is now automated, and the commander knows that the adversary is seeing much the same picture about us at much the same speed. The pressure is not anymore about awareness; it is about decision. Seconds matter. Hesitation has consequences. It is in this environment of speed, uncertainty and time compression that I want to transport you to an operational-stage scenario. During a high-tempo military operation, a senior commander was presented with a machine-generated recommendation, based on multiple sensor feeds and AI analysis, to engage a target immediately.

The system was confident. The probability score of the machine was high. The decision window was measured in seconds. But the commander paused. Not because he didn’t trust the technology. His experience told him that something was amiss. He asked a simple question. What does the machine not know? The pause revealed something the algorithm could not see. A civilian evacuation had just begun minutes earlier, not yet reflected in the data. The machine saw the movement as that of enemy troops, whereas they were civilians. It is even possible that troops were mixed with the civilians. However, the commander exercised judgment and restraint. The strike was delayed, innocent lives were spared, and the mission was still achieved. This moment captures a fundamental truth.

AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them. Yesterday our Honorable Prime Minister and many other eminent speakers spoke of the need for guardrails and safety to be built into AI-enabled models. In the case of the military, these are not optional but mandatory, as the stakes are much higher. The Indian Armed Forces operate in a uniquely complex security environment, across contested borders, multiple domains, dense populations and high escalation intensity. Therefore, ladies and gentlemen, let me clearly state that we in the Defence Forces are fully cognizant that artificial intelligence is fundamentally redefining the modern battle space. Its power in intelligence fusion, surveillance, decision support, maintenance, logistics and a host of other functions is a force multiplier in today’s multi-domain battle space.

In keeping with the vision of technological transformation, the Indian Armed Forces are committed to ensuring that the military is fully equipped with the necessary equipment. The Chief of Army Staff has formally declared this year as the year of networking and data centricity, signaling a deliberate shift towards data-driven operations and AI-enabled capabilities. The evolution is powered by many indigenously built applications: ACOM, AI as a service; Sama Drishti, which is a battlefield situational awareness software; and Shakti and Akash Teer, which are sensor and shooter fusion systems.

All of these have been built through our collaboration with industry leaders and startups, many of the innovators who have been around in this summit for the last few days. For this self-reliant transformation, we are open to collaboration with many startups and innovators to build it further. However, we are fully cognizant that this needs to be a responsible development of AI. Allow me to reflect on four points in this regard. Firstly, decisions that must not be delegated to AI must always remain with humans. Human control has to be institutionalized into law and moral accountability. Accountability cannot be with the machine. If a machine recommends a decision with 90% accuracy and the commander goes with it and it is a wrong decision, it gives the commander a moral buffer.

But is that correct? Secondly, AI-enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as software. They therefore must be evaluated and tested in contested field conditions. Remember that the battlefield is a chaotic data environment. Sensors get obscured by dust, smoke, deception and many other things. A system that performs well in controlled conditions but fails in battlefield conditions is not a force multiplier; it’s a liability. Thirdly, trust and sovereignty must be built into the system. The commander taking a decision based on an AI-enabled system must know what data is being used and how it has been trained. The black box of data must become a glass box.

And fourthly, commanders and staff of today need to be trained for this fast-evolving battlefield. As I told you about the operational scenario, as it was 30 years ago and as it is today in a war game, we need to be able to integrate algorithms, be able to command systems, and know how to go forward. The Indian Army is taking steps in training our commanders and staff in this direction. The next thing that I’d like to say is that, in sum, the nature of war may change but our conscience must not. It is important to recognize that these concerns about AI safety and governance are not confined to the military domain alone; they are increasingly shaping national policy. The launch of the India AI Governance Guidelines and the daily declaration during the summit, which just happened, is a path-breaking step in this direction. This framework defines AI systems as being generative and therefore having unintended consequences, and this has lessons for us as military planners. At this stage, I would also like to remind ourselves of a historical truth. I do believe in the wisdom of humanity: whenever faced with a new crisis, we have to face it. The rules governing the use of NBC weapons, the Geneva Convention on Treatment of Prisoners of War, the Convention on Use of Landmines and other such frameworks have stood the test of time and, with few exceptions, have been followed during conflicts also.

In a similar manner, a set of governance frameworks and legal provisions need to be evolved about the use of AI-based systems and autonomous weapons. Already, under the framework of the United Nations, discussions are underway around meaningful human control and accountability. His Excellency the UN Secretary-General also talked about various such initiatives just yesterday. While consensus remains complex, the debate itself reflects a shared concern that autonomy without restraint would undermine strategic stability. India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint, understanding that Shakti, that is force, and Dharma, that is righteousness, must go hand in hand, has both the capacity and the credibility to lead this conversation.

The clear and all-encompassing Manav Vision for AI, enunciated by the Honorable Prime Minister in this hall yesterday, emphasizing moral and ethical systems as well as

Related Resources: knowledge base sources related to the discussion topics (40)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The senior Indian Army officer began his keynote representing the Indian Armed Forces.”

The knowledge base identifies Lt Gen Vipul Shinghal as a senior Indian Army officer representing the Indian Armed Forces as a keynote speaker [S10].

Confirmed (medium)

“He has 35 years of military service, recalling his first war‑game thirty‑five years ago.”

S10 notes that the speaker has 35 years of service in the Indian Army, confirming the timeframe referenced in the report [S10].

Confirmed (high)

“AI can inform, accelerate and recommend actions, but only humans are capable of exercising moral judgement and bearing responsibility for lethal decisions.”

The source states that AI can inform, accelerate and recommend decisions, but emphasizes the need for human judgement and responsibility [S10].

Additional Context (medium)

“Human oversight is essential to ensure moral judgement and accountability in AI‑driven military operations.”

S105 highlights that maintaining humans-in-the-loop is crucial for oversight in AI-enabled targeting and decision-support systems, adding nuance to the report’s claim about moral judgement [S105].

Additional Context (low)

“AI‑enabled systems must be treated as weapons and evaluated in contested field conditions because the battlefield is a chaotic data environment.”

S16 explains that AI-enabled systems designed to cause harm should be treated as weapons and tested under contested, data-chaotic conditions, providing additional context to the report’s discussion of AI risks and guardrails [S16].

External Sources (113)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S5
Using AI to tackle our planet’s most urgent problems — ## Community-Driven Mapping and Success Stories 1. **The Earth Layer**: Changes occurring over decades, representing fu…
S6
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — Ayça Dibekoğlu: Please object now, or until the 25th of May, until when we can finalize our messages. Okay, I see no ob…
S7
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — <strong>Naveen GV:</strong> out a long, lengthy form of information for that to be processed much later by another human…
S8
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S9
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S11
Conversation: 01 — Artificial intelligence
S12
WS #110 AI Innovation Responsible Development Ethical Imperatives — Guilherme Canela de Souza Godoi: Thank you very much. First and foremost, thank you so much for the invitation to be her…
S13
Opening Ceremony — **Lucio Adrian Ruiz**, Secretary for the Dicastery for Communication from the Holy See, provided a philosophical perspec…
S14
Ancient history can bring clarity to AI regulation and digital diplomacy — In his op-ed,From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle …
S15
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — – Vint Cerf- Olga Cavalli- Gerald Folkvord Human rights principles | Cyberconflict and warfare Importance of human con…
S16
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S17
9821st meeting — At the same time, as we’ve heard, if it’s misused, AI can pose tremendous threats to the international peace and securit…
S18
UNSC meeting: Artificial intelligence, peace and security — Yi Zeng:My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my pers…
S19
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S20
WS #123 Responsible AI in Security Governance Risks and Innovation — Alexi Drew: Thank you, I’ll run through these nice and quickly in the interest of giving people their time. I’d like to …
S21
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S22
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Professor Suresh: From Amrita Vishwa Vidyapetam – participated in the report launch ceremony -Speaker 1: Event moderat…
S23
Bridging the AI innovation gap — ## Call for Partnerships ### Innovation Factory and Acceleration Programme LJ Rich: to invite our opening keynote. It’…
S24
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S25
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S26
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the imp…
S27
Keynote-Mukesh Dhirubhai Ambani — “First, AI for India’s deep tech and advanced manufacturing leadership.”[9]. “Second, world leading multilingual AI capa…
S28
The Global Power Shift India’s Rise in AI & Semiconductors — How do you make sure that. there is enough packaging verification and many of that ecosystem getting developed. So all o…
S29
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S30
The AI soldier and the ethics of war — For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, tec…
S31
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Development | Legal and regulatory Effective capacity building requires training…
S32
WS #184 AI in Warfare – Role of AI in upholding International Law — Reference to existing legal principles such as command responsibility and state responsibility. Shaigan discusses the c…
S33
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — But is that correct? Secondly, AI -enabled systems are designed to cause harm. Therefore they must be treated as a weapo…
S35
Policymaker’s Guide to International AI Safety Coordination — Translating scientific knowledge into effective policy requires extensive testing, simulations, and understanding of rea…
S36
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S37
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S38
Artificial intelligence (AI) – UN Security Council — The discussions on structuring capacity-building initiatives in AI to maximize their impact, especially in regions with …
S39
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — All speakers acknowledge that having strategies and frameworks is insufficient without proper implementation mechanisms,…
S40
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to …
S41
Military AI: Operational dangers and the regulatory void — For the first time, in 2023, the UN Security Council discussed the implications of AI on world peace and security confir…
S42
UNSC meeting: Scientific developments, peace and security — Malta:Thank you, President. I begin by thanking the Swiss Presidency for organizing today’s briefing on this important a…
S43
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S44
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S45
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S46
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S47
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S48
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S49
Laying the foundations for AI governance — The need for collaboration between industry and regulators
S50
Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S51
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The panel articulated a sophisticated approach to AI sovereignty that goes beyond technological nationalism. Success req…
S52
Why will AI enhance, not replace, human diplomacy? — AI can support crisis response by running simulations, analysing data in real-time, and suggesting contingency plans. Ho…
S53
WS #184 AI in Warfare – Role of AI in upholding International Law — Accountability and Responsibility Sheikh-Ali maintains that human responsibility and accountability are ultimately nece…
S54
Artificial intelligence — Within the UN System, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a…
S55
Keynote-Jeet Adani — This comment reframes potential criticism of nationalist AI policy as strategic wisdom rather than protectionism. It pro…
S56
The transformative role of ai in modern warfare: a detailed analysis — In late 2021, the Royal Navy’s collaboration with major tech companies, including Microsoft and Amazon Web Services, res…
S57
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — This comment elevated the discussion from tactical considerations to strategic and philosophical implications. It forced…
S58
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S59
The transformative role of ai in modern warfare: a detailed analysis — In late 2021, the Royal Navy’s collaboration with major tech companies, including Microsoft and Amazon Web Services, res…
S60
AI in Action: When technology serves humanity — Principles, however, remain abstract until seen in practice. This week turns to concrete examples of AI amplifying human…
S61
AI in practice across the UN system: UN 2.0 AI Expo — The UN 2.0 Data & Digital Community AI Expo examined how AI is currently embedded within the operational, analytical and i…
S62
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S63
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the imp…
S64
Ethics and AI | Part 5 — Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity an…
S65
Designing India's Digital Future AI at the Core 6G at the Edge — The strong consensus among government, industry, and technical experts on the need for indigenous capabilities, balanced…
S66
From KW to GW Scaling the Infrastructure of the Global AI Economy — “So it is basically a collaboration between the Indian startups and the global technology strength of a global company”[…
S67
India boosts military AI efforts amid China rivalry — India is ramping up its efforts in the field of AI, not only for commercial purposes but also for military applications, a…
S68
The Global Power Shift India’s Rise in AI & Semiconductors — How do you make sure that there is enough packaging verification and many of that ecosystem getting developed. So all o…
S69
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S70
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Effective capacity building requires training at multiple levels – technical trai…
S71
The AI soldier and the ethics of war — For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, tec…
S72
UNSC meeting: Scientific developments, peace and security — 2. Regulatory Frameworks and Governance- China: Supported UN as a platform for global technology governance and called f…
S73
Dedicated stakeholder session — Given the transformative impact of such technologies, there’s a critical need for robust legal guidelines to ensure ethi…
S74
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent…
S75
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S76
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S77
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S78
Building Future Leaders – Competency Driven Succession Planning — This comment provides an insightful definition of leadership that goes beyond formal positions, emphasizing personal qual…
S79
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S80
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S81
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S82
AI Meets Cybersecurity Trust Governance & Global Security — “Move fast, break things.” [113] “And the motto there is move deliberately and maintain things.” [114] “How to be able to ge…
S83
AI for Humanity: AI based on Human Rights (WorldBank) — Satola also highlights the interconnected nature of AI with other emerging technologies such as 5G and quantum computing…
S84
Legal Notice: — In this scenario, responsibility again creates few problems, at least as far as attribution goes. State A is under a…
S85
WS #179 Navigating Online Safety for Children and Youth — 1. Tech Companies: The role of corporations in proactively ensuring child safety was debated, with some calling for grea…
S86
Subject matter — 1. Member States shall ensure that the supervisory or enforcement measures imposed on essential entities in respect of t…
S87
PREAMBLE — unless such measures are provided for in its laws and regulations and are administered in a reasonable, objective and im…
S88
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S89
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — A conscientious request for clarity and specificity was also apparent, underlining the need for concrete, actionable pla…
S90
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S91
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S92
[Tentative Translation] — In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S93
Keynote-Rishi Sunak — The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forward-looking per…
S94
Ad Hoc Consultation: Friday 2nd February, Afternoon session — In summation, India’s advocacy for methodical international governance reform highlights its commitment to terminologica…
S95
Ad Hoc Consultation: Friday 9th February, Morning session — Although negative feelings are held toward Article 57 as it stands, the positive sentiment associated with backing Iran’…
S96
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-1 — My email ID is ttopgay at cabinet .gov .pt. Your Excellencies, the AI revolution will not wait for us. It will continue …
S97
FOREWORD — the desire of leaders to wield some influence over the external images of the places they rule are, of course, as old a…
S98
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Schneider began by thanking the Indian government for bringing together leaders, innovators, researchers, and civil soci…
S99
Keynote Adresses at India AI Impact Summit 2026 — Gore reinforced this assessment, noting that “India’s entry into Pax Silica isn’t just symbolic, it’s strategic, it’s es…
S100
Knowledge and Diplomacy — In the last quarter of this fading century a technological revolution, centred around information, has transformed …
S101
The waning of mind maps — In order to survive, a hunter-gatherer of yore (or his contemporaries today) needed a mind map with information on game, w…
S102
Most transformative decade begins as Kurzweil’s AI vision unfolds — AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translati…
S103
EU Artificial Intelligence Act — (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as a…
S104
EU AI Act (Commission proposal) — (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as a…
S105
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — AI in conflict is a central concern, with risks extending far beyond LAWS. AI is integrated into target identification, …
S106
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the latest…
S107
Building Trustworthy AI Foundations and Practical Pathways — And it says, I have technically, correctly satisfied your query. Everything you said, I have done. And so when we give a…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — For the first time, a technology may reach a stage at which individuals can no longer reliably determine whether what th…
S109
Table of Contents — 3. Stakeholder Risks: lack of support, management failure, organizational structure. 4. Regulatory Risks: Noncompliance…
S110
SEARCHING FOR MEANINGFUL HUMAN CONTROL — Despite these daunting challenges, some states point out that LAWS could have military, and even humanitarian, ad…
S111
By the Same Author — After that first referendum, intense political activity had continued in Sikkim, leading to some disturbances. O…
S112
AI and the moral compass: What we can do vs what we should do — We sometimes speak of ‘ethical AI’, but ethics is not a property of code. Algorithms can simulate empathy, but they canno…
S113
Is the world ready for AI to rule justice? — AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As techno…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
15 arguments · 177 words per minute · 1,445 words · 489 seconds
Argument 1
Historical shift from paper maps to AI‑fused digital battle‑space (Speaker 1)
EXPLANATION
The speaker describes how military information processing has transitioned from manual paper maps to AI‑driven digital systems. This shift reflects a broader transformation in operational tempo and decision‑making.
EVIDENCE
He recounts joining the Army 35 years ago, using large paper maps, handwritten notes and telephone updates to build a picture of the battlefield, and contrasts that with today’s operation rooms that feature massive digital displays and AI-fused sensor data creating a living, dynamic picture of the battlespace [6-9].
MAJOR DISCUSSION POINT
Evolution of AI in Military Operations
AGREED WITH
Other eminent speakers (referenced)
Argument 2
Modern AI provides real‑time, multi‑sensor situational awareness and acts as a force multiplier (Speaker 1)
EXPLANATION
Modern AI delivers instantaneous fusion of data from multiple sensors, giving commanders a comprehensive, real‑time view of the battlefield. This capability multiplies force effectiveness across domains.
EVIDENCE
The speaker describes a massive digital display that continuously ingests data from many sensors, with AI instantly fusing and analysing it to present a living picture of the battlespace, and later labels AI as a force multiplier and a key element of data-centric transformation [9][31-32].
MAJOR DISCUSSION POINT
Evolution of AI in Military Operations
AGREED WITH
Other eminent speakers (referenced)
Argument 3
Commander’s pause despite high‑confidence AI recommendation saved civilian lives (Speaker 1)
EXPLANATION
In a high‑tempo operation, a commander halted a machine‑generated strike recommendation despite a high confidence score, asking what the system did not know. This pause uncovered an ongoing civilian evacuation, preventing civilian casualties while still achieving the mission.
EVIDENCE
The transcript details that the commander paused, asked “What does the machine not know?”, discovered that a civilian evacuation had just begun and was not reflected in the data, and consequently delayed the strike, sparing innocent lives and still achieving the mission [13-24].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
Argument 4
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
EXPLANATION
The speaker asserts that while AI can inform and recommend, ultimate moral and legal accountability for decisions must stay with humans. Machines cannot bear responsibility, and delegating it would create an inappropriate moral buffer.
EVIDENCE
He states that AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility, and further emphasizes that accountability cannot be placed on the machine [25][41-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several sources stress that AI cannot bear moral or legal accountability and that responsibility must stay with humans, e.g., the speaker’s own remarks about accountability not residing with the machine [S10], the philosophical view that AI is not a subject and thus cannot be responsible [S13], and UN-level discussions on human control over lethal AI systems [S15].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 5
Institutionalizing human control in law is essential (Speaker 1)
EXPLANATION
The speaker calls for codifying human oversight of AI systems into law, ensuring that humans retain institutional control and moral responsibility over AI‑driven actions. Such a legal framework would prevent undue reliance on autonomous decisions.
EVIDENCE
He explicitly says that human control has to be institutionalized in law and that moral accountability cannot rest with the machine [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to codify human oversight into law is highlighted in the keynote where the speaker says human control must be institutionalized [S10], reinforced by UN-focused analyses on legal frameworks for autonomous weapons [S15] and historical analogues of legal accountability [S14].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 6
AI‑enabled systems are weapons and must be tested in contested battlefield conditions (Speaker 1)
EXPLANATION
AI‑enabled military systems are fundamentally weapons and must be evaluated under realistic, contested battlefield conditions. Testing only in controlled environments is insufficient because battlefield data can be obscured or deceptive.
EVIDENCE
The speaker notes that AI-enabled systems are designed to cause harm, must be treated as weapons, and therefore need to be evaluated and tested in contested field conditions where sensors may be obscured by dust, smoke, or deception [46-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker argues that AI-enabled systems are weapons that require testing in realistic, contested environments, a point echoed in the same keynote [S10] and in broader UN-level discussions on autonomous weapon regulation [S15].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
Argument 7
Certain critical decisions must never be delegated to AI (Speaker 1)
EXPLANATION
High‑stakes decisions should remain under human authority and never be handed over to AI, as delegating such decisions erodes accountability. The speaker stresses that some decisions must always stay human‑centric.
EVIDENCE
He asks which decisions must never be delegated to AI and “must always remain human”, and reinforces that accountability cannot rest with the machine, highlighting the need for human-only decision making [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that some high-stakes decisions must remain human-centric is supported by the speaker’s insistence that certain decisions “must always remain human” [S10] and by UN-centric policy papers emphasizing human control over lethal AI functions [S15].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 8
Transparency: AI “black box” must become a “glass box” showing data sources and training (Speaker 1)
EXPLANATION
Transparency is essential; the opaque “black box” of AI must become a “glass box” where users can see the data sources and training methods. This builds trust and ensures informed use of AI systems.
EVIDENCE
He stresses that commanders must know what data is being used and how the system was trained, calling for the black box to become a glass box to foster trust and sovereignty [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for turning AI’s opaque black box into a transparent glass box are made in the keynote [S10] and are reinforced by broader governance recommendations for algorithmic transparency and traceability [S21].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
AGREED WITH
UN Secretary‑General (referenced)
Argument 9
Indigenous applications (ACOM AI, Sama Drishti, Shakti, Akash Teer) illustrate self‑reliant AI capability (Speaker 1)
EXPLANATION
India has developed several indigenous AI applications—ACOM AI, Sama Drishti, Shakti, and Akash Teer—that demonstrate self‑reliant capabilities in battlefield awareness and sensor‑shooter fusion. These showcase domestic innovation and technological independence.
EVIDENCE
He lists the indigenously built applications ACOM AI, Sama Drishti, Shakti and Akash Teer as examples of battlefield situational awareness and sensor-shooter fusion, noting they were created through collaboration with industry and startups [35-36].
MAJOR DISCUSSION POINT
Indigenous Development and Industry Collaboration
AGREED WITH
Industry leaders (referenced)
Argument 10
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
EXPLANATION
The armed forces are actively seeking partnerships with startups and innovators to further advance AI integration, emphasizing openness to collaboration for a self‑reliant transformation. Such partnerships aim to accelerate development and deployment of AI capabilities.
EVIDENCE
He mentions collaboration with industry, leaders and startups throughout the summit and states openness to further collaboration with many startups and innovators to build AI further [36-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker highlights openness to industry and startup partnerships for AI development, a point reiterated in the keynote and related commentary on collaborative AI ecosystems [S10] and additional remarks about ongoing collaboration initiatives [S16].
MAJOR DISCUSSION POINT
Indigenous Development and Industry Collaboration
AGREED WITH
Industry leaders (referenced)
Argument 11
Command staff need education on algorithms, AI‑enabled systems, and rapid decision‑making (Speaker 1)
EXPLANATION
Modern commanders must be trained to understand algorithms, AI‑enabled systems, and the rapid decision cycles they create. This education is crucial for effective integration of AI into military operations.
EVIDENCE
He notes that commanders and staff must be trained for fast-evolving battlefields, understanding algorithms and AI-enabled command systems and knowing how to employ them, and that the Indian Army is taking steps to train its command staff in this direction [55-56].
MAJOR DISCUSSION POINT
Training and Capacity Building for Commanders
AGREED WITH
Defence training authorities (referenced)
Argument 12
Indian Army is implementing training programmes for AI‑driven operations (Speaker 1)
EXPLANATION
The Indian Army is instituting specific training programmes to equip its personnel with the skills needed for AI‑driven operational environments. These programmes aim to build competence in using AI tools for decision support.
EVIDENCE
The same passage on training staff about algorithms and AI-enabled systems indicates that the Indian Army is taking steps to train its command staff for AI-driven operations [55-56].
MAJOR DISCUSSION POINT
Training and Capacity Building for Commanders
AGREED WITH
Defence training authorities (referenced)
Argument 13
India’s AI governance guidelines address generative AI risks and set safety standards (Speaker 1)
EXPLANATION
India has introduced AI governance guidelines that specifically address the risks of generative AI and establish safety standards for AI deployment. These guidelines aim to ensure responsible development and use of AI technologies.
EVIDENCE
He references the launch of the India AI governance guidelines, which define generative AI systems and their unintended consequences, providing a framework for safety and responsible use [57].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Argument 14
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
EXPLANATION
The United Nations is currently debating frameworks for “meaningful human control” over autonomous weapons and mechanisms for accountability, reflecting global concern over AI militarization. These discussions aim to shape international norms and safeguards.
EVIDENCE
He notes that under the UN framework discussions are underway around meaningful human control and accountability, and that the UN Secretary-General highlighted these initiatives recently [58-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The existence of UN-level debates on meaningful human control and accountability for autonomous weapons is documented in analyses of autonomous weapon regulation and legal-ethical imperatives [S15].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Argument 15
New legal conventions, akin to Geneva and land‑mine treaties, are required for AI weapons (Speaker 1)
EXPLANATION
Just as treaties like the Geneva Convention regulate conventional weapons, new international legal instruments are needed to govern AI‑based weapons, ensuring ethical use and strategic stability. Such conventions would embed accountability and human control into AI weapon systems.
EVIDENCE
He draws a parallel with historical weapons treaties (Geneva, land-mine, NBC weapons) and argues that similar governance frameworks and legal provisions are needed for AI-based systems, referencing ongoing UN discussions and the need for meaningful human control [56][57-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for new international legal instruments governing AI weapons parallels existing treaties and is supported by UN-focused discussions on autonomous weapon regulation [S15] as well as historical perspectives on legal accountability drawn from ancient codes [S14].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Agreements
Agreement Points
Human oversight and moral/legal accountability must remain with humans for AI‑enabled military decisions
Speakers: Speaker 1, Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
Institutionalizing human control in law is essential (Speaker 1)
Certain critical decisions must never be delegated to AI (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
Speaker 1 stresses that AI can only inform and recommend; ultimate judgment, moral and legal responsibility must stay with human commanders and be codified in law, a view echoed by the Prime Minister’s call for guardrails and the UN Secretary-General’s emphasis on meaningful human control [25][41-45][26][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
This stance reflects International Humanitarian Law principles and the UN’s emphasis on human control over lethal autonomous systems, as highlighted by the CCW experts and UN Security Council discussions on AI in warfare [S53][S57][S54][S52].
AI is a force multiplier that fundamentally reshapes the modern battlespace
Speakers: Speaker 1, Other eminent speakers (referenced)
Historical shift from paper maps to AI‑fused digital battle‑space (Speaker 1)
Modern AI provides real‑time, multi‑sensor situational awareness and acts as a force multiplier (Speaker 1)
Speaker 1 describes the evolution from manual paper maps to AI-driven digital displays that fuse sensor data instantly, positioning AI as a decisive force multiplier in today’s multi-domain operations [6-9][31-32].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council briefings have noted AI’s transformative impact on combat operations, describing it as a force multiplier that changes the nature of war, exemplified by projects like the Royal Navy’s “StormCloud” integration effort [S41][S56][S45].
Transparency of AI systems (turning the “black box” into a “glass box”) is required for trust and sovereignty
Speakers: Speaker 1, UN Secretary‑General (referenced)
Transparency: AI “black box” must become a “glass box” showing data sources and training (Speaker 1)
UN discussions on meaningful human control and accountability (Speaker 1)
Speaker 1 calls for commanders to know the data and training behind AI outputs, urging a shift from opaque black-box models to transparent “glass-box” systems, a demand that aligns with UN calls for accountability and oversight [52-55][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
UNESCO’s AI ethics recommendations call for traceable, explainable AI (“glass-box”) to ensure trust and sovereign decision-making, echoing verification challenges discussed in AI governance forums [S44][S47][S36][S48].
AI‑enabled weapon systems must be tested under realistic, contested battlefield conditions
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
Speaker 1 argues that because AI systems are designed to cause harm they should be treated as weapons and evaluated in real combat environments where sensors can be obscured, a stance mirrored in UN deliberations on autonomous weapons [46-51][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Lt Gen Vipul Shinghal emphasized that AI weapons must be evaluated in contested field conditions, and policy guides stress extensive testing and simulations before deployment [S34][S35].
Collaboration with industry, startups and indigenous development is essential for a self‑reliant AI capability
Speakers: Speaker 1, Industry leaders (referenced)
Indigenous applications (ACOM AI, Sama Drishti, Shakti, Akash Teer) illustrate self‑reliant AI capability (Speaker 1)
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
Speaker 1 highlights home-grown AI tools such as ACOM AI and Shakti and stresses openness to partnerships with startups and industry to deepen India’s self-reliant AI transformation [35-38].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers underline the need for industry-government partnerships and indigenous AI development to achieve strategic autonomy, as seen in France-India collaborations and sovereign AI initiatives [S49][S50][S51][S55].
Dedicated training and capacity building for commanders on AI‑driven operations is required
Speakers: Speaker 1, Defence training authorities (referenced)
Command staff need education on algorithms, AI‑enabled systems, and rapid decision‑making (Speaker 1)
Indian Army is implementing training programmes for AI‑driven operations (Speaker 1)
Speaker 1 notes that today’s commanders must understand algorithms and AI-supported decision cycles, and that the Indian Army is already launching training programmes to build this capability [55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
UN and multilateral capacity-building programs advocate dedicated training for military leaders to responsibly integrate AI, highlighting the gap between policy and implementation [S37][S38][S39][S40].
National and international governance frameworks, akin to existing weapons treaties, are needed for AI weapons
Speakers: Speaker 1, UN Secretary‑General (referenced)
India’s AI governance guidelines address generative AI risks and set safety standards (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
New legal conventions, akin to Geneva and land‑mine treaties, are required for AI weapons (Speaker 1)
Speaker 1 points to India’s AI governance guidelines and calls for new international conventions comparable to the Geneva Convention, echoing UN efforts on meaningful human control and accountability [57][58-60][56-59].
POLICY CONTEXT (KNOWLEDGE BASE)
The CCW’s Group of Governmental Experts and UN Security Council resolutions call for treaty-like governance structures for lethal autonomous weapons, mirroring existing arms control regimes [S41][S43][S46][S54][S57].
Similar Viewpoints
Both emphasize that AI governance must be anchored in law to ensure human accountability for lethal decisions [41-45][26].
Speakers: Speaker 1, Honorable Prime Minister (referenced)
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
Institutionalizing human control in law is essential (Speaker 1)
Both stress that autonomous weapon systems require rigorous testing and human oversight to avoid unintended harm [46-51][58-60].
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on meaningful human control and accountability for autonomous weapons (Speaker 1)
Both advocate for a collaborative ecosystem that leverages domestic innovation and private‑sector partnerships to build AI capacity [35-38].
Speakers: Speaker 1, Industry leaders (referenced)
Indigenous applications illustrate self‑reliant AI capability (Speaker 1)
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
Unexpected Consensus
The military’s explicit framing of AI systems as weapons that must be regulated mirrors civilian UN concerns about autonomous weapons
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on meaningful human control and accountability for autonomous weapons (Speaker 1)
It is notable that a senior defence officer treats AI as a weapon requiring battlefield testing. This aligns closely with UN diplomatic discourse, which typically originates from civilian human-rights perspectives, and indicates cross-sector convergence on regulation [46-51][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Civilian UN forums have framed autonomous AI as weapons requiring regulation, a view echoed by military stakeholders and reflected in CCW deliberations [S41][S45][S53][S54].
Overall Assessment

Across the keynote and referenced remarks, there is strong convergence on six core themes: (1) human oversight and legal accountability for AI‑driven lethal decisions; (2) AI as a decisive force multiplier; (3) transparency of AI models; (4) treating AI systems as weapons that need realistic testing; (5) fostering indigenous development through industry/start‑up collaboration; (6) building dedicated training and governance frameworks, both national and international.

The consensus is high – the speaker’s positions are repeatedly reinforced by the Prime Minister’s guard‑rail call and UN Secretary‑General’s meaningful‑human‑control agenda. This broad alignment suggests that policy formulation on military AI in India is likely to proceed within a well‑defined legal‑ethical framework, facilitating coordinated national‑level implementation and international cooperation.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only a single speaker (Speaker 1). All arguments presented are from the same perspective, and no contrasting viewpoints or counter‑arguments from other participants are recorded. Consequently, there are no identifiable points of disagreement, partial agreement, or unexpected disagreement within the provided material.

None – the discussion reflects a unified stance by Speaker 1 on AI in the military, its benefits, risks, and governance. The absence of dissent means there are no implications for negotiation or policy compromise within this excerpt.

Takeaways
Key takeaways
AI has transformed military operations from paper‑map, manual processes to real‑time, multi‑sensor, AI‑fused digital battle spaces, acting as a force multiplier.
Human judgment and moral/legal accountability must remain with commanders; AI can recommend but cannot replace decision‑making, especially in life‑critical contexts.
Responsible AI deployment requires: (a) institutionalized human control codified in law, (b) treating AI‑enabled systems as weapons that must be rigorously tested in contested conditions, (c) transparency of data and models (turning the “black box” into a “glass box”), and (d) clear limits on which decisions may never be delegated to AI.
India is pursuing an indigenous, self‑reliant AI ecosystem (e.g., ACOM AI, Sama Drishti, Shakti, Akash Teer) and is actively seeking collaboration with industry and startups.
Training and capacity‑building for commanders and staff on algorithms, AI‑enabled tools, and rapid decision cycles are being implemented.
National AI governance guidelines and emerging international discussions (UN “meaningful human control”, potential new legal conventions) are shaping the policy environment for military AI.
Resolutions and action items
The Indian Armed Forces will equip the military with AI‑enabled, data‑centric capabilities as part of the declared “year of networking and data centricity”.
Formal commitment to collaborate with startups, industry partners, and academic innovators to further develop indigenous AI applications.
Institutionalize human‑in‑the‑loop control in law and doctrine, ensuring accountability remains with humans.
Develop and execute training programmes for command staff on AI algorithms, decision support, and rapid OODA cycles.
Implement testing regimes for AI systems under contested battlefield conditions to verify reliability before field deployment.
Advance India’s AI governance framework to address generative AI risks and promote transparency (glass‑box models).
Engage in international forums (UN, etc.) to advocate for legal conventions governing autonomous weapons and meaningful human control.
Unresolved issues
Specific criteria or a definitive list of decisions that must never be delegated to AI have not been detailed.
Concrete standards and metrics for transforming AI “black boxes” into “glass boxes” (e.g., data provenance, model explainability) remain undefined.
The timeline and process for establishing new international legal conventions on autonomous weapons are still uncertain.
How to harmonize rapid AI‑driven decision cycles with existing command‑and‑control procedures and rules of engagement needs further clarification.
Mechanisms for verifying AI performance in chaotic, sensor‑degraded environments have not been fully specified.
Suggested compromises
Allow AI to provide high‑confidence recommendations while mandating a human pause or verification step before lethal action.
Treat AI‑enabled systems as weapons subject to rigorous testing, yet permit limited autonomous functions under strict human oversight.
Encourage open collaboration with private innovators while imposing responsible‑development safeguards and transparency requirements.
Thought Provoking Comments
He contrasts his first war game 35 years ago—using paper maps, slow information flow, and deliberate decision‑making—with today’s operation rooms dominated by massive digital displays, real‑time sensor streams, and AI‑fused intelligence that compresses decision windows to seconds.
This comparison vividly illustrates how technology has transformed the tempo of warfare, shifting the core challenge from gathering information to making ultra‑rapid decisions, thereby setting the stage for the ethical and operational dilemmas that follow.
It serves as a turning point that moves the speech from a historical anecdote to a discussion of present‑day pressures, prompting the audience to reconsider the implications of speed and AI on command authority.
Speaker: Speaker 1
In the operational scenario, he recounts how the commander, faced with a high‑confidence AI recommendation to strike, pauses and asks, “What does the machine not know?”, discovering a civilian evacuation not yet reflected in the data.
The question encapsulates the core tension between algorithmic confidence and human situational awareness, highlighting that AI’s blind spots can have life‑or‑death consequences.
This anecdote pivots the conversation from technology’s capabilities to its limitations, reinforcing the need for human judgment and setting up the subsequent argument for mandatory guardrails.
Speaker: Speaker 1
“AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them.”
It crystallises the ethical premise of the entire address: responsibility cannot be delegated to machines, no matter how accurate they appear.
This statement deepens the analysis by framing AI as a tool rather than an autonomous decision‑maker, influencing the audience to view subsequent policy recommendations through a lens of human accountability.
Speaker: Speaker 1
First of four governance points: “Decisions that must not be delegated to AI must always remain human. Human control has to be institutionalized into law and moral accountability.”
It moves the discussion from anecdotal illustration to concrete policy, challenging any assumption that technological progress alone can solve ethical concerns.
Introduces a new topic—legal institutionalisation of human‑in‑the‑loop—prompting listeners to think about legislative frameworks rather than just technical safeguards.
Speaker: Speaker 1
Second point: “AI‑enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as a software. They must be evaluated and tested in contested field conditions.”
Re‑characterising AI systems as weapons reframes the risk assessment paradigm, emphasizing that performance in controlled labs is insufficient for battlefield deployment.
Shifts the tone from abstract safety to concrete operational testing, urging defense R&D and procurement to adopt rigorous, realistic validation processes.
Speaker: Speaker 1
Third point: “The black box of data must become a glass box – commanders must know what data is used and how it has been trained.”
Calls for transparency in AI models, directly confronting the opacity problem that hampers trust and accountability.
Introduces the concept of explainability as a non‑negotiable requirement, steering the conversation toward technical standards for auditability.
Speaker: Speaker 1
Fourth point: “Commanders and staff need to be trained about this fast‑evolving battlefield and be able to integrate algorithms, command systems, and know how to go forward.”
Highlights the human capacity gap, suggesting that technology adoption without corresponding skill development is futile.
Expands the discussion to education and doctrine, indicating that AI integration is as much a cultural shift as a technological one.
Speaker: Speaker 1
He links military AI concerns to national policy: “These concerns about AI safety and governance are not confined to the military domain alone… the launch of the India AI Governance Guidelines… defines AI systems as generative and therefore having unintended consequences.”
Broadens the scope from defense to civilian governance, showing that the same ethical principles apply across sectors and that India is already shaping a regulatory framework.
Creates a turning point that moves the audience from a purely defense‑focused mindset to a holistic view of AI governance, encouraging cross‑sector collaboration.
Speaker: Speaker 1
Historical analogy: “The rules governing the use of NBC weapons, the Geneva Convention, the Convention on Landmines… have stood the test of time. In a similar manner, a set of governance frameworks and legal provisions need to be evolved about use of AI‑based systems and autonomous weapons.”
Draws a parallel between established international humanitarian law and emerging AI weapon norms, challenging the belief that AI is a wholly new ethical frontier.
Strengthens the argument for immediate international dialogue, positioning AI governance as a continuation of existing legal traditions rather than an unprecedented challenge.
Speaker: Speaker 1
Closing claim: “India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint… has both the capacity and the credibility to lead this conversation.”
Positions India not just as a consumer of AI technology but as a moral leader, inviting other nations to look to India for guidance on responsible AI in warfare.
Ends the speech on a strategic note, shaping the overall narrative that India’s experience and values can influence global AI policy, thereby reinforcing the earlier calls for governance and collaboration.
Speaker: Speaker 1
Overall Assessment

The keynote’s most impactful moments arise from a series of deliberate pivots: from a nostalgic recount of analog war‑gaming to a vivid illustration of AI‑driven decision pressure; from a concrete battlefield vignette that exposes AI’s blind spots to a principled declaration that only humans can bear moral responsibility; and finally from technical safeguards to broader legal and geopolitical frameworks. Each of these comments introduced a fresh layer of analysis—historical, operational, ethical, technical, and strategic—forcing the audience to continually re‑evaluate the role of AI in warfare. Collectively, they transformed a simple status‑update into a compelling call for transparent, human‑centric, and internationally coordinated AI governance, positioning India as both a practitioner and a potential global standard‑setter.

Follow-up Questions
What does the machine not know?
Highlights the need for human judgment to identify missing or contextual information that AI may overlook, crucial for preventing civilian casualties.
Speaker: Senior commander
Which decisions must never be delegated to AI and should always remain under human control?
Defines the boundaries of AI use in military operations to ensure accountability and moral responsibility.
Speaker: Speaker 1
How can AI‑enabled weapon systems be evaluated and tested under contested battlefield conditions to ensure reliability?
Ensures that systems perform robustly in real‑world chaotic environments rather than only in controlled labs, reducing the risk of failure and unintended harm.
Speaker: Speaker 1
How can transparency be built into AI systems so that the ‘black box’ becomes a ‘glass box’, revealing data sources and training methods?
Promotes trust, sovereignty, and accountability by making the data and algorithms understandable to commanders.
Speaker: Speaker 1
What training programs and curricula are needed to equip commanders and staff with the skills to integrate and oversee AI algorithms in operations?
Prepares military personnel to effectively use, interpret, and supervise AI‑driven decision support tools.
Speaker: Speaker 1
What governance frameworks and legal provisions are required for the use of AI‑based autonomous weapons, both nationally and internationally?
Addresses ethical, legal, and strategic stability concerns by establishing clear rules and accountability mechanisms.
Speaker: Speaker 1
How can meaningful human control be defined, measured, and enforced in AI‑enabled military systems?
Ensures that humans retain decisive authority, preventing unintended autonomous actions that could destabilize conflicts.
Speaker: Speaker 1
What guardrails and safety mechanisms should be built into AI‑enabled models to prevent unintended consequences?
Mitigates risks associated with generative AI and other advanced models, aligning with national AI safety priorities.
Speaker: Speaker 1
How can the Indian Armed Forces effectively collaborate with startups and innovators while ensuring responsible AI development?
Leverages private‑sector innovation while maintaining security, ethical standards, and control over critical technologies.
Speaker: Speaker 1
What are the specific performance metrics and validation protocols for AI applications such as ACOM AI as a Service, Sama Drishti, Shakti, and Akash Teer in operational settings?
Provides measurable criteria to assess effectiveness and reliability of indigenous AI tools before deployment.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.