Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote by Lt Gen Vipul Shinghal
20 Feb 2026 12:00h - 13:00h
Summary
The keynote address delivered by a senior Indian Army officer highlighted the rapid transformation of military decision-making through artificial intelligence (AI) ([5]). He contrasted his early career, when battlefield information was gathered on paper maps and relayed slowly by notes and telephone, with the present where digital walls display fused sensor data in real time ([6-8]). Over the past two decades, the pace of intelligence has accelerated, with AI instantly analysing multiple feeds and presenting a dynamic picture that compresses decision cycles to seconds ([9]). He illustrated this shift with a high-tempo operation in which an AI system recommended an immediate strike, but the commander paused to ask what the machine did not know ([10-13]). The pause revealed a civilian evacuation not yet captured by the sensors, preventing a mistaken attack and saving lives ([19-23]). He used the episode to assert that AI can advise and accelerate decisions, yet only humans can exercise judgment and bear responsibility ([25]).

Emphasising national policy, he noted that recent statements by the Prime Minister and other leaders call for mandatory guardrails and safety measures for AI, especially in the armed forces ([26-28]). The Indian Armed Forces view AI as a force multiplier across intelligence fusion, surveillance, logistics and other domains, and have declared the current year as the “year of networking and data-centricity” ([30-33]). Indigenous platforms such as ACOM AI-as-a-Service, Sama Drishti, Shakti and Akash Teer have been developed in partnership with industry and startups to support this transformation ([35-38]).

He outlined four governance principles: critical decisions must remain human-controlled and legally accountable; AI systems are effectively weapons and must be tested in contested conditions; transparency requires a “glass box” of data provenance; and commanders need dedicated training on AI-enabled battlefields ([41-56]).
He called for international governance frameworks, citing ongoing UN discussions on meaningful human control and the need for legal provisions governing autonomous weapons ([57-60]). Positioning India as both a major military power and an emerging AI hub, he argued that the nation has the capacity and credibility to lead the development of ethical AI guidelines, echoing the Prime Minister’s “Manav Vision for AI” ([61-63]). The address concluded that responsible AI integration will reshape warfare while preserving human judgment and ethical restraint, underscoring its strategic significance for national security ([25][41-56]).
Keypoints
– Rapid transformation of military decision-making through AI – The speaker contrasts the early days of paper maps and slow information flow with today’s “massive digital display” that fuses sensor data and AI in real time, compressing decision windows to seconds [6-9][9-12].
– Human judgment remains essential despite AI recommendations – A senior commander pauses a machine-generated strike recommendation, asks “What does the machine not know?”, discovers an ongoing civilian evacuation, and averts civilian casualties, illustrating that AI can advise but only humans can exercise moral judgment and bear responsibility [13-24][25].
– Mandate for responsible AI development, testing, and accountability – The speaker stresses that AI systems in the armed forces must be treated as weapons, subject to rigorous field testing, legal and moral accountability, transparency (“glass box” data), and continuous training of commanders [26-44][45-55][56-60].
– India’s strategic push for indigenous, data-centric AI and global leadership in AI governance – The Indian Armed Forces are adopting AI-enabled platforms (e.g., ACOM AI, Sama Drishti, Shakti, Akash Teer), collaborating with industry and startups, and aligning with national AI governance guidelines to shape international norms on autonomous weapons [31-38][39-43][55-57][61-63].
Overall purpose/goal
The address aims to showcase how the Indian Army is integrating AI to enhance battlefield effectiveness while underscoring the non-negotiable need for human control, ethical safeguards, and robust governance. It also positions India as a proactive leader in developing responsible AI frameworks for both national security and global policy.
Overall tone
The speaker begins with a formal, proud tone reflecting on past experiences and technological progress. The narrative then shifts to a cautionary, reflective tone when discussing the limits of AI and the necessity of human judgment. This is followed by a constructive, collaborative tone emphasizing partnerships and responsible development, and concludes with an aspirational, confident tone about India’s capacity to lead international AI governance. The tone evolves from retrospective admiration to prudent warning, then to proactive optimism.
Speakers
– Speaker 1
– Role/Title: Keynote speaker representing the Indian Army and Indian Armed Forces (senior military officer)
– Area of expertise: Military applications of AI, defence strategy, AI governance
The speaker opened with a formal greeting to a diverse audience of industry leaders, academics, AI innovators, uniformed colleagues and students, delivering the keynote on behalf of the Indian Army and the broader Indian Armed Forces [4-5].
He recalled his first war-game as a young lieutenant thirty-five years ago, when battlefield information was limited to large paper maps, hand-written notes and slow telephone reports that required manual colour-coding before a commander could deliberate on a decision [6-12].
Contrasting that era, he described today’s “Star-Wars” operation rooms, where massive digital displays ingest continuous sensor streams, fuse the data instantly and hand it to AI for rapid analysis, producing a living, dynamic picture of the battle space. This transformation has compressed the OODA (Observe-Orient-Decide-Act) cycle to a matter of seconds, leaving little room for hesitation [13-22].
To illustrate the implications of such speed, he narrated a high-tempo scenario: an AI system generated a high-confidence recommendation to strike a target within a narrow decision window. The senior commander paused and asked, “What does the machine not know?” [13-17]. The pause revealed that a civilian evacuation had just begun and was not yet reflected in the sensor data, meaning the algorithm was mis-identifying civilians as enemy troops [18-22]. By exercising judgement and delaying the strike, the commander spared innocent lives while still achieving the mission objective [23-24]. This episode underscored his central thesis that AI can inform, accelerate and recommend decisions, but only humans can exercise moral judgement and bear responsibility [25].
He then outlined four governance principles for AI-enabled systems. First, decisions that must never be delegated to AI should remain under human control, with legal and moral accountability institutionalised [41-44]. Second, AI-enabled systems, being designed to cause harm, must be treated as weapons and rigorously tested in contested battlefield conditions rather than controlled labs [46-51]. Third, transparency is essential: commanders must know the data sources and training processes behind AI outputs, converting the “black box” into a “glass box” [52-55]. Fourth, continuous training of commanders and staff is required so they can integrate algorithms, command AI-enabled systems and retain decisive human judgement [56]. These principles collectively reinforce the view that AI can augment but not replace human agency [25][41-56].
The speaker also highlighted the recent launch of the India AI Governance Guidelines and the daily declaration made at the summit, calling them a “path-breaking step” that recognises generative AI systems can produce unintended consequences and that these lessons must inform military planning. He stressed that AI safety and governance are now integral to national policy, not merely a defence-only issue [45-48].
Linking operational insight to national statements, he noted that the Prime Minister and other senior leaders have called for mandatory guardrails and safety measures for AI-enabled models, especially in the armed forces where the stakes are exceptionally high [26-28]. The Indian Armed Forces operate in a uniquely complex security environment that spans contested borders, multiple domains, dense populations and high-intensity escalation [29-30].
He described AI as a force multiplier across intelligence fusion, surveillance, decision support, maintenance and logistics, and announced that this year has been declared the “year of networking and data-centricity” to accelerate the transition to data-driven operations [31-34]. Indigenous platforms such as ACOM AI-as-a-Service, the battlefield situational-awareness software Sama Drishti, and the sensor-shooter fusion systems Shakti and Akash Teer have been developed through collaboration with industry leaders and startups, with openness to further partnerships for a self-reliant transformation [35-40].
He noted that the UN Secretary-General also addressed AI-related initiatives at the summit, underscoring the global relevance of meaningful human control and accountability in autonomous weapons discussions [58-60].
Finally, he argued that India, as a major military power, a growing AI hub and a civilisation rooted in ethical restraint, embodied in the concepts of Shakti (force) and Dharma (rightness), has both the capacity and credibility to lead the formulation of global AI governance frameworks, echoing the Prime Minister’s “Manav Vision for AI” announced at the summit [61-63]. In closing, he emphasized that while AI reshapes military decision-making into a rapid, data-rich process, the preservation of human judgement, robust legal safeguards, transparency, rigorous testing and dedicated training are non-negotiable pillars for ethical responsibility and strategic stability [25][41-56][57-60].
Firstly, let me just say this, that, you know, I know I’m the last speaker of a long day. So I’ll do this quickly. I’ll come to the essentials. Distinguished guests, leaders of industry and academia, AI innovators, my colleagues in uniform, who are also innovators, students, ladies and gentlemen, a very good evening to you all. It’s a privilege to be speaking here, delivering the keynote address, representing the Indian Army and the Indian Armed Forces. You know, 35 years ago, when I joined the Army as a young lieutenant, my first war game unfolded in a room dominated by large paper maps. Information arrived slowly: handed in notes, verbal updates, reports from the field taken on telephone. We pieced that picture together, physically marked it on the map using color-coded pins and flags, and presented it to the commander, who then took a decision deliberately and with reflection, fully aware that the adversary was operating within similar timelines.
Twenty years later, the rhythm began to change. Intelligence became sharper and faster. Operation rooms had a few screens displaying maps; presentations moved to PowerPoint. The volume of information increased and timelines got compressed, but there was still space to pause and breathe; the OODA cycle could still breathe. Today, when I walk into an operation room, the difference is stark. It’s like Star Wars coming to life. A massive digital display dominates the wall. Inputs stream in continuously from multiple sensors; intelligence is fused almost instantly and analyzed by AI, presenting a living, dynamic picture of the battle space. Some of the work we did as left-handers is now automated, and the commander knows that the adversary is seeing much the same picture about us at much the same speed. The pressure is not anymore about awareness; it is about decision. Seconds matter. Hesitation has consequences. It is in this environment of speed, uncertainty and time compression that I want to transport you to an operational-stage scenario. During a high-tempo military operation, a senior commander was presented with a machine-generated recommendation, based on multiple sensor feeds and AI analysis, to engage a target immediately.
The system was confident. The probability score of the machine was high. The decision window was measured in seconds. But the commander paused. Not because he didn’t trust the technology. His experience told him that something was amiss. He asked a simple question. What does the machine not know? The pause revealed something the algorithm could not see. A civilian evacuation had just begun minutes earlier, not yet reflected in the data. The machine saw the movement as that of enemy troops, whereas they were civilians. It is even possible that troops were mixed with the civilians. However, the commander exercised judgment and restraint. The strike was delayed, innocent lives were spared, and the mission was still achieved. This moment captures a fundamental truth.
AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them. Yesterday our Honorable Prime Minister and many other eminent speakers spoke of the need for guardrails and safety to be built into AI-enabled models. In the case of the military, these are not just essential but mandatory, as the stakes are much higher. The Indian Armed Forces operate in a uniquely complex security environment: across contested borders, multiple domains, dense populations and high escalation intensity. Therefore, ladies and gentlemen, let me clearly state that we in the Defence Forces are fully cognizant that artificial intelligence is fundamentally redefining the modern battle space. Its power in intelligence fusion, surveillance, decision support, maintenance, logistics and a host of other functions is a force multiplier in today’s multi-domain battle space.
In keeping with the vision of technological transformation, the Indian Armed Forces are committed to ensuring that the military is fully equipped with the necessary capabilities. The Chief of Army Staff has formally declared this year as the year of networking and data centricity, signaling a deliberate shift towards data-driven operations and AI-enabled capabilities. The evolution is powered by many indigenously built applications: ACOM AI as a Service; Sama Drishti, which is a battlefield situational awareness software; and Shakti and Akash Teer, which are sensor-and-shooter fusion systems.
All of these have been built through our collaboration with industry leaders and startups, many of the innovators who have been around at this summit for the last few days. For this self-reliant transformation, we are open to collaboration with many startups and innovators to build it further. However, we are fully cognizant that this needs to be a responsible development of AI. Allow me to reflect on four points in this regard. Firstly, decisions that must not be delegated to AI must always remain with humans. Human control has to be institutionalized in law, along with moral accountability. Accountability cannot rest with the machine. If a machine recommends a decision with 90% accuracy and the commander goes with it, and it is a wrong decision, it gives the commander a moral buffer.
But is that correct? Secondly, AI-enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as software. They must therefore be evaluated and tested in contested field conditions. Remember that the battlefield is a chaotic data environment: sensors get obscured by dust, smoke, deception and many other things. A system that performs well in controlled conditions but fails in battlefield conditions is not a force multiplier; it is a liability. Thirdly, trust and sovereignty must be built into the system. The commander taking a decision based on an AI-enabled system must know what data is being used and how it has been trained. The black box of data must become a glass box.
And fourthly, commanders and staff of today need to be trained for this fast-evolving battlefield. As I told you about the operational scenario, as it was 30 years ago and as it is today in a war game, we need to be able to integrate algorithms, be able to command systems and know how to go forward. The Indian Army is taking steps in training our commanders and staff in this direction. The next thing that I’d like to say is that, in sum, the nature of war may change, but our conscience must not. It is important to recognize that these concerns about AI safety and governance are not confined to the military domain alone; they are increasingly shaping national policy. The launch of the India AI Governance Guidelines and the daily declaration during the summit, which just happened during this summit, is a path-breaking step in this direction. This framework defines AI systems as being generative and therefore having unintended consequences, and this has lessons for us as military planners. At this stage I would also like to remind ourselves of a historical truth. I do believe in the wisdom of humanity: whenever faced with a new crisis, we have to face it. The rules governing the use of NBC weapons, the Geneva Convention on Treatment of Prisoners of War, the Convention on Use of Landmines and other such frameworks have stood the test of time and, with few exceptions, have been followed during conflicts also.
In a similar manner, a set of governance frameworks and legal provisions needs to be evolved for the use of AI-based systems and autonomous weapons. Already, under the framework of the United Nations, discussions are underway around meaningful human control and accountability. His Excellency the UN Secretary-General also talked about various such initiatives just yesterday. While consensus remains complex, the debate itself reflects a shared concern that autonomy without restraint would undermine strategic stability. India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint, understanding that Shakti, that is force, and Dharma, that is rightness, must go hand in hand, has both the capacity and the credibility to lead this conversation.
The clear and all -encompassing Manav Vision for AI, enunciated by the Honorable Prime Minister in this hall yesterday, emphasizing moral and ethical systems as well as
“The speaker delivered the keynote on behalf of the Indian Army and the broader Indian Armed Forces”
The knowledge base identifies Lt Gen Vipul Shinghal as a senior Indian Army officer representing the Indian Armed Forces as a keynote speaker [S10].
“He recalled his first war‑game as a young lieutenant thirty‑five years ago”
The source notes that Shinghal has 35 years of military service, starting as a young lieutenant, matching the timeframe mentioned in the report [S10].
“The senior commander paused and asked, “What does the machine not know?” during a high‑confidence AI recommendation”
The knowledge base records the same moment: the system was confident, the decision window was seconds, and the commander paused to ask exactly that question [S21].
“AI can inform, accelerate and recommend decisions, but only humans can exercise moral judgement and bear responsibility”
The source explicitly states that AI can inform, accelerate and recommend decisions, underscoring the need for human moral judgement [S10].
“The Indian military’s AI transformation involves collaboration with industry leaders and startups”
Additional detail: the transformation includes indigenously developed platforms such as ACOM AI, Sama Drishti, Shakti and Akash Teer, built through partnerships with industry and startups [S14] and [S64].
Speaker 1 consistently stresses that AI is a powerful force-multiplier for the Indian Armed Forces but must be governed by human judgment, legal safeguards, transparency, rigorous testing, capacity building, indigenous development, and international norms. These points cohere into a unified vision of responsible, ethically grounded AI in defence.
High internal consensus – all arguments reinforce a single, coherent stance on responsible AI. The alignment with external actors (Prime Minister, UN Secretary‑General) further strengthens the consensus, suggesting a strong, coordinated policy direction for AI governance in the defence sector.
The transcript contains remarks only from Speaker 1. All arguments presented are his own views; no other speaker is quoted or referenced with a contrasting position. Consequently, there are no identifiable points of contention, partial consensus, or surprise disagreements among multiple participants.
Very low – the discussion is essentially a single‑speaker presentation, so the implications for the broader debate are that the transcript does not reveal any inter‑speaker conflict or divergent approaches to the issues raised.
The speaker’s narrative arc—from a personal, technology‑driven war‑game memory to a concrete ethical dilemma, followed by a structured set of governance principles and a call for international leadership—served as the backbone of the discussion. Each pivotal comment introduced a new layer (operational reality, moral responsibility, legal frameworks, transparency, and geopolitical positioning) that progressively deepened the conversation. Although the transcript records only one voice, the remarks themselves acted as catalysts, steering the audience’s attention from awe at AI’s capabilities to a nuanced debate about accountability, safety, and global governance. Collectively, these insights shaped the session into a balanced examination of AI’s transformative power and the indispensable role of human judgment and institutional safeguards.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.