India unveils AI incident reporting guidelines for critical infrastructure
As India positions itself as a global leader in AI safety, a new framework could quietly reshape how nations prepare for algorithmic failures in critical systems.

India is developing AI incident reporting guidelines under which companies, developers, and public institutions would report AI-related issues affecting critical infrastructure sectors such as telecommunications, power, and energy. The government aims to create a centralised database to record and classify incidents such as system failures, unexpected results, or harmful impacts caused by AI.
The initiative aims to help policymakers and stakeholders better understand and manage the risks AI poses to vital services while promoting transparency and accountability. The proposed guidelines would require detailed reporting of each incident, including the AI application involved, its cause, location, affected sector, and severity of harm.
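Purely as an illustration, the sketch below shows how such an incident record might be structured around the fields the draft reportedly asks for; the field names, severity scale, and format are assumptions for this example, not the TEC's actual schema.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json


class Severity(Enum):
    """Illustrative severity scale; the draft's actual classification is not public."""
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class AIIncidentReport:
    """Hypothetical incident record mirroring the fields listed in the draft guidelines."""
    ai_application: str   # the AI system or application involved
    cause: str            # suspected cause (e.g. model error, data drift)
    location: str         # where the incident occurred
    affected_sector: str  # e.g. telecommunications, power, energy
    severity: Severity    # severity of harm


# Example: serialising a report for submission to a centralised incident database
report = AIIncidentReport(
    ai_application="network traffic optimiser",
    cause="unexpected model output under peak load",
    location="regional telecom exchange",
    affected_sector="telecommunications",
    severity=Severity.MODERATE,
)
print(json.dumps({**asdict(report), "severity": report.severity.value}, indent=2))
```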
The Telecommunications Engineering Centre (TEC) is spearheading the effort, focusing initially on telecom and digital infrastructure, with plans to extend the standard across other sectors and pitch it globally through the International Telecommunication Union. The framework aligns with international initiatives such as the OECD’s AI Incident Monitor and builds on government recommendations to improve oversight while fostering innovation.
Why does it matter?
The draft emphasises learning from incidents rather than penalising reporters, encouraging self-regulation to avoid excessive compliance burdens. This approach complements India's broader AI safety efforts, including the recently launched IndiaAI Safety Institute, which works on risk management, ethical frameworks, and detection tools.