AI-Driven Enforcement: Better Governance through Effective Compliance & Services
20 Feb 2026 11:00h - 12:00h
Summary
The symposium, organized by the Income Tax Department, focused on how artificial intelligence can enhance enforcement, compliance and citizen services in tax administration and broader law-enforcement contexts [5][8]. Chairman Ravi Agrawal highlighted that the upcoming Income Tax Act 2025 will create a technology-driven ecosystem, reducing interpretative ambiguity and enabling AI-based algorithms to improve tax certainty and lower litigation [32][33][34]. He emphasized that AI can amplify human capability by turning large data sets into insights and automating routine tasks, and that responsible deployment requires high-quality data, secure systems, accountability and continuous training [35][36][43]. Recent AI pilots in the department have already generated significant results, with targeted nudges prompting 1.11 crore taxpayers to file updated returns and uncovering foreign assets worth ₹99,000 crore and foreign income of ₹6,500 crore [70][71].
In the industry-academia track, Project Insight 2.0 was presented as an AI-enabled compliance platform that will provide quick, accurate information to taxpayers, improve NERJ campaigns, and use large language models to tag and predict litigation risk [88][92][93][96]. LTI’s Ramesh Revuru introduced “Bharatverse,” an Indianized multi-agent platform built on pre-assembled foundational, data, knowledge, orchestration and consumption layers, enabling faster development of domain-specific agents for the CBDT [110][112][115][116]. Technical lead T. Srinivasan explained the creation of a sovereign small-language model (SLM) fine-tuned via low-rank adaptation (LoRA) on tax-specific data, integrated with vector databases and ontologies to support multilingual, context-aware chatbots and automated compliance checks [129][132-135][141-144][148-152].
Professor Mausam broadened the discussion to law-enforcement AI, citing use cases such as facial-recognition-based crime reduction, satellite-imagery monitoring and multimodal analytics for fraud detection, while warning that bias, over-triggering and loss of human oversight must be mitigated [226-233][258-263][291-303]. Martin Wilcox described next-generation AI risk analytics, stressing the need for scalable graph analytics and multimodal data processing, and illustrated how "Bring Your Own Model" and in-warehouse inference can accelerate credit-risk scoring by 25-fold [324-327][329-336][344-347]. In the regulatory segment, RBI's Suvendu Pati presented "MuleHunter.ai," an AI system deployed across 26 banks that identifies mule-account patterns with accuracy in the 80-90% range and is moving toward real-time transaction scoring [398-404][408-410][438-440].
Police officer Ram Ganesh demonstrated a co-pilot that ingests FIRs, generates compliant investigative pathways, automates legal requests and leverages open-source intelligence, and has been used in over 467 cases in Maharashtra [465-470][474-480][482-485]. SEBI's Avneesh Pandey outlined four AI tools (RIDAR for ad compliance, Sudarshan for multimodal fraud detection, Infomerge for data consolidation, and a cyber-resilience framework), showcasing how AI supports proactive regulation and audit automation [524-531][532-537][538-545][546-549]. Shashi Bhushan Shukla summarized the department's AI journey, noting the scale of taxpayer data (PAN for 80 crore people, 9 crore ITRs, 650 crore SFT fields), the Nudge initiative's success in eliciting ₹6,540 crore in additional income, and plans for real-time, AI-driven pre-filing assistance [563-571][580-587][608-612].
Justice R. Mahadevan concluded that AI has moved from aspirational to operational across tax administration and law enforcement, emphasizing the need for human-in-the-loop safeguards, explainability and ethical governance [617-629]. The symposium therefore underscored AI’s potential to transform compliance, risk assessment and investigative workflows while insisting on responsible, transparent implementation to maintain public trust [35][43][617-629].
Keypoints
Major discussion points
– AI as a strategic enabler for tax administration and enforcement – The Chairman highlighted that the upcoming Income Tax Act 2025 will create a “technology-driven ecosystem” where AI reduces interpretative ambiguity, improves tax certainty and supports proactive enforcement through nudges and risk-based analytics [32-35][70-74].
– Industry-led AI solutions for taxpayer services (Project Insight 2.0 and the “Blueverse/Bharatverse” platform) – Commissioners and LTI representatives described how AI will deliver end-to-end taxpayer assistance, litigation-risk scoring, multi-agent platforms built on sovereign small-language models, and automated workflow orchestration [88-97][110-118][129-156][162-170].
– Broader law-enforcement applications of AI (multimodal analytics, predictive policing, and human-in-the-loop safeguards) – Professor Mausam outlined use-cases ranging from CCTV-based crime reduction and satellite-based surveillance to financial-data anomaly detection, stressing the need for explainability, bias mitigation and civil-liberty protections [210-224][226-236][240-254][291-304].
– Regulatory bodies deploying AI at scale (RBI’s “Mule Hunter” and AI governance framework) – The RBI’s chief explained the seven AI-governance sutras, the AI sandbox concept, and the Mule Hunter system that flags suspicious banking patterns with >90 % accuracy, illustrating concrete results and future real-time transaction scoring [369-384][389-410][416-424][428-438].
– AI for compliance and cyber-security in financial markets (SEBI initiatives) – SEBI’s executive described tools such as RIDAR for ad-compliance, Sudarshan for multimodal fraud detection, and Infomerge for investigative data integration, emphasizing democratized AI development and continuous monitoring [524-539][543-549].
Overall purpose / goal
The symposium was convened to examine how artificial intelligence can be operationalized across the Income Tax Department and other regulatory agencies to enable easier compliance, reduce disputes, and build trust-based governance ([8], [30-31]). Speakers from government, industry, and academia presented concrete projects, policy frameworks, and technical architectures aimed at turning AI from a conceptual promise into a practical, scalable tool for revenue collection, enforcement efficiency, and citizen-centric services.
Overall tone and its evolution
– The opening remarks set a formal and forward-looking tone, emphasizing the paradigm shift of the new tax law and the strategic importance of AI ([32-35]).
– As industry and academic presenters took the stage, the tone became optimistic and demonstrative, showcasing rapid prototyping, “building agents without code” and tangible performance gains ([54-63], [110-118], [129-156]).
– Professor Mausam introduced a broader, visionary yet cautionary tone, highlighting vast opportunities while warning about bias, over-triggering, and the need for human oversight ([291-304]).
– RBI and SEBI speakers adopted a pragmatic and results-focused tone, reporting concrete accuracy metrics, deployment numbers, and governance safeguards ([389-410], [524-539]).
– The closing remarks returned to a celebratory and collaborative tone, reaffirming AI’s operational status, the collective progress made, and gratitude to all participants ([617-644]).
Overall, the discussion moved from high-level policy framing, through technical showcases, to concrete regulatory deployments, maintaining a consistently constructive tone while interspersing measured cautions about ethics and accountability.
Speakers
– Amandeep Dhanoa
– Role/Title: Indian Revenue Service Officer, 2018 batch; Moderator of the symposium
– Shri Ravi Agrawal
– Role/Title: Chairman, Central Board of Direct Taxes (CBDT); Chief Executive Officer of the Department of Income Taxes
– Affiliation: Income Tax Department, Government of India [S18]
– Abhishek Kumar
– Role/Title: Commissioner of Income Tax, Insights (Project Insight 2.0)
– Ramesh Revuru
– Role/Title: Global Head of Engineering, LTI Mindtree [S11]
– T. Srinivasan
– Role/Title: Technology Lead, LTI Mindtree [S13]
– Professor Mausam
– Role/Title: Professor, AI researcher, founding head of YALI School of AI, India University [S20]
– Martin Wilcox
– Role/Title: Senior Vice President, Teradata; Global leader in AI-driven data analytics [S22]
– Shashi Bhushan Shukla
– Role/Title: Principal Commissioner, CBDT; Key architect behind Data Analytics Cell and Saksham Nudge Initiative [S23]
– Justice R. Mahadevan
– Role/Title: Joint Commissioner of Income Tax; Delivered the vote of thanks
– Suvendu Pati
– Role/Title: Chief General Manager & Head of FinTech, Reserve Bank of India (RBI) [S4][S5]
– Ram Ganesh
– Role/Title: Cyber security expert and Founder, CyberEye [S6][S7]
– Avneesh Pandey
– Role/Title: Executive Director, SEBI; National voice on technology strategy and cybersecurity governance [S21]
Additional speakers:
– Shri Shirdi Anand Jha – Principal Chief Commissioner of Income Tax, Delhi (mentioned in the opening remarks)
– Harsha Poddar – Indian Police Service (IPS) officer; Award-winning innovator in AI-driven policing (introduced in Category 2)
The symposium opened with Amandeep Dhanoa, an Indian Revenue Service officer of the 2018 batch, welcoming the distinguished guests, colleagues and speakers and stating that the Income Tax Department had convened the event to explore how artificial intelligence (AI) can improve governance through more effective compliance and services [1-3]. He introduced the Honourable Chairman of the Central Board of Direct Taxes, Shri Ravi Agrawal, a senior IRS officer who has overseen the department’s digital transformation, including the Central Processing Centre [10-18]. After a brief group-photo arrangement, Dhanoa invited the Chairman to set the tone with his opening remarks [21-24].
Chairman Agrawal linked the symposium theme "AI-driven enforcement for better governance" to the forthcoming Income Tax Act 2025, describing a technology-driven ecosystem that simplifies language, reduces interpretative ambiguity and embeds AI-based algorithms to enhance tax certainty and lower litigation [32-34]. He emphasized that AI amplifies human capability by turning vast data into insights, automating routine work and enabling faster, smarter decisions at scale [35-36]. He outlined the prerequisites for responsible AI deployment: high-quality shareable data, secure systems, clear accountability, strong safeguards and continuous training [43-44]. He warned that AI must be driven by humans rather than the reverse, underscoring the need to build capacity in human resources [48-51][52-53]. An anecdote illustrated AI's speed: with the help of his son, he generated functional training-module code in five to six hours, a task that would normally take months [54-63]. He concluded by reporting early AI pilots: targeted nudges prompted 1.11 crore taxpayers to file updated returns, generating over ₹8,800 crore in revenue, while prompts on foreign-asset disclosures led 1.57 lakh taxpayers to reveal ₹99,000 crore in assets, yielding an additional ₹6,540 crore in tax, and 6.96 lakh taxpayers withdrew bogus deductions, adding ₹1,758 crore [78-80].
Category 1 – Industry & Academia
* Abhishek Kumar (Commissioner, Income Tax – Project Insight 2.0) presented AI-enabled compliance objectives, including NERJ campaigns, litigation-risk assessment, LLM-based issue tagging and case-vulnerability prediction [95-98].
* Ramesh Revuru (Global Head of Engineering, LTI Mindtree) launched the “Bharatverse” (Indianised Blueverse) agentic platform, explained its five-layer architecture, introduced the “right-action” concept for deterministic outputs and cited eight patents on AGI [99-103].
* T. Srinivasan (Technology Lead, LTI Mindtree) detailed the technical architecture of a sovereign small language model (SLM) for tax, describing LoRA-based low-cost adaptation, vector-DB/RAG retrieval, an ontology-driven knowledge graph, multilingual support and deterministic chatbots [104-108].
* Prof. Mausam (Founding Head, YALI School of AI) offered a broad view of AI in law enforcement, covering data modalities (structured, visual, speech, language) and illustrative use cases such as CCTV-based crime reduction, satellite imagery for maritime surveillance, anomaly detection in taxi behaviour and financial-crime graph analysis; he stressed critical safeguards: human-in-the-loop review, bias mitigation, data centralisation and protection of civil liberties [109-115].
* Martin Wilcox (SVP, Teradata) argued for in-warehouse graph analytics and multimodal AI, highlighted “Bring-Your-Own-Model” capability and cited case studies: a Brazilian credit-union income-estimation model delivering 25× faster inference and an Asian bank’s NPS-driven chat-analysis [116-120].
Category 2 – Regulatory & Enforcement
* Suvendu Pati (Chief General Manager, RBI – FinTech) outlined RBI's AI-governance "seven sutras" and six pillars, described the AI sandbox policy and introduced the "MuleHunter.ai" system, featuring 857 variables, bank-specific relevance, accuracy in the 80-90% range and real-time transaction scoring with cross-bank aggregation [121-126].
* Ram Ganesh (Founder, CyberEye) demonstrated an AI “co-pilot” for police investigations that ingests FIRs, generates SOP-compliant workflows, drafts legal requests, integrates telecom and forensic data, leverages open-source intelligence and combines four AI technologies (LLMs, graph-NNs, agentic AI, big-data analytics) [127-132].
* Avneesh Pandey (Executive Director, SEBI) described SEBI’s AI suite: RIDAR for ad-compliance monitoring of mutual-fund advertisements, Sudarshan for multimodal, multilingual fraud detection, Infomerge for investigation data-integration and report generation, and a cyber-resilience framework employing ensemble-model validation [133-138].
* Shashi Bhushan Shukla (Principal Commissioner, CBDT) traced the Income-Tax AI journey from 2004 to 2024, presented the Saksham Nudge 7-step strategy (data → analysis → action → communication → hand-holding → enablement), and shared outcomes: 1.57 lakh taxpayers disclosed ₹99,000 crore in foreign assets, yielding ₹6,540 crore extra tax, and 6.96 lakh taxpayers withdrew bogus deductions, adding ₹1,758 crore in tax; he also announced the international AI-misuse consortium targeting synthetic identities, deep-fakes and AI-assisted fraud [139-145].
Vote-of-Thanks – Justice R. Mahadevan
Justice R. Mahadevan concluded with a concise recap, presented as bullet points:
– Opening remarks set the vision of AI-driven enforcement for better governance.
– Category 1 highlighted industry innovations: AI-enabled compliance, sovereign LLMs, agentic platforms, multimodal analytics and graph-based insights.
– Category 2 showcased regulatory frameworks, AI-assisted investigations, market surveillance tools and large-scale nudge-based outcomes.
– All speakers underscored responsible, human-centric AI deployment and the transition from aspirational concepts to operational systems.
He thanked the organizers, speakers and participants for their contributions [146-150].
Thank you. Dear guests, colleagues, and esteemed speakers, namaskar. I, Amandeep Dhanoa, Indian Revenue Service Officer of the 2018 batch, welcome you all to this symposium by the Income Tax Department on AI-driven enforcement for better governance through effective compliance and services. Artificial intelligence today is reshaping every domain of governance, and when it comes to public services, the stakes are uniquely high. Understanding these stakes, the Income Tax Department has called upon distinguished speakers from industry, academia, and regulatory bodies to delve into the most pertinent question of the hour: how can artificial intelligence enable easier compliance, lower disputes, and strengthen trust-based governance?
Today's sessions are structured deliberately into two categories: Category 1, Industry and Academia, and Category 2, Regulatory Bodies. With that, I would like to introduce the Honourable Chairman, Central Board of Direct Taxes, Shri Ravi Agrawal. Sir is a distinguished Indian Revenue Service Officer of the 1988 batch who brings over three decades of experience in the Income Tax Department. He is the Chief Executive Officer of the Department of Income Taxes.
He has served across multiple verticals of the Income Tax Department and has played a pivotal role in key phases of the department's digital transformation, including the establishment of the Central Processing Centre. Known for his strong digital mindset and technocratic approach, he has consistently encouraged the use of data and technology to strengthen administration, enhance compliance and translate data into revenue through a prudent approach. Now, I request the Principal Chief Commissioner of Income Tax, Delhi, Shirdi Anand Jha Sir, to kindly welcome Honourable Chairman Sir with a plant. Thank you, sir. I request all the speakers to kindly come on to this side of the stage so that we may have a group photo. I request Chairman sir as well as the member madams to join for the group photo. All the speakers from Category 1 and Category 2, please join us for a group photo. Thank you, madams and sirs.
I request the speakers from Category 1 to kindly take their places on the stage, please. I request Abhishek Kumar sir, Ramesh Revuru sir, T. Srinivasan, Professor Mausam and Shri Martin Wilcox to take their seats, please. Now I request Honourable Chairman sir to kindly set the tone for this symposium with his opening remarks.
Good evening, ladies and gentlemen. I am delighted to welcome you all to today's symposium, which is under the aegis of the AI Impact Summit, "Hitae Sarvajan Sukhai", Welfare for All, Happiness for All, which is the theme. In fact, it's a very powerful theme: how do you use AI for the welfare of all and the happiness of all? That's the basic intent. Within it, the sessions today are on AI-driven enforcement. It is a privilege to join a conversation that brings together policymakers, technologists, enforcement agencies, and academia on a subject that will shape the future of governance. Income tax administration is at a critical inflection point, especially with the enactment of the new Income Tax Act 2025, along with the corresponding rules, forms and procedures, which will be effective from 1st of April 2026. It represents a paradigm shift in the philosophy, procedures and practices of direct tax administration in India. What makes it different is that going forward, it is going to be a technology-driven ecosystem that will be put in place, and that is why the role of AI becomes so important and this gathering today becomes all the more relevant. The new Income Tax Act, while simplifying the language and procedures, reduces interpretative ambiguity and brings tax certainty, and, as I mentioned, from the beginning of the year it is going to be a more rule-driven, technology-driven ecosystem.
The changes in the Act, and the language of the Act, will also help in putting in place algorithms which, through AI, will reduce and minimise the scope for differing interpretations. The positive environment created by the Income Tax Act 2025, reflected in the feedback we have received from stakeholders, and the prudent approach the tax administration has taken over the last few years provide a robust foundation for sustaining and advancing future reform measures to reduce litigation, enhance tax certainty and build trust-based voluntary compliance. AI has the potential to transform every sector by amplifying human capability, turning vast data into insights, automating mundane and routine work, and enabling faster, smarter decisions at scale.
For law enforcement, this means we can strengthen how we prevent, detect, and respond, but only if we build the right preparedness and capacity through high-quality shareable data, secure systems, clear accountability, strong safeguards, and continuous training. And here, the MANAV vision anchored by the Honourable Prime Minister becomes so important. Because ultimately, what does MANAV reflect? Moral and ethical systems, Accountable governance, National sovereignty, Accessible and inclusive AI, and Valid and legitimate systems. What do these words reflect, ultimately? They reflect that while we have in AI a very powerful tool, we need at the same time to be conscious of how we apply AI in our overall governance, for the overall welfare and happiness of people, while remaining ethical.
We are also conscious of the fact that if AI is not applied with responsibility, the results can be, you see, quite different. So we intend to adopt AI to support enforcement with clear accountability, build on secure and sovereign data foundations, ensure phased adoption with continuous training, and validate systems for fairness and lawful use. AI is developing fast. Within the income tax department and beyond, what we need to see is how we build our capacity and our resources. Because here is a solution: you have AI tools and solutions, but you need to drive them. The human has to drive the AI rather than AI driving the human. And for that to happen, you have to build that capacity in your human resources.
We need to be conscious of the pluses and the pitfalls when we are adopting AI. I would just like to share one experience that I had yesterday. I was told that through AI you can actually develop code. I didn't know about that. So yesterday I asked my son, well, how is it possible? I was proposing to develop an app for our training purposes. He told me, okay, this is how we can go about it, this is the open source, and so on and so forth. And I put in place some sort of framework for the technology for this training module.
I spent about five or six hours in the night, and what was interesting was that within those five or six hours, one was actually able to get reasonably robust and mature code and a full application which broadly takes care of the requirements of capturing training in the department. Now, why did I mention this example? Because otherwise, development of this code would have taken months, but by spending five to six hours one was able to come up with working code. Even if I say it is elementary, it is basic, you have a platform on which you can build. That is the power of AI. But can I blindly rely on it? The answer is no. I have to apply myself and see to it that, okay, you already have this platform, how do I build it up? And that is the potential of AI: it would help us not to do routine and mundane work; it would translate our effort from routine work to enhanced work, and that is where our capacity and maturity would lie.
So this is an opportunity for us in the tax department, because we are all here in that context, but also as individuals, to leverage the power of AI while being conscious of the fact that we have to drive it rather than AI driving us. Our approach needs to be practical, with use of proven applications for data integration, risk and priority scoring, anomaly detection, language support and workflow automation, with constant testing and learning, so that we stay aligned with AI advancements and do not fall behind. This is also very important because things are developing, and when we talk about Developed India 2047, how do we actually keep pace with it? You have to align yourself with the developments that are taking place.
And each of the organisations, be it in the government or outside, has to align with the others so that together as a nation we grow, put these opportunities into practice and provide to our taxpayers and stakeholders, you see, the best-in-class ecosystem and facilities. Over the past two financial years, we have applied AI in the department, though to a limited extent, and it has yielded results. As you would all be aware, targeted nudges have led to 1.11 crore taxpayers filing updated returns with a revenue impact of more than 8,800 crores. And if you talk about foreign assets, foreign assets worth about 99,000 crores and foreign income of about 6,500 crores have also been declared by taxpayers on the basis of the prompts given by the tax department.
So we are moving from intent to action. We are scaling AI-based risk assessment, strengthening digital forensics and analytics, and building AI support for taxpayer services, to make compliance easier and enforcement more precise. The discussions at the summit will help us refine our approaches, set clear governance standards, and scale what works to improve speed, consistency, and fairness in enforcement. I wish you all the best, and I am sure that the deliberations here will be really useful and enriching. Thank you.
Thank you, sir, for setting the tone and direction so clearly. Now, as we begin with Category 1, we turn to industry and academia, the two ecosystems that are shaping the intellectual and technological foundations of artificial intelligence. While government defines purpose and safeguards, it is industry that builds scalable systems and academia that pushes the frontiers of responsible and explainable AI. This segment will help us understand not only what is technologically possible, but also what is practical, scalable and sustainable for public administration and law enforcement. Now we move to Session 1, Project Insight 2.0, where AI-enabled compliance and taxpayer services are being operationalized at scale. I call upon Shri Abhishek Kumar, sir, Commissioner of Income Tax,
Insight, who has been instrumental in shaping the Income Tax Department's digital ecosystem through Project Insight and other initiatives. Joining him are Shri Ramesh Revuru, Global Head of Engineering at LTI Mindtree, and Shri Srinivasan T, Technology Lead at LTI Mindtree, who brings three decades of enterprise technology leadership. May I invite all three speakers to take us to the next phase of AI-enabled compliance. I request the speakers to be mindful of the time.
Now, coming to the last step: how does it help taxpayers in the end-to-end life cycle? The first key step is quick availability of accurate information to taxpayers; we already discussed that with AI, yes, it will be enabled. Next, our NERJ campaigns will become more effective through the infusion of AI. For the very small fraction of cases that lead to litigation, we will be able to do litigation-risk assessment through AI infusion. With the advent of LLMs, it is possible to tag issues in assessment orders, appellate orders and judicial orders. So we will be able to tag issues and link judicial orders, and as a next step we will even be able to predict the vulnerability of a case, which will ultimately result in a reduction in litigation.
All these business objectives we seek to achieve through Insight 2.0, especially through the infusion of AI. These are the business objectives; how they will be achieved, what technology is proposed and how the technical implementation will take place will be explained by Mr. Ramesh from LTI Mindtree. Thank you.
Ma'am, I got the message; I will be quick and finish in two minutes. Thank you very much, sir. Thanks for the opportunity to be here in the august presence of all the income tax officials. I want to leave you with three key messages. The first and foremost is the launch of Bharatverse. Second, I'll talk about the importance of right action. And the last part is general intelligence in the context of CBDT. So, the first one: we at LTI Mindtree now have a product offering called Blueverse. Blueverse is the agentic platform on which you can build your agents. Chairman sir spoke about how he was able to build these agents without writing code.
Think of it as the platform that provides all five layers required for any multi-agent system. The five layers are the foundational models (the LLMs), the data layer, the knowledge layer, the orchestration layer, and the consumption layer on top. All these layers are pre-built, and hence the ability of CBDT to build their multi-agentic system faster is what we bring; this is what we have implemented for our global customers. What we are launching is the Indianized version of Blueverse, which we are calling Bharatverse, and hence purpose-built for CBDT. Why is right action important? Right action: as you might know, generative AI is probabilistic in nature.
It is going to guess the next word, or generate the next word, the next pixel, or the next frame in the video. But in the context of CBDT, you cannot have something which is probabilistic; you need to move the needle to become more deterministic. Hence our ability to guarantee that right action in every condition, scenario and criterion is what right action is all about. This morning I was listening to Demis Hassabis of Google DeepMind, and he said AGI is probably five years away. While AGI, which is general, human-like intelligence, is five years away, what you need is the general intelligence of the CBDT, and hence right data and right context leading to that right action.
We have filed eight patents on creating this AGI, and AGI will be bundled with our Bharatverse that will get implemented for CBDT. I’ll ask Srinivasan in the interest of time to take us through the technical architecture, but a big thank you for the opportunity.
Thank you, Abhishek sir, and thank you, Ramesh. Quickly, I'll move very fast. The most important thing is this: he spoke about right action, but how do I do it? Everybody talks about LLMs, but it's not about deploying a simple LLM like what we are doing here. What we are building is something called an SLM, a small language model, alongside the regular LLM that will be used for the system. The purpose of this SLM is to be very much income-tax based: it is going to be for the ITD officials and for the CBDT, and we are going to ingest it with data that is your income tax laws, information very closely related to this environment. That means there is going to be data control; you are going to have quality-vetted data; everything is going to stay within the system; it is going to be secure; nothing is going to go outside at all. It is going to be what I call a sovereign LLM for this system. So when you look at it, how am I going to do that?
I cannot retrain the entire LLM fully; it is not cost-effective. So we use the concept called LoRA, which is low-rank adaptation, where you can spend just 1 to 2% of the overall training cost. What it does is freeze the base model's weights and add small low-rank matrices related to this particular data, and training happens only on those added matrices. That way you get the proper details. Now, I still need to clean it up.
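The LoRA recipe described here, freeze the pretrained weights and train only small low-rank factors, can be sketched in a few lines. This is a minimal NumPy illustration under made-up dimensions, not the actual training setup from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8          # hypothetical layer size and LoRA rank

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the adapted layer
# initially behaves exactly like the pretrained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return (W + B @ A) @ x

full_params = W.size
lora_params = A.size + B.size
# At rank 8 on a 512x512 layer this is ~3% of the parameters; at real model
# scale the fraction drops to the 1-2% range the speaker mentions.
print(f"trainable fraction: {lora_params / full_params:.1%}")
```

The key property is visible in the last lines: the adapter adds only `r * (d_in + d_out)` parameters per layer, which is why adaptation costs a small fraction of full retraining.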
So I use RAG plus a vector DB to make sure that for everything retrieved, you get the supporting details and a source citation. Then I am going to distill it: imagine a teacher model and a student model, where the student is a smaller, specialised version of the teacher, and for certain sets of tasks you use the student. And quantization improves efficiency, which matters because this has to run at nation scale, so we want to be very effective and efficient and we will quantize the models. And the last and most important thing is the ontology.
What we are going to do is look at the data structure: the sections, the precedents, the entities and the compliance rules. Everything goes inside this model; we are building it completely for this domain. Because of this, you will be able to summarize, you will have multi-language capability, and it is not your typical generic LLM: it is focused on tasks that need legal-interpretation intelligence, and it will work on that.
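The RAG-plus-vector-DB step mentioned above can be sketched minimally as follows. A real system would use a neural embedding model and an actual vector database; here a bag-of-words vector and a linear scan stand in for both, and the corpus passages are invented placeholders, not real statutory text.

```python
import math
from collections import Counter

# Tiny stand-in corpus: section tag -> passage text (illustrative only).
corpus = {
    "Section 80C": "deduction for specified investments up to the limit",
    "Section 139": "due dates and procedure for filing the return of income",
}

def embed(text):
    """Bag-of-words 'embedding' standing in for a neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    """Return the best-matching passage together with its source tag."""
    scored = [(cosine(embed(query), embed(text)), src, text)
              for src, text in corpus.items()]
    score, src, text = max(scored)
    return {"source": src, "passage": text, "score": score}

hit = retrieve("what is the due date for filing my return")
print(hit["source"])  # the retrieved answer always carries its citation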
So the advantages are context and analytics, and on top of that we will have multi-language capability that can be used directly. Let me take the next two minutes: how am I going to do this? I will take you through two journeys. We are going to build 25 or 30 of them, but I will show just two as a sample. The first journey concerns data coming from external sources: the most important thing is that we do not want to frustrate people when that data has issues.
We get data from external sources, and if there are issues in it, it is a problem. So we validate it at the data-source level, with a proper agentic AI that does that. Then, grievances: currently this is handled with FAQs and what I would call deterministic chat. I am going to make it truly context-aware and intent-driven, a conversational AI that takes taxpayers through the journey and tells them how to proceed.
Last but not least, pre-filled data is very powerful, but if that data is not proper, you get into trouble. So we are putting intelligence there as well, continuously, which will reduce the overall effort of submission for the taxpayer. Next, we carry the intelligence into verification: we can auto-detect and match discrepancies, showing taxpayers exactly where the data diverges. Rather than just telling them what was done right or wrong, we help them fix it.
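The auto-detection and matching of discrepancies described above can be sketched like this. The field names and amounts are made up for illustration; the point is that each mismatch is reported with both values, so the taxpayer is told where to look rather than just "right or wrong".

```python
# Illustrative pre-filled figures (from source data) vs. what was filed.
prefilled = {"salary": 1_200_000, "interest_income": 45_000, "tds": 95_000}
filed     = {"salary": 1_200_000, "interest_income": 5_000,  "tds": 95_000}

def find_discrepancies(prefilled, filed, tolerance=0):
    """Compare filed values against source data and list each mismatch."""
    issues = []
    for field, expected in prefilled.items():
        reported = filed.get(field, 0)
        if abs(reported - expected) > tolerance:
            issues.append({"field": field,
                           "reported": reported,
                           "from_sources": expected})
    return issues

for issue in find_discrepancies(prefilled, filed):
    print(f"{issue['field']}: you reported {issue['reported']:,}, "
          f"sources show {issue['from_sources']:,}")
```

A production system would of course add tolerances per field and route each flagged mismatch into the nudge workflow rather than printing it.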
Once that is done, the return goes through. And not everybody errs intentionally. I am sorry I am not using the mic; I think I am loud enough for everybody to hear me. In a nutshell, this agent primarily makes sure that unintentional non-compliance gets caught and corrected, and the templates we use are very human-centred. At the end, last but not least, there will still be people who do it deliberately, and we have problems there. So what we do is identify, through agents, the cases and every other detail, and make sure the vulnerability is predicted. For this entire flow, the SLM being created is the primary input. Let me go to the next one, which is more general, though I put it under AI. When I talk about a conversational assistant, it is for anybody and everybody; most importantly, for many people, just navigating the portals and understanding where things are is a problem. So I am going to have a context-aware, domain-aware NLP chatbot which understands and explains, what I call idiot-proof for the common man: he should not get worried about the legal jargon; rather, it tells him step by step what he should do. That is one of the primary focuses for me. We are going to use a certain set of LLaMA models, other SLMs, and also the in-built SLM which is being built across the system.
So overall, if you really ask me, Insight 2.0 is about enabling intelligence for both the officers and the citizens, and making them happier.
Thank you, sirs. We are already running behind by 20 minutes, so I request the further speakers to kindly speed up. Now Professor Mohsen, founding head of the YALI School of AI and one of India's foremost AI thought leaders, will share perspectives on the possible usage of AI by law enforcement agencies and the road ahead. Sir, the floor is yours.
Thank you for the kind introduction. I was asked to speak on the usage of AI by law enforcement agencies in general, not just on income tax, so I will take a somewhat broader perspective. I should say that I have been fortunate to be involved with some of the earlier activities of the Income Tax Department, and I personally feel that the kind of support the Income Tax Department in India gives its users is much better than what the US gives theirs. I have seen that because I have filed taxes in both countries, and there is no US equivalent of the 26AS form that we give our users.
There is just so much support here; we can check how the work is progressing. Also, I am not a law enforcement expert, so I got some feedback from Shankar Jaiswal, who is DGP Lakshadweep, and Sunny Manchanda, who is director of the DRDO Young Scientist Lab; thank you to them. So to me, this is the context. Per 100,000 people, the number of police officers in India is much lower than in developed countries, and I will count China as a developed country now: we are at 155, while China, the US and Germany are at 200, 300 and so on. For judges per million people, it is recommended that we should have 50; we have about 15 to 22, depending on which news article you read.
About 29% of police cases are still pending investigation, and about 4.85 crore court cases have been pending for over one year. With that kind of situation, and I don't know how it is for income tax, we are always in need of high expertise, and hence the need to use AI in India: if we have to deal with the setup we are in, we need to somehow augment ourselves with technology. Now, of course, you can use AI in various ways, and there are aspects to think about. Are you using it in law enforcement before a crime is committed? Are you using it to predict crime? Are you using it to figure out what we should do to stop crime?
Or, when the crime has been committed, are we thinking about how to investigate it and how to make judgments on it? Wherever we are on that timeline, AI can be used. Similarly, AI can be used not only by income tax and GST, but also by the military, by maritime agencies, by traffic police, by other police, and so on. And then, what kind of data are we getting in? For most of this conversation we are talking about financial data, which is structured, but there is also visual data, language data and speech data, and bringing it all together adds up to a lot of intelligence. You can take one item from the first column, one from the second and one from the third and actually create new AI use cases. For example, we could do a much better job of monitoring in traffic policing if we somehow used the visual data, and so on; you can start thinking about really interesting possibilities here. In the next few slides I will show some very basic examples. Take image and video: in 2014 we were very proud that Surat reported a 27% reduction in crime just because there were CCTVs with face recognition, and if three or four people from a known database came together, police would go there and it would reduce the crime. Somehow we haven't seen that replicated elsewhere in India, and I don't know why; this is really old. We are really poised to do this; we should have CCTVs everywhere so that we can do a much better job of crime surveillance.
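The Surat-style rule described above (alert when several people from a known database appear together) reduces to a simple set intersection once face matching is done upstream. This sketch assumes the face recognition step is handled elsewhere; the watchlist names and the threshold are illustrative.

```python
# Hypothetical watchlist of identities from a known database.
WATCHLIST = {"suspect_1", "suspect_2", "suspect_3", "suspect_7"}

def should_alert(identified_faces, threshold=3):
    """Alert when `threshold` or more watchlisted people co-occur in a frame."""
    matches = set(identified_faces) & WATCHLIST
    return len(matches) >= threshold, matches

alert, who = should_alert(["suspect_1", "passerby", "suspect_2", "suspect_7"])
print(alert)  # True: three watchlisted people seen together
```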
A DRDO lab is doing very interesting work on obfuscation: if you are wearing a mask, or your hairline has changed, can we still figure out who you are? Visual intelligence for traffic should be very easy. We still see people driving on the wrong side of the road or riding without a helmet; that can be completely automated with very simple imagery. Satellite imagery analysis is also very interesting. For example, the only way we know that China has a new port in Djibouti is through satellite images. It is not easy to analyze, but we can analyze it, because the data is there. When was data for the whole world ever so easily accessible to us?
Well, today it is. And for income tax, by the way, you can also start thinking about where a person lives, what kind of locality it is, and how it lights up at night; that tells you the affluence level, and you can start using that kind of information. There was a very interesting case where a US aircraft carrier was being chased by 20 Iranian vessels in the ocean, and we knew because the satellites could see it. The same goes for maritime surveillance. We have so many AI use cases today, such as DigiYatra. We can use face recognition for searching for missing persons. But we can also start thinking about anomalous behavior, like anomalous vehicular behavior.
In one of the infamous car rape cases in Delhi, one car just kept going along the road, taking a U-turn, going along the road, taking a U-turn, again and again. That kind of highly anomalous behavior could easily have been detected if we were doing this. Even today, in taxi safety, I know women are still worried about taking an Uber late at night in Delhi; even though we have a panic button, we don't know whether the panic button really works. But if there is any anomalous behavior by the taxi driver, there should be a very clear mechanism to flag that the driver is not following what they are supposed to be following.
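The repeated-U-turn pattern just described is the kind of signal a very simple rule can already catch, given a stream of manoeuvre events per trip. The event vocabulary and threshold below are invented for illustration; a real system would work on GPS traces or camera feeds.

```python
from collections import Counter

def is_anomalous(events, manoeuvre="u_turn", threshold=3):
    """Flag a trip if one manoeuvre repeats more than `threshold` times."""
    return Counter(events)[manoeuvre] > threshold

normal_trip   = ["start", "left", "straight", "right", "stop"]
circling_trip = ["start", "u_turn", "straight", "u_turn",
                 "straight", "u_turn", "u_turn", "stop"]

print(is_anomalous(normal_trip))    # False
print(is_anomalous(circling_trip))  # True
```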
And it should be very easy to prevent such crime. The same goes for taxi, bus and train safety monitoring. Moving on to textual intelligence: a lot of people are interested in anti-terrorism and so on. Can we easily and quickly answer questions? For example, when Osama bin Laden was killed and you were given his laptop, how much time would it take to actually go through all his documents? I hope not much, because AI should be there to help you. I was working on this a long time back, and at the time we could figure out which entities were active in Iraq. This is what my system gave fifteen years ago: all the players who were active in Iraq at the time, and if you wanted to know something about one particular player, you could just ask what we know about this person and it would give you a quick answer, a quick summary.
This kind of intelligence we now have at our fingertips, and it should be used to figure out who the bad actors are, whether in income tax or in other kinds of law enforcement. I can move on. Speech-to-text for quick FIR filing, a chatbot to support … We just heard about Project Insight 2.0, where there will be a chatbot, but we also need a chatbot for our own Income Tax Department people, because they have to work with the data, and if they had to write code every time, it would be very hard. As Mr. Agarwal said earlier, AI can write code now, so we should have text-to-code systems for our own IT department, so that they can easily get to the data they are looking for and conveniently find the right information.
On the financial intelligence side, the input would be more structured data. There are so many interesting news stories coming out, which makes me very proud that we have a very active department. For example, just a day or two ago, 60 terabytes of billing data and 1.77 lakh restaurant IDs were uncovered, exposing a ₹70,000 crore tax evasion scam. This was just AI crunching the data: these people were deleting a lot of invoices, and once the system recognized that pattern as anomalous, an income tax analyst could look at it and figure out what was going on.
Similar things have happened where large or frequent bank deposits flag mismatches in ITR filings, with AI tracking suspicious tax claims. So it is very clear our citizens know there is AI looking at them, yet scams still go on; and if we can detect behavior that is not the expected behavior, we might even find new scams we couldn't guess ahead of time. If we start to put this information together, it becomes even more interesting. For example, mule accounts: I have been told that college students run mule accounts. If we know from their Insta pages, or whatever the more recent social media is (sorry, I'm too old), that they are just college students, but their bank accounts are going through a lot of churn, it would be very easy for us to predict that it may be a mule account.
Similarly, other cases of tax evasion and money laundering could come together if we combine multiple sources of data: the social media feed, financial documents, employment records, investments, the various kinds of purchases they make. Sometimes these will be invoices; if they are paper invoices, OCR may be needed, so there will be a vision requirement. There are also interesting collusion rings. Generally, we have found that bad actors support each other in the bad acting, so if you build a graph around them and start looking for collusion rings, we might be able to find these people better. I think the sky is the limit. There is another very interesting phenomenon: where crime happens more, that is where we deploy more people.
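The collusion-ring idea a few sentences above can be sketched with basic graph analytics: build a who-transacts-with-whom graph and surface tightly connected groups. Here a plain connected-components pass over an invented edge list stands in for the heavier algorithms a real deployment would use.

```python
from collections import defaultdict

# Hypothetical transaction edges between entities.
edges = [("A", "B"), ("B", "C"), ("C", "A"),   # a suspicious triangle
         ("D", "E")]                           # an ordinary pair

def components(edges):
    """Return the connected components of an undirected edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Flag components large enough to look like a ring rather than a pair.
rings = [c for c in components(edges) if len(c) >= 3]
print(rings)  # [{'A', 'B', 'C'}]
```

Real collusion detection would use weighted edges, community detection, and run inside the data warehouse, as the Teradata speaker notes later; this only shows the shape of the computation.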
But once we do that, the people committing the crime figure out that there are more police here, and they go elsewhere. This is a game between attacker and defender, and if the attacker knows what the defender is doing, it is very easy for the attacker to change the place or the style of what they are doing and continue the game. So it is important to go not just where the crime is, but where the crime might be in the future, once the criminals learn that the defenders are coming. I am going to finish in two minutes. People have studied these security games, for example in elephant-poaching and coastal-patrol scenarios, and have found much better performance because of that. Let me take the last minute to talk about the challenges in the use of AI in these scenarios, because there are a lot of opportunities and a lot of excitement, but we have to be careful. First of all, we cannot make this autonomous.
If AI starts to reach out to citizens directly and starts making mistakes, it will create a lot of problems. People will be unhappy and worried, and we will lose trust in the system. It is important that our intelligence analysts and AI work together: AI brings up the issues, AI maybe generates a lead, but the lead is processed by a human to maintain trust in the system. That is where risk assessment becomes very important. Over-triggering is also a problem, because if you generate lots of alerts, people become immune to the alerts, and we have to figure out how to do this in a trustworthy fashion. If you don't do this right, you get algorithmic bias: it has been shown that when AI was tried in judicial settings earlier, it made mistakes in favor of white people and against African-American people.
We are a very diverse society, with so many castes and social strata. If those biases got into our models, it would be very, very devastating. So we have to be very mindful that AI bias doesn't creep in and that we keep a human in the loop. It is also important that we gather data in a centralized repository. We are used to a system where one hand of the government doesn't talk to the other; Project Insight is trying to fix that, and I'm so happy. Other initiatives are also trying to fix it. We should make sure the data comes together so that intelligence comes out of it, and that inter-jurisdictional boundaries don't get in the way.
The other thing is that our defenders, our IT personnel, the analysts, the law enforcement agencies, need to be smarter than the attackers, because the attacker will always be creative in figuring out the next attack, and if we have not been thinking ahead of time, we will be missing out. Finally, with all this new data coming in, there is obviously increased scrutiny and increased surveillance, and we have to make sure it doesn't infringe on civil liberties and the privacy of individuals. These are the things we have to be mindful of, but I think the sky is the limit, and I'm really happy that this is being done and used more.
Thank you, sir, for such an insightful perspective. Now, Mr. Martin Wilcox, Senior Vice President at Teradata and a global leader in AI-driven data analytics, will speak on AI-driven risk analytics of financial data for law enforcement agencies. So please.
and understanding the networks of bad actors. But building these sorts of graphs at India scale is incredibly complicated, because graph analytics is an O(N²) problem. So again we need scalable, performant systems, and we need to bring the complex graph algorithms to the data in the data warehouse instead of trying to copy samples of data out of it. If we have to cut the graph down by taking small samples of data out of the data warehouse, the risk is that we miss the bad actors we are trying to catch. I want to talk a little now about next-generation AI use cases. At Teradata, when we speak of next-generation AI use cases, we typically look for four characteristics.
We won't go through all four of those characteristics today in the interest of time, but as a couple of the previous speakers have mentioned, one defining characteristic of a lot of next-generation AI use cases is multimodal data: the idea that images, audio and text can now be leveraged in the kinds of ways we were previously only able to leverage structured transaction and event data. I'll come back and talk about that specifically in a moment or two. But here is another example that I thought might be interesting to some of you, from Brazil's largest credit union, a company called Sicredi.
The challenge for this organization is that Brazil has a large unbanked population outside the formal economy, and it is obviously very difficult to make credit-risk and lending decisions for a group of people who cannot prove their income. Sicredi's solution is a sophisticated set of income-estimation models: they predict an individual's likely income and then make credit lending decisions on the basis of that predicted income. Now, this is a model that was trained outside of the database. We have a technology called Bring Your Own Model, which enables us to consume models regardless of where they have been trained: if you can export a model in PMML, MOJO or ONNX, we can import it and then use Teradata as a parallel harness to speed up scoring with that model.
And I think this is incredibly important, because we are at a moment in the industry where everybody wants to talk about model training, because model training is exciting and cool. But actually, we don't make any money when we train a model. We only make money when we can deploy that model to production and run inference, in this case inference at India scale, to actually change the way we do business. This Bring Your Own Model technology lets us import models regardless of where they have been trained, so your data scientists can use the tools that make them most productive, while you still have a mission-critical platform that enables you to score models in production.
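The deploy-and-score idea described above (train anywhere, then run batch inference close to the data) can be sketched as follows. The logistic coefficients here stand in for an imported PMML/MOJO/ONNX artefact, and the random table stands in for a warehouse table; both are illustrative assumptions, not Teradata's actual mechanism.

```python
import numpy as np

# "Imported" model parameters, standing in for a portable model artefact.
coef = np.array([0.8, -1.2, 0.3])
intercept = -0.1

def score(features):
    """Batch inference: one vectorised pass over all rows at once,
    instead of row-by-row calls to an external scoring service."""
    z = features @ coef + intercept
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
table = rng.standard_normal((1_000_000, 3))  # the "data warehouse" table
risk = score(table)                           # one score per row, in one pass
print(risk.shape)
```

The speed-up in the next paragraph comes from exactly this shape of computation: scoring the whole table where it lives, in parallel, rather than exporting samples to the model.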
We get very significant speed-ups with this technology. From the numbers on this slide you'll see that, for the income-estimation models in Brazil, we were able to run inference 25 times faster on the parallel data warehouse by bringing the complex processing to the data instead of the other way around. And 25 times faster is the difference between running this model once per day and once per hour; if you can run the model once per hour, you can change your entire business model, for example changing the cost of credit during the working day. Now, this next example illustrates the multimodal phenomenon we were talking about.
This is another large Asian bank. This bank cares a lot about NPS, Net Promoter Score; they consider it the single most important leading indicator of customer intent, of whether the customer will leave or will stay and consume more products. The problem this bank had was that it had very little understanding of the drivers of Net Promoter Score. But when we were working with them, we were able to establish that they were capturing 50,000 customer chats per week from the online banking application.
Thank you to all our speakers of the first category. As we move to category 2, we now shift from perspective to practice. Across India, regulatory and enforcement agencies have increasingly embedded artificial intelligence into their core systems. This segment brings together agencies that are not just exploring AI but actively deploying it to strengthen compliance, improve oversight and enhance citizen-centric services. For this, we have among us Shri Suvendu Pati from RBI, Shri Harsh Poddar, an IPS officer, Shri Ram Ganesh from CyberEye, Shri Amnesh Pandey from SEBI, and Shri Shashi Bhushan Shukla. All the sessions have been so interesting that I see most of the audience sticking to their seats.
So for the first session in this category, I introduce Shri Suvendu Pati. Sir is the Chief General Manager and Head of FinTech at the Reserve Bank of India. Sir will present MuleHunter, an AI-driven initiative targeting mule accounts. Sir, please.
Good evening, everyone, and thank you for the opportunity of having me. I will spend some time on the initiatives we have taken and then come to MuleHunter. First of all, recognizing the need for governance, the financial sector has been one of the early adopters of artificial intelligence, given that most of its decisions are based on data. RBI had constituted a committee, which submitted its report last August, and it has been placed on our website. It recommended seven sutras, or high-level design principles, and 26 recommendations: 13 on innovation enablement and 13 on risk mitigation.
Together with these sutras, there are six pillars under which the recommendations are classified. I am happy to report that these seven sutras, which we initially framed as recommendations or guiding principles for the financial sector, have now been adopted by the Government of India as India's design principles, or sutras, for AI governance across all sectors. On the left you can see the recommendations of our RBI committee, and on the right the principles published by the Government of India on November 5th, outlining those very same sutras.
One of the foundational principles we are talking about is trust in the system. No technology, no matter how powerful, will ever be adopted unless it earns trust; people should feel comfortable with it. Another principle which cuts across every application is putting people first: customers, citizens, need to be protected at all times, and for high-risk areas and high-risk decisions one should have a human in the loop. Another thing we have recommended is innovation over restraint: unless we experiment with this new technology, we will never realize its potential.
There is a lot of apprehension in people's minds that these are probabilistic, non-deterministic models and there may be mistakes. But unless we experiment, do sandbox testing and those kinds of exercises, we will never realize the true potential of this technology, so a little nudge was given to the institutions: do experiment, do adopt. There are other principles which, in the interest of time, I will not talk about; the specific recommendations are available in the report on our website, which you can go through at your leisure. One of those recommendations is to bring up something called an AI sandbox. A critical reason India needs this is that entities face constraints in the availability of compute infrastructure and also of data. Recognizing this, as a public good we would enable an AI sandbox by making cross-sectoral and cross-institutional data available in an anonymized way, which can be used by entities and model developers.
Then there are capacity building and an AI liability framework, another important element of how the customer is to be protected. Moving on to the other principles, there are risk mitigations: how board policy should be formed, the product approval process, cyber-security measures, red-teaming exercises. So there is a host of balancing measures, 13 recommendations on risk mitigation. Now, let me turn to the application we are talking about today. Mule accounts in our banking system are a real challenge, and given the huge volume of data we have, it is humanly not possible to tackle them without the use of technology or machines.
So we have developed the MuleHunter.ai application, which is now implemented across 26 banks, with another three banks in the process of implementing it. It has 857 features identified so far, and it is getting better and better as the model gets trained across institutions. Out of these 857 features, for a bank like State Bank of India only some 50 features may be very critical, whereas for other banks, say RBL Bank or IndusInd, a different set of 50 features would be important; this itself provides insights. It is still at a relatively early stage of implementation, being rolled out over a period of time, and currently it is deployed on-premises within each bank.
So the data really doesn't go out of the banks themselves, but there is a central aggregation service we run, which takes the intelligence from the features to a central aggregation model. What we have found is that the rule-based engines the banks were implementing so far gave accuracy of around 20 to 30%, whereas with these AI-based MuleHunter models, the accuracy level has gone up significantly: above 90% in some institutions, above 80% in others. As somebody said, the rule-based systems were handicapped in that a human element was required to analyze a large volume of data.
But here that number is getting reduced. For example, we found patterns like a lot of mule transactions taking place around midnight, when customer support is not there; this is a new feature that could be found. Similarly, there are accounts that remain dormant for a long time, suddenly get active, receive a barrage of payments, with receipts and debits happening, and then go dormant again. These kinds of pattern detections were not possible earlier. Conversely, for accounts detected as salary accounts, the likelihood of being classified as a mule account is very low, so a BRE engine filters out those kinds of accounts.
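The dormant-burst-dormant pattern just described can be expressed as a simple phase check over monthly transaction counts. The counts and thresholds below are invented for illustration; MuleHunter's actual 857 features are of course far richer.

```python
def dormant_burst_dormant(monthly_txn_counts, quiet=2, burst=50):
    """True if a quiet stretch is followed by a burst and then quiet again."""
    phases = []
    for n in monthly_txn_counts:
        label = "quiet" if n <= quiet else ("burst" if n >= burst else "normal")
        if not phases or phases[-1] != label:
            phases.append(label)          # record each phase change once
    return phases == ["quiet", "burst", "quiet"]

suspect = [0, 1, 0, 0, 180, 240, 1, 0]      # dormant -> barrage -> dormant
salary  = [12, 9, 11, 10, 13, 12, 10, 11]   # steady activity all along

print(dormant_burst_dormant(suspect))  # True
print(dormant_burst_dormant(salary))   # False
```

This also shows why the salary-account filter mentioned above works: steady monthly activity never produces the quiet-burst-quiet phase sequence.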
It flags only the genuinely suspicious accounts, and banks then need to do enhanced due diligence after the flag is raised. We are working closely with I4C, and one limited study revealed that for accounts predicted by MuleHunter, some banks initially classified them as not mule after doing the enhanced due diligence; but within a month or two we start seeing I4C complaints on those very accounts, with such ratios ranging up to 60%.
That gives us the confidence that the model is identifying mule accounts correctly, whereas the banks, constrained by their own branch banking and identification systems, are not classifying them correctly. In one bank that we took as a sample, we see that around 75% of the flagged accounts were indeed mule accounts, and up to ₹100 crore of money could have been prevented from being lost had the bank classified them as mules on day zero and imposed debit freezes. These are some of the early insights we are getting, and we are building on them as we progress. The future is what we call a digital payments intelligence platform: we are aiming at a real-time transaction scoring mechanism, meaning that at the time a transaction goes through, a score would be provided to the bank on whether to allow that transaction or not. Mule account detection today happens once a crime has already been committed; we are trying to move it to a preventive action.
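The real-time scoring idea, giving the bank a score at the moment a transaction goes through, could look like the following sketch. The weights and threshold are purely illustrative assumptions, not the actual DPIP design:

```python
def score_transaction(account_risk: float, amount: float,
                      avg_amount: float, hour: int) -> float:
    """Toy real-time risk score in [0, 1]; weights are invented for illustration."""
    score = 0.6 * account_risk                    # prior mule-likelihood of the account
    if avg_amount > 0 and amount > 5 * avg_amount:
        score += 0.25                             # unusually large transfer
    if hour >= 23 or hour <= 2:
        score += 0.15                             # off-hours activity
    return min(score, 1.0)

def decide(score: float, threshold: float = 0.7) -> str:
    # Above the threshold, the bank would hold the transaction for review
    return "hold" if score >= threshold else "allow"
```

A production platform would compute this from many more features and within strict latency limits; the sketch only shows the shape of a score-then-decide pipeline at transaction time.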
So this is where AI is going to help us a lot, along with technology and partnering with telecom on mobile numbers that are suspect. That kind of filtering and a smart registry is being built, and I4C is also providing us insights. This is an ecosystem being built as a public good. We are not only giving directions; we have soiled our hands in building this tool, which is now getting implemented at scale. But yes, there are a lot of improvements that can be made through partnership across all the banks. Thank you. If you are talking about the Supreme Court case, which dealt with the digital arrest cases and has formed an expert committee on this:
the Reserve Bank is also a part of that committee. But this initiative was already on much before the Supreme Court gave this direction; it is not something we started building after the Supreme Court direction. This work was undertaken almost one and a half years back. Over a period of time, 26 banks have implemented it and more are implementing. It is a work in progress and gets refined as we speak. As I said, newer initiatives are also in the pipeline. And we are working alongside the banks on how to move from a manual due-diligence procedure to a hybrid of automated and human-intelligence-backed enhanced due diligence.
That would be the ultimate proof of preventing these frauds and protecting the hard-earned money of gullible citizens.
Thank you, sir. We will keep all the questions and answers for the end of the session. Now, I quickly call upon Sri Harsha Poddar, Indian Police Service officer and an award-winning innovator in AI-driven policing, and Shri Ram Ganesh, cybersecurity expert and founder of CyberEye, to present cybercrime enforcement in action. Gentlemen, the floor is yours.
Earlier, when a crime was registered and handed to an investigating officer, you had a series of supervisory meetings that would take place at the rank of the deputy SP and the additional SP in order to determine the path of the investigation. Today, this co-pilot is able to ingest the FIR and all of the documentation of the investigation, and generate an investigative path that is compliant with the standard operating procedures laid out by that particular state government, in this case Maharashtra, as well as with the High Court and Supreme Court judgments that outline the best practices in that kind of investigation. Broadly, the co-pilot performs four essential tasks.
After having generated this path, it also sends out a series of routine legal requests that we require for most investigations: requests for telecom data, requests for forensic data. It also makes sense of digital forensics, by which I mean telecom data in organized crime. As I'm sure those of you who have worked in tax investigation are aware, there are vast volumes of telecom data that we gather, which we are able to analyze using the co-pilot. And then we also use open-source intelligence, of which, again, in police work we use a fair amount. From different open platforms, Facebook, PhonePe, Google Pay, etc., it is able to gather openly available data and make it part of the investigation.
Essentially, this is what is happening: you have an adaptive investigation path that is unique to that particular case. So remember, it is not just an instance where it has spelt out or replicated the SOP for you; it has adapted the SOP and the judicial pronouncements on that particular head of cases to that case. That is what it actually does. In terms of case ingestion and how this exactly works: it ingests the FIR to start with. It also provides victim assistance, for example the unfreezing of accounts and of volumes of money that have been frozen in cybercrime cases. It generates case diaries, the day-to-day progress of the investigation itself, and provides guided investigation paths, which are compliant, as I said, with standard operating procedures.
And it also profiles people on the basis of open-source intelligence. Now, in my own district, as SP of Nagpur Rural, we have trained over 233 investigating officers on this, using which over 467 cases have been investigated in the six months before the launch by Mr. Satya Nadella and our Chief Minister. The co-pilot has actually enabled us to win a series of governance awards within the state of Maharashtra as well, but that is not so important. What is important here, and I want to doff my hat a little to the training process: when onboarding systems such as this, it is important for the institution to create space for training. I know, having been a beneficiary of it myself, that the Income Tax Department lays a lot of stress on training across all ranks, something we can learn from in the police department.
But this is something that we stressed upon very substantially, and that has been useful; it has also reduced resistance within the organization to onboarding and using it. I will end by listing four basic technologies from the artificial intelligence silo; at Marvel, these are the kinds of technologies we work upon. First is large language models, which, as was spelled out in the first session, are essentially AI models trained on large amounts of text that are able to interact in a manner very akin to a human being. The second is graph neural networks, AI systems that make sense of siloed sets of data and the relational analysis between them.
So, in organized crime, that is very useful for doing a hub-and-spoke analysis of who is at the center of the crime. In Maharashtra we have an act called MCOCA, as you may be aware, where in organized crime cases you need to be able to find out who the center of the gang is. Third is agentic artificial intelligence: co-pilots such as this, which trigger workflows and actually walk individuals through them. And the last is big data analytics, the structured analysis of large sets of data. That is the kind of work we have been doing at Marvel, and this is an instance of it. I will end with that. It has been a pleasure and a privilege.
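The hub-and-spoke idea can be illustrated even without a graph neural network: plain degree centrality over a contact graph already surfaces the hub. A toy sketch on a hypothetical call graph (the names and edges are invented):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality over an undirected contact graph,
    e.g. one built from call detail records. In a hub-and-spoke gang
    structure, the hub scores highest."""
    deg = defaultdict(int)
    nodes = set()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
        nodes.update((a, b))
    n = len(nodes)
    return {v: deg[v] / (n - 1) for v in nodes}

# Hypothetical call graph: 'boss' talks to every member; members rarely inter-call
edges = [("boss", "m1"), ("boss", "m2"), ("boss", "m3"),
         ("boss", "m4"), ("m1", "m2")]
scores = degree_centrality(edges)
hub = max(scores, key=scores.get)  # the likely center of the gang
```

Graph neural networks go further by learning from node features and multi-hop structure, but this simple measure shows why relational analysis of siloed data matters for identifying the center of a gang.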
Thank you very much. Jai Hind.
Thank you, sir. Now I introduce Sri Avnish Pandey, Executive Director at SEBI and a national voice on technology strategy and cybersecurity governance. Sir, please.
Thank you. First of all, a very good evening to all of you. It is indeed a great privilege to be here, and I thank CBDT for giving me this opportunity. For the past two days we have been listening to a lot of AI-based initiatives all over the place, but something that has really stuck with me at SEBI for some time is that the most important thing is to build capacity for undertaking these AI initiatives. To that effect, we have truly democratized AI development within the organization, and I take quite some pride in introducing some of the names I have here in the crowd: Mr. Sandeep Kriplani, Mr. Rohit Saraf, Vikas Komera, Rajuddin Khan, and Pramit.
Pramit is the youngest of them. I will tell you why this is important: some of the initiatives I am going to present today have been handcrafted by these intellectual minds, and they are not from the IT department of SEBI. So it is truly democratized to that extent. From SEBI's perspective, we have quite a broad mandate: to protect the interest of investors, to promote the development of, and to regulate, the securities market. It is a fairly large mandate. To that effect, we craft regulations and seek compliances, so compliances are a major part of our regulatory processes. We also conduct investigations and initiate enforcement proceedings from the data that we collect from various sources.
Going forward, we also adjudicate, issue directions and levy penalties. Why am I saying this? It is to say that we have varied use cases within the organization where we have started to use the power of AI. There are four use cases I would like to mention here that have been doing well in terms of generating valuable output for us. First is RIDAR, a tool that ensures proactive compliance for the advertisements being issued by regulated entities, specifically mutual funds. Second is Sudarshan, a very important tool that is able to track the unregistered and misleading content that finfluencers are putting onto social media. Third is Infomerge, the workflow intelligence we have built to make our investigation processes more efficient and faster. And fourth is a cybersecurity compliance and audit tool we have built to ensure that the cybersecurity compliance submissions sent to SEBI are well read and that we make good meaning out of them.
I will take them one by one, though I am slightly cognizant of the time I have in hand. First is RIDAR, which, as I said, takes care of all the advertisements that the mutual fund industry puts out. The tool basically looks into whether an advertisement is compliant with the regulatory requirements mandated by the code of conduct. Some of the non-compliances this tool is able to capture are illustrated here; most of those we have caught are non-disclosures and disclaimers not being adequately put in. Moving next to Sudarshan, which is trying to combat a lot of financial frauds, including investment frauds.
This is a tool able to capture the non-compliances and frauds in the securities market domain that we are involved with. Our media monitoring cell in SEBI has flagged nearly one lakh instances of misleading content on these platforms. To strengthen our approach, we built this product called Sudarshan, which does continuous monitoring. It is a multi-modal tool and works in multiple languages as well, knowing that some of these unscrupulous actors use the capability of languages to defraud people. It has enhanced detection capabilities, which we validate against the data present within SEBI. Through this we are able to figure out financial misinformation and ensure financial information integrity. Infomerge, as we call it, is a tool for our investigation process. As you all know, the investigation process runs from case initiation to data collation, data analysis and report generation. Using this tool we are able to systematize all the data collected from various sources into one format. We are able to look into the company profile, designations and financials of a particular company, and also figure out the corporate announcements made during the investigation period. Visualization is how people are able to see the patterns, and the tool has some very innovative features for that. Finally, report writing.
From one investigating officer to another, there has always been variance, so to ensure we have a standardized mechanism to produce a report in an orderly manner, this particular tool does the last part of writing the report. Of course, those reports again go through a human-in-the-loop review. Coming to the last system, which we launched very recently: SEBI initiated a cyber resilience and cybersecurity framework, based on which we have started to receive a lot of compliance submissions, meaning controls, levels and artifacts submitted for those compliances. This particular tool autonomously reads those submissions and flags where something in the audit report has gone missing.
We have a very novel three-model architectural framework, so that if one particular model is hallucinating, the other models take care of it and give a reasonable, meaningful analysis. Apart from providing dashboards and real-time visibility, what it ends up ensuring is that at SEBI, at any point in time, we are able to do a relative analysis of all our intermediaries and know where they stand on cybersecurity measures. That was very quick; sorry if I have been too fast, I was trying to keep pace with the seconds that were ticking. Thank you.
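The three-model cross-check against hallucination can be sketched as a simple majority vote over independent model outputs. This is only an illustration of the idea, not SEBI's actual architecture:

```python
from collections import Counter

def cross_validated_answer(answers):
    """Majority vote across independent model outputs.

    If one model hallucinates, the agreeing majority overrides it;
    with no majority, the item is escalated for human review.
    """
    counts = Counter(answers)
    best, n = counts.most_common(1)[0]
    if n > len(answers) // 2:
        return best
    return "ESCALATE_TO_HUMAN"
```

Real systems often compare free-text outputs with semantic similarity rather than exact equality, but the principle is the same: disagreement between independent models is a cheap, effective hallucination signal.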
Thank you, sir; it ended in clockwork fashion. Now we come to the final technical session. Sri Sashi Bhushan Shukla, Principal Commissioner at CBDT and a key architect behind the Data Analytics Cell and the Saksham Nudge initiative, will speak on the use of AI for ease of compliance in tax administration. We look forward to your insights.
Thank you, Aman. Good evening, everyone. We are almost at the close of this session, with maybe five to seven minutes left, but since this is the last session we can take a few more minutes. This is the journey of the Income Tax Department, which has been a pioneer in the adoption of state-of-the-art technology; we started using technology quite early, and I will give a few examples. Let us look at the last 25 years. Filing of TDS returns started in 2004, followed by e-filing of returns, and then TAXNET was launched. CPC started in 2009, and then CPC-TDS. As Professor Mausam mentioned, the Income Tax Department is highly taxpayer-service oriented: we show taxpayers the financial information available with the department, and we started showing Form 26AS from 2017 onwards. We have automated several processes: the department launched online issuance of Form 16, faceless assessment and the e-filing portal in 2021, and the national cyber forensic policy was launched in 2024. Two years ago we started the Nudge initiative, the non-intrusive use of data to guide and enable taxpayers. In the first session my colleague talked about Insight 2.0, and several projects are now being updated with state-of-the-art technology, including artificial intelligence, to enhance the taxpayer experience and the ease of tax compliance. The department is using technology, including AI, keeping the taxpayer at the heart of it. As for the data the Income Tax Department has: there is vast data from several sources. For example, PAN has been issued to 80 crore people, and the number may have grown a little more by now; more than 9 crore ITRs are filed; 12 crore people are paying taxes; and we receive more than 650 crore data fields for the specified financial transactions, which are populated in the AIS. That is a huge amount of data available with the department. We also collect data under rule 114B, through Form 60 submissions, and through the specified financial transaction statement in Form 61A.
We also receive information from foreign jurisdictions: more than 100 countries share foreign asset and foreign income information with India. We receive around 50 lakh pieces of information every year under the CRS and FATCA frameworks, which provide automatic exchange of information, and we in turn share information with the respective foreign jurisdictions in respect of non-residents who have assets in India. Around 1 crore pieces of information are exchanged. So we have a lot of data, including assessment orders and appeal orders. This data can be utilized within our projects, Insight 2.0, ITBA and CPC, for the generation of intelligence, for better compliance, for the awareness of taxpayers, and to inform taxpayers so that the correct taxes are paid.
This Nudge initiative was started two years back. We use data coming from various sources, including foreign jurisdictions, for educating and guiding the taxpayer to comply with the tax laws, to correct their filings, and to declare their correct assets and income. Nudge follows a seven-step strategy captured in the word SAKSHAM, which in Hindi means "empowered" in English: the strategy empowers the department as well as the taxpayer for correct tax filing. The first step, Sankalan, is the compilation and collection of data, which, as we have discussed, comes from diverse sources.
Then comes Anusandhan: analyzing and researching the data to generate insight and intelligence for risk identification, and acting on the data through actionable interventions for targeted outcomes. Then we do the communication: informing taxpayers that there is something they may need to review in their filing, and that they may have to change their income or computation. This is where we use behavioral insights and guide taxpayers to pay the correct taxes. At the same time, we are also hand-holding and facilitating them through the fifth step, called ASTAK, and then ADHIKAR, the enablement of the taxpayer for the payment of taxes.
Legal changes have been brought into the Income Tax Act, and taxpayers are now allowed to update their ITR, for up to four years, by paying additional taxes to correct their income. It is basically a preemptive exercise: no punitive action is taken against the taxpayers and there are no penal consequences, so taxpayers are allowed to change the ITR they originally filed. The whole cycle is then completed through evaluation, where we take the feedback of taxpayers, analyze the responses we receive, and further improve all these steps so that the next nudge can reach taxpayers with better information and better communication. This strategy has actually yielded very good results: taxpayers have responded well, it has been received well, and the trust shown by the department has paid off. I have given a few case studies here, some outcomes of which were also discussed by the Chairman in his opening remarks. In the current foreign asset nudge, carried out in the month of December, we sent messages to taxpayers stating that they may have a foreign asset that has not been reported in their ITR. Taxpayers then revised their ITRs, and 1.57 lakh taxpayers disclosed foreign assets worth ₹99,000 crore.
This exercise shows that once taxpayers are informed of what the department knows and what they missed while filing their ITR, they may come forward and declare it. It resulted in ₹6,540 crore of additional income and ₹99,000 crore of assets being disclosed. We have also taken up a few more exercises. The other one I have mentioned concerns bogus donations: bogus deductions claimed by taxpayers using fake receipts from unrecognized political parties, and from entities or NGOs not eligible to receive donations. Here too the results have been quite encouraging: 6.96 lakh taxpayers revised their ITRs and withdrew claims worth ₹9,879 crore, which gave the department additional taxes of ₹1,758 crore.
These two graphs explain how the campaigns have actually resulted in behavioral change in taxpayers. If you look at the foreign asset behavior pattern, reporting of foreign assets has increased from 1.59 lakh filings before the Nudge campaign started to 4.7 lakh now, almost a threefold increase in a span of two years. Similarly, the claims of deductions have gone down over the last two years, nearly halving from about ₹7,400 crore to about ₹4,000 crore. This is the power of data and of the data analytics we do using technology.
As discussed in the Insight 2.0 project, with the use of artificial intelligence and better technology we will be able to identify anomalies much faster, and we can nudge taxpayers at the time of filing the return, or even before the return is processed. Since there are many representatives from law enforcement agencies here, I also wanted to discuss one more topic. India is leading a project on the misuse and threats of AI in tax crime and financial crimes; it is a 17-country group led by India. We request all LEAs: if you have come across any misuse or challenge of AI, any risk of AI, in your regular working and in the administration of your institutions, please communicate it to us so that we can take it forward at the international level in a collaborative manner and try to find solutions to the various problems.
The misuses reported so far are basically the use of generated synthetic identities, deepfake documents, and sometimes the fabrication of court orders. These AI-assisted misuses are a challenge for all law enforcement agencies. The RBI has come out with the Mule Hunter software; perhaps for synthetic identity detection too we can use AI to further support law enforcement agencies in identifying such misuses, so that we can take preemptive measures before an attack takes place. We will also send a communication, but please keep in mind that this project is ongoing. Coming to the future use of AI in the department: the question is how we will enable our taxpayers to pay correct taxes at the right time, without any penalty or additional tax.
What we are trying in the department is to use AI in an informative manner. It should be able to cross-validate various data sources so that any anomaly can be predicted proactively, on a real-time basis. By real-time, I mean that when the taxpayer is preparing to file the return, we can show the financial data we have received from third parties. At the time of filing, we can use prompts to inform taxpayers if they are making any wrongful claims, or reporting, or failing to report, assets that may be in the knowledge of the department. Then, once the return is filed and before verification, we can further analyze the returns and prompt the taxpayers to correct them before processing; after processing, we can carry out further nudge exercises. All of this will make Nudge a complete 360-degree program where we enable taxpayers, right from the beginning, to pay the correct taxes. This is how we plan to use AI where taxpayer services are concerned. For administration, obviously, we are building our own capability, training our manpower and adopting technology to serve the country better and to collect revenue correctly and on time. With this I will end; this is the closing thought, and it is for everyone to read. Thank you so much.
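The cross-validation of a draft return against third-party data, with prompts at filing time, might look like the following sketch. The field names and tolerance are assumptions made purely for illustration, not the department's actual logic:

```python
def prefiling_prompts(declared, third_party):
    """Compare a draft return against third-party data (AIS-style) and
    emit nudge prompts where the return under-reports.

    `declared` and `third_party` map field names to amounts; both the
    field names and the 5% tolerance are illustrative assumptions.
    """
    prompts = []
    for field, reported in third_party.items():
        claimed = declared.get(field, 0)
        # Small tolerance absorbs rounding differences between sources
        if reported > claimed * 1.05 + 1:
            prompts.append(
                f"{field}: third-party data shows {reported}, "
                f"but your return declares {claimed}. Please review."
            )
    return prompts
```

The same comparison could run again after filing, before verification, to drive the later nudge stages described above.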
Thank you, sir. Thank you, sir. And now I invite Shri Mahadevan K, Joint Commissioner of Income Tax for the vote of thanks.
Respected Honorable Chairman, distinguished speakers, eminent guests, colleagues and participants. It is my privilege to propose the vote of thanks at the conclusion of this highly interesting session. Today's deliberations have clearly demonstrated that artificial intelligence is no longer aspirational; it is operational. I begin by expressing sincere gratitude to Honorable Chairman, CBDT, Shri Ravi Agrawal, sir, for his visionary opening remarks. In particular, sir highlighted how the new Income Tax Act will be tech-driven to reduce litigation over interpretation, emphasized the ethical use of AI while ensuring accountability and transparency, and set the strategic direction for AI-enabled, trust-based governance. The session on Project Insight 2.0 by Mr.
Srinivasan T, Sri Abhishek Kumar and Sri Ramesh Revuru demonstrated how Insight 2.0 is reshaping the taxpayer's life cycle through AI-enabled prefiling, conversational chatbots, behavioral nudges, AI-based litigation risk assessment and vulnerability prediction. The vision of a sovereign SLM for the tax domain stands out as a transformative initiative. The session on a Roadmap on AI for Law Enforcement by Professor Mausam outlined AI applications across preventive, predictive and investigative domains using visual, textual, financial and multimodal analytics. He highlighted use cases such as facial recognition, anomaly detection and crime forecasting to enable intelligence-led enforcement, and emphasized human-AI teams, with explainability, bias mitigation and civil liberties as essential safeguards. These aspects brought conceptual clarity and policy depth to the discussion.
The session on AI-driven risk analytics by Mr. Martin Wilcox highlighted how data analytics enhances enforcement through graph analytics, in-database model deployment, and the leveraging of vector stores and multimodal AI for intelligent querying. The transition from a system of record to a system of intelligence was particularly compelling. The session on Maha Crime OS AI by Sri Harsha Poddar, Sri Ram Ganesh and Sri Vikram Kale powerfully addressed the investigation crisis and showed how AI enables automated crime handle extraction and guided investigation workflows, combined with 360-degree profiling integrating CDR and other tools. In particular, the emphasis on a human-in-the-loop architecture ensures accountability alongside efficiency. The session on AI for ease of compliance by Sri Sashi Bhushan Shukla sir illustrated the Income Tax Department's evolution towards AI-driven platforms such as Insight 2.0, ITBA 2.0 and Saksham Nudge. Sir explained how large-scale data integration and cross-validation enable risk-based, proactive and real-time compliance support, with a focus on shifting from enforcement-led systems to AI-enabled, trust-based voluntary compliance and taxpayer-centric services. The session on Mule Hunter by Sri Sumanthapati highlighted the FREE-AI framework with its seven sutras, six pillars and structured recommendations balancing innovation and risk mitigation. The presentation on Mule Hunter demonstrated how advanced ML models, graph analytics and real-time risk scoring are strengthening mule account detection. The proposed DPIP collaborative platform further reflects a forward-looking, ecosystem-wide approach to AI-enabled financial integrity and supervisory resilience.
The session on AI-driven regulatory enforcement by Sri Avnish Pandey highlighted how SEBI is operationalizing AI across enforcement, including proactive compliance review, real-time detection of misleading financial content and finfluencers, and AI-driven cybersecurity audit compliance. These initiatives reflect how AI can strengthen investor protection while ensuring regulatory prudence. A special word of appreciation to Srimati Amandeep Dhanoa for her engaging and energizing moderation. Today's session reaffirmed that AI enhances risk intelligence, improves service delivery, strengthens regulatory oversight and enables data-driven governance. On behalf of CBDT, I extend heartfelt gratitude to all speakers, institutions, organizations and participants. A special word of appreciation also to the Principal CCIT Delhi headquarters team and the DGIT Investigation team, Delhi, for their dedicated support and meticulous coordination in organizing this event.
Thank you all for making this session impactful and forward -looking. With this, I formally conclude the session. Thank you all. Thank you.
EventThe tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a professional, confident demeanor while discussing serious societal challenges. The ton…
EventThe discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human resilience and adaptability. While acknowledging legitimate concerns about AI’s …
EventThe tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunities and potential ways forward. There was a sense of urgency about the need for …
EventThe discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep expertise while expressing genuine alarm about current practices. The tone was analytic…
EventThere are risks of over-automation without adequate human oversight and potential bias issues
EventThe tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges but focused on practical ways to overcome them through collaboration, policy chang…
EventThe tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for India’s development and the significant challenges that must be addressed. The co…
EventThe discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around infrastructure, energy, skills, and governance, speakers consistently emphasize…
EventThe tone was professional and collaborative throughout, with speakers building on each other’s points constructively. There was a sense of urgency about the challenges discussed, but also optimism abo…
EventThe tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than debating. It maintained a balance between technical expertise and practical imp…
EventThe tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementations and expressing confidence in achieving ambitious goals. There’s a sense of ur…
EventThe discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebration of achievements, and forward-looking optimism. However, there are moments of…
EventThe tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory yet professional atmosphere, with speakers expressing gratitude for the collabora…
EventThe tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusiastic and grateful atmosphere, with speakers expressing appreciation for partici…
EventThe tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciation and maintains an upbeat, accomplished atmosphere. The speakers express relief…
EventThe tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards ceremony, became more personal and engaging during founder testimonials where entre…
Event“Amandeep Dhanoa is an Indian Revenue Service officer of the 2018 batch and served as moderator of the symposium.”
The knowledge base lists Amandeep Dhanoa as an Indian Revenue Service officer of the 2018 batch and as the moderator of the symposium [S3].
“Shri Ravi Agrawal is the Chairman of the Central Board of Direct Taxes and an Indian Revenue Service officer of the 1988 batch with over three decades of experience.”
The knowledge base confirms Ravi Agrawal’s role as Chairman of the Central Board of Direct Taxes and his IRS 1988 batch background with more than thirty years of service [S3].
“Responsible AI deployment requires high‑quality shareable data, secure systems, clear accountability, strong safeguards and continuous training.”
S109 outlines key AI‑readiness requirements such as cataloguing data in machine‑readable form, security, governance and continuous skill development, which expand on the prerequisites mentioned in the report.
“AI can dramatically accelerate code development, exemplified by the Chairman generating functional training‑module code in five to six hours.”
S111 describes a senior engineer building a complex service in 14 days using generative AI tools, illustrating comparable speed gains from AI‑assisted coding.
“AI enables rapid large‑scale data processing, such as deduplicating billions of images in a short time.”
S25 reports that AI deduplicated 90 crore photographs in 51 hours, providing concrete evidence of the high‑speed processing capability referenced in the report.
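Large‑scale image deduplication of the kind S25 describes typically rests on reducing each image to a compact fingerprint that can be compared cheaply. The sketch below illustrates the principle with a simple average hash over grayscale pixel grids; it is an assumption‑laden toy, not the department's actual pipeline, and all function names here are invented for illustration.

```python
# Illustrative near-duplicate detection via average hashing.
# This is NOT the production system referenced in S25; it only shows how a
# fingerprint-plus-Hamming-distance approach makes deduplication scale.

def average_hash(pixels: list[list[int]]) -> int:
    """One bit per pixel: set the bit if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def group_duplicates(images: dict[str, list[list[int]]],
                     threshold: int = 2) -> list[list[str]]:
    """Greedily group images whose fingerprints differ by <= threshold bits."""
    hashes = {name: average_hash(px) for name, px in images.items()}
    groups: list[list[str]] = []
    for name, h in hashes.items():
        for g in groups:
            if hamming(h, hashes[g[0]]) <= threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

In practice the fingerprints would be indexed (for example in a vector or LSH store) so that each new image is compared against candidate buckets rather than every stored hash, which is what makes crore‑scale runs feasible.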
“Responsible AI deployment should consider agentic behavior, safeguards and human‑in‑the‑loop control.”
S108 discusses responsible deployment of AI agents, emphasizing autonomy, reasoning, and safety measures, which adds nuance to the report’s discussion of AI safeguards.
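The human‑in‑the‑loop control that S108 emphasizes can be made concrete with a small gating pattern: the agent executes low‑risk actions autonomously but must obtain human approval before high‑risk ones. The risk policy and names below are assumptions for the sketch, not any deployed CBDT system.

```python
# Illustrative human-in-the-loop gate for agentic AI actions (hypothetical).
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str
    risk: str  # "low" or "high" — toy policy for illustration only

def execute_with_oversight(action: AgentAction,
                           approve: Callable[[AgentAction], bool]) -> str:
    """Run low-risk actions autonomously; require human sign-off otherwise."""
    if action.risk == "high" and not approve(action):
        return f"blocked:{action.name}"
    return f"executed:{action.name}"
```

The key design choice is that the approval callback sits outside the agent: the human gate cannot be bypassed by the model's own reasoning, which is the safeguard property the discussion of agentic behavior points at.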
There is strong consensus among policymakers, technologists, and regulators that AI can materially improve tax compliance, revenue collection and law‑enforcement effectiveness, provided it is deployed within a secure, sovereign data environment, with robust ethical safeguards, human oversight and substantial capacity‑building. The alignment spans technical, regulatory and ethical dimensions, indicating a mature, multi‑stakeholder approach to AI‑enabled governance.
High – the speakers repeatedly echo the same principles across different sectors, suggesting that AI adoption in Indian public finance and enforcement is moving from experimental to operational with a clear, shared roadmap.
The symposium displayed broad consensus that AI is essential for modernising tax administration and law enforcement, yet substantive disagreements emerged around the technical architecture (deterministic platforms vs probabilistic models, data localisation vs flexible model import) and the preferred compliance strategy (behavioural nudges, risk‑scoring analytics, or investigative co‑pilots). These divergences reflect differing priorities among industry, academia, and regulators regarding control, transparency, and scalability of AI solutions.
Moderate – while no outright conflict was voiced, the speakers presented competing visions for implementation and governance. The lack of alignment on core design choices (determinism, data sovereignty, and compliance mechanisms) could affect coordination across agencies and slow the rollout of a unified AI framework unless reconciled.
The discussion was shaped by a series of pivotal remarks that moved the dialogue from abstract enthusiasm to concrete, responsible, and results‑driven AI deployment. The Chairman’s emphasis on human‑centric AI set an ethical baseline, which was deepened by Professor Mausam’s warnings about bias and over‑alerting. Technical innovators like Ramesh Revuru and T. Srinivasan responded with deterministic, sovereign models, while Martin Wilcox highlighted the necessity of scalable inference. Policy leadership from Suvendu Pati introduced a national governance framework and a high‑impact mule‑hunter use case, bridging policy and practice. Operational examples from Ram Ganesh and the Nudge outcomes presented by Shashi Bhushan Shukla demonstrated tangible benefits for law enforcement and taxpayers alike. Collectively, these comments redirected the conversation toward accountable, cross‑sectoral AI strategies that prioritize trust, effectiveness, and citizen welfare.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.