AI Automation in Telecom: Ensuring Accountability and Public Trust – India AI Impact Summit 2026

20 Feb 2026 10:00h - 11:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel examined how AI-driven operations can be leveraged to build and preserve customer trust in telecom services, noting that AI already shapes outcomes such as outage management, grievance handling and spam and fraud prevention, but must balance efficiency with false-positive reduction and privacy compliance [11-15][16-18]. Speakers emphasized the necessity of a “human-in-the-loop” to keep AI decisions from running unchecked [19-20].


Julian Gorman described the “scam economy” outpacing regulation and presented GSMA’s Cross-Sector Any-Scam Task Force, which has collected over 40 operator case studies and is piloting data-sharing proofs of concept across Asia-Pacific [26-38]. He warned that service-based rules can stifle future innovation and argued that regulation should focus on outcomes while fostering industry-wide collaboration, especially as India assumes a global telecom leadership role [39-53].


Dr Rajkumar Upadhyay showcased CDOT’s AI suite, including “Fraud Pro” that de-duplicates SIM registrations and has disconnected 70 lakh (7 million) fraudulent connections, a digital intelligence platform used by banks for financial-risk scoring, and the crowdsourced “Chakshu”/“Sanchar Sati” app with over 18 million downloads that empowers users to report and block unwanted calls [65-71][73-102]. He also described an AI-federated disaster-management system that integrates alerts from IMD, CWC and other agencies, uses cell broadcast for geo-targeted warnings, and has reduced cyclone-related deaths in Odisha to zero, a model now being promoted to the UN for global rollout [250-266][270-276].


Mathan Babu Kasilingam explained that his telecom operator follows privacy-by-design principles, holding ISO 27701 certification, and views AI adoption through pillars of responsibility, reliability, trust and privacy [113-119]. He noted early “quick-win” AI projects in fraud detection and network self-healing, but identified siloed data repositories and high infrastructure costs (80–90% of AI spend) as major obstacles, prompting a shift toward a unified data platform and enterprise-wide LLMs [124-138][149-170][225-230]. Centralising data also simplifies compliance with India’s DPDP rules and enables scalable AI-model refinement [184-187][219].


Syed Tausif Abbas introduced a voluntary AI-incident reporting schema with a taxonomy covering network components, severity and cause, which can help operators analyse failures and inform regulators [193-201]. Both Abbas and Kasilingam agreed that such standardized reporting can streamline model improvement and support cross-border data sharing, a point echoed by Gorman, who called for regulated sandboxes, open-gateway APIs and four pillars (network security, ecosystem exposure, customer services and digital skills) to combat scams collaboratively [201-207][285-306].


The discussion concluded with consensus that coordinated global effort, especially data sharing across borders and adherence to emerging standards, is essential for responsible AI deployment in telecoms [321-326][327].


Keypoints

Major discussion points


AI must be harnessed to protect customers and preserve trust.


Dr. Tangirala emphasized that AI-driven decisions affect users (outage management, grievance handling) and that clear, proactive communication is essential. He highlighted the tension between aggressive fraud and spam reduction and the need to avoid false positives while respecting privacy and regulations, and called for a “human-in-the-loop” safeguard [13-18][19-21].


Industry-wide, cross-sector collaboration is critical to combat scams.


Julian Gorman described the “scam economy” as faster than regulation and outlined GSMA’s Cross-Sector Any-Scam Task Force, which has gathered more than 40 operator case studies and is developing data-sharing proofs of concept. He stressed that regulation should focus on outcomes and that global cooperation, with India’s emerging leadership in particular, is needed to keep innovation alive while fighting fraud [26-35][36-44][45-53].


AI-powered solutions from CDOT illustrate concrete use-cases for fraud, identity-deduplication, financial risk, and disaster management.


Dr. Rajkumar Upadhyay presented tools such as “Fraud Pro” (detecting duplicate SIM registrations), a digital intelligence platform for banking risk scores, the crowdsourced “Chakshu” app, and an AI-driven early-warning system that fuses meteorological data and cell-broadcast alerts to save lives [65-73][74-84][85-102][241-276].


Service providers are grappling with AI adoption, data silos, infrastructure costs, and privacy-by-design.


Mathan Babu Kasilingam explained the provider’s journey: certification under ISO 27701, the shift from isolated data lakes to a unified AI platform, the challenge of massive GPU/compute spend (≈80–90% of AI cost), and the need to balance “quick-win” pilots with a consolidated, secure architecture that can support LLMs and self-healing networks [111-119][120-138][148-166][184-188][225-236].


A voluntary AI-incident-reporting standard (TEC) was introduced to enable systematic learning from AI failures.


Syed Tausif Abbas outlined the schema (30 fields covering incident type, severity, affected subsystem, etc.) and argued that, although not mandatory, the standard would give operators a common data set for root-cause analysis and help regulators shape AI policy [193-199][200-203][204-209][210-218].


Overall purpose / goal


The panel aimed to explore how AI can be responsibly deployed across telecom operations to build and sustain customer trust, by sharing best-practice use-cases, highlighting the need for collaborative standards and cross-border data sharing, and discussing practical challenges (privacy, cost, governance) that operators, regulators, and standards bodies must jointly address.


Tone of the discussion


– The session opened with a formal, courteous tone (introductions, opening remarks).


– As speakers presented concrete AI applications and the urgency of fraud/scam mitigation, the tone became focused and problem-oriented, yet remained constructive, emphasizing solutions and collaboration.


– When the voluntary standard and cost-optimization topics were raised, the tone shifted to analytical and forward-looking, acknowledging hurdles while expressing optimism about unified platforms and global cooperation.


– The closing remarks returned to an appreciative and collegial tone, thanking participants and reinforcing the collaborative spirit of the forum.


Overall, the conversation maintained a professional and solution-driven atmosphere, moving from introductory formality through detailed technical discussion to a concluding note of mutual respect and shared commitment.


Speakers

Dr. M P Tangirala


Area of expertise: AI in telecom, customer trust, responsible AI


Role / Title: Chair of the panel / Session moderator (as introduced to begin the session)


Anil Kumar Jha


Area of expertise: Telecom regulation, policy advisory


Role / Title: Principal Advisor, Telecom Regulatory Authority of India (TRAI) [S2]


Mr. Julian Gorman


Area of expertise: Telecom industry collaboration, anti-scam initiatives, AI-driven security


Role / Title: Representative, GSMA (Asia-Pacific) – expert in telecom collaboration and scam mitigation [S4]


Syed Tausif Abbas


Area of expertise: Telecom standards, AI incident reporting standards, policy formulation


Role / Title: Senior Deputy Director General (DDG) and Head, Telecom Engineering Centre (TEC); also holding additional charge as CMD, TCIL


Dr. Rajkumar Upadhyay


Area of expertise: Telecom AI applications, fraud detection, disaster-management systems, quantum communications


Role / Title: CEO, Centre for Development of Telematics (CDOT) [S8][S9]


Mathan Babu Kasilingam


Area of expertise: AI adoption in telecom service providers, privacy-by-design, AI infrastructure, LLM integration


Role / Title: Senior executive representing a telecom service provider (TSP) – speaker on AI adoption and privacy standards [S11]


Moderator


Area of expertise: Technology security, data privacy, cyber-security governance


Role / Title: Technology Security and Data Privacy Officer, Vodafone India Limited (over 20 years’ experience) [S12]


Additional speakers:


None. All participants in the discussion are covered by the speakers list above.


Full session report
Comprehensive analysis and detailed insights

The session opened with a formal introduction by the moderator, who introduced the Technology Security and Data Privacy Officer of Vodafone India and Senior DDG S.T. Abbas as panelists and invited the audience to focus on “balancing information and innovation with privacy and trust” before handing over to Dr M P Tangirala to chair the discussion [1-8].


Dr Tangirala set the tone by stressing that AI-driven decisions, whether in outage management, service continuity or grievance handling, directly affect customers and therefore require clear, proactive communication [13-21]. He warned that while AI can dramatically improve fraud and spam reduction, it must be deployed so as to minimise false positives and fully respect privacy and regulatory constraints [16-18]. A “human-in-the-loop” safeguard was presented as essential to prevent autonomous systems from making unchecked decisions [19-21], and he announced that the panel would hear from experts representing service providers, R&D and the DOT standard-setting body [22-25].


Julian Gorman described the “scam economy” as a threat that moves faster than regulation, noting that scammers are not bound by geography, law or funding limits [26-30]. To counter this, GSMA created the Cross-Sector Any-Scam Task Force, a coalition of more than 39 organisations from 17 countries, including Meta, Google, TikTok and AWS, aimed at identifying and prioritising joint anti-scam initiatives [31-35]. He reported that, within a few months, over 40 operator case studies from the Asia-Pacific region had been collected, demonstrating that operators can develop and implement successful scam-mitigation strategies without waiting for regulation [36-38]. Gorman argued that service-based rules risk stifling future innovation and that regulation should focus on outcomes while fostering industry-wide collaboration, especially as India rises to a global telecom leadership role [39-53].


Dr Rajkumar Upadhyay presented CDOT’s AI portfolio. He began with “Fraud Pro”, a system that groups images and demographic data to detect duplicate SIM registrations, an approach that has already disconnected 7 million fraudulent connections [65-71][73-84]. He also described a digital intelligence platform used by banks to assign a risk score to transaction recipients, enabling financial-risk indicators that block high-risk transfers [81-85]. The crowdsourced “Chakshu”/“Sanchar Sati” app, downloaded by more than 18 million users and generating 25 crore website hits, empowers customers to report unwanted calls and automatically disconnects fraudulent numbers, with 7 million connections removed through user-initiated verification [85-102]. Upadhyay highlighted that the AI platform was used to identify victims of the Balasore train accident, demonstrating AI’s utility beyond telecom services [80-85]. In the domain of public safety, CDOT has built an AI-federated disaster-management platform that aggregates alerts from IMD, CWC and other agencies, uses AI to generate geo-targeted cell-broadcast messages (a technology that sends alerts to all devices in a geographic area), and has reduced cyclone-related deaths in Odisha to zero, a model now being promoted to the UN for global early-warning deployment by 2027 [241-266][270-276].
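The financial-risk-indicator flow described above (bank queries a risk score for the payee's number and blocks high-risk transfers) can be sketched in a few lines. This is a hypothetical illustration only: the field names, scores and thresholds are assumptions, not CDOT's actual digital-intelligence-platform interface.

```python
# Hypothetical sketch of a financial-risk-indicator (FRI) check, as described
# in the session: before a transfer from A to B, the bank queries a risk score
# for B's mobile number and blocks high-risk payees. All names, numbers and
# thresholds below are illustrative assumptions.

RISK_DB = {  # stand-in for the digital intelligence platform
    "+91-9000000001": 0.92,  # well-identified fraudulent number
    "+91-9000000002": 0.40,
}

def risk_level(msisdn: str) -> str:
    """Return a coarse risk band for a payee number (unknown => low)."""
    score = RISK_DB.get(msisdn, 0.0)
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

def authorize_transfer(payee_msisdn: str) -> bool:
    """Allow the transaction unless the payee is flagged high risk."""
    return risk_level(payee_msisdn) != "high"

print(authorize_transfer("+91-9000000001"))  # high-risk payee -> False
print(authorize_transfer("+91-9000000002"))  # medium risk -> True
```

In practice the lookup would be a call to the platform's service rather than an in-memory table, but the decision shape (score, band, allow/deny) is the same.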


Mathan Babu Kasilingam outlined his operator’s AI governance framework. The company is certified under ISO 27701 (Personal Information Management System) and follows a “privacy-by-design” approach, positioning privacy as a core pillar of trust [113-119]. He traced AI adoption from the consumerisation of assistants such as Siri and Alexa to enterprise quick wins, where AI is first applied to a single function (e.g., fraud detection) to demonstrate value [120-138]. He identified two major obstacles: fragmented data silos created by separate AI projects [148-166] and the high cost of infrastructure, with 80–90% of AI spend going to GPUs, storage and compute [225-230]. To address these, the operator is consolidating data into a single AI lake, exposing it through enterprise-wide APIs, and developing purpose-built large language models (LLMs) that can serve multiple business functions while simplifying compliance with India’s DPDP (Digital Personal Data Protection) rules [184-188][170-179][219-224]. Kasilingam noted that his organisation already records incidents within its ITIL-based processes and sees the TEC schema as complementary to existing practices [193-197].
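The consolidation Kasilingam described, moving from per-project copies of the same telemetry to one governed store read through shared APIs, can be illustrated with a minimal sketch. All class and field names here are hypothetical, invented for illustration, not the operator's actual platform.

```python
# Minimal sketch of the siloed-vs-unified pattern discussed in the session:
# instead of each AI project (fraud, self-healing, ...) copying network
# telemetry into its own silo, every function reads the same single copy
# through a shared, governed access layer. All names are hypothetical.

class UnifiedDataPlatform:
    def __init__(self) -> None:
        self._records: list[dict] = []   # the single copy of telemetry
        self._consumers: set[str] = set()  # registered business functions

    def register(self, consumer: str) -> None:
        """Grant a function access through the shared layer."""
        self._consumers.add(consumer)

    def ingest(self, record: dict) -> None:
        self._records.append(record)

    def query(self, consumer: str, node: str) -> list[dict]:
        """Each consumer applies its own 'lens' to the same shared data."""
        if consumer not in self._consumers:
            raise PermissionError(f"{consumer} is not registered")
        return [r for r in self._records if r["node"] == node]

platform = UnifiedDataPlatform()
platform.register("fraud_detection")
platform.register("self_healing")
platform.ingest({"node": "bts-17", "alarm": "link-flap"})

# Both functions see the identical single copy, no duplicated silos.
print(platform.query("fraud_detection", "bts-17"))
print(platform.query("self_healing", "bts-17"))
```

The point of the pattern is governance as much as storage cost: one copy means one place to enforce access control and DPDP-style compliance, rather than one per project.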


Syed Tausif Abbas introduced a voluntary AI-incident-reporting schema devised by the TEC standard-setting body. The schema comprises 30 fields covering incident type, severity, affected subsystem, cause and impact (physical, environmental, psychological), with submitter details masked for privacy [193-201][202-207][208-218]. Although the standard is not mandatory, Abbas argued that a common taxonomy will enable operators to analyse failures, refine models and provide regulators with consistent data to shape AI policy [193-197][201-207]. He likened the initiative to the early computer emergency response teams (CERTs), suggesting that a similar mechanism is now needed for AI [195-197].
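As an illustration of what such a record might look like, the sketch below models a voluntary incident report in the spirit of the schema described: incident type, severity, affected subsystem, cause, impact, and a masked submitter. The field names are assumptions; the actual TEC standard defines 30 fields that are not reproduced in the session.

```python
# Illustrative sketch of a voluntary AI-incident record in the spirit of the
# TEC schema described in the session. Field names are assumptions, not the
# standard's actual 30-field taxonomy.

import hashlib
from dataclasses import dataclass, field

@dataclass
class AIIncidentReport:
    incident_type: str        # e.g. "false-positive-block", "model-drift"
    severity: str             # e.g. "low" / "medium" / "high"
    affected_subsystem: str   # e.g. "spam-filter", "RAN-self-healing"
    probable_cause: str
    impact: list = field(default_factory=list)  # physical/environmental/psychological
    submitter_hash: str = ""  # masked identity, never the raw submitter

    @staticmethod
    def mask(submitter_id: str) -> str:
        """One-way hash so reports can be deduplicated without identification."""
        return hashlib.sha256(submitter_id.encode()).hexdigest()[:12]

report = AIIncidentReport(
    incident_type="false-positive-block",
    severity="medium",
    affected_subsystem="spam-filter",
    probable_cause="over-aggressive classifier threshold",
    impact=["psychological"],
    submitter_hash=AIIncidentReport.mask("operator-xyz"),
)
print(report.severity)             # -> medium
print(len(report.submitter_hash))  # -> 12
```

A shared record shape like this is what makes cross-operator root-cause analysis possible: the same field in every report means the same thing, so failures can be aggregated and compared.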


The subsequent Q&A reinforced several themes. Kasilingam emphasized the value of quick-win projects and discussed the cost pressures of AI infrastructure [225-236]. Upadhyay expanded on the disaster-management system’s scalability and its potential for international adoption [241-266]. Gorman reiterated the need for privacy-enhanced cross-industry data sharing via open-gateway APIs and regulatory sandboxes [285-295]. In response to Mr Jha’s query, he proposed two global steps (cross-border secure data sharing and collective action) and two India-specific steps (domestic anti-scam measures and knowledge export) [321-326].


The moderator concluded by thanking the panelists for their “vibrant discussion on responsible AI, standards, the repository and various government apps for enhancing consumer experience,” and announced the presentation of mementos and a group photograph, underscoring the collaborative spirit of the session [328-335].


Key take-aways


– Human-in-the-loop controls and proactive customer communication are essential for trustworthy AI-driven telecom services [13-21].


– Privacy-by-design, demonstrated by ISO 27701 certification, builds customer confidence [113-119].


– A voluntary AI-incident-reporting schema with a 30-field taxonomy can increase transparency and aid regulators [193-201][202-207].


– Cross-sector collaboration, exemplified by GSMA’s Any-Scam Task Force and privacy-enhanced data-sharing sandboxes, is critical to combat scams [31-35][285-295].


– AI-based fraud tools such as “Fraud Pro”, “Sanchar Sati”, and millisecond-level call-blocking have demonstrably reduced fraudulent connections and scam calls [65-71][85-102][307-309].


– The AI-federated disaster-management system that fuses meteorological data and delivers geo-targeted cell-broadcast alerts has achieved zero-casualty outcomes in pilot regions [241-266][270-276].


Action items


– GSMA will continue expanding the Cross-Sector Any-Scam Task Force and its Southeast-Asia proof of concept [31-33].


– CDOT will promote its fraud-prevention suite, disaster-management platform, and the victim-identification capability demonstrated after the Balasore accident for international adoption [65-71][241-266][80-85].


– Kasilingam’s organisation will merge fragmented data repositories into a single AI infrastructure and expose services via enterprise APIs [152-166].


– Abbas will circulate the voluntary incident-reporting schema to encourage uptake [193-201].


– Gorman recommended establishing privacy-enhanced data-sharing sandboxes and regulatory support mechanisms [285-295].


Unresolved issues


– Detailed regulatory frameworks that enable privacy-preserving cross-industry data sharing [285-295].


– Strategies to address the shortage of skilled AI talent within telecoms [225-229].


– Methods to balance aggressive call-blocking with the guarantee of emergency call availability [307-309].


Session transcript
Complete transcript of the session
Moderator

Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in the cyber security domain and governance structures. Rounding off the panel, we welcome Mr. S.T. Abbas, Senior DDG and Head TEC, also holding additional charge as CMD, TCIL, with over 35 years of experience in telecom standards, certifications, spectrum management and network regulation. I would request all the panelists to please come forward for a quick photograph. Thank you, sirs. Please take your seats. Let’s engage deeply on how to balance information and innovation with privacy and trust. I now hand over to Dr. Tangirala Ji to begin the session. Thank you.

Dr. M P Tangirala

Chairman, member, Mr. Mitter, distinguished delegates, my fellow panelists, I welcome everyone to this second session. The clock is already ticking, so I will be brief in my opening remarks because I come between the audience and the distinguished panelists, which I don’t intend to do. The session title is Building Customer Trust Through AI-Driven Operations. The importance of trust was highlighted, among others, by Mr. Shantigram Jagannath as well, when he was speaking about AI through telecom networks and the at-scale problems that we could try and solve. Thank you. Now, while customers may not interact with AI models directly, they are affected by the outcomes of the decisions. And therefore, you know, whether it’s outage management, service continuity, grievance handling, you know, while efficiencies may improve, the responsibility for decision integrity ultimately remains with the telecom service providers.

And clear and proactive communication with the customers would become very important. And that is where, you know, there are impactful applications of AI in telecoms, in spam and fraud prevention, which a person had mentioned in his opening remarks about how 2.1 million numbers were disconnected using AI-based tracking. But the challenge is also that we need to reduce this spam while minimizing false positives, avoiding customer inconvenience, and fully respecting privacy and regulatory requirements. So that is always a big concern. Then, of course, this whole issue of the human in the loop or human in the mix. We need this automation to have an element of human control, so that the system does not run away with its own decisions.

For all these issues and more, we have eminent speakers here, both from the service providers, from R&D, as well as from the standard-setting body of DOT. I will request each of them to give their thoughts, and then maybe a few of you… Both of them have presentations to make. I’ll request them to keep it to about five minutes or so, so that we have time for further discussion. Thank you.

Mr. Julian Gorman

And the reason for it is that in the scam economy, regulation cannot move as fast as scammers. Scammers are not bound by geography. They’re not bound by laws. They’re very technically capable and they’re very well funded. They have all the things that mobile operators would like to have. I think it’s important to understand that we have to focus on stimulating innovation. At GSMA, about 12 months ago, we formed a coalition called the Cross-Sector Any-Scam Task Force. It involves more than 39 organisations from 17 countries, including the social media platforms, so Meta, Google, TikTok, AWS. And the aim was to identify and prioritise initiatives and activities that we could do as an industry to help combat scams. Now, one of those activities was: let’s gather what the industry is doing.

Now, in just the last couple of months, across Asia Pacific, we’ve gathered case studies of more than 40 instances where operators, without regulation, have developed, implemented, and successfully used some sort of strategy or service to combat scams. And I think that’s an indication, along with GSMA working globally with people like Virginia Tech and with our foundry, with our proof of concept around data sharing, that the industry is focused on this. And the danger, of course, of implementing service-based rules is that they restrict innovation in the future. And so we really need to focus on outcomes when it comes to regulation. And I think we all universally subscribe to the fact that we need to combat scams. We need to work together.

And it’s not just the people in this room. We need to collaborate and work across the ecosystem. to make that possible. I think those principles actually also apply in the broader sort of sense of the term is how do we grow 5G, how do we make 5G meaningful to the whole economy, to all users. It’s about stimulating that ecosystem and making sure that they are using 5G and 4G and mobile broadband into meaningful solutions for the population. And the important thing also for India is India is rising not just economically but also in its position in the telecom world and the GSMA sort of global ecosystem is India is a real telecom superpower and it’s on the rise.

And that means actually it cannot just be worried about its domestic situation. actually it has to embrace that statesman role to be a global leader. And so actually considering cross -border, how does India play its role in a global ecosystem are critical to actually the sustainability and growth of the global ecosystem of which India’s vision is dependent on. It cannot exist alone. And I think it’s important that when we focus on innovation and solving things like scam, it is as part of a global community. It’s not just a national community. And so the actions we take, the innovations we look to stimulate have to be part of that global solution. Thank you.

Dr. M P Tangirala

That was thought-provoking, some of the things that you said about collaborative innovation, or innovation through collaboration. We will come to that in a bit when we go to the questions. So, with that, may I now request Dr. Rajkumar Upadhyay, CEO of CDOT, for his presentation and opening remarks.

Dr. Rajkumar Upadhyay

Respected Chairman, Mr. Lahoti, Mr. Mittal, Mr. Tangirala, fellow panelists, industry leaders, policy makers, experts, ladies and gentlemen, thank you for inviting me here. I think in the previous session there was talk about how you optimize your network, how you self-heal your network, how you make corrections in the network, so I’m not going to talk about that. Even though we also, as India, have developed our own 4G and 5G, and because we were the latecomers we used quite a bit of AI in terms of predicting faults, since a lot of logs are generated by various systems, I’m not going to talk about that. I’m going to talk about… where is the PPT?

where is the PPT? So I’m going to talk about some use cases which we have developed during the last few years. We are CDOT. We were established in 1984, and we have the legacy of developing rural telecommunication. We work primarily in three to four areas: mobile wireless; cyber security and information security, which is done by quantum (quantum and AI are horizontal things); and advanced telecom applications. These are our product lines, and all of these products actually use AI, because AI is so pervasive; without AI, you cannot function. So all these product lines, whether it is mobile, cyber security, information security, or disaster management applications, are using AI in a big way.

So one of the key applications, a key product that we have developed, is Fraud Pro. What it does is detect the fraudulent connections in the system. I think you may be aware of the cases of Jamtara and Mewat, and all these SIM factories running. And these SIM factories were destroyed by this particular software. What it does is group all the images of the same person, because people would go and buy 500 SIMs using the same Aadhaar card or the same driving license; this is what was happening. So it detects that, and it not only matches the images, it also matches the demographics: name, father’s name. And it sees whether the photos are the same but the names are different. So using… I will come to the number.

I think some number was described in the beginning of how many connections were disconnected using this software. So this is deduplication, and in fact we developed it for telecom; it is being used now, and going to be used, in driving license, passport, income tax and MGNREGA deduplications. The second one: I think this slide mentions the AI analysis, 86%, 7 crore mobile numbers. And it was very well used even to, you know, find dead bodies in the Balasore train accident. The first use case of this particular platform was to identify the dead bodies. The second one is the financial risk indicator. I think you would have seen in the newspapers that RBI has mandated the banks to use a financial risk indicator.

What it does: if A is transferring money to B, the credentials of B are checked with the platform which we have developed, which we call the digital intelligence platform. And the platform returns a figure: this is a risky number, a medium-risk or a low-risk number. If it is a high-risk number, the bank will not let that transaction happen. And it has prevented a lot of fraud cases. All the banks are actually using this FRI, which is able to tell that the B number where the money is going is a, you know, dangerous number; if it is a well-identified fraudulent number, the money is stopped. The next is Chakshu. Chakshu is again a crowdsourcing platform

wherein if you get a fraudulent call or a promotional call or any kind of call, or fake KYC, or someone faking as police, you can report it, and using crowdsourcing we are able to disconnect the number and take action. And this is again using our Sanchar Sati app. Just to bring to the notice of the audience, rarely does a government app have downloads of 18 million plus. There are 18 million plus downloads. The hits on the website of Sanchar Sati are 25 crore. Very rarely do you see that. So this shows the popularity of how customers are protected using the AI-based platform. This is again an AI-based platform. Then TAFCOP and CEIR here, I don’t know how many of you have used them.

I would request those who have not used it, please use it. In Sanchar Sati, you go, you give your mobile number, and it will tell you all the connections under your name, using fuzzy AI and fuzzy logic. It doesn’t ask you any other detail. And that number we ask because we want to verify that it is you, and an OTP is sent. Otherwise, no other details are asked. Just by that detail, we are able to find out how many numbers are there. And just to bring to your notice, 70 lakh connections have been disconnected using this. People have themselves disconnected their numbers, because it also tells you: this is not my number, disconnect it. This was a big problem for us.

When we blocked the SIMs in the country, these guys went outside and started pumping calls using Indian numbers. It’s spoof calling; this technology is available. I can get a call from my own number using this. We were getting 15 million calls per day, 15 million. And this was a very complex system, because when the call hits the gateway, the system has to decide within a millisecond whether to let the call go through or block it. The decision has to be in milliseconds, and it has to be zero error, because no genuine call should be blocked. Because rigorous testing happened with all the operators, today we have totally neutralized this. Of course, they have found another way: they have taken SIMs in places like Cambodia, Indonesia and Myanmar, and from there they’re calling. So again the AI-based system is alerting that these are the numbers of that country, and we are alerting the governments of those countries.

AI-based security solutions: cyber is another major area for all of us. Somebody was mentioning that AI will do cyber attacks, and it’s true; we see in our systems AI attacking the systems. Earlier it was human, and now it is fully AI, so you have to use AI to counter it. So the cyber security solution we provide today is fully AI-based, so that it can coordinate between various particular solutions.

Disaster management: we have used AI here too. You may be aware that India has deployed an ITU CAP-based disaster management system as well as a 3GPP cell-broadcast-based disaster management system, which is implemented across India. We use AI here because, for example, IMD is giving me a warning on rain, CWC is giving me a warning on flood, a weather report is coming; so we federate all these inputs using AI, and you have less than two minutes. This I won’t go through. In the NMS, of course, we use AI to see when the network is likely to be down, and this is actually implemented in BharatNet 1 and 2; it tells you that this node is likely to fail, this router is misbehaving.

So in a nutshell, what I am saying is that a lot of AI applications are needed on the customer side to protect the customers, and India has made good progress in terms of reducing frauds and fraudulent connections, therefore safeguarding the customers. And we will be very happy to take these technologies to any part of the world, given that they are implemented at India scale. Thank you so much.

Dr. M P Tangirala

Thank you, Dr. Upadhyay, that was very interesting: the flavor of the kind of R&D that has been done and the apps that have been developed. We now move to someone on the panel, Mr. Mathan Babu Kasilingam, who is representing the service providers. You carry a lot of the burden of customer expectations on your shoulders, so do tell us about your thoughts on the topic of today’s session. Thank you.

Mathan Babu Kasilingam

So, a few things that we have done as a service provider. The majority of the topics have been touched upon, from fraud to cyber security. I’m trying to speak to the role of AI in terms of establishing trust. Our entire ecosystem, the telecom ecosystem, relies primarily on customer trust. So to ensure that, we have given a trust journey to our customers, and in the means of adopting AI, there are various core secure pillars that we have followed through. Any AI adoption should have responsibility, reliability, trust and privacy to deliver that. So as a TSP, when we embarked on the journey of AI, that’s the first and foremost core element that we took into consideration.

We are one of the TSPs in the country who have taken the journey of privacy for the past five-plus years now. We are completely certified on PIMS, ISO 27701. We are the only TSP in the country who have governed privacy by design and have certified ourselves against that as well. So that is only to ensure that trust is given back to the customer. Now I will come back to the journey of AI adoption. The first thing that happened is the consumerization of AI. AI has been part and parcel of our life since all of us learned about Siri and Alexa. Day to day at home, we have been living with AI for many, many years now. So the consumerization of AI happened many, many years back.

What happened in enterprises: there came the pressure of adopting AI in the enterprise. In that, the first and foremost thing that we did is take AI as applied in the consumer space and try to adopt it in enterprises as well. Obviously, it has its own benefit, the benefit being it gives a quick win, right? You get to see an early win by deploying AI in your setup. So how enterprises embarked on that journey is: you pick and choose one department, one function, one key problem that you are faced with, deploy AI in it, and you see results. So we saw all of these examples. Fraud: it’s a serious problem for the entire country as a whole.

What can we do? Can we leverage AI? AI is capable of giving me a million eyes and a million hands in place of a single human operating that, right? So the power of AI came to our aid, and today we are able to identify fraud. Sir also briefly touched upon cyber security. As TSPs we are national critical infrastructure, and today we are pressed with a serious volume of attacks. India in the past year has hosted many mega events, whether the G20 or the Maha Kumbh; there were geopolitical tensions that we went through, and now we are hosting the AI summit. So national critical infrastructures like TSPs are faced with an increased volume of cyber attacks. If I had to quantify the increase, it would not be 10 times; it would be many multiples of that. And in the cyber field we are also limited by the number of professionals we have.

So the power of AI is not just for the attackers; as defenders we have also started leveraging it, asking how we can use AI to combat them. Those are quick wins, right? Network operations: with the advent of 5G we wanted self-operating, self-healing networks. So in various small areas where AI could be embedded to realize very quick business value, enterprises started adopting it. That’s the first part: we wanted the quick win, and we saw the quick win. The challenge that came with it is that we ended up with a piecemeal approach. The data that we were working upon was almost the same; you gather this intelligence information from the same network elements and nodes.

But we started looking at it through different lenses. All I needed to do was look through a different lens, but instead I started creating individual siloed repositories of data. So if you look at corporates that have embarked on AI today, you will see many isolated silos of data created, because each function wants its own lens, and instead of viewing the same data through a different lens, they created a totally isolated copy of the data. The second thing that happened is the mammoth amount of infrastructure. Anybody who touches AI today talks about GPUs, the humongous power required to run them, and so on. So at the enterprise level, it is siloed data and siloed infrastructure that have been taken on.

So where we are in the journey today: we had the quick wins, we have taken the first few steps, but we are relooking at it from a different standpoint. We have stepped back. Is there data deduplication that can be done today? Instead of the 20 or 30 silos I have created, do I want one single repository of this data? Thereby the security element also becomes easier: if the data is siloed I have to secure it everywhere, but bring it into one area and I have the ability to secure it well. Can I leverage a common platform infrastructure, the AI infrastructure required to hold the data and then do this work?

We are doing that. You can still leverage comprehensive LLMs; individual businesses in a variety of functions have taken on their own purpose-built LLMs, right? You will have an HR function; the provider for HR, say SAP, would be driven primarily for HR, and the surrounding AI-enabled systems would be built on top of it. There will be a self-healing network; the network provider builds an AI-driven system. So we are now stepping back to see whether we can build a comprehensive central LLM that will still deliver the purposes required. So at Vi, the premise is: core infrastructure, put the data in one comprehensive place, expose it through an interconnected enterprise API architecture, so that businesses and users do not have to talk to the data directly.

They talk through the enterprise model, touch the AI infrastructure, and reach back to the data for various purposes. It could be to serve my service provider, to serve my customers, or my customer support, bridging them. That’s the platform journey we are on. With this consolidation, and privacy by design as I said, we are able to build in DPDP compliance, including data minimisation: with the data in one area, we are able to minimise it as appropriate. That’s what I wanted to share. Thank you.
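The pattern described above, one consolidated repository with every business function going through an enterprise API layer rather than touching the data directly, can be illustrated with a minimal sketch. All names, fields and policies here are invented for illustration; they are not the operator's actual implementation.

```python
# Sketch: businesses never query the data store directly; every request goes
# through an enterprise API that enforces a per-consumer data-minimisation
# policy (in the spirit of DPDP compliance) before touching the unified store.
# All identifiers and records below are illustrative assumptions.

UNIFIED_STORE = {  # one consolidated repository instead of 20-30 silos
    "msisdn:9876543210": {"name": "A. Kumar", "plan": "postpaid", "usage_gb": 41.2},
}

ALLOWED_FIELDS = {  # which fields each business function may see
    "customer_support": {"plan", "usage_gb"},
    "fraud_engine": {"name", "usage_gb"},
}

def enterprise_api(consumer: str, key: str) -> dict:
    """Single mediated entry point to the unified data platform."""
    if consumer not in ALLOWED_FIELDS:
        raise PermissionError(f"unknown consumer: {consumer}")
    record = UNIFIED_STORE.get(key, {})
    # data minimisation: return only the fields this consumer is entitled to
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS[consumer]}

print(enterprise_api("customer_support", "msisdn:9876543210"))
# → {'plan': 'postpaid', 'usage_gb': 41.2}
```

Centralising access like this also simplifies security: the policy lives in one place instead of being re-implemented in each silo.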

Dr. M P Tangirala

Thank you. Fascinating. Now we come to Mr. Abbas, Senior DDG from TEC, the standards-setting body. He has promised that he will make a different presentation. So over to you, Mr. Abbas.

Syed Tausif Abbas

…the name of the application, the technologies used, and the purpose. Then there is the impact or harm information for the incident: physical, environmental, property or psychological harm. These also form part of the 30 key fields in which input is to be given for the schema. Then there is information which is to be masked later on: the name of the submitter, email, and other submitter information, which will be redacted. Similarly, the taxonomy, as I said earlier, classifies the incident into different categories: the incident type, whether it is a service quality outage, a security breach or AI mismanagement; the affected system, whether the core is affected, the radio access network, the edge, IoT components, the physical layer, or any user-facing application; and the incident severity, whether it is critical, high, moderate or low. That also will be recorded, along with the cause of failure if it is known to the user; otherwise the deployer or the service provider has to enter what the cause of the failure was. Basically, this database will give input to the service providers themselves: they can examine it, analyze it, and realign their AI-related applications so that these incidents do not recur in future. So it is a gradual self-improvement of their own AI systems, which will then be error-free and give the best results. The standard has been made only for this; it is not going to prescribe any mitigation mechanism.

This is to be decided by the deployer who has deployed those AI applications, and it is not mandatory. It is just a beginning. When computer systems were new, initially there was not much in place, but once incidents started, the Computer Emergency Response Team was proposed and began collecting data on computer incidents. Similarly, since AI has already begun, we should have this mechanism in place so that an AI incident reporting database is also available. Thank you so much.
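The schema and taxonomy described above can be sketched as a simple record type. The actual standard defines roughly 30 fields; only a handful are shown here, and every field name below is an assumption made for illustration, not the standard's wording.

```python
# Illustrative sketch of an AI incident record following the taxonomy
# described in the talk: category, affected system, severity, cause, plus
# submitter fields that are redacted before the record is shared.

from dataclasses import dataclass, asdict

CATEGORIES = {"service_quality_outage", "security_breach", "ai_mismanagement"}
AFFECTED = {"core", "ran", "edge", "iot", "physical", "user_application"}
SEVERITY = {"critical", "high", "moderate", "low"}

@dataclass
class AIIncident:
    application: str           # name of the AI application involved
    technology: str            # technology used (e.g. model type)
    category: str              # incident taxonomy category
    affected_system: str       # which part of the network was affected
    severity: str
    cause: str = "unknown"     # entered by the deployer if known
    submitter_email: str = ""  # submitter info, masked later on

    def validate(self) -> None:
        assert self.category in CATEGORIES, "unknown category"
        assert self.affected_system in AFFECTED, "unknown affected system"
        assert self.severity in SEVERITY, "unknown severity"

    def redacted(self) -> dict:
        """Mask submitter-identifying fields before the record is shared."""
        record = asdict(self)
        record["submitter_email"] = "REDACTED"
        return record

incident = AIIncident("traffic-steering", "reinforcement learning",
                      "service_quality_outage", "ran", "moderate",
                      submitter_email="ops@example.invalid")
incident.validate()
print(incident.redacted()["submitter_email"])  # → REDACTED
```

A shared structure like this is what lets every provider's submissions be compiled uniformly, which is the benefit to regulators mentioned later in the session.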

Dr. M P Tangirala

Thank you so much, Mr. Abbas, for presenting what is arguably the world's first such standard; congratulations. Since we are fresh off your presentation, I will start with a question about what you have just presented. You said it is not mandatory, it is voluntary; of course we will see where that journey goes, as you noted with CERTs coming in after computers. But can you tell us a little more about what value it offers telecom service providers if they voluntarily adopt this standard?

Syed Tausif Abbas

Telecom service providers have already started using AI applications in network optimization, in services to users, in orchestration of resources, and in many other things. So if any incident that gives an unintended outcome is recorded and reported, it is in the best interest of the service provider that those incidents are analyzed and rectified so that they do not occur in the future. In this way it can be best utilized by the service provider. And since the structure of both the schema and the taxonomy is given, every service provider will compile data in the same structure, which will benefit regulators and policy makers in deciding how to go about AI policy, because of the inputs we get from those incidents.

Dr. M P Tangirala

So therefore, Mr. Kasilingam, do you think voluntary adoption of this standard offers any benefit from the side of a service provider?

Mathan Babu Kasilingam

I think, as sir rightly mentioned, incident recording is not a new phenomenon, at least for people who have been in the IT industry. Recording cyber-specific incidents has additionally started happening; however, we have tied it back to the same ITIL framework that has historically been followed. Now AI is yet another tool that ends up producing an outcome, and the outcome could be erroneous; it could be an event or an incident, and bias could be one of the situations that arise. So as TSPs, while we have individually started doing this internally as we have adopted AI, these are recorded events. But one way the framework TEC has put across helps is that it can be streamlined so that the rest of the industry can refer to it as well.

Because today there are no standalone companies, right? Every company is in the area of digital and IT; they just happen to work in their own function. If you ask a bank, the bank has to say: I am an IT company in the business of banking. That is how it has changed. So IT plays a crucial role, and AI will be a supporting arm in that. This record-keeping will give us the ability to scale our AI and models appropriately. With India's ambitions, and with three homegrown LLMs already announced, developed here, a platform like this will possibly help us manage and then refine our models well.

Dr. M P Tangirala

So you mentioned how enterprises are becoming digital first, and you also spoke in your initial remarks about AI for enterprises. How do you look at controlling the costs of AI for enterprises? You did deal with the infra part, but any thoughts on the costs overall?

Mathan Babu Kasilingam

Currently, a significant amount is still being incurred. The larger chunk of cost optimization comes from the infrastructure as a whole: about 80-90% of the cost of AI goes primarily to the infra itself, both in storage and in compute. The rest obviously goes to skills. So today, while we definitely showcase to the world the humongous talent being built in the AI area, for an enterprise, having these skilled engineers to build upon AI is still a work in progress. So I think in this journey, we are now looking at AI coming to the aid of AI.

So we were in conversation with one of the AI-driven companies yesterday, and the way they put it to us: earlier, the total employee base was 10,000. Now there is refinement and optimization by incorporating AI, and thereby a reduction in the employee base. But if you look at the people operating the AI, the number has gone from 30 to 3,000. So you cut down here and increase over there. We were trying to tell them that the true power of AI is actually in minimizing the human touch AI itself requires; reducing that human dependency, by upskilling appropriately, is an important element for us to get right. Thank you.

Dr. M P Tangirala

I'll come to you, Dr. Upadhyay. I know I cut you off, or rather gave you a time pause, earlier. Could you tell us a little more about what you're doing with respect to disaster management, the application that you spoke about?

Dr. Rajkumar Upadhyay

Disaster, yeah. So disaster management: you know how it used to happen earlier. Suppose there is a cyclone in Odisha. A mail would go from IMD to the chief secretary. The chief secretary would write to the district collector. The district collector would, in his best way, try to warn people before the cyclone came. And we used to lose thousands of lives, and property. Today, using AI and sensors, the system we have built is one unified platform where all the alert-generating agencies, IMD, CWC, FRI, DGSE and so on, are connected automatically through APIs. All the telecom operators are connected. All the alert dissemination agencies, like the SDMAs in the states, are connected.

So it is all one powerful system. Now an alarm comes, a sensor alarm, that a cyclone or heavy rain is likely. This is automatically read by the system. It prepares the message and finds out the geo-targeted area. Because earlier the problem was that these kinds of threats would go out broadly and nothing would happen in most places, so people would take the next one very casually. But today it is a geo-targeted system; it alerts only the people who are in that belt. Suppose there is a cyclone hitting Gopalpur in Odisha: it will alert only the people who are likely to be affected, well in advance. And it will also tell you whether you need to evacuate.

If you need to evacuate, it tells you what the arrangements by the government are; otherwise, to stay indoors. So all that happens, and it was actually presented in Parliament. Take the case of Odisha, where thousands of people died in 1999: the death toll now is zero. After that, we implemented something more, because India is a large country and sometimes a very large population has to be alerted, and in those cases SMS gets delayed. SMS is a sequential process, sent out by SMSCs. There was a newer technology called cell broadcast, where you don't send individual messages through SMS; you just broadcast. So we developed a cell broadcast technology, and it was recently used in Cyclone Montha. And how do we use AI?

Because now I am getting inputs from various agencies. My system federates all this information using AI, builds one particular message, finds out the right area where it is likely to hit, and sends it only to those people. And the beauty of this system is that earlier there was group SMS, which would find the people registered as staying there. Now, even if you are a foreigner who happens to be at that particular place at that time, it will pick your number and give you the message. So if a tsunami is coming, we don't know, people may be here and there at the beach. This has worked very well, and in fact we have published a paper in ITU; ITU has taken it up as a report. Going forward, we feel this particular system will meet the UN's requirement of early warnings for all by 2027. We are already talking to many countries, and soon this solution will be deployed in a few countries. Thank you.
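The flow just described, federating alerts from multiple agencies, composing one message, and broadcasting it to everyone camped on cells in the target belt (visitors included, since no subscriber list is consulted), can be sketched as follows. All agency names, cell IDs and handsets below are invented for illustration.

```python
# Hedged sketch of federated, geo-targeted cell-broadcast alerting.
# Alerts from several agencies about the same event are merged into one
# message plus a target area; delivery goes to every handset in those cells,
# regardless of its home network, which is how visitors are reached too.

def federate_alerts(alerts: list[dict]) -> dict:
    """Merge per-agency alerts into one message and a union of target cells."""
    cells, hazards = set(), set()
    for a in alerts:
        cells |= set(a["cells"])   # union of geo-targeted cell areas
        hazards.add(a["hazard"])
    message = f"ALERT: {' / '.join(sorted(hazards))}. Follow evacuation instructions."
    return {"message": message, "cells": cells}

def cell_broadcast(alert: dict, handsets: list[dict]) -> list[str]:
    """Deliver to every handset camped on a target cell; no subscriber list."""
    return [h["id"] for h in handsets if h["cell"] in alert["cells"]]

alerts = [{"hazard": "cyclone", "cells": {"GOPALPUR-1", "GOPALPUR-2"}},
          {"hazard": "storm surge", "cells": {"GOPALPUR-2"}}]
handsets = [{"id": "local-1", "cell": "GOPALPUR-1"},
            {"id": "roaming-visitor", "cell": "GOPALPUR-2"},
            {"id": "far-away", "cell": "BHUBANESWAR-9"}]

alert = federate_alerts(alerts)
print(cell_broadcast(alert, handsets))  # locals and visitors in the belt only
```

The key contrast with SMS is visible in `cell_broadcast`: it filters by current cell, not by subscriber identity, so delivery is parallel and covers anyone present in the area.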


Dr. M P Tangirala

In fact, in your presentation you also spoke about Fraud Pro and so on. But in the interest of time, I'll move to Mr. Gorman on fraud and scams. In your opening remarks you talked about the importance of collaboration across sectors, and also the opportunities for engendering innovation through collaboration to combat scams. Could you elaborate a bit on that?

Mr. Julian Gorman

Sure. Thanks for your question. I think this builds on the last couple of comments. What we're talking about here is sharing data between multiple parties through standardized interfaces and then using AI, or something like it, to produce a good outcome. All these things are innovations; they're on the leading edge of something. If I start with the first thing, data sharing through standardized interfaces: GSMA has the Open Gateway APIs program, and that is contributing data points which can be used in assessing risk for transactions. There's other data that can be shared that could help address scams earlier in the cycle; there are lots of other data points, and that's the proof of concept GSMA is working on in Southeast Asia, sharing data.

The challenge with doing that is that you're at the borders of regulatory compliance. You're talking about private or personal information, or maybe not; there's sometimes debate. But to be effective, you're talking about being able to measure the risk to a particular individual user by sharing information across multiple parties. That requires some regulatory support, sandboxes or other activities, to develop the innovation that finds the solutions that help combat scams. One of the things we need to focus on in industry is how we create the nurturing environment that permits exploration of data sharing in a privacy-enhanced way. There are lots of nice new technologies that have the impact while complying with the regulations and maintaining the privacy we want.

But ultimately, from a mobile operator point of view, I would say there are four pillars in combating scams. The first is the network: making sure the network cannot be manipulated in favor of the scammers, through CLI spoofing and all that sort of stuff. Let's cut that out, and if you introduce AI, there are other things you can do on top. The second is what mobile operators can expose to the ecosystem so that the ecosystem can measure and respond to risk: Open Gateway APIs is one thing, the POC I talked about before is another, and there may be others. The third is what mobile operators can provide as services to their customers. In the same way that in the physical environment you can provide hard hats and the like, there are things that help customers, which they can choose to acquire and use of their own accord to protect themselves online. And the fourth is digital skills. Historically we've treated digital skills as a destination; in actual fact we now know we're never going to hit a final point, because skills have to keep adapting. It's critical that we focus on all four pillars, and that from a regulatory and ecosystem point of view we're collaborating so the data can flow, we can try and test things, and we overcome the prejudice that may be stopping innovation because there's an expectation that you can't do these things. It requires policy makers and regulators to sponsor and nurture this. I can guarantee, working with 90% of mobile operators in Asia Pacific, that if I start a sentence with "I want to suggest we use consumer data for...", I won't get past halfway through before they say, "Nope, you can't do that." But in actual fact, if we want to be successful, no single entity, and especially no mobile operator, has all the information.

I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think the traffic looks a bit dubious: you've only got to look at the Optus outage in Australia, where three or four people died because they couldn't call emergency services. You don't want to be taking that action alone. It requires collaboration, regulatory support and policy support.
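The point that no single party has all the information, and that high-impact actions such as suspending a SIM should not be taken automatically on one party's partial view, can be sketched numerically. The parties, scores and thresholds below are entirely invented assumptions, not any real scoring scheme.

```python
# Illustrative sketch: risk signals shared by multiple parties (operator,
# bank, platform) via standardized APIs are combined before any decision,
# and the highest-impact outcome is escalation to a human, never an
# automatic disconnect. All thresholds here are made up for the example.

def combined_risk(signals: dict[str, float]) -> float:
    """Average the 0-1 risk scores contributed by each party."""
    return sum(signals.values()) / len(signals)

def decide(signals: dict[str, float]) -> str:
    score = combined_risk(signals)
    if score >= 0.8:
        return "escalate_for_human_review"  # no automatic SIM shutdown
    if score >= 0.5:
        return "step_up_verification"
    return "allow"

signals = {"operator": 0.9, "bank": 0.9, "platform": 0.8}
print(decide(signals))  # → escalate_for_human_review
```

The design choice worth noting is that even the highest combined score only escalates; the irreversible action stays with a human, which is exactly the safeguard the Optus example argues for.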

Dr. M P Tangirala

Yeah, thank you. Network, ecosystem, hard hats and upskilling: I think that's a good way to end the discussion here on the panel. But we have time for one question. Yes, Mr. Jha. We have less than two minutes.

Anil Kumar Jha

Thank you. Very quick, very brief. A question for Mr. Julian Gorman. As we have said, we are under attack and may be attacked at any time. We have also said that we should align with global trends in order to combat these frauds. You have heard our panelists, who are icons in their fields of manufacturing, standardization and as TSPs. Could you suggest two steps that global leaders should take to align the world, and two steps that India should take to align with the world? Thank you.

Mr. Julian Gorman

Two steps globally. The proof of concept we are doing in Southeast Asia is actually to prove that data can be shared, domestically but also across borders, in a safe and secure way, and that it has an impact on controlling scams. One thing we need to remember with scams is that all we are doing by taking action against them is increasing the cost of the scammers' business case; and if we increase the cost of the business case here, then another area becomes more favorable, which could mean different types of scams or different locations. And that leads to what we need to do globally: we need to act across borders, as a collective global community.

GSMA has a program called United Against Scams; there will be a lot about that in Barcelona. India is obviously taking great steps domestically; sharing that knowledge across borders, and being able to share that data across borders, is important. So I would leave it at those two points.

Dr. M P Tangirala

Thank you. It also gives us pause for thought: maybe as regulators we need to look at collaborating across regulators, because there are again sectoral issues that we need to address. And with that we are now at the end of the session. I would request the audience to give a big round of applause to my panelists, who have given us very good insight into the topic at hand. Thank you so much.

Moderator

Thank you, moderator sir, and all our distinguished panelists for such a vibrant discussion on the use of responsible AI: the standards, the repository, and the various government apps for enhancing the consumer experience. Your insights will greatly benefit the overall digital ecosystem. Now I would request Dr. M.P. Tangirala to present mementos to our distinguished speakers as a token of appreciation. First to Mr. Julian Gorman. To Dr. Rajkumar Upadhyay. To Mr. Mathan Babu. To Mr. S.T. Abbas. Now I invite Shri A.K. Jha, Principal Advisor, TRAI, to present a memento to the moderator of this session, Dr. M.P. Tangirala, as a token of appreciation for moderating such a productive session. Thank you so much, sir. Now I take this opportunity to invite all the speakers for a group photograph. I once again request Chairman sir M.P. Tangirala, Secretary sir, and all the Principal Advisors to please join the session speakers of this panel for a group photograph. Please give a huge round of applause to all the panelists for joining us. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (37)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“The moderator introduced the Technology Security and Data Privacy Officer of Vodafone India and senior DDG S.T. Abbas as panelists.”

The knowledge base lists the Technology Security and Data Privacy Officer at Vodafone India and senior DDG S.T. Abbas as panel members, confirming their identification in the session [S100].

Confirmedhigh

“AI must be deployed so as to minimise false‑positives and fully respect privacy and regulatory constraints.”

A related statement in the knowledge base stresses that false-positives should be kept very low when deploying AI for fraud detection, supporting the claim [S105].

Additional Contextmedium

“The session highlighted the need for enhanced collaboration among regulators across different sectors.”

The knowledge base notes that Dr Tangirala concluded the session by emphasizing the need for greater collaboration among regulators, providing additional context to the report’s emphasis on cross-sector cooperation [S1].

Additional Contextmedium

“The scam economy moves faster than regulation, prompting GSMA to create the Cross‑Sector Any‑Scam Task Force involving many organisations.”

The knowledge base reports large-scale scam activity (e.g., billions of spam instances and millions of scammers flagged), underscoring the magnitude of the problem that the task force aims to address [S54].

External Sources (110)
S1
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S2
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Anil Kumar Jha: Principal Advisor, TRAI (Telecom Regulatory Authority of India)
S3
WS #93 My Language, My Internet – IDN Assists Next Billion Netusers — – Anil Kumar Jain: Chair of UASG at ICANN, Former CEO of National Internet Exchange of India Anil Kumar Jain: Currently…
S4
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Mr. Julian Gorman: Representative from GSMA, expert in telecom industry collaboration and anti-scam initiatives across …
S5
Building Indias Digital and Industrial Future with AI — -Julian Gorman- Head of APAC GSMA
S6
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Mathan Babu Kasilingam- Syed Tausif Abbas – Syed Tausif Abbas- Mathan Babu Kasilingam
S7
Final Report — – 12) Russian Federation ʹ H.E. Mr Rashid Ismailov, Deputy Minister of Telecom and Mass Communications. – 13) Viet Nam …
S8
WSIS Prizes 2025 Winner’s Ceremony — – **Rajkumar Upadhyay** – Dr., Representative from Centre for Development of Telematics, India India’s AI and Facial Re…
S9
IndoGerman AI Collaboration Driving Economic Development and Soc — -Dr. Rajkumar Upadhyay- CEO of Center for Development of Telematics (CDOT), expert in telecommunications, quantum commun…
S11
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S12
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S13
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Building the Next Wave of AI_ Responsible Frameworks & Standards — “human in the loop is a first class feature not a failure point … design the system … transition … to a human”[79]…
S16
Science AI & Innovation_ India–Japan Collaboration Showcase — I think other is I definitely feel. I feel that we cannot discard the human in the loop. I feel like AI has to make. the…
S17
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — – Anil Kumar Lahoti- Dimitris Papastergiou Cross-sector coordination is vital for cyber resilience due to interconnecte…
S18
7th edition — Most spam originates from outside a given country. It is a global problem requiring a global solution. There are var…
S19
(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144 — In summary, the analysis delved into various aspects of the global information ecosystem and its challenges. It highligh…
S20
TradeTech’s Trillion-Dollar Promise — Furthermore, cooperation between nations, the private sector, and civil society is vital for ensuring the development of…
S21
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I think I would say that the mindset change which we have to move towards is a mindset of an ecosystem. Because we can’t…
S22
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S23
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — However, if a sandbox had been in place, the measure could have undergone comprehensive testing and analysis, thereby av…
S24
Global telecommunication and AI standards development for all — India has been chosen to host the distinguished World Telecommunication Standardisation Assembly (WTSA 2024), set to tak…
S25
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S26
Advancing Scientific AI with Safety Ethics and Responsibility — So, those things are we are trying to do some assessments from the incident perspective. So, if you go to read the incid…
S27
Secure Finance Risk-Based AI Policy for the Banking Sector — But of course, we need to engage in it. We need to engage in these technologies and build on them. Otherwise, you know, …
S28
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S29
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S30
India’s banks encouraged to adopt AI for consumer protection — Indian banks shouldharness AIto improve internal controls and address customer complaints more effectively, according to…
S31
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The misuses which have been reported so far are basically the use of generic synthetic identities and deepfake documents…
S32
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Amal El Fallah Seghrouchini:Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there …
S33
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “And spanning all of those, I think the most impactful use cases that we have seen, certainly in fraud and scams remedia…
S34
Overcoming policy silos: the next challenge in Internet governance — Stakeholders and policymakers approach the same issues, both at national and global levels, from various angles and poli…
S35
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S36
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S37
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S38
Importance of Professional standards for AI development and testing — The discussion maintained a serious, professional tone throughout, reflecting the gravity of the subject matter. While c…
S39
Advancing Scientific AI with Safety Ethics and Responsibility — The panelists agreed that safety measures must be systemic rather than purely technical, requiring integration of existi…
S40
Building Scalable AI Through Global South Partnerships — The speakers demonstrated strong consensus on the need for government partnership, South-South collaboration, digital in…
S41
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S42
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — High level of consensus across all speakers, with particularly strong alignment between industry and regulatory perspect…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Very high level of consensus with no significant disagreements identified. This strong alignment across industry, govern…
S44
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S45
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Trust requires an ecosystem approach with partnerships across the value chain Unexpected consensus across telecom, rese…
S46
AI and Cybersecurity — Humans are involved in the development and operation of technologies
S47
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S48
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — Second, trust and safety must be embedded across the digital ecosystem through regulation, accountability and sustained …
S49
Multi-stakeholder Discussion on issues about Generative AI — Thus, collaboration, dialogue, and capacity-building around AI are encouraged. Collaboration is necessary due to the cro…
S50
Strategic Action Plan for Artificial Intelligence — A barrier to AI developments is that developers may not have access to certain data, because it is technically protected…
S51
Interim Report: — 39. There is, today, no shortage of guides, frameworks, and principles on AI governance. Documents have been drafted by …
S52
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — After having generated this path, it also sends out a series of routine legal requests that we require for most investig…
S53
GOVERNING AI FOR HUMANITY — – Service policy dialogues with multi-stakeholder inputs in support of interoperability and policy learning. An initial …
S54
Secure Talk Using AI to Protect Global Communications & Privacy — High level of consensus with significant implications for industry transformation. All speakers agree that traditional a…
S55
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — High level of consensus with significant implications for fraud prevention policy. The alignment across diverse stakehol…
S56
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S57
How Trust and Safety Drive Innovation and Sustainable Growth — No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So …
S58
About the Authors — II: A direct corollary of the cost-effectiveness principle is that regulatory policy should be functionality-based, rath…
S59
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — The ethical use of data by private companies was discussed, with emphasis on long-term sustainability and integrity in b…
S60
Harmonizing High-Tech: The role of AI standards as an implementation tool — By uniting service quality-focused regulators with companies adept in the creation of service quality key performance in…
S61
Agentic AI in Focus Opportunities Risks and Governance — Enterprise guardrails & risk management Industry favours globally‑recognised, voluntary standards rather than prescript…
S62
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S63
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S64
India’s banks encouraged to adopt AI for consumer protection — Indian banks should harness AI to improve internal controls and address customer complaints more effectively, according to…
S65
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — During the discussion on consumer protection and technology, several key topics were explored. One of the main points ra…
S66
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S67
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Julian Gorman from GSMA emphasized that combating scams requires cross-sector collaboration, noting that scammers operat…
S68
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — Collaborative approach to tackle scams involves telecom operators, police, prosecutors, cybersecurity agencies and natio…
S69
How .POST powered services build Cyber Resilience within the global Postal and Logistics Sector — International collaboration is essential for combating cross-border postal scam campaigns and sharing threat intelligenc…
S70
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The misuses which have been reported so far are basically the use of generic synthetic identities and deepfake documents…
S71
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S72
India’s UIDAI rolls out AI-enabled biometric deduplication and document verification platform — UIDAI has deployed an advanced platform that uses AI-enabled models to improve biometric deduplication, the process of ens…
S73
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S74
A Global AI in Financial Services Survey — – Data fuels AI and allow firms to scale their AI applications. Access to and quality of data remain key hurdles to AI …
S75
Multi-stakeholder Discussion on issues about Generative AI — Melinda Claybaugh:So Melinda, please. Thank you so much. So I want to share some of the AI products and developments tha…
S76
The role of standards in shaping a safe and sustainable AI-driven future — Seizo Onoe:Thank you very much. Good morning, everyone, and very warm welcome to you all. Our discussions at this summit…
S77
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatory released a beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S78
Importance of Professional standards for AI development and testing — The discussion maintained a serious, professional tone throughout, reflecting the gravity of the subject matter. While c…
S79
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S80
Ad Hoc Consultation: Thursday 1st February, Morning session — In a formal and courteous address, the speaker began by respectfully acknowledging the presiding official, Madam Chair, …
S81
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S82
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S84
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While spe…
S85
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S86
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S87
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S88
Multistakeholder Partnerships for Thriving AI Ecosystems — The tone was constructive and solution-oriented throughout, with speakers building on each other’s points rather than de…
S89
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion maintained a constructive and optimistic tone throughout, despite acknowledging significant challenges. S…
S90
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S91
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S92
Flexibility 2.0 / Davos 2025 — The panel discussion provided a comprehensive exploration of the gig economy’s impact on the future of work. While ackno…
S93
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S94
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S95
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S96
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S97
Any other business /Adoption of the report/ Closure of the session — In closing, the speaker reiterated steadfast support for the Chairperson, the Secretariat, and the diligent team, emphas…
S98
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S99
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The moderator introduces himself at the start of the session, establishing his presence for the audience.
S100
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in cyber securi…
S101
Internet standards and human rights | IGF 2023 WS #460 — Moderator – Sheetal Kumar:Hello, everyone. Good morning. Welcome to this session on Internet Standards and Human Rights….
S102
Opening of the session — relevance of technological innovation and the establishment of new norms to guarantee freedoms and protections online
S103
Main Session 3: Internet Governance and elections: maximising potential for trust and addressing risks — 1. Balancing Innovation and Integrity: Audience: Thank you very much. My name is Maha Abdel Nasser. I’m from the Egy…
S104
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — A significant portion of the discussion focused on the challenge of balancing enhanced security with user privacy protec…
S105
Responsible AI in India Leadership Ethics & Global Impact part1_2 — “I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine …
S106
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S107
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S108
Deepfakes and the AI scam wave eroding trust — Calls for regulation are understandable, but policy has inherent limitations in this space. Deepfakes evolve faster than…
S109
Tackling disinformation in electoral context — Giovani Zagni: Now it’s on, now it works. Okay, thank you for this question, good afternoon and I will answer by makin…
S110
WS #198 Advancing IoT Security, Quantum Encryption & RPKI — Nicolas Fiumarelli: Sofia, thank you so much for your contributions. RPKI can sound very strange for non-technical per…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. M P Tangirala
1 argument · 115 words per minute · 929 words · 482 seconds
Argument 1
Emphasize human‑in‑the‑loop and proactive communication
EXPLANATION
Dr. Tangirala stresses that AI decisions affecting customers must be overseen by humans to prevent autonomous errors, and that telecom providers should communicate clearly and proactively with customers about AI‑driven outcomes.
EVIDENCE
He highlighted the need for a human control element to ensure AI systems do not act independently [19-21] and emphasized that clear, proactive communication with customers is essential for trust [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop is described as a primary feature for safe AI deployment and emphasized as essential, aligning with Tangirala’s point [S15]; additional commentary stresses that the human-in-the-loop cannot be discarded and should aid workers, supporting the argument [S16].
MAJOR DISCUSSION POINT
Human oversight and transparent communication
AGREED WITH
Syed Tausif Abbas, Mathan Babu Kasilingam, Julian Gorman
DISAGREED WITH
Mathan Babu Kasilingam
Anil Kumar Jha
2 arguments · 180 words per minute · 94 words · 31 seconds
Argument 1
Call for concrete global and Indian actions to align anti‑scam efforts
EXPLANATION
Jha asks the panel to suggest two steps global leaders should take and two steps India should take to harmonise anti‑scam measures worldwide and domestically.
EVIDENCE
He directly requests specific actions for global and Indian alignment in his brief question to the panel [319-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Jha’s request for specific steps is reflected in the summit transcript noting his call for global and Indian alignment on scam mitigation [S1]; the global nature of spam and need for cross-border cooperation are highlighted in a discussion of worldwide spam origins and collaborative solutions [S18].
MAJOR DISCUSSION POINT
Actionable steps for anti‑scam alignment
AGREED WITH
Julian Gorman, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Argument 2
Recommend two global steps and two India‑specific steps to harmonise anti‑scam efforts
EXPLANATION
Building on his earlier request, Jha seeks concrete recommendations on how the international community and India can coordinate to combat scams more effectively.
EVIDENCE
His question explicitly asks for two global and two India-specific steps, framing the need for coordinated policy responses [319-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same sources that capture Jha’s request provide concrete suggestions for cross-border data sharing and collective action as recommended actions [S1]; the global problem of spam and the need for coordinated policy are discussed in the global anti-scam strategy overview [S18].
MAJOR DISCUSSION POINT
Policy recommendations for scam mitigation
Mr. Julian Gorman
5 arguments · 158 words per minute · 1349 words · 510 seconds
Argument 1
Foster collaborative ecosystem and shared standards to sustain trust
EXPLANATION
Gorman argues that trust in telecom can be maintained by encouraging innovation through collaboration across the industry, regulators, and technology platforms, underpinned by shared standards.
EVIDENCE
He describes the need to stimulate innovation, the formation of the Cross-Sector Any Scam Task Force, and the importance of ecosystem-wide collaboration for trust [31-44] and stresses India’s role in a global ecosystem [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector coordination is identified as vital for cyber resilience and trust, supporting Gorman’s ecosystem view [S17]; an ecosystem mindset and partnership examples underline the need for shared standards [S21]; global cooperation for standards is also highlighted [S20].
MAJOR DISCUSSION POINT
Collaborative innovation and standards
AGREED WITH
Julian Gorman, Syed Tausif Abbas, Mathan Babu Kasilingam, Dr. M P Tangirala, Moderator
Argument 2
Create cross‑sector task forces and share data with platforms like Meta, Google, TikTok to combat scams
EXPLANATION
Gorman outlines the establishment of a multi‑organisation task force that includes major social media and cloud platforms to coordinate anti‑scam initiatives across sectors.
EVIDENCE
He details the Cross-Sector Any Scam Task Force involving Meta, Google, TikTok, AWS and other organisations, aimed at identifying and prioritising industry-wide anti-scam actions [32-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The formation of a cross-sector coalition involving major platforms is described in the summit notes, matching Gorman’s proposal [S1]; the necessity of global data sharing to tackle scams is emphasized in the global anti-scam discussion [S18].
MAJOR DISCUSSION POINT
Cross‑sector cooperation against scams
AGREED WITH
Julian Gorman, Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Argument 3
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes
EXPLANATION
Gorman promotes the use of standardized APIs and regulatory sandboxes to enable privacy‑preserving data sharing that can improve risk assessment and scam detection.
EVIDENCE
He references GSMA’s open gateway APIs program, the need for privacy-enhanced data sharing, and the role of regulatory sandboxes in fostering innovation while protecting personal data [285-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory sandboxes are presented as mechanisms to enable responsible data sharing and protect privacy, aligning with Gorman’s suggestion [S22]; further discussion on sandboxes spurring cross-border data sharing reinforces this point [S23]; the use of open-gateway APIs for ecosystem risk assessment is noted in the summit summary [S1].
MAJOR DISCUSSION POINT
Privacy‑preserving data sharing mechanisms
AGREED WITH
Syed Tausif Abbas, Mathan Babu Kasilingam, Julian Gorman, Dr. M P Tangirala
DISAGREED WITH
Julian Gorman, Mathan Babu Kasilingam
Argument 4
Promote cross‑border data sharing and collective action against scams
EXPLANATION
Gorman emphasizes that combating scams requires coordinated action beyond national borders, urging a global community approach.
EVIDENCE
He notes the importance of cross-border collaboration, describing India’s emerging global telecom leadership and the necessity of sharing knowledge worldwide [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The global nature of spam and the call for cross-border collaboration are highlighted in the anti-scam global solution overview [S18]; broader cooperation between nations and the private sector for standards supports this view [S20]; India’s emerging global telecom role further underscores the need for worldwide knowledge sharing [S24].
MAJOR DISCUSSION POINT
Cross‑border collaboration on scam mitigation
AGREED WITH
Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Argument 5
Emphasise India’s emerging role as a global telecom leader and the need to share knowledge worldwide
EXPLANATION
Gorman points out that India’s rising stature in telecom obliges it to act as a global leader, sharing its innovations and experiences with the international community.
EVIDENCE
He highlights India’s status as a telecom superpower and its responsibility to play a “statesman” role in the global ecosystem [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s selection to host the World Telecommunication Standardisation Assembly and its positioning as a telecom superpower are documented, confirming Gorman’s statement [S24]; the summit notes also reference India’s leadership in global telecom initiatives [S1].
MAJOR DISCUSSION POINT
India’s global telecom leadership
Syed Tausif Abbas
3 arguments · 148 words per minute · 552 words · 223 seconds
Argument 1
Introduce voluntary AI incident reporting schema to increase transparency
EXPLANATION
Abbas proposes a voluntary, standardized database for reporting AI incidents, detailing fields, taxonomy, and severity levels to improve transparency and learning.
EVIDENCE
He outlines a 30-field schema, taxonomy, severity classification, and notes that the reporting is voluntary rather than mandatory [193-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A 30-field AI incident reporting framework with taxonomy and severity levels is detailed in the AI incident reporting standards document, directly supporting Abbas’s proposal [S26]; broader calls for algorithmic transparency reinforce the need for such voluntary reporting [S25].
MAJOR DISCUSSION POINT
Voluntary AI incident reporting framework
AGREED WITH
Mathan Babu Kasilingam, Julian Gorman, Dr. M P Tangirala
Argument 2
Define a 30‑field schema, taxonomy and severity levels for AI incidents
EXPLANATION
He specifies the structure of the incident reporting database, including categories such as network component, incident type, severity, and cause of failure.
EVIDENCE
The description includes fields for incident type, affected system, severity, and cause of failure, forming a comprehensive taxonomy [193-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The incident reporting schema, including 30 key fields and a detailed taxonomy, is outlined in the standards reference [S26]; the same source notes the classification of incidents across multiple dimensions, matching Abbas’s description.
MAJOR DISCUSSION POINT
Standardized incident taxonomy
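The session mentions a 30‑field schema with taxonomy and severity levels but does not enumerate the actual fields. A minimal sketch of what one such voluntary record might look like, with a handful of representative (hypothetical) fields and the submitter masking the panel later discusses:

```python
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(Enum):
    # Hypothetical tiers; the session notes severity classification
    # but does not list the actual levels.
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class AIIncident:
    # Representative fields only; the real schema reportedly has ~30.
    incident_type: str
    affected_system: str
    severity: Severity
    cause_of_failure: str
    submitter: str = ""  # masked before sharing, per the panel

    def masked(self) -> dict:
        """Return a shareable dict with submitter information removed."""
        record = asdict(self)
        record["submitter"] = "REDACTED"
        record["severity"] = self.severity.name
        return record

incident = AIIncident(
    incident_type="false_positive_block",
    affected_system="spam_call_filter",
    severity=Severity.MEDIUM,
    cause_of_failure="model_drift",
    submitter="operator-42",
)
print(incident.masked())
```

Because reporting is voluntary, a fixed schema like this is what lets submissions from different operators be aggregated and compared.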
Argument 3
Highlight benefits for operators and regulators from a common incident database
EXPLANATION
Abbas argues that a shared incident repository enables operators to analyse failures, improve AI models, and provides regulators with data to shape AI policy.
EVIDENCE
He states that recorded incidents can be analysed by service providers to prevent recurrence and that the aggregated data assists regulators and policymakers [196-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reporting framework is said to enable operators to analyse failures and regulators to shape policy, providing the benefits Abbas cites [S26]; algorithmic transparency discussions also note the value of shared incident data for oversight [S25].
MAJOR DISCUSSION POINT
Operator and regulator benefits
Dr. Rajkumar Upadhyay
5 arguments · 171 words per minute · 1925 words · 671 seconds
Argument 1
Deploy AI‑based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs
EXPLANATION
Upadhyay describes AI solutions that identify duplicate or fraudulent SIM registrations and automatically disconnect them, protecting customers from spam and fraud.
EVIDENCE
He explains Fraud Pro’s ability to group images and demographics to detect duplicate SIMs, and cites that 70 lakh (7 million) connections have been disconnected using the Sanchar Sati app [65-71] and [101-103].
MAJOR DISCUSSION POINT
AI‑driven fraud detection and SIM deduplication
AGREED WITH
Julian Gorman, Mathan Babu Kasilingam
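The session describes Fraud Pro grouping images and demographics to detect duplicate registrations, without giving internals. A toy sketch of the grouping idea, approximating the AI matching with a simple (name, date‑of‑birth) key and a hypothetical per‑person SIM cap of 9:

```python
from collections import defaultdict

SIM_CAP = 9  # hypothetical cap; the real limit and matching are policy-defined

def flag_duplicates(registrations, cap=SIM_CAP):
    """Return SIM IDs registered beyond the cap for any one identity key.

    The real system matches on face-image similarity as well as
    demographics; this sketch uses an exact demographic key only.
    """
    groups = defaultdict(list)
    for reg in registrations:
        key = (reg["name"].strip().lower(), reg["dob"])
        groups[key].append(reg["sim_id"])
    flagged = []
    for sims in groups.values():
        if len(sims) > cap:
            flagged.extend(sims[cap:])  # SIMs past the cap get flagged
    return flagged

regs = [{"name": "A Kumar", "dob": "1990-01-01", "sim_id": f"SIM{i}"}
        for i in range(11)]
print(flag_duplicates(regs))  # two SIMs beyond the cap of 9
```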
Argument 2
Implement millisecond‑level AI call‑blocking to curb spoof and scam calls
EXPLANATION
He details an AI system that decides within milliseconds whether to allow or block a call, handling up to 15 million calls per day with near‑zero error.
EVIDENCE
He notes that the system makes decisions in milliseconds at the gateway, processing 15 million calls daily while maintaining zero false blocks [108-112].
MAJOR DISCUSSION POINT
Real‑time AI call‑blocking
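A millisecond budget at the gateway usually means the model's risk scores are precomputed and the per‑call path is a constant‑time lookup. The names and threshold below are hypothetical; the session does not describe CDOT's internals:

```python
import time

# Precomputed risk scores, e.g. refreshed periodically by an offline model.
risk_scores = {"+910000000001": 0.97, "+910000000002": 0.12}
BLOCK_THRESHOLD = 0.9  # hypothetical cut-off

def decide(caller_id: str) -> str:
    """Allow/block decision on the hot path: a single dict lookup."""
    score = risk_scores.get(caller_id, 0.0)  # unseen numbers pass through
    return "BLOCK" if score >= BLOCK_THRESHOLD else "ALLOW"

start = time.perf_counter()
decision = decide("+910000000001")
elapsed_ms = (time.perf_counter() - start) * 1000
print(decision, f"{elapsed_ms:.3f} ms")
```

Keeping inference off the hot path is what makes a 15‑million‑calls‑per‑day volume tractable with near‑zero added latency.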
Argument 3
Build a unified AI‑driven early‑warning system that aggregates sensor data from IMD, CWC, etc.
EXPLANATION
Upadhyay presents a platform that integrates data from multiple weather and disaster agencies via APIs, using AI to generate timely alerts.
EVIDENCE
He describes how the system connects IMD, CWC, FRI, DGSE and telecom operators through APIs, automatically reading sensor alarms and preparing geo-targeted messages [250-255].
MAJOR DISCUSSION POINT
Integrated AI early‑warning platform
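Connecting agencies such as IMD and CWC over APIs typically means a per‑agency adapter that maps each payload into one normalized alert record. The field names and payload shapes below are invented for illustration:

```python
# Per-agency adapters: each maps a native payload to one common format.
def from_imd(payload):
    return {"source": "IMD", "hazard": payload["event"],
            "area": payload["district"], "level": payload["warning_colour"]}

def from_cwc(payload):
    return {"source": "CWC", "hazard": "flood",
            "area": payload["basin"], "level": payload["severity"]}

ADAPTERS = {"IMD": from_imd, "CWC": from_cwc}

def normalize(agency, payload):
    """Route a raw agency payload through its adapter."""
    return ADAPTERS[agency](payload)

alert = normalize("IMD", {"event": "cyclone", "district": "Puri",
                          "warning_colour": "red"})
print(alert["hazard"], alert["area"])
```

Once alerts share one shape, downstream steps (AI triage, message drafting, broadcast targeting) need no agency‑specific logic.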
Argument 4
Use AI to generate geo‑targeted cell‑broadcast alerts, reducing casualties to zero in pilot cases
EXPLANATION
The AI‑powered system creates location‑specific broadcast messages, ensuring only affected populations receive warnings, which has led to zero deaths in recent cyclone pilots.
EVIDENCE
He cites the cyclone-Montha case where geo-targeted cell broadcast alerts resulted in zero fatalities, contrasting with past deaths in 1999 [259-266].
MAJOR DISCUSSION POINT
Geo‑targeted AI alerts for disaster response
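Geo‑targeting a cell broadcast amounts to selecting only the towers serving the affected area. A sketch of that selection using a great‑circle distance check; the tower IDs and coordinates are made up:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def towers_in_range(towers, centre, radius_km):
    """IDs of towers within radius_km of the hazard centre."""
    return [t["id"] for t in towers
            if haversine_km(t["lat"], t["lon"], *centre) <= radius_km]

towers = [{"id": "ODI-001", "lat": 19.81, "lon": 85.83},  # near Puri
          {"id": "DEL-040", "lat": 28.61, "lon": 77.21}]  # Delhi, far away
print(towers_in_range(towers, (19.80, 85.85), radius_km=50))
```

Broadcasting only through the in‑range towers is what confines the warning to the affected population.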
Argument 5
Position the solution for international adoption and UN early‑warning goals by 2027
EXPLANATION
Upadhyay notes that the system has been documented in an ITU paper and is being promoted to other countries, aiming to meet UN early‑warning objectives by 2027.
EVIDENCE
He mentions the ITU report, ongoing discussions with multiple countries, and the target of supporting UN early-warning goals by 2027 [274-276].
MAJOR DISCUSSION POINT
Global scaling of AI disaster‑warning solution
AGREED WITH
Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Moderator
Mathan Babu Kasilingam
4 arguments · 159 words per minute · 1696 words · 637 seconds
Argument 1
Adopt privacy‑by‑design and ISO 27701 certification to assure customers
EXPLANATION
Kasilingam states that his telecom service provider is certified under ISO 27701 and follows privacy‑by‑design principles to reinforce customer trust in AI deployments.
EVIDENCE
He notes that the company is certified under PIMS (ISO 27701) and is the only TSP in the country to govern privacy by design, reinforcing trust with customers [117-119].
MAJOR DISCUSSION POINT
Privacy‑by‑design and ISO certification
AGREED WITH
Julian Gorman, Syed Tausif Abbas, Moderator
Argument 2
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
EXPLANATION
Kasilingam points out that creating isolated data silos hampers AI effectiveness and suggests consolidating data into a single repository with a shared AI infrastructure.
EVIDENCE
He describes the problem of multiple isolated data silos and proposes a single data lake and common AI platform to simplify security and access [152-166].
MAJOR DISCUSSION POINT
Data consolidation for AI
AGREED WITH
Syed Tausif Abbas, Julian Gorman, Dr. M P Tangirala
DISAGREED WITH
Julian Gorman
Argument 3
Note that 80‑90 % of AI spend is on infrastructure; address skill shortages and automation of AI operations
EXPLANATION
He highlights that the majority of AI costs are tied to compute and storage infrastructure, and that there is a shortage of skilled AI engineers, prompting a move toward AI‑assisted operations.
EVIDENCE
He quantifies that 80-90 % of AI expenditure goes to infrastructure and mentions the need for skilled engineers, noting ongoing efforts to let AI aid AI development [225-229].
MAJOR DISCUSSION POINT
Cost structure and skill gaps in AI
DISAGREED WITH
Dr. M P Tangirala
Argument 4
Suggest centralised LLMs and API‑driven architecture to reduce duplication and improve security
EXPLANATION
Kasilingam proposes building a centralised large‑language‑model platform exposed via enterprise APIs, allowing various business functions to access AI without maintaining separate data silos.
EVIDENCE
He outlines the creation of purpose-built LLMs for functions like HR, the use of an enterprise API architecture, and the benefits of a single secure data repository [170-179].
MAJOR DISCUSSION POINT
Centralised LLM and API architecture
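The architecture described (one shared model behind an enterprise API, with purpose‑built contexts per business function) can be sketched as follows. The function names, prompts, and stub model are all hypothetical:

```python
# Each business function gets its own purpose-built context, but all
# traffic flows through one centrally governed model endpoint.
FUNCTION_CONTEXTS = {
    "hr": "You answer employee policy questions.",
    "network_ops": "You summarise network incident reports.",
}

def shared_model(system_prompt: str, query: str) -> str:
    # Stand-in for the single shared LLM endpoint.
    return f"[{system_prompt[:12]}...] {query}"

def enterprise_api(function: str, query: str) -> str:
    """Route a request to the shared model with the function's context."""
    if function not in FUNCTION_CONTEXTS:
        raise ValueError(f"unknown business function: {function}")
    return shared_model(FUNCTION_CONTEXTS[function], query)

print(enterprise_api("hr", "How many leave days do I have?"))
```

Because every function calls the same endpoint against the same repository, access control, auditing, and DPDP compliance are enforced in one place instead of per silo.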
Moderator
3 arguments · 48 words per minute · 290 words · 355 seconds
Argument 1
Set the discussion context around balancing innovation, privacy and trust
EXPLANATION
The moderator frames the panel by emphasizing the need to balance technological innovation with privacy safeguards and customer trust.
EVIDENCE
Opening remarks ask participants to engage on balancing information, innovation, privacy, and trust [5-7].
MAJOR DISCUSSION POINT
Framing of trust‑innovation‑privacy balance
AGREED WITH
Julian Gorman, Mathan Babu Kasilingam, Syed Tausif Abbas
Argument 2
Reinforce the need for standards to guide responsible AI deployment
EXPLANATION
The moderator underscores the importance of establishing standards that can steer responsible AI use within the telecom sector.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
System-level interventions and the development of global standards for trustworthy information ecosystems are discussed, underscoring the call for AI standards [S19]; cooperation between nations and industry to establish standards further supports this point [S20].
MAJOR DISCUSSION POINT
Call for AI standards
Argument 3
Conclude with a call for regulators to cooperate across sectors for responsible AI
EXPLANATION
In closing, the moderator urges regulatory bodies to work together across different sectors to ensure AI is deployed responsibly.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector coordination is highlighted as essential for cyber resilience and responsible AI, aligning with the moderator’s closing appeal [S17]; ecosystem partnership examples illustrate the need for regulator collaboration [S21].
MAJOR DISCUSSION POINT
Regulatory cross‑sector collaboration
AGREED WITH
Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay
Agreements
Agreement Points
Broad consensus on the need for collaborative ecosystems and shared standards to build and sustain trust in AI‑driven telecom services
Speakers: Julian Gorman, Syed Tausif Abbas, Mathan Babu Kasilingam, Dr. M P Tangirala, Moderator
Foster collaborative ecosystem and shared standards to sustain trust
Introduce voluntary AI incident reporting schema to increase transparency
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
Emphasize human‑in‑the‑loop and proactive communication
Set the discussion context around balancing innovation, privacy and trust
All speakers highlighted that trust in AI-enabled telecom operations can only be achieved through industry-wide collaboration, common frameworks or databases, and coordinated standards that guide responsible AI use and transparent communication with customers [31-44][193-197][152-166][54-56][5-7].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the broader industry-wide call for collaborative ecosystem approaches and shared AI standards noted in multiple forums, such as the unexpected consensus on ecosystem collaboration in telecom-research-governance dialogues [S45] and the high-level agreement on collaborative solutions across stakeholders [S43]. It also reflects calls for common AI standards at the global level [S41] and support for AI standards exchanges coordinated by ITU, ISO/IEC and IEEE [S53].
Strong agreement on embedding privacy safeguards and privacy‑by‑design in AI deployments
Speakers: Julian Gorman, Mathan Babu Kasilingam, Syed Tausif Abbas, Moderator
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes
Adopt privacy‑by‑design and ISO 27701 certification to assure customers
Masking submitter information in the incident reporting schema
Set the discussion context around balancing innovation, privacy and trust
Speakers concurred that AI systems must protect personal data, adopt privacy-by-design principles, and use privacy-preserving data-sharing mechanisms such as sandboxes and masking to maintain user trust [285-295][117-119][193-197][5-7].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on privacy-by-design aligns with the principle highlighted in recent policy discussions on ethical data use, where privacy-by-design was identified as a crucial approach for new technologies [S59], and it resonates with regulatory perspectives that see data protection laws as foundational for trust in AI [S57]. Moreover, the functionality-based regulatory framing that prioritises privacy outcomes supports this stance [S58].
Consensus that AI is a critical tool for detecting and preventing fraud and scam activities
Speakers: Julian Gorman, Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Create cross‑sector task forces and share data with platforms like Meta, Google, TikTok to combat scams. Deploy AI‑based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs. Identify fraud as a serious problem and leverage AI to address it.
All three speakers stressed that AI-driven analytics, data sharing and automated tools are essential to identify, block and reduce fraudulent connections and scam calls, thereby protecting customers [26-35][65-71][101-103][134-140].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is consistent with the high-level consensus on fraud prevention across stakeholders at the Day 0 Event on building trust and combating fraud [S55] and with practical examples of AI-driven enforcement using telecom data to combat organized crime [S52].
Shared view on the importance of standardized data sharing and unified platforms for AI operations
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam, Julian Gorman, Dr. M P Tangirala
Introduce voluntary AI incident reporting schema to increase transparency. Identify siloed data repositories as a barrier; propose unified data lake and common AI platform. Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes. Emphasize human‑in‑the‑loop and proactive communication.
Speakers agreed that establishing common data structures, APIs and a single data lake reduces silos, improves security and enables effective AI governance and incident reporting [193-197][152-166][285-295][54-56].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for standardized data sharing echoes earlier calls for standardized protocols in cross-border AI collaborations [S39] and the push for common AI standards and definitions at the global level [S41]. It is further reinforced by initiatives to establish AI standards exchanges through bodies like ITU, ISO/IEC and IEEE [S53], while recognizing data access barriers such as IP and security classifications that affect sharing [S50].
Agreement on the necessity of cross‑border and global collaboration to address AI‑related challenges, especially scams
Speakers: Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Promote cross‑border data sharing and collective action against scams. Call for concrete global and Indian actions to align anti‑scam efforts. Highlight benefits for regulators from a common incident database. Position the solution for international adoption and UN early‑warning goals by 2027. Conclude with a call for regulators to cooperate across sectors for responsible AI.
All participants underscored that effective AI governance, especially for scam mitigation, requires coordinated international policies, data sharing across borders and shared standards to enable global solutions [46-53][319-320][196-197][274-276].
POLICY CONTEXT (KNOWLEDGE BASE)
Cross-border collaboration has been repeatedly emphasized, including in discussions on standardized data sharing protocols and global cooperation [S39], the advocacy for common AI standards worldwide [S41], and UN-level summaries noting that digital challenges like scams require transnational responses [S48]. Multi-stakeholder dialogues also stress the need for international cooperation on AI governance [S49], and the broader consensus on collaborative ecosystem approaches supports this view [S45].
Similar Viewpoints
Both speakers stress that privacy must be built into data‑sharing mechanisms, using technical safeguards and formal certifications to protect user data while enabling AI innovation [285-295][117-119].
Speakers: Julian Gorman, Mathan Babu Kasilingam
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes. Adopt privacy‑by‑design and ISO 27701 certification to assure customers.
Both advocate the creation of structured, collaborative mechanisms—whether task forces or reporting schemas—to enable systematic sharing of AI‑related incident data for better scam mitigation [26-35][193-197].
Speakers: Julian Gorman, Syed Tausif Abbas
Create cross‑sector task forces and share data with platforms like Meta, Google, TikTok to combat scams. Introduce voluntary AI incident reporting schema to increase transparency.
Both recognize fraud as a major national issue and propose AI‑driven solutions as essential to detect and prevent fraudulent activities [65-71][101-103][134-140].
Speakers: Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Deploy AI‑based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs. Identify fraud as a serious problem and leverage AI to address it.
Both highlight that trust in AI systems requires not only technical safeguards but also collaborative frameworks and clear communication with stakeholders [54-56][31-44].
Speakers: Dr. M P Tangirala, Julian Gorman
Emphasize human‑in‑the‑loop and proactive communication. Foster collaborative ecosystem and shared standards to sustain trust.
Unexpected Consensus
Voluntary rather than mandatory AI incident reporting is accepted as beneficial by both a standards body representative and an industry service‑provider
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam
Introduce voluntary AI incident reporting schema to increase transparency. Identify siloed data repositories as a barrier; propose unified data lake and common AI platform.
While standards discussions often push for mandatory compliance, both speakers endorse a voluntary reporting approach, seeing it as a practical step to improve transparency and data quality without imposing regulatory burdens [193-197][152-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry preference for voluntary, globally-recognised standards over prescriptive mandates is reflected in observations that the sector favours voluntary standards for AI governance [S61], aligning with the broader consensus on collaborative, non-mandatory approaches noted in multi-stakeholder settings [S45].
Overall Assessment

The panel displayed strong convergence on four main themes: collaborative ecosystems and shared standards; privacy‑by‑design and data protection; AI as a core tool for fraud/scam mitigation; and the need for standardized, cross‑border data sharing platforms. These points were repeatedly echoed across industry, standards and policy perspectives.

High consensus – the repeated alignment across diverse speakers indicates a unified direction for responsible AI deployment in telecom, suggesting that future policy and industry initiatives are likely to prioritize collaborative standards, privacy safeguards, and AI‑driven fraud prevention.

Differences
Different Viewpoints
Degree of human involvement in AI‑driven telecom operations
Speakers: Dr. M P Tangirala, Mathan Babu Kasilingam
Emphasize human‑in‑the‑loop and proactive communication. Note that 80‑90 % of AI spend is on infrastructure; address skill shortages and automation of AI operations.
Dr. Tangirala stresses that AI decisions affecting customers must be overseen by humans to prevent autonomous errors and calls for clear, proactive communication with customers [15-21]. In contrast, Mathan Babu argues that the future of AI lies in reducing human involvement, using AI to automate operations and cut staff while up-skilling a smaller workforce, indicating a push for greater automation and less human oversight [225-229][236-237].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of human oversight in AI systems is highlighted in discussions that humans remain integral to the development and operation of AI-enabled technologies [S46], and in business-focused sessions where the balance between automation and human control was examined [S47].
Preferred model for data governance and sharing in AI‑enabled telecom services
Speakers: Julian Gorman, Mathan Babu Kasilingam
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes. Identify siloed data repositories as a barrier; propose unified data lake and common AI platform.
Gorman promotes privacy-preserving cross-industry data sharing through standardized open-gateway APIs and regulatory sandboxes to improve risk assessment and scam detection, emphasizing ecosystem-wide collaboration and cross-border data flows [285-295][46-53]. Mathan Babu counters by highlighting the problem of isolated data silos within a single operator and proposes consolidating data into a single repository with a shared AI infrastructure and centralised LLMs, focusing on internal consolidation rather than external data exchange [152-166][170-179].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over data governance models reference concerns about data access restrictions due to IP, trade secrets, or security classifications that can impede AI development [S50], alongside calls for standardized data sharing protocols and AI standards exchanges to facilitate governance [S39, S53].
Unexpected Differences
Voluntary vs potentially mandatory AI incident reporting standards
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam
Introduce voluntary AI incident reporting schema to increase transparency. Identify siloed data repositories as a barrier; propose unified data lake and common AI platform.
Abbas explicitly states that the AI incident reporting database is voluntary, not mandatory [193-197]. Kasilingam, while acknowledging the usefulness of incident records, ties them to existing ITIL frameworks and favours folding incident data into internal processes rather than maintaining a distinct voluntary external database. Given their shared focus on data quality and AI governance, this divergence between a stand‑alone voluntary reporting mechanism and an internal unified data platform was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between voluntary and mandatory reporting mirrors industry arguments favouring voluntary, globally-recognised standards rather than compulsory regulations [S61], a view echoed in broader multi-stakeholder discussions that highlight the benefits of voluntary frameworks [S45].
Regulatory emphasis on privacy‑by‑design versus innovation‑first stance
Speakers: Mathan Babu Kasilingam, Julian Gorman
Adopt privacy‑by‑design and ISO 27701 certification to assure customers. Foster collaborative ecosystem and shared standards to sustain trust.
Kasilingam highlights strict privacy-by-design compliance and ISO certification as core to maintaining trust [117-119], whereas Gorman warns that overly prescriptive regulation can stifle innovation and calls for outcome-focused, flexible regulatory approaches, including sandboxes [38-40][292-295]. The tension between a strong, certification-driven privacy regime and a more permissive, innovation-centric regulatory posture was not overtly signalled earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate reflects contrasting policy strands: the privacy-by-design approach championed in ethical data use discussions [S59] and reinforced by data protection law frameworks that support trust [S57], versus functionality-based, innovation-oriented regulatory perspectives that aim to balance privacy with technological advancement [S58].
Overall Assessment

The panel broadly concurs that AI is vital for enhancing trust, fraud mitigation, and service quality in telecom. However, clear fault lines emerge around (i) the extent of human oversight versus full automation, and (ii) the preferred architecture for data governance—external, privacy‑preserving data sharing versus internal data lake consolidation. Additional nuanced tensions appear regarding voluntary incident reporting and the balance between privacy certification and innovation‑friendly regulation.

Moderate disagreement: while all participants share the overarching goal of trustworthy AI‑enabled telecom services, the divergent views on human control, data sharing models, and regulatory balance could lead to fragmented implementation strategies, potentially slowing coordinated progress on industry‑wide standards and trust‑building measures.

Partial Agreements
All speakers agree that AI is essential for building customer trust and combating fraud in telecom, but differ on implementation: Tangirala calls for human oversight, Gorman stresses cross‑sector collaboration and standards, Upadhyay showcases specific AI‑driven fraud tools, while Kasilingam focuses on privacy‑by‑design certifications and internal data governance. The shared goal is trustworthy AI‑enabled services, yet the pathways (human control, ecosystem collaboration, product deployment, privacy certification) diverge [15-21][31-44][65-71][117-119].
Speakers: Dr. M P Tangirala, Julian Gorman, Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Emphasize human‑in‑the‑loop and proactive communication. Foster collaborative ecosystem and shared standards to sustain trust. Deploy AI‑based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs. Adopt privacy‑by‑design and ISO 27701 certification to assure customers.
Both speakers support the creation of mechanisms that increase transparency and data availability for AI systems. Gorman focuses on technical data sharing through APIs and sandboxes, while Abbas proposes a voluntary incident‑reporting database. They share the objective of better insight into AI outcomes, but differ on the scope (real‑time operational data vs post‑incident reporting) and mandatory nature of the framework [285-295][193-197].
Speakers: Julian Gorman, Syed Tausif Abbas
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes. Introduce voluntary AI incident reporting schema to increase transparency.
Takeaways
Key takeaways
Trust in AI‑driven telecom services requires human‑in‑the‑loop controls and proactive communication with customers. Adopting privacy‑by‑design principles and certifications such as ISO 27701 (PIMS) is essential to assure customers. A voluntary AI incident‑reporting schema with a 30‑field taxonomy can increase transparency and help regulators shape AI policy. Collaboration across the telecom ecosystem, including cross‑sector task forces and data sharing with platforms like Meta, Google, and TikTok, is critical to combat scams. AI‑based fraud detection tools (Fraud Pro, Sanchar Sati, millisecond‑level call‑blocking) have demonstrably reduced fraudulent SIMs and scam calls. Unified, AI‑driven disaster‑management and early‑warning systems can deliver geo‑targeted alerts and have achieved zero‑casualty pilots. Data silos and high infrastructure costs (80‑90 % of AI spend) are major barriers; a consolidated data lake, common AI platform, and centralized LLMs with API‑driven access are proposed solutions. India is emerging as a global telecom leader and must align its domestic anti‑scam actions with international standards and cross‑border cooperation.
Resolutions and action items
GSMA will continue its Cross‑Sector Anti‑Scam Task Force activities and expand the proof‑of‑concept for cross‑border data sharing in Southeast Asia. CDOT (Dr. Upadhyay) will promote its AI‑based fraud‑prevention suite (Fraud Pro, Chakshu, Sanchar Sati) and disaster‑management platform for adoption in other countries. Mathan Babu Kasilingam’s organization will consolidate fragmented data repositories into a single AI‑infrastructure lake, expose services via enterprise APIs, and develop purpose‑built LLMs for various functions. Syed Tausif Abbas will circulate the voluntary AI incident‑reporting schema and encourage telecom operators to adopt it for internal analysis and regulator reporting. Julian Gorman recommends establishing privacy‑enhanced data‑sharing sandboxes and regulatory support mechanisms to enable safe cross‑industry AI collaboration. Anil Kumar Jha’s request was answered with two global steps (cross‑border data sharing, collective global community) and two India‑specific steps (domestic anti‑scam actions and sharing knowledge internationally).
Unresolved issues
How to incentivize or mandate adoption of the AI incident‑reporting standard across all operators. Specific regulatory frameworks and safeguards needed for privacy‑preserving cross‑industry data sharing. Strategies to address the shortage of skilled AI talent within telecom enterprises. Balancing aggressive fraud‑call blocking with the need to guarantee emergency call availability and minimize false positives. Detailed implementation plan and financing model for the proposed unified AI data lake and centralised LLM platform. Mechanisms for continuous global coordination beyond voluntary task forces (e.g., binding agreements, standards enforcement).
Suggested compromises
Maintain human‑in‑the‑loop oversight while deploying AI for large‑scale fraud detection and network self‑healing. Adopt privacy‑by‑design and ISO 27701 certification as baseline requirements, allowing AI innovation to proceed within those safeguards. Introduce the AI incident‑reporting schema on a voluntary basis initially, using industry incentives and regulator endorsement to drive wider uptake. Utilise regulatory sandboxes to test privacy‑enhanced data‑sharing solutions before full‑scale deployment. Implement millisecond‑level call‑blocking with a zero‑error target, complemented by user‑controlled tools (e.g., Sanchar Sati) for post‑hoc verification and correction.
Thought Provoking Comments
The responsibility for decision integrity ultimately remains with the telecom service providers, and clear, proactive communication with customers is essential, especially when AI outcomes affect outage management, fraud prevention, and grievance handling.
Highlights the ethical and accountability dimension of AI deployment, stressing that automation does not absolve providers from responsibility and introduces the need for human‑in‑the‑loop oversight.
Set the tone for the discussion on trust, prompting panelists to frame their AI use‑cases (fraud detection, disaster alerts) around customer transparency and governance rather than pure efficiency.
Speaker: Dr. M P Tangirala
Scammers are not bound by geography or law; regulation cannot move as fast as they do. Hence we formed the Cross‑Sector Anti‑Scam Task Force with 39 organisations from 17 countries to share data and drive outcomes, while ensuring regulation focuses on results, not prescriptive rules.
Introduces the concept that collaborative, cross‑industry coalitions are essential to keep pace with agile threat actors, shifting the conversation from isolated operator actions to a global ecosystem approach.
Triggered a pivot toward discussing data‑sharing standards, cross‑border cooperation, and the role of India as a global telecom leader, influencing subsequent remarks by Dr. Upadhyay and Mr. Abbas about standards and international deployment.
Speaker: Mr. Julian Gorman
Our AI‑driven platform ‘Fraud Pro’ groups images and demographics to detect duplicate SIM registrations, has already disconnected 70 lakh fraudulent connections, and the same technology is being used for disaster management, dead‑body identification, and financial‑risk scoring.
Provides concrete, large‑scale examples of AI delivering public safety and fraud mitigation, illustrating how AI can be repurposed across domains and reinforcing the trust narrative with measurable outcomes.
Shifted the discussion from abstract concerns to tangible results, prompting other speakers (e.g., Mathan Babu Kasilingam) to reference these successes when describing their own AI infrastructure strategies.
Speaker: Dr. Rajkumar Upadhyay
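The deduplication idea described in this comment — grouping registration photos and demographics to surface one person holding many SIMs — can be sketched roughly as follows. This is an illustrative assumption, not CDOT's actual Fraud Pro pipeline: the embedding vectors, similarity threshold and record fields are all hypothetical stand‑ins for whatever the real system uses.

```python
import math
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two photo-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def group_duplicates(records, threshold=0.95):
    """Cluster SIM registrations whose photo embeddings are near-identical
    AND whose declared demographics match, using union-find.
    `records` is a list of dicts: {"sim_id", "embedding", "dob"}."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(records)), 2):
        same_person = (
            cosine(records[i]["embedding"], records[j]["embedding"]) >= threshold
            and records[i]["dob"] == records[j]["dob"]
        )
        if same_person:
            parent[find(i)] = find(j)

    groups = {}
    for idx, rec in enumerate(records):
        groups.setdefault(find(idx), []).append(rec["sim_id"])
    # Groups holding more than one SIM are candidate duplicates for review,
    # keeping a human in the loop rather than disconnecting automatically.
    return [g for g in groups.values() if len(g) > 1]
```

Note the output is a review queue, not an automatic disconnection list, consistent with the panel's human‑in‑the‑loop emphasis.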
We initially built many siloed AI data repositories for different functions, which created duplication and security overhead. Now we are consolidating into a single, privacy‑by‑design platform with a common LLM, exposing data via enterprise APIs to avoid silos and improve security.
Identifies a common enterprise pitfall—data silos—and proposes a strategic architectural shift toward centralized, privacy‑centric AI infrastructure, adding depth to the conversation about scalability and cost.
Prompted a deeper dive into cost and infrastructure challenges, leading to follow‑up questions about AI expenses and influencing the later discussion on AI‑driven cost optimisation and skill transformation.
Speaker: Mathan Babu Kasilingam
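The consolidation pattern described in this comment might be sketched as below: one governed store holds a single copy of each record, and every consumer reads it through one API boundary where masking and audit logging happen once, instead of being re‑implemented in each functional silo. All class, field and consumer names here are illustrative assumptions, not the operator's actual architecture.

```python
import hashlib

class UnifiedDataLake:
    """Minimal sketch of a single governed store: one copy per subscriber,
    with privacy-by-design masking applied at the API boundary."""

    def __init__(self):
        self._records = {}  # msisdn -> record (single copy, no silo duplicates)
        self._audit = []    # every access is logged in one place

    def ingest(self, source, record):
        # Deduplicate on a stable key, so repeated ingestion from different
        # functional silos updates provenance instead of creating copies.
        key = record["msisdn"]
        self._records.setdefault(key, {**record, "sources": set()})
        self._records[key]["sources"].add(source)

    def query(self, consumer, msisdn):
        self._audit.append((consumer, msisdn))
        rec = self._records.get(msisdn)
        if rec is None:
            return None
        out = {k: v for k, v in rec.items() if k != "msisdn"}
        if consumer == "fraud-ops":
            out["subscriber"] = msisdn  # authorised function sees the raw number
        else:
            # Other consumers get only a stable pseudonym.
            out["subscriber"] = hashlib.sha256(msisdn.encode()).hexdigest()[:12]
        return out
```

Centralising the masking step is one way such a design can simplify DPDP compliance: the policy lives in one query path rather than in every silo.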
We have drafted a voluntary AI incident‑reporting schema with 30 key fields (including impact type, severity, affected system, cause) to enable service providers to log and analyse AI failures, similar to the early computer emergency response teams.
Introduces a governance framework for AI incidents, moving the conversation toward standardisation, accountability, and the potential for regulatory adoption, which had not been addressed earlier.
Created a turning point where the panel shifted from operational AI use‑cases to the need for systematic reporting and standards, eliciting responses from Mathan Babu Kasilingam about leveraging the standard for model refinement.
Speaker: Syed Tausif Abbas
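A rough sketch of what a record under such a schema could look like, covering only the fields the session names (impact type, severity, affected system, cause) plus the submitter masking discussed by the panel. The actual standard has around 30 fields; everything beyond the names quoted above is a hypothetical illustration, not the published schema.

```python
from dataclasses import dataclass, asdict
import hashlib

@dataclass
class AIIncidentReport:
    """A handful of the ~30 reporting fields mentioned in the session.
    Field names beyond impact_type/severity/affected_system/cause are
    illustrative assumptions."""
    incident_id: str
    impact_type: str      # e.g. "wrongful-block", "service-degradation"
    severity: str         # e.g. "low" | "medium" | "high" | "critical"
    affected_system: str  # e.g. "spam-filter", "network-self-healing"
    cause: str            # e.g. "model-drift", "bad-training-data"
    submitter: str        # masked before the report leaves the operator

def prepare_for_submission(report):
    """Mask the submitter's identity, as discussed for the voluntary schema,
    so reports can be pooled or shared with regulators without exposing
    the individual who filed them."""
    rec = asdict(report)
    rec["submitter"] = hashlib.sha256(rec["submitter"].encode()).hexdigest()[:10]
    return rec
```

Because reporting is voluntary, keeping the submitter pseudonymous is one plausible incentive for operators to participate without reputational exposure.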
Combating scams requires four pillars: securing the network, exposing risk data via open APIs, offering protective services to customers (hard‑hats), and continuously upskilling digital talent. Effective data sharing across borders needs regulatory sandboxes and privacy‑enhancing technologies.
Synthesises the discussion into a clear framework, linking technical, regulatory, and human factors, and stresses the necessity of privacy‑preserving data sharing, thereby deepening the analytical layer of the debate.
Re‑oriented the dialogue toward actionable policy recommendations, leading to the final Q&A where global and Indian steps for cross‑border collaboration were explicitly requested and answered.
Speaker: Mr. Julian Gorman
To align globally, we must prove that data can be shared securely across borders (GSMA’s United Against Scams program) and ensure that actions against scams do not inadvertently raise the cost of legitimate services, which could shift fraud to other vectors.
Highlights the unintended consequences of anti‑scam measures and underscores the importance of balanced, coordinated global action, adding nuance to earlier optimism about collaboration.
Provided a concluding perspective that balanced the earlier enthusiasm for collaboration with caution about policy side‑effects, prompting the moderator to close the session on a reflective note.
Speaker: Mr. Julian Gorman (answer to Anil Kumar Jha)
Overall Assessment

The discussion evolved from an introductory concern about AI accountability to a multi‑layered exploration of practical AI applications, infrastructural challenges, and governance mechanisms. Key comments—particularly those introducing cross‑sector collaboration, concrete AI success stories, the pitfalls of data silos, and the proposal of a voluntary incident‑reporting standard—served as turning points that redirected the conversation toward systemic solutions and policy frameworks. Julian Gorman’s coalition narrative and his four‑pillar model acted as the central pivot, linking technical implementations with regulatory and global coordination needs. Collectively, these insights shaped a cohesive narrative that balanced innovation with trust, underscoring the necessity of collaborative standards, centralized AI infrastructure, and continuous skill development to sustain responsible AI in telecom.

Follow-up Questions
What value does the voluntary AI incident reporting standard offer to telecom service providers if they adopt it?
Understanding the benefits will encourage adoption and help providers see how the standard can improve incident analysis and regulatory insight.
Speaker: Dr. M P Tangirala
Do service providers see any benefit from voluntarily adopting the AI incident reporting standard?
Seeks the provider’s perspective on practical advantages of using the standard for internal AI governance and model refinement.
Speaker: Dr. M P Tangirala (directed to Mathan Babu Kasilingam)
How should enterprises control the costs of AI, especially the infrastructure and skill expenses?
Cost is a major barrier; insights are needed on optimization strategies to make AI financially sustainable for telecom operators.
Speaker: Dr. M P Tangirala (directed to Mathan Babu Kasilingam)
Can you provide more details on the disaster‑management application that uses AI to generate geo‑targeted alerts?
Further technical and operational information is required to assess scalability and potential replication in other regions.
Speaker: Dr. M P Tangirala (directed to Dr. Rajkumar Upadhyay)
Could you elaborate on how cross‑sector collaboration can be fostered to combat scams using AI?
Collaboration is essential for data sharing and innovation; clarification is needed on mechanisms, standards, and regulatory support.
Speaker: Dr. M P Tangirala (directed to Mr. Julian Gorman)
What two steps should global leaders take to align the world against AI‑driven scams, and what two steps should India take to align with the world?
Seeks concrete, actionable recommendations for international and national coordination on scam prevention.
Speaker: Anil Kumar Jha (addressed to Mr. Julian Gorman)
Research needed: Development of mitigation mechanisms linked to the AI incident reporting standard.
The standard defines reporting but not remediation; a framework for mitigation would close the loop and improve AI safety.
Speaker: Syed Tausif Abbas
Research needed: Methods for data deduplication and consolidation of siloed AI data across telecom enterprises.
Current siloed repositories hinder security and efficiency; unified data platforms could enhance AI performance and governance.
Speaker: Mathan Babu Kasilingam
Research needed: Cost‑optimization strategies for AI infrastructure in telecom, focusing on hardware, cloud, and skill utilization.
Infrastructure accounts for 80‑90% of AI costs; identifying lower‑cost architectures and skill‑mix models is critical for scalability.
Speaker: Mathan Babu Kasilingam
Research needed: Privacy‑enhanced data‑sharing mechanisms and regulatory sandboxes to enable cross‑border collaboration against scams.
Effective scam mitigation requires sharing personal risk data while complying with privacy laws; new technical‑legal models are required.
Speaker: Julian Gorman
Research needed: Evaluation of the AI‑driven disaster‑management system’s effectiveness and its applicability in other countries.
The system has shown success in India; systematic assessment will support international deployment and standardization.
Speaker: Dr. Rajkumar Upadhyay
Research needed: Design of human‑in‑the‑loop governance models for AI decisions in telecom operations.
Ensuring AI does not act autonomously without oversight is vital for trust and regulatory compliance.
Speaker: Dr. M P Tangirala (implied)
Research needed: Impact assessment of AI‑based fraud detection on false‑positive rates and customer inconvenience.
Balancing fraud reduction with user experience requires quantitative studies on accuracy and user impact.
Speaker: Dr. M P Tangirala (implied)
Research needed: Standardization and industry adoption of a common AI incident taxonomy and schema.
A unified taxonomy would enable consistent reporting, benchmarking, and regulatory analysis across operators.
Speaker: Syed Tausif Abbas and Mathan Babu Kasilingam
Research needed: Effectiveness of crowdsourcing platforms like Chakshu for fraud reporting and mitigation.
Understanding user participation rates, detection speed, and outcome quality will inform scaling of such platforms.
Speaker: Dr. Rajkumar Upadhyay
Research needed: Role of AI in cyber‑security defense versus AI‑powered attacks within telecom networks.
As attackers adopt AI, defenders must develop AI‑driven countermeasures; systematic study is needed to stay ahead of threats.
Speaker: Dr. Rajkumar Upadhyay

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.