Global Enterprises Show How to Scale Responsible AI

20 Feb 2026 13:00h - 14:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, comprising senior leaders from Infosys, IBM, NVIDIA and Meta, examined how trust and responsible AI can be scaled across enterprises [1-5]. Geeta Gurnani noted that clients who two years ago were unfamiliar with responsible AI now treat security as a “shift-left” priority and expect governance to be integral rather than an afterthought [17-24], and she illustrated the immaturity of many organisations by recounting a senior leader who managed AI governance on an Excel sheet, a practice she said cannot support large-scale deployment [26-28]. Sundar Nagalingam added that when AI is delivered to billions, the most common failures are not infrastructure outages but missing or weak control mechanisms that expose functional or security vulnerabilities [34]. Sunil Abraham warned against anthropomorphising AI, emphasising that a generative model is at its core a single weight file, a dual-use artifact, and that fearing it is unnecessary if a Unix-style security model is applied [36-49].


The panel agreed that trustworthy AI must be judged by the end-user’s confidence that the system is secure, non-hallucinatory and compliant with applicable laws [55-64]. Sundar grouped the necessary safeguards into three buckets (functional safety, AI-specific safety and cybersecurity), using AI-assisted robotic surgery as an example of how each layer must be addressed [68-75]. Geeta stressed that governance should be a gate-keeping control backed by senior leadership and eventually embedded in the enterprise risk framework rather than remaining a manual, post-hoc review [114-144]. She also said that customers will only pay a premium for “trust-grade” AI when the use case directly affects reputation or compliance, while internal experiments may remain low-cost [216-233].


Sundar affirmed that high-performance AI hardware should ship with built-in privacy guardrails, citing autonomous driving and aerospace as domains where such safety layers are non-negotiable [148-158]. Sunil highlighted that ad-supported generative AI can democratise access and help bridge the AI divide, arguing that the business model does not inherently conflict with AI neutrality [182-200]. All panelists concurred that AI model innovation is outpacing governance frameworks, making rapid standardisation and accountability essential [301-304]. The discussion ended without consensus on mandatory watermarking, with participants split between viewing it as a useful demarcation and seeing it as an impractical universal requirement [327-335].


Keypoints


Major discussion points


Trust and responsible-AI adoption is still immature in many organisations.


Geeta notes that “security used to be an afterthought… now people first think security” and that “people are adopting AI but trust, governance, security is taking a prime stage now” [17-18]. She also recounts a senior leader who managed AI governance on an “Excel sheet,” highlighting how rudimentary practices still block scaling [18-24].


When AI is scaled to billions of users the first failures are in safety and control, not raw infrastructure.


Sundar explains that “the systems that drive the infra… break” and that failures appear either in “how efficiently each of the use cases… gets served” or in “whether it is being served safely in a secure way” [34-38]. He later groups the critical failure domains into three buckets – functional safety, AI safety, and cybersecurity – using the example of AI-assisted robotic surgery [68-75].


A practical definition of “trustworthy AI” centres on the end-user’s confidence that the model is secure, non-hallucinating and compliant.


Geeta breaks it down: the model must have “passed the security test,” be “not hallucinating” with monitoring controls, and meet “compliance” for the relevant law or industry [55-64]. She stresses that trustworthy AI is about “how the end user will consume confidently” [65-66].


Governance must move from a passive, observation-only role to an enforceable control embedded in enterprise risk management.


Geeta describes the need for senior-leadership commitment, “governance as a control point, like a gatekeeper,” and cites the IBM ethical board that must approve every AI proposal before sales can proceed [130-141]. She later notes that AI risk should be folded into the organisation’s overall risk posture rather than treated as a separate silo [143-144].


Global regulation, standards and industry-wide alignment are still evolving, and many panelists see a gap between rapid model innovation and slower governance.


Sundar calls for “standardization… then tailor it for the needs of each of the countries” and outlines a three-step approach (platform safety, algorithmic safety, ecosystem safety) [240-249]. Sunil argues that ad-supported AI can level access, while maintaining that there is “no regulatory vacuum for AI” and that responsibility ultimately rests with developers [190-202][295-298]. When asked whether model advances outpace governance, all panelists answered affirmatively [298-304].


Overall purpose / goal of the discussion


The panel was convened to explore how large enterprises (Infosys, IBM, NVIDIA, Meta) can build and scale trust in generative AI, covering responsible-AI practices, safety and security failures, governance mechanisms, and the need for coherent regulatory and industry standards. The moderators repeatedly asked participants to articulate concrete “non-negotiables” and practical steps for embedding trust at scale.


Overall tone and its evolution


– The conversation opens enthusiastic and collegial, with applause and light banter as the panelists are introduced [5-9].


– It quickly becomes analytical and cautionary, focusing on concrete challenges (Excel-sheet governance, failure modes, safety buckets) [17-24][34-38][68-75].


– A pragmatic, solution-oriented tone emerges when discussing governance integration and enterprise risk [130-144].


– Mid-session, skepticism and philosophical nuance appear, especially in Sunil’s remarks about anthropomorphisation, ontology, and the limits of regulation [36-44][295-298].


– The final segment shifts to a rapid-fire, slightly humorous style, with yes/no questions, playful disagreements, and a closing “thank you” [273-284][327-334].


Overall, the tone moves from upbeat introduction → serious technical and policy analysis → reflective skepticism → light-hearted rapid questioning, maintaining a professional yet conversational atmosphere throughout.


Speakers

Mr. Syed Ahmed – Moderator; member of the Responsible AI Office at Infosys [S1]


Ms. Geeta Gurnani – Field CTO, Technical Pre-sales and Client Engineering, IBM [S3]


Mr. Sundar R. Nagalingam – Senior Director, AI Consulting Partners, NVIDIA [S4]


Mr. Sunil Abraham – Public Policy Director, Meta [S6]


Additional speakers:


– None identified beyond the four listed above.


Full session report: Comprehensive analysis and detailed insights

The panel opened with brief introductions of the four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM’s field CTO for technical pre-sales and client engineering), Mr Sundar R. Nagalingam (senior director of AI consulting partners at NVIDIA), and Mr Sunil Abraham (public policy director at Meta) – and the moderator framed the session as a discussion on how large organisations can scale responsible, trustworthy AI while tackling governance, safety and regulatory challenges [1-5].


Geeta Gurnani highlighted a dramatic shift in industry attitudes toward security. Two years ago many clients still asked “what is responsible AI and what is trust?” but today “security has become a shift-left priority – people first think security then everything else” [17-19]. She illustrated the immaturity of current governance by recounting a senior leader who managed AI risk on an Excel spreadsheet, a practice that, she observed, has left that organisation unable to scale with confidence [23-28].


Sundar Nagalingam explained that when AI systems are delivered to billions of users the first points of failure are not the underlying hardware but the control layers that orchestrate the infrastructure. He grouped these risks into three “buckets”: functional safety (e.g., an AI-assisted robotic surgery delivering the correct clinical outcome), AI-specific safety (bias, training-time validation, synthetic testing) and cybersecurity (protecting the system from malicious intrusion) [68-75]. He added that the proliferation of standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI-driven robotic system fails [260-267].


When asked to define “trustworthy AI”, Geeta framed it from the end-user’s perspective: a model must pass a security test, be monitored to prevent hallucinations, and comply with the relevant legal regime, thereby allowing the end user to consume the output confidently [55-66].


Geeta argued that governance must move from passive observation to an enforceable, gate-keeping control embedded in the organisation’s risk framework. She called for senior-leadership commitment and for AI risk to be folded into the enterprise risk posture rather than treated as a siloed function [130-144].


Sundar echoed the need for standardisation before localisation. He proposed first establishing a safe platform (the “template”) and then fine-tuning it for each country’s regulations, covering platform safety, algorithmic safety and ecosystem safety [240-249]. This mirrors Geeta’s call for a technology-level baseline that can be adapted per jurisdiction [290-294].
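
Sundar’s standardise-then-localise idea can be pictured as a baseline safety template overlaid with per-jurisdiction deltas. The Python sketch below is a hedged illustration only; the three buckets follow his framing, while every key, value and country code is invented for the example:

```python
# Hedged sketch of "standardise, then tailor": one baseline safety template,
# overlaid with per-country deltas. Bucket names follow the panel's framing;
# all keys, values and country codes are invented illustrations.
BASELINE = {
    "platform_safety": {"secure_boot": True, "signed_updates": True},
    "algorithmic_safety": {"bias_eval_suite": "v1", "synthetic_scenarios": 10_000},
    "ecosystem_safety": {"approved_suppliers": ["baseline-list"]},
}

COUNTRY_DELTAS = {
    "IN": {"ecosystem_safety": {"approved_suppliers": ["baseline-list", "in-local"]}},
    "EU": {"algorithmic_safety": {"bias_eval_suite": "v1-eu-annex"}},
}

def localised_config(country: str) -> dict:
    # Copy the baseline bucket by bucket, then apply only the overrides a
    # jurisdiction demands; the template itself is never mutated.
    config = {bucket: dict(values) for bucket, values in BASELINE.items()}
    for bucket, overrides in COUNTRY_DELTAS.get(country, {}).items():
        config[bucket].update(overrides)
    return config

print(localised_config("EU")["algorithmic_safety"])
# {'bias_eval_suite': 'v1-eu-annex', 'synthetic_scenarios': 10000}
```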


On the hardware side, Sundar affirmed that high-performance AI infrastructure should ship with built-in privacy guardrails. He cited autonomous driving and aerospace as domains where “the safety layer is non-negotiable” and where silicon-level protections are essential [148-158].


Sunil Abraham offered a philosophical stance, warning against anthropomorphising AI. He framed an AI model as a single weight file, a dual-use artifact, and argued that a Unix-style “security-first” mental model gives confidence: a file confined to a user space can do no more than that user space allows [36-49]. When asked about Moltbook, the OpenClaw agent-only social network, he dismissed its emergent “community” as machines hallucinating, stochastic parrots at work, reinforcing his view that AI should not be anthropomorphised [36-44].
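
His Unix analogy maps onto ordinary process isolation. The sketch below is a hedged illustration of that mental model, not a description of any vendor’s stack: whatever loads the weight file is confined to an unprivileged user, so at most it “will do whatever it wants to do in that user space”. The paths, UID and serving script are hypothetical, and running it would need a POSIX system with root privileges:

```python
import os
import subprocess

# Illustrative sketch only: treat the weight file as untrusted data and
# confine the process that loads it to an unprivileged Unix user.
WEIGHTS = "/srv/models/example/weights.safetensors"   # hypothetical path
SANDBOX_UID = 2001   # hypothetical dedicated no-login user
SANDBOX_GID = 2001

def drop_privileges():
    """Runs in the child just before exec: give up root, keep sandbox rights."""
    os.setgid(SANDBOX_GID)   # group first, while we are still allowed to
    os.setuid(SANDBOX_UID)

# The weights stay readable but never writable or executable.
os.chmod(WEIGHTS, 0o444)

# Whatever the model "does", it does inside this user space and nowhere else.
subprocess.run(
    ["python", "serve_model.py", "--weights", WEIGHTS],  # hypothetical script
    preexec_fn=drop_privileges,
    check=True,
)
```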


Sunil also discussed Meta’s “Trusted Execution Environment” paper, noting that roughly half of its 60-80 pages concern hardware-level attacks; the moderator added that, at last count, there were 33 distinct attack strategies comprising more than 100 individual attack types [180-186][190-194].
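
The lifecycle he describes, spinning up cloud compute for a single task and extinguishing it afterwards, resembles an ephemeral attested-enclave pattern. The following minimal sketch assumes that reading; every function and name is an illustrative stand-in, not Meta’s published API:

```python
from contextlib import contextmanager
from dataclasses import dataclass

# Hedged sketch: the device lacks compute, so an attested cloud instance is
# created for one task and then extinguished. All names are stand-ins.

@dataclass
class Enclave:
    instance_id: str

    def run_inference(self, request: bytes) -> bytes:
        # Placeholder for model execution inside the trusted environment.
        return b"summary-of:" + request

def provision_enclave() -> Enclave:
    # Stand-in for spinning up a stateless, TEE-backed cloud instance.
    return Enclave(instance_id="tee-0001")

def verify_attestation(enclave: Enclave) -> None:
    # Stand-in for the client checking a hardware attestation quote
    # before any user data is sent.
    assert enclave.instance_id.startswith("tee-")

def destroy_enclave(enclave: Enclave) -> None:
    # Stand-in for tearing the instance down so nothing persists.
    enclave.instance_id = "destroyed"

@contextmanager
def ephemeral_enclave():
    enclave = provision_enclave()
    try:
        verify_attestation(enclave)
        yield enclave
    finally:
        destroy_enclave(enclave)   # runs even if inference fails

def answer_group_query(request: bytes) -> bytes:
    with ephemeral_enclave() as enclave:
        return enclave.run_inference(request)

print(answer_group_query(b"long group chat thread"))
```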


Regarding business models, Sunil argued that ad-supported generative AI can increase accessibility and help close the AI-usage gap, especially for low-income users, without necessarily compromising the principle of AI neutrality [182-202].


Geeta addressed market willingness to pay for “trust-grade” AI. She said enterprises are unlikely to pay a premium for internal experiments, but will do so when the AI product is consumer-facing or carries downstream reputational, compliance or brand risk – “I cannot afford to fail there” [221-233].


All three panelists concurred that the models and innovation outpace governance [301-305], underscoring the urgency of accelerating standards and oversight.


Points of disagreement emerged. Geeta advocated a universal technical baseline before geographic regulation [290-294], whereas Sunil asserted that existing laws already apply and there is no regulatory vacuum for AI [295-296]. Sundar’s middle-ground proposal of standardisation followed by localisation sits between these positions. On mandatory watermarking, Sundar expressed skepticism, arguing that the industry has already accepted AI-generated content and that blanket watermarking may be unnecessary [332-337]; Sunil declined to answer directly, responding only with a counter-question [327-330], while Geeta suggested future technology might render watermarks unnecessary [338].


The discussion yielded several actionable take-aways: senior leadership must mandate AI governance as a non-optional, gate-keeping function and integrate AI risk into enterprise risk management [130-144]; organisations should replace ad-hoc tools such as Excel with automated, runtime-enforced governance pipelines [114-124]; hardware vendors need to embed privacy and safety guardrails at the silicon level for high-risk sectors [148-158]; a three-layer safety framework (functional safety, AI safety, cybersecurity) should become the industry baseline, with country-specific tweaks applied thereafter [68-75][240-249]; and while ad-supported models can increase accessibility, their long-term impact on trust and neutrality warrants further study [182-202].
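
As a concrete, though hypothetical, illustration of the second take-away: a governance gate differs from an Excel tracker precisely in that it can halt a pipeline at runtime. The minimal sketch below uses invented check names echoing the panel’s trust criteria, not any vendor’s actual tooling:

```python
from dataclasses import dataclass, field

# Hedged sketch of "governance as a gatekeeper" rather than an Excel sheet:
# a deployment request is blocked at runtime unless every mandatory check
# has passed. Check names are invented for illustration.

MANDATORY_CHECKS = {"security_test", "hallucination_monitoring", "compliance_review"}

class GovernanceGateError(Exception):
    """Raised to stop the pipeline, not merely to log a warning."""

@dataclass
class UseCase:
    name: str
    checks_passed: set = field(default_factory=set)

def approve_for_deployment(use_case: UseCase) -> None:
    missing = MANDATORY_CHECKS - use_case.checks_passed
    if missing:
        raise GovernanceGateError(f"{use_case.name}: blocked, missing {sorted(missing)}")
    print(f"{use_case.name}: cleared governance gate")

# An internal POC that skipped compliance review is stopped at the gate.
poc = UseCase("ask-IT assistant", {"security_test", "hallucination_monitoring"})
try:
    approve_for_deployment(poc)
except GovernanceGateError as err:
    print(err)   # ask-IT assistant: blocked, missing ['compliance_review']
```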


In closing, the moderator thanked the participants and the audience, noting that the diversity of perspectives underscored consensus on layered security and governance while highlighting divergent views on global regulatory alignment and content-labeling, pointing to clear directions for future research and policy [339-340].


Session transcript: Complete transcript of the session
Mr. Syed Ahmed

of responsible AI office in Infosys. And absolute privilege to announce my co-panelists: Geeta Gurnani, field CTO, technical pre-sales and client engineering at IBM; Sundar R. Nagalingam, senior director, AI consulting partners at NVIDIA; and Sunil Abraham, public policy director at Meta. So now between Infosys, IBM, NVIDIA and Meta, you can’t get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph, as requested by the organizers, before we get started with the panel discussion. Thank you. All right. So it’s really amazing to be on a panel with all of you again.

So before we get started with, you know, a lot of heated discussions on the scaling of trust, because trust is something that everyone thinks, you know, they have a different perspective on, let me get started on very simple questions and then we’ll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry, in your experience, after which you have felt that, oh, even after decades of experience, this industry still surprises me?

Ms. Geeta Gurnani

Sure, so thank you so much, Syed, for that question. And as I was mentioning when I was standing outside: when I was walking in and meeting many clients almost two years back, everybody was asking me, what is this responsible AI and what is this trust? Okay. And what surprised me: we all witnessed so much learning from security as a concept, right? Security always used to be an afterthought, and now I think people just can’t afford not thinking security. It has become completely shift-left, right? People first think security, then everything else. But in spite of that whole learning, what I witnessed in the last 24 months is that people are adopting AI, but trust, governance, security is only now taking a prime stage. Okay, it wasn’t a first thought. And when I met a very senior leader, I will, of course, not name them.

And I told them, you are starting your journey on Gen AI, can we work with you on responsible AI? And he said, but that will block my innovation, and I don’t want to block my innovation. And I asked him, so how do you manage the governance? He said, on an Excel sheet. And I was, wow. I said, if you’re ready to spend so much money… But I think now when I go and meet them, I realize that that organization is not able to scale, because they’re not confident. This Excel will never let anybody scale. I think that’s the first thing.

Mr. Syed Ahmed

That’s quite profound, what you mentioned, right? So what you’re saying is that people are more open to responsible AI now, and trustworthy AI now. And in many ways, they leaped ahead earlier with innovation, with a lot of innovation. There is absolutely no doubt in anyone’s mind about the power of AI, what AI can do, but true scale can come only when you start trusting AI, only when you start building that layer of trust, and that time is now. That’s correct. Excellent. Okay, Sundar, maybe next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users? Whether it is governance first, or infrastructure, or alignment, when AI scales to a lot of people, what breaks first?

Mr. Sundar R Nagalingam

I mean, that’s the thing, right? I mean, any one of them can break, and most of the times it is not the infrastructure that breaks. What breaks is the systems that drive the infra, and the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices, that is one possibility of failure. The second one, a very obvious one, is: is it getting served safely, in a secure way? That could be a very, very important point of failure. And even that is a failure; I mean, the systems may appear to be running well, everybody might be getting the answers that they have been looking for, everything might look hunky-dory, but if a very, very small vulnerability gets overlooked, if it had not been thought about, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that’s a huge failure. So most of the times, the things that break when you are serving a large number of users are the ways in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.

Mr. Syed Ahmed

Excellent, I totally agree with you, it is absolutely right. Sunil, I think a very, very important question to you. Last month we saw all this craziness about OpenClaw, Moltbot, Moltbook. For those of you who don’t know, Moltbook was a social networking site, but with a twist: it was created for only AI agents. Okay, so humans were allowed to observe what is happening in the social networking site, but they couldn’t participate, they couldn’t post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language, they had their own religion, apparently. So a lot of things happened. So the question to you is: you have spent years shaping digital policy, right? But I mean, when you heard about all this Moltbot, Moltbook and all that, did you cringe for a minute and say, oh, I didn’t expect this?

Mr. Sunil Abraham

No, and unfortunately, even though you said it’s the lightweight question, I have to answer it using big words. So I think the main reason why I don’t see it that way is because I’m skeptical towards anthropomorphization. Whenever I see technology do something, I don’t, in my head, apply the mental model of a human. It’s just technology doing something. So I’m not impressed at all by a Moltbook. It is just machines hallucinating.

The stochastic parrot is just doing something. There is no real intelligence on display yet. The second big word I’m going to use is ontology. In philosophy, the ontological question is: what is this thing that I’m looking at, Moltbook or OpenClaw? And at the very core of Gen AI is a single file on the file system, the weight file. And I’m somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files. And operating systems didn’t scare me. And somehow you want me to be scared of a single file. A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it’s one file, but what is the nature of truth about this file? And I think the mistake we’re making is we’re expecting it to be a responsible file, but that is actually not, according to me, what it is. According to me, what it is, is a general-purpose file, or a dual-use file, and one person’s bug is going to be another person’s feature, and another person’s feature is going to be a third person’s bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I’m using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta’s mental model of security first, and that means Unix thinking: suppose we use the Unix mental model, then surely we will not be scared of any file. It’s in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I’m not scared of Moltbook at all.

Mr. Syed Ahmed

Thank you so much for your response; that gives us a lot of assurance, and I think a lot of people in the audience will also agree that now we are a little bit more assured than when we started. One of the big challenges that we always have is that we humanize AI too much, that was one of the big words that you used, which is not the case; we shouldn’t be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let’s get started with the perspectives. The reason why I am opening up one question to all of you, the same question: one is because I am a little lazy; second is, when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? So when I talk to regulators, they have a different view on it. When I talk to governments, policy makers, they have a different view on it. Academia has a different view, industry has a different view. Now within industry, you know, enterprise application companies like IBM have a different view, chip makers like NVIDIA have a different view, and consumer AI platforms like Meta have a different view. Very quickly, if you can tell me: what does trustworthy AI mean in your own sense, and what are the key non-negotiables, one or two maximum? So, for each of you. Geeta, maybe we will start with you.

Ms. Geeta Gurnani

Okay, so I think I will second your point about people being confused about trustworthy AI. I think as a technologist even I was confused three years back, okay, because people use a lot of terms interchangeably, which sometimes scares them, and they don’t know what they’re doing, because people use trust, security, governance, compliance, all of it interchangeably. I’m happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people: okay, exactly what are we trying to do? We’re putting in a lot of keywords. Yeah, it’s just a lot of keywords. But I think when you start to decipher each one of them, you say, okay, ultimately, see, trustworthy AI is for an end user, which means: can I trust what I’m using?

Right. And all of us technology providers need to really work on that, which means: okay, to make you trust what you’re using, what enablers can I give? Right. So in my mind, for trustworthy AI, the ROI needs to be seen in terms of what downstream risk it is going to bring. Right. So if I have an end user or a consumer who wants to trust an AI, then I think he needs to be assured that the model or use case I’m using has already passed the security test. It is not hallucinating, which means I have control over monitoring what output it is producing. Right. So that risk has been taken care of.

Somebody has looked at it. And the third, which is compliance, right: that if I am operating under a law of the land where some laws are applicable, or if I’m in an industry where some laws are applicable, somebody has taken care of it for me. Right. So in my mind, trustworthy is how the end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers is taken care of, and they will be taken care of differently in different industries, by academia and all of it. That’s broadly it.

Mr. Syed Ahmed

I love it. So basically, irrespective of all the building blocks of security, safety, privacy, which you said can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So now, from NVIDIA’s perspective, or your perspective?

Mr. Sundar R Nagalingam

Sure. So this trustworthiness: I mean, you explained it very beautifully, Syed. Multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So, I mean, which is trustworthy and which is not, which is safe and which isn’t, right? So let’s try to abstract it to a very high level, something which can be bucketized, in a way; let’s say in three buckets, and all these three buckets will be applicable to any regulator that you’re talking about, any government you’re talking about, any country, any function, any whatever it is. Okay. The first one, the most important one, is the functional safety. Okay, maybe if I explain it with the help of an example it’s easier for all of us to relate to it. I mean, let’s say a robotic-assisted surgery, an AI-assisted robotic surgery. The first one is the functional safety, okay: the function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, okay, and what comes before and after surgery. It can be very, very easily equated with the skills of a surgeon, a manual surgeon, right? I mean, that is what it is, the functional part of it: is it getting delivered? Okay. That is, I would say, in terms of visualizing, processing, understanding and controlling, the easiest of the three that I’m talking about, because most of the times it’s black and white. It’s not always black and white, but most of the times it’s black and white. The second one is the AI safety that goes into it. See, I mean, obviously for an AI-assisted robotic surgery, you cannot even imagine the amount of training that needs to be done, the amount of testing and validation that needs to be done, the amount of, you know, scenarios that can be visualized, created through synthetic methodologies, emulated and simulated and tested, the amount of bias that can get into it.

I mean, if it is a male patient, I mean, the simplest bias would be the different approach between a male patient and a female patient. I mean, I’m not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? Third, and that’s not easy, because the problem here, Syed and august attendees, is that it’s humanly impossible to even think of all the things that can go wrong. I mean, that is why we always go back to these AI-assisted ones for that also. The last one being cybersecurity. If somebody, if a bad element, wants to just hack into the theater and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, I mean, that’s like unimaginable, right? And it can happen. I mean, it’s not easy, but theoretically it is possible. So I would say that if we abstract it to a very high level, these three areas, once again: the functional safety part of it, the AI safety part of it, and the cybersecurity. These three will be common amongst any approach that needs to be taken.

Mr. Syed Ahmed

Absolutely spot on. In fact, if I can extend it: say, when we are building this kind of AI application, say for example the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is okay, but when a robotic surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned were fantastic, and I think this is very much essential. Yeah, you touched a very important…

Mr. Sundar R Nagalingam

May I just add 10 seconds? It was a very important point. And what is the reason for that? Why is there so much of standards for that, why is there an undue expectation of it? The reason is very simple: there is no accountability. Whom do I blame, whom do I take to court, whom do I curse? There is no human. It’s easier when the surgeon makes a mistake: you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, I mean, is it the robot? So that uncertainty of whose collar to hold, whose neck to be choked when things go wrong, that uncertainty is increasing the expectations of it.

Here you know for certain whom to blame. There you absolutely don’t know whom to blame. When I don’t have somebody to blame, I don’t want a reason to blame.

Mr. Syed Ahmed

Accountability is definitely very, very, very important. I can’t stress more. But also if an AI system has a flaw, it is at scale. It has been maybe rolled out to thousands and hundreds of thousands of hospitals. So it can fail at that level. We can’t absolutely take any kind of, you know, we have to take precautions.

Mr. Sundar R Nagalingam

Excellent point. Error also scales. Good point.

Mr. Syed Ahmed

Sunil?

Mr. Sunil Abraham

Yeah, again, I just love disagreeing with Syed on everything he says.

Mr. Syed Ahmed

That’s very rare, Sunil.

Mr. Sunil Abraham

So I look at a project like… and there is distributed installation of a technology, and hopefully that kind of architecture should not scale, as you say. So that is the Meta vision: superintelligence, that means personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt Meta models, Llama 2 and Llama 3, and the question was: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said: I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn’t do as per the request; it did the opposite of the request. And Llama 3 was safer than Llama 2: it said, I refuse to answer this question because I morally object to this question. This lady was happy, because she is lady A. But actually there is an imaginary lady, lady B, who works in some patriarchal institution, and she’s going to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So it is a dual-use technology. And if it truly has to avoid all of this risk at scale, which is perhaps going to happen in the world of atoms, then in the world of atoms I would be as worried as Sundar is. Though, if I tell you about invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Will the Indians accept it or not, in 2026?

They won’t accept it. That invention is called the automobile. Even today, in 2026, we are not able to solve the safety issue of that technology. Still, as Indians and as the human species in India, we say: oh, 200,000 Indians will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But we are asking quite rigorous questions here, I feel. So for us, in the world of bits, we have three mental models for the harm. The first mental model: zero-to-one, just you and the model. There, everything that is legal, going back to what Geeta said, everything that is legal is allowed.

And it is legal to write a book of hate speech. All of this is legal. You can write a book about neo-Nazis. These are all legal acts. Then we have one-to-one. In the one-to-one, the community standards of Facebook will have to kick in. At that point you cannot say whatever is legal; you have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully, so therefore you cannot say un-family-friendly things. And then, when the robot, or the intelligence, is participating in any stream of conversation, then perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies, and some people may love heavy metal and some people may get very upset by heavy metal. So it has to deal with all of that.

Mr. Syed Ahmed

Absolutely love the diversity of responses to one question. And that’s very important, and only these kinds of panels, you know, representing different industries, can bring in this kind of diversity. So I am really amazed at the diversity of responses to the one question that I have asked you, and I hope that you have enjoyed it. So let’s go a little bit deeper. Geeta, IBM has been investing in a lot of responsible AI stuff, even before all this agentic AI era. I remember, way back during the good old machine learning days, you used to have AI Fairness 360 and security products.

Most of it was open source, and we used to use them. Today you have IBM watsonx.governance. But the question is: how do you ensure that these tools don’t remain at just a monitoring layer and get enforced on the ground at runtime, right? When it is actually needed, when the models are getting served, how do you ensure it is happening at runtime?

Ms. Geeta Gurnani

Wonderful. I’ll just start on the lighter note that I hope every corporate has an office that can enforce this, where they have a responsible AI head like Syed and Arshik for India, who can really enforce it. But trust me, it actually starts with the vision of the senior-most leadership in enterprises: do they want to scale AI across the different business functions, for themselves as well as for their clients, with trust? It can’t happen if you are not committed, because of the first example I gave you. And I love what he just added about the Unix model, saying, do you want to be conservative or do you not want to be conservative, right? But conservative helps you scale.

And I think it also boils down to your point, where you said that errors can also scale, right? So if I were to stop errors at scale, then this is needed, right? But I think the mistake I’ve more often seen us make, and that’s why I was giving the security example also, is that we started investing in tooling a lot later. Okay. Now, if you want every single person to use it shift-left, which means not governance as an observation later on, but governance as a control, then you have to equip people to automate to a good extent. Right. If you ask people that manually, every single time a use case comes, you first check: is it compliant?

Is it ethical? Should I be doing it? Should I not be doing it? And there are no workflows for people to really automate, then people give up. And forget about AI: I think if you are asked to do any task in today’s world which is extensively manual, people will skip it, no matter whatever hard rules and regulations you make. Right. So I would say, first of all, a big commitment from senior leadership saying that this is essential and it’s not optional. That’s the first thing. The second thing, I think everybody needs to understand that it is not observation. You are not sitting like a governing body somewhere who just observes whether it is right or wrong.

You have to. make it control point, like a gatekeeper, saying that unless you do this, you are not allowed to take it forward. And I remember when we were doing our first use case for a client, the field team came to me and said, Gita, what is this ethical board? Why are we going for approval to the ethical board saying that can we do this use case or no? Because as a sales team, we were not allowed to do any use case unless our ethical board really approves it, saying that you can table a proposal to a client. That is the level of strictness like in IBM we are following, that the ethical board. And everybody thought that ethical board is like some body sitting somewhere who will…

Rubber stamping everything. Rubber stamping. And now the sales team needs to take approval before they can bid a proposal. If it’s an AI proposal, it has to have a conversation with them, right? So governance: you start putting that in as a control. And the third point, I think, which we were discussing outside the gate some time back: my observation was that if I were to have a governance conversation in an organization, I have to talk to five people. I have to talk to the risk officer, I have to talk to the CISO, I have to talk to the business person, I have to talk to the CIO. And then one day I was sitting with my team and saying: will this conversation ever see the light of day? Who’s going to take the decision: is it business, is it security, is it risk? And then, thankfully, what we are seeing is that if you have to make governance central, you have to bring it into your enterprise risk posture completely, saying that in your enterprise risk management, if you are calculating your risk posture, then AI risk has to be really taken into consideration. Right? So I will just summarize my conversation, Syed, by saying: make governance the gatekeeper, you have to bring it in as the control, and then eventually, I think, maybe in the next 12 months, I’m pretty confident, it will roll up to the enterprise risk. It is no more a separate AI risk or governance.

Mr. Syed Ahmed

I love the way you said it, and it has to be integrated, right? You can’t just have AI risk; you have to have an integrated risk panel that can make decisions. That’s absolutely right, and I love the way you said it. So first is the leadership level, wherein you need to empower and mandate it, and then with the tooling and all, you need to enable, and people on the ground need to ensure that they implement it. So yeah, that’s an amazing perspective. Thank you, Geeta. Sundar, I couldn’t resist asking this question of you; a lot of people in the audience will not spare me if I don’t ask this question of you.

Mr. Sundar R Nagalingam

you’re scaring me now

Mr. Syed Ahmed

No, no, no, it’s an easy question, but an expected question for, you know, a person like you, right? So: should GPUs and high-performance AI infrastructure have embedded privacy guardrails at the silicon level?

Mr. Sundar R Nagalingam

absolutely yes absolutely yes I mean it should be there I mean why not And I would – yeah, go ahead.

Mr. Syed Ahmed

Would you want to give some examples on how you are doing it?

Mr. Sundar R Nagalingam

where it goes through a very, very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe, right? I mean, healthcare and driving. These are, I would say, the most stringent areas when it comes to transportation. Let me put it as transportation, which includes aerospace as well. The two most stringent areas, where safety is a necessity. It’s never a luxury, it’s a necessity. So the answer is yes, Syed. Absolutely.

Mr. Syed Ahmed

Thank you so much. Sunil, you wanted to…

Mr. Sunil Abraham

Yeah, I mean, perhaps to take forward what Sundar said.

Mr. Syed Ahmed

I will still ask you your question, though.

Mr. Sunil Abraham

We can skip that. Do go.

Mr. Syed Ahmed

No, no, go ahead.

Mr. Sunil Abraham

What I thought was so fascinating about what Geeta said is that in a corporation, in a profit-maximizing firm, they have an ethics review board. And it’s just… I don’t know whether that’s the phrase; that’s an equivalent. Sorry, what did you say? Yeah. So this is something you see in a university, and this is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the tech work that they’re doing. Meta also has this tradition of publishing academic papers. So it’s very weird that corporations are becoming more and more like academia, and perhaps that’s a wonderful thing as well, and we should celebrate that, and that makes people like me very fortunate to be within these corporations. So Meta published a paper which was called Trusted Execution Environment, and the whole idea was: if a WhatsApp user in a group would like to use the power of AI, there is insufficient compute on the device itself to have edge AI solve the problem for the user. So till the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute on the cloud and then do all the processing.

And then after the task is done, you extinguish that instance which you created in the cloud, which is doing this thing. And as part of that paper… so I’m of course not a computer science student, I’m an industrial and production engineer, so I’m like a previous generation of technology, and all of those kinds of things. So the paper: out of 80 pages, or 60 pages, of paper, I cannot understand 40. And those 40 pages are about this hardware. And there’s a whole series of attacks that you can possibly have, in the tradition of the pager attack and Israeli supply-chain attacks. There’s a whole series of things that you could potentially do to invade privacy. And before that, security.

And I just want to sort of share this with these folks. I mean, I guess we all learn that way: we read books and we understand some words, maybe two, three words on the page, and then we feel a little better, and we hope that the next time we read it we’ll get smarter. But there’s a lot, and I’m sure that your team is doing a lot of work, and the Meta team is; they’ve named your chips, saying, NVIDIA chips, we have done the following analysis, and with the other chip… And I don’t understand it at all, but I know it’s a big area of work, and I wanted to say thank you for what you said.

Mr. Syed Ahmed

Thank you, Sunil. Absolutely; last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks that are happening as we speak, at all the levels, including the hardware levels. That’s quite interesting. Okay, and good conversation, by the way. I may have to skip the last few questions because this conversation is so good we could go on and on forever. But Sunil, I’ll still ask you your question.

Mr. Sunil Abraham

no no no no

Mr. Syed Ahmed

no this is a very important question in my mind

Mr. Sunil Abraham

i’ll try to answer it

Mr. Syed Ahmed

Okay, so last week, I think last week or a few days ago, OpenAI did come out with… they started embedding ads in ChatGPT, right? So when a consumer AI platform like ChatGPT starts embedding ads, my question is: will it help consumers subsidize their subscription, or will it kind of violate the doctrine of, you know, the free AI principles, AI neutrality?

Mr. Sunil Abraham

Yeah, so very quickly on that. We should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane; that invention is 125 years old. Only 25% of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15% of households in India; that invention is also 125 years old. Gen AI: my guess is at least 20% of the country is using it today. More than that. More? Oh, thank you. So shall we say 25? Yeah. Okay, 25% of the country is using a technology that is only five years old. And the reason it is penetrating is because of two opennesses.

One is free-weight models, that was what we were discussing, but also gratis: that the service, the intelligence, is available on a gratis basis. Whether you’re an AI summit attendee staying at the poshest hotel and you paid $33,000 per night, or whether you’re in Paharganj and you’re staying for Rs. 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it’s both. Yeah. Meta provides WhatsApp and you’re completely private, and Meta provides non-encrypted services as well. You can have services that are ad-supported. You can have everything. We must have the maximum, because in this country, ideally, we want to move from 30% of people using AI; I want to move to 90% of people using it, because it’s just bits.

We can make this happen. So let’s not be skeptical about the ad idea. It’s a technical problem to be solved. It will help bridge the AI divide, and it will be a great leveler and all that. Sorry, it took much longer than I thought; I thought I’d do it in one, two sentences. Please, back to you.

Mr. Syed Ahmed

Okay. All right. Quite interesting conversations. Geeta, I’ll come to you. We talk a lot about ethics, trust, responsible AI. And suppose we go ahead and develop it. How are you seeing it: I mean, would customers pay a premium for trust-grade AI? Are you seeing that in the market? So if I tomorrow have a superior safety posture, right, is it influencing the buying decisions of the enterprise significantly? I mean, why would anyone invest in responsible AI? If, you know, like IBM, you are investing significantly in it, are you seeing that influencing the buying decisions, because you’re going to churn out trust-grade AI?

Ms. Geeta Gurnani

So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in their journey of Gen AI adoption? Okay, trust me, I still feel many organizations are at the surface. They have not fundamentally been able to address a complete process change, or the complete efficiency they need to be targeting, right? But the minute they want to get into the real use case, which is going to fundamentally change the way they operate, or fundamentally maybe generate a new business model altogether, then I think they are ready to pay the premium. So I would say that they may not pay for every single use case they’re doing, because, see, when we are delivering a use case also, now every enterprise is intelligent enough to say whether I’m going for open models, whether I’m going for paid models, SLM, LLM, tiny models, whatever you may call them, right?

So there is a cost and ROI conversation that always happens: okay, which model am I going to adopt? And many people I’ve seen say: I may not pay enterprise trust-grade AI money if I’m doing an entirely internal use case. Okay. But if I am putting this use case in front of my consumers, or my end clients who are going to use it, which is where there is downstream risk, where my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will pay the premium for trustworthy, because I can’t afford to fail there, right? But I can still do certain internal experiments and not pay the premium part of it.

For POCs and experiments and for some internal use cases. So let’s say if they’re doing some ask-IT or other stuff, then they say, okay, I am okay to go. And that’s where I think people also differentiate even in which model they use now, right? They make a choice of which model they would like to use. So I think it’s not one choice anymore; it depends on what use case you are serving and how critical it is for the business, and then you take a call: am I going to invest and pay a premium? There is no one single lens for all.

Mr. Syed Ahmed

No, yeah, absolutely. Sundar, maybe I’ll ask this. You did talk about your operating system for smart cars and all that. I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. Now the world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack component, from the chips to the models to the AI applications that are there. And you will have to roll this out, being a global company, across geographies, and each geography has multiple different regulations, restrictions and checklists that you will have to follow in terms of automobiles and things like that. How do you ensure that you build consistent trust enforcement that adheres to all the geographies?

Mr. Sundar R Nagalingam

Sure. No, I mean, that’s a very, very pertinent question, because it’s not easy. I mean, it’s not easy. So the idea is to do a standardization, right? And then tailor it for the needs of each of the countries. I mean, you fine-tune it for the needs of each of the countries. So once again, there are three big approaches when it comes to Halos specifically. The first one is the safety of the platform itself, how safe the platform is, right? Once the platform has been made safe, it becomes a template which can be tweaked to the needs of specific geographies, specific countries, et cetera. That’s a very, very important thing. And then you can also implement a standardization approach. So that’s a very important approach.

And then you can also, you know, you can also implement a standardization approach. So that’s a very, very, very, very important approach. the second one is the algorithmic safety right how i mean going back to the fundamentals i mean it’s not programming it is what algorithms do we use how do we ensure that the algorithmic safety is is is number one it is safe first and number two the algorithms can be with with some necessary some tweaks can be made to to to serve the needs of specific geographies specific countries specific specific verticals for that matter things like that the third one is the ecosystem itself i mean uh i mean whatever is is is approved to be used as an ecosystem in one one country will not be there in the second the suppliers will change the vendors will change so it is just not ensuring the platform and the algorithm are safe how do you ensure that the ecosystem that goes into building the cars are is also made safe okay that is a huge thing there is no end to it because it keeps changing a lot but once you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe

Mr. Syed Ahmed

Love your response. What you are saying is basically: even in the absence of regulations and controls, ensure you make the platform safe, you make the algorithms safe,

Mr. Sundar R Nagalingam

you make the ecosystem safe, you have a template for now,

Mr. Syed Ahmed

you already have everything safe; you just need to now tweak it to different geographies or sectors and industries.

Mr. Sundar R Nagalingam

yes absolutely

Mr. Syed Ahmed

Okay, I love it. Sunil, one question to you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this true responsibility or decentralized liability?

Mr. Sunil Abraham

Again, just to use something that Yann LeCun used to say, and he is no longer the chief AI scientist, but the words continue to be true: we all have Wi-Fi routers in our homes, and when those Wi-Fi routers fail, we don’t call Linus Torvalds and say, hey Linus, this Wi-Fi router is running Linux, so please help me fix the bug. The company that sold the router and made a variant, or a derivative work, from the Linux project, you will have to speak to them. And that is the freedom that is necessary in the open-source community and in the community of proprietary entrepreneurs that build on open source, because the BSD license allows you to do that; it allows Apple to take an open project and then make it a fully proprietary project. And you could be making dual use at that level itself: that you want the model to create hate speech.

We want a hate-speech classifier in Santali. Unfortunately, we don’t have enough Santali users on the platform, so we have to make synthetic hate speech in Santali so that we can catch it in advance. So we want to make a big corpus of hate speech in Santali. We cannot go around and ask people: please make hate speech for us. That would be a worse option. So like that, the true approach in the open-source community is to retain freedom number one, the freedom of use, because it allows for the dual purpose. But the moment we use any of that on our platform, and we are providing it, then all those freedoms disappear. Then you have very limited freedoms.

Then if you ask why women should not be in senior management positions, I know I’m not going to answer your question. So that’s where we are.

Mr. Syed Ahmed

Quite interesting. Thank you. I have around seven to eight minutes left. I’m going to skip through the rest of the questions, and we’re going to do things a little differently, if the audience is okay. I’m going to ask very rapid-fire kinds of questions, the same question to everyone, and everyone should answer, but only in yes or no.

Mr. Sunil Abraham

As a philosopher, I protest. I think the slogan for this AI age is “both”: not only should we embrace yes and no, we should also embrace everything in between, because only then will we have personalized superintelligence. The trouble with your framing is that it’s monolithic.

Mr. Syed Ahmed

And I’ll make an exception as a moderator. What I’ll do is, if a question, or a response, requires a little bit more attention, I’ll call that out. You also call me out if you think you need to add anything. But I have some very interesting questions, and I’m really excited to understand from you what you actually think. So again, the format is this: I’ll ask the same question to all of you. You can answer, okay, not just yes or no; very concisely, considering the time. Yes or no or both, the answer is your choice. So: regulations across the globe, do we need to have global alignment on regulations? Yes or no?

Ms. Geeta Gurnani

No

Mr. Syed Ahmed

No. Okay. Okay. Yes. Okay. No. Okay. I understand. But I did expect this kind of response. So maybe I’ll tweak this question a little bit. So: a minimum understanding of what is required across all the geographies, at least. Do we agree on that? Not a very heavily regulated law or something, but minimum conditions that need to be met.

Ms. Geeta Gurnani

I would say we should talk about technology regulation, not geography regulation. So, as he was saying, there are certain table stakes at the technology level. So all technologists should first agree on what is table stakes as a technology; then geographies can take over.

Mr. Sunil Abraham

It’s already regulated. I mean to quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little bit with what Sundar said previously. You cannot say I did it and I’m not responsible.

Mr. Syed Ahmed

I think a little easier question this time. Is advancement in AI models outpacing advancement in AI governance? Are the models and the innovation outpacing governance?

Ms. Geeta Gurnani

Absolutely.

Mr. Sundar R Nagalingam

Yes. I mean, that’s the natural way things happen, right? I mean, the technology has to advance, and then you need to ensure that the advanced technology is safe and secure. So that’s a natural progression, and it has been happening that way.

Mr. Sunil Abraham

It’s never happened in the reverse order.

Ms. Geeta Gurnani

Yeah. I agree.

Mr. Syed Ahmed

But there is a thought saying that technology has to advance, correct, but before it can be widely adopted in production, maybe we need to have AI governance, so that is something we should catch up on really fast. We should make it safe before wide adoption of the technology, right? So, okay: if you have a more capable but less safe model, would you delay your launch to stay responsible?

Ms. Geeta Gurnani

As I said, it depends on the use case. It’s use-case dependent.

Mr. Syed Ahmed

Fair enough

Mr. Sundar R Nagalingam

I mean I just echo Geeta

Mr. Syed Ahmed

Okay, fair enough. One answer where I could get all my panelists to agree. Have you stopped any projects due to safety concerns?

Ms. Geeta Gurnani

As I said, I am not currently on IBM’s ethics board, so I have not stopped any myself, but I have seen them stop projects.

Mr. Sundar R Nagalingam

Likewise, I’m not in the design department, so I don’t have first-hand knowledge, but I’m sure a lot of things would have been delayed, not stopped, because compliance regulations were not being met. Yes, I’m sure.

Mr. Sunil Abraham

Facial recognition was turned off on Facebook. Yes, absolutely.

Mr. Syed Ahmed

A big question now. Sunil, I’ll start with you this time. Can we actually govern AGI, artificial general intelligence?

Mr. Sunil Abraham

It’s a regulatory problem we don’t have to think about yet.

Mr. Syed Ahmed

Okay, we can. Okay.

Mr. Sundar R Nagalingam

Difficult. It’s going to be much more difficult. I would say, instead of asking “can we govern”, ask “should we govern”: absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That’s my hope, and I don’t want to see a day when machines are better than human beings.

Ms. Geeta Gurnani

Okay. I think I’ll go back to what you said initially, that humans should not be scared of what they have created, right? So I think, yes, depending on how it evolves and how people are using it, governance will come. I don’t think it will be optional at some point in time.

Mr. Syed Ahmed

One last round, okay? Again, I’ll start with Sunil. Should we have mandatory watermarking on all the media, text, and other content that is developed by AI?

Mr. Sunil Abraham

Should we have mandatory watermarking in photo editing tools or text editing tools?

Mr. Syed Ahmed

Yes.

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Are you saying yes or no?

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Okay. That’s an answer I’ll take. No answer is also an answer.

Mr. Sundar R Nagalingam

I don’t... see, the fact is we have accepted it. It’s not an untouchable, alien, dirty thing, right? It’s acceptable. So let’s make it good: look good, feel good. There’s no point in watermarking everything just to brand it; there will be a blurry line between human-generated content and AI-generated content and all that. So, absolutely, we should demarcate that. My honest feedback, and I’m saying this with a heavy heart, is that human-generated content will vanish from the internet, just like we no longer remember addresses or phone numbers. We used to remember those; we used to remember routes.

Mr. Syed Ahmed

But I hope not. I hope not.

Mr. Sunil Abraham

That’s why I said I have a heavy heart.

Ms. Geeta Gurnani

I’ll answer from a very personal space, because my son is a creative director in films, and he absolutely says that it has to be demarcated. But sometimes he goes to the extent of saying that in the near future you will be able to demarcate it clearly yourself and will not need any watermark. But there is a different angle that comes in when you are the human creative versus when the content is generated completely by AI.

Mr. Syed Ahmed

Perfect, thank you so much, and that brings me exactly to time. Ladies and gentlemen, please give a big round of applause to this amazing panel. Thank you so much to the amazing moderator. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (34)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“The panel opened with introductions of four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM), Mr Sundar R. Nagalingam (NVIDIA), and Mr Sunil Abraham (Meta).”

The knowledge base lists the same four executives as panelists, confirming Geeta Gurnani, Sundar R. Nagalingam and Sunil Abraham, and notes an Infosys representative, matching the report’s description [S92] and the overall panel description [S1].

Additional Context (medium confidence)

“Sundar Nagalingam grouped AI risks into three buckets: functional safety, AI‑specific safety, and cybersecurity.”

A referenced source outlines three broad categories of AI risk, which aligns with the three-bucket framework described in the report, providing broader context for this taxonomy [S102].

Additional Context (medium confidence)

“Geeta Gurnani said that two years ago many clients asked “what is responsible AI and what is trust?” but today “security has become a shift‑left priority”.”

Industry commentary notes a recent shift toward prioritising security over convenience, illustrating the broader trend toward security-first thinking that underpins Gurnani’s observation [S96].

Additional Context (low confidence)

“The proliferation of AI standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI‑driven robotic system fails.”

Discussion of AI standards highlights challenges such as lack of standardisation and unclear accountability, providing additional nuance to the report’s claim about standards and responsibility [S104].

External Sources (105)
S1
Global Enterprises Show How to Scale Responsible AI — Mr. Sundar R Nagalingam – Mr. Syed Ahmed – Mr. Sunil Abraham – Mr. Syed Ahmed – Ms. Geeta Gurnani – Mr. Syed Ahmed
S2
Global Enterprises Show How to Scale Responsible AI — Speakers:Mr. Sundar R Nagalingam, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham…
S3
Global Enterprises Show How to Scale Responsible AI — -Ms. Geeta Gurnani- Field CTO, Technical Pre-sales and Client Engineering at IBM
S4
Global Enterprises Show How to Scale Responsible AI — Mr. Sunil Abraham – Mr. Sundar R Nagalingam – Ms. Geeta Gurnani – Mr. Sunil Abraham – Mr. Syed Ahmed – Mr. Sundar R Nagal…
S5
Global Enterprises Show How to Scale Responsible AI — Speakers:Mr. Sundar R Nagalingam, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham, Mr. Sundar R Nagalingam, Ms. Geeta Gurnani…
S6
Global Enterprises Show How to Scale Responsible AI — -Mr. Sunil Abraham- Public Policy Director at Meta
S7
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — Absolutely. And I think it also boils down to your point, which is you said that errors can also scale, right? So if I …
S8
29, filed Jan. 22, 2010, at 9-10. — spectrum has been to formulate policy on a band-by-band, service-by-service basis, typically in response to specific req…
S9
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Subramaniam emphasizes that while discussions often focus on protecting data and AI models, the more fundamental concern…
S10
Panel Discussion Inclusion Innovation & the Future of AI — No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful s…
S11
CourseLog – Diplo’s training at AGDA — AI era does not favour critical and lateral thinking as technology mimics existing patterns. But it is not only about AI…
S12
Driving Indias AI Future Growth Innovation and Impact — Yeah, so thank you. Thank you for the question, and thank you for the invitation to join this terrific panel. I think th…
S13
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.” [27] “AI may shape the balance of power, but it is the governance or AI t…
S14
Global challenges for the governance of the digital world — Points out that technology evolves rapidly and governance must keep pace
S15
MahaAI Building Safe Secure & Smart Governance — Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics. The que…
S16
Internet Governance Forum 2024 — The conversations highlighted the challenge of developing governance models that can keep pace with rapid technological …
S17
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — One of the main concerns is how technology, particularly artificial intelligence (AI), can infringe upon human dignity. …
S18
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Cultural and ethnic sensitivities in conjunction with black box technology are also a concern. It is unpredictable wheth…
S19
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking …
S20
Creatives warn that AI is reshaping their jobs — AI is accelerating across creative fields, raising concerns among workers who say the technology is reshaping livelihoods …
S21
National Disaster Management Authority — This panel discussion focused on integrating artificial intelligence into disaster risk reduction (DRR) systems to build…
S22
Building Population-Scale Digital Public Infrastructure for AI — The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a…
S23
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: Thank you. Yeah, but not working very well, okay, it’s back It’s not back Okay, can I have another m…
S24
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S25
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S26
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-inclusion-innovation-the-future-of-ai — That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control…
S27
Main Session on Sustainability & Environment | IGF 2023 — Maike Lukien:So policymakers, same as us, can never have too much information to base evidence-based decisions on. The o…
S28
Centering People and Planet in the WSIS+20 and beyond — Addressing the governance gap between rapid technological development and slower policy/regulatory responses
S29
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S30
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This quote from the UN Secretary General, shared by Beridze, captures a fundamental challenge in AI governance – the gap…
S31
Laying the foundations for AI governance — ### Persistent Disagreements This discussion revealed both the substantial challenges in translating AI governance prin…
S32
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S33
UNSC meeting: Artificial intelligence, peace and security — Governments frequently lag behind in regulating them for the benefit of the general public
S34
What does a former coffee-maker-turned-AI say about AI policy on the verge of the 2020s? — However, ascribing features of agency opens a whole new can of worms when we step out of purely human traits. Here we co…
S35
Global Enterprises Show How to Scale Responsible AI — “And at the very core of Gen AI is a single file on the file system, the weight file.” [55] “A single file, which is a w…
S36
Agentic AI in Focus Opportunities Risks and Governance — Mulvaney argues that policy has always been about preventing harm to humans, and this principle should guide AI policy a…
S37
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Brunner summarizes Trump’s AI approach as: American AI is number one and must remain the leader, compete with China, the…
S38
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S39
Building the Next Wave of AI_ Responsible Frameworks & Standards — Addressing practical deployment challenges, Bhattacharya argued that while complete on-premise deployment might seem mor…
S40
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Explanation:This disagreement is unexpected because both speakers work for technology companies and might be expected to…
S41
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Summary:Lee advocates for developing scientific foundations and evaluation techniques first before regulation, while Ami…
S42
Why science metters in global AI governance — Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Than…
S43
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. InWorkshop 344 on WSIS+20 Technical Layer, …
S44
Why science metters in global AI governance — And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countr…
S45
Open Forum #30 High Level Review of AI Governance Including the Discussion — Juha Heikkila: Thank you Yoichi and thank you very much for this invitation. So I think it’s very useful to understand t…
S46
Open Forum #26 High-level review of AI governance from Inter-governmental P — Andy Beaudoin: in this room, and maybe not within the IGF itself. Of course, AI is not just a very promising technolog…
S47
Building Trustworthy AI Foundations and Practical Pathways — Consensus level:High level of consensus with complementary expertise – Thakkar provides the broad technological and econ…
S48
Experts propose frameworks for trustworthy AI systems — A coalition of researchers and experts hasidentifiedfuture research directions aimed at enhancing AI safety, robustness …
S49
Global Enterprises Show How to Scale Responsible AI — A significant theme emerged around the unique challenges AI systems face when scaling to serve billions of users. Nagali…
S50
Security frameworks lag behind rising AI threats — A series of high-profile incidents has highlighted how AI systems are exposing organisations tonew security risksnot cov…
S51
Building Sovereign and Responsible AI Beyond Proof of Concepts — Governance failures encompass the absence of comprehensive risk management frameworks. Organisations often lack clear pr…
S52
Military AI: Operational dangers and the regulatory void — Equally concerning is the regulatory gap enabling these technologies to proliferate. Humans are present at every stage f…
S53
AI Meets Cybersecurity Trust Governance & Global Security — And it was developing norms and clarifying expectations that over time it did not eliminate risk, but it did reduce unpr…
S54
Toward Collective Action_ Roundtable on Safe & Trusted AI — An audience member suggests that requiring mandatory watermarks on AI-generated media (videos, songs, pictures) could he…
S55
Review of AI and digital developments in 2024 — For example, “Tree-Ring” watermarking is built into the process of generating AI images using diffusion models, which st…
S56
Comprehensive Report: European Approaches to AI Regulation and Governance — Both speakers emphasize the critical importance of transparency in AI systems, though from different angles. The EU focu…
S57
Main Topic 3 – Identification of AI generated content — Paulius Pakutinskas: OK. OK, so I’m Paulius Pakutinskas. I’m a professor in law. So, I work with UNESCO. I’m UNESCO Chair …
S58
Global Enterprises Show How to Scale Responsible AI — So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in th…
S59
Building Population-Scale Digital Public Infrastructure for AI — The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a…
S60
Responsible AI in India Leadership Ethics & Global Impact — “I’m sure every organization today has a legal team, has a compliance team” [59]. “Legal teams have to re‑opt to talk abo…
S61
Global Enterprises Show How to Scale Responsible AI — So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in th…
S62
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Than…
S63
Scaling AI for Billions_ Building Digital Public Infrastructure — And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that ther…
S64
US CTA unveils new trustworthiness standard for healthcare AI — The US Consumer Technology Association (CTA) introduced a new standard to evaluate the trustworthiness of healthcare artif…
S65
Panel Discussion Inclusion Innovation & the Future of AI — No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful s…
S66
WS #139 Internet Resilience Securing a Stronger Supply Chain — Government role ranges from awareness and incentives to mandated requirements and enforced regulations
S67
Technology Regulation and AI Governance Panel Discussion — It was quite good and quite competitive, and it’s achieved a lot of adoption since then, as have a couple other Chinese …
S68
Process coordination: GDC, WSIS+20, IGF, and beyond — Proponents highlight that the multistakeholder approach encourages diversity in thought, leading to innovative solutions…
S69
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — How to balance innovation with regulation across different jurisdictions while maintaining global competitiveness is ong…
S70
Main Session on Sustainability & Environment | IGF 2023 — The analysis also underscores the importance of policymakers having up-to-date information for evidence-based decisions….
S71
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S72
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S73
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S74
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S75
Building Future Leaders – Competency Driven Succession Planning — The tone of the discussion was thoughtful and collegial, with panelists building on each other’s points. There was gener…
S76
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S77
Defending Truth — The Commission faces the challenge of navigating a future where private companies struggle to generate revenues and prof…
S78
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Data leakage is mentioned as a common occurrence that often happens without the organization’s awareness, and it qualifi…
S79
Ready for Goodbyes? : Critical System Obsolescence — In conclusion, the analysis provides a comprehensive overview of cybersecurity in relation to industrial control systems…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S81
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S82
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S83
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S84
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S85
Afternoon session — Moderate consensus with significant polarization. While there was broad agreement on core digital governance principles …
S86
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — From the analysis of these arguments, it can be inferred that while third-party tools offer convenience and efficiency i…
S87
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S88
Closing remarks — Minimal to no disagreement present. This transcript represents a closing ceremony where speakers (Doreen Bogdan Martin, …
S89
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S90
https://app.faicon.ai/ai-impact-summit-2026/smart-regulation-rightsizing-governance-for-the-ai-revolution — Thank you. I mean, you’ve done a brilliant job of putting all the free problems we’ve got and then saying you’ve got a l…
S91
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — During the forum, the individual made multiple requests to leave, expressing gratitude several times by saying “thank yo…
S92
https://app.faicon.ai/ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — of responsible AI office in Infosys. And absolute privilege to announce my co-panelist, Geeta Gurnani, field CTO, techn…
S93
AI Meets Cybersecurity Trust Governance & Global Security — Impact:This comment created a significant shift in the discussion, moving away from purely regulatory solutions toward e…
S94
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
S95
How AI Drives Innovation and Economic Growth — Akcigit presented empirical evidence of troubling trends: market concentration in the United States has been increasing …
S96
Secure Talk Using AI to Protect Global Communications & Privacy — It’s unexpected that a fintech CEO would support making transactions more difficult, as this goes against the industry’s…
S97
WS #184 AI in Warfare – Role of AI in upholding International Law — Yasmin Afina: Yeah, perfect. Hi, thank you, everyone. It’s nice to meet you. My name is Yasmin Afina from the United Na…
S98
From principles to practice: Governing advanced AI in action — – Lack of consensus on what constitutes “intolerable risks” and appropriate risk thresholds globally Brian Tse: I think…
S99
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — Ebert calls for creating transparent governance rules that can keep pace with rapid AI development while ensuring benefi…
S100
AI governance debated at IGF 2025: Global cooperation meets local needs — At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S101
Scaling AI for Billions_ Building Digital Public Infrastructure — So I think Lakshmi touched on a very important point of the underlying, the fragility of the underlying infrastructure. …
S102
How can we deal with AI risks? — There are three types of risks:
S103
Building Trustworthy AI Foundations and Practical Pathways — Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interes…
S104
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safet…
S105
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Implementation and enforcement challenges Painter draws from his experience with cyber norms to highlight the challenge…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ms. Geeta Gurnani
9 arguments · 172 words per minute · 1995 words · 693 seconds
Argument 1
Industry surprise at ad‑hoc governance (Excel‑sheet approach)
EXPLANATION
Geeta highlighted that many senior leaders still manage AI governance using simple tools like Excel spreadsheets, which she finds inadequate for scaling responsible AI. This ad‑hoc approach reflects a surprising lack of mature governance processes despite years of experience in the field.
EVIDENCE
She recounted a conversation with a senior leader who, when asked to work on responsible AI, responded that governance was handled on an Excel sheet, noting that such a method prevents the organization from scaling AI responsibly [23-27].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Shift‑left security mindset now central to AI projects
EXPLANATION
Geeta observed that security, once an afterthought, has become a primary consideration that is addressed early in AI project lifecycles. This shift‑left approach mirrors trends in software security and is now a prerequisite for AI deployments.
EVIDENCE
She explained that security has moved from being an afterthought to a “shift-left” priority, with organizations now thinking about security first before anything else in AI projects [17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift-left approach aligns with the move toward human-centred security that prioritises protecting users early in the AI lifecycle [S9] and with observations that scaling errors require early security controls [S7].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
AGREED WITH
Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Argument 3
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
EXPLANATION
Geeta defined trustworthy AI as the ability of end‑users to rely on AI outputs, which requires that models pass security tests, are monitored for hallucinations, and meet applicable compliance requirements. These three pillars ensure that AI behaves predictably and safely for consumers.
EVIDENCE
She stated that a trustworthy AI system must have passed security testing, have controls to monitor hallucinations, and be compliant with relevant laws before an end-user can confidently use it [55-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gurnani’s definition of trustworthy AI as built on security assurance, output monitoring to prevent hallucinations, and compliance is echoed in the panel discussion where she stresses these three enablers [S2].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
AGREED WITH
Mr. Sundar R Nagalingam
Argument 4
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
EXPLANATION
Geeta emphasized that AI governance cannot be optional; it requires explicit commitment from senior leadership and must be embedded as a control point within the organization’s overall risk management framework. This integration ensures that AI risks are treated on par with other enterprise risks.
EVIDENCE
She described the need for senior leadership commitment, turning governance into a gate-keeping control rather than an observation, and integrating AI risk into the enterprise risk posture for consistent decision-making [129-134].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She stresses that governance must move beyond monitoring to active, leadership-driven control mechanisms, a point reinforced by the discussion on the need for strong senior commitment and automated tooling [S2].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
AGREED WITH
Mr. Sundar R Nagalingam
Argument 5
Establish technology‑level baseline standards first; geographies can then tailor
EXPLANATION
Geeta argued that the first step toward global AI regulation should be agreement on core technology standards, after which individual countries can adapt those standards to their specific regulatory contexts. This approach separates technical baselines from jurisdiction‑specific rules.
EVIDENCE
She stated that technology regulation should be discussed first as “table stakes” before geographic regulations are applied, suggesting a universal technical baseline [290-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gurnani advocates a technology-first regulatory approach, arguing that technologists should first agree on technical “table stakes” before jurisdictions add their rules [S2].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
DISAGREED WITH
Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Argument 6
Model development is outpacing governance frameworks; governance must catch up quickly
EXPLANATION
Geeta affirmed that AI model innovation is moving faster than the creation of governance structures, creating a gap that needs to be closed promptly. She sees this as a pressing challenge for the industry.
EVIDENCE
She responded with a concise “Absolutely” when asked whether AI models are outpacing governance, indicating her agreement with the premise [301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources note the rapid pace of AI innovation versus slower governance development, highlighting the need for agile, adaptive regulatory models [S14] and the challenge of keeping governance in step with technology [S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Argument 7
Enterprises will pay a premium for trustworthy AI when the use case is consumer‑facing or carries significant downstream risk
EXPLANATION
Geeta explained that organizations are willing to invest in higher‑cost, trust‑grade AI solutions when the AI directly impacts customers or brand reputation, whereas internal or low‑risk use cases may not justify the premium. The decision hinges on the perceived downstream risk and ROI.
EVIDENCE
She noted that enterprises are prepared to pay for premium trustworthy AI when the use case is consumer-facing and involves reputation, compliance, or brand risk, but may forgo the premium for internal experiments or POCs [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion links willingness to invest in trust-grade AI to perceived downstream risk and ROI considerations for consumer-facing applications [S1].
MAJOR DISCUSSION POINT
Market willingness to pay for trust‑grade AI
Argument 8
Observed project halts or delays when compliance or ethical boards raise issues
EXPLANATION
Geeta mentioned that while she has not personally stopped projects, she has witnessed projects being halted or delayed due to compliance or ethical board interventions, illustrating the practical impact of governance mechanisms.
EVIDENCE
She clarified that she is not on IBM’s ethics board but has seen projects stopped when compliance concerns arise [313].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 9
Personal view that creative industries may need demarcation, but future tech might make it unnecessary
EXPLANATION
Drawing from her son’s perspective as a creative director, Geeta expressed that while watermarking or demarcation may be needed now for creative works, advances in technology could eventually render explicit watermarks unnecessary.
EVIDENCE
She shared her son’s opinion that creative industries require clear demarcation of AI-generated content, yet suggested that future tools might eliminate the need for watermarks [338].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns from the creative sector about AI-generated content and calls for clear demarcation are documented, while broader human-rights perspectives also stress the need for watermarking or other markers [S20][S17].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Mr. Sundar R Nagalingam
8 arguments · 183 words per minute · 1756 words · 573 seconds
Argument 1
System‑level controls, not infrastructure, are the first failure points at scale
EXPLANATION
Sundar argued that when AI systems scale to billions of users, the breakdown typically occurs in the control systems that manage the infrastructure rather than the hardware itself. These systemic controls become the weak link under massive load.
EVIDENCE
He explained that the systems driving the infrastructure break first, not the infrastructure itself, highlighting control-layer failures as the primary risk at scale [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nagalingam explains that at massive scale the infrastructure itself remains intact, but the systems managing it-control layers-are the weak link, supporting this view [S2].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Control and security vulnerabilities break before hardware infrastructure
EXPLANATION
Sundar emphasized that security and control vulnerabilities are likely to surface before any hardware failures when AI services are delivered to a massive user base. Overlooked vulnerabilities can cause catastrophic failures even if the hardware remains functional.
EVIDENCE
He noted that a small, overlooked vulnerability in the control mechanisms could constitute a huge failure, indicating that security controls break before the underlying hardware [34-35].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
AGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 3
Functional failures in AI service delivery (micro‑services, safety checks) are critical
EXPLANATION
Sundar pointed out that failures can also arise from how AI services are orchestrated, such as micro‑service breakdowns or missing safety checks, which affect the functionality delivered to end‑users.
EVIDENCE
He described possible failure modes including inefficient micro-service delivery and the lack of safety or control checks, which could cause functional breakdowns even when infrastructure appears healthy [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights potential breakdowns in micro-service orchestration and missing safety checks as key functional failure modes when AI services scale [S2].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
Argument 4
Three core buckets: functional safety, AI safety, cybersecurity
EXPLANATION
Sundar proposed a high‑level framework that groups trustworthy AI requirements into three categories: functional safety (the AI does what it is supposed to), AI safety (robustness, bias mitigation), and cybersecurity (protection against attacks). This structure can be applied across regulators and industries.
EVIDENCE
He outlined the three buckets-functional safety, AI safety, and cybersecurity-using the example of AI-assisted robotic surgery to illustrate each component [68-75].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
AGREED WITH
Ms. Geeta Gurnani
Argument 5
Privacy and safety guardrails should be baked into silicon for high‑risk domains (autonomous driving, healthcare)
EXPLANATION
Sundar argued that for safety‑critical applications such as autonomous vehicles and healthcare, privacy and safety mechanisms need to be embedded at the hardware level to ensure robust protection before software layers are applied.
EVIDENCE
He affirmed that high-performance AI infrastructure should include embedded privacy guardrails at the silicon level for domains like autonomous driving and healthcare, stating “absolutely yes” to the suggestion [148-158].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
Argument 6
Standardize core safety, then fine‑tune per country/regulation
EXPLANATION
Sundar suggested a two‑step approach: first create a standardized safety baseline for AI platforms, then adapt or fine‑tune that baseline to meet the specific regulatory requirements of each geography.
EVIDENCE
He described a process where a safe platform becomes a template that can be tweaked for each country’s needs, emphasizing standardization followed by localized adjustments [240-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nagalingam proposes a two-step approach: first create a standardized safety baseline, then adapt it to local regulatory requirements, mirroring the technology-first stance discussed in the panel [S2].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
AGREED WITH
Ms. Geeta Gurnani
DISAGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 7
Natural progression: technology leads, governance follows
EXPLANATION
Sundar noted that it is natural for technological advances to outpace governance, with governance catching up after the technology has matured. This reflects the typical evolution of emerging tech ecosystems.
EVIDENCE
He stated that model development naturally leads and governance follows, describing it as “the natural way things happen” [302-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel repeatedly notes that governance traditionally lags behind rapid AI advances, underscoring the natural order of technology outpacing policy [S14][S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 8
Supports watermarking to clearly demarcate AI‑created media
EXPLANATION
Sundar expressed support for watermarking AI‑generated content, arguing that it helps distinguish machine‑produced media from human‑created content, though he cautioned about potential blurring of lines.
EVIDENCE
He said “absolutely” to the idea of watermarking and discussed the need for demarcation while acknowledging the blurry line between AI and human content [333-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for demarcation of AI-generated content is highlighted both from a human-rights perspective and by concerns within the creative industry, reinforcing support for watermarking [S17][S20].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
DISAGREED WITH
Mr. Sunil Abraham, Ms. Geeta Gurnani
Mr. Sunil Abraham
8 arguments · 167 words per minute · 2384 words · 851 seconds
Argument 1
Skepticism toward anthropomorphizing AI; focus on ontology and epistemology of models
EXPLANATION
Sunil expressed strong skepticism about treating AI systems as if they possess human qualities, insisting that they are merely technological artifacts. He emphasized the need to consider the ontological nature of AI models and the epistemological questions about truth and responsibility.
EVIDENCE
He repeatedly said “I don’t see it” and argued that AI is just technology, then discussed ontology (the nature of the weight file) and epistemology (the nature of truth about the file) [36-42].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
AI as a dual‑use “weight file” requires ontological and epistemological caution
EXPLANATION
Sunil highlighted that a generative AI model is essentially a single weight file, which can be used for both beneficial and harmful purposes. This dual‑use nature demands careful philosophical and ethical scrutiny regarding its deployment.
EVIDENCE
He described the core of generative AI as a single weight file, noting its dual-use potential and the challenges of assigning responsibility and truth to it [44-49].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
Argument 3
Hardware attack surface and trusted execution environments are active research areas
EXPLANATION
Sunil referenced Meta’s research on Trusted Execution Environments (TEEs) and the broader landscape of hardware‑level attacks, indicating that securing the hardware stack is a critical and ongoing area of investigation.
EVIDENCE
He mentioned Meta’s paper on trusted execution environments, the hardware attack surface, and a series of possible attacks such as supply-chain and pager attacks, underscoring active research in this domain [166-176].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
AGREED WITH
Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Argument 4
Advertising can subsidize free AI access and help bridge the digital divide without violating neutrality
EXPLANATION
Sunil argued that ad‑supported AI services can provide free access to a broad population, thereby narrowing the digital divide, and that this model does not necessarily breach AI neutrality principles.
EVIDENCE
He explained that ads enable free AI usage for both affluent and low-income users, helping move from 25% to 90% AI adoption, and positioned ads as a technical solution to bridge the divide [182-202].
MAJOR DISCUSSION POINT
Monetization via ads and AI neutrality
Argument 5
AI is already subject to regulation; there is no regulatory vacuum
EXPLANATION
Sunil asserted that AI is already regulated in many jurisdictions, citing statements from policymakers to counter the notion of a regulatory gap. He emphasized that existing laws already apply to AI activities.
EVIDENCE
He quoted Lina Khan, stating that “there is no regulatory vacuum for AI,” thereby rejecting the idea that AI lacks regulation [295-296].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
Argument 6
Governance has never preceded AI advances
EXPLANATION
Sunil noted that historically, technological breakthroughs have always come before the establishment of governance frameworks, implying that AI governance will continue to follow technological progress.
EVIDENCE
He succinctly said “It’s never happened in the reverse order,” confirming that governance has always lagged behind AI innovation [305].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion points out that historically governance frameworks have always followed technological breakthroughs, confirming this observation [S14][S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Argument 7
Meta disabled facial‑recognition features over safety concerns
EXPLANATION
Sunil provided a concrete example of a major tech company taking action for safety by turning off facial‑recognition capabilities on its platform, illustrating how safety concerns can lead to feature removal.
EVIDENCE
He stated plainly that “facial recognition was turned off on Facebook,” indicating a safety-driven decision [315].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 8
Expresses hesitation and frames the issue as a question rather than a direct stance
EXPLANATION
When asked about mandatory watermarking, Sunil responded with a question instead of a clear yes or no, reflecting uncertainty or reluctance to take a definitive position on the policy.
EVIDENCE
He answered the watermarking question by replying with a question, saying “I’m answering with a question” and did not provide a direct yes/no response [327-330].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Mr. Syed Ahmed
10 arguments · 144 words per minute · 2370 words · 985 seconds
Argument 1
Moderator observation that trust must be built before AI can scale
EXPLANATION
Syed emphasized that while AI’s capabilities are evident, large‑scale adoption will only happen once robust trust mechanisms are in place. He framed trust as a prerequisite for scaling AI responsibly.
EVIDENCE
He summarized the panel’s point that “true scales can come only when you start trusting AI,” highlighting the need to build trust before scaling [29-33].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Moderator agreement that these are key concerns
EXPLANATION
Syed echoed Sundar’s points about control and security being primary failure points, confirming that the panel collectively sees these issues as critical when AI scales.
EVIDENCE
He responded with “Excellent, I totally agree with you” after Sundar’s description of failure points, indicating agreement [32-33].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
Argument 3
Moderator framing of the question
EXPLANATION
Syed introduced the central panel question asking each participant to define trustworthy AI and its non‑negotiables, setting the stage for the subsequent discussion.
EVIDENCE
He asked, “what does it mean by trustworthy AI in your own sense and what are the key non-negotiables,” directing the conversation to that theme [50-52].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
Argument 4
Moderator prompting on implementation
EXPLANATION
Syed queried how IBM ensures that responsible‑AI tools move beyond monitoring and become enforced at runtime, pushing the panel to discuss practical deployment of governance controls.
EVIDENCE
He asked, “how do you ensure that these tools don’t remain at just a monitoring layer and get enforced on the ground at the runtime?” [108-113].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
Argument 5
Moderator query on the impact of ads
EXPLANATION
Syed raised the question of whether embedding advertisements in consumer AI platforms like ChatGPT would subsidize services or undermine AI neutrality, seeking the panel’s view on this monetization model.
EVIDENCE
He asked, “will it help consumers subsidize their subscription or will it kind of violate the doctrine of free AI principles?” [181-184].
MAJOR DISCUSSION POINT
Monetization via ads and AI neutrality
Argument 6
Moderator seeking market insight
EXPLANATION
Syed asked whether enterprises are willing to pay a premium for trust‑grade AI, probing the commercial viability of responsible‑AI offerings.
EVIDENCE
He inquired, “are you seeing that influencing the buying decisions… would anyone invest in responsible AI?” [205-214].
MAJOR DISCUSSION POINT
Market willingness to pay for trust‑grade AI
Argument 7
Moderator asks about global alignment
EXPLANATION
Syed posed a rapid‑fire yes/no question about whether there should be global alignment on AI regulations, prompting the panel to consider the feasibility of worldwide standards.
EVIDENCE
He asked, “Regulations. Yes or no?” during the rapid-fire segment [279-280].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
Argument 8
Moderator highlights the speed gap
EXPLANATION
Syed highlighted the concern that AI model innovation is outpacing governance, framing it as a critical challenge for the panel to address.
EVIDENCE
He asked, “Are the models and the innovation outpacing governance?” and noted the speed gap [298-300].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
Argument 9
Moderator probes for examples
EXPLANATION
Syed requested concrete instances where projects were halted due to safety or compliance concerns, seeking real‑world evidence of governance impact.
EVIDENCE
He asked, “have you stopped any projects due to safety concerns?” prompting examples from the panelists [312].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 10
Moderator attempts to elicit a yes/no answer
EXPLANATION
In the rapid‑fire segment, Syed pressed Sunil for a definitive yes/no response on mandatory watermarking, illustrating his effort to obtain concise positions from the panel.
EVIDENCE
He asked, “Should we have mandatory watermarking…?” and pressed with “Are you saying yes or no?” after Sunil’s evasive reply [323-329].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Agreements
Agreement Points
Security and control vulnerabilities are primary failure points when AI systems scale
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Shift‑left security mindset now central to AI projects
Control and security vulnerabilities break before hardware infrastructure
Hardware attack surface and trusted execution environments are active research areas
All three panelists stress that security must be addressed early and that security or control failures are likely to break AI services before any hardware failure, making security a critical layer for trustworthy AI at scale [17][34-35][166-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses show that scaling failures often stem from security and control issues, with system-driven infrastructure breaking first as AI scales [S49] and recent AI-related security incidents outpacing existing frameworks [S50].
Trustworthy AI requires multiple non‑negotiable layers (security, functional/AI safety, compliance/cybersecurity)
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
Three core buckets: functional safety, AI safety, cybersecurity
Both speakers define trustworthy AI as a set of layered guarantees: Geeta emphasizes security testing, hallucination monitoring and legal compliance for end-users, while Sundar groups requirements into functional safety, AI safety and cybersecurity, showing a shared three-layer view of trustworthiness [55-64][68-75].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for layered safeguards covering security, functional safety and compliance is echoed in trust-centric AI literature and emerging frameworks that prescribe separate safety, robustness and governance layers [S38][S48][S53].
AI governance should be embedded as a systematic, organization‑wide control mechanism, standardized then adapted per jurisdiction
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
Standardize core safety, then fine‑tune per country/regulation
Geeta calls for senior-leadership-driven, gate-keeping governance integrated into enterprise risk, while Sundar proposes a baseline safety standard that can be fine-tuned for each geography, indicating consensus on a structured, standardized governance approach [129-134][240-249].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the EU AI Act’s risk-based, organization-level controls that must be harmonized across member states while allowing local adaptation [S45] and with identified gaps in corporate risk-management processes [S51].
AI model innovation is outpacing governance frameworks
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
All three agree that AI advances faster than the creation of governance structures, creating a gap that must be closed rapidly [301][302-304][305].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Secretary-General highlighted the gap between rapid AI advances and slower policy understanding, a pattern repeatedly observed as governments lag behind technological developments [S30][S33][S42].
Similar Viewpoints
Both see AI governance as needing a top‑down, standardized foundation that is then customized for specific regulatory contexts, rather than ad‑hoc or siloed processes [129-134][240-249].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
Standardize core safety, then fine‑tune per country/regulation
All three highlight that security considerations must be baked in early (shift‑left) and that vulnerabilities in control layers are the most likely failure points, underscoring security as a foundational element of trustworthy AI [17][34-35][166-176].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Shift‑left security mindset now central to AI projects
Control and security vulnerabilities break before hardware infrastructure
Hardware attack surface and trusted execution environments are active research areas
Each acknowledges the historical pattern where AI capabilities outstrip governance, indicating a shared concern about the speed gap between innovation and regulation [301][302-304][305].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
Unexpected Consensus
All three speakers independently propose a three‑layer or three‑bucket model for trustworthy AI
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
Three core buckets: functional safety, AI safety, cybersecurity
Hardware attack surface and trusted execution environments are active research areas
While Geeta frames trust in terms of security, hallucination control and compliance, Sundar groups requirements into functional safety, AI safety and cybersecurity, and Sunil emphasizes hardware-level protections (TEEs). The convergence on a multi-layered trust architecture was not explicitly coordinated, yet all three arrived at a similar structural view of trust, which is an unexpected alignment [55-64][68-75][166-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder workshops report a converging view that trustworthy AI can be organized into three core buckets (e.g., safety, security, compliance) [S47][S48].
Consensus that governance always lags behind AI advances, despite differing professional backgrounds
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
Even though Geeta, Sundar and Sunil represent different organizations (IBM, NVIDIA, Meta), they all affirm the same historical pattern, which is notable given their varied perspectives on AI policy and product development [301][302-304][305].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources note the systemic lag of regulation relative to AI progress, from UN statements to industry-government roundtables [S30][S31][S33][S42].
Overall Assessment

The panel shows strong convergence on three core themes: (1) security must be addressed early and is the most likely failure point at scale; (2) trustworthy AI is best expressed as a multi‑layered framework covering functional safety, AI safety, cybersecurity, and compliance; (3) AI innovation outpaces governance, creating a pressing need for standardized, leadership‑driven governance that can be adapted per jurisdiction.

High consensus across speakers on the importance of security, layered trust mechanisms, and the speed gap between AI development and governance. This consensus suggests that industry leaders recognize common challenges and are likely to collaborate on standards, leadership mandates, and rapid governance mechanisms to enable responsible AI deployment.

Differences
Different Viewpoints
Scope and sequencing of global AI regulation versus technology‑first baseline
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Establish technology‑level baseline standards first; geographies can then tailor
AI is already regulated; there is no regulatory vacuum
Standardize core safety, then fine‑tune per country/regulation
Geeta argues that the first step should be a universal technical baseline before any geographic regulation is applied [290-294]. Sunil counters that AI is already covered by existing laws and there is no regulatory gap to fill [295-296]. Sundar proposes a two-step approach: create a safe, standardized platform and then adapt it to each country’s rules [240-249]. The three positions differ on whether a new global alignment effort is needed, on its timing, and on the extent of existing regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over whether regulation should precede or follow deployment is reflected in discussions on risk-based regulatory sequencing versus technology-first approaches [S41][S43][S45][S44].
Mandatory watermarking of AI‑generated content
Speakers: Mr. Sundar R Nagalingam, Mr. Sunil Abraham, Ms. Geeta Gurnani
Supports watermarking to clearly demarcate AI‑created media
Evasive response, does not give a clear yes/no answer
Future technology may make explicit watermarking unnecessary
Sundar explicitly backs mandatory watermarking, saying it helps distinguish AI content from human-generated material [333-335]. Sunil avoids a direct stance, replying with a question and offering no yes/no answer [327-330]. Geeta adds that while demarcation is currently needed, advances may eventually render watermarks obsolete [338]. The panel therefore shows clear disagreement on the policy prescription.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy proposals call for compulsory watermarks to combat disinformation [S54]; technical methods such as Tree-Ring watermarking have been demonstrated [S55]; the EU AI Act also mandates labeling of synthetic media [S56][S57].
Unexpected Differences
Existence of a regulatory vacuum for AI
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham
Calls for technology-first baseline before geographic regulation. States that AI is already regulated and there is no vacuum.
Given the panel’s composition of senior technologists from major AI firms, it is surprising that Sunil asserts a fully covered regulatory landscape, directly contradicting Geeta’s call for coordinated baseline standards and further alignment [295-296] vs. [290-294]. This unexpected clash reveals differing perceptions of regulatory sufficiency.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses describe a “regulatory void” for AI, especially in military applications, and note the broader lack of comprehensive legal frameworks [S52][S44][S33].
Attitude toward anthropomorphizing AI
Speakers: Mr. Sunil Abraham, Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Skepticism toward anthropomorphizing AI; focus on ontology and epistemology. Treats AI trust as a practical, user-facing engineering problem. Frames AI failures in terms of system controls and safety buckets.
Sunil repeatedly rejects any human-like framing of AI, emphasizing its status as a weight file and philosophical concerns [36-42]. In contrast, Geeta and Sundar discuss trust, safety, and governance in concrete, operational terms without invoking ontology, indicating an unexpected philosophical divergence within the same technical discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholars warn that attributing agency to AI creates conceptual challenges and recommend avoiding human-like mental models [S34][S35]; policy discussions emphasize human-centric safeguards over AI personhood [S36].
Overall Assessment

The panel shows moderate but substantive disagreement. Core points of contention revolve around the need for a unified global regulatory framework versus a technology‑first baseline, and the policy instrument of mandatory watermarking. While all participants concur on the importance of trustworthy AI, they propose divergent routes—leadership‑driven governance, system‑level standardization, or philosophical reframing. These differences suggest that consensus on implementation will require bridging gaps between policy‑oriented, technical, and philosophical perspectives.

Medium – the disagreements are focused on strategic approaches rather than outright denial of the problem, implying that coordinated multi‑stakeholder work will be needed to align on standards, regulation, and content‑labeling policies.

Partial Agreements
All three panelists agree that trustworthy AI is a prerequisite for scaling AI systems, but they diverge on how to achieve it. Geeta focuses on end‑user confidence through security testing, hallucination monitoring, and compliance [55-64]. Sundar structures the problem into functional safety, AI safety, and cybersecurity layers [68-75]. Sunil stresses the philosophical nature of the artefact, urging attention to ontology and epistemology rather than treating AI as a human‑like entity [36-42]. Thus, while the goal of trustworthy AI is shared, the pathways—operational controls, safety buckets, or philosophical framing—are contested.
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Trustworthy AI is essential for large-scale adoption. Three core buckets (functional safety, AI safety, cybersecurity) define trustworthy AI. AI is a dual-use weight file that requires ontological and epistemological caution.
Geeta and Sundar both agree that governance cannot be optional and must be embedded in organisational processes. Geeta stresses top-down commitment, gate-keeping, and integration with enterprise risk management [129-134]. Sundar highlights that the breakdowns at scale occur in the control systems that manage infrastructure, implying that robust, standardized controls are essential [34-35]. They share the objective of embedding governance, but differ on emphasis: leadership-driven policy versus technical system-level control.
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate-keeping control, integrated into enterprise risk posture. System-level controls and standardization are the first failure points at scale.
Takeaways
Key takeaways
Trust and responsible AI are moving from an afterthought to a core, shift-left priority across enterprises.
Governance failures, not hardware infrastructure, are the first points of breakage when AI scales to billions of users.
Trustworthy AI is defined by end-user confidence, requiring security testing, hallucination monitoring, and compliance with applicable laws.
Three universal safety buckets emerged: functional safety, AI-specific safety, and cybersecurity.
Embedding governance at runtime demands a senior-leadership mandate, gate-keeping controls, and integration into the overall enterprise risk framework.
Hardware-level privacy and safety guardrails (e.g., trusted execution environments) are essential for high-risk domains such as autonomous driving and healthcare.
Advertising can subsidize free AI access and help bridge the digital divide without necessarily violating AI neutrality.
Enterprises are willing to pay a premium for "trust-grade" AI when the use case is consumer-facing or carries significant downstream risk.
Global regulatory alignment should start with technology-level baseline standards, which can then be tailored to individual jurisdictions.
AI model innovation is outpacing governance frameworks; governance must accelerate to keep pace.
Resolutions and action items
Senior leadership in organizations should formally mandate AI governance as a non-optional, gate-keeping function.
AI risk should be folded into the enterprise risk management process rather than treated as a separate silo.
Develop and deploy silicon-level privacy and safety guardrails for high-risk AI applications (e.g., autonomous vehicles, medical devices).
Adopt a conservative, "Unix-style" control model where AI services are blocked unless they pass predefined safety and compliance checks (see the sketch after this list).
Standardize core safety, algorithmic, and ecosystem requirements at the technology level, then fine-tune for each geography's regulations.
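To make the deny-by-default control model concrete, here is a minimal sketch in Python. It assumes a simple deployment-request record; the field names, checks, and approved regions are illustrative inventions, not anything the panel specified. The gate approves a service only when every predefined safety and compliance check passes and, in keeping with the conservative default, treats an erroring check as a failure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DeploymentRequest:
    model_id: str
    use_case: str        # e.g. "internal-experiment" or "consumer-facing"
    region: str          # jurisdiction whose rules apply
    eval_passed: bool    # outcome of an upstream safety evaluation
    scan_clean: bool     # outcome of an upstream security scan

# Placeholder checks; a real gate would call the organization's own
# evaluation, security-scanning, and compliance services.
def safety_eval(req: DeploymentRequest) -> bool:
    return req.eval_passed

def security_scan(req: DeploymentRequest) -> bool:
    return req.scan_clean

def jurisdiction_rules(req: DeploymentRequest) -> bool:
    # Standardize the core check, then fine-tune per geography.
    return req.region in {"EU", "US", "IN"}

CHECKS: list[tuple[str, Callable[[DeploymentRequest], bool]]] = [
    ("safety-eval", safety_eval),
    ("security-scan", security_scan),
    ("jurisdiction-rules", jurisdiction_rules),
]

def gatekeeper(req: DeploymentRequest) -> tuple[bool, str]:
    """Deny by default: the service is blocked unless every check passes."""
    for name, check in CHECKS:
        try:
            if not check(req):
                return False, f"blocked: check '{name}' failed"
        except Exception as exc:  # an erroring check also blocks deployment
            return False, f"blocked: check '{name}' errored ({exc})"
    return True, "approved"

if __name__ == "__main__":
    req = DeploymentRequest("demo-model", "consumer-facing", "EU",
                            eval_passed=True, scan_clean=False)
    print(gatekeeper(req))  # (False, "blocked: check 'security-scan' failed")
```

The design choice worth noting is the failure handling: any ambiguity, such as a check that cannot run, resolves to "blocked", mirroring the Unix-permission intuition that access is denied unless explicitly granted.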
Unresolved issues
How to achieve true global regulatory alignment without creating a one-size-fits-all legal framework.
The long-term impact of embedding advertisements in consumer-facing AI services on user trust and AI neutrality.
Whether mandatory watermarking of AI-generated content should be enforced across all media types.
Specific mechanisms for stopping or delaying AI projects when safety concerns arise beyond internal compliance reviews.
Governance approaches for emerging dual-use scenarios (e.g., synthetic hate-speech corpora for low-resource languages).
Suggested compromises
Adopt a shift-left security mindset while allowing organizations to start with lightweight governance (e.g., Excel-sheet tracking) as an interim step.
Offer tiered trust guarantees: premium, fully-governed AI for consumer-facing or high-risk use cases, and lighter controls for internal experimentation.
Balance open-source freedom with responsibility by retaining licensing freedoms but applying stricter controls when models are deployed on proprietary platforms.
Use conservative (risk-averse) defaults in AI deployment pipelines to enable scaling while minimizing early-stage failures.
Follow-up Questions
How can organizations move beyond using Excel sheets for AI governance to scalable, automated governance frameworks?
Geeta highlighted that senior leaders were managing AI governance with Excel, indicating a need for more robust, scalable tools and processes.
Speaker: Geeta Gurnani
What specific control mechanisms or system designs can prevent functional or security failures when AI services scale to billions of users?
Sundar noted that failures often stem from how AI is served rather than infrastructure, suggesting a need for detailed controls.
Speaker: Sundar R. Nagalingam
How can we establish epistemic trust and provenance for AI model weight files to address ontological and epistemological concerns?
Sunil discussed ontology and epistemology of weight files, indicating research is needed on verifying truth and trustworthiness of model artifacts.
Speaker: Sunil Abraham
What are the ethical and societal implications of embedding advertisements in generative AI services as a means to subsidize access?
Sunil raised the issue of ad-supported AI models, prompting investigation into privacy, bias, and equity impacts.
Speaker: Sunil Abraham
What frameworks or best practices can embed AI risk into existing enterprise risk management (ERM) processes?
Geeta suggested AI risk should be part of overall enterprise risk posture, requiring guidance on integration.
Speaker: Geeta Gurnani
How can industry develop universal AI safety standards that can be efficiently tailored to meet diverse geographic regulatory requirements?
Sundar described the challenge of consistent trust enforcement across geographies, indicating a need for adaptable standardization approaches.
Speaker: Sundar R. Nagalingam
How can the open‑source community balance freedom of use with responsibility for dual‑use or harmful AI applications?
Sunil highlighted tensions between open‑source freedom and liability, calling for policies or mechanisms to manage dual‑use risks.
Speaker: Sunil Abraham
What should constitute a minimal set of technology‑level safeguards that all AI systems must meet globally, regardless of jurisdiction?
Both participants debated the need for baseline technical requirements before regional regulations, suggesting a research agenda for universal safeguards.
Speaker: Geeta Gurnani, Sundar R. Nagalingam
What are the technical feasibility, effectiveness, and societal impact of mandatory watermarking for AI‑generated media and text?
The panel debated mandatory watermarking, indicating a need for studies on detection, compliance, and user perception.
Speaker: Sunil Abraham, Geeta Gurnani
How can scalable moderation frameworks be designed to handle AI‑generated content across the ‘zero‑to‑one’ and ‘one‑to‑one’ interaction models?
Sunil introduced two mental models for content moderation, pointing to a research gap in adaptable moderation strategies.
Speaker: Sunil Abraham
What strategies can be employed to manage dual‑use risks of generative AI, ensuring safety while enabling beneficial applications?
Sunil discussed dual-use concerns, especially around synthetic hate-speech corpora, highlighting a need for risk-mitigation research.
Speaker: Sunil Abraham
How should accountability be assigned when AI systems cause large‑scale failures, given the difficulty of attributing blame to a non‑human entity?
Sundar raised the accountability dilemma for autonomous systems, suggesting a need for legal and governance frameworks.
Speaker: Sundar R. Nagalingam
What privacy guardrails can be embedded at the silicon level of GPUs and other high‑performance AI hardware?
Sundar affirmed the need for built-in privacy protections, prompting investigation into hardware-level privacy solutions.
Speaker: Sundar R. Nagalingam
What are the practical implications and security considerations of Meta’s Trusted Execution Environment approach for edge AI processing?
Sunil referenced Meta’s paper, indicating a need for deeper analysis of hardware attacks and privacy in TEEs.
Speaker: Sunil Abraham
How can AI governance be operationalized at runtime within CI/CD pipelines to shift from observation to enforcement?
Geeta emphasized moving governance from monitoring to control at runtime, requiring tooling and process research.
Speaker: Geeta Gurnani
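As a rough illustration of the enforcement idea raised in this last question, the sketch below (again Python, with metric names and thresholds invented purely for illustration) shows a pipeline step that fails the build when governance metrics breach their limits, rather than merely logging them for later review; a non-zero exit code is what turns observation into enforcement in most CI/CD systems:

```python
import sys

# Hypothetical governance metrics emitted by an earlier evaluation stage;
# the names and limits are illustrative assumptions, not a standard.
METRIC_LIMITS = {
    "hallucination_rate": 0.05,  # max tolerated share of unsupported answers
    "toxicity_rate": 0.01,
    "pii_leak_rate": 0.0,
}

def enforce(metrics: dict[str, float]) -> None:
    """Stop the pipeline on any breach instead of only recording it."""
    breaches = [
        f"{name}={value:.3f} exceeds limit {METRIC_LIMITS[name]}"
        for name, value in metrics.items()
        if name in METRIC_LIMITS and value > METRIC_LIMITS[name]
    ]
    if breaches:
        print("governance gate failed:", "; ".join(breaches))
        sys.exit(1)  # non-zero exit blocks the subsequent deploy stage
    print("governance gate passed")

if __name__ == "__main__":
    enforce({"hallucination_rate": 0.08, "toxicity_rate": 0.004})
```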

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.