Global Enterprises Show How to Scale Responsible AI

20 Feb 2026 13:00h - 14:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel, comprising leaders from Infosys, IBM, NVIDIA and Meta, examined how trustworthy and responsible AI can be scaled across enterprises [1-5]. Geeta noted that security, once an afterthought, has become a “shift-left” priority and that organizations now place AI governance and trust at the forefront of adoption [17-18]. She illustrated this shift with a senior leader who tried to manage Gen-AI governance on an Excel spreadsheet, revealing a lack of confidence and scalability in current practices [22-27]. Sundar argued that when AI serves billions, the most common failures are not infrastructure outages but weaknesses in the services and controls that deliver AI functionality, especially security vulnerabilities [34-36]. He proposed three universal buckets (functional safety, AI safety, and cybersecurity) that should be addressed regardless of regulator or industry [68-75].


Sunil emphasized a philosophical stance that AI outputs are merely technological processes, warning against anthropomorphizing agents and stressing the ontological and epistemological limits of a single weight file [36-44]. He also defended ad-supported AI as a means to broaden access, arguing that advertising can level the AI divide without compromising neutrality [181-193]. When asked whether enterprises would pay a premium for “trust-grade” AI, Geeta replied that customers are willing to invest when downstream risk to brand or compliance is high, but not for internal experiments [221-227]. She stressed that trustworthy AI requires senior leadership commitment, embedding governance as an enforceable control rather than a passive review, and eventually integrating AI risk into overall enterprise risk management [129-133][143-148].


Sundar affirmed that high-performance hardware should embed privacy and safety guardrails at the silicon level, citing autonomous driving and aerospace as domains where such safety layers are mandatory [148-158]. All panelists agreed that AI model advances are outpacing governance frameworks, making rapid standardisation and cross-geography templates essential [301-304]. The discussion concluded that building trust in AI demands coordinated standards, leadership-driven risk integration, and proactive safety engineering across the stack [68-75][129-133][240-249].


Keypoints

Major discussion points


Trust and governance are moving from an after-thought to a front-line priority, but many organisations still rely on ad-hoc methods.


Geeta notes that “security always used to be an afterthought… now people can’t afford not thinking security” and that “people are adopting AI but trust, governance, security is taking a prime stage now” [17-24]. She also recounts a senior leader managing AI governance on an “Excel sheet,” highlighting the immaturity of current practices [23-27].


Panelists offer differing but overlapping definitions of “trustworthy AI” and identify core non-negotiables.


Geeta frames trustworthy AI as “the end-user can trust what I’m using” and lists three pillars: security testing, control of hallucinations, and compliance [55-64]. Sundar abstracts it into three universal buckets (functional safety, AI safety, and cybersecurity), illustrated with AI-assisted robotic surgery [68-75]. Sunil adds a philosophical layer, stressing the ontological and epistemological nature of AI models and warning against anthropomorphisation [42-48].


Scaling AI amplifies failures and raises accountability challenges.


Sundar explains that when AI scales, “the systems that drive the infra… break” either in functional delivery or security controls [34-38]. The panel later stresses that errors “scale” and that “who do I blame” becomes unclear when autonomous systems fail [77-84].


Embedding safety and privacy guardrails at the hardware and runtime levels is seen as essential.


The moderator asks whether GPUs should have built-in privacy guardrails; Sundar answers affirmatively and cites autonomous driving and healthcare as domains where a “very, very safe layer” is mandatory [148-158]. Geeta further argues that governance must move from “observation” to “control” at runtime, requiring tooling, senior-leadership commitment, and integration into enterprise risk management [122-144].


Regulation, open-source freedom, and content-identification (e.g., watermarking) generate tension between responsibility and flexibility.


Sunil discusses the need to preserve open-source freedoms while acknowledging that “when we use any of that on our platform… those freedoms disappear” [257-267]. A rapid-fire poll on global regulatory alignment receives mixed answers, and the debate on mandatory AI-generated content watermarking ends inconclusively, reflecting divergent views on how prescriptive policy should be [279-284][327-335].


Overall purpose / goal of the discussion


The panel, comprising leaders from Infosys, IBM, NVIDIA, and Meta, was convened to explore how the industry can build and scale trust in AI-covering responsible AI practices, governance frameworks, safety engineering, and policy alignment-so that AI can be deployed responsibly across enterprises and consumer platforms.


Overall tone and its evolution


– The conversation opens with a friendly, enthusiastic tone, celebrating the diversity of the panel and inviting open dialogue [9-12].


– It then shifts to a more analytical and cautionary tone, as speakers highlight concrete gaps (e.g., Excel-based governance, failure modes at scale) and raise concerns about accountability [17-27][34-38][77-84].


– Mid-session the tone becomes philosophical and reflective, especially in Sunil’s discussion of ontology, epistemology, and the nature of AI models [42-48].


– Towards the end, the tone turns pragmatic and solution-focused, with concrete proposals for hardware guardrails, runtime enforcement, and enterprise risk integration [122-144][148-158].


– The final segment adopts a rapid-fire, slightly humorous tone, using yes/no polls and light-hearted banter while still surfacing serious disagreements on regulation and watermarking [279-284][327-335].


Overall, the discussion moves from optimism about AI’s potential, through sober recognition of governance gaps, to concrete suggestions for embedding trust, while maintaining a collaborative yet critically inquisitive atmosphere.


Speakers

Mr. Syed Ahmed – Moderator; Responsible AI Office, Infosys [S2]


Ms. Geeta Gurnani – Field CTO, Technical Pre-sales and Client Engineering, IBM [S4]


Mr. Sundar R. Nagalingam – Senior Director, AI Consulting Partners, NVIDIA [S3]


Mr. Sunil Abraham – Public Policy Director, Meta [S1]


Additional speakers:


– None


Full session report: comprehensive analysis and detailed insights

The panel opened with brief introductions of the four speakers – Mr. Syed Ahmed (Infosys), Ms. Geeta Gurnani (IBM), Mr. Sundar R. Nagalingam (NVIDIA) and Mr. Sunil Abraham (Meta) – and set the ambition to explore how “trustworthy and responsible AI can be scaled across enterprises” [1-5][9-12]. The moderator framed the discussion with optimism and a promise of “hard-hitting” questions, signalling a shift from celebratory remarks to deeper technical and policy issues.


Shift-left security and governance


Geeta Gurnani observed that “security always used to be an afterthought and now people can’t afford not thinking security – it has become completely shift-left” [17-18] and added that “people are adopting AI but trust, governance, security is taking a prime stage now” [17-19]. She illustrated the immaturity of many organisations with an anecdote: a senior leader, when asked to manage Gen-AI governance, replied that the process was handled on an “Excel sheet” and feared that responsible AI would “block my innovation” [22-27]. This highlighted the gap between enthusiasm for AI and the lack of mature, scalable governance tooling.


When asked to define “trustworthy AI”, Geeta framed it from the end-user perspective: a user must be able to “trust what I’m using”. She identified three pillars – security-tested models, continuous monitoring to prevent hallucinations, and compliance with the applicable legal regime [55-64]. Her definition reflects the industry shift from principle-talk to enforceable controls.


Three-bucket safety taxonomy


Sundar Nagalingam presented a universal taxonomy consisting of (i) functional safety – the AI must reliably perform its intended function (e.g., AI-assisted robotic surgery), (ii) AI safety – bias mitigation, robustness and extensive testing, and (iii) cybersecurity – protection against malicious intrusion [68-75]. He suggested that high-performance AI infrastructure should embed privacy and safety guardrails at the silicon level, answering “absolutely yes” to the question of whether such guardrails belong in the hardware [148-158]. This mirrors emerging standards work that calls for open standards, interoperability and security-first design.
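
To make the taxonomy concrete, the three buckets can be read as a release checklist in which every bucket must be covered before a use case ships. The sketch below is illustrative only: the bucket names follow the panel, but the individual checks and the gating rule are our assumptions, not an NVIDIA artefact.

```python
from dataclasses import dataclass
from enum import Enum

class SafetyBucket(Enum):
    FUNCTIONAL_SAFETY = "functional safety"   # does the system deliver its intended function?
    AI_SAFETY = "ai safety"                   # bias, robustness, testing coverage
    CYBERSECURITY = "cybersecurity"           # resistance to malicious intrusion

@dataclass
class SafetyCheck:
    bucket: SafetyBucket
    name: str
    passed: bool

def release_gate(checks: list[SafetyCheck]) -> bool:
    """Ship only if every bucket is covered and every check passes."""
    covered = {c.bucket for c in checks}
    return covered == set(SafetyBucket) and all(c.passed for c in checks)

# Hypothetical checks for the robotic-surgery example used on the panel
checks = [
    SafetyCheck(SafetyBucket.FUNCTIONAL_SAFETY, "surgical-outcome simulation suite", True),
    SafetyCheck(SafetyBucket.AI_SAFETY, "patient-demographic bias evaluation", True),
    SafetyCheck(SafetyBucket.CYBERSECURITY, "operating-theatre intrusion test", True),
]
print(release_gate(checks))  # True only when all three buckets are satisfied
```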


Accountability at scale


Sundar warned that failures at massive scale are rarely caused by raw infrastructure breakdowns; instead, “the systems that drive the infra… break” in the delivery layer or in security controls when a tiny vulnerability is overlooked [34-38]. He emphasized that “there is no accountability …” [77-78] and Syed added that “we can’t blame anyone” when an autonomous system errs [79-84]. Both stressed that at billions of users, clear accountability is essential, especially in safety-critical domains such as autonomous surgery.


Ontological framing and the zero-to-one / one-to-one model


Sunil Abraham cautioned against anthropomorphising AI, describing outputs as “just technology doing something” and noting that the core artefact is a single “weight file” that should be treated with a Unix-style “security-first” mindset [36-44][45-49]. He sketched three regimes for content moderation: (a) “zero-to-one”, between a user and the model, where anything legal is allowed; (b) “one-to-one”, where platform community standards (e.g., Facebook’s family-friendly policy) override pure legality; and (c) group conversations, where the system must be even more careful about divergent audience sensitivities [300-303]. He also argued that corporate AI development should retain the same freedom as open-source projects (BSD-style licensing), warning that shifting responsibility to developers without preserving that freedom creates “decentralised liability” concerns [304-307].
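
One way to picture these regimes is as policy layers that accumulate as the setting becomes more public. The sketch below is a loose interpretation of Sunil’s framing; the context and layer names are illustrative and do not describe Meta policy.

```python
from enum import Enum

class Context(Enum):
    ZERO_TO_ONE = "user and model"        # private: anything legal is allowed
    ONE_TO_ONE = "platform conversation"  # community standards override pure legality
    GROUP = "group conversation"          # strictest: audience sensitivities differ

def applicable_policies(context: Context) -> list[str]:
    """Return the policy layers that apply, loosest first."""
    layers = ["law of the land"]                      # always applies
    if context in (Context.ONE_TO_ONE, Context.GROUP):
        layers.append("community standards")          # e.g. family-friendly rules
    if context is Context.GROUP:
        layers.append("audience sensitivity filters")
    return layers

print(applicable_policies(Context.GROUP))
# ['law of the land', 'community standards', 'audience sensitivity filters']
```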


Hardware-level privacy protections


Sunil referenced Meta’s paper on a Trusted Execution Environment (TEE) for WhatsApp that creates short-lived cloud instances to protect user privacy [308-311]. This concrete example reinforced the discussion on embedding privacy and safety mechanisms at the silicon level (e.g., NVIDIA’s Halos platform, TEEs) for high-risk applications.
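
The lifecycle Sunil describes (create an instance, use it for one task, extinguish it) can be caricatured in a few lines. This is a conceptual sketch only and reflects nothing of Meta’s actual TEE design; the instance handle and log messages are invented.

```python
from contextlib import contextmanager
import uuid

@contextmanager
def ephemeral_enclave():
    """Create a short-lived, attested instance; destroy it after one task."""
    instance_id = f"tee-{uuid.uuid4()}"   # hypothetical instance handle
    print(f"attest and launch {instance_id}")
    try:
        yield instance_id
    finally:
        # the instance is extinguished; no state outlives the task
        print(f"extinguish {instance_id}")

with ephemeral_enclave() as enclave:
    result = f"request processed inside {enclave}"
```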


Commercial models and access


Sunil defended ad-supported AI, arguing that advertising can act as a “great leveler” by subsidising free AI services and increasing penetration in emerging markets without necessarily violating AI neutrality [181-193]. He contrasted this with the concern that ads might erode trust, highlighting the tension between equitable access and perceived commercial bias.


Market incentives for “trust-grade” AI


Geeta noted that enterprises are willing to pay a premium for trustworthy AI when downstream risk is high – for example, when AI directly impacts customers, brand reputation or regulatory compliance – but are less likely to do so for internal experiments or low-risk proof-of-concepts [221-227]. This mirrors observations that ROI considerations drive adoption of higher-assurance models.


Operationalising governance


Geeta argued that governance must move from “observation” to an enforceable “control point”, exemplified by IBM’s ethical board that must approve any AI-related proposal before it reaches a client [130-138]. She stressed that senior leadership must treat responsible AI as non-optional, embed it into the enterprise risk management (ERM) framework, and automate governance checks so that they are applied at runtime rather than retrospectively [122-144][143-148]. This aligns with calls for runtime-enforced guardrails in contemporary governance literature.
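
The difference between observation and control can be shown mechanically: an observing system logs a violation after the fact, while a control point refuses the call. A minimal sketch, assuming a hypothetical risk-tier check; none of this reflects the internals of watsonx.governance or any IBM tooling.

```python
import functools

class GovernanceError(Exception):
    """Raised when a runtime governance check refuses a call."""

def governed(checks):
    """Turn governance into a gatekeeper: block the call, don't just log it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for check in checks:
                ok, reason = check(*args, **kwargs)
                if not ok:
                    raise GovernanceError(f"blocked by {check.__name__}: {reason}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical check: every use case must carry an assigned risk tier
def has_risk_tier(prompt, risk_tier=None):
    return (risk_tier is not None, "use case has no assigned risk tier")

@governed(checks=[has_risk_tier])
def serve_model(prompt, risk_tier=None):
    return f"model answer to: {prompt}"

print(serve_model("summarise this contract", risk_tier="high"))  # allowed
# serve_model("summarise this contract")  # raises GovernanceError
```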


Regulatory harmonisation


When asked how to reconcile global regulatory diversity, Sundar proposed a “standard-then-tailor” approach: first make the platform, the algorithms and the surrounding ecosystem safe as a universal template, then fine-tune that template for each jurisdiction [242-245]. Geeta echoed this, arguing that technologists should first agree on “technology-level table stakes” before layering geography-specific rules [290-294]. By contrast, Sunil, quoting Lina Khan, claimed that “there is no regulatory vacuum for AI” and that existing regulations already provide a baseline, suggesting a more sceptical view of the need for additional global harmonisation [295-296].
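
As a sketch of the standard-then-tailor pattern: a base template is defined once, and per-jurisdiction overlays extend it rather than replacing it. All section names, keys and jurisdiction requirements below are invented for illustration; they are not NVIDIA’s or any regulator’s actual requirements.

```python
# Universal safety template, defined once (section and key names are illustrative)
BASE_TEMPLATE = {
    "platform_safety": {"secure_boot": True, "fault_isolation": True},
    "algorithmic_safety": {"bias_evaluation": True, "scenario_simulation": True},
    "ecosystem_safety": {"approved_suppliers_only": True},
}

# Per-jurisdiction overlays extend the template rather than replacing it
JURISDICTION_OVERLAYS = {
    "EU": {"algorithmic_safety": {"conformity_assessment": True}},
    "IN": {"ecosystem_safety": {"local_supplier_audit": True}},
}

def tailor(base: dict, overlay: dict) -> dict:
    """Merge a jurisdiction overlay onto the universal template."""
    merged = {section: dict(keys) for section, keys in base.items()}
    for section, extras in overlay.items():
        merged.setdefault(section, {}).update(extras)
    return merged

eu_profile = tailor(BASE_TEMPLATE, JURISDICTION_OVERLAYS["EU"])
print(eu_profile["algorithmic_safety"])
# {'bias_evaluation': True, 'scenario_simulation': True, 'conformity_assessment': True}
```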


Watermarking debate


The panel disagreed on mandatory watermarking of AI-generated content. Geeta answered “No” to a universal requirement [281-284]; Sundar noted that watermarking is already happening but questioned its utility [333-335]; Sunil responded with a question rather than a direct answer [327-330], reflecting industry uncertainty about balancing transparency and practicality.


Concrete safety actions


Sunil cited the shutdown of Facebook’s facial-recognition system as a concrete instance where a project was stopped for safety reasons [312-315].



Key take-aways

– Senior leadership must mandate responsible AI and embed it in enterprise risk management.


– Governance should be a control point (e.g., IBM ethical board) rather than a post-hoc observation.


– Runtime-enforced guardrails and automated tooling are essential to replace manual “Excel-sheet” governance.


– NVIDIA’s three-bucket model (functional safety, AI safety, cybersecurity) provides a reusable template, and Sundar’s platform-algorithm-ecosystem approach shows how such a template can be standardised and then tailored per jurisdiction.


– Embedding privacy and safety mechanisms at the silicon level (e.g., TEEs, Halos) is required for safety-critical domains.


– Enterprises purchase premium “trust-grade” AI when downstream risk (customer impact, brand, compliance) is high; lower-risk internal pilots may use cheaper options.


– A technology-first “table-stake” baseline is a pragmatic interim step toward global regulatory harmonisation.


– Mandatory watermarking remains contentious; the panel reached no consensus, with views ranging from optional or contextual labelling to scepticism about a universal rule.


Unresolved issues that merit further research include the feasibility of a globally harmonised AI regulatory framework, the effectiveness and acceptability of mandatory watermarking, the long-term impact of ad-supported AI on neutrality, and concrete processes for pausing or stopping AI projects when safety concerns arise.


Overall, the panel demonstrated pragmatic convergence on the need for layered safety, clear accountability, and standards-first approaches, while also exposing divergent views on regulatory architecture and content-identification policies. This blend of consensus and debate underscores the complexity of building trustworthy AI at scale and points to a collaborative roadmap that blends technical safeguards, organisational governance and policy alignment.


Session transcript: complete transcript of the session
Mr. Syed Ahmed

of responsible AI office in Infosys. And absolute privilege to announce my co-panelists: Geeta Gurnani, field CTO, technical pre-sales and client engineering at IBM. Sundar R. Nagalingam, senior director AI consulting partners at NVIDIA. And Sunil Abraham, public policy director at Meta. So now between Infosys, IBM, NVIDIA and Meta, you can’t get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph as requested by the organizers before we get started with the panel discussion. Thank you. All right. So it’s really amazing to be on panel with all of you again.

So before we get started with, you know, a lot of heated discussions on the scaling of trust, because trust is something that everyone thinks, you know, they have a different perspective on trust. So let me get started on very simple questions and then we’ll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry in your experience, after which you have felt that, oh, even after decades of experience, this industry still surprises me?

Ms. Geeta Gurnani

Sure, so thank you so much, Syed, for that question. And as I was mentioning when I was standing outside, that when I was walking in and meeting many clients almost two years back, everybody was asking me, what is this responsible AI and what is this trust, okay? And what surprised me, that we all witnessed so much learning from security as a concept, right? Security always used to be afterthought, and now I think people just can’t afford of not thinking security; it has become completely shift-left, right? That people first think security, then everything else. But in spite of that whole learning, what I witnessed in last 24 months is that people are adopting AI, but trust, governance, security is taking a prime stage now, okay? It wasn’t a first thought. And when I met a very senior leader, I will, of course, not name them.

And I told them that you were starting your journey on the Gen AI. Can we work with you on responsible AI? And he said, but that will block my innovation. And I don’t want to block my innovation. And I asked him, so how do you manage the governance? He said, on Excel sheet. And we are like, so I was, wow. I said, if you’re ready to spend so much money. But I think now when I go and meet, I realize that that organization is not able to scale because they’re not confident. But this Excel never let anybody fail. I think that’s the first thing.

Mr. Syed Ahmed

That’s quite profound what you mentioned, right? So what you’re saying is the people are more open to responsible AI now and trustworthy AI now. And in many ways, they leaped ahead earlier with innovation, with a lot of innovation. There is. There is absolutely no doubt in anyone’s mind at the power of AI. What AI can do. But true scale can come only when you start trusting AI, only when you start building that layer of trust, and that time is now. That’s correct. Excellent. Okay, Sundar, maybe next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users, whether it is governance first or infrastructure or alignment? When AI scales to a lot of people, what breaks first?

Mr. Sundar R Nagalingam

I mean that’s the thing, right? I mean any one of them can break, and most of the times it is not the infrastructure that breaks. What breaks is the systems that drive the infra. And the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices; that is one possibility of failure. The second one, very obvious one, is: is it getting served safely, in a secure way? That could be a very, very important point of failure. And even that is a failure. I mean, the systems may appear to be running well and everybody might be getting the answers that they have been looking for, everything might look hunky-dory, but if a very, very small vulnerability gets overlooked, if it had not been thought about, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that’s a huge failure. So most of the times the things that break when you are serving a large number of users is the way in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.

Mr. Syed Ahmed

Excellent. I totally agree with you. It is absolutely right. Sunil, um, I think, um, very, very important question to you. Last month we saw all this craziness about OpenClaw, Moltbot, Moltbook. For those of you who don’t know OpenClaw, Moltbot, Moltbook: Moltbook was a social networking site, but with a twist. It was created for only AI agents, okay? So humans were allowed to observe what is happening in the social networking site, but they couldn’t participate. They couldn’t post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language. They had their own religion, apparently. So a lot of things happened. So question to you is, you have spent years shaping digital policy, right? But I mean, when you heard all about all this Moltbot, Moltbook and all that, did you cringe for a minute and say, oh, I didn’t expect this?

Mr. Sunil Abraham

No, and unfortunately, even though you said it’s the lightweight question, I have to answer it using big words. So I think the main reason why I don’t see it is because I’m skeptical towards anthropomorphization. Whenever I see technology do something, I don’t, in my head, apply the mental model of a human. It’s just technology doing something. So I’m not impressed at all by a Moltbook. It is just machines hallucinating.

The stochastic parrot is just doing something. There is no real intelligence at display yet. The second big word I’m going to use is ontology. In philosophy, the ontological question is, what is this thing that I’m looking at, Moltbook or OpenClaw? And at the very core of Gen AI is a single file on the file system, the weight file. And I’m somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files. And operating systems didn’t scare me. And somehow you want me to be scared of a single file. A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it’s one file, but what is the nature of truth about this file? And I think the mistake we’re making is we’re expecting it to be a responsible file, but that is actually not, according to me, what it is. According to me, what it is, is it is a general-purpose file or a dual-use file, and one person’s bug is going to be another person’s feature, and another person’s feature is going to be the third person’s bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I’m using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta’s mental model of security first, and that means Unix thinking. Suppose we use the Unix mental model, then surely we will not be scared of any file. It’s in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I’m not scared of Moltbook at all.

Mr. Syed Ahmed

Thank you so much for your response. That gives us a lot of assurance, and I think a lot of people in the audience will also agree that now we are a little bit more assured than when we started. One of the big challenges that we always have is we humanize AI too much; that was one of your big words that you used. Which is not the case; we shouldn’t be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let’s get started with the perspective. The reason why I am opening up one question to all of you, same question: one is because I am a little lazy; second is, when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? So when I talk to regulators, they have a different view on it. When I talk to governments, policy makers, they have a different view on it. Academia has a different view, industry has a different view. Now within industry, um, you know, enterprise applications like what IBM does has a different view, chip makers like NVIDIA has a different view, and consumer AI platforms like Meta has a different view. Very quickly, if you can tell me, what does it mean by trustworthy AI in your own sense, and what are the key non-negotiables, one or two maximum. So for each of you. Geeta, maybe we will start with you. Okay, so I think I will second

Ms. Geeta Gurnani

your thing that people being confused about trustworthy AI. I think as a technologist even I was confused three years back, okay? Because people use a lot of terms interchangeably, which sometimes scares them and they don’t know what they’re doing, because people use trust, security, governance, compliance, all of it interchangeably. I’m happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people that, OK, exactly what are we trying to do? We’re putting in a lot of keywords. Yeah, it’s just a lot of keywords. But I think when you start to decipher each one of them and you say, OK, ultimately, ultimately, see, trustworthy AI is for an end user, which means can I trust what I’m using?

Right. And all of us technology providers need to really work upon which says that, OK, to make you trust what you’re using, what enablers I can give. Right. So in my mind, for trustworthy AI, the ROI need to be seen that what downstream risk is it going to bring? Right. So if I have an end user or a consumer who wants to trust an AI, then I think he needs to be assured that the model or use case I’m using has already passed the security test. It is not hallucinating, which means I have a control over monitoring that what output it is producing. Right. So that risk has been taken care.

Somebody has looked at it. And the third, which says that compliance, right, that if I am operating in a law of land where some laws are applicable, or if I’m in an industry where some laws are applicable, somebody has taken care of it for me. Right. So in my mind, trustworthy is how end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers are taken care and they will be taken care differently in different industries by academias and all of it. That’s broadly.

Mr. Syed Ahmed

I love it. So basically, irrespective of all the building blocks of security, safety, privacy, what you said can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So what about from NVIDIA’s perspective, or your perspective?

Mr. Sundar R Nagalingam

Sure, so this trustworthiness, I mean, you explained it very beautifully, Syed, that, I mean, multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So, I mean, which is trustworthy and which is not? Which is safe and which isn’t? Right? I mean, so let’s try to abstract it to a very high level, something which can be bucketized in a way. Let’s say in three buckets, and all these three buckets will be applicable to any regulator that you’re talking about, any government you’re talking about, any country, any function, any whatever it is, okay? The first one, the most important one, is the functional safety, okay? Maybe if I explain it with the help of an example, it’s easier for all of us to relate to it. I mean, let’s say a robotic-assisted surgery. In AI-assisted robotic surgery, the first one is the functional safety, okay? The function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, okay? And what comes before and after surgery. It can be very, very easily equated with the skills of a surgeon, a manual surgeon, right? I mean, that is what it is. The functional part of it: is it getting delivered? Okay. That is, I would say, in terms of visualizing, processing, understanding and controlling, the easiest of the three that I’m talking about, because most of the times it’s black and white. It’s not always black and white, but most of the times it’s black and white. The second one is the AI safety that goes into it. See, I mean, obviously an AI-assisted robotic surgery has, I mean, you cannot even imagine the amount of trainings that needs to be done, the amount of testing and validation that needs to be done, the amount of, you know, scenarios that can be visualized, created through synthetic methodologies, created and emulated and simulated and tested, the amount of bias that can get into it.

I mean, if it is a male patient, I mean, the simplest bias would be the different approach between a male patient and a female patient. I mean, I’m not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? Third, and that’s not easy, because the problem here, Syed and august attendees, is that it’s humanly impossible to even think of things that can go wrong. I mean, today, I mean, that is why we always go back to these AI-assisted ones for that also. The last one being cybersecurity. If a bad element wants to just hack into the theater and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, I mean, that’s like unimaginable, right? I mean, and it can happen. I mean, it’s not easy, but theoretically it is possible. So I would say that if we abstract it to very high levels, these three areas, once again: the functional safety part of it, the AI safety part of it, and the cybersecurity. These three will be common amongst any approach that need to be

Mr. Syed Ahmed

Absolutely spot on. In fact, if I can extend it: say, when we are building this kind of AI application, say for example the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is okay, but when a robotic, you know, surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned were like fantastic, and I think this is very much essential. Yeah, you touched a very important

Mr. Sundar R Nagalingam

May I just add 10 seconds? So it was a very important point. And what is the reason for that? Why is there so much of standard for that? Why is there an undue expectation out of that? Reason is very simple: there is no accountability. Whom do I blame? Whom do I take to the court? Whom do I curse? There is no human. It’s easier when the surgeon makes a mistake; you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, I mean, is it the robot? I mean, so that uncertainty of whose collar to hold, whose neck to be choked when things go wrong, that uncertainty is increasing the expectations out of it.

Here you know for certain whom to blame. There you absolutely don’t know whom to blame. When I don’t have somebody to blame, I don’t want a reason to blame.

Mr. Syed Ahmed

Accountability is definitely very, very important. I can’t stress it more. But also, if an AI system has a flaw, it is at scale. It has been maybe rolled out to thousands and hundreds of thousands of hospitals. So it can fail at that level. We can’t absolutely take any kind of, you know… we have to take precautions.

Mr. Sundar R Nagalingam

Excellent point. Error also scales. Good point.

Mr. Syed Ahmed

Sunil?

Mr. Sunil Abraham

Yeah, again, I just love disagreeing with Syed on everything he says.

Mr. Syed Ahmed

That’s very rare, Sunil.

Mr. Sunil Abraham

So I look at a project like… and there is distributed installation of a technology, and hopefully that kind of architecture should not scale as you say. So that is the Meta vision: superintelligence, that means personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt Meta models Llama 2 and Llama 3, and the question was: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said, I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn’t do as per request; it did the opposite of the request. And Llama 3 was safer than Llama 2. It said, I refuse to answer this question because I morally object to this question. This lady was happy, because she is Lady A. But actually there is an imaginary lady, Lady B, who works in some patriarchal institution, and she’s going in to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So in a dual-use technology, and if it truly has to avoid all of this risk at scale, which is perhaps going to happen in the world of atoms, and in the world of atoms I would be as worried as Sundar is. Though, if I tell you about invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Will the Indians accept it or not in 2026?

They won’t accept it. That invention is called the automobile. Even today in 2026, we are not able to solve the safety issue of that technology. Still, as Indians and as the human species in India, we say, oh, 200,000 people, Indians, will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But we are asking quite rigorous questions, Syed, I feel. So for us in the world of bits, we have three mental models for the harm. So the first mental model, zero-to-one: just you and the model. There, everything that is legal, going back to what Geeta said, everything that is legal is allowed.

And it is legal to write a book of hate speech. All of this is legal. You can write a book about neo-Nazis. These are all legal acts. Then we have one-is-to-one. In the one-is-to-one, the community standard of Facebook will have to kick in. At that point you cannot say whatever is legal; you’ll have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully. So therefore you cannot say unfamily-friendly things. And then, when the robot or the intelligence is participating in a group conversation, then perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies, and some people may love heavy metal and some people may get very upset by heavy metal. So it has to deal with all of that.

Mr. Syed Ahmed

Absolutely love the diversity of response to one question. And that’s very important, and only these kind of panels, you know, representing different industries, can bring in this kind of diversity. So I am really amazed at the diversity of responses to the one question that I have asked you, and I hope that you have enjoyed it. So let’s go a little bit deeper. Geeta, maybe: IBM has been investing in a lot of responsible AI stuff, even before all this agentic AI era. I remember way back, say, during good old machine learning days, you used to have AI Fairness 360 and security products.

Most of it were open source and we used to use them. Today you have IBM watsonx.governance. But question is, how do you ensure that these tools don’t remain at just a monitoring layer and get enforced on the ground at the runtime, right? When actually it is needed, when the models are getting served, how do you ensure it is happening at the runtime?

Ms. Geeta Gurnani

Wonderful. I’ll just start with the lighter note that I hope every corporate has an office that can enforce this, where they have a responsible AI head like Syed and Arshik for India who can really enforce this. But trust me, it actually starts with the vision of senior-most leadership in enterprises: do they want to scale AI in the different business functions for themselves as well as for their clients, with trust? It can’t happen if you are not committed, because the first example I gave you was, I love what he just added, which was a Unix model, saying that do you want to be conservative or do you not want to be conservative, right? But conservative helps to scale.

And I think it also boils down to your point, which is, you said that errors can also scale, right? So if I were to stop errors at scale, then this is needed, right? But I think more often the mistake I’ve seen that we do, and that’s why I was giving the security example also, is that we started investing a lot later in tooling. Okay. Now, if you want every single person to use it as shift-left, which means not governance as an observation later on, but governance as a control, then you have to equip people to automate to a good extent. Right. If you ask people that manually, every single time a use case comes, you first check: is it compliant?

Is it ethical? Should I be doing it? Should I not be doing it? And there are no workflows for people to really automate. Then people say, OK, and all of us forget about AI. I think if you are asked to do, in today’s world, any task which is extensively manual, people will skip it, no matter whatever hard rules, regulations you can make. Right. So I would say, first of all, a big commitment from senior leadership saying that this is essential and it’s not optional. That’s the first thing. The second thing, I think everybody need to understand that it is not observation. You are not sitting like a governing body somewhere who just observes whether it is right or wrong. You have to

You have to. make it control point, like a gatekeeper, saying that unless you do this, you are not allowed to take it forward. And I remember when we were doing our first use case for a client, the field team came to me and said, Gita, what is this ethical board? Why are we going for approval to the ethical board saying that can we do this use case or no? Because as a sales team, we were not allowed to do any use case unless our ethical board really approves it, saying that you can table a proposal to a client. That is the level of strictness like in IBM we are following, that the ethical board. And everybody thought that ethical board is like some body sitting somewhere who will…

Rubber stamping everything. Rubber stamping. And now the sales team needs to take approval before they can bid a proposal. Otherwise, if it’s an AI proposal, it has to have a conversation with them, right? So governance, if you start putting that as a control. And third point, I think, which we were discussing outside the gate some time back: more and more, my observation was that if I were to do a governance conversation in an organization, I have to talk to five people. I have to talk to risk officer, I have to talk to CISO, I have to talk to business person, I have to talk to CIO. And then one day I was sitting with my team and saying, will this conversation ever see the light of day? Who’s going to take decision? Is it business, is it security, is it risk? And then, I think thankfully, what we are seeing is that if you have to make governance central, you have to bring it in your enterprise risk posture completely, saying that in your enterprise risk management, if you are calculating your risk posture, right, then AI risk has to be really taken into consideration, right? So I will just summarize my conversation, Syed, saying: make AI governance the gatekeeper, you have to bring it in the control, and then eventually, I think you will see, maybe in next 12 months, I’m pretty confident, it will roll up to the enterprise risk. It is no more separate AI risk or governance.

Mr. Syed Ahmed

I love it, the way you said it, and it has to be integrated, right? You can’t just have AI risk; you have to have an integrated risk panel that can make decisions. That’s absolutely, and I love the way you said it. So first is from the leadership level, wherein you need to empower and have the thing, and then with the tooling and all, you need to enable, and people will need to ensure on the ground that they implement. So yeah, that’s an amazing perspective. Thank you, Geeta. Sundar, I couldn’t resist asking this question to you. A lot of people in the audience will not spare me if I don’t ask this question to you.

Mr. Sundar R Nagalingam

you’re scaring me now

Mr. Syed Ahmed

No, no, no, it’s an easy question, but an expected question to, you know, a person like you, right? So should GPUs and high-performance AI infrastructure have embedded privacy guardrails at silicon level?

Mr. Sundar R Nagalingam

Absolutely yes. Absolutely yes. I mean, it should be there. I mean, why not? And I would – yeah, go ahead.

Mr. Syed Ahmed

Would you want to give some examples on how you are doing it?

Mr. Sundar R Nagalingam

where it goes through a very, very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe, right? I mean, healthcare and driving. These are, I would say, the most stringent standards when it comes to transportation. Let me put it as transportation, which includes aerospace as well. The two most stringent areas where safety is a necessity. It’s never luxury. It’s a necessity. So the answer is yes, Syed. Absolutely.

Mr. Syed Ahmed

Thank you so much. Sunil, you wanted to…

Mr. Sunil Abraham

Yeah, I mean, perhaps to take forward what Sundar said.

Mr. Syed Ahmed

I will still ask you your question, though.

Mr. Sunil Abraham

We can skip that. Do go.

Mr. Syed Ahmed

No, no, go ahead.

Mr. Sunil Abraham

What I thought is so fascinating about what Geeta said is that in a corporation, in a profit-maximizing firm, they have an ethics review board. And it’s just… I don’t know whether that’s the phrase. That’s an equivalent. Sorry, what did you say? Yeah, so this is something you see in a university, and this is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the tech work that they’re doing. Meta also has this tradition of publishing academic papers. So it’s very weird that corporations are becoming more and more like academia, and perhaps that’s a wonderful thing as well, and we should celebrate that. And that makes people like me very fortunate to be within these corporations. So Meta published a paper which was called Trusted Execution Environment, and the whole idea was, if a WhatsApp user in a group would like to use the power of AI, then there is insufficient compute on the device itself to have edge AI solve the problem for the user. So till the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute on the cloud and then do all the processing.

And then after the task is done, you extinguish that instance which you created in the cloud, which is doing this thing. And as part of that paper (so I’m of course not a computer science student; I’m an industrial and production engineer, so I’m like a previous generation of technology and all of those kinds of things), the paper, I cannot understand, out of 80 pages or 60 pages of paper, I cannot understand 40. And that 40 pages is about this hardware. And there’s a whole series of attacks that you can possibly have, in the tradition of the pager attack and Israeli supply chain attacks. There’s a whole series of things that you could potentially do to invade privacy. And before that, security.

And I just want to sort of share this with these folks. I mean, I guess we all learn that way: we read books and we understand some words, maybe two, three words on the page, and then we feel a little better, and we hope that the next time we read it we’ll get smarter. But there’s a lot, and I’m sure that your team is doing a lot of work, and the Meta team is; they’ve named your chips, saying NVIDIA chips, we have done the following analysis, and with the other chip. And I don’t understand it at all, but I know it’s a big area of work, and I wanted to say thank you for what you said.

Mr. Syed Ahmed

Thank you, Sunil. Absolutely. Last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks that are happening as we speak, at all the levels, including the hardware levels that are there. That’s quite interesting. Okay, and good conversation, by the way. I may have to skip the last few questions because this conversation is so good we can go on and on forever. But Sunil, I’ll still ask you your question.

Mr. Sunil Abraham

no no no no

Mr. Syed Ahmed

no this is a very important question in my mind

Mr. Sunil Abraham

i’ll try to answer it

Mr. Syed Ahmed

Okay, so last week, I think last week or a few days ago, OpenAI did come out with, they started embedding ads in ChatGPT, yeah, right? So, um, when a consumer AI platform like ChatGPT starts embedding ads, um, my question is, will it help consumers subsidize their subscription, or will it kind of violate the doctrine of, you know, the free AI principles, AI neutrality?

Mr. Sunil Abraham

Yeah, so, uh, very quickly on that. We should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane; that invention is 125 years old. Only 25% of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15% of households in India. That invention is also 125 years old. Gen AI, my guess is at least 20% of the country is using it today. More than that. More? Oh, thank you. So, shall we say 25? Yeah. Okay, 25% of the country is using this technology that is only five years old. And the reason it is penetrating is because of two opennesses.

One is free-weight models, that was what we were doing, but also gratis, that the service, intelligence, is available on a gratis basis. Whether you’re an AI summit attendee staying at the poshest hotel and you paid $33,000 per night, or whether you’re in Paharganj and you’re staying for Rs. 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it’s both. Yeah. Meta provides WhatsApp and you’re completely private, and Meta provides non-encrypted services as well. You can have services that are ad-supported. You can have everything. We must have maximum, because in this country, ideally, we want to move from 30% of people using AI; I want to move to 90% of people using it, because it’s just bits.

We can make this happen. So let’s not be skeptical about the ad idea. It’s a technical problem to be solved. It will help bridge the AI divide, and it will be a great leveler and all that. Sorry, it took much longer than I thought. I thought I’d do it in one, two sentences. Back to you.

Mr. Syed Ahmed

Okay. All right. Quite interesting conversations. Geeta, now I’ll come to you. We talk a lot about ethics, trust, responsible AI. And suppose we go ahead and develop it. How are you seeing, I mean, would customers pay a premium for trust-grade AI? Are you seeing that in the market? So if I tomorrow have a superior safety posture, right, is it influencing the buying decisions of the enterprise significantly? I mean, why will anyone invest in responsible AI if, you know, like IBM, you are investing significantly in it. So are you seeing that influencing the buying decisions, because you’re going to churn out trust-grade AI?

Ms. Geeta Gurnani

So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in their journey of Gen AI adoption? Okay, trust me, still I feel many organizations are at surface. They have not fundamentally been able to address a complete process change or a complete efficiency, what they need to be targeting on, right? But the minute they are wanting to get into the real use case, which is going to fundamentally change the way they operate, right? Or fundamentally maybe generate a new business model altogether. then I think they are ready to pay for the premium. So I would say that they may not pay for every single use case, what they’re doing, because see, when we are delivering use case also, now every enterprise is intelligent to say whether I’m going open model, whether I’m going for paid models, SLM, LLM, tiny models, whatever you may call, right?

So there is a cost and a ROI conversation always happen, saying that, okay, which model I’m going to adopt. And many people I’ve seen that they say that I may not pay enterprise trust grade AI money if I’m doing all in all internal use case. Okay. But if I am putting this use case in front of my consumers or my end clients who are going to use, which is where a downstream risk, which is my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will buy a premium trustworthy, because I can’t afford to fail there, right? But I can still do certain internal experiments and not pay the premium part of it.

For POCs and experiments and for some internal use case. So let’s say if they’re doing some ask-IT or other stuff, then they say, okay, I am okay to go. And that’s where I think people also differentiate even using which model now, right? They make a choice of which model they would like to use. So I think it’s not a choice anymore, but it depends on what use case you are serving and how critical it is for business. And then you take a call: am I going to invest and pay premium? No one single lens for all.

Mr. Syed Ahmed

No, yeah, absolutely. Sundar, maybe I’ll ask this. You did talk about your operating system for smart cars and all that. I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. Now the world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack component, from the chips to models to AI applications that are there. And you will have to roll this out, being a global company, across the geographies, and each geography has multiple different regulations, restrictions and checklists that you will have to follow in terms of automobiles and things like that. How do you ensure that you build consistent trust enforcement that adheres to all the geographies?

Mr. Sundar R Nagalingam

Sure. No, I mean, that’s a very, very pertinent question, because it’s not easy. I mean, it’s not easy. So the idea is to do a standardization, right? And then tailor it for the needs of each of the countries. I mean, you fine-tune it for the needs of each of the countries. So once again, there are three big approaches when it comes to Halos specifically. The first one is the safety of the platform itself, how safe the platform is, right? Once the platform has been made safe, then it becomes a template which can be tweaked to the needs of specific geographies, specific countries, et cetera. That’s a very, very important thing. And then you can also, you know, implement a standardization approach. So that’s a very, very important approach.

And then you can also, you know, you can also implement a standardization approach. So that’s a very, very, very, very important approach. the second one is the algorithmic safety right how i mean going back to the fundamentals i mean it’s not programming it is what algorithms do we use how do we ensure that the algorithmic safety is is is number one it is safe first and number two the algorithms can be with with some necessary some tweaks can be made to to to serve the needs of specific geographies specific countries specific specific verticals for that matter things like that the third one is the ecosystem itself i mean uh i mean whatever is is is approved to be used as an ecosystem in one one country will not be there in the second the suppliers will change the vendors will change so it is just not ensuring the platform and the algorithm are safe how do you ensure that the ecosystem that goes into building the cars are is also made safe okay that is a huge thing there is no end to it because it keeps changing a lot but once you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe

Mr. Syed Ahmed

Love your response. What you are saying is basically, even in absence of regulations, controls, ensure you make the platform safe, you make the algorithms safe,

Mr. Sundar R Nagalingam

you make the ecosystem safe you have a template for now

Mr. Syed Ahmed

you already have everything safe you just need to now tweak it to different geographies or sectors and industries

Mr. Sundar R Nagalingam

yes absolutely

Mr. Syed Ahmed

Okay, I love it. Sunil, one question to you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this too responsible, or decentralized liability?

Mr. Sunil Abraham

Again, just to use something that Yann LeCun used to say (and he is no longer the chief AI officer, but the words continue to be true), which is: we all have Wi-Fi routers in our homes, and when those Wi-Fi routers fail, we don’t call Linus Torvalds and say, hey, Linus Torvalds, this Wi-Fi router is running Linux, and therefore please help me fix the bug. The company that sold the router and made a variant or a derivative work from the Linux project, you will have to speak to them. And that is the freedom that is necessary in the open-source community and in the community of proprietary entrepreneurs that build on open source, because the BSD license allows you to do that. It allows Apple to take an open project and then to make it a fully proprietary project. And you could be making dual use at that level itself, that you want the model to create hate speech.

We want a hate speech classifier in Santali. Unfortunately, we don’t have enough Santali users on the platform, so we have to make synthetic hate speech in Santali so that we can catch it in advance. So we want to make a big corpus of hate speech in Santali. We cannot go around and ask people, please make hate speech for us; that would be a worse-off option. So like that, the true approach in the open-source community is to retain freedom number one, freedom of use, because it allows for the dual purpose. But the moment we use any of that on our platform and we are providing it, then all those freedoms disappear. Then you have very limited freedoms.

Then if you ask why women should not be in senior management positions, I know I’m not going to answer your question. So that’s where we are.

Mr. Syed Ahmed

Quite interesting. Thank you. I have around seven to eight minutes left. I’m going to skip through the rest of the questions. We’re going to do things a little differently, if the audience are okay. I’m going to ask very rapid-fire kind of questions, same questions; everyone should answer, but only in yes or no.

Mr. Sunil Abraham

As a philosopher, I protest. I think the slogan for this AI age is both-and. Not only should we embrace yes and no, we should also embrace everything in between, because then only will we have personalized superintelligence. The trouble with your framing is that it is monolithic.

Mr. Syed Ahmed

And I’ll make an exception as a moderator. What I’ll do is, if a question requires, or a response requires, a little bit more attention, I’ll call that out. You also call me out if you think you need to add anything. But I have some very interesting questions, and I’m really excited to understand from you guys what you think actually. So again, the format is this: I’ll ask the same question to all of you. You can answer, OK, not yes or no, very concisely, considering the time. Yes or no or both, yeah, yeah, the answer is your choice. So: regulations across the globe, do we need to have global alignment on regulations? Yes or no?

Ms. Geeta Gurnani

No

Mr. Syed Ahmed

No. Okay. Yes. Okay. No. Okay, I understand, but I did expect this kind of response. So maybe I’ll tweak the question a little: a minimum understanding of what is required across all geographies, at least. Do we agree on that? Not a heavily regulated law or something, but minimum conditions that need to be met.

Ms. Geeta Gurnani

I would say we should talk about technology regulation, not geography regulation. As he was saying, there are certain table stakes at the technology level. All technologists should first agree that these are the table stakes for the technology; then geographies can take over.

Mr. Sunil Abraham

It’s already regulated. To quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little with what Sundar said previously: you cannot say, I did it and I’m not responsible.

Mr. Syed Ahmed

A little easier question this time: is advancement in AI models outpacing advancement in AI governance? Are the models and the innovation outpacing governance?

Ms. Geeta Gurnani

Absolutely.

Mr. Sundar R Nagalingam

Yes. I mean, that’s the natural way things happen, right? The technology has to advance, and then you need to ensure that the advanced technology is safe and secure. That’s a natural progression, and it has been happening that way.

Mr. Sunil Abraham

It’s never happened in the reverse order.

Ms. Geeta Gurnani

Yeah. I agree.

Mr. Syed Ahmed

But there is a school of thought saying that technology has to advance, correct, but that before it can be widely adopted in production, maybe we need to have AI governance in place; that is something we should catch up on really fast. We should make the technology safe before it is widely adopted, right? So, okay: if you had a more capable but less safe model, would you delay your launch to stay responsible?

Ms. Geeta Gurnani

As I said, it depends on the use case. It’s use-case dependent.

Mr. Syed Ahmed

Fair enough

Mr. Sundar R Nagalingam

I just echo Geeta.

Mr. Syed Ahmed

Okay, fair enough. One answer where I could get all my panelists to agree. Have you stopped any projects due to safety concerns?

Ms. Geeta Gurnani

As I said, I am currently not on IBM’s ethics board, so I have not stopped any myself, but I have seen them stop projects.

Mr. Sundar R Nagalingam

Likewise, I’m not in the design department, so I don’t have first-hand knowledge, but I’m sure a lot of things would have been delayed, if not stopped, because compliance regulations were not being met. Yes, I’m sure.

Mr. Sunil Abraham

Facial recognition was turned off on Facebook. Yes, absolutely.

Mr. Syed Ahmed

Big question. Maybe, Sunil, I’ll start with you this time. Can we actually govern AGI, artificial general intelligence?

Mr. Sunil Abraham

It’s a regulatory problem we don’t have to think about yet.

Mr. Syed Ahmed

Okay, we can… okay.

Mr. Sundar R Nagalingam

Difficult. It’s going to be much more difficult. Instead of asking can we govern, I would ask should we govern, and absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That’s my hope, and I don’t want to see a day when machines are better than human beings.

Ms. Geeta Gurnani

Okay. I’ll go back to what you said initially, that humans should not be scared by what they have created, right? So yes, depending on how it evolves and how people are using it, governance will come. I don’t think it will be optional at some point in time.

Mr. Syed Ahmed

One last round, okay? Again, I’ll start with Sunil. Should we have mandatory watermarking on all the media, text, and content that is generated by AI?

Mr. Sunil Abraham

Should we have mandatory watermarking in a photo editing tool or a text editing tool?

Mr. Syed Ahmed

Yes.

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Are you saying yes or no?

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Okay. That’s an answer I’ll take. No answer is also an answer.

Mr. Sundar R Nagalingam

I don’t… I mean, see, the fact is we have accepted it. It’s not an untouchable, alien, dirty thing, right? It’s acceptable. So let’s make it look good, feel good. There is no point in watermarking everything just to brand it. There will be a blurry line between human-generated content and AI-generated content, and should we even demarcate that? My honest feedback, and I’m saying this with a heavy heart, is that human-generated content will vanish from the internet, just like we no longer remember addresses or phone numbers. We used to remember those; we used to remember routes.

Mr. Syed Ahmed

But I hope not. I hope not.

Mr. Sunil Abraham

That’s why I said I have a heavy heart.

Ms. Geeta Gurnani

I’ll answer from a very personal space, because my son is a creative director in films. He absolutely says that it has to be demarcated, but sometimes he goes to the extent of saying that in the near future you will be able to demarcate it clearly yourself and will not need any watermark. But there is a different angle which comes in when you are the human creative versus when you are really dividing completely.

Mr. Syed Ahmed

Perfect, thank you so much, and that brings me exactly on time. Ladies and gentlemen, please give a big round of applause to this amazing panel. Thank you so much, and to the amazing moderator, thank you so much. Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (45)
Factual Notes — Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The panel opened with introductions of Mr Syed Ahmed (Infosys), Ms Geeta Gurnani (IBM), Mr Sundar R Nagalingam (NVIDIA) and Mr Sunil Abraham (Meta).”

The knowledge base lists the same four speakers and their affiliations, confirming the panel composition [S8] and [S1].

Additional Context (medium)

“Geeta Gurnani said security used to be an after‑thought and now has become “shift‑left”.”

A related observation that security has historically been an after-thought is described in a workshop metaphor about technology launched without brakes, providing context for the shift-left claim [S111].

Additional Context (medium)

“The discussion emphasized that trust, governance and security are cornerstones for scaling AI responsibly across enterprises.”

Other sources highlight trust and governance as essential for scaling AI, noting they are “cornerstones” of responsible AI deployment [S118] and that “trust ranks first” in related frameworks [S63].

External Sources (124)
S1
Global Enterprises Show How to Scale Responsible AI — -Mr. Sunil Abraham- Public Policy Director at Meta
S2
Global Enterprises Show How to Scale Responsible AI — – Mr. Sundar R Nagalingam- Mr. Syed Ahmed – Mr. Sunil Abraham- Mr. Syed Ahmed – Ms. Geeta Gurnani- Mr. Syed Ahmed
S3
Global Enterprises Show How to Scale Responsible AI — – Mr. Sunil Abraham- Mr. Sundar R Nagalingam- Ms. Geeta Gurnani – Mr. Sunil Abraham- Mr. Syed Ahmed- Mr. Sundar R Nagal…
S4
Global Enterprises Show How to Scale Responsible AI — -Ms. Geeta Gurnani- Field CTO, Technical Pre-sales and Client Engineering at IBM
S5
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S6
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And thank you. And maybe I will introduce a few of them. Agri -Co is transforming agriculture through digital tools that…
S7
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S8
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — Accountability is definitely very, very, very important. I can’t stress more. But also if an AI system has a flaw, it is…
S9
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — But all this very clearly, and we’ve heard it before, all this is very clearly important to have the guardrails around i…
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-fireside-chat-moderator-mariano-florentino-cuellar — Thank you very much. Actually, what we see is the potential for countries that go fast on digital infrastructure, on ski…
S11
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-2 — I would like to extend my deepest gratitude to the government of India for your invitation to the AI Impact Summit, whic…
S13
https://dig.watch/event/india-ai-impact-summit-2026/impact-the-role-of-ai-how-artificial-intelligence-is-changing-everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
S14
Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the Context of International Security — 56. This norm recognizes the need to promote end user confidence and trust in an ICT environment that is open, secure, s…
S15
The Global Governance of Online Consumer Protection and E-commerce Building Trust — – 1 For some stakeholders, ‘e-commerce’ refers to the online sale of goods and services. The OECD offers a broader defin…
S16
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — That is one of the key. regulatory principles that needs to be in place. And the regulations have to be agile because th…
S17
Conversation: 02 — Enterprise adoption patterns show accelerating use case implementation once initial ROI is demonstrated
S18
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — So these are the fundamental shifts which we have witnessed post -COVID. And then if you look at the artificial intellig…
S19
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rajesh-subramanian — Intelligence is not an asset, it’s infrastructure, the foundation of the future of global progress, productivity, and ec…
S20
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — This comment fundamentally reframes the relationship between trust and policy, suggesting that trust should be the start…
S21
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Thank you again, Serena, Malucia, and Mathis. Really excited to dig into our topic today, but before …
S22
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Data collection plays a vital role in both research and the development of artificial intelligence. It involves gatherin…
S23
Agentic AI in Focus Opportunities Risks and Governance — This comment reframed the entire policy discussion by highlighting that we’re entering uncharted territory in governance…
S24
Who Watches the Watchers Building Trust in AI Governance — The tone was professional and constructive throughout, with participants building on each other’s points collaboratively…
S25
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Very low disagreement level. All speakers aligned on core principles of open standards, interoperability, security, and …
S26
Can we test for trust? The verification challenge in AI — Despite coming from different backgrounds (academic AI safety, industry policy, and technical standards), these speakers…
S27
Building Sovereign and Responsible AI Beyond Proof of Concepts — Governance failures encompass the absence of comprehensive risk management frameworks. Organisations often lack clear pr…
S28
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — “And then the third area that we talked about was this notion of a trust deficit.”[49]. “as a result of the absence of t…
S30
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S31
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Lastly, the analysis illuminates the need for legislation orientated toward ensuring the security and privacy of both so…
S32
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — Low to moderate disagreement level. Most differences are complementary rather than contradictory, focusing on different …
S33
WS #106 Promoting Responsible Internet Practices in Infrastructure — Moderate disagreement with significant implications. While speakers generally agree on goals (clean internet, abuse miti…
S34
WS #137 Combating Illegal Content With a Multistakeholder Approach — The level of disagreement was moderate, with speakers generally agreeing on the need to address illegal and harmful cont…
S35
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S36
WS #123 Responsible AI in Security Governance Risks and Innovation — He stressed industry responsibility extends beyond compliance to proactive engagement in norm-setting and standard devel…
S37
Building the Next Wave of AI_ Responsible Frameworks & Standards — These key comments collectively transformed the discussion from abstract principles to concrete, actionable approaches f…
S38
AI ethics shifts from principles to governance frameworks — AI now influences decisions in healthcare, finance, hiring, and public administration, pushing AI ethics into thecentre …
S39
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S40
Review of AI and digital developments in 2024 — For example, “Tree-Ring” watermarking is built into the process of generating AI images using diffusion models, which st…
S41
Comprehensive Report: European Approaches to AI Regulation and Governance — Both speakers emphasize the critical importance of transparency in AI systems, though from different angles. The EU focu…
S42
Main Topic 3 –  Identification of AI generated content — Paulius Pakutinskas:OK. OK, so I’m Paulius Pakutinskas. I’m Professor. in law. So, I work with UNESCO. I’m UNESCO Chair …
S43
Gen AI: Boon or Bane for Creativity? — The analysis also emphasises the significance of watermarking and attribution technology in the creative industry. Water…
S44
Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S45
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Global AI Governance Alignment: The critical need for international coordination on AI regulation to avoid fragmentatio…
S46
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S47
Data first in the AI era — This discussion focused on the critical need for international data governance frameworks in the AI era, featuring exper…
S48
Global Enterprises Show How to Scale Responsible AI — Artificial intelligence | Building confidence and security in the use of ICTs Hardware‑level privacy and safety guardra…
S49
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Innovation vs Regulation Debate Policy needs to be at a principle level because if it becomes too detailed, it becomes …
S50
AI Safety at the Global Level Insights from Digital Ministers Of — “Is there a way to put guardrails around it?”[49]. “The second point I’d like to make is that ultimately as policymakers…
S51
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — Increasingly, proposals across jurisdictions are pushing for content scanning or detection mechanisms in end-to-end encr…
S52
What does a former coffee-maker-turned-AI say about AI policy on the verge of the 2020s? — The task of deanthropomorphing goes a long way. The ungendering of IQ’whalo has presented countless obstacles to the com…
S53
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Brunner summarizes Trump’s AI approach as: American AI is number one and must remain the leader, compete with China, the…
S54
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S55
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Data governance should be prioritized over data protection in developing contexts because governance frameworks address …
S56
Global AI Policy Framework: International Cooperation and Historical Perspectives — Given your role in leading AI policy at United Nations Office for Digital and Emerging Technologies, what are the AI pri…
S57
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S58
Main Session | Policy Network on Artificial Intelligence — Jimena Viveros: Hello, thank you very much. It’s a pleasure to be here with all of these distinguished speakers and t…
S59
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — A significant gap remains between high-level policy requirements and practical technical implementation. Whilst basic IT…
S60
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S61
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Thank you again, Serena, Malucia, and Mathis. Really excited to dig into our topic today, but before …
S62
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Data collection plays a vital role in both research and the development of artificial intelligence. It involves gatherin…
S63
AI Meets Cybersecurity Trust Governance & Global Security — These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI security and gov…
S64
AI governance struggles to match rapid adoption — Accelerating AI adoptionis exposingclear weaknesses in corporate AI governance. Research shows that while most organisat…
S65
AI ethics shifts from principles to governance frameworks — AI now influences decisions in healthcare, finance, hiring, and public administration, pushing AI ethics into thecentre …
S66
Global Enterprises Show How to Scale Responsible AI — The panel revealed how different industry focuses shape perspectives on trustworthy AI, despite working within the same …
S67
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The discussion reveals extraordinary consensus among all speakers on the fundamental principles of AI agent standards de…
S68
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — Catherine Bielick: So my name is Dr. Katherine Bielik. I’m an infectious disease physician. I’m an instructor at Harv…
S69
Can we test for trust? The verification challenge in AI — ## Key Challenges Identified ## Key Participants and Their Perspectives ## Major Discussion Points 4. **Terminology c…
S70
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing soc…
S71
Building Sovereign and Responsible AI Beyond Proof of Concepts — Governance failures encompass the absence of comprehensive risk management frameworks. Organisations often lack clear pr…
S72
Military AI and the void of accountability — In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping t…
S73
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — “And then the third area that we talked about was this notion of a trust deficit.”[49]. “as a result of the absence of t…
S74
Operationalizing data free flow with trust | IGF 2023 WS #197 — Another threat to the Internet’s principles is the attempt to prevent the use of end-to-end encryption. Governments argu…
S75
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S76
How IS3C is going to make the Internet more secure and safer | IGF 2023 — Manufacturers and service providers are encouraged to take the lead in implementing security measures. Strong passwords …
S77
WS #106 Promoting Responsible Internet Practices in Infrastructure — Moderate disagreement with significant implications. While speakers generally agree on goals (clean internet, abuse miti…
S78
Lightning Talk #65 Enhancing Digital Trust From Rigidity to Elasticity — The tension between the need for regulatory flexibility to accommodate rapid technological change and businesses’ requir…
S79
[Parliamentary session 2] Striking the balance: Upholding freedom of expression in the fight against cybercrime — – The tension between content-based and systems-based regulatory approaches Bjorn Ihler: service providers and other st…
S80
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — Low to moderate disagreement level. Most differences are complementary rather than contradictory, focusing on different …
S81
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S82
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S83
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S84
Newcomers Orientation Session — The discussion maintains a welcoming, educational tone throughout, with speakers actively encouraging questions and part…
S85
Inclusive AI_ Why Linguistic Diversity Matters — The discussion maintained a consistently optimistic and collaborative tone throughout. It began with excitement around t…
S86
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S87
WS #219 Generative AI Llms in Content Moderation Rights Risks — The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep experti…
S88
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S89
Rewriting Development / Davos 2025 — The tone was largely serious and analytical, with speakers offering critical assessments of current development models. …
S90
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S91
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S92
WS #41 Big Techs and Journalism: Disputes and Regulatory Models — The tone of the discussion was thoughtful and analytical, with participants offering nuanced views on complex issues. Th…
S93
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S94
From summer disillusionment to autumn clarity: Ten lessons for AI — For many years, ‘AI ethics’ has been a buzzword. Multiple ethical codes and guidelines were published by companies, gove…
S95
The Dawn of Artificial General Intelligence? / DAVOS 2025 — The tone of the discussion was primarily intellectual and analytical, with panelists presenting reasoned arguments for t…
S96
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S97
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S98
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S99
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S100
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S101
Artificial General Intelligence and the Future of Responsible Governance — The discussion maintained a serious, analytical tone throughout, characterized by cautious optimism mixed with genuine c…
S102
On Freedom / Davos 2025 — The tone was primarily intellectual and philosophical, but with an engaging and sometimes humorous delivery. The speaker…
S103
AI Algorithms and the Future of Global Diplomacy — The tone was professional and collaborative throughout, with participants demonstrating mutual respect and shared intere…
S104
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — High level of consensus on challenges and approach, with constructive dialogue rather than adversarial positions. This s…
S105
Comprehensive Report: Preventing Jobless Growth in the Age of AI — The tone was cautiously optimistic but realistic. While panelists generally agreed that AI wouldn’t lead to permanent ma…
S106
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — <strong>Moderator:</strong> With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S107
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S108
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — The discussion maintained a consistently optimistic and solution-oriented tone throughout. While acknowledging the serio…
S109
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — The discussion maintained a serious but measured tone throughout, with the moderator explicitly stating his hope for an …
S110
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S111
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — De Natris-van der Borght uses the metaphor of a car launched from a mountain without brakes to illustrate how internet t…
S112
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S113
AI Governance Dialogue: Steering the future of AI — This metaphor became a central organizing principle for the discussion, leading directly into the introduction of the th…
S114
Scaling AI for Billions_ Building Digital Public Infrastructure — Only about one -fourth have the compute capacity they need. Only about one -third are able to understand AI threats and …
S115
WS #100 Integrating the Global South in Global AI Governance — Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in m…
S116
We are the AI Generation — Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Global Summit 2025. I want to th…
S117
DigiSov: Regulation, Protectionism, and Fragmentation | IGF 2023 WS #345 — These policy requirements should take into account priorities and the end user perspective
S118
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful…
S119
Atelier #2 : « Éthique, responsabilité, intégrité de l’information : une gouvernance centrée sur les droits humains » — Olivier Alais Merci beaucoup, bonjour à tous. Je suis Olivier Allais, je travaille à l’UIT spécifiquement sur tout ce qu…
S120
Opening keynote — Bogdan-Martin framed the AI revolution as a pivotal moment for the current generation, calling it an opportunity to take…
S121
Expert workshop on the right to privacy in the digital age — Ms Anita Ramasastry,chair of the UN Working Group on Business and Human Rights, focused on the relevance of theUN guidin…
S122
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
S123
Keynote-Rishad Premji — The conversation has shifted from possibility to practicality, from experimentation to adoption and scaled impact
S124
Australia proposes stringent online safety reforms amid legal battle with social media giant — The Australian government is currently consideringsignificant reformsto enhance its online safety regulations,motivated …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mr. Syed Ahmed
2 arguments · 144 words per minute · 2370 words · 985 seconds
Argument 1
Industry is becoming more open to responsible AI
EXPLANATION
Syed observes that organisations that were previously hesitant are now more willing to adopt responsible and trustworthy AI solutions. He links this shift to the growing recognition of AI’s power and the need for trust as a prerequisite for large‑scale deployment.
EVIDENCE
Syed remarks that people are now more open to responsible AI and trustworthy AI, noting a leap ahead in innovation and the universal belief in AI’s capabilities, while emphasizing that true scale requires trust building [29-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geeta and Syed note that organisations are now more willing to adopt responsible AI, echoing broader industry moves such as the Frontier Model Forum partnership among leading AI firms [S1][S5].
MAJOR DISCUSSION POINT
Growing willingness to adopt trustworthy AI
AGREED WITH
Ms. Geeta Gurnani
Argument 2
Accountability becomes critical at scale
EXPLANATION
Syed stresses that when AI systems are deployed at massive scale, the lack of a clear accountable party makes failures especially damaging. He argues that without defined responsibility, errors can affect thousands of users and become difficult to remediate.
EVIDENCE
Syed emphasizes that accountability is very important, noting that a flawed AI system at scale can affect thousands of hospitals and therefore precautions are essential [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of clear accountability for AI systems deployed at scale is highlighted, with examples of potential failures affecting thousands of hospitals [S8].
MAJOR DISCUSSION POINT
Need for clear accountability in large‑scale AI
AGREED WITH
Mr. Sundar R Nagalingam
Ms. Geeta Gurnani
7 arguments · 172 words per minute · 1995 words · 693 seconds
Argument 1
Shift‑left security and governance now a priority
EXPLANATION
Geeta points out that security, once an afterthought, has moved to the front‑line of AI projects, with clients demanding responsible AI from the outset. This shift‑left mindset mirrors trends seen in traditional security practices.
EVIDENCE
Geeta explains that security used to be an afterthought but now people “can’t afford not thinking security” and it has become a shift-left priority for AI projects [17-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geeta describes the shift-left of security and governance to the front-line of AI projects, noting that “you can’t afford not thinking security” [S1].
MAJOR DISCUSSION POINT
Security and governance are now front‑loaded in AI initiatives
AGREED WITH
Mr. Syed Ahmed
Argument 2
Inadequate governance tools hinder scaling
EXPLANATION
She recounts a senior leader who managed AI governance with a simple Excel sheet, illustrating how rudimentary tools undermine confidence and prevent organisations from scaling responsibly. The anecdote highlights the gap between ambition and operational capability.
EVIDENCE
Geeta describes a senior leader who, when asked about responsible AI, replied that governance was handled on an Excel sheet, indicating a lack of robust governance mechanisms [22-24] and noting that such approaches limit scalability [25-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She cites a senior leader managing AI governance with an Excel sheet, illustrating rudimentary tooling that limits scalability; this aligns with calls for governance to be scaled appropriately [S1][S12].
MAJOR DISCUSSION POINT
Poor tooling limits trustworthy AI scaling
Argument 3
End‑user confidence as the metric of trust
EXPLANATION
Geeta defines trustworthy AI as the ability of an end‑user to rely on an AI system that has passed security testing, does not hallucinate, and complies with applicable laws. She frames trust as a downstream risk metric that must be demonstrable to users.
EVIDENCE
Geeta states that trustworthy AI means an end-user can trust the system because it has passed security tests, is not hallucinating, and complies with relevant regulations, summarising trust as confidence for the end-user [55-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geeta defines trustworthy AI as one that end-users can rely on after security testing, non-hallucination, and regulatory compliance, echoing norms that stress end-user confidence and trust in ICT environments [S14][S1].
MAJOR DISCUSSION POINT
Trust measured by end‑user assurance
Argument 4
Governance must be a control point, not just observation
EXPLANATION
She argues that AI governance should act as a gate‑keeping function, with ethical boards enforcing decisions before AI solutions are deployed. Integration of AI risk into enterprise risk management ensures governance is operational rather than merely advisory.
EVIDENCE
Geeta describes the ethical board acting as a gatekeeper that must approve AI proposals before they reach clients, and stresses embedding AI risk into enterprise risk management to make governance a control point [130-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She argues for gate-keeping governance (ethical board approvals) and embedding AI risk into enterprise risk management, supported by the view that governance must be operational rather than advisory [S1][S12].
MAJOR DISCUSSION POINT
Embedding governance into operational workflows
Argument 5
Premium pricing tied to downstream risk
EXPLANATION
Geeta notes that enterprises are willing to pay extra for trustworthy AI when the AI output directly affects customers, brand reputation, or regulatory compliance. Internal experiments or low‑risk use‑cases may not justify the premium.
EVIDENCE
Geeta explains that organisations will purchase premium trustworthy AI when the use case impacts downstream risk such as brand reputation or compliance, whereas internal POCs may forgo the premium [221-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geeta notes enterprises will pay a premium for trustworthy AI when the use case impacts brand reputation, compliance or customer outcomes, a pattern observed in enterprise adoption discussions [S1].
MAJOR DISCUSSION POINT
Willingness to pay for risk‑mitigated AI
Argument 6
ROI considerations drive adoption
EXPLANATION
She highlights that decisions to adopt responsible AI are driven by a cost‑benefit analysis, where organisations weigh the expense of trusted AI against the expected return and risk exposure. The conversation about open versus paid models illustrates this ROI focus.
EVIDENCE
Geeta mentions that enterprises constantly evaluate cost versus ROI, deciding between open models, paid models, and the associated trust requirements based on business impact [217-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She highlights cost-benefit analysis driving AI adoption decisions, with organisations weighing trust-grade AI expense against expected returns; similar ROI-focused adoption trends are reported in enterprise case studies [S1][S17].
MAJOR DISCUSSION POINT
Economic calculus behind trustworthy AI adoption
Argument 7
Focus on technology‑specific minimums rather than geography‑specific law
EXPLANATION
Geeta argues that technologists should first agree on core technical safeguards that constitute a baseline, after which regional regulations can be layered on. This approach separates technology standards from jurisdictional specifics.
EVIDENCE
Geeta states that the discussion should centre on technology regulation as a table-stake, with geographies then applying their own rules on top of that baseline [290-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geeta advocates for establishing baseline technical safeguards first, then layering regional regulations-a view echoed by Nagalingam and reflected in discussions on standardised safety platforms [S1].
MAJOR DISCUSSION POINT
Technology‑first baseline before regional regulation
AGREED WITH
Mr. Sundar R Nagalingam
Mr. Sundar R Nagalingam
5 arguments · 183 words per minute · 1756 words · 573 seconds
Argument 1
Systemic failures lie in serving layers, not infrastructure
EXPLANATION
Sundar explains that when AI systems scale to billions of users, breakdowns typically occur in the micro‑service delivery or security controls rather than the underlying hardware. The failure is often invisible because the infrastructure appears healthy while the service layer is compromised.
EVIDENCE
Sundar notes that the infrastructure itself rarely breaks; instead, failures arise in the systems that drive the infra, such as micro-service delivery or overlooked security controls [34-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both speakers agreed that infrastructure rarely fails at scale; instead, failures arise in management systems, security controls and governance layers, matching Sundar’s point about service-layer issues [S1].
MAJOR DISCUSSION POINT
Service‑layer and security controls as primary failure points
Argument 2
Error propagation and blame uncertainty
EXPLANATION
He points out that with AI‑driven robotic surgery, it becomes unclear who is responsible when something goes wrong—the robot, the manufacturer, or the operator—creating heightened expectations and risk. The lack of a clear accountable party fuels concerns about large‑scale AI failures.
EVIDENCE
Sundar discusses the difficulty of assigning blame when a robotic arm fails, noting that unlike a human surgeon, it is unclear who to hold responsible, which raises expectations and risk [77-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The difficulty of assigning responsibility for large-scale AI failures (e.g., robotic surgery) is underscored by remarks on accountability and error scaling in health-care deployments [S8].
MAJOR DISCUSSION POINT
Unclear liability amplifies risk perception
AGREED WITH
Mr. Syed Ahmed
Argument 3
Three foundational buckets
EXPLANATION
He proposes that trustworthy AI can be abstracted into three universal pillars: functional safety, AI safety, and cybersecurity. These buckets apply across regulators, industries, and geographies, providing a common framework for trust.
EVIDENCE
Sundar outlines functional safety, AI safety (training, bias, validation), and cybersecurity as the three core areas that any trustworthy AI system must address [68-75].
MAJOR DISCUSSION POINT
Universal pillars for trustworthy AI
Argument 4
Embedding privacy guardrails at silicon level
EXPLANATION
He affirms that high‑performance AI hardware, such as GPUs, should incorporate built‑in privacy and safety mechanisms, especially for safety‑critical domains like autonomous driving and healthcare. This hardware‑level protection complements higher‑level controls.
EVIDENCE
Sundar answers affirmatively that GPUs should have embedded privacy guardrails and cites autonomous driving and aerospace as domains where such safety layers are essential [148-158].
MAJOR DISCUSSION POINT
Hardware‑level privacy and safety features
AGREED WITH
Mr. Sunil Abraham
Argument 5
Technology‑level baseline standards, then geographic tailoring
EXPLANATION
He suggests creating universal safety templates that can be standardized globally and then fine‑tuned to meet the specific regulatory requirements of each country. This two‑step approach balances consistency with local compliance.
EVIDENCE
Sundar describes a standardization approach where a safe platform serves as a template that can be customized for each geography’s regulations [242-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nagalingam proposes a universal safety template that can be customised for each jurisdiction, a stance reinforced by the broader call for standardised safety platforms before regional adaptation [S1].
MAJOR DISCUSSION POINT
Standard‑then‑tailor model for global AI regulation
AGREED WITH
Ms. Geeta Gurnani
Mr. Sunil Abraham
7 arguments · 167 words per minute · 2384 words · 851 seconds
Argument 1
Rejecting anthropomorphization; AI is just a file
EXPLANATION
Sunil argues that AI agents should not be treated as human‑like entities; they are merely weight files executing code. Fear arises from mistakenly applying human mental models to machine outputs.
EVIDENCE
Sunil expresses skepticism toward anthropomorphization, stating that AI is just technology doing something and not a human, emphasizing that a model is simply a weight file on a file system [36-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comments about avoiding “watermarking” to prevent blurring lines between human and machine highlight the need to treat AI as a technical artifact rather than a human-like entity [S1].
MAJOR DISCUSSION POINT
Avoid human‑like framing of AI systems
Argument 2
Ontological and epistemological framing reduces fear
EXPLANATION
He uses philosophical concepts—ontology and epistemology—to argue that understanding AI as a dual‑use tool (a general‑purpose file) clarifies its nature and limits misplaced fears. The focus shifts to the truth and purpose of the file rather than imagined agency.
EVIDENCE
Sunil discusses the ontological view of AI as a weight file, its dual-use nature, and epistemological questions about truth, concluding that this framing reduces fear [42-49].
MAJOR DISCUSSION POINT
Philosophical framing to demystify AI
Argument 3
Platform policies and community standards
EXPLANATION
He highlights that platforms must enforce community standards to manage dual‑use risks, such as restricting hateful content or unsafe queries. This policy layer complements technical safeguards and ensures acceptable use.
EVIDENCE
Sunil describes how community standards (e.g., Facebook’s moderation) must intervene when legal content is not acceptable on the platform, illustrating the need for platform-level rules to manage dual-use risks [92-106].
MAJOR DISCUSSION POINT
Need for platform‑level moderation policies
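To make this concrete alongside the Purple Llama and Llama Guard question raised in the session, below is a minimal sketch of how a developer might place a Llama Guard-style safety classifier in front of a serving endpoint, following the usage pattern published on the Llama Guard model cards. The model ID, the ‘safe/unsafe’ verdict format, and the call_production_model stub are assumptions that vary by release and deployment; this is a sketch, not Meta’s reference implementation.

```python
# Minimal sketch (assumptions flagged): gate user prompts with a Llama Guard-style
# safety classifier before they reach the production model. Follows the pattern
# documented on the Llama Guard model cards; the model ID and verdict format
# are assumptions that differ across releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

GUARD_MODEL_ID = "meta-llama/Llama-Guard-3-8B"  # assumed release
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(GUARD_MODEL_ID)
guard = AutoModelForCausalLM.from_pretrained(
    GUARD_MODEL_ID, torch_dtype=torch.bfloat16
).to(device)

def moderate(chat: list[dict]) -> str:
    """Return the guard model's verdict text for a conversation."""
    # The guard's chat template wraps the conversation in a moderation prompt
    # asking the model to label the last turn against its safety taxonomy.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = guard.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    ).strip()

def call_production_model(prompt: str) -> str:
    # Hypothetical stand-in for the real serving path.
    return "(model response)"

def answer(user_prompt: str) -> str:
    verdict = moderate([{"role": "user", "content": user_prompt}])
    # Llama Guard-style models emit "safe" or "unsafe" plus category codes.
    if verdict.startswith("unsafe"):
        return "Sorry, I can't help with that."  # the platform-policy layer
    return call_production_model(user_prompt)
```

The division of labour here mirrors Sunil’s point: the classifier is open tooling anyone can deploy, but the refusal policy that fires on an ‘unsafe’ verdict is the platform’s own community standard.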
Argument 4
Trusted Execution Environments and hardware‑level attack mitigation
EXPLANATION
He references Meta’s research on Trusted Execution Environments (TEE) that isolate AI workloads and discusses the extensive attack surface at the hardware level, underscoring the importance of robust hardware safeguards.
EVIDENCE
Sunil outlines Meta’s paper on trusted execution environments, noting numerous hardware-level attack vectors and the necessity of protecting AI workloads from such threats [166-176].
MAJOR DISCUSSION POINT
Hardware‑level security research and attack mitigation
AGREED WITH
Mr. Sundar R Nagalingam
Argument 5
Regulation already exists; no vacuum
EXPLANATION
Sunil asserts that AI is already subject to regulatory scrutiny and there is no regulatory vacuum, citing contemporary policy discussions as evidence of existing oversight.
EVIDENCE
Sunil states that AI is already regulated and quotes Lina Khan, emphasizing that there is no regulatory vacuum for AI [295-296].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes existing regulatory frameworks and agile principles guiding AI oversight, confirming that AI is already subject to regulation rather than an ungoverned space [S16][S5].
MAJOR DISCUSSION POINT
AI is not unregulated
Argument 6
Ads can democratize AI access
EXPLANATION
He argues that embedding advertisements in consumer AI services can lower costs and broaden access, helping bridge the AI divide without compromising the principle of AI neutrality. Ads enable a gratis model that reaches both affluent and low‑income users.
EVIDENCE
Sunil explains that ad-supported AI services provide free access to a wide audience, helping to close the AI divide and act as a leveler across socioeconomic groups [190-193].
MAJOR DISCUSSION POINT
Advertising as a tool for inclusive AI deployment
Argument 7
Mixed views on mandatory watermarking
EXPLANATION
Sunil gives an ambiguous response to the question of mandatory watermarking, reflecting uncertainty about a one‑size‑fits‑all solution. While he does not take a firm stance, his hesitation signals the complexity of balancing transparency with usability.
EVIDENCE
When asked about mandatory watermarking, Sunil replies with a question, providing no clear yes or no answer, indicating ambiguity in his position [327-330].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While some argue watermarking could help transparency, others (including Geeta) see it as unnecessary and potentially confusing, illustrating divergent opinions on mandatory watermarking [S1].
MAJOR DISCUSSION POINT
Uncertainty over universal watermarking requirements
Agreements
Agreement Points
Industry is becoming more open to responsible AI and security is now shift‑left
Speakers: Mr. Syed Ahmed, Ms. Geeta Gurnani
Industry is becoming more open to responsible AI · Shift‑left security and governance now a priority
Both speakers note that organisations are increasingly willing to adopt responsible and trustworthy AI, with security and governance now front-loaded rather than an afterthought [29-32][17-18].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects growing industry commitment to responsible AI and the shift-left security paradigm highlighted in discussions on proactive norm-setting and standards development [S36] and aligns with observations that security considerations are being integrated earlier in the AI lifecycle [S44].
Accountability is critical when AI systems scale
Speakers: Mr. Syed Ahmed, Mr. Sundar R Nagalingam
Accountability becomes critical at scale · Error propagation and blame uncertainty
Both stress that at large scale it is essential to have a clear accountable party because failures can affect thousands of users and it is unclear who to blame [81-86][77-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasis on accountability mirrors calls for transparent, accountable AI systems in emerging governance frameworks and aligns with the shift from abstract ethics to enforceable accountability mechanisms [S38] and industry-led standards stressing accountability at scale [S36].
Establish technology‑first baseline safeguards before applying geography‑specific regulations
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Focus on technology‑specific minimums rather than geography‑specific law · Technology‑level baseline standards, then geographic tailoring
Both propose that technologists should first agree on core technical safeguards, which can later be customised to meet individual country requirements [290-294][242-245].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation to set technology-first safeguards precedes geography-specific rules echoes calls for principle-based, cross-border governance that avoids fragmentation and prioritises baseline technical guardrails before jurisdictional tailoring [S45][S49][S56].
AI hardware should embed privacy and safety guardrails at the silicon level
Speakers: Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Embedding privacy guardrails at silicon level · Trusted Execution Environments and hardware‑level attack mitigation
Both agree that high-performance AI chips need built-in privacy and security mechanisms to protect safety-critical applications such as autonomous driving and healthcare [148-158][166-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding privacy and safety mechanisms at the silicon level is advocated by hardware security experts as essential to mitigate supply-chain attacks and aligns with recent industry statements on hardware-level guardrails [S48].
Similar Viewpoints
Both highlight that without clear accountability, large‑scale AI failures create severe risk and uncertainty about who is responsible [81-86][77-80].
Speakers: Mr. Syed Ahmed, Mr. Sundar R Nagalingam
Accountability becomes critical at scale · Error propagation and blame uncertainty
Both advocate a two‑step approach: first set universal technical safety standards, then adapt them to local regulatory contexts [290-294][242-245].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Focus on technology‑specific minimums rather than geography‑specific law · Technology‑level baseline standards, then geographic tailoring
Both see hardware‑level protections (privacy guardrails, TEEs) as essential to mitigate a wide range of attacks on AI workloads [148-158][166-176].
Speakers: Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Embedding privacy guardrails at silicon level · Trusted Execution Environments and hardware‑level attack mitigation
Unexpected Consensus
AI model advancement outpaces governance across all panelists
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Advancement in AI models outpacing governance · Yes (models outpace governance) · It’s never happened in reverse
Despite representing enterprise, hardware, and policy perspectives, all three affirm that the rapid progress of AI models is moving faster than the development of governance frameworks, a convergence that was not anticipated given their differing domains [301][302][305].
POLICY CONTEXT (KNOWLEDGE BASE)
Panelists note that rapid AI model progress consistently outpaces existing regulatory frameworks, a gap repeatedly documented in analyses of the speed of technological change versus policy development [S44][S45].
Overall Assessment

The panel shows strong convergence on four core themes: growing openness to responsible AI with a shift‑left security mindset; the necessity of clear accountability at scale; the need for universal technical baselines before regional regulation; and the requirement for hardware‑level privacy and safety mechanisms.

There is high consensus on practical governance and safety measures, indicating that industry, hardware, and policy leaders are aligned on concrete steps to build trustworthy AI. This alignment suggests that future initiatives can build on shared standards and joint accountability frameworks to accelerate safe AI deployment.

Differences
Different Viewpoints
Scope and approach to global AI regulatory alignment
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Geeta: “No” to a universal regulatory alignment, preferring technology-first baseline standards before geography-specific rules [281-284].
Sunil: Asserts that AI is already regulated and there is no regulatory vacuum, implying that broader alignment already exists [295-296].
Sundar: Proposes a standard-then-tailor model: create a universal safety template and then fine-tune for each jurisdiction [242-245].
Geeta argues against a blanket global regulatory framework, urging a focus on core technical safeguards first, while Sunil contends that regulation is already in place and thus a global alignment is unnecessary. Sundar offers a middle ground, suggesting a universal safety template that can be customized per geography, which diverges from Geeta’s technology‑first stance and Sunil’s claim of existing regulation. The three positions therefore conflict on whether a global alignment is needed and how it should be structured.
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over the scope and method of global AI regulatory alignment reflects ongoing discussions about international coordination, the risk of fragmented regimes, and the need for inclusive, principle-based frameworks as outlined in multiple multilateral forums [S45][S56][S57].
Mandatory watermarking of AI‑generated content
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Geeta: Responds “No” to mandatory watermarking [281-284].
Sunil: Gives an evasive answer, replying with a question and not committing to yes or no [327-330].
Sundar: Indicates acceptance of watermarking as a reality but questions its usefulness, suggesting it may blur lines between human and AI content [333-335].
Geeta rejects mandatory watermarking outright, Sunil avoids a clear stance, and Sundar acknowledges that watermarking exists but doubts its practicality. The lack of consensus reflects differing views on the necessity and impact of universal watermarking for AI‑generated media.
POLICY CONTEXT (KNOWLEDGE BASE)
Mandatory watermarking of AI-generated content is discussed in EU regulatory proposals and technical research on embedded watermarks such as Tree-Ring, highlighting both policy interest and feasible detection techniques [S40][S41][S43].
Unexpected Differences
Attitude toward anthropomorphisation of AI
Speakers: Mr. Sunil Abraham, Other panelists (implicit)
Sunil rejects anthropomorphisation, describing AI as merely a weight file and warning against human-like mental models [36-38][42-49]. Other speakers (e.g., Syed’s reference to “humanising AI too much”) implicitly treat AI as an entity that can be trusted or feared, suggesting a more human-centric framing.
Sunil’s philosophical stance that AI is just a file contrasts with the panel’s broader discussion that treats AI as a system requiring trust, governance, and ethical oversight, revealing an unexpected philosophical split on how AI should be conceptualised.
POLICY CONTEXT (KNOWLEDGE BASE)
The question of anthropomorphising AI connects to scholarly critiques urging de-anthropomorphisation to avoid misleading attributions of agency, as articulated in recent analyses of AI personification [S52].
Overall Assessment

The panel shows substantial consensus on the necessity of trustworthy, safe AI and the importance of accountability at scale. However, clear disagreements emerge around the architecture of regulation (global alignment vs technology‑first standards) and the policy tool of mandatory watermarking. Additional unexpected tension appears in the philosophical framing of AI (anthropomorphisation vs pure technical view).

Disagreement is moderate to high. While participants align on high‑level goals (trust, safety, accountability), they diverge on concrete policy mechanisms and conceptual foundations, indicating that achieving unified standards will require negotiation across technical, regulatory, and philosophical dimensions.

Partial Agreements
While the speakers share the common goal of delivering trustworthy AI, they diverge on the primary mechanism: Geeta focuses on organizational governance processes, Sundar on a technical‑first three‑bucket framework, and Sunil on platform‑level policy and hardware‑level protections. Their approaches differ in where the control point should reside (enterprise risk vs system design vs platform policy).
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
All agree that trustworthy AI is essential and must address safety, security, and governance. Geeta emphasizes gate-keeping governance integrated into enterprise risk management [130-144]. Sundar proposes three foundational buckets – functional safety, AI safety, cybersecurity – as universal pillars [68-75]. Sunil stresses platform-level policies (community standards, TEEs) to complement technical safeguards [166-176].
Both agree on the importance of accountability, but Syed frames it as a governance/organizational requirement, whereas Sundar illustrates it through technical‑operational uncertainty in safety‑critical domains. Their perspectives differ on where the accountability mechanisms should be embedded.
Speakers: Mr. Syed Ahmed, Mr. Sundar R Nagalingam
Both stress that accountability is critical when AI systems scale to billions of users. Syed highlights the need for clear accountability to avoid untraceable failures at scale [81-86]. Sundar points out the blame-uncertainty problem in AI-driven robotic surgery, where it is unclear who is responsible for errors [77-80].
Takeaways
Key takeaways
- Security and governance have moved to a 'shift-left' position; they are now considered before AI development rather than as an afterthought.
- Organizations still lack scalable governance tooling; examples include managing AI risk with simple Excel sheets, which hampers confidence and growth.
- When AI systems scale, failures are most likely in the service layer (micro-services, control mechanisms) or in security controls, not in the underlying hardware infrastructure.
- Clear accountability is essential; at scale it is difficult to assign blame when AI systems cause harm, raising expectations for safety.
- Anthropomorphizing AI is misleading; AI should be viewed as a dual-use weight file, and fear should be addressed through proper ontological and epistemological framing.
- Trustworthy AI is defined by end-user confidence: the AI must pass security tests, avoid hallucinations, and comply with applicable laws and regulations.
- Three universal pillars for trustworthy AI were identified: functional safety, AI safety (model robustness, bias mitigation), and cybersecurity.
- Governance must be an enforceable control point (e.g., an ethical-board gatekeeper) and integrated into enterprise risk management, not merely an observational layer.
- Embedding privacy and safety guardrails at the silicon level (e.g., in GPUs) is considered necessary for safety-critical domains such as autonomous driving and healthcare.
- Enterprises are willing to pay a premium for 'trust-grade' AI when the use case is customer-facing or carries significant brand or regulatory risk; internal experiments may forgo the premium.
- A pragmatic approach to global regulation is to agree on technology-level baseline standards and then tailor them to each jurisdiction's specific requirements.
- Ads in consumer AI can help democratize access without necessarily violating AI neutrality, according to the panelists.
- There is no consensus on mandatory watermarking of AI-generated content; opinions varied among panelists.
Resolutions and action items
- Senior leadership should formally mandate responsible AI as a non-optional, funded initiative.
- Embed ethical-board style gatekeeping into AI project pipelines so that no AI solution proceeds without compliance approval.
- Integrate AI risk assessment into the broader enterprise risk management framework.
- Develop and deploy runtime-enforced governance tooling (automation, CI/CD checks) rather than relying on manual post-hoc reviews (a minimal sketch follows this list).
- Incorporate privacy and safety guardrails directly into AI hardware (e.g., GPUs) for high-risk applications.
- Create a universal safety template (functional safety, AI safety, cybersecurity) that can be customized for regional regulatory requirements.
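To make "runtime-enforced governance tooling" concrete, here is a minimal sketch of a CI gate that fails the pipeline unless a project's governance record shows the required checks passed and, for high-risk use cases, an ethics-board approval. The `governance.json` schema, check names, and risk tiers are illustrative assumptions, not any vendor's format.

```python
#!/usr/bin/env python3
"""Minimal CI governance gate: fail the build unless the AI project's
governance record shows required checks passed and approval granted.
Illustrative only; the manifest schema below is a hypothetical example."""
import json
import sys

REQUIRED_CHECKS = ("security_testing", "hallucination_eval", "compliance_review")

def gate(manifest_path: str) -> int:
    with open(manifest_path) as f:
        record = json.load(f)

    failures = []
    # Every required check must be present and explicitly marked "passed".
    for check in REQUIRED_CHECKS:
        if record.get("checks", {}).get(check) != "passed":
            failures.append(f"check not passed: {check}")

    # High-risk (customer-facing) use cases need an ethics-board sign-off.
    if record.get("risk_tier") == "high" and not record.get("ethics_board_approval"):
        failures.append("missing ethics board approval for high-risk use case")

    for msg in failures:
        print(f"GOVERNANCE GATE FAILED: {msg}", file=sys.stderr)
    return 1 if failures else 0  # non-zero exit blocks the CI pipeline

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "governance.json"))
```

Registered as a required pipeline step, such a gate turns governance into an enforceable control point rather than a passive review, which is the distinction the panel drew.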
Unresolved issues
- Whether a single, globally harmonized AI regulatory framework is feasible or desirable.
- The appropriate policy on mandatory watermarking of AI-generated text, images, and media.
- Long-term impact of ad-supported AI services on user trust and the principle of AI neutrality.
- Scalable, user-friendly governance tools that go beyond ad-hoc solutions like Excel sheets.
- How to address dual-use risks for low-resource languages and niche domains without extensive human-generated training data.
- Concrete governance mechanisms for future artificial general intelligence (AGI) systems.
- Clear, enforceable accountability structures for AI failures at massive scale.
Suggested compromises
- Adopt a conservative, Unix-style default security posture that can be relaxed for specific, low-risk use cases (see the sketch after this list).
- Offer premium, trust-grade AI solutions for high-risk, customer-facing applications while allowing cheaper, experimental deployments for internal use.
- Use advertising revenue to subsidize free AI access, positioning it as a bridge to broader adoption rather than a violation of neutrality.
- Standardize core technical safeguards globally first, then allow jurisdictions to add additional layers to meet local legal requirements.
- Balance the three pillars (functional safety, AI safety, and cybersecurity) according to the risk profile of each application.
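A "Unix-style default security posture" amounts to default-deny with explicit, scoped exceptions. The sketch below shows one way to express that for AI capabilities; the capability and use-case names are hypothetical.

```python
"""Default-deny capability policy for AI deployments: every capability is
blocked unless explicitly allowlisted for a named, risk-assessed use case.
Illustrative sketch; capability and use-case names are invented."""
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Explicit grants per use case; anything not listed is denied.
    allowlist: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, use_case: str, capability: str) -> None:
        self.allowlist.setdefault(use_case, set()).add(capability)

    def is_allowed(self, use_case: str, capability: str) -> bool:
        # Default posture: deny unless an explicit grant exists.
        return capability in self.allowlist.get(use_case, set())

policy = Policy()
# Relax the default only for a specific, low-risk internal use case.
policy.grant("internal-summarisation", "web_search")

assert policy.is_allowed("internal-summarisation", "web_search")
assert not policy.is_allowed("internal-summarisation", "code_execution")
assert not policy.is_allowed("customer-chatbot", "web_search")  # denied by default
print("default-deny policy checks passed")
```

Relaxation is then an explicit, reviewable act per low-risk use case rather than a global toggle.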
Thought Provoking Comments
He said, ‘but that will block my innovation… and I asked him, how do you manage the governance? He said, on Excel sheet.’
Highlights the gap between enthusiasm for rapid AI adoption and the lack of mature governance processes, using a vivid anecdote that illustrates how organizations still rely on ad‑hoc tools like spreadsheets.
Triggered the discussion on the need for formal governance frameworks, leading others (especially Sunil and Sundar) to talk about accountability, control mechanisms, and the importance of embedding trust at the operational level.
Speaker: Ms. Geeta Gurnani
Sundar outlined three buckets for trustworthy AI: functional safety, AI safety, and cybersecurity, using AI‑assisted robotic surgery as an example.
Provides a clear, structured taxonomy that moves the conversation from abstract notions of trust to concrete, domain‑specific safety requirements.
Shifted the tone from general discussion to a more technical deep-dive, prompting follow-up questions about accountability and leading Geeta and Sunil to reference similar layered approaches in their own domains.
Speaker: Mr. Sundar R. Nagalingam
Sunil introduced ontology, epistemology, and the ‘weight file’ concept, arguing that AI is just a single file and should be treated with a Unix‑style mental model rather than anthropomorphized.
Brings philosophical rigor to the debate, reframing AI not as a sentient entity but as a technical artifact, which challenges the common tendency to anthropomorphize AI systems.
Prompted the panel to reconsider how they talk about AI safety, leading to Sunil’s later points about responsibility, and influencing Geeta’s emphasis on concrete security testing rather than abstract trust.
Speaker: Mr. Sunil Abraham
Sunil recounted the Llama 2/3 interaction where the model either reframed a sexist question positively or refused to answer, using it to illustrate dual-use risks and comparing AI adoption to the automobile's safety trade-offs.
Uses a concrete, recent example to illustrate the unpredictable behavior of generative models and ties it to broader societal risk‑benefit analyses, making the abstract debate tangible.
Steered the conversation toward real‑world policy dilemmas and the necessity of layered safety controls, influencing Sundar’s later discussion on standardization and Geeta’s remarks on premium trust‑grade AI for consumer‑facing use cases.
Speaker: Mr. Sunil Abraham
Sundar noted, ‘When the robotic arm makes the mistake… there is no human to blame… that uncertainty is increasing the expectations out of it.’
Highlights the core accountability problem in AI‑driven systems, emphasizing legal and ethical gaps that arise when responsibility cannot be easily assigned.
Deepened the dialogue on liability, leading Sunil to discuss decentralized liability and prompting the panel to consider how governance structures must evolve to address this gap.
Speaker: Mr. Sundar R. Nagalingam
Geeta defined trustworthy AI for the end‑user: ‘the model must have passed security tests, not hallucinate, and be compliant with applicable laws.’
Distills the abstract concept of trust into actionable criteria that directly affect user confidence and product adoption.
Anchored subsequent discussions on measurable trust signals, influencing the later conversation about premium pricing for trust‑grade AI and the need for runtime enforcement mechanisms.
Speaker: Ms. Geeta Gurnani
Sunil argued that ad‑supported AI can be a ‘great leveler’, enabling broader access in emerging markets despite potential concerns about neutrality.
Challenges the assumption that ads inherently compromise AI neutrality, presenting a pragmatic view on how business models can accelerate equitable AI diffusion.
Opened a brief but lively exchange on monetization versus ethics, leading the moderator to probe the audience’s perception of ad‑supported AI and reinforcing the panel’s theme of balancing innovation with responsibility.
Speaker: Mr. Sunil Abraham
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the panel from generic talk about “trust” to concrete, actionable frameworks. Geeta’s Excel anecdote exposed the governance vacuum, prompting deeper analysis of accountability (Sundar) and philosophical grounding (Sunil). Sundar’s three‑bucket safety model and his accountability observation gave the conversation a structured, technical backbone, while Sunil’s philosophical framing and real‑world Llama example injected critical nuance about how we perceive and manage AI risks. Geeta’s end‑user‑focused definition of trustworthy AI and Sunil’s ad‑support argument further grounded the debate in market realities. Collectively, these comments redirected the dialogue toward layered safety, liability, and practical deployment strategies, ensuring the panel moved beyond buzzwords to substantive, forward‑looking insights.

Follow-up Questions
How can organizations transition from ad‑hoc governance tools like Excel sheets to automated, scalable responsible AI governance systems?
Geeta highlighted a senior leader managing AI governance with an Excel sheet, indicating a gap in practical, automated governance solutions that can scale with AI deployments.
Speaker: Geeta Gurnani
What specific failure modes are most likely when AI systems scale to billions of users, and how can they be systematically identified and mitigated?
Sundar discussed functional and security failures at scale but did not provide concrete taxonomy, suggesting the need for deeper research into failure classification and prevention.
Speaker: Sundar R. Nagalingam
What is the ontological and epistemological nature of AI weight files, and how does their dual‑use character affect trust and governance?
Sunil introduced philosophical concepts (ontology, epistemology) to question what a weight file ‘is’ and its truthfulness, indicating a need for interdisciplinary study of model artifacts.
Speaker: Sunil Abraham
How do ad‑supported AI models influence equitable access to AI services and what are the ethical and economic implications of this model?
Sunil argued that ads could bridge the AI divide, but the broader impact on user privacy, data exploitation, and market dynamics requires further investigation.
Speaker: Sunil Abraham
What frameworks and best practices enable the integration of AI risk into existing enterprise risk management (ERM) processes?
Geeta mentioned the need to embed AI risk into enterprise risk posture, highlighting a gap in concrete methodologies for such integration.
Speaker: Geeta Gurnani
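One minimal way to make such integration concrete is to score AI-specific risks on the same likelihood-times-impact scale the enterprise register already uses, so they rank alongside every other corporate risk rather than sitting in a silo. The sketch below is an illustrative assumption, not a named ERM methodology; scales, categories, and owners are invented.

```python
"""Illustrative sketch: expressing AI-specific risks in the same
likelihood x impact terms as a generic enterprise risk register,
so both roll up into one ranked view. Scales and names are assumptions."""
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str      # e.g. "ai", "financial", "operational"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (severe)
    owner: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Supplier insolvency", "operational", 2, 4, "CFO"),
    Risk("Hallucinated advice in customer chatbot", "ai", 4, 4, "CDO"),
    Risk("Training-data licensing dispute", "ai", 3, 3, "Legal"),
]

# AI risks rank alongside every other enterprise risk, not separately.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  [{risk.category}] {risk.name} (owner: {risk.owner})")
```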
Should privacy and security guardrails be embedded directly into AI hardware (e.g., GPUs) at the silicon level, and what technical designs can achieve this?
Sundar affirmed the necessity of silicon‑level guardrails but did not detail implementation, pointing to a research need in hardware‑based privacy controls.
Speaker: Sundar R. Nagalingam
How can liability be allocated when safety tools (e.g., Meta's Purple Llama) shift responsibility to developers, and what legal frameworks are appropriate?
Sunil raised concerns about decentralized liability, indicating a need for policy and legal research on responsibility allocation in AI toolchains.
Speaker: Sunil Abraham
What minimum, technology‑level regulatory standards should be agreed upon globally to ensure baseline trustworthiness across jurisdictions?
Geeta suggested focusing on technology‑level ‘table stakes’ rather than geography‑specific rules, implying the need for a universal baseline.
Speaker: Geeta Gurnani
Is mandatory watermarking of AI‑generated media effective for transparency, and what are the technical, social, and legal challenges of implementing it?
The panel debated the merits and drawbacks of mandatory watermarking, revealing uncertainty about detection methods, user perception, and regulatory feasibility.
Speaker: Sunil Abraham, Sundar R. Nagalingam, Geeta Gurnani
What decision frameworks should guide whether to delay or launch a more capable but less safe AI model?
Geeta noted that launch decisions depend on use‑case criticality, suggesting a need for structured risk‑benefit assessment models.
Speaker: Geeta Gurnani (also referenced by others)
What systematic processes should organizations adopt to pause or stop AI projects when safety concerns arise?
Multiple panelists referenced project stoppages due to safety, but lacked a clear procedural blueprint, indicating a research gap.
Speaker: Geeta Gurnani, Sundar R. Nagalingam, Sunil Abraham
How can AI safety standards be standardized yet adaptable for different geographies, industries, and ecosystem components?
Sundar described a template‑based approach for NVIDIA but did not detail mechanisms for localization, pointing to a need for adaptable standardization research.
Speaker: Sundar R. Nagalingam
What impact does corporate academic publishing (e.g., Meta’s Trusted Execution Environment paper) have on responsible AI development and public trust?
Sunil observed corporations acting like academia, raising questions about transparency, peer review, and influence on policy that merit further study.
Speaker: Sunil Abraham
How do the ‘zero‑to‑one’ and ‘one‑to‑one’ mental models for content moderation translate into practical AI governance policies?
Sunil introduced these conceptual models without concrete policy guidance, suggesting a need for research translating them into actionable frameworks.
Speaker: Sunil Abraham

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.