Global Enterprises Show How to Scale Responsible AI
20 Feb 2026 13:00h - 14:00h
Summary
The panel, comprising senior leaders from Infosys, IBM, NVIDIA and Meta, examined how trust and responsible AI can be scaled across enterprises [1-5]. Geeta Gurnani noted that clients who two years ago were unfamiliar with responsible AI now treat security as a “shift-left” priority and expect governance to be integral rather than an afterthought [17-24], and she illustrated the immaturity of many organisations by recounting a senior leader who managed AI governance on an Excel sheet, a practice she said cannot support large-scale deployment [26-28]. Sundar Nagalingam added that when AI is delivered to billions of users, the most common failures are not infrastructure outages but missing or weak control mechanisms that expose functional or security vulnerabilities [34]. Sunil Abraham warned against anthropomorphising AI, emphasising that generative models are merely weight files, dual-use artifacts whose bugs are one person’s features, and that fearing them is unnecessary if a Unix-style security model is applied [36-49].
The panel agreed that trustworthy AI must be judged by the end-user’s confidence that the system is secure, non-hallucinating and compliant with applicable laws [55-64]. Sundar grouped the necessary safeguards into three buckets (functional safety, AI-specific safety and cybersecurity), using AI-assisted robotic surgery as an example of how each layer must be addressed [68-75]. Geeta stressed that governance should be a gate-keeping control backed by senior leadership and eventually embedded in the enterprise risk framework, rather than remaining a manual, post-hoc review [114-144]. She also said that customers will only pay a premium for “trust-grade” AI when the use case directly impacts reputation or compliance, while internal experiments may remain low-cost [216-233].
Sundar affirmed that high-performance AI hardware should ship with built-in privacy guardrails, citing autonomous driving and aerospace as domains where such safety layers are non-negotiable [148-158]. Sunil highlighted that ad-supported generative AI can democratise access and help bridge the AI divide, arguing that the business model does not inherently conflict with AI neutrality [182-200]. All panelists concurred that AI model innovation is outpacing governance frameworks, making rapid standardisation and accountability essential [301-304]. The discussion ended without consensus on mandatory watermarking, with participants split between viewing it as a useful demarcation and seeing it as an impractical universal requirement [327-335].
Keypoints
Major discussion points
– Trust and responsible-AI adoption is still immature in many organisations.
Geeta notes that “security used to be an afterthought… now people first think security” and that “people are adopting AI but trust, governance, security is taking a prime stage now” [17-18]. She also recounts a senior leader who managed AI governance on an “Excel sheet,” highlighting how rudimentary practices still block scaling [18-24].
– When AI is scaled to billions of users the first failures are in safety and control, not raw infrastructure.
Sundar explains that “the systems that drive the infra… break” and that failures appear either in “how efficiently each of the use cases… gets served” or in “whether it is being served safely in a secure way” [34-38]. He later groups the critical failure domains into three buckets – functional safety, AI safety, and cybersecurity – using the example of AI-assisted robotic surgery [68-75].
– A practical definition of “trustworthy AI” centres on the end-user’s confidence that the model is secure, non-hallucinating and compliant.
Geeta breaks it down: the model must have “passed the security test,” be “not hallucinating” with monitoring controls, and meet “compliance” for the relevant law or industry [55-64]. She stresses that trustworthy AI is about “how the end user will consume confidently” [65-66].
– Governance must move from a passive, observation-only role to an enforceable control embedded in enterprise risk management.
Geeta describes the need for senior-leadership commitment, “governance as a control point, like a gatekeeper,” and cites the IBM ethical board that must approve every AI proposal before sales can proceed [130-141]. She later notes that AI risk should be folded into the organisation’s overall risk posture rather than treated as a separate silo [143-144].
– Global regulation, standards and industry-wide alignment are still evolving, and many panelists see a gap between rapid model innovation and slower governance.
Sundar calls for “standardization… then tailor it for the needs of each of the countries” and outlines a three-step approach (platform safety, algorithmic safety, ecosystem safety) [240-249]. Sunil argues that ad-supported AI can level access, while maintaining that there is “no regulatory vacuum for AI” and that responsibility ultimately rests on developers [190-202][295-298]. When asked whether model advances outpace governance, all panelists answered affirmatively [298-304].
Overall purpose / goal of the discussion
The panel was convened to explore how large enterprises (Infosys, IBM, NVIDIA, Meta) can build and scale trust in generative AI, covering responsible-AI practices, safety and security failures, governance mechanisms, and the need for coherent regulatory and industry standards. The moderators repeatedly asked participants to articulate concrete “non-negotiables” and practical steps for embedding trust at scale.
Overall tone and its evolution
– The conversation opens enthusiastic and collegial, with applause and light banter as the panelists are introduced [5-9].
– It quickly becomes analytical and cautionary, focusing on concrete challenges (Excel-sheet governance, failure modes, safety buckets) [17-24][34-38][68-75].
– A pragmatic, solution-oriented tone emerges when discussing governance integration and enterprise risk [130-144].
– Mid-session, skepticism and philosophical nuance appear, especially in Sunil’s remarks about anthropomorphisation, ontology, and the limits of regulation [36-44][295-298].
– The final segment shifts to a rapid-fire, slightly humorous style, with yes/no questions, playful disagreements, and a closing “thank you” [273-284][327-334].
Overall, the tone moves from upbeat introduction → serious technical and policy analysis → reflective skepticism → light-hearted rapid questioning, maintaining a professional yet conversational atmosphere throughout.
Speakers
– Mr. Syed Ahmed – Moderator; member of the Responsible AI Office at Infosys [S1]
– Ms. Geeta Gurnani – Field CTO, Technical Pre‑sales and Client Engineering, IBM [S3]
– Mr. Sundar R. Nagalingam – Senior Director, AI Consulting Partners, NVIDIA [S4]
– Mr. Sunil Abraham – Public Policy Director, Meta [S6]
Additional speakers:
– None identified beyond the four listed above.
The panel opened with brief introductions of the four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM’s field CTO for technical pre-sales and client engineering), Mr Sundar R. Nagalingam (senior director of AI consulting partners at NVIDIA), and Mr Sunil Abraham (public policy director at Meta) – and the moderator framed the session as a discussion on how large organisations can scale responsible, trustworthy AI while tackling governance, safety and regulatory challenges [1-5].
Geeta Gurnani highlighted a dramatic shift in industry attitudes toward security. Two years ago many clients still asked “what is responsible AI and what is trust?”, but today security has become a shift-left priority: “people first think security then everything else” [17-19]. She illustrated the immaturity of current governance by recounting a senior leader who managed AI risk on an Excel spreadsheet, a practice she said leaves organisations unable to scale with confidence [23-28].
Sundar Nagalingam explained that when AI systems are delivered to billions of users the first points of failure are not the underlying hardware but the control layers that orchestrate the infrastructure. He grouped these risks into three “buckets”: functional safety (e.g., an AI-assisted robotic surgery delivering the correct clinical outcome), AI-specific safety (bias, training-time validation, synthetic testing) and cybersecurity (protecting the system from malicious intrusion) [68-75]. He added that the proliferation of standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI-driven robotic system fails [260-267].
When asked to define “trustworthy AI”, Geeta framed it from the end-user’s perspective: a model must pass a security test, be monitored to prevent hallucinations, and comply with the relevant legal regime, thereby allowing the end user to consume the output confidently [55-66].
Geeta argued that governance must move from passive observation to an enforceable, gate-keeping control embedded in the organisation’s risk framework. She called for senior-leadership commitment and for AI risk to be folded into the enterprise risk posture rather than treated as a siloed function [130-144].
Sundar echoed the need for standardisation before localisation. He proposed first establishing a safe platform (the “template”) and then fine-tuning it for each country’s regulations, covering platform safety, algorithmic safety and ecosystem safety [240-249]. This mirrors Geeta’s call for a technology-level baseline that can be adapted per jurisdiction [290-294].
On the hardware side, Sundar affirmed that high-performance AI infrastructure should ship with built-in privacy guardrails. He cited autonomous driving and aerospace as domains where “the safety layer is non-negotiable” and where silicon-level protections are essential [148-158].
Sunil Abraham offered a philosophical stance, warning against anthropomorphising AI. He framed an AI model as a single weight file, a dual-use artifact, and argued that a Unix-style “security-first” mental model gives confidence: at worst, the file can only act within its own user space [36-49]. When asked about the “open claw malt bot” community, Sunil dismissed it as machines hallucinating, “stochastic parrots” with no real intelligence on display [36-44].
Sunil also discussed Meta’s “Trusted Execution Environment” paper, noting that the ~80-page document devotes half of its length to hardware-level attacks and enumerates 33 distinct attack strategies comprising more than 100 individual attack types [180-186][190-194].
Regarding business models, Sunil argued that ad-supported generative AI can increase accessibility and help close the AI-usage gap, especially for low-income users, without necessarily compromising the principle of AI neutrality [182-202].
Geeta addressed market willingness to pay for “trust-grade” AI. She said enterprises are unlikely to pay a premium for internal experiments, but will do so when the AI product is consumer-facing or carries downstream reputational, compliance or brand risk – “I cannot afford to fail there” [221-233].
All three panelists concurred that the models and innovation outpace governance [301-305], underscoring the urgency of accelerating standards and oversight.
Points of disagreement emerged. Geeta advocated for a universal technical baseline before geographic regulation [290-294], whereas Sunil asserted that existing laws already apply and there is no regulatory vacuum for AI [295-296]. Sundar’s middle-ground proposal of standardisation then localisation sits between these positions. On mandatory watermarking, Sundar expressed skepticism, arguing that the industry has already accepted AI-generated content and that blanket watermarking may be unnecessary [332-337]; Sunil evaded a direct answer [327-330], while Geeta suggested future technology might render watermarks unnecessary [338].
The discussion yielded several actionable take-aways: senior leadership must mandate AI governance as a non-optional, gate-keeping function and integrate AI risk into enterprise risk management [130-144]; organisations should replace ad-hoc tools such as Excel with automated, runtime-enforced governance pipelines [114-124]; hardware vendors need to embed privacy and safety guardrails at the silicon level for high-risk sectors [148-158]; a three-layer safety framework (functional safety, AI safety, cybersecurity) should become the industry baseline, with country-specific tweaks applied thereafter [68-75][240-249]; and while ad-supported models can increase accessibility, their long-term impact on trust and neutrality warrants further study [182-202].
In closing, the moderator thanked the participants and the audience, noting that the diversity of perspectives underscored consensus on layered security and governance while highlighting divergent views on global regulatory alignment and content-labeling, pointing to clear directions for future research and policy [339-340].
of responsible AI office in Infosys. And absolute privilege to announce my co-panelists: Geeta Gurnani, field CTO, technical pre-sales and client engineering at IBM; Sundar R. Nagalingam, senior director, AI consulting partners at NVIDIA; and Sunil Abraham, public policy director at Meta. So now between Infosys, IBM, NVIDIA and Meta, you can’t get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph, as requested by the organizers, before we get started with the panel discussion. Thank you. All right. So it’s really amazing to be on a panel with all of you again.
So before we get started with, you know, a lot of heated discussions on the scaling of trust, because trust is something that everyone thinks, you know, they have a different perspective on, let me get started on very simple questions, and then we’ll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry, in your experience, after which you have felt that, oh, even after decades of experience, this industry still surprises me?
Sure. So thank you so much, Syed, for that question. And as I was mentioning when I was standing outside, when I was walking in and meeting many clients almost two years back, everybody was asking me, what is this responsible AI and what is this trust? Okay. And what surprised me is that we all witnessed so much learning from security as a concept, right? Security always used to be an afterthought, and now I think people just can’t afford not thinking security. It has become completely shift-left, right? People first think security, then everything else. But in spite of that whole learning, what I witnessed in the last 24 months is that people are adopting AI, but trust, governance, security is taking a prime stage now. Okay, it wasn’t a first thought. And when I met a very senior leader, I will, of course, not name them.
And I told them, you are starting your journey on Gen AI. Can we work with you on responsible AI? And he said, but that will block my innovation. And I don’t want to block my innovation. And I asked him, so how do you manage the governance? He said, on an Excel sheet. And I was, wow. I said, if you’re ready to spend so much money… But I think now when I go and meet, I realize that that organization is not able to scale because they’re not confident. This Excel will never let anybody scale. I think that’s the first thing.
That’s quite profound, what you mentioned, right? So what you’re saying is that people are more open to responsible AI now and trustworthy AI now. And in many ways, they leaped ahead earlier with a lot of innovation. There is absolutely no doubt in anyone’s mind about the power of AI, what AI can do. But true scale can come only when you start trusting AI, only when you start building that layer of trust, and that time is now. That’s correct. Excellent. Okay, Sundar, maybe the next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users? Whether it is governance first, or infrastructure, or alignment: when AI scales to a lot of people, what breaks first?
I mean, that’s the thing, right? Any one of them can break, and most of the time it is not the infrastructure that breaks; what breaks is the systems that drive the infra. And the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices, that is one possibility of failure. The second, very obvious one is: is it getting served safely, in a secure way? That could be a very, very important point of failure. And even that is a failure. I mean, the systems may appear to be running well, everybody might be getting the answers that they have been looking for, everything might look hunky-dory, but if a very, very small vulnerability gets overlooked, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that’s a huge failure. So most of the time, the things that break when you are serving a large number of users are the way in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.
Excellent, I totally agree with you, it is absolutely right. Sunil, I think a very, very important question to you. Last month we saw all this craziness about open claw malt bot, malt book. For those of you who don’t know, open claw malt bot malt book was a social networking site, but with a twist: it was created for only AI agents. Okay? So humans were allowed to observe what is happening on the social networking site, but they couldn’t participate, they couldn’t post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language, they had their own religion, apparently. So a lot of things happened. So the question to you is: you have spent years shaping digital policy, right? But I mean, when you heard about all this malt bot, malt book and all that, did you cringe for a minute and say, oh, I didn’t expect this?
No, and unfortunately, even though you said it’s the lightweight question, I have to answer it using big words. So I think the main reason why I don’t see it that way is because I’m skeptical towards anthropomorphization. Whenever I see technology do something, I don’t, in my head, apply the mental model of a human. It’s just technology doing something. So I’m not impressed at all by a malt book. It is just machines hallucinating.
The stochastic parrot is just doing something. There is no real intelligence on display yet. The second big word I’m going to use is ontology. In philosophy, the ontological question is: what is this thing that I’m looking at, malt book or open claw? And at the very core of Gen AI is a single file on the file system, the weight file. And I’m somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files. And operating systems didn’t scare me. And somehow you want me to be scared of a single file. A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it’s one file, but what is the nature of truth about this file? And I think the mistake we’re making is we’re expecting it to be a responsible file. But that is actually not, according to me, what it is. According to me, it is a general-purpose file, or a dual-use file, and one person’s bug is going to be another person’s feature, and another person’s feature is going to be a third person’s bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I’m using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta’s mental model of security first, and that means Unix thinking, suppose we use the Unix mental model, then surely we will not be scared of any file. It’s in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I’m not scared of malt book at all.
Thank you so much for your response. That gives us a lot of assurance, and I think a lot of people in the audience will also agree that now we are a little bit more assured than when we started. One of the big challenges that we always have is that we humanize AI too much, that was one of the big words that you used, which is not the case; we shouldn’t be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let’s get started with the perspectives. The reason why I am opening up one question to all of you, the same question: one, because I am a little lazy; second, when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? So when I talk to regulators, they have a different view on it. When I talk to governments, policy makers, they have a different view on it. Academia has a different view, industry has a different view. Now, within industry, you know, enterprise applications like what IBM does have a different view, chip makers like NVIDIA have a different view, and consumer AI platforms like Meta have a different view. Very quickly, if you can tell me, what does trustworthy AI mean in your own sense, and what are the key non-negotiables, one or two maximum, for each of you. Geeta, maybe we will start with you. Okay, so I think I will second
your thing, that people are confused about trustworthy AI. I think as a technologist even I was confused three years back, okay? Because people use a lot of terms interchangeably, which sometimes scares them, and they don’t know what they’re doing, because people use trust, security, governance, compliance, all of it interchangeably. I’m happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people: okay, exactly what are we trying to do? We’re putting in a lot of keywords. Yeah, it’s just a lot of keywords. But I think when you start to decipher each one of them, you say, okay, ultimately, see, trustworthy AI is for an end user, which means: can I trust what I’m using?
Right. And all of us technology providers need to really work upon that, which says: okay, to make you trust what you’re using, what enablers can I give? Right. So in my mind, for trustworthy AI, the ROI needs to be seen in terms of what downstream risk it is going to bring. Right. So if I have an end user or a consumer who wants to trust an AI, then I think he needs to be assured that the model or use case I’m using has already passed the security test. It is not hallucinating, which means I have control over monitoring what output it is producing. Right. So that risk has been taken care of.
Somebody has looked at it. And the third, which says compliance, right: that if I am operating in a law of the land where some laws are applicable, or if I’m in an industry where some laws are applicable, somebody has taken care of it for me. Right. So in my mind, trustworthy is how the end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers is taken care of, and they will be taken care of differently in different industries, by academia and all of it. That’s broadly it.
I love it. So basically, irrespective of all the building blocks of security, safety, privacy, which, as you said, can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So, the same from NVIDIA’s perspective, or your perspective? Sure. So this trustworthiness, I mean…
You explained it very beautifully, Syed, that multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So which is trustworthy and which is not, which is safe and which isn’t, right? So let’s try to abstract it to a very high level, something which can be bucketized, let’s say, in three buckets, and all these three buckets will be applicable to any regulator that you’re talking about, any government you’re talking about, any country, any function, whatever it is. Okay. The first one, the most important one, is the functional safety. Okay, maybe if I explain it with the help of an example, it’s easier for all of us to relate to it. I mean, let’s say a robotic-assisted surgery, an AI-assisted robotic surgery. The first one is the functional safety: the function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, okay, and what comes before and after surgery. It can be very, very easily equated with the skills of a surgeon, a manual surgeon, right? I mean, that is what it is, the functional part of it: is it getting delivered? Okay. That is, I would say, in terms of visualizing, processing, understanding and controlling, the easiest of the three that I’m talking about, because most of the time it’s black and white. It’s not always black and white, but most of the time it’s black and white. The second one is the AI safety that goes into it. See, obviously, for an AI-assisted robotic surgery, you cannot even imagine the amount of training that needs to be done, the amount of testing and validation that needs to be done, the amount of scenarios that can be visualized, created through synthetic methodologies, emulated and simulated and tested, the amount of bias that can get into it.
I mean, if it is a male patient… I mean, the simplest bias would be the different approach between a male patient and a female patient. I’m not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? And that’s not easy, because the problem here, Syed and august attendees, is that it’s humanly impossible to even think of all the things that can go wrong. That is why we always go back to these AI-assisted ones for that also. The last one being cybersecurity. If a bad element wants to just hack into the theater and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, I mean, that’s like unimaginable, right? And it can happen. I mean, it’s not easy, but theoretically it is possible. So I would say that if we abstract it to very high levels, these three areas, once again, the functional safety part of it, the AI safety part of it, and the cybersecurity, these three will be common amongst any approach that needs to be taken.
Absolutely spot on. In fact, if I can extend it: when we are building this kind of AI application, say, for example, the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is okay, but when a robotic surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned were fantastic, and I think this is very much essential. Yeah, you touched on a very important…
May I just add 10 seconds? So it was a very important point. And what is the reason for that? Why is there so much of a standard for that, why is there an undue expectation out of that? The reason is very simple: there is no accountability. Whom do I blame? Whom do I take to the court? Whom do I curse? There is no human. It’s easier when the surgeon makes a mistake; you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, I mean, is it the robot? So that uncertainty of whose collar to hold, whose neck to be choked when things go wrong, that uncertainty is increasing the expectations out of it.
Here you know for certain whom to blame. There you absolutely don’t know whom to blame. When I don’t have somebody to blame, I don’t want a reason to blame.
Accountability is definitely very, very important, I can’t stress that enough. But also, if an AI system has a flaw, it is at scale. It has been maybe rolled out to thousands and hundreds of thousands of hospitals, so it can fail at that level. We have to take precautions.
Excellent point. Error also scales. Good point.
Sunil?
Yeah, again, I just love disagreeing with Syed on everything he says.
That’s very rare, Sunil.
So I look at a project like… and there is distributed installation of a technology, and hopefully that kind of architecture should not scale, as you say. So that is the Meta vision: superintelligence, that means personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt the Meta models Llama 2 and Llama 3 with the question: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said, I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn’t do as per the request, it did the opposite of the request. And Llama 3 was safer than Llama 2. It said, I refuse to answer this question because I morally object to this question. This lady was happy, because she is lady A. But actually there is an imaginary lady, lady B, who works in some patriarchal institution, and she’s going to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her, so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So this is a dual-use technology, and if it truly has to avoid all of this risk at scale, which is perhaps going to happen in the world of atoms, in the world of atoms I would be as worried as Sundar is. Though, if I tell you about invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Will the Indians accept it or not, in 2026?
They won’t accept it. That invention is called the automobile. Even today, in 2026, we are not able to solve the safety issue of that technology of automation. Still, as Indians and as the human species in India, we say: oh, 200,000 Indians will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But we are asking quite rigorous questions here, I feel. So for us, in the world of bits, we have three mental models for the harm. The first mental model, zero to one: just you and the model. There, everything that is legal, going back to what Geeta said, everything that is legal is allowed.
And it is legal to write a book of hate speech. All of this is legal. You can write a book about neo-Nazis. These are all legal acts. Then we have one-to-one. In the one-to-one, the community standards of Facebook will have to kick in. At that point you cannot say whatever is legal; you’ll have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully, so therefore you cannot say un-family-friendly things. And then, when the robot or the intelligence is participating in a group conversation, then perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies, and some people may love heavy metal and some people may get very upset by heavy metal. So it has to deal with all of that.
I absolutely love the diversity of responses to one question. That's very important, and only these kinds of panels, representing different industries, can bring in this kind of diversity. I am really amazed at the diversity of responses to the questions I have asked, and I hope you have enjoyed it. So let's go a little bit deeper. Geeta, IBM has been investing in responsible AI even before this agentic AI era. I remember, way back during the good old machine learning days, you used to have the AI Fairness 360 and related security toolkits.
Most of them were open source, and we used to use them. Today you have IBM watsonx.governance. But the question is: how do you ensure these tools don't remain just a monitoring layer and actually get enforced on the ground at runtime, when it is needed, when the models are being served?
Wonderful. I'll just start on the lighter note that I hope every corporate has an office that can enforce this, with a responsible AI head like Syed or Arshik for India who can really enforce it. But trust me, it actually starts with the vision of the senior-most leadership in an enterprise: do they want to scale AI across the different business functions, for themselves as well as for their clients, with trust? It can't happen if you are not committed. And I love what he just added about the Unix model, asking do you want to be conservative or not, because conservative helps you scale.
And I think it also comes back to your point that errors can also scale, right? So if I want to stop errors at scale, then this is needed. But more often, the mistake I've seen us make, and that's why I gave the security example, is that we started investing in tooling a lot later. Now, if you want every single person to treat this as shift-left, meaning not governance as an observation later on but governance as a control, then you have to equip people to automate to a good extent. You cannot ask people to check manually, every single time a use case comes: is it compliant?
Is it ethical? Should I be doing it? Should I not? If there are no workflows for people to automate this, people skip it, and forget about AI: in today's world, any task that is extensively manual, people will skip, no matter what hard rules and regulations you make. So I would say, first of all, you need a big commitment from senior leadership saying that this is essential, not optional. That's the first thing. The second thing: everybody needs to understand that governance is not observation. You are not sitting like a governing body somewhere that just observes whether things are right or wrong.
You have to make it a control point, like a gatekeeper: unless you do this, you are not allowed to take it forward. I remember when we were doing our first use case for a client, the field team came to me and said, Geeta, what is this ethics board? Why are we going to the ethics board for approval to do this use case? Because as a sales team, we were not allowed to do any use case unless our ethics board approved it, saying you can table a proposal to a client. That is the level of strictness we follow in IBM. And everybody thought the ethics board was some body sitting somewhere that would be…
Rubber-stamping everything. Rubber-stamping. And now the sales team needs approval before they can bid on a proposal; if it's an AI proposal, it has to go through that conversation first. So governance works if you start putting it in as a control. And the third point, which we were discussing outside some time back: my observation was that to have a governance conversation in an organization, I had to talk to five people. I had to talk to the risk officer, the CISO, the business person, the CIO. And one day I was sitting with my team and asking, will this conversation ever see the light of day? Who is going to take the decision: business, security, or risk? Thankfully, what we are now seeing is that if you want governance at the centre, you have to bring it completely into your enterprise risk posture. When you calculate your risk posture in enterprise risk management, AI risk has to be taken into consideration. So I will summarize: make AI governance a gatekeeper, bring it in as a control, and then eventually, I'm pretty confident, within the next 12 months it will roll up into enterprise risk. It will no longer be a separate AI risk or governance function.
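Geeta's distinction between governance as observation and governance as a control point can be sketched in code. This is a minimal, hypothetical illustration: the check names, the `UseCase` shape, and the gate logic are invented for this sketch and do not represent IBM watsonx.governance or any real product API.

```python
from dataclasses import dataclass, field

# Hypothetical checks a use case must clear before it may proceed.
REQUIRED_CHECKS = {"ethics_board_approval", "compliance_review", "risk_assessment"}

@dataclass
class UseCase:
    name: str
    customer_facing: bool
    checks_passed: set = field(default_factory=set)

def governance_gate(use_case: UseCase) -> bool:
    """Act as a gatekeeper, not an observer: refuse to let the
    use case proceed until every required check is recorded."""
    missing = REQUIRED_CHECKS - use_case.checks_passed
    if missing:
        raise PermissionError(
            f"{use_case.name} blocked; missing: {sorted(missing)}")
    return True
```

In this sketch, a sales workflow would call `governance_gate` before bidding a proposal; the call raises until every check has been signed off, which is the difference between a post-hoc review and a gate.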
I love the way you said it: it has to be integrated. You can't just have AI risk; you have to have an integrated risk panel that can make decisions. So first, from the leadership level, you need to empower; then with the tooling you need to enable; and people on the ground need to ensure they implement. That's an amazing perspective. Thank you, Geeta. Sundar, I couldn't resist asking this question of you; a lot of people in the audience will not spare me if I don't.
you’re scaring me now
No, no, no, it's an easy question, but an expected one for a person from NVIDIA. Should GPUs and high-performance AI infrastructure have embedded privacy guardrails at the silicon level?
Absolutely yes. It should be there. I mean, why not? And I would… yeah, go ahead.
Would you want to give some examples on how you are doing it?
…where it goes through a very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe. Healthcare and driving, or let me put it as transportation, which includes aerospace as well: these are the two most stringent areas, where safety is a necessity. It's never a luxury; it's a necessity. So the answer is yes, Syed. Absolutely.
Thank you so much. Sunil, you wanted to…
Yeah, I mean, perhaps to take forward what Sundar said.
I will still ask you your question, though.
We can skip that. Do go.
No, no, go ahead.
What I thought was so fascinating about what Geeta said is that a corporation, a profit-maximizing firm, has an ethics review board. I don't know whether that's the exact phrase, but it's an equivalent. This is something you see in a university, and it is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the technical work they are doing. Meta also has this tradition of publishing academic papers. So it's very interesting that corporations are becoming more and more like academia, and perhaps that's a wonderful thing we should celebrate. It makes people like me very fortunate to be within these corporations. Meta published a paper about a trusted execution environment. The whole idea was: if a WhatsApp user in a group would like to use the power of AI, there is insufficient compute on the device itself to have edge AI solve the problem for the user. So until the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute in the cloud and do the processing there.
And then, after the task is done, you extinguish that instance you created in the cloud. As part of that paper (I'm of course not a computer science student; I'm an industrial and production engineer, from a previous generation of technology), out of the 60 or 80 pages, I cannot understand 40. And those 40 pages are about this hardware. There is a whole series of attacks you can possibly mount, in the tradition of the pager attack and the Israeli supply-chain attacks, a whole series of things you could potentially do to invade privacy, and before that, security.
And I just want to share this with these folks. I guess we all learn that way: we read books, we understand a few words on the page, we feel a little better, and we hope that the next time we read it we'll get smarter. There's a lot there, and I'm sure your team is doing a lot of work; the Meta team even named your chips, saying, with these NVIDIA chips we have done the following analysis, and with this other chip as well. I don't understand it all, but I know it's a big area of work, and I wanted to say thank you for what you said.
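The lifecycle Sunil describes, creating a temporary cloud compute instance, attesting it, using it once, then extinguishing it, can be sketched at a toy level. Everything below is illustrative: the class, the attestation step, and the stand-in "AI task" are invented for this sketch and are not Meta's actual trusted-execution design.

```python
import hashlib

class EphemeralEnclave:
    """Toy sketch of an ephemeral trusted-execution instance:
    created per request, attested, used, then destroyed so no
    user data outlives the task. Real TEEs involve hardware-signed
    attestation, sealed memory, and far more than shown here."""

    def __init__(self, expected_measurement: str):
        self._alive = True
        # In a real TEE the client verifies a measurement of the
        # loaded code before sending any data to it.
        self._measurement = hashlib.sha256(b"enclave-code-v1").hexdigest()
        if self._measurement != expected_measurement:
            raise RuntimeError("attestation failed")

    def process(self, data: str) -> str:
        if not self._alive:
            raise RuntimeError("enclave already destroyed")
        return data.upper()  # stand-in for the AI task

    def destroy(self):
        self._alive = False  # real systems wipe memory here

# The measurement the client expects (hypothetical).
EXPECTED = hashlib.sha256(b"enclave-code-v1").hexdigest()
```

The point of the sketch is the lifecycle: once `destroy` is called, any further `process` call fails, mirroring the "extinguish the instance" step in the paper Sunil describes.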
Thank you, Sunil. Absolutely. Last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks happening as we speak, at all levels, including the hardware level. That's quite interesting. Good conversation, by the way. I may have to skip the last few questions because this conversation is so good we could go on forever. But Sunil, I'll still ask you your question.
no no no no
no this is a very important question in my mind
i’ll try to answer it
Okay, so last week, or a few days ago, OpenAI started embedding ads in ChatGPT. So when a consumer AI platform like ChatGPT starts embedding ads, my question is: will it help consumers subsidize their subscription, or will it violate the doctrine of free AI principles, of AI neutrality?
Yeah, very quickly on that, we should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane, and that invention is 125 years old. Only 25 percent of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15 percent of households in India, and that invention is also 125 years old. Gen AI, my guess is that at least 20 percent of the country is using it today. More than that? More? Oh, thank you. So shall we say 25 percent of the country is using a technology that is only five years old. And the reason it is penetrating is two opennesses.
One is open-weight models, which is what we were discussing, and the other is gratis: the service, the intelligence, is available on a gratis basis. Whether you are an AI summit attendee staying at the poshest hotel paying $33,000 per night, or you are in Paharganj staying for Rs. 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it's both. Meta provides WhatsApp, where you are completely private, and Meta provides non-encrypted, ad-supported services as well. You can have everything, and we must have the maximum, because in this country we ideally want to move from 30 percent of people using AI to 90 percent, because it's just bits.
We can make this happen. So let's not be skeptical about the ad idea. It's a technical problem to be solved. It will help bridge the AI divide and it will be a great leveler. Sorry, that took much longer than I thought; I thought I'd do it in one or two sentences. Please, back to you.
Okay. Quite interesting conversations. Geeta, I'll come to you. We talk a lot about ethics, trust, responsible AI. Suppose we go ahead and develop it: would customers pay a premium for trust-grade AI? Are you seeing that in the market? If I tomorrow have a superior safety posture, is it influencing the buying decisions of enterprises significantly? Why would anyone invest in responsible AI the way IBM is investing significantly in it? Are you seeing it influence buying decisions because you're going to churn out trust-grade AI?
As I was mentioning earlier, it will first of all depend on the timing: where is an enterprise in its journey of Gen AI adoption? Trust me, many organizations are still at the surface. They have not fundamentally addressed a complete process change, or the complete efficiency they need to be targeting. But the minute they want to get into the real use case, one that is going to fundamentally change the way they operate, or maybe generate a new business model altogether, then I think they are ready to pay the premium. They may not pay for every single use case, because when we are delivering use cases, every enterprise is now intelligent enough to ask whether to go with open models or paid models, SLMs, LLMs, tiny models, whatever you may call them.
So there is always a cost and ROI conversation: which model am I going to adopt? Many people say: I may not pay enterprise trust-grade AI money if I'm doing an entirely internal use case. But if I am putting this use case in front of my consumers or end clients, where there is downstream risk, where my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will buy premium trustworthy AI, because I can't afford to fail there. But I can still do certain internal experiments and not pay the premium.
For POCs, experiments, and some internal use cases, say an ask-IT assistant, they say, okay, I'm fine to go. And that's where people also differentiate in which model they use. So it's not one single choice anymore; it depends on what use case you are serving and how critical it is for the business, and then you take a call on whether to invest in the premium. There is no single lens for all.
No, yeah, absolutely. Sundar, maybe I'll ask this. You did talk about your operating system for smart cars. I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. Now the world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack concern, from the chips to the models to the AI applications. And you will have to roll this out, being a global company, across geographies, and each geography has multiple different regulations, restrictions, and checklists you have to follow for automobiles. How do you ensure you build consistent trust enforcement that adheres to all the geographies?
Sure. No, that's a very pertinent question, because it's not easy. The idea is to standardize, and then tailor it, fine-tune it, for the needs of each country. There are three big approaches when it comes to Halos specifically. The first is the safety of the platform itself: how safe the platform is. Once the platform has been made safe, it becomes a template that can be tweaked to the needs of specific geographies, specific countries. That's very important.
And then you can also implement a standardization approach on top of that. The second is algorithmic safety. Going back to the fundamentals, it's not about the programming; it's about which algorithms we use and how we ensure that, number one, they are safe, and number two, that with some necessary tweaks they can serve the needs of specific geographies, countries, and verticals. The third is the ecosystem itself. Whatever is approved for use as an ecosystem in one country will not be the same in another; the suppliers will change, the vendors will change. So it is not just about ensuring the platform and the algorithms are safe: how do you ensure that the ecosystem that goes into building the cars is also safe? That is a huge thing, and there is no end to it, because it keeps changing. But once you have a system that is safe, you can keep adapting it.
I love your response. What you are saying is basically: even in the absence of regulations and controls, you make the platform safe, you make the algorithm safe, you make the ecosystem safe, and you have a template. You already have everything safe; you just need to tweak it for different geographies, sectors, and industries.
Yes, absolutely.
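Sundar's "standardize, then tailor per geography" approach can be sketched as a base safety template with per-country overrides. The template keys and values below are invented for illustration and are not NVIDIA Halos configuration.

```python
from copy import deepcopy

# Hypothetical base template covering the three layers Sundar names:
# platform, algorithm, and ecosystem. All keys/values are invented.
BASE_SAFETY_TEMPLATE = {
    "platform": {"secure_boot": True, "telemetry": "minimal"},
    "algorithm": {"redundancy": 2, "fail_safe": "stop"},
    "ecosystem": {"approved_vendors": ["vendor-a", "vendor-b"]},
}

def tailor(template: dict, overrides: dict) -> dict:
    """Merge per-geography overrides onto a copy of the base
    template, so each country tweaks rather than rebuilds
    the safety stack."""
    result = deepcopy(template)
    for section, values in overrides.items():
        result.setdefault(section, {}).update(values)
    return result

# Example: a hypothetical EU variant that tightens telemetry
# and narrows the approved vendor list.
eu_config = tailor(BASE_SAFETY_TEMPLATE, {
    "platform": {"telemetry": "gdpr-compliant"},
    "ecosystem": {"approved_vendors": ["vendor-a"]},
})
```

The design choice mirrors the conversation: the base template is made safe once, and jurisdictions only override what their regulations require, leaving untouched layers intact.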
Okay, I love it. Sunil, one question for you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this true responsibility or decentralized liability?
Again, to use something Yann LeCun used to say, and he is no longer the chief AI scientist, but the words continue to be true: we all have Wi-Fi routers in our homes, and when those routers fail, we don't call Linus Torvalds and say, hey, Linus, this Wi-Fi router is running Linux, so please help me fix the bug. The company that sold the router, that made a variant or a derivative work from the Linux project, is who you have to speak to. And that is the freedom that is necessary in the open-source community and in the community of proprietary entrepreneurs who build on open source, because the BSD license allows that; it allows Apple to take an open project and make it fully proprietary. And you could be making dual use at that level itself, for example wanting the model to create hate speech.
Say we want a hate-speech classifier in Santali. Unfortunately, we don't have enough Santali users on the platform, so we have to make synthetic hate speech in Santali in order to catch the real thing in advance. We need a big corpus of hate speech in Santali, and we cannot go around asking people to write hate speech for us; that would be the worse option. So the true approach in the open-source community is to retain freedom number one, the freedom of use, because it allows for the dual purpose. But the moment we use any of that on our platform, as a service we provide, then all those freedoms disappear. Then you have very limited freedoms.
Then, if you ask why women should not be in senior management positions, the model says: I'm not going to answer your question. So that's where we are.
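The synthetic-corpus idea Sunil mentions, generating placeholder examples of disallowed content in order to train a classifier that catches the real thing, can be sketched at a toy level. The tiny bag-of-words classifier and the deliberately innocuous "synthetic" sentences below are invented for this sketch and bear no relation to Meta's actual moderation systems.

```python
from collections import Counter

def train(examples):
    """Build a toy bag-of-words model from (text, label) pairs:
    one word-frequency Counter per label. Stands in for training
    a moderation classifier on a synthetic corpus when real
    labelled examples are scarce."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary overlaps the
    input text the most (a crude stand-in for a real model)."""
    words = text.lower().split()
    def score(label):
        return sum(counts[label][w] for w in words)
    return max(counts, key=score)

# Synthetic, deliberately innocuous stand-ins for policy-violating
# vs. acceptable posts (placeholder data, not a real corpus).
synthetic_corpus = [
    ("group X should be banned from here", "violating"),
    ("people like them do not belong anywhere", "violating"),
    ("had a lovely walk in the park today", "ok"),
    ("sharing my favourite recipe with the group", "ok"),
]
model = train(synthetic_corpus)
```

The sketch shows only the shape of the idea: synthetic examples supply the vocabulary the classifier needs before any real violating content appears in the target language.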
Quite interesting. Thank you. I have around seven to eight minutes left, so I'm going to skip the rest of the questions and, if the audience is okay, do things a little differently: I'm going to ask very rapid-fire questions, the same question to everyone, to be answered only in yes or no.
As a philosopher, I protest. I think the slogan for this AI age is "both-and": not only should we embrace yes and no, we should also embrace everything in between, because only then will we have personalized superintelligence. The trouble with your framing is that it is monolithic.
And I'll make an exception as a moderator: if a question or a response requires a little more attention, I'll call that out. You can also call me out if you think you need to add anything. But I have some very interesting questions, and I'm really excited to hear what you think. So again, the format is this: I'll ask the same question to all of you, and you answer very concisely, considering the time. Yes, or no, or both; the answer is your choice. So: do we need global alignment on AI regulations across the globe? Yes or no?
No
No. Okay. Yes. Okay. No. Okay, I understand, though I did expect this kind of response. So maybe I'll tweak the question a little: a minimum understanding of what is required across all geographies, at least. Do we agree on that? Not a heavily regulated law, but minimum conditions that need to be met?
I would say we should talk about technology regulation, not geography regulation. As he was saying, there are certain table stakes at the technology level. All technologists should first agree that these are table stakes for the technology; then geographies can take over.
It's already regulated. To quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little with what Sundar said previously: you cannot say, I did it and I'm not responsible.
A little easier question this time. Is advancement in AI models outpacing advancement in AI governance? Is the innovation in models outpacing governance?
Absolutely.
Yes. I mean, that's the natural way things happen, right? The technology has to advance, and then you need to ensure that the advanced technology is safe and secure. That's a natural progression, and it has always happened that way.
It’s never happened in the reverse order.
Yeah. I agree.
But there is a school of thought saying that technology has to advance, correct, but before it can be widely adopted in production, maybe we need AI governance. So that is something we should catch up on really fast: we should make it safe before wide adoption of the technology. Okay: if you had a more capable but less safe model, would you delay your launch to stay responsible?
As I said, it depends on the use case.
Fair enough
I would just echo Geeta.
Okay, fair enough. Now one answer where I might get all my panelists to agree: have you stopped any projects due to safety concerns?
As I said, I am not currently on the ethics board of IBM, so I have not stopped any myself, but I have seen them stop projects.
Likewise, I'm not in the design department, so I don't have first-hand knowledge, but I'm sure a lot of things would have gotten delayed, if not stopped, because compliance regulations were not being met. Yes, I'm sure.
Facial recognition was turned off on Facebook. Yes, absolutely. Good.
Big question. Sunil, maybe I'll start with you this time. Can we actually govern AGI, artificial general intelligence?
It's a regulatory problem we don't have to think about yet.
Okay, so we can. Okay.
It's going to be much more difficult. Instead of asking can we govern, I would ask should we govern. Absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That's my hope, and I don't want to see a day when machines are better than human beings.
Okay. I'll go back to what you said initially, that humans should not be scared of what they have created. So yes, depending on how it evolves and how people are using it, governance will come. I don't think it will be optional at some point in time.
One last round, okay? Again, I'll start with Sunil. Should we have mandatory watermarking on all the media, text, and content that is generated by AI?
Should we have mandatory watermarking in photo-editing tools or text-editing tools?
Yes.
I’m answering with a question.
Are you saying yes or no?
I’m answering with a question.
Okay. That’s an answer I’ll take. No answer is also an answer.
I mean, see, the fact is we have accepted it. It's not an untouchable, an alien, a dirty thing; it's acceptable. So let's make it look good, feel good. There is no point in watermarking everything just to brand it. There will be a blurry line between human-generated content and AI-generated content, and should we really demarcate that? My honest feedback, and I'm saying this with a heavy heart, is that human-generated content will vanish from the internet, just as we no longer remember addresses or phone numbers. We used to remember them; we used to remember routes.
But I hope not. I hope not.
That's why I said I have a heavy heart.
I'll answer from a very personal space, because my son is a creative director in films. He absolutely says that it has to be demarcated, but sometimes he goes so far as to say that in the near future you will be able to demarcate yourself clearly and will not need any watermark at all. But a different angle comes in when you compare human creativity with fully AI-generated work.
Perfect, thank you so much, and that brings me exactly to time. Ladies and gentlemen, please give a big round of applause to this amazing panel. Thank you so much. And to the amazing moderator, thank you.
“The panel opened with introductions of four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM), Mr Sundar R. Nagalingam (NVIDIA), and Mr Sunil Abraham (Meta).”
The knowledge base lists the same four executives as panelists, confirming Geeta Gurnani, Sundar R. Nagalingam and Sunil Abraham, and notes an Infosys representative, matching the report’s description [S92] and the overall panel description [S1].
“Sundar Nagalingam grouped AI risks into three buckets: functional safety, AI‑specific safety, and cybersecurity.”
A referenced source outlines three broad categories of AI risk, which aligns with the three-bucket framework described in the report, providing broader context for this taxonomy [S102].
“Geeta Gurnani said that two years ago many clients asked “what is responsible AI and what is trust?” but today “security has become a shift‑left priority”.”
Industry commentary notes a recent shift toward prioritising security over convenience, illustrating the broader trend toward security-first thinking that underpins Gurnani’s observation [S96].
“The proliferation of AI standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI‑driven robotic system fails.”
Discussion of AI standards highlights challenges such as lack of standardisation and unclear accountability, providing additional nuance to the report’s claim about standards and responsibility [S104].
The panel shows strong convergence on three core themes: (1) security must be addressed early and is the most likely failure point at scale; (2) trustworthy AI is best expressed as a multi‑layered framework covering functional safety, AI safety, cybersecurity, and compliance; (3) AI innovation outpaces governance, creating a pressing need for standardized, leadership‑driven governance that can be adapted per jurisdiction.
High consensus across speakers on the importance of security, layered trust mechanisms, and the speed gap between AI development and governance. This consensus suggests that industry leaders recognize common challenges and are likely to collaborate on standards, leadership mandates, and rapid governance mechanisms to enable responsible AI deployment.
The panel shows moderate but substantive disagreement. Core points of contention revolve around the need for a unified global regulatory framework versus a technology‑first baseline, and the policy instrument of mandatory watermarking. While all participants concur on the importance of trustworthy AI, they propose divergent routes—leadership‑driven governance, system‑level standardization, or philosophical reframing. These differences suggest that consensus on implementation will require bridging gaps between policy‑oriented, technical, and philosophical perspectives.
The level of disagreement is medium: the points of contention concern strategic approaches rather than denial of the underlying problem, implying that coordinated multi‑stakeholder work will be needed to align on standards, regulation, and content‑labelling policies.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.