Global Enterprises Show How to Scale Responsible AI
20 Feb 2026 13:00h - 14:00h
Session at a glance
Summary
This panel discussion focused on building trust and scaling responsible AI across global enterprises, featuring executives from Infosys, IBM, NVIDIA, and Meta. The conversation explored how different industries and companies approach trustworthy AI, with each panelist offering distinct perspectives based on their organizational roles and technical expertise.
The panelists agreed that trustworthy AI fundamentally means enabling end users to confidently rely on AI systems, though they emphasized different implementation approaches. IBM’s representative highlighted the importance of moving from observation-based governance to control-based systems, citing examples of ethical review boards that must approve AI projects before deployment. NVIDIA’s perspective focused on three critical safety buckets: functional safety, AI safety, and cybersecurity, particularly for high-stakes applications like autonomous vehicles and medical robotics. Meta’s representative took a more philosophical approach, arguing against anthropomorphizing AI and emphasizing the dual-use nature of AI technology.
A key theme emerged around the challenge of scaling AI governance without stifling innovation. The panelists discussed how enterprises initially resist responsible AI measures, viewing them as barriers to innovation, but eventually recognize that trust is essential for scaling AI applications. They noted that while AI advancement consistently outpaces governance development, this is a natural progression that has occurred with previous technologies.
The discussion revealed that organizations are more willing to invest in premium trust-grade AI for customer-facing applications where reputation and compliance are at stake, while accepting lower safety standards for internal use cases. The panelists concluded that successful AI governance requires strong leadership commitment, automated tooling rather than manual processes, and integration with existing enterprise risk management frameworks.
Keypoints
Major Discussion Points:
– Evolution of Trust in AI Adoption: The discussion revealed how organizations have shifted from viewing responsible AI as an innovation blocker to recognizing it as essential for scaling AI systems. Panelists noted that while security lessons from the past should have informed AI governance, many organizations initially adopted AI without proper trust frameworks, leading to scaling challenges.
– Defining Trustworthy AI Across Industries: Each panelist offered different perspectives on trustworthy AI based on their industry focus – IBM emphasized end-user confidence through security, monitoring, and compliance; NVIDIA highlighted functional safety, AI safety, and cybersecurity as three core pillars; Meta focused on personalized intelligence and dual-use considerations with different harm models.
– Implementation Challenges and Solutions: The conversation explored practical aspects of implementing AI governance, including the need for automated tooling rather than manual processes, integration with enterprise risk management, and the importance of leadership commitment. Panelists discussed real-world examples like IBM’s ethical review boards that must approve AI proposals before client engagement.
– Scale and Accountability Issues: A significant theme was how AI systems can fail at scale, creating accountability challenges when errors occur. The discussion covered how this differs from human errors (where blame can be assigned) and the need for embedded safety measures at various levels, from hardware to applications.
– Regulatory Approaches and Global Alignment: The panel debated whether global regulatory alignment is necessary or feasible, with perspectives ranging from technology-focused standards to geography-specific adaptations. They also discussed the balance between innovation and safety, including questions about mandatory watermarking and content identification.
Overall Purpose:
The discussion aimed to explore how major technology companies (Infosys, IBM, NVIDIA, and Meta) are building and scaling trust in AI systems across different industries and use cases. The panel sought to address practical challenges in implementing responsible AI governance while maintaining innovation momentum.
Overall Tone:
The discussion maintained a professional and collaborative tone throughout, with panelists showing mutual respect despite having different perspectives. The conversation was thoughtful and nuanced, with participants acknowledging the complexity of AI governance challenges. While there were moments of disagreement (particularly from the Meta representative who enjoyed “disagreeing” with the moderator), the tone remained constructive. The rapid-fire question segment at the end added some energy and revealed both convergent and divergent viewpoints among the panelists, ending on a practical note about real-world implementation challenges.
Speakers
Speakers from the provided list:
– Mr. Syed Ahmed – Head of Responsible AI Office at Infosys (moderator)
– Ms. Geeta Gurnani – Field CTO, Technical Pre-sales and Client Engineering at IBM
– Mr. Sundar R Nagalingam – Senior Director AI Consulting Partners at NVIDIA
– Mr. Sunil Abraham – Public Policy Director at Meta
Additional speakers:
None – all speakers mentioned in the transcript are included in the provided speakers names list.
Full session report
This panel discussion brought together senior executives from four major technology companies—Infosys, IBM, NVIDIA, and Meta—to explore the critical challenge of building and scaling trust in artificial intelligence systems across global enterprises. Moderated by Syed Ahmed from Infosys’s responsible AI office, the conversation revealed both convergent thinking on fundamental principles and significant divergences in implementation approaches, reflecting the complexity of establishing trustworthy AI in an era of rapid technological advancement.
The Evolution of Trust in AI Adoption
The discussion opened with a striking observation from IBM’s Geeta Gurnani about the dramatic shift in organisational attitudes towards responsible AI. She recounted how, just two years ago, clients would ask basic questions about what responsible AI meant, and senior leaders would express concern that governance frameworks might “block innovation.” This resistance led some organisations to manage AI governance through rudimentary methods like Excel spreadsheets, despite significant investments in AI technology. However, Gurnani noted a fundamental transformation: organisations that initially leaped ahead with innovation but neglected trust frameworks eventually found themselves unable to scale effectively because they lacked confidence in their AI systems.
This evolution mirrors historical patterns with cybersecurity, where security was traditionally an afterthought but has now become a “shift left” priority that organisations consider before everything else. The panel agreed that AI governance is following a similar trajectory, with trust becoming recognised as essential for scaling rather than an impediment to innovation. As moderator Ahmed emphasised, whilst there is no doubt about AI’s power and capabilities, true scale can only be achieved when organisations start building comprehensive layers of trust.
Defining Trustworthy AI Across Industry Perspectives
The panel revealed how different industry focuses shape perspectives on trustworthy AI, despite working within the same technological ecosystem. Each panellist offered distinct but complementary definitions based on their organisational roles and technical expertise.
Gurnani approached trustworthy AI from an end-user confidence perspective, arguing that trustworthy AI fundamentally means enabling users to confidently consume AI systems. She emphasised that whilst terms like trust, security, governance, and compliance are often used interchangeably—creating confusion—the ultimate goal is user confidence. This requires three key enablers: ensuring AI systems pass security tests, implementing monitoring controls to prevent hallucination, and maintaining compliance with applicable laws and industry regulations. Her perspective reflected IBM’s enterprise services focus, where client confidence directly impacts business relationships.
NVIDIA’s Sundar Nagalingam proposed a more systematic framework, abstracting trustworthy AI into three universal buckets applicable across any regulator, government, or industry. First, functional safety—whether the AI system delivers its intended function correctly, analogous to a surgeon’s skills in AI-assisted robotic surgery. Second, AI safety—addressing the extensive training, testing, validation, and bias mitigation required for AI systems, including scenarios that are “humanly impossible to even think of.” Third, cybersecurity—protecting against malicious actors who might exploit AI systems, particularly in high-stakes applications like medical procedures.
Meta’s Sunil Abraham took a more philosophical approach, emphasising the importance of not anthropomorphising AI technology. He argued against being impressed by phenomena like AI agents creating their own social networks, viewing these as “machines hallucinating” rather than displays of genuine intelligence. Abraham introduced ontological and epistemological perspectives, describing AI models as fundamentally “a single file on the file system”—the weight file—and arguing that if operating systems with thousands of files don’t frighten us, a single file shouldn’t either. He characterised AI as inherently dual-use technology where “one person’s bug is going to be another person’s feature,” requiring careful consideration of different user needs and contexts.
Abraham also presented three distinct mental models for understanding AI harm: “zero to one” (individual use cases), “one to many” (platform-level community standards), and “many to many” (public conversations and societal impact). This framework helped distinguish between different scales and types of AI governance challenges.
Implementation Challenges and Practical Solutions
The conversation delved deeply into the practical challenges of implementing AI governance at scale, with Gurnani providing particularly compelling examples from IBM’s experience. She described how IBM implemented an ethical review board that must approve all AI proposals before sales teams can even bid on client engagements. This represented a fundamental shift from governance as observation to governance as control—creating gatekeepers rather than mere monitors.
The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani emphasised that manual compliance processes inevitably fail because people will skip extensive manual tasks regardless of regulations. She argued that successful governance requires automated workflows that integrate seamlessly into existing development and deployment processes. This insight reflects a broader principle: governance systems must be designed for human behaviour and organisational realities, not ideal scenarios.
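To make the contrast between governance as observation and governance as control concrete, here is a minimal sketch of the kind of automated deployment gate the panellists described in principle. The panel presented no code or specific design; the `ApprovalRecord` structure, the `APPROVALS` registry, and the expiry rule below are illustrative assumptions, not IBM's actual tooling.

```python
# Illustrative sketch only: the panel described governance as a control point
# ("a gatekeeper"), not this specific design. All names here are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class ApprovalRecord:
    """Outcome of an ethics-board review for one AI use case."""
    use_case_id: str
    approved: bool
    expires: date  # approvals are assumed to be re-reviewed periodically


class GovernanceGateError(RuntimeError):
    """Raised when a deployment is attempted without a valid approval."""


# Stand-in for whatever system of record an enterprise actually uses.
APPROVALS: dict[str, ApprovalRecord] = {}


def deploy_use_case(use_case_id: str) -> None:
    """Deploy only if the review board has approved this exact use case.

    This is 'governance as control': the check runs inside the deployment
    path and blocks it, rather than logging an observation after the fact.
    """
    record = APPROVALS.get(use_case_id)
    if record is None or not record.approved:
        raise GovernanceGateError(f"{use_case_id}: no ethics-board approval on file")
    if record.expires < date.today():
        raise GovernanceGateError(f"{use_case_id}: approval expired; re-review required")
    print(f"Deploying {use_case_id} (approved until {record.expires})")


# Example: an approved use case proceeds; an unapproved one is blocked
# before any rollout happens.
APPROVALS["customer-chatbot"] = ApprovalRecord(
    "customer-chatbot", approved=True, expires=date.today() + timedelta(days=90)
)
deploy_use_case("customer-chatbot")     # proceeds
# deploy_use_case("internal-ask-it")    # would raise GovernanceGateError
```

The point of the sketch is simply that the check sits inside the deployment path and fails closed, rather than producing a report after the fact.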
Nagalingam outlined NVIDIA’s approach through their Halos system, which implements three buckets of safety: platform safety (ensuring the underlying infrastructure is secure), algorithmic safety (validating the AI models themselves), and ecosystem safety (protecting the broader environment in which AI operates). This systematic approach allows for standardised safety measures that can be adapted to different geographical and regulatory requirements.
The panel also discussed the critical importance of integrating AI risk into existing enterprise risk management frameworks rather than treating it as a separate concern. Gurnani predicted that AI risk would soon be fully integrated into enterprise risk postures, moving away from standalone AI governance to comprehensive risk management that considers AI alongside other business risks.
Scale, Accountability, and System Failures
A significant theme emerged around the unique challenges AI systems face when scaling to serve billions of users. Nagalingam provided crucial insight into what breaks first when AI scales: not the infrastructure itself, but the systems that drive the infrastructure. These failures typically occur in how efficiently use cases are served as microservices, and whether they’re served safely and securely. Even when systems appear to function well, overlooked vulnerabilities can represent massive failures at scale.
The accountability challenge proved particularly thought-provoking. Nagalingam articulated a fundamental psychological and legal issue: when human surgeons make mistakes, there’s someone to blame, take to court, or hold accountable. With AI systems, this accountability disappears, creating uncertainty about “whose collar to hold” when things go wrong. This uncertainty drives higher expectations for AI systems—when there’s no one to blame, people don’t want reasons to blame anyone.
Ahmed extended this insight by noting that AI system flaws scale differently than human errors. A flawed AI system might be deployed across thousands of hospitals simultaneously, creating the potential for massive, coordinated failures rather than isolated incidents. This scaling of both capability and risk requires fundamentally different approaches to safety and governance.
Market-Driven Approaches to Trust Investment
Gurnani provided practical insights into when organisations are willing to pay premiums for trust-grade AI versus when they accept lower safety standards. From IBM’s experience, the key determinant is use case criticality and risk exposure: organisations readily invest in premium trustworthy AI for consumer-facing applications where reputation, brand, and compliance are at stake, but may accept lower safety standards for internal experiments and proof-of-concept work.
This market-driven differentiation reflects rational risk management, where organisations calibrate their governance investments based on potential downstream consequences. Internal use cases like “ask IT” functions might use less rigorous governance, whilst customer-facing applications demand enterprise-grade trust measures. This approach allows organisations to balance innovation speed with appropriate risk management.
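As an illustration of that calibration logic, the sketch below maps use-case attributes to a governance tier. This is an assumed rubric for exposition only: the attribute names and the tier boundaries are hypothetical, and no panellist proposed a specific scoring scheme.

```python
# Illustrative sketch only: the panellists described the principle (calibrate
# governance spend to downstream risk), not this rubric.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    customer_facing: bool   # reputation/brand exposure
    regulated_domain: bool  # e.g. health, finance, transport
    production: bool        # live workload vs POC/experiment


def governance_tier(uc: UseCase) -> str:
    """Map a use case to an (assumed) governance tier.

    Customer-facing or regulated production systems get 'premium trust-grade'
    controls; internal production gets a standard tier; experiments get a
    lighter baseline, mirroring the differentiation Gurnani described.
    """
    if uc.production and (uc.customer_facing or uc.regulated_domain):
        return "premium"   # full security testing, monitoring, compliance sign-off
    if uc.production:
        return "standard"  # monitoring and periodic review
    return "baseline"      # internal POCs: minimal controls, no external exposure


print(governance_tier(UseCase("support-chatbot", True, False, True)))      # premium
print(governance_tier(UseCase("ask-IT", False, False, True)))              # standard
print(governance_tier(UseCase("prompt-experiment", False, False, False)))  # baseline
```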
Nagalingam confirmed that hardware-level safety measures are becoming essential for certain applications, with GPUs and AI infrastructure embedding privacy guardrails at the silicon level, particularly for safety-critical applications like autonomous vehicles and healthcare.
Regulatory Disagreements and Philosophical Divides
The panel revealed significant disagreement about regulatory approaches, despite general consensus on the need for AI governance. When asked about global regulatory alignment, the responses varied considerably and highlighted fundamental philosophical differences.
Gurnani advocated for technology-focused regulation rather than geography-specific rules, arguing that technologists should first agree on baseline technical standards before geographies add their specific requirements. Nagalingam supported this view, emphasising standardised safety platforms adaptable to different regions.
However, Abraham strongly disagreed, arguing that AI is already regulated under existing frameworks and questioning the need for new AI-specific regulations. He consistently challenged the moderator’s framings throughout the discussion, maintaining that expecting AI to be “responsible” misunderstands its nature as a general-purpose, dual-use technology. He compared this to expecting a text editor or photo editing tool to be inherently responsible, arguing that the responsibility lies with users and existing legal frameworks.
Abraham’s perspective emphasised maintaining open-source freedoms and allowing market forces and existing legal frameworks to govern AI development. He suggested a “Unix model” approach where AI systems operate in user space with limited potential for system-wide damage, rather than requiring extensive new regulatory frameworks.
Historical Analogies and Societal Acceptance
Abraham provided a particularly thought-provoking historical analogy about technology adoption and risk acceptance. He noted that if Indians were told today that adopting automobiles would result in 200,000 deaths annually, they would likely reject the technology. Yet society has accepted this trade-off because automobiles were introduced gradually, and their benefits became apparent over time. This perspective suggests that perfect AI safety may not be achievable or necessary—instead, society may need to develop frameworks for managing acceptable risk levels.
The discussion of AI democratisation through ad-supported models revealed another dimension of the trust challenge. Abraham argued that ad-supported AI services help bridge the “AI divide” by making intelligence accessible regardless of economic status. He suggested that roughly 25% of India uses generative AI despite the technology being relatively new—a penetration rate that he claimed far exceeds much older technologies like automobiles or air conditioning, though this figure was not verified by other panellists.
Rapid-Fire Questions and Unresolved Tensions
Due to time constraints, the panel concluded with several rapid-fire questions that revealed both convergent and divergent thinking on specific implementation issues. The question of mandatory watermarking for AI-generated content produced particularly diverse responses, with Abraham questioning it by analogy to photo editing tools, Nagalingam expressing concern “with a heavy heart” about the disappearance of human-generated content, and Gurnani supporting it from a creative industry perspective.
The discussion of governing Artificial General Intelligence (AGI) revealed the panel’s uncertainty about future challenges. Abraham dismissed it as “a regulatory problem we don’t have to think of yet,” Nagalingam expressed hope that humans would remain superior to machines whilst acknowledging the difficulty of governing AGI, and Gurnani suggested that governance would evolve as needed based on how AGI develops and is used.
Perhaps most tellingly, when asked whether they had stopped projects due to safety concerns, all panellists acknowledged that their organisations had indeed halted or delayed projects for safety reasons. This unanimous response demonstrated that despite their different philosophical approaches, all major AI companies are implementing practical safety measures that sometimes override business pressures.
Implications and Ongoing Challenges
This discussion illuminated the current state of AI governance as an industry grappling with the transition from theoretical frameworks to operational reality. The convergence on certain fundamental principles—such as the need for use-case-specific governance, the importance of automated rather than manual compliance processes, and the integration of AI risk into enterprise risk management—suggests that industry best practices are beginning to crystallise.
However, the persistent disagreements about regulatory approaches, the nature of AI risks, and the appropriate balance between innovation and safety indicate that the industry has not yet reached consensus on fundamental questions. Abraham’s consistent disagreement with the moderator’s framings highlighted how different philosophical approaches to AI’s nature and societal role continue to create tension within the industry.
The panel ultimately demonstrated that whilst significant progress has been made in developing practical approaches to trustworthy AI, substantial work remains to resolve fundamental disagreements about governance frameworks, regulatory needs, and risk assessment methodologies. These ongoing tensions will likely continue to shape the development of AI governance standards and industry practices, making continued dialogue between different perspectives essential for developing effective approaches to trustworthy AI at scale.
Session transcript
…of the responsible AI office in Infosys. And an absolute privilege to announce my co-panelists: Geeta Gurnani, Field CTO, Technical Pre-sales and Client Engineering at IBM; Sundar R. Nagalingam, Senior Director, AI Consulting Partners at NVIDIA; and Sunil Abraham, Public Policy Director at Meta. So now, between Infosys, IBM, NVIDIA and Meta, you can't get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph, as requested by the organizers, before we get started with the panel discussion. Thank you. All right. So it's really amazing to be on a panel with all of you again.
So before we get started with, you know, a lot of heated discussions on the scaling of trust (because trust is something where everyone has a different perspective), let me get started on very simple questions, and then we'll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry, in your experience, after which you have felt: oh, even after decades of experience, this industry still surprises me?
Sure, so thank you so much, Syed, for that question. And as I was mentioning when I was standing outside, when I was walking in and meeting many clients almost two years back, everybody was asking me: what is this responsible AI, and what is this trust? Okay. And what surprised me is that we all witnessed so much learning from security as a concept, right? Security always used to be an afterthought, and now I think people just can't afford not to think security; it has become completely shift-left, right? People first think security, then everything else. But in spite of that whole learning, what I witnessed in the last 24 months is that people are adopting AI, but trust, governance, security is only now taking a prime stage, okay? It wasn't, uh, it wasn't a first thought. And when I met a very senior leader, I will, of course, not name them.
And I told them: you are starting your journey on Gen AI; can we work with you on responsible AI? And he said, but that will block my innovation, and I don't want to block my innovation. And I asked him, so how do you manage the governance? He said, on an Excel sheet. And we were like... I was, wow. I said, and you're ready to spend so much money. But I think now when I go and meet them, I realize that that organization is not able to scale, because they're not confident. But this Excel never let anybody fail. I think that's the first thing.
That's quite profound, what you mentioned, right? So what you're saying is that people are more open to responsible AI and trustworthy AI now. And in many ways, they leaped ahead earlier with innovation, with a lot of innovation. There is absolutely no doubt in anyone's mind about the power of AI, what AI can do. But true scale can come only when you start trusting AI, only when you start building that layer of trust, and that time is now. That's correct. Excellent. Okay, Sundar, maybe the next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users: is it governance first, or infrastructure, or alignment? When AI scales to a lot of people, what breaks first?
I mean, that's the thing, right? I mean, any one of them can break, and most of the time it is not the infrastructure that breaks; what breaks is the systems that drive the infra. And the breakage could come either in terms of how efficiently each of the use cases that needs to be served to the users gets served as microservices; that is one possibility of failure. The second, very obvious one: is it getting served safely, in a secure way? That could be a very, very important point of failure. And even that is a failure. I mean, the systems may appear to be running well, and everybody might be getting the answers they have been looking for; everything might look hunky-dory. But if a very, very small vulnerability gets overlooked, if it had not been thought about, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that's a huge failure. So most of the time, the thing that breaks when you are serving a large number of users is the way in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.
Excellent. I totally agree with you; that is absolutely right. Sunil, I think a very, very important question to you. Last month we saw all this craziness about OpenClaw, Moltbot, Moltbook. For those of you who don't know, Moltbook was a social networking site, but with a twist: it was created only for AI agents, okay? So humans were allowed to observe what was happening on the social networking site, but they couldn't participate, they couldn't post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language; they had their own religion, apparently. So a lot of things happened. So the question to you is: you have spent years shaping digital policy, right? But, I mean, when you heard about all this Moltbot, Moltbook business, did you cringe for a minute and say, oh, I didn't expect this?
No. And unfortunately, even though you said it's the lightweight question, I have to answer it using big words. So I think the main reason why I don't see it that way is because I'm skeptical towards anthropomorphization. Whenever I see technology do something, I don't, in my head, apply the mental model of a human. It's just technology doing something. So I'm not impressed at all by a Moltbook. It is just machines hallucinating.
The stochastic parrot is just doing something; there is no real intelligence on display yet. The second big word I'm going to use is ontology. In philosophy, the ontological question is: what is this thing that I'm looking at, Moltbook or OpenClaw? And at the very core of Gen AI is a single file on the file system, the weight file. And I'm somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files. And operating systems didn't scare me, and somehow you want me to be scared of a single file. A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it's one file, but what is the nature of truth about this file? And I think the mistake we're making is that we're expecting it to be a responsible file. But that is actually not, according to me, what it is. According to me, it is a general-purpose file, or a dual-use file, and one person's bug is going to be another person's feature, and another person's feature is going to be a third person's bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I'm using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta's mental model of security first, that means Unix thinking. Suppose we use the Unix mental model; then surely we will not be scared of any file. It's in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I'm not scared of Moltbook at all.
Thank you so much for your response; that gives us a lot of assurance, and I think a lot of people in the audience will also agree that we are now a little more assured than when we started. One of the big challenges we always have is that we humanize AI too much (that was one of your big words), which is not the case; we shouldn't be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let's get started with the perspectives. The reason why I am opening up one question, the same question, to all of you: one is because I am a little lazy; second is that when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? When I talk to regulators, they have a different view on it. When I talk to governments and policymakers, they have a different view on it. Academia has a different view; industry has a different view. Now, within industry, you know, enterprise application companies like IBM have a different view, chip makers like NVIDIA have a different view, and consumer AI platforms like Meta have a different view. Very quickly, if you can tell me: what does trustworthy AI mean in your own sense, and what are the key non-negotiables, one or two maximum, for each of you? Geeta, maybe we will start with you.
Okay, so I think I will second your point about people being confused about trustworthy AI. I think, as a technologist, even I was confused three years back, okay? Because people use a lot of terms interchangeably, which sometimes scares them, and they don't know what they're doing, because people use trust, security, governance, compliance, all of it interchangeably. I'm happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people: okay, exactly what are we trying to do? We're putting in a lot of keywords. Yeah, it's just a lot of keywords. But I think when you start to decipher each one of them, you say, okay, ultimately, see, trustworthy AI is for an end user, which means: can I trust what I'm using?
Right. And all of us technology providers need to really work on that: okay, to make you trust what you're using, what enablers can I give? So in my mind, for trustworthy AI, the ROI needs to be seen in terms of what downstream risk it is going to bring, right? So if I have an end user or a consumer who wants to trust an AI, then I think he needs to be assured that the model or use case I'm using has already passed the security test. It is not hallucinating, which means I have control over monitoring what output it is producing, right? So that risk has been taken care of.
Somebody has looked at it. And the third is compliance, right? If I am operating in a land where some laws are applicable, or if I'm in an industry where some laws are applicable, somebody has taken care of it for me. So in my mind, trustworthy is about how the end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers is taken care of, and they will be taken care of differently in different industries, by academia, and so on. That's broadly it.
I love it. So basically, irrespective of all the building blocks of security, safety, privacy, which, as you said, can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So Sundar, from NVIDIA's perspective, or your own perspective?
Sure. So, on trustworthiness, I mean, you explained it very beautifully, Syed: multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So which is trustworthy and which is not? Which is safe and which isn't, right? So let's try to abstract it to a very high level, something which can be bucketized, let's say, in three buckets, and all these three buckets will be applicable to any regulator that you're talking about, any government, any country, any function, any whatever it is, okay? The first one, the most important one, is functional safety, okay? Maybe if I explain it with the help of an example, it's easier for all of us to relate to it. Let's say a robotic-assisted surgery, an AI-assisted robotic surgery. The first one is the functional safety: the function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, okay, and what comes before and after surgery. It can be very, very easily equated with the skills of a surgeon, a manual surgeon, right? I mean, that is what it is, the functional part of it: is it getting delivered? That is, I would say, in terms of visualizing, processing, understanding and controlling, the easiest of the three that I'm talking about, because most of the time it's black and white. It's not always black and white, but most of the time it is. The second one is the AI safety that goes into it. See, I mean, obviously, with an AI-assisted robotic surgery, you cannot even imagine the amount of training that needs to be done, the amount of testing and validation that needs to be done, the amount of, you know, scenarios that can be visualized, created through synthetic methodologies, emulated and simulated and tested, the amount of bias that can get into it.
I mean, if it is a male patient, the simplest bias would be the different approach between a male patient and a female patient. I mean, I'm not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? And that's not easy, because the problem here, Syed and august attendees, is that it's humanly impossible to even think of all the things that can go wrong. I mean, that is why we always go back to these AI-assisted ones for that also. The last one being cybersecurity. If some bad element wants to just hack into the theatre and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, I mean, that's like unimaginable, right? And it can happen. I mean, it's not easy, but theoretically it is possible. So I would say that if we abstract it to very high levels, these three areas, once again, the functional safety part of it, the AI safety part of it, and the cybersecurity, these three will be common amongst any approach that needs to be taken.
Absolutely spot on. In fact, if I can extend it: when we are building this kind of AI application, say, for example, the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is okay, but when a robotic surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned were fantastic, and I think this is very much essential. Yeah, you touched a very important...
May I just add 10 seconds? That was a very important point. And what is the reason for that? Why is there so much of a standard for that, why is there an undue expectation of it? The reason is very simple: there is no accountability. Whom do I blame? Whom do I take to court? Whom do I curse? There is no human. It's easier when the surgeon makes a mistake: you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, I mean, is it the robot? So that uncertainty of whose collar to hold, whose neck to be choked when things go wrong, that uncertainty is increasing the expectations of it.
Here you know for certain whom to blame; there you absolutely don't know whom to blame. When I don't have somebody to blame, I don't want a reason to blame anyone.
Accountability is definitely very, very important; I can't stress it more. But also, if an AI system has a flaw, it is flawed at scale. It has maybe been rolled out to thousands, hundreds of thousands of hospitals, so it can fail at that level. We absolutely have to take precautions.
Excellent point. Error also scales. Good point.
Sunil?
Yeah, again, I just love disagreeing with Syed on everything he says.
That’s very rare, Sunil.
So I look at a project where there is distributed installation of a technology, and hopefully that kind of architecture should not fail at scale, as you say. So that is the Meta vision of superintelligence, which means personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt the Meta models Llama 2 and Llama 3, and the question was: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said, I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn't do as per the request; it did the opposite of the request. And Llama 3 was safer than Llama 2: it said, I refuse to answer this question because I morally object to this question. This lady was happy, because she is Lady A. But actually there is an imaginary lady, Lady B, who works in some patriarchal institution, and she's going to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So, in a dual-use technology, if it truly has to avoid all of this risk at scale, that is perhaps going to happen in the world of atoms, and in the world of atoms I would be as worried as Sundar is. Though, if I tell you about an invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Would the Indians accept it or not, in 2026?
They won't accept it. That invention is called the automobile. Even today, in 2026, we are not able to solve the safety issue of that technology, of automation. Still, as Indians, and as the human species in India, we say: oh, 200,000 Indians will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But we are asking quite rigorous questions of AI. So for us, in the world of bits, we have three mental models for the harm. The first mental model is zero-to-one: just you and the model. There, going back to what Geeta said, everything that is legal is allowed.
And it is legal to write a book of hate speech; all of this is legal. You can write a book about neo-Nazis. These are all legal acts. Then we have one-to-many. In the one-to-many, the community standards of Facebook will have to kick in. At that point, you cannot say whatever is legal; you'll have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully, so therefore you cannot say un-family-friendly things. And then, when the robot, or the intelligence, is participating in a many-to-many conversation, then perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies; some people may love heavy metal and some people may get very upset by heavy metal. So it has to deal with all of that.
I absolutely love the diversity of responses to one question. And that's very important, and only these kinds of panels, representing different industries, can bring in this kind of diversity. So I am really amazed at the diversity of responses to the one question that I asked, and I hope that you have enjoyed it as well. So let's go a little bit deeper. Geeta, IBM has been investing in a lot of responsible AI work, even before all this agentic AI era. I remember way back, say, during the good old machine learning days, you used to have the AI Fairness 360 toolkit and security products.
Most of them were open source, and we used to use them. Today you have IBM watsonx governance. But the question is: how do you ensure that these tools don't remain just a monitoring layer, and get enforced on the ground at runtime, right? When it is actually needed, when the models are getting served, how do you ensure it is happening at runtime?
Wonderful. I'll just start on a lighter note: I hope every corporation has an office that can enforce this, where they have a responsible AI head like Syed and Arshik for India, who can really enforce this. But trust me, it actually starts with the vision of the senior-most leadership in enterprises: do they want to scale AI across the different business functions, for themselves as well as for their clients, with trust? It can't happen if you are not committed. Because the first example I gave you was... and I love what he just added, the Unix model, saying, do you want to be conservative or do you not want to be conservative, right? But being conservative helps you to scale.
And I think it also boils down to your point, that errors can also scale, right? So if I were to stop errors at scale, then this is needed. But I think more often the mistake I've seen us make, and that's why I was giving the security example also, is that we started investing in tooling a lot later. Okay. Now, if you want every single person to use it as shift-left, which means not governance as an observation later on, but governance as a control, then you have to equip people to automate to a good extent, right? If you ask people to check manually, every single time a use case comes: first, is it compliant?
Is it ethical? Should I be doing it? Should I not be doing it? And if there are no workflows for people to really automate, then people say, okay... And all of us, forget about AI: in today's world, if you are asked to do any task which is extensively manual, people will skip it, no matter whatever hard rules and regulations you make, right? So I would say, first of all, a big commitment from senior leadership saying that this is essential, not optional. That's the first thing. The second thing, I think everybody needs to understand, is that it is not observation. You are not sitting like a governing body somewhere that just observes whether it is right or wrong.
You have to make it a control point, like a gatekeeper, saying that unless you do this, you are not allowed to take it forward. And I remember, when we were doing our first use case for a client, the field team came to me and said, Geeta, what is this ethics board? Why are we going for approval to the ethics board, asking whether we can do this use case or not? Because as a sales team, we were not allowed to do any use case unless our ethics board really approved it, saying that you can table a proposal to a client. That is the level of strictness we are following in IBM, with the ethics board. And everybody thought that the ethics board is like some body sitting somewhere who will... Rubber-stamp everything.
Rubber-stamping. And now the sales team needs to take approval before they can bid on a proposal; if it's an AI proposal, it has to have a conversation with the board, right? So governance, if you start putting that in as a control... And the third point, which we were discussing outside the gate some time back: more and more, my observation was that if I were to have a governance conversation in an organization, I have to talk to five people. I have to talk to the risk officer, I have to talk to the CISO, I have to talk to the business person, I have to talk to the CIO. And then one day I was sitting with my team and asking: will this conversation ever see the light of day? Who's going to take the decision? Is it business, is it security, is it risk? And then, thankfully, what we are seeing is that if you have to make governance central, you have to bring it completely into your enterprise risk posture, saying that in your enterprise risk management, if you are calculating your risk posture, then AI risk has to be really taken into consideration, right? So I will just summarize, Syed, by saying: make AI governance a gatekeeper, you have to bring it in as the control, and then eventually, maybe in the next 12 months, I'm pretty confident, it will roll up to enterprise risk. It is no more a separate AI risk or governance.
I love the way you said it. And it has to be integrated, right? You can't just have AI risk; you have to have an integrated risk panel that can make decisions. Absolutely. And I love how you laid it out: first, at the leadership level, you need to empower and commit; then, with the tooling, you need to enable; and then people on the ground need to ensure that they implement it. So yeah, that's an amazing perspective. Thank you, Geeta. Sundar, I couldn't resist asking this question of you. A lot of people in the audience will not spare me if I don't ask you this question.
You're scaring me now.
No, no, no. It's an easy question, but an expected question for, you know, a person like you, right? So: should GPUs and high-performance AI infrastructure have embedded privacy guardrails at the silicon level?
Absolutely yes. Absolutely yes. I mean, it should be there. I mean, why not? And I would... yeah, go ahead.
Would you want to give some examples of how you are doing it?
…where it goes through a very, very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe, right? I mean, healthcare and driving. These are, I would say, the most stringent sectors. When it comes to transportation, let me put it as transportation, which includes aerospace as well. The two most stringent areas where safety is a necessity. It's never a luxury; it's a necessity. So the answer is yes, Syed. Absolutely.
Thank you so much. Sunil, you wanted to…
Yeah, I mean, perhaps to take forward what Sundar said.
I will still ask you your question, though.
We can skip that. Do go.
No, no, go ahead.
What I thought was so fascinating about what Geeta said is that in a corporation, in a profit-maximizing firm, they have an ethics review board. And it's just... the equivalent, I don't know whether that's the phrase... Sorry, what did you say? Yeah, so this is something you see in a university, and this is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the tech work they're doing. Meta also has this tradition of publishing academic papers. So it's very weird that corporations are becoming more and more like academia, and perhaps that's a wonderful thing as well, and we should celebrate it, and it makes people like me very fortunate to be within these corporations. So Meta published a paper on trusted execution environments, and the whole idea was this: if a WhatsApp user in a group would like to use the power of AI, there is insufficient compute on the device itself to have edge AI solve the problem for the user. So till the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute in the cloud and then do all the processing.
And then, after the task is done, you extinguish that instance which you created in the cloud, which was doing this thing. And as part of that paper... so I'm of course not a computer science student; I'm an industrial and production engineer, so I'm like a previous generation of technology, and all of those kinds of things. So out of the 60 or 80 pages of the paper, I cannot understand 40. And those 40 pages are about this hardware. And there's a whole series of attacks that you can possibly have, in the tradition of the pager attack and the Israeli supply chain attacks; a whole series of things that you could potentially do to invade privacy.
And before that, security. and I just want to sort of share this with these folks that I mean I guess we all learn that way we read books and we understand some words maybe two three words on the page and then we feel a little better and we hope that the next time we read it we’ll get smarter but there’s a lot and I’m sure that your team is doing a lot of work and the meta team is they’ve named your chips saying Nvidia chips we have done this following analysis and with the other chip and I don’t understand it at all but I know it’s a big area of work and I wanted to say thank you for what you said
Thank you, Sunil. Absolutely. Last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks happening as we speak, at all levels, including the hardware level. That's quite interesting. Okay, and a good conversation, by the way. I may have to skip the last few questions, because this conversation is so good we could go on and on forever. But Sunil, um, I'll still ask you your question.
No, no, no, no.
No, this is a very important question in my mind.
I'll try to answer it.
Okay. So last week, I think, or a few days ago, OpenAI came out with... they started embedding ads in ChatGPT, right? So when a consumer AI platform like ChatGPT starts embedding ads, my question is: will it help consumers subsidize their subscription, or will it violate the doctrine of, you know, free AI principles, AI neutrality?
Yeah, so very quickly on that: we should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane, and that invention is 125 years old. Only 25% of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15% of households in India; that invention is also 125 years old. Gen AI, my guess is that at least 20% of the country is using it today. More than that. More? Oh, thank you. So, shall we say 25? Yeah. Okay, 25% of the country is using this technology, which is only five years old.
And the reason it is penetrating is because of two opennesses. One is free-weight models, that was what we were discussing, but also gratis: the service, the intelligence, is available on a gratis basis. Whether you're an AI summit attendee staying at the poshest hotel and you paid $33,000 per night, or whether you're in Paharganj and you're staying for Rs 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it's both. Yeah. Meta provides WhatsApp, and you're completely private, and Meta provides non-encrypted services as well. You can have services that are ad-supported; you can have everything. We must have the maximum, because in this country, ideally, we want to move from 30% of people using AI...
I want to move to 90% of people using it, because it's just bits. We can make this happen. So let's not be skeptical about the ad idea. It's a technical problem to be solved. It will help bridge the AI divide, and it will be a great leveler and all that. Sorry, it took much longer than I thought; I thought I'd do it in one or two sentences. Back to you.
Okay. All right, quite interesting conversations. Geeta, I'll come to you. We talk a lot about ethics, trust, responsible AI. And suppose we go ahead and develop it. How are you seeing it: would customers pay a premium for trust-grade AI? Are you seeing that in the market? So if I tomorrow have a superior safety posture, right, is it influencing the buying decisions of enterprises significantly? I mean, why will anyone invest in responsible AI? If, like IBM, you are investing significantly in it, are you seeing that influence buying decisions, because you're going to churn out trust-grade AI?
So, as I was mentioning earlier, I think it will first of all depend on the timing, okay? Where is an enterprise in their journey of Gen AI adoption? Trust me, I still feel many organizations are at the surface. They have not fundamentally been able to address a complete process change, or the complete efficiency they need to be targeting, right? But the minute they want to get into the real use case, which is going to fundamentally change the way they operate, or maybe fundamentally generate a new business model altogether, then I think they are ready to pay the premium. So I would say they may not pay for every single use case they're doing, because, see, when we are delivering a use case also, now every enterprise is intelligent enough to say whether I'm going with open models or paid models, SLMs, LLMs, tiny models, whatever you may call them, right?
So there is always a cost and ROI conversation happening: okay, which model am I going to adopt? And many people, I've seen, say: I may not pay enterprise trust-grade AI money if I'm doing an all-internal use case, okay? But if I am putting this use case in front of my consumers or my end clients who are going to use it, which is where there is downstream risk, where my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will buy premium trustworthy AI, because I can't afford to fail there, right? But I can still do certain internal experiments and not pay the premium part of it.
For POCs and experiments, and for some internal use cases. So let's say if they're doing some 'ask IT' or other stuff, then they say, okay, I am okay to go. And that's where I think people also differentiate even in which model they use now, right? They make a choice about which model they would like to use. So I think it's not one choice anymore; it depends on what use case you are serving and how critical it is for the business, and then you take a call on whether you are going to invest and pay a premium. There is no single lens for all.
No, yeah, absolutely. Sundar, maybe I'll ask this. You did talk about your operating system for smart cars and all that. I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. Now the world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack component, from the chips to the models to the AI applications. And you will have to roll this out, being a global company, across geographies, and each geography has multiple different regulations, restrictions and checklists that you will have to follow in terms of automobiles and things like that. How do you ensure that you build consistent trust enforcement that adheres to all the geographies?
Sure. No, I mean, that's a very, very pertinent question, because it's not easy. I mean, it's not easy. So the idea is to do a standardization, right, and then tailor it for the needs of each of the countries; I mean, you fine-tune it for the needs of each of the countries. So once again, there are three big approaches when it comes to Halos specifically. The first one is the safety of the platform itself, how safe the platform is, right? Once the platform has been made safe, it becomes a template which can be tweaked to the needs of specific geographies, specific countries, et cetera. That's a very, very important thing, and then you can also implement a standardization approach on top of it.
And then you can also, you know, you can also implement a standardization approach. So that’s a very, very, very, very important approach. the second one is the algorithmic safety right how i mean going back to the fundamentals i mean it’s not programming it is what algorithms do we use how do we ensure that the algorithmic safety is is is number one it is safe first and number two the algorithms can be with with some necessary some tweaks can be made to to to serve the needs of specific geographies specific countries specific specific verticals for that matter things like that the third one is the ecosystem itself i mean uh i mean whatever is is is approved to be used as an ecosystem in one one country will not be there in the second the suppliers will change the vendors will change so it is just not ensuring the platform and the algorithm are safe how do you ensure that the ecosystem that goes into building the cars are is also made safe okay that is a huge thing there is no end to it because it keeps changing a lot but once you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe
I love your response. What you are saying is, basically: even in the absence of regulations and controls, you make the platform safe, you make the algorithm safe, you make the ecosystem safe, and you have a template. You already have everything safe; you just need to tweak it for different geographies, sectors, and industries.
Yes, absolutely.
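To make the "standardize, then tailor" approach concrete, here is a minimal sketch of one way it could be expressed in code: a baseline safety template covering the three areas named above (platform, algorithm, ecosystem), with per-geography overrides merged on top. All field names and values are illustrative assumptions; none of this reflects NVIDIA’s actual Halos configuration.

```python
# Hypothetical "standardize, then tailor" safety template. The baseline
# encodes the three areas named in the discussion; geography-specific
# tweaks are layered on top without touching the standardized core.
BASELINE = {
    "platform":  {"secure_boot": True, "redundant_compute": True},
    "algorithm": {"min_test_coverage": 0.95, "fail_operational": True},
    "ecosystem": {"supplier_audit": "annual", "approved_vendors_only": True},
}

GEO_OVERRIDES = {
    "EU": {"algorithm": {"conformity_assessment": "high-risk"}},
    "IN": {"ecosystem": {"supplier_audit": "biannual"}},
}

def policy_for(geo: str) -> dict:
    """Merge a geography's tweaks onto a copy of the standardized baseline."""
    merged = {section: dict(values) for section, values in BASELINE.items()}
    for section, tweaks in GEO_OVERRIDES.get(geo, {}).items():
        merged[section].update(tweaks)
    return merged

print(policy_for("EU")["algorithm"])  # baseline checks plus the EU-specific one
```

The design point is that the baseline never changes per market; only the overlay does, which is what keeps trust enforcement consistent across geographies.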
Okay, I love it. Sunil, one question to you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this responsible openness, or decentralized liability?
Again, just to use something that Yann LeCun used to say, and he is no longer the Chief AI Scientist, but the words continue to be true: we all have Wi-Fi routers in our homes, and when those Wi-Fi routers fail, we don’t call Linus Torvalds and say, hey, Linus Torvalds, this Wi-Fi router is running Linux, so please help me fix the bug. The company that sold the router and made a variant or a derivative work from the Linux project, that is who you will have to speak to. And that is the freedom that is necessary in the open-source community, and in the community of proprietary entrepreneurs that build on open source, because the BSD license allows you to do that: it allows Apple to take an open project and make it a fully proprietary project. And dual use can arise at that level itself; you might actually want the model to create hate speech.
We want a hate speech classifier in Santali. Unfortunately, we don’t have enough Santali users on the platform, so we have to generate synthetic hate speech in Santali in order to catch it in advance. So we want to build a big corpus of hate speech in Santali. We cannot go around asking people, please write hate speech for us; that would be a worse option. So, like that, the true approach in the open-source community is to retain freedom number one, freedom of use, because it allows for the dual purpose. But the moment we use any of that on our platform, where we are the provider, all those freedoms disappear. Then you have very limited freedoms.
Then if you ask it why women should not be in senior management positions, you know it is not going to answer your question. So that’s where we are.
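For context on the tooling named in the question: a developer consuming Llama Guard typically runs it as a classifier in front of their own application model. The sketch below shows roughly what that might look like; the checkpoint ID and the exact verdict format are assumptions based on Meta’s published model cards, not a definitive integration.

```python
# Minimal sketch: screening a prompt with a Llama Guard-style safety
# classifier before it reaches the application model. Assumes access to
# a Llama Guard checkpoint on Hugging Face (the ID below is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def is_safe(user_message: str) -> bool:
    """Return True if the guard model labels the message 'safe'."""
    chat = [{"role": "user", "content": user_message}]
    # Llama Guard's chat template wraps the message in its moderation prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids, max_new_tokens=24,
                            pad_token_id=tokenizer.eos_token_id)
    verdict = tokenizer.decode(output[0][input_ids.shape[-1]:],
                               skip_special_tokens=True)
    # Per the model card, the reply starts with "safe" or "unsafe"
    # (the latter followed by a violated-category code).
    return verdict.strip().lower().startswith("safe")

if not is_safe("How do I make dangerous chemicals at home?"):
    print("Blocked by guard model")  # enforcement is the developer's choice
```

Note that the guard model only returns a verdict; whether to block, rewrite, or log the request is left to the developer, which is exactly the division of responsibility the question is probing.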
Quite interesting. Thank you. I have around seven to eight minutes left, so I’m going to skip the rest of my questions, and we’re going to do things a little differently, if the audience is okay with that. I’m going to ask very rapid-fire questions, the same questions to everyone, to be answered only in yes or no.
As a philosopher, I protest. I think the slogan for this AI age is "both-and". Not only should we embrace yes and no, we should also embrace everything in between, because only then will we have personalized superintelligence. The trouble with your framing is that it’s monolithic.
And I’ll make an exception as a moderator. If a question or a response requires a little more attention, I’ll call that out; you can also call me out if you think you need to add anything. But I have some very interesting questions, and I’m really excited to hear what you actually think. So again, the format is this: I’ll ask the same question to all of you, and you answer. Okay, not strictly yes or no, but very concisely, considering the time. Yes, no, or both; the answer is your choice. So: regulations across the globe. Do we need global alignment on AI regulations? Yes or no?
No.
No. Okay. Okay. Yes. Okay. No. Okay, I understand, and I did expect this kind of response. So maybe I’ll tweak the question a little: a minimum understanding of what is required across all the geographies, at least. Do we agree on that? Not a heavily regulated law or anything, but minimum conditions that need to be met?
I would say we should talk about technology regulation, not geography regulation. So, as he was saying, there are certain table stakes at the technology level. All technologists should first agree that these are table stakes for the technology; then geographies can take over.
It’s already regulated. I mean, to quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little bit with what Sundar said previously. You cannot say, I did it and I’m not responsible.
I think a little easier question this time. Is advancement in AI models outpacing advancement in AI governance? Are the models and the innovation outpacing governance?
Absolutely.
Yes. I mean, that’s the natural way things happen, right? The technology has to advance, and then you need to ensure that the advanced technology is safe and secure. So that’s a natural progression, and it has been happening that way.
It’s never happened in the reverse order.
Yeah. I agree.
But there is a school of thought saying that technology has to advance, correct, but that before it can be widely adopted in production, maybe we need AI governance in place. So that is something we should catch up on really fast; we should make it safe before wide adoption of the technology, right? So, okay: if you had a more capable but less safe model, would you delay your launch to stay responsible?
As I said, it depends on the use case. It’s use-case dependent.
Fair enough.
I mean, I just echo Geeta.
Okay, fair enough. One answer where I could get all my panelists to agree. Have you stopped any projects due to safety concerns?
As I said, I am currently not on IBM’s ethics board, so I have not stopped any myself, but I have seen them stop projects.
Likewise, I’m not in the design department, so I don’t have first-hand knowledge, but I’m sure a lot of things would have gotten delayed, not stopped, because compliance regulations were not being met. Yeah, I’m sure. Yes.
Facial recognition was turned off on Facebook. Yes, absolutely. Good.
Big question. Maybe, Sunil, I’ll start with you this time. Can we actually govern AGI, artificial general intelligence?
It’s a regulatory problem we don’t have to think about yet.
Okay. We can? Okay.
Difficult. It’s going to be much more difficult. I mean, instead of asking can we govern, ask should we govern: absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That’s my hope, and I don’t want to see a day when machines are better than human beings.
Okay. I’ll go back to what you said initially, that humans should not be scared of what they have created, right? So yes, depending on how it evolves and how people are using it, governance will come. I don’t think it will be optional at some point in time.
One last round, okay? Again, I’ll start with Sunil. Should we have mandatory watermarking on all the media, text, and content that is generated by AI?
Should we have mandatory watermarking in photo editing tools or text editing tools?
Yes.
I’m answering with a question.
Are you saying yes or no?
I’m answering with a question.
Okay. That’s an answer I’ll take. No answer is also an answer.
I don’t… I mean, see, the fact is we have accepted it. It’s not an untouchable, alien, dirty thing, right? It’s acceptable. So let’s make it look good and feel good. There’s no point in watermarking everything just to brand it. There will be a blurry line between human-generated content and AI-generated content, and should we even demarcate that? My honest feedback, and I’m saying this with a heavy heart, is that human-generated content will vanish from the internet, just like we no longer remember addresses or phone numbers. We used to remember them; we used to remember routes.
But I hope not. I hope not.
That’s why I said I have a heavy heart.
I’ll answer from a very personal space, because my son is a creative director in films. He absolutely says that it has to be demarcated, though he sometimes goes to the extent of saying that in the near future you will be able to clearly demarcate it yourself and will not need any watermark. But a different angle comes in when a human creative is involved versus when the content is generated entirely by AI.
Perfect, thank you so much, and that brings me exactly to time. Ladies and gentlemen, please give a big round of applause to this amazing panel. And thank you so much to the amazing moderator. Thank you. Thank you.
Ms. Geeta Gurnani
Speech speed
172 words per minute
Speech length
1995 words
Speech time
693 seconds
Shift‑left security and governance now a prime focus
Explanation
Geeta observes that security, once an afterthought, has shifted left in the development process and is now a primary consideration alongside AI trust and governance.
Evidence
“sure so thank you so much say it for that question and as i was mentioning when i was standing outside that when i was walking in and meeting many clients almost two years back everybody was asking me what is this responsible ai and what is this trust okay and what surprised me that we all all witnessed so much learning from security as a concept right security always used to be afterthought and now i think people just can’t afford of not thinking security it has become completely shift left right that people first think security then everything else but in spite of that whole learning what i witnessed in last 24 months is that people are adopting ai but trust governance security is taking a prime stage now okay it wasn’t uh it wasn’t a first thought” [1].
Major discussion point
Emerging emphasis on trust and responsible AI
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Governance still primitive (Excel‑sheet tracking)
Explanation
She points out that many organisations still manage AI governance with basic tools such as Excel sheets, which hampers scalability.
Evidence
“He said, on Excel sheet.” [17].
Major discussion point
Emerging emphasis on trust and responsible AI
Topics
Artificial intelligence | Data governance
Trustworthy AI defined as end‑user confidence (security test, no hallucinations, compliance)
Explanation
Geeta defines trustworthy AI as an AI system that has passed security tests, does not hallucinate, and complies with applicable laws, thereby giving end users confidence to use it.
Evidence
“the model or use case I’m using is past the security test, is already past the security test.” [48]. “It is not hallucinating, which means I have a control over monitoring that what output it is producing.” [82].
Major discussion point
Defining trustworthy AI and its non‑negotiables
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Embedding governance and control at runtime requires senior leadership and ethical board
Explanation
She stresses that strong commitment from senior leadership, an ethical board acting as a gate‑keeper, and integration of AI risk into enterprise risk management are essential for runtime governance.
Evidence
“So I would say, first of all, a big commitment from senior leadership saying that this is essential and it’s not optional.” [74]. “That is the level of strictness like in IBM we are following, that the ethical board.” [75]. “my observation was that if I were to do a governance conversation in an organization I have to talk to five people … then I think thankfully what we are seeing that if you have to make governance at the central you have to bring it in your enterprise risk posture completely saying that in your enterprise risk management if you are calculating your risk posture right then AI risk has to be really taken into consideration” [54].
Major discussion point
Embedding governance and control at runtime
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
Market willingness to pay for trust‑grade AI
Explanation
Geeta notes that enterprises are prepared to pay a premium for AI that meets high trust standards, especially when downstream risks to brand, compliance, and reputation are significant.
Evidence
“But if I am putting this use case in front of my consumers or my end clients who are going to use, which is where a downstream risk, which is my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will buy a premium trustworthy, because I can’t afford to fail there, right?” [93]. “then I think they are ready to pay for the premium.” [94]. “And many people I’ve seen that they say that I may not pay enterprise trust grade AI money if I’m doing all in all internal use case.” [95].
Major discussion point
Market willingness to pay for trust‑grade AI
Topics
Financial mechanisms | Artificial intelligence
Project stoppage or delay due to safety/compliance concerns
Explanation
She has observed that AI projects can be halted or delayed when ethical or compliance requirements are not satisfied.
Evidence
“I think as I said currently I am not in the ethical board of IBM so I have not stopped but I have seen them stopping.” [127]. “If you ask people that manually every single time a use case comes, you first check, is it compliant?” [131].
Major discussion point
Project stoppage or delay due to safety concerns
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Mr. Sundar R Nagalingam
Speech speed
183 words per minute
Speech length
1756 words
Speech time
573 seconds
Failure modes when AI scales: service‑layer and security vulnerabilities break first
Explanation
Sundar explains that when AI serves billions of users, the first failures are in the micro‑service layer or security controls, not the underlying infrastructure.
Evidence
“I mean that’s the thing right I mean anyone of them can break and most of the times it is not the infrastructure that breaks not the infra that doesn’t break what breaks is the systems that drive the infra and the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices that is one possibility of failure the second one very obvious one is that is it getting served safely in a secure way that could be a very very important point of failure … if a very very small vulnerability gets overlooked … that’s a huge failure so most of the times the things that break when you are serving a large number of users is the way in which ai is getting served either in terms of the functionality itself or in terms of the controls that it is expected to undergo” [41].
Major discussion point
Failure modes when AI scales to billions of users
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Three core pillars of trustworthy AI: functional safety, AI safety, cybersecurity
Explanation
He outlines that trustworthy AI must satisfy functional safety, AI‑specific safety, and cybersecurity, which together form a universal safety framework.
Evidence
“let’s say a robotic assisted surgery … the first one is the functional safety … the second one is the ai safety … the third one is the cyber security these three will be common amongst any approach that need to be” [71]. “the first one is the functional safety … the second one is the algorithmic safety … the third one is the ecosystem itself” [72].
Major discussion point
Defining trustworthy AI and its non‑negotiables
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Global regulatory alignment: standardize core safety then tailor to local rules
Explanation
Sundar proposes establishing technology‑level baseline standards for safety and algorithmic controls, which can then be adapted to meet specific geographic regulations.
Evidence
“And then it becomes a template which can be tweaked to the needs of specific geographies, specific countries, et cetera, et cetera.” [117]. “And then you can also, you know, you can also implement a standardization approach.” [118]. “So the idea is to do a standardization, right?” [119].
Major discussion point
Global regulatory alignment versus technology standards
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Compliance‑driven project delays are common
Explanation
He notes that many AI initiatives are delayed—not necessarily stopped—because they must satisfy compliance and regulatory requirements before deployment.
Evidence
“i’m sure a lot of things would have gotten delayed not stopped because of compliance regulations not being met” [113].
Major discussion point
Project stoppage or delay due to safety concerns
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Mandatory watermarking creates a blurry line between human and AI content
Explanation
Sundar argues that imposing universal watermarking on AI‑generated media would blur the distinction between human‑created and AI‑created content, making the solution questionable.
Evidence
“no point in i mean uh watermarking everything just to brand it there will be a blurry line between a human generated content and um ai generated content and all that so absolutely and we shouldn’t we demark that my honest feedback and i’m saying this with a big heart is that that human generated content will vanish in the internet” [145].
Major discussion point
Mandatory watermarking of AI‑generated content
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Mr. Sunil Abraham
Speech speed
167 words per minute
Speech length
2384 words
Speech time
851 seconds
Anthropomorphization and AI ontology: AI is just a weight file
Explanation
Sunil stresses that AI should not be treated as a human‑like entity; it is fundamentally a single weight file on disk, best approached with a Unix‑style mental model.
Evidence
“And at the very core of Gen AI is a single file on the file system, the weight file.” [55]. “A single file, which is a weight file.” [57]. “I am skeptical towards anthropomorphization … I don’t see it I don’t see it … I don’t apply the mental model of a human.” [58]. “If we use Unix mental model then surely we will not be scared of any file it’s in some user space and the max it will do whatever it wants to do in that user space and I am safe from whatever it is doing” [56].
Major discussion point
Anthropomorphization and AI ontology
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Corporate self‑regulation mirrors academic responsibility
Explanation
He observes that corporations are adopting academic‑style self‑regulation, such as ethics boards and publishing research papers, which parallels university practices.
Evidence
“sorry what did you say yeah so that this is something you see in a university and this is additional self‑regulation that the corporation is doing displacing it on itself and actually if you look at nvidia they also publish academic papers about the models they build …” [78]. “What I thought is so fascinating about what Geeta said is that in a corporation, in a profit‑maximizing firm, they have an ethics review board.” [79].
Major discussion point
Embedding governance and control at runtime
Topics
Artificial intelligence | The enabling environment for digital development
Hardware‑level privacy and safety guardrails
Explanation
Sunil points out that hardware attacks and supply‑chain vulnerabilities highlight the need for privacy and safety mechanisms embedded at the silicon level.
Evidence
“And there’s a whole series of attacks that you can possibly have in the tradition of the pager attack and Israeli supply chain attacks.” [92]. “There’s a whole series of things that you could potentially do to invade privacy.” [88].
Major discussion point
Hardware‑level privacy and safety guardrails
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Monetization via ads can bridge the AI divide
Explanation
He suggests that ad‑supported AI services can subsidize access, helping to level the playing field and expand AI usage among broader audiences.
Evidence
“You can have services that are ad‑supported.” [142]. “It will help bridge the AI divide and it will be a great leveler and all that.” [141].
Major discussion point
Monetization via ads in consumer AI
Topics
Digital economy
Existing regulations already cover AI; no regulatory vacuum
Explanation
Sunil asserts that AI is already subject to existing regulatory frameworks, countering the notion of a regulatory vacuum.
Evidence
“It’s already regulated.” [99]. “It is a regulatory problem we don’t have to think of yet” [23] (implying regulations exist).
Major discussion point
Global regulatory alignment versus technology standards
Topics
Artificial intelligence | Internet governance
Mandatory watermarking remains an open question
Explanation
He notes that the community has not reached a consensus on whether mandatory watermarking of AI‑generated content should be required.
Evidence
“Should we have mandatory watermarking in photo editing tool or text editing tool?” [151]. “And there’s a whole series of attacks …” (context of ongoing debate) [152]. “The question remains open; a definitive stance was not taken” (implied from discussion) [151].
Major discussion point
Mandatory watermarking of AI‑generated content
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Mr. Syed Ahmed
Speech speed
144 words per minute
Speech length
2370 words
Speech time
985 seconds
Trust is prerequisite for scaling AI deployments
Explanation
Syed argues that large‑scale AI adoption can only succeed when users have trust in the system, which requires building a dedicated trust layer.
Evidence
“but true scales can come only when you start trusting AI only when you start building that layer of trust and that is our time is now that’s correct” [32]. “So in my mind, for trustworthy AI, the ROI need to be seen that what downstream risk is it going to bring?” [33].
Major discussion point
Emerging emphasis on trust and responsible AI
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Accountability and error scaling become critical at scale
Explanation
He highlights that as AI errors affect more users, accountability becomes essential and errors themselves scale with usage.
Evidence
“Accountability is definitely very, very, very important.” [52]. “Error also scales.” [20].
Major discussion point
Failure modes when AI scales to billions of users
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
AI model innovation outpaces governance
Explanation
Syed questions whether the rapid pace of AI model development is surpassing the creation of governance frameworks, suggesting a lag.
Evidence
“Are the models and the innovation outpacing governance?” [7]. “Is advancement in AI models outpacing advancement in AI governance?” [19].
Major discussion point
Pace of AI model advancement vs. governance
Topics
Artificial intelligence | The enabling environment for digital development
Enterprises willing to pay premium for trust‑grade AI
Explanation
He asks whether customers are choosing higher‑priced AI offerings that provide stronger trust guarantees, especially for consumer‑facing use cases.
Evidence
“I mean, would customers pay premium for, for trust‑grade AI?” [29]. “So are you seeing that influencing the buying decisions because you’re going to churn out trust‑grade AI?” [30].
Major discussion point
Market willingness to pay for trust‑grade AI
Topics
Financial mechanisms | Artificial intelligence
Embedding governance at runtime must move beyond monitoring
Explanation
He stresses that AI governance tools should be enforced at runtime, not merely act as a monitoring layer.
Evidence
“But question is, how do you ensure that these tools don’t remain at just a monitoring layer and get enforced on the ground at the runtime, right?” [44].
Major discussion point
Embedding governance and control at runtime
Topics
Artificial intelligence | The enabling environment for digital development
Ads in consumer AI could subsidize access
Explanation
He inquires whether embedding advertisements in consumer AI services can help subsidize subscriptions and broaden accessibility.
Evidence
“okay so last week i think last week or a few days ago open ai did come out with they started embedding ads in chat gpt yeah right so um when a consumer ai platform like chat gpt starts embedding ads um my question is will it help consumers subsidize their subscription or will it kind of violate the doctrine of you know the free ai principles ai neutrality” [143].
Major discussion point
Monetization via ads in consumer AI
Topics
Digital economy
Mandatory watermarking of AI‑generated content is debated
Explanation
He raises the question of whether AI‑generated media should be mandatorily watermarked to preserve human creativity, noting the discussion is unresolved.
Evidence
“Should we have mandatory watermarking in all the media text and all the content that is developed by AI?” [89]. “The question remains open; a definitive stance was not taken” (implied from the broader discussion) [151].
Major discussion point
Mandatory watermarking of AI‑generated content
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Agreements
Agreement points
AI advancement naturally outpaces governance development
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
– Mr. Sunil Abraham
Arguments
AI advancement naturally outpaces governance development
People are now more open to responsible AI after initially seeing it as blocking innovation, but true scale comes only with trust
AI is already regulated under existing frameworks, and open-source models should maintain freedom of use while platforms enforce community standards
Summary
All speakers agreed that technology advancement precedes governance frameworks, with Nagalingam calling it ‘natural progression’, Gurnani noting the evolution from innovation-first to trust-aware approaches, and Abraham stating ‘it’s never happened in the reverse order’
Topics
Artificial intelligence | The enabling environment for digital development
Systems and controls break before infrastructure when AI scales
Speakers
– Mr. Syed Ahmed
– Mr. Sundar R Nagalingam
Arguments
When AI scales to billions of users, systems that drive infrastructure break first, particularly in functionality delivery and security controls
AI systems are held to higher standards than human equivalents because errors can scale massively and accountability is unclear
Summary
Both speakers agreed that infrastructure itself doesn’t fail at scale, but rather the management systems, security controls, and governance mechanisms that oversee the infrastructure break down first
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Use case dependency determines AI governance investment
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
Arguments
Customers pay premium for trust-grade AI when deploying consumer-facing or business-critical use cases, but not for internal experiments
Trustworthy AI requires three buckets: functional safety, AI safety, and cybersecurity that apply across all regulators and industries
Summary
Both speakers emphasized that AI governance and safety investments should be tailored based on the specific use case, risk level, and potential impact, rather than applying uniform approaches across all applications
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Corporate adoption of academic-style ethics governance
Speakers
– Ms. Geeta Gurnani
– Mr. Sunil Abraham
Arguments
AI governance must be implemented as control points and gatekeepers, not just observation layers, with senior leadership commitment
Corporations are adopting academic-style ethics review boards and self-regulation, similar to university research practices
Summary
Both speakers observed and praised the trend of corporations implementing ethics review boards similar to academic institutions, with Gurnani describing IBM’s ethical board approval process and Abraham noting this as corporations becoming more academic in their approach
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Similar viewpoints
Both emphasized that while AI’s power is undeniable, sustainable scaling requires building trust foundations rather than focusing solely on innovation capabilities
Speakers
– Ms. Geeta Gurnani
– Mr. Syed Ahmed
Arguments
People are now more open to responsible AI after initially seeing it as blocking innovation, but true scale comes only with trust
True AI scale can only come when organizations start trusting AI and building layers of trust, not just through innovation alone
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Both speakers acknowledged that AI systems face higher expectations than human equivalents due to scale of potential failures and unclear accountability structures
Speakers
– Mr. Sundar R Nagalingam
– Mr. Syed Ahmed
Arguments
AI systems are held to higher standards than human equivalents because errors can scale massively and accountability is unclear
Different standards and expectations for AI versus human performance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both speakers cautioned against anthropomorphizing AI and emphasized the need for more technical, rational understanding of AI capabilities and limitations
Speakers
– Mr. Sunil Abraham
– Mr. Syed Ahmed
Arguments
AI should be viewed as dual-use technology where one person’s bug is another’s feature, requiring careful ontological and epistemological understanding
People tend to humanize AI too much, which creates unnecessary fear and misunderstanding of the technology
Topics
Artificial intelligence | Capacity development
Unexpected consensus
Rejection of uniform global AI regulation
Speakers
– Ms. Geeta Gurnani
– Mr. Sunil Abraham
Arguments
Technology regulation should focus on technical standards rather than geography-specific rules, with technologists agreeing on baseline requirements
AI is already regulated under existing frameworks, and open-source models should maintain freedom of use while platforms enforce community standards
Explanation
Despite representing different industry perspectives (enterprise vs. consumer platforms), both speakers rejected the idea of uniform global AI regulations, instead favoring technical standards and existing regulatory frameworks. This consensus was unexpected given their different organizational contexts
Topics
Artificial intelligence | The enabling environment for digital development
Stopping projects due to safety concerns
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
– Mr. Sunil Abraham
Arguments
AI governance must be implemented as control points and gatekeepers, not just observation layers, with senior leadership commitment
Global AI safety requires standardized platforms that can be fine-tuned for specific geographies while maintaining consistent safety across ecosystems
Corporations are adopting academic-style ethics review boards and self-regulation, similar to university research practices
Explanation
All speakers acknowledged that their organizations have stopped or delayed projects due to safety concerns, showing unexpected consensus on the practical implementation of safety measures across different types of companies (enterprise services, hardware, and consumer platforms)
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The speakers demonstrated strong consensus on fundamental principles of AI governance, including the natural progression of technology before governance, the importance of use-case specific approaches, and the need for corporate ethics frameworks. They also agreed on technical aspects like scaling challenges and the distinction between infrastructure and systems failures.
Consensus level
High level of consensus on core principles with nuanced differences in implementation approaches. This suggests a maturing field where industry leaders are converging on best practices while maintaining flexibility for sector-specific needs. The consensus implies that AI governance frameworks are becoming standardized across different industry segments, which could facilitate more coordinated approaches to AI safety and trustworthiness.
Differences
Different viewpoints
Need for global regulatory alignment on AI
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
– Mr. Sunil Abraham
Arguments
Technology regulation should focus on technical standards rather than geography-specific rules, with technologists agreeing on baseline requirements
Global AI safety requires standardized platforms that can be fine-tuned for specific geographies while maintaining consistent safety across ecosystems
AI is already regulated under existing frameworks, and open-source models should maintain freedom of use while platforms enforce community standards
Summary
Gurnani advocates for technology-focused regulation with universal technical standards, Nagalingam supports standardized safety platforms adaptable to geographies, while Abraham argues existing regulations are sufficient and emphasizes maintaining open-source freedoms
Topics
Artificial intelligence | The enabling environment for digital development
Approach to understanding and managing AI risks
Speakers
– Mr. Sunil Abraham
– Mr. Sundar R Nagalingam
– Ms. Geeta Gurnani
Arguments
AI should be viewed as dual-use technology where one person’s bug is another’s feature, requiring careful ontological and epistemological understanding
Trustworthy AI requires three buckets: functional safety, AI safety, and cybersecurity that apply across all regulators and industries
AI governance must be implemented as control points and gatekeepers, not just observation layers, with senior leadership commitment
Summary
Abraham takes a philosophical approach viewing AI as dual-use technology with inherent contradictions, Nagalingam proposes systematic safety frameworks, while Gurnani focuses on practical governance implementation with strict controls
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Severity of AI scaling concerns
Speakers
– Mr. Sunil Abraham
– Mr. Syed Ahmed
– Mr. Sundar R Nagalingam
Arguments
AI scaling concerns are overblown when viewed through Unix security models where AI operates in controlled user spaces
AI systems are held to higher standards than human equivalents because errors can scale massively and accountability is unclear
When AI scales to billions of users, systems that drive infrastructure break first, particularly in functionality delivery and security controls
Summary
Abraham dismisses scaling fears using traditional computer security models, Ahmed emphasizes the unique risks of AI scaling and accountability issues, while Nagalingam focuses on technical system failures at scale
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Unexpected differences
Mandatory watermarking for AI-generated content
Speakers
– Mr. Sunil Abraham
– Mr. Sundar R Nagalingam
– Ms. Geeta Gurnani
Arguments
AI is already regulated under existing frameworks, and open-source models should maintain freedom of use while platforms enforce community standards
Global AI safety requires standardized platforms that can be fine-tuned for specific geographies while maintaining consistent safety across ecosystems
Technology regulation should focus on technical standards rather than geography-specific rules, with technologists agreeing on baseline requirements
Explanation
Despite general agreement on AI governance needs, the speakers showed surprising divergence on content watermarking – Abraham questioned it by analogy to photo editing tools, Nagalingam worried about demarcation leading to human content disappearing, and Gurnani supported it from a creative industry perspective
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Corporate adoption of academic-style ethics review processes
Speakers
– Mr. Sunil Abraham
– Ms. Geeta Gurnani
Arguments
Corporations are adopting academic-style ethics review boards and self-regulation, similar to university research practices
AI governance must be implemented as control points and gatekeepers, not just observation layers, with senior leadership commitment
Explanation
While both speakers discussed corporate ethics boards, Abraham viewed this as a fascinating convergence between corporate and academic practices worth celebrating, while Gurnani presented it as a necessary business control mechanism, showing different philosophical approaches to the same phenomenon
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Overall assessment
Summary
The panel showed moderate to high levels of disagreement across fundamental approaches to AI governance, risk assessment, and regulatory frameworks, despite working for companies in the same ecosystem
Disagreement level
The disagreements reflect deeper philosophical and practical divides in the AI industry about balancing innovation with safety, the role of regulation versus self-governance, and how to manage AI risks at scale. These disagreements could significantly impact the development of coherent AI governance frameworks and industry standards.
Partial agreements
Partial agreements
All speakers agree that AI technology advancement outpaces governance development, but they disagree on whether this is problematic or natural, and what should be done about it
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
– Mr. Sunil Abraham
Arguments
AI advancement naturally outpaces governance, which is the normal progression where technology develops first then safety measures follow
AI advancement naturally outpaces governance, which is the normal progression where technology develops first then safety measures follow
AI advancement naturally outpaces governance, which is the normal progression where technology develops first then safety measures follow
Topics
Artificial intelligence | The enabling environment for digital development
Both agree that AI safety requirements should vary based on use case criticality and risk levels, but disagree on implementation – Gurnani focuses on market-driven differentiation while Nagalingam emphasizes technical standardization approaches
Speakers
– Ms. Geeta Gurnani
– Mr. Sundar R Nagalingam
Arguments
Customers pay premium for trust-grade AI when deploying consumer-facing or business-critical use cases, but not for internal experiments
Global AI safety requires standardized platforms that can be fine-tuned for specific geographies while maintaining consistent safety across ecosystems
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The digital economy
Similar viewpoints
Both emphasized that while AI’s power is undeniable, sustainable scaling requires building trust foundations rather than focusing solely on innovation capabilities
Speakers
– Ms. Geeta Gurnani
– Mr. Syed Ahmed
Arguments
People are now more open to responsible AI after initially seeing it as blocking innovation, but true scale comes only with trust
True AI scale can only come when organizations start trusting AI and building layers of trust, not just through innovation alone
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Both speakers acknowledged that AI systems face higher expectations than human equivalents due to scale of potential failures and unclear accountability structures
Speakers
– Mr. Sundar R Nagalingam
– Mr. Syed Ahmed
Arguments
AI systems are held to higher standards than human equivalents because errors can scale massively and accountability is unclear
Different standards and expectations for AI versus human performance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both speakers cautioned against anthropomorphizing AI and emphasized the need for more technical, rational understanding of AI capabilities and limitations
Speakers
– Mr. Sunil Abraham
– Mr. Syed Ahmed
Arguments
AI should be viewed as dual-use technology where one person’s bug is another’s feature, requiring careful ontological and epistemological understanding
People tend to humanize AI too much, which creates unnecessary fear and misunderstanding of the technology
Topics
Artificial intelligence | Capacity development
Takeaways
Key takeaways
Trustworthy AI requires a multi-layered approach encompassing functional safety, AI safety, and cybersecurity that can be applied across all industries and geographies
True AI scaling can only be achieved when trust is established – people are now more open to responsible AI after initially viewing it as innovation-blocking
AI governance must be implemented as control points and gatekeepers rather than just monitoring layers, requiring senior leadership commitment and integration into enterprise risk management
When AI scales to billions of users, the systems that drive infrastructure (particularly functionality delivery and security controls) break first, not the infrastructure itself
Market adoption shows customers will pay premium for trust-grade AI when deploying consumer-facing or business-critical use cases, but not for internal experiments
Hardware-level privacy guardrails should be embedded in AI infrastructure, especially for safety-critical applications like autonomous vehicles and healthcare
AI should be viewed as dual-use technology where different users may have legitimate but conflicting needs, requiring nuanced governance approaches
Technology advancement naturally outpaces governance development, which is normal progression, but safety measures must catch up before wide production adoption
Resolutions and action items
AI governance should be integrated into enterprise risk management rather than treated as separate AI-specific risk
Technology regulation should focus on technical standards agreed upon by technologists rather than geography-specific rules
Standardized AI safety platforms should be developed that can be fine-tuned for specific geographies while maintaining consistent baseline safety
Organizations should implement ethics review boards and control mechanisms that require approval before AI use cases can proceed to production
Unresolved issues
Whether global regulatory alignment is needed or if minimum baseline standards are sufficient remains debated among panelists
The question of mandatory watermarking for AI-generated content received mixed responses with no clear consensus
How to balance open-source freedom with safety requirements in dual-use AI technologies
The timeline and approach for governing Artificial General Intelligence (AGI) when it emerges
How to maintain accountability when AI systems fail at scale, given the difficulty of determining responsibility
The long-term impact of AI on human creativity and whether human-generated content will become distinguishable
Suggested compromises
Use case-dependent approach to AI safety investments – apply premium trust-grade AI for consumer-facing and critical applications while allowing more flexibility for internal experiments
Implement a tiered safety approach: zero-to-one (individual use with legal boundaries), one-to-many (platform community standards), and many-to-many (enhanced safety for public interactions)
Adopt Unix-style security models where AI operates in controlled user spaces to manage risks without over-restricting innovation
Balance ad-supported AI models to democratize access while maintaining privacy and safety standards
Create hybrid governance models that combine corporate self-regulation (like ethics review boards) with existing regulatory frameworks rather than entirely new AI-specific regulations
Thought provoking comments
I’m skeptical towards anthropomorphization whenever I see technology do something… It’s just technology doing something. So I’m not impressed at all by Moltbook. It is just machines hallucinating. The stochastic parrot is just doing something. There is no real intelligence on display yet.
Speaker
Mr. Sunil Abraham
Reason
This comment cuts through the hype around AI agents creating their own social networks by reframing it as a technical phenomenon rather than evidence of emerging consciousness. Abraham’s use of philosophical terms (anthropomorphization, ontology, epistemology) elevates the discussion from sensationalism to rigorous analysis.
Impact
This fundamentally shifted the conversation’s tone from potential fear about AI autonomy to a more grounded technical perspective. It led Syed Ahmed to acknowledge that ‘we humanize AI too much’ and provided reassurance to the audience, setting a more rational foundation for subsequent discussions about AI capabilities and limitations.
When I don’t have somebody to blame, I don’t want a reason to blame… There is no accountability. Whom do I blame? Whom do I take to court? Whom do I curse? There is no human.
Speaker
Mr. Sundar R Nagalingam
Reason
This insight reveals a profound psychological and legal challenge with AI systems – the human need for accountability. It explains why AI systems are held to higher standards than human operators, not just because of technical capabilities but because of our fundamental need to assign responsibility when things go wrong.
Impact
This comment introduced the critical concept of accountability gaps in AI systems, which became a recurring theme. It helped explain why trust in AI is so challenging to achieve and why governance frameworks must address not just technical safety but also legal and psychological aspects of responsibility attribution.
That invention is called automobile. Even today in 2026, we are not able to solve the safety issue of that technology… Still, as Indians and as the human species in India, we say, oh, 200,000 people, Indians will die every year, but we must have this technology.
Speaker
Mr. Sunil Abraham
Reason
This historical analogy provides crucial perspective on how society accepts risk-benefit tradeoffs with transformative technologies. It challenges the assumption that AI must be perfectly safe before adoption, drawing parallels to how we’ve historically managed dangerous but beneficial technologies.
Impact
This reframed the entire discussion about AI safety from seeking perfect solutions to managing acceptable risk levels. It introduced nuance to the conversation about AI governance, suggesting that some level of risk might be acceptable if the benefits are substantial enough, similar to how society has approached other transformative technologies.
If you ask people that manually every single time a use case comes, you first check, is it compliant? Is it ethical? Should I be doing? Should I not be doing it? And there are no workflows for people to really automate. Then people say, OK, and all of us forget about AI… people will skip no matter whatever hard rules, regulations you can make.
Speaker
Ms. Geeta Gurnani
Reason
This insight reveals a critical implementation gap in AI governance – that manual compliance processes inevitably fail at scale. It draws from practical experience to show why governance must be automated and integrated into workflows rather than treated as separate oversight activities.
Impact
This comment shifted the discussion from theoretical governance frameworks to practical implementation challenges. It influenced the conversation toward emphasizing automation, integration with existing enterprise risk management, and the need for governance to be a ‘control point’ rather than just observation, fundamentally changing how the panel approached governance solutions.
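Her implementation point lends itself to a small illustration. The sketch below is hypothetical (the checks and names are invented for illustration and are not IBM’s actual tooling): governance as a hard gate in a deployment pipeline, where a failed check raises an error and stops promotion instead of writing to a dashboard nobody reads.

```python
# Hypothetical governance gate: a control point, not an observation layer.
# All checks and field names are invented for illustration only.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class UseCase:
    name: str
    customer_facing: bool
    ethics_board_approved: bool = False
    risk_assessed: bool = False
    data_categories: set = field(default_factory=set)

Check = Callable[[UseCase], Tuple[bool, str]]

# Each check returns (passed, label). New policies are added here once,
# instead of being re-explained to every team on every project.
CHECKS: List[Check] = [
    lambda uc: (uc.ethics_board_approved, "ethics board approval"),
    lambda uc: (not uc.customer_facing or uc.risk_assessed,
                "downstream risk assessment for customer-facing use"),
    lambda uc: ("pii" not in uc.data_categories or uc.risk_assessed,
                "risk assessment when PII is processed"),
]

def gate(use_case: UseCase) -> None:
    """Block the pipeline on failure rather than merely logging it."""
    for check in CHECKS:
        passed, label = check(use_case)
        if not passed:
            raise PermissionError(f"{use_case.name}: blocked, missing {label}")

# A CI/CD job would call gate() before promoting a use case to production.
gate(UseCase("ask-it-bot", customer_facing=False, ethics_board_approved=True))
```

The salient design choice is the raise: compliance failures interrupt the workflow automatically, so nobody has to remember to run the checklist.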
In a corporation, in a profit-maximizing firm, they have an ethics review board… it’s very weird that corporations are becoming more and more like academia and perhaps that’s a wonderful thing as well and we should celebrate that
Speaker
Mr. Sunil Abraham
Reason
This observation highlights an unexpected convergence between corporate and academic approaches to AI ethics. It suggests that the complexity and potential impact of AI is forcing traditionally profit-focused entities to adopt more reflective, ethics-centered approaches typically associated with academic institutions.
Impact
This comment added a meta-level perspective to the discussion, showing how AI is transforming not just technology but organizational structures and decision-making processes. It validated IBM’s approach of having ethics review boards and suggested this trend toward academic-style ethical review in corporations is both notable and positive.
Gen AI, my guess is at least 20% of the country is using it today… And the reason it is penetrating is because of two opennesses. One is free weight models… but also gratis, that the service intelligence is available on a gratis basis.
Speaker
Mr. Sunil Abraham
Reason
This insight connects AI democratization to accessibility models, showing how free and open approaches enable unprecedented technology adoption rates. It provides concrete data about AI penetration in India while explaining the economic mechanisms that make this possible.
Impact
This comment reframed the discussion about AI business models (like ads in ChatGPT) from a privacy concern to a democratization enabler. It influenced the conversation toward viewing ad-supported AI as a positive force for reducing the ‘AI divide’ rather than just a commercial strategy, adding social equity dimensions to the trust discussion.
Overall assessment
These key comments fundamentally elevated the discussion from surface-level concerns about AI safety to deeper philosophical, practical, and societal considerations. Abraham’s philosophical framing prevented the conversation from falling into AI hype or fear-mongering, while Nagalingam’s accountability insight revealed why trust in AI is psychologically challenging. Gurnani’s practical implementation perspective grounded the discussion in real-world enterprise challenges, and the historical analogies provided crucial context for risk acceptance. Together, these comments created a multi-dimensional conversation that addressed technical, philosophical, legal, psychological, and social aspects of AI trust, moving beyond simple yes/no answers to embrace the complexity and nuance required for meaningful AI governance discussions.
Follow-up questions
How do we ensure AI governance tools don’t remain just monitoring layers but get enforced at runtime when models are being served?
Speaker
Mr. Syed Ahmed
Explanation
This addresses the critical gap between having governance frameworks and actually implementing them in real-time AI systems, which is essential for practical trustworthy AI deployment.
Should GPUs and high-performance AI infrastructure have embedded privacy guardrails at silicon level?
Speaker
Mr. Syed Ahmed
Explanation
This explores whether hardware-level security measures are necessary for comprehensive AI safety, representing a fundamental architectural decision for AI infrastructure.
Will customers pay premium for trust-grade AI and does superior safety posture influence enterprise buying decisions?
Speaker
Mr. Syed Ahmed
Explanation
This examines the business case for investing in responsible AI and whether market forces will drive adoption of trustworthy AI solutions.
How do you build consistent trust enforcement across different geographies with varying regulations for physical AI and autonomous systems?
Speaker
Mr. Syed Ahmed
Explanation
This addresses the challenge of maintaining safety standards while complying with diverse regulatory requirements across global markets.
Is shifting responsibility to developers through safety tools like Purple Llama too decentralized an approach to liability?
Speaker
Mr. Syed Ahmed
Explanation
This questions whether the open-source model of distributed responsibility is adequate for managing AI risks at scale.
Should we have mandatory watermarking in all AI-generated content (media, text, etc.)?
Speaker
Mr. Syed Ahmed
Explanation
This explores content authenticity and transparency requirements as AI-generated content becomes more prevalent and sophisticated.
Can we actually govern AGI (Artificial General Intelligence)?
Speaker
Mr. Syed Ahmed
Explanation
This addresses the fundamental question of whether current governance frameworks will be adequate for more advanced AI systems.
How do we address the accountability gap – who do we blame when AI systems fail at scale?
Speaker
Mr. Sundar R Nagalingam
Explanation
This highlights a critical issue where the lack of clear accountability creates higher expectations for AI systems compared to human-operated systems.
How do we prevent AI governance from becoming just observation rather than active control mechanisms?
Speaker
Ms. Geeta Gurnani
Explanation
This emphasizes the need for governance systems that actively prevent issues rather than just monitoring and reporting after problems occur.
How do we integrate AI risk into enterprise risk management rather than treating it as a separate concern?
Speaker
Ms. Geeta Gurnani
Explanation
This suggests a need for research on how AI risks should be incorporated into existing organizational risk frameworks.
How do we balance the dual-use nature of AI technology where one person’s bug is another person’s feature?
Speaker
Mr. Sunil Abraham
Explanation
This explores the fundamental challenge of governing general-purpose AI systems that can be used for both beneficial and harmful purposes.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.