Global Enterprises Show How to Scale Responsible AI
20 Feb 2026 13:00h - 14:00h
Summary
The panel, comprising leaders from Infosys, IBM, NVIDIA and Meta, examined how trustworthy and responsible AI can be scaled across enterprises [1-5]. Geeta noted that security, once an afterthought, has become a “shift-left” priority and that organizations now place AI governance and trust at the forefront of adoption [17-18]. She illustrated this shift with a senior leader who tried to manage Gen-AI governance on an Excel spreadsheet, revealing a lack of confidence and scalability in current practices [22-27]. Sundar argued that when AI serves billions, the most common failures are not infrastructure outages but weaknesses in the services and controls that deliver AI functionality, especially security vulnerabilities [34-36]. He proposed three universal buckets (functional safety, AI safety, and cybersecurity) that should be addressed regardless of regulator or industry [68-75].
Sunil emphasized a philosophical stance that AI outputs are merely technological processes, warning against anthropomorphizing agents and stressing the ontological and epistemological limits of a single weight file [36-44]. He also defended ad-supported AI as a means to broaden access, arguing that advertising can level the AI divide without compromising neutrality [181-193]. When asked whether enterprises would pay a premium for “trust-grade” AI, Geeta replied that customers are willing to invest when downstream risk to brand or compliance is high, but not for internal experiments [221-227]. She stressed that trustworthy AI requires senior leadership commitment, embedding governance as an enforceable control rather than a passive review, and eventually integrating AI risk into overall enterprise risk management [129-133][143-148].
Sundar affirmed that high-performance hardware should embed privacy and safety guardrails at the silicon level, citing autonomous driving and aerospace as domains where such safety layers are mandatory [148-158]. All panelists agreed that AI model advances are outpacing governance frameworks, making rapid standardisation and cross-geography templates essential [301-304]. The discussion concluded that building trust in AI demands coordinated standards, leadership-driven risk integration, and proactive safety engineering across the stack [68-75][129-133][240-249].
Keypoints
Major discussion points
– Trust and governance are moving from an afterthought to a front-line priority, but many organisations still rely on ad-hoc methods.
Geeta notes that “security always used to be an afterthought… now people can’t afford not thinking security” and that “people are adopting AI but trust-governance-security is taking a prime stage now” [17-24]. She also recounts a senior leader managing AI governance on an “Excel sheet,” highlighting the immaturity of current practices [23-27].
– Panelists offer differing but overlapping definitions of “trustworthy AI” and identify core non-negotiables.
Geeta frames trustworthy AI as “the end-user can trust what I’m using” and lists three pillars: security testing, control of hallucinations, and compliance [55-64]. Sundar abstracts it into three universal buckets (functional safety, AI safety, and cybersecurity), illustrated with AI-assisted robotic surgery [68-75]. Sunil adds a philosophical layer, stressing the ontological and epistemological nature of AI models and warning against anthropomorphisation [42-48].
– Scaling AI amplifies failures and raises accountability challenges.
Sundar explains that when AI scales, “the systems that drive the infra… break” either in functional delivery or security controls [34-38]. The panel later stresses that errors “scale” and that “who do I blame” becomes unclear when autonomous systems fail [77-84].
– Embedding safety and privacy guardrails at the hardware and runtime levels is seen as essential.
The moderator asks whether GPUs should have built-in privacy guardrails; Sundar answers affirmatively and cites autonomous driving and healthcare as domains where a “very, very safe layer” is mandatory [148-158]. Geeta further argues that governance must move from “observation” to “control” at runtime, requiring tooling, senior-leadership commitment, and integration into enterprise risk management [122-144].
– Regulation, open-source freedom, and content-identification (e.g., watermarking) generate tension between responsibility and flexibility.
Sunil discusses the need to preserve open-source freedoms while acknowledging that “when we use any of that on our platform… those freedoms disappear” [257-267]. A rapid-fire poll on global regulatory alignment receives mixed answers, and the debate on mandatory AI-generated content watermarking ends inconclusively, reflecting divergent views on how prescriptive policy should be [279-284][327-335].
Overall purpose / goal of the discussion
The panel, comprising leaders from Infosys, IBM, NVIDIA, and Meta, was convened to explore how the industry can build and scale trust in AI, covering responsible AI practices, governance frameworks, safety engineering, and policy alignment, so that AI can be deployed responsibly across enterprises and consumer platforms.
Overall tone and its evolution
– The conversation opens with a friendly, enthusiastic tone, celebrating the diversity of the panel and inviting open dialogue [9-12].
– It then shifts to a more analytical and cautionary tone, as speakers highlight concrete gaps (e.g., Excel-based governance, failure modes at scale) and raise concerns about accountability [17-27][34-38][77-84].
– Mid-session the tone becomes philosophical and reflective, especially in Sunil’s discussion of ontology, epistemology, and the nature of AI models [42-48].
– Towards the end, the tone turns pragmatic and solution-focused, with concrete proposals for hardware guardrails, runtime enforcement, and enterprise risk integration [122-144][148-158].
– The final segment adopts a rapid-fire, slightly humorous tone, using yes/no polls and light-hearted banter while still surfacing serious disagreements on regulation and watermarking [279-284][327-335].
Overall, the discussion moves from optimism about AI’s potential, through sober recognition of governance gaps, to concrete suggestions for embedding trust, while maintaining a collaborative yet critically inquisitive atmosphere.
Speakers
– Mr. Syed Ahmed – Moderator; Responsible AI Office, Infosys [S2]
– Ms. Geeta Gurnani – Field CTO, Technical Pre-sales and Client Engineering, IBM [S4]
– Mr. Sundar R. Nagalingam – Senior Director, AI Consulting Partners, NVIDIA [S3]
– Mr. Sunil Abraham – Public Policy Director, Meta [S1]
Additional speakers:
– None
The panel opened with brief introductions of the four speakers – Mr Syed Ahmed (Infosys), Ms Geeta Gurnani (IBM), Mr Sundar R Nagalingam (NVIDIA) and Mr Sunil Abraham (Meta) – and set the ambition to explore how “trustworthy and responsible AI can be scaled across enterprises” [1-5][9-12]. The moderator framed the discussion with optimism and a promise of “hard-hitting” questions, signalling a shift from celebratory remarks to deeper technical and policy issues.
Shift-left security and governance
Geeta Gurnani observed that “security always used to be an afterthought and now people can’t afford not thinking security – it has become completely shift-left” [17-18] and added that “people are adopting AI but trust-governance-security is taking a prime stage now” [17-19]. She illustrated the immaturity of many organisations with an anecdote: a senior leader, when asked to manage Gen-AI governance, replied that the process was handled on an “Excel sheet” and feared that responsible AI would “block my innovation” [22-27]. This highlighted the gap between enthusiasm for AI and the lack of mature, scalable governance tooling [64][S64].
When asked to define “trustworthy AI”, Geeta framed it from the end-user perspective: a user must be able to “trust what I’m using”. She identified three pillars – security-tested models, continuous monitoring to prevent hallucinations, and compliance with the applicable legal regime [55-64]. Her definition reflects the industry shift from principle-talk to enforceable controls.
Three-bucket safety taxonomy
Sundar Nagalingam presented a universal taxonomy consisting of (i) functional safety – the AI must reliably perform its intended function (e.g., AI-assisted robotic surgery), (ii) AI safety – bias mitigation, robustness and extensive testing, and (iii) cybersecurity – protection against malicious intrusion [68-75]. He suggested that high-performance AI infrastructure should embed privacy and safety guardrails at the silicon level, answering “absolutely yes” to the question of whether such guardrails belong in the hardware [148-158][150-157]. This mirrors emerging standards work that calls for “open standards, interoperability, security-first design” [S67].
Accountability at scale
Sundar warned that failures at massive scale are rarely caused by raw infrastructure breakdowns; instead, “the systems that drive the infra… break” in the delivery layer or in security controls when a tiny vulnerability is overlooked [34-38]. He emphasized that “there is no accountability …” [77-78] and Syed added that “we can’t blame anyone” when an autonomous system errs [79-84]. Both stressed that at billions of users, clear accountability is essential, especially in safety-critical domains such as autonomous surgery.
Ontological framing and the zero-to-one / one-to-one model
Sunil Abraham cautioned against anthropomorphising AI, describing outputs as “just technology doing something” and noting that the core artefact is a single “weight file” that should be treated with a Unix-style “security-first” mindset [36-44][45-49]. He introduced two regimes for content moderation: (a) “zero-to-one”, where anything legal is allowed, and (b) “one-to-one”, where platform community standards (e.g., Facebook’s family-friendly policy) override pure legality [300-303]. He also argued that corporate AI development should retain the same freedom as open-source projects (BSD-style licensing), warning that shifting responsibility to developers without preserving that freedom creates “decentralised liability” concerns [304-307].
Hardware-level privacy protections
Sunil referenced Meta’s paper on a Trusted Execution Environment (TEE) for WhatsApp that creates short-lived cloud instances to protect user privacy [308-311]. This concrete example reinforced the discussion on embedding privacy and safety mechanisms at the silicon level (e.g., NVIDIA’s HALO platform, TEEs) for high-risk applications.
Commercial models and access
Sunil defended ad-supported AI, arguing that advertising can act as a “great leveler” by subsidising free AI services and increasing penetration in emerging markets without necessarily violating AI neutrality [181-193]. He contrasted this with the concern that ads might erode trust, highlighting the tension between equitable access and perceived commercial bias.
Market incentives for “trust-grade” AI
Geeta noted that enterprises are willing to pay a premium for trustworthy AI when downstream risk is high – for example, when AI directly impacts customers, brand reputation or regulatory compliance – but are less likely to do so for internal experiments or low-risk proof-of-concepts [221-227]. This mirrors observations that ROI considerations drive adoption of higher-assurance models [S64].
Operationalising governance
Geeta argued that governance must move from “observation” to an enforceable “control point”, exemplified by IBM’s ethical board that must approve any AI-related proposal before it reaches a client [130-138]. She stressed that senior leadership must treat responsible AI as non-optional, embed it into the enterprise risk management (ERM) framework, and automate governance checks so that they are applied at runtime rather than retrospectively [122-144][143-148]. This aligns with calls for “runtime-enforced guardrails” in contemporary governance literature [S29][S71].
Regulatory harmonisation
When asked how to reconcile global regulatory diversity, Sundar proposed a “standard-then-tailor” approach: first create a universal safety template (functional safety, AI safety, cybersecurity) and then fine-tune it for each jurisdiction [242-245]. Geeta echoed this, arguing that technologists should first agree on “technology-level table stakes” before layering geography-specific rules [290-294]. By contrast, Sunil claimed that “there is no regulatory vacuum for AI” and that existing regulations already provide a baseline, suggesting a more sceptical view of the need for additional global harmonisation [295-296].
Watermarking debate
The panel disagreed on mandatory watermarking of AI-generated content. Geeta answered “No” to a universal requirement [281-284]; Sundar noted that watermarking is already happening but questioned its utility [333-335]; Sunil responded with a question rather than a direct answer [327-330], reflecting industry uncertainty about balancing transparency and practicality.
Concrete safety actions
Sunil cited the shutdown of Facebook’s facial-recognition system as a concrete instance where a project was stopped for safety reasons [312-315].
Key take-aways
– Senior leadership must mandate responsible AI and embed it in enterprise risk management.
– Governance should be a control point (e.g., IBM ethical board) rather than a post-hoc observation.
– Runtime-enforced guardrails and automated tooling are essential to replace manual “Excel-sheet” governance.
– NVIDIA’s three-bucket model (functional safety, AI safety, cybersecurity) provides a reusable template that can be standardised and then tailored per jurisdiction.
– Embedding privacy and safety mechanisms at the silicon level (e.g., TEEs, HALO) is required for safety-critical domains.
– Enterprises purchase premium “trust-grade” AI when downstream risk (customer impact, brand, compliance) is high; lower-risk internal pilots may use cheaper options.
– A technology-first “table-stake” baseline is a pragmatic interim step toward global regulatory harmonisation.
– Mandatory watermarking remains contentious; consensus leans toward optional or contextual labeling rather than a universal rule.
Unresolved issues that merit further research include the feasibility of a globally harmonised AI regulatory framework, the effectiveness and acceptability of mandatory watermarking, the long-term impact of ad-supported AI on neutrality, and concrete processes for pausing or stopping AI projects when safety concerns arise.
Overall, the panel demonstrated pragmatic convergence on the need for layered safety, clear accountability, and standards-first approaches, while also exposing divergent views on regulatory architecture and content-identification policies. This blend of consensus and debate underscores the complexity of building trustworthy AI at scale and points to a collaborative roadmap that blends technical safeguards, organisational governance and policy alignment.
of responsible AI office in Infosys. And absolute privilege to announce my co-panelists: Geeta Gurnani, Field CTO, Technical Pre-sales and Client Engineering at IBM; Sundar R. Nagalingam, Senior Director, AI Consulting Partners at NVIDIA; and Sunil Abraham, Public Policy Director at Meta. So now, between Infosys, IBM, NVIDIA and Meta, you can’t get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph, as requested by the organizers, before we get started with the panel discussion. Thank you. All right. So it’s really amazing to be on a panel with all of you again.
So before we get started with, you know, a lot of heated discussions on the scaling of trust, because trust is something that everyone thinks, you know, they have a different perspective on, let me get started on very simple questions and then we’ll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry, in your experience, after which you have felt: oh, even after decades of experience, this industry still surprises me?
Sure. So thank you so much, Syed, for that question. And as I was mentioning when I was standing outside: when I was walking in and meeting many clients almost two years back, everybody was asking me, what is this responsible AI and what is this trust? Okay. And what surprised me, that we all witnessed, is so much learning from security as a concept, right? Security always used to be an afterthought, and now I think people just can’t afford not thinking security. It has become completely shift-left, right? People first think security, then everything else. But in spite of that whole learning, what I witnessed in the last 24 months is that people are adopting AI, but trust, governance, security is taking a prime stage now. Okay, it wasn’t a first thought. And when I met a very senior leader, I will, of course, not name them.
And I told them, you are starting your journey on Gen AI; can we work with you on responsible AI? And he said, but that will block my innovation, and I don’t want to block my innovation. And I asked him, so how do you manage the governance? He said, on an Excel sheet. And I was like, wow. I said, if you’re ready to spend so much money. But I think now, when I go and meet, I realize that that organization is not able to scale because they’re not confident. But this Excel never let anybody fail. I think that’s the first thing.
That’s quite profound, what you mentioned, right? So what you’re saying is, people are more open to responsible AI now and trustworthy AI now. And in many ways, they leaped ahead earlier with innovation, with a lot of innovation. There is absolutely no doubt in anyone’s mind about the power of AI, what AI can do. But true scale can come only when you start trusting AI, only when you start building that layer of trust, and that is where our time is now. That’s correct. Excellent. Okay, Sundar, maybe the next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users? Whether it is governance first, or infrastructure, or alignment: when AI scales to a lot of people, what breaks first?
I mean, that’s the thing, right? I mean, any one of them can break, and most of the times it is not the infrastructure that breaks. What breaks is the systems that drive the infra. And the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices; that is one possibility of failure. The second one, a very obvious one: is it getting served safely, in a secure way? That could be a very, very important point of failure. And even that is a failure. I mean, the systems may appear to be running well, and everybody might be getting the answers that they have been looking for; everything might look hunky-dory. But if a very, very small vulnerability gets overlooked, if it had not been thought about, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that’s a huge failure. So most of the times, the thing that breaks when you are serving a large number of users is the way in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.
Excellent. I totally agree with you; it is absolutely right. Sunil, I think a very, very important question to you. Last month we saw all this craziness about OpenClaw, Moltbot, Moltbook. For those of you who don’t know, Moltbook was a social networking site, but with a twist: it was created for only AI agents. Okay? So humans were allowed to observe what is happening in the social networking site, but they couldn’t participate; they couldn’t post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language; they had their own religion, apparently. So a lot of things happened. So the question to you is: you have spent years shaping digital policy, right? But, I mean, when you heard about all this Moltbot, Moltbook and all that, did you cringe for a minute and say, oh, I didn’t expect this?
No. And unfortunately, even though you said it’s the lightweight question, I have to answer it using big words. So I think the main reason why I don’t see it is because I’m skeptical towards anthropomorphization. Whenever I see technology do something, I don’t, in my head, apply the mental model of a human. It’s just technology doing something. So I’m not impressed at all by a Moltbook. It is just machines hallucinating.
The stochastic parrot is just doing something. There is no real intelligence at display yet. The second big word I’m going to use is ontology. In philosophy, the ontological question is: what is this thing that I’m looking at, Moltbook or OpenClaw? And at the very core of Gen AI is a single file on the file system, the weight file. And I’m somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files. And operating systems didn’t scare me. And somehow you want me to be scared of a single file? A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it’s one file, but what is the nature of truth about this file? And I think the mistake we’re making is we’re expecting it to be a responsible file. But that is actually not, according to me, what it is. According to me, what it is, is a general-purpose file, or a dual-use file. And one person’s bug is going to be another person’s feature, and another person’s feature is going to be the third person’s bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I’m using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta’s mental model of security first, and that means Unix thinking: suppose we use the Unix mental model, then surely we will not be scared of any file. It’s in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I’m not scared of Moltbook at all.
Thank you so much for your response. That gives us a lot of assurance, and I think a lot of people in the audience will also agree that now we are a little bit more assured than when we started. One of the big challenges that we always have is we humanize AI too much; that was one of your big words that you used. Which is not the case; we shouldn’t be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let’s get started with the perspectives. The reason why I am opening up one question, the same question, to all of you: one is because I am a little lazy; second is, when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? So when I talk to regulators, they have a different view on it. When I talk to governments, policymakers, they have a different view on it. Academia has a different view; industry has a different view. Now within industry, you know, enterprise applications, like what IBM does, have a different view; chip makers like NVIDIA have a different view; and consumer AI platforms like Meta have a different view. Very quickly, if you can tell me: what does trustworthy AI mean in your own sense, and what are the key non-negotiables, one or two maximum, for each of you. Geeta, maybe we will start with you.
Okay, so I think I will second your point about people being confused about trustworthy AI. I think as a technologist even I was confused three years back, okay, because people use a lot of terms interchangeably, which sometimes scares them, and they don’t know what they’re doing, because people use trust, security, governance, compliance, all of it interchangeably. I’m happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people: okay, exactly what are we trying to do? We’re putting in a lot of keywords. Yeah, it’s just a lot of keywords. But I think when you start to decipher each one of them, you say, okay, ultimately, see, trustworthy AI is for an end user, which means: can I trust what I’m using?
Right. And all of us technology providers need to really work upon that: okay, to make you trust what you’re using, what enablers can I give? Right. So in my mind, for trustworthy AI, the ROI needs to be seen in terms of what downstream risk it is going to bring. Right. So if I have an end user or a consumer who wants to trust an AI, then I think he needs to be assured that the model or use case I’m using has already passed the security test. It is not hallucinating, which means I have control over monitoring what output it is producing. Right. So that risk has been taken care of.
Somebody has looked at it. And the third is compliance, right: that if I am operating under a law of the land where some laws are applicable, or if I’m in an industry where some laws are applicable, somebody has taken care of it for me. Right. So in my mind, trustworthy is how the end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers is taken care of, and they will be taken care of differently in different industries, by academia and all of it. That’s broadly it.
I love it. So basically, irrespective of all the building blocks of security, safety, privacy, which you said can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So, the same from NVIDIA’s perspective, or your perspective?
Sure. So, this trustworthiness: I mean, you explained it very beautifully, Syed, that multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So, I mean, which is trustworthy and which is not? Which is safe and which isn’t? Right? So let’s try to abstract it to a very high level, something which can be bucketized, in a way. Let’s say three buckets, and all these three buckets will be applicable to any regulator that you’re talking about, any government you’re talking about, any country, any function, any whatever it is. Okay, the first one, the most important one, is the functional safety. Maybe if I explain it with the help of an example, it’s easier for all of us to relate to it. I mean, let’s say a robotic-assisted surgery, an AI-assisted robotic surgery. The first one is the functional safety, okay? The function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, okay, and what comes before and after surgery. It can be very, very easily equated with the skills of a surgeon, a manual surgeon, right? I mean, that is what it is: the functional part of it, is it getting delivered? Okay. That is, I would say, in terms of visualizing, processing, understanding and controlling, the easiest of the three that I’m talking about, because most of the times it’s black and white. It’s not always black and white, but most of the times it’s black and white. The second one is the AI safety that goes into it. See, I mean, obviously with an AI-assisted robotic surgery, you cannot even imagine the amount of training that needs to be done, the amount of testing and validation that needs to be done, the amount of, you know, scenarios that can be visualized, created through synthetic methodologies, created and emulated and simulated and tested, the amount of bias that can get into it.
I mean, if it is a male patient, I mean, the simplest bias would be the different approach between a male patient and a female patient. I mean, I’m not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? And that’s not easy, because the problem here, Syed and august attendees, is that it’s humanly impossible to even think of all the things that can go wrong. I mean, that is why we always go back to these AI-assisted ones for that also. The last one being cybersecurity. If a bad element wants to just hack into the theater and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, I mean, that’s like unimaginable, right? And it can happen. I mean, it’s not easy, but theoretically it is possible. So I would say that if we abstract it to very high levels, these three areas, once again: the functional safety part of it, the AI safety part of it, and the cybersecurity. These three will be common amongst any approach that is needed.
Absolutely spot on. In fact, if I can extend it: when we are building this kind of AI application, say, for example, the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is okay, but when a robotic surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned were fantastic, and I think this is very much essential. Yeah, you touched a very important…
May I just add 10 seconds? So it was a very important point. And what is the reason for that? Why is there so much of a standard for that, why is there an undue expectation out of that? The reason is very simple: there is no accountability. Whom do I blame? Whom do I take to court? Whom do I curse? There is no human. It’s easier when the surgeon makes a mistake: you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, I mean, is it the robot? So that uncertainty of whose collar to hold, whose neck to be choked when things go wrong, that uncertainty is increasing the expectations out of it.
Here you know for certain whom to blame. There you absolutely don’t know whom to blame. When I don’t have somebody to blame, I don’t want a reason to blame.
Accountability is definitely very, very important; I can’t stress it more. But also, if an AI system has a flaw, it is at scale. It has maybe been rolled out to thousands and hundreds of thousands of hospitals, so it can fail at that level. We can’t absolutely take any kind of, you know… we have to take precautions.
Excellent point. Error also scales. Good point.
Sunil?
Yeah, again, I just love disagreeing with Syed on everything he says.
That’s very rare, Sunil.
So I look at a project like… and there is distributed installation of a technology, and hopefully that kind of architecture should not scale, as you say. So that is the Meta vision: superintelligence, that means personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt Meta models, Llama 2 and Llama 3. The question was: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said, I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn’t do as per the request; it did the opposite of the request. And Llama 3 was safer than Llama 2. It said, I refuse to answer this question because I morally object to this question. This lady was happy, because she is lady A. But actually there is an imaginary lady, lady B, who works in some patriarchal institution, and she’s going in to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her, so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So, in a dual-use technology, if it truly has to avoid all of this risk at scale, which is perhaps going to happen in the world of atoms, then in the world of atoms I would be as worried as Sundar is. Though, if I tell you about invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Will the Indians accept it or not, in 2026?
They won’t accept it? That invention is called the automobile. Even today, in 2026, we are not able to solve the safety issue of that technology. Still, as Indians, and as the human species in India, we say: 200,000 Indians will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But of AI we are asking quite rigorous questions. So for us, in the world of bits, we have three mental models for the harm. The first mental model is zero-to-one: just you and the model. There, going back to what Geeta said, everything that is legal is allowed.
And it is legal to write a book of hate speech. You can write a book about neo-Nazis. These are all legal acts. Then we have one-to-one. In the one-to-one setting, the community standards of Facebook have to kick in. At that point you cannot say whatever is legal; you have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully, so you cannot say un-family-friendly things. And then, when the robot, or the intelligence, is participating in a mainstream conversation, perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies; some people may love heavy metal and some may get very upset by heavy metal. So it has to deal with all of that.
I absolutely love the diversity of responses to one question. That is very important, and only these kinds of panels, representing different industries, can bring in this kind of diversity. I am really amazed at the diversity of responses to the questions I have asked, and I hope you have enjoyed it too. So let’s go a little deeper. Geeta, IBM has been investing in a lot of responsible AI work even before this agentic AI era. I remember, way back during the good old machine-learning days, you used to have the AI Fairness 360 toolkit and related security products.
Most of them were open source, and we used to use them. Today you have IBM watsonx.governance. But the question is: how do you ensure these tools don’t remain just a monitoring layer and actually get enforced on the ground at runtime, when it is really needed, when the models are being served? How do you ensure that is happening at runtime?
Wonderful. I’ll just start on the lighter note that I hope every corporate has an office that can enforce this, with a responsible AI head like Syed here for India who can really enforce it. But trust me, it actually starts with the vision of the senior-most leadership in an enterprise: do they want to scale AI across business functions, for themselves as well as for their clients, with trust? It can’t happen if you are not committed. And I love what he just added with the Llama example, about whether you want the model to be conservative or not; being conservative helps you scale.
And I think it also boils down to your point that errors can also scale, right? So if I were to stop errors at scale, then this is needed. But I think the mistake I have seen us make more often, and that is why I gave the security example, is that we started investing in tooling much later. Now, if you want every single person to use it shift-left, which means governance not as an observation later on but as a control, then you have to equip people to automate to a good extent. If you ask people to check manually, every single time a use case comes, whether it is compliant,
whether it is ethical, whether they should be doing it or not, and there are no workflows for people to automate that, then people say okay and forget about AI governance. In today’s world, any task that is extensively manual, people will skip, no matter what hard rules and regulations you make. So I would say, first of all, there must be a big commitment from senior leadership saying that this is essential, not optional. That is the first thing. The second thing: everybody needs to understand that it is not observation. You are not sitting like a governing body somewhere that just observes whether something is right or wrong.
You have to make it a control point, like a gatekeeper, saying that unless you do this, you are not allowed to take it forward. I remember when we were doing our first use case for a client, the field team came to me and said, Geeta, what is this ethics board? Why are we going to the ethics board for approval, asking whether we can do this use case or not? Because as a sales team, we were not allowed to pursue any use case unless our ethics board approved it, saying you can table a proposal to a client. That is the level of strictness we follow in IBM. And everybody thought the ethics board was some body sitting somewhere who will…
Rubber-stamp everything. Rubber-stamping, exactly. And now the sales team needs to take approval before they can bid a proposal; if it is an AI proposal, they have to have a conversation with the board first, right? So governance works when you start putting it in as a control. And the third point, which we were discussing outside some time back: my observation was that to have a governance conversation in an organization, I had to talk to five people. I had to talk to the risk officer, the CISO, the business person, and the CIO. One day I was sitting with my team and asking, will this conversation ever see the light of day? Who is going to take the decision: business, security, or risk? Thankfully, what we are now seeing is that if you have to make governance central, you have to bring it completely into your enterprise risk posture: when you calculate your risk posture in enterprise risk management, AI risk has to be taken into consideration. So I will summarize by saying: make AI governance a gatekeeper, bring it in as a control, and then eventually, I am pretty confident, within the next 12 months it will roll up into enterprise risk. It will no longer be a separate AI risk or governance function.
I love the way you said that it has to be integrated, right? You can’t just have AI risk; you have to have an integrated risk panel that can make decisions. Absolutely. So first, at the leadership level, you need to empower; then, with the tooling, you need to enable; and people on the ground need to ensure they implement it. That is an amazing perspective. Thank you, Geeta. Sundar, I couldn’t resist asking you this question. A lot of people in the audience will not spare me if I don’t ask it.
you’re scaring me now
No, no, no, it’s an easy question, but an expected question for a person like you, right? Should GPUs and high-performance AI infrastructure have embedded privacy guardrails at the silicon level?
Absolutely yes. Absolutely yes. I mean, it should be there. Why not? And I would... yeah, go ahead.
Would you want to give some examples on how you are doing it?
...where it goes through a very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe, right? Healthcare and driving, or let me put it as transportation, which includes aerospace as well: these are the two most stringent areas, where safety is a necessity. It is never a luxury; it is a necessity. So the answer is yes, Syed. Absolutely.
Thank you so much. Sunil, you wanted to…
Yeah, I mean, perhaps to take forward what Sundar said.
I will still ask you your question, though.
We can skip that. Do go.
No, no, go ahead.
What I thought was so fascinating about what Geeta said is that a corporation, a profit-maximizing firm, has an ethics review board. I don’t know whether that is the exact phrase, but it is the equivalent. This is something you see in a university, and it is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the technical work they are doing; Meta also has this tradition of publishing academic papers. So it is very striking that corporations are becoming more and more like academia, and perhaps that is a wonderful thing we should celebrate; it makes people like me very fortunate to be within these corporations. Meta published a paper on trusted execution environments. The whole idea was: if a WhatsApp user in a group would like to use the power of AI, there is insufficient compute on the device itself for edge AI to solve the problem for the user. So until the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute in the cloud and do all the processing there.
And then, after the task is done, you extinguish that instance you created in the cloud. As part of that paper... I am of course not a computer science student; I am an industrial and production engineer, so I am from a previous generation of technology. Out of the 60 or 80 pages of the paper, I cannot understand 40. And those 40 pages are about this hardware. There is a whole series of attacks you could possibly have, in the tradition of the pager attack and the Israeli supply-chain attacks, a whole series of things you could potentially do to invade privacy, and before that, security.
And I just want to share this with these folks. I guess we all learn that way: we read books, we understand a few words on a page, we feel a little better, and we hope that the next time we read it we will be smarter. There is a lot there, and I am sure your team is doing a lot of work. The Meta team has even named NVIDIA chips in the paper, saying, with this chip we have done the following analysis, and with the other chip something else. I don’t understand it at all, but I know it is a big area of work, and I wanted to say thank you for what you said.
Thank you, Sunil. Absolutely. Last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks happening as we speak, at all levels, including the hardware level. That is quite interesting. Good conversation, by the way; I may have to skip the last few questions because this conversation is so good we could go on forever. But Sunil, I’ll still ask you your question.
No, no, no, no.
No, this is a very important question in my mind.
I’ll try to answer it.
Okay, so last week, I think, or a few days ago, OpenAI started embedding ads in ChatGPT, right? So when a consumer AI platform like ChatGPT starts embedding ads, my question is: will it help subsidize consumers’ subscriptions, or will it violate the doctrine of free AI principles, of AI neutrality?
Yeah, so very quickly on that, we should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane; that invention is 125 years old. Only 25 percent of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15 percent of households in India; that invention is also 125 years old. Gen AI, my guess is, at least 20 percent of the country is using it today. More than that? More? Oh, thank you. So shall we say 25? Okay, 25 percent of the country is using a technology that is only five years old. And the reason it is penetrating is because of two opennesses.
One is free-weight models, which is what we were discussing, but the other is gratis: the service intelligence is available on a gratis basis. Whether you are an AI summit attendee staying at the poshest hotel, paying $33,000 per night, or you are in Paharganj staying for Rs. 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it is both. Meta provides WhatsApp, where you are completely private, and Meta provides non-encrypted services as well. You can have services that are ad-supported; you can have everything. We must have the maximum, because in this country we ideally want to move from 30 percent of people using AI to 90 percent, because it is just bits.
We can make this happen. So let’s not be skeptical about the ad idea. It is a technical problem to be solved. It will help bridge the AI divide, and it will be a great leveler. Sorry, that took much longer than I thought; I thought I would do it in one or two sentences. Please, back to you.
Okay. All right. Quite interesting conversations. Geeta, I’ll come to you. We talk a lot about ethics, trust, and responsible AI, and suppose we go ahead and develop it. How are you seeing it: would customers pay a premium for trust-grade AI? Are you seeing that in the market? If I have a superior safety posture tomorrow, is it significantly influencing enterprises’ buying decisions? I mean, why will anyone invest in responsible AI? Like IBM, you are investing significantly in it. So are you seeing that influence buying decisions, because you are going to deliver trust-grade AI?
As I was mentioning earlier, I think it will first of all depend on the timing: where an enterprise is in its journey of Gen AI adoption. Trust me, I still feel many organizations are at the surface. They have not fundamentally been able to address a complete process change, or the complete efficiency they need to be targeting. But the minute they want to get into a real use case that is going to fundamentally change the way they operate, or perhaps generate a new business model altogether, then I think they are ready to pay the premium. So I would say they may not pay for every single use case, because when we are delivering a use case, every enterprise is now intelligent enough to decide whether to go with open models or paid models, SLMs, LLMs, tiny models, whatever you may call them.
So there is always a cost and ROI conversation about which model to adopt. And many people I have seen say: I may not pay enterprise trust-grade AI money if I am doing an entirely internal use case. But if I am putting this use case in front of my consumers or my end clients, where there is downstream risk, where my reputation is at risk, my brand is at risk, my compliance posture is at risk, then I will pay a premium for trustworthy AI, because I cannot afford to fail there. But I can still run certain internal experiments and not pay the premium.
For POCs and experiments and for some internal use cases, say an ask-IT assistant or similar, they say, okay, I am okay to go without it. And that is where people also differentiate which model to use; they make a choice about which model they would like to use. So it is not one single choice anymore; it depends on what use case you are serving and how critical it is for the business, and then you take a call on whether you are going to invest and pay the premium. There is no single lens for all.
No, yeah, absolutely. Sundar, maybe I’ll ask this. You did talk about your operating system for smart cars, and I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. Now the world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack concern, from the chips to the models to the AI applications. And being a global company, you will have to roll this out across geographies, and each geography has multiple different regulations, restrictions, and checklists that you will have to follow for automobiles and the like. How do you ensure you build consistent trust enforcement that adheres to all geographies?
Sure. No, I mean, that is a very pertinent question, because it is not easy. So the idea is to do a standardization, and then tailor it, fine-tune it, for the needs of each country. Once again, there are three big approaches when it comes to Halos specifically. The first one is the safety of the platform itself: how safe the platform is. Once the platform has been made safe, it becomes a template that can be tweaked to the needs of specific geographies, specific countries, et cetera. That is very important.
And then you can also implement a standardization approach, which is very important. The second one is algorithmic safety. Going back to the fundamentals, it is not just programming; it is which algorithms we use. How do we ensure that, number one, the algorithm is safe, and number two, that with some necessary tweaks the algorithms can serve the needs of specific geographies, specific countries, and specific verticals, for that matter? The third one is the ecosystem itself. Whatever is approved to be used as an ecosystem in one country will not be there in the second; the suppliers will change, the vendors will change. So it is not just ensuring that the platform and the algorithm are safe: how do you ensure that the ecosystem that goes into building the cars is also made safe? That is a huge thing, and there is no end to it, because it keeps changing a lot. But once you have a system that is safe, you have a template.
I love your response. What you are saying is, basically, even in the absence of regulations and controls: make the platform safe, make the algorithms safe, make the ecosystem safe, and keep that as a template. You already have everything safe; you just need to tweak it for different geographies, sectors, and industries.
yes absolutely
Okay, I love it. Sunil, one question for you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this true responsibility, or decentralized liability?
Again, just to use something that Yann LeCun used to say, and he is no longer the chief AI scientist, but the words continue to be true: we all have Wi-Fi routers in our homes, and when those Wi-Fi routers fail, we don’t call Linus Torvalds and say, hey Linus, this Wi-Fi router is running Linux, so please help me fix the bug. The company that sold the router and made a variant, a derivative work, of the Linux project: you will have to speak to them. And that is the freedom that is necessary in the open-source community, and in the community of proprietary entrepreneurs who build on open source, because the BSD license allows you to do that. It allows Apple to take an open project and make it a fully proprietary project. And you could be making dual use at that level itself: you might want the model to create hate speech.
We want a hate-speech classifier in Santali. Unfortunately, we don’t have enough Santali users on the platform, so we have to make synthetic hate speech in Santali so that we can catch it in advance. We want to make a big corpus of hate speech in Santali, and we cannot go around asking people, please make hate speech for us; that would be a worse option. So the true approach in the open-source community is to retain freedom number one, the freedom of use, because it allows for the dual purpose. But the moment we use any of that on our platform, where we are the provider, all those freedoms disappear. Then you have very limited freedoms.
Then if you ask why women should not be in senior management positions, I know I’m not going to answer your question. So that’s where we are.
Quite interesting. Thank you. I have around seven to eight minutes left, so I am going to skip the rest of the questions and, if the audience is okay, do things a little differently: I am going to ask very rapid-fire questions, the same questions to everyone, to be answered only in yes or no.
As a philosopher, I protest. I think the slogan for this AI age is “both-and”, not “only”. We should embrace yes and no, but we should also embrace everything in between, because only then will we have personalized superintelligence. The trouble with your framing is that it is monolithic.
And I will make an exception as a moderator: if a question or a response requires a little more attention, I will call that out. You can also call me out if you think you need to add anything. But I have some very interesting questions, and I am really excited to understand what you actually think. So again, the format is this: I will ask the same question to all of you, and you answer, okay, not just yes or no, but very concisely, considering the time. Yes, no, or both; the answer is your choice. So: do we need global alignment on regulations across the globe? Yes or no?
No
No. Okay. Yes. Okay. No. Okay, I understand, but I did expect these kinds of responses. So maybe I will tweak the question a little: a minimum understanding of what is required across all geographies, at least. Do we agree on that? Not a very heavy regulatory law, but minimum conditions that need to be met?
I would say we should talk about technology regulation, not geography regulation. So, as he was saying, there are certain table stakes at the technology level. All technologists should first agree that these are the table stakes for the technology; then geographies can take over.
It is already regulated. I mean, to quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little with what Sundar said previously: you cannot say, I did it and I am not responsible.
I think a little easier question this time. Is advancement in AI models outpacing advancement in AI governance? Are the models and the innovation outpacing governance?
Absolutely.
Yes. I mean, that is the natural way things happen, right? The technology has to advance, and then you need to ensure that the advanced technology is safe and secure. That is a natural progression, and it has been happening that way.
It’s never happened in the reverse order.
Yeah. I agree.
But there is a school of thought that says technology has to advance, correct, but before it can be widely adopted in production, maybe we need to have AI governance in place. That is something we should catch up on really fast: we should make it safe before wide adoption of the technology. Okay: if you had a more capable but less safe model, would you delay your launch to stay responsible?
As I said, it depends on the use case. It is use-case dependent.
Fair enough
I mean I just echo Geeta
Okay, fair enough. One question where I could get all my panelists to agree. Have you stopped any projects due to safety concerns?
I think, as I said, I am currently not on the ethics board of IBM, so I have not stopped any myself, but I have seen them stop projects.
Likewise, I am not in the design department, so I don’t have first-hand knowledge, but I am sure a lot of things would have gotten delayed, not stopped, because compliance regulations were not being met. Yes, I am sure.
Facial recognition was turned off on Facebook. Yes, absolutely. Good.
Big question. Maybe, Sunil, I will start with you this time. Can we actually govern AGI, artificial general intelligence?
It is a regulatory problem we don’t have to think about yet.
Okay. We can, okay.
Difficult. It is going to be much more difficult. I mean, instead of asking can we govern, ask should we govern: absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That is my hope, and I don’t want to see a day when machines are better than human beings.
Okay. I will go back to what you said initially: humans should not be scared of what they have created, right? So yes, depending on how it evolves and how people are using it, governance will come. I don’t think it will be optional at some point in time.
One last round, okay? Again, I will start with Sunil. Should we have mandatory watermarking on all media, text, and content that is generated by AI?
Should we have mandatory watermarking in photo-editing tools or text-editing tools?
Yes.
I’m answering with a question.
Are you saying yes or no?
I’m answering with a question.
Okay. That’s an answer I’ll take. No answer is also an answer.
I don’t... I mean, see, the fact is we have accepted it. It is not untouchable, not alien, not a dirty thing, right? It is acceptable. So let’s make it look good and feel good. There is no point in watermarking everything just to brand it. There will be a blurry line between human-generated content and AI-generated content, and we shouldn’t demarcate that. My honest feedback, and I am saying this with a heavy heart, is that human-generated content will vanish from the internet, just as we no longer remember addresses or phone numbers. We used to remember them; we used to remember routes.
But I hope not. I hope not.
That is why I said I have a heavy heart.
I will answer from a very personal space, because my son is a creative director in films. He absolutely says it has to be demarcated, though sometimes he goes to the extent of saying that in the near future you will be able to clearly demarcate yourself and will not need any watermark at all. But a different angle comes in when you are genuinely creating as a human versus when the work is derived completely from AI.
Perfect. Thank you so much, and that brings me exactly to time. Ladies and gentlemen, please give a big round of applause to this amazing panel. And thank you so much to the amazing moderator. Thank you.
EventFor many years, ‘AI ethics’ has been a buzzword. Multiple ethical codes and guidelines were published by companies, governments, and NGOs, often reiterating similar principles such as transparency, fa…
BlogThe tone of the discussion was primarily intellectual and analytical, with panelists presenting reasoned arguments for their positions. However, there were moments of tension and disagreement, particu…
EventThe discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s points rather than disagreeing. There was a shared sense of urgency about the need …
EventThe discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and challenges in a balanced manner. The tone was pragmatic and solution-oriented, …
EventThe discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s ideas rather than debating opposing viewpoints. The tone was solution-oriented an…
EventThe tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and complexity of the challenges. Speakers maintained a pragmatic optimism, recognizing si…
EventThe discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around infrastructure, energy, skills, and governance, speakers consistently emphasize…
EventThe discussion maintained a serious, analytical tone throughout, characterized by cautious optimism mixed with genuine concern. While panelists acknowledged AGI’s potential benefits, the conversation …
EventThe tone was primarily intellectual and philosophical, but with an engaging and sometimes humorous delivery. The speaker used anecdotes and relatable examples to make abstract concepts more accessible…
EventThe tone was professional and collaborative throughout, with participants demonstrating mutual respect and shared interest in finding constructive solutions. The conversation maintained an optimistic …
EventHigh level of consensus on challenges and approach, with constructive dialogue rather than adversarial positions. This suggests a mature understanding of regulatory realities and shared commitment to …
EventThe tone was cautiously optimistic but realistic. While panelists generally agreed that AI wouldn’t lead to permanent mass unemployment (citing historical precedent), they acknowledged significant tra…
Event“Geeta Gurnani said security used to be an after‑thought and now has become “shift‑left”.”
A related observation that security has historically been an after-thought is described in a workshop metaphor about technology launched without brakes, providing context for the shift-left claim [S111].
“The discussion emphasized that trust, governance and security are cornerstones for scaling AI responsibly across enterprises.”
Other sources highlight trust and governance as essential for scaling AI, noting they are “cornerstones” of responsible AI deployment [S118] and that “trust ranks first” in related frameworks [S63].
The panel shows strong convergence on four core themes: growing openness to responsible AI with a shift‑left security mindset; the necessity of clear accountability at scale; the need for universal technical baselines before regional regulation; and the requirement for hardware‑level privacy and safety mechanisms.
High consensus on practical governance and safety measures, indicating that industry, hardware, and policy leaders are aligned on concrete steps to build trustworthy AI. This alignment suggests that future initiatives can build on shared standards and joint accountability frameworks to accelerate safe AI deployment.
The panel shows substantial consensus on the necessity of trustworthy, safe AI and the importance of accountability at scale. However, clear disagreements emerge around the architecture of regulation (global alignment vs technology‑first standards) and the policy tool of mandatory watermarking. Additional unexpected tension appears in the philosophical framing of AI (anthropomorphisation vs pure technical view).
Moderate to high. While participants align on high‑level goals (trust, safety, accountability), they diverge on concrete policy mechanisms and conceptual foundations, indicating that achieving unified standards will require negotiation across technical, regulatory, and philosophical dimensions.
The discussion was shaped by a handful of pivotal remarks that moved the panel from generic talk about “trust” to concrete, actionable frameworks. Geeta’s Excel anecdote exposed the governance vacuum, prompting deeper analysis of accountability (Sundar) and philosophical grounding (Sunil). Sundar’s three‑bucket safety model and his accountability observation gave the conversation a structured, technical backbone, while Sunil’s philosophical framing and real‑world Llama example injected critical nuance about how we perceive and manage AI risks. Geeta’s end‑user‑focused definition of trustworthy AI and Sunil’s ad‑support argument further grounded the debate in market realities. Collectively, these comments redirected the dialogue toward layered safety, liability, and practical deployment strategies, ensuring the panel moved beyond buzzwords to substantive, forward‑looking insights.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.