Conversation: 02

19 Feb 2026 11:15h - 11:30h

Session at a glance
Summary, keypoints, and speakers overview

Summary

The conversation featured ServiceNow President and Chief Product Officer Amit Zavery discussing why he views trust as the new infrastructure for enterprise AI [10][13-16]. He argued that without clear visibility, auditing and compliance, enterprises cannot reliably deploy AI in critical workflows [14-16].


Zavery noted that the industry is still figuring out how to embed AI, but the first step is getting employees to accept its usefulness and see personal productivity gains [23-25][29-30]. ServiceNow responded by retraining staff and giving them hands-on access to AI tools, allowing workers to experience faster, more efficient task completion [29-33]. By automating repetitive “soul-crushing” tasks, employees gain time for higher-value work, which in turn builds confidence in the technology [35-38]. He described this as a step-wise cultural shift that has already lifted adoption across engineering, finance, support and go-to-market teams [42-43].


Addressing fears of job loss, Zavery compared AI to previous disruptions such as cloud and web, emphasizing that the speed of change creates uncertainty but not inevitable layoffs [49-54]. ServiceNow’s AI business has actually expanded, enabling hiring, market expansion and reinvestment of savings into new areas [57-61]. He highlighted that removing mundane work frees staff for higher-margin activities, improving both top-line and bottom-line performance [62-66].


Security emerged as the biggest barrier to agentic AI adoption; early concerns about visibility and control limited uptake until ServiceNow introduced security profiles and a control tower [70-73][74-76]. The company’s acquisition of Veza, which provides access graphs for non-human identities, ensures that AI agents operate only within authorized roles [80-86]. After these safeguards, the volume of customer-deployed agentic workflows jumped 55-fold, and adoption is now driven by clear ROI rather than experimentation [73][99-104].


Zavery predicts AI will become a foundational layer of all enterprise software, with firms that fail to embed it losing competitive advantage [106-110]. He also warned that ongoing regulatory and security challenges, especially around physical AI in operational technology, will keep trust and risk management at the forefront [127-133]. Overall, the discussion concluded that combining cultural reskilling, robust security controls and measurable value is essential for sustainable enterprise AI deployment [15][42][104][127].


Keypoints


Major discussion points


Trust is the foundational “infrastructure” for enterprise AI.


Zavery stresses that without trust, safety, auditing, compliance and visibility, companies cannot rely on AI for critical workflows - the lack of these elements makes AI adoption untenable [13-16].


Building trust through cultural change and employee reskilling.


ServiceNow’s strategy involves retraining staff, giving them hands-on access to AI tools, and eliminating repetitive “soul-crushing” tasks so workers see tangible productivity gains, which in turn fuels broader enterprise-level adoption [24-38].


Security and identity management are essential for agentic AI.


Companies worry about visibility, vulnerabilities, and control; ServiceNow responded by embedding security controls, creating AI “control towers,” and acquiring Veza to manage non-human identities and permissions, arguing that without such safeguards agentic AI will not be adopted [70-86].


Adoption pace is becoming more measured and ROI-driven.


Early hype about rapid, universal deployment proved optimistic; now enterprises adopt AI more thoughtfully, first securing the platform, then piloting use-cases that demonstrate clear ROI before scaling [95-104].


Future outlook: AI as a core layer of software and the rise of physical/OT AI.


Zavery sees AI as inseparable from next-generation software, with vendors needing deep domain expertise to add context to foundation models, while new regulatory and operational-technology challenges (e.g., AI-driven robotics in factories) will shape the next wave [106-112][127-133].


Overall purpose / goal of the discussion


The conversation was designed to illuminate ServiceNow’s perspective on how enterprises can responsibly and effectively embed AI, particularly agentic AI, by establishing trust, reskilling workforces, securing implementations, and positioning AI as a foundational component of future software and operational technology.


Overall tone


The dialogue begins with a formal, explanatory tone as the moderator frames the “trust as infrastructure” premise. As Zavery describes internal initiatives, the tone shifts to pragmatic optimism, highlighting concrete steps and successes. When addressing security and the hype around AI, the tone becomes cautionary yet confident, emphasizing the need for robust safeguards. The closing remarks adopt a forward-looking, visionary tone, acknowledging ongoing challenges while expressing confidence in AI’s strategic role. Throughout, the tone remains professional and constructive, with a gradual move from exploratory questioning to assertive, solution-focused statements.


Speakers

Amit Zavery


Role/Title: President and Chief Product Officer, ServiceNow [S1]


Area of Expertise: Enterprise software, AI integration, product strategy


Speaker 1


Role/Title: Event moderator / host (introduces and closes the session) [S3]


Area of Expertise: Not specified


Arjun Kharpal


Role/Title: Senior Tech Correspondent, CNBC [S1]


Area of Expertise: Technology journalism, AI and enterprise technology


Additional speakers:


– None identified beyond the three listed above.


Full session report
Comprehensive analysis and detailed insights

The interview opened with host Arjun Kharpal introducing Amit Zavery, President and Chief Product Officer of ServiceNow, and asking why “trust is the new infrastructure” for AI in the enterprise [10-11].


Zavery stressed that without clear visibility, auditability, compliance and safety mechanisms, enterprises cannot rely on AI for mission-critical workflows [13-16].


Human-centred trust and cultural shift – ServiceNow first focused on getting employees to accept AI as useful and to understand its value, a mindset shift that acknowledges AI’s rapid impact on work [24-26]. The company then retrained staff and gave them hands-on access to AI tools, letting workers experience faster, more efficient task completion in their daily roles [29-33]. By automating “soul-crushing” repetitive work, employees free up time for higher-value activities, reinforcing confidence in the technology [35-38]. This cultural programme has driven noticeable AI adoption across engineering, finance, customer support and go-to-market teams [42-43].


Job-loss narrative and business growth – Zavery compared AI-driven disruption to earlier waves such as cloud and the web, arguing that the technology itself does not inherently cause layoffs [50-54]. In ServiceNow’s experience the AI business has become a billion-dollar-plus unit, enabling new hires, entry into additional market segments, and reinvestment of efficiency savings into higher-margin activities [57-61][62-66].


Security as the gatekeeper – Early hesitancy stemmed from a lack of visibility, vulnerability controls and governance [70-73]. ServiceNow responded by building an AI “control tower” that provides end-to-end oversight and visibility, a catalyst that helped the volume of customer-deployed agentic AI (autonomous AI agents) workflows surge 55× once the control tower was in place [73-76]. Recognising that AI agents act as non-human identities, ServiceNow acquired Veza to add access-graph technology that enforces granular permissions and prevents unauthorised data access [80-86]. Zavery emphasised that AI agents change roles “every second” based on requirements, so security and identity controls must be built into the product, not added later [87-89]. ServiceNow’s broader security business is also a “billion-dollar-plus” operation, underscoring the company’s commitment to robust protection [70-73].


Adoption pace and ROI loop – The initial expectation that agentic AI would proliferate instantly proved overly optimistic. Companies now adopt a more measured, ROI-driven approach, piloting a few well-secured use cases before scaling, which aligns with the industry view that visibility and control are prerequisites for large-scale rollout [95-101][102-104][68-69][71-77].


AI’s role relative to SaaS – Zavery contended that AI will act as a synergistic layer rather than replace existing SaaS products. Only about 5–10% of ServiceNow’s intellectual property derives from foundation models; the remaining 90% comes from ServiceNow’s own context-building and domain-specific engineering [106-112][115-121]. He also highlighted a partner ecosystem that includes OpenAI, Anthropic, Mistral and Google Gemini, reinforcing a collaborative approach to AI development [115-121].


Future outlook – Regulatory, privacy and security frameworks will continue to evolve, with every country now formulating AI-specific rules [127-129]. Zavery flagged the emerging challenge of “physical AI” in operational technology, such as humanoid robots and droids in factories, and the need to secure these systems as part of broader enterprise processes [130-133].


In sum, the discussion identified three pillars for sustainable enterprise AI: embedding trust as foundational infrastructure, investing in employee reskilling and cultural change, and delivering built-in security and identity controls for autonomous AI agents. ServiceNow’s experience shows that when these elements are in place, adoption accelerates, ROI becomes evident, and AI serves as a value-adding layer rather than a disruptive replacement [13-16][42-43][104-106][127-129].


Session transcript
Complete transcript of the session
Speaker 1

IT, the technology. Ladies and gentlemen, and now I have the privilege of inviting our last speaker for the day, Mr. Amit Zavery, President and Chief Product Officer, ServiceNow. Mr. Zavery has spent his career at the intersection of enterprise software and AI, most recently leading ServiceNow’s push to embed AI agents into every corner of enterprise workflows. His perspective on agentic AI, what it actually delivers versus what it promises, is grounded in millions of enterprise deployments. He’ll be in conversation with Arjun Kharpal, CNBC’s Senior Tech Correspondent. Please welcome our guest and the moderator.

Arjun Kharpal

All right. Hello, everyone. Hi, thanks so much for joining us. And if you’re watching online, thank you so much. Amit, let’s just kick off. You’ve got this view that trust is the new infrastructure in this age of AI. Can you just unpack what that means?

Amit Zavery

Yeah. Thank you, Arjun. I think if you look at what’s going on in the AI space, there’s a huge amount of interest in terms of using it in enterprise use cases as well, right? And without understanding what it will do for you, and without any idea of what it has ended up implementing inside your system, it becomes very hard to really depend on it. So without trust and safety and an understanding of what’s happening in your underlying environment, it becomes very hard to use AI in a lot of these enterprise use cases, because your companies will not be able to do any auditing, compliance or visibility, and you wouldn’t really be able to run the business without any kind of understanding of what’s going on.

So trust has to be a big part of it.

Arjun Kharpal

And trust, I guess, in the enterprise sense of the word probably has lots of different definitions, right? We’re talking about trust amongst employees, for example, but also from the cyber perspective, from the security and safety perspective you were just mentioning there. So it’s worth digging into some of these. How do you design some of these in the enterprise? Let’s start with perhaps the human element at this point, because there’s a lot of concern from people right now about potential job losses and the impact AI could have on their roles as well. So from the human perspective, how do you design trust within the organization?

Amit Zavery

Yeah, no, I think it’s still something which I think the industry is still trying to figure out, to be honest. The way to think through this one is, one, everybody has to accept that AI is useful, and there’s a lot of opportunity to embed that in terms of your day-to-day lives. Second, I think there is a reality that this thing is transforming how the world works, and it’s moving very fast. So once you understand the principles and the value of it, then you start building together in terms of what the cultural shifts need to be, how people need to work together, and how do you help them understand the value while keeping their jobs very important and be able to bring them into the conversation.

So there’s a huge amount of cultural shift inside the company, as well as being able to kind of educate everybody in terms of what it delivers for you. So what we’ve been doing at ServiceNow, for example: we’ve been retraining our employees and giving them access to a lot of the AI capabilities, making sure they get to see what it does for their day-to-day life at the employee productivity level, and see that, okay, you know what, I could do my job faster, better, more efficiently, and free up more of my time to do other things which I couldn’t get to. The second thing after that, once you get them re-skilled, is to really now take it to the enterprise level, not just the employee level.

Like, now how do you improve your processes? And the processes which are cutting across multiple departments, can I make those work faster? Can I get a better understanding of how it operates? And can you land up freeing up a lot of the painful human work you used to do, right? A lot of the repetitive tasks, which we used to call soul-crushing tasks, which make day-to-day life so difficult that people can’t really get anything done beyond them. So if you remove those barriers, people start trusting that, oh, you know what, this is helping me. It’s getting my job done better. And it’s also getting me more understanding of new technologies.

So, I’m going to go ahead and start talking about the technology that I’m using. And you start accepting that in your day-to-day work environment and take that to the next level, because you start innovating. So it’s a step process, I would say. And what we have done today at ServiceNow, we’ve seen the adoption go up a lot, be it in engineering, be it in finance, be it in customer support, be it in go-to-market, because they started playing with these technologies and bringing them to the day-to-day work environment. And from there, they’ve been starting to now innovate and come up with new ideas to help make their jobs better and how you make the customer’s life better long-term.

Arjun Karpal

Does that set them up then for success when, you know, inevitably we will see the changing nature of work? And also, we’ve already seen some companies, you know, make layoffs and blame it on AI; whether that’s true or not is another debate. But certainly, there will be a changing nature of work, and organizations are rethinking the workforce. So by doing the reskilling, is this setting employees up?

Amit Zaveri

No, I think you’re right. I mean, with any technology transformation, there are always worries about job losses. This is not the first time a technology shift has created that anxiety. It happened with the cloud when the cloud happened. It happened with the web when the transformation happened towards that. I think the difference here is the speed sometimes, and the uncertainty of understanding what it does to you as an individual in some cases. But a lot of the news out there is saying that because of AI, we reduced our staffing. I think some of them are just using that as an excuse, from what I’ve seen so far. If you look at our business, our AI business has grown significantly.

And we have been able to add more people because we were able to expand our TAM. We’ve been able to get into a lot more new segments of the market because of the investment. We’ve been able to reinvest a lot of the money we save because of AI into a lot of new areas. And that is one thing which I think a lot of companies are starting to realize. You take out a lot of the mundane tasks and move into the high-value tasks. You can increase your top line. Sure, you help on the bottom line with AI. There is a lot of work you can now outsource to autonomous agents and agentic workflows instead of having to do it with humans.

But the humans are now able to do a lot of other things you couldn’t and get into a lot of new segments. Now, business has grown significantly because we’ve been able to now take that savings and invest into a lot of new areas.

Arjun Karpal

You mentioned agentic there, and I’m glad you did because the other part of this trust equation is what you were mentioning earlier around safety and security as well. Well, given the excitement around agentic AI and how much businesses want to adopt this, is there enough focus being put right now on the vulnerabilities from a cyber perspective when it comes to agentic AI?

Amit Zaveri

I would say that’s probably the biggest concern for companies when they think about AI and agentic, right? If you look at early last year, when we used to go and talk about agentic workflows or AI, most of the companies were worried about not having any kind of visibility, worried about vulnerabilities, worried about security, worried about control. And once we started introducing to them the capability of controlling some of this implementation and having security profiles around the AI implementation, a lot more companies started adopting AI. I would say middle of last year and late last year, the volume of our agentic workflows being adopted by customers went up by 55 times, 55x. Because what happened was they started feeling comfortable that, one, they have visibility into all the AI systems.

Second, they have the ability to secure it, because you don’t want to lose access to your data or get it accessed externally without any kind of permissions. And once you start giving them that comfort factor, they start to see the benefit of taking agentic AI and implementing that into their businesses, be it a workflow around case management, incident management, triaging, or being able to resolve issues. And that is a very, very valuable thing for them. But you can only do that once you have the security part of it. So, for example, we’ve been investing aggressively in the security space; our security business itself is a billion dollars plus, but we’ve been adding now for AI agents. The AI agents are changing roles every second you call them, based on what requirements you have.

So how do you manage the permissions? How do you manage their identity? So we recently bought a company called Veza, which does access graphs for non-human identities, which makes it much more valuable to our customers, because now they know that those agents are guaranteed not to do something nefarious, and they won’t have access to data they’re not allowed to have. And whenever you change the roles, they only get to do things based on the roles. So it’s a very important part of it. And I think agentic AI and things like that will not be adopted if you don’t have the right kind of security technology as part of the implementation. It cannot be on the side.

It has to be part of the product.

Arjun Kharpal

There are certain tech companies who will sort of talk up the capabilities of agentic AI right now and talk up how enterprises are adopting AI. But from your perspective, has the adoption of AI by enterprises been faster, slower, or about in line with what you had anticipated?

Amit Zavery

I think there was a lot of expectation early last year. Everybody thought that agentic AI and AI agents would proliferate across every enterprise. I thought that was probably a little more optimistic and unrealistic, because there were a lot of technologies which were missing to really provide you a platform which guarantees everything before you go and adopt it. A lot of those things started happening, I would say, in the middle of last year, and now the volume of adoption has gone up. But it is probably more thoughtful than experimental, the way it was before. A lot of people were experimenting with it, but they were not wanting to put it in production because of the security things you talked about: trust and safety and compliance.

Now with a lot of the things customers are seeing from vendors like us, where you’re providing an AI control tower, for example, to make sure you have visibility and control, they’re feeling more comfortable. So the volume is starting to go up. Use cases are getting much more defined. And what I’ve seen so far is that once you implement one or two use cases, you start seeing ROI. Then the next set of use cases comes very, very fast. So you have to make it easy to adopt, you have to provide the security and everything else around it, and then get them to see the ROI. And once you get the ROI, I think the customers all feel that this is something valuable to them and it’s something they want to invest in.

Arjun Kharpal

I mean, can I get your take on a comment we had on CNBC this week? I was speaking to the CEO of Mistral AI in Europe, and it was around this conversation happening in financial markets right now around software, and how much these agentic AI systems are going to do the job of the software that enterprises currently pay for, and these SaaS businesses. And he said he believes that roughly 50% of the software currently being used by enterprises could shift to AI. I just wanted to get your take on that, given how embedded you are in this industry.

Amit Zavery

No, I think there’s a lot of debate in the industry about what AI is going to do to the software industry. I think AI is going to be a synergetic part of any software you’re going to build going forward, and it’s already happening now; it has to be built with an AI mindset and with AI as part of the platform and the foundation. The companies which are going to suffer are the companies who are not adopting AI fast enough. So for any vendor who’s thinking about AI as a side thing, or something which is coming later, I think it’ll be very difficult to really justify customers buying that product.

Companies like us and others have been making AI part of the foundation, part of the platform; we have been doing that for a few years. We’re already accelerating that adoption because customers see the value, and second, I think they do believe that this is going to be a very big competitive advantage for them as well. And so we see a lot of synergy. We do a lot of partnership with OpenAI and Anthropic. We work with Mistral. We work with Google and Gemini, because I think there’s a synergy between what foundational models and AI technology provide and all the things you have to do around them. That’s what the software industry can do. So what we’re doing is we’re building on top of it, but it’s like 5% to 10% of the IP comes from those models.

90% comes from technology we build, because you have to build a lot of context around enterprise use cases. You have to understand what it means. You have to understand why an exception happened and how you handle it. Models are basically telling them what to do, but they don’t know why. The why part, the context part, comes from the technologies and software we build. And the companies who are going to do that much better, who understand the domain, understand the expertise, and have a lot of experience, will win in this market. And that’s the difference, I think.

Arjun Kharpal

I mean, we’ve got about a minute left. I just wanted to get your take on the future. If we were sat here…

Amit Zavery

I think we still will be talking about security and risk, definitely, because there’s a lot of work still to be done on regulations. Every country is now thinking about what AI means to them and what kind of regulations they want to put in for privacy, security and other things like that. I think the other one which is starting to come up a lot is physical AI. So we’re doing a lot of work in OT, operational technology, because a lot of shop floors are changing with physical AI, with humanoids and droids and things like that, because they are going to be the next generational way of manufacturing. So how do you now secure that? How do you bring that in as part of the processes?

How you integrate that into your environment is going to be a critical discussion as well.

Arjun Kharpal

Fantastic. Amit, thanks for your insights. So incisive. I appreciate your time. Thank you so much. A round of applause for Amit Zavery of ServiceNow. Thank you, everyone.

Speaker 1

Mr. Amit Zavery, and thanks, Arjun Kharpal, for moderating this conversation. Ladies and gentlemen, with this, we end.

Related Resources
Knowledge base sources related to the discussion topics (19)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Host Arjun Kharpal introduced Amit Zavery, President and Chief Product Officer of ServiceNow.”

The transcript snippet shows Amit Zavery speaking and thanks Arjun Kharpal for moderating, confirming the host-speaker relationship [S1].

Correction (high)

“ServiceNow acquired Veza to add access‑graph technology that enforces granular permissions.”

The knowledge base records ServiceNow’s recent AI acquisition as Moveworks for $2.85 billion, with no mention of a Veza acquisition, indicating the reported claim is inaccurate [S94].

Additional Context (medium)

“In ServiceNow’s experience the AI business has become a billion‑dollar‑plus unit.”

ServiceNow’s $2.85 billion purchase of Moveworks demonstrates a multi-billion-dollar commitment to AI, providing context for the size of its AI business [S94].

Additional Context (medium)

“Enterprises need clear visibility, auditability, compliance and safety mechanisms to rely on AI for mission‑critical workflows.”

Other sessions stress that AI systems must be auditable and safety-focused, reinforcing the importance of those mechanisms [S78].

Additional Context (low)

“ServiceNow first focused on getting employees to accept AI as useful, retraining staff and giving them hands‑on access to AI tools to experience faster, more efficient task completion.”

Industry discussions highlight the value of hands-on workforce training and the shift toward human-centred AI adoption, echoing the described cultural programme [S64] and the broader benefit of automating repetitive work [S86].

External Sources (96)
S1
Conversation: 02 — – Amit Zavery- Arjun Kharpal
S2
https://dig.watch/event/india-ai-impact-summit-2026/conversation-02 — Fantastic. Amit, thanks for your insights. So incisive. I appreciate your time. Thank you so much. Round of applause for…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Conversation: 02 — – Amit Zavery- Arjun Kharpal
S7
https://dig.watch/event/india-ai-impact-summit-2026/conversation-02 — Mr. Amit Zaveri, and thanks, Arjun Kharpal, for moderating this conversation. Ladies and gentlemen, with this, we end.
S8
Indias Roadmap to an AGI-Enabled Future — And finally my co -founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from ground up starting with the very p…
S9
Shaping the Future AI Strategies for Jobs and Economic Development — Trust must be designed from day one, not retrofitted after deployment
S10
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S11
Thinking through Augmentation — AI is prevalent and beneficial, with 11,000 people using it daily at Cineph and achieving incredible results. However, c…
S12
How AI Is Transforming Indias Workforce for Global Competitivene — And I think that, you know, because AI is transforming tasks within jobs rather than eliminating, you know, roles entire…
S13
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S14
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S15
Chief AI scientist at Meta states that AI won’t permanently displace jobs — Prof. Yann LeCun, the Chief AI Scientist at Meta and one of the people dubbed “godfathers of AI”, believes that AI techn…
S16
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S17
Agentic AI in Focus Opportunities Risks and Governance — All panelists emphasized the critical importance of enterprise guardrails and human oversight. They stressed that while …
S18
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — Zafrir explains that organizations are introducing AI agents rapidly without understanding their capabilities, treating …
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — “The second big constraint is this notion of a context gap.”[32]. “So that second area of agents being enriched with mac…
S20
SAP elevates customer support with proactive AI systems — AIhas pushedcustomer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that …
S21
S22
The role of standards in shaping a safe and sustainable AI-driven future — In conclusion, we are thankful for the steadfast support from all stakeholders, which is pivotal to unlocking AI’s poten…
S23
Setting the Rules_ Global AI Standards for Growth and Governance — Basically, we want to know how AI providers are managing risk, but we are in the early days of defining really what that…
S24
Building the AI-Ready Future From Infrastructure to Skills — Gilles Garcia presented physical AI as a paradigm shift from cloud-centric to edge computing applications. His focus on …
S25
Opening of the session/OEWG 2025 — Pakistan: Mr. Chair, let me begin by expressing Pakistan’s profound appreciation for the unwavering dedication and pa…
S26
Secure Finance Risk-Based AI Policy for the Banking Sector — This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automat…
S27
Building the Next Wave of AI_ Responsible Frameworks & Standards — Building confidence and security in the use of ICTs | Artificial intelligence Kamesh highlights that trust, especially …
S28
AI for Social Empowerment_ Driving Change and Inclusion — “But the criteria for hiring has shifted to a more nuanced, a more calibrated way of looking at learnability, looking at…
S29
Challenging the status quo of AI security — Agent identity management presents fundamental challenges including defining what constitutes agent identity, establishi…
S30
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identi…
S31
Multistakeholder Partnerships for Thriving AI Ecosystems — That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are there, …
S32
Laying the foundations for AI governance — This comment grounded the entire discussion in urgent, practical reality. It demonstrated that AI governance isn’t just …
S33
Comprehensive Summary: The Future of Robotics and Physical AI — Economic | Infrastructure Recent Advances and Breakthroughs in Robotics
S34
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — We’re now at a pivotal moment. Artificial intelligence is rapidly transitioning from a technological frontier to a core …
S35
The Foundation of AI Democratizing Compute Data Infrastructure — “So obviously there is a lag between the hardware and the software.”[19]. “My question to anyone, Jan specifically, what…
S36
From principles to practice: Governing advanced AI in action — Juha argues that trust is the sine qua non for AI technology uptake, which in turn is necessary for AI benefits to mater…
S37
Conversation: 02 — This reframes trust from a soft concept to a foundational technical requirement, positioning it as critical infrastructu…
S38
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and th…
S39
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Beddoes references the historical economic argument against the ‘lump of labor fallacy,’ suggesting that technological a…
S40
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Distinction between jobs at risk of displacement versus those in augmentation category Economic | Future of work
S41
Discussion Report: AI-Native Business Transformation at Davos — – Yutong Zhang – Richard Socher Software Evolution and User Interface Changes Software replacement vs. software persist…
S42
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Consensus around least-common denominator of the future process, especially on implementing the existing vs. negotiating…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “An agent that now understands human intentions because, you know, you just need to tell him what you want.”[32]. “You c…
S44
Agentic AI in Focus Opportunities Risks and Governance — Clear consumer intent verification and traceable transactions are essential for agentic commerce
S45
Driving Enterprise Impact Through Scalable AI Adoption — Artificial intelligence | Building confidence and security in the use of ICTs
S46
Enterprise AI adoption stalls despite heavy investment — AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand return…
S47
AI adoption surges with consumers but stalls in business — In a recent analysis, Goldman Sachs warned that while AI is rapidly permeating the consumer market, enterprise integratio…
S48
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflow…
S49
Global AI Policy Framework: International Cooperation and Historical Perspectives — – Alexandra Baumann – Lucia Velasco Building on existing institutions rather than creating entirely new frameworks Lega…
S50
Delegated decisions, amplified risks: Charting a secure future for agentic AI — ## Introduction and Context ## Key Technical Insights ## Proposed Solutions and Recommendations ## Specific Security …
S51
WS #31 Cybersecurity in AI: balancing innovation and risks — Moderate consensus exists among the speakers on key issues, particularly regarding security challenges and the importanc…
S52
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S53
AI Meets Cybersecurity Trust Governance & Global Security — “And I look forward to the dialogue ahead.”[7]. “We are fortunate to have this conversation moderated by Nirmal John, Se…
S54
WS #283 AI Agents: Ensuring Responsible Deployment — ### Bias and Evaluation Challenges ### Introduction and Context – Better evaluation metrics that account for different…
S55
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — While Huang focused on job creation examples, the question of substitution in knowledge work remains partially addressed…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and th…
S57
Comprehensive Report: European Approaches to AI Regulation and Governance — These are things you find in the guidelines. And then we have in the code, we have some rules on transparency which are …
S58
Conversation: 02 — This reframes trust from a soft concept to a foundational technical requirement, positioning it as critical infrastructu…
S59
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S60
Secure Finance Risk-Based AI Policy for the Banking Sector — This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automat…
S61
AI as critical infrastructure for continuity in public services — “Trust also can influence economic confidence and cross-border collaboration.”[54]. “Standards are a very important pil…
S62
Skilling and Education in AI — “Five second response, I think the one action that we need to take is improve the trust infrastructure and make sure tha…
S63
Thinking through Augmentation — AI is prevalent and beneficial, with 11,000 people using it daily at Cineph and achieving incredible results. However, c…
S64
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Tigran Karapetyan: at work. And as we know, the AI is here to stay. It’s not going to go away. It’s there already. So it…
S65
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identi…
S66
Challenging the status quo of AI security — Trust is essential for agent identity systems to function properly and requires development of verification mechanisms
S67
Agentic AI drives a new identity security crisis — New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can…
S68
Driving Enterprise Impact Through Scalable AI Adoption — During the COVID era. very few could explain the ROI. How do you measure the ROI of learning? It’s a really good questio…
S69
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S70
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — So adoption is ultimately where success is measured. And actually, you need to design that in from the get-go. And that …
S71
29, filed Jan. 22, 2010, at 9-10. — Chapter 9: While 65% of Americans use broadband at home, the other 35% (roughly 80 million adults) do not. …
S72
Building the AI-Ready Future From Infrastructure to Skills — And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the …
S73
Comprehensive Summary: The Future of Robotics and Physical AI — So I think most of the robotics, most of the software and AIs to be focused on the highest ROI applications. And so that…
S74
The rise of tech giants in healthcare: How AI is reshaping life sciences — The intersection of technology and healthcare is rapidly evolving, fuelled by advancements in AI and driven by major tech…
S75
Steering the future of AI — **Major Discussion Points:**
S76
The Foundation of AI Democratizing Compute Data Infrastructure — Thanks, Faith. Thank you all for such a brilliant session. My name is Arun Sharma. I work with the World Bank. My questi…
S77
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — -Announcer: Role/Title: Event announcer; Area of expertise: Not mentioned And I want this. The most important thing tha…
S78
Building Population-Scale Digital Public Infrastructure for AI — “These systems need to be auditable.”[58]. “there is this urgency to get things done and that might make one think very …
S79
IndoGerman AI Collaboration Driving Economic Development and Soc — Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and th…
S80
Welcome Address — “How to make AI machine-centric and human-centric?”[33]. “Friends, the future of work will be inclusive, trusted, and …
S81
AI Governance Dialogue: Presidential address — This comment is insightful because it reveals a critical evolution in digital governance philosophy – moving from effici…
S82
Workers report major gains from AI use — ChatGPT now reaches more than 800 million users each week, and this rapid uptake is fuelling a surge in enterprise AI adop…
S83
Generative AI: Steam Engine of the Fourth Industrial Revolution? — To create a positive and practical approach towards adopting new technologies without fear, hands-on workforce training …
S84
S85
Lessons learned: Offering our course on AI for the first time — Participants who attended the AI course were frequently motivated by professional needs. Either they had been requested …
S86
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for…
S87
DIGITAL DIVIDENDS — For the economy as a whole, the most profound impact of the internet on individuals is that it makes workers mor…
S88
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S89
Comprehensive Report: “Factories That Think” Panel Discussion — Economic | Development Jobs that are dull, dirty, and dangerous with high turnover within a year. Aging workforce getti…
S90
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Artificial intelligence | Building confidence and security in the use of ICTs Strategic importance and business models …
S91
How Multilingual AI Bridges the Gap to Inclusive Access — I’m South African. But what would an AI respond, right? And I have a pregnancy in my knee, right? I’m pregnant in my kne…
S92
Navigating the interplay between artificial intelligence, philosophy, education, and governance — This session invited emerging scholars to share their perspectives, experiences, and aspirations regarding the developme…
S93
Comprehensive Report: Preventing Jobless Growth in the Age of AI — – Erik Brynjolfsson- Valdis Dombrovskis- Jonas Prising Economic | Future of work Historical Context and Future of Tech…
S94
ServiceNow expands AI capabilities with $2.9B acquisition — ServiceNow has struck a significant deal, acquiring AI firm Moveworks for $2.85 billion in cash and stock, marking its la…
S95
AI is transforming businesses and industries — I discovered the Dadbot andHereAfter AI. It is a way for people to preserve memories and pass along heirlooms even after…
S96
NVIDIA powers a new wave of specialised AI agents to transform business — Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models. Op…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument, 127 words per minute, 114 words, 53 seconds
Argument 1
Trust must be built into AI products from the start, not added later (Speaker 1)
EXPLANATION
The speaker emphasizes that trust should be an integral design principle of AI solutions rather than an afterthought. Embedding trust early ensures compliance, auditability and user confidence from the outset.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to embed trust from day one is highlighted in the report “Shaping the Future AI Strategies for Jobs and Economic Development” [S9] and reinforced by discussion on audit, compliance and visibility requirements in the conversation transcript [S1].
MAJOR DISCUSSION POINT
Trust as foundational design
AGREED WITH
Amit Zaveri, Arjun Karpal
Amit Zaveri
9 arguments, 216 words per minute, 2112 words, 586 seconds
Argument 1
Trust is essential for audit, compliance, and visibility in AI deployments (Amit Zaveri)
EXPLANATION
Amit argues that without trust, safety and clear visibility into AI behavior, enterprises cannot perform auditing, meet compliance requirements, or have confidence in AI‑driven processes.
EVIDENCE
He explains that without trust and safety it is hard to depend on AI, because companies would be unable to perform auditing, meet compliance requirements, or gain visibility, and therefore could not run their business without understanding what is going on [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The conversation notes that without trust and safety, companies cannot perform auditing, compliance, or gain visibility into AI behavior [S1]; algorithmic transparency is also emphasized by the AI Security Council [S10]; and a dedicated assurance ecosystem for safe AI is described in “Ensuring Safe AI” [S16].
MAJOR DISCUSSION POINT
Trust as infrastructure
AGREED WITH
Speaker 1, Arjun Karpal
Argument 2
Reskilling and cultural shift are needed to earn employee trust and demonstrate AI value (Amit Zaveri)
EXPLANATION
Amit states that enterprises must retrain staff, give them hands‑on access to AI tools, and foster a cultural shift so employees see AI as a productivity enhancer rather than a threat.
EVIDENCE
He describes ServiceNow’s program of retraining employees, providing them AI capabilities to see day-to-day benefits, and then scaling those gains to enterprise-wide process improvements, freeing people from repetitive “skull-crushing” tasks [29-33][34-38][42-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transcript mentions a large cultural shift and employee education as prerequisites for AI adoption [S1]; concerns about data-sharing culture and the need for human-AI collaboration are discussed in “Thinking through Augmentation” [S11]; and the transformation of India’s workforce emphasizes redesigning roles and reskilling rather than simple job cuts [S12].
MAJOR DISCUSSION POINT
Reskilling and cultural change
AGREED WITH
Arjun Karpal
Argument 3
AI does not inherently cause layoffs; it can expand business and create new roles (Amit Zaveri)
EXPLANATION
Amit counters the narrative that AI leads to job cuts, noting that AI can generate growth, enable entry into new market segments and allow companies to reinvest savings into higher‑value work.
EVIDENCE
He points out that ServiceNow’s AI business has grown, allowing the company to add more people, expand into new market segments, reinvest savings, and shift workers from mundane tasks to higher-value activities, resulting in top-line and bottom-line growth [49-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s impact on jobs argue that AI transforms tasks rather than eliminates roles, calling for redesign of work [S12]; policy statements refute the notion of permanent job loss [S14]; and Meta’s chief AI scientist Yann LeCun asserts AI will not permanently displace jobs [S15].
MAJOR DISCUSSION POINT
AI‑driven business expansion
DISAGREED WITH
Arjun Karpal
Argument 4
Visibility, security profiles, and control mechanisms are critical for enterprise adoption of agentic AI (Amit Zaveri)
EXPLANATION
Amit highlights that enterprises need clear visibility, security controls and governance over AI agents before they will adopt agentic workflows at scale.
EVIDENCE
He notes that early concerns about lack of visibility and security limited adoption, but after ServiceNow introduced security profiles and control mechanisms, adoption rose dramatically (55× increase) as customers felt comfortable with visibility and protection of data [71-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The conversation cites the introduction of security profiles and control mechanisms that boosted adoption [S1]; a World Economic Forum panel highlights security mismatches when applying human-centric identity models to AI agents [S18]; and the Agentic AI governance panel stresses the need for guardrails and human oversight [S17].
MAJOR DISCUSSION POINT
Security & control for agentic AI
AGREED WITH
Arjun Karpal
DISAGREED WITH
Arjun Karpal
Argument 5
Acquiring Vesa’s non‑human identity graph technology helps manage AI agent permissions and prevent misuse (Amit Zaveri)
EXPLANATION
Amit explains that ServiceNow bought Vesa to obtain an access‑graph system for non‑human identities, enabling precise permission management for AI agents and reducing risk of unauthorized actions.
EVIDENCE
He describes the Vesa acquisition, stating that the technology provides access graphs for non-human identities, ensuring agents only act within allowed roles and cannot perform nefarious actions [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transcript explicitly describes ServiceNow’s acquisition of Vesa to obtain non-human identity access graphs for agent permission management [S1]; the same panel notes the necessity of new identity security approaches for AI agents [S18].
MAJOR DISCUSSION POINT
Identity management for AI agents
Argument 6
Early optimism was unrealistic; adoption is now more thoughtful after security and trust controls are in place (Amit Zaveri)
EXPLANATION
Amit reflects that the initial hype around agentic AI was overly optimistic, but as security, trust and control features have matured, enterprises are adopting AI in a more deliberate, ROI‑driven manner.
EVIDENCE
He recounts that early expectations were unrealistic, but after security and trust controls (e.g., the AI control tower) were introduced in the middle of last year, adoption became more thoughtful and use cases better defined, leading to visible ROI and faster subsequent deployments [95-101].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Amit reflects on overly optimistic early expectations and the shift to deliberate, ROI-driven adoption after security controls were added [S1]; the Agentic AI focus panel observes a maturing adoption curve once guardrails are established [S17]; and the assurance ecosystem discussion underscores the role of trust in sustained deployment [S16].
MAJOR DISCUSSION POINT
Maturing adoption curve
AGREED WITH
Arjun Karpal
Argument 7
AI will act as a synergistic layer within software platforms rather than fully replace SaaS products (Amit Zaveri)
EXPLANATION
Amit argues that AI will be embedded as a complementary layer in software, providing context and workflow intelligence, while the core IP and domain expertise remain in the platform built by vendors.
EVIDENCE
He says AI will be a synergistic part of any software, with only 5-10% of IP coming from foundation models and 90% from the technology built around enterprise use cases, such as context, exception handling, and domain expertise [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A comment that only 5-10% of IP comes from foundation models while 90% is built around enterprise context aligns with observations that most value is added by custom technology [S2]; Amit also counters the claim that 50% of SaaS will be replaced, a point raised in the conversation [S1].
MAJOR DISCUSSION POINT
AI as a software layer
DISAGREED WITH
Arjun Karpal
Argument 8
Ongoing development of AI regulations, privacy, and security standards will shape future deployments (Amit Zaveri)
EXPLANATION
Amit notes that governments worldwide are crafting AI‑related regulations covering privacy, security and risk, which will influence how enterprises implement AI going forward.
EVIDENCE
He mentions that every country is considering AI regulations for privacy, security and other concerns, indicating that security and risk will remain central topics [127-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources discuss emerging standards and regulatory frameworks for AI, including the role of standards in shaping AI-driven futures [S21], safe and sustainable AI standards [S22], and detailed risk-management rules [S23].
MAJOR DISCUSSION POINT
Regulatory landscape
Argument 9
Emerging focus on physical AI in operational technology (OT) and the need to secure humanoid and robotic systems (Amit Zaveri)
EXPLANATION
Amit points out that AI is moving beyond software into physical systems like factories, humanoids and droids, creating new security challenges that must be addressed as part of enterprise processes.
EVIDENCE
He describes work on operational technology, noting that factories are adopting physical AI, humanoids and droids, and raises questions about how to secure and integrate these systems into existing processes [130-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The CES demonstration and discussions on physical AI, edge computing, and robotics illustrate the shift toward operational-technology AI and its security challenges [S24]; the Agentic AI governance panel also mentions software-defined verification for physical AI interacting with the real world [S17].
MAJOR DISCUSSION POINT
Physical AI and OT security
Arjun Karpal
5 arguments, 199 words per minute, 542 words, 162 seconds
Argument 1
Concern about job loss and the changing nature of work drives the need for employee involvement (Arjun Karpal)
EXPLANATION
Arjun highlights that anxiety over AI‑driven job displacement motivates organizations to involve employees through reskilling and transparent communication.
EVIDENCE
He asks whether reskilling sets employees up for success amid the changing nature of work, referencing layoffs that have been blamed on AI [44-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy briefs highlight anxiety over AI-driven job displacement and frame it as a misplaced concern, urging inclusive approaches [S14]; Meta’s chief AI scientist also reassures that AI will not cause permanent job loss [S15]; and workforce transformation literature stresses redesigning roles rather than simple reskilling [S12].
MAJOR DISCUSSION POINT
Job‑loss anxiety and employee involvement
AGREED WITH
Amit Zaveri
DISAGREED WITH
Amit Zaveri
Argument 2
Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal)
EXPLANATION
Arjun asks if enough attention is being paid to the cyber‑security vulnerabilities that arise when enterprises adopt agentic AI.
EVIDENCE
He directly asks whether there is sufficient focus on cyber-security vulnerabilities related to agentic AI [68-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A World Economic Forum panel points out fundamental security mismatches when applying human-centric identity models to AI agents, indicating gaps in current cyber-security focus [S18]; the Agentic AI governance discussion underscores the need for robust guardrails and oversight [S17].
MAJOR DISCUSSION POINT
Adequacy of cyber‑security focus
AGREED WITH
Amit Zaveri
DISAGREED WITH
Amit Zaveri
Argument 3
Inquiry into whether enterprise AI adoption is faster, slower, or on target compared with expectations (Arjun Karpal)
EXPLANATION
Arjun seeks Amit’s view on whether the pace of AI adoption in enterprises matches, exceeds, or lags behind earlier expectations.
EVIDENCE
He asks, “From your perspective, has the adoption of AI from enterprises been faster, slower, or about right than you had anticipated?” [94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Amit notes that early expectations were overly optimistic and adoption has become more measured after security controls were introduced [S1]; the Agentic AI focus panel describes a maturing adoption curve as organizations implement proper governance [S17].
MAJOR DISCUSSION POINT
Adoption speed assessment
AGREED WITH
Amit Zaveri
Argument 4
Prediction that roughly 50% of current enterprise software could shift to AI‑driven solutions (Arjun Karpal)
EXPLANATION
Arjun references a comment from Mistral AI’s CEO suggesting that about half of existing enterprise software could be replaced by AI‑based offerings.
EVIDENCE
He cites the CEO’s claim, made during a CNBC interview, that roughly 50% of current enterprise software could shift to AI [105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The conversation records the claim that 50% of enterprise software could be replaced by AI, which Amit disputes, providing a counterpoint within the same discussion [S1].
MAJOR DISCUSSION POINT
Potential SaaS displacement
DISAGREED WITH
Amit Zaveri
Argument 5
Closing question about the long‑term direction of AI in enterprises (Arjun Karpal)
EXPLANATION
Arjun wraps up by asking Amit for his outlook on the future trajectory of AI within enterprises.
EVIDENCE
He says, “I just wanted to get your take on the future… If we were sat here” as the final question before concluding the interview [124-126].
MAJOR DISCUSSION POINT
Future outlook
Agreements
Agreement Points
Trust must be embedded in AI systems as a foundational element to enable audit, compliance and visibility.
Speakers: Speaker 1, Amit Zaveri, Arjun Karpal
Trust must be built into AI products from the start, not added later (Speaker 1)
Trust is essential for audit, compliance, and visibility in AI deployments (Amit Zaveri)
All three participants stress that trust cannot be an afterthought; it has to be designed into AI solutions so enterprises can audit, meet compliance requirements and retain visibility into AI behaviour [10][15-16].
POLICY CONTEXT (KNOWLEDGE BASE)
The EU AI Act treats trust as a prerequisite for AI uptake, requiring legislative measures to build confidence and enable auditability [S36]; trust is reframed as a technical requirement essential for compliance and auditing [S37]; EU transparency rules mandate informing downstream providers, supporting visibility and audit trails [S57].
Reskilling employees and fostering a cultural shift are required to build internal trust and demonstrate AI’s value.
Speakers: Amit Zaveri, Arjun Karpal
Reskilling and cultural shift are needed to earn employee trust and demonstrate AI value (Amit Zaveri)
Concern about job loss and the changing nature of work drives the need for employee involvement (Arjun Karpal)
Amit describes ServiceNow’s program of retraining staff, giving them hands-on AI tools and moving from individual productivity gains to enterprise-wide process improvements, while Arjun asks whether such reskilling sets employees up for success amid changing work dynamics [44-48][29-33][34-38][42-43].
Visibility, security profiles and control mechanisms are decisive factors for enterprise adoption of agentic AI.
Speakers: Amit Zaveri, Arjun Karpal
Visibility, security profiles, and control mechanisms are critical for enterprise adoption of agentic AI (Amit Zaveri)
Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal)
Amit explains that early adoption was hampered by lack of visibility and security, and that introducing security profiles and an AI control tower drove a 55× increase in deployments; Arjun explicitly asks if enough attention is being paid to cyber-security vulnerabilities, confirming shared emphasis on security [68-69][71-77].
POLICY CONTEXT (KNOWLEDGE BASE)
Enterprise adoption studies highlight that visibility and robust security controls are critical gate-keepers, and governance gaps around agentic AI slow scaling deployments [S45][S48][S50].
Enterprise AI adoption is proceeding more deliberately than the early hype suggested; security and trust controls have tempered expectations.
Speakers: Amit Zaveri, Arjun Karpal
Early optimism was unrealistic; adoption is now more thoughtful after security and trust controls are in place (Amit Zaveri)
Inquiry into whether enterprise AI adoption is faster, slower, or on target compared with expectations (Arjun Karpal)
Amit reflects that the initial optimism about rapid, widespread agentic AI was unrealistic and that adoption has become ROI-driven after visibility and security features were added; Arjun’s question directly probes the speed of adoption, highlighting a shared view that the pace is more measured [94][95-101].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of recent roll-outs show boards demanding returns and tighter security, leading to a more measured pace that tempers early hype [S46][S47][S48].
Similar Viewpoints
Both agree that robust security and visibility are prerequisites for large‑scale enterprise deployment of agentic AI [68-69][71-77].
Speakers: Amit Zaveri, Arjun Karpal
Visibility, security profiles, and control mechanisms are critical for enterprise adoption of agentic AI (Amit Zaveri)
Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal)
Both assert that trust cannot be retro‑fitted; it must be a design principle from day one to satisfy audit and compliance needs [10][15-16].
Speakers: Amit Zaveri, Speaker 1
Trust must be built into AI products from the start, not added later (Speaker 1)
Trust is essential for audit, compliance, and visibility in AI deployments (Amit Zaveri)
Both recognise that employee up‑skilling and cultural change are essential to mitigate job‑loss anxieties and to embed AI successfully [44-48][29-33][34-38][42-43].
Speakers: Amit Zaveri, Arjun Karpal
Reskilling and cultural shift are needed to earn employee trust and demonstrate AI value (Amit Zaveri)
Concern about job loss and the changing nature of work drives the need for employee involvement (Arjun Karpal)
Unexpected Consensus
Both participants view security as the primary gate‑keeper for AI adoption, despite the broader hype around AI capabilities.
Speakers: Amit Zaveri, Arjun Karpal
Visibility, security profiles, and control mechanisms are critical for enterprise adoption of agentic AI (Amit Zaveri)
Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal)
While industry hype often spotlights performance and productivity, both speakers converge on the less-glamorous but decisive factor of security and visibility as the make-or-break element for enterprise AI roll-out [68-69][71-77].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports identify security as the foremost barrier to enterprise AI, emphasizing trust governance, risk management, and the need for strong cybersecurity frameworks [S45][S51][S53].
Overall Assessment

The discussion reveals a strong consensus that trust, security, and employee reskilling are the foundational pillars for successful enterprise AI deployment. Both Amit and Arjun stress that without visibility, auditability and robust security controls, adoption stalls, and that cultural change through up‑skilling is needed to allay job‑loss concerns. Expectations about rapid, hype‑driven adoption have been tempered by the reality of security‑driven rollout, leading to a more measured, ROI‑focused trajectory.

High consensus on the necessity of trust, security, and reskilling; moderate consensus on adoption pace; limited disagreement on the extent to which AI will replace existing SaaS products.

Differences
Different Viewpoints
Impact of AI on employment – layoffs vs job creation
Speakers: Arjun Karpal, Amit Zaveri
Concern about job loss and the changing nature of work drives the need for employee involvement (Arjun Karpal) AI does not inherently cause layoffs; it can expand business and create new roles (Amit Zaveri)
Arjun raises the possibility that AI could lead to layoffs and questions whether reskilling will protect workers [44-48]. Amit counters that AI has actually driven growth at ServiceNow, allowing the company to add staff, enter new market segments and shift workers from mundane to higher-value tasks, arguing that AI does not inherently cause job cuts [49-67].
POLICY CONTEXT (KNOWLEDGE BASE)
Historical economic literature rejects the ‘lump of labor’ fallacy, noting that past technological shifts have generated new jobs while displacing others; recent Davos panels and economic reports echo this nuanced view of AI’s employment impact [S39][S40][S55].
Extent to which AI will replace existing enterprise software
Speakers: Arjun Karpal, Amit Zaveri
Prediction that roughly 50% of current enterprise software could shift to AI‑driven solutions (Arjun Karpal) AI will act as a synergistic layer within software platforms rather than fully replace SaaS products (Amit Zaveri)
Arjun cites a comment from the CEO of Mistral AI that about half of today’s enterprise SaaS could be displaced by AI [105]. Amit disputes this, stating that AI will be embedded as a complementary layer, with only 5-10% of IP coming from foundation models and the bulk of value coming from vendor-built context and domain expertise [106-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on software evolution indicate that AI introduces new capabilities but legacy enterprise applications often persist, suggesting replacement will be partial rather than total [S41].
Sufficiency of current cyber‑security focus on agentic AI
Speakers: Arjun Karpal, Amit Zaveri
Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal) Visibility, security profiles, and control mechanisms are critical for enterprise adoption of agentic AI (Amit Zaveri)
Arjun asks whether enough attention is being paid to the cyber-security vulnerabilities of agentic AI [68-69]. Amit replies that security is the biggest concern for companies and describes how ServiceNow’s visibility, security profiles and the Vesa acquisition address those risks, implying that the focus is already substantial [70-86][87-89].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments argue that existing cybersecurity measures are inadequate for agentic AI, calling for stronger safeguards and addressing specific vulnerabilities unique to autonomous agents [S48][S50][S51].
Unexpected Differences
Scale of SaaS displacement by AI
Speakers: Arjun Karpal, Amit Zaveri
Prediction that roughly 50% of current enterprise software could shift to AI‑driven solutions (Arjun Karpal) AI will act as a synergistic layer within software platforms rather than fully replace SaaS products (Amit Zaveri)
The claim that half of existing enterprise software could be replaced by AI was not anticipated given the broader discussion focused on trust, security and reskilling. Amit’s rebuttal that AI will augment rather than replace SaaS introduces a surprising point of contention about the magnitude of AI’s disruptive potential [105][106-112].
Overall Assessment

The conversation reveals three main areas of disagreement: (1) whether AI leads to job cuts or creates new roles, (2) how much of current enterprise software will be supplanted by AI, and (3) whether the industry’s current cyber‑security focus on agentic AI is adequate. While both speakers share common goals—building trust, ensuring security, and up‑skilling workers—their viewpoints diverge on the expected outcomes and the pathways to achieve those goals.

Moderate. The disagreements are substantive but not antagonistic; they reflect differing interpretations of AI’s impact rather than outright conflict. This suggests that policy and industry discussions will need to balance optimism about AI‑driven growth with realistic assessments of job displacement risks and the need for robust security frameworks.

Partial Agreements
Both speakers agree that trust, security and visibility are prerequisites for enterprise AI use. Arjun emphasizes the need to ensure sufficient cyber‑security focus, while Amit outlines concrete mechanisms (security profiles, control towers, identity graphs) that provide that trust, showing agreement on the goal but differing on how to achieve it [70-77][84-86][68-69].
Speakers: Arjun Karpal, Amit Zaveri
Trust is essential for audit, compliance, and visibility in AI deployments (Amit Zaveri) Questioning whether current focus on cyber‑security risks of agentic AI is sufficient (Arjun Karpal)
Both recognize that employee involvement is essential. Arjun highlights anxiety over job loss as a driver for engagement, whereas Amit describes a concrete reskilling program and cultural shift to build trust and demonstrate AI’s productivity benefits [29-33][34-38][42-43][44-48].
Speakers: Arjun Karpal, Amit Zaveri
Reskilling and cultural shift are needed to earn employee trust and demonstrate AI value (Amit Zaveri) Concern about job loss and the changing nature of work drives the need for employee involvement (Arjun Karpal)
Takeaways
Key takeaways
Trust is the foundational infrastructure for enterprise AI, enabling audit, compliance, and visibility.
Building trust requires embedding safety, security, and control mechanisms directly into AI products, not as an afterthought.
Human impact: reskilling, cultural shift, and employee involvement are essential to earn trust and demonstrate AI value.
AI does not inherently cause layoffs; it can expand business, create new roles, and free employees from repetitive tasks.
Security and vulnerability management are critical for agentic AI adoption; visibility, security profiles, and identity control are prerequisites.
ServiceNow’s acquisition of Vesa provides non‑human identity graphs to manage AI agent permissions and mitigate misuse.
Initial expectations of rapid, widespread AI adoption were overly optimistic; adoption is now more thoughtful after security and trust controls are in place.
AI will act as a synergistic layer within software platforms rather than fully replacing SaaS products; companies that embed AI deeply will gain competitive advantage.
Future considerations include evolving AI regulations, privacy standards, and the emergence of physical AI/operational technology that will require new security approaches.
Resolutions and action items
ServiceNow is retraining its workforce and providing employees access to AI capabilities to demonstrate value.
ServiceNow is expanding its AI business, reinvesting savings into new market segments and high‑value tasks.
ServiceNow has acquired Vesa to implement non‑human identity graphs for managing AI agent permissions and security.
ServiceNow is investing aggressively in AI‑specific security products, including an AI control tower for visibility and compliance.
Unresolved issues
Whether the current industry focus on cyber‑security risks of agentic AI is sufficient.
How quickly comprehensive AI regulations and privacy standards will be established globally.
The extent to which AI will displace existing SaaS solutions (e.g., the cited 50% estimate).
Best practices for securing physical AI and robotics in operational technology environments.
Long‑term impact of AI on job structures and how organizations can continuously upskill employees.
Suggested compromises
Adopt AI incrementally: start with limited, well‑controlled use cases, demonstrate ROI, then expand while maintaining security and trust controls.
Combine AI augmentation with human roles rather than full automation, preserving jobs while increasing productivity.
Treat AI as a synergistic layer on top of existing software platforms, allowing legacy SaaS products to evolve rather than be abruptly replaced.
Thought Provoking Comments
Trust is the new infrastructure – without trust, safety, auditing, compliance and visibility, enterprises cannot depend on AI for critical use cases.
Frames trust not as a soft, optional concern but as a foundational layer comparable to networking or compute, shifting the conversation from capabilities to governance.
Sets the thematic foundation for the entire interview, prompting subsequent questions about how trust is built (human, security, compliance) and leading to deeper discussion on security controls and cultural change.
Speaker: Amit Zaveri
We’ve been retraining our employees, giving them access to AI tools so they see how it makes their day‑to‑day work faster and frees them from repetitive, ‘skull‑crushing’ tasks.
Highlights the practical, human‑centric approach to AI adoption—education and reskilling—as essential for building trust, rather than just deploying technology.
Introduces the human element of trust, prompting Arjun to ask about job loss concerns and leading Amit to discuss reskilling versus layoffs, thereby expanding the conversation to workforce implications.
Speaker: Amit Zaveri
The narrative that AI is causing layoffs is often an excuse; our AI business has actually grown, allowing us to add people and reinvest savings into new market segments.
Challenges the prevailing fear that AI inevitably reduces headcount, offering a data‑backed counter‑narrative that AI can drive expansion and new hiring.
Shifts the tone from anxiety to opportunity, influencing the dialogue toward how AI can augment rather than replace human workers and reinforcing the earlier point about cultural shift.
Speaker: Amit Zaveri
Security and visibility were the biggest blockers; once we gave customers control towers and security profiles, adoption of agentic workflows jumped 55×.
Provides a concrete metric linking security enablement to rapid adoption, underscoring that trust mechanisms are the catalyst for scaling AI.
Acts as a turning point, moving the discussion from abstract trust concepts to tangible results, and leads Arjun to probe deeper into cyber‑security concerns and the Vesa acquisition.
Speaker: Amit Zaveri
We acquired Vesa, which builds access graphs for non‑human identities, ensuring AI agents have the right permissions and cannot act nefariously.
Introduces the novel idea of treating AI agents as identities with granular access controls—a fresh perspective on securing autonomous systems.
Expands the conversation into identity management for AI, reinforcing the security narrative and illustrating concrete product strategies to address trust.
Speaker: Amit Zaveri
AI will be a synergistic layer on top of software; only about 5‑10% of IP comes from foundational models, the remaining 90% is the context and domain expertise we build.
Reframes the debate about AI replacing software by positioning AI as an augmenting foundation, emphasizing the enduring value of domain‑specific engineering.
Redirects the discussion from fear of software displacement to the strategic importance of integrating AI with existing platforms, influencing the future‑outlook segment.
Speaker: Amit Zaveri
Looking ahead, security, regulation, and physical AI (OT, humanoids, droids) will dominate the conversation as enterprises embed AI into manufacturing and other physical processes.
Broadens the scope beyond enterprise IT to operational technology, highlighting emerging challenges and opportunities in a less‑explored area of AI deployment.
Provides a forward‑looking conclusion, setting the stage for future industry focus and reinforcing the recurring theme that trust and security remain paramount.
Speaker: Amit Zaveri
Overall Assessment

The discussion’s trajectory was shaped by Amit Zaveri’s framing of trust as the essential infrastructure for AI, his concrete examples of how security controls unlock massive adoption, and his counter‑narratives to common AI anxieties about job loss and software displacement. Each of these insights acted as a pivot point, steering the conversation from abstract hype to practical, governance‑focused implementation, and ultimately expanding the dialogue to encompass future challenges in regulation and physical AI. Collectively, these comments deepened the analysis, introduced new dimensions (human reskilling, identity for AI agents, synergy with existing software), and guided the interview toward a nuanced view of AI’s role in enterprise transformation.

Follow-up Questions
How can enterprises effectively manage permissions and identity for AI agents (non‑human identities) to ensure security and prevent misuse?
Amit highlighted the acquisition of Vesa for access graphs, indicating the need for robust identity and access management for AI agents, a critical component of trust and security.
Speaker: Amit Zaveri
What regulatory frameworks will emerge globally for AI, especially concerning privacy, security, and risk, and how will they impact enterprise adoption?
Amit noted that every country is considering AI regulations, suggesting a need for research into forthcoming policies and their implications for compliance.
Speaker: Amit Zaveri
How will physical AI (operational technology, humanoids, droids) be securely integrated into manufacturing and other OT environments?
He raised the upcoming challenge of securing physical AI in factories, indicating a gap in current knowledge and practice that requires further study.
Speaker: Amit Zaveri
What metrics and methodologies should enterprises use to measure ROI of early AI agentic workflow deployments to accelerate broader adoption?
Arjun referenced the importance of demonstrating ROI after one or two use cases, implying a need for systematic evaluation tools.
Speaker: Arjun Karpal
What are the long‑term impacts of AI‑driven reskilling on workforce composition, job displacement, and employee morale?
The discussion touched on fears of job loss and the role of reskilling, suggesting further research into workforce dynamics.
Speaker: Arjun Karpal
To what extent will AI agents replace or augment existing SaaS software functions, and is the estimate that ~50% of current enterprise software could shift to AI realistic?
Arjun cited a comment from Mistral AI’s CEO about a potential 50% shift, indicating a need for deeper analysis of AI’s impact on the software market.
Speaker: Arjun Karpal

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.