Shaping AI’s Story: Trust, Responsibility & Real-World Outcomes

20 Feb 2026 18:00h - 19:00h

Shaping AI’s Story: Trust, Responsibility & Real-World Outcomes

Session at a glance

Summary

This discussion at the AI Impact Summit focused on achieving responsible AI deployment through trust, accountability, and sustainable scaling across public and private sectors. The panel, moderated by Mridu Bhandari, included representatives from the Australian government, First Abu Dhabi Bank, Ericsson, and Wipro, exploring how to balance AI innovation with risk management and public trust.


Paul Hubbard from Australia’s Department of Finance emphasized that trust is foundational to AI innovation rather than a barrier, advocating for a people-first approach that meets citizens where they are and ensures democratic participation in AI deployment. Erik Ekudden from Ericsson discussed the evolution of networks from passive carriers to intelligent fabrics, highlighting how infrastructure must become an active enabler of AI applications, from consumer devices like AI glasses to industrial applications requiring distributed inference capabilities.


Divyesh Vithlani from First Abu Dhabi Bank shared their platform-first approach to AI governance, treating AI agents similarly to human employees with performance management, training programs, and accountability measures. He stressed that in banking, trust is existential rather than philosophical, requiring robust governance frameworks built into AI platforms from the ground up. Hari Shetty from Wipro advocated for “proof over promise,” emphasizing problem-first thinking rather than model-first approaches, and highlighted the importance of enterprise-ready solutions that work consistently over time.


The discussion revealed consensus that AI risks are manageable rather than insurmountable, requiring appropriate guardrails and governance structures. Panelists agreed that measuring AI success should extend beyond productivity metrics to include business transformation, decision velocity, and competitive advantage. Looking ahead, they envisioned an AI-native future where seamless integration of human and artificial intelligence transforms how we work, bank, and interact with technology, provided that trust and inclusion remain central to development efforts.


Key points

Major Discussion Points:

Building Trust as a Foundation for AI Innovation: The panel emphasized that trust isn’t opposed to innovation but rather enables it. Paul Hubbard stressed the importance of democratic, participatory approaches that meet people where they are, while Divyesh Vithlani discussed embedding governance and ethical AI principles directly into platform architecture to ensure safe, scalable deployment.


Infrastructure Evolution and Intelligent Networks: Erik Ekudden highlighted the transformation from passive AI carriers to active enablers, discussing how 5G/6G networks are becoming an “intelligent fabric” that hosts distributed AI workloads. The conversation covered practical applications like AI glasses and the need for networks to provide security, trust, and real-time responsiveness for emerging technologies.


Accountability and Governance in an AI-Driven World: The discussion explored who is responsible when AI agents make decisions, with panelists agreeing that accountability must reside with the domain providers. Vithlani described treating AI agents like human employees with performance management, onboarding/offboarding processes, and hierarchical responsibility structures.


Moving from AI Pilots to Scalable Value Creation: Hari Shetty emphasized “proof over promise,” advocating for problem-first thinking rather than model-first approaches. The panel discussed measuring ROI beyond productivity metrics, focusing on business transformation, decision velocity, and treating AI as fundamental infrastructure rather than optional technology.


Future Vision and Responsible Scaling: Looking ahead to 2030, panelists envisioned seamless AI integration across banking, networks, and daily life, with emphasis on inclusive deployment and maintaining human control. They stressed the importance of public-private partnerships and cross-sector collaboration to ensure AI benefits reach all segments of society.


Overall Purpose:

The discussion aimed to address how organizations and governments can achieve responsible AI deployment at scale while maintaining trust, accountability, and public benefit. The panel sought to move beyond theoretical frameworks to practical implementation strategies, focusing on the “seven chakras of aligned global cooperation” (human capital, inclusion, trust, resilience, science, resources, and social good) to translate AI ambitions into accountable action.


Overall Tone:

The discussion maintained an optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’s transformative potential while acknowledging real challenges around trust, governance, and risk management. The conversation was collaborative and solution-oriented, with speakers building on each other’s insights rather than debating opposing viewpoints. The tone remained consistently forward-looking and constructive, emphasizing practical implementation over theoretical concerns, and concluded with shared excitement about the possibilities ahead while maintaining focus on responsible deployment principles.


Speakers

Speakers from the provided list:


Mridu Bhandari – Moderator from Network18


Paul Hubbard – First Assistant Secretary for AI Delivery and Enablement at the Department of Finance in the Australian Government; Public Policy Economist; Self-described “AI Masked Economist”


Divyesh Vithlani – Group Chief Technology and Transformation Officer, First Abu Dhabi Bank


Erik Ekudden – Chief Technology Officer of Ericsson


Hari Shetty – Strategist and Technology Officer at Wipro


Additional speakers:


– No additional speakers were identified beyond those in the provided speaker list.


Full session report

This comprehensive discussion at the AI Impact Summit brought together leaders from government, telecommunications, banking, and consulting to address how to achieve responsible AI deployment at scale whilst maintaining trust, accountability, and public benefit. The panel, moderated by Mridu Bhandari from Network18, explored practical implementation strategies for what Bhandari termed the “seven chakras of aligned global cooperation” – human capital, inclusion, trust, resilience, science, resources, and social good.


Trust as the Foundation of Innovation

The discussion began with a fundamental reframing of the relationship between trust and innovation in AI deployment. Paul Hubbard, First Assistant Secretary for AI Delivery and Enablement at Australia’s Department of Finance, challenged the conventional wisdom that positions these as competing priorities. Drawing from his background – including starting a podcast during COVID to explain economics and unpack jargon – Hubbard emphasised that trust enables rather than hinders innovation.


“It’s really important that we don’t frame it as like trust versus innovation,” Hubbard argued. “It’s actually a foundation of trust that lets you make the innovation.” This perspective established a framework where democratic participation and people-first approaches become enablers of technological progress, with emphasis on meeting citizens where they are in terms of their comfort with AI.


Divyesh Vithlani, Group Chief Technology and Transformation Officer at First Abu Dhabi Bank, reinforced this theme from a banking perspective: “It’s not either or. It’s not about you have trust or you have productive AI. Our business in banking relies 100% on trust. So that is not a value that we can compromise on any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction.”


Infrastructure Evolution for AI-First Applications

Erik Ekudden, Chief Technology Officer at Ericsson, provided insights into how network infrastructure must evolve to support AI applications. Rather than viewing networks as passive carriers, he described transformation towards what he called an “intelligent fabric” – networks that actively enable and host AI workloads.


“The network is already secure, trusted. It’s going to be a carrier of all these inference workloads,” Ekudden explained, highlighting how 5G and emerging 6G networks become the foundation for distributed AI applications. He illustrated this with practical examples like AI glasses providing real-time navigation and language translation – applications requiring network-based processing rather than on-device computation.


This shift from centralised training to distributed inference represents a fundamental architectural change. Ekudden projected scaling challenges involving billions of users and sensors, necessitating networks that provide tailored service quality and security for each application. On sustainability concerns, he argued that focus on energy-intensive training phases obscures the more efficient reality of distributed inference: “We’re not going to explode energy consumption just because we use more AI.”


Human-Centric AI Governance Models

Vithlani outlined an innovative approach to AI governance that treats artificial agents with human-like management frameworks. “I view an agent no different to a human so you do performance management,” he explained, describing systems including “agent university” for training and graduated autonomy based on demonstrated capabilities.


This governance model extends to operational management including performance monitoring and conflict resolution between agents and humans. “Whilst humans may fill out a timesheet to account for the work that they’ve done… we’re also monitoring the agent for the tokens that they’ve consumed for the output that they’ve generated,” creating parallel accountability structures.


The approach involves treating AI agents like new graduates – providing appropriate guardrails and supervision that evolve with experience and competence, ensuring human oversight whilst enabling scalability for large organisations.


Moving Beyond Pilots to Production Value

Hari Shetty, Strategist and Technology Officer at Wipro, addressed the persistent challenge of moving from pilot projects to scalable, production-ready solutions. His emphasis on “proof over promise” provided a framework for organisations struggling beyond perpetual experimentation.


“AI is no longer about pilots. It’s about being able to get value out of AI,” Shetty declared, outlining key principles for successful scaling. First, organisations must adopt problem-first thinking rather than model-first approaches – identifying business challenges before selecting AI technologies. Second, enterprise AI faces fundamentally different challenges than consumer applications: “Enterprises are necessarily messy,” requiring solutions that work within existing constraints.


Third, solutions must work reliably “every day, every hour, and every minute” rather than just in controlled environments. Finally, “agentic trust is earned” through consistent performance over time, treating trust as an outcome of reliable operation rather than a prerequisite.


Measuring AI Value Beyond Traditional ROI

The panel revealed sophisticated thinking about measuring AI success beyond traditional return-on-investment calculations. Shetty provided a provocative perspective, comparing AI adoption to foundational technologies: “It’s almost like going back in time – could you ask should we implement an email system, what’s the ROI on the email system?”


Vithlani outlined a three-tier approach to value measurement. At the micro level, AI provides productivity improvements through co-pilot technologies. At the enterprise level, AI transforms complex, error-prone processes with tangible financial impact. Most strategically, AI provides competitive advantage through enhanced organisational responsiveness – the ability to “respond and react to change faster than our competitors.”


Ekudden reinforced this multi-dimensional approach, noting that whilst efficiency gains of 10-50% represent significant value in telecommunications, the most exciting opportunities emerge when AI enables entirely new business models and revenue streams.


Risk Management: Balanced and Practical Approaches

The panel demonstrated consensus that AI risks are manageable through appropriate frameworks rather than representing insurmountable obstacles. Shetty provided direct assessment: “There is certainly a level of risk that one should be aware of and work with… but at the same time the hype about risk is also overstated. It’s a manageable risk, it’s not an uncontrolled, unmanageable risk.”


Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “quite realistic” whilst government sectors may still overestimate risks. Vithlani contributed that AI risks can be managed by extending existing regulatory frameworks: “AI is not actually new technology,” noting that AI concepts predate cloud computing and mobile technology.


The discussion revealed nuanced thinking about contextual risk tolerance, with Shetty describing scenarios where 85% accuracy might be acceptable for certain processes whilst others require 99.99% reliability.


Accountability in Distributed Systems

As AI systems become more distributed and autonomous, the panel advocated for clear domain-specific responsibility rather than overarching accountability frameworks. Ekudden articulated this approach: “If you are replacing work with an agent, that basically needs to translate into accountability and then also transparency, trust and governance issues around those agents.”


He described a hierarchical model where different agent levels have different decision-making authority and corresponding accountability structures. This draws from existing practices in critical infrastructure management, where telecommunications networks already provide life-critical services with established safety guardrails.


Hubbard emphasised government accountability through clear communication and comprehensive planning – demonstrating how AI will create better jobs, attract investment, and spread benefits broadly whilst keeping citizens safe.


Future Vision and Cross-Sector Collaboration

Looking toward 2030, panellists provided concrete predictions about AI transformation. Shetty projected that “the decision velocity in organisations will completely change in the next four years,” with AI accelerating processes so dramatically that current speeds will seem “intolerably slow.”


Vithlani envisioned seamless AI integration where “banking will be a lot more seamless” and shopping becomes intuitive. Ekudden projected “digital colleagues” and “physical AI colleagues” working alongside humans, though acknowledging uncertainty about achieving comprehensive transformation globally within four years.


Throughout the discussion, panellists emphasised that realising positive AI visions requires unprecedented collaboration across sectors. Hubbard highlighted Australia’s AI CoLab as an example of effective cross-sector collaboration, bringing together government, private sector, academics, and not-for-profits.


The collaborative imperative extends beyond technical considerations to social and ethical dimensions, with Hubbard noting successful AI initiatives require “having the people who are going to be in the room who may not care about AI, but they do care about the services that are being delivered.”


Conclusion

The discussion revealed remarkable convergence across sectors on fundamental principles for responsible AI deployment: trust as foundational to innovation, manageable AI risks through appropriate frameworks, problem-first thinking, and inclusive approaches ensuring broad social benefit.


The conversation moved beyond abstract AI potential to concrete, actionable frameworks – Vithlani’s governance models, Shetty’s scaling principles, Ekudden’s network architecture, and Hubbard’s collaborative approaches provide practical guidance for organisations seeking to harness AI’s potential whilst maintaining safeguards.


The panellists’ shared optimism about AI’s transformative potential was balanced with realistic acknowledgement of required work. Their insights suggest that whilst technical capabilities for transformative AI applications are emerging rapidly, realising full potential depends on building trust, governance frameworks, and collaborative relationships to ensure AI serves humanity’s broader aspirations for progress and social good.


Session transcript

Mridu Bhandari

for shaping a sustainable AI future that we are calling People, Planet and Progress. And to translate these sutras into action, we are looking at what we call the seven chakras of aligned global cooperation. So these are the concrete pillars that will really turn ambition into accountability. We have human capital, inclusion, trust, resilience, science, resources and social good as the seven chakras that we are going to be talking about. Today we have with us a very eminent panel trying to answer the defining question of this AI-first decade that we are in. How can we achieve trust before scale? Outcomes over optics, and responsibility as a competitive advantage. I’m Mridu Bhandari from Network18 and I’m very delighted to be joined by a panel of very distinguished guests here tonight.

Starting from my left, Paul Hubbard, First Assistant Secretary for AI Delivery and Enablement at the Department of Finance in the Australian government. Next to him, Divyesh Vithlani, Group Chief Technology and Transformation Officer, First Abu Dhabi Bank. Erik Ekudden, the Chief Technology Officer of Ericsson. And Hari Shetty, Strategist and Technology Officer at Wipro. Welcome, gentlemen. Thank you so much for joining us here today. You know, perhaps let’s set the context with the foundations of trust and scale. And Paul, if I may start with you first, you know, I was going through your LinkedIn profile and you call yourself the AI masked economist.

So very interesting moniker there. Why don’t you first tell us what that really means? And then we’ll jump into the rest of the stuff.

Paul Hubbard

Thanks for having me, and it’s great to be here in India. I think we all bring a lens to AI. My lens that I bring is economics. I’m a public policy economist, which for me means AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.

Mridu Bhandari

And why do you call yourself the masked economist?

Paul Hubbard

The masked economist? That’s another story for you. That started in COVID, remember, when we were all wearing masks. And at the time, I started a podcast, which was all about explaining economics and unpacking the jargon. And I’ve kept that because I think explaining AI, unpacking the jargon, seeing how it relates to everyday life is really, really important.

Mridu Bhandari

Right. Now, when we talk about AI for social good, public permission is really, really important. Public trust is very important. Now, how do we really build confidence in AI across society without slowing down innovation? How are you doing that in Australia? Give us some examples of how you’ve been able to do that, especially because citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.

Paul Hubbard

Yeah, absolutely. I think it’s really important that we don’t frame it as like trust versus innovation. It’s actually a foundation of trust that lets you make the innovation. It’s starting from the proposition of what’s the problem we’re trying to solve or what are we trying to deliver for citizens? If you’re a government, what are you trying to deliver for your customers? Meet them where they’re at. Now, different countries, different populations have different comfort already, different familiarity with AI. You’ve got to know where people are up to, what they want and build from there, rather than just say, here’s a brand new thing that we’re going to impose on you. So I think really that framing, that democratic, participatory approach, that people-first approach, is key.

Mridu Bhandari

Right. Erik, coming to you, AI is often discussed at the application layer, but you’ve mentioned that intelligence must be embedded into the networks themselves. Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?

Erik Ekudden

Yeah, so first of all, Ericsson builds networks, advanced connectivity, so 5G and 6G, and increasingly that’s becoming this fabric that we all depend on. But let’s start by thinking about what people are using today. Gen AI is already on hundreds of millions of smartphones, actually billions, already doing AI applications across the mobile infrastructure. So it’s already secure and trusted. The network already provides the guarantees that you need. But I think, especially here in India, we’re talking about industrial AI applications, agriculture. There’s going to be a lot of AI in the fields, hospitals, education, smart manufacturing. So there’s going to be a lot more dissemination of AI, from where we’re focused today, in training, to distributed AI, or inference generation.

That’s going to happen much further out in the network. So the network is actually becoming the host for all those great AI experiences. We need to scale the networks to handle that. I don’t think I’m the only one. Maybe not everyone carries two pairs of glasses here, but AI glasses, they are already available in millions. Good AI glasses that give you navigation support, that give you real-time language translation, maybe a prompt if you are on a stage making a keynote. I mean, these kinds of things, they cannot be done on the device, on the wearables. You need to offload the AI, the inference, from the glasses to the edge. That’s why we talk about this as a transition to an intelligent fabric. The network is already secure, trusted. It’s going to be a carrier of all these inference workloads. So we’re just starting that journey. But I think it really comes back to basic principles. Networks need to be trusted. They need to be secure. They’re already moving from consumers into enterprise and government services, mission critical, a big example here in India.

Mridu Bhandari

So what have the AI glasses been doing for you this week?

Erik Ekudden

I didn’t read every question, because everything is perfect in India when it comes to finding new ways. On a serious note, I actually use them privately and at work. But I start to see people getting really good value, because it is an AI assistant. And think of it, for someone like me, already wearing glasses: once I’ve switched for good to these glasses, why would I go back? Even when I’m indoors, even when I’m at home, even when I’m training or in the elevator, I want it to work. And that, of course, means that the network, this intelligent fabric, needs to be so much better than it is today. Of course, great 5G networks here in India, but in the future, we will need even better ones.

And I think this is a change, in the sense that we will not get the full value of AI, we will not leverage AI fully, until we connect it to that better network for AI. And that’s really what I’m focusing on. But you want to try them on? It’s a good one. No, it’s a great one. That was a little bit of a Gallup poll, yeah: who here is using AI or AR wearables, glasses? Earbuds? Cameras? A few? Okay, well, two, two. Probably a representative crowd here. I think we are very early in this journey. It’s going to be a fantastic journey, I believe, for both consumers and anyone of us working in companies.

Mridu Bhandari

Absolutely, absolutely. Well, I’m going to come back to you. Bringing Divyesh in: you know, in banking, trust is not philosophical, it is existential. So how do you really embed AI into core decision making while also ensuring you don’t dilute any risk discipline? What governance models have you put in place that actually work for you? You know, any best practices that you can share with us here today?

Divyesh Vithlani

Sure. Well, first of all, it’s great to be here. And I’m already, you know, benefiting from the wisdom of my panelists, because my kids will tell you that I’ve been in denial about needing glasses. The eyesight is perfect, but the enlarging, the zooming really helps. But in reality, now I’ve got a different story for them: that I’ve been waiting for AI glasses before I really don a pair of specs. But coming back to your question, I’d pick up on what Paul said. It’s not either or. It’s not about you have trust or you have productive AI. What we believe, like any regulated institution, is that there is no compromise on risks and controls.

Our business in banking relies 100% on trust. So that is not a value that we can compromise on at any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction. And we have conviction right at the very top of the organization that AI is a force for good. We’ve heard a lot this week about AI being a general purpose technology. I really love what Erik said about AI in the network, and I’ll come to that in a second, because a large part of the answer is establishing a platform. But if we take a step back, a conventional organization is defined by its people, its processes, and its technology.

And there are all sorts of safeguards, guardrails, controls that have been built in. In the AI world, I think it’s going to be about agents, models, and data. And I think we’re going to have to have the same guardrails, and perhaps even stronger controls, because it will need AI to oversee and govern AI to really be effective. So the approach we’ve taken is, on the basis of the conviction that we have that AI is a force for good, it is a game changer, and it is truly going to transform everything about how we live, work, play, and bank. We want to basically make sure that we empower the entire organization to leverage and scale AI in a safe, secure, efficient, and compliant manner.

Now, the only way, in my opinion, to do that is to take a platform-first approach. Just like Erik said about the network needing to be safe and secure, our AI platform and our agentic platform need to be safe and secure. So we have taken the approach of building a platform with all the different layers, from data, model, knowledge, and context to the use cases that sit on top of that, by building ethical AI, data governance, model-level governance, and the fair and appropriate use of AI into the platform. And by taking that approach, we are able to unleash the power of the technology in the hands of the end users. So just like when you open up Microsoft and start a new Excel, you’re not thinking about is this safe, what’s the underlying architecture.

You’re doing it fairly intuitively. And we’re going to be able to do the same thing with AI, that our folks, our business colleagues, our engineers can use AI as naturally and seamlessly as they do any other task. So taking that platform-first approach is what is really driving our strategy to ensure that we drive AI at scale, but with all the right trust and safeguards.

Mridu Bhandari

Right. All right. Bringing in Hari as well: you know, we’ve talked a little bit about public permission. We’ve talked about infrastructure. We’ve talked about governance, security. There’s a final leap, which is from promise to proof. Now, enterprises are, of course, often caught between the AI hype and hesitation. You speak a lot about proof over promise. Elaborate on that for us. And what really separates scalable AI from the perpetual pilots that we keep seeing a lot of enterprises deploying?

Hari Shetty

First and foremost, very happy to be here with the panelists here. And putting on the Wipro lens, what do we do? We take Erik’s network, layer the intelligence on top of it, and provide solutions to Divyesh. That’s where we fit into this entire graph in terms of what we do. Now, coming back to proof over promise, you absolutely brought up the most important topic that’s in discussion across the summit here as well. AI is no longer about pilots. It’s about being able to get value out of AI. And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective.

Number one, don’t start with a model. Don’t talk about model X or model Y and start with model-first thinking. Start with problem-first thinking. So you pick a problem, figure out what’s the right approach to solving the problem, and then work your way backwards to look at what models can actually help you solve the problem. That’s the first approach.

The second part that we take care of is that the enterprise story is very different from the consumer story. Enterprises are necessarily messy. You’ve got technology that’s 20 years old, 30 years old. You’ve got different personas, you’ve got different security needs, and data is in fragments across the organization. So the enterprise story is a completely different story than a consumer-grade story in terms of how things have to come together. In that context, our ability to prove a solution in the enterprise world is extremely important for us. And when we show it works in an enterprise, that’s when other enterprises build trust, and that’s when it’s ready for diffusion. And by the way, we act as client zero for our solutions, so if we don’t get it to work in our own enterprise, there’s no point talking to any of the clients about implementing the solution.

The third principle here is that whatever solution we build, it’s not about making it work once. It should work every day, every hour, and every minute, and solutions that are capable of following that principle are the ones that we actually take to the market. That’s another principle that’s extremely important for us.

And last, going back to trust that we all talked about: if you look at human trust, human trust is earned. Even agentic trust is earned. You need something that can work for a long period of time without hallucination, without fundamental flaws in the model, so that there’s trust built into it. Only when things work consistently over a longer period of time do you build trust. And these are the four principles that we use when we talk about proof over promise.

Mridu Bhandari

Right. All right. Well, we’re going to shift gears a little bit and also talk about accountability because we’re talking a lot about architecture. Let’s also talk about who’s accountable for what in an enterprise and perhaps in the society as well. Now, Paul, when we talk about responsible AI at a national level, what does accountability really look like for leaders? Is it about measurement frameworks? Is it about reporting outcomes? Is it about, you know, independent oversight? What are the signals that you need to tell citizens that, you know, this is being deployed in your interest?

Paul Hubbard

Yeah, thanks. I think it’s really about having a clear plan that you can communicate. In our case, that means making it clear throughout the economy, throughout government, throughout society, that we’re going to seize the opportunity of AI. That means better jobs. That means investment in data centers and all the things we’ve been talking about. But the second thing, which is perhaps even more important, is that we’re going to spread the benefit of AI, not just to people in the tech sector, but to every part of the community: people in rural areas, people from marginalized groups, people who maybe haven’t had the full benefit of current technology. So spreading that benefit further. And then finally, making it really clear that we’re also acting at every level, whether it’s business or government, to keep citizens safe in the process.

We’ve had a big conversation here at a model level about AI safety and AI harms, but we’ve also got to have that conversation in the context of our communities: what does it look like to keep citizens safe there? So I think it’s the whole-of-society leadership piece. It’s not just saying, well, the tech people can look after this from a technical perspective.

Mridu Bhandari

Right. And, you know, ecosystems are very interdependent today. You have cloud providers, you have telecom networks, you have enterprises. There are decisions flowing across the distributed stack by the second. So who is really accountable?

Erik Ekudden

Yes. I want to build on what you said, and on the version Hari gave here, and the difference between where we are today and when we are introducing agents at scale. To me, it isn’t so much a question of who, because if you are replacing work with an agent, that basically needs to translate into accountability, and then also into transparency, trust, and governance around those agents. And increasingly, we get agents at different levels. There are super-advanced agents at the top, and as you follow down the stack, we get more fine-grained agents with less knowledge, making decisions that are guardrailed in a different way than the top models. So think of this as a hierarchy of decision-making and, of course, of accountability.

But to me, there’s no question that if you are, and when you are, introducing agentic technology, you need to take responsibility for your part. If your complete service consists of many different agents on the cloud side, on the advanced connectivity side, on the application side, and on the device side, it needs to come together. But responsibility should reside in the domain that you are providing to the market, to the customer, to the employees. Then, of course, it’s never as simple as that, but in the world I come from, in telecom, we’re already providing critical infrastructure. People’s everyday lives depend on it. So we already have guardrails, from a safety and security perspective, that we have to live up to in today’s world of 5G and telecom.

That, to me, should carry over into the agentic world. I know there are, of course, discussions about increasing governance and increasing regulation. I think that’s a dangerous way to go, because if you regulate before you have innovated, you never know what you will get. But if you stay with the basic principle that we do have requirements and guardrails in the world we’re coming from, and you translate them more or less one-to-one into the agentic world, I think we are at a good starting point.

Mridu Bhandari

Right. And, Divyesh, we are talking about this agentic world, human and machine working hand in hand. Now, as these identities shift, how should we be rethinking governance? How should we be rethinking trust? And, of course, governance is never static. It’s going to keep evolving. So what does dynamic oversight really look like, especially in a heavily regulated industry like yours?

Divyesh Vithlani

Look, I really love that question because, at the end of the day, as a CTO in a bank, I am accountable. I am responsible for the platform that we construct and the output that gets generated from that platform, whether it’s from a human or an agent, right? So that’s my accountability. And this is where I have interesting debates and conversations with colleagues from Wipro and other partners of mine who are very eager to sell me solutions. And I say, if the solution is a black box, then I’m going to find it very difficult to integrate it into my environment, because ultimately I have to be able to explain the output that gets generated. So, to your question on dynamic oversight, it again goes back to the platform and the way we’ve architected it.

The platform, without getting too technical, is on two planes: there’s an execution plane and a control plane, right? But again, it’s not that sophisticated. It’s just like when you onboard a new graduate into your organization. You give them a set of guardrails and a set of responsibilities befitting their skill set and experience. You provide the right level of supervision and the right level of oversight. And as they grow and become more proficient, you give them more responsibility. We treat agents in exactly the same way. There’s a lot of conversation about agents being autonomous and hallucinating; well, individuals can do the same thing if they’re left to their own devices, right?

So the way we have built and architected our agentic architecture is that, as Erik said, there are different types of agents. At the lowest level, agents are not just autonomous but atomic. And with the right set of guardrails and agentic operating procedures, they are also deterministic, right? We basically create agents to perform a single task, and we make them as reusable as possible so we can compose and aggregate them into higher-level workflows. And as they learn more, which is the good thing about agents, they learn faster, you give them more responsibility, just as you do with humans. But again, it goes back to that execution plane, where you are monitoring every activity through the control plane. Other features of the platform include how we onboard and offboard agents, just as you do with humans.

And we also have practices in place to manage conflicts between agents and humans, because, just as you have conflicts between two humans, you have conflicts between an agent and a human, right? And you need to be able to detect that in real time. So that’s some of the work we’ve done, and again, it’s early days. I don’t mean we have all the answers, but certainly the space is moving very fast. The key is that we humans always have to be in control, so the way we design the architecture is to ensure that happens.
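The two-plane idea described above can be sketched roughly like this. All names, permissions, and rules here are hypothetical illustrations, not the bank’s actual platform: an execution plane runs atomic agents, while a control plane authorizes every action, logs it, and grants more responsibility as an agent proves itself.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    trust_level: int                       # grows as the agent proves itself, like a new hire
    allowed_actions: set = field(default_factory=set)

@dataclass
class ControlPlane:
    audit_log: list = field(default_factory=list)

    def authorize(self, agent: Agent, action: str) -> bool:
        # Every request is checked against the agent's guardrails and logged.
        ok = action in agent.allowed_actions
        self.audit_log.append((agent.name, action, "allowed" if ok else "blocked"))
        return ok

    def promote(self, agent: Agent, new_action: str) -> None:
        # Grant more responsibility once the agent has become proficient.
        agent.trust_level += 1
        agent.allowed_actions.add(new_action)

def execute(control: ControlPlane, agent: Agent, action: str) -> str:
    # Execution plane: nothing runs without a control-plane decision.
    if not control.authorize(agent, action):
        return f"{action} escalated to a human supervisor"
    return f"{action} done by {agent.name}"

control = ControlPlane()
junior = Agent("kyc-checker", trust_level=1, allowed_actions={"read_document"})

print(execute(control, junior, "read_document"))    # within guardrails
print(execute(control, junior, "approve_payment"))  # blocked, human in the loop
control.promote(junior, "approve_payment")          # earned responsibility
print(execute(control, junior, "approve_payment"))
```

The point mirrored here is that the control plane, not the agent, owns the audit trail and the promotion decision, the same way a supervisor owns a graduate’s performance review.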

Mridu Bhandari

So are agents being put through tough performance appraisals? Are they being fired for hallucinating?

Divyesh Vithlani

A hundred percent, right. And again, it may sound really basic, but I view an agent as no different from a human, so you do performance management. There’s a concept we call agent university, and I love that term because I was chatting earlier with James about this: at university you’re learning how to learn, right? That’s what we want the agents to do as well. And whilst humans may fill out a timesheet to account for the work they’ve done and to measure the output they’ve produced for the cost they’ve consumed, agents may not fill out a timesheet, but we’re monitoring the tokens they’ve consumed against the output they’ve generated, to ensure that we measure their performance in a similar way.
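The token-versus-output measurement described above could look something like this. The agent names, numbers, and the 0.1 threshold are invented for illustration:

```python
def efficiency(tasks_completed: int, tokens_consumed: int) -> float:
    """Tasks delivered per thousand tokens: a rough cost-to-output ratio,
    playing the role a timesheet plays for a human employee."""
    return tasks_completed / (tokens_consumed / 1000)

# Hypothetical agents and usage figures.
agents = {
    "reconciliation-agent": {"tasks": 420, "tokens": 1_200_000},
    "summary-agent":        {"tasks": 90,  "tokens": 2_500_000},
}

for name, stats in agents.items():
    score = efficiency(stats["tasks"], stats["tokens"])
    verdict = "on track" if score >= 0.1 else "needs retraining"
    print(f"{name}: {score:.2f} tasks/kTok -> {verdict}")
```

An agent that burns many tokens for little output would fail its “appraisal” and go back through agent university, mirroring the performance-management analogy in the conversation.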

Mridu Bhandari

Wonderful. Well, Harry, bringing you in as well, how should organizations measure the ROI? That’s a question that enterprises around the world have been debating. What’s the value beyond the profit or beyond the bottom line? Are we looking at trust scores? Are we looking at productivity? Are we looking at decision velocity, risk mitigation? At the core, how are you looking at the ROI?

Hari Shetty

Probably one of the most debated topics, and one I hear a lot about, and I will provide the Wipro context in terms of how we are looking at productivity. Point number one: while everybody talks about use cases and productivity measurement of AI, we think AI is beyond just measuring return on investment or measuring productivity. It’s almost like going back in time: could you ask, should we implement an email system, what’s the ROI on the email system? Could you ask, for example, why should I go on the internet when I already have a brochure in the company? So a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability, a fundamental shift, and a journey that is irreversible in terms of where we are going. It’s not a question of whether to invest because there is ROI or not; it’s a question of having to go down that path. So within Wipro, we look at it as a capability, and we are not really asking for every single use case whether there is ROI on it. Now, having said that, as a business leader, ROI is extremely important.

Mridu Bhandari

Well, your clients must be demanding the ROI for sure.

Hari Shetty

Yes, that’s equally true. So the element we talk about is that the earliest signal of ROI is productivity, right? We always talk about productivity as an early indicator of what can come down the pipe, but productivity is only an early signal. The resulting benefit is always an end outcome. It can be cost, it can be units produced, it can be better quality, it can be cycle-time reduction; it’s many of those things. And our goal has always been to move beyond productivity, because productivity is a number people talk about very frequently in AI, but we are moving beyond it to look at some of the end outcomes we can achieve. Our models are built to help clients understand the end benefit of AI rather than just look at productivity.

Trust scores are becoming equally important, and I will just touch on them for a minute. When we look at trust scores, we are looking at how many instances of failure happened, and whether that is within what the organization, or the process, says is acceptable. So it’s important to measure quality aspects, failure aspects, the hallucination we talked about, all the other aspects of AI where it can go wrong, and then measure against the task goal and see whether it’s appropriate for the process we’re talking about. We’ve had situations where we talked about probabilistic models and deterministic models.

We had customer cases where 100% was the only answer, or 99.99% was the answer. There are situations where 85% is good enough. So again, there’s no one single answer to this. It depends on the kind of process and the kind of problem we’re trying to solve.
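The trust-score idea above amounts to comparing observed failure rates against the accuracy each process declares acceptable. A minimal sketch, with process names and figures invented for illustration:

```python
def trust_check(process: str, successes: int, attempts: int,
                required_accuracy: float) -> bool:
    """Is the observed success rate within what this process says is acceptable?"""
    observed = successes / attempts
    print(f"{process}: {observed:.4%} observed, {required_accuracy:.2%} required")
    return observed >= required_accuracy

# 85% may be good enough for one process...
assert trust_check("document triage", successes=870, attempts=1000,
                   required_accuracy=0.85)

# ...while another demands 99.99%, and 99.98% observed is not enough.
assert not trust_check("payment clearing", successes=9_998, attempts=10_000,
                       required_accuracy=0.9999)
```

The same measurement yields opposite verdicts for the two processes, which is the panel’s point: there is no single acceptable threshold, only a per-process one.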

Mridu Bhandari

Right. And do you think business innovation would perhaps be one of the biggest ROIs and any outstanding cases of business innovation that you’ve seen with AI being scaled successfully yet?

Hari Shetty

Yeah, that’s a fantastic question. Let me give you a quick example, because that would bring this to life. One of the projects we did was for an energy client, for a refinery. Obviously, everything was automated and instrumented; there are a lot of sensors all along the way, and they were asking us what the value of AI was in this context. The work we did for them was essentially analysis of a flame. Interestingly, out of the flame we could extract information about combustion efficiency, the fuel-to-air mixture ratio, and the maintenance state of the equipment, all derived from models we built just by looking at the flame. The kind of information we could secure just by looking at the flame was far superior to using sensor-based technology, because sensors typically tell you something is working or not working based on a threshold. Here we could actually track the health of what’s happening with incremental change, compared to an on-and-off kind of situation with sensors.
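The contrast drawn here, a binary threshold sensor versus a model producing a continuous health score, can be illustrated with a toy sketch. The feature weights and ideal ratio below are invented, not the actual refinery model:

```python
def sensor_reading(temperature: float, threshold: float = 800.0) -> str:
    # Classic instrumentation: a binary alarm once a threshold is crossed.
    return "OK" if temperature >= threshold else "ALARM"

def flame_health(combustion_efficiency: float, fuel_air_ratio: float) -> float:
    # Toy model: the score drifts down as efficiency drops or the mixture
    # moves away from an assumed ideal ratio of 0.95.
    penalty = 2.0 * abs(fuel_air_ratio - 0.95)
    return max(0.0, min(1.0, combustion_efficiency - penalty))

print(sensor_reading(850))                        # sensor says "OK", nothing more
print(round(flame_health(0.92, 0.95), 2))         # healthy flame
print(round(flame_health(0.92, 0.99), 2))         # degrading: act before any alarm
```

The sensor stays “OK” right up to the threshold, while the model-based score degrades incrementally, which is the early-warning advantage described in the example.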

Mridu Bhandari

Fantastic. Erik, you want to add?

Erik Ekudden

Yeah, can I just add one thing? I think it’s so interesting to look at how, in our world, we talk about this intelligent fabric of 5G. Of course, there are gains if you apply AI in terms of efficiency and productivity, and you can improve the customer experience. You can measure that as a 10%, 20%, or 50% saving and call it a great achievement; we’re talking about billions of dollars there. But where our customers get super excited is when they take the complete network, use modeling on top of it, and then start to produce new outcomes. That’s business growth. And, of course, it’s not always that you can find that clear case.

But that’s really where AI and autonomous networks are helping. Saving, yes, TCO is important. But it’s very much about that business growth.

Mridu Bhandari

Any example you can share with us there?

Erik Ekudden

Yeah, Glasses was one example. But in the future, every device, every application, every AI service will need its own specific service quality, latency, all of that. So you can start to sell services that are tailored for mission-critical use, for enterprises. And that’s what leading customers, including here, are doing: they’re using AI for that kind of segmentation and growth of the business. It’s an upside that is unlimited, so, of course, it’s more exciting.

Mridu Bhandari

Absolutely. Well, let’s also look at the long-term competitiveness and value creation we can achieve with AI. Paul, if we were to project 10 years ahead, what do you think would really separate AI-native nations from AI-dependent nations? Is it infrastructure? Is it talent pipelines, compute capacity? What would you add to that list?

Paul Hubbard

I would add capability, competence, and curiosity. A lot of the things you mentioned, data centers and so on, will be built; there will be investment. But the underlying models and the compute will be commoditized, and what will set countries apart is the ability of government institutions to adapt, the ability of the economy to be flexible towards new approaches, and the ability of the workforce to find the new jobs, the new wants and needs that are created, and, as the bottlenecks shift, to be able to move to those.

And I’ve got to say that, coming to India this week, I see not just competence, capability, and curiosity, but a downright enthusiasm for this. So I think maybe India is one to watch.

Mridu Bhandari

Good to know that, and happy to hear that, of course. Well, Erik, AI demands massive compute, massive energy, massive connectivity. How do we reconcile infrastructure-scale AI expansion with sustainability? Even as AI scales globally, how do we ensure that efficiency is imperative in everything that is deployed?

Erik Ekudden

Well, AI is energy-intensive, especially now in the training phase. Some of the numbers out there are mind-boggling, and I’m not even sure we’re going to need the kind of energy that has been predicted. But as I was saying before, we’re moving from big data center training to distributed inference; that’s where the puck is going. That means you need to scale to something like 8 billion inference devices, glasses, tens of billions of sensors using AI, visual sensors. So what we are doing, and what needs to happen, is to have energy-efficient hardware, energy-efficient software, and energy-efficient AI models.

Small models when you can get away with them, and of course big models when you can’t. So we’re not going to explode energy consumption just because we use more AI. In fact, we’re going to find even smarter and better ways to do it, on both the hardware and the software side. Then, just to put things in perspective: all the world’s networks account for around one percent of total power consumption, and by using more digital technology you are able to reduce emissions in other sectors by as much as 15%. So it’s roughly a 10 to 15 times payback on that energy consumption. And again, if you combine that with what I said about being really conscious of energy efficiency as you move further out, I think it’s actually going to be a sustainable way to do a lot of things, not just replacing unnecessary travel and logistics chains with digital means.

Everything is going to be more efficient, so I think we have to be a little bit careful before we say that it’s just exploding and it’s completely outrageous. Because if you just project those big data center training clusters, it looks scary, but that’s not the whole picture.
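The payback figure quoted above is simple arithmetic, and a back-of-the-envelope check makes the claim concrete. The two input shares are taken from the speaker’s own numbers; treat them as rough orders of magnitude, not measured values:

```python
# Networks consume roughly 1% of total power while digitalization they
# enable can cut emissions in other sectors by up to ~15%.
network_share = 0.01       # networks' share of total power consumption
enabled_reduction = 0.15   # reduction enabled elsewhere by digital technology

payback = enabled_reduction / network_share
print(f"payback on network energy: ~{payback:.0f}x")
```

Dividing the enabled reduction by the consumption share gives the “10 to 15 times payback” cited in the discussion.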

Mridu Bhandari

All right. Well, Divyesh, while we are talking of value creation from AI, many organizations are still measuring AI success in cost savings. At your organization, how are you reframing AI value in banking: resilience, fraud protection, customer trust, capital efficiency? What are some of the metrics you are tracking to ensure this is true value creation?

Divyesh Vithlani

I think it’s a question that is constantly exercising our minds. If I start with the productivity question you asked earlier, whilst there isn’t a straightforward answer, I look at it on three levels. First, AI will provide micro-level productivity through co-pilots and technologies like that, which might be difficult to measure, but it’s certainly helping with overall literacy, education, and awareness in the organization. Secondly, at the enterprise level, and this is your point on value creation, we absolutely see the potential of AI to drive significant ROI. You take very complex processes that have been running on earlier technologies, whether it’s RPA, OCR, et cetera, and when you apply AI and agentic technology, you can actually take them to the next level.

These are extremely complex processes that are error-prone, and you’re talking about large sums of money. When we’ve applied AI and agentic technology to them, we’ve seen incredible outcomes, which gives us tangible value creation. And the third aspect: if we take a step back, certainly in banking, what is our biggest source of competitive advantage? It’s not necessarily the technology or the products or any other capability, because the next person can come along and emulate those. It’s really our ability to respond and react to change faster than our competitors. And that’s what AI is going to help us do in terms of creating value, because it allows us to respond to change faster, to run rapid experiments, and to scale and double down where we think we will see a significant ROI.

Mridu Bhandari

Right. Okay, so I have a question for all of you, and perhaps you can take about 30 seconds each. Do you believe enterprises today are overestimating or underestimating AI risk? And how should leaders and boards measure AI trust readiness in practical terms? Hari, maybe you want to start on that one.

Hari Shetty

See, there is certainly a level of risk that one should be aware of and work with, and in every business there is always an element of risk to mitigate, so AI is no different from that perspective. But at the same time, the hype about the risk is also overstated. It’s a manageable risk, not an uncontrolled, unmanageable one, and with the right kind of tool set, which Divyesh talked about, it’s definitely possible to get the best value out of AI without actually exposing oneself to risk.

Mridu Bhandari

Okay, that’s a very diplomatic, balanced answer you’ve given us. Erik, what do you think?

Erik Ekudden

I suspect the risk assessment among enterprises has become quite realistic; they don’t overestimate it, and the risks are manageable. Maybe on the government side there’s still an overestimation of the risk, a tendency to be too cautious, and that, I think, could hold things back in certain public sectors and other areas. That said, the risks are very big if you mistreat this extremely powerful technology. So I’m not saying we’re over the hump, but that’s what I think.

Mridu Bhandari

Paul, you want to take that on, considering, you know, Eric just said that perhaps the public sector overestimates risk. Would you say that for, you know, the government in Australia as well?

Paul Hubbard

I mean, certainly governments have a responsibility to start off with a more cautious approach than private sector folk. I’d say there’s a shift from the uncertainty of something new that isn’t quantifiable to actually understanding the risk, and once you understand the risk, you can manage it. So certainly over the last year or so, the government of Australia has taken a much more active posture towards AI, in a sense embracing the risk a little more than we were in the past. But as we grow the capability, and as we’ve got the foundation of trust and the guardrails that we need, it means you can actually manage that risk, and that’s the key thing.

Mridu Bhandari

All right, Divyesh?

Divyesh Vithlani

Look, with any so-called new technology there is always going to be a level of fear, uncertainty, and doubt. But the paradox for me is that AI is actually not a new technology. In fact, it predates cloud, mobile, and robotics; I was writing AI programs at university. AI was just well ahead of its time. We needed the cloud to be able to process large amounts of data, and we needed the kind of data centers we’re talking about for the compute, et cetera, for this technology to really come to light. And clearly, as we’ve gone through digital, social, cloud, and data, we’ve seen many, many regulations along the way around data protection, how best to use cloud, data sovereignty, data residency, et cetera.

So as long as we are not shedding the controls we’ve already built, and as long as we tighten the guardrails as we deploy AI, and deploy it through a platform-centric approach where the necessary guardrails are built in, I think those risks will be managed and mitigated. And hopefully what we’ll start to see is that the benefits of this combined technology will far outweigh the risks and concerns. The only qualification I would make, and I think it’s been talked about at this conference, is making sure that we take everyone along.

Mridu Bhandari

Absolutely. I mean, it has to be inclusive for all, especially in a country like India where we have divides of many kinds. Well, let’s spend a few minutes looking ahead and doing some crystal ball gazing. Erik, if I can come to you: we are entering autonomous networks, embedded intelligence, physical AI, from robotics to massive systems. What does an AI-native network look like, say, five years from now, because anything more than five is just too much to envision? And how do we get the mobile and cloud infrastructure ready for that future?

Erik Ekudden

Well, I think we have to look perhaps even further out than five years, because we’re building something that should work for society in broad terms. But of course, AI is moving super fast, and when you ask about AI-native, I think any industry, including the one I represent, is going through major change now. AI-native is not just how you build your products, that they need to be data-driven, need to learn, need to be updated all the time. It’s very much about your processes: how you go to market, how you handle lifecycle management and questions, and I think we talked about this in the pre-meeting as well.

There are so many changes in how you build AI-native systems that it is a fundamental rework for, I would say, most product companies, and actually service companies as well. So an AI-native world is one that is much more responsive to the fast changes we talked about, and an AI-native network is a network that is responsive to all of these needs. You already mentioned physical AI, which is just around the corner: humanoids, robots, drones, all things that require much more tailoring and much more flexibility from the network, or the intelligent fabric. So we need to do what I call user experience at scale, or massive user experience.

Everything has to have its own unique requirements met, and I think only AI-native networks that respond in real time to these needs, adapt, and create the best user experience can handle it. So it’s going to be a very different world, very intuitive, judging from what we already see on the wearable side, but it’s going to be a completely new setup.

Mridu Bhandari

Right. And Paul, you know, as we’re looking ahead, of course, public -private partnerships are going to be key to any kind of success that we’re going to see. Tell us a little bit about AI CoLab and your approach towards, you know, bringing together public institutions, academia, industry, to really advance the practical adoption of AI while also keeping it very transparent and ensuring that public good is at the center of it.

Paul Hubbard

Absolutely. So the AI CoLab is a cross-sector initiative where folk from government, folk from the private sector, academics, and not-for-profits can get together in one place, often in person, to understand things. And I think everybody who has come to the AI Impact Summit really understands that we can’t do this alone. Nobody in their silo can solve the problem themselves. We’ve got to get capability from each other. We’ve got to learn from each other. And I think the 300,000 people who have been here this week have certainly proven that to be the case. It’s also key to actually doing safe and responsible AI. It’s not just the technical controls or the networks that we have.

It’s having people in the room who may not care about AI but do care about the services being delivered. They care about their voice being heard. They care about the environment around them as well. So it keeps bringing you back to reframing: what’s the problem we’re trying to solve? What’s the mission we’re trying to achieve? And if we want to talk about impact, that’s the key question.

Mridu Bhandari

Right. All right. Well, let’s also look at the financial angle with Divyesh. We’ve talked about open finance and very effective financial ecosystems. What is it really going to take to scale AI to that level, especially in the near and short term, to enable very responsible deployment and sustainable finance, with, say, farmers, particularly in the Indian context, given the complexities that we see in this country?

Divyesh Vithlani

So I think it’s going to be a force for good. If I look at banking, I don’t think the core of banking is going to change. However, how we bank, how we drive that experience for our customers, is going to be transformationally different in the future. Just one example, to pick up on your question: if you combine the technology of AI with, say, digital assets and stablecoins, you get the ability to move money as fast as email. Why is it that it takes three or four days today to clear a cross-border payment, right? That goes completely against the whole concept of open finance and inclusion. So I think AI, together with some of these other technologies, is going to be a game changer in enabling things like that, and in driving an experience that is much more natural and intuitive than it is today.

Personally, as a CTO, there are a lot of questions about jobs going away, et cetera. In any organization, certainly the banks I’ve worked in, the CapEx demand on an annual basis typically outstrips supply at a ratio of five to one. But AI can help us change those legacy systems and modernize our platforms, because, let’s be honest, 90% of banks still operate with legacy technologies; there are very few greenfield banks. All of those technologies need to be modernized and upgraded, and I think AI, again, is going to be a force for good there. And once we modernize those systems, they will lend themselves to connecting more seamlessly, without getting into the technical details, through microservices, APIs, MCPs, et cetera.

So I think AI, together with some of these other technologies, digital assets and the like, will drive a very different paradigm in terms of how we bank.

Mridu Bhandari

Lovely. Very exciting times ahead. Well, Hari, if you were to give a CEO a three -step plan today to really scale responsibly, what would that be? Three things.

Hari Shetty

Okay. Number one, be very clear about what you want to achieve with AI. Have the vision right; have clear objectives in terms of what you want to achieve. That’s the first part. The second part I would call out is: don’t think about task automation. Think about what AI does to your business. It’s fundamentally an operating-model shift that can actually deliver value. So think big. Think about the operating-model shift, which will require structural changes, changes in ways of working, skill changes; it’s a complete transformation, not just automation. And the third thing: please call Wipro.

Mridu Bhandari

All right. Let’s now imagine that we are at the India AI Impact Summit 2030, just about four years ahead. What has changed in the way we live, work, and play that hadn’t happened the last time you were here, which is today? Paul, do you want to start? And you can let your imagination go.

Paul Hubbard

Yeah, okay. Look, as an economist, it’s very hard to predict the future. I think what has changed is that there’s a whole bunch of people turning up with job titles we’ve never even heard of before, and they’re telling us about things that people in a bureaucracy or the government only dream about. So I think we’ll see a lot more diversity in what people do.

Mridu Bhandari

Right, lots of new jobs. And yes, most industry reports suggest that many of the new jobs of the next decade have not been invented yet. So, absolutely.

Divyesh Vithlani

Well, in four years’ time we may not be here in person. It will be our agents or avatars being teleported in, because Ericsson’s amazing network has the kind of bandwidth, and the latency has improved vastly, and obviously with Wipro’s technology around creating these avatars and agents. But no, to be serious, I think what will have changed, at least from my perspective, is that banking will be a lot more seamless. It will really be about putting customers first rather than imposing the friction we see today in how financial services work. For instance, we will be shopping much more intuitively. We won’t even know that we need a new fridge or a new car.

It will kind of just occur to us naturally, and something will appear on your doorstep that you didn’t even know you needed, but once it arrives, you think, wow, that’s exactly what I needed. The payment’s taken care of. All the servicing is taken care of. So I think that is a near-term reality.

Mridu Bhandari

All right. Erik, Hari, go ahead.

Hari Shetty

A couple of things. One is, I’ll definitely break my glasses and use Erik’s glasses. More importantly, what I think will fundamentally change is the decision velocity. Most importantly, I think the decision velocity in organizations will completely change in the next four years. One of the key things we always talk about in any enterprise is that our organization is so slow: the processes take a lot of time, things don’t happen at the pace we all want, and the experience one gets out of a slow process is not necessarily a great experience. The fundamental problem that AI will solve, and I’m pretty sure it will solve it in the next couple of years, is that the velocity of everything will increase so tremendously that we’ll look back and say, how did we ever tolerate something as slow as what we have today?

Erik Ekudden

Yeah, I wonder if it’s doable in four years on a global scale. But I hope that what we see four years from now is dissemination, diffusion, everyone being included in this fantastic journey that AI really is about. But I think it hinges on this dialogue that we are having here, and it is conditional on the fact that we solve the trust issues. Because these things, security, privacy, we talk about them as things we can solve technically and so forth, but that needs to have a fundamental anchoring in how humans behave, so that you can really trust these agents, as was mentioned before, and so that we put the right constraints on.

If that happens, of course, four years from now it’s going to be so seamless, where we have our digital colleagues, our AI colleagues, our physical AI colleagues, and so forth, that it’s going to be a completely different way of looking at work and, of course, of how you get help and outsource. I mean, you’re going to be an agent of something which is much, much bigger than what you’re commanding today. I think it’s an enormous shift.

Mridu Bhandari

Absolutely. Well, fascinating times ahead. Thank you, gentlemen, for your incredible insights; that was very educational and informative for all of us. The takeaway for me from this conversation is clear: if people, planet, and progress remain our guiding sutras, and if we can align all the seven pillars of global cooperation, AI is not just going to optimize businesses. It is going to redefine competitiveness, it is going to rebuild public trust, and, of course, hopefully it will future-proof all our institutions for the decades ahead. Thank you very much. I appreciate you all taking the time here, and thank you all for being a wonderful audience. Thank you.



Paul Hubbard

Speech speed

171 words per minute

Speech length

1045 words

Speech time

365 seconds

Trust as foundation for innovation

Explanation

Paul stresses that trust is the essential base that enables AI innovation. Without trust, the willingness to adopt new AI solutions is limited.


Evidence

“It’s actually a foundation of trust that lets you make the innovation” [1].


Major discussion point

Trust and People‑First Approach to AI


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Democratic, people‑first design for AI adoption

Explanation

He argues that AI should be framed through a democratic, participatory lens and that a people‑first approach is key to widespread adoption.


Evidence

“So I think really that framing, that democratic participatory” [17]. “approach, that people-first approach is key” [22].


Major discussion point

Trust and People‑First Approach to AI


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Broad societal benefit beyond profit and job creation

Explanation

Paul highlights that AI should generate public value and welfare, extending benefits to marginalized and rural communities, not just profit.


Evidence

“It’s all about what can generate public value, what generates public welfare” [158]. “But the second thing is really even perhaps more important is we’re going to spread the benefit of AI, not just to people in the tech center, but to every aspect of community, people in rural areas, …” [159].


Major discussion point

Measuring ROI and Value Creation


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


Government cautious stance to manage AI risk

Explanation

He notes that governments have a responsibility to begin with a more cautious approach than the private sector when regulating AI.


Evidence

“certainly governments have a responsibility to start off probably with a more cautious approach than private sector folk” [110].


Major discussion point

Risk Perception and Management


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


AI‑native nations need capability, competence, curiosity

Explanation

Paul adds that for a nation to be AI‑native it must develop capability, competence and curiosity among its people and institutions.


Evidence

“I would add capability, competence, and curiosity” [137].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Capacity development | Artificial intelligence


AI CoLab as cross‑sector collaboration for responsible AI

Explanation

He describes the AI CoLab as a platform where government, industry, academia and NGOs collaborate to advance responsible AI.


Evidence

“AI CoLab is a cross-sector initiative where folk from the government, folk from the private sector, academics, not-for-profits, can get together in one place …” [188].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

The enabling environment for digital development | Artificial intelligence



Erik Ekudden

Speech speed

184 words per minute

Speech length

2305 words

Speech time

750 seconds

Networks as active trust‑enablers

Explanation

Erik points out that networks must be trusted and secure, acting as the foundation for AI‑enabled services.


Evidence

“Networks need to be trusted” [9]. “The network is already secure, trusted” [31].


Major discussion point

Trust and People‑First Approach to AI


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Infrastructure evolving from passive carrier to active AI enabler

Explanation

He asks how infrastructure can shift from merely carrying data to actively enabling trustworthy AI and resilience.


Evidence

“Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?” [32]. “We need to scale the networks to handle that” [35].


Major discussion point

Infrastructure and the Intelligent Fabric


Topics

Information and communication technologies for development | Artificial intelligence


AI glasses illustrate need for low‑latency edge inference

Explanation

Erik uses AI‑glasses as an example that requires massive edge inference capacity and ultra‑low latency.


Evidence

“You need to offload the AI, the inference from the glasses” [56]. “That means that you need to scale it to like 8 billion inference for glasses” [60].


Major discussion point

Infrastructure and the Intelligent Fabric


Topics

Artificial intelligence | Information and communication technologies for development


Energy‑efficient hardware, software and models for sustainable AI

Explanation

He stresses the need for energy‑efficient components to avoid exploding AI energy consumption.


Evidence

“So what we are doing and what needs to happen is to really have energy-efficient hardware, energy-efficient software, energy-efficient AI models” [70]. “So we’re not going to explode energy consumption just because we use more AI” [71].


Major discussion point

Infrastructure and the Intelligent Fabric


Topics

Environmental impacts | Artificial intelligence


Telecom guardrails translated to AI agent ecosystems

Explanation

Erik notes that existing telecom security and trust guardrails can be leveraged as a baseline for AI agents.


Evidence

“The network is already secure, trusted” [31]. “So it’s already secure and trusted” [33].


Major discussion point

Infrastructure and the Intelligent Fabric


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Overestimation of risk in public sector can hinder progress

Explanation

He observes that governments may over‑estimate AI risk, which could slow down adoption compared with the private sector.


Evidence

“i suspect that it’s become quite realistic the risk assessment among enterprises not to overestimate it they’re manageable i think maybe on the government side there’s still an overestimation on the risk side” [165].


Major discussion point

Risk Perception and Management


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


AI‑native networks must be real‑time, user‑centric, and adaptive

Explanation

He argues that only AI‑native networks that respond in real time and adapt to user needs can meet future demands.


Evidence

“only AI-native networks that are responsive in real time to these needs and adapt and create the best user experience can handle it” [49].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Artificial intelligence | Information and communication technologies for development


Business growth through AI‑enabled network services

Explanation

Erik links AI deployment to overall business growth and expansion opportunities.


Evidence

“It’s kind of a business growth” [151]. “But it’s very much about that business growth” [152].


Major discussion point

Measuring ROI and Value Creation


Topics

The digital economy | Artificial intelligence


Emergence of new job categories and avatars within four years

Explanation

He envisions that in four years AI avatars and digital colleagues will become commonplace, reshaping work.


Evidence

“In the future. … four years from now, it’s going to be so seamless where we have our digital colleagues or AI colleagues” [192]. “yeah I I wonder if it’s doable in four years on a global scale” [193].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Future of work | Artificial intelligence



Divyesh Vithlani

Speech speed

145 words per minute

Speech length

2320 words

Speech time

958 seconds

Platform‑first governance to embed trust

Explanation

Divyesh advocates a platform‑first strategy to scale AI safely, embedding trust and safeguards at the core.


Evidence

“So taking that platform-first approach is what really is driving our sort of strategy to ensure that we drive AI at scale but with all the right trust and safeguards” [39]. “Now, the only way, in my opinion, to do that is to take a platform-first approach” [40].


Major discussion point

Trust and People‑First Approach to AI


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Layered, platform‑first architecture with data, model, knowledge, context

Explanation

He describes a multi‑layered platform that integrates data, models, knowledge and context to support AI use cases.


Evidence

“So we have taken the approach of building a platform with all the different layers from data, model, knowledge, context, and the use cases that sit on top of that” [41].


Major discussion point

Governance, Accountability and Platform Approach


Topics

Artificial intelligence | Data governance


Dynamic oversight via execution and control planes

Explanation

Divyesh explains that the platform includes separate execution and control planes for monitoring and governance of AI activities.


Evidence

“There’s an execution plane and a control plane, right?” [89]. “But again, going back to that execution plane where you are monitoring every activity that is being done through a control plane” [90].


Major discussion point

Governance, Accountability and Platform Approach


Topics

Artificial intelligence | Monitoring and measurement


Agent performance appraisal and “agent university” concept

Explanation

He introduces an “agent university” to continuously train and evaluate AI agents, akin to human performance management.


Evidence

“there’s a concept that we call agent university” [93]. “Whilst agents may not fill out a timesheet, we’re also monitoring the agent for the worth, the tokens that they’ve consumed for the output they’ve generated” [94].


Major discussion point

Governance, Accountability and Platform Approach


Topics

Artificial intelligence | Capacity development


AI‑driven ROI in banking: fraud protection, speed, legacy modernisation

Explanation

Divyesh outlines how AI can modernise legacy banking systems, improve fraud protection and accelerate decision‑making.


Evidence

“AI can help us change those legacy systems, modernize our platforms” [140]. “AI is going to help us do in terms of creating value because it allows us to respond to change faster, do rapid experiments, and to scale” [142].


Major discussion point

Measuring ROI and Value Creation


Topics

The digital economy | Artificial intelligence


Platform guardrails mitigate AI‑specific risks

Explanation

He stresses that deploying AI through a platform with built‑in guardrails ensures risks are managed effectively.


Evidence

“as we deploy AI through a platform -centric approach where you’ve built the necessary guardrails, I think that those risks will be met” [80]. “There are all of sorts of safeguards, guardrails, controls that have been built in” [174].


Major discussion point

Risk Perception and Management


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Emergence of new job categories and avatars within four years

Explanation

Divyesh predicts that in four years AI avatars will replace many in‑person interactions, creating new job families.


Evidence

“in four years time we may not be here in person it will be our agents or avatars that are being kind of you know teleported in because the technology …” [191].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Future of work | Artificial intelligence



Hari Shetty

Speech speed

199 words per minute

Speech length

1619 words

Speech time

487 seconds

Trust earned through consistent, hallucination‑free performance

Explanation

Hari explains that human and agentic trust is built when AI works reliably over time without hallucinations or fundamental flaws.


Evidence

“human trust is earned … you need something that can work for a long period in time without hallucination without fundamental flaws in the model so that there’s trust built into it” [15]. “solutions that are only capable of you know following that principle are the ones that we actually take it to market” [15].


Major discussion point

Trust and People‑First Approach to AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Four principles for “proof over promise” in enterprise AI

Explanation

He outlines a framework of four principles—problem‑first, enterprise‑centric, continuous reliability, and proof‑over‑promise—to validate AI solutions.


Evidence

“…the enterprise story is a completely different story… the third principle … it should work every day every hour and every minute… and last … human trust is earned … without hallucination…” [15]. “And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective” [115].


Major discussion point

Governance, Accountability and Platform Approach


Topics

Artificial intelligence | Monitoring and measurement


Productivity as early signal; focus on end outcomes

Explanation

Hari notes that productivity is an early indicator of AI ROI, but the ultimate focus should be on measurable end outcomes.


Evidence

“productivity is only an early signal” [122]. “the earliest signal of ROI is productivity” [123]. “We are moving beyond productivity to look at some of those end outcomes that we can achieve” [125].


Major discussion point

Measuring ROI and Value Creation


Topics

Monitoring and measurement | Artificial intelligence


“Plus‑scores” to track failures, hallucinations, and quality

Explanation

He introduces “plus‑scores” as a metric to capture AI failures, hallucinations and overall quality for continuous improvement.


Evidence

“When we look at plus scores, we are looking at, you know, how many instances, how many instances of failure did happen?” [132]. “I will just touch upon plus scores for a minute” [134]. “It’s important to measure quality aspects, failure aspects, hallucination that we talked about” [135].


Major discussion point

Measuring ROI and Value Creation


Topics

Monitoring and measurement | Artificial intelligence


AI risk is manageable with appropriate toolsets and guardrails

Explanation

Hari argues that AI risk is not unmanageable; with the right tools and guardrails, organizations can safely extract value.


Evidence

“there is certainly a level of risk … it’s a manageable risk and with the right kind of tool set … it’s definitely possible to get the best value out of ai without actually exposing oneself to risk” [166].


Major discussion point

Risk Perception and Management


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Proof‑over‑promise framework drives scalable enterprise AI

Explanation

He reiterates that the proof‑over‑promise approach is essential for scaling AI responsibly across enterprises.


Evidence

“Now, coming back to proof over promise, you absolutely brought the most important topic that’s in discussion across the summit here as well” [187].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Artificial intelligence | Monitoring and measurement



Mridu Bhandari

Speech speed

133 words per minute

Speech length

1768 words

Speech time

795 seconds

Public trust is essential for AI adoption

Explanation

Mridu emphasizes that building public confidence is a prerequisite for scaling AI responsibly.


Evidence

“Public trust is very important” [5]. “How can we achieve trust before skill?” [4]. “How should we be rethinking trust?” [6].


Major discussion point

Trust and People‑First Approach to AI


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Infrastructure must evolve from passive carrier to active enabler

Explanation

She asks how infrastructure can transition from a passive data pipe to an active AI‑enabling component.


Evidence

“Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?” [32].


Major discussion point

Infrastructure and the Intelligent Fabric


Topics

Information and communication technologies for development | Artificial intelligence


Measuring ROI: focus on outcomes over optics

Explanation

She stresses that outcomes, not just metrics or optics, should drive AI ROI assessments.


Evidence

“Outcomes over optics and responsibility as a competitive advantage” [112].


Major discussion point

Measuring ROI and Value Creation


Topics

Monitoring and measurement | Artificial intelligence


Risk perception: governments may over‑estimate AI risk

Explanation

She references Paul’s view that excessive caution in the public sector could impede AI progress.


Evidence

“Paul, you want to take that on, considering, you know, Eric just said that perhaps the public sector overestimates risk” [168].


Major discussion point

Risk Perception and Management


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Future outlook: AI‑native nations need capability, competence, curiosity

Explanation

She asks Paul to elaborate on the traits required for AI‑native nations, linking back to capability, competence and curiosity.


Evidence

“Paul, when we talk about AI native nations…” [178]. “I would add capability, competence, and curiosity” [137].


Major discussion point

Future Outlook and AI‑Native Nations


Topics

Capacity development | Artificial intelligence


Agreements

Agreement points

Trust is foundational to AI innovation rather than opposing it

Speakers

– Paul Hubbard
– Divyesh Vithlani

Arguments

Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are


Public trust in AI must be built through conviction at the organizational level and platform-first approaches with built-in safeguards


Summary

Both speakers reject the false dichotomy between trust and innovation, arguing that trust actually enables and accelerates AI innovation when properly implemented through people-first approaches and platform-based safeguards


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI risks are manageable with proper frameworks and controls

Speakers

– Paul Hubbard
– Erik Ekudden
– Divyesh Vithlani
– Hari Shetty

Arguments

Risk management shifts from uncertainty about new technology to understanding and managing quantifiable risks through proper guardrails


Enterprise risk assessment has become realistic, though government sectors may still overestimate risks and be overly cautious


AI risks can be managed by maintaining existing data protection and cloud security controls while tightening guardrails through platform-centric approaches


AI risks are manageable rather than uncontrollable, requiring appropriate tools and frameworks but not being overstated


Summary

All speakers agree that AI risks, while real, are manageable through proper frameworks, existing security controls, and appropriate guardrails rather than being insurmountable obstacles


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Platform-first approaches are essential for scaling AI responsibly

Speakers

– Divyesh Vithlani
– Erik Ekudden

Arguments

AI platform architecture should include execution and control planes with guardrails that allow agents to learn and take on more responsibility over time


Networks must evolve from passive carriers to active enablers, becoming an intelligent fabric that hosts AI workloads at the edge


Summary

Both speakers emphasize the importance of building robust platform infrastructure – whether for banking AI systems or network intelligence – that provides the foundation for safe, scalable AI deployment


Topics

Artificial intelligence | Information and communication technologies for development


AI governance requires clear accountability structures with domain-specific responsibility

Speakers

– Erik Ekudden
– Divyesh Vithlani

Arguments

Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world


AI governance requires treating agents like employees with performance management, onboarding/offboarding processes, and conflict resolution mechanisms


Summary

Both speakers advocate for clear accountability frameworks where responsibility is assigned at appropriate levels, whether through domain-specific responsibility or structured agent management similar to human resources


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Problem-first thinking should drive AI implementation over technology-first approaches

Speakers

– Paul Hubbard
– Hari Shetty

Arguments

Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are


Successful AI scaling requires starting with problem-first thinking rather than model-first approaches, understanding enterprise complexity, and ensuring consistent daily operation


Summary

Both speakers emphasize starting with the problems to be solved and the needs of end users rather than beginning with AI models or technology capabilities


Topics

Artificial intelligence | Social and economic development


Similar viewpoints

Both speakers view AI as a transformational capability that goes beyond simple productivity gains to fundamental business transformation, though they approach measurement differently

Speakers

– Divyesh Vithlani
– Hari Shetty

Arguments

AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change


AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases


Topics

Artificial intelligence | The digital economy


Both speakers see AI as fundamentally transforming organizational speed and responsiveness, requiring complete rethinking of how businesses operate

Speakers

– Erik Ekudden
– Hari Shetty

Arguments

Future AI-native systems require fundamental rework of products, processes, and go-to-market strategies to be responsive to fast changes


The fundamental change AI will bring is dramatically increased decision velocity in organizations, making current processes seem intolerably slow


Topics

Artificial intelligence | The digital economy | Social and economic development


Both speakers emphasize the importance of ensuring AI benefits reach all segments of society, whether through government policy or financial inclusion

Speakers

– Paul Hubbard
– Divyesh Vithlani

Arguments

Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe


AI will transform banking by making financial services more seamless and intuitive while enabling faster system modernization


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Unexpected consensus

AI as fundamental infrastructure rather than optional technology

Speakers

– Erik Ekudden
– Hari Shetty
– Divyesh Vithlani

Arguments

AI applications require distributed inference capabilities across networks to support emerging technologies like AI glasses and industrial applications


AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases


AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change


Explanation

Despite coming from different sectors (telecom, consulting, banking), all three speakers converged on viewing AI as essential infrastructure rather than optional technology, suggesting a maturation in thinking about AI’s role


Topics

Artificial intelligence | Information and communication technologies for development


Human-AI collaboration models based on existing organizational structures

Speakers

– Divyesh Vithlani
– Erik Ekudden

Arguments

AI governance requires treating agents like employees with performance management, onboarding/offboarding processes, and conflict resolution mechanisms


Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world


Explanation

Both speakers independently arrived at the idea that AI systems should be managed using familiar organizational and infrastructure management principles, suggesting practical governance approaches are emerging


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Trust as a competitive advantage rather than compliance burden

Speakers

– Paul Hubbard
– Divyesh Vithlani
– Hari Shetty

Arguments

Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are


Public trust in AI must be built through conviction at the organizational level and platform-first approaches with built-in safeguards


Trust in AI systems, like human trust, must be earned through consistent performance over time without hallucinations or fundamental flaws


Explanation

All speakers reframed trust from a regulatory compliance issue to a strategic business advantage, which is unexpected given typical discussions that position trust and innovation as competing priorities


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | The digital economy


Overall assessment

Summary

The speakers demonstrated remarkable consensus across multiple dimensions: viewing AI as foundational infrastructure, emphasizing trust as enabling rather than hindering innovation, advocating for problem-first approaches, and agreeing that risks are manageable through proper frameworks. They converged on practical governance approaches using familiar organizational structures and consistently emphasized the importance of inclusive benefits distribution.


Consensus level

High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI governance discussion has matured beyond basic debates about whether to adopt AI toward more sophisticated questions about how to implement it responsibly and effectively. The cross-sector agreement (government, telecom, banking, consulting) indicates these principles may be broadly applicable across industries and contexts.


Differences

Different viewpoints

Government vs. Enterprise Risk Assessment Approaches

Speakers

– Erik Ekudden
– Paul Hubbard

Arguments

Enterprise risk assessment has become realistic, though government sectors may still overestimate risks and be overly cautious


Risk management shifts from uncertainty about new technology to understanding and managing quantifiable risks through proper guardrails


Summary

Erik suggests governments are being overly cautious and overestimating AI risks, which could hold back progress in public sectors, while Paul defends the government approach as necessarily cautious initially but evolving toward active risk management as understanding grows


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Regulation Timing and Approach

Speakers

– Erik Ekudden
– Paul Hubbard

Arguments

Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world


Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe


Summary

Erik warns against regulating before innovating and advocates for translating existing telecom guardrails to AI, while Paul emphasizes the need for comprehensive government planning and whole-of-society leadership in AI governance.


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs


Unexpected differences

Fundamental Nature of AI Technology

Speakers

– Divyesh Vithlani
– Hari Shetty

Arguments

AI risks can be managed by maintaining existing data protection and cloud security controls while tightening guardrails through platform-centric approaches


AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases


Explanation

Divyesh argues that AI is not actually new technology but predates cloud and mobile technologies, suggesting continuity with existing approaches, while Hari treats AI as a fundamental shift comparable to email or internet adoption. This creates a tension between evolutionary and revolutionary framings of AI’s impact.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The discussion reveals relatively little direct disagreement, with most tensions arising around implementation approaches rather than fundamental principles. Key areas of difference include government versus private-sector risk tolerance, the timing of regulation, and whether AI represents evolutionary or revolutionary change.


Disagreement level

Low to moderate disagreement level. The speakers largely align on core principles of trust, responsibility, and the transformative potential of AI, but differ on tactical approaches to governance, risk management, and implementation strategies. These disagreements reflect healthy debate about best practices rather than fundamental philosophical divisions, suggesting good potential for collaborative solutions.


Partial agreements

Both agree that AI represents a fundamental transformation beyond simple ROI calculations, but Hari advocates viewing AI as a capability like email or the internet (not requiring ROI justification), while Divyesh provides a structured three-tier framework for measuring and demonstrating AI value to stakeholders.

Speakers

– Hari Shetty
– Divyesh Vithlani

Arguments

AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases


AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change


Topics

Artificial intelligence | The digital economy


Both agree on the need for distributed, intelligent infrastructure, but Erik focuses on network-level intelligence and edge computing capabilities, while Divyesh emphasizes platform-level governance and control mechanisms for managing AI agents

Speakers

– Erik Ekudden
– Divyesh Vithlani

Arguments

AI applications require distributed inference capabilities across networks to support emerging technologies like AI glasses and industrial applications


AI platform architecture should include execution and control planes with guardrails that allow agents to learn and take on more responsibility over time


Topics

Artificial intelligence | Information and communication technologies for development | Building confidence and security in the use of ICTs


Similar viewpoints

Both speakers view AI as a transformational capability that goes beyond simple productivity gains to fundamental business transformation, though they approach measurement differently

Speakers

– Divyesh Vithlani
– Hari Shetty

Arguments

AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change


AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases


Topics

Artificial intelligence | The digital economy


Both speakers see AI as fundamentally transforming organizational speed and responsiveness, requiring complete rethinking of how businesses operate

Speakers

– Erik Ekudden
– Hari Shetty

Arguments

Future AI-native systems require fundamental rework of products, processes, and go-to-market strategies to be responsive to fast changes


The fundamental change AI will bring is dramatically increased decision velocity in organizations, making current processes seem intolerably slow


Topics

Artificial intelligence | The digital economy | Social and economic development


Both speakers emphasize the importance of ensuring AI benefits reach all segments of society, whether through government policy or financial inclusion

Speakers

– Paul Hubbard
– Divyesh Vithlani

Arguments

Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe


AI will transform banking by making financial services more seamless and intuitive while enabling faster system modernization


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Takeaways

Key takeaways

Trust is foundational to AI innovation rather than a barrier – it enables rather than hinders progress and must be built through people-first approaches


AI infrastructure must evolve from passive carriers to intelligent fabrics that actively enable AI workloads at the edge through networks like 5G/6G


AI governance should treat agents like employees with performance management, guardrails, and accountability structures built into platform architectures


AI value should be measured beyond ROI – it represents a fundamental capability shift requiring assessment of productivity, business outcomes, and competitive advantage


Successful AI scaling requires problem-first thinking, understanding enterprise complexity, and ensuring consistent operation rather than perpetual pilots


AI risks are manageable through proper frameworks and existing security controls, though government sectors may be overly cautious while enterprises have realistic assessments


Future AI-native organizations will require fundamental rework of products, processes, and business models to be responsive to rapid changes


Cross-sector collaboration between government, private sector, and academia is essential for responsible AI development and public good


AI will dramatically increase decision velocity in organizations, making current slow processes seem intolerable in the future


Resolutions and action items

Organizations should adopt platform-first approaches with built-in ethical AI and governance controls


Governments should focus on clear communication plans that demonstrate AI benefits while keeping citizens safe


Enterprises should move beyond measuring individual use case ROI to viewing AI as fundamental capability investment


Leaders should implement three-step scaling plans: clear AI vision, focus on operating model transformation rather than task automation, and structural organizational changes


Networks must be designed with energy-efficient hardware and software to support distributed AI inference at scale


AI governance frameworks should include agent onboarding/offboarding, performance management, and conflict resolution mechanisms


Unresolved issues

How to achieve global-scale AI diffusion and inclusion within a 4-year timeframe remains uncertain


Balancing AI innovation speed with appropriate regulatory oversight without stifling development


Reconciling massive infrastructure and energy demands of AI with sustainability goals


Determining optimal risk tolerance levels across different industries and use cases


Addressing the digital divide to ensure AI benefits reach marginalized communities and rural areas


Managing the transition period as job roles transform and new positions emerge that don’t yet exist


Establishing international standards for AI accountability across distributed, multi-vendor technology stacks


Suggested compromises

Accept that AI risks are manageable rather than seeking zero-risk approaches that could stifle innovation


Use existing regulatory frameworks from related technologies (data protection, cloud security) as starting points rather than creating entirely new governance structures


Allow different risk tolerance levels (85% vs 99.99% accuracy) based on specific use cases and industry requirements


Balance centralized AI training with distributed inference to optimize both performance and energy efficiency


Implement graduated autonomy for AI agents similar to human employee development – starting with limited responsibility and increasing over time


Focus regulation on outcomes and accountability rather than prescriptive technical requirements that could limit innovation


Combine public and private sector expertise through collaborative initiatives rather than siloed development approaches


Thought provoking comments

It’s not either or. It’s not about you have trust or you have productive AI… there is no compromise on risks and controls. Our business in banking relies 100% on trust. So that is not a value that we can compromise on any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction.

Speaker

Divyesh Vithlani


Reason

This comment reframes the entire trust vs. innovation debate by rejecting the false dichotomy. It establishes that trust isn’t a barrier to AI adoption but rather a foundational requirement, especially in regulated industries. The emphasis on ‘conviction’ as a starting point is particularly insightful as it suggests organizational commitment precedes technical implementation.


Impact

This comment shifted the discussion from viewing trust and innovation as competing priorities to understanding them as complementary necessities. It influenced subsequent speakers to discuss governance frameworks and platform approaches rather than trade-offs, fundamentally changing how the panel approached AI implementation strategies.


AI is no longer about pilots. It’s about being able to get value out of AI… don’t start with a model, don’t talk about model X or model Y and then start with model-first thinking; start with problem-first thinking.

Speaker

Hari Shetty


Reason

This insight challenges the prevalent technology-first approach in AI adoption. By advocating for problem-first thinking, it addresses a critical issue where organizations get caught up in AI capabilities rather than focusing on actual business problems that need solving. This represents a maturation in AI thinking from experimentation to practical application.


Impact

This comment redirected the conversation toward practical value creation and moved the discussion away from theoretical AI capabilities to concrete business outcomes. It prompted other panelists to share specific examples of successful AI implementations and influenced the later discussion about ROI measurement and business innovation.


I view an agent no different to a human so you do performance management… there’s a concept that we call agent university… whilst humans may not fill out a timesheet to account for the work that they’ve done… we’re also monitoring the agent for the tokens that they’ve consumed for the output that they’ve generated.

Speaker

Divyesh Vithlani


Reason

This anthropomorphic approach to AI governance is remarkably innovative, treating AI agents with human-like management frameworks including performance reviews, training, and accountability measures. The ‘agent university’ concept and the parallel between human timesheets and token-consumption monitoring represent a novel governance model that makes AI management more relatable and systematic.


Impact

This comment introduced a completely new framework for thinking about AI governance that resonated throughout the remainder of the discussion. It influenced questions about agent accountability, performance measurement, and even prompted a humorous exchange about ‘firing agents for hallucinating,’ while establishing a practical model for AI management that other organizations could adopt.


We’re moving from that big data center training to the distributed inference… That means that you need to scale it to like 8 billion inference for glasses. Tens of billions of sensors using AI… So we’re not going to explode energy consumption just because we use more AI.

Speaker

Erik Ekudden


Reason

This technical insight challenges the prevailing narrative about AI’s unsustainable energy consumption by distinguishing between energy-intensive training phases and more efficient distributed inference. It provides a nuanced view of AI’s environmental impact and suggests a more sustainable path forward through architectural changes.


Impact

This comment reframed the sustainability discussion from doom-and-gloom predictions to a more optimistic and technically grounded perspective. It influenced the conversation to focus on practical solutions for sustainable AI deployment and helped balance concerns about AI’s environmental impact with realistic projections about future efficiency improvements.


AI is beyond just measuring return on investments… it’s almost like going back in time: could you ask, should we implement an email system, what’s the ROI on the email system… so a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability and a fundamental shift and a journey which is irreversible.

Speaker

Hari Shetty


Reason

This analogy brilliantly contextualizes AI adoption by comparing it to foundational technologies like email and the internet. It challenges the conventional business approach of demanding immediate ROI for transformational technologies and suggests that AI should be viewed as an inevitable capability rather than an optional investment.


Impact

This comment fundamentally shifted how the panel discussed AI value measurement, moving from traditional ROI metrics to broader capability building. It influenced the subsequent discussion about long-term competitiveness and helped establish AI as a strategic imperative rather than a tactical tool, affecting how other panelists framed their responses about organizational AI adoption.


If you regulate before you have innovated, you never know what you will get. But I think if you stay with these basic principles that we do have requirements and we have guardrails in the world we’re coming from, and you translate that more or less one-to-one into the agentic world, I think we are on a good starting point.

Speaker

Erik Ekudden


Reason

This comment provides a nuanced approach to AI regulation that balances innovation with safety. Rather than calling for new regulatory frameworks, it suggests adapting existing proven governance models from established industries like telecommunications. This perspective offers a practical path forward that avoids regulatory paralysis while maintaining necessary safeguards.


Impact

This insight influenced the discussion about accountability and governance by providing a middle ground between over-regulation and under-regulation. It helped shape the conversation toward practical governance approaches and influenced other panelists to discuss how existing regulatory frameworks in their industries could be adapted for AI, rather than starting from scratch.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional wisdom and introducing innovative frameworks for AI adoption. The conversation evolved from abstract concepts about AI trust and innovation to concrete, actionable approaches for implementation. Vithlani’s reframing of trust as foundational rather than optional, combined with Shetty’s problem-first approach, established a mature perspective on AI adoption that moved beyond pilot projects to scalable solutions. The anthropomorphic governance model and the sustainability reframing provided practical solutions to common AI concerns, while the capability-versus-ROI perspective and regulation-innovation balance offered strategic guidance for long-term AI adoption. Together, these insights transformed what could have been a typical AI hype discussion into a substantive conversation about practical, responsible AI implementation at scale.


Follow-up questions

How do we measure the ROI of AI beyond traditional productivity metrics?

Speaker

Mridu Bhandari


Explanation

This question was raised multiple times throughout the discussion as enterprises struggle to quantify AI value beyond cost savings, with participants suggesting various approaches but no definitive framework emerging


What governance models work best for agentic AI systems in regulated industries?

Speaker

Divyesh Vithlani


Explanation

As AI agents become more autonomous, there’s a need to understand how to adapt existing governance frameworks, particularly in banking and other regulated sectors


How do we scale AI infrastructure sustainably while meeting massive compute and energy demands?

Speaker

Mridu Bhandari


Explanation

The tension between AI’s energy requirements and sustainability goals requires further research into energy-efficient hardware, software, and deployment models


What are the specific technical requirements for AI-native networks to support distributed inference at scale?

Speaker

Erik Ekudden


Explanation

The transition from centralized training to distributed inference across billions of devices requires detailed technical specifications and infrastructure planning


How do we measure and manage agent performance, conflicts, and accountability in real-world deployments?

Speaker

Divyesh Vithlani


Explanation

As organizations deploy AI agents at scale, they need practical frameworks for performance management, conflict resolution between agents and humans, and clear accountability structures


What distinguishes AI-native nations from AI-dependent nations in terms of long-term competitiveness?

Speaker

Paul Hubbard


Explanation

Understanding the fundamental differences in approach, capability, and outcomes between nations that build AI capabilities versus those that merely consume AI services


How do we ensure AI diffusion and inclusion reach all segments of society, particularly in diverse countries like India?

Speaker

Erik Ekudden


Explanation

The challenge of making AI benefits accessible across different socioeconomic groups, rural areas, and marginalized communities requires targeted research and policy approaches


What are the practical frameworks for building public trust in AI while maintaining innovation speed?

Speaker

Paul Hubbard


Explanation

Balancing the need for public confidence and democratic participation with the pace of technological advancement requires refined approaches to stakeholder engagement


How do we transition from AI pilots to scalable, production-ready solutions that work consistently?

Speaker

Hari Shetty


Explanation

Many organizations struggle to move beyond proof-of-concept projects to enterprise-scale AI implementations that deliver reliable business value


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.