Shaping AI’s Story: Trust, Responsibility & Real-World Outcomes

20 Feb 2026 18:00h - 19:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel opened by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress and introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources and social good – as concrete pillars for global cooperation [1-4]. The central question posed was how to achieve trust before skill, positioning outcomes and responsibility as a competitive advantage [6].


Paul Hubbard argued that AI should be viewed through an economics lens, focusing on public value rather than mere technological adoption, and that trust is the foundation that enables innovation [26-27][39-46]. He emphasized a people-first, democratic participatory approach that meets citizens where they are rather than imposing new technology [44-46].


Erik Ekudden described how telecom networks are evolving into an “intelligent fabric” that will host AI inference for devices such as AI glasses, requiring the network to be secure and trusted [49-58][75-82]. He noted that scaling this fabric is essential for future industrial AI applications in agriculture, healthcare and smart manufacturing [54-58][60-62].


Divyesh Vithlani explained First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI governance, a layered data-model-knowledge architecture, and dynamic oversight through separate execution and control planes to manage agents and their performance [111-133][200-226]. He added that agents are treated like humans, with guardrails, performance monitoring and an “agent university”, to ensure accountability and mitigate hallucinations [225-226].


Hari Shetty contrasted “proof over promise” with a problem-first mindset, insisting that AI solutions must run continuously, earn trust through consistent performance, and be measured beyond simple productivity using “plus scores” that track failures and quality [152-157][236-247]. On accountability, Paul stressed a clear, inclusive plan that spreads AI benefits across communities while safeguarding citizens, whereas Erik highlighted a hierarchy of agent decision-making that ties responsibility to the domain providing the service [163-170][177-186]. Both agreed that perceived AI risk is manageable; Hari called hype overstated, while Erik warned that excessive caution in the public sector could hinder progress [344-348].


Looking ahead, Paul added that capability, competence and curiosity will differentiate AI-native nations, and Erik argued that energy-efficient hardware, software and inference distribution can keep AI expansion sustainable, with networks accounting for only a small share of total power consumption [300-306][311-326]. Finally, Paul described the AI CoLab as a cross-sector initiative that brings government, industry and academia together to solve problems collaboratively, and Bhandari concluded that aligning the seven pillars will redefine competitiveness, rebuild public trust and future-proof institutions [385-399][456-458].


Keypoints


Major discussion points


Trust as the foundation for AI innovation – The panel repeatedly stressed that trust is not an obstacle to innovation but the very base that enables it. Mridu asked how to build confidence in AI without slowing progress [33-37], and Paul replied that “trust lets you make the innovation” and that a people-first, participatory approach is essential [39-46].


The network as an “intelligent fabric” that must evolve from passive conduit to active, trusted AI enabler – Erik described how 5G/6G networks are becoming the host for distributed inference (e.g., AI glasses) and must be secure, scalable, and edge-enabled [49-60][75-82]. He later linked this infrastructure to business value, noting that AI-driven network services can generate large efficiency gains and new revenue streams [262-270][284-287].


Platform-first governance and dynamic oversight for enterprise AI – Divyesh explained that a layered, platform-centric architecture (execution plane + control plane) with built-in guardrails, deterministic “atomic” agents, and continuous performance monitoring is how banks can maintain accountability while scaling AI [130-138][200-219][225-226].


Moving from “proof-of-concept” pilots to proven, production-grade AI – Hari outlined four enterprise principles: start with the problem, adapt to legacy-heavy environments, ensure continuous, reliable operation, and earn long-term trust by avoiding hallucinations [147-154][155-162]. This shift is presented as the key to turning AI hype into measurable outcomes.


Future-oriented considerations: AI-native nations, sustainability, and ROI re-framing – Paul highlighted that beyond infrastructure, a nation’s capability, competence, and curiosity will separate AI-native from AI-dependent economies [300-306]. Erik argued that AI’s energy intensity can be mitigated through efficient hardware, software, and inference-centric deployment, turning network power use into a net-positive for emissions [311-326]. Hari and the panel also argued that ROI should be viewed as a capability (EI) rather than a simple cost-benefit metric [234-247].


Overall purpose / goal of the discussion


The session was convened to explore how global stakeholders can “shape a sustainable AI future” by aligning the seven “chakras” of human capital, inclusion, trust, resilience, science, resources, and social good [1-4]. Throughout the dialogue the panel sought concrete ways to achieve trust before skill, embed AI responsibly in public policy and enterprise operations, and translate high-level ambition into accountable, scalable actions.


Overall tone and its evolution


– The conversation opened with a formal, visionary tone, setting out broad principles and introducing the panel [1-14].


– It then shifted to a pragmatic, solution-focused tone, with detailed technical explanations about networks, platform governance, and operational safeguards [39-82][130-138][147-162].


– Mid-discussion the tone became optimistic and forward-looking, emphasizing future capabilities, sustainability, and the transformative impact of AI-native societies [300-326][447-452].


– The closing remarks returned to a hopeful, unifying tone, reiterating the “People, Planet, Progress” sutras and the belief that aligned global cooperation will future-proof institutions [456-458].


Overall, the dialogue moved from setting the agenda, through concrete technical and governance recommendations, to an inspiring vision of AI’s role in the next decade.


Speakers

Mridu Bhandari


Area of expertise: AI policy, responsible AI, multi-stakeholder governance


Role / Title: Moderator, Network18 (referred to as “Vipi Bhandari” in the opening) [S1]


Hari Shetty


Area of expertise: Strategy, technology implementation, AI consulting


Role / Title: Strategist and Technology Officer, Wipro [S4]


Divyesh Vithlani


Area of expertise: Banking technology, digital transformation, AI platform governance


Role / Title: Group Chief Technology and Transformation Officer, First Abu Dhabi Bank [S5]


Paul Hubbard


Area of expertise: Public-policy economics, AI governance in government


Role / Title: First Assistant Secretary for AI Delivery and Enablement, Department of Finance, Australian Government; also known as the “AI masked economist” (self-described) [S7]


Erik Ekudden


Area of expertise: Telecommunications networks, AI-enabled connectivity, 5G/6G infrastructure


Role / Title: Chief Technology Officer, Ericsson [S8]


Additional speakers:


Harish Yatich


Area of expertise: Technology strategy (introduced as part of the panel)


Role / Title: Strategist and Technology Officer, Wipro (mentioned in the opening; “Harish Yatich” appears to be a mis-transcription of Hari Shetty’s name, as the same role is attributed to both)


Dinesh


Area of expertise: (not specified)


Role / Title: (mentioned in a question prompt; no title provided)


Full session report
Comprehensive analysis and detailed insights

Opening and framing


Ms Bhandari opened the session by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress and introduced seven concrete “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation [1-4]. She then presented the core challenge of the AI-first decade as achieving trust before skill, arguing that outcomes and responsibility must become a competitive advantage rather than a cosmetic concern [6-8].


Paul Hrubag’s economics-first view (later self-identified as Paul Hubbard)


Paul Hrubag, first assistant secretary for AI delivery and enablement at the Australian Department of Finance (who later refers to himself as Paul Hubbard), responded from an economics perspective, insisting that AI should be evaluated on the public value it creates rather than on mere technology adoption [26-27]. He rejected any trade-off between trust and innovation, stating that “trust lets you make the innovation” and emphasizing a people-first, democratic-participatory approach – meeting citizens where they are and building on existing familiarity with AI – as essential for public confidence [39-46]. He also recounted how he earned the nickname “AI-masked economist” during the COVID-19 pandemic when he launched a podcast to demystify economics and AI jargon [??].


Eric Ekudin / Erik Ekudden on the intelligent fabric


Eric Ekudin (later speaking as Erik Ekudden) described the evolution of telecom networks from passive data carriers to an “intelligent fabric” that will host AI inference workloads such as AI glasses, which off-load processing to the edge [49-58]. He highlighted that 5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, healthcare and smart manufacturing, and that the network already provides the guarantees needed for billions of devices [60-62][75-82]. The transition to an active, AI-enabled fabric is presented as a prerequisite for future business value and new revenue streams [citation needed].


Divyesh Vithlani’s platform-first, agent-centric governance


Divyesh Vithlani outlined First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI, data and model governance into a layered architecture (data, model, knowledge, context) and separating execution from control planes to enable dynamic oversight of autonomous agents [130-138]. Agents are treated like human staff – with guardrails, performance monitoring, and an “agent university” that tracks token consumption, output quality and hallucinations – ensuring accountability and mitigating risks [200-219][225-226]. He also noted that AI is a general-purpose technology, reinforcing the need for a platform-first approach [??].
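The execution/control-plane split described above can be sketched in miniature. This is purely illustrative: the names (`ControlPlane`, `AgentReport`, `run_agent`), the token budget, and the quarantine behaviour are assumptions for the sketch, not First Abu Dhabi Bank’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentReport:
    """Telemetry the control plane collects for each agent run."""
    agent_id: str
    tokens_used: int
    flagged_hallucination: bool

@dataclass
class ControlPlane:
    """Oversees agents on the execution plane: guardrails plus monitoring."""
    token_budget: int = 10_000
    reports: list = field(default_factory=list)

    def authorize(self, report: AgentReport) -> bool:
        # Guardrail: record every run, reject runs that blow the token
        # budget or were flagged as hallucinating.
        self.reports.append(report)
        return (report.tokens_used <= self.token_budget
                and not report.flagged_hallucination)

def run_agent(agent_id: str, control: ControlPlane,
              tokens: int, flagged: bool) -> str:
    """Execution plane: do the work, then let the control plane vet it."""
    report = AgentReport(agent_id, tokens, flagged)
    return "accepted" if control.authorize(report) else "quarantined"

control = ControlPlane(token_budget=5_000)
print(run_agent("kyc-checker", control, tokens=1_200, flagged=False))  # accepted
print(run_agent("kyc-checker", control, tokens=9_000, flagged=False))  # quarantined
```

The point of the separation is that the execution plane never decides its own fate; every run passes through the control plane, which keeps the full report history for the kind of ongoing performance monitoring the panel described.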


Hari Shetty on proof-over-promise and “plus scores”


Hari Shetty contrasted “proof over promise” with the prevailing pilot-centric mindset, urging a problem-first methodology, adaptation to legacy-heavy environments, continuous-operation models and the earning of long-term trust through hallucination-free performance [147-152][155-162]. He introduced the term “product license” to describe the outcome-driven approach that solutions must meet before being marketed [??]. Shetty also defined “plus scores” as a metric that records model failures, hallucinations and any deviation from organisational quality thresholds, providing a concrete measure for ongoing trust [??]. Finally, he framed AI as a core EI capability rather than a simple ROI calculator, arguing that productivity is only an early indicator [236-247].
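As a rough illustration of how a “plus score” style metric might be computed from the failure and quality signals Shetty described: the formula, field names and the 0.9 quality threshold below are assumptions for the sketch, since the panel did not specify a formula.

```python
def plus_score(runs: list[dict]) -> float:
    """Illustrative quality score: the fraction of runs that were
    failure-free, hallucination-free, and met the (assumed)
    organisational quality threshold of 0.9."""
    if not runs:
        return 0.0
    clean = sum(
        1 for r in runs
        if not r["failed"] and not r["hallucinated"] and r["quality"] >= 0.9
    )
    return clean / len(runs)

runs = [
    {"failed": False, "hallucinated": False, "quality": 0.95},
    {"failed": False, "hallucinated": True,  "quality": 0.97},
    {"failed": True,  "hallucinated": False, "quality": 0.40},
    {"failed": False, "hallucinated": False, "quality": 0.92},
]
print(plus_score(runs))  # 0.5
```

The value of tracking a score like this over time, rather than a one-off productivity number, is that it captures exactly the consistency that Shetty argues trust is built on.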


Accountability and governance


Paul stressed the need for a clear, inclusive national plan that spreads AI benefits to rural and marginalised communities while safeguarding citizens, framing responsible AI as a whole-of-society leadership task [163-170]. Erik added that responsibility follows the hierarchy of decision-making: each domain that provides a service – network, cloud, application or device – must retain accountability, and existing telecom guardrails can be translated one-to-one into the AI world [177-186]. Divyesh reinforced this by highlighting the platform’s execution/control separation as a mechanism for public-sector accountability [??].


Risk perception debate


A moderate disagreement emerged over whether the public sector still over-estimates AI risk (Erik) versus the Australian government’s recent shift to a more proactive stance (Paul) [346-348][351-353]. A further divergence concerned the locus of AI integration: Erik advocated a network-centric “intelligent fabric”, while Divyesh promoted a platform-centric governance model [75-78][130-138].


Sustainability and energy efficiency


Erik acknowledged AI’s energy intensity but argued it can be mitigated through energy-efficient hardware, software and inference-centric deployment, noting that network power consumption is a small fraction of total electricity use and that digital technologies can reduce emissions in other sectors by up to 15% [citation needed].


Future outlook and cross-sector collaboration


Paul described the AI CoLab as a physical hub where government, industry, academia and NGOs co-create responsible AI solutions to real-world problems [??]. Erik envisioned AI-native networks as a “creative network” that can dynamically compose services for wearables, robotics and autonomous agents [??]. Hari projected a dramatic increase in decision-velocity, arguing that current organisational processes are too slow and that AI will enable near-real-time decision-making [??]. Divyesh painted a 2030 banking scenario where interactions are mediated by AI avatars, with instant cross-border payments and frictionless product discovery [??]. He also outlined a three-step plan for CEOs: (1) define a clear AI vision, (2) re-think operating models rather than merely automating tasks, and (3) engage Wipro for implementation [??].


Actionable take-aways


From the discussion the panel distilled several actionable recommendations:


(i) Build trust through people-first, participatory design and secure, low-latency networks [39-41][75-82];


(ii) Adopt a platform-first architecture with execution and control planes, layered ethical governance and continuous agent monitoring [130-138][200-219];


(iii) Move from pilots to always-on, problem-driven AI solutions that earn trust via consistent performance [147-152][130-138];


(iv) Measure AI impact not only by productivity but also by “plus scores”, decision-velocity and risk mitigation [236-247];


(v) Treat AI as a core EI capability rather than a pure cost-benefit exercise [236-247];


(vi) Manage risk with calibrated guardrails, avoiding premature over-regulation while ensuring public-sector accountability [344-345][351-353];


(vii) Foster cross-sector collaboration through initiatives such as the AI CoLab [??];


(viii) Pursue energy-efficient AI hardware and inference distribution to align AI expansion with sustainability goals [citation needed].


Closing


Ms Bhandari reiterated that aligning the seven chakras of human capital, inclusion, trust, resilience, science, resources and social good will allow AI to move beyond optimising businesses to redefining competitiveness, rebuilding public trust and future-proofing institutions for decades to come [456-458].


Session transcript
Complete transcript of the session
Mridu Bhandari

for shaping a sustainable AI future that we are calling People, Planet and Progress. And to translate these sutras into action, we are looking at what we call the seven chakras of aligned global cooperation. So these are the concrete pillars that will really turn ambition into accountability. We have human capital, inclusion, trust, resilience, science, resources and social good as the seven chakras that we are going to be talking about. Today we have with us a very eminent panel trying to answer the defining question of this AI first decade that we are in. How can we achieve trust before skill? Outcomes over optics and responsibility as a competitive advantage. I’m Vipi Bhandari from Network18 and I’m very delighted to be joined by a panel of very distinguished guests here tonight.

Starting from my left, Paul Hrubag, first assistant secretary for AI delivery and enablement at the Department of Finance in the Australian government. Next to them, Vibhesh Vitlani, Group Chief Technology and Transformation Officer, First Abu Dhabi Bank. Eric Ekudin, the Chief Technology Officer of Ericsson. And Harish Yatich, Strategist and Technology Officer at Wipro. Welcome, gentlemen. Thank you so much for joining us here today. You know, perhaps let’s set the context with the foundations of trust and skill. And Paul, if I may start with you first, you know, I was going through your LinkedIn profile and you call yourself the AI masked economist.

So, very interesting moniker there. Why don’t you first tell us what that really means? And then we’ll jump into the rest of the stuff.

Paul Hubbard

Thanks for having me, and it’s great to be here in India. I think we all bring a lens to AI. My lens that I bring is economics. I’m a public policy economist, which for me means AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.

Mridu Bhandari

And why do you call yourself the masked economist?

Paul Hubbard

Economist. That’s another story for you. That started in COVID, remember, when we were all wearing masks. And at the time, I started a podcast, which was all about explaining economics and unpacking the jargon. And I’ve kept that because I think explaining AI, unpacking the jargon, seeing how it relates to everyday life is really, really important.

Mridu Bhandari

Right. Now, when we talk about AI for social good, public permission is really, really important. Public trust is very important. Now, how do we really build societal trust? How do we build confidence in AI without really slowing down innovation? How are you doing that in Australia? Give us some examples of how you’ve been able to do that, especially because citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.

Paul Hubbard

Yeah, absolutely. I think it’s really important that we don’t frame it as like trust versus innovation. It’s actually a foundation of trust that lets you make the innovation. It’s starting from the proposition of what’s the problem we’re trying to solve or what are we trying to deliver for citizens? If you’re a government, what are you trying to deliver for your customers? Meet them where they’re at. Now, different countries, different populations have different comfort already, different familiarity with AI. You’ve got to know where people are up to, what they want, and build from there, rather than just say, here’s a brand new thing that we’re going to impose on you. So I think really that framing, that democratic, participatory approach, that people-first approach, is key.

Mridu Bhandari

Right. Erik, coming to you, AI is often discussed at the application layer, but you’ve mentioned that intelligence must be embedded into the networks themselves. Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?

Erik Ekudden

Yeah, so first of all, Ericsson builds networks, advanced connectivity, so 5G and 6G, and increasingly that’s becoming this fabric that we all depend on. But let’s start by thinking about what people are using today. Gen AI is already on hundreds of millions of smartphones, actually billions, already doing AI applications across the mobile infrastructure. So it’s already secure and trusted. The network already provides the guarantees that you need. But I think, especially here in India, we’re talking about industrial AI applications, agriculture. There’s going to be a lot of AI in the fields, hospitals, education, smart manufacturing. So there’s going to be a lot more dissemination of AI from where we’re focused today in training to distributed AI or inference generation.

That’s going to happen much further out in the network. So the network is actually becoming the host for all those great AI experiences. We need to scale the networks to handle that. I don’t think I’m the only one. Maybe not everyone carries two pairs of glasses here, but AI glasses. They are already available in millions. Good AI glasses that give you navigation support, that give you real-time language translation, maybe a prompt if you are on a stage making a keynote. I mean, these kind of things, they cannot be done on the device, on the wearables. You need to offload the AI, the inference from the glasses, to the edge. That’s why we talk about this as a transition to an intelligent fabric. The network is already secure, trusted. It’s going to be a carrier of all these inference workloads. So we’re just starting that journey. But I think it really comes back to basic principles. Networks need to be trusted. They need to be secure. They’re already moving from consumers into enterprise and government services, mission critical, big example here in India.

Mridu Bhandari

So what have the AI glasses been doing for you this week?

Erik Ekudden

I didn’t read every question because everything is perfect in India when it comes to finding new ways. On a serious note, I actually use them privately at work. But I start to see people getting really good value because it is an AI assistant. And think of it, especially for someone like me, wearing glasses. Once I’ve switched for good to these glasses, why would I go back? Even when I’m indoors, even when I’m at home, even when I’m training or in the elevator, I want it to work. And that, of course, means that the network, this intelligent fabric, needs to be so much better than it is today. Of course, great 5G networks here in India, but in the future, we will need even better ones.

And I think this is a change in terms of we will not get the full value of AI. We will not leverage AI fully until we connect it to that better network for AI. And that’s really what I’m focusing on. But you want to try it on? It’s a good one. No, it’s a great one. That was a little bit of a Gallup poll there, yeah: how many here are using AI or AR wearables, glasses, earpods, cameras? A few? Okay, well, two, two. Probably a representative crowd here. I think we are very early in this journey. It’s going to be a fantastic journey, I believe, for both consumers and anyone of us working in companies.

Mridu Bhandari

Absolutely, absolutely. Well, I’m going to come back to you, Divyesh, bringing you in. You know, in banking now, trust is not philosophical, it is existential. So how do you really embed AI into core decision making while ensuring you don’t dilute any risk discipline? So what governance models have you put in place that actually work for you? You know, any best practices that you can share with us here today?

Divyesh Vithlani

Sure. Well, first of all, it’s great to be here. And I’m already, you know, benefiting from the wisdom of my panelists, because my kids will tell you that I’ve been in denial about needing glasses. The eyesight is perfect, but the enlarging, the zooming really helps. But in reality, now I’ve got a different story for them, that I’ve been waiting for AI glasses before I really don a pair of specs. But coming back to your question, I kind of pick up on what Paul said. It’s not either or. It’s not about you have trust or you have productive AI. What we believe, like any regulated institution, is that there is no compromise on risks and controls.

Our business in banking relies 100% on trust. So that is not a value that we can compromise on at any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction. And we have conviction right at the very top of the organization that AI is a force for good. We’ve heard a lot this week about AI being a general purpose technology. I really love what Erik said about AI in the network, and I’ll sort of come to that in a second, because a large part of the answer is establishing a platform. But if we take a step back, a conventional organization is defined by its people, its processes, and its technology.

And there are all sorts of safeguards, guardrails, controls that have been built in. In the AI world, I think it’s going to be about agents, models, and data. And I think we’re going to have to have the same guardrails and the same controls, perhaps even stronger, because it will need AI to oversee and govern AI to be really effective. So the approach we’ve taken is on the basis of the conviction that we have that AI is a force for good, it is a game changer, and it is truly going to transform everything about how we live, work, play, and bank. We want to basically make sure that we empower the entire organization to leverage and scale AI in a safe, secure, efficient, and compliant manner.

Now, the only way, in my opinion, to do that is to take a platform-first approach. Just like Erik said about the network needing to be safe and secure, our AI platform and our agentic platform need to be safe and secure. So we have taken the approach of building a platform with all the different layers, from data, model, knowledge, context, and the use cases that sit on top of that, by building ethical AI, data governance, model-level governance, and the fair and appropriate use of AI into the platform. And by taking that approach, we are able to unleash the power of the technology in the hands of the end users. So just like when you open up Microsoft and start a new Excel, you’re not thinking about, is this safe, what’s the underlying architecture.

You’re doing it fairly intuitively. And we’re going to be able to do the same thing with AI, that our folks, our business colleagues, our engineers can use AI as naturally and seamlessly as they do any other task. So taking that platform-first approach is what really is driving our sort of strategy to ensure that we drive AI at scale but with all the right trust and safeguards.

Mridu Bhandari

Right. All right. Bringing in Hari as well. You know, we’ve talked a little bit about public permission. We’ve talked about infrastructure. We’ve talked about governance, security. There’s a final leap, which is from promise to proof. Now, enterprises are, of course, often caught between AI hype and hesitation. You speak a lot about proof over promise. Elaborate that for us. And what really separates scalable AI from the perpetual pilots that we keep seeing a lot of enterprises deploying?

Hari Shetty

First and foremost, very happy to be here with this panel. And putting on the Wipro lens, what do we do? We take Erik’s network, layer in the intelligence on top of it, and provide solutions to Divyesh. That’s where we fit into this entire graph in terms of what we do. Now, coming back to proof over promise, you absolutely brought up the most important topic that’s in discussion across the summit here as well. AI is no longer about pilots. It’s about being able to get value out of AI. And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective.

Number one, don’t start with a model. Don’t talk about model X or model Y and then start. Don’t start with model-first thinking; start with problem-first thinking. So you pick a problem, figure out what’s the right approach to solving the problem, and then work your way backwards to look at, you know, what models can actually help you solve the problem. So that’s the first approach.

The second part that we take care of is that the enterprise story is very different from the consumer story. Enterprises are necessarily messy. You’ve got technology that’s like 20 years old, 30 years old. You’ve got different personas, you’ve got different security needs. Data is, you know, in fragments across the organization. So the enterprise story is a completely different story than a consumer-grade story in terms of how things have to come together from an AI perspective. So in that context, our ability to prove a solution in the enterprise world is extremely important for us. And when we show it works in an enterprise, that’s when other enterprises build trust; that’s when it’s ready for diffusion. And by the way, we act as client zero for our solutions. So if we don’t get it to work in our own enterprise, there’s no point talking to any of the clients about implementing the solution.

The third principle here is that whatever solution we build, it’s not about making it work once. It should work every day, every hour and every minute. And only solutions that are capable of, you know, following that principle are the ones that we actually take to the market. And that’s another principle that’s extremely important for us.

And last, going back to trust that we all talked about: if you look at human trust, human trust is earned. Even agentic trust is earned. You need something that can work for a long period in time without hallucination, without fundamental flaws in the model, so that there’s trust built into it. So only when things work consistently over a longer period in time do you build trust. And these are the four principles that we use to actually talk about, you know, proof over promise, as what we call the product license.

Mridu Bhandari

Right. All right. Well, we’re going to shift gears a little bit and also talk about accountability because we’re talking a lot about architecture. Let’s also talk about who’s accountable for what in an enterprise and perhaps in the society as well. Now, Paul, when we talk about responsible AI at a national level, what does accountability really look like for leaders? Is it about measurement frameworks? Is it about reporting outcomes? Is it about, you know, independent oversight? What are the signals that you need to tell citizens that, you know, this is being deployed in your interest?

Paul Hubbard

Yeah, thanks. I think it’s really about having a clear plan that you can communicate. In our case, that means making it clear throughout the economy, throughout government, throughout society, that we’re going to seize the opportunity of AI. That means better jobs. That means investment in data centers and all the things we’ve been talking about. But the second thing, which is really perhaps even more important, is we’re going to spread the benefit of AI, not just to people in the tech center, but to every aspect of community, to people in rural areas, to people from marginalized groups, to people who maybe haven’t had the full benefit of current technology. So spreading that benefit further. And then finally, just making it really clear that we’re also acting at every level, whether it’s businesses or whether it’s government, to keep citizens safe in the process.

We’ve had a big conversation here at a model level about AI safety and AI harms, but we’ve also got to have that conversation in the context of our communities and what does it look like to keep citizens safe there. So I think it’s the whole-of-society leadership piece. It’s not just saying, well, the tech people can look after this from a technical perspective.

Mridu Bhandari

Right. And, you know, ecosystems, of course, are very interdependent today. You have cloud providers, you have the telecom networks, you have enterprises. Decisions are flowing across the distributed stack by the second. So who is really accountable?

Erik Ekudden

Yes. I want to build on what you said, and on what Hari said, about the difference between where we are today and when we are introducing agents at scale. To me, it isn’t so much a question of who, because if you are replacing work with an agent, that basically needs to translate into an accountability, and then also a transparency, trust, and governance issue around those agents. And increasingly we get agents at different levels. There are super-advanced agents at the top, and as you follow down the stack, we get more fine-grained agents having less knowledge, making decisions that are guardrailed in a different way than the top models. So think of this as a hierarchy of decision-making and, of course, of accountability.

But to me, there’s no question that when you are introducing agentic technology, you need to take the responsibility for your part. If your complete service consists of many different agents on the cloud side, on the advanced connectivity side, on the application side, on the device side, it needs to come together. But, of course, responsibility should reside in the domain that you are providing to the market, to the customer, to the employees. Then, of course, it’s never as simple as that, but in the world that I come from, in telecom, we’re already providing critical infrastructure. People’s everyday lives depend on it. So we already have guardrails, from a safety and security perspective, that we have to live up to in today’s world of 5G and telecom.

That, to me, should carry over into the agentic world. I know there are, of course, discussions about increasing governance and increasing regulation. I think that’s a dangerous way to go, because if you regulate before you have innovated, you never know what you will get. But if you stay with these basic principles, that we do have requirements and we do have guardrails in the world we’re coming from, and you translate that more or less one-to-one into the agentic world, I think we are at a good starting point.

Mridu Bhandari

Right. And, Divyesh, we are talking about this agentic world, humans and machines working hand-in-hand. Now, as these dynamics shift, how should we be rethinking governance? How should we be rethinking trust? And, of course, governance is never static; it’s going to keep evolving. So what does dynamic oversight really look like, especially in a very regulated industry like yours?

Divyesh Vithlani

Look, I really love that question, because at the end of the day, as a CTO in a bank, I am accountable. I am responsible for the platform that we construct and the output that gets generated from that platform, whether it’s from a human or an agent, right? So that’s my accountability. And this is where I have interesting debates and conversations with colleagues from Wipro and other partners of mine who are very eager to sell me solutions. And I say, if the solution is a black box, then I’m going to find it very difficult to integrate it into my environment, because ultimately I have to be able to explain the output that gets generated. So, to your question on dynamic oversight, it again goes back to the platform and the way we’ve architected it.

The platform, without getting too technical, is on two planes: there’s an execution plane and a control plane, right? But again, it’s not that sophisticated. It’s just like when you onboard a new graduate into your organization. You give them a set of guardrails and a set of responsibilities befitting their skill set and experience. You provide the right level of supervision and the right level of oversight. And as they grow and become more proficient, you give them more responsibility. We treat agents in exactly the same way. There’s a lot of conversation about agents being autonomous and hallucinating. Well, individuals can do the same thing if they’re left to their own devices, right?

So the way that we have built our agentic architecture is that, as Erik said, there are different types of agents. At the lowest level, agents are not just autonomous but atomic. And with the right set of guardrails and agentic operating processes, they are also deterministic, right? We basically create agents to perform a single task, and we make them as reusable as possible, so we can compose and aggregate them into a higher-level workflow. And as they learn more, which is the good thing about agents, they learn faster, you give them more responsibility, just as you do with humans. But again, it goes back to that execution plane, where you are monitoring every activity through the control plane. The other features of the platform include how we onboard and offboard agents, just as you do with humans.

And we also have practices in place to manage conflicts between agents and humans, because, just as you have conflicts between two humans, you can have a conflict between an agent and a human, right? And you need to be able to detect that in real time. So that’s some of the work that we’ve done. It’s early days, and I don’t claim we have all the answers, but certainly the space is moving very fast. The key is that we humans always have to be in control, so the way we design the architecture is to ensure that happens.
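The two-plane idea described here, atomic single-task agents on an execution plane, with every action observed by a separate control plane, can be sketched in miniature. This is purely an illustration; all class and function names below are invented for the example and are not part of FAB’s actual platform.

```python
# Illustrative sketch only: invented names, not FAB's real architecture.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class AtomicAgent:
    """An agent that performs exactly one deterministic task."""
    name: str
    task: Callable[[Any], Any]

    def run(self, payload):
        return self.task(payload)

@dataclass
class ControlPlane:
    """Observes every execution; agents never run outside it."""
    log: list = field(default_factory=list)

    def execute(self, agent: AtomicAgent, payload):
        result = agent.run(payload)
        self.log.append((agent.name, payload, result))  # full audit trail
        return result

# Compose atomic agents into a higher-level workflow.
extract = AtomicAgent("extract", lambda doc: doc.strip().lower())
classify = AtomicAgent("classify",
                       lambda text: "payment" if "pay" in text else "other")

control = ControlPlane()
text = control.execute(extract, "  PAY invoice 42  ")
label = control.execute(classify, text)
print(label)             # payment
print(len(control.log))  # 2 monitored executions
```

The design choice the sketch captures is that explainability comes from the control plane: because nothing runs outside `execute`, every output can be traced back to the agent and input that produced it.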

Mridu Bhandari

So are agents being put through tough performance appraisals? Are they being fired for hallucinating?

Divyesh Vithlani

100% right. And again, it may sound really basic, but I view an agent as no different to a human. So you do performance management; there’s a concept that we call agent university, right? And I love that term, because I was chatting earlier with James about this: at university you’re learning how to learn, right? That’s what we want the agents to do as well. And whilst humans may fill out a timesheet to account for the work they’ve done, to measure the output they’ve produced against the cost they’ve consumed, agents may not fill out a timesheet, but we are monitoring the agent for the work done and the tokens consumed against the output generated, to ensure that we measure their performance in a similar way.
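The “timesheet for agents” idea, output generated measured against tokens consumed, could look something like this in outline. The metric, the function names, and the review threshold are assumptions made up for the sketch, not the bank’s actual formula.

```python
# Hypothetical agent-performance accounting; all numbers are invented.
def agent_efficiency(outputs_completed: int, tokens_consumed: int) -> float:
    """Outputs produced per 1,000 tokens consumed: a stand-in 'worth' metric."""
    return 1000.0 * outputs_completed / tokens_consumed

def needs_review(outputs: int, tokens: int, floor: float = 0.5) -> bool:
    """Flag an agent whose efficiency drops below an (assumed) floor,
    analogous to a human performance review."""
    return agent_efficiency(outputs, tokens) < floor

print(agent_efficiency(12, 8000))  # 1.5 outputs per 1k tokens
print(needs_review(12, 8000))      # False
print(needs_review(2, 8000))       # True
```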

Mridu Bhandari

Wonderful. Well, Hari, bringing you in as well: how should organizations measure the ROI? That’s a question enterprises around the world have been debating. What’s the value beyond the profit, beyond the bottom line? Are we looking at trust scores? At productivity? At decision velocity, risk mitigation? At the core, how are you looking at the ROI?

Hari Shetty

Probably one of the most debated topics, and one that I hear a lot about. I will give you the Wipro context in terms of how we are looking at productivity. Point number one: while everybody talks about use cases and productivity measurement of AI, we think AI is beyond just measuring return on investment or measuring productivity. It’s almost like going back in time and asking, should we implement an email system, what’s the ROI on the email system? Or, why should I go on the internet when I already have a company brochure? A lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability, a fundamental shift, and a journey that is irreversible in terms of where we are going. So it’s not a question of whether to invest because there’s ROI or not; it’s a path we have to go down, and we look at it as a capability. Within Wipro, we look at it as a capability, so we are not really asking, for every single use case, whether there is ROI on it. Now, having said that, as a business leader, ROI is extremely important.

Mridu Bhandari

Well, your clients must be demanding the ROI for sure.

Hari Shetty

Yes, that’s equally true. So the first element we talk about is that the earliest signal of ROI is productivity, right? We always talk about productivity as an early indicator of what can come down the pipe, but productivity is only an early signal. The resulting benefit is always an end outcome. It can be cost. It can be units produced. It can be better quality. It can be cycle-time reduction. It’s many of those things. And our goal has always been to move beyond productivity, because productivity is a number that people talk about very frequently in AI; we are moving beyond it to look at the end outcomes we can achieve. Our models are built to help clients understand the end benefit of AI rather than just look at productivity.

Plus scores are becoming equally important, and I will just touch on them for a minute. When we look at plus scores, we are looking at how many instances of failure happened, and whether that is within the vector of what the organization, or the process, says is acceptable. So it’s important to measure quality aspects, failure aspects, the hallucination we talked about, all the other aspects of AI where it can go wrong, and then measure the task goal and see whether it’s appropriate for the process we’re talking about. So we had situations where we talked about probabilistic models and deterministic models.

We had customer cases where 100% was the only acceptable answer, or 99.99%. There were situations where 85% was good enough. So again, there’s no single answer; it depends on the kind of process and the kind of problem we’re trying to solve.
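A rough sketch of the plus-score check as described here: count failure instances and compare the resulting success rate against the tolerance a given process declares acceptable. The function names and run counts below are illustrative only, not Wipro’s actual model.

```python
# Illustrative plus-score tolerance check; invented names and numbers.
def plus_score(total_runs: int, failures: int) -> float:
    """Fraction of runs that succeeded."""
    return (total_runs - failures) / total_runs

def within_tolerance(total_runs: int, failures: int, required: float) -> bool:
    """Is the observed success rate at or above the process's tolerance?"""
    return plus_score(total_runs, failures) >= required

# A payments process might demand 99.99%; a summarization task only 85%.
print(within_tolerance(100_000, 5, 0.9999))   # True  (99.995%)
print(within_tolerance(100_000, 20, 0.9999))  # False (99.98%)
print(within_tolerance(100, 12, 0.85))        # True  (88%)
```

The point the example makes is the one in the transcript: the same failure count can be acceptable for one process and disqualifying for another, so the tolerance must come from the process owner, not from the model.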

Mridu Bhandari

Right. And do you think business innovation would perhaps be one of the biggest ROIs and any outstanding cases of business innovation that you’ve seen with AI being scaled successfully yet?

Hari Shetty

Yeah, that’s a fantastic question. Let me give you a quick example, because that will bring this to life. One of the projects we did was for an energy client, for a refinery. Obviously, everything was automated and instrumented, with a lot of sensors all along the way, and they were asking us: what’s the value of AI in this context? The work we did for them was the analysis of a flame. Interestingly, out of the flame we could extract information about combustion efficiency, the fuel-to-air mixture ratio, and the maintenance state of the equipment, all derived from models we built just by looking at the flame. The kind of information we could secure just by looking at the flame was far superior to sensor-based technology, because sensors typically tell you something is working or not working based on a threshold; here we could find out the health of what’s happening with incremental change, compared to the on-and-off kind of situation you get with sensors.

Mridu Bhandari

Fantastic. Erik, you want to add?

Erik Ekudden

Yeah, can I just add one thing? I think it’s so interesting to look at how, in our world, we talk about this intelligent fabric of 5G. And, of course, there are gains if you apply AI in terms of efficiency and productivity. You can get better customer experience, and you can measure that: 10%, 15% is a great achievement, a 20% saving. We’re talking about billions of dollars there. But where our customers get super excited is when they take, for example, the complete network, use modeling on top of it, and then start to produce new outcomes. It’s business growth. And, of course, it’s not always that you can find that clear case.

But that’s really where AI and autonomous networks are helping. Saving, yes, TCO is important. But it’s very much about that business growth.

Mridu Bhandari

Any example you can share with us there?

Erik Ekudden

Yeah, Glasses was one example. But in the future, every device, every application, every AI service will need its own specific service quality, latency, all of that. So you can start to sell services that are tailored for mission-critical use, for enterprises. And that’s what leading customers, including here, are doing; they’re using AI for that. That kind of segmentation and growth of the business is an upside that is unlimited. So, of course, it’s more exciting.

Mridu Bhandari

Absolutely. Well, let’s also look at the long-term competitiveness and value creation we can achieve with AI. Paul, if we were to project 10 years ahead, what would really separate AI-native nations from AI-dependent nations? Is it infrastructure? Talent pipelines? Compute capacity? What would you add to that list?

Paul Hubbard

I would add capability, competence, and curiosity. A lot of the things you mentioned, data centers and so on, will be built; there will be investment. But the underlying models and the compute will be commoditized, and what will set countries apart is the ability of government institutions to adapt, the ability of the economy to be flexible to new approaches, and the ability of the workforce to find the new jobs, the new wants and needs that are created, and, as the bottlenecks shift, to be able to move to those.

And I’ve got to say that coming to India this week, I see not just competence, capability, and curiosity, but a downright enthusiasm for this. So I think maybe India is one to watch.

Mridu Bhandari

Good to know, and happy to hear that, of course. Because, Erik, AI demands massive compute, massive energy, massive connectivity. How do we reconcile infrastructure-scale AI expansion with sustainability? Even as AI scales globally, how do we ensure that efficiency is imperative in everything that is deployed?

Erik Ekudden

Well, AI is energy-intense, especially now in the training phase. Some of the data out there, I mean, the numbers are mind-boggling, and I’m not even sure we’re going to need the kind of energy that has been predicted. But as I was saying before, we’re moving from that big-data-center training to distributed inference; that’s where the puck is going. That means you need to scale to something like eight billion inference devices such as glasses, and tens of billions of sensors using AI, visual sensors. So what we are doing, and what needs to happen, is to have energy-efficient hardware, energy-efficient software, and energy-efficient AI models.

Small models when you can get away with that, and of course big models when you can’t. So we’re not going to explode energy consumption just because we use more AI. In fact, we’re going to use even smarter and better ways to do it, on both the hardware and the software side. Then, just to put things in perspective: all the world’s networks together account for around one percent of total power consumption. And by using more digital technology, you are able to reduce emissions in other sectors by as much as 15%. So it’s roughly a 10 to 15 times payback on that energy consumption. And again, if you combine that with what I said about being really conscious of energy efficiency as you move further out, I think it’s actually going to be a sustainable way to do a lot of things, not just replacing unnecessary travel and logistics chains with more digital means.

Everything is going to be more efficient, so I think we have to be a little bit careful before we say that it’s just exploding and it’s completely outrageous. Because if you just project those big data center training clusters, it looks scary, but that’s not the whole picture.

Mridu Bhandari

All right. Well, Divyesh, while we are talking of value creation from AI, many organizations are still measuring AI success in cost savings. At your organization, how are you reframing AI value in banking: resilience, fraud protection, customer trust, capital efficiency? What are some of the metrics you are tracking to ensure this is true value creation?

Divyesh Vithlani

I think it’s a question that is constantly exercising our minds. If I start with the productivity question you asked earlier: while there isn’t a straightforward answer, I look at it on three levels. First, AI will provide micro-level productivity through co-pilots and technologies like that, which might be difficult to measure, but it is certainly helping with literacy and the overall level of education and awareness in the organization. Secondly, at the enterprise level, and this is your point on value creation, we absolutely see the potential of AI to drive significant ROI. You take very complex processes, which have been using earlier technologies, whether it’s RPA, OCR, et cetera, and when you apply AI and agentic technology, you can actually take them to the next level.

And these are extremely complex, error-prone processes involving large sums of money, and when we’ve applied AI and agentic technology to them, we’ve seen incredible outcomes, which gives us tangible value creation. The third aspect: if we really take a step back, certainly in banking, what is our biggest source of competitive advantage? It’s not necessarily the technology or the products or any other capability, right, because the next person can come along and emulate those. It’s really our ability to respond and react to change faster than our competitors. And that’s how AI is going to help us create value: it allows us to respond to change faster, run rapid experiments, and scale and double down where we think we will see significant ROI.

Mridu Bhandari

Right. Okay, so I have a question for all of you, and perhaps you can each take about 30 seconds. Do you believe enterprises today are overestimating or underestimating AI risk? And how should leaders and boards measure AI trust readiness in practical terms? Hari, maybe you want to start on that one.

Hari Shetty

See, there is certainly a level of risk that one should be aware of and work with, and in every business there’s always an element of risk that one has to mitigate, so AI is no different from that perspective. But at the same time, the hype about risk is also overstated. It’s a manageable risk, not an uncontrolled, unmanageable risk. And with the right kind of toolset, of the sort Divyesh talked about, it’s definitely possible to get the best value out of AI without exposing oneself to undue risk.

Mridu Bhandari

Okay, that’s a very diplomatic, balanced answer you’ve given us. Erik, what do you think?

Erik Ekudden

I suspect the risk assessment among enterprises has become quite realistic; they don’t overestimate the risks, which are manageable. I think maybe on the government side there’s still an overestimation of risk, trying to be too cautious, and that, I think, could hold things back in certain public sectors and other areas. Then again, the risks are very, very big if you mistreat this extremely powerful technology. So I’m not saying that we’re over the hump, but that’s what I think.

Mridu Bhandari

Paul, do you want to take that on, considering Erik just said that perhaps the public sector overestimates risk? Would you say that of the government in Australia as well?

Paul Hubbard

I mean, certainly governments have a responsibility to start off with a more cautious approach than private-sector folk. I’d say there’s a shift from the uncertainty of something new that isn’t quantifiable to actually understanding the risk, and once you understand the risk, you can manage it. So certainly over the last year or so, the government of Australia has taken a much more active posture towards AI, in a sense embracing the risk a little more than we were in the past. But as we grow the capability, as we’ve got the foundation of trust and the guardrails that we need, it means you can actually manage that risk, and that’s the key thing.

Mridu Bhandari

All right, Divyesh?

Divyesh Vithlani

Look, with any so-called new technology there is always going to be a level of fear, uncertainty, and doubt. But the paradox for me is that AI is actually not a new technology. In fact, it predates cloud, mobile, and robotics; I was writing AI programs at university. But AI was just well ahead of its time. We needed the cloud to be able to process large amounts of data; we needed the kind of data centers we’re talking about for the compute, et cetera, for this technology to really come to light. And clearly, as we’ve gone through digital, social, cloud, and data, along the way we’ve seen many, many regulations around data protection, how best to use cloud, data sovereignty, data residency, et cetera.

So as long as we are not shedding the controls we’ve already built, and we make sure we tighten the guardrails as we deploy AI, and deploy it through a platform-centric approach where you’ve built the necessary guardrails, I think those risks will be managed and mitigated. And hopefully what we’ll start to see is that the benefits of this combined technology far outweigh the risks and concerns. The only qualification I would make, and I think it’s been talked about at this conference, is making sure that we do take…

Mridu Bhandari

Absolutely. I mean, it has to be inclusive for all, especially in a country like India where we have divides of many kinds. Well, let’s spend a few minutes looking ahead and doing some crystal-ball gazing. Erik, if I can come to you: we are entering autonomous networks, embedded intelligence, physical AI, from robotics to massive systems. What does an AI-native network look like, say, five years from now, because anything more than five is just too much to envision, and how do we get the mobile and cloud infrastructure ready for that future?

Erik Ekudden

Well, I think we have to look perhaps further out than five years, because we’re building something that should work for society in broad terms. But of course AI is moving super fast, and when you ask about AI-native, I think any industry, including the one I represent, is going through major change now. AI-native is not just how you build your products, that they need to be data-driven, need to learn, need to be updated all the time. It’s very much about your processes: how you go to market, how you engage with lifecycle management, how you handle questions, and I think we talked about it in the pre-meeting as well.

There are so many changes in how you build AI-native systems that it is a fundamental rework for, I would say, most product companies, and actually service companies as well. So an AI-native world is something that is much more responsive to these fast changes we talked about, and an AI-native network is a network that is responsive to all of these needs. You already mentioned physical AI, which is just around the corner: humanoids, robots, drones, all things that require much more tailoring and much more flexibility from that network, or intelligent fabric. So we need to do what I call user experience at scale, or massive user experience.

Everything has to have its own unique requirements met. I think only AI-native networks that respond in real time to these needs, adapt, and create the best user experience can handle it. So it’s going to be a very different world, very intuitive, judging by what we already see on the wearable side, but it’s going to be a completely new setup.

Mridu Bhandari

Right. And Paul, as we’re looking ahead, public-private partnerships are of course going to be key to any kind of success. Tell us a little bit about the AI CoLab and your approach towards bringing together public institutions, academia, and industry to advance the practical adoption of AI while keeping it transparent and ensuring that public good is at the center of it.

Paul Hubbard

Absolutely. So the AI CoLab is a cross-sector initiative where folk from government, folk from the private sector, academics, and not-for-profits can get together in one place, often in person, to understand things. And I think everybody who’s come to the AI Impact Summit really understands that we can’t do this alone; nobody in their silo can solve the problem themselves. We’ve got to get capability from each other; we’ve got to learn from each other. And I think the 300,000 people who have been here this week have certainly proven that to be the case. I think it’s also key to actually doing safe and responsible AI. It’s not just the technical controls or the networks that we have.

It’s having the people in the room who may not care about AI, but who do care about the services being delivered. They do care about their voice being heard. They do care about the environment around them as well. So it keeps bringing you back to reframing that: what’s the problem we’re trying to solve? What’s the mission we’re trying to achieve? And if we want to talk about impact, that’s the key question.

Mridu Bhandari

Right. All right. Well, let’s also look at the financial angle with Divyesh. We’ve talked about open finance and very effective financial ecosystems. What is it really going to take to scale AI to that level, especially in the near term, to enable very responsible deployment and sustainable finance, with, say, farmers, particularly in the Indian context, given the complexities that we see in this country?

Divyesh Vithlani

So I think it’s going to be a force for good. If I look at banking, I don’t think the core of banking is going to change. However, how we bank, how we drive that experience for our customers, is going to be transformationally different in the future. Just one example, to pick up on your question: if you combine the technology of AI with, say, digital assets and stablecoins, you get the ability to move money as fast as email. Why does it take three or four days today to clear a cross-border payment, right? That goes completely against the whole concept of open finance and inclusion. So I think AI, together with some of these other technologies, is going to be a game changer in enabling things like that, and in driving an experience that is much more natural and intuitive than it is today.

Personally, as a CTO, I get a lot of questions about whether jobs are going to go away, et cetera. In any organization, certainly the banks I’ve worked in, the annual CapEx demand typically outstrips supply at a ratio of five to one. But AI can help us change those legacy systems and modernize our platforms, because, let’s be honest, 90% of banks still operate with legacy technologies; very few are greenfield. All of those technologies need to be modernized and upgraded, and I think AI, again, is going to be a force for good there. And once we modernize those systems, it will again lend itself to connecting more seamlessly through microservices and APIs, without getting into the technical details, through MCPs, et cetera.

So I think that AI, together with some of these other technologies, digital assets and the like, will drive a very different paradigm in terms of…

Mridu Bhandari

Lovely. Very exciting times ahead. Well, Hari, if you were to give a CEO a three-step plan today to really scale responsibly, what would that be? Three things.

Hari Shetty

Okay. Number one, be very clear about what you want to achieve with AI. Have the vision right; have clear objectives in terms of what you want to achieve. That’s the first part. The second part I would call out: don’t think about task-level automation. Think about what AI does to your business. It’s fundamentally an operating-model shift that can actually deliver value. So think big; think about the operating-model shift, which will require structural changes, changes in ways of working, skill changes. It’s a complete transformation, not just automation. And the third thing: please call Wipro.

Mridu Bhandari

All right. Let’s now imagine that we are at the India AI Impact Summit 2030, just about four years ahead. What has changed in the way we live, work, and play that hadn’t happened when you were here last, which is today? Paul, do you want to start? And you can go ahead with the imagination.

Paul Hubbard

Yeah, okay. Look, as an economist it’s very hard to predict the future. I think what will have changed is that there’s a whole bunch of people turning up with job titles we’ve never even heard of before, and they’re telling us about things that people in a bureaucracy or government today can only dream about. So I think we’ll see a lot more diversity in what people do.

Mridu Bhandari

Right, lots of new jobs. And yes, most industry reports suggest that many of the new jobs of the next decade have not been invented yet. So, absolutely.

Divyesh Vithlani

Well, in four years’ time we may not be here in person; it will be our agents or avatars being teleported in, because the technology, through Ericsson’s amazing network, has the bandwidth and the latency has improved vastly, and obviously with Wipro’s technology around creating these avatars and agents. But to be serious, I think what will have changed, at least from my perspective, is that banking will be a lot more seamless. It will really be about putting customers first rather than imposing the friction we see today in how financial services work. For instance, we will be shopping much more intuitively. We won’t even know that we need a new fridge or a new car.

It will kind of just occur to us naturally, and something will appear on your doorstep that you didn't even know you needed, but once it arrives, you think, wow, that's exactly what I needed. The payment's taken care of. All the servicing is taken care of. So I think that is a near-term reality.

Mridu Bhandari

All right. Erik, Hari, go ahead.

Hari Shetty

A couple of things. One is, I'll definitely break my glasses and use Erik's glasses. More importantly, what I think will fundamentally change is decision velocity. The decision velocity in organizations will completely change in the next four years. One of the key things we always hear in any enterprise is: our organization is so slow, processes take a lot of time, things don't happen at the pace we all want them to, and the experience one gets from a slow process is not necessarily a great experience. The fundamental problem that AI will solve, and I'm pretty sure it will solve it in the next couple of years, is that the velocity of everything will increase so tremendously that we'll look back and say, how did we ever tolerate something as slow as what we have today?

Erik Ekudden

Yeah, I wonder if it's doable in four years on a global scale. But I hope that what we see four years from now is dissemination, diffusion, everyone being included in this fantastic journey that AI really is about. But I think it hinges on the dialogue we are having here, and it is conditional on solving the trust issues. We talk about security and privacy as things we can solve technically and so forth, but that needs fundamental anchoring in how humans behave, so that you can really trust these agents, as was mentioned before, and so that we put the right constraints on them.

If that happens, then of course four years from now it's going to be so seamless, with our digital colleagues, our AI colleagues, physical AI colleagues, and so forth, that it's going to be a completely different way of looking at work and, of course, at how you get help. I mean, you're going to be an agent of something much, much bigger than what you're commanding today. I think it's an enormous shift.

Mridu Bhandari

Absolutely. Well, fascinating times ahead. Thank you, gentlemen, for your incredible insights; that was very educational and informative for all of us. The takeaway for me from this conversation is clear: if People, Planet and Progress remain our guiding sutras, and if we can align all seven pillars of global cooperation, AI is not just going to optimize businesses. It is going to redefine competitiveness, it is going to rebuild public trust, and hopefully it will future-proof all our institutions for the decades ahead. Thank you very much; I appreciate you all taking the time here, and thank you all for being a wonderful audience. Thank you.


Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Mr Bhandari framed AI sustainability using three sutras of People, Planet and Progress”

The knowledge base explicitly lists the three sutras as people, progress and planet, confirming the framing described in the report [S106] and [S108].

Additional Context (medium)

“Mr Bhandari introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation”

While the sutras are confirmed, the knowledge base does not mention the seven chakras; it only references the broader three-sutra framework, providing context but not confirming the specific chakras [S106].

Additional Context (high)

“Erik Ekudden described telecom networks evolving from passive data carriers to an “intelligent fabric” that will host AI inference workloads at the edge”

The transcript of the AI Impact Summit notes that telecom networks have evolved significantly from merely enabling connectivity to more advanced roles, aligning with the description of an “intelligent fabric” for edge AI workloads [S32].

Additional Context (medium)

“5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, health‑care and smart manufacturing, and the network already provides guarantees for billions of devices”

The knowledge base discusses the upcoming 6G ecosystem where devices will have AI capabilities and emphasizes the need for secure, scalable networks for widespread AI deployment, adding nuance to the claim about 5G/6G requirements [S118].

External Sources (119)
S1
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Mridu Bhandari- Moderator from Network18 This comprehensive discussion at the AI Impact Summit brought together leader…
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S4
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Hari Shetty- Strategist and Technology Officer at Wipro
S5
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Divyesh Vithlani- Group Chief Technology and Transformation Officer, First Abu Dhabi Bank
S6
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — All right, Divyesh? Starting from my left, Paul Hrubag, first assistant secretary for AI delivery and enablement at the…
S7
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — – Paul Hubbard- Divyesh Vithlani – Paul Hubbard- Erik Ekudden- Divyesh Vithlani- Hari Shetty – Paul Hubbard- Hari Shet…
S8
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Erik Ekudden- Chief Technology Officer of Ericsson
S9
Keynote by Marcus Wallenberg Chairman SEB & Saab — – Mr. Ek Udden: Chief Technical Officer of Ericsson (mentioned by Marcus Wallenberg as being present, but did not speak …
S10
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S11
UNSC meeting: Peace and common development — The speaker emphasises the importance of a comprehensive approach to achieving sustainable peace and security, rooted in…
S12
Agenda item 5 : Day 4 Morning session — Collaborative implementation of joint projects builds confidence
S13
Session — Building trust in electoral processes Eliud argues that building confidence in elections requires improvements in overa…
S14
DRAFT AUGUST, 2024 — What makes AI a compelling force for advancement and change is that the technology has the potential to make an impact f…
S15
Conversation: 01 — Thank you very much. And I must say that it’s very impressive to see India convene the world on such an important subjec…
S16
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Other countries have recognized this and have implemented national-level initiatives. UNDP is actively involved in suppo…
S17
National Strategy for Artificial Intelligence — Decisions made by systems built on artificial intelligence must be traceable, explainable and transparent. This means th…
S18
National Strategy for Artificial Intelligence — Citizens and businesses must have confidence in artificial intelligence whenever it is used by the public authorities, s…
S19
STRATEGIE NATIONALE DE L’INTELLIGENCE ARTIFICIELLE — La stratégie nationale d’intelligence artificielle (IA) de la Côte d’Ivoire repose sur dix principes fondamentaux qui …
S20
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S21
Agentic AI in Focus Opportunities Risks and Governance — That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry -driven, consensus -based stan…
S22
Shaping the Future AI Strategies for Jobs and Economic Development — Nations defined by geographical dispersal of small islands, 1 ,200 islands, narrow economy base, and acute exposure to c…
S23
Keynote-Brad Smith — “We need to look at AI as the next great generator for human curiosity.”[11]. “Human capability is neither fixed nor fin…
S24
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — Absolutely. And I told them that you were starting your journey on the Gen AI. Can we work with you on responsible AI? …
S25
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — In areas like textiles, pharmaceuticals, etc. The question now is, how do we reliably move from ideas to impact and be m…
S26
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Absolutely. Last year I talked about 400 use cases that we came up with in Saudi Aramco. This year we’re talking about 5…
S27
AI for agriculture Scaling Intelegence for food and climate resiliance — “We are moving beyond pilots to projects at full scale.”[47]. “We will move from pilots to platforms, from fragmented da…
S28
The Intelligent Coworker: AI’s Evolution in the Workplace — Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction s…
S29
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S30
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This comment is revolutionary because it redefines what a telecommunications network fundamentally is. Rather than viewi…
S32
Building Indias Digital and Industrial Future with AI — “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.”[1]. …
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance is not regulatory burden.It is strategic imperative.It ensures that innovation is sustainable, trust…
S34
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S35
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S36
Responsible AI in India Leadership Ethics & Global Impact — “So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for …
S37
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — This reframing fundamentally altered the discussion’s direction, moving away from technical solutions toward structural …
S38
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Development | Capacity development | Infrastructure
S39
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S40
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S41
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Deshpande’s framework emphasises three critical elements: providing scalable playgrounds for business units to operate w…
S42
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S43
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S44
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S45
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S46
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Hari Shetty, Strategist and Technology Officer at Wipro, addressed the persistent challenge of moving from pilot project…
S47
AI Meets Agriculture Building Food Security and Climate Resilien — “And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infra…
S48
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S49
GOVERNING AI FOR HUMANITY — – 190 Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who…
S50
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S51
Interim Report: — 52. Any AI governance effort should prioritize universal buy-in by different member states and stakeholders. This is in …
S52
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S53
WS #123 Responsible AI in Security Governance Risks and Innovation — She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF play a critical role in enabling multi-stakeholde…
S54
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S55
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste, Your Excellencies. Thank you so much for organizing this great event. It’s a great honor for Austria to be here…
S56
Comprehensive Report: European Approaches to AI Regulation and Governance — And that goes along with this state intervention only whenever necessary. And the goals of the regulation should be, we …
S57
Building Trustworthy AI Foundations and Practical Pathways — Debayan proposes defining risk as the product of the likelihood of an undesirable outcome and its severity. He stresses …
S58
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S59
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — i suspect that it’s become quite realistic the risk assessment among enterprises not to overestimate it they’re manageab…
S60
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 2 — Kazakhstan: Thank you, Chair. As we advance in our discussions, it is evident that while significant progress has been …
S61
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF…
S62
Cyber Resilience Playbook for PublicPrivate Collaboration — – Some capabilities have the profile of a pure public good (in the classic economics sense): their consumption is non-r…
S63
Summary — The Principality of Liechtenstein is supporting, developing and shaping digitalisation for the benefit of the population…
S64
Prediction Machines in International Organisations: A 3-Pathway Transition — Have you ever pondered whether it is appropriate to ask ChatGPT to write the first paragraph of a press release or rephr…
S65
UNSC meeting: Scientific developments, peace and security — The integration of artificial intelligence and neurotechnologies will enable ultra-fast decision-making
S66
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — Sebastian contends that AI’s ability to process vast amounts of historical and current data provides diplomats with pred…
S67
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “We also, I’m glad to announce establishing a specialized economic zone dedicated to digital technology and AI designed …
S68
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets c…
S69
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S70
AI Governance Dialogue: Presidential address — H.E. Mr. Alar Karis: Honourable leaders, excellencies, distinguished delegates. It is truly an honour to represent Eston…
S71
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S72
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This comment is revolutionary because it redefines what a telecommunications network fundamentally is. Rather than viewi…
S73
Building Indias Digital and Industrial Future with AI — “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.”[1]. …
S74
Trusted Connections_ Ethical AI in Telecom & 6G Networks — And not only that, but truly well performing networks. That is a fundamental platform to drive innovation on and to driv…
S75
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S76
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — So the data really doesn’t go out of the bank themselves. But there is a central aggregation service that we are running…
S77
Agentic AI in Focus Opportunities Risks and Governance — Caroline Louveaux outlined MasterCard’s four-pillar approach to agentic commerce guardrails. First, “know your agent” re…
S78
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S79
Responsible AI in India Leadership Ethics & Global Impact — And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. E…
S80
Open Forum #30 High Level Review of AI Governance Including the Discussion — Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving …
S81
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — A particularly encouraging theme throughout the discussion was the natural alignment of commercial incentives with susta…
S83
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaboration across sectors, robust governance, and strategic investments will be critical in achieving a sustainable a…
S84
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S85
WS #148 Making the Internet greener and more sustainable — Disclaimer:This is not an official record of the session. The DiploAI system automatically generates these resou…
S86
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — The discussion began with a professional, diplomatic tone as panelists introduced themselves and outlined the compact’s …
S87
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S88
Building the Next Wave of AI_ Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S89
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S90
Central Bank Tools and Independence: A Comprehensive Panel Discussion — The tone began as analytical and professional, with central bankers carefully explaining their institutional perspective…
S91
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — ## Major Discussion Points: The discussion maintained a professional, collaborative tone throughout, characterized by c…
S92
WS #187 Bridging Internet AI Governance From Theory to Practice — – **Risk-based approaches**: Multiple speakers supported prioritizing governance based on risk levels and application co…
S93
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S94
AI Meets Cybersecurity Trust Governance & Global Security — These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI security and gov…
S95
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S96
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S97
Discussion Report: AI-Native Business Transformation at Davos — The discussion maintains an optimistic and forward-looking tone throughout, with participants sharing insights as indust…
S98
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S99
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S100
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion maintained a predominantly optimistic and forward-looking tone throughout, despite acknowledging signific…
S101
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S102
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S103
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S104
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S105
World Economic Forum Annual Meeting Closing Remarks: Summary — These key comments transformed what could have been a standard ceremonial closing into a meaningful reflection on the ph…
S106
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S107
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — ## Key Commitments and Next Steps ## Opening Context and Audience Engagement A crucial dimension addressed energy cons…
S108
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Thomas Schneider delivered a keynote address at the AI Impact Summit in Delhi, announcing Switzerland’s role as host of …
S109
AI for social good: the new face of technosolutionism — Birhane concluded her presentation by acknowledging that being allowed to “take centre stage here and to speak about thi…
S110
Closing remarks — Secretary-General Martin offered insight into trust in AI systems, stating: “Trust isn’t a property of machines. It’s ho…
S111
UK names industry leaders to steer safe AI adoption in finance — The UK government hasappointed two senior industry figuresas AI Champions to support safe and effective adoption of AI a…
S112
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S113
Keynote-Sam Altman — First, that widespread access to AI is the only fair and safe path forward. He argued that democratizing AI capabilities…
S114
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — – Brian Armstrong- Brad Garlinghouse – Brian Armstrong- François Villaroy de Galhau Regulation and innovation must wor…
S115
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S116
Hard power of AI — Furthermore, the analysis addresses the proliferation of fake media, particularly through the use of deepfakes in crypto…
S117
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — AI is transforming various aspects of the media and publishing industry, including content creation, workflow improvemen…
S118
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S119
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Strong aggregate demand and tight business-education partnerships are essential for successful transitions
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Paul Hubbard
6 arguments · 171 words per minute · 1045 words · 365 seconds
Argument 1
Trust as foundation for innovation
EXPLANATION
Paul argues that trust is not an obstacle to AI innovation but rather the essential foundation that enables it. Without public trust, innovative AI solutions cannot be effectively deployed.
EVIDENCE
He states that AI should not be framed as a trade-off between trust and innovation; instead, trust provides the base that makes innovation possible, emphasizing that trust is the foundation for any AI advancement [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is highlighted as essential for banking and AI deployment, and as a prerequisite for public confidence and inclusion in AI systems [S1][S10][S18][S20].
MAJOR DISCUSSION POINT
Trust as foundation for innovation
AGREED WITH
Erik Ekudden, Hari Shetty, Divyesh Vithlani
Argument 2
People‑first, democratic participation builds confidence
EXPLANATION
Paul stresses that AI adoption must be grounded in a people‑first approach, engaging citizens where they are and respecting their familiarity with technology. Democratic participation helps build confidence and acceptance of AI systems.
EVIDENCE
He explains that governments need to meet citizens where they are, understand their comfort levels, and build AI solutions from that foundation rather than imposing new technologies, highlighting a people-first, participatory approach [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborative implementation and inclusive participation are identified as ways to build confidence in technology projects [S12][S20].
MAJOR DISCUSSION POINT
People‑first, democratic participation builds confidence
Argument 3
AI‑ready infrastructure distinguishes AI‑native nations
EXPLANATION
Paul notes that while data‑centers and compute capacity will be built everywhere, the true differentiator for AI‑native nations will be their ability to adapt, be competent, and stay curious. These capabilities enable rapid deployment of AI models and services.
EVIDENCE
He adds that beyond physical infrastructure, the key factors are capability, competence, and curiosity, which allow governments and economies to flexibly adopt new AI approaches and create new jobs, citing examples from his visit to India [300-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is presented as a tool to bridge infrastructure gaps in developing economies and as a driver of institutional resilience and curiosity-driven capability [S14][S15][S22][S23].
MAJOR DISCUSSION POINT
AI‑ready infrastructure distinguishes AI‑native nations
Argument 4
National AI strategy must be transparent, spread benefits, and protect citizens
EXPLANATION
Paul outlines that a responsible national AI strategy should clearly communicate its goals, ensure AI benefits reach all segments of society, and safeguard citizens from potential harms. Transparency and inclusive benefit distribution are essential for public trust.
EVIDENCE
He describes the need for a clear plan that communicates AI opportunities, spreads benefits to rural and marginalized groups, and keeps citizens safe through AI safety and harm mitigation conversations across government and business levels [163-171].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National AI strategies call for traceability, transparency, and citizen confidence, emphasizing inclusive benefit distribution [S17][S18][S20].
MAJOR DISCUSSION POINT
Transparent, inclusive national AI strategy
Argument 5
Governments shift from cautious to active posture, managing risk with guardrails
EXPLANATION
Paul observes that governments, traditionally cautious, are now adopting a more active stance toward AI, embracing risk while establishing guardrails to manage it. This shift enables faster AI adoption while maintaining safety.
EVIDENCE
He notes that the Australian government has moved from a cautious approach to a more active posture, embracing risk and implementing guardrails that allow risk to be understood and managed effectively [351-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voluntary, industry-driven standards and guardrails are recommended to balance risk without stifling innovation, while over-regulation is warned against [S21][S24][S20].
MAJOR DISCUSSION POINT
Government shift to active AI risk management
AGREED WITH
Erik Ekudden, Hari Shetty
DISAGREED WITH
Erik Ekudden
Argument 6
AI‑native nations will be defined by capability, competence, and curiosity
EXPLANATION
Paul reiterates that the defining traits of AI‑native nations will be their internal capabilities, technical competence, and a culture of curiosity that drives continuous learning and adaptation. These traits outweigh pure infrastructure investment.
EVIDENCE
He emphasizes that while data-centers and compute will be built, the decisive factors are capability, competence, and curiosity, which enable governments and economies to flexibly adopt AI and create new jobs, citing observations from his visit to India [300-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capability and curiosity are cited as key differentiators for AI-native economies, linked to broader capacity-building goals [S23][S22][S14].
MAJOR DISCUSSION POINT
Capability, competence, curiosity as AI‑native nation traits
Hari Shetty
6 arguments · 199 words per minute · 1619 words · 487 seconds
Argument 1
Consistent, hallucination‑free performance earns trust
EXPLANATION
Hari argues that trust in AI systems is earned only when they consistently deliver accurate results without hallucinations. Long‑term reliable performance builds both human and agentic trust.
EVIDENCE
He explains that trust must be earned over time, requiring AI to operate without hallucinations or fundamental flaws, and that only consistent, reliable performance can establish lasting trust [147-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Metrics such as “plus scores” and quality-tracking frameworks are proposed to monitor hallucinations and ensure reliable outputs [S28].
MAJOR DISCUSSION POINT
Reliability and hallucination‑free operation builds trust
AGREED WITH
Paul Hubbard, Erik Ekudden, Divyesh Vithlani
Argument 2
Problem‑first, continuous‑operation model drives proof over promise
EXPLANATION
Hari stresses that AI projects should start by defining the business problem rather than selecting a model, and solutions must operate continuously to demonstrate real value. This problem‑first, always‑on approach turns promises into proven outcomes.
EVIDENCE
He outlines a four-point approach: start with the problem, adapt solutions to enterprise complexity, ensure solutions work every day, and embed trust through consistent performance, thereby moving from pilots to proof [147-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A problem-first, always-on approach is advocated to move from pilots to production, with emphasis on collaboration and scaling use cases [S26][S27][S1].
MAJOR DISCUSSION POINT
Problem‑first, always‑on AI delivery
Argument 3
Enterprise AI must move beyond pilots to reliable, always‑on services
EXPLANATION
Hari contends that enterprises can no longer rely on pilot projects; AI must be deployed as a reliable, continuously operating service to generate real business value. Consistency and uptime are essential for enterprise trust.
EVIDENCE
He states that AI is no longer about pilots, emphasizing the need for solutions that work every hour, every day, and that only such reliable services can be taken to market [147-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from experimental pilots to continuously operating platforms is highlighted as essential for enterprise value [S27][S26].
MAJOR DISCUSSION POINT
Enterprise AI needs reliable, always‑on services
AGREED WITH
Divyesh Vithlani, Paul Hubbard
Argument 4
Treat AI as a core capability; productivity is an early indicator, not the sole metric
EXPLANATION
Hari suggests that AI should be viewed as a foundational capability rather than a project measured solely by ROI. Productivity gains are an early signal, but longer‑term outcomes such as cost reduction, quality improvement, and cycle‑time reduction are more meaningful.
EVIDENCE
He explains that while productivity is an early indicator, the ultimate benefits include lower costs, higher quality, and faster cycles, and that Wipro’s models help clients understand these end outcomes beyond simple productivity metrics [237-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is framed as a foundational capability that drives broader outcomes beyond immediate productivity gains, aligning with human-capability and capacity themes [S23][S22].
MAJOR DISCUSSION POINT
AI as core capability, productivity as early signal
Argument 5
“Plus scores” track failures, hallucinations, and quality of outcomes
EXPLANATION
Hari introduces “plus scores” as a metric to monitor AI performance, capturing failure rates, hallucinations, and alignment with acceptable quality thresholds. This helps ensure AI outputs meet organizational standards.
EVIDENCE
He describes plus scores as tracking the number of failure instances, assessing whether they fall within acceptable vectors, and using them to evaluate quality, hallucinations, and overall task success [247-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
New measurement approaches, including “plus scores,” are suggested to capture failure rates, hallucinations, and quality thresholds [S28].
MAJOR DISCUSSION POINT
Plus scores for AI quality monitoring
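Hari describes plus scores only at the level of intent: count failure instances, flag hallucinations, and check outputs against acceptable quality thresholds. The panel gave no formula, so the following Python sketch is one illustrative way such a score could be computed; the field names, the threshold, and the scoring rule are all hypothetical assumptions, not Wipro's actual metric.

```python
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    succeeded: bool     # did the AI complete the task?
    hallucinated: bool  # was a fabricated fact detected in the output?
    quality: float      # reviewer or judge quality rating in [0, 1]

def plus_score(outcomes, quality_threshold=0.8):
    """Illustrative 'plus score': share of tasks that succeeded,
    stayed hallucination-free, and met the quality threshold."""
    if not outcomes:
        return 0.0
    ok = sum(
        1 for o in outcomes
        if o.succeeded and not o.hallucinated and o.quality >= quality_threshold
    )
    return ok / len(outcomes)

outcomes = [
    TaskOutcome(True, False, 0.92),
    TaskOutcome(True, True, 0.95),   # a hallucination counts as a failure vector
    TaskOutcome(False, False, 0.40),
    TaskOutcome(True, False, 0.85),
]
print(plus_score(outcomes))  # 0.5
```

The point of such a metric is exactly what Hari argues: it goes beyond raw productivity by making failure modes (hallucinations, sub-threshold quality) first-class inputs to the score.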
Argument 6
Decision velocity will dramatically increase, reshaping organizational processes
EXPLANATION
Hari predicts that AI will accelerate decision‑making speed across organizations, eliminating slow, cumbersome processes. Faster decision velocity will become a competitive advantage.
EVIDENCE
He notes that AI will dramatically increase decision velocity, transforming slow organizational processes into rapid, efficient operations, and that this shift will be evident within the next few years [447-452].
MAJOR DISCUSSION POINT
AI‑driven acceleration of decision velocity
Divyesh Vithlani
7 arguments · 145 words per minute · 2320 words · 958 seconds
Argument 1
Platform‑first approach with layered ethical data and model governance
EXPLANATION
Divyesh explains that a platform‑first strategy, built with layers for data, models, knowledge, and context, embeds ethical AI, data governance, and fair use directly into the platform. This enables safe, scalable AI deployment across the enterprise.
EVIDENCE
He describes constructing a platform that integrates layers from data to models, embedding ethical AI and data-governance controls, allowing end-users to leverage AI as intuitively as opening an Excel file [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A platform-first strategy with built-in ethical layers and traceability is promoted to embed responsible AI governance [S1][S10][S21].
MAJOR DISCUSSION POINT
Platform‑first with ethical governance layers
AGREED WITH
Erik Ekudden, Paul Hubbard, Hari Shetty
DISAGREED WITH
Erik Ekudden
Argument 2
Execution plane vs. control plane enables dynamic agent oversight
EXPLANATION
Divyesh differentiates between an execution plane that runs AI agents and a control plane that monitors and governs them. This separation allows real‑time oversight, onboarding/offboarding, and conflict management between agents and humans.
EVIDENCE
He outlines the two-plane architecture (execution for activity, control for supervision), detailing how agents receive guardrails, are monitored, and can be managed much like human staff, including real-time conflict detection [206-213] and [221-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Separating execution and control planes allows real-time monitoring and governance of AI agents, echoing industry-standard guardrail frameworks [S21][S24].
MAJOR DISCUSSION POINT
Two‑plane architecture for dynamic agent oversight
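The separation Divyesh describes can be sketched minimally: agents act in an execution plane, while a distinct control plane applies guardrails, logs every activity, and handles onboarding and offboarding. This is a schematic illustration under assumed names; the classes, methods, and action names below are hypothetical and do not represent the bank's actual implementation.

```python
class ControlPlane:
    """Supervises agents: guardrails, activity monitoring, on/offboarding."""
    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)  # guardrails for all agents
        self.audit_log = []                          # monitoring trail
        self.active_agents = set()

    def onboard(self, agent_id):
        self.active_agents.add(agent_id)

    def offboard(self, agent_id):
        self.active_agents.discard(agent_id)

    def authorize(self, agent_id, action):
        ok = agent_id in self.active_agents and action in self.allowed_actions
        self.audit_log.append((agent_id, action, ok))  # every activity is recorded
        return ok

class ExecutionPlane:
    """Runs agent actions, but only those the control plane authorizes."""
    def __init__(self, control):
        self.control = control

    def run(self, agent_id, action):
        if not self.control.authorize(agent_id, action):
            return f"{action}: blocked"
        return f"{action}: executed"

control = ControlPlane(allowed_actions={"summarize_account", "draft_reply"})
control.onboard("agent-7")
plane = ExecutionPlane(control)
print(plane.run("agent-7", "summarize_account"))  # summarize_account: executed
print(plane.run("agent-7", "transfer_funds"))     # transfer_funds: blocked
control.offboard("agent-7")
print(plane.run("agent-7", "draft_reply"))        # draft_reply: blocked
```

The design choice mirrors the argument: because authorization and logging live outside the execution path, agents can be supervised, audited, and withdrawn in real time, much like human staff.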
Argument 3
Platform architecture allows safe, enterprise‑scale AI deployment
EXPLANATION
Divyesh argues that a platform‑centric architecture, with built‑in safeguards and layered governance, enables enterprises to deploy AI at scale while maintaining trust and compliance. The platform abstracts complexity for end‑users.
EVIDENCE
He reiterates that the platform-first approach, with ethical layers and governance, unleashes AI power safely for business users, allowing AI to be used as naturally as any other task [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Platform-centric designs with ethical layers are presented as a way to safely scale AI across enterprises [S1][S10].
MAJOR DISCUSSION POINT
Platform enables safe enterprise AI scale
Argument 4
Dynamic oversight through control planes ensures accountable agent actions
EXPLANATION
Divyesh emphasizes that the control plane provides continuous monitoring and accountability for AI agents, ensuring their actions align with organizational policies and can be audited. This dynamic oversight is essential for responsible AI.
EVIDENCE
He describes how the control plane monitors every agent activity, manages onboarding/offboarding, and resolves conflicts between agents and humans, thereby ensuring accountability and real-time oversight [206-213] and [221-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Control-plane monitoring provides continuous accountability for AI agents, aligning with recommended guardrail practices [S21][S24].
MAJOR DISCUSSION POINT
Control‑plane based dynamic oversight
Argument 5
Banking ROI measured via micro‑productivity, faster response, and legacy modernization
EXPLANATION
Divyesh outlines that AI delivers ROI in banking through micro‑productivity gains, accelerated response times, and the modernization of legacy systems. These improvements translate into cost savings and competitive advantage.
EVIDENCE
He cites examples where AI improves micro-productivity, enables faster reactions to change, and modernizes the legacy platforms that still dominate 90% of banks, thereby creating tangible value and faster experimentation [334-340] and [413-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Banking relies on trust and AI to improve productivity, accelerate response times, and modernize legacy systems, supporting economic development goals [S1][S14][S22].
MAJOR DISCUSSION POINT
Banking ROI through productivity and legacy modernization
Argument 6
Platform‑centric guardrails mitigate AI risks in enterprise deployments
EXPLANATION
Divyesh asserts that embedding guardrails within a platform‑centric design reduces AI‑related risks, ensuring compliance, security, and ethical operation across the enterprise. Proper tooling and governance are key to risk mitigation.
EVIDENCE
He explains that by maintaining platform-centric guardrails and leveraging existing data-center and cloud controls, the organization can meet and mitigate AI risks, ensuring benefits outweigh concerns [355-362].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding guardrails within a platform architecture is recommended to manage AI risk and ensure compliance [S21][S24].
MAJOR DISCUSSION POINT
Platform guardrails for AI risk mitigation
Argument 7
Banking will become seamless with AI avatars and instant cross‑border payments
EXPLANATION
Divyesh envisions a future where AI avatars and digital assets enable frictionless banking experiences, including near‑instant cross‑border transactions, transforming customer interactions.
EVIDENCE
He provides an example of combining AI with digital assets and stablecoins to reduce cross-border payment times from days to near-instant, illustrating how AI will reshape banking experiences [408-410] and further describes seamless, intuitive services enabled by AI avatars [437-445].
MAJOR DISCUSSION POINT
Seamless AI‑driven banking experiences
Erik Ekudden
9 arguments · 184 words per minute · 2305 words · 750 seconds
Argument 1
Secure, trusted network as backbone of AI trust
EXPLANATION
Erik highlights that the security and trustworthiness of telecom networks are fundamental to building overall AI trust. A secure network provides the guarantees needed for AI workloads.
EVIDENCE
He notes that networks are already secure and trusted, providing the guarantees required for AI inference and that trust and security are core principles for network evolution [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure, trusted telecom networks are identified as foundational for AI trust, with standards and inclusion-based trust building cited [S21][S20].
MAJOR DISCUSSION POINT
Network security as AI trust foundation
AGREED WITH
Paul Hubbard, Hari Shetty, Divyesh Vithlani
Argument 2
Networks evolving from passive carriers to active AI enablers
EXPLANATION
Erik describes the transition of telecom networks from merely transporting data to actively hosting AI inference workloads, becoming an intelligent fabric that supports distributed AI services.
EVIDENCE
He explains that the network is becoming the host for AI experiences, requiring scaling to handle inference workloads and marking a shift to an intelligent fabric [75-78] and earlier discussion of the network as a host for AI [58-60].
MAJOR DISCUSSION POINT
Network evolution to active AI fabric
DISAGREED WITH
Divyesh Vithlani
Argument 3
AI glasses demand low‑latency, reliable 5G/6G fabric
EXPLANATION
Erik points out that AI‑powered wearables like smart glasses require ultra‑low latency and high‑reliability connectivity, which only advanced 5G/6G networks can provide. This drives the need for a robust intelligent fabric.
EVIDENCE
He describes AI glasses that offload inference to the network, requiring reliable, low-latency connectivity, and stresses that the network must improve beyond current 5G to meet these demands [61-66] and [90-94].
MAJOR DISCUSSION POINT
AI wearables need high‑performance network fabric
Argument 4
Energy‑efficient hardware and software mitigate AI’s power use
EXPLANATION
Erik argues that to keep AI sustainable, both hardware and software must be designed for energy efficiency, including smaller models where possible and smarter hardware, reducing overall power consumption.
EVIDENCE
He outlines the need for energy-efficient hardware, software, and AI models, noting that moving inference to the edge and using smaller models can prevent a surge in energy use, and cites that networks consume about 1% of total power while enabling a 10-15% emissions reduction elsewhere [318-326].
MAJOR DISCUSSION POINT
Energy‑efficient AI hardware/software
Argument 5
Guardrails from telecom translate to AI agents, ensuring accountability
EXPLANATION
Erik suggests that the existing safety and security guardrails in telecom can be adapted to AI agents, providing a familiar accountability framework for AI services.
EVIDENCE
He states that telecom already has safety and security guardrails, and that these can be translated one-to-one into the agentic AI world to ensure accountability for AI agents [190-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Existing telecom safety and security guardrails can be adapted to AI agents to provide accountability frameworks [S21][S24].
MAJOR DISCUSSION POINT
Telecom guardrails applied to AI agents
AGREED WITH
Divyesh Vithlani, Paul Hubbard, Hari Shetty
Argument 6
Governance should be domain‑specific, avoiding premature over‑regulation
EXPLANATION
Erik warns against imposing blanket regulations on AI before innovation has matured, advocating for domain‑specific governance that mirrors existing telecom safeguards without stifling progress.
EVIDENCE
He cautions that regulating before innovation can hinder progress and recommends translating telecom guardrails to AI on a domain-by-domain basis, avoiding premature over-regulation [191-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domain-specific, voluntary standards are advocated to prevent stifling innovation, warning against blanket regulation [S21][S24].
MAJOR DISCUSSION POINT
Domain‑specific AI governance
Argument 7
Intelligent fabric unlocks new business models and growth opportunities
EXPLANATION
Erik claims that an AI‑enabled intelligent network creates novel business models, allowing operators to offer tailored, mission‑critical services and generate significant revenue and cost‑savings.
EVIDENCE
He describes how AI on the network enables new outcomes, drives business growth, and can deliver 10-50% efficiency gains, translating into billions of dollars of savings and new revenue streams [262-274], and further elaborates on modeling on top of the network to produce new outcomes [285-287].
MAJOR DISCUSSION POINT
Network AI drives new business models
Argument 8
AI‑native networks will provide real‑time, massive user experiences and support physical AI
EXPLANATION
Erik envisions AI‑native networks that can deliver real‑time, large‑scale user experiences and support emerging physical AI technologies such as robots and drones, requiring highly responsive and adaptable infrastructure.
EVIDENCE
He explains that AI-native networks must be responsive to fast changes, provide massive user experiences, and support physical AI like humanoids, drones, and other devices, emphasizing real-time adaptability [371-382].
MAJOR DISCUSSION POINT
AI‑native networks for real‑time massive experiences
Argument 9
Public sector may overestimate risk, potentially stalling innovation
EXPLANATION
Erik observes that governments sometimes over‑estimate AI risks, leading to excessive caution that can impede innovation, especially in public‑sector deployments.
EVIDENCE
He notes that the public sector may be overly cautious, over-estimating risk, which could hold back innovation, and contrasts this with the need for balanced risk management [346-348] and earlier remarks on premature regulation [191-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Over-cautious risk perception in the public sector is highlighted as a barrier to AI innovation, with calls for balanced risk management [S24][S20].
MAJOR DISCUSSION POINT
Over‑cautious risk perception in public sector
AGREED WITH
Paul Hubbard, Hari Shetty
DISAGREED WITH
Paul Hubbard
Mridu Bhandari
3 arguments · 133 words per minute · 1768 words · 795 seconds
Argument 1
Trust framed as one of the seven chakras for sustainable AI
EXPLANATION
Mridu positions trust as one of the seven foundational ‘chakras’—human capital, inclusion, trust, resilience, science, resources, and social good—that guide a sustainable AI future. Embedding trust at this pillar level ensures accountability and long‑term success.
EVIDENCE
She introduces the seven chakras of aligned global cooperation, explicitly listing trust among them as a concrete pillar for turning ambition into accountability [4] and frames the discussion around People, Planet, and Progress [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is positioned as a core pillar for sustainable AI, reinforced by inclusion-based trust building and traceability concepts [S20][S10][S1].
MAJOR DISCUSSION POINT
Trust as a chakra for sustainable AI
Argument 2
Accountability embedded in the seven‑pillar framework for global cooperation
EXPLANATION
Mridu emphasizes that accountability for AI outcomes is woven into the seven‑pillar (chakras) framework, ensuring that each pillar—such as trust and social good—carries clear responsibility across societies and enterprises.
EVIDENCE
She references the seven chakras (human capital, inclusion, trust, resilience, science, resources, social good) as the structure for global cooperation and accountability throughout the discussion [4] and reiterates the People, Planet, Progress vision in her closing remarks [456-457].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Accountability, traceability, and citizen confidence are integral to national AI strategies, aligning with the seven-pillar framework [S17][S18][S20].
MAJOR DISCUSSION POINT
Accountability within seven‑pillar AI framework
Argument 3
Vision of People, Planet, Progress guided by seven chakras shapes the AI future
EXPLANATION
Mridu concludes that aligning AI development with the three guiding principles—People, Planet, and Progress—and the seven chakras will redefine competitiveness, rebuild public trust, and future‑proof institutions.
EVIDENCE
In her closing, she ties together People, Planet, Progress with the seven pillars, stating that this alignment will redefine competitiveness, rebuild trust, and future-proof institutions for decades ahead [456-457].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The People-Planet-Progress narrative is echoed in AI strategies that emphasize human development, environmental sustainability, and economic progress [S22][S23][S20].
MAJOR DISCUSSION POINT
People, Planet, Progress as AI guiding principles
Agreements
Agreement Points
Trust is the essential foundation for AI innovation and deployment
Speakers: Paul Hubbard, Erik Ekudden, Hari Shetty, Divyesh Vithlani
Trust as foundation for innovation · Secure, trusted network as backbone of AI trust · Consistent, hallucination‑free performance earns trust · Platform‑first approach with layered ethical data and model governance
All speakers stress that trust, whether as a societal foundation, network security, reliable performance, or embedded platform governance, is a prerequisite for successful AI adoption [39-41][80-82][147-152][130-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is highlighted as a prerequisite for scaling AI systems in India’s responsible AI discourse [S45], European AI policy stresses “trustability” and traceability as core principles [S54][S56], and global discussions note the need for guardrails to avoid both over-trust and mistrust of AI [S42].
Robust governance frameworks and guardrails are needed for responsible AI at scale
Speakers: Divyesh Vithlani, Erik Ekudden, Paul Hubbard, Hari Shetty
Platform‑first approach with layered ethical data and model governance · Guardrails from telecom translate to AI agents, ensuring accountability · Governments shift from cautious to active posture, managing risk with guardrails · Risk is manageable with the right tool‑set
The panel agrees that AI must be deployed within structured governance (platform-centric layers, execution/control planes, telecom-derived guardrails, and active government risk management) to ensure accountability and safety [130-138][190-192][351-353][344-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Deshpande’s framework calls for process and governance guardrails that protect innovation while ensuring responsibility [S41]; UN-sponsored panels underline the necessity of responsible deployment frameworks for agentic AI [S43]; and EU approaches advocate balanced regulation that safeguards rights without stifling innovation [S56].
Public‑sector risk perception tends to be overly cautious, requiring balanced management
Speakers: Erik Ekudden, Paul Hubbard, Hari Shetty
Public sector may overestimate risk, potentially stalling innovation · Governments shift from cautious to active posture, managing risk with guardrails · Risk is manageable with the right tool‑set
Erik notes that governments can over-estimate AI risk, Paul describes a shift toward active risk management with guardrails, and Hari emphasizes that risk is manageable, together suggesting a need for calibrated oversight [346-348][351-353][344-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses define risk as likelihood × severity and note divergent risk perceptions across contexts, urging calibrated risk quantification [S57]; IGF-style reports highlight tension between optimistic and cautious public-sector views [S58]; practitioners observe governments may overestimate AI risk, potentially hindering adoption [S59]; European policy recommends intervention only when necessary, balancing caution with innovation [S56].
AI will dramatically increase decision‑making speed and organisational responsiveness
Speakers: Hari Shetty, Divyesh Vithlani, Paul Hubbard
Decision velocity will dramatically increase, reshaping organisational processes · Banking ROI measured via faster response and legacy modernisation · AI will help respond to change faster (implied)
Hari predicts a surge in decision velocity, Divyesh links AI to faster response times and legacy modernisation, and Paul’s remarks on rapid experimentation reinforce the view that AI accelerates organisational agility [447-452][334-340].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council discussions recognize AI’s capacity to enable ultra-fast decision-making in security contexts [S65]; diplomatic forums cite AI-driven predictive analytics to accelerate policy choices [S66]; broader summit narratives affirm AI’s transformative speed benefits for organisations [S58].
Enterprises must move from pilot projects to always‑on, reliable AI services
Speakers: Hari Shetty, Divyesh Vithlani, Paul Hubbard
Enterprise AI must move beyond pilots to reliable, always‑on services · Platform‑first approach enables scale and reliability · Trust as foundation enables continuous innovation
The speakers concur that AI should no longer be confined to pilots; instead, it must be delivered as continuous, production-grade services supported by trustworthy platforms [147-152][130-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders stress the transition from pilots to production-ready platforms as essential for scale, citing Wipro’s “proof over promise” approach [S46] and calls to move from fragmented data to interoperable systems [S45][S47].
Similar Viewpoints
All three stress a user‑centred, inclusive approach that meets people where they are and provides trustworthy infrastructure for AI adoption [44-46][130-138][49-53].
Speakers: Paul Hubbard, Divyesh Vithlani, Erik Ekudden
People‑first, democratic participation builds confidence · Platform‑first approach empowers the entire organisation · Secure, trusted network serves all users
Both propose a layered architecture that separates execution from oversight, allowing real‑time monitoring and accountability of AI agents [206-213][221-223][190-192].
Speakers: Divyesh Vithlani, Erik Ekudden
Execution plane vs. control plane enables dynamic agent oversight · Guardrails from telecom translate to AI agents, ensuring accountability
Unexpected Consensus
AI as a catalyst for financial inclusion and faster cross‑border payments
Speakers: Paul Hubbard, Divyesh Vithlani
National AI strategy must be transparent, spread benefits, and protect citizens · Banking will become seamless with AI avatars and instant cross‑border payments
While coming from different sectors, both agree that AI should be leveraged to deliver inclusive financial services, reducing payment times and reaching underserved populations [166-168][408-410].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s banking-sector AI policy balances experimentation with systemic-risk controls to promote inclusive finance [S48]; the AI Impact Summit 2026 highlighted AI’s role in expanding financial services and streamlining cross-border transactions [S55]; broader development discourse links AI to financial-inclusion objectives in the Global South [S44].
Overall Assessment

The panel shows strong consensus on trust as the cornerstone of AI, the necessity of robust governance and guardrails, and the transformative impact of AI on speed and continuous service delivery. There is moderate agreement on risk perception and a shared vision of inclusive, user‑centred AI ecosystems.

High consensus on foundational principles (trust, governance, always‑on services) with medium consensus on risk management and sector‑specific impacts, suggesting that coordinated policy and platform‑centric strategies are likely to gain broad support across government, industry, and academia.

Differences
Different Viewpoints
Extent of risk overestimation and appropriate governmental posture toward AI
Speakers: Erik Ekudden, Paul Hubbard
Public sector may overestimate risk, potentially stalling innovation · Governments shift from cautious to active posture, managing risk with guardrails
Erik argues that the public sector often overestimates AI risk, which can hold back innovation [346-348]. Paul counters that the Australian government has moved from a cautious stance to a more active posture, embracing risk while putting guardrails in place to manage it [351-353].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly work defines risk as likelihood × severity and notes divergent risk perceptions, suggesting governments may over-estimate AI hazards [S57]; IGF-derived analyses document disagreements on risk prioritisation between public and private actors [S58]; industry commentary confirms perception of governmental over-caution [S59].
Preferred locus of AI integration and governance – network‑centric vs platform‑centric
Speakers: Erik Ekudden, Divyesh Vithlani
Networks evolving from passive carriers to active AI enablers · Platform‑first approach with layered ethical data and model governance
Erik emphasizes that telecom networks should evolve into an intelligent fabric that actively hosts AI inference workloads, making the network the primary enabler of trustworthy AI [75-78]. Divyesh advocates a platform-first strategy that embeds ethical data and model governance layers, separating execution and control planes to oversee agents, positioning the platform as the central point for safe, scalable AI deployment [130-138].
Unexpected Differences
Public‑sector risk perception versus active governmental engagement
Speakers: Erik Ekudden, Paul Hubbard
Public sector may overestimate risk, potentially stalling innovation · Governments shift from cautious to active posture, managing risk with guardrails
It is surprising that a telecom executive (Erik) and a government economist (Paul) diverge on whether the public sector is still overly cautious. Erik sees persistent over-estimation of risk that could impede progress [346-348], while Paul highlights a recent shift toward a more proactive stance with concrete guardrails [351-353]. This contrast was not anticipated given their respective domains.
POLICY CONTEXT (KNOWLEDGE BASE)
Reports from IGF and related forums describe a split between cautious risk perception and calls for proactive government involvement in AI governance [S58]; practitioner commentary indicates governments risk stalling innovation by being overly risk-averse [S59]; policy guidance recommends balanced engagement rather than passive caution [S56].
Overall Assessment

The panel largely agrees on the centrality of trust, people‑first approaches, and the need for robust governance. The main points of contention revolve around how risk is perceived and managed by the public sector and whether AI should be primarily embedded in telecom networks or delivered via enterprise platforms. These disagreements are moderate in intensity and reflect differing professional lenses rather than fundamental opposition.

Moderate – the disagreements are focused on implementation pathways and risk framing, which could influence policy coordination and industry‑government collaboration but do not undermine the shared commitment to trustworthy, inclusive AI.

Partial Agreements
All speakers concur that trust is a prerequisite for AI deployment—Paul frames trust as the foundation for innovation [39-41]; Erik stresses that secure networks provide the backbone of AI trust [80-82]; Hari notes that reliable, hallucination‑free performance builds trust over time [147-152]; Divyesh’s platform‑first design embeds trust through ethical governance layers [130-138]. However, they differ on where that trust should be instantiated (network vs platform).
Speakers: Paul Hubbard, Erik Ekudden, Hari Shetty, Divyesh Vithlani
Trust as foundation for innovation · Secure, trusted network as backbone of AI trust · Consistent, hallucination‑free performance earns trust · Platform‑first approach with layered ethical data and model governance
Both emphasize a people‑centric approach: Paul calls for meeting citizens where they are and engaging them democratically [44-46], while Divyesh stresses empowering the entire organization through a platform that makes AI as intuitive as everyday tools, reflecting a user‑first mindset [130-138]. Their focus aligns on inclusivity, though one targets citizens broadly and the other internal enterprise users.
Speakers: Paul Hubbard, Divyesh Vithlani
People‑first, democratic participation builds confidence · Platform‑first approach with layered ethical data and model governance
Takeaways
Key takeaways
Trust is the foundation for AI innovation; it must be built through people‑first, democratic participation and reliable, secure infrastructure.
The network is evolving from a passive data carrier to an active, intelligent fabric that hosts AI inference (e.g., AI glasses) and must be secure, low‑latency, and energy‑efficient.
A platform‑first approach with layered ethical, data, and model governance enables scalable, enterprise‑wide AI while maintaining guardrails and accountability.
Proof over promise requires a problem‑first mindset, continuous‑operation models, and moving beyond pilot projects to always‑on services.
Measuring AI value should treat AI as a core capability; productivity is an early signal, complemented by “plus scores” that track failures, hallucinations, and quality.
Risk is manageable with proper toolsets and governance; governments may over‑estimate risk, but a balanced, cautious‑yet‑active posture is needed.
AI‑native nations will be distinguished by capability, competence, curiosity, and the ability to adapt institutions and workforce to AI‑driven change.
Sustainability can be achieved through energy‑efficient hardware, software, and models; distributed inference reduces overall energy impact.
Future AI ecosystems will feature autonomous networks, physical AI (robots, drones), and AI avatars, dramatically increasing decision velocity and reshaping work, finance, and daily life.
Resolutions and action items
Adopt a platform‑first architecture with distinct execution and control planes for AI and agent oversight (proposed by Divyesh Vithlani).
Implement layered ethical, data, and model governance within AI platforms to embed trust and compliance (Divyesh Vithlani).
Leverage existing telecom guardrails as a baseline for AI agent accountability and extend them to AI services (Erik Ekudden).
Use the AI CoLab model to foster cross‑sector collaboration among government, industry, academia, and NGOs for responsible AI deployment (Paul Hubbard).
Measure AI outcomes using productivity metrics and “plus scores” that capture failures, hallucinations, and quality of results (Hari Shetty).
Prioritize people‑first, participatory approaches when introducing AI services to build public confidence (Paul Hubbard).
Invest in energy‑efficient hardware, software, and smaller inference models to mitigate AI’s power consumption (Erik Ekudden).
Develop dynamic oversight mechanisms via control‑plane monitoring to continuously supervise agent actions (Divyesh Vithlani).
Unresolved issues
Specific standards or metrics for the proposed “plus scores” and how they will be operationalized across industries.
Detailed roadmap for scaling the AI‑enabled intelligent fabric (5G/6G) to support billions of edge devices and AI glasses.
Concrete mechanisms for ensuring inclusive trust across diverse demographic groups, especially in rural and marginalized communities.
How regulatory frameworks can evolve without stifling innovation; the exact balance between oversight and flexibility remains open.
Implementation details for dynamic, real‑time accountability across the full AI stack (network, cloud, edge, device).
Clear guidance on transitioning legacy banking systems to AI‑native platforms while managing CAPEX constraints.
Suggested compromises
Use existing telecom security and safety guardrails as a starting point for AI agent regulation rather than imposing entirely new regulations (Erik Ekudden).
Adopt a balanced risk posture: governments start cautiously but progressively shift to an active, risk‑managed approach as understanding improves (Paul Hubbard).
Combine people‑first participatory design with technical guardrails to build trust without slowing innovation (Paul Hubbard).
Treat AI as a capability rather than a pure ROI driver, allowing investment in foundational platforms while still delivering measurable productivity gains (Hari Shetty).
Thought Provoking Comments
AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.
Frames AI from a public‑policy/economic perspective rather than a purely technical race, reminding the audience that the ultimate metric is societal benefit.
Set the tone for the discussion on trust and responsibility, prompting subsequent speakers (e.g., Erik on network trust, Divyesh on platform governance) to ground their technical proposals in public value rather than hype.
Speaker: Paul Hubbard
We shouldn’t frame it as trust versus innovation; trust is the foundation that lets you make the innovation.
Challenges the common narrative that safety slows progress, proposing instead that trust enables faster, more sustainable innovation.
Shifted the conversation from a perceived trade‑off to a synergistic relationship, leading Erik to discuss how the network itself can embed trust and Hari to outline a “proof‑over‑promise” framework.
Speaker: Paul Hubbard
The network is becoming an intelligent fabric that hosts AI inference workloads – think AI glasses that off‑load processing to the edge.
Introduces a concrete evolution of infrastructure: from passive connectivity to an active, AI‑enabled platform, linking hardware, edge computing, and user experience.
Opened a new topic on the role of telecom in AI governance, inspired Divyesh to talk about platform layers and agents, and set up later sustainability discussions about energy‑efficient inference.
Speaker: Erik Ekudden
We take a platform‑first approach: build a layered AI platform (data, model, knowledge, context) with built‑in ethical and governance controls, so end‑users can use AI as naturally as opening Excel.
Provides a practical blueprint for scaling trustworthy AI in a regulated industry, emphasizing usability without sacrificing safeguards.
Guided the dialogue toward concrete implementation tactics, prompting Hari to articulate his “proof over promise” principles and Erik to discuss network‑level guardrails.
Speaker: Divyesh Vithlani
Proof over promise: start with the problem, not the model; ensure solutions work continuously; earn agentic trust through consistent performance.
Distills AI delivery into a set of actionable tenets, moving the conversation from abstract ideals to measurable outcomes.
Created a turning point that reframed the rest of the panel’s discussion around operational rigor, influencing Divyesh’s talk of performance appraisal for agents and Erik’s emphasis on reliability.
Speaker: Hari Shetty
When you introduce agents at scale, accountability follows a hierarchy of decision‑making – responsibility resides in the domain providing the service, and existing telecom guardrails can be translated one‑to‑one to the AI world.
Bridges the gap between traditional telecom regulation and emerging AI agent governance, offering a concrete governance model.
Prompted Divyesh to elaborate on dynamic oversight via execution and control planes, and reinforced the theme that existing infrastructure can be leveraged for AI accountability.
Speaker: Erik Ekudden
Agents get performance appraisals just like humans – we monitor token consumption, output quality, and even have an ‘Agent University’ for continual learning.
Novel analogy that human resource practices can be applied to autonomous AI agents, highlighting the need for ongoing governance and continuous improvement.
Deepened the conversation on operational oversight, leading Hari to mention “plus scores” for failure tracking and reinforcing the idea that trust is earned over time.
Speaker: Divyesh Vithlani
AI CoLab is a cross‑sector initiative that brings government, industry, academia, and NGOs together to solve real problems, not just to tinker with technology.
Emphasizes collaborative governance as essential for responsible AI, moving beyond siloed efforts.
Reinforced earlier points about public‑private partnership, gave a concrete example of how trust can be institutionalized, and set the stage for the forward‑looking “AI‑native nations” discussion.
Speaker: Paul Hubbard
What will separate AI‑native nations from AI‑dependent ones are capability, competence, and curiosity – not just compute or data‑centers.
Shifts focus from infrastructure to human capital and cultural factors, suggesting that long‑term competitiveness hinges on mindset and adaptability.
Prompted the panel to reflect on talent pipelines and education, influencing Erik’s sustainability remarks and Hari’s three‑step plan for CEOs.
Speaker: Paul Hubbard
Energy‑efficient hardware, software, and models will keep AI’s carbon footprint in check; distributed inference actually reduces emissions in other sectors by up to 15 %.
Counters the narrative that AI is inherently unsustainable, offering a balanced view that aligns AI growth with climate goals.
Steered the discussion toward sustainability, leading Paul and Divyesh to mention responsible deployment and risk management, and tying back to the opening theme of People, Planet, Progress.
Speaker: Erik Ekudden
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from abstract aspirations to concrete, actionable frameworks. Paul Hubbard’s framing of AI as a public‑value endeavour and his insistence that trust underpins innovation set a foundational narrative. Erik Ekudden’s vision of the network as an “intelligent fabric” and his sustainability insights expanded the technical scope, while Divyesh Vithlani’s platform‑first strategy and Hari Shetty’s “proof over promise” principles supplied practical roadmaps for trustworthy deployment. The interplay of these comments—each prompting deeper elaboration from other panelists—created a dynamic flow that oscillated between policy, infrastructure, governance, and future competitiveness, ultimately delivering a cohesive vision of how People, Planet, and Progress can be aligned through the seven “chakras” of AI cooperation.

Follow-up Questions
What specific measurement frameworks, reporting mechanisms, and independent oversight structures should governments adopt to ensure accountable and responsible AI deployment?
The discussion highlighted the need for clear accountability at the national level, but concrete frameworks were not detailed, indicating a gap that requires further definition and research.
Speaker: Mridu Bhandari, Paul Hubbard
What standardized metrics can be used to evaluate AI ROI beyond productivity, such as trust scores, decision velocity, and risk mitigation?
While ROI was discussed, participants noted the lack of agreed‑upon metrics for trust, speed of decision‑making, and risk, suggesting a need for systematic measurement approaches.
Speaker: Mridu Bhandari, Hari Shetty
How can dynamic oversight of AI agents be operationalized in highly regulated industries, including mechanisms for performance appraisal, accountability, and termination of misbehaving agents?
The panel raised the concept of “agent university” and performance management for agents but did not outline concrete governance processes, indicating a research need.
Speaker: Mridu Bhandari, Divyesh Vithlani
What best practices and technological approaches can achieve energy‑efficient AI hardware, software, and model design to reconcile large‑scale AI expansion with sustainability goals?
The conversation acknowledged AI’s high energy demand and the importance of efficient hardware/software, yet specific strategies remain unexplored.
Speaker: Mridu Bhandari, Erik Ekudden
How can the AI CoLab model of cross‑sector collaboration be scaled, replicated, and evaluated for effectiveness in fostering responsible AI innovation globally?
The AI CoLab was presented as a promising partnership framework, but details on scaling, governance, and impact measurement were not provided.
Speaker: Mridu Bhandari, Paul Hubbard
What concrete use‑cases, performance metrics, and implementation pathways exist for AI‑driven improvements in cross‑border payments and open finance within the Indian context?
The panel suggested AI could accelerate payments, yet specific pilots, success criteria, and regulatory considerations were left open.
Speaker: Mridu Bhandari, Divyesh Vithlani
Beyond infrastructure, what policies and programs can nations adopt to develop the capability, competence, and curiosity needed to become AI‑native economies?
Capability, competence, and curiosity were identified as differentiators for AI‑native nations, but actionable national strategies were not detailed.
Speaker: Mridu Bhandari, Paul Hubbard
What architectural design principles and standards are required for AI‑native networks that can support massive, low‑latency AI workloads such as AI glasses, robotics, and edge inference at scale?
The shift to an intelligent fabric was discussed, but concrete network design guidelines and scalability benchmarks remain undefined.
Speaker: Mridu Bhandari, Erik Ekudden
How can “plus scores” be standardized across industries to monitor AI failures, hallucinations, and overall quality of AI outputs?
The concept of plus scores was introduced as a quality metric, yet a universal framework for calculation and benchmarking is lacking.
Speaker: Hari Shetty
What real‑time governance mechanisms are needed to detect and resolve conflicts between AI agents and human operators?
The panel mentioned conflict detection between agents and humans but did not specify detection algorithms, escalation protocols, or governance structures.
Speaker: Divyesh Vithlani
What practical tools and assessment criteria can boards use to measure AI trust readiness and risk tolerance within their organizations?
Leaders expressed uncertainty about how to gauge AI readiness at the board level, indicating a need for ready‑to‑use assessment frameworks.
Speaker: Mridu Bhandari
What are the long‑term impacts of AI adoption on job roles and skill pipelines in the banking and financial services sector, and how should workforce planning adapt?
While AI’s transformative potential was highlighted, detailed analysis of workforce displacement, reskilling needs, and talent pipelines was not provided.
Speaker: Divyesh Vithlani
How can organizations quantitatively measure improvements in decision velocity attributable to AI, and what benchmarks should be used?
Decision velocity was identified as a key benefit, but specific measurement methods and industry benchmarks were not discussed.
Speaker: Hari Shetty
What remediation and accountability processes should be established when AI agents produce hallucinations or erroneous outputs, including possible “firing” of agents?
The notion of terminating agents for poor performance was raised, yet concrete policies for remediation and accountability are missing.
Speaker: Divyesh Vithlani
What migration pathways and integration strategies enable AI to be incorporated into legacy banking systems efficiently without disrupting operations?
The challenge of modernizing 90 % of banks’ legacy platforms with AI was noted, but detailed migration frameworks were not outlined.
Speaker: Divyesh Vithlani
How can policymakers balance the need for regulation with the risk of stifling AI innovation, especially in the telecom and network domain?
The panel warned against premature regulation, but did not propose a balanced regulatory approach or criteria for timing.
Speaker: Erik Ekudden

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.