Shaping AI’s Story Trust Responsibility & Real-World Outcomes
20 Feb 2026 18:00h - 19:00h
Shaping AI’s Story Trust Responsibility & Real-World Outcomes
Summary
The panel opened by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress and introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources and social good – as concrete pillars for global cooperation [1-4]. The central question posed was how to achieve trust before skill, positioning outcomes and responsibility as a competitive advantage [6].
Paul Hubbard argued that AI should be viewed through an economics lens, focusing on public value rather than mere technological adoption, and that trust is the foundation that enables innovation [26-27][39-46]. He emphasized a people-first, democratic participatory approach that meets citizens where they are rather than imposing new technology [44-46].
Erik Ekudden described how telecom networks are evolving into an “intelligent fabric” that will host AI inference for devices such as AI glasses, requiring the network to be secure and trusted [49-58][75-82]. He noted that scaling this fabric is essential for future industrial AI applications in agriculture, healthcare and smart manufacturing [54-58][60-62].
Divyesh Vithlani explained First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI governance, layered data-model-knowledge architecture, and dynamic oversight through separate execution and control planes to manage agents and their performance [111-133][200-226]. He added that agents are treated like humans with guardrails, performance monitoring and “agent university” to ensure accountability and mitigate hallucinations [225-226].
Hari Shetty championed “proof over promise” and a problem-first mindset, insisting that AI solutions must run continuously, earn trust through consistent performance, and be measured beyond simple productivity using “plus scores” that track failures and quality [152-157][236-247]. On accountability, Paul stressed a clear, inclusive plan that spreads AI benefits across communities while safeguarding citizens, whereas Erik highlighted a hierarchy of agent decision-making that ties responsibility to the domain providing the service [163-170][177-186]. Both agreed that perceived AI risk is manageable; Hari called the hype overstated, while Erik warned that excessive caution in the public sector could hinder progress [344-348].
Looking ahead, Paul added that capability, competence and curiosity will differentiate AI-native nations, and Erik argued that energy-efficient hardware, software and inference distribution can keep AI expansion sustainable, with networks accounting for only a small share of total power consumption [300-306][311-326]. Finally, Paul described the AI CoLab as a cross-sector initiative that brings government, industry and academia together to solve problems collaboratively, and Bhandari concluded that aligning the seven pillars will redefine competitiveness, rebuild public trust and future-proof institutions [385-399][456-458].
Keypoints
Major discussion points
– Trust as the foundation for AI innovation – The panel repeatedly stressed that trust is not an obstacle to innovation but the very base that enables it. Mridu asked how to build confidence in AI without slowing progress [33-37], and Paul replied that “trust lets you make the innovation” and that a people-first, participatory approach is essential [39-46].
– The network as an “intelligent fabric” that must evolve from passive conduit to active, trusted AI enabler – Erik described how 5G/6G networks are becoming the host for distributed inference (e.g., AI glasses) and must be secure, scalable, and edge-enabled [49-60][75-82]. He later linked this infrastructure to business value, noting that AI-driven network services can generate large efficiency gains and new revenue streams [262-270][284-287].
– Platform-first governance and dynamic oversight for enterprise AI – Divyesh explained that a layered, platform-centric architecture (execution plane + control plane) with built-in guardrails, deterministic “atomic” agents, and continuous performance monitoring is how banks can maintain accountability while scaling AI [130-138][200-219][225-226].
– Moving from “proof-of-concept” pilots to proven, production-grade AI – Hari outlined four enterprise principles: start with the problem, adapt to legacy-heavy environments, ensure continuous, reliable operation, and earn long-term trust by avoiding hallucinations [147-154][155-162]. This shift is presented as the key to turning AI hype into measurable outcomes.
– Future-oriented considerations: AI-native nations, sustainability, and ROI re-framing – Paul highlighted that beyond infrastructure, a nation’s capability, competence, and curiosity will separate AI-native from AI-dependent economies [300-306]. Erik warned that AI’s energy intensity can be mitigated through efficient hardware, software, and inference-centric deployment, turning network power use into a net-positive for emissions [311-326]. Hari and the panel also argued that ROI should be viewed as a capability (EI) rather than a simple cost-benefit metric [234-247].
Overall purpose / goal of the discussion
The session was convened to explore how global stakeholders can “shape a sustainable AI future” by aligning the seven “chakras” of human capital, inclusion, trust, resilience, science, resources, and social good [1-4]. Throughout the dialogue the panel sought concrete ways to achieve trust before skill, embed AI responsibly in public policy and enterprise operations, and translate high-level ambition into accountable, scalable actions.
Overall tone and its evolution
– The conversation opened with a formal, visionary tone, setting out broad principles and introducing the panel [1-14].
– It then shifted to a pragmatic, solution-focused tone, with detailed technical explanations about networks, platform governance, and operational safeguards [39-82][130-138][147-162].
– Mid-discussion the tone became optimistic and forward-looking, emphasizing future capabilities, sustainability, and the transformative impact of AI-native societies [300-326][447-452].
– The closing remarks returned to a hopeful, unifying tone, reiterating the “People, Planet, Progress” sutras and the belief that aligned global cooperation will future-proof institutions [456-458].
Overall, the dialogue moved from setting the agenda, through concrete technical and governance recommendations, to an inspiring vision of AI’s role in the next decade.
Speakers
– Mridu Bhandari
– Area of expertise: AI policy, responsible AI, multi-stakeholder governance
– Role / Title: Moderator, Network18 [S1]
– Hari Shetty
– Area of expertise: Strategy, technology implementation, AI consulting
– Role / Title: Strategist and Technology Officer, Wipro [S4]
– Divyesh Vithlani
– Area of expertise: Banking technology, digital transformation, AI platform governance
– Role / Title: Group Chief Technology and Transformation Officer, First Abu Dhabi Bank [S5]
– Paul Hubbard
– Area of expertise: Public-policy economics, AI governance in government
– Role / Title: First Assistant Secretary for AI Delivery and Enablement, Department of Finance, Australian Government; also known as the “AI masked economist” (self-described) [S7]
– Erik Ekudden
– Area of expertise: Telecommunications networks, AI-enabled connectivity, 5G/6G infrastructure
– Role / Title: Chief Technology Officer, Ericsson [S8]
Additional speakers:
– Dinesh
– Area of expertise: (not specified)
– Role / Title: (mentioned in a question prompt; no title provided)
Opening and framing
Mridu Bhandari opened the session by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress, and introduced seven concrete “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation [1-4]. She then framed the core challenge of the AI-first decade as achieving trust before skill, arguing that outcomes and responsibility must become a competitive advantage rather than a cosmetic concern [6-8].
Paul Hubbard’s economics-first view
Paul Hubbard, first assistant secretary for AI delivery and enablement at the Australian Department of Finance, responded from an economics perspective, insisting that AI should be evaluated on the public value it creates rather than on mere technology adoption [26-27]. He rejected any trade-off between trust and innovation, stating that “trust lets you make the innovation” and emphasizing a people-first, democratic-participatory approach – meeting citizens where they are and building on existing familiarity with AI – as essential for public confidence [39-46]. He also recounted how he earned the nickname “AI-masked economist” during the COVID-19 pandemic, when he launched a podcast to demystify economics and AI jargon [??].
Erik Ekudden on the intelligent fabric
Erik Ekudden described the evolution of telecom networks from passive data carriers to an “intelligent fabric” that will host AI inference workloads such as those from AI glasses, which offload processing to the edge [49-58]. He highlighted that 5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, healthcare and smart manufacturing, and that the network already provides the guarantees needed for billions of devices [60-62][75-82]. The transition to an active, AI-enabled fabric is presented as a prerequisite for future business value and new revenue streams [citation needed].
Divyesh Vithlani’s platform-first, agent-centric governance
Divyesh Vithlani outlined First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI, data and model governance into a layered architecture (data, model, knowledge, context) and separating execution from control planes to enable dynamic oversight of autonomous agents [130-138]. Agents are treated like human staff – with guardrails, performance monitoring, and an “agent university” that tracks token consumption, output quality and hallucinations – ensuring accountability and mitigating risks [200-219][225-226]. He also noted that AI is a general-purpose technology, reinforcing the need for a platform-first approach [??].
Hari Shetty on proof-over-promise and “plus scores”
Hari Shetty contrasted “proof over promise” with the prevailing pilot-centric mindset, urging a problem-first methodology, adaptation to legacy-heavy environments, continuous-operation models and the earning of long-term trust through hallucination-free performance [147-152][155-162]. He introduced the term “product license” to describe the outcome-driven approach that solutions must meet before being marketed [??]. Shetty also defined “plus scores” as a metric that records model failures, hallucinations and any deviation from organisational quality thresholds, providing a concrete measure for ongoing trust [??]. Finally, he framed AI as a core EI capability rather than a simple ROI calculator, arguing that productivity is only an early indicator [236-247].
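Shetty’s “plus scores” are not specified in detail in the session; the following is a hedged sketch of one way such a metric could track failures and hallucinations against an organisational quality threshold. The formula and field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class PlusScore:
    """A running quality measure that counts failures, hallucinations and
    threshold breaches rather than raw productivity (illustrative only)."""
    quality_threshold: float = 0.9  # organisational bar an output must meet
    total: int = 0
    failures: int = 0
    hallucinations: int = 0

    def record(self, succeeded: bool, hallucinated: bool, quality: float) -> None:
        self.total += 1
        if not succeeded or quality < self.quality_threshold:
            self.failures += 1
        if hallucinated:
            self.hallucinations += 1

    @property
    def score(self) -> float:
        """Share of runs that were both successful and hallucination-free."""
        if self.total == 0:
            return 1.0
        bad = self.failures + self.hallucinations
        return max(0.0, 1.0 - bad / self.total)

# Illustrative use over three runs: one clean, one hallucinated, one failed.
s = PlusScore()
s.record(succeeded=True, hallucinated=False, quality=0.95)
s.record(succeeded=True, hallucinated=True, quality=0.95)
s.record(succeeded=False, hallucinated=False, quality=0.50)
```

Whatever the real definition, the design intent Shetty describes is the same: the metric penalises inconsistency over time, so a solution only scores well if it works every day, not just in a demo.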
Accountability and governance
Paul stressed the need for a clear, inclusive national plan that spreads AI benefits to rural and marginalised communities while safeguarding citizens, framing responsible AI as a whole-of-society leadership task [163-170]. Erik added that responsibility follows the hierarchy of decision-making: each domain that provides a service – network, cloud, application or device – must retain accountability, and existing telecom guardrails can be translated one-to-one into the AI world [177-186]. Divyesh reinforced this by highlighting the platform’s execution/control separation as a mechanism for public-sector accountability [??].
Risk perception debate
A moderate disagreement emerged over whether the public sector still over-estimates AI risk (Erik) versus the Australian government’s recent shift to a more proactive stance (Paul) [346-348][351-353]. A further divergence concerned the locus of AI integration: Erik advocated a network-centric “intelligent fabric”, while Divyesh promoted a platform-centric governance model [75-78][130-138].
Sustainability and energy efficiency
Erik warned that AI’s energy intensity can be mitigated through energy-efficient hardware, software and inference-centric deployment, noting that network power consumption is a small fraction of total electricity use and that digital technologies can reduce emissions in other sectors by up to 15 % [citation needed].
Future outlook and cross-sector collaboration
Paul described the AI CoLab as a physical hub where government, industry, academia and NGOs co-create responsible AI solutions to real-world problems [??]. Erik envisioned AI-native networks as a “creative network” that can dynamically compose services for wearables, robotics and autonomous agents [??]. Hari projected a dramatic increase in decision-velocity, arguing that current organisational processes are too slow and that AI will enable near-real-time decision-making [??]. Divyesh painted a 2030 banking scenario where interactions are mediated by AI avatars, with instant cross-border payments and frictionless product discovery [??]. He also outlined a three-step plan for CEOs: (1) define a clear AI vision, (2) re-think operating models rather than merely automating tasks, and (3) engage Wipro for implementation [??].
Actionable take-aways
From the discussion the panel distilled several actionable recommendations:
(i) Build trust through people-first, participatory design and secure, low-latency networks [39-41][75-82];
(ii) Adopt a platform-first architecture with execution and control planes, layered ethical governance and continuous agent monitoring [130-138][200-219];
(iii) Move from pilots to always-on, problem-driven AI solutions that earn trust via consistent performance [147-152][130-138];
(iv) Measure AI impact not only by productivity but also by “plus scores”, decision-velocity and risk mitigation [236-247];
(v) Treat AI as a core EI capability rather than a pure cost-benefit exercise [236-247];
(vi) Manage risk with calibrated guardrails, avoiding premature over-regulation while ensuring public-sector accountability [344-345][351-353];
(vii) Foster cross-sector collaboration through initiatives such as the AI CoLab [??];
(viii) Pursue energy-efficient AI hardware and inference distribution to align AI expansion with sustainability goals [citation needed].
Closing
Mridu Bhandari reiterated that aligning the seven chakras of human capital, inclusion, trust, resilience, science, resources and social good will allow AI to move beyond optimisation of businesses to redefining competitiveness, rebuilding public trust and future-proofing institutions for decades to come [456-458].
for shaping a sustainable AI future that we are calling People, Planet and Progress. And to translate these sutras into action, we are looking at what we call the seven chakras of aligned global cooperation. So these are the concrete pillars that will really turn ambition into accountability. We have human capital, inclusion, trust, resilience, science, resources and social good as the seven chakras that we are going to be talking about. Today we have with us a very eminent panel trying to answer the defining question of this AI-first decade that we are in: how can we achieve trust before skill, outcomes over optics, and responsibility as a competitive advantage? I’m Mridu Bhandari from Network18 and I’m very delighted to be joined by a panel of very distinguished guests here tonight.
Starting from my left, Paul Hubbard, first assistant secretary for AI delivery and enablement at the Department of Finance in the Australian government. Next to him, Divyesh Vithlani, Group Chief Technology and Transformation Officer, First Abu Dhabi Bank. Erik Ekudden, the Chief Technology Officer of Ericsson. And Hari Shetty, Strategist and Technology Officer at Wipro. Welcome, gentlemen. Thank you so much for joining us here today. You know, perhaps let’s set the context with the foundations of trust and skill. And Paul, if I may start with you first, you know, I was going through your LinkedIn profile and you call yourself the AI masked economist.
A very interesting moniker there. Why don’t you first tell us what that really means? And then we’ll jump into the rest of the stuff.
Thanks for having me, and it’s great to be here in India. I think we all bring a lens to AI. The lens that I bring is economics. I’m a public policy economist, which for me means AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.
And why do you call yourself the masked economist?
The masked economist? That’s another story. It started during COVID, remember, when we were all wearing masks. And at the time, I started a podcast, which was all about explaining economics and unpacking the jargon. And I’ve kept that, because I think explaining AI, unpacking the jargon, seeing how it relates to everyday life is really, really important.
Right. Now, when we talk about AI for social good, public permission is really, really important. Public trust is very important. Now, how do we really build that in society? How do we build confidence in AI without really slowing down innovation? How are you doing that in Australia? Give us some examples of how you’ve been able to do that, especially because citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.
Yeah, absolutely. I think it’s really important that we don’t frame it as trust versus innovation. It’s actually a foundation of trust that lets you make the innovation. It’s starting from the proposition of what’s the problem we’re trying to solve, or what are we trying to deliver for citizens? If you’re a government, what are you trying to deliver for your customers? Meet them where they’re at. Now, different countries, different populations have different comfort already, different familiarity with AI. You’ve got to know where people are up to, what they want, and build from there, rather than just say, here’s a brand new thing that we’re going to impose on you. So I think really that framing, that democratic, participatory approach, that people-first approach is key.
Right. Erik, coming to you, intelligence is often discussed at the application layer, but you’ve mentioned that intelligence must be embedded into the networks themselves. Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?
Yeah, so first of all, Ericsson builds networks, advanced connectivity, so 5G and 6G, and increasingly that’s becoming this fabric that we all depend on. But let’s start by thinking about what people are using today. Gen AI is already on hundreds of millions of smartphones, actually billions, already running AI applications across the mobile infrastructure. So it’s already secure and trusted. The network already provides the guarantees that you need. But I think, especially here in India, we’re talking about industrial AI applications: agriculture, where there’s going to be a lot of AI in the fields, hospitals, education, smart manufacturing. So there’s going to be a lot more dissemination of AI, from where we’re focused today in training, to distributed AI or inference generation.
That’s going to happen much further out in the network. So the network is actually becoming the host for all those great AI experiences. We need to scale the networks to handle that. I don’t think I’m the only one. Maybe not everyone carries two pairs of glasses here, but AI glasses, they are already available in millions. Good AI glasses that give you navigation support, that give you real-time language translation, maybe a prompt if you are on a stage making a keynote. I mean, these kinds of things, they cannot be done on the device, on the wearables. You need to offload the AI, the inference, from the glasses to the edge. That’s why we talk about this as a transition to an intelligent fabric. The network is already secure, trusted. It’s going to be a carrier of all these inference workloads. So we’re just starting that journey. But I think it really comes back to basic principles. Networks need to be trusted. They need to be secure. They’re already moving from consumers into enterprise and government services, mission critical, a big example here in India.
So what have the AI glasses been doing for you this week?
I didn’t read every question because everything is perfect in India when it comes to finding new ways. On a serious note, I actually use them privately and at work. But I start to see people getting really good value, because it is an AI assistant. And think of it, especially someone like me, wearing glasses. Once I’ve switched for good to these glasses, why would I go back? Even when I’m indoors, even when I’m at home, even when I’m training or in the elevator, I want it to work. And that, of course, means that the network, this intelligent fabric, needs to be so much better than it is today. Of course, there are great 5G networks here in India, but in the future, we will need even better ones.
And I think this is a change, in the sense that we will not get the full value of AI, we will not leverage AI fully, until we connect it to that better network for AI. And that’s really what I’m focusing on. But you want to try it on? It’s a good one. No, it’s a great one. It’s a little bit fantastic. That was a little bit of a Gallup poll, yeah: who is using AI or AR wearables, glasses? Earbuds? Cameras? A few? Okay, well, two, two. Probably a representative crowd here. I think we are very early in this journey. It’s going to be a fantastic journey, I believe, for both consumers and anyone of us working in companies.
Absolutely, absolutely. Well, I’m going to come back to that. Bringing Divyesh in: you know, in banking, trust is not philosophical, it is existential. So how do you really embed AI into core decision-making while ensuring you don’t dilute any risk discipline? What governance models have you put in place that actually work for you? You know, any best practices that you can share with us here today?
Sure. Well, first of all, it’s great to be here. And I’m already, you know, benefiting from the wisdom of my panelists, because my kids will tell you that I’ve been in denial about needing glasses. The eyesight is perfect, but the enlarging, the zooming really helps. But in reality, now I’ve got a different story for them: that I’ve been waiting for AI glasses before I really don a pair of specs. But coming back to your question, I’d kind of pick up on what Paul said. It’s not either-or. It’s not about you have trust or you have productive AI. What we believe, like any regulated institution, is that there is no compromise on risks and controls.
Our business in banking relies 100% on trust. So that is not a value that we can compromise on at any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction. And we have conviction right at the very top of the organization that AI is a force for good. We’ve heard a lot this week about AI being a general-purpose technology. I really love what Erik said about AI in the network, and I’ll sort of come to that in a second, because a large part of the answer is establishing a platform. But if we take a step back, a conventional organization is defined by its people, its processes, and its technology.
And there are all sorts of safeguards, guardrails, controls that have been built in. In the AI world, I think it’s going to be about agents, models, and data. And I think we’re going to have to have the same guardrails, and perhaps even stronger controls, because it will need AI to oversee and govern AI to be really effective. So the approach we’ve taken is on the basis of the conviction that we have that AI is a force for good, it is a game changer, and it is truly going to transform everything about how we live, work, play, and bank. We want to basically make sure that we empower the entire organization to leverage and scale AI in a safe, secure, efficient, and compliant manner.
Now, the only way, in my opinion, to do that is to take a platform-first approach. Just like Erik said about the network needing to be safe and secure, our AI platform and our agentic platform need to be safe and secure. So we have taken the approach of building a platform with all the different layers, from data, model, knowledge, and context to the use cases that sit on top of that, by building ethical AI, data governance, model-level governance, and the fair and appropriate use of AI into the platform. And by taking that approach, we are able to unleash the power of the technology in the hands of the end users. So just like when you open up Microsoft and start a new Excel, you’re not thinking about whether this is safe, or what the underlying architecture is.
You’re doing it fairly intuitively. And we’re going to be able to do the same thing with AI, so that our folks, our business colleagues, our engineers can use AI as naturally and seamlessly as they do any other task. So taking that platform-first approach is what really is driving our strategy to ensure that we drive AI at scale, but with all the right trust and safeguards.
Right. All right. Bringing in Hari as well, you know, we’ve talked a little bit about public permission. We’ve talked about infrastructure. We’ve talked about governance, security. There’s a final leap, which is from promise to proof. Now, enterprises are, of course, often caught between the AI hype and hesitation. You speak a lot about proof over promise. Elaborate that for us. And what really separates scalable AI from the perpetual pilots that we keep seeing a lot of enterprises deploying?
First and foremost, very happy to be here with this panel. And putting on the Wipro lens, what do we do? We take Erik’s network, layer in the intelligence on top of it, and provide solutions to Divyesh. That’s where we fit into this entire graph in terms of what we do. Now, coming back to proof over promise, you absolutely brought up the most important topic that’s in discussion across the summit here as well. AI is no longer about pilots. It’s about being able to get value out of AI. And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective.
Number one, don’t start with a model. Don’t talk about model X or model Y and then start with model-first thinking. Start with problem-first thinking. So you pick a problem, figure out what’s the right approach to solving the problem, and then work your way backwards to look at, you know, what models can actually help you solve the problem. So that’s the first approach.
The second part that we take care of is that the enterprise story is very different from the consumer story. Enterprises are necessarily messy. You’ve got technology that’s 20 years old, 30 years old. You’ve got different personas, you’ve got different security needs. Data is, you know, in fragments across the organization. So the enterprise story is a completely different story from a consumer-grade story in terms of how things have to come together from an AI perspective. So in that context, our ability to prove a solution in the enterprise world is extremely important for us. And when we show it works in an enterprise, that’s when other enterprises build trust, that’s when it’s ready for diffusion. And by the way, we act as client zero for our solutions. So if we don’t get it to work in our own enterprise, there’s no point talking to any of the clients about implementing the solution.
The third principle here is that whatever solution we build, it’s not about making it work once. It should work every day, every hour, and every minute. And solutions that are capable of, you know, following that principle are the ones that we actually take to the market. That’s another principle that’s extremely important for us.
And last, going back to the trust that we all talked about: if you look at human trust, human trust is earned. Even agentic trust is earned. You need something that can work for a long period of time without hallucination, without fundamental flaws in the model, so that there’s trust built into it. So only when things work consistently over a longer period of time do you build trust. And these are the four principles that we use to actually talk about, you know, proof over promise, as what we call the product license.
Right. All right. Well, we’re going to shift gears a little bit and also talk about accountability because we’re talking a lot about architecture. Let’s also talk about who’s accountable for what in an enterprise and perhaps in the society as well. Now, Paul, when we talk about responsible AI at a national level, what does accountability really look like for leaders? Is it about measurement frameworks? Is it about reporting outcomes? Is it about, you know, independent oversight? What are the signals that you need to tell citizens that, you know, this is being deployed in your interest?
Yeah, thanks. I think it’s really about having a clear plan that you can communicate. In our case, that means making it clear throughout the economy, throughout government, throughout society, that we’re going to seize the opportunity of AI. That means better jobs. That means investment in data centers and all the things we’ve been talking about. But the second thing, which is really perhaps even more important, is that we’re going to spread the benefit of AI, not just to people in the tech center, but to every aspect of the community: people in rural areas, people from marginalized groups, people who maybe haven’t had the full benefit of current technology. So spreading that benefit further. And then finally, just making it really clear that we’re also acting at every level, whether it’s businesses or whether it’s government, to keep citizens safe in the process.
We’ve had a big conversation here at a model level about AI safety and AI harms, but we’ve also got to have that conversation in the context of our communities, and what it looks like to keep citizens safe there. So I think it’s the whole-of-society leadership piece. It’s not just saying, well, the tech people can look after this from a technical perspective.
Right. And, you know, ecosystems, of course, are very, very interdependent today. You have cloud providers, you have the telecom networks, you have enterprises. There are decisions flowing across the distributed stack by the second. So who is really accountable?
Yes. I want to build on what you said, and on what Hari said here, about the difference between where we are today and where we are when we introduce agents at scale. And to me, it isn’t so much a question of who, because if you are replacing work with an agent, that basically needs to translate into an accountability, and then also a transparency, trust and governance issue around those agents. And increasingly, we get agents at different levels. There are super-advanced agents at the top. And, of course, as you follow down the stack, we get more fine-grained agents having less knowledge, making decisions that are guardrailed in a different way than the top models. So think of this as a hierarchy of decision-making and, of course, accountability.
But to me, there’s no question that if you are, and when you are, introducing agentic technology, you need to take the responsibility for your part. If your complete service consists of many different agents on the cloud side, on the advanced connectivity side, on the application side, on the device side, it all needs to come together. But responsibility should reside in the domain that you are providing to the market, to the customer, to the employees. Then, of course, it’s never as simple as that, but in the world I come from, in telecom, we’re already providing critical infrastructure. People’s everyday lives depend on it. So we already have guardrails from a safety and security perspective that we have to live up to in today’s world of 5G and telecom. That, to me, should carry over into the agentic world. I know there are, of course, discussions about increasing governance, increasing regulation. I think that’s a dangerous way to go, because if you regulate before you have innovated, you never know what you will get. But if you stay with these basic principles, that we do have requirements and guardrails in the world we’re coming from, and you translate them more or less one-to-one into the agentic world, I think we are at a good starting point.
Right. And, Divyesh, we are talking about humans and these agentic machines working hand-in-hand. Now, as these identities shift, how should we be rethinking governance? How should we be rethinking trust? And, of course, governance is never static; it’s going to keep evolving. So what does dynamic oversight really look like, especially in a very regulated industry like yours?
Look, I really love that question, because at the end of the day, as a CTO in a bank, I am accountable. I am responsible for the platform that we construct and the output that gets generated from that platform, whether it’s from a human or an agent, right? So that’s my accountability. And this is where I have interesting debates and conversations with colleagues from Wipro and other partners of mine who are very eager to sell me solutions. And I say, if the solution is a black box, then I’m going to find it very difficult to integrate it into my environment, because ultimately I have to be able to explain the output that gets generated. So to your question about dynamic oversight, it again goes back to the platform and the way we’ve architected it.
The platform, without getting too technical, is on two planes. There’s an execution plane and a control plane, right? But again, it’s not that sophisticated. It’s just like when you onboard a new graduate into your organization. You give them a set of guardrails and a set of responsibilities befitting their skill set and their experience. You provide the right level of supervision and oversight. And as they grow and become more proficient, you give them more responsibility. We treat agents in exactly the same way. There’s a lot of conversation about agents being autonomous and hallucinating; well, individuals can do the same thing if they’re left to their own devices, right?
So the way we have built and architected our agentic architecture is that, as Erik said, there are different types of agents. At the lowest level, agents are not just autonomous but atomic. And with the right set of guardrails and agentic operating processes, they are also deterministic, right? We basically create agents to perform a single task, and we make them as reusable as possible so we can compose and aggregate them into higher-level workflows. And as they learn more, which is the good thing about agents, they learn faster, you give them more responsibility, just as you do with humans. But again, it goes back to that execution plane, where you are monitoring every activity through the control plane. The other features of the platform include how we onboard and offboard agents, just like you do with humans.
And we also have practices in place to manage conflicts between agents and humans, because, just as you have conflicts between two humans, you have conflicts between an agent and a human, right? And you need to be able to detect that in real time. So that’s some of the work we’ve done. It’s early days, and I don’t mean we have all the answers, but certainly the space is moving very fast. The key is that we humans always have to be in control, so the way we design the architecture is to ensure that happens.
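The two-plane pattern described here, an execution plane doing the work while a control plane watches and gates it, could be sketched roughly as below. The class names, the task whitelist and the escalation message are illustrative assumptions, not the bank’s actual platform:

```python
class ControlPlane:
    """Observes every execution-plane action and enforces guardrails (illustrative)."""
    def __init__(self, allowed_tasks):
        self.allowed_tasks = set(allowed_tasks)
        self.audit_log = []          # every decision is recorded for explainability

    def authorize(self, agent_id, task):
        ok = task in self.allowed_tasks
        self.audit_log.append((agent_id, task, "allowed" if ok else "blocked"))
        return ok

class ExecutionPlane:
    """Runs agent tasks only after the control plane signs off."""
    def __init__(self, control):
        self.control = control

    def run(self, agent_id, task):
        if not self.control.authorize(agent_id, task):
            # A human stays in control of anything outside the guardrails.
            return f"{task}: escalated to human review"
        return f"{task}: done by {agent_id}"

control = ControlPlane(allowed_tasks={"summarize-statement"})
plane = ExecutionPlane(control)
print(plane.run("grad-agent-7", "summarize-statement"))  # within guardrails
print(plane.run("grad-agent-7", "approve-loan"))         # outside guardrails, escalated
```

Widening `allowed_tasks` as an agent proves itself mirrors the "new graduate" analogy above: more responsibility only as proficiency grows.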
So are agents being put through tough performance appraisals? Are they being fired for hallucinating?
A hundred percent right. And again, it may sound really basic, but I view an agent as no different to a human. So you do performance management. There’s a concept we call agent university, right? And I love that term, because I was chatting earlier with James about this: at university you’re learning how to learn, and that’s what we want the agents to do as well. And whilst humans may fill out a timesheet to account for the work they’ve done and to measure the output they’ve produced against the cost they’ve consumed, agents may not fill out a timesheet, but we’re monitoring the agent for the work done and the tokens consumed against the output generated, to ensure we measure their performance in a similar way.
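The "timesheet for agents" idea, tokens consumed measured against outputs produced, can be sketched in a few lines. The record shape, the agent name and the cost-per-output metric are illustrative assumptions, not the platform’s real schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentLedger:
    """Tracks tokens consumed vs. outputs produced per agent,
    the agent equivalent of a human timesheet (illustrative sketch)."""
    records: dict = field(default_factory=dict)

    def log(self, agent_id: str, tokens_used: int, outputs_done: int) -> None:
        t, o = self.records.get(agent_id, (0, 0))
        self.records[agent_id] = (t + tokens_used, o + outputs_done)

    def cost_per_output(self, agent_id: str) -> float:
        tokens, outputs = self.records[agent_id]
        # An agent that consumes tokens but produces nothing scores infinitely badly.
        return tokens / outputs if outputs else float("inf")

ledger = AgentLedger()
ledger.log("kyc-checker", tokens_used=12_000, outputs_done=40)
ledger.log("kyc-checker", tokens_used=8_000, outputs_done=20)
print(ledger.cost_per_output("kyc-checker"))  # 20000 / 60, about 333 tokens per output
```

Trending this ratio over time is one simple way an "agent appraisal" could flag agents whose cost drifts upward.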
Wonderful. Well, Hari, bringing you in as well: how should organizations measure the ROI? That’s a question that enterprises around the world have been debating. What’s the value beyond the profit, beyond the bottom line? Are we looking at trust scores? At productivity? At decision velocity, risk mitigation? At the core, how are you looking at ROI?
Probably one of the most debated topics, and one that I hear a lot. I’ll give you the Wipro context in terms of how we are looking at it. Point number one: while everybody talks about use cases and productivity measurement of AI, we think AI is beyond just measuring return on investment or productivity. It’s almost like going back in time and asking, should we implement an email system, and what’s the ROI on the email system? Or, why should I be on the internet when I already have a company brochure? So a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability and a fundamental shift, a journey that is irreversible in terms of where we are going. It’s not a question of whether to invest because there’s ROI or not; we have to go down that path, and we look at it as a capability. So within Wipro we are not really asking, for every single use case, whether there is ROI on it. Now, having said that, as a business leader, ROI is extremely important.
Well, your clients must be demanding the ROI for sure.
Yes, that’s equally true. So the element we talk about is that the earliest signal of ROI is productivity, right? We always talk about productivity as an early indicator of what can come down the pipe, but it is only an early signal. The resulting benefit is always an end outcome: it can be cost, units produced, better quality, cycle-time reduction, many of those things. And our goal has always been to move beyond productivity, because productivity is the number people talk about most frequently in AI, but we are moving beyond it to look at the end outcomes we can achieve. Our models are built to help clients understand the end benefit of AI rather than just looking at productivity as an element.
Plus scores are becoming equally important; let me touch on them for a minute. When we look at plus scores, we are looking at how many instances of failure happened, and whether that is within the vector of what the organization, or the process, says is acceptable. So it’s important to measure the quality aspects, the failure aspects, the hallucination we talked about, all the other aspects of AI where it can go wrong, and then measure that against the task goal and see whether it’s appropriate for the process in question. So we’ve had situations where we talked about probabilistic models versus deterministic models.
We had customer cases where 100% was the only acceptable answer, or 99.99%. There are situations where 85% was good enough. So again, there’s no one single answer; it depends on the kind of process, the kind of problem we’re trying to solve.
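The plus-score idea, counting failures against a process-specific acceptance bar, might look like this minimal sketch. The function name and the example thresholds are assumptions for illustration only:

```python
def plus_score_ok(failures: int, total_runs: int, required_success_rate: float) -> bool:
    """Return True if the observed success rate meets the process-specific bar
    (e.g. 0.9999 for a payments process, 0.85 where 85% is good enough)."""
    if total_runs == 0:
        return False  # no evidence yet, so no pass
    success_rate = (total_runs - failures) / total_runs
    return success_rate >= required_success_rate

# A payments process may demand 99.99%, a triage process only 85%.
print(plus_score_ok(failures=1, total_runs=1000, required_success_rate=0.9999))  # False
print(plus_score_ok(failures=100, total_runs=1000, required_success_rate=0.85))  # True
```

The point of the panel’s "no one single answer" remark is exactly that `required_success_rate` is a property of the process, not of the model.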
Right. And do you think business innovation would perhaps be one of the biggest ROIs? Any outstanding cases of business innovation you’ve seen where AI has been scaled successfully?
Yeah, that’s a fantastic question. Let me give you a quick example to bring this to life. One of the projects we did was for an energy client, for a refinery. Obviously everything was automated and instrumented, with sensors all along the way, and they were asking us, what’s the value of AI in this context? The work we did for them was basically analysis of a flame. Interestingly, out of the flame we could extract information about combustion efficiency, the fuel-to-air mixture ratio, and the maintenance state of the equipment, all derived from models we built just by looking at the flame. The kind of information we could secure just by looking at the flame was far superior to sensor-based technology, because sensors typically tell you something is working or not working based on a threshold. Here we could actually track the health of the equipment through incremental change, compared to the on-off picture you get from sensors.
Fantastic. Erik, you want to add?
Yeah, can I just add one thing? I think it’s so interesting to look at how, in our world, we talk about this intelligent fabric of 5G. And, of course, there are gains if you apply AI in terms of efficiency and productivity. You can get better customer experience. And you can mention 10%, 50% as a great achievement, a 20% saving; we’re talking about billions of dollars there. But where our customers get super excited is when they take an example from the complete network, use modeling on top of it, and then start to produce new outcomes. It’s business growth. And, of course, it’s not always that you can find that clear case.
But that’s really where AI and autonomous networks are helping. Saving, yes, TCO is important. But it’s very much about that business growth.
Any example you can share with us there?
Yeah, the glasses were one example. But in the future, every device, every application, every AI service will need its own specific service quality, latency, all of that. So you can start to sell services that are tailored for mission-critical use, for enterprises. And that’s what leading customers, including here in India, are doing; they’re using AI for that kind of segmentation and growth of the business. It’s an upside that is unlimited. So, of course, it’s more exciting.
Absolutely. Well, let’s also look at the long-term competitiveness and value creation we can achieve with AI. Paul, if we were to project 10 years ahead, what do you think would really separate AI-native nations from AI-dependent nations? Is it infrastructure? Talent pipelines? Compute capacity? What would you add to that list?
I would add capability, competence, and curiosity. A lot of the things you mentioned, data centers and so on, will be built; there will be investment. The underlying models and the compute will be commoditized. What will set countries apart is the ability of government institutions to adapt, the ability of the economy to be flexible to new approaches, and the ability of the workforce to find the new jobs, the new wants and needs that are created, and, as the bottlenecks shift, to be able to move to those.
And I’ve got to say that coming to India this week, I see not just competence, capability and curiosity, but a downright enthusiasm for this. So I think maybe India is one to watch.
Good to know, and happy to hear that, of course. Well, Erik, AI demands massive compute, massive energy, massive connectivity. How do we reconcile infrastructure-scale AI expansion with sustainability? Even as AI scales globally, how do we ensure that efficiency is imperative in everything that is deployed?
Well, AI is energy-intensive, especially now in the training phase. Some of the numbers out there are mind-boggling, and I’m not even sure we’re going to need the kind of energy that has been predicted. But as I was saying before, we’re moving from that big data center training to distributed inference; that’s where the puck is going. That means you need to scale it to something like 8 billion inference devices, glasses, tens of billions of sensors using AI, or visual sensors. So what we are doing, and what needs to happen, is to really have energy-efficient hardware, energy-efficient software, energy-efficient AI models.
Small models when you can get away with that, and of course big models when you can’t. So we’re not going to explode energy consumption just because we use more AI. In fact, we’re going to use even smarter and better ways to do it, both on the hardware and software side. Then, just to put things in perspective: all the world’s networks account for around one percent of the world’s total power consumption. And by using more digital technology, you are able to reduce emissions in other sectors by as much as 15%. So it’s a 10-to-15-times payback on that energy consumption. And if you combine that with what I said about really being conscious of energy efficiency as you move further out, I think it’s actually going to be a sustainable way to do a lot of things, not just replacing unnecessary travel and logistics chains with more digital means.
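The back-of-envelope behind that payback figure can be made explicit; the two inputs below are the rough numbers quoted above, not measurements:

```python
# Back-of-envelope from the rough figures quoted above (assumptions, not data).
network_share_of_world_power = 0.01  # networks account for ~1% of total power consumption
enabled_emission_reduction = 0.15    # digital tech can cut other sectors by up to 15%

# Ratio of emissions avoided elsewhere to the power the networks themselves draw.
payback_ratio = enabled_emission_reduction / network_share_of_world_power
print(round(payback_ratio))  # 15, i.e. in the "10 to 15 times" range quoted
```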
Everything is going to be more efficient, so I think we have to be a little bit careful before we say that it’s just exploding and it’s completely outrageous. Because if you just project those big data center training clusters, it looks scary, but that’s not the whole picture.
All right. Well, Divyesh, while we are talking of value creation from AI, many organizations are still accounting for and measuring AI success in cost savings. But at your organization, how are you reframing AI value in banking: resilience, fraud protection, customer trust, capital efficiency? What are some of the metrics you are tracking to ensure this is true value creation?
I think it’s a question that is constantly exercising our minds. If I start with the productivity question you asked earlier, whilst there isn’t a straightforward answer, I can look at it on three levels. First, AI will provide micro-level productivity through co-pilot-type technologies, which might be difficult to measure, but it’s certainly helping with literacy and the overall level of education and awareness in the organization. Secondly, at the enterprise level, and this is your point on value creation, we absolutely see the potential of AI to drive significant ROI. You take very complex processes that have been utilizing hyperautomation technologies, whether it’s RPA, OCR, etc., and when you apply AI and agentic technology, you can actually take them to the next level.
And these are extremely complex, error-prone processes involving large sums of money. When we’ve applied AI and agentic technology to them, we’ve seen incredible outcomes, which is giving us tangible value creation. The third aspect: if we really take a step back, certainly in banking, what is our biggest source of competitive advantage? It’s not necessarily the technology or the products or any other capability, because the next person can come along and emulate those. It’s really our ability to respond and react to change faster than our competitors. And that’s how AI is going to help us create value: it allows us to respond to change faster, run rapid experiments, and scale and double down where we think we will see a significant ROI.
Right. Okay, so I have a question for all of you; perhaps you can take about 30 seconds each. Do you believe enterprises today are overestimating or underestimating AI risk? And how should leaders and boards measure AI trust readiness in practical terms? Hari, maybe you want to start on that one.
See, there is certainly a level of risk that one should be aware of and work with. In every business there’s always an element of risk to mitigate, and AI is no different from that perspective. But at the same time, the hype about risk is also overstated. It’s a manageable risk, not an uncontrolled, unmanageable one. And with the right kind of toolset, like the one Divyesh talked about, it’s definitely possible to get the best value out of AI without actually exposing oneself to risk.
Okay, that’s a very diplomatic, balanced answer you’ve given us. Erik, what do you think?
I suspect the risk assessment among enterprises has become quite realistic; they don’t overestimate it, and the risks are manageable. On the government side, I think there’s maybe still an overestimation of the risk, a tendency to be too cautious, and that could hold things back in certain public sectors and other areas. Then again, the risks are very, very big if you mistreat this extremely powerful technology. So I’m not saying we’re over the hump, but that’s what I think.
Paul, do you want to take that on, considering Erik just said that perhaps the public sector overestimates risk? Would you say that of the government in Australia as well?
I mean, certainly governments have a responsibility to start off with a more cautious approach than private sector folk. I’d say there’s a shift from the uncertainty of something new that isn’t quantifiable to actually understanding the risk, and once you understand the risk, you can manage it. So over the last year or so, the government of Australia has taken a much more active posture towards AI, in a sense embracing the risk a little more than we were in the past. But as we grow the capability, as we’ve got the foundation of trust and the guardrails we need, it means you can actually manage that risk, and that’s the key thing.
All right, Divyesh?
Look, with any so-called new technology there is always going to be a level of fear, uncertainty and doubt. But the paradox for me is that AI is actually not a new technology. In fact, it predates cloud, mobile and robotics; I was writing AI programs at university. AI was just well ahead of its time. We needed the cloud to be able to process large amounts of data, and we needed the kind of data centers we’re talking about for the compute, for this technology to really come to light. And clearly, as we’ve gone through digital, social, cloud and data, along the way we’ve seen many, many regulations around data protection, how best to use cloud, data sovereignty, data residency, et cetera.
So as long as we are not shedding the controls we’ve already built, and we tighten the guardrails as we deploy AI through a platform-centric approach, I think those risks can be managed and mitigated. And hopefully what we’ll start to see is that the benefits of this combined technology will far outweigh the kinds of risks and concerns we’re seeing. The only qualification I would make, and I think this has been talked about at this conference, is making sure that we take everyone with us.
Absolutely. I mean, it has to be inclusive for all, especially in a country like India, where we have divides of many kinds. Well, let’s spend a few minutes looking ahead and doing some crystal ball gazing. Erik, if I can come to you: we are entering autonomous networks, embedded intelligence, physical AI, from robotics to massive systems. Now, what does an AI-native network look like, say, five years from now, because anything more than five is just too much to envision? And how do we get our mobile and cloud infrastructure ready for that future?
Well, I think we perhaps have to look even further out than five years, because we’re building something that should work for society in broad terms. But of course AI is moving super fast, and when you ask about AI-native, I think any industry, including the one I represent, is going through major change now. AI-native is not just how you build your products, that they need to be data-driven, need to learn, need to be updated all the time. It’s very much about your processes: how you go to market, how you engage with lifecycle management, handling questions, and I think we talked about this in the pre-meeting as well.
There are so many changes in how you build AI-native systems that it is a fundamental rework for, I would say, most product companies, and actually service companies as well. So an AI-native world is something much more responsive to the fast changes we talked about, and an AI-native network is a network that is responsive to all of these needs. You already mentioned physical AI, which is just around the corner: humanoids, robots, drones, all things requiring much more tailoring and flexibility from the network, the intelligent fabric. So we need to do what I call user experience at scale, or massive user experience.
Everything has to have its own unique requirements met. I think only AI-native networks that respond in real time to these needs, adapt, and create the best user experience can handle it. So it’s going to be a very different world, very intuitive, judging by what we see on the wearables side, but it’s going to be a completely new setup.
Right. And Paul, as we’re looking ahead, public-private partnerships are going to be key to any success we’re going to see. Tell us a little about AI CoLab and your approach to bringing together public institutions, academia and industry to advance the practical adoption of AI, while keeping it transparent and ensuring public good is at the center of it.
Absolutely. So the AI CoLab is a cross-sector initiative where folk from government, the private sector, academia and not-for-profits can get together in one place, often in person, to understand things. And I think everybody who’s come to the AI Impact Summit really understands that we can’t do this alone; nobody in their silo can solve the problem themselves. We’ve got to get capability from each other, learn from each other. And I think the 300,000 people who have been here this week have certainly proven that to be the case. It’s also key to actually doing safe and responsible AI; it’s not just the technical controls or the networks we have.
It’s having people in the room who may not care about AI, but who do care about the services being delivered. They care about their voice being heard. They care about the environment around them as well. So it keeps bringing you back to reframing it: what’s the problem we’re trying to solve? What’s the mission we’re trying to achieve? And if we want to talk about impact, that’s the key question.
Right. All right. Well, let’s also look at the financial angle with Divyesh. We’ve talked about open finance and very effective financial ecosystems. What is it really going to take to scale AI to that level, especially in the near and short term, to enable very responsible deployment and sustainable finance, for farmers in particular, in the Indian context, given the complexities we see in this country?
So I think it’s going to be a force for good. If I look at banking, I don’t think the core of banking is going to change. However, how we bank, how we drive that experience for our customers, is going to be transformationally different in the future. One example, to pick up on your question: if you combine the technology of AI with, say, digital assets and stablecoins, you get the ability to move money as fast as email. Why does it take three or four days today to clear a cross-border payment, which goes completely against the whole concept of open finance and inclusion? So I think AI, together with some of these other technologies, is going to be a game changer in enabling things like that, and in driving an experience that is much more natural, much more intuitive than it is today.
Personally, as a CTO, there are a lot of questions about jobs going away, et cetera. In any organization, certainly the banks I’ve worked in, the annual CapEx demand typically outstrips supply at a ratio of five to one. But AI can help us change those legacy systems and modernize our platforms, because, let’s be honest, 90% of banks still operate with legacy technologies; very few are greenfield. All of those technologies need to be modernized and upgraded, and I think AI, again, is going to be a force for good there. And once we modernize those systems, they will again lend themselves to connecting more seamlessly through microservices, APIs and, without getting into the technical details, MCPs, et cetera.
So I think that AI, together with some of these other technologies, digital assets and the like, will drive a very different paradigm in terms of…
Lovely. Very exciting times ahead. Well, Hari, if you were to give a CEO a three -step plan today to really scale responsibly, what would that be? Three things.
Okay. Number one, be very clear about what you want to achieve with AI. Have the vision right; have clear objectives. That’s the first part. The second part I would call out: don’t think about task automation. Think about what AI does to your business. It’s fundamentally an operating-model shift that can actually deliver value. So think big. Think about the operating-model shift, which will require structural changes, changes in ways of working, skill changes; it’s a complete transformation, not just automation. And the third thing: please call Wipro.
All right. We are now going to imagine that we are at the India AI Impact Summit 2030, just about four years ahead. What has changed in the way we live, work and play that hadn’t happened the last time you were here, which is today? Paul, do you want to start? Feel free to use your imagination.
Yeah, okay. Look, as an economist, it’s very hard to predict the future. I think what will have changed is there’s a whole bunch of people turning up with job titles we’ve never even heard of before, and they’re telling us about things that people in a bureaucracy or the government only dream about. So I think we’ll see a lot more diversity in what people do.
Right, lots of new jobs. And most industry reports suggest that many of the new jobs of the next decade have not been invented yet. So, absolutely.
Well, in four years’ time we may not be here in person; it will be our agents or avatars being teleported in, because the technology, through Ericsson’s amazing network, has the bandwidth and the latency has improved vastly, and obviously with Wipro’s technology around creating these avatars and agents. But to be serious, I think what will have changed, at least from my perspective, is that banking will be a lot more seamless. It will really be about putting customers first, rather than imposing the friction we see today in how financial services work. For instance, we will shop much more intuitively. We won’t even know that we need a new fridge or a new car.
It will kind of just occur to us naturally, and something will appear on your doorstep that you didn’t even know you needed, but once it arrives, you think, wow, that’s exactly what I needed. The payment’s taken care of. All the servicing is taken care of. So I think that is a near -term reality.
All right. Erik, Hari, go ahead.
A couple of things. One is, I’ll definitely break my glasses and use Erik’s glasses. More importantly, what I think will fundamentally change is decision velocity. The decision velocity in organizations will completely change in the next four years. One of the key complaints in any enterprise is that the organization is so slow: processes take a lot of time, things don’t happen at the pace we all want them to, and a slow process is not a great experience. The fundamental problem AI will solve, and I’m pretty sure it will solve it in the next couple of years, is that the velocity of everything will increase so tremendously that we’ll look back and say, how did we ever tolerate something as slow as what we have today?
Yeah, I wonder if it’s doable in four years on a global scale. But I hope what we see four years from now is dissemination, diffusion, everyone being included in this fantastic journey that AI really is. But I think it hinges on the dialogue we’re having here, and it’s conditional on solving the trust issues. Because these things, security, privacy, we talk about them as things we can solve technically, but they need fundamental anchoring in how humans behave, so that you can really trust these agents, as was mentioned before, and so that we put the right constraints on them.
If that happens, then of course four years from now it's going to be so seamless, with our digital colleagues, AI colleagues, physical AI colleagues, and so forth, that it's going to be a completely different way of looking at work and, of course, at how you get help and outsource. I mean, you're going to be an agent of something much, much bigger than what you're commanding today. I think it's an enormous shift.
Absolutely. Fascinating times ahead. Thank you, gentlemen, for your incredible insights; that was very educational and informative for all of us. The takeaway for me from this conversation is clear: if People, Planet and Progress remain our guiding sutras, and if we can align all seven pillars of global cooperation, AI is not just going to optimize businesses; it is going to redefine competitiveness, rebuild public trust and, hopefully, future-proof all our institutions for the decades ahead. Thank you very much; I appreciate you all taking the time here, and thank you all for being a wonderful audience. Thank you.
Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are
“Mr Bhandari introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation”
While the sutras are confirmed, the knowledge base does not mention the seven chakras; it only references the broader three-sutra framework, providing context but not confirming the specific chakras [S106].
“Erik Ekudden described telecom networks evolving from passive data carriers to an “intelligent fabric” that will host AI inference workloads at the edge”
The transcript of the AI Impact Summit notes that telecom networks have evolved significantly from merely enabling connectivity to more advanced roles, aligning with the description of an “intelligent fabric” for edge AI workloads [S32].
“5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, health‑care and smart manufacturing, and the network already provides guarantees for billions of devices”
The knowledge base discusses the upcoming 6G ecosystem where devices will have AI capabilities and emphasizes the need for secure, scalable networks for widespread AI deployment, adding nuance to the claim about 5G/6G requirements [S118].
The panel shows strong consensus on trust as the cornerstone of AI, the necessity of robust governance and guardrails, and the transformative impact of AI on speed and continuous service delivery. There is moderate agreement on risk perception and a shared vision of inclusive, user‑centred AI ecosystems.
High consensus on foundational principles (trust, governance, always‑on services) with medium consensus on risk management and sector‑specific impacts, suggesting that coordinated policy and platform‑centric strategies are likely to gain broad support across government, industry, and academia.
The panel largely agrees on the centrality of trust, people‑first approaches, and the need for robust governance. The main points of contention revolve around how risk is perceived and managed by the public sector and whether AI should be primarily embedded in telecom networks or delivered via enterprise platforms. These disagreements are moderate in intensity and reflect differing professional lenses rather than fundamental opposition.
Moderate – the disagreements are focused on implementation pathways and risk framing, which could influence policy coordination and industry‑government collaboration but do not undermine the shared commitment to trustworthy, inclusive AI.
The discussion was shaped by a series of pivotal remarks that moved the conversation from abstract aspirations to concrete, actionable frameworks. Paul Hubbard’s framing of AI as a public‑value endeavour and his insistence that trust underpins innovation set a foundational narrative. Erik Ekudden’s vision of the network as an “intelligent fabric” and his sustainability insights expanded the technical scope, while Divyesh Vithlani’s platform‑first strategy and Hari Shetty’s “proof over promise” principles supplied practical roadmaps for trustworthy deployment. The interplay of these comments—each prompting deeper elaboration from other panelists—created a dynamic flow that oscillated between policy, infrastructure, governance, and future competitiveness, ultimately delivering a cohesive vision of how People, Planet, and Progress can be aligned through the seven “chakras” of AI cooperation.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.