Building Sovereign and Responsible AI Beyond Proof of Concepts

20 Feb 2026 11:00h - 12:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened by highlighting that only a minority of AI initiatives reach operational use, with just 30% of pilots advancing to production [11]. Speakers argued that a core barrier is a lack of trust in AI systems, both at the organizational level and among individuals whose data and outcomes are affected [13]. Supporting this, the OECD AI Observatory records a rapidly growing catalogue of incidents, with 600 harms reported in December 2025 alone, illustrating the real-world risks of untrusted deployments [22-23]. Concrete examples cited included Romanian voice-cloning scams, AI-generated books in Cairo produced without human oversight, and biased facial recognition at borders, all of which erode public confidence [28-29][30-35][36-38].


The presenters identified six common failure points for proof-of-concept projects: weak adoption planning, governance gaps, misalignment with societal goals, sovereignty concerns, sustainability pressures, and inadequate change management [42-61]. To address these, they proposed an "AI in 4D" framework comprising sovereignty, green (sustainability), responsible, and valuable dimensions, each intended to surface and mitigate harms before scaling [64-66]. A healthcare pilot illustrated how neglecting the green dimension, through excessive compute, power, and water demands, rendered the project financially and politically untenable; participants correctly labelled the issue as sustainability [73-80]. A traffic-light optimization case showed that focusing solely on technical efficiency ignored the value dimension, diverting traffic into low-income neighborhoods and provoking community backlash [90-99].


A justice-system AI example highlighted sovereignty problems when a model hosted offshore could not be audited or updated, underscoring the need for control over critical public-sector AI [103-108]. Audience discussions reinforced that responsible AI-addressing bias, fairness, and human-centered design-and valuable AI-measuring societal benefit-are intertwined, as seen in a social-benefits pilot that lacked explainability and caused harm to vulnerable citizens [108-124]. Omeed expanded on sovereignty, emphasizing that data and model control are prerequisites for trust, and warned that reliance on foreign AI services could jeopardize national objectives [131-148]. He further argued that green AI ties environmental impact to economic viability, noting that unsustainable scaling leads to cost overruns and eventual failure [154-165].


The session concluded that no single dimension suffices; organizations must balance trade-offs, adopt comprehensive AI policies, define measurable KPIs for each lens, and upskill teams to ensure trustworthy, sustainable, and valuable AI deployments [349-362].


Keypoints


Major discussion points


AI pilots have a low conversion rate because trust is not built and harms are overlooked.


Only about 30% of AI projects reach production, and many fail to consider trust-related issues such as data sharing, impact on jobs, and potential harms [11-19]. The OECD AI Observatory tracks a rapidly growing number of incidents (≈600 in December 2025) that erode confidence [20-24], with concrete examples ranging from voice-cloning scams in Romania to AI-generated books in Cairo and biased facial recognition at borders [36-38].


Six common reasons why proof-of-concepts (PoCs) stall, summarized in a “4-D” framework.


The speakers list six failure categories – adoption vs. impact, governance, misalignment, sovereignty, sustainability, and change management [42-60]. They then condense these into four lenses that must be addressed to build trustworthy AI: Sovereignty, Green (sustainability), Responsible AI, and Value [61-66].


Real-world scenarios illustrate each lens and the consequences of ignoring them.


Health: an AI radiology triage tool required more compute, power and water than was available, causing a sustainability failure [70-78].


Transport: an AI traffic-light optimizer reduced travel time but diverted traffic to low-income areas, exposing a value-misalignment issue [89-93].


Justice: a case-routing system hosted offshore with no auditability raised sovereignty and responsibility concerns [101-106].


Social benefits: an AI eligibility engine lacked explainability and showed bias, highlighting responsible-AI and value gaps [108-114].


Trade-offs between the four dimensions are inevitable and must be managed explicitly.


Participants asked how sovereignty might outweigh value or vice versa, and the presenters explained that trade-offs (e.g., choosing foreign models for speed vs. retaining control) require transparent decision-making [279-286][290-306]. Sustainability versus rapid adoption was also discussed as a common tension [306-313].


Actionable next steps: policies, frameworks, KPIs, and upskilling.


The session concludes with a call to develop AI policies that embed all four lenses, adopt responsible-AI frameworks, define measurable KPIs for ethics, sustainability and value, and invest in team upskilling [342-360]. A white paper summarizing eight to ten practical recommendations for each lens is offered for further guidance [342-347].


Overall purpose / goal


The discussion aimed to explain why most AI pilots never scale, introduce a structured "4-D" approach (sovereignty, green, responsible, value) for evaluating and designing trustworthy AI, illustrate the approach with case studies, and equip participants with concrete actions (policies, frameworks, metrics) to move from experimental PoCs to production-ready, impact-driving AI systems.


Overall tone


The conversation began with a formal, informational tone, presenting statistics and definitions. As audience interaction increased, the tone shifted to collaborative and exploratory, with participants sharing scenarios and questions. Towards the end, the tone became supportive and actionable, emphasizing practical guidance, shared resources, and encouragement for attendees to adopt the framework. Throughout, the speakers maintained a constructive, solution-focused demeanor.


Speakers

Omeed Hashim - Role/Title: not specified - Areas of expertise: AI governance, sovereign AI, responsible AI (as discussed) [S1]


Audience - Generic participant; individual members mentioned with their own backgrounds:


* Yuv - Individual from Senegal; role/title not specified [S2]


* Professor Charu - Professor, Indian Institute of Public Administration; expertise in public administration [S3]


* Dr. Nazar - Role/title not clearly mentioned [S4]


Theresa Yurkewich Hoffmann - Role/Title: not specified - Areas of expertise: AI trust, AI governance, AI policy (as discussed) [S5]


Additional speakers:


Ami Kotecha - Co-founder, Amro Partners; sector: real estate and data spin-out (mentioned in transcript)


Shri - Name referenced in audience comments; no role or title provided


Full session report: Comprehensive analysis and detailed insights

Context & Trust Gap – Theresa opened the session by noting that only about 30% of AI pilots progress to production, and that a lack of trust in the technology, its data, and its societal impact is the principal barrier [1-2].


OECD AI Observatory – She highlighted a rapid rise in recorded AI harms, citing 600 incidents reported for December 2025 [3-4]. Concrete examples were given: Romanian voice-cloning scams [5-6]; a Cairo book fair featuring AI-generated titles with printed prompts and model instructions, raising authorship questions [7-9]; and facial-recognition systems at borders that performed unevenly across population groups, eroding public confidence [10-12].


Why PoCs Fail – Six Themes – Theresa identified six recurring reasons for stalled proof-of-concepts: (1) a gap between adoption planning and real-world impact; (2) governance failures such as missing risk-management and accountability; (3) misalignment between the AI's purpose and societal goals; (4) sovereignty concerns about data and model control; (5) sustainability pressures, especially energy and water use; and (6) inadequate change management and cultural readiness [13-18].


4-D Framework – She then introduced a “4-D” framework that consolidates the six failure points into four lenses – Sovereignty, Green (sustainability), Responsible AI, and Valuable AI – to help anticipate harms before scaling [19-21].


Scenario Walk-through


Public-health X-ray triage: the model required far more compute, power and cooling water than the host region could provide, making the solution financially and politically untenable; participants flagged this as a Green issue [22-26].


Traffic-light optimisation: while average commute times fell, traffic was rerouted through low-income neighbourhoods, worsening pedestrian safety and provoking community backlash; this was marked as a failure of the Valuable lens because technical gains did not align with citizen-perceived value [27-31].


Justice-system routing: the PoC performed well in testing but was hosted offshore with no clear audit trail or control over model updates, highlighting a lack of Sovereignty oversight [32-35].


Social-benefits eligibility engine: the system could not explain its decisions, exhibited bias across age, ethnicity and gender, and offered no escalation path, thereby harming vulnerable citizens and missing both Responsible and Valuable dimensions [36-40].


Omeed’s Deep-Dive


Sovereignty: Omeed argued that trust hinges on who controls data and models, warning that reliance on foreign AI services creates vulnerability if providers withdraw access; sovereignty must be baked into design from the start [41-45].


Green AI: He linked environmental impact directly to economic viability, noting that unsustainable systems incur higher operating costs and are unlikely to scale; he cited a new data-centre consuming as much electricity as the whole of Los Angeles [46-50].


Responsible AI: Described as encompassing ethics, bias mitigation, governance, security and human-centred design; he referenced Prime Minister Modi’s remarks on human-centred AI and illustrated the point with a nursing-home hydration-monitoring case where poor design could harm staff and families [51-56].


Value: Defined as real-world benefit beyond cost-savings; he contrasted a UAE executive's ambition for the country's 12 million people to deliver the output of 120 million with India's context, where the same ambition would not add societal value [57-60].


Audience Contributions


Ami Kotecha: asked for clearer governmental guidance on "safe vs. experimental" AI use; another audience member noted an upcoming data-protection law expected to take effect over the following 18-24 months [61-63].


Platform vs IP: an entrepreneur described exclusive client demands (e.g., from PepsiCo) that lock in IP and prevent platform-level deployment; Omeed suggested a service-oriented, co-creation model, citing India's UPI ecosystem as a successful open-service example [64-68].


Ranking the lenses: Theresa placed Responsible/Valuable AI slightly above Sovereignty, arguing that responsible AI can act as an umbrella covering other concerns; Omeed countered that sovereignty can be non-negotiable for trust-critical systems and may need to dominate in certain contexts [69-73].


Trade-off discussion: participants noted that organisations with strong carbon-reduction goals may prioritise Green AI even if it slows rollout, while others may accept higher emissions to accelerate adoption; similarly, choosing an external model can speed delivery but sacrifices long-term control, whereas building domestic capability delays value creation but secures sovereignty [74-78].


Final quick question: an audience member asked what aspects might be missed when focusing on a single lens; the presenters deferred a detailed answer to follow-up email [79-80].


Closing & Action Items – The presenters announced a white paper that outlines a set of actionable recommendations for each of the four dimensions and shared a link in the chat [81-82]. They urged organisations to draft an AI policy explicitly addressing sovereignty, sustainability, responsibility and value; adopt a responsible-AI framework with clear governance questions; define quantitative KPIs for ethics, carbon impact and user benefit; and invest in upskilling programmes that incorporate diverse stakeholder perspectives [83-88]. The session ended with contact details, a QR code for feedback, and a note that two audience questions remained unanswered [89-91].


Overall, the discussion reached strong consensus that trustworthy AI requires a holistic 4-D lens, that trade-offs among sovereignty, sustainability, responsibility and value are inevitable and must be documented, and that coordinated policy, measurable metrics and collaborative business models are essential to move AI pilots from proof-of-concept to production-ready, impact-driving systems [92-94].


Session transcript: Complete transcript of the session
Theresa Yurkewich Hoffmann

Okay. Sounds good. Okay. Well, this session will be all around that. So if we can have the next slide. So what we want to talk to you about today is that there are so many different AI projects and AI pilots happening in the world. And a pilot is the same as a proof of concept. It's an idea that you're testing, a concept that you're testing, to see if that idea is something that you can put into implementation later on. And I was looking at the stat of how many AI pilots are in the world, and that was very difficult to quantify.

But what I did find was that only 30% of all the AI projects actually go into production. So what we're finding in the world is that we have lots of different AI ideas, but really a difficulty in translating that into something real. And the point of this session, and what I think is the point of the whole AI summit, was that one of those reasons is because we don't have trust. So if we can have the next slide. So if we think about trust, that could be an organization's trust that the AI will work. It can be trust in us as individuals around how our data will be shared, the outputs that it will give us.

It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of jobs and how that will work. And with that, what we’re seeing is a lot of these AI projects are failing to consider that. And I don’t know if you’re familiar with the OECD AI Observatory, but they do a monitor where they essentially monitor all of the harms and all of the AI incidents around the world. And you can see that it’s been growing exponentially. In 2025 of December only, there were 600 different incidents in the world. So those are 600 different times that people were harmed or that there was some kind of AI hazard that was created through a pilot.

If we can have the next slide. It's just to zoom in, so this is a little bit difficult for you to read now. But in that harms monitor, you can click on any of them and learn more about them. So some that I found: the first one is in Romania. AI was being used to clone people's voices and then run scams, by making people think that the person was in distress. As well, there was an example, I believe it was in Cairo. So there was a book fair, and a lot of the books there were actually produced using an equivalent of ChatGPT, using generative AI. But there were no humans included in that project, so the books were printed with the prompts and the AI instructions still in them.

So that created a lot of issues around creativity: are these books generated by AI? Are they what we're looking for? Is that what we thought we were buying? And then there are several other examples happening all around the world, for example with facial recognition. So using that at borders, and all of a sudden that might not work equally between different types of people. And all of these really build towards people losing trust in AI and being fearful of using it. So these are some examples, and we'll kind of go into next what we can do about that. So next we're going to look at why these proof of concepts fail, and how we shift from just experimenting to actually having impact.

So if I can have the next slide. So I put here six ideas of what we're seeing, with the customers we work with, of why proof of concepts are not working. The first one is between adoption and impact. So a lot of times we'll have organizations that are working on AI and they've just thought about producing something, but they haven't actually thought about how people will use it. Will it have the goal that you're hoping it to have? Or say, for example, I'm using a legal tool. Will it actually serve the purpose that I'm looking for? Will it require more work for me to actually review everything it's doing? So there's a gap there. The second is around governance failures.

So I'm not sure how many of you have thought about risk management. How do you identify all of the risks that are coming up? Who's going to be accountable for solving them? That might be things like, is it treating people differently? Is it biased? It might be things around security, for example. And then there's also a failure around misalignment, so between what you're building and what society is looking for, those might not be aligned. So if you're, for example, prioritizing AI use to automate people's work, all of a sudden people are thinking, what about job loss? So there's not really a link in value there, and that's another reason. We've got three other challenges. The first one is sovereignty, which, I think if anyone was around the summit today or this week, everybody was talking about sovereignty.

So questions around: how do we maintain control? Who is responsible? If, for example, a foreign government decides to turn off that AI access, is that something we trust? Or how do we deal with that? We also have sustainability pressure, so thinking about the carbon cost of using AI and the lack of clarity around that. And then change management is really all about the people. So if we're thinking about these frontier firms where people are working with agents, what does that work culture look like? Have we actually thought about how people use AI, and have time to test it and practice with it? Have we thought about the relationship between people and AI and how that works as well? So these are six quick concepts. And if we can have the next slide, there's just a point to make: when we're considering a proof of concept, we're really just considering, does it function? We weren't considering any of those other six things. And if we want to scale AI, we need to think about everything else. So next slide. So I guess the point of this session is really to think about how do we actually do that. So what we have thought of is calling it AI in 4D, so four-dimensional: the idea that you need to look at four different lenses to build trust in AI. If we could have the next slide. And when we're looking at that, we're thinking: if you can look at all these four different lenses, that's really going to help you predict any harms or challenges that could come with the AI model, and actually prevent them, so that you can deploy and scale that AI. There are four dimensions that we're looking at. The first one is sovereignty, so thinking about who controls it: not just data, but looking at all the security measures behind it, where the model comes from, who has access to it. We're looking at green, so that's sustainability: can this scale without destroying our climate goals, for example? We're looking at responsibility, so that is thinking about ethics and governance and bias and fairness and human-centered design.

And then valuable: is this project actually really going to deliver a real-world benefit to people? So next slide. This one, I think it might be difficult for us to create a poll, so what we'll do is we'll do it by hand instead. So if we can just go to the next slide. What I thought we could do, before we give you more information on those 4D lenses and how to apply them and break out into groups, is we could just have some quick scenarios and test what your knowledge of those themes is already. So I'm going to give you an example, and then we'll do a show of hands of who thinks what lens is missed here.

So this example is with a public health company. They're using AI to read different x-rays and radiology scans. And the point of the proof of concept is to help triage different illnesses or different breaks, things that you might find in the scan, and reduce that backlog. So when they actually started modeling and rolling it out, the team realized that this required more compute than expected. It would actually exceed the available power supply, so there was not going to be the ability to use it consistently. And there was a large demand on water, because the GPUs needed to be cooled, and this is in a water-sensitive area. So that would be another challenge between people and the planet.

So this program failed, this hypothetical program failed, because it was financially and politically impossible to run. So who thinks that this is a problem because of sovereignty? Who thinks that this is a problem with sustainability? Yeah? Who thinks that this is a problem with responsible AI and value? Yeah, I agree. So I mark this one as sustainability. I think it's an example of the dynamics that we might have in the real world: we want to scale AI, do really great things, but actually we haven't considered the power or the water usage that that has, because we either don't have the information or it hasn't been something that's been baked in up front to think about.

And we will give you some higher-level insight into what this means and how to apply it in a moment. Okay, the next one. So we've got a second one. This is dealing with transport. So I think we've all dealt with traffic this week. What we're looking at in this scenario here: this project is to optimize traffic lights across the city and smooth congestion. But when they started implementing this project, it was only looking at average commute time. It was diverting traffic into lower-income areas, and pedestrian safety actually became worse. So while this met the technical triggers, in that it did reduce and optimize time, there was a lot of community backlash. So does someone want to tell me which one they think this is a failure of?

Audience

Sovereignty and responsibility.

Theresa Yurkewich Hoffmann

Yeah, we've got some sovereignty, we've got responsibility. I think this one is actually value. So here, what the ministry had thought was valuable, reducing overall time, is not what's valuable to the people. What's valuable to the people is that they want to have safety in walking. And what's valuable to them is that you protect communities and you don't have a biased impact. Next one. So now we're looking at justice. So here we've got a justice system. Our justice department is building AI to triage different complaints from citizens and reroute them to the right legal body, so whether it's the courts or a commissioner or something like that. In the pilot, it performed really well, but later, when they started to prepare to deploy this into production, the team discovers that, one, the model is hosted offshore.

Two, they don’t have a lot of information on when the model will be updated, and they don’t have control over that. This government doesn’t. That different logic within the model could change based on updates that they couldn’t control, and that they can’t audit the logs. So what do we think this time?

Audience

Yes.

Theresa Yurkewich Hoffmann

Okay, everyone is sovereignty. Sorry, did you say something else? Responsible AI? I think that could also be here, because they hadn't thought of maybe all these risks beforehand. But I agree, here especially, when you've got a national organization, they need to have control of the model and how it functions. Not being able to update it or audit it in such a sensitive area like justice is a real challenge, so sovereignty is the challenge here. And then the last one. Okay, so here we've got a social science agency, and they're using AI to determine who's eligible for social benefits. The pilot showed that they were able to progress and reduce the time and have fewer manual checks. But when they were actually doing this in real life, the model wasn't able to explain why it had made a decision, so why it had allocated benefits to someone versus someone else. There was no ability to understand how to appeal it; so if you were rejected, for example, you couldn't understand why that was and how to change that decision. There was bias discovered between different groups, so age groups or ethnicity or gender.

It wasn't applying it equally to everyone. And there was no agreed process for how you would escalate if there was a problem. So this became very seriously harmful, and there were a lot of vulnerable citizens who could be impacted. So in this scenario, what do we think, between responsible and value? Anybody else? Training data not accurate? Agreed. So I agree. I think this one is a good one: responsible, but also valuable here. Responsible AI is thinking about bias. It's thinking about fairness. It's thinking about the data that you have. It's thinking about all these harms up front and how you're going to deal with them. And then equally with value: people need to see the value of why they're using AI in a public system.

And if it's actually harming people, then it's not necessarily a good use case. So far, everyone is doing well. I think we can move on. But what we wanted to go through now is, how does this work in real life? What does this actually look like? And so I'll pass to Omeed. Can we have the next slide, please?

Omeed Hashim

Right. So I think it's clear, you know, having had this conversation and the contribution from yourselves, that it's not so straightforward, because there are different dimensions, and this is the point that Theresa is making in terms of having to look at different angles. So over the last two days, or definitely the day before yesterday, I was going around in the summit hall, and I was asking everyone, because you see everywhere it says sovereign AI, sovereign AI. I was asking them, what do you mean by sovereign AI? And some people were talking about, oh, we need to have our data centers here. Somebody was saying, our models need to be here. There were different kinds of conversations in terms of what sovereign AI actually means in the context of AI and how it works and how it deploys, and so on and so forth.

But the key thing is that ultimately it comes down to control. And my view is that it’s not even just about the organization, the sector potentially, or the nation, but also about the people. So where is your data? Who’s actually looking at your data? Why are they looking at your data? What will they do with your data? If you don’t have an understanding of that, the likelihood of you trusting that system is very low, and therefore it would be susceptible to failure. So it’s really, really key to understand the implications of data sovereignty, AI sovereignty, and so on. I mean, I was talking to one country… called Serbia, and they were saying that we have a view that we need to have control of our own environment, we’re building new large language models in our own geography, and we are going to have control over what we do.

And I think that’s the key thing. But the important thing is that if the trust is lost in terms of the sovereignty, the likelihood is that the system will fail. And I can assure you that if it’s not designed in at the beginning, you’re going to test this under a lot of pressure. You’re likely to be in a crisis as well, because when you don’t know if your health data is trained on somebody else’s data, or you’re using very commercially available large language models. then the thing is you’re actually beholden to those people and therefore you may not be able to achieve what you want to achieve as an objective. So it’s a really, really important dimension in terms of a successful deployment.

And all of the stuff that I'm going to go through here, whilst I've seen it through failure, is also the recipe for success. So you can think of it in both ways. So if I could have the next slide, please. So green AI, I mean, this is kind of not dissimilar to what we had before in terms of cloud and green computing, and the fact that unless you actually look at the environment, look at it from the economic viability of the system, ultimately what it means is that it's going to cost a lot more and it won't scale. And if it doesn't scale and you cannot handle the data volumes and the amount of usage that you do, the likelihood is that it would stop.

Now, in my mind, the approach to take here is to make sure you address both. And what happens is that addressing both the environmental effects as well as the cost actually works very, very nicely together. So we had a similar scenario before in how we deployed cloud services, and the same thing is translating to this now. So the more economic your system is, the more likely it is to produce fewer greenhouse gas emissions as well. And as a result of that, you can sustain this system longer term. I mean, we all know people are building massive data centers now. Yesterday, there was, I think, a discussion around Microsoft building a new data center that consumes as much electricity as all of Los Angeles, and Los Angeles is an enormous city.

So the environmental effects of what we're doing are really key, and they have a direct link to the costs that are driven out of that as well. And I can again assure you, if there's only one takeaway: if an AI system can't scale sustainably, then it won't scale at all. I'm pretty convinced of that. So we can move on. So the next one is responsible AI, and I think a lot of people here are familiar with that. In terms of governance, assurance, are we doing the right things ethically, is there bias in the system; all of those things fall under the responsible AI banner. And it's really fundamental in terms of giving people that trust that Theresa was talking about, in order to use the system in anger and really link their lifestyle to that, and so on and so forth.

And as you know, there are now all sorts of other systems, like the AI companions that help you achieve different things, whether it's weight loss, or even providing you counseling and helping you along in your life. But unless they're done in an ethical way and an unbiased way, and they're not leading you down a particular path, they're likely to fail as well. Now, one thing that I wanted to bring to attention, and yesterday Prime Minister Modi was talking about this, which is really key as far as I'm concerned in the responsible AI area, is the human-centered design of AI. Because when you're actually building an AI system, you need to have in your mind who you're trying to help and how.

And what does this actually mean to them when they start to use the system? So I think the example around the traffic management was a very good one, because we all struggled over the last few days with the traffic. And if a system is put into place which does not take into account the purpose of what it is doing, then it is likely to fail. I think the goal of the system itself as well is really key in terms of whether it gets the right sort of results or not.

So there are many systems where people don't consider that, and as a result, the system becomes unusable by the people, or it might have harms built into it. But the last one, the last dimension, is how valuable that AI is, and what it means in terms of the outcomes and what the measures are, and so on. So a couple of days ago I attended a session where we had a senior executive from the UAE. They were talking about, as a country, what they're trying to do. And it's really key for us to understand what we're trying to achieve. So they had a very simple kind of thinking in terms of what they were trying to do, which made what they were doing much more measurable.

So what was the intention for them? The intention was that there are about 12 million people in the United Arab Emirates. And they wanted, with the introduction of AI, those 12 million people to effectively do as much work as 120 million, almost like 10 times the size. And I think that actually is really, really key: a very simple reason as to why you're doing what you're doing, and how you measure it, and what the value is. Now, if you actually think about that in the context of, say, India, in my opinion, that ambition doesn't give India the value. So to create, I don't know, lots of agents to replace people's jobs or do more jobs, right, doesn't actually have the right outcome, because there are already a lot of people here.

Why would you do that, right? So you have to think really carefully about what the value of the system itself is, because without thinking about that, you end up building a system that you cannot measure the value of. And then ultimately it would just become a dead weight. Why do we have this at all? Should we be getting rid of it or not? So hopefully you kind of understand all of the aspects of the different areas. At Kainos, we deploy AI systems into production, so we see a lot of these issues. And we are quite lucky because our customers, which are all government departments, are actually very, very clued up as well in terms of the different aspects of what we're doing, and they see value in it.

So it’s not just about deploying the technology, but how is this technology going to affect the UK citizens and where we work in other countries like Canada, US, and so on, those countries respectively. So I think that was my last slide. I think I’m going to hand it over to you.

Theresa Yurkewich Hoffmann

So we had originally intended to maybe do different breakout groups. The audience is quite small, so it's up to you. We could either have everyone kind of have a few discussions and talk about what you think is the most challenging, or we could use 10 minutes if we want to do a Q&A, if people want to share their thoughts. Put your hands up if you want to go into a breakout group and discuss one of the concepts together. Okay, so we'll do the second; nobody voted for that. So why don't we, yeah, we can have a discussion. It'd be interesting to hear: as you're looking at these four challenges, which do you think is the most difficult? Which do you feel like you've solved? And we can have a little discussion around that for a little bit. Introduce yourself.

Audience

Hi there, thank you. My name is Ami Kotecha. I'm co-founder of Amro Partners; we are a real estate company, and we are now getting involved in a data spin-out. My challenge is as follows: I'm one of the co-founders of the company, and as a leader I'm very keen, of course, that there's AI adoption, there's upskilling, etc., in the company, and of course productivity challenges, where we have them, should be addressed using this technology. I feel like I am often left in the lurch to actually, literally, make all the decisions within the private-sector environment, whereas I think government needs to step in and make some of these decisions on our behalf in terms of model utilization, where we go, what we do with it.

I mean, we are good experimenters, so fortunately we are throwing capital at experimenting. Not every company can afford to do that, or would want to do that, because of the same sort of issues you mentioned right at the start, which are aligned with just the fear of adopting something that is going to break your system or open yourself up to some kind of cyber attack, etc. So how do you see this sort of playing out in the next 6, 8, 12 months, because obviously the technology is moving really fast, as to what role the government is going to play in saying this is safe to use, and this is still experimental and you should worry about it?

Theresa Yurkewich Hoffmann

It's like, go ahead and do it. But then there's a medium risk; a high risk would be something like really critical infrastructure, or something that's impacting people directly. And if it's a high risk, then there's a load of different things that you need to do around transparency with people. There are also prohibited use cases of how to use AI. So I think that's one example where some governments are actually saying, this is what we've deemed safe, and if it's not one of these uses, then we want to see a lot of other checks. In the UK, we have regulation that's looking at third-party suppliers right now. And if they're critical to the infrastructure of the country or not, then there will be new requirements on AI as well, in terms of, like, the updates that go in, transparency around models, explainability.

But then maybe you have the US approach, where you don't have regulation yet. So I think that's one example; it really depends on the country. I think a lot of what we heard yesterday was around, you know, for India, thinking about ethical and responsible AI, but I don't know if you have any regulation in place around that yet. I think, yeah, I think it's very difficult otherwise for a private company, because otherwise you're fighting over who gets to the bottom, who's the cheapest, who's the quickest. And this week I was touring around with different businesses and everyone was thinking, how do we do agents? But no one was thinking about human-centered, ethical, responsible. So I think it does need to come from the government to have a base.

But I noticed that some are maybe more forthcoming with that than others.

Audience

I think, just before, I just wanted to answer your question about the government. There is a data protection and data personalization law that was, you know, legislated last year, November 2025; it is going to be legal from the 27th of October onwards. They are getting a time of, you know, around 18 to 24 months. After that, what you are saying, the addressing of how the data is handled, by the person who is creating the data, who is the principal, or the one who is the repository, all those rules are coming. But presently, I would say it is 0.1% of that responsible AI part which is happening. But over a period of these two years, the preparation is going to happen, where it will slowly get into that mode, actually.

Omeed Hashim

I was just going to say, so she’s a high flyer entrepreneur in the UK, actually. But I was just going to say, in my mind, right, there are a couple of things that we should really push the government to do, right? One is about smartness. Smart data. So they’ve been playing around with this for years and years. So we’ve got quite a lot of open banking applications now. But this can be extended way beyond open banking where different organizations can share data. Like, for instance, in the property market, you know, how do you go through the cycle of all the way from putting an offer in to conveyancing to, I don’t know, valuation to the end, right?

So that's really critical. The other side of it is actually having trust in language models which are built within the U.K. itself, right? And I think most of the... even Serbia is doing that, right? The French have already done it with Mistral. So there are a lot of examples of this, and that's where the government can really help, and that's what we should be lobbying them to do, in my opinion. Any other comment? Oh, yeah. Maybe behind you? Oh, sorry. You had your hand up. You had your hand up first. You go first, and then behind you next.

Audience

Yeah. Yeah. So I am building an agentic AI for vending machines. And I have been an entrepreneur in the corporate world, but until three years ago I was just doing physical stuff, right? Doing products, innovation, the food and beverage sector. One of the challenges which I am seeing is how to build value at a platform level rather than an individual customer level, right? For example, if I offer this vending-machine agentic AI to a PepsiCo, they would say don't do it for Coca-Cola, right? Give it to us only and keep it with us. But UPI, for example, was not a Mastercard or a Visa thing, right? It was for the whole country, right?

So how do you get that kind of traction to build a platform, instead of one very customized for a customer who might say don't give it to anybody else? So that is the key question that I am trying to address, and I do not seem to find answers.

Theresa Yurkewich Hoffmann

I agree, I think that is a challenge in the corporate world. I used to work at Microsoft, and even there it was: if you're using our technology, if we're coming on a panel, then we're on a panel, but we're not having Amazon on the panel or Google on the panel with us. But I think, like you say, it's really figuring out what you have that's so unique, and that actually goes to the value lens, I think: if you have something that's really valuable to people, you make the case that it has to be shared. But it is difficult if you're building it with one customer first, because that almost becomes their IP that they want to keep, right? So something that we are doing when we're working on responsible AI projects is we're looking at all the similar requests that come in, and we're sort of doing the work ourselves in the background, and then we're taking the elements that we need and exposing them to the different customers, and that way we keep that IP. But it is very difficult to get multiple customers on board if they're all competing.

Audience

Yeah, so for example, I built a few IPs in the area of sustainability, like clean air, clean water. I sold one to a company, but that company is not commercializing it. I don't want to name that company, because it didn't want to commercialize it; it wanted to keep that technology, right? So that's a big challenge that I am seeing in the corporate world: a company will buy another company, but it won't implement it for society or for the good, right? So that is the challenge that I am seeing. How do you handle that? Because that is part of the responsible AI as well as the valuable AI part.

Omeed Hashim

yeah I think you’re right and I think you have your own kind of description of this problem but I was in the US a few months ago and I saw, I don’t know whether you’re familiar with SVB, but it’s basically Silicon Valley Bank, right? And they did a presentation to us where they were talking about where all the funds are going, right? And if you actually see what is going on in terms of this, I think it’s about a trillion dollars worth of investment. This investment is flowing only into a handful of companies. What those companies are doing, they literally are stifling everybody else, right? This is a commercial reality, right? But if I was to offer you some options, I would say there shouldn’t be just the IP.

You should be thinking about it more as a service that you could build layers on, right? So you may retain the IP or you may share the IP; it could be co-created, whatever it is, but it's got to have a service model attached to it. Because if PepsiCo buys X and then co-creates, and Coca-Cola buys Y, why would they be buying it, and how would you be able to build on top of that? But, you know, it's a very, very commercially challenging problem. It's been there for many years; this is nothing new.

Audience

As Shri said, exactly like that, UPI beat that, right? So today, UPI compared to a Mastercard or a Visa in India, everyone is using that, right? And there are applications which are attached to UPI, whether it's a Paytm or a Google or whatever, Amazon Pay; all of them are on the platform of UPI, right? So the question I had was, why are IT companies, for example Kainos, right, or an Infosys or an Accenture, not looking at the platform approach, and instead looking at the services approach, where they can put their team manpower in and run projects, right? So I see this as a challenge. I have been talking to the top management of Infosys, Accenture; every time I go with the proposal, they say just do it for a client, you know, and we will attach you as an expert. I don't want to do that; I want to build a platform. There is nobody who is really interested in building that sort of a business, which is path-breaking; it takes a longer time, right? Like UPI, it happened organically. Can these kinds of initiatives happen inorganically? That was the question.

Theresa Yurkewich Hoffmann

I think they are looking at both, yeah. So I think we should take one more question, because we have very few minutes left, so we can talk after. I want to get to the person behind you for his question as well, and then we will do a quick wrap-up.

Audience

Good afternoon. Thanks for covering those areas in the lectures; that was much needed to understand. So you talked about sovereign AI, and then you talked about value or responsible AI. So there might be a few scenarios where, while chasing sovereignty, we might have to bypass value additions or responsibility for the citizens, and the other way round also. So can you discuss those scenarios where you value sovereignty more than responsible AI or value-addition AI, and the other way as well, and when they can be taken into account in parallel?

Theresa Yurkewich Hoffmann

So you're asking about responsible AI and valuable AI, where they link, and where one might be more useful than the other. Where I see responsible AI, I think it can actually incorporate, as a lens, everything, but it's much easier to think of it as separate. I think responsible AI can encompass five things: ethics, trust, bias and fairness, human-centered design, governance and security. Where I think that distinguishes from value is that value is looking beyond financial growth. So a lot of organizations you might work with, or when you think of many organizations in the world, they're looking at how much money will this save me, or how much time or how much productivity. But I think valuable AI is looking at what goes beyond that. And what's the value of that? Does it actually create more well-being in people?

Does it give people time back with their families, for example, or other hobbies they want to do? Valuable is thinking about what the long-term benefit is that this will have in terms of how we change society. Maybe it's going to create a whole bunch of different jobs now in something else. So I think actually, if you're using responsible AI, it will create value. So I still think they go hand in hand, but that's probably how I distinguish it. Is that your question? No? I'm not sure. Maybe Omeed has an answer. You know also. Yeah.

Omeed Hashim

So I think you're saying, what happens when you have to do a trade-off, right, between sovereignty and value? And I think this is a very good question, to be honest, right? Because, so, again, yesterday I was wandering around in the summit; I keep asking people questions about different things, right? And one of the countries that I spoke to, they know that using GPT models or Claude and various other things is a quick route to building what they need to build, because they're there, it is immediate, and it can be done almost without any issues at all. But they're taking the hard route. So they're saying, actually, we don't want to do that, because what if tomorrow we fall out with them, as Europeans are falling out with Americans anyway? So what happens if they turn off the systems? What would we do then? So if you think about it in terms of the speed, the value is actually going with what you've got. But the more challenging thing, which is the value, is: can we actually use this system for our citizens on an ongoing basis? Is that data something that belongs to us? Are the models aligned with what we are doing?

So they want to be able to enable their people in order to deliver the right outcomes. And that would not happen if they just outsourced their sovereignty to the U.S. So I think those are some of the very, very important factors that need to be explained. But ultimately, from a value perspective, Theresa is spot on. I think it's about what the value is to the people that are going to use that system. So if tomorrow we found this fantastic system; like, I give you an example: we've stopped multiple times in terms of the traffic because some VIP was coming out of somewhere, right? And then they just literally closed the road.

So we're sitting there for like half an hour, and then we get going again. That's happened, I think, three or four times so far. So if you were to build the system, you would need to think, you know, what is the value for those taxi drivers, all these members of the public that are going around? And that's the key thing that you need to be able to use AI to achieve, right? This needs to be measurable; it needs to actually help the people themselves. So yes, it's a very tricky trade-off.

Theresa Yurkewich Hoffmann

I think the trade-off point is really good, especially in sustainability as well. So a lot of times organizations might just think, how do we adopt AI as quickly as possible, get people to use it as much as possible, but actually every query that you use has a sustainability impact. And so I think there's a trade-off there, because there might be a sustainability impact, but depending on where you are, you might value training people to use AI more, so you might be okay with that impact, because it's more about getting people comfortable with using it. But then if you are an organization that really values sustainability, you have really strong carbon goals or net-zero goals,

then actually that might be the trade-off that you have. So I think one thing that we're doing when we're working with organizations is we're getting them to make that very difficult decision of: here's high concern, here's low concern. We map out all the harms that we can think of, and all the principles and values that align to them, and they can't put any side by side; they have to move all of them from high to low concern. And very quickly that makes you see what's real for your organization. And I've seen a lot of them put sustainability at the bottom, which to me is a little bit concerning, but it does start to make you really understand your organization and how those trade-offs are going to play.

And that’s what we’re finding in the human one as well.

Audience

So just 10 seconds more, adding to yours, ranking low to high. So out of all these four, sustainability, sovereignty, responsibility and value, how do you rate them, low to high, all the four factors that you have covered in your paper?

Theresa Yurkewich Hoffmann

How do you rate all of this on a scale? How do I rate them? I think that's very difficult. I think I'm putting responsible AI at the top, because, although it's a cheat, it can kind of include sustainability, actually, and I think it will create value, I think. So then I would probably put sovereignty lower than that, but obviously this year has maybe changed that geopolitically. I think I still put responsible AI at the top; I'll make that hard choice. What do you say?

Omeed Hashim

I think I kind of agree. And I think Prime Minister Modi said this himself yesterday: human-centered AI design is part of responsible AI. A few days ago, me and Theresa were talking to someone, and they were describing a system. Now, if you just indulge me for a couple of minutes, let me explain the background of the system, and then you'll see how it's relevant. So they were building a system for a kind of nursing or old people's home. So you may know that the elderly get dehydrated, and they forget to drink water, and then that causes a lot of problems for them. So they built a system where, using AI and vision, they were seeing whether the elderly were having enough liquid in the day or not. Now, that's fantastic; everybody says this is a brilliant idea. But then you think about it: they are monitoring those elderly, both in the areas that are communal as well as where they may be in their bedrooms or whatever. So that brings a challenge. And then the other challenge was, what about the people, the nurses, who are actually hydrating them? Because that could become a negative effect on them, because somebody might be saying you're not doing your job right, right? And what about the family of the elderly, what about the impact on them? So I think it is really important to understand why we build the system, who it affects, how it affects them, and what the long-term benefits are, which brings the value, right? This is why it's four dimensions. None of these are independent; I think they're all related to one another, in one shape or form.

Theresa Yurkewich Hoffmann

Yeah, so we'll work towards wrapping up, because I think we're getting the time check. Can we... but this one switched, because it said 8 and then it said 17 and now it says 10, and then she told me I had 7. This one's right. Okay, well, let's see if there are more questions. Yeah, but we have the takeaways and things to go through also, so I think we'll wrap up, and we can talk to people individually afterwards. Can we skip through some of the slides? Because we wanted to ask... next one, next one, next one. Next one, I think. Okay, so we actually wanted to flip that question and ask this to you in the audience as well. Which one would be your top?

So of those four lenses, sovereign, green, responsible, and valuable AI, which one do you think is an absolute must-have? And you can only pick one. And if this isn't there, it's going to derail the project. Shall I do a show of hands? Who says sovereignty is the most important? Who says that it's green AI, sustainability? Who says it's responsible, and then value? Some people didn't vote. You didn't vote back there. But I think it sounds like a lot of people think responsible and value are the most important. I think I agree. But I think what we wanted to get across is that all of these need to come into play as well. Can we do the next slide?

On that question, though, who has a responsible AI practice in place? Who uses a framework or anything like that? Anybody? Who has a sovereign AI policy in place? No? And who is looking at sustainability? None of us. So that's a takeaway for all of us. I think we wanted to wrap up with how to take this forward. So a couple of points I want to make. The first is that we have taken a lot of the learnings and things you talked about in this and we've turned it into a white paper. There's a link below, but we can share it with you; we've shared it on LinkedIn as well, to talk about these learnings. And we wrapped it up to say, for each of those themes, here are eight to ten things that you could do if you really wanted to take sovereign, green, responsible, and valuable AI forward.

So please check it out; I'm very happy to talk about that paper and give you insight. My thing, the key takeaway for us here, is that no single dimension is the answer. I think that's come out in the scenarios, in the conversations that we've had, and in how we're prioritizing: you can't really have just one. You need all of them if you want to scale that project and really make it to production. The second point is on trade-offs. It was really good that that came up in the conversation: just being aware of the trade-offs that you will have to make, and having something in process to record why you made that decision, is important.

I think a takeaway for everyone here is: think about an AI policy, which talks about how you'll use AI and what you will prioritize. Think about having a responsible AI framework, which is essentially all the questions and things that you want implemented across ethics, or across trust and security. And then really think about how you can turn some of this into numbers. So what are the KPIs that you can actually look at for sustainability, for users, for ethics? Don't just make them a concept of "we will be ethical"; think about what that actually means for you and how you're going to measure it. That's important if you want to get funding and investment, and show the project is a success.

And then finally, think about how you can upskill your teams to understand these concepts and how you can incorporate diverse views. I think that's probably the most important part of building out the responsibility. So we will wrap up. If you want to get in touch with us, here are our details: find us on LinkedIn or send us an email. We can take a couple of minutes after this, since I know there were two questions in the audience that we might not have got to. But otherwise, we hope this session was useful. If you want to give us feedback, here's a bigger QR code, so if you want to stay in touch, fill this out. Let us know if there's anything we can improve about the session or any questions that you have; we're super happy to hear that. Otherwise, just a big thank you for your participation, and yes, we hope you have a good rest of the day and a good weekend.

Omeed Hashim

Yeah, I was just going to say, I think those were great questions about trade-offs, and absolutely the right questions to ask, because none of these are unique. Sorry, please, go ahead.

Audience

Yeah, like you were talking about trade-offs, I just wanted to say: okay, every model has its own aspects, pros and cons, or, as you say, different dimensions. I've got most of my answers from those questions, but I just wanted to ask: if we're building something and taking on these aspects, like responsible AI and valuable AI, won't we be missing some other aspects? As he said about responsibility, if we are taking accuracy and fairness... if it makes it easier to speak in Hindi, I understand. It's okay, but fairness, okay. If we are doing... Sorry, sorry. Again.

Theresa Yurkewich Hoffmann

I think there's no issue. You can ask us by email as well; it's not an issue. We're more than happy to respond if you want to ask.

Omeed Hashim

Just a question.

Factual Notes: Claims verified against the Diplo knowledge base (7)
Correction (high confidence)

“Only about 30 % of AI pilots progress to production.”

The knowledge base reports that almost 80 % of AI pilots do not make it to production, implying roughly 20 % succeed, not 30 % [S6].

Confirmed (high confidence)

“A lack of trust in the technology, its data, and its societal impact is the principal barrier.”

The source highlights mistrust stemming from concerns over data and model control as a key obstacle to AI project adoption [S1].

Confirmed (medium confidence)

“Sovereignty hinges on who controls data and models; reliance on foreign AI services creates vulnerability.”

The knowledge base outlines AI sovereignty dimensions that include control over data, models, training, and operational governance, matching the claim [S22].

Additional Context (medium confidence)

“Public‑health X‑ray triage model required far more compute, power and cooling water than the host region could provide, making the solution financially and politically untenable.”

Sources discuss the cooling and water challenges of high-compute AI workloads and the impact of electricity and water shortages on data-centre feasibility in various regions [S89] and [S92].

Additional Context (low confidence)

“Traffic‑light optimisation reduced average commute times but rerouted traffic through low‑income neighbourhoods, worsening pedestrian safety and provoking community backlash.”

A discussion of a transport-focused AI scenario notes equity and community impact concerns when AI-driven traffic management is deployed [S75].

Confirmed (low confidence)

“Voice‑cloning scams illustrate AI‑generated harms (e.g., Romanian voice‑cloning scams).”

The knowledge base documents the use of deep-fake and voice-cloning technology in scams, confirming that such AI-generated fraud exists [S84] and [S85].

Additional Context (medium confidence)

“Unsustainable AI systems incur higher operating costs and are unlikely to scale, linking environmental impact directly to economic viability.”

Sources note that high energy and cooling demands raise operating expenses and affect scalability of AI deployments, especially in regions with limited power and water resources [S89] and [S92].

External Sources (95)
S1
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa Yurkewich Hoffmann, Omeed Hashim
S2
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — Audience: Individual from Senegal named Yuv (role/title not specified)
S3
Building the Workforce: AI for Viksit Bharat 2047 — Audience: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S4
NRI Collaborative Session Navigating Global Cyber Threats Via Local Practices — Audience: Dr. Nazar (specific role/title not clearly mentioned)
S5
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
S6
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S7
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the UN Security Council, a significant discussion unfolded regarding the role of artificial i…
S8
Certifying humanity: Labeling content amid AI flood — The erosion of trust did not begin when AI became highly intelligent. It began when synthetic content became abundant. Tex…
S9
Deepfakes and the AI scam wave eroding trust — Author: Slobodan Kovrlija. Deepfakes force an uncomfortable reassessment of how trust works online. For decades, digital t…
S10
Who Watches the Watchers Building Trust in AI Governance — So there is no end to the story of how regulators should design the regulations. That is the main question. All countrie…
S11
Technology Regulation and AI Governance Panel Discussion — Different countries require different approaches based on their regulatory context and capture by interest groups
S12
US regulators to decide the path for AI regulation — Prompted by the rise of generative artificial intelligence systems (AI) such as OpenAI’s ChatGPT, US lawmakers are curre…
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — I think all of the above to some extent. Part of why we start with principles in our governance program is I think it’s …
S14
Health Inequality Monitoring — The process of inequality monitoring does not stop with the reporting of data, but must continue on to its translation f…
S15
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S16
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S17
HealthAI: The Global Agency for Responsible AI in Health — Responsible AI is characterised by AI technologies that align with established standards and ethical principles, priorit…
S18
Successes & challenges: cyber capacity building coordination | IGF 2023 — A sustainability outlook is crucial for lasting and effective impact in cyber capacity building. Projects lacking sustai…
S19
Democratizing AI: Open foundations and shared resources for global impact — The speakers consistently emphasised the need for broader engagement and participation. They highlighted the importance …
S20
DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023 — By involving various stakeholders, including community members, organisations, and government bodies, this model ensures…
S21
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Hisham Ibrahim: I’ll also mention three quick ones, looking across my service region, trying to give different examples….
S22
Discussion Report: Sovereign AI in Defence and National Security — The presentation outlines six key dimensions of AI sovereignty: data control, model control, training and alignment over…
S23
WS #102 Harmonising approaches for data free flow with trust — Dave Pendle: Yeah, thanks, Saman. Thanks for having me and good morning to everyone. My name is Dave Pendle. I’m an …
S24
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — 100 % trust only on machines is still a little far. So people in the loop is definitely which built trust for all of us….
S25
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen, Head of Regulatory Policy for Global Challenges at the OECD, positioned sandboxes within broader regulato…
S26
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S27
Keynote-Martin Schroeter — “while more than two-thirds of global organizations are already heavily invested in AI, almost half still struggle to s…
S28
Scenarios and their Implications — In the first section, we explain why scenarios are a useful tool to address the uncertainties around the future of work …
S29
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S30
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Ignoring the wider context and blindly implementing digital solutions can inadvertently increase the digital divide. It …
S31
Research Publication No. 2014-6 March 17, 2014 — Among the bigger picture insights gained from our review is the high degree to which the economic, political, organizati…
S32
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Lidia Stepinska Ustasiak: Excellencies, distinguished delegates, ladies and gentlemen, good afternoon. My name is Lidia …
S33
Exploring the power of AI: Diplomatic language as Turing Test — Trade-offs form the bedrock of any diplomatic treaty. They embody a delicate balance between give-and-take, a nuanced ta…
S34
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S35
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 127. The Inspector found that many organizations take a narrow approach to learning and talent management – one that is …
S36
AN INTRODUCTION TO — (mainly former socialist countries) where it became obvious that the development of society is a much more complex proce…
S37
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S38
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S39
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Our country actively contributes to European initiatives strengthening Europe’s technological leadership. In this contex…
S40
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S41
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches B…
S42
Comprehensive Report: European Approaches to AI Regulation and Governance — This discussion revealed gaps in current regulatory approaches, which focus primarily on technical performance and funda…
S43
AI That Empowers Safety Growth and Social Inclusion in Action — This discussion revealed both significant progress and substantial challenges in implementing responsible AI governance….
S44
AI & Diplomacy: Managing New Frontiers – ADF 2024 — The discussion concluded that although regulatory frameworks recognise the importance of these issues, the gap between i…
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggest…
S46
Responsible AI in India Leadership Ethics & Global Impact — “There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI t…
S47
Safeguarding Children with Responsible AI — High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests…
S48
Technology Regulation and AI Governance Panel Discussion — Different countries require different approaches based on their regulatory context and capture by interest groups
S49
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure com…
S50
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S51
Building Sovereign and Responsible AI Beyond Proof of Concepts — Green AI addresses both environmental impact and economic viability. The speakers argued that these concerns are intrinsi…
S52
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S53
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant argues that AI solutions are sustained and scalable when they actually address real problems and help so…
S54
Artificial intelligence — Sustainable development
S55
Living with the genie: Responsible use of genAI in content creation — Connecting these discourses is the realization that technology intertwines deeply with societal goals, such as promoting…
S56
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — It’s notable that government representatives openly acknowledge significant gaps and failures in current AI governance, …
S57
WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) — 25. The number of countries with expertise and capacity in AI is limited. At the same time, the technology of AI is adva…
S58
Global AI Policy Framework: International Cooperation and Historical Perspectives — So I think that today’s problem, as well as the IP policies, that how to facilitate those creation based on the IP mater…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S60
Building a Digital Society, from Vision to Implementation — – Chukwuemeka Cameron Economic | Sociocultural Hines cites research from Gary Marcus presented at Web Summit showing t…
S61
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen: Yeah, I think this issue of trust is key. One thing the OECD does is a driver of trust in government surv…
S62
AI agents offer major value but trust and data gaps remain — AI agents could drive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S63
AI as critical infrastructure for continuity in public services — first definitely not technology because I think we’ve seen technology is always almost ahead very true over the last cou…
S64
Building Sovereign and Responsible AI Beyond Proof of Concepts — Artificial intelligence | Building confidence and security in the use of ICTs. Theresa points out that only a small frac…
S65
Keynote-Martin Schroeter — “while more than two-thirds of global organizations are already heavily invested in AI, almost half still struggle to s…
S66
https://dig.watch/event/india-ai-impact-summit-2026/keynote-martin-schroeter — or never makes it out of the experimentation phase. And what we’re seeing is not an innovation problem. The innovation i…
S67
Blockchain-Based Public Procurement to Reduce Corruption — The project is anchored in a software PoC to uncover, using a bottom-up approach, key capabilities and limitations assoc…
S68
AI can reshape the insurance industry, but carries real-world risks — AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection. Acco…
S69
Micro and macro philosophy — My hunch is that we may consider revisiting or even ‘retiring’ the concept of ‘freedom’ (even scientists are considering…
S70
WORKING PAPER — The current global landscape is marked by an array of disparate data regulations, a situation that presents substantial imp…
S71
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Nidhi Singh: Yeah, thank you so much for the question. So I think this is something we’ve broadly said this in the in…
S72
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — But you started to look upon through different lenses. All that I need to do is to look through different lens. But I st…
S74
https://dig.watch/event/india-ai-impact-summit-2026/ensuring-safe-ai_-monitoring-agents-to-bridge-the-global-assurance-gap — And so I think there will be a lot of questions around how do you weigh up all these challenges, again, knowing that eve…
S75
https://dig.watch/event/india-ai-impact-summit-2026/building-sovereign-and-responsible-ai-beyond-proof-of-concepts — then actually that might be the trade -off that you have. So I think one thing that we’re doing when we’re working with …
S76
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S77
UNESCO Recommendation on the ethics of artificial intelligence — 118. Member States should work with private sector companies, civil society organizations and other  stakeholders, inclu…
S78
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 137. UNHCR has established a centralized systematic learning centre overseeing all learning solutions across th…
S79
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Skills Development: African countries can develop policies that promote skills development in areas related to AI. This …
S80
Press Conference: Closing the AI Access Gap — Trust, accessibility, inclusivity, and collaboration are seen as crucial pillars for successfully harnessing AI’s potent…
S81
#205 L&A Launch of the Global CyberPeace index — Wisniak argues that AI governance discussions often focus too much on hypothetical future risks while ignoring current h…
S82
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — Cormann outlined the OECD’s comprehensive approach to supporting policymakers through four key areas. First, the organis…
S83
Shadow AI and poor governance fuel growing cyber risks, IBM warns — Many organisations racing to adopt AI are failing to implement adequate security and governance controls, according to IB…
S84
Disinformation and Misinformation in Online Content and its Impact on Digital Trust — Tara Harris provided concrete examples of how bad actors exploit these technologies, describing how Prosus has been targ…
S85
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various wa…
S86
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S87
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — This comment identifies a critical gap between proof-of-concept success and real-world adoption. It’s insightful because…
S88
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration R…
S89
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The cooling challenge becomes complex as compute requirements scale, with different cooling solutions needed for varying…
S90
DPI+H – health for all through digital public infrastructure — A global recognition of DPI’s foundational value in healthcare is apparent, though this acknowledgment is coupled with a…
S91
ACKNOWLEDGEMENTS — Data centres are key to today’s cloud services. To optimize performance, they need to be located where access to high-ca…
S92
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — South Africa experiences electricity challenges and water shortages, requiring expensive backup power and affecting cool…
S93
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Those who design, train, and deploy these systems will influence not only over individual users, but also the informatio…
S94
Annex 5 — – ■ Data integrity risks may occur when people choose to rely solely upon paper printouts or PDF reports from computeriz…
S95
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Concerns have been raised regarding the misuse of surveillance technology in the Middle East and North Africa (MENA) reg…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Theresa Yurkewich Hoffmann
9 arguments, 170 words per minute, 4540 words, 1599 seconds
Argument 1
Lack of trust is a primary reason only ~30 % of AI pilots reach production, with organizations failing to consider trust dimensions such as reliability, data handling, impact on jobs, and societal effects.
EXPLANATION
Theresa explains that the low conversion rate of AI pilots to production is largely due to insufficient trust. Organizations often overlook how reliable the AI will be, how data is managed, and the broader impacts on employment and society.
EVIDENCE
She cites that only 30 % of AI projects move to production and links this to a lack of trust, noting that trust encompasses organizational confidence, data sharing, output reliability, societal impact, and job implications [11-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The low conversion rate of AI pilots and its link to trust issues is documented in the sovereign and responsible AI briefing and in a report on siloed data showing that ~80 % of pilots fail to reach production [S1][S6].
MAJOR DISCUSSION POINT
Trust as a barrier to AI adoption
Argument 2
The rise in AI incidents worldwide (e.g., voice‑cloning scams, AI‑generated books without attribution, biased facial‑recognition at borders) erodes public confidence and hampers adoption.
EXPLANATION
Theresa highlights a growing number of AI‑related harms that undermine public trust. Specific incidents illustrate how misuse can lead to scams, misinformation, and discrimination, discouraging broader AI deployment.
EVIDENCE
She references the OECD AI Observatory’s monitor showing 600 incidents in December 2025, and provides examples such as voice-cloning scams in Romania, AI-generated books in Cairo lacking human oversight, and biased facial-recognition at borders [20-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Erosion of public trust due to synthetic content, deepfakes and AI-driven scams is highlighted in sources on content labeling and deepfake scams [S8][S9].
MAJOR DISCUSSION POINT
AI harms reducing public trust
Argument 3
Introduces four lenses—Sovereignty, Green (sustainability), Responsible AI, and Valuable AI—as a holistic approach to building trust and preventing harms.
EXPLANATION
Theresa presents the 4D framework, arguing that evaluating AI projects through these four dimensions helps anticipate and mitigate risks, ensuring trustworthy and valuable outcomes.
EVIDENCE
She describes the four dimensions: sovereignty (control over data and models), green (environmental sustainability), responsible AI (ethics, bias, governance), and valuable AI (real-world benefit), as essential for scaling AI safely [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 4-dimensional framework (Sovereignty, Green, Responsible, Valuable) is described in the sovereign and responsible AI presentation [S1].
MAJOR DISCUSSION POINT
4D framework for trustworthy AI
Argument 4
Six common failure categories: (1) adoption/impact gap, (2) governance failures, (3) misalignment with societal goals, (4) sovereignty issues, (5) sustainability pressures, and (6) change‑management challenges.
EXPLANATION
Theresa outlines why many proof‑of‑concept AI projects do not succeed, pointing to gaps between design and real‑world use, weak governance, misaligned objectives, lack of control, environmental constraints, and cultural resistance.
EVIDENCE
She lists the six challenges while discussing why proof-of-concepts fail, covering adoption, governance, misalignment, sovereignty, sustainability, and change-management [42-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The six failure categories are enumerated in the same sovereign AI briefing [S1].
MAJOR DISCUSSION POINT
Root causes of POC failures
Argument 5
Governments need to set baseline safety standards; regulatory approaches differ (e.g., UK’s third‑party AI supplier rules vs. the US’s lack of formal regulation).
EXPLANATION
Theresa argues that clear governmental regulations are essential for high‑risk AI applications, noting that the UK is moving toward stricter supplier rules while the US currently lacks comparable legislation.
EVIDENCE
She explains high-risk AI requires transparency, explainability, and third-party supplier regulation in the UK, contrasting this with the US’s more permissive stance [213-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comparisons of UK third-party supplier rules and the US regulatory gap are discussed in analyses of AI governance and regulator approaches [S10][S11][S12].
MAJOR DISCUSSION POINT
Regulatory landscape for AI
Argument 6
Mapping high‑ and low‑concern harms helps decide which dimension to prioritize, though no single lens can be ignored.
EXPLANATION
Theresa suggests a practical method of ranking harms by concern level to guide which of the four dimensions should receive focus, emphasizing that all dimensions remain important.
EVIDENCE
She describes a process where organisations map harms from high to low concern, revealing which issues (e.g., sustainability) are most critical for them [306-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The method of mapping harms to prioritize dimensions is presented in the 4D framework discussion [S1].
MAJOR DISCUSSION POINT
Prioritisation of trust dimensions
Argument 7
Create an AI policy that defines priorities across the four dimensions; adopt a responsible‑AI framework with concrete questions and safeguards.
EXPLANATION
Theresa recommends organisations develop a formal AI policy that outlines how each of the four lenses will be addressed, and implement a responsible‑AI framework to embed ethical and security safeguards.
EVIDENCE
She mentions a white paper summarising eight-to-ten actions per dimension and urges creation of an AI policy and responsible-AI framework with clear safeguards [342-355].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations to draft an AI policy covering the four lenses and to adopt a responsible-AI framework appear in the sovereign AI briefing and safety-growth discussions [S1][S15].
MAJOR DISCUSSION POINT
Policy and framework recommendations
Argument 8
Develop measurable KPIs for sustainability, ethics, and user impact to demonstrate value and secure funding.
EXPLANATION
Theresa stresses the importance of quantifying AI outcomes through key performance indicators, enabling organisations to prove value, attract investment, and meet sustainability goals.
EVIDENCE
She advises defining KPIs for sustainability, ethics, and user impact, moving beyond vague commitments to measurable targets [355-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance to translate goals into measurable sustainability, ethics and user-impact KPIs is provided in the sovereign AI briefing and sustainability outlook literature [S1][S18].
MAJOR DISCUSSION POINT
KPIs for AI governance
Argument 9
Upskill teams, incorporate diverse perspectives, and engage government to promote smart data sharing and domestic model development.
EXPLANATION
Theresa highlights capacity building as essential, urging organisations to train staff, include varied viewpoints, and collaborate with governments to foster local data ecosystems and sovereign AI capabilities.
EVIDENCE
She calls for upskilling, diverse input, and government engagement to support responsible AI and sovereign model development [360-362].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building, stakeholder diversity and government partnership for sovereign AI are emphasized in the sovereign AI discussion and digital sovereignty networks [S1][S20].
MAJOR DISCUSSION POINT
Capacity development and stakeholder engagement
Omeed Hashim
8 arguments, 161 words per minute, 2796 words, 1039 seconds
Argument 1
Sovereignty means control over data and models; loss of this control undermines trust and can cause project failure.
EXPLANATION
Omeed explains that AI sovereignty—having authority over where data resides and how models are built—directly influences trust. Without this control, projects risk failure due to uncertainty about data usage and model updates.
EVIDENCE
He discusses the importance of control over data and models, linking loss of control to reduced trust and potential failure, and cites Serbia’s plan to build its own LLM as an illustration [137-144][145-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of data and model control for trust is detailed in the sovereign AI dimensions and the discussion report on sovereign AI in defence [S1][S22].
MAJOR DISCUSSION POINT
AI sovereignty and trust
Argument 2
Green AI links environmental impact to economic viability; more sustainable systems are also cheaper and more scalable.
EXPLANATION
Omeed argues that environmental sustainability and cost efficiency are intertwined; greener AI solutions reduce carbon footprints and operational expenses, making them more scalable.
EVIDENCE
He describes how sustainable AI reduces greenhouse-gas emissions and costs, referencing cloud computing economics, massive data-center electricity consumption, and the link between sustainability and scalability [154-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Links between AI sustainability, reduced emissions and cost efficiency are discussed in the green dimension and sustainability outlook sources [S1][S18].
MAJOR DISCUSSION POINT
Environmental and economic benefits of Green AI
Argument 3
Responsible AI encompasses ethics, bias mitigation, governance, security, and human‑centered design to ensure trustworthy outcomes.
EXPLANATION
Omeed outlines that responsible AI requires ethical standards, bias checks, robust governance, security measures, and designs that centre human needs, all of which build trust and prevent harm.
EVIDENCE
He details responsible AI components such as governance, ethics, bias, security, and human-centered design, noting their role in fostering trust and safe AI deployment [168-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The components of responsible AI-ethics, bias mitigation, governance, security, human-centered design-are outlined in the responsible AI lens and safety-growth discussions [S1][S15].
MAJOR DISCUSSION POINT
Components of responsible AI
Argument 4
Valuable AI focuses on delivering real‑world benefits, measurable outcomes, and societal well‑being beyond mere cost savings.
EXPLANATION
Omeed stresses that AI should generate tangible societal value, not just financial efficiency. Measuring impact on wellbeing, job creation, and broader societal goals is essential for true value.
EVIDENCE
He provides examples such as the UAE’s ambition to multiply workforce productivity and stresses the need for clear, measurable objectives to avoid dead-weight projects [181-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The valuable AI dimension emphasizing societal outcomes and measurable impact is described in the 4D framework and democratizing AI literature [S1][S19].
MAJOR DISCUSSION POINT
Defining and measuring AI value
Argument 5
National AI sovereignty—building and hosting models domestically (e.g., Serbia’s own LLM)—is crucial for control and long‑term trust.
EXPLANATION
Omeed points out that countries seeking AI sovereignty aim to develop and host models within their borders to retain control over data and avoid dependence on foreign providers, thereby sustaining trust.
EVIDENCE
He recounts a conversation with Serbian officials who plan to build large language models locally to maintain control over AI systems [145-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building domestic large language models for sovereignty is highlighted in the sovereign AI report and examples such as Serbia’s initiative [S22][S1].
MAJOR DISCUSSION POINT
Domestic AI model development
Argument 6
Organizations must balance competing priorities; for example, choosing a fast external model may boost short‑term value but sacrifice sovereignty and long‑term control.
EXPLANATION
Omeed explains that while external AI services can deliver quick value, they introduce risks to sovereignty and future autonomy, forcing organisations to weigh immediate benefits against strategic control.
EVIDENCE
He describes scenarios where countries consider using external models like GPT or Claude for speed, but worry about losing control if providers withdraw services, highlighting the trade-off between value and sovereignty [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between rapid value from external models and loss of sovereignty is discussed in the trade-off analysis of the 4D framework [S1].
MAJOR DISCUSSION POINT
Trade‑offs between value and sovereignty
Argument 7
A service‑oriented, co‑creation model is suggested to retain IP while enabling multi‑client platforms.
EXPLANATION
Omeed proposes shifting from pure IP ownership to a service‑based approach where AI capabilities are offered as shared services, allowing co‑creation and broader adoption across multiple clients.
EVIDENCE
He suggests offering AI as a layered service, retaining or sharing IP, and co-creating with clients to overcome commercial challenges of exclusive IP [273-276].
MAJOR DISCUSSION POINT
Service model for AI commercialization
Argument 8
Encourage governments to support sovereign AI initiatives and establish clear regulatory baselines for high‑risk applications.
EXPLANATION
Omeed calls for governmental action to back sovereign AI projects and to set definitive safety standards for high‑risk AI, ensuring trust and long‑term viability.
EVIDENCE
He reiterates the importance of sovereignty, noting that loss of trust leads to crises, and stresses that governments must provide clear regulatory frameworks for critical AI systems [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for government backing of sovereign AI and clear safety standards for high-risk AI appear in regulator-watching analyses and policy panels [S10][S11][S12][S16][S22].
MAJOR DISCUSSION POINT
Government role in sovereign AI
Audience
5 arguments, 154 words per minute, 1127 words, 437 seconds
Argument 1
Private sector leaders seek clearer governmental guidance on safe AI use and model selection.
EXPLANATION
Ami Kotecha expresses that private companies need government‑defined safety standards and guidance on which AI models are acceptable, especially for high‑risk or critical applications.
EVIDENCE
She describes her company’s need for government direction on AI safety, model utilization, and regulatory expectations for high-risk use cases [210-228].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Private sector demand for clear AI safety standards and model guidance is reflected in regulator-watching analyses and policy panel discussions [S10][S11][S12].
MAJOR DISCUSSION POINT
Demand for government AI guidance
Argument 2
Upcoming data‑protection legislation will gradually raise responsible‑AI compliance, but current adoption remains very low.
EXPLANATION
An audience member notes that a new data‑protection law will be enforced soon, but presently only a tiny fraction of organisations practice responsible AI, indicating a lag between legislation and implementation.
EVIDENCE
The speaker references a law slated for October 2025, predicts an 18-to-24-month rollout, and states that only 0.1 % of organisations currently practice responsible AI, expecting gradual improvement [230-235].
MAJOR DISCUSSION POINT
Legislative impact on responsible AI
Argument 3
Participants asked how to rank the four lenses and when one (e.g., sovereignty) should outweigh others such as responsibility or value.
EXPLANATION
An audience question seeks clarification on prioritising the 4D dimensions, asking for scenarios where sovereignty might be favoured over responsible or valuable AI, and vice‑versa.
EVIDENCE
The audience member explicitly asks for discussion of scenarios where sovereignty is prioritized over responsibility or value, and how the lenses can be balanced [279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mapping and prioritisation of the four lenses, including scenarios where sovereignty may dominate, is addressed in the sovereign AI framework [S1].
MAJOR DISCUSSION POINT
Prioritisation of 4D lenses
Argument 4
Companies struggle to develop platform‑level AI solutions because large clients treat the technology as proprietary IP, limiting broader adoption.
EXPLANATION
A participant describes difficulty in scaling AI offerings when major customers demand exclusive ownership, preventing the creation of shared platforms that could serve multiple users.
EVIDENCE
He explains that a vending-machine AI built for PepsiCo cannot be offered to Coca-Cola, illustrating how client-specific IP demands hinder platform development [253-264].
MAJOR DISCUSSION POINT
IP constraints on platform scaling
Argument 5
Corporate reluctance to commercialize socially beneficial AI (e.g., sustainability IP) creates tension between responsible AI and value generation.
EXPLANATION
An audience member points out that some companies acquire sustainable‑technology IP but choose not to commercialise it, raising concerns about responsible AI practices and missed societal value.
EVIDENCE
She shares that a sustainability-focused IP was sold to a company that refused to commercialise it, highlighting a conflict between responsible AI and delivering value [266].
MAJOR DISCUSSION POINT
Responsibility vs. commercial value
Agreements
Agreement Points
A four‑dimensional (4D) framework—sovereignty, green (sustainability), responsible AI and valuable AI—is essential to build trust and successfully scale AI projects.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Introduces four lenses—Sovereignty, Green, Responsible AI, and Valuable AI—as a holistic approach to building trust and preventing harms. Sovereignty means control over data and models; Green AI links environmental impact to economic viability; Responsible AI encompasses ethics, bias, governance, security and human‑centered design; Valuable AI focuses on real‑world benefit and measurable outcomes.
Both speakers argue that evaluating AI projects through the four lenses helps anticipate risks, ensure trust and achieve scalable, beneficial outcomes [64-66][137-144][154-165][168-176][181-196].
POLICY CONTEXT (KNOWLEDGE BASE)
The combination of sovereignty, responsible, and green AI mirrors recent policy analyses that link sovereign AI strategies with responsible practices and emphasize the economic and environmental benefits of green AI [S51][S56].
There are inherent trade‑offs between AI sovereignty and the value delivered; organisations must balance control with real‑world benefits.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Discusses why proof‑of‑concepts fail, highlighting sovereignty issues and the need to prioritize dimensions based on high‑ and low‑concern harms. Explains that choosing fast external models may boost short‑term value but sacrifice sovereignty and long‑term control, requiring careful trade‑off decisions.
Both recognise that sovereignty and value can conflict and that organisations need to weigh these dimensions when designing AI systems [306-313][292-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the tension between national AI control (sovereignty) and delivering societal and economic value, noting trade-offs similar to those described in sovereignty-sustainability debates [S56][S51].
Governments should establish baseline safety standards and clear regulatory frameworks for high‑risk AI, with differing national approaches noted.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
Governments need to set baseline safety standards; the UK is moving toward third‑party supplier rules while the US lacks formal regulation. Encourages governments to support sovereign AI initiatives and set clear regulatory baselines for high‑risk applications. Private‑sector leaders seek clearer governmental guidance on safe AI use and model selection.
All three parties call for stronger governmental regulation and guidance to ensure trustworthy AI, noting variations between the UK and US models [213-224][292-298][210-228].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for baseline safety standards align with multi-level regulatory approaches advocated at the IGF and in national panel discussions, which stress the need for clear, risk-based frameworks across jurisdictions [S48][S49][S41].
Sustainability (green AI) is tightly linked to economic viability; greener systems are cheaper and more scalable.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Highlights trade‑offs where sustainability impacts may be accepted for rapid adoption, but organisations with strong carbon goals must prioritize green AI. States that environmental sustainability reduces costs and improves scalability, making green AI essential.
Both emphasize that environmental considerations are not optional but affect cost and scalability, so sustainability must be integrated into AI design [306-310][154-165].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on Green AI demonstrate that environmentally efficient models also reduce operational costs and improve scalability, supporting the link between sustainability and economic viability [S51][S52].
Similar Viewpoints
Both define responsible AI as a set of ethical, governance and human‑centred safeguards that are necessary for trustworthy AI [168-176].
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Responsible AI encompasses ethics, bias mitigation, governance, security and human‑centered design. Responsible AI includes ethics, bias, governance, security and human‑centered design.
Both stress the need for clear government policies and regulations to guide AI adoption in the private sector [213-224][210-228].
Speakers: Theresa Yurkewich Hoffmann, Audience (Ami Kotecha)
Governments need to set baseline safety standards and provide guidance for high‑risk AI use. Private‑sector leaders seek clearer governmental guidance on safe AI use and model selection.
Both acknowledge that responsible AI practices are currently scarce and that regulatory developments are needed to improve adoption [11-13][230-235].
Speakers: Theresa Yurkewich Hoffmann, Audience (legislation comment)
Only a small fraction of AI projects reach production due to lack of trust and governance. Upcoming data‑protection law will raise responsible‑AI compliance, but current adoption is very low.
Unexpected Consensus
Audience members overwhelmingly identified responsible and valuable AI as the most critical lenses, despite earlier emphasis on sovereignty and sustainability as equally vital.
Speakers: Theresa Yurkewich Hoffmann, Audience
Theresa presents all four lenses as essential and asks participants to pick the most important. Audience votes that responsible and valuable AI are the top priorities.
The audience’s preference for responsible/value over sovereignty or green AI was not anticipated given the presenters’ balanced framing, indicating a strong demand for ethical and impact-focused AI first [320-333][279].
Both speakers and audience acknowledge that only a tiny proportion of organisations currently practice responsible AI, yet they all agree on the urgency to develop frameworks and KPIs.
Speakers: Theresa Yurkewich Hoffmann, Audience (legislation comment)
Theresa notes that many AI pilots fail due to governance and trust gaps. Audience notes that only 0.1 % of organisations practice responsible AI today.
The convergence on the extremely low current adoption of responsible AI, despite different contexts (pilot failures vs. legislative rollout), was an unexpected point of agreement highlighting a shared perception of a critical gap [11-13][230-235].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry and governance reports repeatedly note the gap between responsible AI principles and actual practice, urging the creation of concrete frameworks and performance indicators [S43][S44][S45].
Overall Assessment

There is strong consensus among speakers and participants that AI deployment must be guided by a multi‑dimensional framework covering sovereignty, sustainability, responsibility and value; that trade‑offs between these dimensions need explicit management; and that government regulation and measurable KPIs are essential to build trust and scale AI responsibly.

High consensus on the need for a holistic, regulated and measurable approach, suggesting that future policy and practice are likely to converge on integrated frameworks that address all four lenses.

Differences
Different Viewpoints
Prioritisation of the four lenses (sovereignty, green, responsible AI, valuable AI)
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
Theresa suggests it is difficult to rank the lenses and places responsible/value higher while putting sovereignty lower [317-319]. Omeed argues that sovereignty is a critical, sometimes non-negotiable, dimension for trust and long-term control, and may outweigh value in certain scenarios [292-298]. The audience asks for a ranking and later votes for responsible/value as most important, seeking scenarios where sovereignty might dominate [315-319][279].
The speakers differ on which of the four dimensions should be considered the most essential. Theresa views all dimensions as important but leans toward responsible/value AI as the top priority, whereas Omeed stresses that sovereignty can be paramount for trust and may need to be prioritised over value. The audience seeks clarification and shows a split preference, indicating no consensus on ranking.
POLICY CONTEXT (KNOWLEDGE BASE)
The disagreement over lens prioritisation reflects the broader constructive debates on implementation details observed in AI policy roadmaps and research agendas [S45].
How to overcome IP constraints and build platform‑level AI solutions
Speakers: Audience (Ami Kotecha and vending‑machine entrepreneur), Theresa Yurkewich Hoffmann, Omeed Hashim
Audience members describe how exclusive IP demands from large clients prevent the creation of shared platforms, limiting broader adoption [253-264]. Theresa acknowledges the difficulty and mentions internal reuse of components but notes challenges in scaling across competing customers [265-267]. Omeed proposes a service-oriented, co-creation model that retains or shares IP while enabling multi-client platforms [273-276].
There is a disagreement on the best strategy to address proprietary IP that blocks platform development. The audience sees IP exclusivity as a barrier, Theresa points to internal component sharing as a partial remedy, while Omeed recommends a service‑based, co‑creation approach to retain IP yet allow broader use.
POLICY CONTEXT (KNOWLEDGE BASE)
WIPO discussions and global AI policy frameworks highlight the challenges posed by intellectual-property regimes for AI development and call for lower-cost, open-access mechanisms to enable platform-level solutions [S57][S58].
Extent and nature of government involvement in AI governance
Speakers: Audience (Ami Kotecha), Theresa Yurkewich Hoffmann, Omeed Hashim
The audience calls for clear governmental guidance on safe AI use, model selection and high-risk regulations [210-228]. Theresa outlines the need for regulation (e.g., UK third-party supplier rules) but also stresses private-sector responsibility, upskilling and internal policies [213-224][342-362]. Omeed urges governments to back sovereign AI initiatives and set definitive safety baselines for high-risk applications [292-298][138-144].
All parties agree government has a role, but they diverge on how extensive it should be. The audience seeks direct, prescriptive guidance; Theresa emphasizes a balanced approach combining regulation with private‑sector actions; Omeed focuses on sovereign AI support and clear safety standards, indicating differing expectations of governmental scope.
POLICY CONTEXT (KNOWLEDGE BASE)
Diverse viewpoints on governmental roles are documented in calls for inclusive AI governance that goes beyond state actors and in analyses of multi-level regulatory models across regions [S40][S48][S49][S41].
Unexpected Differences
Perceived level of responsible‑AI adoption
Speakers: Audience (data‑protection law speaker), Theresa Yurkewich Hoffmann
The audience claims only 0.1 % of organisations currently practice responsible AI, despite upcoming data-protection legislation [234-235]. Theresa implies many customers are already “clued up” on responsible AI and that mapping harms is a common practice [202-203][306-313].
The audience’s statement suggests a near‑nonexistent uptake of responsible AI, whereas Theresa’s remarks convey that a substantial number of organisations already engage with responsible‑AI practices, revealing a surprising mismatch in perceived adoption levels.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments indicate low adoption of responsible AI practices across organisations, underscoring the need to bridge the principle-practice gap highlighted in governance reviews [S43][S44].
Overall Assessment

The discussion reveals several key points of contention: (1) how to rank the four AI trust dimensions, especially the relative weight of sovereignty versus responsible/value AI; (2) the optimal approach to handling IP and building platform‑level AI services; (3) the appropriate scope of government regulation and guidance; and (4) a surprising gap between perceived and actual responsible‑AI adoption. While participants share a common goal of trustworthy, scalable AI, they diverge on priorities, implementation pathways, and the current state of practice.

Moderate to high – the disagreements centre on strategic priorities and policy approaches rather than factual disputes, which could impede coordinated action and slow the development of unified frameworks for AI governance.

Partial Agreements
Theresa and Omeed concur that AI projects must be evaluated through multiple lenses (sovereignty, sustainability, responsibility, value) and that a comprehensive, multi‑dimensional strategy is essential for trustworthy AI deployment.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Both present a four-dimensional (sovereignty, green, responsible AI, valuable AI) framework for building trust and preventing harms [64-66][137-144][154-165][168-176][181-196]. Both state that no single lens is sufficient and that a holistic approach is required for scaling AI [349-351].
Takeaways
Key takeaways
Only about 30 % of AI pilots reach production, largely due to trust deficits across reliability, data handling, societal impact, and job effects. A rapid rise in AI incidents (e.g., voice‑cloning scams, unattributed AI‑generated books, biased facial recognition) erodes public confidence. The 4D framework (Sovereignty, Green/Sustainability, Responsible AI, Valuable AI) is proposed as a holistic lens for building trustworthy, scalable AI. Six common failure categories for PoCs were identified: adoption/impact gap, governance failures, misalignment with societal goals, sovereignty issues, sustainability pressures, and change‑management challenges. Government regulation and AI sovereignty are critical; differing national approaches (UK supplier rules, the US lack of formal rules, Serbia's domestic LLMs) influence trust and adoption. Trade‑offs between the four dimensions are inevitable; organizations must map high‑ versus low‑concern harms and make transparent prioritisation decisions. The private sector faces a platform-versus-IP dilemma: large clients treat AI as proprietary, hindering broader societal value and responsible‑AI outcomes. Concrete recommendations include creating an AI policy, adopting a responsible‑AI framework, defining measurable KPIs for each dimension, upskilling teams, and lobbying for sovereign‑AI support.
Resolutions and action items
Publish and distribute the discussed white paper (link to be shared via LinkedIn and email). Encourage participants to draft an AI policy that explicitly addresses the four dimensions. Adopt a responsible‑AI framework with defined questions, safeguards, and governance processes. Develop quantitative KPIs for sustainability, ethics, user impact, and business value to support funding and reporting. Implement up‑skilling programmes and incorporate diverse stakeholder perspectives into AI projects. Engage with government bodies to advocate for baseline safety standards and sovereign‑AI initiatives (e.g., domestic model development, smart‑data sharing). Consider a service‑oriented, co‑creation model for platform‑level AI to retain IP while enabling multi‑client use. Use a high‑/low‑concern harm mapping exercise to prioritize trade‑offs before scaling pilots.
Unresolved issues
How to formally rank the four lenses (Sovereignty, Green, Responsible, Valuable) for a given project and when one should outweigh the others. Specific pathways for governments to provide clear, enforceable guidance on safe AI use for private‑sector innovators. Practical mechanisms to overcome client‑driven IP lock‑in and enable platform‑scale AI solutions across competing firms. Details on how upcoming data‑protection and personalization legislation will be operationalised and enforced. Concrete examples of KPI definitions for each dimension and how they should be integrated into project governance. Resolution of the audience’s final question about balancing sovereignty versus value/responsibility in real‑world deployments.
Suggested compromises
Adopt a hybrid service‑plus‑IP model: retain core IP while offering a shared platform/service layer for multiple clients. Map harms into high‑ and low‑concern categories to transparently decide which dimension to prioritise in a given context. Treat Responsible AI as an umbrella that can incorporate sustainability and value considerations, reducing the need for separate trade‑offs. Balance rapid value delivery (using external models) with long‑term sovereignty by gradually transitioning to domestically hosted models. Accept that sustainability may increase costs initially but yields long‑term economic and scalability benefits, encouraging joint investment.
Thought Provoking Comments
Only 30 % of all AI projects actually go into production. The main reason we’re seeing so many pilots fail is that we don’t have trust – trust in the technology, in the data, in the outcomes, and in the impact on jobs.
She quantifies the failure rate of AI pilots and pins the root cause on trust, framing the whole session’s problem statement and giving the audience a clear metric to rally around.
This comment set the agenda for the whole discussion, prompting participants to think about trust‑related dimensions and leading directly to the later introduction of the 4‑D framework.
Speaker: Theresa Yurkewich Hoffmann
We’ve built a 4‑D model – Sovereignty, Green (sustainability), Responsible AI and Valuable AI – as four lenses you need to look at to build trust and avoid harms before you scale.
It introduces a concrete, structured tool that reframes the conversation from vague ‘trust’ to actionable categories, giving participants a shared language.
The 4‑D model became the backbone of the breakout scenarios and the poll questions, steering the discussion toward evaluating each dimension in real‑world examples.
Speaker: Theresa Yurkewich Hoffmann
Sovereignty isn’t just about an organisation or a nation – it’s about the people whose data is used. Who is looking at your data, why, and what they will do with it determines whether people will trust the system.
He expands the notion of sovereignty from a technical or geopolitical issue to a human‑centred one, linking data control directly to user trust.
This broadened view shifted the tone from a purely technical discussion to one that emphasises citizen rights, prompting audience members to raise concerns about data ownership and regulatory gaps.
Speaker: Omeed Hashim
Sustainability and cost are two sides of the same coin – the greener the system, the cheaper it is to run at scale. If an AI system can’t be economically viable, it won’t scale, and the carbon impact will stay high.
He ties environmental impact to business economics, turning ‘green AI’ from an optional add‑on into a core business requirement.
This insight sparked a brief debate on trade‑offs between performance and carbon footprint, and later informed the audience poll where sustainability was ranked low by many participants.
Speaker: Omeed Hashim
Private‑sector firms need government to define what is safe to use and what is still experimental. Without clear risk categories (low, medium, high) and transparency rules, companies are left to guess and risk failure.
She brings a real‑world policy perspective, highlighting the gap between fast‑moving AI innovation and slow regulatory frameworks.
Her comment opened a new thread about the role of public policy, leading to further discussion on sovereign AI policies, responsible AI frameworks, and the need for a national AI strategy.
Speaker: Ami Kotecha (Audience)
Instead of selling a bespoke IP to a single client, think of AI as a service platform that can be layered and co‑created. This avoids lock‑in and lets multiple customers benefit, similar to how India’s UPI created an ecosystem.
He proposes a concrete business‑model solution to the audience’s frustration about IP lock‑in, drawing on the successful UPI example.
This suggestion reframed the earlier complaints about proprietary solutions into a discussion about ecosystem building, prompting participants to consider platform strategies and collaborative models.
Speaker: Omeed Hashim
When you have to trade‑off between sovereignty and value, you must ask: if we rely on foreign models we get speed, but we lose control. If we build locally we keep control but may sacrifice short‑term value. The decision has to be explicit and documented.
He articulates the core tension that many participants were grappling with, turning an abstract dilemma into a concrete decision‑making framework.
This comment acted as a turning point, leading to the audience poll on which dimension is “must‑have” and reinforcing the session’s emphasis on explicit trade‑off analysis.
Speaker: Omeed Hashim
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the conversation from a vague sense of AI pilot failure to a structured, multi‑dimensional analysis. Theresa’s opening statistics and the 4‑D framework gave participants a shared problem definition and a toolkit. Omeed’s deep‑dives into sovereignty, sustainability, and trade‑offs reframed technical concerns as human‑centred and economic issues, prompting the audience to consider policy, business models, and ecosystem approaches. Audience contributions, especially the call for government guidance and the platform‑vs‑IP dilemma, introduced real‑world pressures that forced the speakers to connect the theoretical lenses to actionable strategies. Together, these comments shaped a dynamic dialogue that progressed from problem identification to concrete recommendations on policy, governance, and business design.

Follow-up Questions
How will AI adoption and government regulation evolve over the next 6‑12 months and in the years ahead, and what role will governments play in defining safe versus experimental AI use?
Understanding the timeline and scope of regulatory frameworks is crucial for private firms to plan investments, risk management, and compliance strategies.
Speaker: Ami Kotecha (co‑founder, Amro Partners)
How can companies build platform‑level AI solutions rather than bespoke, client‑specific ones, especially when large customers demand exclusivity?
A platform approach can unlock broader market reach and societal impact, but requires strategies to overcome IP lock‑in and client exclusivity pressures.
Speaker: Audience member (entrepreneur building agentic AI for vending machines)
How should organizations handle situations where a buyer acquires technology but does not commercialize it for societal benefit, raising concerns for responsible and valuable AI?
This scenario highlights tensions between commercial interests and the public good, necessitating guidance on responsible stewardship of AI assets.
Speaker: Audience member (owner of sustainability‑focused IP)
Why aren’t major IT consulting firms (e.g., Kainos, Infosys, Accenture) pursuing platform/service models for AI, and can such initiatives be driven organically, or do they require coordinated effort?
Identifying barriers to platform adoption within large service firms can inform policy or industry initiatives to promote scalable, reusable AI solutions.
Speaker: Audience member (same as above)
In what scenarios might prioritising AI sovereignty conflict with responsible or valuable AI, and how can these trade‑offs be managed or aligned?
Understanding trade‑offs between control of data/models and ethical/value outcomes is essential for designing AI governance frameworks that balance national security with societal benefit.
Speaker: Audience member (question on sovereignty vs. responsibility/value)
How should the four AI lenses—sovereignty, green/sustainability, responsible, and value—be ranked in priority for a given project?
Prioritisation guidance helps project teams allocate resources and address the most critical dimensions early, improving chances of successful production deployment.
Speaker: Audience member (poll on ranking the lenses from low to high)
Which single AI lens is an absolute must‑have to avoid project derailment?
Identifying a non‑negotiable dimension can focus governance efforts and ensure that critical risks are not overlooked.
Speaker: Audience member (poll on absolute must‑have lens)
What quantitative KPIs can be developed to measure sustainability, ethics, and value in AI projects?
Turning qualitative principles into measurable metrics enables monitoring, reporting, and accountability, which are needed for funding and regulatory compliance.
Speaker: Theresa Yurkewich Hoffmann (and Omeed Hashim)
What are the detailed implications of data and model sovereignty—especially regarding offshore hosting, auditability, and control—and how can they be mitigated?
Data/model sovereignty affects trust, legal compliance, and operational continuity; research is needed to create practical guidelines for sovereign AI deployments.
Speaker: Omeed Hashim
How can organisations systematically assess and manage trade‑offs between rapid AI value delivery and sustainability or sovereignty concerns?
A structured trade‑off analysis framework would help decision‑makers justify compromises and align AI initiatives with broader organisational goals.
Speaker: Theresa Yurkewich Hoffmann
What lessons can be drawn from human‑centred AI design in sensitive domains (e.g., elderly monitoring) regarding responsibility, privacy, and value creation?
Case studies in high‑stakes settings can reveal unforeseen harms and inform best practices for responsible, valuable AI design.
Speaker: Omeed Hashim (example of nursing home AI)
How can the growing number of AI incidents reported by the OECD AI Observatory be reduced, and what preventive measures are most effective?
Understanding root causes of AI harms is essential for developing mitigation strategies, improving trust, and lowering the incident rate.
Speaker: Theresa Yurkewich Hoffmann

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.