Building Sovereign and Responsible AI Beyond Proof of Concepts

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary

This discussion focused on the challenges of scaling AI projects from pilot phases to production, with speakers Theresa Yurkewich Hoffmann and Omeed Hashim presenting a framework for building trust in AI systems. The session began by highlighting a critical problem: while numerous AI pilots exist globally, only 30% successfully transition to production, largely due to trust issues and failure to consider broader implications beyond technical functionality.


The speakers introduced the “AI in 4D” framework, which examines AI projects through four essential dimensions: sovereignty (control and security), green/sustainability (environmental and economic viability), responsibility (ethics, governance, and human-centered design), and value (real-world benefits to people). Through interactive scenarios involving healthcare, transportation, justice, and social services, participants learned to identify which dimensions were overlooked in failed AI implementations. Common failures included ignoring power consumption and water usage in medical imaging systems, optimizing traffic flow without considering community impact and pedestrian safety, and deploying benefit allocation systems without explainability or bias mitigation.


The discussion emphasized that successful AI deployment requires considering all four dimensions simultaneously, as they are interconnected rather than independent. Participants engaged in debates about prioritization and trade-offs between dimensions, with most agreeing that responsible AI was the most critical foundation. However, the speakers stressed that no single dimension alone ensures success. The session concluded with practical recommendations including developing AI policies, implementing responsible AI frameworks, establishing measurable KPIs for each dimension, and ensuring diverse perspectives in AI development teams to build truly trustworthy and scalable AI systems.


Key points

Major Discussion Points:

AI Project Failure Rate and Trust Issues: Only 30% of AI projects actually go into production, with lack of trust being a primary barrier. The discussion highlighted growing AI incidents globally (600 in December 2025 alone) including voice cloning scams, AI-generated books with visible prompts, and biased facial recognition systems.


The 4D Framework for AI Implementation: Introduction of a four-dimensional approach to building trustworthy AI systems: Sovereignty (control and security), Green/Sustainability (environmental impact and scalability), Responsibility (ethics, governance, bias prevention), and Value (real-world benefits beyond just functionality).


Real-World Scenario Analysis: Interactive examination of AI project failures through case studies including healthcare radiology systems failing due to power/water requirements, traffic optimization systems harming pedestrian safety and diverting traffic to low-income areas, justice systems with offshore hosting concerns, and social benefits systems with unexplainable decisions and bias issues.


Government Role vs. Private Sector Challenges: Discussion of the tension between waiting for government regulation/guidance versus private companies needing to innovate quickly, including challenges around data sovereignty, platform development versus client-specific solutions, and the need for standardized approaches like India’s UPI system.


Trade-offs and Prioritization in AI Development: Exploration of difficult decisions organizations must make when different AI principles conflict, such as choosing between speed of implementation and sovereignty concerns, or balancing sustainability impacts with rapid AI adoption and training needs.


Overall Purpose:

The discussion aimed to provide a framework for successfully scaling AI projects from pilot to production by addressing trust issues through a comprehensive four-dimensional approach, helping participants understand why most AI projects fail and how to prevent common pitfalls.


Overall Tone:

The discussion maintained a professional, educational tone throughout, with presenters acting as knowledgeable guides sharing practical insights. The atmosphere became more interactive and engaged during the scenario-based exercises and Q&A portions, with audience members actively participating and sharing real-world challenges. The tone remained collaborative and solution-focused, with presenters encouraging questions and offering to continue conversations beyond the session.


Speakers

Main speakers:


Theresa Yurkewich Hoffmann: Session presenter and AI expert discussing AI trust, governance, and the 4D framework (sovereignty, green/sustainability, responsibility, and value). Works with customers on AI projects and co-authored a white paper on AI implementation.


Omeed Hashim: Co-presenter and AI deployment expert at Kainos, specializing in deploying AI systems into production for government departments in the UK, Canada, and US. Focuses on practical implementation challenges and solutions.


Audience: Multiple audience members who participated in Q&A and discussions, including entrepreneurs, business leaders, and professionals from various sectors.


Additional speakers:


Ami Kotecha: Co-founder of Amro Partners, a real estate company involved in a data spin-out. Discussed challenges with AI adoption in private sector and the need for government guidance.


Unnamed audience member: Expert on Indian data protection laws who provided information about upcoming legislation (November 2025 implementation of data protection and personalization law).


Unnamed entrepreneur: Building agentic AI for vending machines, previously worked in food and beverage sector innovation. Discussed challenges with platform-level value creation versus individual customer solutions.


Unnamed audience member: Asked questions about trade-offs between sovereignty, responsible AI, and value creation in AI implementations.


Unnamed audience member: Asked final question about building AI systems while considering multiple dimensions and potential trade-offs (attempted to ask in both English and Hindi).


Full session report

This comprehensive discussion on scaling AI projects from pilot to production was led by Theresa Yurkewich Hoffmann and Omeed Hashim from Kainos, a company that deploys AI systems for government departments in the UK, Canada, and US. The session, part of a larger AI summit, addressed one of the most pressing challenges in artificial intelligence implementation: the significant gap between AI experimentation and real-world deployment, with only 30% of AI pilots successfully transitioning to production.


The Trust Crisis in AI Implementation

The speakers established that trust represents the fundamental barrier preventing AI projects from scaling successfully. This trust deficit manifests across multiple dimensions: organisational trust in AI reliability, individual trust regarding data usage and outputs, societal trust concerning impacts on people’s lives, and workforce trust around job security implications. The magnitude of this challenge is evidenced by exponentially growing AI incidents globally, with the OECD AI Observatory documenting hundreds of separate incidents.


These incidents span a troubling range of AI misuse and failure. In Romania, AI was employed to clone voices for elaborate scams, creating distress calls that deceived victims into believing loved ones were in danger. At a book fair in Cairo, numerous publications were discovered to be AI-generated, complete with visible prompts and instructions still embedded in the text. Facial recognition systems deployed at borders have demonstrated unequal performance across different demographic groups, creating discriminatory outcomes that erode public confidence in AI systems.


The Six Pillars of AI Project Failure

The speakers identified six critical areas where AI proof-of-concepts consistently fail when attempting to scale. The adoption-impact gap represents perhaps the most fundamental issue – organisations focus on producing functional AI systems without adequately considering how people will actually use them or whether they will achieve intended goals. A legal AI tool, for instance, might technically function but require more human review time than it saves, negating its purported value.


Governance failures encompass the absence of comprehensive risk management frameworks. Organisations often lack clear processes for identifying emerging risks, assigning accountability for addressing them, or managing issues such as bias, fairness, and security vulnerabilities. This connects directly to misalignment between organisational objectives and societal values – when companies prioritise automation that displaces workers, they create tension with broader social concerns about employment and economic stability.


Sovereignty challenges have become increasingly prominent, particularly given current geopolitical tensions. Questions around maintaining control over AI systems, understanding who bears responsibility for decisions, and managing dependencies on foreign governments or corporations represent critical vulnerabilities. The sustainability pressure dimension addresses the substantial carbon costs of AI deployment, whilst change management encompasses the human elements – workplace culture adaptation, training requirements, and the evolving relationship between people and AI agents.


The AI in 4D Framework

To address these multifaceted challenges, the speakers introduced Kainos’s proprietary “AI in 4D” framework, which examines AI projects through four essential dimensions that must be considered simultaneously for successful deployment: Sovereign, Green, Responsible, and Valuable.


Sovereign AI centres on control – not merely data sovereignty, but comprehensive understanding of who controls the AI system, where models originate, who has access, and what security measures protect the entire infrastructure. Omeed Hashim emphasised that sovereignty fundamentally concerns whether organisations and nations can maintain autonomy over their AI capabilities, particularly during crises when dependencies on external providers become critical vulnerabilities. Countries like Serbia are investing in developing their own large language models specifically to maintain this control.


Green AI addresses both environmental impact and economic viability. The speakers argued that these concerns are intrinsically linked – more economical AI systems typically generate fewer greenhouse gases and prove more scalable long-term. The environmental implications are staggering, with new data centres consuming electricity equivalent to entire cities. The framework posits that AI systems unable to scale sustainably simply will not scale at all.


Responsible AI encompasses ethics, governance, bias prevention, and crucially, human-centred design. This requires clear vision of who the AI system serves and how it impacts all stakeholders. Prime Minister Modi’s emphasis on human-centred AI design, referenced from the previous day’s summit sessions, underscores the importance of building systems that genuinely serve human needs rather than merely demonstrating technical capability.


Valuable AI extends beyond financial metrics to consider real-world benefits and measurable improvements in people’s lives. The UAE’s clear objective of making 12 million people as productive as 120 million provides an example of concrete, measurable value definition. However, the speakers noted that such objectives must be contextually appropriate.


Interactive Scenario Analysis

The session’s interactive component proved particularly illuminating. Originally planned as breakout groups, the format was adapted to collaborative Q&A due to the intimate audience size. Participants analysed real-world AI failure scenarios through the 4D lens by raising hands to vote on which dimension each scenario represented.


A public health AI system designed to read X-rays failed due to insufficient computational infrastructure and substantial water requirements for GPU cooling in a water-sensitive area – exemplifying Green AI failures. A traffic optimisation system that reduced average commute times nonetheless failed due to community backlash when it diverted traffic through lower-income areas, failing the Valuable test. A justice system AI for triaging complaints demonstrated Sovereign failures when the team discovered the model was hosted offshore with no control over updates or transparency. A social benefits allocation system that couldn’t explain decisions or provide appeals mechanisms failed both Responsible and Valuable tests.


Government Role and Private Sector Challenges

A particularly engaging discussion emerged around the tension between government regulation and private sector innovation needs. Ami Kotecha, co-founder of Amro Partners, articulated feeling “left in the lurch” to make critical AI decisions without adequate government guidance on what constitutes safe versus experimental technology.


The speakers acknowledged varying regulatory approaches across jurisdictions. The European Union has implemented comprehensive AI regulation with risk-based categorisation, whilst the UK is developing regulation focused on third-party suppliers critical to national infrastructure. An audience member contributed information about India’s data protection and personalisation laws with 18-24 month preparation periods, though current responsible AI practices remain minimal.


An entrepreneur developing agentic AI for vending machines raised fundamental questions about platform versus client-specific development, using India’s UPI system as an exemplar of successful platform-level innovation that serves entire countries rather than individual companies. This highlighted the tension between competitive advantage and societal benefit.


Trade-offs and Prioritisation

One of the most sophisticated aspects involved exploring inevitable trade-offs between the four dimensions. When asked to prioritise, most participants voted for Responsible AI as most critical, viewing it as foundational to the others. However, the speakers stressed that this represents contextual choice rather than universal truth.


The discussion of an elderly care monitoring system using computer vision to monitor hydration levels illustrated complex stakeholder impacts – raising privacy concerns about bedroom surveillance, potential negative impacts on nursing staff evaluation, and broader questions about dignity and autonomy in care settings.


Implementation Gaps and Practical Solutions

Perhaps the most sobering revelation came when participants were asked about current AI governance practices. Despite detailed discussion of frameworks and best practices, none had implemented responsible AI frameworks, sovereign AI policies, or sustainability measures. This gap between theoretical understanding and practical implementation highlights significant work required.


The speakers provided concrete recommendations for bridging this gap: develop comprehensive AI policies defining usage parameters and priorities, implement responsible AI frameworks with specific requirements across ethics and security domains, convert abstract goals into measurable KPIs for sustainability and user impact, upskill teams to understand these concepts, and incorporate diverse perspectives in AI development processes.
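The recommendation to convert abstract goals into measurable KPIs for each dimension can be illustrated with a minimal sketch. The four dimension names follow the session's "AI in 4D" framework; the specific KPI names, targets, and values below are hypothetical examples for illustration, not figures from the session or from Kainos's white paper.

```python
# Hypothetical sketch of a per-dimension KPI gate for an AI project.
# Dimension names come from the "AI in 4D" framework; the KPIs and
# thresholds are illustrative assumptions, not session figures.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float
    actual: float
    higher_is_better: bool = True

    def met(self) -> bool:
        # A KPI is met when the actual value is on the right side of the target.
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

def readiness_report(kpis_by_dimension: dict[str, list[KPI]]) -> dict[str, bool]:
    """A project passes a dimension only if every KPI in it is met."""
    return {dim: all(k.met() for k in kpis) for dim, kpis in kpis_by_dimension.items()}

project = {
    "Sovereign": [KPI("data_hosted_in_region_pct", 100, 100)],
    "Green": [KPI("kwh_per_1k_inferences", 5.0, 7.2, higher_is_better=False)],
    "Responsible": [KPI("decisions_with_explanations_pct", 100, 98)],
    "Valuable": [KPI("user_satisfaction_pct", 80, 86)],
}

report = readiness_report(project)
# Go/no-go gate: scale only when all four dimensions pass simultaneously,
# reflecting the session's point that the dimensions are interconnected.
ready_to_scale = all(report.values())
```

In this hypothetical run the Green and Responsible dimensions fail their targets, so the gate blocks scaling even though Sovereign and Valuable pass, mirroring the session's argument that no single dimension alone ensures success.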


Key Takeaways and Future Implications

The discussion demonstrated that successful AI deployment requires far more than technical competence. It demands holistic thinking about all four dimensions – Sovereign, Green, Responsible, and Valuable – supported by robust governance frameworks and genuine commitment to human-centred design.


The speakers offered to continue conversations beyond the session and provided access to their detailed white paper containing “eight to ten things” organisations can implement for each dimension. This comprehensive exploration revealed that the path from AI pilot to production success lies not in choosing between dimensions, but in thoughtfully integrating all four whilst maintaining transparency about the difficult trade-offs that real-world implementation inevitably requires.


The framework provides valuable structure for thinking about AI scaling, but significant challenges remain around platform-based solutions versus client-specific customisation, government guidance for private sector AI safety decisions, and the cultural and organisational change required to support responsible AI development and deployment.


Session transcript

Theresa Yurkewich Hoffmann

Okay. Sounds good. Okay. Well, this session will be all around that. So if we can have the next slide. So what we want to talk to you about today is that there are so many different AI projects and AI pilots happening in the world. And a pilot is the same as a proof of concept. It’s an idea, a concept that you’re testing, to see if that idea is something that you can put into implementation later on. And I was looking at the stat of how many AI pilots are in the world, and that was very difficult to quantify.

But what I did find was that only 30% of all the AI projects actually go into production. So what we’re finding in the world is that we have lots of different AI ideas, but really a difficulty in translating that into something real. And the point of this session and what I think is the point of the whole AI summit was that one of those reasons is because we don’t have trust. So if we can have the next slide. So if we think about trust, that could be an organization’s trust that the AI will work. It can be trust in us as individuals around how our data will be shared, the outputs that it will give us.

It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of jobs and how that will work. And with that, what we’re seeing is a lot of these AI projects are failing to consider that. And I don’t know if you’re familiar with the OECD AI Observatory, but they run a monitor where they essentially track all of the harms and all of the AI incidents around the world. And you can see that it’s been growing exponentially. In December 2025 alone, there were 600 different incidents in the world. So those are 600 different times that people were harmed or that there was some kind of AI hazard that was created through a pilot.

If we can have the next slide. It’s just to zoom in, so this is a little bit difficult for you to read now. But in that harms monitor, you can click on any of them and learn more about them. So some that I found, the first one is in Romania. AI was being used to clone people’s voices and then run scams, by making people think that their loved ones were in distress. As well, there was an example, I believe it was in Cairo. So there was a book fair, and a lot of the books there were actually produced using an equivalent of ChatGPT, using generative AI. But there were no humans included in that project, so the books were printed with the prompts and the AI instructions still in them.

So that created a lot of issues around creativity: are these books generated by AI? Are they what we’re looking for? Is that what we thought we were buying? And then there are several other examples happening all around the world, with facial recognition, for example. So using that at borders, and all of a sudden that might not work equally between different types of people. And all of these really build towards people losing trust in AI and being fearful of using it. So these are some examples, and we’ll go into next what we can do about that. So next we’re going to look at why these proofs of concept fail, and how we shift from just experimenting to actually having impact.

So if I can have the next slide. So I put here six ideas, from what we’re seeing with the customers we work with, of why proofs of concept are not working. The first one is the gap between adoption and impact. So a lot of times we’ll have organizations that are working on AI, and they’ve just thought about producing something, but they haven’t actually thought about how people will use it. Will it have the effect that you’re hoping it to have? Or say, for example, I’m using a legal tool. Will it actually serve the purpose that I’m looking for? Will it require more work for me to actually review everything it’s doing? So there’s a gap there. The second is around governance failures.

So I’m not sure how many of you have thought about risk management. How do you identify all of the risks that are coming up? Who’s going to be accountable for solving them? That might be things like, is it treating people differently? Is it biased? It might be things around security, for example. And then there’s also a failure around misalignment. So what you’re looking for and what society is looking for might not be aligned. If you’re, for example, prioritizing AI use to automate people’s work, all of a sudden people are thinking, what about job loss? So there’s not really a link in value there, and that’s another reason. We’ve got three other challenges. The first one is sovereignty, which I think if anyone was around the summit today or this week, everybody was talking about sovereignty.

So questions around how do we maintain control? Who is responsible? If, for example, a foreign government decides to turn off that AI access, is that something we trust? Or how do we deal with that? We also have sustainability pressure, so thinking about the carbon cost of using AI and the lack of clarity around that. And then change management is really all the people. So if we’re thinking about these frontier firms where people are working with agents, what does that work culture look like? Have we actually thought about how people use AI and have time to test it and practice with it? Have we thought about the relationship between people and AI and how that works as well? So these are six quick concepts.

And if we can have the next slide, there’s just a point to make: when we’re considering a proof of concept, we’re really just considering, does it function? We weren’t considering any of those other six things. And if we want to scale AI, we need to think about everything else. So next slide.

So I guess the point of this session is really to think about how we actually do that. So what we have thought of is calling it AI in 4D, so four-dimensional: the idea that you need to look through four different lenses to build trust in AI. If we could have the next slide. And when we’re looking at that, we’re thinking, if you can look through all these four different lenses, that’s really going to help you predict any harms or challenges that could come with the AI model and actually prevent them, so that you can deploy and scale that AI.

There are four dimensions that we’re looking at. The first one is sovereignty, so thinking about who controls it: not just data, but looking at all the security measures behind it. Where does the model come from? Who has access to it? We’re looking at green, so that’s sustainability: can this scale without destroying our climate goals, for example? We’re looking at responsibility, so that is thinking about ethics and governance and bias and fairness and human-centered design.

And then valuable, so is this project actually really going to deliver a real-world benefit to people? So next slide. This one, I think it might be difficult for us to create a poll, so what we’ll do is we’ll do it by hand instead. So if we can just go to the next slide. What I thought we could do before we give you more information on those four dimensions and how to apply them and break out into groups is we could just have some quick scenarios and test what your knowledge is of those themes already. So I’m going to give you an example, and then we’ll do a show of hands of who thinks what lens is missed here.

So this example is with a public health company. They’re using AI to read different x-rays and radiology scans. And the point of the proof of concept is to help triage different illnesses or different breaks, things that you might find in the scan, and reduce that backlog. So when they actually started modeling and rolling it out, the team realized that this required more compute than expected. It would actually exceed the available power supply, so there was not going to be the ability to use it consistently. And actually, there was a large demand on water, because the GPUs needed to be cooled, and this is in a water-sensitive area. So that would be another challenge between people and the planet.

So this program failed, this hypothetical program failed, because it was financially and politically impossible to run. So who thinks that this is a problem because of sovereignty? Who thinks that this is a problem with sustainability? Yeah? Who thinks that this is a problem with responsible AI and value? Yeah, I agree. So I’d mark this one as sustainability. I think it’s an example of the dynamics that we might have in the real world: we want to scale AI and do really great things, but actually we haven’t considered the power or the water usage that that has, because we either don’t have the information or it hasn’t been something that’s been baked in up front to think about.

And we will give you some higher-level detail on what this means and how to apply it in a moment. Okay, the next one. So we’ve got a second one. This is dealing with transport. So I think we’ve all dealt with traffic this week. We’re looking at that in this scenario here. This project is to optimize traffic lights across the city and smooth congestion. But when they started implementing this project, it was only looking at average commute time. It was diverting traffic into lower-income areas, and pedestrian safety actually became worse. So while this met the technical targets, in that it did reduce and optimize time, there was a lot of community backlash. So does someone want to tell me which one they think this is a failure of?

Audience

Sovereign and responsibility.

Theresa Yurkewich Hoffmann

Yeah, we’ve got some sovereignty, we’ve got responsibility. I think this one is actually value. So here, what the ministry had thought was valuable, reduced overall time, is not what’s valuable to the people. What’s valuable to the people is that they have safety when walking. And what’s valuable to them is that you protect communities and you don’t have biased impacts. Next one. So now we’re looking at justice. So here we’ve got a justice system. A justice department is building AI to triage different complaints from citizens and reroute them to the right legal body, so whether it’s the courts or a commissioner or something like that. In the pilot, it performed really well, but later, when they started to prepare to deploy this into production, the team discovers that, one, the model is hosted offshore.

Two, they don’t have a lot of information on when the model will be updated, and they don’t have control over that. This government doesn’t. That different logic within the model could change based on updates that they couldn’t control, and that they can’t audit the logs. So what do we think this time?

Audience

Yes.

Theresa Yurkewich Hoffmann

Okay, everyone is sovereignty. Sorry, did you say something else? A responsible AI? I think that could also be here, because they hadn’t thought of maybe all these risks beforehand. But I agree, here especially, when you’ve got a national organization, they need to have control of the model and how it functions. Not being able to update it or audit it in such a sensitive area like justice is a real challenge. So sovereignty is the challenge here.

And then the last one. Okay, so here we’ve got a social services agency, and they’re using AI to determine who’s eligible for social benefits. And the pilot showed that they were able to progress and reduce the time and have fewer manual checks. But when they were actually doing this in real life, the model wasn’t able to explain why it had made a decision, so why it had allocated benefits to someone versus someone else. There was no ability to understand how to appeal it, so if you were rejected, for example, you couldn’t understand why that was and how to change that decision. There was bias discovered between different groups, so age groups or ethnicity or gender.

It wasn’t applying the same standards to everyone. And there was no agreed process for how you would escalate if there was a problem. So this became seriously harmful, and there were a lot of vulnerable citizens who could be impacted. So in this scenario, what do we think, between responsible and value? Anybody else? Training data not accurate? Agreed. So I agree. I think this one is a good example of responsible, but also valuable here. Responsible AI is thinking about bias. It’s thinking about fairness. It’s thinking about the data that you have. It’s thinking about all these harms up front and how you’re going to deal with them. And then equally with value, people need to see the value of why they’re using AI in a public system.

And if it’s actually harming people, then it’s not necessarily a good use case. So far, everyone is doing good. I think we can move on. But what we wanted to go through now is how does this work in real life? What does this actually look like? And so I’ll pass to Omid. Can we have the next slide, please?

Omeed Hashim

Right. So I think it’s clear, you know, having had this conversation and the contribution from yourselves, that it’s not so straightforward, because there are different dimensions, and this is the point that Theresa is making in terms of having to look at different angles. So over the last two days, or definitely the day before yesterday, I was going around in the summit hall, and I was asking everyone, because you see everywhere it says sovereign AI, sovereign AI. I was asking them, what do you mean by sovereign AI? And some people were talking about, oh, we need to have our data centers here. Somebody was saying, or our models need to be here. There were different kinds of conversations in terms of what sovereign AI actually means in the context of AI and how it works and how it deploys and so on and so forth.

But the key thing is that ultimately it comes down to control. And my view is that it’s not even just about the organization, the sector potentially, or the nation, but also about the people. So where is your data? Who’s actually looking at your data? Why are they looking at your data? What will they do with your data? If you don’t have an understanding of that, the likelihood of you trusting that system is very low, and therefore it would be susceptible to failure. So it’s really, really key to understand the implications of data sovereignty, AI sovereignty, and so on. I mean, I was talking to one country… called Serbia, and they were saying that we have a view that we need to have control of our own environment, we’re building new large language models in our own geography, and we are going to have control over what we do.

And I think that’s the key thing. But the important thing is that if the trust is lost in terms of the sovereignty, the likelihood is that the system will fail. And I can assure you that if it’s not designed in at the beginning, you’re going to test this under a lot of pressure. You’re likely to be in a crisis as well, because when you don’t know if your health data is trained on somebody else’s data, or you’re using very commercially available large language models. then the thing is you’re actually beholden to those people and therefore you may not be able to achieve what you want to achieve as an objective. So it’s a really, really important dimension in terms of a successful deployment.

And all of the things that I’m going to go through here, whilst I’ve seen them through failure, are also the recipe for success. So you can think of it both ways. So if I could have the next slide, please. So, green AI. This is not dissimilar to what we had before with cloud and green computing: unless you actually look at the environment, and look at the economic viability of the system, ultimately it’s going to cost a lot more and it won’t scale. And if it doesn’t scale and you cannot handle the data volumes and the amount of usage, the likelihood is that it will stop.

Now, in my mind, the approach to take here is to make sure you address both, because addressing the environmental effects and the cost actually works very, very nicely together. We had a similar scenario before in how we deployed cloud services, and the same thing translates to this now. The more economical your system is, the fewer greenhouse gases it’s likely to emit as well, and as a result you can sustain the system longer term. I mean, we all know people are building massive data centers now. Yesterday there was, I think, a discussion around Microsoft building a new data center that consumes as much electricity as all of Los Angeles, and Los Angeles is an enormous city.
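The point that cost and emissions move together can be put as a back-of-the-envelope sketch: both scale with the energy a system draws, so an efficiency gain cuts both at once. All the constants below are illustrative assumptions, not figures from the session.

```python
# Illustrative sketch only: cost and emissions both scale with energy drawn,
# so making a system cheaper to run usually makes it greener too.
# Every constant here is an assumed placeholder, not real data.

KWH_PER_1K_QUERIES = 0.3   # assumed energy per 1,000 inference queries
PRICE_PER_KWH = 0.12       # assumed electricity price, USD
KG_CO2_PER_KWH = 0.4       # assumed grid carbon intensity

def footprint(queries_per_day: int, efficiency_gain: float = 0.0):
    """Return (daily_cost_usd, daily_kg_co2) for a given query volume.

    efficiency_gain is the fraction of energy saved by optimisation
    (e.g. 0.5 means the optimised system uses half the energy).
    """
    kwh = queries_per_day / 1000 * KWH_PER_1K_QUERIES * (1 - efficiency_gain)
    return kwh * PRICE_PER_KWH, kwh * KG_CO2_PER_KWH

base_cost, base_co2 = footprint(1_000_000)
opt_cost, opt_co2 = footprint(1_000_000, efficiency_gain=0.5)
# Halving energy use halves both the bill and the emissions.
```

The single `kwh` term driving both outputs is the whole argument: optimise energy and the economic and environmental dimensions improve together.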

So the environmental effects of what we’re doing are really key, and they have a direct link to the costs that are driven out of that as well. And I can again assure you that if an AI system can’t scale sustainably, then it won’t scale at all. I’m pretty convinced of that. So we can move on. The next one is responsible AI, and I think a lot of people here are familiar with that. Governance, assurance, whether we are doing the right things ethically, whether there is bias in the system: all of those things fall under the responsible AI banner. And it’s really fundamental in giving people the trust that Theresa was talking about, so that they can use the system in anger and really link their lifestyle to it, and so on and so forth.

And as you know, there are all sorts of other systems now, like AI companions, that help you achieve different things, whether it’s weight loss or even counseling, helping you along in your life. But unless they’re done in an ethical, unbiased way, without leading you down a particular path, they’re likely to fail as well. Now, one thing that I wanted to bring to attention, and Prime Minister Modi was talking about this yesterday, which is really key as far as I’m concerned in the responsible AI area, is the human-centered design of AI. Because when you’re actually building an AI system, you need to have in your mind who you’re trying to help and how.

And what does this actually mean? If you’re trying to do something, you have to have a clear vision of what you’re trying to do for people when they start to use the system. I think the example around traffic management was a very good one, because we all struggled with the traffic over the last few days. If a system is put into place that does not take into account the purpose of what it’s doing, it is likely to fail. The goal of the system itself is really key to whether it gets the right sort of results or not.

So there are many systems where people don’t consider that, and as a result it becomes unusable by the people, or it might have harms built into it. But the last dimension is how valuable that AI is: what it means in terms of outcomes, and what the measures are, and so on. So a couple of days ago I attended a session with a senior executive from the UAE. They were talking about what they’re trying to do as a country, and it’s really key for us to understand what we’re trying to achieve. They had a very simple way of thinking about what they were trying to do, which made it much more measurable.

So what was the intention for them? There are about 12 million people in the United Arab Emirates, and with the introduction of AI they wanted those 12 million people to effectively do as much work as 120 million, almost ten times the size. And I think that actually is really, really key: a very simple reason for why you’re doing what you’re doing, how you measure it, and what the value is. Now, if you think about that in the context of, say, India, in my opinion that ambition doesn’t give India the same value. Creating lots of agents to replace people’s jobs, or do more jobs, doesn’t actually have the right outcome, because there are already a lot of people here.

Why would you do that, right? So you have to think really carefully about what the value of the system itself is, because without thinking about that, you end up building a system whose value you cannot measure. And then ultimately it just becomes a dead weight: why do we have this at all, should we be getting rid of it or not? So hopefully you understand all of these different areas. At Kainos, we deploy AI systems into production, so we see a lot of these issues. And we are quite lucky because our customers, which are all government departments, are actually very, very clued up on the different aspects of what we’re doing, and they see value in it.

So it’s not just about deploying the technology, but how this technology is going to affect UK citizens and, where we work in other countries like Canada and the US, the citizens of those countries respectively. So I think that was my last slide. I’m going to hand it over to you.

Theresa Yurkewich Hoffmann

So we had originally intended to do different breakout groups, but the audience is quite small, so it’s up to you. We could either have everyone discuss together what you think is the most challenging, or we could use 10 minutes for a Q&A if people want to share their thoughts. Put your hands up if you want to go into a breakout group and discuss one of the concepts together. Okay, nobody voted for that, so we’ll do the second: we’ll have a discussion. It’d be interesting to hear, as you look at these four challenges, which do you think is the most difficult and which do you feel you’ve solved. We can discuss that for a little bit. Please introduce yourself.

Audience

Hi there, thank you. My name is Ami Kotecha, I’m co-founder of Amro Partners. We are a real estate company and we are now getting involved in a data spin-out. My challenge is as follows: as one of the co-founders and a leader of the company, I’m very keen, of course, that there’s AI adoption, upskilling, and so on in the company, and that productivity challenges, where we have them, are addressed using this technology. But I often feel left in the lurch to literally make all these decisions within the private sector environment, whereas I think government needs to step in and make some of these decisions on our behalf, in terms of model utilization, where we go, what we do with it.

I mean, we are good experimenters, so fortunately we are throwing capital at experimenting. Not every company can afford to do that, or would want to, because of the same sort of issues you mentioned right at the start: the fear of adopting something that is going to break your system or open you up to some kind of cyber attack. So how do you see this playing out in the next 6, 8, 12 months, given the technology is moving really fast? What role is the government going to play in saying this is safe to use, and this is still experimental and you should worry about it?

Theresa Yurkewich Hoffmann

It’s like, go ahead and do it. But then there’s a medium risk; a high risk would be something like really critical infrastructure or something that’s impacting people directly. And if it’s high risk, then there’s a load of different things you need to do around transparency with people. There are also prohibited use cases for AI. So that’s one example where some governments are actually saying: this is what we’ve deemed safe, and if it’s not one of these uses, then we want to see a lot of other checks. In the UK, we have regulation looking at third-party suppliers right now, and if they’re critical to the infrastructure of the country, there will be new requirements on AI as well, in terms of the updates that go in, transparency around models, explainability.

But then maybe you have the US approach, where you don’t have regulation yet. So it really depends on the country. A lot of what we heard yesterday was around India thinking about ethical and responsible AI, but I don’t know if you have any regulation in place around that yet. I think it’s very difficult otherwise for a private company, because otherwise you’re fighting a race to the bottom: who’s the cheapest, who’s the quickest. This week I was touring around different businesses, and everyone was thinking, how do we do agents? But no one was thinking about human-centered, ethical, responsible. So I think it does need to come from the government to have a base.

But I noticed that some are maybe more forthcoming with that than others.

Audience

Just before that, I wanted to answer your question about the government. There is a data protection and data personalization law that was legislated last year; from November 2025 it is coming into effect, and they are giving a period of around 18 to 24 months. After that, what you are describing, the rules addressing how the data is handled, who creates the data, who the principal is and who the repository is, are all coming. But presently, I would say only 0.1% of that responsible AI part is happening. Over these two years, though, the preparation is going to happen where it will slowly get into that mode, actually.

Omeed Hashim

I was just going to say, she’s a high-flyer entrepreneur in the UK, actually. But in my mind, there are a couple of things that we should really push the government to do. One is smart data. They’ve been playing around with this for years and years, and we’ve now got quite a lot of open banking applications. But this can be extended way beyond open banking, to where different organizations can share data. Like, for instance, in the property market: how do you go through the whole cycle, from putting an offer in to conveyancing to valuation to the end?

So that’s really critical. The other side of it is having trust in language models built within the UK itself. Even Serbia is doing that, and the French have already done it with Mistral. So there are a lot of examples of this, and that’s where the government can really help, and that’s what we should be lobbying them to do, in my opinion. Any other comment? Oh, yeah. Maybe behind you? Oh, sorry, you had your hand up first. You go first, and then behind you next.

Audience

Yeah. So I am building an agentic AI for vending machines. I have been an entrepreneur in the corporate world, but until three years ago I was just doing physical stuff: products, innovation, the food and beverage sector. One of the challenges I am seeing is how to build value at a platform level rather than an individual customer level. For example, if I offer this vending machine agentic AI to PepsiCo, they would say, don’t do it for Coca-Cola, give it to us only and keep it with us. But UPI, for example, was not a Mastercard or Visa thing; it was for the whole country.

So how do you get that kind of traction to build a platform, instead of one very customized for a customer who might say, don’t give it to anybody else? That is the key question I am trying to address, and I do not seem to find answers.

Theresa Yurkewich Hoffmann

I agree, I think that is a challenge in the corporate world. I used to work at Microsoft, and even there it was: if you’re using our technology and we’re coming on a panel, then we’re on the panel, but we’re not having Amazon or Google on the panel with us. But I think, like you say, it’s really about figuring out what you have that’s so unique, and that actually goes to the value lens: if you have something that’s really valuable to people, you can make the case that it has to be shared. But it is difficult if you’re building it with one customer first, because it almost becomes their IP that they want to keep. So something we are doing when we’re working on responsible AI projects is looking at the similarity of requests that come in; we do the work ourselves in the background, then take the elements we need and expose them to the different customers, and that way we keep the IP. But it is very difficult to get multiple customers on board if they’re all competing.

Audience

Yeah, so for example, I built a few IPs in the area of sustainability, like clean air and clean water. I sold one to a company, but that company is not commercializing it. I don’t want to name the company, but it didn’t want to commercialize the technology; it wanted to keep it. So that’s a big challenge I am seeing in the corporate world: a company will buy another company but won’t implement it for society or for the good. That is the challenge I am seeing. How do you handle that? Because that is part of responsible AI as well as valuable AI.

Omeed Hashim

Yeah, I think you’re right, and you have your own kind of description of this problem. I was in the US a few months ago, and I don’t know whether you’re familiar with SVB, Silicon Valley Bank, but they did a presentation to us about where all the funds are going. And if you actually see what is going on, I think it’s about a trillion dollars’ worth of investment, and this investment is flowing into only a handful of companies. What those companies are doing is literally stifling everybody else. That’s a commercial reality. But if I were to offer you some options, I would say it shouldn’t be just about the IP.

You should be thinking about it more as a service that you could build layers on. So you may retain the IP or you may share it, it could be co-created, whatever it is, but it’s got to have a service model attached to it. Because if PepsiCo buys X and then co-creates, and Coca-Cola buys Y, why would they be buying it, and how would you be able to build on top of that? But, you know, it’s a very, very commercially challenging problem. It’s been there for many years; this is nothing new.

Audience

As Shri said, exactly like that, UPI beat that. Today, compared to a Mastercard or a Visa, everyone in India is using UPI, and there are applications attached to UPI, whether it’s Paytm or Google Pay or Amazon Pay; all of them are on the UPI platform. So the question I had was: why are IT companies, for example Kainos, or an Infosys or an Accenture, not looking at the platform approach instead of the services approach, where they put in their team manpower and run projects? I see this as a challenge. I have been talking to the top management of Infosys and Accenture, and every time I go with a proposal they say, just do it for a client and we will attach you as an expert. I don’t want to do that; I want to build a platform. Nobody is really interested in building that sort of path-breaking business, because it takes a longer time. Like UPI, it happened organically. Can these kinds of initiatives happen inorganically? That was the question.

Theresa Yurkewich Hoffmann

I think they are looking at both. So I think we should take one more question, because we have very few minutes left; we can talk after. I want to get to the person behind you for his question as well, and then we will do a quick wrap-up.

Audience

Good afternoon. Thanks for covering those areas in the lectures; they were much needed to understand. So you talked about sovereign AI, and then you talked about valuable or responsible AI. There might be a few scenarios where, while chasing sovereignty, we might have to bypass value additions or responsibility for the citizens, and the other way round also. So can you discuss those scenarios where you value sovereignty more than responsible AI or value additions, and vice versa, and when they can be taken into account in parallel?

Theresa Yurkewich Hoffmann

So you’re asking about responsible AI and valuable AI, where they link, and where one might be more useful than the other. Where I see responsible AI, I think it can actually act as a lens for everything, but it’s much easier to think of them as separate. I think responsible AI can encompass five things: ethics, trust, bias and fairness, human-centered design, and governance and security. Where I think value distinguishes itself is that it looks beyond financial growth. A lot of organizations you might work with, or many organizations in the world, are looking at how much money this will save them, or how much time, or how much productivity. But I think valuable AI is looking at what goes beyond that: does it actually create more well-being in people?

Does it give people time back with their families, for example, or other hobbies they want to pursue? Valuable is thinking about the long-term benefit this will have in terms of how we change society. Maybe it’s going to create a whole bunch of different jobs in something else. So I think if you’re actually using responsible AI, it will create value. So I still think they go hand in hand, but that’s probably how I distinguish them. Is that your question? No? I’m not sure. Maybe Omeed has an answer. Yeah.

Omeed Hashim

So I think you’re asking what happens when you have to make a trade-off between sovereignty and value, and I think this is a very good question, to be honest. Because, again, yesterday I was wandering around the summit; I keep asking people questions about different things. And one of the countries I spoke to knows that using GPT models or Claude and various other things is a quick route to building what they need to build: it’s there, it’s immediate, and it can be done almost without any issues at all. But they’re taking the hard route. They’re saying, actually, we don’t want to do that, because what if tomorrow we fall out with them, as Europeans are falling out with Americans anyway? What happens if they turn off the systems? What would we do then? So in terms of speed, the value is in going with what you’ve got. But the more challenging value question is: can we actually use this system for our citizens on an ongoing basis? Is that data something that belongs to us? Are the models aligned with what we are doing?

So they want to be able to enable their people to deliver the right outcomes, and that would not happen if they just outsourced their sovereignty to the US. So I think those are some of the very, very important factors that need to be explained. But ultimately, from a value perspective, Theresa is spot on: it’s about the value to the people who are going to use that system. Let me give you an example. We’ve stopped multiple times in the traffic because some VIP was coming out of somewhere, and they just literally closed the road. So we’re sitting there for like half an hour, and then we get going again.

That’s happened, I think, three or four times so far. So if you were to build the system, you would need to think: what is the value for those taxi drivers and all the public who are going around? That’s the key thing you need to be able to achieve with AI. It needs to be measurable, and it needs to actually help the people themselves. So yes, it’s a very tricky trade-off.

Theresa Yurkewich Hoffmann

I think the trade-off question is really good, especially in sustainability as well. A lot of times organizations might just think, how do we adopt AI as quickly as possible and get people to use it as much as possible, but actually every query you run has a sustainability impact. So I think there’s a trade-off there: depending on where you are, you might value training people to use AI more, so you might be okay with that impact because it’s more about getting people comfortable with using it. But then maybe you are an organization that really values sustainability and has really strong carbon goals or net zero goals,

then actually that might be the trade-off that you have. So one thing we’re doing when working with organizations is getting them to make that very difficult decision: here’s high concern, here’s low concern. We map out all the harms we can think of and all the principles and values that align to them, and they can’t put any two side by side; they have to move every one of them to either high or low concern. Very quickly that makes you see what’s real for your organization. I’ve seen a lot of them put sustainability at the bottom, which to me is a little bit concerning, but it does start you really understanding your organization and how those trade-offs are going to play out.

And that’s what we’re finding in the human one as well.
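The harm-mapping exercise described above could be sketched roughly as follows. The harm names and the two-level scale are hypothetical examples of the shape of the workshop output, not a recommended taxonomy.

```python
# A minimal sketch of the forced-ranking exercise: list the harms you can
# think of, force each into "high" or "low" concern with no other options,
# and read off the organisation's real priorities. Harm names below are
# hypothetical illustrations.

def map_harms(rankings: dict[str, str]) -> dict[str, list[str]]:
    """Group harms by concern level, rejecting anything left unranked."""
    allowed = {"high", "low"}
    for harm, level in rankings.items():
        if level not in allowed:
            raise ValueError(f"{harm!r} must be ranked 'high' or 'low'")
    return {
        level: sorted(h for h, lv in rankings.items() if lv == level)
        for level in ("high", "low")
    }

workshop = map_harms({
    "biased outcomes": "high",
    "loss of data sovereignty": "high",
    "energy and water use": "low",   # the worrying pattern the speaker saw
    "unexplainable decisions": "high",
})
```

The value of the exercise is exactly the constraint: every harm must land somewhere, so the ranking exposes what the organization is genuinely willing to deprioritize.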

Audience

Just 10 seconds more, adding to yours on ranking low to high. Out of all four, sustainability, sovereignty, responsibility, and value, how do you rate them, low to high, the four factors you have covered in your paper?

Theresa Yurkewich Hoffmann

How do I rate them? I think that’s very difficult. I’m putting responsible AI at the top, because, and it’s a bit of a cheat, it can actually include sustainability, and I think it will create value. Then I would probably put sovereignty lower than that, but obviously this year has maybe changed that geopolitically. I think I still put responsible AI at the top. I’ll make that hard choice. What do you say?

Omeed Hashim

I think I kind of agree. And Prime Minister Modi said this himself yesterday: human-centered AI design is part of responsible AI. A few days ago, Theresa and I were talking to someone, and they were describing a system. If you’ll just indulge me for a couple of minutes, let me explain the background of the system, and then you’ll see how it’s relevant. They were building a system for nursing or old people’s homes. You may know that the elderly get dehydrated: they forget to drink water, and that causes a lot of problems for them. So they built a system using AI and vision to see whether the elderly were having enough liquid in the day. Now, that’s fantastic; everybody says this is a brilliant idea. But then you think about it: they are monitoring those elderly people both in the common areas and where they may be in their bedrooms. That brings a challenge. And the other challenge was: what about the people, the nurses, who are actually hydrating them? It could have a negative effect on them, because somebody might say, you’re not doing your job right. And what about the family of the elderly, what about the impact on them? So I think it is really important to understand why we build the system, who it affects, how it affects them, and what the long-term benefits are, which brings the value. This is why it’s four dimensions: none of them is independent; they all relate to one another in one shape or form.

Theresa Yurkewich Hoffmann

Yeah, so we’ll work towards wrapping up, because I think we’re getting the time check. This timer switched: it said 8, then it said 17, now it says 10, and then she told me I had 7. This one’s right, okay. Well, let’s see if there are more questions. But we have the takeaways to go through also, so I think we’ll wrap up, and we can talk to people individually afterwards. Can we skip through some of the slides? Next one, next one, next one. Next one, I think. Okay, so we actually wanted to flip that question and ask it to you in the audience as well.

Which one would be your top? Of those four lenses, sovereign, green, responsible AI, which one do you think is an absolute must-have, such that if it isn’t there, it’s going to derail the project? And you can only pick one. Shall we do a show of hands? Who says sovereignty is the most important? Who says it’s green AI, sustainability? Who says it’s responsible, and then value? Some people didn’t vote; you didn’t vote back there. But it sounds like a lot of people think responsible and value are the most important. I think I agree. But what we wanted to get across is that all of these need to come into play as well.

Can we do the next slide? On that question, though: who has a responsible AI practice in place, who uses a framework or anything like that? Anybody? Who has a sovereign AI policy in place? No? And who is looking at sustainability? None of us. So that’s a takeaway for all of us. We wanted to wrap up with how to take this forward, so a couple of points. The first is that we have taken a lot of the learnings and things you talked about here and turned them into a white paper. There’s a link below, but we can share it with you; we’ve shared it on LinkedIn as well.

And we wrapped it up to say, for each of those themes, here are eight to ten things you could do if you really wanted to take sovereign, green, responsible, and valuable AI forward. So please check it out; I’m very happy to talk about that paper and give you insight. The key takeaway for us here is that no single dimension is the answer. I think that’s come out in the scenarios, in the conversations we’ve had, and in how we’re prioritizing: you can’t really have just one. You need all of them if you want to scale that project and really make it to production.

The second point is on trade-offs. It was really good that that came up in the conversation: just be aware of the trade-offs you will have to make, and have a process in place to talk about why you made that decision. A takeaway for everyone here is to think about an AI policy, which sets out how you’ll use AI and what you will prioritize. Think about having a responsible AI framework, which is essentially all the questions and things you want implemented across ethics, trust, and security. And then really think about how you can turn some of this into numbers: what are the KPIs you can actually look at for sustainability, for users, for ethics?

Don’t just make them a concept of “we will be ethical”. Think about what that actually means for you and how you’re going to measure it; that’s important if you want to get funding and investment and show the project is a success. And finally, think about how you can upskill your teams to understand these concepts and how you can incorporate diverse views. I think that’s probably the most important part of building out the responsibility. So we will wrap up. If you want to get in touch with us, here are our details: find us on LinkedIn or send us an email. We can take a couple of minutes after this, since I know there were two questions in the audience that we might not have got to.
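The KPI recommendation above can be sketched concretely: each of the four dimensions gets a metric, a unit, and a numeric target, rather than a slogan. Every metric and target below is a made-up placeholder; only the four dimension names come from the session.

```python
# Sketch of turning the four dimensions into measurable KPIs, as suggested
# above. All metrics and targets are illustrative placeholders; the point
# is the shape: each dimension gets a metric, a unit, and a target.

from dataclasses import dataclass

@dataclass
class KPI:
    dimension: str   # "sovereign", "green", "responsible", or "valuable"
    metric: str
    unit: str
    target: float

AI_SCORECARD = [
    KPI("sovereign", "data processed in-region", "%", 100.0),
    KPI("green", "energy per 1k queries", "kWh", 0.5),
    KPI("responsible", "decisions with an explanation available", "%", 95.0),
    KPI("valuable", "staff hours returned per week", "hours", 200.0),
]

def meets_target(kpi: KPI, measured: float) -> bool:
    # For "green", lower is better; for the others, higher is better.
    if kpi.dimension == "green":
        return measured <= kpi.target
    return measured >= kpi.target
```

A scorecard like this is what makes the funding conversation possible: each dimension can be reported as met or missed against a number rather than an intention.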

But otherwise, we hope this session was useful. If you want to give us feedback, here’s a bigger QR code. If you want to stay in touch, fill this out, and let us know if there’s anything we can improve about the session or any questions you have; we’re super happy to hear that. Otherwise, just a big thank you for your participation, and we hope you have a good rest of the day and a good weekend.

Omeed Hashim

Yeah, I was just going to say: great questions there about trade-offs, and absolutely the right question to ask, because none of these dimensions is independent. Sorry, please, go ahead.

Audience

Yeah. Like you were talking about trade-offs, I just wanted to say: every model has its own aspects, pros and cons, or, as you say, different dimensions. I’ve got most of my answers from those questions, but I just wanted to ask: if we’re building something taking aspects like responsible AI and valuable AI on board, will we be missing some other aspects? As he said about responsibility, if we are taking accuracy and fairness... if it’s easier for me to speak in Hindi, I understand. It’s okay, but fairness, okay. If we are doing... Sorry, sorry. Again.

Theresa Yurkewich Hoffmann

I think there’s no issue. You can ask us by email as well; it’s not an issue. We’re more than happy to respond if you want to ask.

Omeed Hashim

Just a question.


Theresa Yurkewich Hoffmann

Speech speed

170 words per minute

Speech length

4540 words

Speech time

1599 seconds

Trust and low conversion of AI pilots to production

Explanation

Theresa points out that only a small fraction of AI projects reach production and identifies a lack of trust as a key barrier. She explains that mistrust stems from concerns over data and model control, leading to project failure.


Evidence

“But what I did find was that only 30% of all the AI projects actually go into production.” [1]. “And the point of this session and what I think is the point of the whole AI summit was that one of those reasons is because we don’t have trust.” [3]. “And all of these really build towards people losing trust in AI and being fearful of using it.” [6].


Major discussion point

Trust and low conversion of AI pilots to production


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


The 4D framework for trustworthy AI

Explanation

Theresa introduces a four‑dimensional framework—sovereignty, green, responsible and valuable AI—to help organisations assess and mitigate harms before scaling. The lenses are presented as a way to build trust and predict challenges.


Evidence

“So of those four lenses, sovereign, green, responsible AI, which one do you think is an absolute must-have?” [38]. “the idea that you need to look at four different lenses to build trust in AI … the first one is sovereignty … we’re looking at green … responsibility … value.” [39].


Major discussion point

The 4D framework for trustworthy AI (Sovereignty, Green, Responsible, Value)


Topics

Artificial intelligence | Data governance | Environmental impacts | Human rights and the ethical dimensions of the information society


Common reasons proof‑of‑concept pilots fail

Explanation

Theresa lists six failure categories—adoption/impact, governance, misalignment, sovereignty, sustainability pressure, and change‑management—showing why many pilots do not scale. She stresses that governance and accountability must be addressed early.


Evidence

“The second is around governance failures.” [65]. “And then there’s also a failure around misalignment.” [66]. “So I put here six ideas of what we’re seeing with the customers we work on is why proof of concepts are not working.” [67]. “So there are many systems where people don’t consider that and as a result of that it becomes unusable…” [68]. “The first one is between adoption and impact.” [70]. “So does someone want to tell me which one they think this is a failure of?” [71].


Major discussion point

Common reasons proof‑of‑concept pilots fail


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development


Role of government and regulation in AI adoption

Explanation

Theresa describes emerging UK regulations targeting high‑risk AI, including transparency, explainability and third‑party supplier rules. She notes that similar regulatory moves are expected elsewhere to safeguard critical infrastructure.


Evidence

“In the UK, we have regulation that’s looking at third party suppliers right now.” [81]. “And if they’re critical to the infrastructure or not of the country, then there will be new requirements on AI as well, in terms of like the updates that go in transparency around models, explainability.” [82]. “And if it’s a high risk, then there’s a load of different things that you need to do around transparency with people.” [83].


Major discussion point

Role of government and regulation in AI adoption


Topics

Artificial intelligence | The enabling environment for digital development | Data governance | Building confidence and security in the use of ICTs


Trade‑offs and practical implementation challenges

Explanation

Theresa highlights trade‑offs between sustainability and rapid AI adoption, and between sovereignty (control) and value (speed). She urges organisations to map high‑ vs low‑concern harms to decide priorities.


Evidence

“And so I think there’s a trade-off there because there might be sustainability impact, but depending on where you are, you might value training people to use AI more…” [73]. “So I think you’re saying what happens when you have to do a trade-off, right, between sovereignty and value.” [79]. “And very quickly that makes you see what’s real for your organization and I’ve seen a lot of them put sustainability at the bottom…” [94].


Major discussion point

Trade‑offs and practical implementation challenges


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development | Building confidence and security in the use of ICTs


O

Omeed Hashim

Speech speed

161 words per minute

Speech length

2796 words

Speech time

1039 seconds

Trust depends on sovereignty

Explanation

Omeed stresses that loss of trust in data or model sovereignty makes AI systems likely to fail, and highlights the need to understand data‑sovereignty implications.


Evidence

“But the important thing is that if the trust is lost in terms of the sovereignty, the likelihood is that the system will fail.” [16]. “So it’s really, really key to understand the implications of data sovereignty, AI sovereignty, and so on.” [17].


Major discussion point

Trust and low conversion of AI pilots to production


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | Data governance


Domestic AI models and government support

Explanation

Omeed argues that governments should foster home‑grown large language models to retain control and reduce reliance on foreign providers, positioning this as a sovereign AI strategy.


Evidence

“I was talking to … Serbia … we need to have control of our own environment, we’re building new large language models in our own geography…” [86]. “So there is a lot of examples of this, and that’s where the government can really help, and that’s what we should be lobbying them to do, in my opinion.” [87].


Major discussion point

Role of government and regulation in AI adoption


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Sovereignty vs value trade‑off

Explanation

Omeed points out the tension between using fast, foreign AI models for immediate value and building sovereign, locally‑controlled models that may be slower but ensure long‑term trust and alignment.


Evidence

“So I think you’re saying what happens when you have to do a trade -off, right, between sovereignty and value.” [79]. “… they are taking the hard route so they’re saying actually we don’t want to do that because what if tomorrow we fall out with them… the speed and the value is actually going with what you got but the more challenging thing which is the value… can we actually use this system for our citizens on an ongoing basis is that data something that belongs to us…” [102].


Major discussion point

Trade‑offs and practical implementation challenges


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


A

Audience

Speech speed

154 words per minute

Speech length

1127 words

Speech time

437 seconds

Upcoming data‑protection law and low responsible AI adoption

Explanation

Audience members note that responsible AI practices are currently near zero and that a new data‑protection and personalization law will become enforceable in late 2025, creating a regulatory push for responsible AI.


Evidence

“But presently, I would say it is 0.1% of that responsible AI part which is happening.” [4]. “There is a data protection and data personalization law that was, you know, legislated last year.” [27]. “November 2025 is going to be legal in the next 27 October onwards.” [30].


Major discussion point

Trust and low conversion of AI pilots to production


Topics

Data governance | The enabling environment for digital development | Artificial intelligence


Agreements

Agreement points

Trust is fundamental to AI scaling and implementation

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

AI Project Implementation Challenges and Trust Issues


Sovereignty and Control in AI Systems


Summary

Both speakers agree that trust issues are the primary barrier preventing AI projects from moving from pilot to production, with Theresa citing statistics showing only a 30% success rate and Omeed emphasizing that without understanding data control, trust remains low


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Holistic approach required beyond technical functionality

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Six key failure points: adoption vs impact gap, governance failures, misalignment, sovereignty issues, sustainability pressure, and change management


Human-Centered Design and Responsible Implementation


Summary

Both speakers emphasize that successful AI deployment requires consideration of multiple dimensions beyond just technical proof of concept, including human impact, governance, and stakeholder considerations


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Sustainability and economic viability are interconnected

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Green AI addresses sustainability and environmental impact of AI systems


Sustainability and Economic Viability


Summary

Both speakers agree that environmental sustainability and cost-effectiveness work together, with more economical systems typically producing fewer greenhouse gases and being more scalable long-term


Topics

Artificial intelligence | Environmental impacts | Financial mechanisms


Government role is essential in AI governance

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim
– Audience

Arguments

Trade-offs Between AI Dimensions


Government Role and Regulation


Summary

All speakers acknowledge that government intervention is necessary for establishing AI frameworks, data protection laws, and creating trusted national AI infrastructure, though implementation varies by country


Topics

Artificial intelligence | The enabling environment for digital development | Data governance


Trade-offs between AI dimensions are inevitable and must be managed

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim
– Audience

Arguments

Trade-offs Between AI Dimensions


Summary

All speakers recognize that organizations and countries must make difficult decisions about prioritizing different AI dimensions, requiring transparent processes for justifying these trade-offs


Topics

Artificial intelligence | The enabling environment for digital development


Similar viewpoints

Both speakers advocate for comprehensive frameworks that consider multiple stakeholder impacts, with Theresa proposing the 4D framework and Omeed emphasizing human-centered design principles

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Four-Dimensional AI Framework (AI in 4D)


Human-Centered Design and Responsible Implementation


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers emphasize that AI sovereignty fundamentally concerns control over data, models, and systems, extending to individual data rights and organizational autonomy

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Sovereignty dimension focuses on control over data, models, and security measures


Sovereignty and Control in AI Systems


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs


Both speakers stress the importance of defining and measuring real-world value that goes beyond technical metrics to create meaningful improvements in people’s lives

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Valuable AI ensures real-world benefits and measurable outcomes for people


Value Creation and Measurement


Topics

Artificial intelligence | Social and economic development | Monitoring and measurement


Unexpected consensus

Platform vs. client-specific AI solutions

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim
– Audience

Arguments

Trade-offs Between AI Dimensions


Government Role and Regulation


Private Sector Implementation Challenges


Explanation

There was unexpected consensus that the tension between building scalable platforms versus client-specific solutions represents a fundamental challenge, with all speakers acknowledging the difficulty of creating shared AI infrastructure when commercial interests favor exclusivity


Topics

Artificial intelligence | The digital economy | The enabling environment for digital development


Responsible AI as the top priority dimension

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim
– Audience

Arguments

Responsible AI encompasses ethics, governance, bias prevention, and human-centered design


Human-Centered Design and Responsible Implementation


Trade-offs Between AI Dimensions


Explanation

Despite the framework presenting four equal dimensions, there was unexpected consensus that responsible AI should be prioritized as it can encompass and influence the other dimensions, with audience voting also supporting this prioritization


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The speakers demonstrated strong consensus on the need for holistic AI governance frameworks that go beyond technical considerations to include trust, sustainability, sovereignty, and human-centered design. They agreed on the fundamental challenges facing AI implementation and the necessity of government involvement in creating enabling environments.


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The agreement suggests a mature understanding of AI implementation challenges and points toward actionable frameworks for addressing them, though practical implementation details and specific trade-off decisions remain context-dependent.


Differences

Different viewpoints

Prioritization of AI dimensions

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim
– Audience

Arguments

Theresa puts responsible AI at the top because it can include sustainability and create value


Omeed agrees with responsible AI being top priority, emphasizing human-centered design


Audience members showed varied preferences when voting on which dimension is most important


Summary

While there was general agreement on the importance of all four dimensions, speakers showed different preferences for which should be prioritized, with some favoring responsible AI while others emphasized sovereignty or value


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Government vs private sector responsibility for AI safety

Speakers

– Audience
– Theresa Yurkewich Hoffmann

Arguments

Private sector entrepreneur argues government needs to step in and make decisions about model utilization and safety on behalf of companies


Theresa acknowledges this varies by country, citing examples of different regulatory approaches (EU regulation vs US lack of regulation)


Summary

Disagreement over the extent to which government should provide guidance versus leaving AI safety decisions to private companies, with audience member wanting more government intervention while speaker noting varied international approaches


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs


Platform vs client-specific AI development

Speakers

– Audience
– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Audience member advocates for platform approach like UPI that serves multiple competitors rather than exclusive client solutions


Theresa explains the corporate reality where clients want exclusive access and don’t want competitors to benefit


Omeed suggests service model approach but acknowledges the commercial challenges


Summary

Fundamental disagreement about whether AI solutions should be developed as shared platforms for societal benefit or as exclusive client-specific solutions, with tension between business realities and broader social value


Topics

Artificial intelligence | The digital economy | Social and economic development


Unexpected differences

Lack of current implementation of discussed frameworks

Speakers

– Theresa Yurkewich Hoffmann
– Audience

Arguments

Theresa advocates for responsible AI frameworks and policies


Audience members admit to having no responsible AI practices, sovereign AI policies, or sustainability measures in place


Explanation

Despite the detailed discussion of the importance of these frameworks, the audience revealed they have not implemented any of the recommended practices, creating an unexpected gap between theoretical agreement and practical implementation


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Overall assessment

Summary

The discussion revealed moderate levels of disagreement primarily around prioritization and implementation approaches rather than fundamental principles. All speakers agreed on the importance of the four-dimensional AI framework but differed on which dimensions should take precedence and how to implement them in practice.


Disagreement level

Moderate disagreement with significant implications for AI governance and implementation. The disagreements suggest that while there is consensus on the need for comprehensive AI frameworks, there are substantial challenges in translating this into practical policies and business models that balance competing interests and priorities.


Partial agreements

Partial agreements

Both speakers agree that all four AI dimensions (sovereignty, green, responsible, valuable) are necessary for successful AI deployment, but they differ in their emphasis and prioritization, with Theresa focusing more on responsible AI as encompassing other dimensions while Omeed emphasizes sovereignty and human-centered design

Speakers

– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Four-Dimensional AI Framework (AI in 4D)


Sovereignty and Control in AI Systems


Human-Centered Design and Responsible Implementation


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance


All parties agree that government has a role in AI governance and that private sector faces implementation challenges, but they disagree on the extent and nature of government intervention needed, with audience wanting more prescriptive guidance while speakers note the complexity of different national approaches

Speakers

– Audience
– Theresa Yurkewich Hoffmann
– Omeed Hashim

Arguments

Private Sector Implementation Challenges


Government Role and Regulation


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs



Takeaways

Key takeaways

Only 30% of AI projects successfully transition from pilot to production, primarily due to trust issues and failure to consider broader implementation factors beyond technical functionality


AI incidents are growing exponentially (600 incidents in December 2025 alone), highlighting the urgent need for comprehensive risk management frameworks


The AI in 4D framework provides a holistic approach requiring four dimensions: Sovereignty (control over data/models), Green AI (sustainability), Responsible AI (ethics/governance), and Valuable AI (measurable real-world benefits)


All four dimensions are interconnected and necessary for successful AI scaling – no single dimension alone is sufficient for project success


Human-centered design is fundamental to responsible AI implementation, requiring clear understanding of who the system serves and how it impacts all stakeholders


Trade-offs between dimensions are inevitable and organizations must have transparent processes for making and documenting these decisions


Government regulation and guidance are essential for private sector AI adoption, particularly for risk assessment and establishing safety standards


Current implementation of responsible AI practices is minimal (0.1%) but expected to improve as data protection laws and frameworks mature


Resolutions and action items

Organizations should develop AI policies that clearly define how AI will be used and what will be prioritized


Implement responsible AI frameworks with specific questions and requirements across ethics, trust, and security domains


Convert conceptual goals into measurable KPIs for sustainability, user impact, and ethical outcomes rather than keeping them as abstract concepts


Upskill teams to understand the four AI dimensions and incorporate diverse perspectives in AI development


Create formal processes for identifying, ranking, and documenting trade-offs between different AI dimensions (high to low concern mapping)


Access the white paper provided by presenters for detailed implementation guidance on all four AI dimensions


Government should focus on enabling smart data sharing between organizations and building trusted national language models


Unresolved issues

How to balance platform-based AI solutions versus client-specific customizations when clients demand exclusivity


The challenge of companies acquiring AI technologies but not commercializing them for broader societal benefit


Specific timelines and mechanisms for government regulation implementation beyond the mentioned 18-24 month preparation periods


How to handle scenarios where companies prioritize speed of AI adoption over sovereignty concerns


The technical details of implementing sustainability measurements and carbon impact assessments for AI systems


How to address the massive power consumption of AI data centers (equivalent to entire cities like Los Angeles)


Standardization of AI risk assessment across different industries and use cases


Suggested compromises

When facing sovereignty versus speed trade-offs, countries should consider the long-term value to citizens over immediate technical benefits, even if it means slower development


For sustainability concerns, organizations can accept some environmental impact during AI training and adoption phases if it serves the greater goal of upskilling people and building AI literacy


Private companies can retain core IP while building service layers on top that allow for broader platform approaches rather than purely exclusive client solutions


Organizations should transparently rank their priorities among the four AI dimensions, accepting that some areas may receive lower priority based on their specific context and goals


Balance responsible AI implementation with practical business needs by starting with high-risk use cases and gradually expanding frameworks to lower-risk applications


Thought provoking comments

I feel like I am often left in the lurch to actually literally make all the decisions within the private sector environment, whereas I think government needs to step in and make some of these decisions on our behalf in terms of model utilization, where we go, what we do with it… how do you see this sort of playing out in the next 6 months, 8 months, 12 months, because obviously the technology is moving really fast, as to what role the government is going to play in saying this is safe to use and this is still experimental and you should worry about it?

Speaker

Ami Kotecha (co-founder of Amro Partners)


Reason

This comment was particularly insightful because it highlighted a critical gap between theoretical frameworks and practical implementation challenges. It shifted the discussion from academic concepts to real-world business pressures, revealing how private sector leaders feel overwhelmed by AI decision-making without regulatory guidance.


Impact

This comment fundamentally redirected the conversation from the presenters’ framework to practical governance challenges. It prompted detailed responses about different regulatory approaches (EU vs UK vs US) and sparked a broader discussion about the role of government in AI adoption, making the session more interactive and practically focused.


For example, if I offer this vending machine agentic AI for a PepsiCo, they would say don’t do it for Coca-Cola, right? Give it to us only and keep it with us. But UPI, for example, was not a Mastercard or a Visa card thing, right? It was for the whole country, right? So how do you get that kind of traction to build a platform instead of one very customized for a customer who might say don’t give it to anybody else.

Speaker

Audience member (entrepreneur building agentic AI for vending machines)


Reason

This comment was exceptionally thought-provoking because it introduced a completely new dimension to the AI scaling problem – the tension between platform-level value creation versus customer-specific solutions. The UPI analogy was brilliant in illustrating how some technologies succeed by being universally accessible rather than proprietary.


Impact

This comment opened up an entirely new thread of discussion about business model challenges in AI deployment. It led to deeper exploration of how commercial interests can conflict with societal benefits, and prompted discussion about the role of IT companies in platform versus service approaches. The comment also connected back to the ‘valuable AI’ dimension in unexpected ways.


There might be few scenarios where, while chasing sovereignty, we might have to bypass value additions or responsibility for the citizens, while the other way round also. So can you discuss those scenarios where you value sovereignty more than responsible AI or value additions AI, and the otherwise also, and when they can be parallelly taken into account?

Speaker

Audience member


Reason

This comment was highly insightful because it challenged the presenters’ framework by pointing out potential conflicts between the four dimensions. It moved beyond accepting the framework to critically examining whether these dimensions might be mutually exclusive in certain scenarios.


Impact

This comment forced the presenters to acknowledge and explore trade-offs more deeply, leading to concrete examples like the traffic management system for VIPs and sustainability impacts of AI queries. It elevated the discussion from a simple framework presentation to a more nuanced analysis of real-world implementation challenges and forced prioritization decisions.


A company will buy another company, but it won’t implement it for the society or for the good, right? So that’s a challenge that I am seeing in the corporate world: a company will buy another company but it won’t commercialize it, wanted to keep that technology, right? So that’s a big challenge… how do you handle that? Because that is part of the responsible AI as well as the valuable AI part.

Speaker

Same entrepreneur (follow-up comment)


Reason

This comment was particularly thought-provoking because it revealed how corporate acquisition strategies can actually hinder societal benefit from AI innovations. It connected business strategy decisions to the broader responsible AI framework in an unexpected way.


Impact

This comment deepened the discussion about systemic barriers to AI scaling and introduced the concept of ‘technology hoarding’ as an obstacle to valuable AI deployment. It prompted discussion about trillion-dollar investments flowing to only a handful of companies and how this concentration of resources stifles broader innovation and societal benefit.


So they were building a system for kind of the nursing or old people’s home so you may know that elderly get dehydrated and they forget to drink water… they built a system where using AI and vision they were seeing if the elderly were having enough liquid in the day or not… but then you think about it they are monitoring those elderly both in the area where it’s common as well as where they may be in their bedrooms… and what about the people, the nurses who are actually hydrating them because that could become a negative effect on them

Speaker

Omeed Hashim


Reason

This example was exceptionally insightful because it demonstrated how a seemingly beneficial AI application can have multiple unintended consequences across different stakeholder groups. It perfectly illustrated the complexity of the human-centered design principle and why all four dimensions must be considered simultaneously.


Impact

This example served as a powerful capstone to the discussion, demonstrating why the 4D framework is necessary. It showed how a ‘valuable’ AI solution could simultaneously raise sovereignty (privacy), responsibility (surveillance ethics), and stakeholder impact concerns, reinforcing the presenters’ main thesis that no single dimension is sufficient for successful AI deployment.


Overall assessment

These key comments transformed what could have been a one-way presentation into a dynamic, practical discussion. The audience contributions consistently elevated the conversation by introducing real-world challenges that tested and refined the presenters’ theoretical framework. The comments revealed critical gaps between AI theory and practice, particularly around regulatory guidance, business model conflicts, and unintended stakeholder impacts. Most importantly, they demonstrated that the 4D framework, while useful, faces significant implementation challenges in competitive markets and complex organizational environments. The discussion evolved from explaining the framework to critically examining its limitations and trade-offs, making it far more valuable for practitioners facing actual AI deployment decisions.


Follow-up questions

What role will government play in determining AI safety and regulation in the next 6-12 months, particularly for private sector companies?

Speaker

Ami Kotecha


Explanation

This addresses the critical gap between private sector AI experimentation and the need for government guidance on what is safe to use versus what remains experimental, especially given the rapid pace of technological change.


How can AI platforms be built at a platform level rather than individual customer level to avoid vendor lock-in situations?

Speaker

Audience member (vending machine AI entrepreneur)


Explanation

This explores the challenge of creating shared AI infrastructure similar to UPI rather than proprietary solutions that limit broader adoption and innovation.


How can companies prevent larger corporations from acquiring AI technology and then not commercializing it for societal benefit?

Speaker

Audience member (vending machine AI entrepreneur)


Explanation

This addresses the problem of technology acquisition for competitive suppression rather than innovation, which impacts both responsible AI and valuable AI dimensions.


In what scenarios should sovereignty be prioritized over responsible AI or value, and vice versa?

Speaker

Audience member


Explanation

This explores the critical trade-offs between different AI dimensions and when organizations might need to make difficult choices between competing priorities.


How should the four AI dimensions (sovereignty, green, responsible, valuable) be ranked in order of priority?

Speaker

Audience member


Explanation

This seeks to understand relative importance of different AI considerations for implementation planning and resource allocation.


What are the specific KPIs and measurable outcomes for each of the four AI dimensions?

Speaker

Theresa Yurkewich Hoffmann


Explanation

This addresses the need to move beyond conceptual frameworks to concrete, measurable metrics for sustainability, ethics, sovereignty, and value in AI projects.


How can organizations develop comprehensive AI policies and responsible AI frameworks?

Speaker

Theresa Yurkewich Hoffmann


Explanation

This addresses the gap identified where none of the audience members had formal responsible AI practices, sovereignty policies, or sustainability measures in place.


How can accuracy and fairness be balanced when building AI systems, particularly in multilingual contexts?

Speaker

Audience member (incomplete question)


Explanation

This touches on the technical trade-offs within responsible AI, specifically between model performance and fairness across different languages and populations.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.