AI as critical infrastructure for continuity in public services
20 Feb 2026 10:00h - 11:00h
Session at a glance
Summary
This discussion focused on building trust and ensuring successful implementation of AI in governance and public services, featuring perspectives from government officials, international organizations, and private sector representatives. Minister Rosiński from Poland emphasized the importance of protecting critical infrastructure and developing national language models like Bielik to ensure data sovereignty and trustworthy AI deployment. Atsuko Okuda from the International Telecommunication Union highlighted how global standards enhance interoperability across regions, noting that ITU has over 200 approved AI standards with 200 more in development.
The conversation emphasized that trust in AI is built through inclusive, multi-stakeholder governance involving government, civil society, technical communities, and private sector participants. Community-driven approaches were identified as crucial for local trust-building, particularly regarding linguistic diversity and cultural context in AI deployment. Regulatory alignment was discussed as both a challenge and opportunity for international trade, with the EU AI Act serving as a potential framework for global cooperation.
Technical experts identified data governance, organizational alignment, and human factors as the primary barriers to AI implementation, rather than technological limitations. The human element emerged as a critical theme throughout the discussion, with concerns about user adoption, fear of job displacement, and the need for careful communication about AI benefits. Participants stressed that successful AI deployment requires addressing data sovereignty, ensuring explainability, maintaining system resilience, and building proper feedback mechanisms. The discussion concluded that long-term confidence in AI systems depends on combining technical standards, inclusive governance, regulatory clarity, and thoughtful change management that prioritizes human needs and concerns.
Key points
Major Discussion Points:
– National AI Infrastructure and Sovereignty: Discussion of how countries like Poland are developing national language models (like Bielik) and building domestic AI capabilities to ensure data sovereignty, cybersecurity, and independence from foreign AI systems, particularly for critical infrastructure like energy, water, and healthcare.
– Global Standards and Interoperability: Extensive coverage of the need for international AI standards to ensure systems can work across borders, with ITU highlighting their 200+ approved AI standards and 200 more in development, focusing on data formats, APIs, communication protocols, and harmonized terminology.
– Multi-stakeholder Governance and Trust Building: Emphasis on inclusive governance involving government, civil society, technical communities, and private sector to build legitimacy and public trust in AI systems, with particular attention to local community participation and transparent decision-making processes.
– Implementation Challenges and Human Factors: Discussion of practical barriers to AI deployment including data silos, organizational alignment issues, regulatory compliance across regions, and most critically, human resistance to change and concerns about job displacement.
– Regulatory Alignment and Cross-border Trade: Examination of how regulatory frameworks like the EU AI Act impact international business and investment, with discussion of both the challenges and benefits of having clear regulatory guidelines for AI development and deployment.
Overall Purpose:
The discussion aimed to explore how to build and maintain public trust in AI governance and deployment across different levels – from local community implementation to international cooperation – while addressing practical challenges in scaling AI solutions globally.
Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s points rather than disagreeing. The tone was professional and solution-oriented, with speakers sharing practical experiences and concrete examples. There was a consistent emphasis on the importance of human-centered approaches to AI deployment, and the conversation became increasingly focused on implementation challenges and practical solutions as it progressed from strategic discussions to operational realities.
Speakers
– Lidia: Moderator/Host of the discussion
– Rafał Rosiński: Minister (Poland) – expertise in digital governance, cybersecurity, and AI implementation in national systems
– Atsuko Okuda: Representative from ITU (International Telecommunication Union) – expertise in AI standards and international telecommunications
– Chengetai Masango: Expert in multi-stakeholder cooperation and AI governance – works with Internet Governance Forum (IGF)
– Odes: Expert in community-driven digital ecosystems and AI inclusivity
– J.J. Singh: Representative from Polish Chamber of Commerce – expertise in international trade and regulatory alignment between Poland, EU, and India
– Mariusz Kura: Private sector representative from Bilenium – expertise in scaling AI solutions across regions and AI compliance
– Pramod: Infrastructure expert – expertise in data sovereignty, secure compute, and resilient digital backbone for AI systems
– Edyta Gorzon: User adoption specialist – expertise in AI adoption by teams and change management
Additional speakers:
None – all speakers mentioned in the transcript are included in the speakers list above.
Full session report
This panel discussion on building trust in AI governance and public services brought together government officials, international organization representatives, and private sector experts to address implementation challenges that extend beyond technical capabilities to encompass regulatory frameworks, human factors, and international cooperation.
National AI Infrastructure and Sovereignty
Minister Rafał Rosiński from Poland emphasized the critical importance of protecting national infrastructure through trustworthy AI systems. He presented Poland’s approach to developing national language models, including the Bielik LLM, through cooperation with academia and the private sector. This strategy addresses data sovereignty concerns while building domestic AI capabilities for critical infrastructure including energy, water management, and healthcare.
The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sovereignty requires control over legal frameworks governing data access, encryption keys, and infrastructure management. This expanded definition challenges conventional approaches and suggests nations must consider the full technology stack when developing sovereign AI capabilities.
The emphasis on national language models reflects growing recognition that AI systems trained on local data and cultural contexts can provide more relevant services to citizens while reducing dependency on foreign AI systems for critical government functions.
Global Standards and Interoperability
Atsuko Okuda from the International Telecommunication Union highlighted the extensive work underway to create AI interoperability frameworks, with “over 200 already approved AI standards, and 200 more in the pipeline.” The ITU’s standards work covers data formats, system integration approaches, and shared terminology that addresses fundamental communication challenges between different AI stakeholders.
However, Okuda observed a critical implementation gap: despite comprehensive standards being available, many organizations and governments lack awareness of how to utilize these resources effectively. This awareness challenge represents a significant barrier to AI adoption that extends beyond technical capabilities to encompass education and capacity building.
The standards serve multiple functions beyond technical interoperability, providing frameworks for testing, validation, and conformance that enable organizations to verify system performance across different regulatory environments and cultural contexts.
Multi-stakeholder Governance and Trust Building
Chengetai Masango emphasized that trust in AI systems requires genuine multi-stakeholder participation involving government, civil society, technical communities, and private sector representatives. His insight that “inclusivity breeds legitimacy and thereby trust” reflects lessons from internet governance that apply directly to AI deployment.
Trust building must occur at multiple levels simultaneously. While global frameworks provide necessary foundations, trust is ultimately built locally through community engagement and transparent decision-making processes. This requires bidirectional communication flows where local communities can influence global standards rather than simply receiving implementation guidance.
Masango stressed the importance of accountability mechanisms, noting that multi-stakeholder cooperation without clear accountability structures fails to build trust because affected parties have no recourse when systems fail or cause harm.
Community-Driven Development and Inclusion
Odes provided insights into how community-driven approaches can build local trust in AI systems, emphasizing linguistic diversity as crucial for democratic participation. When AI systems operate in languages understood by only portions of the population, they effectively exclude citizens from accessing public services.
The discussion revealed systemic challenges in current AI development approaches, with most datasets originating from contexts that may not function effectively when deployed in different cultural and linguistic environments. This creates dependency relationships where communities can only consume AI solutions rather than participate in their development.
Odes emphasized the importance of feedback mechanisms, noting that AI systems require ongoing community input to remain effective and aligned with local needs and values as they learn and evolve.
Regulatory Frameworks and Business Implementation
J.J. Singh from the Polish Chamber of Commerce argued that regulation enables rather than hinders international trade, challenging assumptions about the relationship between regulation and business development. He noted that the EU AI Act, with its 2026 implementation timeline, provides clear guidelines that help companies prepare for market entry.
Singh observed that regulatory uncertainty, rather than regulatory strictness, poses the greatest barrier to international AI business. He mentioned sandbox solutions that allow businesses to test compliance approaches, demonstrating how regulators can balance strict guidelines with practical business needs.
The discussion revealed that medium-sized enterprises often prefer working with trusted local providers when regulatory requirements are unclear, potentially limiting international collaboration. Singh emphasized the need for control mechanisms, particularly regarding generative AI, noting that different countries use AI for varying purposes – some primarily for surveillance, others focusing on profit maximization.
Implementation Challenges and Organizational Barriers
The discussion revealed that technological readiness significantly exceeds organizational capacity for AI deployment. Pramod noted that almost 80% of AI pilots fail to reach production, primarily due to data governance issues rather than technical limitations.
Data silos emerged as a primary barrier, with organizations struggling to integrate data across different systems and departments. This challenge extends beyond technical integration to encompass governance frameworks, privacy protections, and organizational change management.
Cross-functional alignment represents another significant barrier, as AI systems cut across traditional organizational boundaries, requiring coordination between technology teams, legal departments, business units, and external stakeholders.
Mariusz Kura mentioned their development of AI compliance suite tools to help organizations navigate regulatory complexity while optimizing for cost-effectiveness, illustrating how compliance challenges are driving innovation in AI governance technologies.
Human Factors and User Adoption
Edyta Gorzon provided crucial insights into user adoption challenges, noting that the majority of AI users are not technically sophisticated conference participants. She observed that fear of replacement by AI systems represents a significant barrier extending beyond rational job displacement concerns to psychological and emotional responses to technological change.
Gorzon emphasized communicating quality improvements rather than productivity gains, addressing the cognitive overload many workers experience when faced with constant pressure to do more, faster. She noted that “our brains are not capable to manage” the increasing number of impulses and changes, suggesting that successful AI implementation requires attention to human psychological and cognitive limitations.
She stressed that simple communication using practical examples is essential for non-technical users, and that messages focused solely on productivity improvements may actually increase resistance by creating additional stress about performance expectations.
Infrastructure Requirements for Trusted AI
Pramod outlined key requirements for trusted AI infrastructure, focusing on three critical dimensions: control, explainability, and resilience. The control dimension encompasses data sovereignty, infrastructure sovereignty, and audit capabilities throughout the technology stack.
Explainability requirements extend beyond AI model interpretability to encompass full visibility across data flows, network operations, and system interactions. This comprehensive approach is essential for AI systems deployed in critical applications where failures can have serious consequences.
The resilience requirement emphasizes that AI systems must achieve high reliability standards, particularly when deployed in healthcare, emergency services, or other critical applications, representing a shift from treating AI as experimental technology to viewing it as essential infrastructure.
Unresolved Challenges
Despite comprehensive discussion, several critical challenges remain. The gap between successful AI pilots and production deployment represents a systemic issue requiring new approaches to data governance and organizational change management.
The balance between AI regulation and innovation remains contentious, with different jurisdictions taking varying approaches to risk management. Developing harmonized international frameworks that respect national sovereignty while enabling cross-border cooperation represents an ongoing challenge.
The capacity and awareness gaps identified throughout the discussion suggest that successful AI governance requires significant investment in education and capacity building, as existing standards and frameworks are underutilized due to knowledge gaps.
Conclusion
This discussion revealed that building trust in AI governance requires a multi-layered approach addressing technical, regulatory, organizational, and human factors simultaneously. The conversation demonstrated consensus on fundamental principles while revealing significant implementation challenges.
The emphasis on human factors as the primary barrier to AI adoption challenges technology-centric approaches to AI deployment. Successful AI governance must prioritize user experience, change management, and psychological factors alongside technical capabilities and regulatory compliance.
The panel’s evolution from policy frameworks to concrete implementation challenges reflects a maturing understanding of AI governance that addresses real-world deployment realities. While strong consensus emerged on the importance of inclusive, multi-stakeholder approaches, significant work remains to translate these insights into effective governance frameworks and implementation strategies.
Session transcript
I direct my first question to Minister Rosiński. Minister, Poland has been implementing and shaping digital governance and also investing in the sustainability and resilience of national systems. What are the lessons learned, and which are the most relevant when we talk about the implementation of AI in national systems? Maybe the other one. Yeah.
Thank you very much. Thank you. Like the energy sector, water supply, health care. That is the main point of our day. Critical infrastructure, I think, is the crucial point in every country. We cannot imagine how we can run business if we have no energy, no water, and our data is not protected well enough. And we also support local government. We create local… through cyber security. And that is connected with digital skills, especially hygiene in this area. And cyber security is linked with AI, with trustworthy AI. That is also important: if we use AI, especially national LLMs, we can use it for the security of our business.
And how can we train on national data? That's why in Poland we've also built Polish LLMs. The first one is PLLuM, which is a public LLM, and the second one is Bielik, built in cooperation with academia and the private sector, and we support it as well. That also allows Polish business to be competitive. If we see this whole ecosystem, and we can also exchange our ideas and share our knowledge with other countries, that is the proper way to be safe and to use trustworthy AI.
Thank you very much, Minister, for those beautiful examples of language models from Poland and their role in the Polish ecosystem, in both the public and the private sector, and for framing AI as a matter of public responsibility and resilience. Now let's move to the international level and have a look at the global dimension. I would like to ask a question to Atsuko Okuda. How can global standards ensure interoperability and resilience of AI systems across regions?
Thank you very much. First of all, good afternoon to all of you, and I would like to thank the organizer for inviting ITU, the International Telecommunication Union. As some of you may know, ITU is the oldest UN agency, specialized in digital technology, and we have standardization work, including on the topic of AI. Now, what do AI standards do for all of us? Number one, they enhance interoperability, which means that if a system or solution is developed in India, it can talk to a system, as His Excellency mentioned, in Poland, and vice versa. That will lower the investment cost and increase efficiency. So what are the standards that could be useful for interoperability, within the country as well as within the region or globally?
So one concrete standard… Oh, by the way, just to give you the magnitude: ITU has over 200 already approved AI standards, and 200 more are in the pipeline. So in total we have about 500 standards, in place or in the pipeline. You can see there are many different standards available for everyone. So what are those standards? Number one, for interoperability, we believe that data, the interface, and the protocol are critical. For example, we have a shared data format that we can all use. Otherwise, how can I share my data with you in a different data format? Two, standardized APIs, so that system-to-system communication will be smooth. And three, of course, the communication protocol.
Now, based on these standards, we have more, how can I say, comprehensive standards: for example, AI for network automation, multimedia AI processing standards, as well as machine-to-machine data-sharing frameworks. And second, we also have harmonized terminology, vocabulary, and reference architectures. Because when I talk with anyone about some aspect of AI, how do we know that we understand the same thing? This taxonomy, vocabulary, and reference architecture are critical for interoperability, and for us to be able to develop and exchange data or develop algorithms together. We also have AI model life-cycle definitions, so I know what you are referring to, and you know what I'm referring to.
Three, we have performance and testing standards, so that we can test and validate, and we also have conformance specifications that we use to verify that what you are sharing is what I can validate. So I hope these standards are useful for enhancing interoperability as well as collaboration within the country and across regions. Thank you.
Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Chengetai, how does multi-stakeholder cooperation translate into real public trust in AI governance?
Thank you very much, and thank you very much for the invitation. I'd like also to thank the organisers, Bilenium and Poland of course, for inviting me. Now, for your question: for any process, I think, inclusivity breeds legitimacy and thereby trust. You must have all the stakeholders who are affected by whatever policy it is, so government, civil society, the technical community, and the private sector, all talking to each other and giving their points of view from their perspectives. Then you can arrive at policies that have greater buy-in, because once people are involved in the process, they're more likely to adopt it. Secondly, the transparency of the process also matters. People need to know how decisions came about and what was decided, and this can be done with open consultations, public comment periods, and accessible documentation. That builds confidence.
This is basically the same model that has built the internet into what it is now: you have the public comment period, et cetera, and then these are adopted. The IGF as well shows that this works. The Internet Governance Forum is a multi-stakeholder dialogue, and within our framework we discuss AI governance as well, among a lot of other things, misinformation, disinformation, et cetera, and this approach can anchor AI governance in legitimacy. Trust as well is built locally, so these discussions should not just be happening at a global level and then trickle down. Local communities should be able to contribute in some manner, and this process should be a cycle: the feedback loop should go down but also up.
So there’s a resonance going on there. And then, lastly, accountability mechanisms are also very, very important. Multi-stakeholder cooperation without clear accountability mechanisms will not be trusted, because people need to know, if they have an issue, where they can go to express that concern, and that it will be dealt with in some manner. Thank you.
Thank you very much. I couldn’t agree more. Trust is also built locally, and that’s why I would like to direct my next question to Odes. How can community-driven digital ecosystems contribute to building trust in AI locally?
Thank you. Good afternoon, everyone. Thank you for your attention, and thank you for the invitation to join this panel. To give context to community participation, both at the innovation level and at the policy level, I would like to start where Chengetai just finished: community is a big stakeholder and a big participant in the multi-stakeholder framework. If you think about deploying AI solutions, especially for public services, you realize that inclusivity is what builds trust. The ability to deploy AI that can be used by every citizen is at the core of the trust between the users and the providers of the services. So take that community into account, and make sure it is included.
I’ll give an example. If you think about the linguistic diversity that exists in many communities, in many countries of this world, you realize that if you build a product or an AI solution in a language that only 20% or 50% of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, the citizens. The second part is that in the innovation cycle as well, we’ve seen again and again AI being deployed that doesn’t reflect the realities of certain communities. You can think about it linguistically, you can think about it contextually, and you can think about the different forms and shapes it takes in different domains.
So the participation of the community in ensuring that the innovation and policy levels align with the needs and realities of those particular communities is very important. To finish off, I think that cities, communities, and citizens are also a big part of how AI systems are improved, because once you deploy such a system without a feedback loop, you realize that those technologies only work for some time, and adoption goes down after a while. So I think three things are key in building trust: first, inclusivity; second, participation in the innovation as well; and lastly, the feedback mechanism for how those services are being consumed and used, and what can be improved.
Thank you very much. Trust can also influence economic confidence and cross-border collaboration. That's why I would like to direct my next question to J.J. Does regulatory alignment directly influence international trade? What is your perspective and observation? Could you share experience from the Polish Chamber of Commerce?
J.J. Singh:
Well, I will just share the experience from the perspective of Poland, the EU, and India. Normally everyone says that a lot of regulation disheartens business and investment. But in this particular case, when it comes to AI, I think we need a guidebook, because without it everything can go haywire. So if you look at the regulation, with the EU AI Act being implemented in 2026, in a way it creates a kind of issue for investors. But on the other hand, if you have clear guidelines, that is always very good: in view of the India-EU FTA, Indian companies will be ready
for deployment of AI algorithms and other things within Europe. Now, let's also take the example of how the EU is responding. Even businesses are saying that the regulations are very tough and the compliance is very tough, but the EU is also doing its part to make it easier for businesses. I can use an example here from 2025, where in France there are 10 AI companies from India that are part of an accelerator program, and the EU is also ready to give a sandbox solution for all the regulations. So, all in all, my perspective is that you need a kind of control, especially on generative AI. The rulebook the EU has given will be, I would say, a playbook for all the AI companies involved, and I think that India should be involved.
India should take advantage of that, because if they are already prepared to adhere to the rules, then entry will be easier for the companies. So I definitely support the regulation, because in this particular matter of AI we need regulation. If you look at other countries, and I will not take names, one is using it for policing its own people, and a second is using it only for making money. So yes, it's good, but with sense.
Thank you very much. In our discussion we also have three representatives of the private sector, who know the practical aspects very well because they deal with all these challenges on a daily basis. I would like to start with Mariusz Kura. Mariusz, how do you scale AI solutions across regions while managing regulatory divergence?
Thank you, Lidia, and good afternoon, everyone. Distributed software development for international IT companies is not new. We started practicing this at Bilenium 10 years back, when together we were opening the office, the delivery center, in Pune, Maharashtra, here in India. A simple practice to scale up and be fast is to have exactly these global offices: our development team can build some solution, let's say, in one day and deploy it, and the next day the business in Europe can verify whether it works as expected. If not, our development team in India can fix it even on the same day. So that's one way we've been scaling up so far.
But the challenge nowadays is exactly how to scale up and follow all the regulations, and how to work for different regions and different countries, where, for example in the public sector, there are a lot of rules. And hopefully from ITU we will also get those two hundred more standards that are in the pipeline. So, yes, the way forward is standardization. AI engineers and AI solution providers in India need to learn and be compliant with all those standards, and that is very difficult nowadays, because it moves so fast: things are changing almost every week. How exactly do you follow that? At Bilenium we have recently developed one dedicated solution, which is the AI compliance suite.
And this tool is quite complex. It covers not only the governance and compliance area but also helps organizations use the right AI tools. In a while, Edyta will be talking about Copilot, but there are plenty of different tools used in enterprises, and our solution helps organizations navigate users to the right one. And what does the right solution mean? For example, it could be from the cost-effectiveness perspective: should we use and utilize the tokens from this provider, or does another provider have a better licensing practice and policy offering? That, I believe, is how such solutions can help IT solution providers.
Thank you.
Thank you very much for a beautiful example of how AI can help manage AI. And now let's have a look at infrastructure. I have a question for Pramod. From an infrastructure standpoint, what does trusted AI require on the ground, in terms of data sovereignty, secure compute, and a resilient digital backbone?
Good afternoon, everyone. Pleasure to be here. So when AI starts getting adopted into public services and critical national-security deployments, trust moves not just to the models and data, but to the underlying foundation. When I say foundation: where is the model running? What compute is it running on? Do you control the data? What jurisdiction is it in? What are the security components around it? All in all, there are three questions one needs to ask before saying that you fully trust AI.
The first question is on the control. The second one is, you know, can you tell me what happened, right? The AI system, will you be able to explain what happened across each of these layers? And third one is, is it up? So the control part is like we just discussed, you know, control, not just in the data. Data sovereignty just doesn’t mean that, you know, data space is local. But what we’ve seen from our customers asking, you know, is there any other jurisdictional law that can, you know, override saying, hey, I need full visibility of the data, of that infrastructure, you know, auditability and so on and so forth. So I think that’s, do you have the keys?
That is a key question one needs to ask. The second one is on explainability, on visibility: not just on model monitoring and whether I am getting accurate data, but overall on data. Who accessed it? What is the governance around it? What happened in the network? Across the whole foundation, if you don’t have full visibility, you will not be able to explain why a system took a decision. And because we are talking about critical infrastructure, the decisions it takes have an impact, and that impact could be disastrous. The third one is resilience. By resilience, we mean: can AI stay up? Let’s say, in healthcare, a hospital in a remote tier city deploys an AI to diagnose patients.
A patient walking in at 2 a.m. on a Sunday morning: the system needs to be up. It needs to be resilient like any financial system, but here the implications are huge. So AI is moving from being just a software service to AI as a foundation, where all of these elements need to come together before anyone can say: I fully trust it.
Thank you very much. It is common knowledge that technologies are widely diffused and used only when they are trusted, and sometimes the human factor is an important barrier to AI adoption. That’s why I would like to ask Edyta, who works with users a lot: what determines whether AI is truly adopted by teams?
Excellent question. Thank you so much for that. Good afternoon, everybody. Thank you for all the comments. So we’ve been talking about infrastructure, about security, cybersecurity, about the legal aspects of AI. However, we should remember that deployment is technology, but with the users, we want to change the way they act with AI. From the practical perspective, because I’m responsible for driving adoption: in the past it was the topic of modern work, now we have AI. And we should remember that the majority of AI users are end users. They are not people who take part in conferences like this one. They are not that fluent with technology, but at the same time we expect them to be fluent and to change the way they act and work.
So from my experience, it’s extremely important to communicate in the right way, in simple words and with simple examples, how AI can be a powerful tool. Not because of the features, because we all know that features are not driving anything: not business, not processes, not the business scenarios we have in our minds. And with AI, everybody can use it in a different way. This is the biggest challenge from the change management perspective as well, because we can have the best technology, the best model, but if users don’t know how to use it, if they don’t know where it leads, it’s hard to expect that we’re going to succeed at scale.
Thank you very much, and thank you to all of you for sharing your views in the first round of questions. In the second round, we will turn from strategy to implementation, and I will ask all of you for a very short reflection at this level. Minister, what is the most complex operational challenge governments face when deploying AI in public services? What is your view?
Shortly, of course. What JJ mentioned, I talked about this too, and it is very important also from the Polish perspective, and we can see that perspective in other countries beyond the EU. It is important how we can train the data, how we can use the data, and what the future of generative AI will be. We have to use it wisely, of course. The final goal, and how it will be used, is very important, especially for the public sector and especially for our citizens; if we look at it in that way, it will be good for everyone. And of course, with the implementation of AI in the public sector, and when private companies also use this data, it is important to see how we can also fight against deepfakes and false information. Thank you.
Thank you very much. Atsuko, where do you see the big implementation gap today? Is it standards, or the lack of standards, skills, governance? What is it?
Thank you for this very important question. I believe that there is perhaps an awareness challenge as well as a capacity challenge, because I think this whole discussion on standards came as a surprise to many of the participants. Actually, this is not the first session where I’m talking about standards; it is actually the third during the summit. But unless you are a standardization person, you don’t normally think: okay, there are building blocks available that I can start building something on. So we are trying to promote the importance of standardization and of using the standards, so that you don’t have to start from scratch. I believe we need a lot of different capacities, starting with the capacity to articulate the issue.
What is it that you or we want to address? Sometimes AI may or may not be the answer; some other technologies may be able to help you better. So I believe this articulation is a huge opportunity, and maybe a challenge as well. After you articulate, how do you plan? How do you translate that articulated issue into an operational project and initiative? I believe that is another layer of the capacity challenge. So I can see that there are many countries, companies, and agencies who want to take advantage of AI, and I hope that this discussion is helpful to concretize those steps moving forward. Thank you.
Thank you very much. My next question is directed to our technical experts, Pramod and Mariusz, and the question is: in real AI projects, what most often slows down implementation?
First, definitely not technology, because technology is almost always ahead; that has been very true over the last couple of years, with the advancements that have happened. So despite advanced technology being available, despite GPUs being available and the platforms being available, we still don’t see too many monetizable AI use cases, and that’s a big problem. Everybody is trying to figure out where the ROI is, what that use case is. And that again boils down to a few key aspects. The biggest friction is on data. We’ve seen, especially in India, many, many pilots, and almost 80% of those pilots don’t make it to production. And the key reason is the data.
Data is siloed, data is not ready for AI scale, and there is no governance built around data. That’s why in POCs you use a good set of data and you show value, but when it comes to production, most of the time they don’t have enough data to get the value out of it. The second aspect: in an organization, AI cuts across many functions. The technology team is saying, we are ready with this, but then there are the legal aspects, and there is an IT person saying, I cannot allow you to do this, and so forth. That alignment is not thought through, and that again slows down adoption.
So I think those are the primary factors, and then the trust factor comes in. The third part is: how much do you really trust AI? How much risk comfort do you have? Is a human required to review every decision it makes? Organizations need to choose that balance, or choose the best use case where it is balanced: can I deploy this without requiring too much human intervention? Those are the key factors that we see, especially in India, slowing down adoption.
It seems that whatever we are discussing, infrastructure or other challenges, the human factor is always at the end of it and behind everything. Mariusz, is your experience similar, or do you have different observations?
I totally agree with Pramod. It’s not us, the technology side, who is slowing it down. Maybe sometimes, but many times it is the business side, and especially for medium-sized enterprises. If they don’t know whether they can work with some solutions, or whether they can take solutions, for example, from India, they will step back and go to the more trusted local providers. So I believe that the standards we are talking about will help us a lot. That’s my practice.
Okay. Edyta, what is the most common human barrier, from your view?
Thank you for this question. So first of all, we talk again about humans: the most important factor, and at the same time the biggest challenge and the biggest opportunity. From my perspective, while talking with users, because today I am the user’s voice, I can hear very often that people are reflecting on what’s going to happen next: am I going to be replaced by AI? What’s in it for me? And as organizations, no matter whether public or private sector, we need to find the message for how to communicate all of those changes that are coming. Another topic I’m facing while talking with users is that they basically don’t know what to expect next, because, as we have noticed, AI is another revolution, and the revolutions are coming one after another very quickly.
And when users hear, okay, I should be more productive, they respond: I don’t want to be more productive anymore, right? I don’t want to do faster meetings. I don’t want to take faster notes. It’s nice, but at the same time the number of different impulses I’m getting from outside is simply too high. Our brains are not capable of managing that in the right way; we’re closer to depression, and we know in which direction that goes. So how we communicate AI as a tool is extremely important. Be careful what you are telling your users. Don’t tell them that they will be more productive, but maybe that the quality of their work is going to be better.
Maybe they don’t have to repeat the same tasks every day. We must be very, very careful about the wording we use in regard to AI adoption. Thank you.
Thank you very much. My next question is for Chengetai, because he looks at these challenges from a global perspective and has access to data from all regions. What, in your view, would be the most important practical step to strengthen public trust in AI deployment?
Thank you very much for that question. And by the way, I totally agree with you. I think the first step is quite obvious: inclusive participation in AI decision-making. That means ensuring that the affected communities and individuals have input into how the systems operate before they are deployed, not after the fact. We shouldn’t be fixing things after the fact; we should gather input before deployment.
The second one is independent oversight: establishing review bodies that include civil society and technical experts, so not just the regulators and industry, but a 360-degree approach to it. Thank you.
Thank you very much. We are approaching the end of our session, so I would like to ask Odes for a quick comment. What ensures AI remains inclusive in real-world implementation?
There are a few key factors to look at when you talk about inclusivity. I think the first is to look at who it is meant for and to ensure that they are accounted for. And this can happen in different forms. For example, when you look at the data sets that power AI models, most of the time they tend to come from, let’s say, the global north, meaning that they won’t be very contextually aware when they’re deployed in the global south. So there is a need to contextualize the AI systems being developed, to ensure that they really respond to the users they are meant for. I think the second part of ensuring inclusivity is also ensuring local value creation.
We’ve seen too often the importation of AI systems, but not an understanding of how especially small nations can participate in building and deploying AI for their own interests. So I think those two things are very, very critical. And the other part is also, I guess, the linguistic perspective that I mentioned before: looking at the linguistic diversity that exists around the globe and ensuring that people are able to consume the particular technology being developed. When we think about AI and how it’s deployed, we tend to look at the first 20% of the market, but the remaining 80% also needs to be accounted for.
Thank you very much. Last question, and I will ask JJ for a very brief, one-sentence answer. What creates long-term confidence in cross-border AI investments, from your perspective?
J.J. Singh:
Well, you know, I think I can simply say it’s a mix of everything. The involvement of the right people, I would rather say the people at the top, who are taking the serious investment decisions, because that’s very important. And the people who are involved should know what they want it for, because AI deployment is a big thing, but you should know what you want to solve with it. So that’s very important.
Thank you very much. It’s time to wrap up our discussion.
Lidia
Speech speed
47 words per minute
Speech length
716 words
Speech time
903 seconds
Role of trust in economic confidence
Explanation
The moderator highlights that trust is a key factor influencing economic confidence and cross‑border collaboration. Trust, reinforced by standards, helps create a stable environment for international AI cooperation.
Evidence
“Trust also can influence economic confidence and cross-border collaboration.” [54]. “Standards are a very important pillar of building trust.” [39].
Major discussion point
Role of trust in economic confidence
Topics
Building confidence and security in the use of ICTs | The digital economy
Rafał Rosiński
Speech speed
63 words per minute
Speech length
418 words
Speech time
394 seconds
National AI implementation & resilience (Poland)
Explanation
Poland has developed its own large language models to safeguard critical infrastructure and maintain competitiveness. National LLMs such as Bielik are intended to provide secure, trustworthy AI for public and private sectors.
Evidence
“That’s why in Poland we’ve built also Polish LLMs.” [16]. “...one public LLM, and the second one is Bielik, that is with cooperation with academia, with private sector, and we support also.” [17]. “Critical infrastructure, I think it’s the crucial point in every country.” [20]. “to be safe and to use trustworthy AI.” [6].
Major discussion point
National AI implementation & resilience
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | The enabling environment for digital development
Operational challenges for governments deploying AI
Explanation
Deploying AI in the public sector raises complex issues such as training national data and combating deepfakes and misinformation. Governments must manage data quality, security, and the risk of false information while integrating AI into services.
Evidence
“And how can we train the national data?” [15]. “With the implementation of AI in the public sector... it is important to see how we can also fight against deepfakes and false information.” [88].
Major discussion point
Operational challenges for governments
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Atsuko Okuda
Speech speed
120 words per minute
Speech length
695 words
Speech time
345 seconds
Global AI standards & interoperability
Explanation
Standardized APIs, shared data formats, harmonized terminology and clear protocols are essential for seamless system‑to‑system communication and cross‑regional AI collaboration.
Evidence
“Two, standardized API so that system-to-system communication will be smooth.” [24]. “And second, we also have a harmonized terminology, vocabulary, and reference architectures.” [26]. “Number one, for the interoperability, we believe that data, the interface, and protocol are critical.” [31]. “For example, we have a shared data format that we can all use.” [28].
Major discussion point
Global AI standards & interoperability
Topics
Data governance | Artificial intelligence | The enabling environment for digital development
Implementation gaps: awareness & capacity
Explanation
Many participants are unfamiliar with existing AI standards, creating both awareness and capacity challenges. Articulating issues and building expertise are necessary to close the gap.
Evidence
“I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole discussion on standards came as a surprise to many of the participants.” [93]. “I believe we need a lot of different capacities, the capacity to articulate the issue.” [45].
Major discussion point
Implementation gaps: awareness & capacity
Topics
Capacity development | Artificial intelligence
Chengetai Masango
Speech speed
149 words per minute
Speech length
501 words
Speech time
200 seconds
Inclusive multi‑stakeholder governance & trust
Explanation
Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accountability mechanisms further strengthen confidence in AI policies.
Evidence
“inclusivity breeds legitimacy and thereby trust, so if you have all the stakeholders who are affected by whatever policy that is, so you must have government, civil society, the technical community and the private sector all talking to each other and giving their point of views from their perspectives, I think then you can result in policies that have a greater buy -in so once people are involved in the process they’re more likely to adopt that process and secondly the transparency of the process also matters people need to know how these decisions came about and also what was the decided and this can be done with open consultations public comment periods and accessible documentation that builds confidence.” [35]. “And then I think lastly, accountability mechanisms is also very, very important.” [43].
Major discussion point
Inclusive multi‑stakeholder governance & trust
Topics
Internet governance | Human rights and the ethical dimensions of the information society | Artificial intelligence
Practical steps to strengthen public trust in AI
Explanation
Before deployment, input from communities and independent oversight bodies should be established. Review panels that include civil society and technical experts ensure accountability and pre‑emptive governance.
Evidence
“We shouldn’t be fixing things after the fact, but we should go on an input before the deployment.” [115]. “The second one is independent oversight, so establishing review bodies that include civil society and the technical experts, so not just the regulators and industry, but a 360 approach to it.” [116].
Major discussion point
Practical steps to strengthen public trust
Topics
Internet governance | Human rights and the ethical dimensions of the information society | Artificial intelligence
Odes
Speech speed
136 words per minute
Speech length
633 words
Speech time
278 seconds
Community‑driven ecosystems & local trust
Explanation
Linguistic diversity and community participation are crucial for building trust in AI services. When AI solutions are offered only in languages understood by a minority, trust erodes between providers and citizens.
Evidence
“If you think about linguistic diversity that is there in many of the communities, in many of the countries of this world, you realize that if you build such a product, or an AI solution and it’s in language that only 20%, 50 % of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, which is the citizens.” [51]. “So the participation of the community into that, in ensuring that the innovation and the policy level align with the needs and the realities of those particular communities are very important.” [53].
Major discussion point
Community‑driven ecosystems & local trust
Topics
Closing all digital divides | Artificial intelligence
Ensuring AI remains inclusive in practice
Explanation
Beyond language, inclusive AI must generate local value and respect community contexts. Ensuring local value creation and addressing linguistic needs keeps AI relevant and trusted.
Evidence
“If you think about linguistic diversity… trust is broken…” [51]. “I think the second part of ensuring inclusivity is also ensuring the local value creation.” [46].
Major discussion point
Ensuring AI remains inclusive in practice
Topics
Closing all digital divides | Artificial intelligence
J.J. Singh
Speech speed
160 words per minute
Speech length
438 words
Speech time
163 seconds
Regulatory alignment & international trade
Explanation
The EU AI Act serves as a playbook and sandbox, helping foreign AI firms navigate regulations and invest across borders. Clear guidelines reduce uncertainty for investors and facilitate trade.
Evidence
“So the rulebook which EU has given, it will be like, you know, I would say it’s a playbook for all the AI companies involved, and I think that India should be involved.” [56]. “So if you look at the regulation with the EU AI Act, which has been implemented in 2026, I think in a way it makes a kind of issue for the investors.” [57]. “I can use the example here from 2025, where in France there are 10 AI companies from India, which are actually part of the accelerator program, and EU is also ready to give a sandbox solution for all the regulations.” [58].
Major discussion point
Regulatory alignment & international trade
Topics
The enabling environment for digital development | Artificial intelligence
Long‑term confidence for cross‑border AI investment
Explanation
Involvement of senior decision‑makers who understand strategic goals creates lasting confidence for cross‑border AI investments. Clear purpose and top‑level commitment are essential for sustained collaboration.
Evidence
“The involvement from the right people, I would rather say the people who are on the top, who are taking the serious decision investments, because that’s very important.” [120]. “And the people who are involved, they should know what they want it for, because AI deployment is a big thing, but you should know what you want to solve with it.” [121].
Major discussion point
Long‑term confidence for cross‑border AI investment
Topics
The enabling environment for digital development | Artificial intelligence
Mariusz Kura
Speech speed
140 words per minute
Speech length
475 words
Speech time
203 seconds
Scaling AI across regions & managing regulatory divergence
Explanation
Distributed software development, a dedicated AI compliance suite, and global offices enable rapid scaling while respecting diverse regulations. This approach helps navigate differing rules across countries.
Evidence
“Distributed software development.” [65]. “At Bilenium, recently we have developed as well one dedicated solution, which is the AI compliance suite.” [66]. “And simple practice to scale up and be fast is to have exactly the global offices, and like our development team, can build some solution, let’s say, in one day and deploy it, the next day, business in Europe can verify if it’s working as it was expected.” [67]. “But the challenge nowadays is exactly how to scale up and follow all the regulations, and how to work for the different regions, for the different countries, where we have exactly, like for the public sector, a lot of rules.” [68].
Major discussion point
Scaling AI across regions & managing regulatory divergence
Topics
The enabling environment for digital development | Artificial intelligence | Data governance
Factors that slow AI project implementation (business side)
Explanation
Businesses often hesitate to adopt foreign AI solutions, preferring trusted local providers. Uncertainty and lack of familiarity on the business side delay project progress.
Evidence
“If they don’t know if they can work with some solutions… they will step back and they will go to the more trusted local providers.” [107]. “Maybe sometimes, but it’s many times on the business side and especially for the medium -sized enterprises.” [109].
Major discussion point
Factors that slow AI project implementation
Topics
The enabling environment for digital development | Artificial intelligence
Pramod
Speech speed
141 words per minute
Speech length
823 words
Speech time
348 seconds
Trusted AI infrastructure requirements
Explanation
Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational and maintain sovereignty over data to be trusted.
Evidence
“So the resilience, by resilience, we mean can AI stay up?” [2]. “Do you control the data?” [77].
Major discussion point
Trusted AI infrastructure requirements
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Data governance
Factors that slow AI project implementation (data & governance)
Explanation
Data silos, lack of governance and insufficient data quality cause most pilots to stall before production. Without proper data management, AI projects cannot scale.
Evidence
“Data is siloed, data is not ready for AI scale.” [71]. “So almost 80% of those pilots don’t make it to production.” [98]. “There’s no governance built around data.” [101].
Major discussion point
Factors that slow AI project implementation
Topics
Capacity development | Artificial intelligence | Data governance
Edyta Gorzon
Speech speed
144 words per minute
Speech length
559 words
Speech time
231 seconds
Human factors & adoption barriers
Explanation
Human factors such as fear of replacement and communication style are major barriers to AI adoption. Simple, clear messaging and addressing user concerns are essential for successful uptake.
Evidence
“And sometimes human factor is important barrier in AI adoption.” [81]. “people, they are reflecting what’s going to be next if I’m going to be replaced by AI.” [84]. “in simple words and simple examples how AI can be the powerful tool.” [63].
Major discussion point
Human factors & adoption barriers
Topics
Capacity development | Human rights and the ethical dimensions of the information society
Agreements
Agreement points
Human factors are the primary barrier to AI adoption
Speakers
– Pramod
– Edyta Gorzon
– Lidia
Arguments
Cross-functional organizational alignment is needed as AI cuts across multiple business functions
Users fear replacement by AI and need clear communication about benefits beyond productivity
Human factors are always behind technology challenges in AI implementation
Summary
All three speakers recognize that despite technological advancement, human-related challenges including organizational alignment, user fears, and communication issues are the fundamental barriers to successful AI implementation
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Standards and regulatory frameworks are essential for AI trust and adoption
Speakers
– Atsuko Okuda
– J.J. Singh
– Mariusz Kura
– Lidia
Arguments
ITU has 200+ approved AI standards with 200 more in pipeline, covering data formats, APIs, and protocols
EU AI Act provides necessary guidelines despite compliance challenges, with sandbox solutions for businesses
AI compliance tools help organizations navigate regulatory requirements and choose cost-effective solutions
Standards are a very important pillar of building trust in AI systems
Summary
There is strong consensus that standardization and clear regulatory frameworks, while creating compliance challenges, are necessary for building trust and enabling AI adoption across borders
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Multi-stakeholder participation is crucial for AI governance legitimacy
Speakers
– Chengetai Masango
– Odes
– Lidia
Arguments
Inclusive participation of all stakeholders (government, civil society, technical community, private sector) breeds legitimacy and trust
Community participation ensures linguistic diversity and contextual relevance in AI deployment
Trust is built locally and requires community-level engagement
Summary
All speakers agree that inclusive participation from diverse stakeholders, particularly at the community level, is essential for creating legitimate and trustworthy AI governance
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Internet governance
Data governance and sovereignty are fundamental for trusted AI
Speakers
– Rafał Rosiński
– Pramod
– Odes
Arguments
Critical infrastructure protection requires trustworthy AI and national LLMs like Poland’s Bielik model
Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage
AI datasets often come from global north, requiring contextualization for global south deployment
Summary
Speakers agree that true data sovereignty and proper data governance are essential for building trusted AI systems, requiring more than just local data storage but full control over infrastructure and contextual relevance
Topics
Artificial intelligence | Data governance | Building confidence and security in the use of ICTs
Similar viewpoints
Both speakers from the private sector observe that despite technological readiness, business adoption faces practical challenges including unclear ROI and preference for trusted local solutions when regulatory uncertainty exists
Speakers
– Pramod
– Mariusz Kura
Arguments
Technology advancement outpaces monetizable use cases, creating ROI challenges
Medium-sized enterprises prefer trusted local providers when uncertain about compliance
Topics
Artificial intelligence | The digital economy | The enabling environment for digital development
Both speakers emphasize the critical need for better communication and awareness-building, whether about available standards or AI benefits, particularly for non-technical audiences
Speakers
– Atsuko Okuda
– Edyta Gorzon
Arguments
Awareness and capacity gaps exist in understanding available standards and building blocks
Simple communication and practical examples are essential for non-technical end users
Topics
Artificial intelligence | Capacity development | Closing all digital divides
Both speakers advocate for proactive inclusion of communities in AI development processes, emphasizing the need to consider broader populations beyond the most accessible markets
Speakers
– Chengetai Masango
– Odes
Arguments
Affected communities should have input before AI deployment, not after implementation
Local value creation and linguistic diversity must be considered beyond the first 20% of markets
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Unexpected consensus
Regulation as enabler rather than barrier for international business
Speakers
– J.J. Singh
– Mariusz Kura
Arguments
EU AI Act provides necessary guidelines despite compliance challenges, with sandbox solutions for businesses
AI compliance tools help organizations navigate regulatory requirements and choose cost-effective solutions
Explanation
Unexpectedly, both business representatives view regulation positively as providing necessary guidance and enabling international trade, contrary to the common view that regulation hinders business. They see clear regulatory frameworks as facilitating rather than impeding cross-border AI business
Topics
Artificial intelligence | The digital economy | The enabling environment for digital development
Technology readiness is not the limiting factor for AI adoption
Speakers
– Pramod
– Atsuko Okuda
– Edyta Gorzon
Arguments
Technology advancement outpaces monetizable use cases, creating ROI challenges
Awareness and capacity gaps exist in understanding available standards and building blocks
Simple communication and practical examples are essential for non-technical end users
Explanation
There is unexpected consensus across technical and business perspectives that technology advancement is ahead of practical implementation capabilities. This challenges the common assumption that technical limitations are the primary barrier to AI adoption
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Overall assessment
Summary
The discussion reveals strong consensus on several key areas: the primacy of human factors in AI adoption challenges, the necessity of standards and regulatory frameworks for building trust, the importance of multi-stakeholder and community-driven governance, and the fundamental role of data sovereignty. There is also unexpected agreement that regulation can enable rather than hinder international business, and that technology readiness exceeds implementation capabilities.
Consensus level
High level of consensus across diverse stakeholders (government, international organizations, private sector, civil society) on fundamental principles of AI governance, with practical alignment on implementation challenges. This suggests a mature understanding of AI governance issues and potential for coordinated action on building trusted AI systems globally.
Differences
Different viewpoints
Approach to AI regulation and compliance
Speakers
– J.J. Singh
– Mariusz Kura
Arguments
EU AI Act provides necessary guidelines despite compliance challenges, with sandbox solutions for businesses
Medium-sized enterprises prefer trusted local providers when uncertain about compliance
Summary
Singh views the EU AI Act as beneficial despite compliance challenges and emphasizes the need for regulation, while Kura observes that regulatory uncertainty causes businesses to retreat to local providers, suggesting that regulations may hinder international cooperation.
Topics
Artificial intelligence | The enabling environment for digital development | The digital economy
Focus of AI communication to users
Speakers
– Edyta Gorzon
– Pramod
Arguments
Quality improvement messaging works better than productivity-focused communication
Technology advancement outpaces monetizable use cases, creating ROI challenges
Summary
Gorzon emphasizes avoiding productivity messaging, which can stress users, in favour of quality-improvement framing, while Pramod concentrates on the business challenge of finding monetizable use cases and ROI, representing different priorities in AI adoption.
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Primary barriers to AI implementation
Speakers
– Atsuko Okuda
– Pramod
Arguments
Awareness and capacity gaps exist in understanding available standards and building blocks
Data silos and lack of AI-ready data governance prevent 80% of pilots from reaching production
Summary
Okuda identifies awareness and capacity gaps around standards as the main implementation challenge, while Pramod points to data governance and organizational alignment issues as the primary barriers, representing different perspectives on what prevents successful AI deployment
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
Unexpected differences
Role of productivity messaging in AI adoption
Speakers
– Edyta Gorzon
– General discussion context
Arguments
Quality improvement messaging works better than productivity-focused communication
Explanation
Gorzon’s strong opposition to productivity-focused AI messaging was unexpected in a business and policy context where productivity gains are typically seen as key benefits. Her emphasis on mental health concerns and user stress represents an unusual perspective in AI governance discussions
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Scope of data sovereignty requirements
Speakers
– Pramod
– Other speakers
Arguments
Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage
Explanation
Pramod’s expansive definition of data sovereignty going beyond local storage to include jurisdictional law protection and infrastructure control was more comprehensive than typically discussed, suggesting deeper concerns about foreign government access to AI systems
Topics
Artificial intelligence | Data governance | Building confidence and security in the use of ICTs
Overall assessment
Summary
The discussion revealed relatively low levels of direct disagreement, with most speakers focusing on different aspects of AI governance rather than opposing viewpoints. Main areas of disagreement centered on regulatory approaches, implementation barriers, and user communication strategies
Disagreement level
Low to moderate disagreement level. The speakers generally shared common goals of trustworthy AI deployment but emphasized different pathways and priorities. This suggests a maturing field where stakeholders are converging on objectives while still working out implementation details. The implications are positive for AI governance cooperation, as fundamental alignment exists despite tactical differences
Partial agreements
Both agree on the importance of national control and sovereignty in AI systems, but Rosiński focuses on national LLMs as the solution while Pramod emphasizes broader infrastructure control including jurisdictional and encryption key management
Speakers
– Rafał Rosiński
– Pramod
Arguments
Critical infrastructure protection requires trustworthy AI and national LLMs like Poland’s Bielik model
Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Both advocate for inclusive participation in AI governance, but Masango focuses on formal multi-stakeholder processes at the policy level while Odes emphasizes grassroots community involvement and addressing linguistic diversity in implementation
Speakers
– Chengetai Masango
– Odes
Arguments
Inclusive participation of all stakeholders (government, civil society, technical community, private sector) breeds legitimacy and trust
Community participation ensures linguistic diversity and contextual relevance in AI deployment
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides
Both recognize the importance of standards and compliance frameworks for AI implementation, but Okuda focuses on developing global technical standards while Kura emphasizes practical compliance tools for businesses to navigate existing regulations
Speakers
– Atsuko Okuda
– Mariusz Kura
Arguments
ITU has 200+ approved AI standards with 200 more in the pipeline, covering data formats, APIs, and protocols
AI compliance tools help organizations navigate regulatory requirements and choose cost-effective solutions
Topics
Artificial intelligence | The enabling environment for digital development | The digital economy
Takeaways
Key takeaways
Trust in AI systems requires a multi-layered approach encompassing technical infrastructure, governance frameworks, and human factors
National AI sovereignty involves developing domestic capabilities (like Poland’s Bielik LLM) while maintaining data control and jurisdictional independence
Global AI standards are essential for interoperability, with ITU providing 200+ approved standards and 200 more in development covering data formats, APIs, and protocols
Multi-stakeholder governance involving government, civil society, technical community, and private sector is crucial for building legitimacy and public trust
The biggest implementation barriers are not technological but organizational – including data silos, lack of cross-functional alignment, and inadequate governance frameworks
Human adoption challenges center on fear of replacement, communication gaps, and the need for quality-focused rather than productivity-focused messaging
Regulatory frameworks like the EU AI Act, despite compliance challenges, provide necessary guidelines that can facilitate international trade and market entry
Inclusive AI development requires contextualizing systems for local communities, addressing linguistic diversity, and ensuring participation beyond the first 20% of markets
Critical infrastructure AI deployments require three key elements: control (data sovereignty), explainability (full visibility across systems), and resilience (financial-grade uptime)
Resolutions and action items
Organizations should utilize existing ITU AI standards as building blocks rather than developing solutions from scratch
AI compliance tools should be implemented to help navigate regulatory requirements and optimize cost-effectiveness
Communication strategies for AI adoption should focus on quality improvement and specific problem-solving rather than general productivity gains
Affected communities should be included in AI decision-making processes before deployment, not after implementation
Independent oversight bodies should be established that include civil society and technical experts alongside regulators and industry
Investment decisions should involve top-level decision-makers who clearly understand the specific problems AI is meant to solve
Unresolved issues
How to effectively bridge the gap between AI pilots (which often succeed) and production deployment (where 80% fail due to data and organizational issues)
How to balance the need for AI regulation with maintaining innovation and competitiveness across different jurisdictions
How to address the capacity and awareness gaps in understanding and implementing available AI standards
How to manage the psychological impact of AI adoption on workers who fear replacement while maintaining productivity goals
How to ensure smaller nations and the Global South can participate meaningfully in AI development rather than just consuming solutions from the Global North
How to create sustainable business models and clear ROI frameworks for AI implementations beyond pilot projects
How to harmonize different national approaches to AI governance while maintaining sovereignty and local context
Suggested compromises
Providing EU regulatory sandboxes for AI compliance to balance strict guidelines with business flexibility
Using AI compliance tools to help organizations navigate between regulatory requirements and cost-effectiveness
Focusing communication on quality improvement rather than productivity to address worker concerns while achieving business goals
Developing national LLMs through cooperation between academia and private sector to balance sovereignty with resource efficiency
Implementing phased approaches where human oversight is maintained during initial AI deployment phases
Creating standardized frameworks that allow for local customization and contextual adaptation
Thought provoking comments
Trust is built locally so these discussions should not just be happening at a global level and then trickle down. Local communities should be able to contribute in some manner and this process should be a cycle. So the feedback loop should be down but also up.
Speaker
Chengetai Masango
Reason
This comment challenges the traditional top-down approach to AI governance by emphasizing bidirectional communication between global and local levels. It introduces the concept of cyclical feedback loops rather than linear implementation, which is a sophisticated understanding of governance dynamics.
Impact
This comment directly influenced the moderator’s next question to Odes about community-driven ecosystems, shifting the discussion from abstract global standards to concrete local implementation. It established a framework that other speakers built upon throughout the session.
If you think about linguistic diversity that is there in many of the communities… if you build such a product, or an AI solution and it’s in language that only 20%, 50% of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, which is the citizens.
Speaker
Odes
Reason
This comment provides a concrete, measurable example of how AI exclusion occurs, moving beyond abstract discussions of inclusivity to specific barriers. It quantifies the trust problem and connects technical decisions to democratic legitimacy.
Impact
This linguistic diversity perspective became a recurring theme, with Odes returning to it in the final question. It grounded the abstract concept of ‘inclusive AI’ in tangible, relatable terms that other participants could build upon.
Actually, this is not the first session I’m talking about standards. This is actually third during the summit. But I am not sure, unless you are the standardization person, you don’t normally think about, okay, there are building blocks available, right, that I can start building something based on the building blocks.
Speaker
Atsuko Okuda
Reason
This meta-observation about the discussion itself reveals a critical gap between technical solutions and awareness. It’s insightful because it acknowledges that having 500 AI standards means nothing if people don’t know they exist or how to use them.
Impact
This comment shifted the conversation from ‘what standards exist’ to ‘how do we make people aware of and able to use standards.’ It introduced the concept of awareness as a fundamental implementation barrier.
I don’t want to be more productive anymore, right? I don’t want to do faster meetings. I don’t want to do faster notes, right? It’s nice. But in the same time, my brain and the number of different impulses I’m getting from outside is simply too high. Our brains are not capable to manage that in the right way.
Speaker
Edyta Gorzon
Reason
This comment challenges the fundamental assumption that productivity gains from AI are inherently desirable. It introduces psychological and cognitive limitations as barriers to AI adoption, moving beyond technical and regulatory concerns to human wellbeing.
Impact
This was a turning point that reframed AI adoption from a technical challenge to a human psychology challenge. It influenced the moderator’s observation that ‘human factor is always at the end and behind everything’ and shifted subsequent discussions to focus more on user experience and change management.
So we’ve seen, especially in India, we’ve seen many, many, many pilots. And almost 80% of those pilots don’t make it to production. And the key reason is on the data. Data is siloed, data is not ready for AI scale.
Speaker
Pramod
Reason
This statistic provides concrete evidence of the implementation gap between AI pilots and production systems. It challenges the narrative that technology is the primary barrier and identifies data governance as the critical bottleneck.
Impact
This 80% failure rate became a focal point that validated other speakers’ concerns about implementation challenges. It shifted the discussion from theoretical frameworks to practical deployment realities and influenced Mariusz’s agreement about business-side barriers.
We want to change the way how they are acting with AI… majority of users of AI are end users. They are not people who are taking part in conferences like this one. They are not that fluent with technology, but in the same time, we expect from them to be fluent and to change the way how they act.
Speaker
Edyta Gorzon
Reason
This comment highlights a fundamental disconnect between AI policy discussions and actual users. It’s insightful because it points out that the people making AI decisions are not representative of the people who must use AI systems.
Impact
This observation influenced the entire latter half of the discussion, with multiple speakers acknowledging the human factor as central to AI implementation. It established user experience as equally important to technical and regulatory considerations.
Overall assessment
These key comments fundamentally shifted the discussion from a technical and regulatory focus to a human-centered perspective on AI implementation. The conversation evolved from abstract policy frameworks to concrete implementation barriers, with speakers increasingly acknowledging that the primary challenges are not technological but human, organizational, and social. The comments created a progression from global standards to local implementation, from technical capabilities to user adoption, and from productivity promises to human wellbeing concerns. This evolution made the discussion more nuanced and practical, moving beyond theoretical frameworks to address real-world deployment challenges.
Follow-up questions
How can we train national data effectively for AI systems?
Speaker
Rafał Rosiński
Explanation
This is crucial for developing sovereign AI capabilities and ensuring AI systems work effectively with local data while maintaining data sovereignty
How can we effectively fight against deep fakes and false information using AI in public sector?
Speaker
Rafał Rosiński
Explanation
This addresses a critical security and trust challenge when implementing AI in government services and protecting citizens from misinformation
How do we determine when AI is the right solution versus other technologies for specific problems?
Speaker
Atsuko Okuda
Explanation
This relates to the capacity challenge of properly articulating problems and choosing appropriate technological solutions rather than defaulting to AI
How can we better prepare and govern data for AI scale implementation?
Speaker
Pramod
Explanation
This addresses the primary barrier to moving AI pilots to production, as 80% fail due to data readiness issues including silos and lack of governance
How can organizations achieve better cross-functional alignment for AI implementation?
Speaker
Pramod
Explanation
AI cuts across multiple organizational functions and lack of alignment between technology, legal, and IT teams significantly slows down adoption
How do we find the right balance between AI automation and human oversight in critical decisions?
Speaker
Pramod
Explanation
Organizations need guidance on determining appropriate levels of human intervention required for different AI use cases, especially in critical infrastructure
How should organizations communicate AI changes to avoid productivity pressure and mental health impacts?
Speaker
Edyta Gorzon
Explanation
Users are experiencing cognitive overload and fear of replacement, requiring careful communication strategies that focus on work quality rather than just productivity gains
How can small nations participate in building and deploying AI for their own interests rather than just importing solutions?
Speaker
Odes
Explanation
This addresses the need for local value creation and ensuring that AI development benefits extend beyond just consumption to actual participation in the AI economy
How can we better contextualize AI systems for deployment in the Global South when most datasets come from the Global North?
Speaker
Odes
Explanation
This is critical for ensuring AI systems are culturally and contextually appropriate for diverse global markets and populations
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.