AI as critical infrastructure for continuity in public services
20 Feb 2026 10:00h - 11:00h
Summary
The panel opened with Lidia asking Minister Rafał Rosiński about the lessons Poland has learned while embedding AI into national systems, emphasizing the need to protect critical infrastructure such as energy, water and data [1-3][9-16]. Rosiński highlighted that trustworthy AI, supported by domestic large-language models like Bielik, is central to securing both public and private sector operations and fostering competitiveness [20-24].
Atsuko Okuda of the ITU explained that over 200 AI standards are already approved, covering data formats, standardized APIs and communication protocols, which together lower investment costs and enhance cross-border interoperability [36-48]. She added that harmonized terminology, reference architectures, lifecycle definitions and conformance testing further enable seamless collaboration among countries [50-57].
Chengetai Masango argued that inclusive, multi-stakeholder participation, bringing together government, civil society, technical experts and industry, creates legitimacy and trust, especially when processes are transparent and accountable [63-70]. Odes reinforced this view by showing how community-driven ecosystems, attentive to linguistic diversity and feedback loops, ensure AI solutions are relevant and trusted at the local level [78-89].
J.J. Singh noted that clear regulatory frameworks such as the EU AI Act, complemented by sandbox environments, can actually facilitate international trade by giving companies a predictable rulebook to follow [96-108]. Mariusz Kura described how his firm scales AI across regions through distributed development centers, but stressed that rapidly changing compliance requirements demand dedicated tools such as an AI compliance suite to navigate standards and cost-effectiveness [115-129][130-138].
Pramod emphasized that trustworthy AI rests on three pillars: control over data and compute (including sovereignty), explainability of decisions, and resilience of services, especially for critical sectors like healthcare [145-165][166-176]. He and other speakers identified the main implementation bottlenecks as fragmented data, lack of governance, legal silos and lingering human mistrust, which together slow the transition from pilots to production [227-244]. Mariusz agreed that business-side uncertainty and the need for recognized standards further impede adoption, particularly for medium-sized enterprises [247-252].
Edyta Gorzon highlighted that users often fear replacement and are overwhelmed by rapid AI change, so clear, modest communication focusing on quality improvements rather than productivity gains is essential to overcome the human barrier [255-272]. The discussion concluded that building long-term confidence in AI requires a mix of inclusive participation, independent oversight, and clear strategic intent from senior decision-makers, ensuring both cross-border investment and societal acceptance [277-290][308-311].
Keypoints
Major discussion points
– Trustworthy AI and national digital sovereignty – The Polish minister highlighted that critical infrastructure (energy, water, health) must be protected and that AI security is linked to cyber-security and trustworthy AI, especially through national large-language models such as “Bielik” to keep data and services under Polish control [9-16][20-23].
– Global standards as the backbone of interoperability and trust – The ITU representative explained that AI standards (over 200 approved, 200 more in pipeline) enable systems from different countries to communicate via shared data formats, standardized APIs and protocols, and also provide harmonised terminology, reference architectures and conformance testing [35-48][43-48].
– Inclusive, multi-stakeholder governance builds legitimacy and public confidence – Both the IGF representative and the community-focused speaker stressed that involving government, civil society, technical experts and the private sector in policy design, with transparent consultations, independent oversight and feedback loops, creates legitimacy and trust in AI deployments [63-70][75-88].
– Regulatory alignment influences cross-border trade and investment – The chamber of commerce delegate argued that clear AI regulatory frameworks (e.g., the EU AI Act) act as a “playbook” that can facilitate Indian companies’ entry into European markets, while sandbox programmes and harmonised rules reduce compliance friction and support international AI commerce [96-108][308-311].
– Practical implementation hurdles are largely data-, governance- and human-factor related – Participants pointed out that data silos, missing data-governance, rapid regulatory change, and users’ fear of replacement are the biggest blockers to scaling AI; solutions such as compliance suites, clear accountability, and careful change-management communication are needed [229-237][242-244][255-272][247-252].
Overall purpose / goal of the discussion
The panel was convened to explore how governments, international bodies, industry and civil society can jointly shape trustworthy AI ecosystems, covering policy, standards, regulatory alignment, and on-the-ground implementation, so that AI can be deployed safely, inclusively, and economically across national borders.
Overall tone and its evolution
The conversation began with a constructive and forward-looking tone, emphasizing national initiatives and the promise of AI for public services. As the dialogue progressed, the tone shifted to a pragmatic and problem-solving focus, acknowledging concrete challenges such as standards gaps, data governance, and human resistance. By the end, the tone became balanced and solution-oriented, summarising key actions (inclusive governance, clear regulations, robust standards) needed to sustain long-term confidence in AI.
Speakers
– Lidia
– Role/Title: Moderator / Facilitator of the panel (co-founder and president of the Foundation Polistratos Institute)
– Areas of Expertise: Digital policy, AI governance, multi-stakeholder dialogue
– Sources: [S12]
– Rafał Rosiński
– Role/Title: Minister (Poland)
– Areas of Expertise: Digital governance, AI implementation in critical infrastructure, national AI strategy
– Sources: [S3]
– Atsuko Okuda
– Role/Title: ITU representative (International Telecommunication Union) – works on AI standardisation
– Areas of Expertise: AI standards, interoperability, global digital standards development
– Odes
– Role/Title: Panel speaker on community-driven digital ecosystems
– Areas of Expertise: Community participation in AI deployment, inclusive AI design, linguistic diversity
– Sources: (none beyond transcript)
– J.J. Singh
– Role/Title: Representative of the Polish Chamber of Commerce (participating in the discussion on regulatory alignment)
– Areas of Expertise: International trade, AI regulation, EU-India AI collaboration
– Sources: [S2]
– Mariusz Kura
– Role/Title: Representative of Bilenium (AI solutions provider)
– Areas of Expertise: AI scaling across regions, regulatory compliance, AI compliance suite development
– Sources: [S13]
– Edyta Gorzon
– Role/Title: AI adoption lead (responsible for driving AI adoption within her organisation)
– Areas of Expertise: Change management, user adoption of AI, communication of AI benefits
– Sources: [S14]
– Pramod
– Role/Title: Co-founder & Chief Architect, NFH India (AI Impact Summit)
– Areas of Expertise: Trusted AI infrastructure, data sovereignty, secure compute, resilience of digital backbone
– Chengetai Masango
– Role/Title: Head of Office, UN Secretariat for the IGF (Internet Governance Forum)
– Areas of Expertise: Global AI governance, multi-stakeholder participation, public trust in AI deployment
– Sources: [S18], [S19], [S20]
Additional speakers:
– None identified beyond the listed speakers.
Lidia opened the panel by asking Minister Rafał Rosiński what lessons Poland had learned while embedding artificial intelligence into its national systems and how these lessons relate to digital governance, sustainability and resilience [1-4]. Rosiński answered that protecting critical infrastructure – energy, water and health-care – is the core focus of Poland’s AI strategy and that trustworthy AI is essential for keeping these services running [9-12][15-16]. He linked cyber-security and digital-skill development to trustworthy AI and highlighted Poland’s home-grown large-language models, the public “Bielik” LLM and a second version co-developed with academia and the private sector, as tools that keep data and services under Polish control while enhancing competitiveness [20-24].
Turning to the international dimension, Lidia thanked the minister and asked Atsuko Okuda of the International Telecommunication Union (ITU) how global standards can ensure interoperability and resilience of AI systems across regions [28-30]. Okuda explained that the ITU has approved more than 200 AI standards, with another 200 in the pipeline, totalling roughly 500 standards and drafts [39-41]. She described the three technical building blocks for interoperability – a shared data format, standardized APIs and common communication protocols – and noted that the ITU’s portfolio also covers AI for network automation, multimedia processing, machine-to-machine data sharing, as well as harmonised terminology, vocabularies, reference architectures, lifecycle, testing and conformance [43-57].
Lidia then asked Chengetai Masango how multi-stakeholder cooperation translates into real public trust in AI governance [60-62][63-70]. Masango argued that inclusivity breeds legitimacy: when government, civil society, the technical community and industry all participate, policies gain greater buy-in, transparency and accountability. He cited the Internet Governance Forum as a model of multi-stakeholder dialogue that now also addresses AI, misinformation and disinformation, emphasizing local feedback loops and accountability mechanisms as anchors of trust [63-70].
Next, Lidia invited Odes to discuss how community-driven digital ecosystems can contribute to local trust [73-74][75-89]. Odes stressed that linguistic diversity must be respected so that AI solutions are understandable to the whole population; otherwise trust erodes. He added that community participation throughout the innovation cycle ensures AI reflects local realities and that continuous feedback loops keep services relevant and adopted over time [82-89].
Lidia’s question on the economic dimension was directed to J.J. Singh of the Polish Chamber of Commerce [92-95][96-108]. Singh explained that the EU AI Act, despite being stringent, provides a clear “playbook” that helps Indian firms prepare for European deployment, and that sandbox programmes in France have already enabled ten Indian AI companies to accelerate under EU oversight. He argued that regulation, when paired with practical tools, is necessary to prevent misuse of AI for policing or profit-driven exploitation [99-108]. Lidia noted that trust underpins economic confidence and facilitates cross-border AI collaboration [92-95].
Addressing the challenge of scaling AI across regions while managing regulatory divergence, Lidia turned to Mariusz Kura [113-119][120-129][130-138]. Kura described a distributed development model in which global offices allow a solution to be built in India one day and tested in Europe the next, enabling rapid fixes. He highlighted the difficulty of keeping up with fast-changing compliance requirements and presented Bilenium’s AI compliance suite – a complex tool that guides organisations through government regulations, cost-effectiveness and licensing choices, thereby helping them navigate divergent standards [115-138].
Trust pillar – Across the discussion, speakers converged on what constitutes trustworthy AI. Rosiński reiterated that trustworthy AI for critical infrastructure requires national-level large-language models and robust cyber-security [9-12][15-16][20-24]. Masango emphasized that inclusive, multi-stakeholder processes generate legitimacy and transparency [63-70]. Odes added that community-driven ecosystems, especially those that respect linguistic diversity, are essential for local acceptance [82-89]. Pramod distilled trust into three questions: who controls the data and compute (data sovereignty and jurisdiction), can the system’s decisions be explained across all layers, and is the service resilient enough to stay up when needed [161-176]. Edyta Gorzon highlighted the human factor, arguing that clear, simple communication that frames AI as a quality-enhancing tool – rather than a productivity-only promise – mitigates fear of replacement and cognitive overload [181-199]. Finally, J.J. Singh linked regulation to trust, noting that a clear regulatory “playbook” builds confidence for cross-border AI investments [99-108].
In the second round of reflections, Lidia asked Minister Rosiński about the most complex operational challenge governments face when deploying AI in public services [198-206][201-206]. He identified the need to train national data, manage generative AI responsibly and combat deep-fakes as central to protecting citizens and ensuring wise AI use [202-206].
Lidia then probed Atsuko Okuda on the biggest implementation gap today [207-214][211-215][210-218][219-222]. Okuda pointed to an awareness and capacity gap: many participants are unaware of existing standards, and those who know them often lack the ability to articulate problems and translate them into operational projects [211-218][219-222]. The awareness and capacity gap identified by ITU complements the data-silo and standards-uncertainty challenges highlighted later by Pramod and Mariusz [211-218][229-252].
Pramod and Mariusz Kura discussed what most often slows down AI projects. Pramod highlighted fragmented, siloed data, missing governance and cross-functional misalignment as primary blockers, noting that 80% of pilots in India fail to reach production because the data are not ready for scale, and that legal constraints and a lack of trust further delay adoption [229-244][242-244]. Mariusz echoed this, adding that medium-sized enterprises hesitate to adopt foreign AI solutions without recognised standards, reinforcing the need for trusted, widely accepted standards to reduce business-side uncertainty [247-252].
Addressing the human barrier, Lidia asked Edyta Gorzon what the most common obstacle is [253-272]. Gorzon replied that users worry about being replaced and feel overwhelmed by rapid AI change; organisations must therefore communicate carefully, focusing on quality improvements and providing reassurance rather than promising higher productivity [253-272].
Lidia sought a practical step to strengthen public trust, turning again to Chengetai Masango [276-290][277-287][288-290]. He reiterated that inclusive participation before deployment is the most important action, complemented by independent oversight bodies that bring together civil society, technical experts and regulators to review AI systems proactively [277-290].
Finally, Lidia asked Odes how AI can remain inclusive in real-world implementation [291-304][294-304][295-304]. Odes identified three key factors: ensuring the target community is accounted for by contextualising data sets (especially for the Global South), fostering local value creation so small nations can participate in AI development, and respecting linguistic diversity so that the majority of users – not just the first 20% of the market – can benefit [295-304][298-304][300-304].
For the last question, Lidia invited J.J. Singh to summarise what creates long-term confidence in cross-border AI investments [306-311][308-311]. Singh answered succinctly that confidence stems from the involvement of senior decision-makers who understand the purpose of the investment and can align resources accordingly [308-311].
The moderator thanked all participants and closed the discussion, signalling the end of the panel [312-313].
Overall, the panel converged on four core themes: (1) trust is indispensable for AI in critical infrastructure and must be built on control, explainability and resilience; (2) global standards – shared data formats, standardized APIs, communication protocols and harmonised terminology – lower costs and underpin interoperability; (3) inclusive, multi-stakeholder governance and community-driven ecosystems generate legitimacy, transparency and local relevance; and (4) robust data governance, capacity-building and clear regulatory guidance are essential to overcome the main implementation bottlenecks. Speakers highlighted divergent views on the primary barrier – data silos versus regulatory awareness versus business-side hesitation – and on whether regulation is chiefly an enabler or a hurdle, suggesting that coordinated policy actions addressing standards awareness, data sovereignty and both community- and market-oriented trust mechanisms will be needed to realise trustworthy, inclusive AI at national and cross-border scales.
I direct my first question to Minister Rosiński. Minister, Poland has been implementing and shaping digital governance and also investing in the sustainability and resilience of national systems. What are the lessons learned, and which lessons are the most relevant when we talk about the implementation of AI in national systems? Maybe the other one. Yeah.
Thank you very much. Thank you. Critical infrastructure, like the energy sector, water supply, health care, is the main point for us, and I think it’s the crucial point in every country. We cannot imagine how we can run the business if we have… if we have no energy, no water, and our data is not protected enough. And we support also local government. We create local… through cyber security. And that is connected with digital skills, especially cyber hygiene in this area. And cyber security is linked with AI, with trustworthy AI. That is also the important thing: if we use AI, especially national LLMs, we can use it for the security of our business.
And how can we train the national data? That’s why in Poland we have also built Polish LLMs. The first one is Bielik, which is a public LLM, and the second one is developed in cooperation with academia and with the private sector, and we support it also. That can also allow Polish business to be competitive. That is the whole picture: if we see this whole ecosystem, we can also exchange our ideas and share our knowledge with other countries. That is the way, the proper way
to be safe and to use trustworthy AI.
Thank you very much, Minister, for those beautiful examples of language models from Poland and their role in the Polish ecosystem, regarding both the public sector and the private sector, and for framing AI as a matter of public responsibility and resilience. And now let’s move to the international level and have a look at the global dimension. I would like to ask a question to Atsuko Okuda. How can global standards ensure interoperability and resilience of AI systems across regions?
Thank you very much. First of all, good afternoon to all of you. And I would like to thank the organizers for inviting ITU, the International Telecommunication Union. As some of you may know, ITU is the oldest UN agency specialized in digital technology, and we have standardization work, including on the topic of AI. Now, what do AI standards do for all of us? Number one, they enhance interoperability, which means that a system or solution developed in India can talk to a system in Poland, as His Excellency mentioned, and vice versa, and that will lower the investment cost and increase efficiency. So what are those standards that could be useful for interoperability, especially within the country as well as within the region or globally?
So one concrete standard… Oh, by the way, just to give you the magnitude, ITU has over 200 already approved AI standards, and 200 more are in the pipeline. So in total, we have about 500 standards in place as well as in the pipeline. So you can see there are many different standards which are available for everyone. So what are those standards? Number one, for interoperability, we believe that data, the interface, and protocol are critical. For example, we have a shared data format that we can all use. Otherwise, how can I share my data with you with a different data format? Two, standardized API so that system-to-system communication will be smooth. And three, of course, communication protocol.
Now, based on these standards, we have more, how can I say, comprehensive standards. For example, AI for network automation, multimedia AI processing standards, as well as machine-to-machine data sharing frameworks, for example. And second, we also have harmonized terminology, vocabulary, and reference architectures. Because when I talk to anyone, not only you, about some aspect of AI, how do we know that we understand the same thing? So this taxonomy, vocabulary, and the reference architecture are critical for interoperability and for us to be able to develop and exchange data or develop the algorithm together. So we have an AI model lifecycle definition, so I know what you are referring to, and you know what I’m referring to.
Three, we have performance- and testing-related standards, so that we can test and validate, and we also have conformance specifications that we use as a standard to validate that what you are sharing is what I can validate. So I hope these standards are useful for enhancing interoperability as well as collaboration within the country and across the regions. Thank you.
Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Chengetai, how does multi-stakeholder cooperation translate into real public trust in AI governance?
Thank you very much, and thank you very much for the invitation. I’d like also to thank the organisers, Bilenium and Poland of course, for inviting me. Now, for your question: for any process, I think, inclusivity breeds legitimacy and thereby trust. So if you have all the stakeholders who are affected by whatever policy that is – you must have government, civil society, the technical community and the private sector all talking to each other and giving their points of view from their perspectives – I think then you can arrive at policies that have greater buy-in. Once people are involved in the process, they’re more likely to adopt that process. And secondly, the transparency of the process also matters: people need to know how these decisions came about and also what was decided, and this can be done with open consultations, public comment periods and accessible documentation that builds confidence.
This is basically the same model that has built the internet into what it is now. You have the public comment period, etc., and then these are adopted. The IGF as well shows that this works. The Internet Governance Forum is a multi-stakeholder dialogue, and within our framework we discuss AI governance as well, and a lot of other things: misinformation, disinformation, etc. And this approach can anchor AI governance in legitimacy. Trust as well is built locally, so these discussions should not just be happening at a global level and then trickle down. Local communities should be able to contribute in some manner, and this process should be a cycle. So the feedback loop should be down but also up.
So there’s a resonance going on there. And then I think lastly, accountability mechanisms are also very, very important. In multi-stakeholder cooperation without clear accountability mechanisms, people will not trust it, because they need to know, if they have an issue, where they can go and express that concern and that it will be dealt with in some manner or function. Thank you.
Thank you very much. I couldn’t agree more. Trust is also built locally, and that’s why I would like to direct my next question to Odes. How can community-driven digital ecosystems contribute to building trust in AI locally?
Thank you. Good afternoon, everyone. I say that modestly, and thank you for your attention. Thank you for the invitation to join this panel. To give context to community participation, both at the innovation level and at the policy level, I would like to start where Chengetai just finished, which is that the community is a big stakeholder and a big participant in the multi-stakeholder framework. If you think about deploying AI solutions, especially for public services, then you realize that inclusivity is what builds trust. The ability to deploy AI that can be consumed by every citizen is at the core of the trust between the users and the providers of the services. So taking that community into account, making sure that it’s included.
I’ll give an example. If you think about the linguistic diversity that is there in many of the communities, in many of the countries of this world, you realize that if you build such a product or an AI solution and it’s in a language that only 20% or 50% of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, which is the citizens. The second part is that in the innovation cycle as well, we’ve seen again and again AI being deployed that doesn’t reflect the realities of certain communities, and you can think about that linguistically, contextually, or in the different forms and shapes it takes in different domains.
So the participation of the community in that, in ensuring that the innovation and the policy level align with the needs and the realities of those particular communities, is very important. To finish off, I think that cities, communities and citizens are also a big part of how AI systems are improved, because once you deploy such a system and you don’t have a feedback loop, you realize that those particular technologies only work for some time and adoption goes down after a while. So I think those three things are very key in building trust. First, inclusivity and being part of it. Second, participation in the innovation as well. And lastly, the feedback mechanism for how those services are being consumed and used and what can be improved.
Thank you very much. Trust can also influence economic confidence and cross-border collaboration. That’s why I would like to direct my next question to JJ. Does regulatory alignment directly influence international trade? What is your perspective and observation? Could you share experience from the Polish Chamber of Commerce?
Well, I will just share the experience from the perspective of Poland in the EU and India. Normally, everyone says that a lot of regulations always, you know, dishearten business and investments. But I think in this particular case, when it comes to AI, we need a guidebook, because without that, everything can go haywire. So if you look at the regulation, the EU AI Act, which has been implemented in 2026, in a way it creates a kind of issue for investors. But on the other hand, if you have clear guidelines, it’s always very good, in view of the India–EU FTA, that the Indian companies will be ready
for deployment of AI algorithms and other things within Europe. Now, let’s also take the example of how the EU is responding: even businesses are saying that, well, the regulations are very tough, the compliance is very tough, but the EU is also doing its part to make it easier for businesses. I can use the example here from 2025, where in France there are 10 AI companies from India which are actually part of the accelerator program, and the EU is also ready to give a sandbox solution for all the regulations. So all in all, my perspective is that you need a kind of control, especially on generative AI. So the rulebook which the EU has given will be, I would say, a playbook for all the AI companies involved, and I think that India should be involved.
India should take advantage of that, because if… if they are already prepared to adhere to the rules, then I think the entry will be easier for the companies. So I definitely support the regulation, because in this particular matter of AI we need regulation. Because if you look at the other countries, I will not name them: one is using it for policing its own people, and the second is using it only for making money. So yes, it’s good, but with sense.
Thank you very much. In our discussion, we also have three representatives of the private sector who know the practical aspects very well, because they have to deal with all these challenges on a daily basis. So I would like to start with Mariusz Kura. Mariusz, how do you scale AI solutions across regions while managing regulatory divergence?
Thank you, Lidia, and good afternoon, everyone. Good afternoon. Distributed software development for international IT companies is not new. We started practicing this at Bilenium 10 years back, when together we were opening the office, the delivery center, in Pune, Maharashtra, here in India. And a simple practice to scale up and be fast is to have exactly these global offices: our development team can build some solution, let’s say, in one day and deploy it, and the next day the business in Europe can verify if it’s working as expected. If not, then our development team in India can fix it even on the same day. So that’s one way we have been scaling up so far.
But the challenge nowadays is exactly how to scale up and follow all the regulations, and how to work for different regions, for different countries, where, like for the public sector, we have a lot of rules. And hopefully… hopefully from ITU we have as well two hundred more certifications coming. So, yeah, that is the way we can standardize it: standardization. So AI engineers and AI solution providers in India need to learn and need to be compliant with all those standards. And it’s very difficult nowadays, because it’s changing so fast, almost every week. And how exactly to follow that? At Bilenium, we have recently developed one dedicated solution, which is the AI compliance suite.
And this tool is quite complex. It covers not only government regulations and the compliance area, but also helps organizations to use the right AI tools. Nowadays enterprises are using plenty of different tools – in a while, Edyta will be talking about Copilot – and our solution helps organizations navigate users to the right solution. And what does the right solution mean? For example, it could also be from a cost-effectiveness perspective: should we use this one and utilize the tokens from that provider, or maybe another provider has a better licensing practice and policy offering? So that, I believe, is what can help – that kind of solution for IT solution providers.
Thank you.
Thank you very much for a beautiful example of how AI can help manage AI. And now let’s have a look at infrastructure. I have a question to Pramod from an infrastructure standpoint. What does trusted AI require on the ground, in terms of data sovereignty, secure compute and a resilient digital backbone?
Good afternoon, everyone. Pleasure to be here. So when AI starts getting adopted into public services and critical national security deployments, the trust moves not just to the models, but from the models and data to the underlying foundation. When I say foundation: where is the model running? What compute is it running on? Do you control the data? What jurisdiction is it under? And there are, you know, the security components around it. So all in all, there are three questions that one needs to ask before you say that you fully trust AI, right?
The first question is about control. The second one is, you know, can you tell me what happened – will the AI system be able to explain what happened across each of these layers? And the third one is: is it up? The control part is like we just discussed: control, not just of the data. Data sovereignty doesn’t just mean that, you know, the data stays local. What we’ve seen our customers asking is, you know, is there any other jurisdictional law that can override, saying, hey, I need full visibility of the data, of that infrastructure, auditability and so on and so forth. So I think “Do you have the keys?”
is a key question one needs to ask. The second one is about explainability, about visibility – not just model monitoring, whether I am getting accurate data, but overall on data: who accessed it, what is the governance around it, what happened in the network. So across the whole foundation, if you don’t have full visibility, you will not be able to explain why a system took a decision, right? Because now we are talking about critical infrastructure; the impact of the decision it takes could be disastrous. The third one is, again, resilience. By resilience, we mean: can the AI stay up? Let’s say it is in healthcare, in a remote tier city, and a hospital deploys an AI diagnosis system.
A patient walking in at 2 a.m. on a Sunday morning – the system needs to be up. It needs to be resilient like any other financial system, but here the implications are huge. So AI is moving from being just a software service to AI as a foundation, where all of these elements need to come together before anyone can say “I fully trust it.” I think that’s the…
Thank you very much. And it is common knowledge that technologies are widely diffused and used only when they are trusted, and sometimes the human factor is an important barrier to AI adoption. That’s why I would like to ask Edyta, who works with users a lot: what determines whether AI is truly adopted by teams?
Excellent question. Thank you so much for that. Good afternoon, everybody. Thank you for all the comments. So we’ve been talking about infrastructure, about security, cybersecurity, about the legal aspects of AI. However, we should remember that deployment is technology, but it is the users we want to change: we want to change the way they are working with AI. From the practical perspective – because I’m responsible for driving adoption; in the past it was the topic of modern work, now we have AI – we should remember that the majority of AI users are end users. They are not people who take part in conferences like this one. They are not that fluent with technology, but at the same time we expect them to be fluent and to change the way they act.
How they work. So from my experience, it’s extremely important to communicate in the right way, in simple words and simple examples, how AI can be a powerful tool. Not because of the features – we all know that features don’t drive anything: not business, not processes, not business scenarios, whatever we have in our minds. And with AI, everybody can use it in a different way. This is the biggest challenge from the change management perspective as well, because we can have the best technology, the best model, but if the users don’t know how to use it, if they don’t know where it leads, it’s hard to expect that we’re going to succeed at scale.
Thank you very much, and thank you to all of you for sharing your views in the first round of questions. In the second round, we will turn from strategy to implementation, and I will ask all of you for a very short reflection at this level. And Minister, what is the most… complex operational challenge governments face when deploying AI in public services? What is your view?
Shortly, of course. What JJ mentioned, I talked about this too; it is very important also from the Polish perspective, and we can also see that perspective in other countries, beyond the EU. The other important thing is how we can train the data, how we can use the data, and what the future of generative AI will be. We have to use it wisely, of course. The final goal, and how it will be used, is very important, especially for the public sector and especially for our citizens; if we look at it in that way, it will be good for everyone. And of course, with the implementation of AI in the public sector, and of course when private companies also use this data, it is important to see how we can also fight against deepfakes and false information. Thank you.
Thank you very much. Atsuko, where do you see the biggest implementation gap today? Is it standards, lack of standards, skills, governance – what is it?
Thank you for this very important question. I believe that there is perhaps an awareness challenge as well as a capacity challenge, because I think that this whole discussion on standards came as a surprise to many of the participants. Actually, this is not the first session in which I’m talking about standards; it is actually the third during the summit. But unless you are a standardization person, you don’t normally think, okay, there are building blocks available, right, that I can start building something on. So we are trying to promote the importance of standardization and of using the standards, so that you don’t have to… I believe we need a lot of different capacities: the capacity to articulate the issue.
What is it that you or we want to address? Sometimes AI may or may not be the answer; some other technologies may be able to help you better. So I believe this articulation is maybe a huge opportunity and challenge as well. After you articulate, how do you plan, how do you translate that articulated issue into an operational project and initiative? I believe that is another layer of the capacity challenge. So I can see that there are many countries, companies and agencies who want to take advantage of AI, and I hope that this discussion is helpful to concretise those steps moving forward. Thank you.
Thank you very much. My next question will be directed to our technical experts, Pramod and Mariusz, and the question is: in real AI projects, what most often slows down implementation?
First, it is definitely not technology, because I think we’ve seen that technology is almost always ahead – very true over the last couple of years, with the advancements that have happened. So despite advanced technology being available, despite GPUs being available, the platforms being available, we still don’t see too many monetizable AI use cases, and that’s a big problem. Everybody is trying to figure out where my ROI is, what that use case is. And that again boils down to a few key aspects. One, the biggest friction is on data. We’ve seen, especially in India, many, many pilots, and almost 80% of those pilots don’t make it to production. And the key reason is the data.
Data is siloed, data is not ready for AI scale, and there’s no governance built around data. That’s why in POCs you use a good set of data and you show value, but then, when it comes to production, most of the time they don’t have enough data to get the value out of it. The second: AI cuts across an organization; it cuts across many functions. The technology team is saying, you know, we are ready with this, but then there are legal aspects, there is an IT guy saying, you know, I cannot allow you to do this, and so forth. So that alignment is not thought through, right, and that also slows down adoption.
So I think these are the primary factors, and then, again, the trust factor comes in. The third part is: how much do you really trust AI – how much risk comfort do you have, is a human afterthought required for every decision it makes? So I think organizations need to choose that balance, or choose the best use case where it is balanced: without requiring too much human intervention, can I deploy this? Those are the key factors that we see, especially in India, slowing down adoption.
It seems that whatever we are discussing, infrastructure or other challenges, the human factor is always at the end and behind everything. Mariusz, is your experience similar, or do you have different observations?
I totally agree with Pramod. It’s not us, the technology side, who is slowing it down. Maybe sometimes, but many times it’s on the business side, and especially for medium-sized enterprises. If they don’t know whether they can work with some solutions, or whether they can take solutions, for example, from India, they will step back and go to more trusted local providers. So I believe that the standards we are talking about will help us a lot. That’s my practice.
Okay. Edyta, what… What is the most common… human barrier from your view?
Thank you for this question. So first of all, we talk again about humans: the most important factor and, at the same time, the biggest challenge and the biggest opportunity. From my perspective, while talking with users – because today I’m the users’ voice – I can hear very often that people are wondering what’s going to be next: am I going to be replaced by AI? What’s in it for me? And we as organizations, no matter whether public or private sector, also need to find the message for how to communicate all of those changes that are coming. Another topic I’m facing while talking with users is that they basically don’t know what to expect next, because, as we have noticed, AI is another revolution, and the revolutions are coming one after another very quickly.
And when the users hear, okay, I should be more productive – I don’t want to be more productive anymore, right? I don’t want to do faster meetings. I don’t want to do faster notes. It’s nice, but at the same time the number of different impulses my brain is getting from outside is simply too high. Our brains are not capable of managing that in the right way; we’re closer to depression, and we know in which direction that goes. So how we communicate AI as part of the toolset is extremely important. So be careful what you are telling your users. Don’t tell them that they will be more productive, but maybe that the quality of their work is going to be better.
Maybe they don’t have to repeat the same tasks every day, but we must be very, very careful about what kind of wording we’re using in regard to AI adoption. Thank you.
Thank you, thank you very much. My next question will be to Chengetai, because he looks at these challenges from the global perspective and has access to data from all regions. What, in your view, would be the most important practical step to strengthen public trust in AI deployment?
Thank you very much for that question. And by the way, I totally agree with you. I think the first one is quite obvious: inclusive participation in AI decision-making, so ensuring that the affected communities or the affected individuals have input into how the systems operate before they are deployed, not after the fact. I think that’s the most important step. We shouldn’t be fixing things after the fact; we should get input before the deployment.
The second one is independent oversight: establishing review bodies that include civil society and the technical experts, so not just the regulators and industry, but a 360 approach to it. Thank you.
Thank you very much. We are approaching the end of our session, so I would like to ask Odes for a quick comment. What ensures AI remains inclusive in real-world implementation?
There are a few key factors to look at when you talk about inclusivity. I think the first is to look at who it is meant for and to ensure that they are accounted for. And this can happen in different forms. For example, when you look at the data sets that power AI models, most of the time they tend to come from, let’s say, the Global North, meaning that they won’t be very contextually aware when they’re deployed in the Global South. So there’s a need to contextualize the AI system being developed to ensure that it really responds to the users it is meant for. I think the second part of ensuring inclusivity is also ensuring local value creation.
I think we’ve seen too often the importation of AI systems, but not… the understanding of how especially small nations can participate in building and deploying AI for their interests. So I think those two things are very, very critical. And the other part is also, I guess, the linguistic perspective that I mentioned before: looking at the linguistic diversity that exists around the globe and ensuring that people are able to consume that particular technology being developed. I think when we think about AI and how it’s deployed, we tend to look at the first 20% of the market, but the remaining 80% also needs to be accounted for. So, yeah.
Thank you very much. Last question, and I will ask JJ for a very brief, one-sentence answer. What creates long-term confidence in cross-border AI investments, from your perspective?
Well, you know, I think I can simply say it’s a mix of everything. The involvement of the right people – I would rather say the people at the top, who are taking the serious investment decisions – because that’s very important. And the people who are involved should know what they want it for, because AI deployment is a big thing, but you should know what you want to solve with it. So that’s very important.
Thank you very much. It’s time to wrap up our discussion.
Minister Rafał Rosiński from Poland emphasized the critical importance of protecting national infrastructure through trustworthy AI systems. He presented Poland’s approach to developing national langu…
Event: This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovereign, but they need to ensure continuity of services and control over their data.
Event: Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, especially in critical infrastructure. This should be a continuous process to te…
Event: Sovereignty dimension focuses on control over data, models, and security measures
Event: He positioned this approach as leveraging ITU’s 160 years of experience and its global community’s commitment to collaboration. Onoe noted that “the principles we want to revive must be embedded in th…
Event: Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this ecosystem globally. He stressed that effective standards must emerge from a proc…
Event: ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustworthy AI development.’ This very simple, yet very powerful statement comes from t…
Topic: Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests this is crucial for enabling interoperability and cooperation between different p…
Event: Speakers agreed that effective governance requires multi-stakeholder approaches involving governments, civil society, private sector, and technical communities. Marjorie Buchser advocated for outcome-…
Event: The African IGF (AIGF) emphasises the importance of a multi-stakeholder approach to ensure its success. This approach involves collaboration from various stakeholders including government, civil socie…
Event: Alain Ndayishimiye: So the technical development and deployment of AI is… So here I’m referring to ethical considerations when developing and actually deploying these technologies. It’s what often ke…
Event: (72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase wit…
Resource: The AI Act mandates CE marking for high-risk AI systems; and additional certification requirements are demanded in specific application contexts. Therefore, it is appropriate to foresee su…
Resource: Despite their different approaches, both speakers demonstrated remarkable consensus on fundamental principles. They agreed that risk-based regulation represents the most effective methodology, though …
Event: **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological advancement and regulatory lag, with different regions (China, EU, US) developing…
Event: Data governance and security concerns present another significant barrier. Shetty shared a compelling anecdote about an aerospace company discovering their proprietary designs appearing in ChatGPT, de…
Event“Protecting critical infrastructure – energy, water and health‑care – is the core focus of Poland’s AI strategy and trustworthy AI is essential for keeping these services running.”
The knowledge base states that Minister Rafał Rosiński emphasized the critical importance of protecting national infrastructure through trustworthy AI systems, confirming this focus.
“Poland’s home‑grown large‑language models, the public “Bielik” LLM and a second version co‑developed with academia and the private sector, keep data and services under Polish control while enhancing competitiveness.”
Source S2 describes Poland’s development of national language models, including the Bielik LLM, through cooperation with academia and the private sector, supporting the claim.
“Chengetai Masango, head of the Internet Governance Forum, argues that inclusivity and multi‑stakeholder participation (government, civil society, technical community, industry) builds legitimacy and trust in AI governance.”
Masango’s role at the IGF and his emphasis on multi‑stakeholder dialogue are documented in sources S30 and S92, confirming the statement.
“The ITU has approved more than 200 AI standards, with another 200 in the pipeline, totalling roughly 500 standards and drafts, and defines three technical building blocks for interoperability: shared data format, standardized APIs, and common communication protocols.”
While the knowledge base does not give the exact numbers, it outlines ITU’s broad standardisation mandate, its 10 study groups, and its role in fostering interoperable ICT standards, providing contextual background for the claim [S27] and [S86].
The participants show strong convergence on four core themes: (1) the centrality of trust for AI, especially in critical infrastructure; (2) the role of standards and interoperability in lowering costs and building confidence; (3) the necessity of inclusive, multi‑stakeholder and community‑driven processes to legitimize AI; and (4) the importance of robust data governance and capacity to overcome implementation bottlenecks.
High consensus – most speakers echo each other’s points across different domains, indicating a shared understanding that trustworthy, standards‑based, and inclusive AI, underpinned by solid data governance, is essential for successful national and cross‑border AI deployment. This alignment suggests that coordinated policy actions on standards, capacity building, and inclusive governance are likely to receive broad support among stakeholders.
The discussion shows moderate disagreement. Participants agree on the overarching importance of trust, standards and inclusivity, but diverge on what they see as the principal obstacle to AI scaling (data governance vs business trust vs regulatory awareness) and on whether regulation primarily enables or hinders cross‑border AI investment. Unexpected friction appears between community‑focused and business‑focused trust approaches.
Moderate – the disagreements are largely about emphasis and implementation pathways rather than fundamental contradictions, suggesting that coordinated policy that addresses data governance, standards awareness, and both community and business trust needs could reconcile the differing views.
The discussion evolved from high‑level policy framing to concrete technical and human‑centred challenges, driven by a handful of pivotal remarks. Atsuko’s standards overview anchored the conversation in tangible interoperability needs; Chengetai’s inclusivity argument expanded the scope to legitimacy and trust; J.J.’s regulatory playbook reframed rules as market enablers; Pramod’s three‑pillar model gave a clear framework for trustworthy AI infrastructure; Edyta’s emphasis on communication highlighted the critical human adoption barrier; Odes’ focus on data bias and market inclusion deepened the equity dimension; and Mariusz’s compliance suite demonstrated how the private sector can operationalise these insights. Together, these comments redirected the dialogue from abstract aspirations to actionable pathways, shaping a multidimensional narrative that interwove standards, governance, economics, infrastructure, and user experience.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.