AI as critical infrastructure for continuity in public services
Summary
The panel opened with Lidia asking Minister Rafał Rosiński about lessons from Poland's digital governance and AI rollout in national systems [1-3]. Rosiński emphasized that protecting critical infrastructure (energy, water, health care) and keeping data secure is the cornerstone of trustworthy AI, linking cybersecurity with the development of national large-language models such as Bielik to support both the public and private sectors [9-16][20-24]. He noted that building a domestic AI ecosystem and exchanging knowledge internationally are essential for safety and competitiveness [24-25].
Atsuko Okuda of the ITU explained that over 200 AI standards are already approved, with another 200 in the pipeline, and that standards on data formats, APIs, and protocols are vital for cross-border interoperability and reduced investment costs [39-48]. She added that harmonized terminology, reference architectures, lifecycle definitions and conformance testing further enable collaboration across regions [50-57]. Chengetai Masango argued that inclusive, multi-stakeholder processes (government, civil society, the technical community, and the private sector) create legitimacy and trust, especially when decisions are made transparent through public consultations and accountability mechanisms [63-70]. Odes reinforced this by showing that community-driven ecosystems, linguistic diversity, and feedback loops are key to building trust in AI services for citizens [78-89].
J.J. Singh highlighted that clear regulatory frameworks such as the EU AI Act serve as a “playbook” that can facilitate cross-border AI trade, citing sandbox programs that help Indian firms enter European markets [96-107]. Mariusz Kura described how his company scales AI solutions through global delivery centers but faces the challenge of complying with rapidly changing regulations, which prompted the development of an AI compliance suite to guide enterprises [115-124][128-138]. Pramod stressed that trusted AI requires control over data and compute, explainability of decisions, and resilience of services, especially for critical public-sector deployments [145-165][166-176].
Both Pramod and Mariusz identified data silos, weak governance, and poor alignment across legal, IT and business functions as major bottlenecks, while Edyta Gorzon pointed out that user adoption hinges on clear, simple communication that addresses fears of replacement and clarifies benefits [227-244][255-272]. Atsuko later noted that the biggest implementation gap is awareness and capacity, urging participants to articulate needs, translate them into projects, and leverage existing standards [210-223]. Chengetai concluded that inclusive participation in AI decision-making and independent oversight are the most effective practical steps to strengthen public trust [277-290]. Odes added that ensuring AI serves the intended users, contextualizing data, fostering local value creation, and respecting linguistic diversity are essential for sustained inclusivity [294-304]. Finally, J.J. Singh summed up that long-term confidence in cross-border AI investments arises from a mix of top-level commitment, clear purpose, and coordinated stakeholder involvement [308-311].
Key points
Major discussion points
– National AI strategy must be anchored in trustworthy, secure infrastructure and home-grown models.
The Polish minister highlighted critical infrastructure (energy, water, health) as the foundation for AI, linked cybersecurity to “trustworthy AI,” and described the development of Polish large language models (Bielik) to keep data and security under national control [9-16][20-22].
– Global standards are essential for interoperability, resilience and shared understanding of AI systems.
The ITU representative explained that more than 200 AI standards are already approved (about 500 in total once those in the pipeline are counted) and that common data formats, standardized APIs and communication protocols, together with harmonised terminology, reference architectures and conformance testing, enable cross-border AI collaboration [35-48][50-57].
– Inclusive, multi-stakeholder governance builds legitimacy and public trust.
Participants from civil society (Chengetai) and community-focused experts (Odes) stressed that involving government, industry, academia and citizens in policy design, providing transparent consultation processes, and ensuring accountability are the core mechanisms that turn AI governance into trusted practice [63-70][78-89].
– Implementation hurdles centre on data, regulatory divergence, and the human factor.
Private-sector speakers (Mariusz, Pramod, Edyta) pointed to fragmented data, rapidly changing compliance requirements, the need for AI-specific compliance tools, and the difficulty of changing user behaviour and expectations as the main reasons pilots fail to reach production [115-124][128-138][144-165][227-236][241-244][255-272].
– Clear regulatory frameworks and cross-border coordination boost economic confidence and investment.
The chamber representative (J.J. Singh) argued that a well-defined “playbook” such as the EU AI Act, combined with sandbox environments and early alignment by companies (e.g., Indian firms preparing for EU rules), facilitates international AI trade and long-term confidence [96-108][308-311].
Overall purpose / goal of the discussion
The panel was convened to explore how governments, international bodies, civil society and industry can jointly shape AI governance that is secure, interoperable, inclusive and trustworthy, while identifying practical steps to overcome technical, regulatory and human-centred barriers to the deployment of AI in public services and the broader economy.
Overall tone and its evolution
– The conversation began with a formal and optimistic tone, emphasizing national achievements and the promise of AI (e.g., Poland’s LLMs, ITU’s standards).
– It then shifted to a constructive, solution-focused tone as speakers detailed concrete mechanisms for standards, multi-stakeholder processes, and compliance tools.
– Mid-discussion the tone became pragmatic and cautionary, highlighting real-world obstacles such as data silos, regulatory churn, and user resistance.
– The final segment adopted a forward-looking and conciliatory tone, stressing the need for clear guidelines, cross-border cooperation and inclusive practices to sustain trust and investment.
Overall, the dialogue moved from showcasing potential to confronting challenges and ending with a consensus on collaborative actions.
Speakers
– Chengetai Masango
– Areas of expertise: Multi-stakeholder governance, AI policy
– Role: Head of Secretariat, Internet Governance Forum (IGF)
– Title/Affiliation: IGF Secretariat [S3]
– Atsuko Okuda
– Areas of expertise: AI standards, telecommunications, standardization
– Role: Regional Director, International Telecommunication Union (ITU) Regional Office for Asia and the Pacific
– Title/Affiliation: ITU Regional Director [S4][S5]
– Lidia (Lidia Stepinska-Ustasiak)
– Areas of expertise: Policy facilitation, AI governance (panel moderator)
– Role: Moderator / facilitator of the discussion panel
– Title/Affiliation: Session facilitator [S8]
– Pramod
– Areas of expertise: AI infrastructure, data sovereignty, secure compute, resilient digital backbone
– Role: Co-founder & Chief Architect, NFH India (AI Impact Summit)
– Title/Affiliation: NFH India [S9][S10]
– Edyta Gorzon
– Areas of expertise: AI adoption, change management, user-centric deployment
– Role: Lead for AI adoption initiatives (responsible for driving adoption)
– Title/Affiliation:
– Rafał Rosiński
– Areas of expertise: National AI strategy, critical infrastructure, trustworthy AI
– Role: Minister (Poland)
– Title/Affiliation: Minister Rosiński
– J.J. Singh
– Areas of expertise: International trade, regulatory alignment, AI policy
– Role: Representative, Polish Chamber of Commerce
– Title/Affiliation: Polish Chamber of Commerce [S16]
– Mariusz Kura
– Areas of expertise: AI compliance, scaling AI solutions across regions, software development
– Role: Representative of Bilenium (AI compliance suite provider)
– Title/Affiliation: Bilenium
– Odes
– Areas of expertise: Community-driven digital ecosystems, inclusive AI deployment
– Role: Panel participant / speaker
– Title/Affiliation:
Additional speakers:
– None identified beyond the listed speakers.
The session opened with moderator Lidia directing her first question to Minister Rafał Rosiński. He explained that protecting critical infrastructure – energy, water and health-care – is the cornerstone of trustworthy AI because society cannot function without secure, data-protected services [9-12]. He added that digital-skill hygiene, support for local governments and the use of AI to enhance business security are also essential [??-??]. Rosiński highlighted Poland’s strategy of developing national large-language models, namely a public LLM called Bielik and a second, academia-partnered version of Bielik, to keep data and AI capabilities under national control and to boost the competitiveness of Polish firms [15-22][23-24].
Atsuko Okuda of the International Telecommunication Union (ITU) then described the role of global AI standards in achieving interoperability and resilience. She noted that the ITU already has more than 200 approved AI standards, with another 200 in the pipeline, totalling roughly 500 [39-41]. The standards focus on three technical building blocks – a shared data format, a standardised API and a common communication protocol – which lower investment costs and enable systems from different countries to communicate smoothly [43-48]. Beyond these basics, the ITU is developing harmonised terminology, reference architectures, lifecycle definitions and conformance-testing procedures to ensure AI components can be exchanged and validated across borders [50-57].
Chengetai Masango argued that inclusive, multi-stakeholder participation – involving government, civil society, the technical community and the private sector – creates legitimacy and public trust. He stressed that transparency through open consultations, public comment periods and accessible documentation is vital, and that accountability mechanisms must be in place so concerns can be addressed effectively [63-70]. He cited the Internet Governance Forum (IGF) as a successful model of multi-stakeholder dialogue that can be replicated for AI governance [65-66].
Building on inclusion, Odes highlighted the importance of community-driven digital ecosystems. He explained that AI services must be linguistically and culturally appropriate; otherwise trust erodes when only a minority can understand the language of an AI system [82-84]. He advocated for continuous feedback loops that allow communities to influence both innovation and policy, ensuring AI solutions remain relevant and are continuously improved [85-89]. He also stressed that local value creation and linguistic diversity are essential for inclusive AI deployment [??-??].
J.J. Singh shifted the focus to regulatory alignment and its impact on international trade. Referring to the EU AI Act, which will be fully applicable in 2026, he described it as a “playbook” that, despite initial investor concerns, provides clear guidelines that facilitate market entry for non-EU firms, such as Indian AI companies participating in EU sandbox programmes [96-104]. He gave concrete examples of AI misuse in other countries (for policing and profit-driven deployments) to illustrate why regulation is needed [??-??]. Singh argued that such regulatory clarity is a prerequisite for cross-border AI investment and for protecting citizens from misuse [105-108][109-110].
From the private-sector perspective, Mariusz Kura described how his company scales AI solutions through a network of global delivery centres, allowing rapid development in one location and immediate testing in another [116-120]. He identified the main obstacle as the need to comply with rapidly evolving, region-specific regulations, and presented his firm’s AI-compliance suite – a tool that helps organisations select cost-effective AI providers and manage licensing across jurisdictions – as a way to navigate legal requirements [128-138]. Kura reiterated that forthcoming ITU certifications will support AI engineers in meeting these standards [122-125].
Pramod presented a technical framework for “trusted AI”, centred on three pillars: (1) control – who holds the keys to data and compute, ensuring data sovereignty and auditability (he repeatedly asked “Do you control the data?”) [161-169]; (2) explainability – the ability to trace decisions across model, data and network layers [170-174]; and (3) resilience – the system must remain operational when needed, especially in critical sectors such as health-care [175-176]. He warned that without full visibility and control, AI decisions cannot be reliably explained, posing risks to public-sector services [170-174].
Both Pramod and Kura identified data-related issues as the most common implementation bottleneck. Pramod noted that around 80% of AI pilots in India never reach production because data is siloed, not ready for scale and lacks proper governance [232-237]. He added that organisational misalignment, where legal, IT and business units are not coordinated, further slows adoption [241-244]. Kura echoed these points, noting that medium-sized enterprises often hesitate to adopt foreign AI solutions due to trust deficits and the absence of recognised standards [249-252].
Edyta Gorzon focused on the human factor, arguing that successful AI adoption depends on clear, simple communication that addresses users’ fears of replacement and cognitive overload. She recommended framing AI as a tool that improves the quality of work rather than merely increasing productivity, and stressed the need for organisations to manage the “brain-overload” that rapid AI change can cause [194-197][257-272].
All participants agreed that common AI standards are a cornerstone for building trust and enabling seamless cross-border interaction, thereby reducing costs and accelerating deployment [43-48][50-57]. They also concurred that inclusive, multi-stakeholder governance and transparent communication are essential for legitimacy, public confidence and user acceptance [63-70][78-80][194-197].
Views diverged on the primary barrier to AI implementation: Pramod highlighted data silos and governance [232-237]; Okuda stressed low awareness of existing standards and limited capacity to apply them [211-222]; Kura pointed to business-level trust deficits and the lack of widely accepted standards [249-252]; Gorzon emphasized human-centred concerns such as fear of replacement and cognitive overload [257-267].
Participants also differed on the preferred route to cross-border scaling. Kura advocated the use of global delivery centres and compliance tools [116-120][128-133]; Pramod argued that trust must first be established through control, explainability and resilience [161-165]; Okuda maintained that closing the awareness and capacity gap around standards is the prerequisite [211-222].
Key takeaways
– Protecting critical infrastructure and developing national LLMs (including digital-skill hygiene and support for local governments) are essential for AI sovereignty [9-12][15-22][??-??].
– Global AI standards – shared data formats, APIs and protocols – are central to interoperability [43-48].
– The main implementation gap is awareness and capacity to adopt standards, not the lack of standards themselves [211-222].
– Inclusive, transparent multi-stakeholder processes build legitimacy [63-70].
– Community-driven ecosystems that respect linguistic and cultural diversity enhance trust [82-89][??-??].
– Regulatory alignment, exemplified by the EU AI Act and sandbox programmes, catalyses cross-border trade [96-108].
– AI-compliance tools help organisations navigate divergent regulations and select cost-effective providers [128-138].
– The three-pillar model of control, explainability and resilience underpins trusted AI [161-176].
– Clear, user-focused communication that stresses quality improvement and task relief mitigates human resistance [194-197][257-267].
In the final round, Lidia asked the Minister to identify the most complex operational challenge for governments deploying AI in public services. Rosiński pointed to training generative AI on national data, ensuring responsible use, and combating deepfakes and misinformation [202-206]. Okuda was then asked where the biggest implementation gap lies; she reiterated that awareness and capacity, rather than the absence of standards, are the main obstacles [210-222]. Pramod and Kura were each asked what most often slows down real AI projects; both pointed to data readiness, governance and legal alignment as primary frictions [227-236][247-252]. Gorzon was questioned about the human barrier and stressed that messaging must focus on quality improvement and task relief rather than mere productivity gains [194-197][269-272]. Finally, Singh was asked what creates long-term confidence in cross-border AI investments; he answered that a mix of top-level commitment, clear purpose and coordinated stakeholder involvement is essential [308-311].
The session was wrapped up with Lidia thanking the participants and signalling the end of the discussion [??-??].
I direct my first question to Minister Rosiński. Minister, Poland has been implementing and shaping digital governance and also investing in the sustainability and resilience of national systems. What are the lessons learned, and which lessons are the most relevant when we talk about implementation of AI in national systems? Maybe the other one. Yeah.
Thank you very much. Thank you. Thank you. Like the energy sector, water supply, health care. That is the main point of our day. Critical infrastructure, I think it’s the crucial point in every country. We cannot imagine how can we run the business if we have… We have no energy, no water, and our data is not protected enough. And we support also local government. We create local… through cyber security. And that is connected with digital skills, especially hygiene in this area. And cyber security is linked with AI, with trustworthy AI. That is also the important thing if we use AI, especially national LLMs, and we can use it for the security of our business.
And how can we train the national data? That’s why in Poland we’ve built also Polish LLMs. One is Bielik, which is one public LLM, and the second one is Bielik that is built in cooperation with academia, with the private sector, and we support it also. That can allow Polish business to also be competitive. That’s the whole, if we see this whole ecosystem, and we can also exchange our ideas and share our knowledge with other countries. That is the way, the proper way
to be safe and to use trustworthy AI.
Thank you very much, Minister, for giving beautiful examples of language models from Poland and their role in the Polish ecosystem, regarding both the public sector and the private sector, and for framing AI as a matter of public responsibility and resilience. And now let’s move to the international level and have a look at the global dimension. And I would like to ask a question to Atsuko Okuda. How can global standards ensure interoperability and resilience of AI systems across regions?
Thank you very much. First of all, good afternoon to all of you. And I would like to thank the audience and the organizer for inviting ITU, the International Telecommunication Union. And as some of you may know, ITU is the oldest UN agency specialized for digital technology. And we have standardization work, including on the topic of AI. Now, what do AI standards do for all of us? Number one, they will enhance interoperability, which means that if a system or solution is developed in India, it can talk to the system, as His Excellency mentioned, in Poland, and vice versa, and that will lower the investment cost, that will increase the efficiency. So what are those standards that could be useful because of the interoperability, and especially within the country as well as within the region or globally?
So one concrete standard… Oh, by the way, just to give you the magnitude, ITU has over 200 already approved AI standards, and 200 more are in the pipeline. So in total, we have about 500 standards in place as well as in the pipeline. So you can see there are many different standards which are available for everyone. So what are those standards? Number one, for the interoperability, we believe that data, the interface, and protocol are critical. For example, we have a shared data format that we can all use. Otherwise, how can I share my data with you with a different data format? Two, standardized API so that system-to-system communication will be smooth. And three, of course, communication protocol.
Now, because based on these standards, we have more, how can I say, comprehensive standards. Thank you. For example, AI for network automation, multimedia AI processing standards, as well as machine-to-machine data sharing frameworks, for example. And second, we also have a harmonized terminology, vocabulary, and reference architectures. Because when I talk to, it’s not only you, but with anyone, some aspect of AI, how do we know that we understand the same thing? So this taxonomy, vocabulary, and the reference architecture is critical for interoperability and for us to be able to develop and exchange data or develop the algorithm together. So we have our AI model life cycle definition, so I know what you are referring to, and you know what I’m referring to.
Three, we have the conformance, performance and testing related standards, so that we can test and validate, and we have also conformance specifics that we use as a standard to validate that what you are sharing is what I can validate. So I hope this shows that the standards are useful for enhancing the interoperability as well as for enhancing the collaboration within the country as well as across the regions. Thank you.
Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Chengetai, how does multi-stakeholder cooperation translate into real public trust in AI governance?
Thank you very much, and thank you very much for the invitation, and I’d like also to thank the organisers, Millennium and Poland of course, for inviting me. Now for your question: for any process, I think, inclusivity breeds legitimacy and thereby trust. So if you have all the stakeholders who are affected by whatever policy that is, so you must have government, civil society, the technical community and the private sector all talking to each other and giving their points of view from their perspectives, I think then you can result in policies that have a greater buy-in. So once people are involved in the process, they’re more likely to adopt that process. And secondly, the transparency of the process also matters: people need to know how these decisions came about and also what was decided, and this can be done with open consultations, public comment periods and accessible documentation that builds confidence.
This is basically the same model that has built the internet into what it is now. You have the public comment period, etc., and then these are adopted. The IGF as well shows that this works. The Internet Governance Forum is a multi-stakeholder dialogue, and within our framework we discuss AI governance as well, and a lot of other things, misinformation, disinformation, etc., and this approach can anchor AI governance in legitimacy. Trust as well is built locally, so these discussions should not just be happening at a global level and then trickle down. Local communities should be able to contribute in some manner, and this process should be a cycle. So the feedback loop should be down but also up.
So there’s a resonance going on there. And then I think lastly, accountability mechanisms are also very, very important. So a multi-stakeholder cooperation without clear accountability methods, people will not trust it, because they need to know, if they have an issue, where they can go and express that concern and that it will be dealt with in some manner or function. Thank you.
Thank you very much. I couldn’t agree more. Trust is also built locally, and that’s why I would like to direct my next question to Odes. How can community-driven digital ecosystems contribute to building trust in AI locally?
Thank you. Good afternoon, everyone. I say that modestly, and saying thank you for your attention. Thank you for the invitation to join this panel. To give context to community participation, both at the innovation level and at the policy level, I would like to start with where Chengetai just finished, which is that community is a big stakeholder and a big participant in the multi-stakeholder framework. If you think about deploying AI solutions, especially for public services, then you realize that the inclusivity is what builds trust. The ability to deploy AI and to be consumed by every citizen is at the core of the trust between the users and the providers of the services. So taking into account that community, making sure that it’s included.
I’ll give an example. If you think about linguistic diversity that is there in many of the communities, in many of the countries of this world, you realize that if you build such a product, or an AI solution, and it’s in a language that only 20%, 50% of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, which is the citizens. The second part is that in the innovation cycle as well, we’ve seen on and on AI being deployed, but it doesn’t reflect the realities of certain communities, and that’s both, you can think about it linguistically, you can think about it contextually, you can think about it in the different forms and shapes it takes in different domains.
So the participation of the community in that, in ensuring that the innovation and the policy level align with the needs and the realities of those particular communities, is very important. To finish off, I think that communities, or cities and communities, and the citizens are also a big part of how AI systems are improved, because once you deploy such a system and you don’t have a feedback loop, then you realize that those particular technologies only work for some time and the adoption goes down after some time. So I think those three things are very key in building trust. First, inclusivity, being part of it. Second, the participation in the innovations as well. And lastly, the feedback mechanism for how those services are being consumed, are being used and what can be improved.
Thank you very much. Trust also can influence economic confidence and cross-border collaboration. That’s why I would like to direct my next question to JJ. Does regulatory alignment directly influence international trade? What is your perspective and observation? If you could share experience from the Polish Chamber of Commerce.
Well, I will just share the experience from the perspective of Poland in the EU and India. Normally, all are saying a lot of regulations always, you know, dishearten the business and the investments. But I think in this particular case, if it comes to the AI, I think we need a guidebook, because without that, everything can go haywire. So if you look at the regulation with the EU AI Act, which will be implemented in 2026, I think in a way it makes a kind of issue for the investors. But on the other hand, if you take it, if you have the clear guidelines, it’s always very good, in view of the India-EU FTA, that the Indian companies will be ready
for deployment of the AI algorithms and other things within Europe. Now, let’s take the example also of how the EU, even businesses are saying that, well, the regulations are very tough, the compliance is very tough, but the EU is also acting from their own side to make it easier for the businesses. I can use the example here from 2025, where in France there are 10 AI companies from India which are actually part of the accelerator program, and the EU is also ready to give a sandbox solution for all the regulations. So, all in all, my perspective is you need a kind of control, especially on the generative AI, and you need some kind of control on the AI. So the rulebook which the EU has given, it will be like, you know, I would say it’s a playbook for all the AI companies involved, and I think that India should be involved.
India should take the advantage of that because if… If they are already prepared to adhere to the rules, then I think the entry will be easier for the companies. So I definitely support the regulation, because in this particular matter of AI, we need regulation. Because if you see the other countries, I will not take the names: one is using it for policing its own people, and the second is using it only for making the money. So yes, it’s good, but with sense.
Thank you very much. In our discussion, we have also three representatives of the private sector who know practical aspects very well because they have to deal with all these challenges on a daily basis. So I would like to start with Mariusz Kura. Mariusz, how do you scale AI solutions across regions while managing regulatory divergence?
Thank you, Lidia, and good afternoon, everyone. Good afternoon. Distributed software development for the international IT companies is not new. We have started practicing this a millennium, 10 years back, when we, together, were opening the office, the delivery center in Pune, Maharashtra, here in India. And a simple practice to scale up and be fast is to have exactly the global offices, and like our development team can build some solution, let’s say, in one day and deploy it, and the next day, business in Europe can verify if it’s working as it was expected. If not, then our development team in India can fix it even on the same day. So that’s the one way how we’ve been scaling up so far.
But the challenge nowadays is exactly how to scale up and follow all the regulations, and how to work for the different regions, for the different countries, where we have exactly, like for the public sector, a lot of rules. And hopefully from ITU we have as well two hundred more certifications. So, yeah, the way how we can standardize it, standardizations. So, AI engineers and AI solution providers in India need to learn and need to be compliant with all those standards. And it’s very difficult nowadays because it’s so fast. It’s changing almost like every week. And how to exactly follow that? At Bilenium, recently we have developed as well one dedicated solution, which is the AI compliance suite.
And this tool is quite complex. It’s not only covering the governance and compliance area, but as well helping the organizations to use the right AI tools. Nowadays the enterprises, they are using, in a while Edyta will be talking about the Copilot, but there are plenty of the different tools used in the enterprises. And our solution is helping the organizations to navigate the users to the right solution. And what does it mean, the right solution? For example, it could be as well from the cost-effective perspective. Like, for example, should we use this and utilize the tokens from that provider? Or maybe another provider is having the better license practice and policy offering. So, that’s, I believe, what can help, yeah, kind of that solutions for the IT solution providers.
Thank you.
Thank you very much for a beautiful example of how AI can help manage AI. And now let’s have a look at infrastructure. And I have a question to Pramod from an infrastructure standpoint. What does trusted AI require on the ground, in terms of data sovereignty, in terms of secure compute and a resilient digital backbone?
Good afternoon, everyone. Pleasure to be here. So when AI starts moving, getting adopted into public services, critical national security deployments, the trust moves not just on the models, but moves from the models and data to the underlying foundation. When I say foundation: where is the model running? What compute is it tuning on, is it running on? Do you control the data? Is there, you know, what jurisdiction? Do you control the data? There are, you know, the security components around it. So all in all, you know, there are three questions that one needs to ask before you say that you fully trust AI, right?
The first question is on the control. The second one is, you know, can you tell me what happened, right? The AI system, will you be able to explain what happened across each of these layers? And the third one is, is it up? So the control part is like we just discussed, you know, control, not just of the data. Data sovereignty just doesn’t mean that, you know, data stays local. But what we’ve seen from our customers asking is, you know, is there any other jurisdictional law that can, you know, override, saying, hey, I need full visibility of the data, of that infrastructure, you know, auditability and so on and so forth. So I think that’s, do you have the keys?
Is a key question one needs to ask. The second one is on the explainability, on the visibility, and not just on the model monitoring, whether I am getting accurate data, but overall on data: who accessed it, what is the governance around it, what happened in the network. So across all the foundation, if you don’t have full visibility, you will not be able to explain why a system took a decision, right? Because now we are talking about critical infrastructure. The decision it takes can have an impact, and the impact could be disastrous. The third one is, again, resilience. So the resilience, by resilience, we mean can AI stay up? Let’s say if it is in healthcare, in a remote tier city, a hospital deploys an AI to diagnose the system.
A patient walking in at 2 a.m. on a Sunday morning, you know, the system needs to be up. It needs to be resilient like any other financial system, but here the implications are huge. So AI is moving from being just a software service to AI as a foundation, where all of these elements need to come together before anyone can say I fully trust. I think that’s the
Thank you very much. And it is common knowledge that technologies are widely diffused and used only when they are trusted. And sometimes the human factor is an important barrier to AI adoption. That’s why I would like to ask Edyta, who works with users a lot, what determines whether AI is truly adopted by teams?
Excellent question. Thank you so much for that. Good afternoon, everybody. Thank you for all the comments. So we’ve been talking about infrastructure, about security, cybersecurity, about the legal aspects of AI. However, we should remember that deployment is technology, but the users, they want to change. We want to change the way how they are acting with AI. From the practical perspective, because I’m responsible for driving adoption: in the past, it was the topic of the modern work, now we have AI. And we should remember that the majority of users of AI are end users. They are not people who are taking part in conferences like this one. They are not that fluent with technology, but at the same time, we expect from them to be fluent and to change the way how they act.
How they work. So from my experience, it’s extremely important to communicate in the right way, in simple words and simple examples, how AI can be the powerful tool. Not because of the features, because we all know that features are not driving anything, nor business, nor processes, nor business scenarios, whatever we have in our minds. And in AI, everybody can use AI in a different way. This is the biggest challenge from the change management perspective as well, because we can have the best technology, the best model, but if the users, they don’t know how to use it, if they don’t know where it leads to, it’s hard to expect that we’re going to succeed at scale.
Thank you very much, and thank you to all of you for sharing your views in the first round of questions. In the second round, we will turn from strategy to implementation, and I will ask all of you for a very short reflection from this level. And Minister, what is the most… complex operational challenge governments face when deploying AI in public services? What is your view?
Shortly, of course, what JJ mentioned about, I talked about this, uh, that this, um, is very important also for the Polish perspective, and how can we also see that perspective of other countries, except the EU. That is the other, it’s important that how can we train the data, how can we use the data, and how it will be the future of generative AI? That we have to use, of course, wisely. It is very important, the final goal, and how it will be used. Especially for the public sector and especially for our citizens, if we look in that way, that will be good for everyone. And of course, um, implementation of AI in the public sector, and of course when private companies also use this data, that is important, to see how can we also fight against deep fakes and the false information. Thank you
Thank you very much. Atsuko, where do you see the big implementation gap today? Is it standards, lack of standards, skills, governance, what is it?
Thank you for this very important question. I believe that there is perhaps an awareness challenge as well as a capacity challenge, because I think that this whole discussion on standards came as a surprise to many of the participants. Actually, this is not the first session I’m talking about standards. This is actually the third during the summit. But I am not sure, unless you are the standardization person, you don’t normally think about, okay, there are building blocks available, right, that I can start building something based on the building blocks. So we are trying to promote the importance of standardization and using the standards so that you don’t have to. Thank you. I believe we need a lot of different capacities, the capacity to articulate the issue.
What is it that you or we want to address? Sometimes AI may or may not be the answer. Some other technologies may be able to help you better. So I believe this articulation is a huge maybe opportunity and challenge as well. After you articulate, how do you plan, how do you translate that articulated issue into an operational project and initiative? I believe it’s another layer of a capacity challenge. So I can see that there are many countries, companies, agencies who want to take advantage of the AI, but I hope that this discussion is helpful to concretize those steps moving forward. Thank you.
Thank you very much. My next question will be directed to our technical experts, Pramod and Mariusz, and the question is: in real AI projects, what most often slows down implementation?
First, definitely not technology, because I think we’ve seen technology is almost always ahead, very true, over the last couple of years with the advancements that have happened. So despite advanced technology being available, despite GPUs being available, the platforms being available, we still don’t see too many monetizable AI use cases, and that’s a big problem. Everybody is trying to figure out where my ROI is, what is that use case. And that again boils down to a few key aspects. One is, the biggest friction is on data. So we’ve seen, especially in India, we’ve seen many, many, many pilots. And almost 80% of those pilots don’t make it to production. And the key reason is on the data.
Data is siloed, data is not ready for AI scale. There’s no governance built around data. And that’s why with POCs, you use a good set of data and you show value. But then when it comes to production, most of the times they don’t have enough data to get the value out of it. The second, again, AI cuts across. In an organization, AI cuts across many functions. The technology team is saying, you know, we are ready with this, but then there are legal aspects, there is an IT guy sitting, you know, saying I cannot allow you to do this, and so forth. So that alignment is not thought through, right, and that also again slows down the adoption.
So I think these are the primary ones, and then again, you know, the trust factor comes in. The third part is, how much do you really trust AI to do, you know, how much risk comfort do you have, is there a human afterthought required for every decision it makes? So I think that organizations need to choose that balance, or choose the best use case where, you know, it’s balanced: without requiring too much of human intervention, can I deploy this? Those are the key factors that we see, especially in India, that are slowing down the adoption.
It seems that whatever we are discussing, infrastructure or other challenges, the human factor is always at the end and behind everything. Mariusz, is your experience similar or do you have different observations?
I totally do agree with Pramod. It’s not us, the technology, who is slowing it down. Maybe sometimes, but it’s many times on the business side, and especially for the medium-sized enterprises. If they don’t know if they can work with some solutions, or if they don’t know if they can take the solutions, for example, from India, they will step back and they will go to the more trusted local providers. So I believe that the standards that we are talking about will help us a lot. So that’s my practice.
Okay. Edyta, what… What is the most common… human barrier from your view?
Thank you for this question. So first of all, we talk again about humans, the most important factor and at the same time the biggest challenge and the biggest opportunity. From my perspective, I think that while talking with users, because today I’m a user voice, I can hear very often that people, they are reflecting: what’s going to be next, if I’m going to be replaced by AI? What’s in it for me? And we also need to find the message as an organization, no matter if public or private sector, how to communicate all of those changes that are coming. Another topic I’m facing while talking with the users: they basically don’t know what to expect next, because as we have noticed, AI is another revolution and the revolutions are coming one after another very shortly.
And when the users, they can hear, okay, I should be more productive. I don’t want to be more productive anymore, right? I don’t want to do faster meetings. I don’t want to do faster notes, right? It’s nice. But at the same time, my brain and the number of different impulses I’m getting from outside is simply too high. Our brains are not capable to manage that in the right way. We’re closer to depression and we know in which direction it goes. So how we are communicating AI as a part of the tool is extremely important. So be careful what you are telling your users. Don’t tell them that they will be more productive. But maybe the quality of their work is going to be better.
Maybe they don’t have to repeat the same tasks every day, but we must be very, very careful what kind of wording we’re using in regard to AI adoption. Thank you.
Thank you, thank you very much. My next question will be to Chengetai, because he looks at these challenges from the global perspective and has access to data from all regions. In your view, what would be the most important practical step to strengthen public trust in AI deployment?
Thank you very much for that question. And by the way, I totally agree with you. I think the first one is quite obvious: inclusive participation in AI decision-making. So ensuring that the affected communities or the affected individuals have input into how the systems operate, and before they are deployed, so not after the fact. We shouldn’t be fixing things after the fact, but we should get input before the deployment.
The second one is independent oversight, so establishing review bodies that include civil society and the technical experts, so not just the regulators and industry, but a 360 approach to it. Thank you.
Thank you very much. We are approaching the end of our session, so I would like to ask Odes for a quick comment. What ensures AI remains inclusive in real world implementation?
There are a few key factors to look through when you talk about inclusivity. I think the first is to look at who it is meant for and to ensure that they are accounted for. And this can happen in different forms. For example, when you look at data sets that power AI models, most of the time they tend to come from, let’s say, the global north, meaning that they won’t be very contextually aware when they’re deployed in the global south. So there’s that need to contextualize the AI system being developed to ensure that they really respond to the users they are meant for. I think the second part of ensuring inclusivity is also ensuring the local value creation.
I think we’ve seen too often importation of AI systems, but not… the understanding of how especially small nations can participate in building and deploying AI for their interests. So I think those two things are very, very critical. And the other part is also, I guess, the linguistic perspective that I mentioned before, looking at the linguistic diversity that exists around the globe and ensuring that people are able to consume that particular technology being developed. I think when we often think about AI and how it’s deployed, we tend to look at the first 20% of the market, but the rest 80% also needs to be accounted for. So, yeah.
Thank you very much. Last question, and I will ask JJ for a very brief, one-sentence answer. What creates long-term confidence in cross-border AI investments, from your perspective?
Well, you know, I think I can simply say it’s a mix of everything. The involvement from the right people, I would rather say the people who are at the top, who are taking the serious investment decisions, because that’s very important. And the people who are involved, they should know what they want it for, because AI deployment is a big thing, but you should know what you want to solve with it. So that’s very important.
Thank you very much. It’s time to wrap up our discussion.
“Poland’s strategy of developing national large‑language models, namely a public LLM called Bielik and a second, academia‑partnered version of Bielik, to keep data and AI capabilities under national control and to boost the competitiveness of Polish firms.”
The knowledge base states that Poland’s approach to developing national language models involves collaboration between academic institutions and private companies, creating competitive advantages while maintaining national control over AI capabilities [S12].
“The session opened with moderator Lidia directing her first question to Minister Rafał Rosiński.”
The transcript excerpt shows the first question was directed to Minister Rosiński, confirming the moderator’s opening move [S120]; Lidia Stepinska-Ustasiak is identified as the session facilitator in the knowledge base [S8].
“Protecting critical infrastructure – energy, water and health‑care – is the cornerstone of trustworthy AI because society cannot function without secure, data‑protected services.”
The knowledge base discusses AI as critical infrastructure for continuity of public services, underscoring the importance of secure, data-protected systems for societal functions [S7].
“The ITU already has more than 200 approved AI standards, with another 200 in the pipeline, totalling roughly 500.”
While the knowledge base highlights ITU’s active role in AI standards development and the existence of hundreds of standards across organisations, it does not provide the specific figures cited in the report; the broader context of ITU’s standards work is described in ITU initiative summaries [S35] and the high-level AI standards panel overview [S31].
“The ITU’s standards focus on three technical building blocks – a shared data format, a standardised API and a common communication protocol – which lower investment costs and enable systems from different countries to communicate smoothly.”
The knowledge base notes that ITU’s standards work encompasses harmonised terminology, reference architectures and conformance-testing procedures, providing additional detail on the technical foundations of AI interoperability [S30] and the broader standards landscape [S35].
“The Internet Governance Forum (IGF) is a successful model of multi‑stakeholder dialogue that can be replicated for AI governance.”
The IGF is referenced in the knowledge base as a venue where standards promote transparency, collaboration and interoperability, supporting its role as a multi-stakeholder platform [S128].
The panel showed strong convergence on the importance of standards, inclusive multi‑stakeholder processes, and the human factor as central to trustworthy AI deployment. Participants from government, international agencies, and the private sector all agreed that standards and inclusive governance are pillars of trust, while also recognising that human‑centred communication and clear regulatory frameworks are needed to overcome adoption barriers.
High consensus across most thematic areas, indicating a shared understanding that technical, regulatory and societal measures must be coordinated to achieve trustworthy, interoperable AI. This consensus suggests that future policy initiatives can build on these common foundations to advance AI governance globally.
The discussion reveals moderate disagreement centered on the perceived primary obstacles to AI adoption (data governance vs awareness vs human factors) and on the preferred governance tools (standards versus regulation). While participants share common goals—trustworthy, scalable AI— they propose divergent pathways, indicating a need for coordinated strategies that address data, capacity, standards, regulation, and human‑centred change management.
Medium level of disagreement; it highlights the complexity of AI deployment and suggests that without aligning on priority barriers and governance mechanisms, progress may be fragmented.
The discussion evolved from high‑level policy framing to concrete technical and human‑centred challenges, driven by a handful of pivotal remarks. Rafał’s national‑LLM narrative set the stage for sovereignty concerns; Atsuko’s standards overview supplied the technical scaffolding; Chengetai’s inclusivity thesis reframed trust as a participatory process; Odes and Edyta grounded that trust in linguistic, cultural, and communication realities; J.J.’s defence of regulation turned a perceived obstacle into an enabler; Pramod’s three‑pillar model gave a clear, actionable trust framework; his later data‑governance warning pinpointed the biggest operational bottleneck; and Mariusz’s compliance‑suite illustrated how industry can translate standards into practice. Together, these comments redirected the conversation repeatedly, each spawning new sub‑topics and prompting other speakers to expand, critique, or apply the ideas, ultimately shaping a multi‑dimensional dialogue that moved from abstract governance to actionable implementation pathways.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
