AI as critical infrastructure for continuity in public services

20 Feb 2026 10:00h - 11:00h

AI as critical infrastructure for continuity in public services

Session at a glance
Summary, key points, and speakers overview

Summary

The panel opened with Lidia asking Minister Rafał Rosiński about the lessons Poland has learned while embedding AI into national systems, emphasizing the need to protect critical infrastructure such as energy, water and data [1-3][9-16]. Rosiński highlighted that trustworthy AI, supported by domestic large-language models like Bielik, is central to securing both public and private sector operations and fostering competitiveness [20-24].


Atsuko Okuda of the ITU explained that over 200 AI standards are already approved, covering data formats, standardized APIs and communication protocols, which together lower investment costs and enhance cross-border interoperability [36-48]. She added that harmonized terminology, reference architectures, lifecycle definitions and conformance testing further enable seamless collaboration among countries [50-57].


Chengetai Masango argued that inclusive, multi-stakeholder participation, bringing together government, civil society, technical experts and industry, creates legitimacy and trust, especially when processes are transparent and accountable [63-70]. Odes reinforced this view by showing how community-driven ecosystems, attentive to linguistic diversity and feedback loops, ensure AI solutions are relevant and trusted at the local level [78-89].


J.J. Singh noted that clear regulatory frameworks such as the EU AI Act, complemented by sandbox environments, can actually facilitate international trade by giving companies a predictable rulebook to follow [96-108]. Mariusz Kura described how his firm scales AI across regions through distributed development centers, but stressed that rapidly changing compliance requirements demand dedicated tools like an AI compliance suite to navigate standards and cost-effectiveness [115-129][130-138].


Pramod emphasized that trustworthy AI rests on three pillars: control over data and compute (including sovereignty), explainability of decisions, and resilience of services, especially for critical sectors like healthcare [145-165][166-176]. He and other speakers identified the main implementation bottlenecks as fragmented data, lack of governance, legal silos and lingering human mistrust, which together slow the transition from pilots to production [227-244]. Mariusz agreed that business-side uncertainty and the need for recognized standards further impede adoption, particularly for medium-sized enterprises [247-252].


Edyta Gorzon highlighted that users often fear replacement and are overwhelmed by rapid AI change, so clear, modest communication focusing on quality improvements rather than productivity gains is essential to overcome the human barrier [255-272]. The discussion concluded that building long-term confidence in AI requires a mix of inclusive participation, independent oversight, and clear strategic intent from senior decision-makers, ensuring both cross-border investment and societal acceptance [277-290][308-311].


Key points


Major discussion points


Trustworthy AI and national digital sovereignty – The Polish minister highlighted that critical infrastructure (energy, water, health) must be protected and that AI security is linked to cyber-security and trustworthy AI, especially through national large-language models such as “Bielik” to keep data and services under Polish control [9-16][20-23].


Global standards as the backbone of interoperability and trust – The ITU representative explained that AI standards (over 200 approved, 200 more in pipeline) enable systems from different countries to communicate via shared data formats, standardized APIs and protocols, and also provide harmonised terminology, reference architectures and conformance testing [35-48][43-48].


Inclusive, multi-stakeholder governance builds legitimacy and public confidence – Both the African and the community-focused speakers stressed that involving government, civil society, technical experts and the private sector in policy design, with transparent consultations, independent oversight and feedback loops, creates legitimacy and trust in AI deployments [63-70][75-88].


Regulatory alignment influences cross-border trade and investment – The chamber of commerce delegate argued that clear AI regulatory frameworks (e.g., the EU AI Act) act as a “playbook” that can facilitate Indian companies’ entry into European markets, while sandbox programmes and harmonised rules reduce compliance friction and support international AI commerce [96-108][308-311].


Practical implementation hurdles are largely data-, governance- and human-factor related – Participants pointed out that data silos, missing data-governance, rapid regulatory change, and users’ fear of replacement are the biggest blockers to scaling AI; solutions such as compliance suites, clear accountability, and careful change-management communication are needed [229-237][242-244][255-272][247-252].


Overall purpose / goal of the discussion


The panel was convened to explore how governments, international bodies, industry and civil society can jointly shape trustworthy AI ecosystems, covering policy, standards, regulatory alignment, and on-the-ground implementation, so that AI can be deployed safely, inclusively, and economically across national borders.


Overall tone and its evolution


The conversation began with a constructive and forward-looking tone, emphasizing national initiatives and the promise of AI for public services. As the dialogue progressed, the tone shifted to a pragmatic and problem-solving focus, acknowledging concrete challenges such as standards gaps, data governance, and human resistance. By the end, the tone became balanced and solution-oriented, summarising key actions (inclusive governance, clear regulations, robust standards) needed to sustain long-term confidence in AI.


Speakers

Lidia


– Role/Title: Moderator / Facilitator of the panel (co-founder and president of the Foundation Polistratos Institute)


– Areas of Expertise: Digital policy, AI governance, multi-stakeholder dialogue


– Sources: [S12]


Rafał Rosiński


– Role/Title: Minister (Poland)


– Areas of Expertise: Digital governance, AI implementation in critical infrastructure, national AI strategy


– Sources: [S3]


Atsuko Okuda


– Role/Title: ITU representative (International Telecommunication Union) – works on AI standardisation


– Areas of Expertise: AI standards, interoperability, global digital standards development


– Sources: [S5], [S6]


Odes


– Role/Title: Panel speaker on community-driven digital ecosystems


– Areas of Expertise: Community participation in AI deployment, inclusive AI design, linguistic diversity


– Sources: (none beyond transcript)


J.J. Singh


– Role/Title: Representative of the Polish Chamber of Commerce (participating in the discussion on regulatory alignment)


– Areas of Expertise: International trade, AI regulation, EU-India AI collaboration


– Sources: [S2]


Mariusz Kura


– Role/Title: Representative of Bilenium (AI solutions provider)


– Areas of Expertise: AI scaling across regions, regulatory compliance, AI compliance suite development


– Sources: [S13]


Edyta Gorzon


– Role/Title: AI adoption lead (responsible for driving AI adoption within her organisation)


– Areas of Expertise: Change management, user adoption of AI, communication of AI benefits


– Sources: [S14]


Pramod


– Role/Title: Co-founder & Chief Architect, NFH India (AI Impact Summit)


– Areas of Expertise: Trusted AI infrastructure, data sovereignty, secure compute, resilience of digital backbone


– Sources: [S15], [S16]


Chengetai Masango


– Role/Title: Head of Office, UN Secretariat for the IGF (Internet Governance Forum)


– Areas of Expertise: Global AI governance, multi-stakeholder participation, public trust in AI deployment


– Sources: [S18], [S19], [S20]


Additional speakers:


– None identified beyond the listed speakers.


Full session report
Comprehensive analysis and detailed insights

Lidia opened the panel by asking Minister Rafał Rosiński what lessons Poland had learned while embedding artificial intelligence into its national systems and how these lessons relate to digital governance, sustainability and resilience [1-4]. Rosiński answered that protecting critical infrastructure – energy, water and healthcare – is the core focus of Poland’s AI strategy and that trustworthy AI is essential for keeping these services running [9-12][15-16]. He linked cyber-security and digital-skill development to trustworthy AI and highlighted Poland’s home-grown large language models, the public “Bielik” LLM and a second version co-developed with academia and the private sector, as tools that keep data and services under Polish control while enhancing competitiveness [20-24].


Turning to the international dimension, Lidia thanked the minister and asked Atsuko Okuda of the International Telecommunication Union (ITU) how global standards can ensure interoperability and resilience of AI systems across regions [28-30]. Okuda explained that the ITU has approved more than 200 AI standards, with another 200 in the pipeline, totalling roughly 500 standards and drafts [39-41]. She described the three technical building blocks for interoperability – a shared data format, standardized APIs and common communication protocols – and noted that the ITU’s portfolio also covers AI for network automation, multimedia processing, machine-to-machine data sharing, as well as harmonised terminology, vocabularies, reference architectures, lifecycle, testing and conformance [43-57].
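The three interoperability building blocks Okuda describes (a shared data format, standardized APIs and common protocols) can be made concrete with a small sketch. The code below is a hypothetical illustration only, not an actual ITU standard: the field names, the `SHARED_FIELDS` schema and both system classes are invented for this example. It shows the core idea that once two independently built systems commit to the same data format and interface, either one can consume the other’s output without knowing its internals.

```python
import json

# Hypothetical shared data format: every system serializes AI inference
# records as JSON with these agreed-upon fields. This schema is an
# assumption for illustration, not an ITU-standardized one.
SHARED_FIELDS = {"model_id", "input", "output", "timestamp"}


def to_shared_format(record: dict) -> str:
    """Serialize a record to the agreed JSON format, rejecting incomplete records."""
    missing = SHARED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record missing shared fields: {sorted(missing)}")
    return json.dumps({k: record[k] for k in sorted(SHARED_FIELDS)})


class SystemA:
    """A producer built in one country: emits records in the shared format."""

    def export(self) -> str:
        return to_shared_format({
            "model_id": "bielik-demo",
            "input": "query",
            "output": "answer",
            "timestamp": "2026-02-20T10:00:00Z",
        })


class SystemB:
    """A consumer built elsewhere: depends only on the shared format,
    never on SystemA's internals."""

    def ingest(self, payload: str) -> dict:
        record = json.loads(payload)
        if set(record) != SHARED_FIELDS:
            raise ValueError("payload does not follow the shared format")
        return record
```

Because `SystemB` validates only the shared schema, either side can be replaced without renegotiating the integration, which is the cost-lowering effect the panel attributes to standardization.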


Lidia then asked Chengetai Masango how multi-stakeholder cooperation translates into real public trust in AI governance [60-62][63-70]. Masango argued that inclusivity breeds legitimacy: when government, civil society, the technical community and industry all participate, policies gain greater buy-in, transparency and accountability. He cited the Internet Governance Forum as a model of multi-stakeholder dialogue that now also addresses AI, misinformation and disinformation, emphasizing local feedback loops and accountability mechanisms as anchors of trust [63-70].


Next, Lidia invited Odes to discuss how community-driven digital ecosystems can contribute to local trust [73-74][75-89]. Odes stressed that linguistic diversity must be respected so that AI solutions are understandable to the whole population; otherwise trust erodes. He added that community participation throughout the innovation cycle ensures AI reflects local realities and that continuous feedback loops keep services relevant and adopted over time [82-89].


Lidia’s question on the economic dimension was directed to J.J. Singh of the Polish Chamber of Commerce [92-95][96-108]. Singh explained that the EU AI Act, despite being stringent, provides a clear “playbook” that helps Indian firms prepare for European deployment, and that sandbox programmes in France have already enabled ten Indian AI companies to accelerate under EU oversight. He argued that regulation, when paired with practical tools, is necessary to prevent misuse of AI for policing or profit-driven exploitation [99-108]. Lidia noted that trust underpins economic confidence and facilitates cross-border AI collaboration [92-95].


Addressing the challenge of scaling AI across regions while managing regulatory divergence, Lidia turned to Mariusz Kura [113-119][120-129][130-138]. Kura described a distributed development model in which global offices allow a solution to be built in India one day and tested in Europe the next, enabling rapid fixes. He highlighted the difficulty of keeping up with fast-changing compliance requirements and presented Bilenium’s AI compliance suite – a complex tool that guides organisations through government regulations, cost-effectiveness and licensing choices, thereby helping them navigate divergent standards [115-138].


Trust pillar – Across the discussion, speakers converged on what constitutes trustworthy AI. Rosiński reiterated that trustworthy AI for critical infrastructure requires national-level large language models and robust cyber-security [9-12][15-16][20-24]. Masango emphasized that inclusive, multi-stakeholder processes generate legitimacy and transparency [63-70]. Odes added that community-driven ecosystems, especially those that respect linguistic diversity, are essential for local acceptance [82-89]. Pramod distilled trust into three questions: who controls the data and compute (data sovereignty and jurisdiction), can the system’s decisions be explained across all layers, and is the service resilient enough to stay up when needed [161-176]. Edyta Gorzon highlighted the human factor, arguing that clear, simple communication that frames AI as a quality-enhancing tool – rather than a productivity-only promise – mitigates fear of replacement and cognitive overload [181-199]. Finally, J.J. Singh linked regulation to trust, noting that a clear regulatory “playbook” builds confidence for cross-border AI investments [99-108].


In the second round of reflections, Lidia asked Minister Rosiński about the most complex operational challenge governments face when deploying AI in public services [198-206][201-206]. He identified the need to train national data, manage generative AI responsibly and combat deep-fakes as central to protecting citizens and ensuring wise AI use [202-206].


Lidia then probed Atsuko Okuda on the biggest implementation gap today [207-214][211-215][210-218][219-222]. Okuda pointed to an awareness and capacity gap: many participants are unaware of existing standards, and those who know them often lack the ability to articulate problems and translate them into operational projects [211-218][219-222]. The awareness and capacity gap identified by ITU complements the data-silo and standards-uncertainty challenges highlighted later by Pramod and Mariusz [211-218][229-252].


Pramod and Mariusz Kura discussed what most often slows down AI projects. Pramod highlighted fragmented, siloed data, missing governance and cross-functional misalignment as primary blockers, noting that 80% of pilots in India fail to reach production because the data are not ready for scale, and that legal constraints and a lack of trust further delay adoption [229-244][242-244]. Mariusz echoed this, adding that medium-sized enterprises hesitate to adopt foreign AI solutions without recognised standards, reinforcing the need for trusted, widely accepted standards to reduce business-side uncertainty [247-252].


Addressing the human barrier, Lidia asked Edyta Gorzon what the most common obstacle is [253-272]. Gorzon replied that users worry about being replaced and feel overwhelmed by rapid AI change; organisations must therefore communicate carefully, focusing on quality improvements and providing reassurance rather than promising higher productivity [253-272].


Lidia sought a practical step to strengthen public trust, turning again to Chengetai Masango [276-290][277-287][288-290]. He reiterated that inclusive participation before deployment is the most important action, complemented by independent oversight bodies that bring together civil society, technical experts and regulators to review AI systems proactively [277-290].


Finally, Lidia asked Odes how AI can remain inclusive in real-world implementation [291-304][294-304][295-304]. Odes identified three key factors: ensuring the target community is accounted for by contextualising data sets (especially for the Global South), fostering local value creation so small nations can participate in AI development, and respecting linguistic diversity so that the majority of users – not just the first 20 % of the market – can benefit [295-304][298-304][300-304].


For the last question, Lidia invited J.J. Singh to summarise what creates long-term confidence in cross-border AI investments [306-311][308-311]. Singh answered succinctly that confidence stems from the involvement of senior decision-makers who understand the purpose of the investment and can align resources accordingly [308-311].


The moderator thanked all participants and closed the discussion, signalling the end of the panel [312-313].


Overall, the panel converged on four core themes: (1) trust is indispensable for AI in critical infrastructure and must be built on control, explainability and resilience; (2) global standards – shared data formats, standardized APIs, communication protocols and harmonised terminology – lower costs and underpin interoperability; (3) inclusive, multi-stakeholder governance and community-driven ecosystems generate legitimacy, transparency and local relevance; and (4) robust data governance, capacity-building and clear regulatory guidance are essential to overcome the main implementation bottlenecks. Speakers highlighted divergent views on the primary barrier – data silos versus regulatory awareness versus business-side hesitation – and on whether regulation is chiefly an enabler or a hurdle, suggesting that coordinated policy actions addressing standards awareness, data sovereignty and both community- and market-oriented trust mechanisms will be needed to realise trustworthy, inclusive AI at national and cross-border scales.


Session transcript
Complete transcript of the session
Lidia

I direct my first question to Minister Rosiński. Minister, Poland has been implementing and shaping digital governance and also investing in the sustainability and resilience of national systems. What are the lessons learned, and which lessons are the most relevant when we talk about implementation of AI in national systems? Maybe the other one. Yeah.

Rafał Rosiński

Thank you very much. Thank you. Critical infrastructure, like the energy sector, water supply, health care, that is the main point of our day. I think it’s the crucial point in every country. We cannot imagine how we can run business if we have no energy, no water, and our data is not protected enough. And we support also local government. We create local… through cyber security. And that is connected with digital skills, especially hygiene in this area. And cyber security is linked with AI, with trustworthy AI. That is also the important thing: if we use AI, especially national LLMs, we can use it for the security of our business.

And how can we train the national data? That’s why in Poland we’ve built also Polish LLMs. The first one is Bielik, which is a public LLM, and the second is a Bielik developed in cooperation with academia and with the private sector, and we support it also. That can also allow Polish business to be competitive. If we see this whole ecosystem, and we can also exchange our ideas and share our knowledge with other countries, that is the way, the proper way

to be safe and to use trustworthy AI.

Lidia

Thank you very much, Minister, for sharing beautiful examples of language models from Poland and their role in the Polish ecosystem, regarding both the public sector and the private sector, and for framing AI as a matter of public responsibility and resilience. And now let’s move to the international level and have a look at the global dimension. I would like to ask a question to Atsuko Okuda. How can global standards ensure interoperability and resilience of AI systems across regions?

Atsuko Okuda

Thank you very much. First of all, good afternoon to all of you. And I would like to thank the organizer for inviting ITU, the International Telecommunication Union. As some of you may know, ITU is the oldest UN agency, specialized in digital technology. And we have standardization work, including on the topic of AI. Now, what do AI standards do for all of us? Number one, they enhance interoperability, which means that a system or solution developed in India can talk to a system, as His Excellency mentioned, in Poland, and vice versa, and that will lower the investment cost and increase the efficiency. So what are those standards that could be useful, because of the interoperability, especially within the country as well as within the region or globally?

So one concrete standard… Oh, by the way, just to give you the magnitude, ITU has over 200 already approved AI standards, and 200 more are in the pipeline. So in total, we have about 500 standards in place as well as in the pipeline. So you can see there are many different standards which are available for everyone. So what are those standards? Number one, for the interoperability, we believe that data, the interface, and protocol are critical. For example, we have a shared data format that we can all use. Otherwise, how can I share my data with you with a different data format? Two, standardized APIs so that system-to-system communication will be smooth. And three, of course, communication protocol.

Now, based on these standards, we have more, how can I say, comprehensive standards. For example, AI for network automation, multimedia AI processing standards, as well as machine-to-machine data sharing frameworks, for example. And second, we also have harmonized terminology, vocabulary, and reference architectures. Because when I talk to, it’s not only you, but with anyone, about some aspect of AI, how do we know that we understand the same thing? So this taxonomy, vocabulary, and the reference architecture is critical for interoperability and for us to be able to develop and exchange data or develop the algorithm together. So we have an AI model lifecycle definition, so I know what you are referring to, and you know what I’m referring to.

Three, we have conformance, and performance and testing are related, so that we can test and validate, and we have also conformance specifics that we use as a standard to validate that what you are sharing is what I can validate. So I hope the standards are useful for enhancing the interoperability as well as the collaboration within the country as well as across the regions. Thank you.

Lidia

Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Chengetai, how does multi-stakeholder cooperation translate into real public trust in AI governance?

Chengetai Masango

Thank you very much, and thank you very much for the invitation. I’d like also to thank the organisers, Bilenium and Poland of course, for inviting me. Now, for your question: for any process, I think, inclusivity breeds legitimacy and thereby trust. So for whatever policy it is, you must have all the stakeholders who are affected by it, government, civil society, the technical community and the private sector, all talking to each other and giving their points of view from their perspectives. I think then you can result in policies that have greater buy-in. So once people are involved in the process, they’re more likely to adopt that process. And secondly, the transparency of the process also matters. People need to know how these decisions came about and also what was decided, and this can be done with open consultations, public comment periods and accessible documentation. That builds confidence.

This is basically the same model that has built the internet into what it is now. You have the public comment period, etc., and then these are adopted. The IGF as well shows that this works. The Internet Governance Forum is a multi-stakeholder dialogue, and within our framework we discuss AI governance as well, and a lot of other things: misinformation, disinformation, etc. And this approach can anchor AI governance in legitimacy. Trust as well is built locally, so these discussions should not just be happening at a global level and then trickle down. Local communities should be able to contribute in some manner, and this process should be a cycle. So the feedback loop should go down but also up.

So there’s a resonance going on there. And then I think, lastly, accountability mechanisms are also very, very important. A multi-stakeholder cooperation without clear accountability methods, people will not trust it, because they need to know, if they have an issue, where they can go and express that concern, and that it will be dealt with in some manner or function. Thank you.

Lidia

Thank you very much. I couldn’t agree more. Trust is also built locally, and that’s why I would like to direct my next question to Odes. How can community-driven digital ecosystems contribute to building trust in AI locally?

Odes

Thank you. Good afternoon, everyone. I say that modestly, and thank you for your attention and for the invitation to join this panel. To give context to community participation, both at the innovation level and at the policy level, I would like to start where Chengetai just finished, which is that community is a big stakeholder and a big participant in the multi-stakeholder framework. If you think about deploying AI solutions, especially for public services, then you realize that inclusivity is what builds trust. The ability to deploy AI and have it be consumed by every citizen is at the core of the trust between the users and the providers of the services. So it is about taking that community into account, making sure that it’s included.

I’ll give an example. If you think about the linguistic diversity that is there in many of the communities, in many of the countries of this world, you realize that if you build such a product, or an AI solution, and it’s in a language that only 20% or 50% of the population understands, then the trust is broken between the provider, which is the public sector, and that part of the population, which is the citizens. The second part is that in the innovation cycle as well, we’ve seen again and again AI being deployed that doesn’t reflect the realities of certain communities, and you can think about that linguistically, you can think about it contextually, you can think about the different forms and shapes it takes in different domains.

So the participation of the community, in ensuring that the innovation and the policy level align with the needs and the realities of those particular communities, is very important. To finish off, I think that communities and citizens are also a big part of how AI systems are improved, because once you deploy such a system and you don’t have a feedback loop, then you realize that those particular technologies only work for some time and adoption goes down after some time. So I think those three things are very key in building trust: first, inclusivity being part of it; second, the participation in the innovations as well; and lastly, the feedback mechanism for how those services are being consumed, are being used, and what can be improved.

Lidia

Thank you very much. Trust can also influence economic confidence and cross-border collaboration. That’s why I would like to direct my next question to JJ. Does regulatory alignment directly influence international trade? What is your perspective and observation? If you could share experience from the Polish Chamber of Commerce.

J.J. Singh

Well, I will just share the experience from the perspective of Poland, the EU and India. Normally, all are saying that a lot of regulations always, you know, dishearten the business and the investments. But I think in this particular case, when it comes to AI, we need a guidebook, because without that, everything can go haywire. So if you look at the regulation, the EU AI Act, which has been implemented in 2026, I think in a way it makes a kind of issue for the investors. But on the other hand, if you have clear guidelines, it’s always very good, in view of the India-EU FTA, that the Indian companies will be ready

for deployment of the AI algorithms and other things within Europe. Now, let’s take the example also of how the EU, even as businesses are saying that the regulations are very tough, the compliance is very tough, is doing its part to make it easier for the businesses. I can use the example here from 2025, where in France there are 10 AI companies from India which are actually part of the accelerator program, and the EU is also ready to give a sandbox solution for all the regulations. So all in all, my perspective is that you need a kind of control, especially on generative AI, and you need some kind of control on AI. So the rulebook which the EU has given, it will be, you know, I would say it’s a playbook for all the AI companies involved, and I think that India should be involved.

India should take advantage of that, because if they are already prepared to adhere to the rules, then I think the entry will be easier for the companies. So I definitely support the regulation, because in this particular matter of AI, we need regulation. Because if you see the other countries, I will not take the names: one is using it for policing its own people, and the second is using it only for making money. So yes, it’s good, but with sense.

Lidia

Thank you very much. In our discussion, we have also three representatives of the private sector who know practical aspects very well because they have to deal with all these challenges on a daily basis. So I would like to start with Mariusz Kura. Mariusz, how do you scale AI solutions across regions while managing regulatory divergence?

Mariusz Kura

Thank you, Lidia, and good afternoon, everyone. Distributed software development for international IT companies is not new. We started practicing this at Bilenium 10 years back, when together we were opening the office, the delivery center, in Pune, Maharashtra, here in India. And the simple practice to scale up and be fast is to have exactly these global offices: our development team can build some solution, let’s say, in one day and deploy it, and the next day the business in Europe can verify if it’s working as expected. If not, then our development team in India can fix it even on the same day. So that’s one way how we’ve been scaling up so far.

But the challenge nowadays is exactly how to scale up and follow all the regulations, and how to work for the different regions, for the different countries, where, like for the public sector, we have a lot of rules. And hopefully from ITU we have as well two hundred more standards coming, so that’s the way we can standardize it. So, AI engineers and AI solution providers in India need to learn and need to be compliant with all those standards. And it’s very difficult nowadays because it’s so fast; it’s changing almost every week. And how to exactly follow that? At Bilenium, recently we have developed as well one dedicated solution, which is the AI compliance suite.

And this tool is quite complex. It’s not only covering the governance and compliance area, but it’s as well helping the organizations to use the right AI tools. Nowadays enterprises are using many tools; in a while, Edyta will be talking about Copilot, but there are plenty of different tools used in the enterprises. And our solution is helping the organizations navigate the users to the right solution. And what does it mean, the right solution? For example, it could be as well from the cost-effectiveness perspective: for example, should we use and utilize the tokens from this provider, or maybe another provider has a better license practice and policy offering? So that’s, I believe, what can help, yeah, that kind of solution for the IT solution providers.

Thank you.

Lidia

Thank you very much for a beautiful example of how AI can help manage AI. And now let's have a look at infrastructure. I have a question to Pramod: from an infrastructure standpoint, what does trusted AI require on the ground, in terms of data sovereignty, secure compute and a resilient digital backbone?

Pramod

Good afternoon, everyone. Pleasure to be here. So when AI starts getting adopted into public services and critical national security deployments, trust moves not just from the models, but from the models and data to the underlying foundation. When I say foundation: where is the model running? What compute is it running on? Do you control the data? What jurisdiction is it in? And then there are the security components around it. So all in all, there are three questions that one needs to ask before you say that you fully trust AI.

The first question is on control. The second one is: can you tell me what happened? Will you be able to explain what the AI system did across each of these layers? And the third one is: is it up? The control part is like we just discussed. Data sovereignty doesn't just mean that the data stays local. What we've seen customers asking is whether any other jurisdictional law can override and demand access; it means full visibility of the data and the infrastructure, auditability, and so on and so forth. So, do you have the keys? That is a key question one needs to ask.

The second one is on explainability, on visibility: not just model monitoring, whether I am getting accurate answers, but overall on data. Who accessed it, what is the governance around it, what happened in the network? Across the whole foundation, if you don't have full visibility, you will not be able to explain why a system took a decision. Because now we are talking about critical infrastructure, the impact of the decisions it takes could be disastrous. The third one is resilience. By resilience we mean: can AI stay up? Let's say in healthcare, a hospital in a remote, smaller-tier city deploys an AI system for diagnosis.

A patient walking in at 2 a.m. on a Sunday morning needs the system to be up. It needs to be resilient like any other financial system, but here the implications are huge. So AI is moving from being just a software service to AI as a foundation, where all of these elements need to come together before anyone can say, I fully trust it.
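Pramod's three trust questions (control, explainability, resilience) can be read as a checklist. A minimal sketch follows; the field names and the all-pillars-must-hold rule are illustrative assumptions, not any formal standard.

```python
# A minimal checklist sketch of the three trust questions: control,
# explainability, resilience. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrustAssessment:
    # Control: do you hold the keys to data and infrastructure?
    data_sovereign: bool       # data stays within your jurisdiction
    keys_controlled: bool      # encryption keys under your control
    # Explainability: can you tell what happened across each layer?
    access_logged: bool        # who accessed the data, and when
    decisions_traceable: bool  # model decisions can be reconstructed
    # Resilience: is the system up when a patient walks in at 2 a.m.?
    failover_tested: bool

    def fully_trusted(self) -> bool:
        """All three pillars must hold before saying 'I fully trust AI'."""
        control = self.data_sovereign and self.keys_controlled
        explainable = self.access_logged and self.decisions_traceable
        return control and explainable and self.failover_tested

assessment = TrustAssessment(True, True, True, False, True)
print(assessment.fully_trusted())  # False: decisions are not traceable
```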

Lidia

Thank you very much. It is common knowledge that technologies are widely diffused and used only when they are trusted, and sometimes the human factor is an important barrier to AI adoption. That's why I would like to ask Edyta, who works with users a lot: what determines whether AI is truly adopted by teams?

Edyta Gorzon

Excellent question. Thank you so much for that. Good afternoon, everybody, and thank you for all the comments. So we've been talking about infrastructure, about security and cybersecurity, about the legal aspects of AI. However, we should remember that deployment is technology, but it is the users whose way of working with AI we want to change. From the practical perspective, because I'm responsible for driving adoption: in the past the topic was modern work, now we have AI. And we should remember that the majority of AI users are end users. They are not people who take part in conferences like this one. They are not that fluent with technology, but at the same time we expect them to be fluent and to change the way they act.

The way they work. So from my experience, it's extremely important to communicate in the right way, in simple words and with simple examples, how AI can be a powerful tool. Not through the features, because we all know that features are not driving anything: not business, not processes, not the business scenarios we have in our minds. And everybody can use AI in a different way. This is the biggest challenge from the change management perspective as well, because we can have the best technology and the best model, but if the users don't know how to use it, if they don't know where it leads, it's hard to expect that we're going to succeed at scale.

Lidia

Thank you very much, and thank you to all of you for sharing your views in the first round of questions. In the second round we will turn from strategy to implementation, and I will ask all of you for a very short reflection at this level. Minister, what is the most complex operational challenge governments face when deploying AI in public services? What is your view?

Rafał Rosiński

Shortly, of course. What JJ mentioned, I talked about this as well: it is very important from the Polish perspective, and we can see that perspective in other countries too, beyond the EU. What is important is how we can train on the data, how we can use the data, and what the future of generative AI will be. We have to use it wisely, of course. The final goal, and how it will be used, is very important, especially for the public sector and especially for our citizens. If we look at it in that way, it will be good for everyone. And of course, in the implementation of AI in the public sector, and when private companies also use this data, it is important to see how we can fight against deepfakes and false information. Thank you.

Lidia

Thank you very much. Atsuko, where do you see the big implementation gap today? Is it standards, or the lack of standards, skills, governance? What is it?

Atsuko Okuda

Thank you for this very important question. I believe there is perhaps an awareness challenge as well as a capacity challenge, because I think this whole discussion on standards came as a surprise to many of the participants. Actually, this is not the first session in which I'm talking about standards; it is the third during the summit. But unless you are a standardization person, you don't normally think: okay, there are building blocks available that I can start building something on. So we are trying to promote the importance of standardization and of using the standards, so that you don't have to start from scratch. I also believe we need a lot of different capacities, starting with the capacity to articulate the issue.

What is it that you, or we, want to address? Sometimes AI may or may not be the answer; some other technologies may be able to help you better. So I believe this articulation is a huge opportunity, and a challenge as well. After you articulate, how do you plan? How do you translate that articulated issue into an operational project and initiative? I believe that is another layer of the capacity challenge. So I can see that there are many countries, companies and agencies who want to take advantage of AI, and I hope this discussion is helpful to concretize those steps moving forward. Thank you.

Lidia

Thank you very much. My next question will be directed to our technical experts, Pramod and Mariusz, and the question is: in real AI projects, what most often slows down implementation?

Pramod

First, definitely not technology, because technology is almost always ahead; that has been very true over the last couple of years with the advancements that have happened. So despite advanced technology being available, despite GPUs and the platforms being available, we still don't see too many monetizable AI use cases, and that's a big problem. Everybody is trying to figure out: where is my ROI, what is that use case? And that boils down to a few key aspects. The biggest friction is on data. We've seen, especially in India, many, many pilots, and almost 80% of those pilots don't make it to production. And the key reason is the data.

Data is siloed, data is not ready for AI scale, and there is no governance built around data. That's why in POCs you use a good set of data and you show value, but when it comes to production, most of the time there isn't enough data to get the value out of it. The second aspect: in an organization, AI cuts across many functions. The technology team is saying, we are ready with this, but then there are legal aspects, there is an IT person saying, I cannot allow you to do this, and so forth. That alignment is not thought through, and that also slows down the adoption.

So I think these are the primary factors, and then the trust factor comes in. The third part is: how much do you really trust AI? How much risk comfort do you have? Is human oversight required for every decision it makes? Organizations need to choose that balance, or choose the best use case where it is balanced: without requiring too much human intervention, can I deploy this? Those are the key factors that we see, especially in India, slowing down adoption.
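Pramod's point that most pilots fail on data readiness can be sketched as a simple pre-production gate. This is a hypothetical illustration: the field names and thresholds below are assumptions, not a published methodology.

```python
# Hypothetical pre-production data gate reflecting the panel's point that
# ~80% of pilots stall on data. Thresholds and fields are illustrative.

def ready_for_production(dataset: dict) -> list[str]:
    """Return the list of blockers; an empty list means the data gate passes."""
    blockers = []
    if dataset.get("completeness", 0.0) < 0.95:
        blockers.append("too many missing values for production scale")
    if dataset.get("sources_integrated", 0) < dataset.get("sources_total", 1):
        blockers.append("data still siloed across unintegrated sources")
    if not dataset.get("governance_owner"):
        blockers.append("no governance owner assigned")
    return blockers

pilot = {"completeness": 0.97, "sources_integrated": 2, "sources_total": 5,
         "governance_owner": None}
for blocker in ready_for_production(pilot):
    print(blocker)  # prints the two silo/governance blockers for this pilot
```

A POC can pass such a gate on a curated sample and still fail it at production scale, which is exactly the gap the panel describes.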

Lidia

It seems that whatever we are discussing, infrastructure or other challenges, the human factor is always at the end and behind everything. Mariusz, is your experience similar, or do you have different observations?

Mariusz Kura

I totally agree with Pramod. It's not us, the technology, that is slowing it down. Maybe sometimes, but many times it is on the business side, and especially for medium-sized enterprises. If they don't know whether they can work with some solutions, or whether they can take solutions, for example, from India, they will step back and go to more trusted local providers. So I believe the standards we are talking about will help us a lot. That's my experience from practice.

Lidia

Okay. Edyta, what is the most common human barrier, from your view?

Edyta Gorzon

Thank you for this question. So first of all, we talk again about humans: the most important factor and, at the same time, the biggest challenge and the biggest opportunity. From my perspective, while talking with users, because today I am the users' voice, I can very often hear people reflecting on what's going to be next: am I going to be replaced by AI? What's in it for me? As organizations, no matter whether public or private sector, we also need to find the message for communicating all of those changes that are coming. Another topic I'm facing while talking with users: they basically don't know what to expect next, because, as we have noticed, AI is another revolution, and the revolutions are coming one after another very quickly.

And when users hear, okay, I should be more productive, they say: I don't want to be more productive anymore, right? I don't want to do faster meetings, I don't want to do faster notes. It's nice, but at the same time the number of different impulses my brain is getting from outside is simply too high. Our brains are not capable of managing that in the right way; we're closer to depression, and we know in which direction that goes. So how we communicate AI as part of the toolset is extremely important. Be careful what you are telling your users. Don't tell them that they will be more productive; maybe tell them the quality of their work is going to be better.

Maybe they won't have to repeat the same tasks every day. But we must be very, very careful what kind of wording we are using in regard to AI adoption. Thank you.

Lidia

Thank you, thank you very much. My next question will be to Chengetai, because he looks at these challenges from the global perspective and has access to data from all regions. What, in your view, would be the most important practical step to strengthen public trust in AI deployment?

Chengetai Masango

Thank you very much for that question. And by the way, I totally agree with you. I think the first one is quite obvious: inclusive participation in AI decision-making. So ensuring that the affected communities and the affected individuals have input into how the systems operate before they are deployed, not after the fact. We shouldn't be fixing things after the fact; we should gather input before deployment.

The second one is independent oversight: establishing review bodies that include civil society and technical experts, so not just the regulators and industry, but a 360-degree approach to it. Thank you.

Lidia

Thank you very much. We are approaching the end of our session, so I would like to ask Odes for a quick comment. What ensures that AI remains inclusive in real-world implementation?

Odes

There are a few key factors to look at when you talk about inclusivity. I think the first is to look at who it is meant for and to ensure that they are accounted for. And this can happen in different forms. For example, when you look at the data sets that power AI models, most of the time they tend to come from, let's say, the global north, meaning that they won't be very contextually aware when they're deployed in the global south. So there's a need to contextualize the AI systems being developed, to ensure that they really respond to the users they are meant for. I think the second part of ensuring inclusivity is also ensuring local value creation.

We've seen too often the importation of AI systems, but not the understanding of how especially small nations can participate in building and deploying AI for their own interests. So I think those two things are very, very critical. And the other part is, I guess, the linguistic perspective that I mentioned before: looking at the linguistic diversity that exists around the globe and ensuring that people are able to consume the particular technology being developed. When we think about AI and how it's deployed, we tend to look at the first 20% of the market, but the remaining 80% also needs to be accounted for.

Lidia

Thank you very much. Last question, and I will ask JJ for a very brief, one-sentence answer. What creates long-term confidence in cross-border AI investments, from your perspective?

J.J. Singh

Well, I think I can simply say it's a mix of everything. The involvement of the right people, I would rather say the people at the top who are taking the serious investment decisions, because that's very important. And the people who are involved should know what they want it for, because AI deployment is a big thing, but you should know what you want to solve with it. So that's very important.

Lidia

Thank you very much. It's time to wrap up our discussion.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Protecting critical infrastructure – energy, water and health‑care – is the core focus of Poland’s AI strategy and trustworthy AI is essential for keeping these services running.”

The knowledge base states that Minister Rafał Rosiński emphasized the critical importance of protecting national infrastructure through trustworthy AI systems, confirming this focus.

Confirmed (high confidence)

“Poland’s home‑grown large‑language models, the public “Bielik” LLM and a second version co‑developed with academia and the private sector, keep data and services under Polish control while enhancing competitiveness.”

Source S2 describes Poland’s development of national language models, including the Bielik LLM, through cooperation with academia and the private sector, supporting the claim.

Confirmed (high confidence)

“Chengetai Masango, head of the Internet Governance Forum, argues that inclusivity and multi‑stakeholder participation (government, civil society, technical community, industry) builds legitimacy and trust in AI governance.”

Masango’s role at the IGF and his emphasis on multi‑stakeholder dialogue are documented in sources S30 and S92, confirming the statement.

Additional Context (medium confidence)

“The ITU has approved more than 200 AI standards, with another 200 in the pipeline, totalling roughly 500 standards and drafts, and defines three technical building blocks for interoperability: shared data format, standardized APIs, and common communication protocols.”

While the knowledge base does not give the exact numbers, it outlines ITU’s broad standardisation mandate, its 10 study groups, and its role in fostering interoperable ICT standards, providing contextual background for the claim [S27] and [S86].

External Sources (95)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S2
AI as critical infrastructure for continuity in public services — – Atsuko Okuda- J.J. Singh- Mariusz Kura- Lidia
S3
S4
Open Forum #40 Governing the Future Internet: The 2025 Web 4.0 Conference — Rafał Kownacki: worlds and 4.0? Thank you once again for the question. So I would like to thank Professor Obi just me…
S5
DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023 — Atsuko Okuda, ITU Asia-Pacific, intergovernmental organisation (TBC)
S6
All hands on deck to connect the next billions | IGF 2023 WS #198 — Atsuko Okuda, Intergovernmental Organization, Intergovernmental Organization
S7
https://dig.watch/event/india-ai-impact-summit-2026/ai-as-critical-infrastructure-for-continuity-in-public-services — I think we’ve seen too often imputation of AI systems, but not… the understanding of how especially small nations can …
S8
Keynote-Demis Hassabis — -Demis Hassabis: Role – Co-founder and CEO of Google DeepMind; Titles – Sir, Nobel laureate; Areas of expertise – Artifi…
S9
Open Forum #47 Demystifying WSis+20 — – **UNKNOWN** – Role/title not specified in transcript
S10
Day 0 Event #1 IGF LAC Space — – LIDIA ANCHAMORO: Part of Colnodo, Colombian organization; Participates in IGF Secretariat FEDERICA TORTORELLA: Feder…
S11
AI as critical infrastructure for continuity in public services — – Atsuko Okuda- J.J. Singh- Mariusz Kura- Lidia – Chengetai Masango- Odes- Lidia – Pramod- Edyta Gorzon- Lidia
S12
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Lidia Stepinska Ustasiak: Excellencies, distinguished delegates, ladies and gentlemen, good afternoon. My name is Lidia …
S15
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — -Moderator: Session moderator (no specific expertise, role, or title mentioned beyond moderating the discussion) …inf…
S16
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Moderator:Thank you. Thank you so much. I first look over to Pramod. Do you want to react? Yeah. So, yeah, I think much …
S17
AI as critical infrastructure for continuity in public services — – Atsuko Okuda- Pramod – Edyta Gorzon- Pramod
S18
Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth — – **Chengetai Masango** – Representative from the IGF Secretariat Chengetai Masango: The IGF Secretariat, along with th…
S19
Workshop 4: NRI-Assembly: How can the national and regional IGFs contribute to the implementation of the UN Global Digital Compact? — – **Chengetai Masango** – Head of office for the UN Secretariat for the IGF Chengetai Masango from the IGF Secretariat …
S20
Open Microphone Taking Stock — – Chengetai Masango: Head of the IGF Secretariat Chengetai Masango mentioned the post-IGF “taking stock” process, encou…
S21
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — There is a need to address these issues, along with the growing challenge of deepfakes. The evolving nature of AI techno…
S22
The role of AI in fighting deepfakes and misinformation — Deepfakes and misinformation have emerged as significant threats in the digital age. Deepfakes, created using AI techniq…
S23
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targ…
S24
AI Transformation in Practice_ Insights from India’s Consulting Leaders — This comment is insightful because it identifies the fundamental paradox of technological adoption – humans create techn…
S25
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Atsuko Okuda:Ah, yeah. Atsuko, perhaps you can answer. Sure. Thank you. Thank you for this very important question. I ha…
S26
GOVERNING AI FOR HUMANITY — – a. Outlining data-related definitions and principles for global governance of AI training data, including as distilled…
S27
International Telecommunication Union — Standards create efficiencies enjoyed by all market players, efficiencies, and economies of scale that ultimately result…
S28
ITU — Standards create efficiencies enjoyed by all market players, efficiencies, and economies of scale that ultimately result…
S29
WAIGF Opening Ceremony & Keynote — – Chengetai Masango: Head of the United Nations IGF Secretariat (mentioned but did not speak) Anja Gengo: Excellent, we…
S30
IGF 2024 Newcomers Session — – Chengetai Masango: Head of the Secretariat of the Internet Governance Forum Chengetai Masango: Is it possible to hav…
S31
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S32
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S33
The role of standards in shaping an AI-driven future — He positioned this approach as leveraging ITU’s 160 years of experience and its global community’s commitment to collabo…
S34
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — These key comments transformed what could have been a dry technical discussion into a compelling narrative about the str…
S35
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S36
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:So the technical development and deployment of AI is… So here I’m referring to ethical consideratio…
S37
Session-Unpacking the EU AI Act — Although not required to align with EU standards, strategic alignment with the EU approach could facilitate internationa…
S38
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innov…
S39
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S41
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S42
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S43
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S44
Building a Digital Society, from Vision to Implementation — Stacey Hines, joining from Vancouver at 4 AM Kingston time, cited research from Web Summit where AI expert Gary Marcus p…
S45
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S46
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Brandon Mello introduced a sobering statistic: 95% of AI pilots never reach production deployment. The primary barriers …
S47
Leveraging AI4All_ Pathways to Inclusion — -Multi-layered Access Challenges in AI Implementation: The discussion emphasized that good technology alone doesn’t auto…
S48
Legitimacy of multistakeholderism in IG spaces | IGF 2023 — In the context of internet governance, there is a growing recognition of the importance of inclusive participation and i…
S49
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis emphasises the significance of multi-stakeholder engagement in policy processes, specifically in the contex…
S50
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S51
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs argues that the healthcare industry must establish self-regulation standards for AI implementation since regu…
S52
Democratizing AI Building Trustworthy Systems for Everyone — Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those fiv…
S53
Toward Collective Action_ Roundtable on Safe & Trusted AI — Gosh, that’s a difficult question. I think part of it has to be about transparency. How is a decision being made? People…
S54
Driving Indias AI Future Growth Innovation and Impact — Trust infrastructure is as critical as technical infrastructure, requiring institutional safeguards, transparency, and e…
S55
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Aurelie Jacquet :Thank you. So, following on Wansi’s point, I think what’s important to know is it’s actually good to se…
S56
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — International collaboration, trust-building efforts, and effective regulations are key to ensuring the secure and equita…
S57
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Currency and other local conditions affect who can and how they use technological platforms. Finally, the importance of…
S58
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Governments can play a significant role by implementing policies that recognize and protect local languages, allocating …
S59
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S60
I hereby declare that this dissertation is my own original work. — With such a premium placed on trustworthiness, how do successful information sharing mechanisms build trust among member…
S61
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — Current regulatory trends that pressure companies to make content decisions or incentivize closed ecosystems are counter…
S62
Leaders TalkX: Building inclusive and knowledge-driven digital societies — Human rights | Sociocultural WACC advocates for media ecosystems where community-led voices are not just supported but …
S63
AI as critical infrastructure for continuity in public services — Minister Rafał Rosiński from Poland emphasized the critical importance of protecting national infrastructure through tru…
S64
Building Population-Scale Digital Public Infrastructure for AI — This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovere…
S65
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S66
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty dimension focuses on control over data, models, and security measures
S67
The role of standards in shaping an AI-driven future — He positioned this approach as leveraging ITU’s 160 years of experience and its global community’s commitment to collabo…
S68
The role of standards in shaping a safe and sustainable AI-driven future — Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this …
S69
Digital standards — ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustwo…
S70
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S71
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — Speakers agreed that effective governance requires multi-stakeholder approaches involving governments, civil society, pr…
S72
Resilient and Responsible AI | IGF 2023 Town Hall #105 — The African IGF (AIGF) emphasises the importance of a multi-stakeholder approach to ensure its success. This approach in…
S73
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:So the technical development and deployment of AI is… So here I’m referring to ethical consideratio…
S74
EU Artificial Intelligence Act — (72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experi…
S75
INTRODUCTION — The AI Act mandates CE marking for high-risk AI systems; and additional certification requirements are deman…
S76
Comprehensive Report: European Approaches to AI Regulation and Governance — Despite their different approaches, both speakers demonstrated remarkable consensus on fundamental principles. They agre…
S77
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S78
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance and security concerns present another significant barrier. Shetty shared a compelling anecdote about an …
S79
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Audience J’ai une question pour M. Lacina Koné. Nous nous basons sur des expériences précédentes, des problèmes communau…
S80
Open Forum #10 Multistakeholder Governance Intl Law in Cyberspace — Joanna Kulesza: Great. Thank you. Wonderful. I think that’s the perfect summary to emphasize how the general links with …
S81
Discussion Report: Sovereign AI in Defence and National Security — Civil Defence:Protection of critical infrastructure including energy grids, water systems, hospitals, and transportation…
S82
Open Forum #3 Cyberdefense and AI in Developing Economies — José Cepeda outlined specific European approaches, mentioning the NIS2 directive, DORA regulations, and the need for sha…
S83
OPENING SESSION | IGF 2023 — Ema Arisa:Thank you, Ms. Wan. I would like to move on to the next question. So the guiding principles and code of conduc…
S84
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Ambassador Francisca Mendez:And good afternoon, everybody. Thank you so much, Excellency, Australia, Ethiopia, dear coll…
S85
Enhancing CSO participation in global digital policy processes: Roles, structures, and accountability — Civil society organisations (CSOs) provide valuable expertise and insights, crucial for crafting technically robust stan…
S86
International Standards: A Commitment to Inclusivity — Good afternoon. The session places a great emphasis on the vital role of inclusivity within standardisation, recognising…
S87
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — Isabel Ebert:Thanks very much for the invitation. Many thanks also to the organizers who bring this panel together. I th…
S88
High-level AI Standards panel — During the live demonstration, Dr. Jamoussi showcased the database’s user interface, highlighting its ability to search …
S89
Embedding Human Rights in AI Standards: From Principles to Practice — – ITU’s approved work plan with OHCHR through the Telecommunication Standardisation Advisory Group Florian Ostmann: Tha…
S90
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S91
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S92
Newcomers Orientation Session — Chengetai Masango: Yes. OK. So, as we mentioned, is best practice forums. So what are best practice forums? So each year…
S93
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S94
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S95
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Rafał Rosiński
2 arguments · 63 words per minute · 418 words · 394 seconds
Argument 1
Trustworthy AI essential for critical infrastructure
EXPLANATION
Rosiński emphasizes that reliable AI is crucial for the operation of essential services such as energy, water, and data protection, and that cyber security is closely linked to trustworthy AI.
EVIDENCE
He states that critical infrastructure is the crucial point for every country and that business cannot run without energy, water, and protected data, highlighting the importance of cyber security and trustworthy AI for national security and business continuity [9-12][15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion on AI as critical infrastructure highlights that reliable AI is vital for energy, water, data protection and national security, confirming the need for trustworthy AI [S2].
MAJOR DISCUSSION POINT
Trustworthy AI essential for critical infrastructure
AGREED WITH
Lidia, Pramod, Edyta Gorzon
Argument 2
Training models on national data, managing generative AI, and combating deep‑fakes are key challenges
EXPLANATION
Rosiński notes that Poland is building its own large language models trained on national data, and stresses the need to manage generative AI responsibly while fighting deep‑fakes and misinformation.
EVIDENCE
He explains that Poland has built Polish LLMs such as Bielik, trained on national data, to keep Polish business competitive, and later mentions the need to combat deep-fakes and false information when implementing AI in the public sector [201-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on AI-driven cyber defence and deep-fake threats underline the urgency of managing generative AI and combating misinformation and deep-fakes [S21][S22].
MAJOR DISCUSSION POINT
Training models on national data, managing generative AI, and combating deep‑fakes are key challenges
DISAGREED WITH
Pramod, Mariusz Kura, Atsuko Okuda
Lidia
3 arguments · 47 words per minute · 716 words · 903 seconds
Argument 1
AI framed as public responsibility and resilience
EXPLANATION
Lidia frames AI as a matter of public responsibility, linking its development to national resilience and the need for trustworthy deployment.
EVIDENCE
She thanks the minister for using Polish language models and for framing AI as a matter of public responsibility and resilience for both the public and private sectors [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator explicitly thanks the minister for framing AI as a public responsibility and a resilience issue for both public and private sectors [S2].
MAJOR DISCUSSION POINT
AI framed as public responsibility and resilience
Argument 2
Standards are a pillar of building trust
EXPLANATION
Lidia states that standards constitute an essential pillar for establishing trust in AI systems.
EVIDENCE
She explicitly says, “Standards are a very important pillar of building trust” [60-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standards are identified as a crucial pillar for building trust in AI systems, and international multistakeholder work on AI standards further reinforces this role [S2][S23][S27][S28].
MAJOR DISCUSSION POINT
Standards are a pillar of building trust
AGREED WITH
Atsuko Okuda, Mariusz Kura
Argument 3
Human factor is a critical barrier in AI adoption
EXPLANATION
Lidia points out that the human factor—people’s trust and acceptance—is often the decisive barrier to AI adoption.
EVIDENCE
She remarks that technology is adopted only when trusted and that the human factor is an important barrier in AI adoption [182-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-factor barriers such as fear of replacement and the need for clear communication are highlighted as major adoption obstacles [S2][S24].
MAJOR DISCUSSION POINT
Human factor is a critical barrier in AI adoption
AGREED WITH
Chengetai Masango, Odes
DISAGREED WITH
Pramod, Edyta Gorzon
Atsuko Okuda
4 arguments · 120 words per minute · 695 words · 345 seconds
Argument 1
Shared data formats, APIs, and protocols enable cross‑border AI interoperability
EXPLANATION
Okuda explains that common data formats, standardized APIs and communication protocols are the technical foundations that allow AI systems from different countries to work together.
EVIDENCE
She lists shared data format, standardized API and communication protocol as the three critical elements for interoperability [44-47] and notes that such standards lower investment costs and increase efficiency [36-37].
MAJOR DISCUSSION POINT
Shared data formats, APIs, and protocols enable cross‑border AI interoperability
AGREED WITH
Lidia, Mariusz Kura
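The three building blocks Okuda names can be made concrete with a minimal sketch. Everything in it (the `SensorReading` schema, the function names, the JSON wire format) is an illustrative assumption, not an actual ITU standard:

```python
# Minimal, hypothetical sketch of the three interoperability building
# blocks: a shared data format, a standardized API, and an agreed
# exchange protocol. The schema and names are illustrative only.
import json
from dataclasses import dataclass, asdict

# 1. Shared data format: both systems agree on one record schema,
#    including units, so no per-country conversion code is needed.
@dataclass
class SensorReading:
    station_id: str
    metric: str   # e.g. "water_flow"
    value: float
    unit: str     # e.g. "m3/s"

# 2. Standardized API: every participating system exposes the same
#    encode/decode calls over the agreed wire format (JSON here).
def encode_reading(reading: SensorReading) -> str:
    return json.dumps(asdict(reading))

def decode_reading(payload: str) -> SensorReading:
    return SensorReading(**json.loads(payload))

# 3. Communication protocol: a plain function call stands in for an
#    agreed transport (e.g. HTTPS plus a fixed endpoint contract).
def exchange(payload: str) -> SensorReading:
    return decode_reading(payload)

# A system in country A emits a reading; a system in country B consumes
# it without bespoke integration work.
wire = encode_reading(SensorReading("PL-01", "water_flow", 12.5, "m3/s"))
received = exchange(wire)
print(received.station_id, received.value)
```

Because both sides share the schema and the encode/decode contract, a system developed in one country can consume another country's output directly, which is the cost-lowering effect described here.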
Argument 2
Standards lower investment costs and boost efficiency
EXPLANATION
Okuda argues that AI standards reduce the cost of investment and improve operational efficiency by making systems interoperable.
EVIDENCE
She states that standards will lower investment cost and increase efficiency when a system developed in one country can communicate with another [36-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standards create efficiencies, lower investment costs and enable interoperability across markets, as noted by ITU and other bodies [S27][S28][S7].
MAJOR DISCUSSION POINT
Standards lower investment costs and boost efficiency
Argument 3
Lack of awareness and capacity to apply existing standards hampers implementation
EXPLANATION
Okuda points out that many participants are unaware of existing standards and lack the capacity to apply them, creating an implementation gap.
EVIDENCE
She describes an awareness challenge and a capacity challenge, noting that the existence of these standards came as a surprise to many participants and that experts outside the standardization field often do not think in terms of building blocks [211-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A significant implementation gap is reported: many participants are unaware of existing standards and lack capacity to apply them [S2].
MAJOR DISCUSSION POINT
Lack of awareness and capacity to apply existing standards hampers implementation
AGREED WITH
Pramod, Mariusz Kura
DISAGREED WITH
Pramod, Mariusz Kura, Rafał Rosiński
Argument 4
Articulating problems and translating them into projects is a major capacity issue
EXPLANATION
Okuda highlights that moving from problem articulation to concrete operational projects requires additional capacity, which many countries and organisations lack.
EVIDENCE
She mentions the need to articulate issues and translate them into operational projects, noting that this represents a further capacity challenge [221-223].
MAJOR DISCUSSION POINT
Articulating problems and translating them into projects is a major capacity issue
Chengetai Masango
3 arguments · 149 words per minute · 501 words · 200 seconds
Argument 1
Inclusivity creates legitimacy, transparency, and accountability
EXPLANATION
Masango argues that involving all stakeholders—government, civil society, technical community, and private sector—creates legitimacy, enhances transparency and ensures accountability, thereby building public trust.
EVIDENCE
He says inclusivity breeds legitimacy and trust, and that transparency through open consultations, public comment periods and accessible documentation is essential; accountability mechanisms are also highlighted as crucial [63-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusivity of all stakeholders is said to generate legitimacy, transparency and accountability, strengthening public trust in AI policies [S2][S23].
MAJOR DISCUSSION POINT
Inclusivity creates legitimacy, transparency, and accountability
AGREED WITH
Odes, Lidia
Argument 2
Multi‑stakeholder dialogue (government, civil society, tech, private) builds trust
EXPLANATION
Masango points to the Internet Governance Forum as an example of a multi‑stakeholder platform that successfully builds trust through inclusive dialogue.
EVIDENCE
He references the IGF as a multi-stakeholder dialogue that discusses AI governance, misinformation, and other issues, showing how such a model anchors AI governance in legitimacy [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
International multistakeholder cooperation for AI standards and the IGF’s multi-stakeholder model are cited as mechanisms that build trust [S23][S29][S30].
MAJOR DISCUSSION POINT
Multi‑stakeholder dialogue (government, civil society, tech, private) builds trust
Argument 3
Inclusive participation before deployment and independent oversight bodies are vital
EXPLANATION
Masango stresses that involving affected communities before AI systems are deployed and establishing independent oversight bodies are essential practical steps to strengthen public trust.
EVIDENCE
He recommends inclusive participation before deployment and the creation of independent oversight bodies that include civil society and technical experts [280-290].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive governance, public comment periods and independent oversight are recommended to ensure legitimacy and accountability before AI deployment [S2][S23].
MAJOR DISCUSSION POINT
Inclusive participation before deployment and independent oversight bodies are vital
Odes
3 arguments · 136 words per minute · 633 words · 278 seconds
Argument 1
Community participation, linguistic diversity, and feedback loops foster trust
EXPLANATION
Odes explains that involving communities, respecting linguistic diversity, and establishing feedback mechanisms are key to building trust in AI‑driven public services.
EVIDENCE
He gives the example that AI services delivered only in a language understood by a minority break trust, and stresses the need for community participation and feedback loops to keep services relevant and trusted [82-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for AI governance stress cultural and linguistic diversity, community participation and feedback mechanisms to build trust [S26][S2].
MAJOR DISCUSSION POINT
Community participation, linguistic diversity, and feedback loops foster trust
AGREED WITH
Chengetai Masango, Lidia
Argument 2
Community involvement ensures linguistic and contextual relevance of AI services
EXPLANATION
Odes highlights that AI solutions must reflect the linguistic and contextual realities of the communities they serve to maintain trust.
EVIDENCE
He notes that if AI products are built in a language understood by only a fraction of the population, trust is broken, and that community input helps align innovation and policy with local realities [78-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to respect linguistic diversity and contextual relevance in AI services is highlighted in AI governance recommendations [S26].
MAJOR DISCUSSION POINT
Community involvement ensures linguistic and contextual relevance of AI services
Argument 3
Contextualize datasets, create local value, and address linguistic diversity
EXPLANATION
Odes outlines three pillars for inclusive AI: ensuring datasets are contextualized for local needs, fostering local value creation, and accommodating linguistic diversity.
EVIDENCE
He states that most datasets come from the global north and need contextualization, that local value creation is essential, and that linguistic diversity must be considered so that the remaining 80 % of the market is served [295-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for contextualized datasets, local value creation and linguistic diversity are part of emerging AI data-governance principles [S26].
MAJOR DISCUSSION POINT
Contextualize datasets, create local value, and address linguistic diversity
Mariusz Kura
3 arguments · 140 words per minute · 475 words · 203 seconds
Argument 1
Distributed development with global offices enables rapid regional scaling
EXPLANATION
Kura describes how having development teams in multiple global offices allows a solution to be built in one location and deployed and tested in another within a day, facilitating fast scaling across regions.
EVIDENCE
He explains that a development team can build a solution in one day, deploy it, and the European business can verify it the next day, with fixes applied the same day if needed [118-120].
MAJOR DISCUSSION POINT
Distributed development with global offices enables rapid regional scaling
Argument 2
AI compliance suite helps navigate differing regulations and choose cost‑effective solutions
EXPLANATION
Kura presents his company’s AI compliance suite, which assists organisations in meeting diverse regulatory requirements and selecting the most cost‑effective AI tools and licensing options.
EVIDENCE
He describes the AI compliance suite as covering government compliance, guiding organisations to the right AI tools, and evaluating cost-effectiveness such as token usage versus licensing [128-137].
MAJOR DISCUSSION POINT
AI compliance suite helps navigate differing regulations and choose cost‑effective solutions
AGREED WITH
Pramod, Atsuko Okuda
DISAGREED WITH
J.J. Singh
Argument 3
Business‑side hesitation and need for trusted standards slow adoption
EXPLANATION
Kura notes that medium‑sized enterprises often hesitate to adopt foreign AI solutions due to lack of trust, and that widely accepted standards would alleviate this hesitation.
EVIDENCE
He says businesses may step back if they are unsure about solutions from abroad and that trusted standards would help them, especially for medium-sized enterprises [249-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Businesses hesitate to adopt foreign AI solutions without trusted standards; standards are identified as a trust-building pillar [S2][S23][S24].
MAJOR DISCUSSION POINT
Business‑side hesitation and need for trusted standards slow adoption
AGREED WITH
Atsuko Okuda, Lidia
DISAGREED WITH
Pramod, Rafał Rosiński, Atsuko Okuda
Pramod
3 arguments · 141 words per minute · 823 words · 348 seconds
Argument 1
Trust requires control over data, explainability of decisions, and system uptime
EXPLANATION
Pramod outlines three essential questions for trustworthy AI: who controls the data and infrastructure, can the system’s decisions be explained, and is the system reliably up and running.
EVIDENCE
He lists the three questions (control, explainability, and whether the AI is up) as the core of trust in AI systems [161-165].
MAJOR DISCUSSION POINT
Trust requires control over data, explainability of decisions, and system uptime
DISAGREED WITH
Edyta Gorzon, Lidia
Argument 2
Data sovereignty and auditability across jurisdictions are essential
EXPLANATION
Pramod stresses that beyond local data storage, organisations must have visibility and auditability over data that may be subject to foreign jurisdictional laws.
EVIDENCE
He discusses the need for keys, auditability, and the ability to know which jurisdiction can override data access, emphasizing data sovereignty and auditability [166-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI governance frameworks call for clear data-related definitions, provenance, and sovereignty to ensure auditability across jurisdictions [S26][S2].
MAJOR DISCUSSION POINT
Data sovereignty and auditability across jurisdictions are essential
Argument 3
Data silos, missing governance, and cross‑functional misalignment delay production
EXPLANATION
Pramod identifies fragmented data, lack of data governance, and misalignment between legal, IT and business functions as primary reasons why AI pilots often fail to move into production.
EVIDENCE
He notes that 80 % of pilots do not reach production because data is siloed and not ready, governance is missing, and legal/IT constraints cause misalignment, slowing adoption [229-244].
MAJOR DISCUSSION POINT
Data silos, missing governance, and cross‑functional misalignment delay production
AGREED WITH
Mariusz Kura, Atsuko Okuda
DISAGREED WITH
Mariusz Kura, Rafał Rosiński, Atsuko Okuda
Edyta Gorzon
2 arguments · 144 words per minute · 559 words · 231 seconds
Argument 1
Simple communication and addressing user fears are key for adoption
EXPLANATION
Edyta argues that clear, simple messaging and directly addressing users’ concerns about being replaced are essential to drive AI adoption.
EVIDENCE
She stresses the need to communicate in simple words, explain what AI can do, and answer the “what’s in it for me?” question, noting that fear of replacement is a common user concern [194-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-factor barriers such as fear of replacement and the need for simple, clear messaging are emphasized as essential for AI uptake [S2][S24].
MAJOR DISCUSSION POINT
Simple communication and addressing user fears are key for adoption
AGREED WITH
Rafał Rosiński, Lidia, Pramod
DISAGREED WITH
Pramod, Lidia
Argument 2
Users need clear, simple messaging; fear of replacement must be addressed
EXPLANATION
Edyta reiterates that users require straightforward explanations and reassurance that AI will augment rather than replace them.
EVIDENCE
She observes that users often wonder if they will be replaced by AI and that organizations must convey the benefits without overpromising productivity, focusing instead on quality and reduced repetitive tasks [258-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Clear communication that AI augments rather than replaces workers is highlighted as crucial to overcome adoption resistance [S2][S24].
MAJOR DISCUSSION POINT
Users need clear, simple messaging; fear of replacement must be addressed
J.J. Singh
3 arguments · 160 words per minute · 438 words · 163 seconds
Argument 1
EU AI Act provides a guidebook that can facilitate cross‑border AI investment
EXPLANATION
Singh states that the EU AI Act, despite being stringent, offers clear guidelines that help foreign firms, such as Indian companies, prepare for investment and deployment in Europe.
EVIDENCE
He mentions that the EU AI Act, implemented in 2026, creates a guidebook which, when its rules are clear, eases investor concerns and prepares Indian companies for EU deployment [99-101].
MAJOR DISCUSSION POINT
EU AI Act provides a guidebook that can facilitate cross‑border AI investment
DISAGREED WITH
Mariusz Kura
Argument 2
Sandbox and compliance tools help Indian firms meet EU regulations
EXPLANATION
Singh cites the example of Indian AI startups participating in a French accelerator and using an EU sandbox to navigate regulatory requirements.
EVIDENCE
He refers to a 2025 example where ten Indian AI companies joined a French accelerator program and the EU offered a sandbox solution to ease compliance [102-103].
MAJOR DISCUSSION POINT
Sandbox and compliance tools help Indian firms meet EU regulations
Argument 3
Involvement of senior decision‑makers with clear objectives builds lasting confidence
EXPLANATION
Singh concludes that long‑term confidence in cross‑border AI investments stems from top‑level decision‑makers who understand the purpose of AI deployments.
EVIDENCE
He says confidence comes from the involvement of the right senior people who know what they want to solve with AI [308-311].
MAJOR DISCUSSION POINT
Involvement of senior decision‑makers with clear objectives builds lasting confidence
Agreements
Agreement Points
Trust is essential for AI deployment, especially in critical infrastructure and public services
Speakers: Rafał Rosiński, Lidia, Pramod, Edyta Gorzon
Trustworthy AI essential for critical infrastructure; Human factor is a critical barrier in AI adoption; Trust requires control over data, explainability, and system uptime; Simple communication and addressing user fears are key for adoption
All speakers stress that trust, whether built through reliable and secure AI for critical services, attention to human concerns, data control and explainability, or clear communication with users, is a prerequisite for successful AI adoption [9-12][15-16][60-61][161-165][194-199].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is framed as requiring predictability, explainability, accountability and institutional safeguards, as highlighted in risk-based AI policy for the finance sector and India’s AI trust-infrastructure discussions [S50][S54].
Standards are pivotal for interoperability, cost reduction, and building trust
Speakers: Atsuko Okuda, Lidia, Mariusz Kura
Shared data formats, APIs, and protocols enable cross‑border AI interoperability; Standards are a pillar of building trust; Business‑side hesitation and need for trusted standards slow adoption
Atsuko explains that common data formats, APIs and protocols enable interoperability and lower investment costs; Lidia calls standards a key pillar of trust; Mariusz notes that trusted standards would alleviate business hesitation, especially for medium-sized enterprises [44-47][36-37][60-61][249-252].
POLICY CONTEXT (KNOWLEDGE BASE)
International multistakeholder initiatives stress standards as essential for interoperability, cost efficiencies and trust, with IGF reports linking standards to cross-border data flows and AI ecosystem cohesion [S55][S56][S51].
Inclusive, multi‑stakeholder participation strengthens legitimacy and public trust
Speakers: Chengetai Masango, Odes, Lidia
Inclusivity creates legitimacy, transparency, and accountability; Community participation, linguistic diversity, and feedback loops foster trust; Human factor is a critical barrier in AI adoption
Chengetai argues that inclusivity breeds legitimacy and accountability; Odes highlights community involvement, linguistic relevance and feedback mechanisms as trust builders; Lidia points out the human factor as a barrier, underscoring the need for inclusive approaches [63-69][82-86][182-184].
POLICY CONTEXT (KNOWLEDGE BASE)
IGF analyses underline that inclusive, multi-stakeholder engagement enhances legitimacy and public trust, and that deep partnerships across government, civil society and industry are critical for trustworthy AI [S48][S49][S52][S55].
Data silos, lack of governance and capacity gaps impede AI production and scaling
Speakers: Pramod, Mariusz Kura, Atsuko Okuda
Data silos, missing governance, and cross‑functional misalignment delay production; AI compliance suite helps navigate differing regulations and choose cost‑effective solutions; Lack of awareness and capacity to apply existing standards hampers implementation
Pramod notes that 80 % of pilots fail due to siloed data and missing governance; Mariusz offers a compliance suite to manage regulatory diversity; Atsuko points out that many participants are unaware of existing standards and lack capacity to apply them [229-244][128-137][211-215].
POLICY CONTEXT (KNOWLEDGE BASE)
Research shows pervasive data silos, missing governance structures and capacity shortages as primary obstacles, noting that 80 % of AI pilots fail to reach production and calling for stronger data-governance frameworks [S41][S47][S42].
Similar Viewpoints
Both emphasize that secure, controllable data and infrastructure are fundamental to trustworthy AI for essential services [9-12][15-16][161-165].
Speakers: Rafał Rosiński, Pramod
Trustworthy AI essential for critical infrastructure; Trust requires control over data, explainability, and system uptime
Both see standards as the key mechanism to overcome cross‑regional regulatory and trust barriers, enabling smoother AI deployment [44-47][36-37][249-252].
Speakers: Atsuko Okuda, Mariusz Kura
Shared data formats, APIs, and protocols enable cross‑border AI interoperability; Business‑side hesitation and need for trusted standards slow adoption
Both argue that clear regulatory frameworks or standards (e.g., EU AI Act, compliance tools) are necessary to give businesses confidence for cross‑border AI investment [99-101][249-252].
Speakers: J.J. Singh, Mariusz Kura
EU AI Act provides a guidebook that can facilitate cross‑border AI investment; Business‑side hesitation and need for trusted standards slow adoption
Unexpected Consensus
Local language models and linguistic relevance as trust builders
Speakers: Rafał Rosiński, Odes
Training models on national data, managing generative AI, and combating deep‑fakes are key challenges; Community participation, linguistic diversity, and feedback loops foster trust
Rosiński highlights the development of Polish LLMs trained on national data to keep business competitive, while Odes stresses that delivering AI services in languages understood by the community is essential for trust; both converge on the need for locally tailored AI to build confidence, a link not explicitly anticipated at the start of the discussion [20-23][82-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on multilingual internet access and local conditions emphasize that supporting local languages and region-specific models boosts adoption and trust in AI services [S58][S57].
Overall Assessment

The participants show strong convergence on four core themes: (1) the centrality of trust for AI, especially in critical infrastructure; (2) the role of standards and interoperability in lowering costs and building confidence; (3) the necessity of inclusive, multi‑stakeholder and community‑driven processes to legitimize AI; and (4) the importance of robust data governance and capacity to overcome implementation bottlenecks.

High consensus – most speakers echo each other’s points across different domains, indicating a shared understanding that trustworthy, standards‑based, and inclusive AI, underpinned by solid data governance, is essential for successful national and cross‑border AI deployment. This alignment suggests that coordinated policy actions on standards, capacity building, and inclusive governance are likely to receive broad support among stakeholders.

Differences
Different Viewpoints
Primary barrier to AI implementation
Speakers: Pramod, Mariusz Kura, Rafał Rosiński, Atsuko Okuda
Data silos, missing governance, and cross‑functional misalignment delay production; Business‑side hesitation and need for trusted standards slow adoption; Training models on national data, managing generative AI, and combating deep‑fakes are key challenges; Lack of awareness and capacity to apply existing standards hampers implementation
Pramod points to fragmented data, absent governance and organisational misalignment as the main blocker [229-244]; Mariusz stresses that medium-sized firms hesitate to adopt foreign AI solutions and need trusted standards [249-252]; Rosiński highlights training models on national data, managing generative AI and fighting deep-fakes as the core challenge [202-206]; Atsuko argues that many participants simply do not know about the existing standards and lack the capacity to use them [211-215]. Each speaker therefore identifies a different primary obstacle to scaling AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses identify organizational and economic factors, not technology, as the chief barrier, citing 95 % pilot attrition and limited ROI on AI investments [S44][S46][S42].
Regulation: barrier versus enabler for cross‑border AI investment
Speakers: J.J. Singh, Mariusz Kura
EU AI Act provides a guidebook that can facilitate cross‑border AI investment; AI compliance suite helps navigate differing regulations and choose cost‑effective solutions
Singh argues that the EU AI Act, despite its stringency, offers a clear guidebook that eases investor concerns and supports Indian firms entering Europe [99-101]; Kura, while acknowledging the need for compliance tools, emphasizes that regulations change rapidly (almost weekly) and that businesses struggle to keep up, making the regulatory landscape a practical hurdle [124-127]. Thus they differ on whether regulation mainly enables or impedes international AI trade.
POLICY CONTEXT (KNOWLEDGE BASE)
Experts argue that over-reaching regulation can hinder cross-border AI flows, while right-sized governance and harmonised rules can act as enablers, as discussed in Chatham House and cross-border data-flow studies [S43][S56][S61].
Approach to building trust in AI systems
Speakers: Pramod, Edyta Gorzon, Lidia
Trust requires control over data, explainability of decisions, and system uptime; Simple communication and addressing user fears are key for adoption; Human factor is a critical barrier in AI adoption
Pramod frames trust technically – it depends on data control, explainability and continuous availability [161-165]; Edyta stresses that trust is achieved through clear, simple messaging and by answering users’ fear of replacement [194-199]; Lidia highlights the human factor – people’s acceptance and confidence – as the decisive barrier to adoption [182-184]. All agree trust matters, but propose different levers (technical control vs communication vs human-centred change).
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus points to transparency, explainability and risk-based frameworks as core to trust-building, reflected in AI policy roundtables and sector-specific self-regulation proposals [S50][S53][S54][S51].
Unexpected Differences
Community‑driven ecosystems versus business‑centric trust mechanisms
Speakers: Odes, Mariusz Kura
Community participation, linguistic diversity, and feedback loops foster trust; Business‑side hesitation and need for trusted standards slow adoption
Odes argues that trust is built by involving local communities, respecting linguistic diversity and maintaining feedback loops [78-86]; Kura, however, points out that medium-sized enterprises often refrain from adopting AI solutions from abroad unless trusted standards exist, emphasizing a business-oriented trust model [249-252]. The tension between a community-centric versus a market-centric trust strategy was not anticipated given the overall consensus on inclusivity.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent commentary warns that closed, profit-driven ecosystems undermine trust, advocating open, community-led platforms and multistakeholder governance as preferable models [S61][S62][S55].
Overall Assessment

The discussion shows moderate disagreement. Participants agree on the overarching importance of trust, standards and inclusivity, but diverge on what they see as the principal obstacle to AI scaling (data governance vs business trust vs regulatory awareness) and on whether regulation primarily enables or hinders cross‑border AI investment. Unexpected friction appears between community‑focused and business‑focused trust approaches.

The level of disagreement is moderate – the disagreements are largely about emphasis and implementation pathways rather than fundamental contradictions, suggesting that coordinated policy that addresses data governance, standards awareness, and both community and business trust needs could reconcile the differing views.

Partial Agreements
All participants concur that building public trust is essential for successful AI deployment. Rosiński links trust to the reliability of critical services [9-12,15-16]; Pramod focuses on technical control, explainability and uptime [161-165]; Lidia points to the human factor as the decisive barrier [182-184]; Edyta stresses clear communication and fear‑addressing [194-199]; Odes adds community involvement, linguistic relevance and feedback mechanisms [78-86]. While the end goal (trust) is shared, the pathways differ.
Speakers: Rafał Rosiński, Pramod, Lidia, Edyta Gorzon, Odes
Trustworthy AI essential for critical infrastructure
Trust requires control over data, explainability of decisions, and system uptime
Human factor is a critical barrier in AI adoption
Simple communication and addressing user fears are key for adoption
Community participation, linguistic diversity, and feedback loops foster trust
Takeaways
Key takeaways
Trustworthy AI is essential for the resilience of critical national infrastructure (energy, water, health) and must be treated as a public responsibility.
Global AI standards—shared data formats, APIs, protocols, harmonized terminology and reference architectures—enable interoperability, lower investment costs, and build trust across borders.
Inclusive, multi‑stakeholder governance (government, civil society, technical community, private sector) creates legitimacy, transparency, accountability and thus public trust.
Community‑driven ecosystems that respect linguistic, cultural and contextual diversity, and that provide feedback loops, are crucial for local trust and adoption.
Regulatory alignment, exemplified by the EU AI Act and sandbox approaches, can facilitate cross‑border AI trade when clear guidelines and compliance tools are available.
Distributed development models and AI compliance suites help firms scale solutions while navigating divergent regulations.
Trusted AI infrastructure requires data sovereignty, auditability, explainability and high availability/resilience of compute resources.
Human factors—clear communication, addressing fear of replacement, and change‑management—are often the primary barrier to AI adoption, not technology itself.
Key operational challenges for governments include training national data, managing generative AI, and combating deep‑fakes.
Implementation gaps stem from lack of awareness, capacity to apply existing standards, and difficulty articulating problems into actionable projects.
Resolutions and action items
ITU to increase outreach and capacity‑building on existing AI standards to improve awareness among non‑standardisation experts.
Poland to continue development and deployment of national LLMs (e.g., Bielik) as part of a trustworthy AI ecosystem.
Adoption of AI compliance suites (as developed by Bilenium) to help organisations navigate regulatory requirements and select cost‑effective AI tools.
Establish independent oversight bodies that include civil‑society and technical experts to review AI systems before deployment.
Promote sandbox environments (e.g., EU‑India accelerator) to allow firms to test AI solutions under regulated conditions.
Encourage global and regional stakeholders to embed inclusive participation and feedback mechanisms in AI project lifecycles.
Unresolved issues
How to systematically build and maintain data governance frameworks that eliminate silos and ensure data readiness for production‑scale AI.
Specific mechanisms for aligning divergent national regulations beyond voluntary compliance tools and sandbox pilots.
Concrete methods for measuring and assuring explainability and auditability of AI decisions across multi‑jurisdictional deployments.
Scalable approaches for continuous community engagement and linguistic localisation in AI services at national scale.
Clear guidelines on balancing human oversight with AI autonomy to address trust and risk concerns.
Suggested compromises
Use of sandbox programmes that provide regulatory flexibility while maintaining safety standards, allowing firms to innovate without full compliance burden.
Adopting a layered governance model where core standards are mandatory, but implementation details can be adapted to local contexts and capacities.
Balancing strict regulation (e.g., EU AI Act) with practical guidance and toolkits (compliance suites) to reduce friction for businesses.
Combining top‑down regulatory frameworks with bottom‑up community participation to ensure both legal certainty and local relevance.
Thought Provoking Comments
“ITU has over 200 already approved AI standards, and 200 more are in the pipeline… For interoperability we need shared data formats, standardized APIs, and communication protocols, plus harmonized terminology and reference architectures.”
She quantifies the breadth of existing standards and breaks down the concrete technical building blocks needed for global AI interoperability, moving the conversation from abstract policy to actionable specifications.
Her detailed enumeration shifted the discussion toward concrete technical solutions, prompting later speakers (e.g., Pramod and Mariusz) to reference standards and compliance tools as essential for trustworthy AI deployment.
Speaker: Atsuko Okuda
“Inclusivity breeds legitimacy and thereby trust… transparency of the process, open consultations, public comment periods, and accountability mechanisms are essential for AI governance.”
He links multi‑stakeholder participation directly to legitimacy and trust, introducing a governance lens that balances the technical focus introduced earlier.
This comment broadened the debate, leading Lidia to ask about community‑driven ecosystems and prompting Odes and Edyta to discuss local inclusion, linguistic diversity, and user‑centred communication.
Speaker: Chengetai Masango
“The EU AI Act will act as a playbook… sandbox solutions and clear guidelines actually make it easier for Indian companies to enter the European market.”
He reframes regulation not as a barrier but as an enabler of cross‑border trade, providing a concrete example of how standards and regulatory sandboxes can facilitate international AI commerce.
His perspective introduced the economic and trade dimension, influencing subsequent remarks about regulatory divergence (Mariusz) and the need for clear compliance tools (Pramod).
Speaker: J.J. Singh
“Trust in AI requires three questions: control (who owns the data and compute), explainability (can we trace what happened), and resilience (does the system stay up).”
He distills the foundation of trustworthy AI into three clear pillars, connecting data sovereignty, auditability, and operational reliability in a succinct framework.
This framework became a reference point for later speakers; Pramod’s later remarks on data silos and Mariusz’s compliance suite were framed against these three pillars, deepening the technical analysis.
Speaker: Pramod
“We must communicate AI as a tool that improves quality of work, not just productivity; wording matters because users fear replacement and overload.”
She highlights the human‑centred change‑management challenge, emphasizing that the narrative around AI adoption can make or break user acceptance.
Her focus on communication shifted the tone toward the human factor, prompting Lidia to ask about human barriers and leading Odes to stress linguistic inclusivity.
Speaker: Edyta Gorzon
“Most AI datasets come from the Global North; we need to contextualize models for the Global South and ensure local value creation, otherwise we serve only the first 20 % of the market.”
He points out systemic bias in data and market focus, urging a shift toward inclusive, locally relevant AI that serves the majority of users.
This comment reinforced Chengetai’s inclusivity point and added a concrete dimension (data origin and market share), influencing the later discussion on community‑driven ecosystems and linguistic diversity.
Speaker: Odes
“We have built an AI compliance suite that helps organisations navigate regulatory requirements, cost‑effectiveness of providers, and licensing policies.”
He presents a practical tool that operationalises the earlier talk about standards and regulatory divergence, showing how private sector can address compliance complexity.
His example provided a tangible solution that linked back to Atsuko’s standards and Pramod’s three pillars, illustrating how businesses can turn policy into actionable products.
Speaker: Mariusz Kura
Overall Assessment

The discussion evolved from high‑level policy framing to concrete technical and human‑centred challenges, driven by a handful of pivotal remarks. Atsuko’s standards overview anchored the conversation in tangible interoperability needs; Chengetai’s inclusivity argument expanded the scope to legitimacy and trust; J.J.’s regulatory playbook reframed rules as market enablers; Pramod’s three‑pillar model gave a clear framework for trustworthy AI infrastructure; Edyta’s emphasis on communication highlighted the critical human adoption barrier; Odes’ focus on data bias and market inclusion deepened the equity dimension; and Mariusz’s compliance suite demonstrated how the private sector can operationalise these insights. Together, these comments redirected the dialogue from abstract aspirations to actionable pathways, shaping a multidimensional narrative that interwove standards, governance, economics, infrastructure, and user experience.

Follow-up Questions
How can global standards ensure interoperability and resilience of AI systems across regions?
Understanding how standards can facilitate cross‑border AI integration and reduce costs.
Speaker: Lidia (to Atsuko Okuda)
How does multi‑stakeholder cooperation translate into real public trust in AI governance?
Explores the mechanisms by which inclusive processes build legitimacy and confidence.
Speaker: Lidia (to Chengetai Masango)
How can community‑driven digital ecosystems contribute to building trust in AI locally?
Seeks insight on the role of local participation and feedback loops in fostering trust.
Speaker: Lidia (to Odes)
Does regulatory alignment directly influence international trade? Share experience from the Polish Chamber of Commerce.
Examines the impact of AI regulations on cross‑border commerce and investment.
Speaker: Lidia (to J.J. Singh)
How do you scale AI solutions across regions while managing regulatory divergence?
Looks for strategies to expand AI deployments despite differing national rules.
Speaker: Lidia (to Mariusz Kura)
What does trusted AI require on the ground in terms of data sovereignty, secure compute and resilient digital backbone?
Identifies essential infrastructure elements for trustworthy AI services.
Speaker: Lidia (to Pramod)
What determines whether AI is truly adopted by teams?
Aims to uncover factors that drive or hinder organizational uptake of AI.
Speaker: Lidia (to Edyta Gorzon)
What is the most complex operational challenge governments face when deploying AI in public services?
Seeks to pinpoint the toughest hurdle for public‑sector AI implementation.
Speaker: Lidia (to Minister Rafał Rosiński)
Where do you see the big implementation gap today? Is it standards, lack of standards, skills, governance?
Attempts to identify the primary barrier slowing AI rollout.
Speaker: Lidia (to Atsuko Okuda)
In real AI projects, what most often slows down implementation?
Looks for common bottlenecks such as data issues, legal constraints, or trust concerns.
Speaker: Lidia (to Pramod and Mariusz Kura)
What is the most common human barrier to AI adoption?
Focuses on psychological and cultural obstacles that impede user acceptance.
Speaker: Lidia (to Edyta Gorzon)
What would be the most important practical step to strengthen public trust in AI deployment?
Seeks actionable measures to enhance societal confidence in AI systems.
Speaker: Lidia (to Chengetai Masango)
What ensures AI remains inclusive in real‑world implementation?
Explores safeguards to guarantee AI serves diverse populations and contexts.
Speaker: Lidia (to Odes)
What creates long‑term confidence in cross‑border AI investments?
Looks for factors that sustain international AI collaboration and funding.
Speaker: Lidia (to J.J. Singh)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.