Keynote-Martin Schroeter

19 Feb 2026 14:15h - 14:30h

Session at a glance
Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 introducing Martin Schroeter, chairman and CEO of Kyndryl, as a leading voice on moving AI from laboratory optimism to real-world production ([1-4]).


Schroeter framed the central challenge as turning AI into reliable, day-to-day operations at scale rather than isolated demos, emphasizing that failures in critical sectors such as hospitals or energy grids can have life-changing consequences ([12-15]).


He argued that the problem is not a lack of innovation (the AI technology itself is “brilliant”) but a readiness gap, noting that while over two-thirds of organisations invest heavily in AI, almost half still fail to achieve meaningful returns ([20-23]).


In India, 75% of projects stall after proof-of-concept, which he attributes to the fact that AI has not yet been industrialized: the necessary infrastructure, data pipelines, operational processes and skilled people are missing ([24-28]).


Kyndryl’s customers therefore seek clarity on four readiness questions: how to deploy AI across fragmented data environments, whether systems can run 24/7 without failure or cyber-attack, how to integrate agentic AI into regulated, mission-critical settings, and how to prepare the workforce for new AI-augmented roles ([29-38]).


Trust emerges as the overarching concern, with leaders needing assurance that AI decisions are accountable, transparent and explainable, especially in regulated domains such as government and banking ([44-45]).


Schroeter highlighted India as a crucial proving ground for industrializing AI at national scale, citing initiatives like Digital India and the India AI Mission that create policy, digital and talent foundations ([50-53]).


He gave concrete examples: the Unified Lending Interface that reduces loan approval time from weeks to minutes, and the deployment of agentic AI at Bangalore International Airport to shift IT operations from reactive to proactive, self-healing modes ([54-58]).


Through community partnerships, Kyndryl is also building digital and cybersecurity skills and launching a cyber-defence operations centre in Bangalore to counter AI-enabled threats at the network edge ([59-60]).


The speaker stressed that moving from invention to impact requires industrializing AI governance, embedding auditability, logging, explainability and compliance directly into live systems, a strategy he calls “policy as code” ([65-68]).


He urged policymakers and companies to focus on scalable infrastructure, trustworthy security and a skilled workforce as the fundamentals for responsible AI deployment ([69-71]).


Finally, Schroeter concluded that the future of AI will be decided not by research labs or boardrooms but by the choices and investments made today to bridge experimentation and industrialization, thereby strengthening the institutions societies rely on ([76-83]).


The discussion underscored that responsibly industrialized AI can move beyond optimization to deliver reliable, inclusive outcomes for people, planet and progress ([55-57][82-83]).


Keypoints

Major discussion points


AI is at a readiness / industrialization crossroads, not an innovation problem. Schroeter stresses that while AI technology is “brilliant,” it is not yet industrialized; the infrastructure, data, operations, and people are unprepared for large-scale, reliable deployment [21-28]. He calls for moving governance from policy documents into live systems, embedding auditability, explainability, and compliance [65-68].


Four critical readiness questions dominate customers’ concerns. These include how to deploy AI across fragmented, multi-cloud data; whether AI can run 24 × 7 without failure, cyber-attacks, or data drift; the suitability of agentic AI for mission-critical, regulated environments; and how to prepare the workforce for new AI-augmented ways of working [30-41].


India is presented as a strategic proving ground for responsible, large-scale AI. The speaker highlights national initiatives such as Digital India and the India AI Mission, and cites concrete examples (the Unified Lending Interface, agentic AI at Bangalore International Airport, and a new cyber-defence operations centre) to illustrate how AI can be deployed at national scale [50-60].


Trust and governance are portrayed as prerequisites for AI impact. Trust is built through clear guardrails, accountability, transparency, and explainability, especially in regulated sectors like banking, government, and healthcare [44-46][71-75]. “Policy as code” is offered as a mechanism to embed these safeguards directly into AI systems [66-68].


A call to action for coordinated investment, reskilling, and partnership. The speaker urges companies and governments to focus on scalable infrastructure, security, and people-skill development, emphasizing that the future of AI depends on closing the gap between experimentation and industrialization [69-83].


Overall purpose / goal


The discussion aims to shift the narrative from AI hype to practical, responsible industrialization. Schroeter seeks to convince policymakers, business leaders, and technologists that achieving real-world impact requires addressing readiness challenges (technical, regulatory, and human) through scalable infrastructure, robust governance, and workforce transformation, using India’s ecosystem as a model.


Tone of the discussion


Opening (0:00-5:00): Formal, appreciative, and optimistic, thanking leaders and framing AI as a transformative opportunity.


Middle (5:00-15:00): Cautionary and analytical, highlighting concrete readiness gaps, operational risks, and the need for trust.


Later (15:00-end): Solution-focused and inspirational, showcasing successful Indian deployments, outlining governance approaches, and issuing a rallying call for collective action. The tone progresses from celebratory acknowledgment to sober problem-identification, and finally to an urgent, hopeful call to industrialize AI responsibly.


Speakers

Martin Schroeter – Role/Title: Chairman and CEO, Kyndryl – Area of expertise: IT infrastructure services, AI operationalization and industrialization [S2]


Speaker 1 – Role/Title: Moderator/host introducing the keynote speaker – Area of expertise: (not specified)[S3][S5]


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

Speaker 1 opened the session by introducing Martin Schroeter, chairman and CEO of Kyndryl, as a leading voice on turning AI hype into production-grade solutions. Schroeter thanked Prime Minister Narendra Modi and the summit’s ministers, policymakers, CEOs and global livestream audience for convening the event and framed the gathering as an “extraordinary opportunity” to discuss responsible AI for people, industry and communities [1-4][5-11].


He identified the core problem: AI must move from demos and pilots to reliable, day-to-day operation at national and enterprise scale. In hospitals, banks, transport networks and energy grids, failure is not a mere inconvenience but a threat to lives, making operational reliability a prerequisite for the summit’s pillars of people, planet and progress [12-18][13-15].


Schroeter argued that the bottleneck is not a lack of innovation; the technology is “brilliant,” but AI has not yet been industrialised. The gap lies in infrastructure, data pipelines, operational processes and skilled personnel. While more than two-thirds of organisations globally invest heavily in AI, almost half still struggle to realise meaningful returns, and in India 75% of projects stall after the proof-of-concept stage [22-24].


Kyndryl’s customers focus on four critical readiness questions:


1) Deploying AI across fragmented, multi-cloud and edge data environments while integrating legacy core systems [30-31];


2) Guaranteeing 24 × 7 reliability, security, resilience to cyber-attacks, data drift and regulatory scrutiny, thereby earning user trust [32-36];


3) Safely integrating agentic AI into regulated, mission-critical settings [37-39];


4) Upskilling the workforce for AI-augmented roles, noting that nine in ten leaders expect profound change while fewer than one in three feel staff are ready [40-43].


Trust is built through clear guardrails that embed auditability, logging, explainability and compliance directly into AI systems: a “policy-as-code” approach that moves governance from static documents into live code [62-66].
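The “policy-as-code” mechanism described above — governance rules executed inside the live system rather than written in policy documents — can be illustrated with a minimal sketch. The `PolicyEngine` class, policy names, and thresholds below are hypothetical examples for illustration, not Kyndryl’s actual tooling:

```python
# Minimal policy-as-code sketch (illustrative, not Kyndryl's implementation):
# governance rules are executable predicates evaluated before every
# AI-initiated action, and every evaluation is written to an audit log.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    allowed: bool
    reason: str  # explainability: why the action was allowed or blocked

@dataclass
class PolicyEngine:
    audit_log: list = field(default_factory=list)

    # Each policy is a named predicate over a proposed action.
    # Names and limits here are invented for the example.
    POLICIES = {
        "no_unreviewed_loan_over_limit":
            lambda a: not (a["type"] == "approve_loan"
                           and a["amount"] > 100_000
                           and not a.get("human_reviewed", False)),
        "pii_must_be_masked":
            lambda a: not (a.get("contains_pii", False)
                           and not a.get("pii_masked", False)),
    }

    def evaluate(self, action: dict) -> Decision:
        for name, rule in self.POLICIES.items():
            if not rule(action):
                decision = Decision(False, f"blocked by policy '{name}'")
                break
        else:
            decision = Decision(True, "all policies satisfied")
        # Auditability: log every evaluation, allowed or not.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": decision.allowed,
            "reason": decision.reason,
        })
        return decision

engine = PolicyEngine()
ok = engine.evaluate({"type": "approve_loan", "amount": 5_000})
blocked = engine.evaluate({"type": "approve_loan", "amount": 250_000})
print(ok.allowed, "-", ok.reason)          # True - all policies satisfied
print(blocked.allowed, "-", blocked.reason)
```

Because the rules run in-line with the action and every decision is logged with a reason, auditability, logging and explainability are properties of the running system rather than claims in a document — which is the shift the speaker describes.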


India is presented as a strategic proving ground for responsible, large-scale AI. National initiatives such as Digital India and the India AI Mission provide policy, digital and talent foundations. A concrete example is the Unified Lending Interface, which reduces loan-approval times from weeks to minutes while enhancing transparency [50-52].


Kyndryl’s footprint in India includes building scalable platforms for banking, citizen services, telecoms and airports that handle millions of transactions each day [54-56]. At Bangalore International Airport, agentic AI enables proactive, self-healing IT operations, shifting from reactive to autonomous management [55-57].


Through community partnerships, Kyndryl is upskilling digital and cybersecurity talent, and it will open a new cyber-defence operations centre in Bangalore to detect and contain AI-driven threats at the network edge before they cause disruption [58-60].


Schroeter called on policymakers and enterprises to focus on three fundamentals (scalable infrastructure, trustworthy security and a skilled workforce) and to measure AI’s impact beyond productivity gains, including how institutions help societies adapt to the next phase of industrial automation [69-73].


He concluded that the future of AI will not be decided in research labs or boardrooms but by today’s choices and investments that bridge experimentation and industrialisation. The transition is both technical and human: building trust, reskilling workers at scale and ensuring AI systems are worthy of the institutions society relies on [70-74].


Session transcript
Complete transcript of the session
Speaker 1

Ladies and gentlemen, I would now like to welcome Mr. Martin Schroeter, who is the chairman and CEO, Kyndryl. As the leader of the world’s largest IT infrastructure services company spun out of IBM, Mr. Martin Schroeter manages the technology backbone of thousands of enterprises across the globe. His view of what it takes to actually run AI in production environments offers a necessary corrective to summit stage optimism. Ladies and gentlemen, please join me in welcoming the chairman and CEO of Kyndryl, Mr. Martin Schroeter.

Martin Schroeter

Thank you. Thank you. Thank you very much. Good afternoon, everybody. First, I want to thank the Honorable Prime Minister of India, Shri Narendra Modi, for convening this distinguished group of ministers, policymakers, global leaders, fellow CEOs, and of course, everybody watching on the live stream. And I want to thank all of you for your support of the initiative that we are carrying out in this country. It is an extraordinary opportunity for us to be here with you as we all focus on how to usher in this new era of AI responsibly for people, for industry, and for our communities.

Today, I’m proud to represent the collective knowledge and experience of Kyndryl’s engineers, technical practitioners, problem-solving consultants, the people who support the mission-critical systems that the world depends on every day. As the largest IT infrastructure services provider, the question that we continuously come back to at Kyndryl, and one that I suspect many of the policymakers and the business leaders and the technologists and the citizens here among us have, is how do we actually make AI work in the real world for real-world impact? Not a demo, not a pilot or an experiment. And not in theory, but in day-to-day operations under real constraints with people working alongside AI agents at national and enterprise scale.

Scale means something here in India that’s different than anywhere else, where failure of these systems is just not an option. Because when AI moves, when it moves from labs into the systems that power economies, the hospitals and the banks and the transportation networks and the energy grids and the governments, getting it wrong, and these are the systems we run every day, getting it wrong is not just an inconvenience, it actually impacts lives. And these systems sit at the heart of what this summit represents, the people, the planet, and the progress that we’re all working on. Progress in all three depends on the ability to operationalize AI reliably and, again, at scale. So today I’ll share a bit about what we’re learning, working with our global customer base and our partners to close the gap between investments, intelligence and reality, and where AI either becomes part of how we work and how work actually gets done.

or never makes it out of the experimentation phase. And what we’re seeing is not an innovation problem. The innovation is real, but it’s a readiness problem. We’ve conducted global studies with business and IT leaders countless times, and our research shows that while more than two-thirds of global organizations are already heavily invested in AI, almost half still struggle to see meaningful returns. And in India, in India alone, 75% said their innovation efforts stall after the proof-of-concept stage. So based on our research and our experience with our customers, both in regulated and unregulated industries, the reason, the leading indicator for why projects stall is not because the technology isn’t smart. It’s brilliant. It’s brilliant.

It’s because we haven’t industrialized it yet. AI today is not industrialized. The infrastructure, the data, the operations, and the people simply aren’t ready to support AI adoption and deployment at scale. So our customers really want greater clarity and greater support on four critical questions. First, on operational conduct, they want to know how to deploy AI when data is fragmented across clouds, across their core systems of record, and at the edge of the environments in which they operate. When business processes were never designed for AI, and when regulations differ by sector and by geography, and when trust, security, and resilience are imperative to how it works. Second, and more systemically, they’re asking, can this system really run 24 by 7 without failure?

Can it withstand cyber attacks and outages and data drift and regulatory scrutiny? And can the people trust it when it matters most? And can it? Can they trust the decisions it’s going to make? Those are the systems we run every day. Third, they’re asking about agentic AI. Whether they’re truly ready to use it in mission-critical environments, are they able to meet the regulatory requirements that come with those environments, and are they able to integrate with existing systems? And fourth, they’re asking about their workforce. How to prepare people for new ways of working with AI. Nine in ten leaders expect AI to fundamentally reshape work, yet fewer than one in three believe their workforce is ready.

Or that they’re equipped to help their teams get there. All of this ladders up to trust. Can leaders trust these AI systems and the insights they provide? And that trust is built when AI operates within clear guardrails where actions are accountable and transparent and explainable, which is essential for organizations in every industry, and especially in government, in banking, and other regulated environments. These are the core readiness questions. And the core readiness challenges that we see every day. And they’re at the heart of why so many AI initiatives stall. They remind us that innovation must operate reliably, predictably, and securely, day after day, in the real world. So I’m thrilled that this year’s AI Summit is in India because India is one of the world’s most important proving grounds for industrializing AI at extraordinary scale.

Under the leadership of Prime Minister Modi, India has recognized AI as a strategic national priority, building policy and digital and talent foundations needed to support innovation, and again, at scale. Through initiatives like Digital India and the India AI Mission, and investments in digital public infrastructure, India has positioned itself not just as an adopter of AI, but as a global contributor to how AI can be deployed responsibly and inclusively. AI-powered platforms like the Unified Lending Interface are expanding access to credit at scale, reducing loan times from weeks to minutes while improving transparency and inclusion. India’s digital experience offers an important lesson for the world when technology must operate at a national scale across public services and financial systems, healthcare, transportation, and energy.

Reliability, governance, and human integration are not features, they are prerequisites. Kyndryl is very proud to be a partner to many of India’s leading companies and government agencies. Our local engineering teams have built scalable platforms for banking, for citizen services, for telecoms, and for airports to handle the millions of users and transactions every day. At Bangalore International Airport, we’ve applied agentic AI to shift IT operations from a reactive response to a proactive resilience, supporting self-healing capabilities that improve operational predictability and strengthen trust in the airport’s digitalization. Through our community partnerships in India, we’re helping build digital and cybersecurity skills because safe, responsible AI adoption depends on people being ready, not just technology. And because sophisticated adversaries are already using AI to move at machine speed, tomorrow we’re opening a new cyber defense operations center in Bangalore so we can detect and contain threats that already start at the edge of the network before they become disruptions.

So we are deeply committed to helping India and our partners around the world implement AI at the scale to drive people, planet, and progress outcomes. In every part of the globe, the conversation about agentic AI must now shift from intelligence to industrialization, from what AI can do to how it’s orchestrated and how it’s governed and secured and integrated, and how it’s sustained with agents and humans partnering to drive business impact. This is a transition every major technology invention has gone through. Invention comes first, but impact only comes when society’s learned how to industrialize it safely, reliably, and at scale. A critical part of this industrialization is operationalizing the governance of AI. That means moving governance out of policy documents and into live systems, embedding auditability, logging, explainability, and compliance directly into how AI operates.

We’re seeing how our approaches, like policy as code, can establish clear guardrails for agentic AI to drive trust and compliance, giving regulators, boards, and citizens alike the confidence that these systems are controlled, accountable, and safe. So what do we do next? Excuse me. We get ready by focusing on the fundamentals: infrastructure that can scale, security that earns trust, and people with the skills to operate AI responsibly. This readiness perspective is particularly important for policymakers. Excuse me. Because the impact of AI cannot be measured only by productivity gains or economic growth, as important as those are to driving the future. It will also be measured by how institutions help people adapt in the next phase of industrial automation and how work evolves.

Excuse me. AI can absolutely change the world. It can change work, it can change skills, it can change mindsets, and it can change operating models. But it will only change, oh, thank you very much, it will only change the world when it is embedded responsibly and reliably into the systems that society depends on every day. The future of AI will not be decided in the research labs or the boardrooms. It will be decided by the choices and the investments we make now, by how we close the gap between experimentation and industrialization. Excuse me. The work ahead is hard, because this is not just a technology shift, it’s a human shift. We have to build trust in AI, we have to reskill our workforces at scale, and we have to ensure these systems are worthy of the societies that depend on them.

The responsibility belongs to the companies and the governments alike. And it is a responsibility worth embracing, because when AI is industrialized responsibly, it doesn’t just optimize. It strengthens the institutions people rely on every day. And that is how AI truly changes the world. Thank you very much.

Related Resources
Knowledge base sources related to the discussion topics (33)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Correction (high)

“Martin Schroeter, chairman and CEO of Kindrel”

The knowledge base identifies Martin Schroeter as chairman and CEO of Kyndryl, not Kindrel, indicating the company name is misspelled in the report.

Confirmed (medium)

“The bottleneck is not a lack of innovation; the technology is “brilliant,” but AI has not yet been industrialised.”

The knowledge base states that technology is not the bottleneck and that success requires changes to processes, organization, incentives, skills, and culture, confirming the speaker’s point.

Correction (medium)

“In India 75 % of projects stall after the proof‑of‑concept stage”

The knowledge base reports that almost 80 % of AI pilots do not make it to production, but it does not provide an India‑specific 75 % figure, suggesting the reported statistic is inaccurate or unsupported.

Additional Context (low)

“Deploying AI across fragmented, multi‑cloud and edge data environments while integrating legacy core systems”

The knowledge base discusses the risks and complexities of hybrid and multi‑cloud environments, adding nuance to the challenges of fragmented cloud deployments.

Additional Context (medium)

“While more than two‑thirds of organisations globally invest heavily in AI, almost half still struggle to realise meaningful returns”

The knowledge base notes that a large share of AI pilots (around 80 %) fail to reach production, providing supporting context for the difficulty organisations face in achieving returns.

External Sources (86)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
https://dig.watch/event/india-ai-impact-summit-2026/keynote-martin-schroeter — Ladies and gentlemen, I would now like to welcome Mr. Martin Schroeter, who is the chairman and CEO, Kyndryl. As the lea…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Celestine Alves:Yes. Thank you, Denise. Well, I will speak from perspective from a lawyer, of course, and from a Brazili…
S7
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And that’s clearly something we try to do. And, of course, in addition, we need absolutely to have computer facility at …
S8
Building Climate-Resilient Systems with AI — The time for action is immediate – moving from research and pilots to deployment and impact is essential
S9
Scramble for Internet: you snooze, you lose | IGF 2023 WS #496 — Emerging technologies like Artificial Intelligence (AI), Blockchain, automation, and 5G/6G networks also impact internet…
S10
Safe and Responsible AI at Scale Practical Pathways — “Deep work on working on fragmented data silos.”[5]. “It can be bridged but we have to think about how to make data inte…
S11
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S12
9821st meeting — First, the Council must foster an inclusive discussion on AI governance. To make sure that artificial intelligence syste…
S13
Responsible AI in India Leadership Ethics & Global Impact — And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. E…
S14
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S15
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S17
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — but not least, trust. If you make a deal with a Swede, that’s a handshake. You can trust. You know what you’re going to …
S18
Driving Indias AI Future Growth Innovation and Impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S19
Building Scalable AI Through Global South Partnerships — And we believe that probably the most effective way of dealing with this is to actually be able to cooperate amongst our…
S20
The future of work: preparing for automation and the gig economy — According to asurvey conducted by Willis Towers Watsonin 38 countries, over half of the surveyed employers (57%) conside…
S21
AI expected to reshape 89% of jobs across the workforce in 2026 — AI is set totransformtheUKworkforce in 2026, with nearly 9 out of 10 senior HR leaders expecting AI to reshape jobs, acc…
S22
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks…
S23
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Helmut Reisinger:Yeah. Good afternoon, everybody. As-salamu alaykum. I am representing Palo Alto Networks. We are a cybe…
S24
Hardware for Good: Scaling Clean Tech — 4. The need for innovation in technology, policy, and deployment strategies. Ann Mettler: Because I work on these issu…
S25
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — So what can we expect these discussions to focus on? There are at least four main policy questions that these forums, as…
S26
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — India’s deployment of technology as an inclusive, developmental resource was highlighted. Here, the national AI strategy…
S27
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S28
From principles to practice: Governing advanced AI in action — Trust and Transparency Requirements
S29
Press Conference: Closing the AI Access Gap — Finally, there is strong agreement among the speakers for trust-based, multi-stakeholder partnerships in AI. They argue …
S30
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S31
Bridging the AI innovation gap — ## Call for Partnerships The speaker stressed that all stakeholders—government, industry, academia, and civil society—h…
S32
Shaping the Future AI Strategies for Jobs and Economic Development — Investment and infrastructure development require collaborative approaches
S33
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S34
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — While both speakers acknowledge the importance of governance, there’s an unexpected difference in their emphasis on who …
S35
Blended Finance’s Broken Promise and How to Fix It / Davos 2025 — Despite their different institutional backgrounds, both speakers emphasize the need for tailored, context-specific appro…
S36
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — While both speakers agree on the need for the Hamburg Declaration, they emphasize different aspects of its scope. Opp fo…
S37
Keynote-Martin Schroeter — “AI today is not industrialized”[1]. “The innovation is real, but it’s a readiness problem”[2]. “It’s because we haven’t…
S38
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This quote from the UN Secretary General, shared by Beridze, captures a fundamental challenge in AI governance – the gap…
S39
AI Meets Cybersecurity Trust Governance & Global Security — Easy questions at the end there. Well, just on a personal note, I have to say I really enjoyed this and I want to say th…
S40
https://dig.watch/event/india-ai-impact-summit-2026/keynote-martin-schroeter — Excuse me. AI can absolutely change the world. It can change work, it can change skills, it can change mindsets, and it …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Martin Schroeter
10 arguments · 158 words per minute · 1673 words · 632 seconds
Argument 1
Innovation is not the problem; readiness is – (Martin Schroeter)
EXPLANATION
Schroeter argues that while AI innovation is abundant, the main barrier to impact is the lack of operational readiness. Companies struggle to move beyond proof‑of‑concept because the supporting infrastructure, data, processes and people are not prepared for large‑scale deployment.
EVIDENCE
He notes that innovation is real but the issue is readiness, citing global studies showing that more than two-thirds of organisations invest heavily in AI yet almost half fail to see meaningful returns, and that in India 75 % of projects stall after the proof-of-concept stage. He further explains that AI is not yet industrialised because the infrastructure, data, operations and workforce are not ready to support AI at scale [20-23][24-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schroeter’s claim that the barrier is readiness rather than lack of innovation is echoed in his keynote where he states “The innovation is real, but it’s a readiness problem” and that AI is not yet industrialised [S1].
MAJOR DISCUSSION POINT
Readiness, not innovation, is the bottleneck for AI impact.
Argument 2
AI must move from labs to production for real‑world impact – (Martin Schroeter)
EXPLANATION
The speaker stresses that AI needs to transition from experimental demos and pilots to everyday operational use in critical sectors. Only by embedding AI in day‑to‑day processes can it deliver tangible benefits for people, industry and communities.
EVIDENCE
He contrasts demos and pilots with the need for AI to work in day-to-day operations under real constraints, emphasizing that AI must move from labs into systems that power hospitals, banks, transport, energy grids and governments, where failure affects lives [13-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to shift AI from research pilots to operational deployment is highlighted in the “Building Climate-Resilient Systems with AI” source, which stresses moving from labs to impact [S8].
MAJOR DISCUSSION POINT
Transition AI from labs to production for real‑world impact.
AGREED WITH
Speaker 1
Argument 3
Fragmented data and legacy processes impede deployment – (Martin Schroeter)
EXPLANATION
Schroeter points out that AI deployment is hampered by data that is scattered across multiple clouds, core systems and edge environments, as well as business processes that were never designed for AI. These technical and organisational silos make scaling difficult.
EVIDENCE
He describes customers’ need to know how to deploy AI when data is fragmented across clouds, core systems of record and edge environments, and when legacy processes were not built for AI integration [30-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Challenges of fragmented data silos and legacy processes are discussed in the safe and responsible AI at scale guide, which calls for making data interoperable and AI-ready [S10].
MAJOR DISCUSSION POINT
Data fragmentation and legacy processes hinder AI scaling.
Argument 4
AI systems must run 24/7 without failure and be secure, resilient, and trustworthy – (Martin Schroeter)
EXPLANATION
The speaker asserts that for AI to be adopted at scale it must operate continuously without downtime, withstand cyber‑attacks, data drift and regulatory scrutiny, and earn the trust of users when critical decisions are made.
EVIDENCE
He lists the systemic questions customers ask: can the system run 24/7, survive cyber attacks, outages, data drift and regulatory scrutiny, and can people trust its decisions when it matters most [32-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schroeter’s questions about 24/7 operation, cyber-attack resistance and data drift are directly quoted in his keynote [S1] and reinforced by literature on AI as critical infrastructure emphasizing resilience and secure compute [S11].
MAJOR DISCUSSION POINT
AI must be continuously reliable, secure and trusted.
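The data drift Schroeter lists among the systemic risks can be made concrete with a minimal monitoring sketch. The keynote contains no code; the function names, sample values, and the two-sigma threshold below are illustrative assumptions only:

```python
import statistics

# Hypothetical drift check: compare live feature values against a
# training-time baseline. Values and the 2.0 threshold are
# illustrative assumptions, not figures from the keynote.
def drift_score(baseline: list[float], live: list[float]) -> float:
    """Shift in means, in units of the baseline standard deviation."""
    base_std = statistics.stdev(baseline) or 1.0  # guard against zero spread
    return abs(statistics.fmean(live) - statistics.fmean(baseline)) / base_std

def is_drifting(baseline: list[float], live: list[float],
                threshold: float = 2.0) -> bool:
    """Flag the model for review when inputs shift beyond the threshold."""
    return drift_score(baseline, live) > threshold

baseline = [10.1, 9.8, 10.3, 10.0, 9.9]
print(is_drifting(baseline, [10.0, 10.2, 9.9]))   # False: close to baseline
print(is_drifting(baseline, [14.5, 15.1, 14.8]))  # True: clear shift
```

In a production setting a check like this would run continuously alongside the model, feeding the same alerting pipeline used for outages and security events, which is what "24/7 reliability" implies in practice.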
Argument 5
Embedding auditability, explainability, and policy‑as‑code creates guardrails – (Martin Schroeter)
EXPLANATION
Schroeter explains that moving AI governance from static policy documents into live systems—through audit logs, explainability, and policy‑as‑code—establishes clear, enforceable guardrails that build trust and compliance.
EVIDENCE
He describes operationalising AI governance by embedding auditability, logging, explainability and compliance directly into AI systems, and cites the use of policy-as-code to set guardrails for agentic AI [65-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of audit logs, explainability and policy-as-code as guardrails is reflected in the responsible AI in India discussion of embedded principles [S13], the scaling enterprise-grade AI article that stresses guardrails for trust [S14], and UN Council notes on transparency and accountability [S15].
MAJOR DISCUSSION POINT
Policy‑as‑code and auditability embed AI governance.
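As a rough illustration of the policy-as-code idea described above (a declarative policy evaluated at runtime, with every decision written to an audit log), here is a minimal stdlib-only Python sketch; the policy contents, action names, and function names are invented for illustration and are not from the keynote:

```python
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai.guardrails")

# Hypothetical policy: an allow-list of agent actions plus actions
# that must be escalated to a human. All names are illustrative.
POLICY = {
    "allowed_actions": {"restart_service", "open_ticket"},
    "requires_human_approval": {"delete_data"},
}

@dataclass
class Decision:
    action: str
    allowed: bool
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def evaluate(action: str, policy: dict = POLICY) -> Decision:
    """Check a proposed agent action against the policy; every
    decision is written to the audit log as structured JSON."""
    if action in policy["requires_human_approval"]:
        decision = Decision(action, False, "human approval required")
    elif action in policy["allowed_actions"]:
        decision = Decision(action, True, "permitted by policy")
    else:
        decision = Decision(action, False, "not in allow-list")
    log.info(json.dumps(decision.__dict__))  # auditability and logging
    return decision

print(evaluate("restart_service").allowed)  # True
print(evaluate("delete_data").allowed)      # False
```

Because the policy is data rather than a paragraph in a document, it can be versioned, reviewed, and tested like any other code, which is the shift from static policy documents to live systems that the argument describes.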
Argument 6
Trust arises when AI actions are transparent, accountable, and compliant – (Martin Schroeter)
EXPLANATION
The speaker argues that trust in AI is built when its actions are visible, accountable and meet regulatory requirements, ensuring that decisions can be explained and justified.
EVIDENCE
He states that trust is built when AI operates within clear guardrails where actions are accountable, transparent and explainable, which is essential for organisations in regulated environments [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust built on transparency, accountability and compliance is highlighted in the UN Security Council AI governance notes [S15] and reiterated in Schroeter’s keynote on trust as the fundamental prerequisite [S1].
MAJOR DISCUSSION POINT
Transparency, accountability and compliance generate AI trust.
Argument 7
India’s Digital India and AI Mission make it a proving ground for large‑scale AI – (Martin Schroeter)
EXPLANATION
Schroeter highlights India’s strategic focus on AI through initiatives like Digital India and the India AI Mission, positioning the country as a key testbed for industrialising AI at national scale.
EVIDENCE
He notes that under Prime Minister Modi, India has recognised AI as a strategic priority, built policy, digital and talent foundations, and through programmes such as Digital India and the India AI Mission, it is positioned as a global contributor to responsible AI deployment at scale [51-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s strategic AI focus is described in the “Scaling Trusted AI” piece noting Digital India and the AI Mission as a large-scale testbed [S16] and in the “Driving India’s AI Future” briefing that positions the country at the cusp of AI-driven change [S18].
MAJOR DISCUSSION POINT
India’s policy framework makes it a large‑scale AI proving ground.
Argument 8
Partnerships with Indian firms and government demonstrate scalable AI deployments – (Martin Schroeter)
EXPLANATION
He provides examples of collaborations where Kyndryl has built large‑scale AI platforms for banking, citizen services, telecoms and airports, showing practical, high‑volume deployments in India.
EVIDENCE
He mentions Kyndryl’s partnerships building scalable platforms for banking, citizen services, telecoms and airports, the use of agentic AI at Bangalore International Airport to shift IT operations to proactive resilience, and the launch of a cyber-defence operations centre in Bangalore to detect threats at the network edge [56-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schroeter’s statements about building scalable platforms for banking, citizen services, telecoms and airports in India are captured in his keynote [S1], and the “Building Scalable AI Through Global South Partnerships” source emphasizes such collaborations [S19].
MAJOR DISCUSSION POINT
Collaborations showcase large‑scale AI implementations in India.
Argument 9
Leaders expect AI to reshape work, yet most workforces are unprepared – (Martin Schroeter)
EXPLANATION
Schroeter points out the gap between expectations and reality: while nine in ten leaders anticipate AI will fundamentally change work, fewer than one in three believe their employees are ready for that transformation.
EVIDENCE
He cites a statistic that nine in ten leaders expect AI to reshape work, but fewer than one in three think their workforce is ready or equipped to help teams transition [41-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Surveys showing 89% of leaders expect AI to reshape work and concerns about workforce readiness are reported in the future-of-work study [S20] and the AI impact on jobs survey [S21].
MAJOR DISCUSSION POINT
Workforce not ready for AI‑driven change.
Argument 10
Reskilling and building cybersecurity skills are essential for responsible AI – (Martin Schroeter)
EXPLANATION
He stresses that responsible AI adoption depends on people having the right digital and cybersecurity capabilities, not just technology, and announces new initiatives to develop these skills.
EVIDENCE
He describes community partnerships that build digital and cybersecurity skills, and the opening of a new cyber-defence operations centre in Bangalore to detect and contain threats before they cause disruption [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for capacity building in digital and cyber skills is discussed in the “AI-driven Cyber Defense” report [S22] and the “Tech Transformed Cybersecurity” article on AI-enabled threat detection [S23]; Schroeter also mentions a new cyber-defence centre in his keynote [S1].
MAJOR DISCUSSION POINT
Reskilling and cyber skills are crucial for responsible AI.
Speaker 1
1 argument · 133 words per minute · 86 words · 38 seconds
Argument 1
Speaker 1 introduces Martin Schroeter and frames the summit’s focus on responsible AI – (Speaker 1)
EXPLANATION
The moderator welcomes Martin Schroeter, highlighting his role as chairman and CEO of Kyndryl and setting the stage for a discussion on how to run AI responsibly in production environments.
EVIDENCE
The opening remarks introduce Mr. Martin Schroeter as the chairman and CEO of Kyndryl, describe his company’s global IT infrastructure role, and state that his perspective offers a corrective to summit-stage optimism about AI [1-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The opening remarks introducing Schroeter as chairman and CEO of Kyndryl are documented in his keynote transcript [S1].
MAJOR DISCUSSION POINT
Opening welcome and framing of responsible AI theme.
AGREED WITH
Martin Schroeter
Agreements
Agreement Points
Both speakers emphasize the need for responsible, operational AI deployment at scale
Speakers: Speaker 1, Martin Schroeter
Speaker 1 introduces Martin Schroeter and frames the summit’s focus on responsible AI – (Speaker 1) AI must move from labs to production for real‑world impact – (Martin Schroeter)
Speaker 1 opens the session by positioning Martin Schroeter’s perspective as a corrective to summit-stage optimism and stresses responsible AI [1-4]. Martin Schroeter reinforces this by stating that AI must transition from demos and pilots to day-to-day operations in critical sectors to deliver real-world impact [13-18]. Both stress that AI’s value depends on trustworthy, production-grade deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus aligns with the Hamburg Declaration on Responsible AI for the Sustainable Development Goals and its linkage to the Global Digital Compact, reflecting ongoing policy efforts to ensure responsible, large-scale AI deployment [S36]. It also echoes themes from recent AI strategy discussions that stress governance, infrastructure and collaborative solutions for operational AI at scale [S33].
Similar Viewpoints
Schroeter consistently argues that the primary barrier to AI impact is not lack of innovation but the lack of operational readiness, governance, and trust. He calls for moving AI into production, embedding auditability and policy‑as‑code, and ensuring transparency and accountability to build trust [20-28][30-36][65-67][44-46].
Speakers: Martin Schroeter
Innovation is not the problem; readiness is – (Martin Schroeter) AI must move from labs to production for real‑world impact – (Martin Schroeter) Embedding auditability, explainability, and policy‑as‑code creates guardrails – (Martin Schroeter) Trust arises when AI actions are transparent, accountable, and compliant – (Martin Schroeter)
Unexpected Consensus
Overall Assessment

The discussion shows a clear convergence on the importance of responsible, production‑grade AI. Both the moderator and the keynote speaker stress that AI must be trustworthy, governed, and operationally ready before it can deliver societal benefits. Schroeter’s detailed arguments about readiness, governance, and workforce skills reinforce this shared stance.

High consensus on the need for responsible AI deployment; this alignment signals strong support for policies and industry actions that prioritize industrialisation, governance, and trust as prerequisites for AI impact.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows strong alignment between the moderator’s framing and Schroeter’s detailed discussion. Both stress moving AI from pilot to production, the importance of readiness, and the need for trustworthy, transparent systems. No substantive contradictions appear between the speakers.

Minimal to none – the speakers are largely in agreement, indicating a cohesive narrative that reinforces the summit’s focus on responsible, industrial‑scale AI deployment.

Partial Agreements
Both speakers emphasize that AI should be deployed responsibly and beyond hype, stressing the need for real‑world operationalisation and trustworthy systems rather than just optimistic demos. Speaker 1 frames Schroeter’s perspective as a corrective to summit‑stage optimism, while Schroeter elaborates on the practical readiness and trust requirements for production AI [1-4][13-18][20-23][44-46].
Speakers: Speaker 1, Martin Schroeter
Speaker 1 introduces Martin Schroeter and frames the summit’s focus on responsible AI – (Speaker 1) AI must move from labs to production for real‑world impact – (Martin Schroeter) Innovation is not the problem; readiness is – (Martin Schroeter) Trust arises when AI actions are transparent, accountable, and compliant – (Martin Schroeter)
Takeaways
Key takeaways
AI innovation is mature; the primary barrier is readiness and industrialization for real‑world, large‑scale deployment.
Scaling AI faces operational challenges such as fragmented data, legacy processes, and the need for 24/7, secure, resilient, and trustworthy systems.
Embedding governance directly into AI (auditability, explainability, policy‑as‑code) is essential to build trust, accountability, and regulatory compliance.
India’s Digital India and AI Mission position the country as a critical proving ground for large‑scale, responsible AI, with Kyndryl actively partnering on national‑level projects.
Workforce readiness is a major gap; despite expectations that AI will reshape work, most organizations lack the skills and reskilling programs needed.
Responsible AI requires a coordinated effort between companies and governments to reskill workers, secure systems, and embed trust into operational AI.
Resolutions and action items
Kyndryl will open a new cyber‑defense operations center in Bangalore to detect and contain AI‑driven threats at the network edge.
Kyndryl will continue building scalable AI platforms for Indian banks, citizen services, telecoms, and airports, demonstrating industrial‑scale deployments.
Kyndryl will promote and implement “policy as code” approaches to embed auditability, explainability, and compliance into live AI systems.
Kyndryl will expand community partnerships in India to develop digital and cybersecurity skills for responsible AI adoption.
Unresolved issues
How to systematically transform fragmented, multi‑cloud data environments into AI‑ready architectures across diverse industries.
Specific regulatory frameworks and standards needed for agentic AI in mission‑critical sectors remain undefined.
Effective large‑scale reskilling strategies to prepare the majority of workforces for AI‑augmented roles are not yet established.
Methods for guaranteeing continuous 24/7 operation of AI systems under cyber‑attack, data drift, and outage conditions need further development.
Suggested compromises
None identified
Thought Provoking Comments
AI today is not industrialized. The infrastructure, the data, the operations, and the people simply aren’t ready to support AI adoption and deployment at scale.
Challenges the prevailing narrative that AI breakthroughs alone will drive impact, shifting focus from technology hype to the practical readiness gap that blocks real‑world deployment.
Marks a turning point from optimism to a reality‑check, prompting the audience to consider foundational constraints. It sets up the rest of the talk by framing the problem that all subsequent points aim to solve.
Speaker: Martin Schroeter
The leading indicator for why projects stall is not because the technology isn’t smart. It’s brilliant. It’s because we haven’t industrialized it yet.
Re‑frames failure of AI initiatives as a process issue rather than a technology flaw, directly confronting the assumption that more sophisticated models will automatically yield returns.
Deepens the conversation by introducing a diagnostic lens (industrialization) that guides the audience toward looking at operational, governance, and workforce dimensions rather than just model performance.
Speaker: Martin Schroeter
Our customers really want greater clarity and greater support on four critical questions: deploying AI across fragmented data environments, 24/7 reliability, agentic AI in regulated settings, and workforce readiness.
Provides a concrete, structured framework that moves the discussion from abstract challenges to actionable inquiry, making the problem space tangible for policymakers and executives.
Shifts the tone to solution‑oriented, inviting listeners to map their own organizations onto these four pillars and thereby steering the dialogue toward concrete policy and implementation considerations.
Speaker: Martin Schroeter
Trust is built when AI operates within clear guardrails where actions are accountable, transparent, and explainable—essential for regulated environments like government, banking, and healthcare.
Highlights trust as the linchpin linking technology to societal acceptance, introducing the ethical and regulatory dimension that many technical talks overlook.
Elevates the conversation from technical readiness to societal impact, prompting the audience to think about governance mechanisms and the role of regulators in AI deployment.
Speaker: Martin Schroeter
Industrialization is a transition every major technology invention has gone through. Invention comes first, but impact only comes when society learns how to industrialize it safely, reliably, and at scale.
Places AI within a historical context of technology adoption, offering a macro‑level perspective that reframes current challenges as part of a familiar evolutionary pattern.
Broadens the discussion, encouraging stakeholders to view current hurdles as temporary stages in a longer trajectory, which can reduce panic and foster long‑term strategic planning.
Speaker: Martin Schroeter
Operationalizing the governance of AI means moving governance out of policy documents and into live systems—embedding auditability, logging, explainability, and compliance directly into how AI operates (policy as code).
Introduces a concrete technical approach—policy as code—that bridges the gap between high‑level regulation and day‑to‑day system behavior, a novel idea for many non‑technical policymakers.
Creates a pivot toward actionable steps, inspiring the audience to consider concrete engineering solutions for compliance and opening a pathway for collaboration between regulators and technologists.
Speaker: Martin Schroeter
The impact of AI cannot be measured only by productivity gains or economic growth; it will also be measured by how institutions help people adapt in the next phase of industrial automation and how work evolves.
Expands the metrics of AI success beyond traditional economic indicators to include social and workforce outcomes, challenging a narrow view of AI value.
Shifts the conversation toward inclusive, people‑centric policy considerations, prompting leaders to think about reskilling, equity, and societal resilience alongside ROI.
Speaker: Martin Schroeter
The work ahead is hard, because this is not just a technology shift, it’s a human shift. We have to build trust in AI, reskill our workforce at scale, and ensure these systems are worthy of the societies that depend on them.
Emphasizes the human dimension of AI adoption, underscoring that technical solutions alone are insufficient without cultural and organizational change.
Serves as an emotional and motivational climax, reinforcing earlier points about workforce readiness and trust, and leaving the audience with a call to collective responsibility.
Speaker: Martin Schroeter
Overall Assessment

Martin Schroeter’s remarks transformed what could have been a routine product showcase into a nuanced, multi‑layered dialogue about AI’s readiness for real‑world, mission‑critical use. By first debunking the myth that technology alone drives impact, he redirected attention to the industrialization gap. His articulation of four concrete readiness questions and the concept of ‘policy as code’ supplied a practical roadmap, while historical analogies and the emphasis on trust and human factors broadened the scope to include ethical, regulatory, and societal dimensions. These pivotal comments steered the discussion from abstract optimism to grounded, actionable challenges, prompting policymakers, executives, and technologists to reconsider priorities, align on governance frameworks, and invest in workforce transformation. Collectively, they shaped the conversation into a forward‑looking, responsibility‑centered narrative that underscores AI’s potential only when it is reliably, transparently, and human‑centrically industrialized.

Follow-up Questions
How can organizations deploy AI effectively when data is fragmented across multiple clouds, core systems of record, and edge environments?
Fragmented data hampers AI integration and scalability; addressing this is crucial for reliable, real‑world AI deployment.
Speaker: Martin Schroeter
What architectures and safeguards are needed to ensure AI systems operate 24/7 without failure, resist cyber‑attacks, handle outages, data drift, and meet regulatory scrutiny while maintaining user trust?
Continuous, trustworthy operation is essential for mission‑critical sectors like healthcare, finance, and energy where failures have severe consequences.
Speaker: Martin Schroeter
Are organizations truly ready to adopt agentic AI in mission‑critical environments, and how can they satisfy regulatory requirements and integrate with existing legacy systems?
Agentic AI introduces autonomous decision‑making; understanding readiness and compliance is vital to prevent unintended risks.
Speaker: Martin Schroeter
What strategies and programs are most effective for reskilling and preparing the workforce to collaborate with AI, ensuring that employees are ready for the fundamental reshaping of work?
Only about one‑third of leaders believe their workforce is ready; workforce readiness is a key determinant of AI’s successful industrialization.
Speaker: Martin Schroeter
How can AI governance be operationalized by embedding auditability, logging, explainability, and compliance directly into live AI systems rather than keeping it in policy documents?
Embedding governance ensures real‑time accountability and builds trust, especially in regulated industries.
Speaker: Martin Schroeter
What is the efficacy of a ‘policy‑as‑code’ approach for establishing clear guardrails for agentic AI, and how does it impact regulator, board, and public confidence?
Policy‑as‑code could automate compliance checks, but its practical impact on trust and safety needs validation.
Speaker: Martin Schroeter
Beyond productivity and economic growth, how should the impact of AI be measured in terms of institutional resilience, societal adaptation, and evolution of work?
Comprehensive impact metrics are needed to assess AI’s broader societal effects and guide responsible policy making.
Speaker: Martin Schroeter
What best practices can be derived from India’s experience in scaling AI for public services, finance, healthcare, transportation, and energy to inform other nations’ AI industrialization efforts?
India serves as a proving ground; extracting transferable lessons can accelerate global responsible AI deployment.
Speaker: Martin Schroeter
How can proactive AI‑driven cyber‑defense operations be designed to detect and contain threats at the network edge before they cause disruptions?
Advanced adversaries use AI; developing edge‑focused AI security is critical to protect critical infrastructure.
Speaker: Martin Schroeter
What frameworks are needed to build and sustain trust in AI systems among regulators, corporate boards, and citizens, ensuring decisions are accountable, transparent, and explainable?
Trust is foundational for AI adoption in regulated sectors; clear frameworks are required to institutionalize it.
Speaker: Martin Schroeter

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.